Add regression tests of decimalv3 for Nereids and refactor some suites.
Too many suites would be changed at once, so in this PR we just add the arithmetic tests.
1. Some tests are disabled because of wrong results and precision issues that are not fixed yet: multiplying or dividing a big integer by a float causes the precision issue, and bit ops cause the wrong results.
2. The disabled tests tagged "original planner" are caused by the not-yet-fixed wrong results.
Consider a query like this:
```sql
SELECT k3, k4
FROM test
WHERE EXISTS(
    SELECT d.*
    FROM (
        SELECT k1 AS _1234, SUM(k2)
        FROM `test` d
        GROUP BY _1234
    ) d
    LEFT JOIN (
        SELECT k1 AS _1234, SUM(k2)
        FROM `test`
        GROUP BY _1234
    ) temp ON d._1234 = temp._1234
)
ORDER BY k3, k4
```
When we analyze the GROUP BY exprs in the `temp` inline view, we bind `_1234` to `d._1234` by mistake.
That is because, when we analyze a **SUB-QUERY**, we resolve a SlotRef against its own tuples **AND** the parent's tuples, and meanwhile we register each child's tuple in the parent's analyzer. So, in a **SUB-QUERY**, a sibling's tuple can affect how a slot of the current inline view is resolved.
This PR:
1. Add a flag to the function `resolveColumnRef` in `Analyzer`:
```java
private TupleDescriptor resolveColumnRef(String colName, boolean requestFromChild);
private TupleDescriptor resolveColumnRef(TableName tblName, String colName, boolean requestByChild);
```
2. Add a flag to specify whether the tuple comes from a child:
```java
// alias name -> <from child, tupleDesc>
private final Multimap<String, Pair<Boolean, TupleDescriptor>> tupleByAlias;
```
When `requestByChild == true`, we **SKIP** tuples from other children to avoid the resolution error.
Complete the type coercion of the subquery in the Binder process.
Expressions generated when subqueries are nested are uniformly given implicit type conversions in the analyze stage.
Method: add a `typeCoercionExpr` field to the subquery expression to store the generated cast information.
This fixes the scenario where a scalar subquery inside an arithmetic expression needs an implicit type conversion.
This problem is caused by slots with the same hash code being put into a hash set, which results in the wrong rule being selected. Use a list instead of a set as the return type of the `getDistinctArguments` method.
Before this PR, when encountering null values in a column specified as `NOT NULL`, the null values were not filtered; this behavior does not match the original load behavior.
Second, the column alignment logic has a bug:
```cpp
template <typename ColumnInserterFn>
void align_variant_by_name_and_type(ColumnObject& dst, const ColumnObject& src, size_t row_cnt,
ColumnInserterFn inserter) {
CHECK(dst.is_finalized() && src.is_finalized());
// Use rows() here instead of size(), since size() will check_consistency
// but we could not check_consistency since num_rows will be upgraded even
// if src and dst is empty, we just increase the num_rows of dst and fill
// num_rows of default values when meet new data
size_t num_rows = dst.rows();
```
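For reference, a minimal sketch of what alignment-by-name is meant to do, using hypothetical simplified containers (`Object`, `SubColumn`, `align_by_name` are illustrative names, not the real `ColumnObject` API): subcolumns present in `src` but missing in `dst` are backfilled with defaults up to the current row count before the new rows are appended, and subcolumns only in `dst` get default-padded for the new rows.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical simplified stand-ins for variant subcolumns:
// each named subcolumn is just a vector of values.
using SubColumn = std::vector<int>;
using Object = std::map<std::string, SubColumn>;

// Align src's rows onto dst by subcolumn name, padding with a
// default value wherever one side lacks a subcolumn the other has.
void align_by_name(Object& dst, size_t dst_rows, const Object& src, size_t src_rows) {
    for (const auto& [name, src_col] : src) {
        auto& dst_col = dst[name];    // creates the subcolumn if missing
        dst_col.resize(dst_rows, 0);  // backfill a new subcolumn with defaults
        dst_col.insert(dst_col.end(), src_col.begin(), src_col.end());
    }
    // Subcolumns that exist only in dst get default-padded for the new rows.
    for (auto& [name, dst_col] : dst) {
        if (!src.count(name)) dst_col.resize(dst_rows + src_rows, 0);
    }
}
```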
Basic functions for the MAP datatype (see the sketch after this list):
- MAP<K, V> map(K k1, V v1, ...)
- BIGINT map_size(MAP<K, V> m)
- BOOL map_contains_key(MAP<K, V> m, K k1)
- BOOL map_contains_value(MAP<K, V> m, V v1)
- ARRAY<K> map_keys(MAP<K, V> m)
- ARRAY<V> map_values(MAP<K, V> m)
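For intuition only, a MAP value can be modeled as parallel key and value arrays; a minimal sketch of `map_size` and `map_contains_key` under that hypothetical representation (`MapValue` is illustrative, not the engine's actual layout):

```cpp
#include <vector>

// Model a MAP<K, V> value as parallel key/value vectors.
template <typename K, typename V>
struct MapValue {
    std::vector<K> keys;
    std::vector<V> values;
};

// map_size: number of entries in the map.
template <typename K, typename V>
long map_size(const MapValue<K, V>& m) {
    return static_cast<long>(m.keys.size());
}

// map_contains_key: linear scan over the keys array.
template <typename K, typename V>
bool map_contains_key(const MapValue<K, V>& m, const K& k) {
    for (const auto& key : m.keys) {
        if (key == k) return true;
    }
    return false;
}
```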
The aggregate_strategies suite executes too slowly; use a smaller table-valued function to speed it up.
Add a p2 case nereids_syntax_p2/aggregate_strategies that uses a larger table-valued function to ensure correctness.
Remove the case nereids_syntax_p0/test_join_nereids since it is redundant with nereids_p0/join/test_join.
Remove an unstable case in query_p0/aggregate/aggregate.
In the compaction case, the in-memory map offsets arriving at the same OLAP convertor run from 0 to 0+size, but they should be continuous across the different pages written by one segment writer.
E.g.:
last block with map offsets: [3, 6, 8, ..., 100]
this block with map offsets: [5, 10, 15, ..., 100]
The same convertor should record the last offset so that later incoming offsets follow it. After conversion, the current offsets should be [105, 110, 115, ..., 200]; then the column writer just calls append_data() to append pages with the right offset data.
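A minimal sketch of the rebasing idea; `OffsetRebaser`, `_last_offset`, and `rebase_offsets` are hypothetical names, not the real convertor API:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical convertor state: remembers where the previous block ended
// so the offsets of each new block continue from there.
struct OffsetRebaser {
    uint64_t _last_offset = 0;

    // In-place: shift this block's offsets so they follow the previous block.
    void rebase_offsets(std::vector<uint64_t>& offsets) {
        for (auto& off : offsets) off += _last_offset;
        if (!offsets.empty()) _last_offset = offsets.back();
    }
};
```

With `_last_offset == 100`, the incoming block `[5, 10, ..., 100]` becomes `[105, 110, ..., 200]`, matching the example above.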
* [Feature](vectorized)(quantile_state): support vectorized quantile state functions
1. For now the quantile column only supports non-nullable.
2. Add some regression test cases.
3. Set the default enable_quantile_state_type = true.
---------
Co-authored-by: spaces-x <weixiang06@meituan.com>
```sql
WITH t0 AS (
    SELECT report.date1 AS date2 FROM (
        SELECT DATE_FORMAT(date, '%Y%m%d') AS date1 FROM cir_1756_t1
    ) report GROUP BY report.date1
),
t3 AS (
    SELECT date_format(date, '%Y%m%d') AS date3
    FROM cir_1756_t2
)
SELECT row_number() OVER(ORDER BY date2)
FROM (
    SELECT t0.date2 FROM t0 LEFT JOIN t3 ON t0.date2 = t3.date3
) tx;
```
The DATE_FORMAT(date, '%Y%m%d') was calculated in the GROUP BY node, which is wrong; this expr should be calculated inside the subquery.
Support delta encoding and RLE (bool) to read Glue data:
- add a delta bit-packed decoder,
- add a delta length byte array decoder,
- add a delta byte array decoder,
- add an RLE bool decoder.

We found that some data types are read with delta encoding on AWS Glue, so it should be supported. For the definition of delta encoding, refer to the delta encoding in Parquet.
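As background on the format rather than the decoder added here: Parquet's DELTA_BINARY_PACKED stores a first value, then per block a minimum delta plus bit-packed per-value deltas, so each value is recovered as previous + min_delta + packed_delta. A minimal sketch, assuming the miniblock deltas have already been bit-unpacked (`decode_delta_block` is an illustrative name):

```cpp
#include <cstdint>
#include <vector>

// Reconstruct values from DELTA_BINARY_PACKED components, assuming the
// bit-packed miniblock deltas are already unpacked into `deltas`.
// value[0] = first_value; value[i] = value[i-1] + min_delta + deltas[i-1].
std::vector<int64_t> decode_delta_block(int64_t first_value, int64_t min_delta,
                                        const std::vector<uint64_t>& deltas) {
    std::vector<int64_t> values;
    values.reserve(deltas.size() + 1);
    values.push_back(first_value);
    int64_t prev = first_value;
    for (uint64_t d : deltas) {
        prev += min_delta + static_cast<int64_t>(d);
        values.push_back(prev);
    }
    return values;
}
```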
1. Add support for `CAST AS Struct` from a Struct type;
2. Fix a crash in `CAST('{}' AS Struct)`;
3. `CAST('' AS complex_type)` should return NULL instead of an empty object.
---------
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
The input replicate_offsets should be the same size as the ColumnArray's offsets.
```cpp
IColumn::Offsets replicate_offsets(get_offsets().size(), 0);
// |---------------------|-------------------------|-------------------------|
// [0, begin) [begin, begin + count_sz) [begin + count_sz, size())
// do not need to copy copy counts[n] times do not need to copy
```
we should only replicate the rows in [begin, begin + count_sz): rows outside that range are copied zero times, and row n inside the range is copied counts[n] times.
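A minimal sketch of how such a replicate_offsets array could be built, using a plain vector instead of the real IColumn API (`build_replicate_offsets` is an illustrative name):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Build a prefix-sum replicate_offsets the same size as the column:
// rows outside [begin, begin + counts.size()) are copied 0 times,
// row begin + n is copied counts[n] times.
std::vector<uint64_t> build_replicate_offsets(size_t num_rows, size_t begin,
                                              const std::vector<uint32_t>& counts) {
    std::vector<uint64_t> offsets(num_rows, 0);
    uint64_t sum = 0;
    for (size_t i = 0; i < num_rows; ++i) {
        if (i >= begin && i < begin + counts.size()) sum += counts[i - begin];
        offsets[i] = sum;  // prefix sum: copies of row i = offsets[i] - offsets[i-1]
    }
    return offsets;
}
```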