The UDAF case is unstable. Reason:
when enable_pipeline_engine=true, the aggregate function runs in only one instance,
so the default value is not merged; but if there is more than one instance, the default value is merged.
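As a hedged illustration, the sketch below uses a hypothetical min-style aggregate state (not the actual UDAF in the case) to show why the result can depend on the instance count: a default value that is only combined in merge() changes the output once merge() runs.

```java
// Hypothetical min-style aggregate state; the default value 0 should never take
// part in the result, but a merge() that does not guard against empty states
// mixes it in. With a single instance, merge() is never called, so the same
// query can produce different results depending on how many instances run.
class UdafMergeSketch {
    static class State {
        long min = 0;          // default value
        boolean seen = false;
    }

    static void update(State s, long v) {
        s.min = s.seen ? Math.min(s.min, v) : v;
        s.seen = true;
    }

    // merge that blindly combines the default value of an empty state
    static void merge(State dst, State src) {
        dst.min = Math.min(dst.min, src.min);
        dst.seen = dst.seen || src.seen;
    }

    public static void main(String[] args) {
        State instance1 = new State();
        update(instance1, 5);
        update(instance1, 7);
        System.out.println(instance1.min);   // 5: single instance, no merge

        State instance2 = new State();       // second instance saw no rows
        merge(instance1, instance2);
        System.out.println(instance1.min);   // 0: the default value was merged in
    }
}
```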
This PR refactors column pruning with a visitor. The benefits:
1. It is easy to provide column pruning for a new plan: implement the `OutputPrunable` interface if the plan contains an output field, or do nothing if it does not. There is no need to add a new rule like `PruneXxxChildColumns`; only a few scenarios need to override the visit function for special logic, such as pruning LogicalSetOperation and Aggregate (a minimal sketch follows the example below).
2. It supports shrinking the output field of some plans, which skips some useless operations and improves performance.
example:
```sql
select id
from (
select id, sum(age)
from student
group by id
)a
```
We should prune the useless `sum(age)` in the aggregate.
Before the refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0, sum(age#2) AS `sum(age)`#4], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0, age#2], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
After the refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
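A minimal sketch of the visitor idea, using simplified stand-ins (the `Plan` shape and the `pruneOutputs` hook below are assumptions for illustration, not the real Nereids API):

```java
import java.util.List;
import java.util.Set;

// Simplified plan shape; real plans carry expressions, not plain strings.
interface Plan {
    List<Plan> children();
    Set<String> output();   // output slot names, simplified
}

// Plans whose output can be shrunk implement this hook; plans without an
// output field simply do not implement it and are left untouched.
interface OutputPrunable extends Plan {
    Plan pruneOutputs(Set<String> requiredByParent);
}

class ColumnPruningVisitorSketch {
    // Top-down walk: prune the current node if it is prunable, then visit the
    // children with whatever the (possibly pruned) node still requires.
    // Rebuilding the tree with the pruned children is omitted from this sketch;
    // special cases such as Aggregate or LogicalSetOperation would override
    // this visit with their own logic.
    Plan visit(Plan plan, Set<String> requiredByParent) {
        Plan pruned = plan instanceof OutputPrunable prunable
                ? prunable.pruneOutputs(requiredByParent)
                : plan;
        for (Plan child : pruned.children()) {
            visit(child, pruned.output());
        }
        return pruned;
    }
}
```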
In PR #17813 we wanted to forbid binding a slot to a sibling's column,
however the fix was not done in the correct way.
The correct way is to forbid the subquery from registering itself in its parent's analyzer.
This reverts commit b91a3b5a72520105638dad1079b71a05f02c10a0.
Add decimalv3 regression tests for Nereids and refactor some suites.
Too many suites would be changed at once, so this PR only adds the arithmetic tests.
1. Some tests are disabled because of unfixed results and precision issues; in detail, a big integer multiplied or divided by a float causes the precision issue, and bit operations cause the unfixed results.
2. The tests disabled with the original-planner tag are caused by unfixed results.
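As a hedged side note (plain Java below, not the regression test itself), the precision drift comes from routing the computation through binary floating point: a big integer multiplied by a float cannot be represented exactly, so the result differs from the exact decimal computation.

```java
import java.math.BigDecimal;

class PrecisionSketch {
    public static void main(String[] args) {
        long big = 1234567890123456789L;   // larger than 2^53, not exact as a double
        double f = 1.5;

        double approx = big * f;                                        // via double
        BigDecimal exact = new BigDecimal(big).multiply(new BigDecimal("1.5"));

        // the two values differ in the low digits
        System.out.println(approx);
        System.out.println(exact);
    }
}
```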
Consider a query like this:
```sql
SELECT
k3, k4
FROM
test
WHERE
EXISTS( SELECT
d.*
FROM
(SELECT
k1 AS _1234, SUM(k2)
FROM
`test` d
GROUP BY _1234) d
LEFT JOIN
(SELECT
k1 AS _1234,
SUM(k2)
FROM
`test`
GROUP BY _1234) temp ON d._1234 = temp._1234)
ORDER BY k3, k4
```
When we analyze the GROUP BY exprs in the `temp` inline view, we bind `_1234` to `d._1234` by mistake.
That is because, when we analyze a **SUB-QUERY**, we resolve a SlotRef against its own tuples **AND** the parent's tuples; meanwhile, we register the child's tuple in the parent's analyzer. So, in a **SUB-QUERY**, a sibling's tuple can affect the resolve result of the current inline view's slots.
This PR:
1. Adds a flag to the function `resolveColumnRef` in `Analyzer`:
```java
private TupleDescriptor resolveColumnRef(String colName, boolean requestByChild);
private TupleDescriptor resolveColumnRef(TableName tblName, String colName, boolean requestByChild);
```
2. Adds a flag to specify whether the tuple comes from a child.
```java
// alias name -> <from child, tupleDesc>
private final Multimap<String, Pair<Boolean, TupleDescriptor>> tupleByAlias;
```
When `requestByChild == true`, we **SKIP** tuples registered by other children to avoid the wrong resolution.
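A self-contained sketch of the skip logic (the types below are simplified stand-ins; the real `Analyzer`, `TupleDescriptor`, and registration code look different):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

class ResolveSketch {
    // simplified stand-in for TupleDescriptor: just an alias plus column names
    record Tuple(String alias, Set<String> columns) {}

    // alias name -> <from child, tuple>, mirroring tupleByAlias above
    private final Map<String, List<Map.Entry<Boolean, Tuple>>> tupleByAlias = new HashMap<>();

    void registerTuple(Tuple tuple, boolean fromChild) {
        tupleByAlias.computeIfAbsent(tuple.alias(), k -> new ArrayList<>())
                .add(Map.entry(fromChild, tuple));
    }

    // mirrors resolveColumnRef(String colName, boolean requestByChild)
    Tuple resolveColumnRef(String colName, boolean requestByChild) {
        Tuple result = null;
        for (List<Map.Entry<Boolean, Tuple>> entries : tupleByAlias.values()) {
            for (Map.Entry<Boolean, Tuple> entry : entries) {
                boolean tupleFromChild = entry.getKey();
                // a request coming from a child must not see tuples registered by
                // other children, so sibling inline views cannot capture each
                // other's slots
                if (requestByChild && tupleFromChild) {
                    continue;
                }
                if (entry.getValue().columns().contains(colName)) {
                    result = entry.getValue();
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ResolveSketch parentAnalyzer = new ResolveSketch();
        parentAnalyzer.registerTuple(new Tuple("d", Set.of("_1234")), true);       // child d
        parentAnalyzer.registerTuple(new Tuple("test", Set.of("k1", "k2")), false);
        // resolving _1234 on behalf of the sibling `temp` no longer hits d._1234
        System.out.println(parentAnalyzer.resolveColumnRef("_1234", true));        // null
    }
}
```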
Complete the type coercion of subqueries in the Binder process.
Expressions generated by nested subqueries are uniformly converted with implicit casts in the analyze stage.
Method: add a typeCoercionExpr field to the subquery expression to store the generated cast information.
This fixes the scenario where a scalar subquery appears in arithmetic expressions and needs implicit type conversion.
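A hedged sketch of storing the cast on the subquery expression itself (the classes below are simplified stand-ins, not the real FE expression tree):

```java
// Simplified expression nodes, only to show where typeCoercionExpr lives.
abstract class Expr {
    abstract String type();
}

class CastExpr extends Expr {
    final Expr child;
    final String targetType;

    CastExpr(Expr child, String targetType) {
        this.child = child;
        this.targetType = targetType;
    }

    @Override
    String type() {
        return targetType;
    }
}

class Subquery extends Expr {
    private final String outputType;
    // stores the implicit cast generated in the analyze stage, so nested
    // expressions built later see a consistent, already-coerced type
    private Expr typeCoercionExpr;

    Subquery(String outputType) {
        this.outputType = outputType;
    }

    // called during analysis when the enclosing expression needs another type
    void coerceTo(String targetType) {
        if (!targetType.equals(outputType)) {
            typeCoercionExpr = new CastExpr(this, targetType);
        }
    }

    @Override
    String type() {
        return typeCoercionExpr != null ? typeCoercionExpr.type() : outputType;
    }
}
```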
This problem is caused by slots with the same hashcode being put into a HashSet, which results in the wrong rules being selected. Use a List instead of a Set as the return type of the getDistinctArguments method.
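A hedged illustration of the failure mode (the `Slot` here deliberately compares only by name, which is an assumption made for the example, not the real slot semantics):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class DistinctArgumentsSketch {
    // record equals/hashCode use only the name, so two different slots collide
    record Slot(String name) {}

    public static void main(String[] args) {
        Slot fromLeftChild = new Slot("id");
        Slot fromRightChild = new Slot("id");

        Set<Slot> asSet = new HashSet<>(List.of(fromLeftChild, fromRightChild));
        List<Slot> asList = List.of(fromLeftChild, fromRightChild);

        System.out.println(asSet.size());   // 1: one argument silently dropped
        System.out.println(asList.size());  // 2: both arguments preserved
    }
}
```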
Before this PR, when encountering null values in columns specified as `NOT NULL`, the null values were not filtered; this behavior does not match the original load behavior.
Second, the column alignment logic has a bug:
```
template <typename ColumnInserterFn>
void align_variant_by_name_and_type(ColumnObject& dst, const ColumnObject& src, size_t row_cnt,
ColumnInserterFn inserter) {
CHECK(dst.is_finalized() && src.is_finalized());
// Use rows() here instead of size(), since size() will check_consistency
// but we could not check_consistency since num_rows will be upgraded even
// if src and dst is empty, we just increase the num_rows of dst and fill
// num_rows of default values when meet new data
size_t num_rows = dst.rows();
```
Basic functions for the MAP datatype:
- MAP<K, V> map(K k1, V v1, ...)
- BIGINT map_size(MAP<K, V> m)
- BOOL map_contains_key(MAP<K, V> m, K k1)
- BOOL map_contains_value(MAP<K, V> m, V v1)
- ARRAY<K> map_keys(MAP<K, V> m)
- ARRAY<V> map_values(MAP<K, V> m)
The aggregate_strategies case executes too slowly; use a smaller table valued function to speed it up.
Add a p2 case nereids_syntax_p2/aggregate_strategies that uses a larger table valued function to ensure correctness.
Remove the case nereids_syntax_p0/test_join_nereids since it is redundant with nereids_p0/join/test_join.
Remove the unstable case in query_p0/aggregate/aggregate.
In the compaction case, the in-memory map offsets arriving at the same OLAP convertor always range from 0 to 0+size,
but they should be continuous across different pages within one segment writer.
E.g.:
the last block has map offsets: [3, 6, 8, ..., 100]
this block has map offsets: [5, 10, 15, ..., 100]
The same convertor should record the last offset so that later incoming offsets follow it.
So after the convertor:
the current offsets should be [105, 110, 115, ..., 200]; then the column writer just calls append_data() to append the correctly offset data into pages.
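A minimal sketch of the rebasing arithmetic (written in Java for brevity; the real code lives in the BE's OLAP convertor), using the offsets from the example above:

```java
import java.util.Arrays;

class OffsetRebaseSketch {
    private long lastOffset = 0;   // last offset this convertor has emitted

    // offsets of each incoming block start from 0, so shift them by the last
    // offset already written for this segment, then remember the new tail
    long[] rebase(long[] blockOffsets) {
        long[] rebased = new long[blockOffsets.length];
        for (int i = 0; i < blockOffsets.length; i++) {
            rebased[i] = blockOffsets[i] + lastOffset;
        }
        if (rebased.length > 0) {
            lastOffset = rebased[rebased.length - 1];
        }
        return rebased;
    }

    public static void main(String[] args) {
        OffsetRebaseSketch convertor = new OffsetRebaseSketch();
        System.out.println(Arrays.toString(convertor.rebase(new long[] {3, 6, 8, 100})));
        System.out.println(Arrays.toString(convertor.rebase(new long[] {5, 10, 15, 100})));
        // second block becomes [105, 110, 115, 200], continuing from the last offset
    }
}
```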
* [Feature](vectorized)(quantile_state): support vectorized quantile state functions
1. For now the quantile column only supports non-nullable values.
2. Add some regression test cases.
3. Set enable_quantile_state_type = true by default.
---------
Co-authored-by: spaces-x <weixiang06@meituan.com>