1. close (https://github.com/apache/doris/issues/16458) for nereids
2. varchar and string types should be treated as the same type in the bucket shuffle join scenario.
```
create table shuffle_join_t1 ( a varchar(10) not null )
create table shuffle_join_t2 ( a varchar(5) not null, b string not null, c char(3) not null )
```
The two SQL statements below can use bucket shuffle join:
```
select * from shuffle_join_t1 t1 left join shuffle_join_t2 t2 on t1.a = t2.a;
select * from shuffle_join_t1 t1 left join shuffle_join_t2 t2 on t1.a = t2.b;
```
3. PushdownExpressionsInHashCondition should consider both hash and other conjuncts (see the illustration after this list).
4. visitPhysicalProject should handle MarkJoinSlotReference
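An illustration of item 3, reusing the tables above (the query shape is hypothetical, not taken from the PR): the cast expression in the hash condition is pushed into projects below the join, and those projects must also keep the slots referenced by the other (non-equi) conjunct.
```sql
-- hash conjunct:  cast(t1.a as char(3)) = t2.c  -> pushed down as a projected expression
-- other conjunct: t1.a > t2.b                   -> its slots must survive the pushed-down projects
select *
from shuffle_join_t1 t1
join shuffle_join_t2 t2
  on cast(t1.a as char(3)) = t2.c and t1.a > t2.b;
```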
1. Add the GLOBAL keyword:
SHOW [GLOBAL] [FULL] [BUILTIN] FUNCTIONS [IN|FROM db] [LIKE 'function_pattern']
SHOW CREATE GLOBAL FUNCTION function_name(arg_type [, ...]);
2. Show the details of a global UDF (see the usage sketch below).
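A minimal usage sketch based on the syntax above (the function name and signature are made up):
```sql
-- list global functions matching a pattern
SHOW GLOBAL FULL FUNCTIONS LIKE 'my_%';

-- show how a global UDF was created, assuming a global function my_add(INT, INT) exists
SHOW CREATE GLOBAL FUNCTION my_add(INT, INT);
```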
The information in the documentation is incomplete; users may get an error message like:
mysql> INSTALL PLUGIN FROM "http://127.0.0.1:8039/auditloader.zip";
ERROR 1105 (HY000): errCode = 2, detailMessage = http://127.0.0.1:8039/auditloader.zip.md5. you should set md5sum in plugin properties or provide a md5 URI to check plugin file
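As the error message suggests, one way to avoid it is to pass the checksum explicitly; a sketch, assuming the `md5sum` plugin property (the checksum value is a placeholder):
```sql
INSTALL PLUGIN FROM "http://127.0.0.1:8039/auditloader.zip"
PROPERTIES ("md5sum" = "<md5 of auditloader.zip>");
```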
Fix function analysis repeatedly adding a child, which caused errors like:
select list expression not produced by aggregation output (missing from GROUP BY clause?): if(length(`r_2_3`.`name`) % 32 = 0, aes_decrypt(unhex(`r_2_3`.`name`), '***'), `r_2_3`.`name`)
1. Some bitmap functions, like bitmap_or, bitmap_and_count, bitmap_or_count, etc., should not follow the constant folding rule for PropagateNullable functions. So remove the PropagateNullable property, and these functions will use their own constant folding logic correctly.
2. DPhyper's PlanReceiver class should not change the hyperGraph's complex project info. So PlanReceiver now uses its own copy of the complex project info.
MTMV regression tests may loop forever due to some potential bugs, so we add a timeout to avoid an endless loop. The timeout is currently hard-coded to 30 minutes.
When we use a Hive client to submit an `INSERT INTO TBL SELECT * FROM ...` or `INSERT INTO TBL VALUES ...`
statement and the table is a non-partitioned table, the HMS generates an insert event. The insert statement may change the
HDFS file distribution of the table, but currently we do not handle this, so the file cache of this table may be inaccurate.
The reason the UDAF case is unstable:
when enable_pipeline_engine=true, the aggregate function in the case runs with only 1 instance, so the default value is not merged; but if there is more than 1 instance, the default value is merged.
The legacy PartitionPruner only supports some simple cases; some useful cases are not supported:
1. It cannot evaluate some builtin functions, like `cast(part_column as bigint) = 1`.
2. It cannot prune multi-level range partitions. For the partition `[('1', 'a'), ('2', 'b'))`, the following constraints hold:
- first_part_column between '1' and '2'
- if first_part_column = '1' then second_part_column >= 'a'
- if first_part_column = '2' then second_part_column < 'b'
This PR refactors it and supports:
1. Using a visitor to evaluate functions and fold constants.
2. If the partition column type is discrete, like int or date, we can expand it and evaluate; e.g. `[1, 5)` will be expanded to `[1, 2, 3, 4]`.
3. Pruning multi-level range partitions, as previously described.
4. Evaluating a range slot; e.g. for the datetime range partition `[('2023-03-21 00:00:00'), ('2023-03-21 23:59:59'))`, if the filter is `date(col1) = '2023-03-22'`, this partition will be pruned, because we know the date in this partition is always `2023-03-21`. You can implement the visit method in FoldConstantRuleOnFE and OneRangePartitionEvaluator to support this (see the SQL sketch after this list).
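A minimal SQL sketch of case 4 (the table name, the second column, and the table properties are assumptions; the partition bound and the filter come from the description above):
```sql
CREATE TABLE part_t (
    col1 DATETIME NOT NULL,
    v INT
)
DUPLICATE KEY(col1)
PARTITION BY RANGE (col1) (
    PARTITION p20230321 VALUES [("2023-03-21 00:00:00"), ("2023-03-21 23:59:59"))
)
DISTRIBUTED BY HASH(col1) BUCKETS 1
PROPERTIES ("replication_num" = "1");

-- every row in p20230321 satisfies date(col1) = '2023-03-21',
-- so the refactored pruner can skip the partition for this filter
SELECT * FROM part_t WHERE date(col1) = '2023-03-22';
```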
### How can we do it so finely?
Generally, a range partition can be separated into three parts: `const`, `range`, and `other`.
For example, in the partition `[('1', 'a', 'D'), ('1', 'c', 'D'))`:
1. the first partition column is `const`: it always equals '1'
2. the second partition column is `range`: `slot >= 'a' and slot <= 'c'`. If there were no later slot, it would be `slot >= 'a' and slot < 'c'`
3. the third partition column is `other`: regardless of whether the upper and lower bounds are the same, it can take multiple values, e.g. `('1', 'a', 'D')`, `('1', 'a', 'F')`, `('1', 'b', 'A')`, `('1', 'c', 'A')`
In a partition, there is one and only one `range` slot; there may be zero, one, or many `const`/`other` slots.
Normally, a partition looks like `[const*, range, other*]`; these are the possible shapes:
1. [range], e.g `[('1'), ('10'))`
2. [const, range], e.g. `[('1', 'a'), ('1', 'd'))`
3. [range, other, other], e.g. `[('1', '1', '1'), ('2', '1', '1'))`
4. [const, const, ..., range, other, other, ...], e.g. `[('1', '1', '2', '3', '4'), ('1', '1', '3', '3', '4'))`
The properties of `const`:
1. we can replace the slot with its literal to evaluate the expression tree.
The properties of `range`:
1. if the slot data type is a discrete type, like int or date, we can expand it into literals and evaluate the expression tree
2. if the type is not discrete, like datetime, or there are too many discrete values, like `[1, 1000000)`, we can keep the slot in the expression tree and assign a range to it; when evaluating the expression tree, we also compute the range and check whether it is the empty set; if so, we can simplify it to BooleanLiteral.FALSE and skip this partition
3. if the range slot satisfies some conditions, we can also fold the slot inside some functions; see the datetime example above
The properties of `other`:
1. only when the previous slot is a literal equal to the lower or upper bound of the partition can we shrink the range of the `other` slot
According to these properties, we can prune finely.
At runtime, the `range` and `other` slots may shrink their range of values, e.g.
1. the partition `[('a'), ('b'))` with predicate `part_col = 'a'` will shrink the range `['a', 'b')` to `['a']`, like a `range` slot downgrading to a `const` slot;
2. the partition `[('a', '1'), ('b', '10'))` with predicate `part_col1 = 'a'` will shrink the range of the `other` slot from unknown (all values) to `['1', +∞)`, like an `other` slot downgrading to a `range` slot.
But to keep things simple, I haven't changed the slot type at runtime; I just shrink the ColumnRange.
This PR refactors column pruning with a visitor. The benefits:
1. It is easy to add column pruning for a new plan: implement the interface `OutputPrunable` if the plan contains output fields, or do nothing if it does not. There is no need to add new rules like `PruneXxxChildColumns`; only a few scenarios need to override the visit function to write special logic, like pruning LogicalSetOperation and Aggregate.
2. It supports shrinking the output fields of some plans, which can skip some useless operations and improve performance.
example:
```sql
select id
from (
select id, sum(age)
from student
group by id
)a
```
We should prune the useless `sum(age)` in the aggregate.
before refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0, sum(age#2) AS `sum(age)`#4], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0, age#2], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
after refactor:
```
LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalSubQueryAlias ( qualifier=[a] )
+--LogicalAggregate ( groupByExpr=[id#0], outputExpr=[id#0], hasRepeat=false )
+--LogicalProject ( distinct=false, projects=[id#0], excepts=[], canEliminate=true )
+--LogicalOlapScan ( qualified=default_cluster:test.student, indexName=<index_not_selected>, selectedIndexId=10007, preAgg=ON )
```
In the batch insertion scenario, the SAP HANA database does not support the syntax `insert into table values (...),(...);`.
What it supports is:
```sql
INSERT INTO table(col1,col2)
SELECT c1v1, c2v1 FROM dummy
UNION ALL
SELECT c1v2, c2v2 FROM dummy;
```
In PR #17813, we wanted to forbid binding a slot to a sibling's column; however, that fix was not done in the correct way.
The correct way is to forbid the subquery from registering itself in its parent's analyzer (a hypothetical illustration follows).
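A hypothetical illustration (not taken from the PR): the slot `s2.a` below should fail to bind instead of being resolved against the sibling subquery `s1`.
```sql
-- s2 exposes no column a; before the fix it could wrongly bind to s1's output
SELECT *
FROM (SELECT a FROM t1) s1
JOIN (SELECT b FROM t2) s2 ON s1.a = s2.a;
```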
This reverts commit b91a3b5a72520105638dad1079b71a05f02c10a0.
The metadata storage format of materialized views has changed, and the new optimizer adapts to the new storage method.
The column names in the materialized view metadata now start with mv_ or mva_.
This PR allows the new optimizer to recognize the new materialized view columns and select the correct materialized view (a sketch follows below).
TODO: support advanced mv
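A hedged sketch of the scenario, assuming a table `student(id, age)` like the one in the column-pruning example above (the MV name is made up):
```sql
-- a single-table sync materialized view; its metadata columns now use the mv_/mva_ prefixes
CREATE MATERIALIZED VIEW mv_student_sum AS
SELECT id, SUM(age) FROM student GROUP BY id;

-- with this PR the new optimizer recognizes the prefixed columns
-- and can choose mv_student_sum for this query
SELECT id, SUM(age) FROM student GROUP BY id;
```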
Currently, getSplits for a TVF creates one split for each file. In this case, scan performance for large files may be bad.
This PR implements the getSplits function in TVFSplitter to support splitting a file into multiple blocks, which
may improve performance for large files.
When applying an inverted index, we use predicate_params() from ColumnPredicate. If a comparison predicate is cloned, the clone does not copy predicate_params(), which results in the wrong choice when applying the inverted index.
1. Adjust the cost model.
The cost of broadcast should be lower than the cost of shuffle when the data size is small.
For broadcast, we do not know the number of receiver BEs, so we use the number of BEs in the system.
2. Adjust debug messages.
a. In explain, print the row count after the filter.
b. If the join is not a mark join, do not print mark join info.
Referring to `org.apache.doris.planner.external.HiveSplitter`, the file cache of `HiveMetaStoreCache`
may be created even when the table is a non-partitioned table,
so `RefreshTableStmt` should consider this scenario and handle it.