When dynamic partition is enabled on an OLAP table, dropping and then recovering the table should add the table back to the DynamicPartitionScheduler.
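A minimal sketch of the scenario, assuming a table `t_dyn` created with dynamic partitioning enabled ("dynamic_partition.enable" = "true"); the table name is illustrative:
```
DROP TABLE t_dyn;
-- Recovering the table should register it with the DynamicPartitionScheduler
-- again, so new partitions keep being created automatically.
RECOVER TABLE t_dyn;
```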
---------
Co-authored-by: caiconghui1 <caiconghui1@jd.com>
Refactoring the filter conditions in the current ExecNode from an expression tree into an array simplifies adding runtime filters: it eliminates complex merge operations and removes the need for the frontend to combine the expressions into a single entity.
With the conditions stored as an array, each condition can be handled individually, so runtime filters can be added without complex merging logic; the array holds the individual conditions, and the runtime filter logic iterates through it to apply them as needed.
This refactoring simplifies the codebase, improves readability, and reduces the complexity of handling filter conditions and runtime filters. It keeps the conditions as discrete entities, making them easier to manipulate and manage within the execution node.
Extend the functionality of advanced materialized views.
This feature is already supported by the legacy planner (PR #19650). This PR implements it in Nereids, covering the features below (a hypothetical sketch follows the TODO list):
1. Support multiple columns in an aggregate function, e.g. select sum(c1 + c2) from t1;
2. Support complex expressions, e.g. select abs(c1), sum(abs(c1 + 1) + 1) from t1;
TODO:
1. Support adding a WHERE clause in the materialized view.
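A hypothetical sketch of the newly supported shapes (table and column names are illustrative):
```
-- Multiple columns inside an aggregate function:
CREATE MATERIALIZED VIEW mv1 AS
SELECT c3, sum(c1 + c2) FROM t1 GROUP BY c3;

-- Complex expressions in both the key and the aggregate:
CREATE MATERIALIZED VIEW mv2 AS
SELECT abs(c1), sum(abs(c1 + 1) + 1) FROM t1 GROUP BY abs(c1);

-- Queries of the same shape can then be rewritten by Nereids to read from the MVs:
SELECT sum(c1 + c2) FROM t1;
SELECT abs(c1), sum(abs(c1 + 1) + 1) FROM t1 GROUP BY abs(c1);
```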
* [Bug](point query) checkAndSetPointQuery before checkEnableTwoPhaseRead
1. checkEnableTwoPhaseRead relies on the short circuit flag
2. add more metrics to display in the lookup profile
* fix rebase
Currently, a string cannot be used as the variable key in the hint.
Before this PR
mysql> SET enable_nereids_planner=true;
Query OK, 0 rows affected (0.01 sec)
mysql> set enable_fallback_to_original_planner=false;
Query OK, 0 rows affected (0.10 sec)
mysql> explain select /*+ SET_var("enable_nereids_planner" = "false") */ 1;
ERROR 1105 (HY000): Exception, msg: Nereids cannot parse the SQL, and fallback disabled. caused by:
no viable alternative at input 'select /*+ SET_var("enable_nereids_planner"'(line 1, pos 27)
After this PR
mysql> SET enable_nereids_planner=true;
Query OK, 0 rows affected (0.01 sec)
mysql> set enable_fallback_to_original_planner=false;
Query OK, 0 rows affected (0.10 sec)
mysql> select /*+ SET_var("enable_nereids_planner" = "false") */ 1;
+------+
| 1 |
+------+
| 1 |
+------+
1 row in set (0.00 sec)
Describe your changes.
Support string values as the hint key in the Parser.
Before this change, when optimizing with the Nereids planner, the input was first serialized into memory, and when a bug occurred it was dumped to a minidump file while catching the exception.
We found that this serialization hurts performance when the statistics message is too large, or when the optimization time is short enough that the serialization cost dominates.
So minidump is now only used when the minidump switch is explicitly enabled (set enable_minidump=true;).
1. Before this PR, if a rowset did not contain a column that should be read for the related SlotDescriptor, `insert_default` was called on the column, but that is not the real default value; the real default value information should be provided by the frontend side (see the sketch below).
2. Support fetch when light schema change is not enabled, but disable it for the AGG and UNIQUE MOR models.
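An illustrative scenario for item 1, assuming a hypothetical table `t` (names and the default value are made up):
```
-- Column c2 is added after some rows were already written, so old rowsets do
-- not physically contain it.
ALTER TABLE t ADD COLUMN c2 INT DEFAULT "5";

-- Reading the old rows should fill c2 with the declared default (5), which
-- only the frontend knows, rather than the type's implicit default produced
-- by insert_default (0 for INT).
SELECT c1, c2 FROM t;
```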
When the PostgreSQL bit type has size 1, it is read as a java.lang.Boolean via JDBC, and if we match it against a string type it is displayed as true or false. The normal display should be a number, so when the bit size is detected to be 1, it is matched to the boolean type.
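A hedged example of the behavior, assuming a PostgreSQL table exposed through a hypothetical JDBC catalog `pg_demo`:
```
-- On the PostgreSQL side:
--   CREATE TABLE public.bit_demo (b bit(1));
--   INSERT INTO public.bit_demo VALUES (B'1');

-- Queried from Doris, a bit(1) column arrives as java.lang.Boolean via JDBC;
-- mapping it to BOOLEAN displays it as 1/0 instead of the strings true/false.
SELECT b FROM pg_demo.public.bit_demo;
```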
1. Fix duplicate '/' in front-end request URI.
2. When the FileSystemSeparator is '\\', replace '\\' with '/'.
Co-authored-by: labuladuo <labuladuo@douyu.tv>
Support the operator `PartitionTopN`, which partitions the data first and then performs the topn operation within each partition. It is used in the following cases:
```
-- Support push the filter down to the window and generate the PartitionTopN.
-- The plan change from `window -> filter` to `partitionTopN -> window -> filter`.
explain select * from (select * , row_number() over(partition by b order by a) as num from t ) tt where num <= 10;
-- Support push the limit down to the window and generate the PartitionTopN.
-- The plan change from `window -> limit` to `partitionTopN -> window -> limit `.
explain select row_number() over(partition by b order by a) as num from t limit 10;
-- Support push the topn down to the window and generate the PartitionTopN.
-- The plan change from `window -> topn` to `partitionTopN -> window -> topn `.
explain select row_number() over(partition by b order by a) as num from t order by num limit 10;
```
The detailed design of the FE part:
1. Add the following rewrite rules:
- PUSHDOWN_FILTER_THROUGH_WINDOW
- PUSH_LIMIT_THROUGH_PROJECT_WINDOW
- PUSH_LIMIT_THROUGH_WINDOW
- PUSHDOWN_TOP_N_THROUGH_PROJECTION_WINDOW
- PUSHDOWN_TOP_N_THROUGH_WINDOW
2. Add the PartitionTopN node (LogicalPlan / PhysicalPlan / TranslatorPlan)
3. For the rewrite to apply, several requirements must be met:
- For the `Filter` part, only `<` / `<=` / `=` conditions are considered, and the filter conditions are stored.
- For the `Window` part, only one window function is supported, and it must be `row_number`, `rank`, or `dense_rank`. The `partition by` key and `order by` key cannot both be empty. The `Window Frame` must be `UNBOUNDED to CURRENT`.
4. For the `PhysicalPartitionTopN`, the requested property is `Any` and the output property is its children's property.
Those are the most important details; for the rest, please check the code directly. An illustration of the constraints is sketched below.
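A hypothetical illustration of these constraints (table `t` and its columns are made up):
```
-- Qualifies: a single row_number window plus a "<=" filter on it.
SELECT * FROM (
    SELECT *, row_number() OVER (PARTITION BY b ORDER BY a) AS num FROM t
) tt WHERE num <= 10;

-- Does not qualify: sum() is not row_number/rank/dense_rank, so no
-- PartitionTopN is generated for this window.
SELECT sum(a) OVER (PARTITION BY b ORDER BY a) AS s FROM t LIMIT 10;
```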
Issue Number #18646
BE Part #19708
If the user manually removed a Hive partition (by removing the partition directory through HDFS), Doris would fail to query the Hive
table with the error message `get file split failed for table`, because the Hive metadata still contains the removed partition.
This PR fixes the bug by skipping the directories that do not exist.
The variable logType in ExternalCatalog is not persisted to disk; after a refresh it becomes NULL and causes an NPE. This PR fixes the bug.
Also, remove the old type variable in ExternalCatalog and use logType instead.
Support collecting Hive external table statistics by running SQL against the Hive table.
By running SQL, we can collect all the statistics that are collected for OLAP tables, including the min and max values of String columns.
With 3 BEs (16 cores, 64 GB each), it took less than 2 minutes to collect statistics for all columns of all TPCH 100GB tables, and also less than 2 minutes for all columns of the SSB 100GB tables.
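A sketch of the kind of collection query this amounts to, assuming a hypothetical Hive catalog/table (the exact statements Doris issues may differ):
```
-- Per-column statistics gathered by scanning the Hive table directly,
-- including min/max for string columns.
SELECT
    COUNT(1)        AS row_count,
    NDV(l_shipmode) AS ndv,
    SUM(CASE WHEN l_shipmode IS NULL THEN 1 ELSE 0 END) AS null_count,
    MIN(l_shipmode) AS min_value,
    MAX(l_shipmode) AS max_value
FROM hive.tpch100.lineitem;
```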
Support reading Hudi MOR tables by using the JNI connector.
Note:
the FE part of this PR is not yet fully complete, and the BE part will be supplemented in the next PR.
# Proposed changes
Before the change:
```
mysql> SET enable_nereids_planner=true;
Query OK, 0 rows affected (0.01 sec)
mysql> explain select /*+ SET_var(enable_nereids_planner = false) */ year_floor(cast('2023-04-28' as date));
-- omit the result here
10 rows in set (0.01 sec)
mysql> select @@enable_nereids_planner;
+--------------------------+
| @@enable_nereids_planner |
+--------------------------+
| 0 |
+--------------------------+
1 row in set (0.00 sec)
```
After the change:
```
mysql> SET enable_nereids_planner=true;
Query OK, 0 rows affected (0.01 sec)
mysql> explain select /*+ SET_var(enable_nereids_planner = false) */ year_floor(cast('2023-04-28' as date));
-- omit the result here
10 rows in set (0.14 sec)
mysql> select @@enable_nereids_planner;
+------+
| TRUE |
+------+
| 1 |
+------+
1 row in set (0.25 sec)
```
# Problem summary
We already record the old session variables when Nereids handles the `set_var` hint.
But after falling back to the old optimizer, it handles the `set_var` hint again. At that point the hint has already taken effect, so the recorded "old" value has already changed, and we end up overwriting the session variable with that changed value.
# Describe your changes.
We now check whether the old session variable value has already been recorded before recording it; if a value already exists, we skip it.
The rules of constant folding on logical operators are:
true and true -> true
true and false -> false
false and false -> false
true and x -> x
false and x -> false
null and true -> null
null and false -> false
null and null -> null
null and x -> null and x
true or true -> true
true or false -> true
false or false -> false
true or x -> true
false or x -> false or x
null or true -> true
null or false -> null
null or null -> null
null or x -> null or x
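These rules follow standard SQL three-valued logic; a few quick checks (boolean results display as 1/0 in a MySQL client):
```
SELECT TRUE AND NULL;   -- NULL
SELECT FALSE AND NULL;  -- 0 (false)
SELECT NULL OR TRUE;    -- 1 (true)
SELECT NULL OR FALSE;   -- NULL
SELECT NULL AND NULL;   -- NULL
```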
Support inserting the result of a query into a table with `partition`, `with label`, and `cols` clauses:
```
insert into t partition (p1, p2)
with label label_1
(c1, c2, c3)
[hint1, hint2]
with cte as (
select * from src
)
select k1, k2, k3 from cte
```
We create new classes, InsertIntoTableCommand and Unbound/Logical/PhysicalOlapTableSink, to describe the insert command and the OlapTableSink for Nereids.
We build the UnboundOlapTableSink in the parsing phase, bind it, and then implement and translate the node to OlapTableSink.
Then we run the command within a transaction.
Fix bugs:
1. Should return the other child of an Or expression if the current side is NULL after constant folding.
2. Lead should have three parameters; remove the default-value constructors (see the example below).
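For item 2, a hedged example of lead with all three arguments written out (table and columns are illustrative):
```
-- lead(value, offset, default): the third argument is now expected explicitly
-- instead of being filled in by a default-value constructor.
SELECT lead(c1, 1, 0) OVER (PARTITION BY c2 ORDER BY c3) FROM t;
```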
Do not enable Nereids for these cases under nereids_p0:
1. nereids_p0/join/sql
2. nereids_p0/sql_functions/horology_functions/sql
Should disable Nereids explicitly because the results are not the same:
1. query_p0/sql_functions/horology_functions/sql
2. query_p0/stats/query_stats_test.groovy
3. query_profile/test_profile.groovy
Unstable regression test case
1. nereids_syntax_p0/join.groovy