Commit Graph

7828 Commits

Author SHA1 Message Date
fd1e0e933e [opt](Nereids) outer join with is null stats estimation enhancement (#31875) 2024-03-15 17:58:01 +08:00
94a75c27e7 [feature](pipelineX) support partition tablet sink shuffle (#31689) 2024-03-15 17:58:01 +08:00
8a8a06d8b6 [feat](Nereids) update struct info map when there is new expr (#32119) 2024-03-15 17:58:01 +08:00
df5ec16d7c [Refactor](executor) Add schema type table active_queries (#32057)
* Add schema type table active_queries
2024-03-15 17:57:28 +08:00
58675c271b [opt](statistics) Add wait row count reported logic for sample analyze. (#32030)
If getRowCount returns 0 and the table's visible version is 2, the table is probably not empty, so we wait a moment for the row count to be reported.
2024-03-15 17:54:43 +08:00
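A minimal sketch of the wait-for-report idea in the commit above, assuming hypothetical accessors `getRowCount()` and `getVisibleVersion()` in place of the real Doris FE statistics API:

```java
import java.util.concurrent.TimeUnit;

public class RowCountWaiter {
    // Hypothetical stand-ins for the real FE calls; names are illustrative only.
    interface TableStats {
        long getRowCount();
        long getVisibleVersion();
    }

    // If the reported row count is 0 but the visible version is already 2, the table has
    // likely been loaded and the BE row count report simply has not arrived yet, so poll
    // a few times before falling back to 0.
    static long waitForRowCount(TableStats stats, int maxRetries, long sleepMillis)
            throws InterruptedException {
        long rowCount = stats.getRowCount();
        int retries = 0;
        while (rowCount == 0 && stats.getVisibleVersion() >= 2 && retries < maxRetries) {
            TimeUnit.MILLISECONDS.sleep(sleepMillis);
            rowCount = stats.getRowCount();
            retries++;
        }
        return rowCount;
    }
}
```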
847ec368be [Fix](smooth-upgrade) Fix incompatibility when upgrade from 2.0 to 2.1 (#32220) 2024-03-14 11:23:05 +08:00
6d2924668e [fix](audit-loader) fix invalid token check logic (#32095)
The check of the token should be forwarded to the Master FE,
so this adds a new RPC method `checkToken()` in Frontend for this logic.
Otherwise, after enabling the audit loader, logs from non-master FEs cannot be loaded into the audit table,
failing with an `Invalid token` error.
2024-03-12 22:52:11 +08:00
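The commit above adds a `checkToken()` RPC so that non-master FEs validate the token against the master. The general shape might look roughly like this, with the `FrontendClient` interface and `isMaster` flag being illustrative stand-ins rather than the actual Doris classes:

```java
public class TokenChecker {
    // Hypothetical abstraction over the master-FE RPC; not the real Doris interface.
    interface FrontendClient {
        boolean checkToken(String token) throws Exception;
    }

    private final boolean isMaster;
    private final String localToken;
    private final FrontendClient masterClient;

    TokenChecker(boolean isMaster, String localToken, FrontendClient masterClient) {
        this.isMaster = isMaster;
        this.localToken = localToken;
        this.masterClient = masterClient;
    }

    // Non-master FEs do not hold the authoritative token, so forward the check to the master.
    boolean check(String token) throws Exception {
        if (isMaster) {
            return localToken.equals(token);
        }
        return masterClient.checkToken(token);
    }
}
```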
84af8e0a53 [enhance](mtmv)mtmv support hive default partition (#32051) 2024-03-12 22:51:11 +08:00
4956d5de83 [fix](planner) remove input slot for aggregate slot which is not materialized (#32092)
introduced by #26886

Running this SQL:
SELECT
        caseId
    FROM
        (
            SELECT
                caseId,
                count(judgementDateId)
            FROM
                (
                    SELECT
                        abs(caseId) AS caseId,
                        id as judgementDateId
                    FROM
                        dr_user_test_t2
                ) AGG_RESULT
            GROUP BY
                caseId
        ) TOTAL
        order by 1;


fails with:

ERROR 1105 (HY000): errCode = 2, detailMessage = (172.17.0.1)[INTERNAL_ERROR]couldn't resolve slot descriptor 1, desc: tuples:
Tuple(id=5 slots=[Slot(id=10 type=DOUBLE col=-1, colname=, nullable=1), Slot(id=11 type=VARCHAR col=-1, colname=id, nullable=1)] has_varlen_slots=1)
Tuple(id=4 slots=[Slot(id=8 type=DOUBLE col=-1, colname=, nullable=1)] has_varlen_slots=0)
Tuple(id=2 slots=[Slot(id=4 type=DOUBLE col=-1, colname=caseId, nullable=1)] has_varlen_slots=0)
Tuple(id=0 slots=[Slot(id=0 type=VARCHAR col=-1, colname=caseId, nu
2024-03-12 18:50:26 +08:00
781a45d93c [Fix](nereids) fix date function rewrite (#32060) 2024-03-12 18:50:26 +08:00
2da57526a3 [feat](Nereids): use table map to construct struct info (#32058) 2024-03-12 18:50:06 +08:00
cf04c9c300 [enhancement](Nereids) refine and speedup analyzer (#31792) (#32111)
## Proposed changes
1. The data type applicability check no longer throws an exception when the actual data type is a subclass of the signature data type.
2. Merge `SlotBinder` and `FunctionBinder` into `ExpressionAnalyzer` to avoid rewriting the whole expression tree multiple times.
3. `ExpressionAnalyzer.buildCustomSlotBinderAnalyzer()` provides more refined code to bind slots by different parts and with different priorities.
4. The original slot binder has O(n^2) complexity; this PR uses `Scope.nameToSlot` to bind in O(n) (see the sketch after this entry).
5. Replace some `Collection.stream()` calls with `ImmutableXxx.builder()` to remove method calls that are difficult for the JVM to inline on the hot path, e.g. `Expression.<init>` and `AbstractTreeNode.<init>`.
6. Replace some `ImmutableXxx.copyOf(xxx)` calls with `Utils.fastToImmutableList(xxx)` to skip an additional copy of the array.
7. Pass an initial size to `ImmutableXxx.builder()` to avoid useless resizes.
8. Lazily compute and cache some heavy operations, such as `Scope.nameToSlot` and `CaseWhen.computeDataTypesForCoercion()`.

(cherry picked from commit 83c2f5a95827136aac4f0a78c5e841e9a099858c)
2024-03-12 17:09:38 +08:00
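For item 4 above, a simplified sketch of the O(n^2) scan versus name-indexed binding; the `Slot` record and method names here are hypothetical, not the real Nereids types:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SlotBindingSketch {
    // Minimal stand-in for a slot; the real Nereids Slot class carries far more.
    record Slot(String name) {}

    // O(n^2): for every name to bind, scan the whole slot list.
    static List<Slot> bindByScan(List<String> names, List<Slot> scopeSlots) {
        List<Slot> bound = new ArrayList<>();
        for (String name : names) {
            for (Slot slot : scopeSlots) {
                if (slot.name().equalsIgnoreCase(name)) {
                    bound.add(slot);
                    break;
                }
            }
        }
        return bound;
    }

    // O(n): build a name -> slots index once (the idea behind Scope.nameToSlot), then look up.
    static List<Slot> bindByIndex(List<String> names, List<Slot> scopeSlots) {
        Map<String, List<Slot>> nameToSlot = new HashMap<>();
        for (Slot slot : scopeSlots) {
            nameToSlot.computeIfAbsent(slot.name().toLowerCase(), k -> new ArrayList<>()).add(slot);
        }
        List<Slot> bound = new ArrayList<>();
        for (String name : names) {
            List<Slot> candidates = nameToSlot.get(name.toLowerCase());
            if (candidates != null) {
                bound.add(candidates.get(0));
            }
        }
        return bound;
    }
}
```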
da60a111d0 [refactor](nereids) rename PlanNode.distributeExprLists to childrenDistributeExprLists #32069 2024-03-12 14:20:39 +08:00
194f3432ab [Improvement](executor)Routine load support workload group #31671 2024-03-12 14:20:18 +08:00
926908ece2 [fix](hive) fix spelling mistakes for "separatorChar" #32061 2024-03-12 14:20:18 +08:00
ccd21a6ea4 [Improve](InPredict) enhance IN predicate with array type (#31828) 2024-03-12 14:19:14 +08:00
a7a85dd330 [chore](config) support select experimental session variable (#31837)
Support selecting experimental session variables.

Before the change:
select @@experimental_enable_nereids_planner;
ERROR 1193 (HY000): errCode = 2, detailMessage = Unknown system variable 'experimental_enable_nereids_planner'

show variables like 'experimental_enable_nereids_planner';
+-------------------------------------+-------+---------------+---------+
| Variable_name                       | Value | Default_Value | Changed |
+-------------------------------------+-------+---------------+---------+
| experimental_enable_nereids_planner | false | true          | 1       |
+-------------------------------------+-------+---------------+---------+

After the change:
> select @@experimental_enable_nereids_planner;
+---------------------------------------+
| @@experimental_enable_nereids_planner |
+---------------------------------------+
|                                     1 |
+---------------------------------------+
2024-03-12 14:18:26 +08:00
ab21d85e8c [nereids](topn-filter) support multi-topn filter (FE part) (#31485)
support multi-topn-filter
2024-03-12 14:17:48 +08:00
3358f76a7f [feature](spill) Implement spill to disk for hash join, aggregation and sort for pipelineX (#31910)
Co-authored-by: Jerry Hu <mrhhsg@gmail.com>
2024-03-12 14:12:09 +08:00
cf6b22c621 [fix](jdbc catalog) fix type conversion error in MySQL JDBC Driver 5.x (#31880) 2024-03-12 14:07:57 +08:00
1fee736ca4 [fix](jdbc catalog) Clean up the connection pool after failure to initialize the client (#31949) 2024-03-12 14:07:00 +08:00
c5390d00bb [Improvement]Add schema table backend_active_tasks (#31945) 2024-03-09 19:55:48 +08:00
d5bf20c96e [improvement](mtmv) Improve the performance of query rewriting by materialized view (#31886)
- Limit the number of query rewriting attempts per group (a toy sketch follows this entry)
- Remove unnecessary logging and explain detail info from the query
2024-03-09 19:55:47 +08:00
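A toy sketch of the per-group rewrite limit mentioned above, using a hypothetical counter keyed by group id (the actual Nereids bookkeeping is more involved):

```java
import java.util.HashMap;
import java.util.Map;

public class RewriteBudget {
    private final int maxRewritesPerGroup;
    private final Map<Integer, Integer> attemptsByGroup = new HashMap<>();

    RewriteBudget(int maxRewritesPerGroup) {
        this.maxRewritesPerGroup = maxRewritesPerGroup;
    }

    // Returns true if this group may still be rewritten, and records the attempt.
    boolean tryConsume(int groupId) {
        int used = attemptsByGroup.getOrDefault(groupId, 0);
        if (used >= maxRewritesPerGroup) {
            return false;
        }
        attemptsByGroup.put(groupId, used + 1);
        return true;
    }
}
```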
78feb7f519 [fix](forward) set error code for query state to handle exception of (#31975) 2024-03-09 19:55:47 +08:00
4bdea7c324 [opt](fe) Reduce jvm heap memory consumed by profiles of BrokerLoadJob (#31985)
* It may cause FE OOM when there are a lot of broker load jobs
  and the profile is enabled.
2024-03-09 19:45:50 +08:00
cc0e58faec [enhancement](regression-test) upgrade groovy to 4.x and enable run test by jdk17/21 (#31906)
Upgrade Groovy to 4.x and enable running tests with JDK 17/21.
2024-03-09 19:45:46 +08:00
5f9eb5eb52 [feature](external catalog)Add partition grammar for external catalog to create table (#31585)
The `PARTITION BY` syntax used by external catalogs has been added. 
You can specify a column directly, or a partition function as a partition condition.
Like:
 `PARTITION BY LIST(col1, col2, func(param), func(param1, param2), func(param1, param2, param3))`

NOTICE:
This PR changes the grammar of `AUTO PARTITION`
From 
```
AUTO PARTITION BY RANGE date_trunc(`TIME_STAMP`, 'month')
```
To
```
AUTO PARTITION BY RANGE (date_trunc(`TIME_STAMP`, 'month'))
```
2024-03-09 19:45:46 +08:00
62db7094ea Revert "Problem: When the old optimizer processes an INSERT INTO statement that contains two quotation marks, it results in only one quotation mark being written into the database. (#31890)" (#31986)
This reverts commit 8c309652e04698f311b6c9158105352e8416c69a.
2024-03-09 19:45:46 +08:00
5909237ab1 [Fix](nereids) Add semantic check that the hash bucket column must be a key column when creating table for aggregate and unique models (#31951) 2024-03-09 19:45:46 +08:00
609761567c [Fix](partial-update) Fix wrong column number passing to BE when partial and enable nereids (#31461)
* Problem:
Inconsistent behavior occurs when executing partial column update `UPDATE` statements and `INSERT` statements on merge-on-write tables with the Nereids optimizer enabled. The number of columns passed to BE differs; `UPDATE` operations incorrectly pass all columns, while `INSERT` operations correctly pass only the updated columns.

Reason:
The Nereids optimizer does not handle partial column update `UPDATE` statements properly. The processing logic for `UPDATE` statements rewrites them as equivalent `INSERT` statements, which are then processed according to the logic of `INSERT` statements. For example, assuming a MoW table structure with columns k1, k2, v1, v2, the correct rewrite should be:
UPDATE table t1 set v1 = v1 + 1 where k1 = 1 and k2 = 2
  =>
INSERT into table (v1) select v1 + 1 from table t1 where k1 = 1 and k2 = 2

However, the actual rewriting process does not consider the logic for partial column updates, leading to all columns being included in the `INSERT` statement, i.e., the result is:
INSERT into table (k1, k2, v1, v2) select k1, k2, v1 + 1, v2 from table t1 where k1 = 1 and k2 = 2

This results in `UPDATE` operations incorrectly passing all columns to BE.

Solution:
Having analyzed the cause, the solution is straightforward: when rewriting partial column update `UPDATE` statements to `INSERT` statements, only retain the updated columns and all key columns (as partial column updates must include all key columns). Additionally, this PR includes error injection cases to verify the number of columns passed to BE is correct.

2024-03-09 19:45:42 +08:00
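The solution described above keeps only the key columns plus the explicitly updated columns when the UPDATE is rewritten to an INSERT. A minimal sketch of that column selection, with a hypothetical `Column` record instead of Doris's catalog classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class PartialUpdateColumns {
    // Simplified column descriptor; the real table column carries much more metadata.
    record Column(String name, boolean isKey) {}

    // For a partial-update UPDATE rewritten to INSERT on a merge-on-write table, the
    // INSERT target list must contain all key columns plus the updated value columns,
    // not every column of the table.
    static List<Column> targetColumns(List<Column> tableColumns, Set<String> updatedColumns) {
        List<Column> targets = new ArrayList<>();
        for (Column col : tableColumns) {
            if (col.isKey() || updatedColumns.contains(col.name())) {
                targets.add(col);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        List<Column> table = List.of(
                new Column("k1", true), new Column("k2", true),
                new Column("v1", false), new Column("v2", false));
        // UPDATE t1 SET v1 = v1 + 1 WHERE k1 = 1 AND k2 = 2 -> targets are k1, k2, v1 only
        System.out.println(targetColumns(table, Set.of("v1")));
    }
}
```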
e8b4bf5be9 [enhancement](Nereids) Speedup PartitionPrunner (#31970)
This PR improves high-QPS queries by speeding up PartitionPrunner:

1. remove useless Date parse/format; use LocalDate instead
2. add a fast evaluation path for single-value partitions
3. change `Collection.stream()` to `ImmutableXxx.builderWithExpectedSize(n)` to skip useless method calls and collection resizes
4. change lots of if-else chains to switch statements
5. don't convert to strings to compare date literals; compare the int fields instead (see the sketch after this entry)
2024-03-09 19:45:03 +08:00
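Point 5 above, comparing date literals by integer fields rather than by formatted strings, could look roughly like this simplified stand-in (not the real Nereids `DateLiteral`):

```java
public class DateLiteralCompare {
    // Simplified date literal with integer fields, standing in for year/month/day storage.
    record DateLit(int year, int month, int day) implements Comparable<DateLit> {
        @Override
        public int compareTo(DateLit other) {
            // Compare field by field; no string formatting or parsing on the hot path.
            if (year != other.year) {
                return Integer.compare(year, other.year);
            }
            if (month != other.month) {
                return Integer.compare(month, other.month);
            }
            return Integer.compare(day, other.day);
        }
    }

    public static void main(String[] args) {
        System.out.println(new DateLit(2024, 3, 9).compareTo(new DateLit(2024, 3, 12))); // negative
    }
}
```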
e3611f6a1d [improve](routine-load) increase routine load max_batch_size max limit (#31846) 2024-03-09 19:45:03 +08:00
19e6ebd09c [Feature](materialized-view) support mv with bitmap_union(bitmap_from_array()) case (#31962)
support mv with bitmap_union(bitmap_from_array()) case
2024-03-09 19:45:03 +08:00
679cd0ab45 [opt](mtmv) ensure rewritten plan output order correct even project been eliminated (#31870) 2024-03-09 19:45:03 +08:00
1721bfb87a [fix](nereids)forbid some join reorder rules for mark join (#31966) 2024-03-09 19:45:03 +08:00
4bfecac08a [enhancement](plsql) Support show procedure and show create procedure (#31297) (#31763) 2024-03-09 19:45:03 +08:00
eb280d374b [case](Nereids) add leading tpc-h (#30405)
add tpc-h shape cases using the leading hint, except:

- single-table queries without join: q1, q6
- queries with unsupported features (tables after subquery unnesting): q2, q16, q18, q20, q21, q22
2024-03-09 19:45:03 +08:00
861461403f add missing RuleType LOGICAL_REPEAT_TO_PHYSICAL_REPEAT_RULE (#31877) 2024-03-09 19:45:03 +08:00
b2de83f250 [agg](conf) Add a knob to control distinct agg (#31930)
Add a knob to control distinct agg
2024-03-09 19:44:54 +08:00
908dff551a [fix](statistics) Add synchronization for modifying analysisTaskInfoMap and analysisJobInfoMap. #31940 2024-03-09 19:44:54 +08:00
aff09fc9bc [feature](Nereids) support make miss slot as null alias when converting anti join (#31854)
transform

project(A.*, B.slot)
  - filter(B.slot is null)
    - LeftOuterJoin(A, B)

to

project(A.*, null as B.slot)
  - LeftAntiJoin(A, B)
2024-03-09 19:43:21 +08:00
5b52812af2 Problem: When the old optimizer processes an INSERT INTO statement that contains two quotation marks, it results in only one quotation mark being written into the database. (#31890)
Reason: During syntax parsing, the old optimizer interprets two quotation marks as a single quotation mark.
Solution: Remove the logic that consolidates two quotation marks into one.
2024-03-09 19:43:21 +08:00
3b56c4bcfa [enhancement](nereids) send is_nereids flag to BE (#31752) 2024-03-09 19:43:12 +08:00
db389d7d4e [feat](nereids) support null safe eq runtime filter (FE part) (#31655)
The BE part has been merged in #31754.
2024-03-07 16:53:49 +08:00
667b1fba04 [enhance](mtmv) MTMV Use partial partition of base table (#31632)
MTMV adds 3 properties:
partition_sync_limit: an integer
partition_sync_time_unit: DAY/MONTH/YEAR
partition_sync_date_format: like "%Y-%m-%d"/"%Y%m%d"

For example, if the current time is 2020-02-03 20:10:10 (a sketch of the lower-bound computation follows this entry):
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to DAY, only partitions with a time greater than or equal to 2020-02-03 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to MONTH, only partitions with a time greater than or equal to 2020-02-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to YEAR, only partitions with a time greater than or equal to 2020-01-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 3 and partition_sync_time_unit is set to MONTH, only partitions with a time greater than or equal to 2019-12-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 4 and partition_sync_time_unit is set to DAY, only partitions with a time greater than or equal to 2020-01-31 00:00:00 will be synchronized to the MTMV
2024-03-07 16:53:49 +08:00
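The examples above are consistent with truncating the current time to the sync unit and stepping back `partition_sync_limit - 1` units. A sketch with `java.time`, assuming that interpretation (the real MTMV implementation may differ):

```java
import java.time.LocalDateTime;
import java.time.temporal.TemporalAdjusters;

public class PartitionSyncLowerBound {
    enum SyncUnit { DAY, MONTH, YEAR }

    // Truncate `now` to the unit, then step back (limit - 1) units.
    static LocalDateTime lowerBound(LocalDateTime now, int limit, SyncUnit unit) {
        LocalDateTime start = now.toLocalDate().atStartOfDay();
        switch (unit) {
            case DAY:
                return start.minusDays(limit - 1);
            case MONTH:
                return start.with(TemporalAdjusters.firstDayOfMonth()).minusMonths(limit - 1);
            case YEAR:
                return start.with(TemporalAdjusters.firstDayOfYear()).minusYears(limit - 1);
            default:
                throw new IllegalArgumentException("unsupported unit: " + unit);
        }
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2020, 2, 3, 20, 10, 10);
        System.out.println(lowerBound(now, 1, SyncUnit.DAY));   // 2020-02-03T00:00
        System.out.println(lowerBound(now, 3, SyncUnit.MONTH)); // 2019-12-01T00:00
        System.out.println(lowerBound(now, 4, SyncUnit.DAY));   // 2020-01-31T00:00
    }
}
```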
56342278f1 [bug](meta) exit when get RollbackException in observer (#31687) 2024-03-07 16:53:20 +08:00
e91d16854b [fix](function) fix date_format function execution error on fe (#31645) 2024-03-07 16:53:19 +08:00
a3c24b84e3 [fix](mtmv) support null value in partition for updating (#31843) 2024-03-07 16:53:19 +08:00
da5a40077f [fix](http stream) http stream support memtable_on_sink_node header (#31866) 2024-03-07 16:53:19 +08:00
9d5c51f5c3 [enhancement](nereids) show nullaware semi join in plan (#31738) 2024-03-07 16:53:19 +08:00