1. Optimize expression comparison by:
   a) flattening the method call stack
   b) short-circuiting: if a cheap field already differs, return early instead of comparing expensive fields like `Alias.name`
2. Lazily compute `Alias.name`, which is derived from `toSql()`. `Alias.toSlot()` no longer generates the long name immediately (see the sketch below).
3. Cache `Expression.inputSlots`, which saves time when the method is invoked multiple times.
4. Always compute `Expression.unbound` up front, so checking it does not require traversing a big expression tree.
This PR saves about 200ms when submitting some long SQLs.
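As a rough illustration of items 1(b) and 2 above, here is a minimal, self-contained sketch; the class and field names are made up for illustration and are not the actual Nereids classes:

```java
import java.util.Objects;

// Hypothetical sketch of the two ideas above: compare cheap fields first so equals()
// can return early, and build the expensive name string lazily only when asked for.
// Class and field names are illustrative, not the real Nereids classes.
final class AliasSketch {
    private final int exprId;           // cheap field, compared first
    private final Object child;         // the wrapped expression
    private volatile String name;       // lazily computed, cached after first use

    AliasSketch(int exprId, Object child) {
        this.exprId = exprId;
        this.child = child;
    }

    // Lazily derive the long name (stand-in for toSql()) instead of in the constructor.
    String getName() {
        String n = name;
        if (n == null) {
            n = "alias#" + exprId + "(" + child + ")";
            name = n;
        }
        return n;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof AliasSketch)) {
            return false;
        }
        AliasSketch other = (AliasSketch) o;
        // Short circuit: cheap int comparison first; only fall through to the
        // potentially expensive child comparison when it matches.
        if (exprId != other.exprId) {
            return false;
        }
        return Objects.equals(child, other.child);
    }

    @Override
    public int hashCode() {
        return exprId;
    }
}
```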
The token check should be forwarded to the Master FE.
I added a new RPC method `checkToken()` in Frontend for this logic.
Otherwise, after enabling the audit loader, logs from a non-master FE cannot be loaded into the audit table and fail with an `Invalid token` error.
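As a minimal sketch of the forwarding idea (the interfaces and names below are hypothetical placeholders, not the actual Doris `Frontend` RPC code):

```java
// Hypothetical sketch: a non-master FE does not validate the token locally but
// forwards the check to the master FE over RPC. The types below are illustrative.
interface MasterRpcClient {
    boolean checkToken(String token) throws Exception;
}

final class TokenChecker {
    private final boolean isMaster;
    private final String localToken;          // only authoritative on the master
    private final MasterRpcClient masterClient;

    TokenChecker(boolean isMaster, String localToken, MasterRpcClient masterClient) {
        this.isMaster = isMaster;
        this.localToken = localToken;
        this.masterClient = masterClient;
    }

    boolean check(String token) throws Exception {
        if (isMaster) {
            // The master FE owns the token and validates it locally.
            return localToken.equals(token);
        }
        // A non-master FE forwards the check instead of comparing against its own
        // (possibly stale) copy, avoiding the "Invalid token" error.
        return masterClient.checkToken(token);
    }
}
```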
## Proposed changes
1. The data type applicability check should not throw an exception when the actual data type is a subclass of the signature data type.
2. Merge `SlotBinder` and `FunctionBinder` into `ExpressionAnalyzer` to avoid rewriting the whole expression tree multiple times.
3. `ExpressionAnalyzer.buildCustomSlotBinderAnalyzer()` provides more refined code to bind slots by different parts and with different priorities.
4. The original slot binder has O(n^2) complexity; this PR uses `Scope.nameToSlot` to support O(n) binding (see the sketch after this list).
5. Change some `Collection.stream()` usages to `ImmutableXxx.builder()` to remove method calls that are hard for the JVM to inline on the hot path, e.g. `Expression.<init>` and `AbstractTreeNode.<init>`.
6. Change some `ImmutableXxx.copyOf(xxx)` to `Utils.fastToImmutableList(xxx)` to skip an additional copy of the array.
7. Set an initial size on the `ImmutableXxx` builders to skip some useless resizes.
8. Lazily compute and cache some heavy operations, like `Scope.nameToSlot` and `CaseWhen.computeDataTypesForCoercion()`.
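For item 4, a minimal sketch of the one-time name-to-slot index; the types here are illustrative stand-ins, not the actual `Scope`/`Slot` classes:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of the O(n) binding idea: build a name -> slots index once,
// then resolve each unbound name with a map lookup instead of scanning every slot
// for every name.
final class SlotIndex {
    record Slot(String name, int id) {}

    private final Map<String, List<Slot>> nameToSlot;

    SlotIndex(List<Slot> slots) {
        // One pass over the scope's slots; grouping keeps duplicates so ambiguous
        // names can still be detected by the caller.
        this.nameToSlot = slots.stream()
                .collect(Collectors.groupingBy(s -> s.name().toLowerCase()));
    }

    List<Slot> lookup(String unboundName) {
        return nameToSlot.getOrDefault(unboundName.toLowerCase(), List.of());
    }
}
```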
(cherry picked from commit 83c2f5a95827136aac4f0a78c5e841e9a099858c)
Support for the `PARTITION BY` syntax used by external catalogs has been added.
You can specify a column directly, or a partition function, as the partition condition.
For example:
`PARTITION BY LIST(col1, col2, func(param), func(param1, param2), func(param1, param2, param3))`
NOTICE:
This PR changes the grammar of `AUTO PARTITION`
From
```
AUTO PARTITION BY RANGE date_trunc(`TIME_STAMP`, 'month')
```
To
```
AUTO PARTITION BY RANGE (date_trunc(`TIME_STAMP`, 'month'))
```
Problem:
Inconsistent behavior occurs when executing partial column update `UPDATE` statements and `INSERT` statements on merge-on-write tables with the Nereids optimizer enabled. The number of columns passed to BE differs; `UPDATE` operations incorrectly pass all columns, while `INSERT` operations correctly pass only the updated columns.
Reason:
The Nereids optimizer does not handle partial column update `UPDATE` statements properly. The processing logic for `UPDATE` statements rewrites them as equivalent `INSERT` statements, which are then processed according to the logic of `INSERT` statements. For example, assuming a MoW table structure with columns k1, k2, v1, v2, the correct rewrite should be:
* `UPDATE table t1 SET v1 = v1 + 1 WHERE k1 = 1 AND k2 = 2`
* =>
* `INSERT INTO table (v1) SELECT v1 + 1 FROM table t1 WHERE k1 = 1 AND k2 = 2`
However, the actual rewriting process does not consider the logic for partial column updates, leading to all columns being included in the `INSERT` statement, i.e., the result is:
* `INSERT INTO table (k1, k2, v1, v2) SELECT k1, k2, v1 + 1, v2 FROM table t1 WHERE k1 = 1 AND k2 = 2`
This results in `UPDATE` operations incorrectly passing all columns to BE.
Solution:
Having analyzed the cause, the solution is straightforward: when rewriting partial column update `UPDATE` statements to `INSERT` statements, only retain the updated columns and all key columns (partial column updates must include all key columns), so for the example above the `INSERT` covers only k1, k2, and v1 rather than all four columns. Additionally, this PR includes error injection cases to verify that the number of columns passed to BE is correct.
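A minimal sketch of that column-pruning rule, using made-up types rather than the real Doris classes:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the pruning described above: when rewriting a partial-update
// UPDATE into an INSERT, keep every key column plus only the columns that the SET
// clause actually assigns. ColumnMeta and the method names are illustrative.
final class PartialUpdateColumnPruner {
    record ColumnMeta(String name, boolean isKey) {}

    static List<ColumnMeta> pruneForPartialUpdate(List<ColumnMeta> baseSchema,
                                                  Set<String> updatedColumns) {
        return baseSchema.stream()
                .filter(c -> c.isKey() || updatedColumns.contains(c.name()))
                .collect(Collectors.toList());
    }
}
```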
This PR improves high-QPS queries by speeding up PartitionPrunner:
1. Remove useless Date parse/format; use `LocalDate` instead.
2. Add a fast evaluation path for single-value partitions.
3. Change `Collection.stream()` to `ImmutableXxx.builderWithExpectedSize(n)` to skip useless method calls and collection resizes.
4. Change lots of if-else chains to switch statements.
5. Don't format `DateLiteral` to a string for comparison; compare its int fields instead (see the sketch below).
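As a rough sketch of items 1 and 5, comparing date values by numeric fields instead of formatted strings (the class below is an illustrative stand-in, not the actual `DateLiteral`):

```java
import java.time.LocalDate;

// Hypothetical sketch: keep partition bound dates as plain int fields and compare
// numerically, instead of formatting to strings and parsing again on the hot path.
final class DateKey implements Comparable<DateKey> {
    private final int year;
    private final int month;
    private final int day;

    DateKey(LocalDate d) {
        this.year = d.getYear();
        this.month = d.getMonthValue();
        this.day = d.getDayOfMonth();
    }

    @Override
    public int compareTo(DateKey o) {
        // Pure int comparisons; no string formatting or parsing involved.
        if (year != o.year) {
            return Integer.compare(year, o.year);
        }
        if (month != o.month) {
            return Integer.compare(month, o.month);
        }
        return Integer.compare(day, o.day);
    }
}
```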