The predicate column type for char, varchar, and string is PredicateColumnType<TYPE_STRING>, so the _base_evaluate method should always convert the input column to PredicateColumnType<TYPE_STRING>.
This PR fixes the following scenario:
1. Two Backends.
2. Create tables in a colocation group with 1 replica.
3. Decommission one of the Backends.
4. The tablet count on the decommissioned Backend does not decrease.
This is a bug in ColocateTableCheckerAndBalancer.
Every time a new broker load comes in, Doris updates the start time of Kerberos authentication, but this logic is wrong, because the Kerberos authentication duration is counted from the moment the ticket is obtained.
This PR changes the logic:
1. If it is Kerberos, check fs expiration by creation time.
2. Otherwise, check fs expiration by access time.
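A minimal sketch of the intended check, assuming a simplified cache entry with hypothetical createTimeMs/accessTimeMs fields and a single expiration threshold (none of these names are the actual Doris API):
```
// Hypothetical sketch of the expiration check described above; the field and
// method names are illustrative, not Doris's actual FileSystem cache API.
public class RemoteFileSystemEntry {
    private final boolean kerberosEnabled;
    private final long createTimeMs; // when the Kerberos ticket was obtained
    private long accessTimeMs;       // refreshed on every use

    // Assumed threshold, for illustration only.
    private static final long EXPIRE_MS = 12 * 60 * 60 * 1000L;

    public RemoteFileSystemEntry(boolean kerberosEnabled) {
        this.kerberosEnabled = kerberosEnabled;
        this.createTimeMs = System.currentTimeMillis();
        this.accessTimeMs = this.createTimeMs;
    }

    public void markAccessed() {
        this.accessTimeMs = System.currentTimeMillis();
    }

    public boolean isExpired() {
        long now = System.currentTimeMillis();
        if (kerberosEnabled) {
            // A Kerberos ticket's lifetime starts when the ticket is obtained,
            // so the check must be based on the creation time.
            return now - createTimeMs > EXPIRE_MS;
        }
        // Non-Kerberos file systems can be evicted based on idle time.
        return now - accessTimeMs > EXPIRE_MS;
    }
}
```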
When consuming a mem tracker exceeds the mem limit in the mem hook, the Boost stacktrace is printed: each query/load prints it only once, and the process mem tracker prints it at most once per second.
After process memory reaches the upper limit, the Boost stacktrace is printed every second. The observed phenomena are as follows:
- After a query/load is canceled, memory increases instantly.
- The total physical memory in the tcmalloc profile is less than the process memory reported by perf.
- The process mem tracker is smaller than the process memory reported by perf.
The tcmalloc/jemalloc allocator cache does not participate in the mem check as part of the process physical memory, because new/malloc triggers the mem hook when the tcmalloc/jemalloc allocator cache is used but may not actually allocate physical memory, so failing the mem check on it in the mem hook is not expected.
In addition:
- The size of the tcmalloc/jemalloc allocator cache is tracked as a mem tracker whose parent is the process mem tracker; it is updated every 1s.
- The process default mem_limit is changed to 90%, expecting the mem tracker to effectively limit the memory usage of the process.
Fix _delete_sign_idx and _seq_col_idx in append_column and build_schema during load.
Support recycling tablet schema cache entries when the schema shared_ptr use count equals 1.
Add an HTTP interface for flink-connector to sync DDL.
Improve tablet->tablet_schema() by using max_version_schema.
When a flush is triggered because the load channel exceeds the mem limit and the flush fails, an error message is returned and the load is terminated.
A flush failure is usually error code -238: because the memtable is flushed frequently once the load channel exceeds the mem limit, the number of segments exceeds the maximum.
This PR does the following:
1. Change the nullable mode of 'from_unixtime' and 'parse_url' from DEPEND_ON_ARGUMENT to ALWAYS_NULLABLE; this nullable configuration was previously missing.
2. Add some new interfaces for the original NullableMode. This change is inspired by Scala's mix-in traits: it helps us quickly understand a function's traits without reading the lengthy procedural code, and saves writing template code, e.g. `class Substring extends ScalarFunction implements ImplicitCastInputTypes, PropagateNullable`. These are the interfaces (see the sketch after this list):
- PropagateNullable: equivalent to NullableMode.DEPEND_ON_ARGUMENT
- AlwaysNullable: equivalent to NullableMode.ALWAYS_NULLABLE
- AlwaysNotNullable: equivalent to NullableMode.ALWAYS_NOT_NULLABLE
- ComputeNullable: equivalent to NullableMode.CUSTOM (all other cases)
3. Add `GenerateScalarFunction` to generate Nereids-style function code from legacy functions. It does not actually generate any new function class yet, because the function traits are not ready for use: I need to add some traits for the legacy functions' CompareMode and NonDeterministic, following the same idea as ComputeNullable.
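A minimal sketch of the mix-in style from item 2, using simplified stand-ins for the real Doris classes:
```
// Simplified stand-ins for the real Nereids classes; illustration only.
enum NullableMode { DEPEND_ON_ARGUMENT, ALWAYS_NULLABLE, ALWAYS_NOT_NULLABLE, CUSTOM }

interface PropagateNullable {
    default NullableMode nullableMode() { return NullableMode.DEPEND_ON_ARGUMENT; }
}

interface AlwaysNullable {
    default NullableMode nullableMode() { return NullableMode.ALWAYS_NULLABLE; }
}

interface AlwaysNotNullable {
    default NullableMode nullableMode() { return NullableMode.ALWAYS_NOT_NULLABLE; }
}

interface ImplicitCastInputTypes { }

abstract class ScalarFunction { }

// The implements clause declares the function's traits at a glance:
// Substring propagates nullability from its arguments.
class Substring extends ScalarFunction implements ImplicitCastInputTypes, PropagateNullable { }
```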
To eliminate all unnecessary cross joins in the TPC-H benchmark, this PR:
1. Pushes all predicates that can be pushed down through joins before applying the ReorderJoin rule. This lets ReorderJoin eliminate every cross join it is able to eliminate, since that rule must match a LogicalFilter as its root pattern. (Q2, Q15, Q16, Q17, Q18)
2. Enables the extract-common-expression optimization rule. (Q19)
3. Fixes a failed cast translation. (Q19)
Test SQL (TPC-H q21):
```
select count(*)
from lineitem l3 right anti join lineitem l1
on l3.l_orderkey = l1.l_orderkey and l3.l_suppkey <> l1.l_suppkey;
```
If we have other join conjuncts, we have to put all slots from the left and right children into `slotReferenceMap`, instead of only `hashjoin.getOutput()`.
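A minimal sketch of that rule, using strings as stand-in slots (the real translator works on Nereids SlotReference/ExprId objects; this is not the actual API):
```
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stub sketch: decide which slots get registered in slotReferenceMap.
class HashJoinSlotRegistration {
    static Map<String, String> buildSlotReferenceMap(
            List<String> leftOutput, List<String> rightOutput,
            List<String> joinOutput, boolean hasOtherJoinConjuncts) {
        Map<String, String> slotReferenceMap = new LinkedHashMap<>();
        if (hasOtherJoinConjuncts) {
            // Other conjuncts (e.g. l3.l_suppkey <> l1.l_suppkey) may reference
            // slots that are absent from the join output, so register every
            // slot from both children.
            leftOutput.forEach(s -> slotReferenceMap.put(s, s));
            rightOutput.forEach(s -> slotReferenceMap.put(s, s));
        } else {
            joinOutput.forEach(s -> slotReferenceMap.put(s, s));
        }
        return slotReferenceMap;
    }
}
```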
After splitting the intermediate tuple and the output tuple, we met several issues in the regression test, so we make the following changes (items 1 and 2 are sketched after the list):
1. Since translating a project replaces the underlying hash-join node's output tuple, we add PhysicalHashJoin.shouldTranslateOutput.
2. Because PhysicalPlanTranslator merges the filter and the hash join, we add PhysicalHashJoin.filterConjuncts and translate the filter conjuncts in PhysicalHashJoin.
3. In this PR, we set HashJoinNode.hashOutputSlotIds properly when using the Nereids planner.
4. To stay compatible with the BE, nullable() returns true in the substring function.
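A minimal sketch of the two additions from items 1 and 2, with simplified stand-ins for the real Nereids classes:
```
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins; the real PhysicalHashJoin carries much more state.
class Expression { }

class PhysicalHashJoin {
    // Item 1: a parent project that replaces this join's output tuple sets
    // this to false, so the join skips translating its own output.
    boolean shouldTranslateOutput = true;

    // Item 2: filter conjuncts that PhysicalPlanTranslator merged into the
    // join; they are translated together with the join node itself.
    final List<Expression> filterConjuncts = new ArrayList<>();
}
```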
Actual result:
```
select cast("0.0000031417" as date);
+------------------------------+
| CAST('0.0000031417' AS DATE) |
+------------------------------+
| 2000-00-00                   |
+------------------------------+
```
Expected result:
```
select cast("0.0000031417" as date);
+------------------------------+
| CAST('0.0000031417' AS DATE) |
+------------------------------+
| NULL                         |
+------------------------------+
```
When we do type coercion on a CaseWhen expression, for SQL like this:
```
CASE WHEN n_nationkey > 1 THEN n_regionkey ELSE 0 END
```
The ELSE part, 0, needs type coercion to CAST(0 AS INT), but we missed it in PR #11802.
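A minimal sketch of the missing step, with stub expression classes (not the Doris AST): wrap the ELSE branch in a cast to the common type of the THEN branches.
```
// Stub expression classes for illustration; the type names are assumptions.
abstract class Expr {
    abstract String type();
}

class Literal extends Expr {
    final Object value;
    final String type;
    Literal(Object value, String type) { this.value = value; this.type = type; }
    String type() { return type; }
}

class Cast extends Expr {
    final Expr child;
    final String targetType;
    Cast(Expr child, String targetType) { this.child = child; this.targetType = targetType; }
    String type() { return targetType; }
}

class CaseWhenCoercion {
    // If the ELSE branch's type differs from the common type, wrap it in a
    // cast, e.g. the literal 0 becomes CAST(0 AS INT).
    static Expr coerceDefault(Expr defaultValue, String commonType) {
        return defaultValue.type().equals(commonType)
                ? defaultValue
                : new Cast(defaultValue, commonType);
    }

    public static void main(String[] args) {
        Expr coerced = coerceDefault(new Literal(0, "TINYINT"), "INT");
        System.out.println(coerced instanceof Cast); // true
    }
}
```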
This PR adds runtime filters to the Nereids planner. Currently they are only pushed through join nodes and scan nodes.
TODO:
1. Currently inner join, cross join, and right outer join are supported; other join types will be supported in the future.
2. Translate left outer join to inner join if there is an inner join ancestor.
3. Some complex situations cannot be handled yet; see the test case testPushDownThroughJoin for details.
4. Support the case where the src key is an aggregate group key.
NamedExpressionUtil::clear should reset the nextId rather than create a new IdGenerator<ExprId>, because the old generator may still be referenced by other objects, and replacing it can cause some cases to start in a dirty environment when we run test cases as a package.
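A minimal sketch of the difference, using an AtomicInteger-based stand-in for IdGenerator<ExprId> (illustrative, not the actual Doris class):
```
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for IdGenerator<ExprId>; illustration only.
class IdGenerator {
    private final AtomicInteger nextId = new AtomicInteger(0);

    int getNextId() { return nextId.getAndIncrement(); }

    // Resetting the counter keeps the instance identity, so objects that
    // captured a reference to this generator keep seeing fresh ids.
    void reset() { nextId.set(0); }
}

class NamedExpressionUtil {
    private static final IdGenerator ID_GENERATOR = new IdGenerator();

    static void clear() {
        // Right: reset the existing generator's state.
        ID_GENERATOR.reset();
        // Wrong (old behavior): creating a new IdGenerator here would leave
        // other objects holding the stale instance, so later test cases in
        // the same package could start in a dirty environment.
    }
}
```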