In #9755, we split Plan into Plan & Operator, but during subsequent development we found the resulting rules became complex and counter-intuitive:
1. we must create an operator instance, then wrap it into a plan of the matching plan type.
2. the relational algebra node (operator) does not contain its children.
e.g.
```java
logicalProject().then(project -> {
    List<NamedExpression> boundSlots =
            bind(project.operator.getProjects(), project.children(), project);
    LogicalProject op = new LogicalProject(flatBoundStar(boundSlots));
    // wrap a plan
    return new LogicalUnaryPlan(op, project.child());
})
```
After combining operator and plan, the code becomes:
```java
logicalProject().then(project -> {
    List<NamedExpression> boundSlots =
            bind(project.getProjects(), project.children(), project);
    return new LogicalProject(flatBoundStar(boundSlots), project.child());
})
```
Originally, we thought the plan & operator split would be convenient for `Memo.copyIn()`, because Memo does not know how to re-create a plan (assembling the child plans in the children groups) from its plan type alone. So Plan had to provide the abstract `withChildren()` method to assemble children, and the fewer plan types, the lower the code cost (logical/physical crossed with leaf/unary/binary plans: about 6 plan classes, with no concrete plans such as LogicalAggregatePlan).
But this convenience came with the negative effect of being difficult to understand: people had to learn the concept before they could develop new rules, and the rules became ugly. So we combine plan & operator to make the rules as simple as possible. The trade-off is that we must override some withXxx methods for every concrete plan, e.g. LogicalAggregate, PhysicalHashJoin.
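For illustration, here is a minimal sketch of the `withChildren()` contract a concrete plan now has to fulfill; the class shapes below are simplified stand-ins, not the actual Doris Nereids classes:
```java
import java.util.List;

// Minimal sketch: just enough structure to show why withChildren()
// lets Memo.copyIn() rebuild any concrete plan with new children.
interface Plan {
    List<Plan> children();
    Plan withChildren(List<Plan> children);
}

final class LogicalProject implements Plan {
    private final List<String> projects; // stand-in for List<NamedExpression>
    private final Plan child;

    LogicalProject(List<String> projects, Plan child) {
        this.projects = projects;
        this.child = child;
    }

    @Override
    public List<Plan> children() {
        return List.of(child);
    }

    // Memo can copy this node into a group and re-attach the child plans
    // from the children groups without knowing the concrete type upfront.
    @Override
    public LogicalProject withChildren(List<Plan> children) {
        return new LogicalProject(projects, children.get(0));
    }
}
```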
Currently, we implicitly convert Array<Int> to Array<BigInt>.
For example, the input array_sum([1, 2, 3]) can match the function array_sum(Array<Int>) as well as array_sum(Array<BigInt>).
But when a function has more than one argument, the function may be matched incorrectly.
For example, the input array_contains([1, 2, 3], 2147483648) will match the function array_contains(Array<BigInt>, BigInt), but the correct match should be array_contains(Array<Int>, Int).
The correct matches should be:
array_contains([1, 2, 3], 1) matches array_contains(Array<Int>, Int)
array_contains([1, 2, 3], 2147483648) matches array_contains(Array<Int>, Int)
array_contains([2147483648, 2147483649, 2147483650], 2147483648) matches array_contains(Array<BigInt>, BigInt)
But the current matches are:
array_contains([1, 2, 3], 1) matches array_contains(Array<Int>, Int)
array_contains([1, 2, 3], 2147483648) matches array_contains(Array<BigInt>, BigInt)
array_contains([2147483648, 2147483649, 2147483650], 2147483648) matches array_contains(Array<BigInt>, BigInt)
This causes real trouble. Assume two functions are defined:
Int array_functions(Array<Int>, Int)
BigInt array_functions(Array<BigInt>, BigInt)
Then array_functions([1,2,3], 2147483648) will match BigInt array_functions(Array<BigInt>, BigInt), but the result type should be Int, not BigInt.
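As a hedged illustration of the difference (a toy model, not Doris's actual function resolver), the sketch below contrasts the current promote-if-any-argument-overflows behavior with the intended rule where the array's element type decides the overload:
```java
import java.util.List;

// Toy model of the overload-matching problem, not the real Doris resolver.
public class ArrayFnMatchDemo {
    enum T { INT, BIGINT, ARRAY_INT, ARRAY_BIGINT }

    // A signature is just a name plus an argument type list here.
    record Sig(String name, List<T> args) {}

    static final Sig CONTAINS_INT =
            new Sig("array_contains", List.of(T.ARRAY_INT, T.INT));
    static final Sig CONTAINS_BIGINT =
            new Sig("array_contains", List.of(T.ARRAY_BIGINT, T.BIGINT));

    // Current behavior: if ANY argument needs BigInt, the whole call is
    // promoted to the Array<BigInt> overload.
    static Sig matchCurrent(T arrayArg, T scalarArg) {
        return (arrayArg == T.ARRAY_BIGINT || scalarArg == T.BIGINT)
                ? CONTAINS_BIGINT : CONTAINS_INT;
    }

    // Intended behavior: the array's element type alone decides the
    // overload; the scalar argument is then cast to that element type.
    static Sig matchIntended(T arrayArg, T scalarArg) {
        return arrayArg == T.ARRAY_BIGINT ? CONTAINS_BIGINT : CONTAINS_INT;
    }

    public static void main(String[] args) {
        // array_contains([1, 2, 3], 2147483648)
        System.out.println(matchCurrent(T.ARRAY_INT, T.BIGINT));  // Array<BigInt> overload (wrong)
        System.out.println(matchIntended(T.ARRAY_INT, T.BIGINT)); // Array<Int> overload (intended)
    }
}
```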
* [Schema Change] support fast add/drop column (#49)
* [feature](schema-change) support fast schema change. coauthor: yixiutt
* [schema change] Using columns desc from fe to read data. coauthor: Lchangliang
* [feature](schema change) schema change optimization for add/drop columns.
1. add a uniqueId field to class Column.
2. schema change for add/drop columns directly updates the schema meta.
Co-authored-by: yixiutt <yixiu@selectdb.com>
Co-authored-by: SWJTU-ZhangLei <1091517373@qq.com>
[Feature](schema change) fix write and add regression test (#69)
Co-authored-by: yixiutt <yixiu@selectdb.com>
[schema change] BE supports delete using the newest schema
add delete regression test
fix regression case (#107)
tmp
[feature](schema change) light schema change excludes rollup and agg/uniq/dup key types.
[feature](schema change) fe olapTable maxUniqueId write in disk.
[feature](schema change) add rpc iface for sc add column.
[feature](schema change) add columnsDesc to TPushReq for light sc.
resolve the deadlock when schema change (#124)
fix columns from FE not having the bitmap_index flag (#134)
add update/delete case
construct MATERIALIZED schema from the origin schema on insert
fix non-vectorized compaction coredump
use segment cache
choose newest schema by schema version when compaction (#182)
[bugfix](schema change) fix light schema change problem.
[feature](schema change) light schema change add alter job. (#1)
fix be ut
[bug] (schema change) unique table drop key column should not use light schema change
[feature](schema change) add schema change regression-test.
fix regression test
[bugfix](schema change) fix multi alter clauses for light schema change. (#2)
[bugfix](schema change) fix multi clauses calculate column unique id (#3)
modify PushTask process (#217)
[Bugfix](schema change) fix jobId replay causing bdbje exception.
[bug](schema change) fix max col unique id being repetitive. (#232)
[optimize](schema change) modify pendingMaxColUniqueId generate rule.
fix compaction error
* fix be ut
* fix snapshot load core
fix unique_id error (#278)
[refact](fe) remove redundant code for light schema change. (#4)
format fe core
format be core
fix be ut
modify fe meta version
fix rebase error
flush schema into rowset_meta in old table
[refactor](schema change) refact fe light schema change. (#5)
remove the schema hash change and support getting the max version schema
* modify for review
* fix be ut
* fix schema change test
In the past, we used generic types for plans and expressions to support the pattern match framework; they allowed type inference without unsafe type casts. We then observed that expressions are usually traversed or rewritten via the visitor pattern, so generic types are useless for expressions and only introduce complexity. So we remove generic types from concrete expressions.
Issue Number: close#9640
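A hedged before/after sketch of the change (class names simplified, not the exact Doris expression hierarchy):
```java
// Before: every concrete expression carried type parameters purely for the
// pattern-match framework, e.g. (shape simplified):
//   class Add<L extends Expression, R extends Expression> extends BinaryExpression<L, R> { ... }
//
// After: expressions are plain classes and traversal goes through a visitor.
abstract class Expression {
    abstract <R, C> R accept(ExpressionVisitor<R, C> visitor, C context);
}

interface ExpressionVisitor<R, C> {
    R visitAdd(Add add, C context);
}

final class Add extends Expression {
    final Expression left;
    final Expression right;

    Add(Expression left, Expression right) {
        this.left = left;
        this.right = right;
    }

    @Override
    <R, C> R accept(ExpressionVisitor<R, C> visitor, C context) {
        return visitor.visitAdd(this, context);
    }
}
```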
Add an enforcer job for Cascades.
Inspired by the *NoisePage* enforcer job and the *ORCA* paper.
During this phase, we derive physical properties for the plan tree and prune plans according to their cost.
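A minimal sketch of the enforce-and-prune step, assuming simplified names (every type and method below is a stand-in, not the actual job code):
```java
// Minimal sketch of the enforce-and-cost-prune step; every name here is an
// assumption, not the actual Doris job code.
interface PhysicalProperties {
    boolean satisfies(PhysicalProperties required);
    double enforcerCost(); // cost of the enforcer (e.g. exchange/sort) that supplies it
}

interface PhysicalPlan {
    double selfCost();
    double childrenCost();
    PhysicalProperties outputProperties();
}

final class EnforceAndPrune {
    // Returns the total cost, or +Infinity when the plan should be pruned
    // because it already exceeds the current cost upper bound.
    static double enforceAndCost(PhysicalPlan plan,
                                 PhysicalProperties required,
                                 double costUpperBound) {
        double cost = plan.selfCost() + plan.childrenCost();
        if (!plan.outputProperties().satisfies(required)) {
            cost += required.enforcerCost(); // pay for the enforcer node
        }
        return cost >= costUpperBound ? Double.POSITIVE_INFINITY : cost;
    }
}
```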
enhancement
- refactor computing output expressions on the root fragment in the Nereids planner
- refactor aggregate plan translator
- refactor aggregate disassemble rule
- slightly refactor sort plan translator
- add an exchange node on top of the plan node tree if it is needed
- slightly refactor PhysicalPlanTranslator#translatePlan
fix
- slotDescriptor should not be reused between TupleDescriptors
- expression's nullable now works fine
- remove quotes when parsing string literals
- set resolvedTupleExprs in SortNode to control output
- remove the extra column in sortTupleSlotExprs in SortInfo
known issues
- aggregate function must be the top expression in the output expressions (needs a project in ExecNode in BE)
- first-phase aggregate cannot be converted to stream mode.
- OlapScanNode does not set the data partition
- Sort cannot process expressions like 'order by a + 1'; SortInfo is generated in a tricky way and should be refactored when we want to support 'order by a + 1'
- column pruning does not work as expected
Physical sort:
1. Build SortInfo.
There are two types of SlotRef: one is generated by the previous node (collectively called old); the other is newly generated by the sort node (collectively called new).
When filling the SortInfo-related data structures:
a. ordering uses the new SlotRefs.
b. sortTupleSlotExprs uses the old SlotRefs.
2. Create SortNode.
3. Create mergeFragment.
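Step 1 in schematic form (SlotRef and the SortInfo fields here are assumptions, not the real planner classes):
```java
import java.util.ArrayList;
import java.util.List;

// Schematic only; SlotRef and the SortInfo fields are simplified stand-ins.
record SlotRef(String tuple, String column) {}

final class SortInfoBuilder {
    // "new" slots: owned by the sort node's own tuple, used for ordering.
    final List<SlotRef> orderings = new ArrayList<>();
    // "old" slots: produced by the previous node, used to materialize the sort tuple.
    final List<SlotRef> sortTupleSlotExprs = new ArrayList<>();

    void addOrderKey(SlotRef oldSlot) {
        // The sort node re-materializes each order key into its own tuple.
        SlotRef newSlot = new SlotRef("sortTuple", oldSlot.column());
        orderings.add(newSlot);          // a. ordering uses the new SlotRef
        sortTupleSlotExprs.add(oldSlot); // b. sortTupleSlotExprs uses the old SlotRef
    }
}
```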
TODO:
1. Currently, columns that do not exist in the select list but exist in order by cannot be parsed.
e.g.: select key from table order by value;
2. Parsing a combination of a Literal and a SlotRef in the select list is problematic.
e.g.: select key, (10 - value) from table;
For example:
select * from t1 inner join t2 on t1.a = t2.b inner join t3 on t3.c = t2.b;
If t3 is a large table, it will be placed first after reorderTable,
and re-analysis will then fail because t2.b does not exist yet.
Add code for collect_list and collect_set and update the regression output, since the output format for ARRAY(string) had already changed.
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
This PR supports rowset-level data upload on the BE side, so that a tablet can contain both cold data and hot data,
and it is no longer necessary to prohibit loading new data into cooled tablets.
Each rowset is bound to a `FileSystem`, so that the storage layer can read and write rowsets without
perceiving the underlying filesystem.
The abstracted `RemoteFileSystem` can try local caching strategies with different granularity,
instead of caching segment files as before.
To avoid conflicts with the code in be/src/io, we temporarily put the file system related code in the be/src/io/fs directory.
In the future, `FileReader`s and `FileWriter`s should be unified.
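The layering described above, sketched in Java for brevity (the real implementation is C++ under be/src/io/fs, and all names here are assumptions):
```java
// Sketch of the layering only; the real implementation is C++ in be/src/io/fs.
interface FileSystem {
    byte[] read(String path, long offset, int length);
    void write(String path, byte[] data);
}

// A remote file system can layer caching strategies of different granularity
// (block, file, rowset) in front of the remote store, instead of always
// caching whole segment files as before.
abstract class RemoteFileSystem implements FileSystem {
    protected final FileSystem cache; // local cache; granularity decided by subclass

    protected RemoteFileSystem(FileSystem cache) {
        this.cache = cache;
    }
}

// Each rowset is bound to a FileSystem, so readers and writers in the
// storage layer never need to know where the bytes actually live.
final class Rowset {
    private final FileSystem fs;

    Rowset(FileSystem fs) {
        this.fs = fs;
    }

    byte[] readSegment(String segmentPath, long offset, int length) {
        return fs.read(segmentPath, offset, length);
    }
}
```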
Fix https://github.com/apache/doris/pull/10521: multi-catalog queries failed for two reasons:
1. The `SelectStmt` does not get the correct catalog.
2. External tables should have three-level aliases.
Disable querying external views.
Support show create table for external tables & views.
SortInfo lives in SortNode, but some fields in SortNode replicate what SortInfo already holds.
Issue Number: close#10616
Remove the redundant fields in `TSortNode` that already exist in `TSortInfo`.
[API-BREAK] This changes the `Thrift` file.
Refactor Context in Cascades:
use two contexts in the Cascades framework.
JobContext is used by each job and contains these attributes:
- reference to PlannerContext
- current cost upper bound
- current required physical properties
PlannerContext holds global info for the query planner and contains these attributes:
- reference to Memo
- reference to connectContext
- reference to the rule set used for planning
- job pool to maintain unexecuted jobs
- job scheduler to schedule unexecuted jobs
- current job context for next job to be executed
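A skeletal rendering of the two contexts (every referenced type below is an empty stand-in, not the real class):
```java
import java.util.ArrayDeque;
import java.util.Deque;

// Skeleton only; all referenced types are empty stand-ins, not real Doris code.
interface Memo {}
interface ConnectContext {}
interface RuleSet {}
interface Job {}
interface JobScheduler {}
interface PhysicalProperties {}

final class PlannerContext {
    Memo memo;                                     // reference to the memo
    ConnectContext connectContext;                 // session-level context
    RuleSet ruleSet;                               // rules available for planning
    final Deque<Job> jobPool = new ArrayDeque<>(); // unexecuted jobs
    JobScheduler jobScheduler;                     // schedules jobs from the pool
    JobContext currentJobContext;                  // context for the next job to run
}

final class JobContext {
    final PlannerContext plannerContext;   // back-reference to the global context
    double costUpperBound;                 // current cost upper bound for pruning
    PhysicalProperties requiredProperties; // physical properties the job must deliver

    JobContext(PlannerContext ctx, double upperBound, PhysicalProperties required) {
        this.plannerContext = ctx;
        this.costUpperBound = upperBound;
        this.requiredProperties = required;
    }
}
```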
During the query planning phase, the binary predicate rewrite optimization may overflow when converting a DecimalLiteral to an integer, producing wrong results for predicates like "id = 12345678901.0" (see the issue for detailed examples).
This PR fixes the possible overflow and optimizes the case where the DecimalLiteral is outside the column type's value range.
Issue Number: close#10544
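A hedged illustration of this class of overflow (the method names are mine, not the planner's actual rewrite code):
```java
import java.math.BigDecimal;

// Illustration of the bug class, not the planner's actual rewrite code.
final class DecimalNarrowing {
    // Buggy: BigDecimal.intValue() silently truncates/wraps out-of-range values,
    // so "id = 12345678901.0" could become "id = <wrapped int>" and match rows.
    static int narrowUnchecked(BigDecimal literal) {
        return literal.intValue();
    }

    // Safer: reject values outside the column type's range up front; the
    // predicate can then be folded (e.g. to FALSE) instead of comparing garbage.
    static Integer narrowChecked(BigDecimal literal) {
        BigDecimal stripped = literal.stripTrailingZeros();
        if (stripped.scale() > 0) {
            return null; // fractional part: an integer column can never equal it
        }
        try {
            return stripped.intValueExact(); // throws on overflow instead of wrapping
        } catch (ArithmeticException e) {
            return null; // out of INT range
        }
    }

    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("12345678901.0");
        System.out.println(narrowUnchecked(v)); // wrapped, meaningless int value
        System.out.println(narrowChecked(v));   // null: cannot match any INT
    }
}
```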
`ExprBuilder` uses a stack to build the expr.
The input order is col, value, and the output order is value, col, but the `>=` is not reversed.
Example:
`col >= 1` => `1 >= col`
In this case, it is better to use a queue to keep the input order.
Also, `CompoundPredicate(OR)` has a problem: it should be `alwaysTrue` whenever it is not on a partition key or it is not a supported op.
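A minimal demonstration of the ordering problem (simplified shapes, not the real `ExprBuilder`):
```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified illustration of the ordering bug, not the real ExprBuilder.
final class ExprBuilderDemo {
    public static void main(String[] args) {
        // Input order: col first, then value.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("col");
        stack.push("1");
        // Stack (LIFO) pops in reverse: "1 >= col" instead of "col >= 1".
        System.out.println(stack.pop() + " >= " + stack.pop());

        // Queue (FIFO) preserves the input order: "col >= 1".
        Deque<String> queue = new ArrayDeque<>();
        queue.offer("col");
        queue.offer("1");
        System.out.println(queue.poll() + " >= " + queue.poll());
    }
}
```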
enhancement
- add functions `finalizeForNereids` and `finalizeImplForNereids` to stale expressions to generate some attributes used in BE.
- remove the unnecessary parameter `Analyzer` from the function `getBuiltinFunction`
- swap the join condition if its left-hand expression refers to the right table
- change join physical implementation to broadcast hash join
- add push predicate rule into planner
fix
- swap the join children's visit order to ensure the last fragment is the root
- avoid visiting the join's left child twice
known issues
- expression computation generates a wrong answer when an expression includes arithmetic with two literal children.
Add a rule to disassemble the logical aggregate node. This is necessary since our execution framework is distributed and aggregation always executes in two steps: first aggregate locally, then merge the partial results.
Add some fields to the logical aggregate to record whether a logical aggregate operator has been disassembled and which aggregate phase it belongs to, and add the logic to map each new aggregate function to its stale definition to get the function's intermediate type.
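A hedged sketch of the two-phase split (all class shapes and the merge-function naming below are assumptions, not the actual rule):
```java
import java.util.List;

// Sketch of disassembling one logical aggregate into local + global phases;
// all class shapes here are simplified stand-ins, not the Doris rule itself.
enum AggPhase { LOCAL, GLOBAL }

record AggregateNode(AggPhase phase,
                     List<String> groupBy,
                     List<String> aggFunctions,
                     Object child) {}

final class AggregateDisassemble {
    // sum(x) -> LOCAL sum(x) producing a partial result, then GLOBAL merge of
    // the partials. The merge function is looked up against the stale
    // definition to obtain the intermediate type of the partial result.
    static AggregateNode disassemble(AggregateNode agg) {
        AggregateNode local = new AggregateNode(
                AggPhase.LOCAL, agg.groupBy(), agg.aggFunctions(), agg.child());
        List<String> mergeFns = agg.aggFunctions().stream()
                .map(f -> "merge_" + f) // stand-in for the merge variant
                .toList();
        return new AggregateNode(AggPhase.GLOBAL, agg.groupBy(), mergeFns, local);
    }
}
```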