For routine load (Kafka load), users can produce all the data for different tables into a single topic, and Doris will dispatch each record to the corresponding table (see the sketch below).
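A minimal sketch of such a job (an assumption: the multi-table form omits the `ON <table>` clause and each Kafka record carries its destination table name; broker, topic, and database names are hypothetical):
```
CREATE ROUTINE LOAD example_db.multi_table_job
FROM KAFKA (
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "multi_table_topic"
);
```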
Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
1. Move PushdownFilterThroughCTEAnchor and PushdownProjectThroughCTEAnchor into the PUSH_DOWN_FILTERS rule set.
2. Move PushdownFilterThroughProject before MergeProjectPostProcessor.
1. Skip the forbidUnknownColStats check for invisible columns.
2. Use columnStatistics.isUnknown to tell whether the stats are unknown.
3. Skip the unknown-stats check for the internal schema.
Add a rule for checking join nodes in `analysis/CheckAnalysis.java`. When we check a join node, we should check its ON clause: if it contains a subquery expression, we throw an exception.
Before this PR
```
mysql> select a.k1 from baseall a join test b on b.k2 in (select 49);
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: nul
```
After this PR
```
mysql> select a.k1 from baseall a join test b on b.k2 in (select 49);
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: Not support OnClause contain Subquery, expr:k2 IN (INSUBQUERY) (LogicalOneRowRelation ( projects=[49 AS `49`#28], buildUnionNode=true ))
```
Support user-defined variables.
After this PR, we can use `set @a = xx` to define a user variable and use it in a query like `select @a`.
The changes in this PR:
1. Support the grammar for `set user variable` in the parser.
2. Add `userVars` to `VariableMgr` to store the user-defined variables.
3. For `set @a = xx`, store the variable name and its value in `userVars` in `VariableMgr`.
4. For `select @a`, look up the value for the variable name in `userVars`.
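A minimal usage example, following the description above (the expression usage on the last line is an assumption):
```
set @a = 100;
select @a;      -- returns 100
select @a + 1;  -- assuming user variables can appear inside expressions
```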
* [fix](load) in strict mode, return an error for load and insert if data type conversion fails
Revert "[fix](MySQL) the way Doris handles boolean type is consistent with MySQL (#19416)"
This reverts commit 68eb420cabe5b26b09d6d4a2724ae12699bdee87.
Since it changed other behaviors: e.g., in strict mode, `insert into t_int values ("a")` would insert 0 into the table, but it should return an error instead (see the sketch after this list).
* fix be ut
* fix regression tests
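A sketch of the expected strict-mode behavior after this revert (`t_int` is a hypothetical table with a single INT column; `enable_insert_strict` is the session variable that enables strict mode):
```
SET enable_insert_strict = true;
-- "a" cannot be converted to INT; the statement should now return an error
-- instead of silently inserting 0
INSERT INTO t_int VALUES ("a");
```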
1. Update the signature order of all date-related functions:
1.1. If the return value needs to be computed with time info, put args with datetimev2 at the top of the list, followed by datev2, datetime, and date.
1.2. If the return value needs to be computed with only date info, put args with datev2 at the top of the list, followed by datetimev2, date, and datetime.
2. Prefer datev2 if we must cast date to datev2 or datetime/datetimev2.
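An illustrative consequence of rule 2 (a sketch; table and column names are hypothetical): when a comparison mixes the legacy date type with datev2, the date side should be cast up to datev2 rather than the other way around.
```
-- d_old is DATE, d_new is DATEV2; the planner should cast d_old to datev2
SELECT * FROM t WHERE d_old = d_new;
```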
When querying Trino through the JDBC catalog, a WHERE filter like k1 >= '2022-01-01' is in an incorrect format. In Trino, the correct format is k1 >= date '2022-01-01' or k1 >= timestamp '2022-01-01 00:00:00'. Therefore, date strings in the WHERE condition need to be converted to the date or timestamp literal format supported by Trino.
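An illustrative before/after of the pushed-down predicate (`tbl` and `k1` are hypothetical):
```
-- Before: bare string literal, which Trino rejects
SELECT k1 FROM tbl WHERE k1 >= '2022-01-01';
-- After: typed literals, as Trino expects
SELECT k1 FROM tbl WHERE k1 >= date '2022-01-01';
SELECT k1 FROM tbl WHERE k1 >= timestamp '2022-01-01 00:00:00';
```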
This commit supports a function that returns a field column from a named struct column. Since the function can return any type, this commit also supports ANY_STRUCT_TYPE and ANY_ELEMENT_TYPE.
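A usage sketch, assuming the function described here is `struct_element` (the commit text does not name it):
```
-- access a struct field by name or by 1-based position
SELECT struct_element(named_struct('a', 1, 'b', 'x'), 'b');  -- 'x'
SELECT struct_element(named_struct('a', 1, 'b', 'x'), 1);    -- 1
```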
There should be 2 kinds of ScanNode:
  OlapScanNode
  ExternalScanNode
The Backends used for ExternalScanNode should be controlled by FederationBackendPolicy. But currently only FileScanNode is controlled by FederationBackendPolicy; other scan nodes such as MysqlScanNode and JdbcScanNode will use mix Backends even if we enable and prefer to use compute Backends.
In this PR, I modified the hierarchy of ExternalScanNode; the new hierarchy is:
ScanNode
  OlapScanNode
  SchemaScanNode
  ExternalScanNode
    MetadataScanNode
    DataGenScanNode
    EsScanNode
    OdbcScanNode
    MysqlScanNode
    JdbcScanNode
    FileScanNode
      FileLoadScanNode
      FileQueryScanNode
        MaxComputeScanNode
        IcebergScanNode
        TVFScanNode
        HiveScanNode
          HudiScanNode
Previously, BackendPolicy was a member of FileScanNode; now it is moved into ExternalScanNode, so that every ExternalScanNode subtype can use BackendPolicy to choose compute Backends to execute the query. All ExternalScanNodes should implement the abstract method createScanRangeLocations(). For scan nodes like JdbcScanNode/MysqlScanNode, the scan range locations are selected randomly from compute nodes (if preferred). As for compute node selection: if all scan nodes are external scan nodes and prefer_compute_node_for_external_table is set to true, only compute nodes will be selected as BEs for this query.
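For reference, the FE config named above can be toggled at runtime like any other frontend config (a sketch; the standard ADMIN SET FRONTEND CONFIG statement is assumed here):
```
ADMIN SET FRONTEND CONFIG ("prefer_compute_node_for_external_table" = "true");
```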
1. Derive analytic node stats; add support for rank().
2. Update filter estimation stats derivation: update the row count of the filter column.
3. Use ColumnStatistics.original to replace ColumnStatistics.originalNdv, where ColumnStatistics.original is the column statistics obtained from TableScan.
TPC-DS query 70 on tpcds_sf100 improved from 23 sec to 2 sec.
This PR has no performance regression on other TPC-DS and TPC-H queries.
Minidump tries to collect as much information as possible, but when the switch is off, these collection methods should not be called after refactor PR #20049. Other places that gained extra work when the Minidump feature was added have also been checked.
This PR fixes the following two problems.
Problem 1: altering a column comment makes adding a dynamic partition fail (issue #10811). Repro (a SQL sketch of steps 3 and 4 follows below):
1. Create a table with a dynamic partition policy;
2. Restart FE;
3. Alter the distribution column's comment;
4. Alter dynamic_partition.end to trigger adding a new partition via the dynamic partition scheduler.
Then we get the following error log, and the new partition fails to be created:
```
dynamic add partition failed: errCode = 2, detailMessage = Cannot assign hash distribution with different distribution cols. default is: [id int(11) NULL COMMENT 'new_comment_of_id'], db: default_cluster:example_db, table: test_2
```
Problem 2: renaming a distribution column makes inserts into old partitions fail (#20405). The key point of the repro steps is restarting FE.
It seems all versions are affected, including master, lts-1.1, and so on.
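The sketch of steps 3 and 4 (table/column names follow the error log above; the dynamic_partition.end value is arbitrary, and the exact ALTER syntax may vary by Doris version):
```
-- step 3: alter the distribution column's comment
ALTER TABLE example_db.test_2 MODIFY COLUMN id COMMENT 'new_comment_of_id';
-- step 4: bump dynamic_partition.end so the scheduler creates a new partition
ALTER TABLE example_db.test_2 SET ("dynamic_partition.end" = "4");
```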
Currently, sql-block-rule can only be applied to query statements, while it is also useful for other statements like INSERT / DELETE / ALTER / DROP, etc. This PR removes the limitation and expands its usage scenarios (example below).
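For example, a rule like the following (rule and table names are hypothetical; the "sql" property is matched as a regular expression) can now block a DROP statement, not just queries:
```
CREATE SQL_BLOCK_RULE block_drop_important
PROPERTIES (
    "sql" = "DROP TABLE important_tbl",
    "global" = "true",
    "enable" = "true"
);
```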
1. Implement write/read for AnalysisManager.
2. Previously, if a database or table had any column with a complex type, the ANALYZE statement would fail directly. This PR enables ignoring complex-type columns and analyzing the rest.
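After this change, something like the following should succeed by skipping the complex-type column instead of failing outright (a sketch; the table name and schema are hypothetical):
```
-- t has columns (k1 INT, s STRUCT<a:INT>); only k1 gets analyzed
ANALYZE TABLE t;
```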
In the legacy planner, when we create a new ExchangeNode, it inherits its child's limit and offset. But in Nereids we should not do this, because if we need to set a limit or offset, we set it manually. In this PR, we use a new ctor of ExchangeNode to ensure limit and offset are not set unexpectedly.