Currently, ExprId in Nereids is generated by a global generator shared by all statements. This causes three problems:
1. ExprId could go out of bounds
2. it is hard to debug
3. a bitset cannot be used to represent an ExprId set
This PR solves these problems by creating a new ID generator for each statement, so after this PR ExprId always starts from 0 for each statement.
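A minimal sketch of the idea (illustrative classes only, not the actual Nereids implementation): each statement context owns its own generator, so IDs stay small and per-statement.
```java
// Illustrative sketch only: a per-statement ID generator held by the statement
// context, so ExprId always starts at 0 and fits easily into a BitSet.
import java.util.BitSet;

class ExprId {
    final int id;
    ExprId(int id) { this.id = id; }
}

class IdGenerator {
    private int next = 0;
    ExprId getNextId() { return new ExprId(next++); }
}

class StatementContext {
    // one generator per statement instead of a single global static generator
    private final IdGenerator exprIdGenerator = new IdGenerator();
    ExprId newExprId() { return exprIdGenerator.getNextId(); }
}

class Demo {
    public static void main(String[] args) {
        StatementContext stmt1 = new StatementContext();
        StatementContext stmt2 = new StatementContext();
        BitSet exprIdSet = new BitSet();
        exprIdSet.set(stmt1.newExprId().id); // 0
        exprIdSet.set(stmt1.newExprId().id); // 1
        System.out.println(stmt2.newExprId().id); // 0 again: IDs reset per statement
        System.out.println(exprIdSet);            // {0, 1}
    }
}
```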
TODO:
1. refactor all places that create a new StatementContext in test code to ensure the logic is the same as in the main code.
1. remove FE config `enable_array_type`
2. Limit the nesting depth of arrays on the FE side.
3. Fix a bug where, when loading an array from Parquet, the decimal type was treated as bigint.
4. Fix loading arrays from CSV (vec engine), handling null and "null".
5. Change the CSV array loading behavior: if the array string format in CSV is invalid, it is converted to null (see the sketch after this list).
6. Remove `check_array_format()`, because its logic is wrong and meaningless.
7. Add stream load CSV test cases and more Parquet broker load tests.
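A rough, hypothetical sketch of the invalid-format-to-null rule from items 4 and 5 (the real parsing lives in the BE's C++ scanner and is more involved; the class and method names here are made up):
```java
// Sketch: a CSV cell for an array<int> column. A bare "null" cell becomes NULL,
// and any cell that is not a well-formed bracketed list also becomes NULL
// instead of failing the load.
import java.util.ArrayList;
import java.util.List;

class CsvArrayCell {
    /** Returns the parsed values, or null when the cell should load as NULL. */
    static List<Integer> parse(String cell) {
        if (cell == null || cell.equalsIgnoreCase("null")) {
            return null;                       // literal null cell -> NULL
        }
        String s = cell.trim();
        if (!s.startsWith("[") || !s.endsWith("]")) {
            return null;                       // invalid array format -> NULL, not an error
        }
        String body = s.substring(1, s.length() - 1).trim();
        List<Integer> result = new ArrayList<>();
        if (body.isEmpty()) {
            return result;                     // "[]" -> empty array
        }
        try {
            for (String item : body.split(",")) {
                result.add(Integer.parseInt(item.trim()));
            }
        } catch (NumberFormatException e) {
            return null;                       // unparsable element -> NULL
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("[1, 2, 3]")); // [1, 2, 3]
        System.out.println(parse("null"));      // null (NULL cell)
        System.out.println(parse("1,2,3"));     // null (invalid format -> NULL)
    }
}
```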
1. Enable the varchar/char types to set min/max values: take the first 8 characters as a long and convert it to a double (see the sketch below).
2. Fix a bug when setting min/max values for date and datev2.
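A hedged sketch of the encoding described in item 1, assuming the first 8 bytes are packed big-endian so numeric order roughly follows lexicographic order (the helper name is made up; the real implementation may differ):
```java
// Sketch: encode the first 8 characters of a string as a 64-bit value and widen
// it to double, so varchar/char min/max can reuse the numeric stats path.
// Precision loss beyond the double mantissa is acceptable for min/max estimates.
import java.nio.charset.StandardCharsets;

class StringStatsEncoder {
    static double encodePrefix(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        long v = 0;
        for (int i = 0; i < 8; i++) {
            int b = i < bytes.length ? (bytes[i] & 0xFF) : 0; // pad short strings with 0
            v = (v << 8) | b;
        }
        // interpret the packed bytes as unsigned before widening to double
        return v >= 0 ? (double) v : (double) v + 0x1p64;
    }

    public static void main(String[] args) {
        // lexicographic order of the 8-byte prefixes is preserved in the doubles
        System.out.println(encodePrefix("apple") < encodePrefix("banana")); // true
    }
}
```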
1. Refactor file reader creation in FileFactory for simplicity.
Previously, FileFactory had too many `create_file_reader` interfaces.
They are now unified into two categories: the interface used by the previous BrokerScanNode,
and the interface used by the new FileScanNode.
The creation methods for readers that read from a `StreamLoadPipe` are also separated from those for readers that read files.
2. Modify the StreamLoadPlanner on the FE side to support using ExternalFileScanNode.
3. For the generic reader, the file reader is now created inside the reader rather than passed in from the outside.
4. Add test cases for CSV stream load; the behavior is the same as the old broker scanner.
Doris does not support explicitly casting NULL_TYPE to any type:
```
mysql> select cast(NULL as int);
ERROR 1105 (HY000): errCode = 2, detailMessage = Invalid type cast of NULL from NULL_TYPE to INT
```
So we should also forbid users from casting NULL_TYPE to the ARRAY type.
This commit produces the following behavior:
```
mysql> select cast(NULL as array<int>);
ERROR 1105 (HY000): errCode = 2, detailMessage = Invalid type cast of NULL from NULL_TYPE to ARRAY<INT(11)>
```
We should prevent the insert when a value overflows.
1. create table:
`CREATE TABLE test_array_load_test_array_int_insert_db.test_array_load_test_array_int_insert_tb ( k1 int NULL, k2 array<int> NULL ) DUPLICATE KEY(k1) DISTRIBUTED BY HASH(k1) BUCKETS 5`
2. try to insert data less than INT_MIN.
`insert into test_array_load_test_array_int_insert_tb values (1005, [-2147483649])`
Before this PR, the insert would succeed, but the value was not correct.
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
* [fix](agg) reset the content of grouping exprs instead of replacing it with the original exprs
* keep the old behavior if the grouping type is not GROUP_BY
Manually drop statistics for tables or partitions. A table or partition can be specified; if neither is specified, all statistics under the current database will be deleted.
syntax:
```SQL
DROP STATS [tableName [PARTITIONS(partitionNames)]];
-- e.g.
DROP STATS; -- drop all table statistics under the current database
DROP STATS t0; -- drop t0 statistics
DROP STATS t1 PARTITIONS(p1); -- drop partition p1 statistics of t1
```
### Motivation
TABLESAMPLE allows you to limit the number of rows from a table in the FROM clause.
It is used for data exploration, quick verification of SQL correctness, and table statistics collection.
### Grammar
```
[TABLET tids] TABLESAMPLE n [ROWS | PERCENT] [REPEATABLE seek]
```
This limits the number of rows read from the table in the FROM clause:
a number of Tablets is selected pseudo-randomly from the table according to the specified row count or percentage,
and the seed specified in REPEATABLE causes the same samples to be returned again.
In addition, Tablet IDs can also be specified manually.
Note that this can only be used for OLAP tables.
### Example
Q1:
```
SELECT * FROM t1 TABLET(10001,10002) limit 1000;
```
explain:
```
partitions=1/1, tablets=2/12, tabletList=10001,10002
```
Selects the Tablets with the specified Tablet IDs from t1.
Q2:
```
SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 1 limit 1000;
```
explain:
```
partitions=1/1, tablets=3/12, tabletList=10001,10002,10003
```
Q3:
```
SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 2 limit 1000;
```
explain:
```
partitions=1/1, tablets=3/12, tabletList=10002,10003,10004
```
Pseudo-randomly samples 1000 rows from t1.
Note that the Tablets are actually selected according to the table's statistics,
and the total number of rows in the selected Tablets may be greater than 1000,
so if you want to return exactly 1000 rows, you need to add LIMIT.
### Design
First, determine how many rows to sample from each partition according to the number of partitions.
Then determine the number of Tablets to be selected for each partition according to the average number of rows of Tablet,
If seek is not specified, the specified number of Tablets are pseudo-randomly selected from each partition.
If seek is specified, it will be selected sequentially from the seek tablet of the partition.
And add the manually specified Tablet id to the selected Tablet.
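A simplified sketch of that per-partition selection (stand-in types and names, not the planner's actual code):
```java
// Sketch: pick tablets in one partition to cover roughly sampleRows rows.
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

class TabletSampler {
    /** rowCountsPerTablet: row count of each tablet in one partition. */
    static Set<Integer> sample(List<Long> rowCountsPerTablet, long sampleRows,
                               Long seek, Set<Integer> manuallySpecified) {
        int tabletNum = rowCountsPerTablet.size();
        long totalRows = rowCountsPerTablet.stream().mapToLong(Long::longValue).sum();
        long avgRowsPerTablet = Math.max(1, totalRows / Math.max(1, tabletNum));
        // how many tablets are needed to reach the requested row count (by average size)
        int needed = (int) Math.min(tabletNum,
                (sampleRows + avgRowsPerTablet - 1) / avgRowsPerTablet);

        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < tabletNum; i++) {
            order.add(i);
        }
        if (seek == null) {
            Collections.shuffle(order, new Random());             // pseudo-random pick
        } else {
            Collections.rotate(order, (int) (-seek % tabletNum)); // REPEATABLE: start from seek
        }

        Set<Integer> selected = new LinkedHashSet<>(manuallySpecified); // keep TABLET(...) ids
        for (int i = 0; i < needed; i++) {
            selected.add(order.get(i));
        }
        return selected;
    }

    public static void main(String[] args) {
        List<Long> rows = List.of(400_000L, 350_000L, 300_000L, 450_000L);
        System.out.println(sample(rows, 1_000_000L, 1L, Set.of())); // prints [1, 2, 3]
    }
}
```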
1. Add NereidsException to wrap any exception thrown by Nereids.
2. When we catch a NereidsException and the switch 'enableFallbackToOriginalPlanner' is on, we use the legacy planner to plan again.
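Roughly, the fallback behaves like this sketch (names are simplified placeholders, not the exact FE code):
```java
// Sketch: wrap Nereids errors in NereidsException and fall back to the legacy
// planner when the session switch allows it.
class NereidsException extends RuntimeException {
    NereidsException(String message, Throwable cause) { super(message, cause); }
}

class Planner {
    boolean enableFallbackToOriginalPlanner = true;

    Object plan(String sql) {
        try {
            return planWithNereids(sql);
        } catch (NereidsException e) {
            if (!enableFallbackToOriginalPlanner) {
                throw e;                 // surface the Nereids error directly
            }
            return planWithLegacy(sql);  // plan again with the legacy planner
        }
    }

    Object planWithNereids(String sql) {
        try {
            // ... real Nereids analysis/rewrite/optimization would happen here ...
            throw new UnsupportedOperationException("not supported yet: " + sql);
        } catch (Exception e) {
            // any exception thrown inside Nereids is wrapped so callers see one type
            throw new NereidsException("Nereids failed to plan: " + sql, e);
        }
    }

    Object planWithLegacy(String sql) {
        return "legacy plan for: " + sql;
    }
}
```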
#12716 removed the memory limit for a single load task; in this PR I propose to remove the session variable load_mem_limit to avoid confusion.
For compatibility, load_mem_limit in thrift is not removed; its value is set equal to exec_mem_limit in the FE.
Disable the max_dynamic_partition_num check when dynamic partitioning is being disabled via ALTER TABLE tbl_name SET ("dynamic_partition.enable" = "false"). When max_dynamic_partition_num is raised and later lowered again, the actual number of dynamic partitions may be larger than max_dynamic_partition_num, and dynamic partitioning could not be disabled.
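The intended behavior is roughly the following sketch (the property key is real; everything else is illustrative):
```java
// Sketch: skip the max_dynamic_partition_num check when the ALTER TABLE is
// turning dynamic partitioning off, so an oversized table can still be disabled.
import java.util.Map;

class DynamicPartitionChecker {
    static void check(Map<String, String> properties, int expectedDynamicPartitionNum,
                      int maxDynamicPartitionNum) {
        boolean enable = Boolean.parseBoolean(
                properties.getOrDefault("dynamic_partition.enable", "true"));
        if (!enable) {
            return; // disabling dynamic partition: do not enforce the limit
        }
        if (expectedDynamicPartitionNum > maxDynamicPartitionNum) {
            throw new IllegalArgumentException("dynamic partition num "
                    + expectedDynamicPartitionNum + " exceeds max_dynamic_partition_num "
                    + maxDynamicPartitionNum);
        }
    }
}
```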
Add a new restore property 'reserve_dynamic_partition_enable', which lets you
restore a table whose dynamic_partition_enable property has the same value
as before the backup. Before this commit, you always got a table with the property
'dynamic_partition_enable=false' when restoring.
* squash:
  - change the data type of metrics to double
  - add unit tests
  - add stats for some functions
  - add stats for arithmeticExpr
  - set max/min of ColumnStats to double
  - add stats for binaryExpr/compoundExpr in predicate
* Add LiteralExpr in ColumnStat just for user display only.
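A hedged sketch of the resulting shape of the column statistics (field names and the LiteralExpr stub are illustrative, not the exact FE classes):
```java
// Sketch: min/max are stored as doubles for estimation math, while the original
// literal is kept only so it can be shown to the user unchanged.
class LiteralExpr { // stand-in for the FE's LiteralExpr
    private final String value;
    LiteralExpr(String value) { this.value = value; }
    @Override public String toString() { return value; }
}

class ColumnStat {
    double ndv;
    double avgSizeByte;
    double numNulls;
    double minValue;     // numeric form used by stats derivation for expressions
    double maxValue;
    LiteralExpr minExpr; // original literal, for user display only
    LiteralExpr maxExpr;

    void setMin(double value, LiteralExpr display) {
        this.minValue = value;
        this.minExpr = display;
    }

    void setMax(double value, LiteralExpr display) {
        this.maxValue = value;
        this.maxExpr = display;
    }
}
```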
The keyword definition section of `sql_parser.cup` is unordered and messy:
1. It is almost unreadable
2. There are no rules for formatting it when we make a change to it
3. **It takes unnecessary effort to resolve conflicts caused by the unordered keywords**
We can apply some simple rules to format it:
1. Sort in lexicographical order
2. Break it into several "sections"; keywords in each section share the same prefix `KW_${first_letter}`
3. Every two sections are separated by an empty line containing only 4 white spaces
e.g.
```
terminal String
KW_A...
KW_B...
...
KW_Z...
```
Dump memo info and the physical plan to stdout and the log.
Set the `enable_nereids_trace` variable to true/false to enable/disable this dump.
The following is a fragment of the memo:
```
Group[GroupId#8]
GroupId#8(plan=PhysicalHashJoin ( type=INNER_JOIN, hashJoinCondition=[(r_regionkey#250 = n_regionkey#255)], otherJoinCondition=Optional.empty, stats=null )) children=[GroupId#6 GroupId#7 ] stats=(rows=25, isReduced=false, width=2)
GroupId#8(plan=PhysicalHashJoin ( type=INNER_JOIN, hashJoinCondition=[(r_regionkey#250 = n_regionkey#255)], otherJoinCondition=Optional.empty, stats=null )) children=[GroupId#7 GroupId#6 ] stats=(rows=25, isReduced=false, width=2)
```
The toThrift method will be called multiple times when sending data to different BEs, but the changes to resolvedTupleExprs should be done only once. This PR makes sure resolvedTupleExprs can only be changed once.
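A minimal sketch of the once-only guard, assuming a boolean flag inside the node (the real class and field names may differ):
```java
// Sketch: toThrift may run once per BE, so guard the one-time mutation of
// resolvedTupleExprs behind a flag to keep repeated calls idempotent.
import java.util.ArrayList;
import java.util.List;

class SinkNode {
    private final List<String> resolvedTupleExprs = new ArrayList<>();
    private boolean tupleExprsResolved = false;

    void toThrift(List<String> outputExprs) {
        if (!tupleExprsResolved) {
            // this rewrite must happen exactly once, no matter how many BEs we send to
            resolvedTupleExprs.addAll(outputExprs);
            tupleExprsResolved = true;
        }
        // ... serialize resolvedTupleExprs into the thrift structure for this BE ...
    }
}
```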