This PR updates the JSON load docs for importing data into an array column.
When JSON is used to import data into an array column, RapidJSON can cause precision problems,
so we update the JSON load docs to explain how to avoid these problems.
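As a hedged illustration of the precision issue (using Python's `json` module as an analogy, not Doris's actual RapidJSON code path), parsing a large integer as a double silently loses low-order bits, because a double has only 53 mantissa bits:

```python
import json

# A 61-bit integer cannot be represented exactly as a double.
big = 1234567890123456789

# Round-tripping through a double loses the low-order digits:
assert int(float(big)) == 1234567890123456768

# Python's json module keeps integers exact by default...
assert json.loads('[1234567890123456789]')[0] == big

# ...but forcing double parsing (mimicking how RapidJSON parses
# JSON numbers) loses precision:
lossy = json.loads('[1234567890123456789]', parse_int=float)[0]
assert int(lossy) != big
```

This is why the docs recommend representing large numbers in a way that avoids double parsing (for example, as strings).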
Issue Number: #7570
Co-authored-by: hucheng01 <hucheng01@baidu.com>
Doris does not support explicitly casting NULL_TYPE to any type:
```
mysql> select cast(NULL as int);
ERROR 1105 (HY000): errCode = 2, detailMessage = Invalid type cast of NULL from NULL_TYPE to INT
```
So we should also forbid users from casting NULL_TYPE to the ARRAY type.
After this commit, the cast fails with the following error:
```
mysql> select cast(NULL as array<int>);
ERROR 1105 (HY000): errCode = 2, detailMessage = Invalid type cast of NULL from NULL_TYPE to ARRAY<INT(11)>
```
We should reject inserts whose values overflow.
1. Create a table:
`CREATE TABLE test_array_load_test_array_int_insert_db.test_array_load_test_array_int_insert_tb ( k1 int NULL, k2 array<int> NULL ) DUPLICATE KEY(k1) DISTRIBUTED BY HASH(k1) BUCKETS 5`
2. Try inserting a value less than INT_MIN:
`insert into test_array_load_test_array_int_insert_tb values (1005, [-2147483649])`
Before this PR, the insert would succeed, but the stored value was not correct.
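As an illustration of the kind of range check needed (a minimal Python sketch, not the actual BE code; `check_int32_array` is a hypothetical helper):

```python
# Bounds of a signed 32-bit integer, matching the table's array<int> column.
INT32_MIN, INT32_MAX = -(2 ** 31), 2 ** 31 - 1

def check_int32_array(values):
    """Return True only if every element fits in a signed 32-bit int."""
    return all(INT32_MIN <= v <= INT32_MAX for v in values)

# INT_MIN itself is a valid value...
assert check_int32_array([1005, -2147483648])
# ...but one below INT_MIN must be rejected instead of silently wrapping.
assert not check_int32_array([-2147483649])
```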
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
disable page cache by default
disable chunk allocator by default
do not use the chunk allocator for the vectorized allocator by default
add a new config memory_linear_growth_threshold = 128 MB: memory is not rounded up with RoundUpToPowerOf2 if the allocation size is larger than this threshold. This config is applied to MemPool, ChunkAllocator, PodArray, and Arena.
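A minimal sketch of the sizing policy this config describes (illustrative Python, not Doris's actual allocator; the 4 KB page alignment for large allocations is an assumption):

```python
MEMORY_LINEAR_GROWTH_THRESHOLD = 128 * 1024 * 1024  # 128 MB, per the new config

def round_up_to_power_of_2(n):
    # Smallest power of two >= n, e.g. 1000 -> 1024.
    return 1 << (n - 1).bit_length()

def alloc_size(requested):
    if requested >= MEMORY_LINEAR_GROWTH_THRESHOLD:
        # Large allocations grow linearly instead of doubling; the
        # 4 KB alignment here is illustrative, not Doris's exact rule.
        page = 4096
        return (requested + page - 1) // page * page
    # Small allocations keep the RoundUpToPowerOf2 behavior.
    return round_up_to_power_of_2(requested)

assert alloc_size(1000) == 1024                      # rounded up to a power of 2
assert alloc_size(300 * 1024 * 1024) == 300 * 1024 * 1024  # linear, page-aligned
```

The point of the threshold is to avoid wasting up to 2x the requested memory on very large allocations, where rounding up to the next power of two is most costly.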
* [fix](agg) reset the content of the grouping exprs instead of replacing them with the original exprs
* keep old behavior if the grouping type is not GROUP_BY
RapidJSON's allocator does not release memory; this fix uses an allocator with a local buffer and calls Clear() to release memory allocated beyond the local buffer.
Manually drop statistics for tables or partitions. A table or partition can be specified; if neither is specified, all statistics under the current database are deleted.
syntax:
```SQL
DROP STATS [tableName [PARTITIONS(partitionNames)]];
-- e.g.
DROP STATS; -- drop all table statistics under the current database
DROP STATS t0; -- drop t0 statistics
DROP STATS t1 PARTITIONS(p1); -- drop partition p1 statistics of t1
```
* support local cache gc by disk usage
* support gc per disk
* refactor file cache size logic
* also consider unused file caches while GC by disk size
* change config file_cache_max_size_per_disk from GB to B
* bugfix
* update
* use two-stage locking to avoid holding locks during disk IO
* take the rdlock one cache at a time for dummy file cache GC
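The two-stage pattern can be sketched as follows (hypothetical Python, not the BE implementation; `FileCache`, the paths, and the size-based victim choice are illustrative only):

```python
import threading

class FileCache:
    def __init__(self, files):
        self.lock = threading.Lock()   # stand-in for a read-write lock
        self.files = dict(files)       # path -> size in bytes

def gc_by_disk_usage(caches, bytes_to_free):
    # Stage 1: lock each cache one at a time, only long enough to
    # snapshot GC candidates; no disk IO happens under a lock.
    candidates = []
    for cache in caches:
        with cache.lock:
            candidates.extend(cache.files.items())
    # Stage 2: pick victims and do the (slow) deletion with no lock held.
    freed, victims = 0, []
    for path, size in sorted(candidates, key=lambda kv: kv[1], reverse=True):
        if freed >= bytes_to_free:
            break
        victims.append(path)
        freed += size
    return victims, freed

caches = [FileCache({"/cache/a": 100, "/cache/b": 50}),
          FileCache({"/cache/c": 200})]
victims, freed = gc_by_disk_usage(caches, 250)
assert freed >= 250
```

Separating the snapshot from the deletion keeps lock hold times short even when disk IO is slow.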
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
### Motivation
TABLESAMPLE allows you to limit the number of rows from a table in the FROM clause.
It is useful for data exploration, quick verification of the correctness of SQL, and table statistics collection.
### Grammar
```
[TABLET tids] TABLESAMPLE n [ROWS | PERCENT] [REPEATABLE seek]
```
Limits the number of rows read from the table in the FROM clause:
a number of tablets is pseudo-randomly selected from the table according to the specified row count or percentage,
and specifying a seed in REPEATABLE returns the same selected sample again.
In addition, tablet IDs can also be specified manually with the TABLET clause.
Note that this can only be used on OLAP tables.
### Example
Q1:
```
SELECT * FROM t1 TABLET(10001,10002) limit 1000;
```
explain:
```
partitions=1/1, tablets=2/12, tabletList=10001,10002
```
Selects the specified tablet IDs of t1.
Q2:
```
SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 1 limit 1000;
```
explain:
```
partitions=1/1, tablets=3/12, tabletList=10001,10002,10003
```
Q3:
```
SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 2 limit 1000;
```
explain:
```
partitions=1/1, tablets=3/12, tabletList=10002,10003,10004
```
Q2 and Q3 pseudo-randomly sample 1000000 rows from t1.
Note that the tablets are actually selected according to the statistics of the table,
and the total number of rows in the selected tablets may be greater than the requested sample size,
so if you want to return exactly 1000 rows, you need to add a LIMIT.
### Design
First, determine how many rows to sample from each partition according to the number of partitions.
Then determine the number of tablets to select from each partition according to the average number of rows per tablet.
If seek is not specified, the required number of tablets is pseudo-randomly selected from each partition.
If seek is specified, tablets are selected sequentially starting from the seek-th tablet of the partition.
Finally, add any manually specified tablet IDs to the selected tablets.
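The selection steps above can be sketched as follows (illustrative Python, not Doris's actual FE code; the tablet IDs and row counts are hypothetical):

```python
import random

def sample_tablets(partitions, n_rows, seek=None):
    """partitions: list of (tablet_ids, avg_rows_per_tablet) pairs."""
    selected = []
    rows_per_partition = n_rows // len(partitions)
    for tablet_ids, rows_per_tablet in partitions:
        # Number of tablets needed = ceil(rows / avg rows per tablet).
        need = min(len(tablet_ids),
                   max(1, -(-rows_per_partition // rows_per_tablet)))
        if seek is None:
            # No seed: pseudo-random selection within the partition.
            selected += random.sample(tablet_ids, need)
        else:
            # REPEATABLE seek: select sequentially starting at `seek`.
            start = seek % len(tablet_ids)
            selected += [tablet_ids[(start + i) % len(tablet_ids)]
                         for i in range(need)]
    return selected

# One partition of 12 tablets averaging 400000 rows; sample 1000000 rows.
part = ([10001 + i for i in range(12)], 400000)
assert sample_tablets([part], 1000000, seek=1) == [10002, 10003, 10004]
```

With the same seek value the same tablets are returned every time, which is what makes REPEATABLE samples reproducible.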
1. add NereidsException to wrap any exception thrown by Nereids
2. when we catch a NereidsException and the switch 'enableFallbackToOriginalPlanner' is on, we use the Legacy Planner to plan the query again
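The fallback flow can be sketched like this (illustrative Python; the function names are hypothetical, only `NereidsException` and the switch come from this PR):

```python
class NereidsException(Exception):
    """Wraps any exception thrown by the Nereids planner."""

def plan_query(sql, nereids_plan, legacy_plan, enable_fallback):
    try:
        return nereids_plan(sql)
    except NereidsException:
        if not enable_fallback:
            raise  # fallback switch is off: surface the error
        return legacy_plan(sql)  # re-plan with the legacy planner

def broken_nereids(sql):
    raise NereidsException("unsupported feature")

# With the switch on, the query still succeeds via the legacy planner.
assert plan_query("SELECT 1", broken_nereids,
                  lambda sql: "legacy plan",
                  enable_fallback=True) == "legacy plan"
```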
[gperftools/tcmalloc](https://github.com/gperftools/gperftools) is outdated; there have been no new features for many years, only bug fixes. Doris currently uses it by default.
[google/tcmalloc](https://github.com/google/tcmalloc) has been very active recently, has many new features, and is expected to perform better than jemalloc, but there is currently no stable version.
Moreover, its compilation dependencies are complex and difficult to integrate, it is incompatible with gperftools/tcmalloc, and there are few reference documents.
[jemalloc](https://github.com/jemalloc/jemalloc) performs better than gperftools/tcmalloc under high concurrency and is mature and stable; we look forward to making it the default memory allocator.
Tested in Doris: #12496
`doris_be --version` contains personal info, like this:
```
doris-0.0.0-trunk RELEASE (build git://hk-dev01/mnt/disk2/ygl/code/github/apache-doris/be/../@8b7d928af26318f71098f1be2ab03ed83b1955fd)
Built on Wed, 12 Oct 2022 18:36:44 CST by ygl@hk-dev01
```
Since we generally do not need this info and the commit id is enough, I removed the redundant parts; the new result looks like this:
```
doris-0.0.0-trunk RELEASE (build git://hk-dev01@8b7d928)
Built on Thu, 13 Oct 2022 15:03:01 CST by hk-dev01
```
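The string transformation can be sketched like this (illustrative Python; the actual change lives in the build scripts, and `shorten_build_info` is a hypothetical name):

```python
import re

def shorten_build_info(build):
    # Keep only the host (the part before the first '/') and the
    # first 7 characters of the commit hash after '@'.
    m = re.match(r"git://([^/@]+).*@([0-9a-f]{7})[0-9a-f]*", build)
    host, short_commit = m.group(1), m.group(2)
    return f"git://{host}@{short_commit}"

full = ("git://hk-dev01/mnt/disk2/ygl/code/github/apache-doris/"
        "be/../@8b7d928af26318f71098f1be2ab03ed83b1955fd")
assert shorten_build_info(full) == "git://hk-dev01@8b7d928"
```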
#12716 removed the mem limit for a single load task; in this PR I propose removing the session variable load_mem_limit to avoid confusion.
For compatibility, load_mem_limit is not removed from thrift; its value is set equal to exec_mem_limit in the FE.
Disable the max_dynamic_partition_num check when disabling DynamicPartition via ALTER TABLE tbl_name SET ("dynamic_partition.enable" = "false"). When max_dynamic_partition_num is raised and later lowered, the actual number of dynamic partitions may be larger than max_dynamic_partition_num, which previously made it impossible to disable DynamicPartition.
Add a new restore property 'reserve_dynamic_partition_enable', which means the
restored table's dynamic_partition_enable property has the same value as before
the backup. Before this commit, a restored table always had the property
'dynamic_partition_enable=false' after restore.