Currently, regression test data is stored in sf1DataPath, which may be a local or a remote path.
For performance reasons, the community pipeline uses a local directory; however, that requires preparing the data on every machine,
which is error-prone. This PR transparently caches the data from S3 in the local directory, so only one data source needs to be configured.
Currently, we always disassemble aggregation into two stages: local and global. However, in some cases one-stage aggregation is enough. One-stage aggregation has two advantages:
1. It avoids an unnecessary exchange.
2. It gives a chance to do a colocate join on top of the aggregation.
This PR moves the AggregateDisassemble rule from the rewrite stage to the optimization stage, and chooses one-stage or two-stage aggregation according to cost.
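For illustration, a case where one-stage aggregation wins (hypothetical table; assuming `siteid` is the hash-distribution column, so the rows of each group are already colocated on one instance):

```SQL
-- Assuming test.table1 is distributed by HASH(siteid): all rows with the
-- same siteid sit on the same instance, so a single aggregation stage
-- produces the final result without any exchange.
SELECT siteid, SUM(pv)
FROM test.table1
GROUP BY siteid;
```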
In the policy changed by PR #12716, when the hard limit is reached, multiple threads may pick the same LoadChannel and call reduce_mem_usage on the same TabletsChannel. Although a lock and a condition variable prevent multiple threads from reducing memory usage concurrently, they can still perform the same reduce work on that channel repeatedly, one after another, even right after it has just been reduced.
Eliminate an outer join if there is a null-rejecting predicate on slots from the nullable (inner) side of the outer join.
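For example (hypothetical tables t1 and t2), a null-rejecting filter on the nullable side turns a left outer join into an inner join:

```SQL
-- t2 is the nullable side of the left outer join. The predicate t2.v > 0
-- rejects the null rows the outer join would produce, so the query
SELECT t1.id, t2.v
FROM t1 LEFT OUTER JOIN t2 ON t1.id = t2.id
WHERE t2.v > 0;
-- is equivalent to the cheaper inner join
SELECT t1.id, t2.v
FROM t1 INNER JOIN t2 ON t1.id = t2.id
WHERE t2.v > 0;
```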
TODO:
1. Use constant variables to handle it (we could handle more cases like nullSafeEqual ...).
2. Use constant folding to handle null values; this is more general and does not require writing long logical judgments.
3. Handle null-safe equals (<=>).
This is the second step for #12303.
The previous PR #12464 added the framework to select the rollup index for OLAP tables, but pre-aggregation was turned on by default.
This PR sets pre-aggregation properly for OLAP table scans.
The main steps are as follows:
1. Select the rollup index when an aggregate is present; this is handled by the `SelectRollupWithAggregate` rule. Expressions in aggregate functions, grouping expressions, and pushed-down predicates are used to check whether pre-aggregation should be turned off.
2. When selecting from an OLAP table without an aggregate plan, it is handled by `SelectRollupWithoutAggregate`.
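A sketch of both cases (hypothetical table and rollup; assuming test.table1 is an aggregate-model table with a rollup `r1 (siteid, pv)`):

```SQL
-- The aggregate and grouping expressions match the rollup r1 (siteid, pv),
-- so r1 can be selected with pre-aggregation turned on.
SELECT siteid, SUM(pv) FROM test.table1 GROUP BY siteid;

-- No aggregate above the scan: handled by SelectRollupWithoutAggregate,
-- and pre-aggregation is turned off to keep the result correct.
SELECT siteid, pv FROM test.table1;
```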
This pull request includes part of the implementation of statistics (https://github.com/apache/incubator-doris/issues/6370).
Execute SQL statements such as `ANALYZE`, `SHOW ANALYZE`, and `SHOW TABLE/COLUMN STATS ...` to collect and query statistics.
The following are the changes in this PR:
1. Added the necessary test cases for statistics.
2. Statistics optimization. To keep statistics valid, they can only be updated after a statistics task completes or when updated manually via SQL; the collected statistics must not be changed in any other way. This guarantees that the statistics are not distorted.
3. Adjusted some code and comments to fix checkstyle problems.
4. Removed some code that was previously added because statistics were not available.
5. Added a configuration option indicating whether to enable statistics. The current statistics may not be stable, so it is disabled by default (`enable_cbo_statistics=false`). Currently it is mainly used for CBO testing.
See PR #12766 for the syntax; some simple examples of the statistics statements:
```SQL
-- enable statistics
SET enable_cbo_statistics=true;
-- collect statistics for all tables in the current database
ANALYZE;
-- collect all column statistics for table1
ANALYZE test.table1;
-- collect statistics for siteid of table1
ANALYZE test.table1(siteid);
ANALYZE test.table1(pv, citycode);
-- collect statistics for partition of table1
ANALYZE test.table1 PARTITION(p202208);
ANALYZE test.table1 PARTITIONS(p202208, p202209);
-- display table statistics
SHOW TABLE STATS test.table1;
-- display partition statistics of table1
SHOW TABLE STATS test.table1 PARTITION(p202208);
-- display column statistics of table1
SHOW COLUMN STATS test.table1;
-- display column statistics of partition
SHOW COLUMN STATS test.table1 PARTITION(p202208);
-- display the details of the statistics jobs
SHOW ANALYZE;
SHOW ANALYZE idxxxx;
```
> Due to the dangers inherent to automatic processing of PRs, GitHub's standard pull_request workflow trigger by
> default prevents write permissions and secrets access to the target repository. However, in some scenarios such
> access is needed to properly process the PR. To this end the pull_request_target workflow trigger was introduced.
According to the article [Keeping your GitHub Actions and workflows secure](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/), the `pull_request` trigger condition in
`shellcheck.yml` cannot comment on the PR because the workflow lacks write permissions.
Although the `ShellCheck` workflow checks out the source, it does not build or test the source code, so I think it is
safe to change the trigger condition from `pull_request` to `pull_request_target`, which gives the workflow write
permissions to comment on the PR.
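A minimal sketch of the trigger change in `shellcheck.yml` (the rest of the workflow is unchanged and elided here):

```yaml
# Before: read-only token, so the job cannot post a comment on the PR.
# on: pull_request

# After: runs in the target repository's context with write permissions.
on: pull_request_target
```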
When the physical memory of the process reaches 90% of the mem limit, trigger the load channel mgr to flush data to disk.
The default value of mem_limit in be.conf is changed from 90% to 80%; stability is the priority.
Fix a deadlock between the arena_locks in BufferPool::BufferAllocator::ScavengeBuffers and _lock in DebugString.
To avoid inconsistent mem tracker consumption values across multiple queries/loads, and the gap between the virtual memory returned by alloc and the physical memory actually used by the process,
memory allocated in PODArray and MemPool is not recorded in the query/load mem tracker immediately, but is recorded gradually as the memory is used.
However, MemPool allocates memory from the chunk allocator; when a chunk is reused, it may already be backed by physical memory. This mechanism therefore causes the load channel memory statistics to be lower than the actual value.
In TPC-H q7, we have an expression like
(n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY') or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
This expression implies
(n1.n_name = 'FRANCE' or n1.n_name = 'GERMANY')
The implied expression is logically redundant, but it can be used to reduce the number of tuples output by scan(n1) if Nereids pushes it down.
This PR introduces a rule to extract such expressions; see the sketch after the notes below.
NOTE:
1. We only extract expressions on a single table.
2. If the extracted expression cannot be pushed down, e.g. when it is on the right table of a left outer join, we need another rule to remove the useless expressions.
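A sketch of the effect on a simplified q7 fragment (only the nation join is shown; the rest of the query is elided):

```SQL
-- Simplified from TPC-H q7: pair up the two nations in either order.
SELECT n1.n_name AS supp_nation, n2.n_name AS cust_nation
FROM nation n1 JOIN nation n2
  ON (n1.n_name = 'FRANCE' AND n2.n_name = 'GERMANY')
  OR (n1.n_name = 'GERMANY' AND n2.n_name = 'FRANCE');
-- The rule extracts the implied single-table predicates
--   n1.n_name = 'FRANCE' OR n1.n_name = 'GERMANY'
--   n2.n_name = 'FRANCE' OR n2.n_name = 'GERMANY'
-- which can then be pushed down to scan(n1) and scan(n2).
```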
* [fix](parquet) fix writing wrong data in parquet format.
Fix incorrect data conversion when writing tinyint and smallint data
to parquet files in the non-vectorized engine.