Cost estimation can be more accurate if partition statistics are available, but when running on big data (e.g., 1 TB) we cannot realistically import the data just to collect them.
So we want to extend this by allowing partition statistics to be injected.
Syntax:
```
ALTER TABLE table_name MODIFY COLUMN column_name SET STATS ('stat_name' = 'stat_value', ...)
[ PARTITION (partition_name) ];
```
Explanation:
- table_name: the table whose column statistics are to be modified. It can be in db_name.table_name form.
- column_name: the target column. It must be a column that exists in table_name. Statistics can only be modified for one column at a time.
- stat_name and stat_value: the name of the statistic and its value. Multiple statistics are comma-separated. Statistics that can be modified include row_count, ndv, num_nulls, min_value, max_value, and data_size.
- partition_name: the target partition. It must be a partition that exists in table_name. Multiple partitions are separated by commas.
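A minimal usage sketch following the syntax above (the table, column, partition, and stat values here are hypothetical):
```
ALTER TABLE db1.orders MODIFY COLUMN o_orderkey
SET STATS ('row_count' = '1500000', 'ndv' = '1500000', 'num_nulls' = '0',
           'min_value' = '1', 'max_value' = '1500000', 'data_size' = '12000000')
PARTITION (p202307);
```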
After we forbid some classes of aggregation candidate plans, all local-phase aggregations require DistributionSpecAny for their child.
So we can enable parallel scan for them.
Add an alias name for system variables to fix the issue that the result column name is the value of the system variable, like (before vs. after):
```
mysql> select @@character_set_client;
+--------+
| 'utf8' |
+--------+
| utf8 |
+--------+
==================================
mysql> select @@character_set_client;
+------------------------+
| @@character_set_client |
+------------------------+
| utf8 |
+------------------------+
```
1. Forbid all candidates that need a gather phase, except those that must do it.
2. Forbid doing a local aggregation after the reshuffle of a two-phase distinct aggregation.
3. Forbid one-phase aggregation after a reshuffle.
4. Forbid three- or four-phase distinct aggregation if any stage needs a reshuffle.
5. Forbid multi-distinct for a single-distinct aggregation if no reshuffle is needed.
Support sql_select_limit for both the original planner and Nereids. If the variable is enabled:
In the original planner, a limit is added to the top PlanNode.
In Nereids, a limit node is added to the top of the plan in the preprocess phase.
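A small sketch of the intended behavior, assuming a table named t:
```
SET sql_select_limit = 100;
-- the planner adds a limit of 100 to the top of the plan,
-- so at most 100 rows are returned for this query
SELECT * FROM t;
```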
Problem: when an expression whose return type is not boolean is used in a WHERE or HAVING clause, the analyzer checks the return type and throws an error. But some other databases allow this usage, e.g. select *** from *** where case when *** then 1 else 0 end;
Solution: cast the return type to boolean in the WHERE clause and HAVING clause.
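A hedged example of the now-accepted pattern (table t and column c are hypothetical); the non-boolean CASE result is cast to boolean:
```
SELECT * FROM t
WHERE CASE WHEN c > 0 THEN 1 ELSE 0 END;
```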
The file scan node has a special field `requiredSlot`, which is set depending on the `isMaterialized` info of each slot.
But the `isMaterialized` info can change during planning, so we must update the `requiredSlot`
in the `finalize` phase of the scan node; otherwise it may cause a BE crash due to mismatched slot info.
Older MySQL clients (< 5.7.28) try to connect to the server with TLS 1.1,
which is insecure and not supported by Doris FE, so the connection fails.
We disable SSL connection support on Doris FE by default to keep users' applications
unaffected. To enable SSL support explicitly, just add
the following to fe.conf:
```
enable_ssl = true
```
Previously, we fetched Hive partitions with the HMS getPartition API, which requires one API call per partition; performance is very poor when the partition count is large. This PR uses getPartitionsByNames to fetch multiple partitions in one API call.
For 90000 partitions, the time cost is reduced from 108s to 14s.
Fetch Iceberg table stats automatically while querying a table.
Collect accurate statistics for Iceberg tables by running the analyze SQL in Doris (the collect-by-meta option is removed).
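A sketch of the analyze statement, assuming an Iceberg catalog named iceberg_ctl (exact options may vary):
```
ANALYZE TABLE iceberg_ctl.db1.orders;
```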
1. Allow casting boolean to date-like types in Nereids; the result is null (see the sketch after this list).
2. The PruneOlapScanTablet rule can prune tablets even if an mv index is selected.
3. Constant conjuncts should not be pushed through the agg node in the old planner.
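A minimal illustration of item 1; the cast no longer errors and simply yields NULL:
```
SELECT CAST(true AS DATE), CAST(false AS DATETIME);
-- both expressions return NULL under Nereids
```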
Problem:
The select list should be non-constant when the FROM list contains tables or multiple tuples; otherwise the outer query gets a wrong isConstant value
and performs wrong constant folding.
For example, when using the nullif function with a subquery whose result is one of two alternative constants, the planner treats it as a constant expression, so the analyzer reports an error that the ORDER BY clause cannot be constant.
Solution:
Change the inline view output to non-constant, because for (select 1 a from table) as view, a in the output is not constant when we see
view.a outside.
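A sketch of the kind of query affected (table name t is hypothetical); the inline view output a is no longer treated as a constant by the outer query, so ordering by it is legal:
```
SELECT v.a
FROM (SELECT 1 AS a FROM t) AS v
ORDER BY v.a;
```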
Add important time points to the planning process:
- queryJoinReorderFinishTime: join reorder end time; the interval from analyze finish to this point is the join reorder time
- queryCreateSingleNodeFinishTime: single-node plan creation end time; the interval from join reorder finish to this point is the single-node plan creation time
- queryDistributedFinishTime: distributed plan creation end time; the interval from single-node plan finish to this point is the distributed plan creation time
I will enhance the performance of querying the meta cache of HMS tables in 2 steps:
**Step1** : use concurrent batch loading for meta cache
**Step2** : execute some other tasks concurrently as soon as possible
**This PR is mainly for step 1, and it does the following things:**
- Create a `CacheBulkLoader` for batch loading
- Remove the executor of the previous async cache loader and change the loader's type to `CacheBulkLoader` (we do not set any refresh strategy for the LoadingCache, so the previous executor is not useful)
- Use a `FixedCacheThreadPool` to replace the `CacheThreadPool` (the previous `CacheThreadPool` just logged warnings and would not throw any exception when the pool was full)
- Remove parallel streams and use the `CacheBulkLoader` to do batch loadings
- Change the value of `max_external_cache_loader_thread_pool_size` to 64, and set the pool size of hms client pool to `max_external_cache_loader_thread_pool_size`
- Fix the spelling mistake for `max_hive_table_catch_num`
Add a session variable & config enable_strong_consistency_read to solve the problem that load results may be briefly invisible on followers, to meet user requirements in strong-consistency read scenarios.
When enabled, the follower will sync the max journal id from the master and wait for replay to catch up.
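A minimal usage sketch; per the description this can be enabled as a session variable:
```
SET enable_strong_consistency_read = true;
```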
If there is no `info file` in the repository, the MySQL connection may be lost when the user executes `show snapshot on repo`:
```
2023-07-05 09:22:48,689 WARN (mysql-nio-pool-0|199) [ReadListener.lambda$handleEvent$0():60] Exception happened in one session(org.apache.doris.qe.ConnectContext@730797c1).
java.io.IOException: Error happened when receiving packet.
at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:691) ~[doris-fe.jar:1.2-SNAPSHOT]
at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_322]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_322]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_322]
```
This is because some fields are missing in the returned result set.
Support estimating the table row count based on file size.
With sample size = 3000 (total partition number is 87491), the cache load time is 45s.
With sample size = 100000 (more than the total partition number, 87505), the cache load time is 388s.
Currently, when pushing a runtime filter down into a CTE, we construct the runtime filter expr_order with an incrementing number, which is not correct. For runtime filter push-down inside a CTE, the join nodes will always be different, so expr_order should be fixed at 0 without incrementing; otherwise the check between expr_order and probe_expr_size fails, or the query result is wrong.
This PR temporarily reverts 2827bc1; it breaks the CTE runtime filter push-down plan pattern.