Optimize key TopN queries such as `SELECT * FROM store_sales ORDER BY ss_sold_date_sk, ss_sold_time_sk LIMIT 100`,
where (ss_sold_date_sk, ss_sold_time_sk) is a prefix of the table sort key.
The per-scanner limit is checked and eof is set to true once the limit is reached, reducing the amount of data that needs to be read.
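A hedged sketch of the setup this optimization targets; the schema below is abridged and the column types, bucketing, and properties are assumptions, not part of this PR:
```SQL
-- The DUPLICATE KEY columns form the table sort key, so the ORDER BY below
-- matches a sort-key prefix and each scanner can stop (set eof) once it has
-- produced 100 rows.
CREATE TABLE store_sales (
    ss_sold_date_sk BIGINT,
    ss_sold_time_sk BIGINT,
    ss_item_sk      BIGINT,
    ss_quantity     INT
)
DUPLICATE KEY(ss_sold_date_sk, ss_sold_time_sk)
DISTRIBUTED BY HASH(ss_item_sk) BUCKETS 10
PROPERTIES ("replication_num" = "1");

SELECT * FROM store_sales
ORDER BY ss_sold_date_sk, ss_sold_time_sk
LIMIT 100;
```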
* [fix](Inbitmap) fix in bitmap result error when left expr is constant
1. When the left expr of the in predicate is a constant, instead of generating a bitmap filter, rewrite the SQL to use `bitmap_contains`.
For example, "select k1, k2 from (select 2 k1, 11 k2) t where k1 in (select bitmap_col from bitmap_tbl)"
is rewritten to "select k1, k2 from (select 2 k1, 11 k2) t left semi join bitmap_tbl b on bitmap_contains(b.bitmap_col, t.k1)"
* add regression test
**Histogram statistics**
Currently, Doris collects statistics but no histogram data, and by default the optimizer assumes that the distinct values of a column are evenly distributed. This estimate can be problematic when the data distribution is skewed, so this PR implements the collection of histogram statistics.
For columns with data skew (columns whose values are unevenly distributed), histogram statistics let the optimizer generate more accurate cardinality estimates for filter or join predicates involving these columns, resulting in a more precise execution plan.
Histograms improve the execution plan in two main areas: the ordering of where conditions and the selection of the join order. The principle for where conditions is relatively simple: the histogram is used to calculate the selectivity of each predicate, and the more selective filter is applied first.
The selection of the join order is based on the estimated number of rows in the join result. When data in the join-condition columns is unevenly distributed, a histogram greatly improves the accuracy of this row-count estimate. In addition, if a bucket in one of the columns contains 0 rows, it can be marked and skipped directly during the subsequent join to improve efficiency.
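A hedged illustration of why this matters; the tables, columns, and data distribution here are hypothetical. Suppose 90% of the rows in `orders.status` have the value 'DONE'. Without a histogram, both predicates below receive the same rows/NDV estimate; with an equal-height histogram, the optimizer can see that one matches most of the table and the other almost none, which affects both filter ordering and the estimated size of the join result:
```SQL
-- Skewed column: ~90% of orders have status = 'DONE'.
SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'DONE';      -- histogram: falls into large buckets, filters little

SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'REFUNDED';  -- histogram: tiny bucket, join result estimated as small
```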
---
Histogram statistics are mainly collected by the histogram aggregation function, which is used as follows:
**Syntax**
```SQL
histogram(expr)
```
> The histogram function is used to describe the distribution of the data. It uses an "equal height" bucketing strategy and divides the data into buckets according to the values of the data. It describes each bucket with some simple statistics, such as the number of values that fall into the bucket. It is mainly used by the optimizer to estimate range queries.
**Example**
```
MySQL [test]> select histogram(login_time) from dev_table;
+------------------------------------------------------------------------------------------------------------------------------+
| histogram(`login_time`) |
+------------------------------------------------------------------------------------------------------------------------------+
| {"bucket_size":5,"buckets":[{"lower":"2022-09-21 17:30:29","upper":"2022-09-21 22:30:29","count":9,"pre_sum":0,"ndv":1},...]}|
+------------------------------------------------------------------------------------------------------------------------------+
```
**Description**
```JSON
{
    "bucket_size": 5,
    "buckets": [
        {
            "lower": "2022-09-21 17:30:29",
            "upper": "2022-09-21 22:30:29",
            "count": 9,
            "pre_sum": 0,
            "ndv": 1
        },
        {
            "lower": "2022-09-22 17:30:29",
            "upper": "2022-09-22 22:30:29",
            "count": 10,
            "pre_sum": 9,
            "ndv": 1
        },
        {
            "lower": "2022-09-23 17:30:29",
            "upper": "2022-09-23 22:30:29",
            "count": 9,
            "pre_sum": 19,
            "ndv": 1
        },
        {
            "lower": "2022-09-24 17:30:29",
            "upper": "2022-09-24 22:30:29",
            "count": 9,
            "pre_sum": 28,
            "ndv": 1
        },
        {
            "lower": "2022-09-25 17:30:29",
            "upper": "2022-09-25 22:30:29",
            "count": 9,
            "pre_sum": 37,
            "ndv": 1
        }
    ]
}
```
TODO:
- histogram function supports parameters and sampled statistics (in another PR)
- use histogram statistics in the optimizer
- add p0 regression tests
This PR adds the rewriting and matching logic for the bitmap_union column in a materialized index.
If a materialized index has a bitmap_union column, we try to rewrite count distinct or bitmap_union_count to use the bitmap_union column in the materialized index.
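A sketch of the kind of rewrite this enables; the table, column, and materialized view names are hypothetical:
```SQL
-- Materialized index containing a bitmap_union column over a base table.
CREATE MATERIALIZED VIEW visit_uv AS
SELECT page_id, bitmap_union(to_bitmap(user_id))
FROM visit_log
GROUP BY page_id;

-- Queries of these shapes can be matched against the bitmap_union column and
-- answered from the materialized index instead of the base data:
SELECT page_id, count(DISTINCT user_id) FROM visit_log GROUP BY page_id;
SELECT page_id, bitmap_union_count(to_bitmap(user_id)) FROM visit_log GROUP BY page_id;
```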
When we process a NOT IN subquery, if the column returned by the subquery is nullable, we need a NULL AWARE ANTI JOIN instead of an ANTI JOIN.
Doris already supports NULL AWARE ANTI JOIN since PR #13871; Nereids needs to do the same.
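A sketch of the case in question, with hypothetical tables t1 and t2 where t2.b is nullable:
```SQL
-- If any t2.b is NULL, "t1.a NOT IN (...)" evaluates to NULL (not true) for every
-- row, so the query must return no rows; a plain ANTI JOIN would instead keep the
-- t1 rows without a match, which is why a NULL AWARE ANTI JOIN is required.
SELECT * FROM t1 WHERE t1.a NOT IN (SELECT t2.b FROM t2);
```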
If we switch to an external catalog and use a database that has the same name as a database in the internal catalog,
the query `SHOW DATA` returns data info from the internal catalog.
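A sketch of the reproduction; the catalog and database names are hypothetical:
```SQL
SWITCH hive_catalog;   -- switch to an external catalog
USE db1;               -- db1 also exists in the internal catalog
SHOW DATA;             -- before this fix, returned data info from the internal catalog's db1
```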
In a previous PR (#14876) we compact equality predicates like "a = 1 or a = 2 or a = 3" into "a in (1, 2, 3)".
This PR sets a lower bound on the number of equality predicates, COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD (default is 2).
For performance reasons, we collect the literals in a HashSet, e.g. {1, 2, 3}; hence the literals in the resulting in-predicate appear in arbitrary order.
For regression tests, if stable explain output is needed, set COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD to a large number to avoid triggering the compaction rule.
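For example (a hypothetical table and column, using the default threshold described above):
```SQL
-- With the default COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD = 2, this OR chain of
-- three equality predicates on the same column is compacted:
SELECT * FROM t WHERE a = 1 OR a = 2 OR a = 3;
-- into (literal order is not guaranteed, since a hash set collects the literals):
SELECT * FROM t WHERE a IN (1, 2, 3);
```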
It is very difficult to investigate data inconsistencies between multiple replicas.
When loading data, the number of rows is now checked across replicas to catch some data inconsistency problems.
When executing CREATE TABLE LIKE on a JDBC table, it fails with an error like
'errCode = 2, detailMessage = Failed to execute CREATE TABLE LIKE baseall_mysql.
Reason: errCode = 2, detailMessage = property table_type must be set'
This PR fixes it.
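A sketch of the reproduction; the source table name follows the error message above, and the target table name is hypothetical:
```SQL
-- baseall_mysql is assumed to be a table backed by a JDBC external table/catalog;
-- before this fix, copying its schema failed because table_type was not carried over.
CREATE TABLE test_like LIKE baseall_mysql;
```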
The segment group is useless in the current codebase, so all the related code inside Doris is removed. As for the related protobuf code, the reserved flag is used to prevent any future user from reusing those fields.
This PR implements SetOperation.
- Adapt to the EliminateUnnecessaryProject rule to ensure that the project under a SetOperation is not deleted.
- Add predicate pushdown for SetOperation.
- Optimization: merge multiple SetOperations with the same type and the same qualifier (see the sketch after this list).
- Optimization: merge OneRowRelation and Union.
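A minimal sketch of the two merge optimizations above; the tables and column are hypothetical:
```SQL
-- Same operation and same qualifier (UNION ALL): the nested set operations
-- can be merged into a single SetOperation node with three children.
SELECT k FROM t1
UNION ALL
SELECT k FROM t2
UNION ALL
SELECT k FROM t3;

-- OneRowRelations under a union: the constant rows can be merged during planning.
SELECT 1 AS k
UNION ALL
SELECT 2 AS k;
```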