Commit Graph

7768 Commits

Author SHA1 Message Date
e331e0420b [improvement](topn) add per scanner limit check for new scanner (#15231)
Optimize key topn queries like `SELECT * FROM store_sales ORDER BY ss_sold_date_sk, ss_sold_time_sk LIMIT 100`
(ss_sold_date_sk, ss_sold_time_sk is a prefix of the table sort key).

Check the per-scanner limit and set eof to true to reduce the amount of data that needs to be read.
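
The per-scanner check can be sketched as follows (a minimal Python illustration; the function and its arguments are hypothetical stand-ins, not the actual C++ scanner interface):

```python
def scan_with_limit(blocks, limit):
    """Read blocks from a single scanner, stopping once this scanner alone
    has produced `limit` rows. Because the ORDER BY columns are a prefix of
    the table sort key, each scanner emits rows in sorted order, so rows
    beyond the first `limit` in one scanner can never reach the final result."""
    out = []
    for block in blocks:
        for row in block:
            out.append(row)
            if len(out) >= limit:
                return out, True   # eof=True: stop reading further blocks
    return out, False              # scanner exhausted before hitting the limit

rows, eof = scan_with_limit([[1, 2, 3], [4, 5, 6]], 4)
```
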
2022-12-22 22:39:31 +08:00
d38461616c [Pipeline](error msg) format error message (#15247) 2022-12-22 20:55:06 +08:00
1fdd4172bd [fix](Inbitmap) fix in bitmap result error when left expr is constant (#15271)
* [fix](Inbitmap) fix in bitmap result error when left expr is constant

1. When the left expr of the in predicate is a constant, instead of generating a bitmap filter, rewrite the SQL to use `bitmap_contains`.
  For example,"select k1, k2 from (select 2 k1, 11 k2) t where k1 in (select bitmap_col from bitmap_tbl)"
  => "select k1, k2 from (select 2 k1, 11 k2) t left semi join bitmap_tbl b on bitmap_contains(b.bitmap_col, t.k1)"

* add regression test
2022-12-22 19:25:09 +08:00
77c15729d4 [fix](memory) Fix too many repeat cause OOM (#15217) 2022-12-22 17:16:18 +08:00
6fb61b5bbc [enhancement] (streamload) allow table in url when do two-phase commit (#15246) (#15248)
Make it work even if the user provides (unnecessary) table info in the URL.
i.e. `curl -X PUT --location-trusted -u user:passwd -H "txn_id:18036" -H \
"txn_operation:commit" http://fe_host:http_port/api/{db}/{table}/_stream_load_2pc`
still works.

Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
2022-12-22 17:00:51 +08:00
754fceafaf [feature-wip](statistics) add aggregate function histogram and collect histogram statistics (#14910)
**Histogram statistics**

Currently Doris collects statistics but no histogram data, and by default the optimizer assumes that the distinct values of a column are evenly distributed. This estimate can be problematic when the data distribution is skewed, so this PR implements the collection of histogram statistics.

For skewed columns (columns whose data is unevenly distributed), histogram statistics enable the optimizer to generate more accurate cardinality estimates for filter or join predicates involving these columns, resulting in a more precise execution plan.

Histograms improve the execution plan mainly in two ways: where-condition selection and join-order selection. The where-condition principle is relatively simple: the histogram is used to compute the selectivity of each predicate, and the more selective filters are applied first.

Join-order selection is based on estimating the number of rows in the join result. When data in the join-condition columns is unevenly distributed, histograms can greatly improve the accuracy of that row-count estimate. In addition, if a bucket in one of the columns has zero rows, it can be marked and skipped during the subsequent join to improve efficiency.
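
The where-condition side can be sketched in Python against the bucket format shown below (`pre_sum` is the number of rows in all earlier buckets). This is an illustrative sketch with hypothetical names, not Doris code, and it assumes rows are uniformly distributed inside a bucket:

```python
def estimate_lt(buckets, total_rows, value):
    """Estimate the selectivity of `col < value` from an equal-height
    histogram. Each bucket carries lower/upper bounds, a row count, and
    pre_sum (rows in all earlier buckets); rows within a bucket are
    assumed uniformly distributed."""
    for b in buckets:
        if value <= b["lower"]:
            return b["pre_sum"] / total_rows
        if value <= b["upper"]:
            frac = (value - b["lower"]) / (b["upper"] - b["lower"])
            return (b["pre_sum"] + frac * b["count"]) / total_rows
    return 1.0

buckets = [
    {"lower": 0, "upper": 10, "count": 50, "pre_sum": 0},
    {"lower": 10, "upper": 100, "count": 50, "pre_sum": 50},
]
selectivity = estimate_lt(buckets, 100, 55)
```

The optimizer would compare such selectivities to order filters, or multiply them into row-count estimates when choosing a join order.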

---

Histogram statistics are mainly collected by the histogram aggregation function, which is used as follows:

**Syntax**

```SQL
histogram(expr)
```

> The histogram function describes the distribution of the data. It uses an "equal height" bucketing strategy, dividing the data into buckets by value and describing each bucket with a few simple statistics, such as the number of values that fall into it. It is mainly used by the optimizer to estimate range queries.
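
The equal-height strategy can be sketched in Python (an illustrative sketch with hypothetical names, not the actual implementation; `bucket_size` here is the target number of buckets, matching the `bucket_size` field in the output below):

```python
def build_equal_height(values, bucket_size):
    """Build an equal-height histogram: sort the values and cut them into
    at most `bucket_size` chunks of roughly equal row count, recording
    lower/upper bounds, count, pre_sum, and ndv per bucket."""
    values = sorted(values)
    per = max(1, -(-len(values) // bucket_size))  # ceil division
    buckets, pre_sum = [], 0
    for i in range(0, len(values), per):
        chunk = values[i:i + per]
        buckets.append({
            "lower": chunk[0],
            "upper": chunk[-1],
            "count": len(chunk),
            "pre_sum": pre_sum,
            "ndv": len(set(chunk)),
        })
        pre_sum += len(chunk)
    return buckets
```
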

**example**

```
MySQL [test]> select histogram(login_time) from dev_table;
+------------------------------------------------------------------------------------------------------------------------------+
| histogram(`login_time`)                                                                                                      |
+------------------------------------------------------------------------------------------------------------------------------+
| {"bucket_size":5,"buckets":[{"lower":"2022-09-21 17:30:29","upper":"2022-09-21 22:30:29","count":9,"pre_sum":0,"ndv":1},...]}|
+------------------------------------------------------------------------------------------------------------------------------+
```
**description**

```JSON
{
    "bucket_size": 5, 
    "buckets": [
        {
            "lower": "2022-09-21 17:30:29", 
            "upper": "2022-09-21 22:30:29", 
            "count": 9, 
            "pre_sum": 0, 
            "ndv": 1
        }, 
        {
            "lower": "2022-09-22 17:30:29", 
            "upper": "2022-09-22 22:30:29", 
            "count": 10, 
            "pre_sum": 9, 
            "ndv": 1
        }, 
        {
            "lower": "2022-09-23 17:30:29", 
            "upper": "2022-09-23 22:30:29", 
            "count": 9, 
            "pre_sum": 19, 
            "ndv": 1
        }, 
        {
            "lower": "2022-09-24 17:30:29", 
            "upper": "2022-09-24 22:30:29", 
            "count": 9, 
            "pre_sum": 28, 
            "ndv": 1
        }, 
        {
            "lower": "2022-09-25 17:30:29", 
            "upper": "2022-09-25 22:30:29", 
            "count": 9, 
            "pre_sum": 37, 
            "ndv": 1
        }
    ]
}
```

TODO:
- histogram function supports parameters and sampled statistics (in another PR)
- use histogram statistics
- add p0 regression tests
2022-12-22 16:42:17 +08:00
d0a4a8e047 [Feature](Nereids) Push limit through union all. (#15272)
This PR pushes the limit through union all into each child plan.
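
The rewrite can be sketched with a toy plan representation (tuples here are a hypothetical stand-in for Nereids plan nodes):

```python
def push_limit_through_union(n, union_children):
    """Rewrite Limit(n, UnionAll(c1..ck)) into
    Limit(n, UnionAll(Limit(n, c1), ..., Limit(n, ck))): the outer limit
    keeps at most n rows, so each child needs to produce at most n rows."""
    limited = tuple(("Limit", n, child) for child in union_children)
    return ("Limit", n, ("UnionAll",) + limited)

plan = push_limit_through_union(100, ("scan_t1", "scan_t2"))
```
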
2022-12-22 14:46:47 +08:00
f8b368a85e [Feature](Nereids) Support bitmap for materialized index. (#14863)
This PR adds the rewriting and matching logic for the bitmap_union column in a materialized index.

If a materialized index has a bitmap_union column, we try to rewrite count distinct or bitmap_union_count to use the bitmap_union column in the materialized index.
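
The equivalence behind the rewrite can be sketched in Python, with sets standing in for bitmaps (an illustration of the semantics, not the actual rewrite rule):

```python
def bitmap_union_count(bitmaps):
    """Union the pre-aggregated bitmaps and count the result. This equals
    count(distinct x) over the raw rows, which is what allows a
    count-distinct query to be answered from a bitmap_union index."""
    acc = set()
    for bm in bitmaps:
        acc |= bm
    return len(acc)

raw_values = [1, 2, 2, 3, 3, 3]
per_segment_bitmaps = [{1, 2}, {2, 3}, {3}]   # pre-aggregated by the index
```
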
2022-12-22 14:40:25 +08:00
0fa4c78e84 [Improvement](external table) support hive external table which stores data on tencent chdfs (#15125) 2022-12-22 14:32:55 +08:00
a87f905a2d [Feature](Nereids) unnest subquery in 'not in' predicate into NULL AWARE ANTI JOIN (#15230)
When we process a not-in subquery, if the column returned by the subquery is nullable, we need a NULL AWARE ANTI JOIN instead of an ANTI JOIN.
Doris already supports NULL AWARE ANTI JOIN (PR #13871);
Nereids needs to do the same.
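
The semantic difference can be sketched in Python, with `None` standing in for SQL NULL (an illustrative sketch, not the join implementation):

```python
def null_aware_anti_join(left_values, right_values):
    """Evaluate `x NOT IN (subquery)` with SQL three-valued logic.
    If the right side contains NULL, `x NOT IN (...)` is never true, so the
    result is empty; a plain anti join would wrongly keep left rows that are
    merely absent from the right side. A NULL on the left is filtered too."""
    if any(v is None for v in right_values):
        return []
    right = set(right_values)
    return [x for x in left_values if x is not None and x not in right]
```
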
2022-12-22 14:13:47 +08:00
87756f5441 [regression](query) query with limit 0 regression test (#15245) 2022-12-22 14:06:44 +08:00
e9a201e0ec [refactor](non-vec) delete some non-vec exec node (#15239)
* [refactor](non-vec) delete some non-vec exec node
2022-12-22 14:05:51 +08:00
1520a4af6d [refactor](resource) use resource to create external catalog (#14978)
Use resource to create external catalog.
```
-- HMS
mysql> create resource hms_resource properties(
    -> "type"="hms",
    -> 'hive.metastore.uris' = 'thrift://172.21.0.44:7004',
    -> 'dfs.nameservices'='HANN',
    -> 'dfs.ha.namenodes.HANN'='nn1,nn2',
    -> 'dfs.namenode.rpc-address.HANN.nn1'='172.21.0.32:4007',
    -> 'dfs.namenode.rpc-address.HANN.nn2'='172.21.0.44:4007',
    -> 'dfs.client.failover.proxy.provider.HANN'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
    -> );

-- MYSQL
mysql> create resource mysql_resource properties (
    -> "type"="jdbc",
    -> "user"="root",
    -> "password"="123456",
    -> "jdbc_url" = "jdbc:mysql://127.0.0.1:3316/doris_test?useSSL=false",
    -> "driver_url" = "https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar",
    -> "driver_class" = "com.mysql.cj.jdbc.Driver");

-- ES
mysql> create resource es_resource properties (
    -> "type"="es",
    -> "hosts"="http://127.0.0.1:29200",
    -> "nodes_discovery"="false",
    -> "enable_keyword_sniff"="true");
```
2022-12-22 13:45:55 +08:00
2bb4ea5dea [regression-test](icebergv2) add icebergv2 test case (#15187) 2022-12-22 13:45:07 +08:00
c9f26183b0 [feature-wip](MTMV) Support importing data to materialized view with multiple tables (#14944)
## Use Case

```SQL
create table t_user(
     event_day DATE,
     id bigint,
     username varchar(20)
)
DISTRIBUTED BY HASH(id) BUCKETS 10 
PROPERTIES (
   "replication_num" = "1"
 );
insert into  t_user values("2022-10-26",1,"clz");
insert into  t_user values("2022-10-28",2,"zhangsang");
insert into  t_user values("2022-10-29",3,"lisi");
create table t_user_pv(
    event_day DATE,
    id bigint,
    pv bigint
)
DISTRIBUTED BY HASH(id) BUCKETS 10 
PROPERTIES (
   "replication_num" = "1"
 );
insert into  t_user_pv  values("2022-10-26",1,200);
insert into  t_user_pv  values("2022-10-28",2,200);
insert into  t_user_pv  values("2022-10-28",3,300);

DROP MATERIALIZED VIEW  if exists multi_mv;
CREATE MATERIALIZED VIEW  multi_mv
BUILD IMMEDIATE 
REFRESH COMPLETE 
start with "2022-10-27 19:35:00"
next  60 second
KEY(username)   
DISTRIBUTED BY HASH (username)  buckets 1
PROPERTIES ('replication_num' = '1') 
AS 
select t_user.username, t_user_pv.pv  from t_user, t_user_pv where t_user.id=t_user_pv.id;
```
2022-12-22 11:46:41 +08:00
c81a3bfe1b [docs](compile)Add Windows compilation documentation (#15253)
Add Windows compilation documentation
2022-12-22 10:16:58 +08:00
fdcabf16b1 [fix](multi-catalog) fix show data on external catalog (#15227)
If we switch to an external catalog and use a database that has the same name as a database in the internal catalog,
the 'show data' query will return data info from the internal catalog.
2022-12-22 09:43:15 +08:00
7d49ddf50c [bugfix](thirdparty) patch simdjson to avoid conflict with odbc macro BOOL (#15223)
Fix the conflicting name BOOL between odbc sqltypes.h and simdjson element.h by changing BOOL to BOOLEAN in simdjson.

- thirdparty/installed/include/sqltypes.h

> #define	BOOL				int


- thirdparty/src/simdjson-1.0.2/include/simdjson/dom/element.h

> enum class element_type {
>   ARRAY = '[',     ///< dom::array
>   OBJECT = '{',    ///< dom::object
>   INT64 = 'l',     ///< int64_t
>   UINT64 = 'u',    ///< uint64_t: any integer that fits in uint64_t but *not* int64_t
>   DOUBLE = 'd',    ///< double: Any number with a "." or "e" that fits in double.
>   STRING = '"',    ///< std::string_view
>   BOOL = 't',      ///< bool
>   NULL_VALUE = 'n' ///< null
> };
>
2022-12-22 09:40:04 +08:00
b4f5b7a4c9 [fix](load) fix load failure caused by incorrect file format (#15222)
Issue Number: close #15221
2022-12-22 09:38:37 +08:00
cc995c4307 [fix](load) fix new_load_scan_node load finished but no data actually caused by wrong file size (#15211) 2022-12-22 09:28:00 +08:00
1cc79510c9 [enhancement](compaction) add delete_sign_index check before filter delete (#15190) 2022-12-22 09:26:37 +08:00
1ca1417824 [feature](multi-catalog) support show tables/table status from catalog.db (#15180)
support 'show tables from catalog.db' and 'show table status from catalog.db'
2022-12-22 09:22:40 +08:00
56f7ba19c0 [opt](planner) add session var: COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD (#15225)
In a previous PR (#14876) we compact equality predicates like "a=1 or a=2 or a=3" into "a in (1, 2, 3)".
This PR sets a lower bound on the number of equality predicates, COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD (default is 2).

For performance reasons, we collect the literals into a hash set, like {1,2,3}; hence the literals in the in-predicate appear in random order.

For regression tests that need stable explain output, set COMPACT_EQUAL_TO_IN_PREDICATE_THRESHOLD to a large number to avoid the compaction rule.
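
The rule can be sketched as follows (hypothetical names; this sketch sorts the literals to make its output readable, whereas the real rule's hash-set order is unspecified, which is exactly why explain output is unstable):

```python
def compact_equals_to_in(column, literals, threshold=2):
    """Compact `col=l1 OR col=l2 OR ...` into `col IN (...)` once the
    number of distinct equality literals reaches the threshold; below the
    threshold the original OR chain is kept (returns None)."""
    unique = set(literals)
    if len(unique) < threshold:
        return None
    return f"{column} IN ({', '.join(map(str, sorted(unique)))})"
```
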
2022-12-21 21:10:47 +08:00
c0b39de61c [Feature](Nereids) Support join hint (#13601)
Support join hints for the Nereids planner. Hints for broadcast and shuffle joins are supported by this PR.
2022-12-21 21:09:13 +08:00
649bbc1e58 [fix](nereids) Fix case-when (#15150) 2022-12-21 21:03:50 +08:00
8ecf69b09b [pipeline](regression) nested loop join test get error result in pipeline engine and refactor the code for need more input data (#15208) 2022-12-21 19:03:51 +08:00
e65b577f90 [fix](InBitmap) Check whether the in bitmap contains correlated subqueries (#15184) 2022-12-21 16:52:27 +08:00
af54299b26 [Pipeline](projection) Support projection on pipeline engine (#15220) 2022-12-21 15:47:29 +08:00
a447121fc3 [fix](scanner scheduler) fix coredump of ScannerScheduler::_scanner_scan (#15199)
* [fix](scanner scheduler) fix coredump of ScannerScheduler::_scanner_scan

* fix
2022-12-21 15:44:47 +08:00
2445ac9520 [Bug](runtimefilter) Fix BE crash due to init failure (#15228) 2022-12-21 15:36:22 +08:00
5aefb793f9 [Bugfix](round) fix round function may coredump (#15203)
* [Bugfix](round) fix round function may coredump
2022-12-21 14:36:10 +08:00
e83bab4e44 [typo](docs)add spark-doris-connector config (#15214) 2022-12-21 14:12:41 +08:00
90349f0e61 [Feature](Nereids) support mask function (#15120)
support function for nereids: mask, mask_first_n, mask_last_n
2022-12-21 10:25:11 +08:00
efdc73777a [enhancement](load) verify the number of rows between different replicas when load data to avoid data inconsistency (#15101)
It is very difficult to investigate data inconsistency across multiple replicas.
When loading data, the number of rows is now verified across replicas to catch some inconsistency problems early.
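
The idea of the check can be sketched in Python (a minimal sketch with hypothetical names, not the actual BE implementation):

```python
def verify_replica_rows(replica_row_counts):
    """Compare the row count reported by each replica of a tablet after a
    load; a mismatch means the replicas diverged, so fail the load instead
    of silently committing inconsistent data."""
    counts = set(replica_row_counts.values())
    if len(counts) > 1:
        detail = ", ".join(f"{r}={n}" for r, n in sorted(replica_row_counts.items()))
        raise ValueError(f"replica row count mismatch: {detail}")
    return True
```
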
2022-12-21 09:50:13 +08:00
d0d7a6d8ad [fix](multi-catalog) can't show databases when creating a new user in external catalog (#15204)
Fix bug: A new user with grants to access external catalog can't show databases.
2022-12-21 08:58:06 +08:00
8969c19cd4 [fix](jdbc) fix create table like table of jdbc error (#15179)
When creating a table like a table of a JDBC catalog, it fails with an error like
'errCode = 2, detailMessage = Failed to execute CREATE TABLE LIKE baseall_mysql.
Reason: errCode = 2, detailMessage = property table_type must be set'.
This PR fixes it.
2022-12-21 08:56:43 +08:00
732417258c [Bug](pipeline) Fix bugs to pass TPCDS cases (#15194) 2022-12-20 22:29:55 +08:00
5c35f02bdb [fix](nereids) add signature for IF to support HLL type (#15188) 2022-12-20 22:22:11 +08:00
c3712b1114 [bug](jdbc) fix error of jdbc with datetime type in oracle (#15205) 2022-12-20 22:05:55 +08:00
d6b4d214ce 1.1.5 sidebar (#15206) 2022-12-20 20:08:45 +08:00
2501198800 [Bug](compile) Fix compiling error (#15207) 2022-12-20 20:05:49 +08:00
821c12a456 [chore](BE) remove all useless segment group related code #15193
The segment group is unused in the current codebase, so remove all the related code inside Doris. For the related protobuf code, use the reserved flag to prevent any future user from using those fields.
2022-12-20 17:11:47 +08:00
c172e2396a [docs](releasenote) Release Note 1.1.5 (#15182) 2022-12-20 16:38:33 +08:00
5cf21fa7d1 [feature](planner) mark join to support subquery in disjunction (#14579)
Co-authored-by: Gabriel <gabrielleebuaa@gmail.com>
2022-12-20 15:22:43 +08:00
d9550c311e [feature](Nereids) implement setOperation (#15020)
This PR implements the SetOperation.

- Adapt to the EliminateUnnecessaryProject rule to ensure that the project under SetOperation is not deleted.
- Add predicate pushdown of SetOperation
- Optimization: Merge multiple SetOperations with the same type and the same qualifier
- Optimization: merge oneRowRelation and union
2022-12-20 15:14:29 +08:00
fdb54a346d [feature] (nereids) support aggregate function group_bit_and/or/xor (#15003)
support

group_bit_and
group_bit_xor
group_bit_or
2022-12-20 14:11:07 +08:00
6712f1fc1d [fix](Nereids) encryption function with 4 params should auto-complete last param with config (#15038) 2022-12-20 13:55:54 +08:00
737fe49f6f [Bug](FE) fix compile error due to code refactor (#15192) 2022-12-20 13:20:55 +08:00
9d48154cdc [minor](non-vec) delete unused interface in RowBatch (#15186) 2022-12-20 13:06:34 +08:00
a2d56af7d9 [profile](datasender) add more detail profile in data stream sender (#15176)
* [profile](datasender) add more detail profile in data stream sender


Co-authored-by: yiguolei <yiguolei@gmail.com>
2022-12-20 12:07:34 +08:00