Commit Graph

44 Commits

Author SHA1 Message Date
e1d7233e9c [feature](vectorization) Support Vectorized Exec Engine In Doris (#7785)
# Proposed changes

Issue Number: close #6238

    Co-authored-by: HappenLee <happenlee@hotmail.com>
    Co-authored-by: stdpain <34912776+stdpain@users.noreply.github.com>
    Co-authored-by: Zhengguo Yang <yangzhgg@gmail.com>
    Co-authored-by: wangbo <506340561@qq.com>
    Co-authored-by: emmymiao87 <522274284@qq.com>
    Co-authored-by: Pxl <952130278@qq.com>
    Co-authored-by: zhangstar333 <87313068+zhangstar333@users.noreply.github.com>
    Co-authored-by: thinker <zchw100@qq.com>
    Co-authored-by: Zeno Yang <1521564989@qq.com>
    Co-authored-by: Wang Shuo <wangshuo128@gmail.com>
    Co-authored-by: zhoubintao <35688959+zbtzbtzbt@users.noreply.github.com>
    Co-authored-by: Gabriel <gabrielleebuaa@gmail.com>
    Co-authored-by: xinghuayu007 <1450306854@qq.com>
    Co-authored-by: weizuo93 <weizuo@apache.org>
    Co-authored-by: yiguolei <guoleiyi@tencent.com>
    Co-authored-by: anneji-dev <85534151+anneji-dev@users.noreply.github.com>
    Co-authored-by: awakeljw <993007281@qq.com>
    Co-authored-by: taberylyang <95272637+taberylyang@users.noreply.github.com>
    Co-authored-by: Cui Kaifeng <48012748+azurenake@users.noreply.github.com>


## Problem Summary:

### 1. Some code from clickhouse

**ClickHouse is an excellent implementation of a vectorized execution engine database,
so we have referenced and learned a lot from its excellent data structures and function implementations.
Our work is based on ClickHouse v19.16.2.2, and we would like to thank the ClickHouse community and its developers.**

The following comment has been added to code copied from ClickHouse, e.g.:
```
// This file is copied from
// https://github.com/ClickHouse/ClickHouse/blob/master/src/Interpreters/AggregationCommon.h
// and modified by Doris
```

### 2. Supported exec nodes and queries:
* vaggregation_node
* vanalytic_eval_node
* vassert_num_rows_node
* vblocking_join_node
* vcross_join_node
* vempty_set_node
* ves_http_scan_node
* vexcept_node
* vexchange_node
* vintersect_node
* vmysql_scan_node
* vodbc_scan_node
* volap_scan_node
* vrepeat_node
* vschema_scan_node
* vselect_node
* vset_operation_node
* vsort_node
* vunion_node
* vhash_join_node

You can run the SSB/TPC-H query sets and about 70% of the TPC-DS standard query set on the vectorized exec engine.

### 3. Data Model

The vectorized exec engine supports **Dup/Agg/Unq** tables and a vectorized Block Reader.
Vectorization at the segment level is a work in progress.

### 4. How to use

1. Set the session variable `set enable_vectorized_engine = true;` (required)
2. Set the session variable `set batch_size = 4096;` (recommended), as shown in the sketch below
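A minimal usage sketch, assuming a MySQL-client session against an FE (the table name `lineorder` is only illustrative):

```
-- Enable the vectorized exec engine for the current session (required)
set enable_vectorized_engine = true;

-- Use a larger batch size so each block carries more rows (recommended)
set batch_size = 4096;

-- Subsequent queries in this session run on the vectorized engine, e.g.:
select count(*) from lineorder;
```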

### 5. Some differences from the original exec engine

https://github.com/doris-vectorized/doris-vectorized/issues/294

## Checklist(Required)

1. Does it affect the original behavior: (No)
2. Have unit tests been added: (Yes)
3. Has documentation been added or modified: (No)
4. Does it need to update dependencies: (No)
5. Are there any changes that cannot be rolled back: (Yes)
2022-01-18 10:07:15 +08:00
6c6380969b [refactor] replace boost smart ptr with stl (#6856)
1. Replace all boost::shared_ptr with std::shared_ptr
2. Replace all boost::scoped_ptr with std::unique_ptr
3. Replace all boost::scoped_array with std::unique_ptr<T[]>
4. Replace all boost::thread with std::thread
2021-11-17 10:18:35 +08:00
7297b275f1 [Optimize] Optimize cpu consumption when importing parquet files (#6782)
Remove some of the dynamic_casts to reduce the overhead caused by type conversion,
which probably reduces the CPU consumption of Parquet file import by about 10%.
2021-10-03 12:14:35 +08:00
8738ce380b Add long text type STRING, with a maximum length of 2GB. Usage is similar to varchar, and there is no guarantee for the performance of storing extremely long data (#6391) 2021-08-18 09:05:40 +08:00
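A hedged sketch of how the new STRING type might be used in a table definition (the table name, key choice, and distribution clause are illustrative, not taken from the commit):

```
-- `body` holds long text, up to 2GB per value; per the commit message above,
-- performance is not guaranteed for extremely long data.
CREATE TABLE text_demo (
  id BIGINT,
  body STRING
) DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1;
```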
739c0268ff [refactor] Remove decimal v1 related code from code base (#6079)
Remove ALL DECIMAL V1 type code; this is a part of #6073
2021-07-07 10:26:32 +08:00
bde60280b8 [Optimize] use string_view instead of std::string in string function (#6010) 2021-06-16 09:40:13 +08:00
75db273b93 [Doris On ES][WIP] Support external ES table with SSL secured and configurable node sniffing (#5325)
Support external ES tables secured with `SSL`, with configurable node sniffing
2021-04-12 11:23:49 +08:00
6dcc1b0a55 [Doris on ES] Fix query failed when ES field value is null (#5363)
* Update fe-idea-dev.md

use `brew install thrift@0.9` to install thrift 0.9.3.1
`brew edit thrift090 | head` shows thrift@0.9 uses thrift 0.9.3.1

* [Refactor] Remove the unnecessary if statement

Future<?> submit(Runnable task)
Submits a Runnable task for execution and returns a Future representing that task. The Future's get method will return null upon successful completion.

* Fix null type

* add comment

Co-authored-by: tanhao <tanhao.0902@bytedance.com>
2021-02-23 10:42:25 +08:00
93a4c7efc1 [LOG] Standardize the use of VLOG in code (#5264)
At present, the use of VLOG in the code is quite confusing:
the VLOG_XX format inherited from Impala coexists with the VLOG(number) format,
and the VLOG(number) format has no unified specification, so this PR standardizes the use of VLOG.
2021-01-21 12:09:09 +08:00
af06adb57f [Doris On ES][Bug-fix] fix boolean predicate pushdown manner (#4990)
Correctly handle `boolean` field predicates by setting the predicate value to `true`, `false`, or an `empty set` for DOE
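As an illustration of the corrected behavior (the table and column names are hypothetical, and the exact pushed-down DSL is an assumption based on the description above):

```
-- A boolean predicate on an ES external table; with this fix it is pushed
-- down with a real boolean value, presumably something like
-- {"term":{"is_vip":true}} rather than a mis-typed literal.
select * from es_user_table where is_vip = true;
```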
2020-12-02 10:13:13 +08:00
6fedf5881b [CodeFormat] Clang-format cpp sources (#4965)
Clang-format all c++ source files.
2020-11-28 18:36:49 +08:00
10e1e29711 Remove header file common/names.h (#4945) 2020-11-26 17:00:48 +08:00
09f97f8a05 [Refactor] Fixes some be typo part 2 (#4747) 2020-10-20 09:28:57 +08:00
75e0ba32a1 Fixes some be typo (#4714) 2020-10-13 09:37:15 +08:00
265c26f67d [Doris On ES] Add docvalue limitation for doc_values scan and enable doc_values scan default (#4055) 2020-07-10 18:37:36 +08:00
1e813df3fd [Doris On ES] [Bug-Fix][Refactor] Fix potential null pointer exception and refactor function process logic (#3985)
fix: https://github.com/apache/incubator-doris/issues/3984

1. Add `conjunct.size` checking and `slot_desc nullptr` checking logic
2. For historical reasons, the function predicates are added one by one; this just refactors the processing to make the logic for function predicate handling clearer
2020-07-02 22:32:16 +08:00
dc603de4bd [Doris On ES][Bug-fix] Solve the problem of time format processing (#3941)
https://github.com/apache/incubator-doris/issues/3936
Doris On ES can obtain field values from `_source` or `docvalues`:
1. From `_source`, you get the original value as you put it; when ES builds the index and docvalues, a `date` field is converted to milliseconds.
2. From `docvalues`, before 6.4 you get a `millisecond timestamp` value; from 6.4 (inclusive) onwards you get the formatted `date` value, e.g. 2020-06-18T12:10:30.000Z. ES (>=6.4) also provides a `format` parameter for `docvalue` field requests; support for this is coming soon for Doris On ES.

After this PR is merged into Doris, Doris On ES only correctly supports `millisecond` timestamps and string-formatted dates; if you provide a `seconds` timestamp, Doris On ES will process it incorrectly (dividing it by 1000 internally).

ES mapping:

```
{
   "timestamp_test": {
      "mappings": {
         "doc": {
            "properties": {
               "k1": {
                  "type": "date",
                  "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
               }
            }
         }
      }
   }
}
```

ES documents:

```
       {
            "_index": "timestamp_test",
            "_type": "doc",
            "_id": "AXLbzdJY516Vuc7SL51m",
            "_score": 1,
            "_source": {
               "k1": "2020-6-25"
            }
         },
         {
            "_index": "timestamp_test",
            "_type": "doc",
            "_id": "AXLbzddn516Vuc7SL51n",
            "_score": 1,
            "_source": {
               "k1": 1592816393000  ->  2020/6/22 16:59:53
            }
         }
```
Doris Table:

```
CREATE EXTERNAL TABLE `timestamp_source` (
  `k1` date NULL COMMENT ""
) ENGINE=ELASTICSEARCH
```


### enable_docvalue_scan = false

**For ES 5.5**:
```
mysql> select k1 from timestamp_source;
+------------+
| k1         |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```

**For ES 6.5 or above**:

```
mysql> select * from timestamp_source;
+------------+
| k1         |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```

###  enable_docvalue_scan = true 

**For ES 5.5**:

```
mysql> select k1 from timestamp_dv; 
+------------+
| k1         |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```

**For ES 6.5 or above**:

```
mysql> select * from timestamp_dv; 
+------------+
| k1         |
+------------+
| 2020-06-25 |
| 2020-06-22 |
+------------+
```
2020-06-28 09:21:22 +08:00
be5fc76557 [Doris On ES][Optimization] Ignore _total node for efficiency (#3932)
Prior to this PR, Doris On ES had merged another PR, https://github.com/apache/incubator-doris/pull/3513, which misused the `total` node. After Doris On ES introduced `terminate_after` (https://github.com/apache/incubator-doris/issues/2576), the `total` document count is no longer computed, so relying on the `total` field would be dangerous. Instead we rely on the actual document count obtained by counting the `inner hits` node, which is what it is meant for. So we simply remove all `total` parsing and related logic from Doris On ES; this may improve performance slightly by ignoring and skipping the `total` JSON node.
2020-06-26 17:42:33 +08:00
355df127b7 [Doris On ES] Support fetch _id field from ES (#3900)
More information can be found: https://github.com/apache/incubator-doris/issues/3901

The created ES external table must contain an `_id` column if you want to fetch the Elasticsearch document `_id`.
```
CREATE EXTERNAL TABLE `doe_id2` (
  `_id` varchar COMMENT "",
   `city`  varchar COMMENT ""
) ENGINE=ELASTICSEARCH
PROPERTIES (
"hosts" = "http://10.74.167.16:8200",
"user" = "root",
"password" = "root",
"index" = "doe",
"type" = "doc",
"version" = "6.5.3",
"enable_docvalue_scan" = "true",
"transport" = "http"
);
```

Query:

```
mysql> select * from doe_id2 limit 10;
+----------------------+------+
| _id                  | city |
+----------------------+------+
| iRHNc3IB8XwmcbhB7lEB | gz   |
| jBHNc3IB8XwmcbhB71Ef | gz   |
| jRHNc3IB8XwmcbhB71GI | gz   |
| jhHNc3IB8XwmcbhB71Hx | gz   |
| ThHNc3IB8XwmcbhBkFHB | sh   |
| TxHNc3IB8XwmcbhBkFH9 | sh   |
| URHNc3IB8XwmcbhBklFA | sh   |
| ahHNc3IB8XwmcbhBxlFq | gz   |
| axHNc3IB8XwmcbhBxlHw | gz   |
| bxHNc3IB8XwmcbhByVFO | gz   |
+----------------------+------+
```

NOTICE:
This changes the column name format to support column names starting with "_".
2020-06-19 17:07:07 +08:00
8c608bbad5 [Doris On ES] Skip function_call expr when process predicate (#3813)
[Doris On ES] Skip function_call expr when process predicate

Fixed #3801
Do not push down function_call exprs such as split_xxx when processing predicates; Doris BE is responsible for processing these predicates

All rows in table:

```
+------+------+------+------------+------------+
| k1   | k2   | k3   | UpdateTime | ArriveTime |
+------+------+------+------------+------------+
| NULL | NULL | kkk1 |  123456789 |       NULL |
| kkk1 | NULL | NULL |  123456789 |       NULL |
| NULL | kkk2 | NULL |  123456789 |       NULL |
+------+------+------+------------+------------+
```

The following predicates cannot be pushed down to ES.

```
SQL 1:
mysql> select * from (select split_part(k1, "1", 1) as kk from case_replay_for_milimin) t where t.kk is not null;
+------+
| kk   |
+------+
| kkk  |
+------+
1 row in set (0.02 sec)

SQL 2:
mysql> select * from (select split_part(k1, "1", 1) as kk from case_replay_for_milimin) t where t.kk > 'a';      
+------+
| kk   |
+------+
| kkk  |
+------+

SQL 3:
mysql> select * from (select split_part(k1, "1", 1) as kk from case_replay_for_milimin) t where t.kk > '2';
+------+
| kk   |
+------+
| kkk  |
+------+
1 row in set (0.03 sec)
```
2020-06-10 11:22:53 +08:00
484e7de3c5 [Doris On ES] fix bug for sparse docvalue context and remove the mistaken usage of total (#3751)
The other PR, https://github.com/apache/incubator-doris/pull/3513 (https://github.com/apache/incubator-doris/issues/3479), tried to resolve the `inner hits node is not an array` error that occurs when a query (batch-size) runs against a new segment without this field. Since its filter_path only takes `hits.hits.fields` and `hits.hits._source` into account, a null inner hits node can appear:
```
{
   "_scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAHaUWY1ExUVd0ZWlRY2",
   "hits": {
      "total": 1
   }
}
```

Unfortunately, that PR introduced another serious problem: inconsistent results across different batch_size values, because of its misuse of `total`.

To avoid these two problems, we simply add `hits.hits._score` to the filter_path when `docvalue_mode` is true; `_score` is always `null`, and it populates the inner hits node:

```
{
   "_scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAHaUWY1ExUVd0ZWlRY2",
   "hits": {
      "total": 1,
      "hits": [
         {
            "_score": null
         }
      ]
   }
}
```

related issue: https://github.com/apache/incubator-doris/issues/3752
2020-06-04 16:31:18 +08:00
8018b1c348 [Doris on ES]Fix bug of like not translate correctly (#3602)
Why this case happened:
In the current implementation, translation into DSL happens only if the wildcard is not the first character.
Thus, when the SQL is written like '%abc', the translation does not run.

How it is fixed:

Now, translation is triggered by the characters '?' or '*':
if it is the first character, translate directly;
otherwise, check whether the preceding character is escaped to determine whether to translate.
2020-05-19 17:06:46 +08:00
5a57ecca15 [Doris On ES]fix bug of query failed in doc_value_mode when fields have none value (#3513)
#3479 

Here I try to explain the cause of the problem and how to fix it.

**The Cause of The problem**
Take the case in issue(#3479 ) as an example:
The general results are as follows:
```
GET table/_doc/_search
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}

{
  "took": 6,
  "timed_out": false,
  "_shards": {
    ……
  },
  "hits": {
    "total": 3,
    "max_score": null,
    "hits": [
      {
        "_index": "table",
        "_score": null,
        "sort": [
          0
        ]
      },
      {
        "_index": "table",
        "_score": null,
        "fields": {
          "k1": [
            "kkk1"
          ]
        },
        "sort": [
          0
        ]
      },
      {
        "_index": "table",
        "_score": null,
        "sort": [
          0
        ]
      }
    ]
  }
}
```

But in Doris on ES, the BE fetches data from all shards in parallel and uses `filter_path` to reduce the network cost. The process is as follows:
```
GET table/_doc/_search?preference=_shards:1&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}

{
  "hits": {
    "total": 0
  }
}

GET table/_doc/_search?preference=_shards:2&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}
{
  "hits": {
    "total": 1
  }
}

GET table/_doc/_search?preference=_shards:3&filter_path=_scroll_id,hits.hits._source,hits.total,_id,hits.hits._source.fields,hits.hits.fields
{"query":{"match_all":{}},"stored_fields":"_none_","docvalue_fields":["k1"],"sort":["_doc"],"size":100}
{
  "hits": {
    "total": 1,
    "hits": [
      {
        "fields": {
          "k1": [
            "kkk1"
          ]
        }
      }
    ]
  }
}
```
*The scan worker on the BE that processes the result of shard2 will fail.*

**The reasons are as follows:**
1. "filter_path" causes the hits.hits object not exist.  
2. In the current implementation, if there are some data rows(total > 0), the hits.hits. object must be an array

**How To Fix it**

Two methods:
1. Modify "filter_path" to contain the hits.
Pros: the fix is very simple.
Cons: more network cost.
2. Deal with the case where fields are missing in a batch.
Pros: no loss of performance.
Cons: the code is more complex.

Performance comes first, so I use Method 2.

**Design**
1. Add a variable "_doc_value_mode" to the class "EsScrollParser" to indicate whether the data processed by this parser is in doc_value mode or not.
2. "_doc_value_mode" is passed from ESScollReader <- ESScanner <- ScrollQueryBuilder::build(), which determines whether the DSL enables doc_value mode.
3. When hits.hits of the response from ES is empty and total > 0, we know there are data rows but the corresponding fields do not exist. EsScrollParser uses "_doc_value_mode" and _total to construct _total rows whose fields are assigned 'NULL'.
2020-05-11 15:34:12 +08:00
4a7a88ede1 [LSAN] Fix some memory leak detected by LSAN (#3326) 2020-04-22 22:59:44 +08:00
b60aabda11 [Doris On ES] Pushdown some castexpr predicate to ES (#3351)
Process castexpr predicates such as k (float) > 2.0 or k (int) > 3.2: Doris On ES should skip the Doris-native cast transformation applied to every row's column value and instead push this `cast semantic` down to Elasticsearch.

I believe that in this `predicate` situation this decreases the amount of data transmitted.

k1 is float:

````
k1 >= 5
````

push-down filter:

```
{"range":{"k1":{"gte":"5.000000"}}}
```
k2 is int :

```
k2 > 3.2
```

push-down filter:

```
{"range":{"k2":{"gte":"3.2"}}}
```
2020-04-21 08:34:20 +08:00
688927918c [Doris on ES] Fix bug: when Doris and ES type not match (#3315) 2020-04-14 20:15:13 +08:00
a467c6f81f [ES Connector] Add field context for string field keyword type (#3305)
This PR is just a transitional approach; it would be better to move the predicate transformation out of Doris BE, so that Doris BE is responsible only for fetching data from ES.

Add an `enable_keyword_sniff` configuration item for creating an external Elasticsearch table. It defaults to true, sniffs for the `keyword` type on `text` (analyzed) fields, and returns the `json_path` that substitutes for the original column name.

```
CREATE EXTERNAL TABLE `test` (
  `k1` varchar(20) COMMENT "",
  `create_time` datetime COMMENT ""
) ENGINE=ELASTICSEARCH
PROPERTIES (
"hosts" = "http://10.74.167.16:8200",
"user" = "root",
"password" = "root",
"index" = "test",
"type" = "doc",
"enable_keyword_sniff" = "true"
);
```
Note: `enable_keyword_sniff` defaults to "true".

run this SQL:

```
select * from test where k1 = "wu yun feng"
```
 Output predicate DSL:

```
{"term":{"k1.keyword":"wu yun feng"}}
```
Also in this PR, I remove the Elasticsearch version detection logic, since it is useless for now; it may be needed again in the future.
2020-04-13 23:07:33 +08:00
614a76beea [Doris on ES] Support compound_and predicate push down to Elasticsearch (#3277)
Relate Issue: https://github.com/apache/incubator-doris/issues/3248


SQL:

```
select * from test where (k2 = 6 and k3 = 1) or (k2 = 2 and k3 =3 and k4 = 'beijing');
```

Output filter:

```
((#k2:[6 TO 6] #k3:[1 TO 1]) (#(#k2:[2 TO 2] #k3:[3 TO 3]) #k4:beijing))~1
```

SQL:

```
select * from test where (k2 = 6 or k3 = 7) or (k2 = 2 and k3 =3 and (k4 = 'beijing' or k4 = 'zhaochun'));
```
Output filter:

```
(k2:[6 TO 6] k3:[7 TO 7] (#(#k2:[2 TO 2] #k3:[3 TO 3]) #((k4:beijing k4:zhaochun)~1)))~1
```

SQL:

```
select * from test where (k2 = 6 or k3 = 7) or (k2 = 2 and abs(k3) =3 and (k4 = 'beijing' or k4 = 'zhaochun'));
```

Output filter (`abs` cannot be pushed down to ES, so Doris on ES does not process this scenario):

```
match_all
```
2020-04-08 21:09:39 +08:00
d4c1938b5c Open datetime min value limit (#3158)
The min_value of datetime in olap/type.h is 0000-01-01 00:00:00, so we don't need to restrict the datetime minimum in tablet_sink.
2020-03-24 10:52:57 +08:00
fd492e3b6f [Doris on ES] Support escape character (#2865) 2020-02-13 11:32:48 +08:00
b35e8153c0 [Doris on Es] Fix lte and gte error expression (#2851)
LE should be LTE
GE should be GTE
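A hypothetical illustration of the corrected expressions (table and column names are illustrative; the filter shape follows the range-filter examples quoted elsewhere in this log, so treat it as an assumption):

```
-- k1 <= 5 on an ES external table should now be pushed down as a range
-- filter using "lte" (and ">=" as "gte"), e.g. {"range":{"k1":{"lte":"5"}}}
select * from es_table where k1 <= 5;
```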
2020-02-06 20:52:14 +08:00
e41aef54f2 [Doris-On-Elasticsearch] Accelerate first scroll search (#2575)
Add terminate_after for the first scroll to avoid decompressing all postings lists
2019-12-27 14:05:21 +08:00
5a3f71dd6b Push limit to Elasticsearch external table (#2400) 2019-12-07 21:13:44 +08:00
1532282942 Support push down is null predicate for Doris-On-ES (#2378) 2019-12-04 22:56:22 +08:00
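A sketch of what the `is null` pushdown above enables (hypothetical table and column; the exact DSL Doris emits is an assumption, shown as the usual Elasticsearch way of testing for a missing field):

```
-- An IS NULL predicate on an ES external table can now be pushed down,
-- presumably as something like {"bool":{"must_not":{"exists":{"field":"k1"}}}}.
select * from es_table where k1 is null;
```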
0f00febd21 Optimize Doris On Elasticsearch performance (#2237)
Pure DocValue optimization for doris-on-es

Future todo:
Today, for every tuple scan we check whether pure_docvalue is enabled. This is not reasonable; we should check whether pure_docvalue is enabled once for the whole scan, outside the per-tuple loop. I will add this todo in the future.
2019-12-04 12:57:45 +08:00
a465b38874 Enhance doris on es error message (#2297)
Enhance Doris on ES error messages and fix some field data transformation errors.
For varchar/char types, Elasticsearch users sometimes post non-string values to the Elasticsearch index. Because the value is read from _source, we cannot handle every JSON type, so we just convert the value to its original string representation. This may be a bit tricky, but it lets us work around the issue.
2019-11-26 18:32:25 +08:00
0f94b685ab Add ES7.x compatibility for doris on es (#2033) 2019-10-22 17:23:33 +08:00
1c229fbd92 Fix es_scan_reader_test in debug mode (#1905) 2019-09-28 00:02:30 +08:00
8034d83e20 Add scroll keepalive and http timeout configuration (#1731) 2019-09-02 19:04:30 +08:00
c6dfe83b6d Add particular log info for doris on es (#1711) 2019-08-27 22:16:28 +08:00
57a1a718c7 print logs when parse scroll result failed (#1661) 2019-08-16 17:48:23 +08:00
69af50aa8c Time zone related BE function (#1598)
Details can be found in time-zone.md document
2019-08-12 20:57:59 +08:00
9d03ba236b Uniform Status (#1317) 2019-06-14 23:38:31 +08:00
9c82d41981 Support Doris query ES by HTTP way (#925) 2019-04-28 17:14:44 +08:00