Commit Graph

236 Commits

Author SHA1 Message Date
b4663062da [feature-wip](parquet-reader) bug fix, parquet footer buffer is too small when the file contains many columns (#12477)
Reading a parquet file with many columns (>1600) failed:

mysql> select int_col from types_sf100_r100w limit 5;
ERROR 1105 (HY000): errCode = 2, detailMessage = Couldn't deserialize thrift msg:
TProtocolException: Invalid data
`parse_thrift_footer` uses a fixed-length buffer (64KB) to read the parquet footer, but the metadata of a parquet file with 1600 columns can exceed 5MB.

Therefore, the buffer must be sized according to the actual footer length, as sketched below.
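A minimal sketch of that sizing logic, assuming a hypothetical `FileReader` interface rather than the real reader API: a parquet file ends with the footer, a 4-byte little-endian footer length, and the `PAR1` magic, so the length can be read first and the buffer allocated to match.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical file abstraction for this sketch only.
struct FileReader {
    virtual void read_at(size_t offset, uint8_t* out, size_t len) = 0;
    virtual ~FileReader() = default;
};

std::vector<uint8_t> read_footer(FileReader* file, size_t file_size) {
    // Tail layout: <footer> <4-byte LE footer length> "PAR1"
    uint8_t tail[8];
    file->read_at(file_size - sizeof(tail), tail, sizeof(tail));
    uint32_t footer_len;
    std::memcpy(&footer_len, tail, sizeof(footer_len));
    // Allocate exactly what the footer needs; with >1600 columns the
    // metadata can exceed 5MB, far beyond a fixed 64KB buffer.
    std::vector<uint8_t> buf(footer_len);
    file->read_at(file_size - sizeof(tail) - footer_len, buf.data(), footer_len);
    return buf;
}
```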
2022-09-09 09:12:34 +08:00
2ccbbb5392 [fix](stream load) Fix wrong conversion of null value when vstream load json format (#12460) 2022-09-08 16:48:35 +08:00
14221adbbd [fix](agg) crash caused by failure of prepare (#12437) 2022-09-08 15:03:45 +08:00
dd2f834c79 [feature-wip](parquet-reader) bug fix, create compression codec before parsing dictionary (#12422)
## Fix five bugs:
1. Parquet dictionary data may be compressed, but `ColumnChunkReader` tries to parse dictionary data before creating the compression codec, causing unexpected data errors (see the sketch after this list).
2. `FE` doesn't resolve the array type.
3. `ParquetFileHdfsScanner` doesn't fill partition values when the table is partitioned.
4. `ParquetFileHdfsScanner` sets `_scanner_eof = true` when a scan range is empty, which ends the whole scanner and loses data.
5. A typographical error in `PageReader`.
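A sketch of the bug-1 fix with illustrative types (not Doris's actual codec classes): the decompression codec must exist before the dictionary page is decoded, because dictionary pages are compressed like any other page.

```cpp
#include <cstdint>
#include <vector>

// Illustrative decompressor interface, not Doris's actual codec class.
struct Codec {
    virtual std::vector<uint8_t> decompress(const std::vector<uint8_t>& in) = 0;
    virtual ~Codec() = default;
};

// Fixed order: the codec is created first (step 1), and only then is the
// dictionary page decompressed and parsed (step 2). Parsing the raw page
// bytes before decompression yields garbage dictionary entries.
std::vector<uint8_t> load_dictionary(Codec& codec,
                                     const std::vector<uint8_t>& raw_dict_page) {
    return codec.decompress(raw_dict_page);
}
```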
2022-09-08 09:54:25 +08:00
86e347f3bb [Bug](doe) fix closing scanner twice (#12408) 2022-09-07 22:45:30 +08:00
449d0c219f [Improvement](sort) Accumulate blocks to do partial sort (#12336) 2022-09-07 10:34:28 +08:00
42bdde8750 [Feature](Vectorized) support jdbc scan node (#12010) 2022-09-07 10:29:41 +08:00
3485dfa927 [chore](profile) add some counters in aggregation & sender (#12385) 2022-09-07 10:09:05 +08:00
893567628e [fix](exec-node) fix nullptr of runtime state (#12395)
Remove the default-nullptr runtime state, which is very error-prone.
2022-09-07 08:46:42 +08:00
4a55b504c0 [feature-wip](parquet-reader) bug fix, get the correct group reader (#12294)
Fix the problem that the lineitem table of TPC-H cannot be read, and an error when allocating memory.
Co-authored-by: jinzhe <jinzhe@selectdb.com>
2022-09-06 13:59:35 +08:00
b8e38b9167 [Bug](load) block call clear_column_data may have ref not equal 1 (#12350) 2022-09-05 20:40:40 +08:00
e175a7ed63 [fix](memtracker) Fix the exceeded limit of the first query execution (#12332)
In some cases, when a user executes a query for the first time, a mem-limit-exceeded error is reported, and the query succeeds only on the second execution.

This is because on the first execution the memory consumed by filling the page cache and other caches is recorded in the query mem tracker, in an attempt to unify the behavior across multiple queries.

As a temporary solution, remove the scanner-thread hook. ClickBench Q13 results:

Before removing the scanner thread hook:
- Page cache enabled: first query 3G, tracker 3G; second query 900M, tracker 900M.
- Page cache disabled: first query 1.9G, tracker 1.9G; second query 900M, tracker 900M.

After removing the scanner thread hook and fixing the MemTrackerLimiter::cache_consume_local bug:
- Page cache enabled: first query 2916M, tracker 1147M; second query 979M, tracker 1144M.
- Page cache disabled: first query 1809M, tracker 1147M; second query 975M, tracker 1145M.
TODO: a better solution is to track storage-related memory separately in the scanner thread; otherwise it is impossible to know where process memory grows during a query.
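To illustrate why the hook over-counts, here is a hypothetical thread-local tracker hook (not Doris's actual MemTracker code): while a scanner thread runs with the query tracker attached, cache fills that outlive the query are charged to it.

```cpp
#include <atomic>
#include <cstdint>

struct MemTracker {
    std::atomic<int64_t> consumed{0};
    void consume(int64_t bytes) { consumed += bytes; }
};

// The "hook": a thread-local pointer consulted on every allocation.
thread_local MemTracker* g_thread_tracker = nullptr;

// RAII attach for the duration of a scanner task.
struct ScopedAttach {
    MemTracker* saved;
    explicit ScopedAttach(MemTracker* t) : saved(g_thread_tracker) {
        g_thread_tracker = t;
    }
    ~ScopedAttach() { g_thread_tracker = saved; }
};

void on_allocation(int64_t bytes) {
    // Every allocation on the thread is charged to the attached tracker,
    // including page-cache memory that outlives the query. That is why the
    // first execution appears to consume far more than later ones.
    if (g_thread_tracker != nullptr) g_thread_tracker->consume(bytes);
}
```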
2022-09-05 19:22:46 +08:00
202ad5c659 [feature-wip](parquet-reader) bug fix, the number of rows is different among columns in a block (#12228)
1. `ExprContext` is deleted in `ParquetReader::close()` without having been closed,
so the `DCHECK` in `~ExprContext()` fails. The lifetime of `ExprContext` is managed by the scan node,
so we should not delete its pointer in `ParquetReader::close()`.
2. `RowGroupReader::next_batch` updates `_read_rows` in every column loop
and does not ensure that the number of rows in every column is equal.
3. The skipped row ranges are stack variables that are released after calling `ArrayColumnReader::read_column_data`, so we must copy them out (see the sketch below).
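A sketch of the bug-3 fix with illustrative names: the reader keeps an owned copy of the skipped row ranges instead of a view into the caller's stack frame.

```cpp
#include <cstdint>
#include <vector>

struct RowRange {
    int64_t first_row;
    int64_t last_row;
};

struct ArrayColumnReader {
    std::vector<RowRange> _skipped_ranges; // owned copy, not a dangling view

    void read_column_data(const std::vector<RowRange>& skipped) {
        _skipped_ranges = skipped; // copy out of the caller's stack frame
        // ... decode, consulting _skipped_ranges even after the caller returns
    }
};
```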
2022-09-02 09:50:25 +08:00
3ce305134a [fix](scan) fix potential wrong cancel when sql has limit (#12224) 2022-09-01 19:11:40 +08:00
8c8078ad28 [fix](projections) get error row_descriptor when have projections on ExecNode (#12232)
When ExecNode's projections are not empty, it uses the output row descriptor to initialize the block before doing the projection. But we should use the original row descriptor. This PR fixes it.
2022-09-01 10:48:10 +08:00
ec4863b63a [feature-wip](new-scan) Add new file scan node (#12048)
Related pr: #11582
This is the new file scan node and scanner for external hms catalog.
2022-09-01 10:01:20 +08:00
1cc9eeeb1a [feature-wip](parquet-reader) read and generate array column (#12166)
Read and generate parquet array column.

When D=1 and R=0, the row is an empty array. An empty array is not a null value, so the NullMap entry for this row is false;
the offsets for this row are [offset_start, offset_end) with `offset_start == offset_end`,
where offset_end is the start offset of the next row, so no value lands in the nested primitive column.

When D=0 and R=0, the row is a null array, and the NullMap entry for this row is true.
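A toy decoder for the two cases above (level thresholds are schema-dependent; here D=0 means a null array, D=1 an empty array, D>=2 an actual value, and R=0 marks the start of a new top-level row):

```cpp
#include <cstdint>
#include <vector>

void build_array_offsets(const std::vector<int>& def_levels,
                         const std::vector<int>& rep_levels,
                         std::vector<int64_t>* offsets,    // start offset per row, plus a final end offset
                         std::vector<uint8_t>* null_map) { // 1 = null array row
    int64_t value_count = 0;
    for (size_t i = 0; i < def_levels.size(); ++i) {
        if (rep_levels[i] == 0) {            // a new top-level row begins
            offsets->push_back(value_count); // offset_start of this row
            null_map->push_back(def_levels[i] == 0 ? 1 : 0); // D=0: null array
        }
        if (def_levels[i] >= 2) ++value_count; // D=1 adds no value: empty array
    }
    offsets->push_back(value_count); // end offset of the last row
    // Row i spans [offsets[i], offsets[i+1]); an empty or null array has
    // offset_start == offset_end, exactly as described above.
}
```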
2022-08-31 17:08:12 +08:00
573e5476dd [Opt](load) Speed up the vectorized load (#12146)
2022-08-31 16:23:36 +08:00
9a74ad1702 [feature](Nereids)add the ability of projection on each ExecNode and add column prune on OlapScan (#11842)
We added the logical project earlier, but to actually finish the pruning and reduce data IO, we need related support in the translator and BE.
This PR:
- add projections on each ExecNode in BE
- translate PhysicalProject into projections on PlanNode in FE
- do column prune on ScanNode in FE

Co-authored-by: HappenLee <happenlee@hotmail.com>
2022-08-30 16:17:10 +08:00
a16cf0e2c8 [feature-wip](scan) add profile for new olap scan node (#12042)
Copy most of the profile counters from VOlapScanNode and VOlapScanner to NewOlapScanNode and NewOlapScanner.
Fix some blocking bugs in the new scan framework.
TODO:

MemTracker
OpenTelemetry span

The new framework is still disabled by default, so it will not affect other features.
2022-08-30 10:55:48 +08:00
2f192019d3 [bugfix](delete handler) delete predicate is merged and schema could not be found, causing core dump (#12161)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2022-08-30 09:18:21 +08:00
dec576a991 [feature-wip](parquet-reader) generate null values and NullMap for parquet column (#12115)
Generate null values and NullMap for the nullable column by analyzing the definition levels.
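A minimal sketch of the rule, with illustrative names rather than Doris's exact code: for a leaf column, a definition level below the column's maximum means the value at that position is null.

```cpp
#include <cstdint>
#include <vector>

std::vector<uint8_t> build_null_map(const std::vector<int>& def_levels,
                                    int max_def_level) {
    std::vector<uint8_t> null_map(def_levels.size());
    for (size_t i = 0; i < def_levels.size(); ++i) {
        // A definition level below the maximum means the value is null.
        null_map[i] = def_levels[i] < max_def_level ? 1 : 0;
    }
    return null_map;
}
```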
2022-08-29 09:30:32 +08:00
0b5bb565a7 [feature-wip](parquet-reader) parquet dictionary decoder (#11981)
Parse parquet data with dictionary encoding.

Using the PLAIN_DICTIONARY enum value is deprecated in the Parquet 2.0 specification.
Prefer using RLE_DICTIONARY in a data page and PLAIN in a dictionary page for Parquet 2.0+ files.
refer: https://github.com/apache/parquet-format/blob/master/Encodings.md
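A simplified sketch of the decode step, assuming the RLE/bit-packed indices have already been unpacked into plain int32 values:

```cpp
#include <cstdint>
#include <string>
#include <vector>

std::vector<std::string> decode_with_dictionary(
        const std::vector<std::string>& dict,   // values from the dictionary page
        const std::vector<int32_t>& indices) {  // per-value dictionary indices
    std::vector<std::string> out;
    out.reserve(indices.size());
    for (int32_t idx : indices) {
        out.push_back(dict.at(idx)); // .at() guards against corrupt indices
    }
    return out;
}
```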
2022-08-26 19:24:37 +08:00
f3f17eb222 [Bugfix](load) fix BE coredump when parsing a malformed json file using simdjson (#12062)
2022-08-26 18:01:19 +08:00
ccff3f5711 [bugfix](light weight schema change) support delete condition in schema change (#11869)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2022-08-26 11:45:55 +08:00
0f4a1e811b [Enhancement](table_function) table function node enhancement (#12038)
* avoid copies for the non-vectorized table function node as well

* fix table function node output slot calculation when a lateral view involves a subquery

Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
2022-08-26 10:37:15 +08:00
0c16740f5c [feature-wip](parquet-reader) parquet scanner can read data (#11970)
Co-authored-by: jinzhe <jinzhe@selectdb.com>
2022-08-26 09:43:46 +08:00
17b809210a [Bug](runtime filter) fix bug for late-arrival runtime filters (#12049) 2022-08-26 09:13:10 +08:00
73a3471fbd [minor](conjuncts) remove row-based conjuncts from vectorized engine (#12053) 2022-08-25 10:13:20 +08:00
d06edd4b8b [minor](runtime-filter) add DCHECK for runtimefilter bug (#11996)
Not a fix; just adds debug info to help find the root cause of #11995.
2022-08-24 07:53:30 +08:00
05da3d947f [feature-wip](new-scan) add scanner scheduling framework (#11582)
There are currently many types of ScanNodes in Doris, and most of their logic is the same, including:

Runtime filter
Predicate pushdown
Scanner generation and scheduling
So I intend to unify the common logic of all ScanNodes:
different data sources only need to implement their own Scanners for data access.
Future scan optimizations can then be applied to all data sources,
while code duplication is reduced.

This PR mainly adds 4 new classes:

VScanner
The parent class of all Scanners. Subclasses inherit it to implement specific data access methods.

VScanNode
The unified ScanNode, responsible for common logic including RuntimeFilter, predicate pushdown, and Scanner generation and scheduling.

ScannerContext
ScannerContext is responsible for recording the execution status
of a group of Scanners corresponding to a ScanNode.
Including how many scanners are being scheduled, and maintaining
a producer-consumer blocks queue between scanners and scan nodes.

ScannerContext is also the scheduling unit of ScannerScheduler.
ScannerScheduler schedules a ScannerContext at a time,
and submits the Scanners to the scanner thread pool for data scanning.

ScannerScheduler
Solely responsible for all Scanner scheduling tasks.

Test:
This work is still in progress and is disabled by default.
I tested it with jmeter at 50 concurrency, but currently the scanner just returns without reading data;
the QPS can reach about 9000.
It cannot be compared with the original implementation because no data is read for now. I will test it when the new olap scanner is ready.
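For orientation, a condensed sketch of the producer-consumer blocks queue that ScannerContext maintains between scanner threads and the scan node (types and method names are illustrative; the real class also tracks scheduled-scanner counts, errors, and cancellation):

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Block {}; // stand-in for vectorized::Block

class ScannerContext {
public:
    // Producer side: scanner threads append finished blocks.
    void push_block(std::unique_ptr<Block> b) {
        std::lock_guard<std::mutex> l(_mu);
        _queue.push(std::move(b));
        _cv.notify_one();
    }

    // Consumer side: the scan node blocks until a block (or eof) arrives.
    std::unique_ptr<Block> get_block() {
        std::unique_lock<std::mutex> l(_mu);
        _cv.wait(l, [&] { return !_queue.empty() || _eof; });
        if (_queue.empty()) return nullptr; // eof, nothing left to consume
        auto b = std::move(_queue.front());
        _queue.pop();
        return b;
    }

    void set_eof() {
        std::lock_guard<std::mutex> l(_mu);
        _eof = true;
        _cv.notify_all();
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    std::queue<std::unique_ptr<Block>> _queue;
    bool _eof = false;
};
```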
Co-authored-by: morningman <morningman@apache.org>
2022-08-23 08:45:18 +08:00
6d925054de [feature-wip](parquet-reader) decode parquet time & datetime & decimal (#11845)
1. Spark can set the timestamp precision with the following configuration:
spark.sql.parquet.outputTimestampType = INT96(NANOS), TIMESTAMP_MICROS, or TIMESTAMP_MILLIS.
DATETIME V1 keeps only second precision; DATETIME V2 keeps microsecond precision.
2. With DECIMAL V2, the BE saves the value as decimal128 with a fixed (precision=27, scale=9); DECIMAL V3 can maintain the declared precision of the decimal.
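For reference, a sketch of decoding Spark's INT96 layout (8 bytes of little-endian nanoseconds-of-day followed by a 4-byte little-endian Julian day; 2440588 is the Julian day of the Unix epoch), assuming a little-endian host:

```cpp
#include <cstdint>
#include <cstring>

int64_t int96_to_micros(const uint8_t bytes[12]) {
    int64_t nanos_of_day;
    uint32_t julian_day;
    std::memcpy(&nanos_of_day, bytes, 8);
    std::memcpy(&julian_day, bytes + 8, 4);
    int64_t days_since_epoch = static_cast<int64_t>(julian_day) - 2440588;
    // DATETIME V2 keeps microseconds, so divide the nanos by 1000;
    // DATETIME V1 would further truncate to seconds.
    return days_since_epoch * 86400LL * 1000000LL + nanos_of_day / 1000;
}
```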
2022-08-22 10:15:35 +08:00
dc8f64b3e3 [improvement](agg) Serialize the fixed-length aggregation results with corresponding columns instead of ColumnString (#11801) 2022-08-22 10:12:06 +08:00
b1fd701493 [fix](memtracker) Improve memory tracking accuracy for exec nodes (#11947) 2022-08-22 08:56:05 +08:00
5eb5444476 [fix](memtracker) Remove useless memory exceed check #11939 2022-08-22 08:40:19 +08:00
Pxl
64dc3b360f [Bug](function) fix dcheck fail on close vexpr ctx (#11908) 2022-08-19 19:11:10 +08:00
124b4f7694 [feature-wip](parquet-reader) row group reader ut finish (#11887)
Co-authored-by: jinzhe <jinzhe@selectdb.com>
2022-08-18 17:18:14 +08:00
1da39771e3 [Bug](runtime filter) Fix bug for runtime filter in concurrent scanners (#11848) 2022-08-18 14:47:08 +08:00
Pxl
cac317430f [Bug](aggregation) fix core dump on 2nd phase aggregate (#11843) 2022-08-18 14:42:34 +08:00
e1a1a04c2f [Enhancement](Doe) Be query es use fe generate dsl. (#11840) 2022-08-18 10:31:17 +08:00
cfb90b39c7 (vec-stream-load-json) simdjson throwing an exception leads to core dump (#11880)
When config::enable_simdjson_parser=true in vectorized stream load, an invalid json input such as '{ "a', or an input whose fields are all null such as '{}', may make the simdjson library throw an unhandled exception like `Objects and arrays can only be iterated when they are first encountered`, causing a core dump. We should handle these cases.
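A minimal sketch of the defensive pattern (column writing elided): simdjson's on-demand API throws simdjson::simdjson_error on such inputs, so the parse is fenced and turned into a per-row load error instead of crashing the process.

```cpp
#include <simdjson.h>
#include <string>
#include <string_view>

// Returns false on malformed input instead of letting the exception escape.
bool parse_row(const std::string& input) {
    try {
        simdjson::ondemand::parser parser;
        simdjson::padded_string json(input);
        simdjson::ondemand::document doc = parser.iterate(json);
        for (auto field : doc.get_object()) {
            std::string_view key = field.unescaped_key();
            (void)key; // hand the field off to the column writers here
        }
        return true;
    } catch (const simdjson::simdjson_error&) {
        return false; // e.g. '{ "a' or re-iterated objects land here
    }
}
```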

Signed-off-by: eldenmoon <15605149486@163.com>
2022-08-18 10:27:34 +08:00
50ef6e35be [enhancement](RowDescriptor) enhance tuple_idx check during runtime (#11835) 2022-08-17 17:50:48 +08:00
3a49156e30 [performance](vectorization) optimize In Expr (#11826)
Co-authored-by: Wang Bo <wangbo36@meituan.com>
2022-08-17 10:46:37 +08:00
f39f57636b [feature-wip](parquet-reader) update column read model and add page index (#11601) 2022-08-16 15:04:07 +08:00
01383c3217 [Enhancement](stream-load-json) using simdjson to parse json (#11665)
Currently we use rapidjson to parse json documents. It is fast, but not fast enough compared with simdjson. This PR applies three optimizations:
1. simdjson has a parsing front-end called simdjson::ondemand that parses json lazily as fields are accessed and can strip a field's token out of the original document. This removes a string-copy cost: we currently convert everything to a string literal in _write_data_to_column via sprintf (a hotspot on the flame graph), whereas simdjson::to_json_string strips the token as a std::string_view, which is exactly what we need.
2. In _set_column_value we can iterate the json document with `for (auto field : object_val) {...}`, which is much faster than looking up each field by name like objectValue.FindMember("k1").
3. simdjson's at_pointer interface can fetch a json field directly from the original document.
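A hedged sketch of the three techniques with the simdjson on-demand API (iterate, unescaped_key, to_json_string, at_pointer); error handling is elided, and `/k1` reuses the example key from above:

```cpp
#include <simdjson.h>
#include <string_view>

void demo(simdjson::padded_string& json) {
    simdjson::ondemand::parser parser;
    simdjson::ondemand::document doc = parser.iterate(json);

    // (2) Iterate fields in document order instead of FindMember-style lookups.
    for (auto field : doc.get_object()) {
        std::string_view key = field.unescaped_key();
        // (1) to_json_string returns a std::string_view into the original
        //     buffer, avoiding the sprintf/string-copy hotspot.
        std::string_view raw = simdjson::to_json_string(field.value());
        (void)key;
        (void)raw;
    }

    // (3) at_pointer fetches a field straight from the document.
    doc.rewind();
    std::string_view k1 = simdjson::to_json_string(doc.at_pointer("/k1"));
    (void)k1;
}
```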
2022-08-16 14:49:50 +08:00
4be6e70f1c [fix](query) fix orderby keys limit return less or no result (#11757)
The bug is caused by using _num_rows_read for the limit check. _num_rows_read counts rows read from storage, some of which may then be filtered out by filter_block for the WHERE predicate.

Add _num_rows_return, the row count after filter_block applies the WHERE predicate, to count the rows actually returned (a small sketch follows).
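A minimal sketch of the corrected accounting, with illustrative structure around the names from the description:

```cpp
#include <cstdint>

struct ScannerState {
    int64_t _num_rows_read = 0;   // rows read from storage, before filtering
    int64_t _num_rows_return = 0; // rows surviving the WHERE predicate
    int64_t _limit = -1;          // -1 means no limit

    bool reached_limit() const {
        // Checking _num_rows_read here returns less (or no) data when the
        // predicate filters rows; the limit must count returned rows.
        return _limit >= 0 && _num_rows_return >= _limit;
    }
};
```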
2022-08-16 14:31:47 +08:00
288b440b14 [improvement](vectorized) Improve count distinct performance by using fastunion (#11516)
Tests on real user data show a 10-40% performance improvement.
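Doris's bitmap type builds on CRoaring; a small sketch of the core call, assuming the CRoaring C++ wrapper, where fastunion merges many bitmaps in one pass instead of folding pairwise ORs:

```cpp
#include <roaring/roaring.hh>
#include <vector>

roaring::Roaring union_all(const std::vector<roaring::Roaring>& inputs) {
    std::vector<const roaring::Roaring*> ptrs;
    ptrs.reserve(inputs.size());
    for (const auto& r : inputs) ptrs.push_back(&r);
    // One-pass multi-way union, the heart of the count-distinct speedup.
    return roaring::Roaring::fastunion(ptrs.size(), ptrs.data());
}
```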
2022-08-16 12:18:46 +08:00
5104982614 [enhancement](tracing) append the profile counter to trace. (#11458)
1. Append the profile counters and info strings to span attributes.
2. Output the trace id to the audit log.
2022-08-15 21:36:38 +08:00
0b9bfd15b7 [feature-wip](parquet-reader) parquet physical type to doris logical type (#11769)
Two improvements have been added:
1. Translate parquet physical types into doris logical types (see the sketch after this list).
2. Decode parquet column chunks into doris ColumnPtr, and add unit tests showing how to use the related API.
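A hedged sketch of improvement 1; the target names are illustrative Doris type names, and the real mapping also consults the parquet converted/logical type annotations:

```cpp
#include <stdexcept>
#include <string>

enum class ParquetPhysical { BOOLEAN, INT32, INT64, INT96, FLOAT, DOUBLE,
                             BYTE_ARRAY, FIXED_LEN_BYTE_ARRAY };

std::string to_doris_type(ParquetPhysical t) {
    switch (t) {
    case ParquetPhysical::BOOLEAN:    return "BOOLEAN";
    case ParquetPhysical::INT32:      return "INT";
    case ParquetPhysical::INT64:      return "BIGINT";
    case ParquetPhysical::INT96:      return "DATETIME"; // Spark timestamps
    case ParquetPhysical::FLOAT:      return "FLOAT";
    case ParquetPhysical::DOUBLE:     return "DOUBLE";
    case ParquetPhysical::BYTE_ARRAY: return "STRING";
    case ParquetPhysical::FIXED_LEN_BYTE_ARRAY: return "DECIMAL"; // commonly
    }
    throw std::runtime_error("unknown parquet physical type");
}
```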
2022-08-15 16:08:11 +08:00
1c4927eac3 [fix](core) fix bug for status not initialized (#11730) 2022-08-12 17:42:37 +08:00