Commit Graph

493 Commits

Author SHA1 Message Date
aaaaae53b5 [feature] (memory) Switch TLS mem tracker to separate more detailed memory usage (#8605)
In PR #8476, all memory usage of a process is recorded in the process mem tracker
and all memory usage of a query is recorded in the query mem tracker,
but it is still necessary to manually call `transfer to` to track cached memory sizes.

We want to separate out more detailed memory usage based on the TCMalloc new/delete hooks plus a TLS mem tracker.

In this PR, the more detailed mem tracker is switched to TLS, which automatically and accurately
counts more detailed memory usage than before.
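A minimal sketch of the TLS idea, with illustrative names rather than the actual Doris classes: the tracker of the currently running task sits behind a thread-local pointer, so the allocator hooks can charge allocations to it without threading a tracker through every call.

```
#include <atomic>
#include <cstddef>
#include <cstdint>

// Illustrative only: a task-level tracker reachable through a thread-local pointer.
struct MemTracker {
    std::atomic<int64_t> consumption{0};
    void consume(int64_t bytes) { consumption.fetch_add(bytes, std::memory_order_relaxed); }
    void release(int64_t bytes) { consumption.fetch_sub(bytes, std::memory_order_relaxed); }
};

// Set when a task (query, load, compaction, ...) starts running on this thread.
thread_local MemTracker* tls_mem_tracker = nullptr;

// Called from the allocator's new/delete hooks; charges the current task, if any.
void on_alloc(std::size_t size) {
    if (tls_mem_tracker != nullptr) tls_mem_tracker->consume(static_cast<int64_t>(size));
}
void on_free(std::size_t size) {
    if (tls_mem_tracker != nullptr) tls_mem_tracker->release(static_cast<int64_t>(size));
}
```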
2022-03-24 14:29:34 +08:00
Pxl
7fc22c2456 [fix][vectorized] fix core on get_predicate_column_ptr && fix double copy on _read_columns_by_rowids (#8581) 2022-03-24 09:12:42 +08:00
bea9a7ba4f [feature] Support pre-aggregation for quantile type (#8234)
Add a new column type to speed up the approximation of quantiles.
1. The new column type is named `quantile_state`, with the fixed aggregation function `quantile_union`; it stores the intermediate results of pre-aggregated approximate quantile calculations.
2. Support pre-aggregation for the new column type and add the quantile_state related functions (a toy sketch of the pre-aggregation idea follows).
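A toy sketch of the pre-aggregation idea only, assuming the state merely needs to be mergeable; the real quantile_state is an approximate sketch rather than a plain sample vector, and the names below are hypothetical.

```
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a mergeable intermediate quantile state.
struct QuantileState {
    std::vector<double> samples;  // the real state is a compact approximate sketch

    void add(double v) { samples.push_back(v); }

    // quantile_union-style merge: combine intermediate states instead of raw rows.
    void merge(const QuantileState& other) {
        samples.insert(samples.end(), other.samples.begin(), other.samples.end());
    }

    // Query a percentile from the accumulated state (assumes a non-empty state).
    double percentile(double p) {
        std::sort(samples.begin(), samples.end());
        return samples[static_cast<std::size_t>(p * (samples.size() - 1))];
    }
};
```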
2022-03-24 09:11:34 +08:00
b89e4c7bba [feature-wip](java-udf) support java UDF with fixed-length input and output (#8516)
This feature is proposed in [DSIP-1](https://cwiki.apache.org/confluence/display/DORIS/DSIP-001%3A+Java+UDF).
This PR supports Java UDFs with fixed-length input and output. Phase I of DSIP-1 is done after this PR.

To support Java UDFs efficiently, there is no data copy in the JNI call and all compute operations in Java are off-heap.
To achieve that, a UdfExecutor is used.

For users, a UDF class must have a public `evaluate` method.
2022-03-23 10:32:50 +08:00
b638c07533 [feature-wip](array-type) Support nested array insertion. (#8305) (#8586)
Please refer to #8304 .
2022-03-22 15:28:26 +08:00
a498463ab5 [feature-wip](array-type)support select ARRAY data type on vectorized engine (#8217) (#8584)
Usage Example:
1. create a table for testing:
```
CREATE TABLE `array_test` (
  `k1` tinyint(4) NOT NULL COMMENT "",
  `k2` smallint(6) NULL COMMENT "",
  `k3` ARRAY<int(11)> NULL COMMENT ""
) ENGINE=OLAP
DUPLICATE KEY(`k1`)
COMMENT "OLAP"
DISTRIBUTED BY HASH(`k1`) BUCKETS 5
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"in_memory" = "false",
"storage_format" = "V2"
);
```

2. insert some data
```
insert into array_test values(1, 2, [1, 2]);
insert into array_test values(2, 3, null);
insert into array_test values(3, null, null);
insert into array_test values(4, null, []);
```

3. enable the vectorized engine
`set enable_vectorized_engine=true;`

4. query array data
`select * from array_test;`
+------+------+--------+
| k1   | k2   | k3     |
+------+------+--------+
|    4 | NULL | []     |
|    2 |    3 | NULL   |
|    1 |    2 | [1, 2] |
|    3 | NULL | NULL   |
+------+------+--------+
4 rows in set (0.061 sec)

Code changes include:
1. add column_array and data_type_array code;
2. move the code that creates data_type from Field, TabletColumn, TypeDescriptor, and PColumnMeta into DataTypeFactory;
3. support creating data_type for the ARRAY data type;
4. RowBlockV2::convert_to_vec_block supports the ARRAY data type;
5. VMysqlResultWriter::append_block supports the ARRAY data type;
6. vectorized::Block serialize and deserialize support the ARRAY data type;
2022-03-22 15:21:44 +08:00
38ec3cbbdf [feature-wip](array-type) Support ArrayLiteral in SQL. (#8089) (#8582)
Please refer to #8074
2022-03-22 15:07:06 +08:00
eeae516e37 [Feature](Memory) Hook TCMalloc new/delete automatically counts to MemTracker (#8476)
Early Design Documentation: https://shimo.im/docs/DT6JXDRkdTvdyV3G

Implement a new way of collecting memory statistics based on the TCMalloc new/delete hooks,
MemTracker, and TLS; with it, all new/delete/malloc/free calls of the BE process
are expected to be counted.
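A minimal sketch of the hook side, assuming gperftools' MallocHook/MallocExtension interfaces (the exact signatures used here are an assumption); the actual Doris hook code and its wiring into MemTracker differ.

```
#include <gperftools/malloc_extension.h>
#include <gperftools/malloc_hook.h>
#include <atomic>
#include <cstdint>

// Process-wide counter updated from the allocator hooks (illustrative only).
static std::atomic<int64_t> g_tracked_bytes{0};

static void NewHook(const void* /*ptr*/, size_t size) {
    g_tracked_bytes.fetch_add(static_cast<int64_t>(size), std::memory_order_relaxed);
}

static void DeleteHook(const void* ptr) {
    // Recover the allocation size from the pointer so frees can be subtracted.
    size_t size = MallocExtension::instance()->GetAllocatedSize(const_cast<void*>(ptr));
    g_tracked_bytes.fetch_sub(static_cast<int64_t>(size), std::memory_order_relaxed);
}

int main() {
    MallocHook::AddNewHook(&NewHook);
    MallocHook::AddDeleteHook(&DeleteHook);
    // ... run the workload; g_tracked_bytes approximates live heap usage ...
    MallocHook::RemoveNewHook(&NewHook);
    MallocHook::RemoveDeleteHook(&DeleteHook);
    return 0;
}
```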
2022-03-20 23:06:54 +08:00
2ec0b81030 [improvement](storage) Low cardinality string optimization in storage layer (#8318)
Low cardinality string optimization in storage layer
2022-03-20 23:04:25 +08:00
25cdd0be1a [refactor] CalcPageLenForRow return void rather than always Status::Ok (#8490)
This lets callers drop the error-handling branches around CalcPageLenForRow.
2022-03-18 09:34:49 +08:00
991dc7fc5c [fix][routine-load] fix bug that routine load cannot cancel task when append_data return error (#8457) 2022-03-14 10:18:14 +08:00
2c63fc1d6c [improvement](vectorized) Support BetweenPredicate enable fold const expr (#8450) 2022-03-13 09:36:24 +08:00
e17aef9467 [refactor] refactor the implement of MemTracker, and related usage (#8322)
Modify the implementation of MemTracker:
1. Simplify away a lot of useless logic;
2. Add MemTrackerTaskPool as the ancestor of all query and load trackers; it is used to track the local memory usage of all executing tasks;
3. Add a consume/release cache: a consume/release is only propagated once the accumulated amount exceeds the parameter mem_tracker_consume_min_size_bytes (see the sketch below);
4. Add a new memory leak detection mode (experimental feature): when a MemTracker is destructed and the remaining counted value exceeds the specified range, throw an exception and print the accurate counted value via HTTP; controlled by the parameter memory_leak_detection;
5. Add a Virtual MemTracker whose consume/release is not synced to the parent; it will be used later, when the TCMalloc hook is introduced, to record specific memory independently;
6. Modify the GC logic: register the buffers cached in DiskIoMgr as a GC function, with other GC functions to be added later;
7. Change the global root node from the Root MemTracker to the Process MemTracker, and remove the Process MemTracker from exec_env;
8. Modify the macro that checks whether memory has reached the upper limit, modify the parameters and default behavior of creating a MemTracker, modify the error message format in mem_limit_exceeded, extend and apply transfer_to, remove the Metric from MemTracker, etc.;

Modify where MemTracker is used:
1. Add a MemPool constructor that creates a temporary tracker, avoiding a lot of redundant code;
2. Add trackers for global objects such as ChunkAllocator and StorageEngine;
3. Add more fine-grained trackers, such as for ExprContext;
4. Remove FragmentMemTracker (the PlanFragmentExecutor mem_tracker) from RuntimeState; it was previously used to independently count scan memory and is replaced by _scanner_mem_tracker in OlapScanNode;
5. MemTracker is no longer recorded in ReservationTracker, and ReservationTracker will be removed later;
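A minimal sketch of the consume/release cache from item 3 of the first list above, assuming a tracker with an atomic counter; the class name and threshold constant are illustrative stand-ins.

```
#include <atomic>
#include <cstdint>

struct MemTracker {
    std::atomic<int64_t> consumption{0};
    void consume(int64_t bytes) { consumption.fetch_add(bytes, std::memory_order_relaxed); }
};

// Stand-in for the mem_tracker_consume_min_size_bytes parameter.
constexpr int64_t kConsumeMinBytes = 1 << 20;

class CachedConsumer {
public:
    explicit CachedConsumer(MemTracker* tracker) : _tracker(tracker) {}

    // Accumulate small deltas locally and only touch the shared atomic once the
    // cached amount exceeds the threshold, reducing contention on hot paths.
    void consume(int64_t bytes) {
        _cache += bytes;
        if (_cache >= kConsumeMinBytes || _cache <= -kConsumeMinBytes) {
            _tracker->consume(_cache);
            _cache = 0;
        }
    }

    ~CachedConsumer() {
        if (_cache != 0) _tracker->consume(_cache);  // flush the remainder
    }

private:
    MemTracker* _tracker;
    int64_t _cache = 0;
};
```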
2022-03-11 22:04:23 +08:00
8eec4bf99d [feature](thread-local) Add thread local variable ThreadContext (#7234)
The thread context saves some info about a working thread.
1. thread_id: the current thread id, generated automatically.
2. type: an enum value indicating which type of task the current thread is running,
   for example QUERY, LOAD, COMPACTION, ...
3. task_id: a unique id to identify the task, e.g. a query id or a load job id.

Compiling thread_local variables with GCC 11 against lower versions of GLIBC reports an error, see https://github.com/apache/incubator-doris/pull/7911
This is very difficult to solve, so Kudu's class-scoped static thread-local implementation was introduced.
The problem is solved by combining thread-scoped thread locals with class-scoped thread locals.
See the comments for ThreadContextPtr for details.
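A minimal sketch of a per-thread context, assuming a plain thread_local is usable in the target environment; the names are illustrative, and the real ThreadContext/ThreadContextPtr machinery is more involved because of the GLIBC issue described above.

```
#include <string>
#include <thread>

// Illustrative only: roughly what a per-thread task context carries.
enum class TaskType { UNKNOWN, QUERY, LOAD, COMPACTION };

struct ThreadContext {
    std::thread::id thread_id = std::this_thread::get_id();  // auto generated
    TaskType type = TaskType::UNKNOWN;                        // what kind of task is running
    std::string task_id;                                      // e.g. query id or load job id

    void attach(TaskType t, std::string id) { type = t; task_id = std::move(id); }
    void detach() { type = TaskType::UNKNOWN; task_id.clear(); }
};

inline ThreadContext& thread_context() {
    static thread_local ThreadContext ctx;  // one context per worker thread
    return ctx;
}

// Usage: attach at the start of a task on a worker thread, detach when it ends.
//   thread_context().attach(TaskType::QUERY, "query-123");
//   ... task work; memory tracking / profiling code reads thread_context() ...
//   thread_context().detach();
```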
2022-03-10 09:05:40 +08:00
d880559214 [refactor] remove old schema change code on BE (#8342) 2022-03-09 13:05:44 +08:00
baa3b14870 [fix] Use fmt::to_string replace memory buffer::data() (#8311) 2022-03-06 13:44:11 +08:00
538df28737 [improvement](routine-load) Support routine load task succeed with empty data consumed (#8256) 2022-03-03 22:35:50 +08:00
09bfb8b9d3 [fix] (rpc-udf) Fixed the problem that the query could not be interrupted (#8248)
The query could not be interrupted if an error occurred in the RPC server during the execution of an RPC UDF.
Also add Java, C++, and Python demos of the RPC UDF server.
2022-03-03 09:30:03 +08:00
ada39dd9ad [improvement][vec] better memequal impl to speed up string compare (#8229)
Similar to #8214.

A faster string compare operator in the vectorized engine.
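A portable sketch of the word-at-a-time comparison trick; this is an assumed illustration of the general technique, and the actual vectorized-engine implementation may differ (for example by using SIMD).

```
#include <cstdint>
#include <cstring>

// Compare 8 bytes at a time, then fall back to byte-wise comparison for the tail.
inline bool mem_equal(const char* a, const char* b, std::size_t n) {
    while (n >= 8) {
        std::uint64_t x, y;
        std::memcpy(&x, a, 8);  // memcpy keeps the loads alignment-safe
        std::memcpy(&y, b, 8);
        if (x != y) return false;
        a += 8; b += 8; n -= 8;
    }
    return std::memcmp(a, b, n) == 0;
}
```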
2022-03-01 11:25:12 +08:00
27d2e3e949 [refactor](fe) Remove old fe meta version (#8246)
Remove old FE meta version < 100.
2022-02-28 17:47:01 +08:00
87b96cfcd6 [feature](iceberg) Step3: Support query iceberg external table (#8179)
1. Add Iceberg scan node 
2. Add Iceberg/Hive table type in thrift 
3. Support querying Iceberg tables of format types `parquet` and `orc`
2022-02-26 17:04:11 +08:00
9a7931cfed [fix](mem-pool) fix bug that mem pool failed to allocate in ASAN mode (#8216)
Also fix BE ut:

1. fix scheme_change_test memory leak
2. fix mem_pool_test
    Do not use DEFAULT_PADDING_SIZE = 0x10 in mem_pool when running UTs.
3. remove plugin_test
2022-02-24 10:52:58 +08:00
d6aebc0c2c [improvement] make asan work as much as possible (#8148)
* make ASAN poisoning work as much as possible

Before this patch, a use-after-poison was reported like this:
==19305==ERROR: AddressSanitizer: unknown-crash on address
0x625000137013 at pc 0x561c44bcf6b8 bp 0x7ffb75a00910 sp 0x7ffb75a000b8

After this patch, the use-after-poison is reported like this:
==17782==ERROR: AddressSanitizer: use-after-poison on address
0x625000137033 at pc 0x55633c8f56b8 bp 0x7ff3dc437930 sp 0x7ff3dc43

Before this patch, a false memory-usage report was produced like this:
==33080==AddressSanitizer CHECK failed: ../../../../src/libsanitizer/
asan/asan_allocator.cpp:189 "((old)) == ((kAllocBegMagic))"
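A minimal sketch of manual ASAN poisoning, assuming the standard sanitizer interface header; only the ASAN_POISON/UNPOISON macros are real, and the toy pool is hypothetical.

```
#include <sanitizer/asan_interface.h>
#include <cstddef>

// Hypothetical pool that poisons memory it takes back, so later touches are
// reported as use-after-poison instead of going unnoticed.
struct ToyPool {
    char buf[1024];

    char* take(std::size_t n) {
        ASAN_UNPOISON_MEMORY_REGION(buf, n);  // handed-out bytes become addressable
        return buf;
    }
    void give_back(std::size_t n) {
        ASAN_POISON_MEMORY_REGION(buf, n);    // returned bytes become poisoned
    }
};

int main() {
    ToyPool pool;
    char* p = pool.take(16);
    p[0] = 'x';
    pool.give_back(16);
    // p[0] = 'y';  // under ASAN this line would be reported as a use-after-poison
    return 0;
}
```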
2022-02-22 09:29:22 +08:00
6e8d52f3fc [fix](stream-load) fix bug that stream load may be blocked with unqualified data (#8176)
Co-authored-by: morningman <chenmingyu@baidu.com>
2022-02-22 09:26:23 +08:00
47067e40a6 [refactor](common) optimize Status implemention: no dynamic new (#8117) 2022-02-22 09:23:29 +08:00
d0ee101c2f [refactor] (runtime)tidy up the plan_fragment_executor codes (#8110)
Co-authored-by: zuochunwei <zuochunwei@meituan.com>
2022-02-22 09:20:27 +08:00
409aefdfbf [refactor] add some log when close parquet file (#8144) 2022-02-21 09:36:53 +08:00
826738d97f [docs]Some doc improvements and typo fix (#8153) 2022-02-21 09:36:01 +08:00
50864aca7d [refactor] fix warings when compile with clang (#8069) 2022-02-19 11:29:02 +08:00
936da4f10a [feature](thread-pool) Support thread pool per disk for scanners (#7994)
Support a thread pool per disk for scanners, to prevent a few disks with high I/O utilization from dragging down the whole pool.

Key points:
1. each disk has a thread pool for scanners
2. whenever the thread pool of one disk runs out of local work, tasks can be stolen from the other disks, in round-robin order (see the sketch below)

performance testing: 
vec version: 25% faster than single thread pool in a high io util disk test case
normal version: 8% faster than single thread pool in a high io util disk test case
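A minimal sketch of per-disk task queues with round-robin stealing, under the assumption of one queue per disk; the actual scanner thread pool (worker threads, blocking, priorities) is more involved and the names here are illustrative.

```
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <vector>

using Task = std::function<void()>;

struct DiskQueue {
    std::mutex mu;
    std::deque<Task> tasks;
};

class PerDiskScheduler {
public:
    explicit PerDiskScheduler(std::size_t num_disks) : _queues(num_disks) {}

    void submit(std::size_t disk_idx, Task t) {
        std::lock_guard<std::mutex> l(_queues[disk_idx].mu);
        _queues[disk_idx].tasks.push_back(std::move(t));
    }

    // A worker bound to `home` prefers its own disk's queue, then steals from the
    // other disks in round-robin order when it runs out of local work.
    bool next_task(std::size_t home, Task* out) {
        for (std::size_t i = 0; i < _queues.size(); ++i) {
            DiskQueue& q = _queues[(home + i) % _queues.size()];
            std::lock_guard<std::mutex> l(q.mu);
            if (!q.tasks.empty()) {
                *out = std::move(q.tasks.front());
                q.tasks.pop_front();
                return true;
            }
        }
        return false;  // no work anywhere right now
    }

private:
    std::vector<DiskQueue> _queues;
};
```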
2022-02-18 09:40:58 +08:00
f8411f3c6a [refactor](mysql_table_writer)split into two parts of vectorized and row mode (#8081) 2022-02-17 11:29:25 +08:00
26289c28b0 [fix](load)(compaction) Fix NodeChannel coredump bug and modify some compaction logic (#8072)
1. Fix the problem of BE crash caused by destruct sequence. (close #8058)
2. Add a new BE config `compaction_task_num_per_fast_disk`

    This config specifies the max number of concurrent compaction tasks on a fast disk (typically an SSD),
    so that on high-speed disks we can execute more compaction tasks at the same time
    and compact the data as soon as possible.

3. Avoid frequent selection of unqualified tablet to perform compaction.
4. Modify some log level to reduce the log size of BE.
5. Modify some clone logic to handle error correctly.
2022-02-17 10:52:08 +08:00
0003822da7 [feature](vec) add ColumnHLL to support hll type (#7828) 2022-02-17 10:44:42 +08:00
a6bf8c13eb [Feature](Transaction) Support two phase commit (2PC) for stream load (#7473)
Two-phase commit for stream load means:
during the stream load, after the data is written, a message is returned to the client;
at this point the data is invisible and the transaction status is PRECOMMITTED.
The data becomes visible only after the client triggers COMMIT.
    
1. Users can invoke the following interface to trigger a commit for the transaction:

curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://fe_host:http_port/api/{db}/_stream_load_2pc

or

curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:commit" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc

    
2. Users can invoke the following interface to trigger an abort for the transaction:

curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://fe_host:http_port/api/{db}/_stream_load_2pc

or

curl -X PUT --location-trusted -u user:passwd -H "txn_id:txnId" -H "txn_operation:abort" \
http://be_host:webserver_port/api/{db}/_stream_load_2pc
2022-02-16 11:55:04 +08:00
25d64775d1 [Vectorized][Feature] Support mysql external table insert into stm (#7979) 2022-02-15 14:58:58 +08:00
884fddbf33 [fix](compatibility) Fix compatibility issue of PRowBatch and some tablet sink bugs (#8000)
1. set both `tuple_offsets` and `new_tuple_offsets` in PRowBatch for compatibility
2. set FE config `repair_slow_replica` default to false
   Avoid impacting the load process after upgrading.
   E.g., if there are only 2 replicas and one has a high version count, after the upgrade
   that replica will be set to bad, so the load process will stop
   because only 1 replica is alive.
3. Fix a bug that NodeChannel may be blocked at `close_wait()`
   Previously, the `add_batch_finish` flag was not set after the last RPC finished.
4. Fix a NPE of RoutineLoadScheduler
2022-02-15 11:23:19 +08:00
aea3e4e59b [refactor] Remove version hash from BE and related test in BE (#8027) 2022-02-14 09:29:27 +08:00
7a73645eee [refactor] remove some unused code (#8022) 2022-02-12 15:17:28 +08:00
6b9cb49779 [Refactor] remove plugin folder in be since it is useless and it need fPIC tag to build and we will remove all fPIC tag in the future (#8008) 2022-02-12 12:28:14 +08:00
Pxl
a4e7c76336 [Enhancement] use std::search to replace custom search (#7999) 2022-02-11 10:47:58 +08:00
ef233701b3 [feature](vec)(load) Support vtablet sink to enable insert into by using vec query engine (#7957)
Support vtablet sink to enable insert into query in vec query engine
2022-02-08 11:04:09 +08:00
f8d086d87f [feature](rpc) (experimental)Support implement UDF through GRPC protocol. (#7519)
Support implementing UDFs through the gRPC protocol. This brings several benefits:
1. The UDF implementation language is not limited to C++; users can implement UDFs in any language they are familiar with.
2. UDFs are decoupled from Doris: a UDF cannot cause a Doris coredump, UDF computing resources are separated from Doris, and Doris services are not affected.

However, an RPC UDF has a fixed overhead, so its performance is much slower than a C++ UDF, especially when the amount of data is large.

Create function like

```
CREATE FUNCTION rpc_add(INT, INT) RETURNS INT PROPERTIES (
  "SYMBOL"="add_int",
  "OBJECT_FILE"="127.0.0.1:9999",
  "TYPE"="RPC"
);
```
The function service needs to implement the `check_fn` and `fn_call` methods.
Note:
THIS IS AN EXPERIMENTAL FEATURE, THE INTERFACE AND DATA STRUCTURE MAY BE CHANGED IN FUTURE !!!
2022-02-08 09:25:09 +08:00
c0e59e59aa [fix][refactor] fix bugs and refactor some code by lint (#7871)
1. Fix some `passedByValue` issues.
2. Fix some `dereferenceBeforeCheck` issues.
3. Fix some `uninitMemberVar` issues.
4. Fix some iterator `eraseDereference` issues.
5. Fix compile issue introduced from #7923 #7905 #7848
2022-02-01 14:31:14 +08:00
82f421a019 [fix](brpc-attachment) Fix bug that may cause BE crash when enable transfer_data_by_brpc_attachment (#7921)
This PR mainly changes:

1. Fix bug when enable `transfer_data_by_brpc_attachment`

    In `data_stream_sender`, we will send a serialized PRowBatch data to multiple Channels.
    And if `transfer_data_by_brpc_attachment` is enabled, we will mistakenly clear the data in PRowBatch
    after sending PRowBatch to the first Channel.
    As a result, the following Channel cannot receive the correct data, causing an error.

    So I use a separate buffer instead of `tuple_data` in PRowBatch to store the serialized data
    and reuse it in multiple channels.

2. Fix bug that the offset in the serialized row batch may overflow

    Use int64 to replace int32 offset. And for compatibility, add a new field `new_tuple_offsets` in PRowBatch.
2022-02-01 08:51:16 +08:00
Pxl
3ee000c13c [chore] support build with libc++ && add some build config (#7903)
support LIBCPP/LDD/BUILD_META_TOOL for build.sh
2022-01-30 16:47:22 +08:00
6a1a2a2ed5 [fix](query) Add init function for result_file_sink (#7927)
Add an init function in `result_file_sink` to fix the error "Empty partition info",
which is occasionally reported when using SELECT INTO OUTFILE.
2022-01-29 10:08:57 +08:00
1d900d8605 (fix)[planner] Fix the right tuple ids in empty set node (#7931)
The tuple ids of the empty set node must be exactly the same as the tuple ids of the origin root node.
In the issue, we found that once the tree where the root node is located has a window function,
the tuple ids of the empty set node cannot be calculated correctly.

This PR fixes the problem as follows.
In order to calculate the correct tuple ids,
instead of taking the tuple ids from the SelectStmt.getMaterializedTupleIds() function as before,
we now directly use the tuple ids of the origin root node.

Although we tried to fix #7929 by modifying the SelectStmt.getMaterializedTupleIds() function,
that approach cannot obtain the tuple of the last correct window function,
so we construct the tuple ids of the empty nodes in another way.
2022-01-29 09:46:05 +08:00
d2386dd85d [improvement](rewrite) Make RewriteDateLiteralRule to be compatible with mysql (#7876) 2022-01-27 10:32:18 +08:00
800a36343a [chore] Prolog of hermetic build with GCC 11 and Clang 13. (#7712)
Prepare to generate hermetic build using GCC 11 and Clang 13.
The ideal toolchain would be ldb toolchain generated by [ldb_toolchain_gen.sh](https://github.com/amosbird/ldb_toolchain_gen/releases/download/v0.3/ldb_toolchain_gen.sh)

To kick off a clang build, set `DORIS_TOOLCHAIN=clang` before running any build scripts.
2022-01-21 12:12:04 +08:00
ef984a6a72 [improvement](load) Improve load fault tolerance (#7674)
Currently, if we encounter a problem with a replica of a tablet during the load process,
such as a write error, rpc error, -235, etc., it will cause the entire load job to fail,
which results in a significant reduction in Doris' fault tolerance.

This PR mainly changes:

1. refined the judgment of failed replicas in the load process, so that the failure of a few replicas will not affect the normal completion of the load job.
2. fix a bug introduced from #7754 that may cause BE coredump
2022-01-20 09:23:21 +08:00