Commit Graph

4747 Commits

Author SHA1 Message Date
460399f214 [fix](profile) remove duplicate profile in join node (#20734) 2023-06-15 08:08:39 +08:00
2a2e485456 [Enhancement](compaction) time-series scenario cumulative compaction policy (#20715)
Adds a new cumulative compaction policy for log and time-series scenarios.
2023-06-14 23:48:44 +08:00
09d187ec77 [improvement](ck jdbc) Optimized reading of datetime and ip types of the ClickHouse JDBC Catalog (#20804) 2023-06-14 23:28:08 +08:00
bb617ee2cc [fix](parquet-reader)fix page v2 header offset (#20814)
Fix the page v2 header offset: get the correct offset when reading the next page in the file.
2023-06-14 23:27:31 +08:00
Pxl
3727483c06 [Chore](build) update ldb_toolchain to v0.18 (#20802)
* update ldb_toolchain to v0.18

* update
2023-06-14 18:38:35 +08:00
0ecc98df82 [Bug](rowset) delayed expired rowsets should be ignored and not deleted in _tablet_meta (#20803) 2023-06-14 18:30:13 +08:00
31a4f96f01 [refactor](exprcontext) move close to expr context's destructor method (#20747)
The close method does nothing, but I am not sure we could remove it, so I moved it into the destructor and removed many, many call sites (a minimal sketch follows this entry).
2023-06-14 18:01:07 +08:00
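A minimal C++ sketch of the pattern, assuming a simplified ExprContext; the real Doris class is not reproduced here:

    // Illustrative only: moving the close() call into the destructor lets
    // call sites drop their explicit close() calls.
    struct ExprContext {
        void close() {
            // Currently a no-op; kept in case teardown work is needed later.
        }
        ~ExprContext() { close(); }
    };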
b97537b04b [Fix](MOW) Fix load data publish timeout when enable unique key MOW (#20720) 2023-06-14 17:56:02 +08:00
615778924e [feature](fs) add fs benchmark tool framework (#20770)
Add an optional executable binary, fs_benchmark_tool, for testing the performance of file systems such as HDFS and S3.
Usage:

./fs_benchmark_tool --conf my.conf --fs_type=s3 --operation=read --iterations=5
In my.conf, you can add any config key/value pair in the following format:

key1=value1
key2=value2
By default, this binary will not be built; build it only when setting BUILD_FS_BENCHMARK=ON.
The binary will be installed in output/be/lib.

Developers can add a new subclass of BaseBenchmark to add their own benchmark;
see be/src/io/fs/benchmark/s3_benchmark.hpp for an example, and the sketch after this entry.
2023-06-14 17:50:06 +08:00
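A hedged sketch of adding a benchmark; BaseBenchmark's real interface is not shown in this log, so the method names and the subclass below are assumptions:

    #include <iostream>
    #include <string>

    // Assumed interface; the actual one lives under be/src/io/fs/benchmark/.
    class BaseBenchmark {
    public:
        virtual ~BaseBenchmark() = default;
        virtual std::string name() const = 0;
        virtual void run(int iterations) = 0;
    };

    // Hypothetical subclass timing a read operation.
    class MyReadBenchmark : public BaseBenchmark {
    public:
        std::string name() const override { return "my_read"; }
        void run(int iterations) override {
            for (int i = 0; i < iterations; ++i) {
                // open the file, read a block, record the elapsed time ...
            }
            std::cout << name() << " done after " << iterations << " iterations\n";
        }
    };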
Pxl
a0d4f11667 [Bug](function) catch error state in function cast to avoid core dump (#20751)
catch error state in function cast to avoid core dump
2023-06-14 17:34:34 +08:00
d922a4a9fa [Feature-WIP](inverted index) add inverted index file size method (#20758)
This PR calculates the size of the inverted index files. The changes consist of:

Introduction of a new get_inverted_index_size() method in different column writers such as ScalarColumnWriter, StructColumnWriter, ArrayColumnWriter, and MapColumnWriter. This method will fetch the size of the inverted index file associated with that column. If the file size cannot be fetched, it defaults to 0.

A new method file_size() has been added in InvertedIndexColumnWriter class which retrieves the size of the file stored on disk. If the file size cannot be fetched, it logs an error and returns -1.

Additionally, a new method get_inverted_index_file_size() is introduced in SegmentWriter, which aggregates the inverted index file sizes of all the column writers (a sketch of this hierarchy follows this entry).
2023-06-14 17:18:20 +08:00
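A minimal sketch of the size-aggregation hierarchy described above; only file_size(), get_inverted_index_size(), and get_inverted_index_file_size() come from the commit, the rest is illustrative:

    #include <cstdint>
    #include <memory>
    #include <vector>

    struct InvertedIndexColumnWriter {
        // Returns the on-disk index file size, or -1 if it cannot be fetched.
        int64_t file_size() const { return _size_on_disk; }
        int64_t _size_on_disk = -1;
    };

    struct ColumnWriter {
        std::unique_ptr<InvertedIndexColumnWriter> index_writer;
        // Defaults to 0 when the file size cannot be fetched, per the commit.
        uint64_t get_inverted_index_size() const {
            if (!index_writer) return 0;
            int64_t sz = index_writer->file_size();
            return sz < 0 ? 0 : static_cast<uint64_t>(sz);
        }
    };

    struct SegmentWriter {
        std::vector<std::unique_ptr<ColumnWriter>> column_writers;
        // Aggregates the inverted index file sizes of all column writers.
        uint64_t get_inverted_index_file_size() const {
            uint64_t total = 0;
            for (const auto& cw : column_writers) {
                total += cw->get_inverted_index_size();
            }
            return total;
        }
    };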
dd5b82fe00 [Enhancement](merge-on-write) optimize contains_agg when calculating the delete bitmap (#20762) 2023-06-14 16:25:11 +08:00
0f470fec0e [Bug](topn opt) Fix Two-Phase read when some rowset swept (#20732)
* [Bug](topn opt) Fix Two-Phase read when some rowset swept

If this is a two-phase read query, we need to delay the release of rowsets via row->update_delayed_expired_timestamp() to extend their lifespan. This is necessary to avoid data loss during the second-phase read, where some stale rowsets may be swept and result in missing data (a sketch of the idea follows this entry).
2023-06-14 15:46:29 +08:00
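A hedged sketch of the lifetime extension; the struct is illustrative, not the real Rowset:

    #include <algorithm>
    #include <cstdint>

    struct Rowset {
        int64_t delayed_expired_timestamp = 0;

        // Extend this rowset's lifetime to at least `expired_ts`, so the
        // second read phase can still find it.
        void update_delayed_expired_timestamp(int64_t expired_ts) {
            delayed_expired_timestamp =
                    std::max(delayed_expired_timestamp, expired_ts);
        }

        // The sweeper skips rowsets whose delayed expiration is in the future.
        bool can_be_swept(int64_t now) const {
            return now > delayed_expired_timestamp;
        }
    };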
f2025b9eed [fix](memory) before compaction runs, check whether memory exceeds the limit #20782 2023-06-14 14:20:48 +08:00
9b4b0d4bf9 [fix](cooldown) Fix bug when cooldown a dropped tablet (#20750) 2023-06-14 09:42:55 +08:00
fd97587aff [fix](merge-on-write) fix merged rows not being equal to missed rows when doing cumulative compaction (#20754) 2023-06-13 22:18:59 +08:00
Pxl
9244cb6553 [Chore](runtime-filter) do not make query fail when rf publish fails (#20742)
Do not make the query fail when runtime filter publish fails.
2023-06-13 18:23:46 +08:00
ad2f1b5647 [Update](clucene) synchronize clucene version to address PFOR adaptation issue (#20736) 2023-06-13 18:04:48 +08:00
feb21fc9e9 [fix](group_concat) use default separator ',' instead of ', ' for group_concat, to be consistent with MySQL (#20741) 2023-06-13 17:20:29 +08:00
2dddab03a1 [compatibility](schema cache) ensure schema version when using schema cache (#20729)
When the FE is an old version and the BE a new version, issuing a schema change (add
column) and then querying can read a stale schema from the schema cache, because an
old-version FE queries without a schema version (a sketch of the version check follows
this entry).
2023-06-13 15:19:26 +08:00
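A hedged sketch of ensuring the schema version on cache lookups; the cache shape is illustrative, not the real Doris SchemaCache:

    #include <cstdint>
    #include <memory>
    #include <unordered_map>

    struct TabletSchema { int32_t schema_version = -1; };

    struct SchemaCache {
        std::unordered_map<int64_t, std::shared_ptr<TabletSchema>> entries;

        // Illustrative: make the schema version part of the lookup so a
        // query cannot be served a schema with a different version.
        std::shared_ptr<TabletSchema> get(int64_t index_id, int32_t want_version) {
            auto it = entries.find(index_id);
            if (it == entries.end()) return nullptr;
            if (it->second->schema_version != want_version) {
                return nullptr;  // version mismatch: treat as a miss and reload
            }
            return it->second;
        }
    };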
4b15185e25 [improvement](hdfs) add parquet footer cache and hdfs file handle cache (#20544)
1. Add hdfs file handle cache for hdfs file reader

    Copied from Impala, `https://github.com/apache/impala/blob/master/be/src/util/lru-multi-cache.h`. (Thanks to the Impala team.)
    This is an LRU cache that can store multiple entries with the same key.
    The key is built from {file name + modification time}.
    The value is an hdfsFile pointer that points to a certain HDFS file.

    This cache avoids reopening the same HDFS file multiple times, which saves
    query time.

    A new BE config `max_hdfs_file_handle_cache_num` limits the max number of
    cached file handles; the default is 20000.

2. Add file meta cache

	The file meta cache is an LRU cache. The key is {file name + modification time},
	and the value is the parsed file meta info of that file, which saves the time of
	re-parsing the file meta every time.
	Currently, it is only used for caching the parquet file footer.

Tests show that when the cache is hit, `FileOpenTime` and `ParseFooterTime` drop to
almost 0 in the query profile, which saves time when there are lots of files to read
(a sketch of the handle cache idea follows this entry).
2023-06-13 15:13:57 +08:00
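A hedged sketch of the multi-entry LRU idea behind the handle cache; the real code is adapted from Impala's lru-multi-cache, and everything below is illustrative and omits LRU eviction:

    #include <cstdint>
    #include <list>
    #include <string>
    #include <unordered_map>

    struct HdfsHandle { /* wraps an hdfsFile pointer */ };

    class FileHandleCache {
    public:
        // Key = file name + modification time, per the commit.
        static std::string make_key(const std::string& name, int64_t mtime) {
            return name + "#" + std::to_string(mtime);
        }

        // Check out a cached handle; several handles may share one key so
        // concurrent readers of the same file each get their own.
        HdfsHandle* acquire(const std::string& key) {
            auto it = _handles.find(key);
            if (it == _handles.end() || it->second.empty()) return nullptr;
            HdfsHandle* h = it->second.front();
            it->second.pop_front();
            return h;
        }

        // Return a handle (or register a freshly opened one) for reuse.
        // Eviction by max_hdfs_file_handle_cache_num is omitted here.
        void release(const std::string& key, HdfsHandle* h) {
            _handles[key].push_back(h);
        }

    private:
        std::unordered_map<std::string, std::list<HdfsHandle*>> _handles;
    };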
Pxl
e010fa8d4f [Chore](runtime filter) remove runtime filter ready_for_publish/publish_finally (#20593) 2023-06-13 11:20:49 +08:00
57656b2459 [Enhancement](java-udf) java-udf module split to sub modules (#20185)
The java-udf module has become increasingly large and difficult to manage, making it inconvenient to package and use as needed. It needs to be split into multiple sub-modules, such as java-common, java-udf, jdbc-scanner, hudi-scanner, and paimon-scanner.

Co-authored-by: lexluo <lexluo@tencent.com>
2023-06-13 09:41:22 +08:00
51bbf17786 [Refactor](Profile) Add and refactor the join profile (#20693) 2023-06-13 09:06:51 +08:00
73ad885e19 [Feature][Fix](multi-catalog) Implements transactional hive full acid tables. (#20679)
After supporting insert-only transactional Hive tables in #19518 and #19419, this PR supports transactional Hive full ACID tables.

Hive 3 transactional full ACID tables are supported.
Hive 2 transactional full ACID tables need to have major compactions run first.
2023-06-13 08:55:16 +08:00
Pxl
5e3a96d605 [Bug](pipeline) fix memory leak because pipeline shared ptr is not released #20710 2023-06-13 08:50:34 +08:00
283c55720d [bug](cooldown) Fix the issue of unused remote files not being deleted (#19785) 2023-06-12 21:05:09 +08:00
1433544c56 [fix](case expr) fix coredump of case for null value 3 #20711 2023-06-12 20:58:01 +08:00
Pxl
5fd9f58bd3 [Chore](pipeline-engine) adjust query canceled log on pipeline engine (#20702)
Adjust the query canceled log on the pipeline engine.
2023-06-12 18:23:19 +08:00
9d47c6a871 [fix](columnstring) fix bug of columnstring prefetch (#20698) 2023-06-12 17:03:44 +08:00
99c0592157 [Feature](array-function) Support array_pushback function #17417 (#19988)
Implement array_pushback.

mysql> select array_pushback([1, 2], 3);
+--------------------------------+
| array_pushback(ARRAY(1, 2), 3) |
+--------------------------------+
| [1, 2, 3]                      |
+--------------------------------+
1 row in set (0.01 sec)
2023-06-12 16:51:12 +08:00
ea264ce9de [Opt](join) short circuit probe for join node (#20585)
Support _short_circuit_for_probe for the join node (a sketch of the idea follows this entry).
2023-06-12 16:01:09 +08:00
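A hedged sketch of the short-circuit idea; the real condition set is richer than this:

    // Illustrative only: with an empty build side, join types that require
    // a build-side match can skip probing entirely.
    enum class JoinOp { INNER, LEFT_OUTER, RIGHT_SEMI };

    bool short_circuit_for_probe(bool build_side_empty, JoinOp op) {
        if (!build_side_empty) return false;
        return op == JoinOp::INNER || op == JoinOp::RIGHT_SEMI;
    }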
0b228b3414 [fix](load)Support load json data with default value (#20624)
* support json default value

---------

Co-authored-by: duanxujian <duanxujian@jd.com>
2023-06-12 14:51:31 +08:00
Pxl
7f8c5c81e7 [Feature](agg_state) support agg_state combinator on nereids (#20164)
support agg_state combinator on nereids
2023-06-12 12:49:26 +08:00
14f59bef1d [improvement](profile)add sum/avg rpc time (#20511) 2023-06-12 11:34:49 +08:00
4c340f2851 [Feature] (Multi-Catalog) support query hll column in doris jdbc table - part 1 (#19413)
Issue Number: close #17895
2023-06-12 11:16:19 +08:00
a6f625676b [profile](remove child) child is for node, should not be used to organize counters (#20676)
Currently, many profiles use add-child-profile to organize the profile into blocks, but that is wrong: a child profile has a total time counter. What we should actually use is just a label.

                          -  MemoryUsage:  
                              -  HashTable:  23.98  KB
                              -  SerializeKeyArena:  446.75  KB
Add a new macro ADD_LABEL_COUNTER to add just a label to the profile (a sketch of its intent follows this entry).

---------

Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-06-12 10:00:35 +08:00
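A minimal sketch of the macro's intent; the real RuntimeProfile API is not reproduced here, so the types below are stand-ins:

    #include <map>
    #include <string>

    // Stand-in profile: a label is just a named grouping entry and, unlike
    // a child profile, carries no total time counter of its own.
    struct Profile {
        std::map<std::string, long> entries;
        void add_label(const std::string& name) { entries.emplace(name, 0); }
    };

    #define ADD_LABEL_COUNTER(profile, name) (profile)->add_label(name)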
Pxl
ab7ac31d89 [Chore](case) fix failed on test_big_pad when enable pipeline engine #20644 2023-06-12 09:15:55 +08:00
a347063390 [fix](case expr) fix coredump of case for null value 2 (#20635)
fix coredump of case for null value 2
2023-06-11 23:08:53 +08:00
8ea61a1ce6 [fix](streamload) fix crash when be exit (#20662) 2023-06-11 15:58:44 +08:00
bd9a9a32f5 [bugfix](s3 fs) fix s3 uri parsing for http/https uri (#20656) 2023-06-11 14:00:04 +08:00
9a83d78dfe [Enhancement](hudi) support hudi mor table, step2 follow #19909 (#20570)
PR #19909 (https://github.com/apache/doris/pull/19909) implemented the framework of the Hudi reader for MOR tables. This PR completes all functions for reading MOR tables and enables end-to-end queries.
Key Implementations:
1. Use Hudi meta information to generate the table schema, not the Hive client.
2. Use the Hive client to list Hudi partitions, so this strongly depends on the sync tool (https://hudi.apache.org/docs/syncing_metastore/) which syncs Hudi partitions into the Hive metastore. However, we may later get Hudi partitions directly from the .hoodie directory.
3. Remove `HudiHMSExternalCatalog`, because other catalogs like Glue are compatible with the Hive catalog.
4. Read COW tables natively from C++, as before.
5. Hudi RecordReader uses ProcessBuilder to start a HotSpot debugger process, which may get stuck when attaching to the original JNI process, so I use a tricky method to kill this useless process.
2023-06-10 12:25:53 +08:00
Pxl
ab6c1f152c [Chore](build) adjust build script about pch setting (#20637)
try to make be-ut workflow stable
2023-06-09 22:27:13 +08:00
656b9ad3da [enhancement](index) Nereids supports skipping raw data reads for index columns that appear only in filter conditions (#20605) 2023-06-09 21:54:48 +08:00
0f21166110 [fix](memory) Fix runtime state default mem tracker (#20615)
start time: Wed 07 Jun 2023 06:50:14 PM CST
*** Query id: e9000000e9-eb00000073 ***
*** Aborted at 1686136356 (unix time) try "date -d @1686136356" if you are using GNU date ***
*** Current BE git commitID: 5c33dd7a2c ***
*** SIGSEGV address not mapped to object (@0x23000000235) received by PID 2131238 (TID 2132258 OR 0x7f708eff7700) from PID 565; stack trace: ***
 0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /mnt/hdd01/repo_center/doris_branch-2.0-beta/doris/be/src/common/signal_handler.h:413
 1# 0x00007F727BBE3090 in /lib/x86_64-linux-gnu/libc.so.6
 2# doris::AttachTask::AttachTask(doris::RuntimeState*) at /mnt/hdd01/repo_center/doris_branch-2.0-beta/doris/be/src/runtime/thread_context.cpp:43
 3# std::_Function_handler<void (doris::PTabletWriterAddBlockResult const&, bool), doris::stream_load::VNodeChannel::open_wait()::$_1>::_M_invoke(std::_Any_data const&, doris::PTabletWriterAddBlockResult const&, bool&&) at /var/local/ldb_toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:291
 4# doris::stream_load::ReusableClosure<doris::PTabletWriterAddBlockResult>::Run() at /mnt/hdd01/repo_center/doris_branch-2.0-beta/doris/be/src/vec/sink/vtablet_sink.h:176
 5# brpc::Controller::EndRPC(brpc::Controller::CompletionInfo const&) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
 6# brpc::Controller::OnVersionedRPCReturned(brpc::Controller::CompletionInfo const&, bool, int) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
 7# brpc::policy::ProcessRpcResponse(brpc::InputMessageBase*) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
 8# brpc::InputMessenger::InputMessageClosure::~InputMessageClosure() in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
 9# brpc::InputMessenger::OnNewMessages(brpc::Socket*) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
10# brpc::Socket::ProcessEvent(void*) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
11# bthread::TaskGroup::task_runner(long) in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
12# bthread_make_fcontext in /root/20230607171843-doris-branch-2.0-beta-5c33dd7a/be/lib/doris_be
2023-06-09 21:09:07 +08:00
93b53cf2f4 [improvement](exception-safe) create and prepare node/sink support exception safe (#20551) 2023-06-09 21:06:59 +08:00
abb2048d5d [performance](executor) remove repeated call within the loop in validate_column 2023-06-09 19:59:25 +08:00
05438eab0d remove DCHECK for rpc time (#20621) 2023-06-09 13:38:12 +08:00
3b17cc8eb3 [Improvement](column) reduce cache miss for data copy (#20583) 2023-06-09 13:10:57 +08:00
b60860c5e5 [refactor](profile) refactor the join profile when its shared hash table (#20391)
In the join node, if it is a broadcast join with a shared hash table, some counters/timers
about building the hash table are useless, so we add those counters/timers to a faker
profile, which will not be displayed in the web profile (a sketch follows this entry).
2023-06-09 08:59:49 +08:00
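A hedged sketch of the counter routing described above; names are illustrative:

    struct RuntimeProfile { /* counters, timers ... */ };

    struct HashJoinNode {
        RuntimeProfile* web_profile;    // shown in the web profile
        RuntimeProfile* faker_profile;  // never displayed
        bool is_broadcast_join = false;
        bool shares_hash_table = false;

        // Register build-side counters on the faker profile when the hash
        // table is shared, so meaningless timers do not clutter the display.
        RuntimeProfile* profile_for_build_counters() {
            return (is_broadcast_join && shares_hash_table) ? faker_profile
                                                            : web_profile;
        }
    };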