67afea73b1
[enhancement](merge-on-write) add more version and txn information for mow publish ( #21257 )
2023-07-07 16:18:47 +08:00
871002c882
[fix](kerberos) renew the kerberos ticket every half of the ticket lifetime ( #21546 )
...
Following #21265, the renewal interval of the kerberos ticket should be half of config::kerberos_expiration_time_seconds.
2023-07-07 14:52:36 +08:00
57729bad68
[Enhancement](multi-catalog) Add hdfs read statistics profile. ( #21442 )
...
Add hdfs read statistics profile.
```
- HdfsIO: 0ns
- TotalBytesRead: 133.47 MB
- TotalLocalBytesRead: 133.47 MB
- TotalShortCircuitBytesRead: 133.47 MB
- TotalZeroCopyBytesRead: 0.00
```
2023-07-07 14:52:14 +08:00
70f2ac308a
[fix](sink) fix OlapTableSink early close causes load failure #21545
2023-07-07 14:03:54 +08:00
2a721be4f7
[fix](partial update) correct col_nums when init agg state in memtable ( #21592 )
2023-07-07 14:03:33 +08:00
612265c717
[Enhancement](inverted index) reset global instance for InvertedIndexSearcherCache when destroy ( #21601 )
...
This PR resets the InvertedIndexSearcherCache when doris_be is destroyed. Since InvertedIndexSearcherCache is a global instance, its members must be reset explicitly.
This change eliminates the memory-leak messages that currently appear when doris_be is stopped gracefully, contributing to a cleaner and more efficient shutdown process.
2023-07-07 13:00:43 +08:00
bb985cd9a1
[refactor](udf) refactor java-udf execute method by using for loop ( #21388 )
2023-07-07 11:43:11 +08:00
9ee7fa45d1
[Refactor](multi-catalog) Refactor to process split conjuncts for dict filter. ( #21459 )
...
Conjuncts are currently split, so refactor source code to handle split conjuncts for dict filters.
2023-07-07 09:19:08 +08:00
181dad4181
[fix](executor) make elt / repeat smooth upgrade. ( #21493 )
...
BE: 2.0, FE: 1.2
before
mysql [(none)]>select elt(1, 'aaa', 'bbb');
ERROR 1105 (HY000): errCode = 2, detailMessage = (127.0.0.1)[INTERNAL_ERROR]Function elt get failed, expr is VectorizedFnCall[elt](arguments=,return=String) and return type is String.
mysql [test]> INSERT INTO tbb VALUES (1, repeat("test1111", 8192))(2, repeat("test1111", 131072));
mysql [test]>select k1, md5(v1), length(v1) from tbb;
+------+----------------------------------+--------------+
| k1 | md5(`v1`) | length(`v1`) |
+------+----------------------------------+--------------+
| 1 | d41d8cd98f00b204e9800998ecf8427e | 0 |
| 2 | d41d8cd98f00b204e9800998ecf8427e | 0 |
+------+----------------------------------+--------------+
now
mysql [test]>select elt(1, 'aaa', 'bbb');
+----------------------+
| elt(1, 'aaa', 'bbb') |
+----------------------+
| aaa |
+----------------------+
mysql [test]>select k1, md5(v1), length(v1) from tbb;
+------+----------------------------------+--------------+
| k1 | md5(`v1`) | length(`v1`) |
+------+----------------------------------+--------------+
| 1 | 1f44fb91f47cab16f711973af06294a0 | 65536 |
| 2 | 3c514d3b89e26e2f983b7bd4cbb82055 | 1048576 |
+------+----------------------------------+--------------+
2023-07-06 19:15:06 +08:00
2d94477748
[fix](type system) fix datetimev2 write column to arrow ( #21529 )
...
*** Query id: c1d804d455a24dee-a8967d16a258fc15 ***
*** Aborted at 1688530361 (unix time) try "date -d @1688530361" if you are using GNU date ***
*** Current BE git commitID: f2025b9 ***
*** SIGSEGV unknown detail explain (@0x0) received by PID 3709755 (TID 3710413 OR 0x7f5661d57700) from PID 0; stack trace: ***
0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /root/doris/be/src/common/signal_handler.h:413
1# os::Linux::chained_handler(int, siginfo*, void*) in /usr/lib/jvm/TencentKona-8.0.12-352/jre/lib/amd64/server/libjvm.so
2# JVM_handle_linux_signal in /usr/lib/jvm/TencentKona-8.0.12-352/jre/lib/amd64/server/libjvm.so
3# signalHandler(int, siginfo*, void*) in /usr/lib/jvm/TencentKona-8.0.12-352/jre/lib/amd64/server/libjvm.so
4# 0x00007F5795EE5B50 in /lib64/libc.so.6
5# doris::vectorized::DateV2Value<doris::vectorized::DateTimeV2ValueType>::to_buffer(char*, int) const at /root/doris/be/src/vec/runtime/vdatetime_value.cpp:2409
6# doris::vectorized::DataTypeDateTimeV2SerDe::write_column_to_arrow(doris::vectorized::IColumn const&, unsigned char const*, arrow::ArrayBuilder*, int, int) const in /mnt/disk2/zhaobingquan/doris/be/lib/doris_be
7# doris::vectorized::DataTypeNullableSerDe::write_column_to_arrow(doris::vectorized::IColumn const&, unsigned char const*, arrow::ArrayBuilder*, int, int) const at /root/doris/be/src/vec/data_types/serde/data_type_nullable_serde.cpp:120
8# doris::FromBlockConverter::convert(std::shared_ptr<arrow::RecordBatch>*) at /root/doris/be/src/util/arrow/block_convertor.cpp:392
9# doris::convert_to_arrow_batch(doris::vectorized::Block const&, std::shared_ptr<arrow::Schema> const&, arrow::MemoryPool*, std::shared_ptr<arrow::RecordBatch>*) in /mnt/disk2/zhaobingquan/doris/be/lib/doris_be
10# doris::vectorized::MemoryScratchSink::send(doris::RuntimeState*, doris::vectorized::Block*, bool) at /root/doris/be/src/vec/sink/vmemory_scratch_sink.cpp:83
11# doris::PlanFragmentExecutor::open_vectorized_internal() in /mnt/disk2/zhaobingquan/doris/be/lib/doris_be
12# doris::PlanFragmentExecutor::open() at /root/doris/be/src/runtime/plan_fragment_executor.cpp:273
13# doris::FragmentExecState::execute() at /root/doris/be/src/runtime/fragment_mgr.cpp:263
14# doris::FragmentMgr::_exec_actual(std::shared_ptr<doris::FragmentExecState>, std::function<void (doris::RuntimeState*, doris::Status*)> const&) at /root/doris/be/src/runtime/fragment_mgr.cpp:527
15# std::_Function_handler<void (), doris::FragmentMgr::exec_plan_fragment(doris::TExecPlanFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&)::$_0>::_M_invoke(std::_Any_data const&) at /var/local/ldb-toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/std_function.h:291
16# doris::ThreadPool::dispatch_thread() in /mnt/disk2/zhaobingquan/doris/be/lib/doris_be
17# doris::Thread::supervise_thread(void*) at /root/doris/be/src/util/thread.cpp:466
18# start_thread in /lib64/libpthread.so.0
19# __clone in /lib64/libc.so.6
2023-07-06 17:33:49 +08:00
8e6b9b4026
[fix](sink) Fix NodeChannel add_block_closure null pointer ( #21534 )
...
NodeChannel's add_block_closure is a null pointer when the channel is canceled before open_wait creates the new closure.
2023-07-06 17:09:43 +08:00
dac2b638c6
[refactor](load) move memtable flush logic to flush token and rowset writer ( #21547 )
2023-07-06 17:04:30 +08:00
457de3fc55
[refactor](load) move find_tablet out of VOlapTableSink ( #21462 )
2023-07-06 16:51:32 +08:00
fde73b6cc6
[Fix](multi-catalog) Fix hadoop short-circuit reading not being enabled in some environments. ( #21516 )
...
Fix hadoop short-circuit reading not being enabled in some environments.
- Revert #21430 because it causes a performance degradation issue.
- Add `$HADOOP_CONF_DIR` to `$CLASSPATH`.
- Remove the empty `hdfs-site.xml`, because in some environments it prevents hadoop short-circuit reading from being enabled.
- Copy the hadoop common native libs (taken from https://github.com/apache/doris-thirdparty/pull/98
) and add them to `LD_LIBRARY_PATH`, because in some environments `LD_LIBRARY_PATH` does not contain the hadoop common native libs, which prevents hadoop short-circuit reading from being enabled.
2023-07-06 15:00:26 +08:00
06451c4ff1
fix: infinite loop when handling memory exceeding the limit ( #21556 )
...
In some situations, _handle_mem_exceed_limit allocates a very large memory block, more than 5 GB. After adding some logging, we found that:
- the allocation was made in vector::insert_realloc;
- writers_to_reduce_mem's size was more than 8 million,
which indicated an infinite loop in `while (!tablets_mem_heap.empty())`.
By reviewing the code, `if (std::get<0>(tablet_mem_item)++ != std::get<1>(tablet_mem_item))` is wrong;
it must be `if (++std::get<0>(tablet_mem_item) != std::get<1>(tablet_mem_item))`.
In the original code, we incremented the end iterator and then compared it to the end iterator, which is undefined behavior.
2023-07-06 14:34:29 +08:00
4d17400244
[profile](join) add collisions into profile ( #21510 )
2023-07-06 14:30:10 +08:00
009b300abd
[Fix](ScannerScheduler) fix dead lock when shutdown group_local_scan_thread_pool ( #21553 )
2023-07-06 13:09:37 +08:00
9d2f879bd2
[Enhancement](inverted index) make InvertedIndexReader shared_from_this ( #21381 )
...
This PR proposes several changes to improve code safety and readability by replacing raw pointers with smart pointers in several places:
- use enable_factory_creator in InvertedIndexIterator and InvertedIndexReader, removing explicit `new` in constructors;
- make InvertedIndexReader shared_from_this, since it may be destructed while InvertedIndexIterator is still using it.
2023-07-06 11:52:59 +08:00
fb14950887
[refactor](load) split flush_segment_writer into two parts ( #21372 )
2023-07-06 11:13:34 +08:00
80be2bb220
[bugfix](RowsetIterator) use valid stats when creating segment iterator ( #21512 )
2023-07-06 10:35:16 +08:00
688a1bc059
[refactor](load) expand OlapTableValidator to VOlapTableBlockConvertor ( #21476 )
2023-07-06 10:11:53 +08:00
a2e679f767
[fix](status) Return the correct error code when a clucene error occurs ( #21511 )
2023-07-06 09:08:11 +08:00
5d2739b5c5
[Fix](submodule) revert wrong clucene version rollback ( #21523 )
2023-07-05 19:10:15 +08:00
242a35fa80
[fix](s3) fix s3 fs benchmark tool ( #21401 )
...
1. Fix a concurrency bug in the s3 fs benchmark tool to avoid crashes under multi-threading.
2. Add a `prefetch_read` operation to test the prefetch reader.
3. Add the `AWS_EC2_METADATA_DISABLED` env in `start_be.sh` to avoid calling ec2 metadata when creating the s3 client.
4. Add the `AWS_MAX_ATTEMPTS` env in `start_be.sh` to avoid warning logs from the s3 sdk.
2023-07-05 16:20:58 +08:00
39590f95b0
[pipeline](load) return error status in pipeline load ( #21303 )
2023-07-05 16:13:32 +08:00
d8a549fe61
[Fix](Comment) Comment should be in English ( #20964 )
2023-07-05 15:41:34 +08:00
38c8657e5e
[improve](memory) more graceful logging when memory exceeds the limit ( #21311 )
...
More graceful logging for Allocator and MemTracker when memory exceeds the limit.
Fix graceful bthread exit.
2023-07-05 14:59:06 +08:00
f02bec8ad1
[Chore](runtime filter) change runtime filter dcheck to error status or exception ( #21475 )
...
change runtime filter dcheck to error status or exception
2023-07-05 14:03:55 +08:00
0469c02202
[Test](regression) Temporarily disable quickTest for SHOW CREATE TABLE to adapt to enable_feature_binlog=true ( #21247 )
2023-07-05 10:12:02 +08:00
122f5f6c2d
[enhancement](udf) add more info when downloading the jar package fails ( #21440 )
...
When downloading the jar package, sometimes the checksum does not match,
but the root cause is unknown; now add more error messages on failure.
2023-07-04 20:35:35 +08:00
3b73604f74
[fix](memory) fix jemalloc purge arena dirty pages core dump ( #21486 )
...
Issue Number: close #xxx
jemalloc/jemalloc#2470
Occasional core dump during stress test.
2023-07-04 20:35:13 +08:00
81ee4d7402
[performance](group_concat) avoid extra copy in group_concat ( #21432 )
...
avoid extra copy in group_concat
2023-07-04 20:21:44 +08:00
13fb69550a
[improvement](kerberos) disable hdfs fs handle cache to renew kerberos ticket at fix interval ( #21265 )
...
Add a new BE config `kerberos_ticket_lifetime_seconds`, default 86400.
It is best to set it to the same value as `ticket_lifetime` in `krb5.conf`.
If an HDFS fs handle in the cache has lived longer than HALF of this time, it will be marked invalid and recreated,
and the kerberos ticket will be renewed.
2023-07-04 17:13:34 +08:00
9adbca685a
[opt](hudi) use spark bundle to read hudi data ( #21260 )
...
Use spark-bundle instead of hive-bundle to read hudi data.
**Advantages** of using spark-bundle to read hudi data:
1. The performance of spark-bundle is more than twice that of hive-bundle.
2. spark-bundle uses `UnsafeRow`, which reduces data copying and JVM GC time.
3. spark-bundle supports `Time Travel`, `Incremental Read`, and `Schema Change`; these functions can be quickly ported to Doris.
**Disadvantages** of using spark-bundle to read hudi data:
1. More dependencies make hudi-dependency.jar very cumbersome (from 138M to 300M).
2. spark-bundle provides only an `RDD` interface and cannot be used directly.
2023-07-04 17:04:49 +08:00
90dd8716ed
[refactor](multicast) change the way multicast do filter, project and shuffle ( #21412 )
...
Co-authored-by: Jerry Hu <mrhhsg@gmail.com >
1. Filtering is done at the sending end rather than the receiving end
2. Projection is done at the sending end rather than the receiving end
3. Each sender can use different shuffle policies to send data
2023-07-04 16:51:07 +08:00
09f414e0f4
fix lru cache handle field order ( #21435 )
...
For LRUHandle, all fields must be placed ahead of key_data.
The LRUHandle is allocated using malloc, and the bytes starting at the key_data field hold the key data.
2023-07-04 16:10:05 +08:00
b5da3f74f5
[improvement](join) avoid unnecessary copying in _build_output_block ( #21360 )
...
If the source columns are mutually exclusive within a temporary block, there is no need to duplicate the data.
2023-07-04 12:13:49 +08:00
b86dd11a7d
[fix](pipeline) refactor olap table sink close ( #20771 )
...
For pipeline, olap table sink close is divided into three stages: try_close() --> pending_finish() --> close().
Only after all node channels are done or canceled does pending_finish() return false and close() start.
This avoids blocking the pipeline on close().
In close(), the index channel's intolerable-failure status is checked after each node channel failure;
if there is an intolerable failure, close() terminates early and all node channels are canceled to avoid meaningless blocking.
2023-07-04 11:27:51 +08:00
b1c16b96d6
[refactor](load) move validator out of VOlapTableSink ( #21460 )
2023-07-04 10:16:56 +08:00
938c0765cd
[improvement](memory) improve inserting sparse rows into string column ( #21420 )
...
For the following test, which simulates a hash join outputting 435699854 rows from 5131 building rows:
```
{
    auto col = doris::vectorized::ColumnString::create();
    constexpr int build_rows = 5131;
    constexpr int output_rows = 435699854;
    std::string str("01234567");
    for (int i = 0; i < build_rows; ++i) {
        col->insert_data(str.data(), str.size());
    }
    // heap-allocated: an index array this large would overflow the stack
    std::vector<int> indices(output_rows);
    for (int i = 0; i < output_rows; ++i) {
        indices[i] = i % build_rows;
    }
    auto col2 = doris::vectorized::ColumnString::create();
    doris::MonotonicStopWatch watch;
    watch.start();
    col2->insert_indices_from(*col, indices.data(), indices.data() + output_rows);
    watch.stop();
    LOG(WARNING) << "string column insert_indices_from, rows: " << output_rows
                 << ", time: " << doris::PrettyPrinter::print(watch.elapsed_time(), doris::TUnit::TIME_NS);
}
```
The ColumnString::insert_indices_from insertion time improves from 6s665ms to 3s158ms:
```
W0702 23:08:39.672044 1277989 doris_main.cpp:545] string column insert_indices_from, rows: 435699854, time: 3s153ms
W0702 23:09:36.368853 1282061 doris_main.cpp:545] string column insert_indices_from, rows: 435699854, time: 3s158ms
W0703 00:30:26.093307 1468640 doris_main.cpp:545] string column insert_indices_from, rows: 435699854, time: 6s761ms
W0703 00:31:21.043638 1472937 doris_main.cpp:545] string column insert_indices_from, rows: 435699854, time: 6s665ms
```
2023-07-04 09:34:10 +08:00
790b771a49
[improvement](execute) Eliminate virtual function calls when serializing and deserializing aggregate functions ( #21427 )
...
Eliminate virtual function calls when serializing and deserializing aggregate functions.
For example, in the AggregateFunctionUniq::deserialize_and_merge method, calling read_pod_binary(ref, buf) in the for loop generates a large number of virtual function calls:
```
void deserialize_and_merge(AggregateDataPtr __restrict place, BufferReadable& buf,
                           Arena* arena) const override {
    auto& set = this->data(place).set;
    UInt64 size;
    read_var_uint(size, buf);
    set.rehash(size + set.size());
    for (size_t i = 0; i < size; ++i) {
        KeyType ref;
        read_pod_binary(ref, buf);
        set.insert(ref);
    }
}

template <typename Type>
void read_pod_binary(Type& x, BufferReadable& buf) {
    buf.read(reinterpret_cast<char*>(&x), sizeof(x));
}
```
BufferReadable has only one subclass, VectorBufferReader, so it is better to use the concrete class directly.
The following SQL was tested on the SSB-flat dataset:
```
SELECT COUNT(DISTINCT lo_partkey), COUNT(DISTINCT lo_suppkey) FROM lineorder_flat;
```
before: MergeTime: 415.398ms
after opt: MergeTime: 174.660ms
2023-07-04 09:26:37 +08:00
f7c724f8a3
[Bug](execution) avoid core dump on filter_block_internal and add debug information ( #21433 )
...
avoid core dump on filter_block_internal and add debug information
2023-07-03 18:10:30 +08:00
7e02566333
[fix](pipeline) fix coredump caused by uncaught exception ( #21387 )
...
For the pipeline engine, ExecNode::create_tree may sometimes throw an exception, e.g. SELECT MIN(-3.40282347e+38) FROM t0; will throw an exception because of invalid decimal precision.
*** Query id: 346886bf48494e77-96eeea5361233618 ***
*** Aborted at 1688101183 (unix time) try "date -d @1688101183" if you are using GNU date ***
*** Current BE git commitID: 2fcb0e090b ***
*** SIGABRT unknown detail explain (@0x13ef42) received by PID 1306434 (TID 1306918 OR 0x7ff0763e1700) from PID 1306434; stack trace: ***
terminate called recursively
0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at /home/zcp/repo_center/doris_master/doris/be/src/common/signal_handler.h:413
1# 0x00007FFA8780E090 in /lib/x86_64-linux-gnu/libc.so.6
2# raise at ../sysdeps/unix/sysv/linux/raise.c:51
3# abort at /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:81
4# __gnu_cxx::__verbose_terminate_handler() [clone .cold] at ../../../../libstdc++-v3/libsupc++/vterminate.cc:75
5# __cxxabiv1::__terminate(void (*)()) at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:48
6# 0x000055B6C30C7401 in /mnt/ssd01/doris-master/VEC_ASAN/be/lib/doris_be
7# 0x000055B6C30C7554 in /mnt/ssd01/doris-master/VEC_ASAN/be/lib/doris_be
8# doris::vectorized::create_decimal(unsigned long, unsigned long, bool) at /home/zcp/repo_center/doris_master/doris/be/src/vec/data_types/data_type_decimal.cpp:167
9# doris::vectorized::DataTypeFactory::create_data_type(doris::TypeDescriptor const&, bool) at /home/zcp/repo_center/doris_master/doris/be/src/vec/data_types/data_type_factory.cpp:185
10# doris::vectorized::AggFnEvaluator::AggFnEvaluator(doris::TExprNode const&) at /home/zcp/repo_center/doris_master/doris/be/src/vec/exprs/vectorized_agg_fn.cpp:79
11# std::unique_ptr<doris::vectorized::AggFnEvaluator, std::default_delete<doris::vectorized::AggFnEvaluator> > doris::vectorized::AggFnEvaluator::create_unique<doris::TExprNode const&>(doris::TExprNode const&) at /home/zcp/repo_center/doris_master/doris/be/src/vec/exprs/vectorized_agg_fn.h:49
12# doris::vectorized::AggFnEvaluator::create(doris::ObjectPool*, doris::TExpr const&, doris::TSortInfo const&, doris::vectorized::AggFnEvaluator**) at /home/zcp/repo_center/doris_master/doris/be/src/vec/exprs/vectorized_agg_fn.cpp:92
13# doris::vectorized::AggregationNode::init(doris::TPlanNode const&, doris::RuntimeState*) at /home/zcp/repo_center/doris_master/doris/be/src/vec/exec/vaggregation_node.cpp:158
14# doris::ExecNode::create_tree_helper(doris::RuntimeState*, doris::ObjectPool*, std::vector<doris::TPlanNode, std::allocator<doris::TPlanNode> > const&, doris::DescriptorTbl const&, doris::ExecNode*, int*, doris::ExecNode**) at /home/zcp/repo_center/doris_master/doris/be/src/exec/exec_node.cpp:276
15# doris::ExecNode::create_tree(doris::RuntimeState*, doris::ObjectPool*, doris::TPlan const&, doris::DescriptorTbl const&, doris::ExecNode**) at /home/zcp/repo_center/doris_master/doris/be/src/exec/exec_node.cpp:231
16# doris::pipeline::PipelineFragmentContext::prepare(doris::TPipelineFragmentParams const&, unsigned long) at /home/zcp/repo_center/doris_master/doris/be/src/pipeline/pipeline_fragment_context.cpp:253
17# doris::FragmentMgr::exec_plan_fragment(doris::TPipelineFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&)::$_1::operator()(int) const at /home/zcp/repo_center/doris_master/doris/be/src/runtime/fragment_mgr.cpp:895
18# doris::FragmentMgr::exec_plan_fragment(doris::TPipelineFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&)::$_0::operator()() const at /home/zcp/repo_center/doris_master/doris/be/src/runtime/fragment_mgr.cpp:926
19# void std::__invoke_impl<void, doris::FragmentMgr::exec_plan_fragment(doris::TPipelineFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&)::$_0&>(std::__invoke_other, doris::FragmentMgr::exec_plan_fragment(doris::TPipelineFragmentParams const&, std::function<void (doris::RuntimeState*, doris::Status*)> const&)::$_0&) at /var/local/ldb_toolchain/bin/../lib/gcc/x86_64-linux-gnu/11/../../../../include/c++/11/bits/invoke.h:61
2023-07-03 10:58:13 +08:00
a3d34e1e08
[decimalv2](compatibility) add config to allow invalid decimalv2 literal ( #21327 )
2023-07-03 10:55:27 +08:00
59c1bbd163
[Feature](materialized view) support query match mv with agg_state on nereids planner ( #21067 )
...
* support creating an mv containing an agg_state column
* support query match mv with agg_state on nereids planner
2023-07-03 10:19:31 +08:00
f90e8fcb26
[Chore](storage) add debug info for TabletColumn::get_aggregate_function ( #21408 )
2023-07-03 10:02:44 +08:00
ca0953ea51
[improvement](join) Serialize build keys in a vectorized (columnar) way ( #21361 )
...
There is a significant performance improvement in serializing keys in the aggregate node through vectorization. Now, applying it to the join node also brings performance improvement.
2023-07-03 09:29:10 +08:00
1c961f2272
[refactor](load) move generate_delete_bitmap from memtable to beta rowset writer ( #21329 )
2023-07-01 17:22:45 +08:00
4ad3a7a8de
[fix](exec) run exec_plan_fragment in pthread to avoid BE crash ( #21343 )
...
If there is only one fragment of a query plan, FE will call `exec_plan_fragment` rpc to BE.
And on BE side, the `exec_plan_fragment()` will be executed directly in bthread, but it may call
some JNI method like `AttachCurrentThread()`, which will return error in bthread.
So I modify the `exec_plan_fragment` to make sure it will be executed in pthread pool.
2023-07-01 12:29:22 +08:00
1fe04b7242
[Chore](metrics) remove trace metrics code using runtime profile instead ( #21394 )
2023-07-01 12:18:23 +08:00