fix:
Fix a BE crash in `doris::PInternalServiceImpl::_multi_get`, which allocated memory through the Doris Allocator on a thread that had no MemTrackerLimiter attached:

```
F20240411 10:26:06.693233 1368925 thread_context.h:293] __builtin_unreachable, If you crash here, it means that SCOPED_ATTACH_TASK and SCOPED_SWITCH_THREAD_MEM_TRACKER_LIMITER are not used correctly. starting position of each thread is expected to use SCOPED_ATTACH_TASK to bind a MemTrackerLimiter belonging to Query/Load/Compaction/Other Tasks, otherwise memory alloc using Doris Allocator in the thread will crash. If you want to switch MemTrackerLimiter during thread execution, please use SCOPED_SWITCH_THREAD_MEM_TRACKER_LIMITER, do not repeat Attach. Of course, you can modify enable_memory_orphan_check=false in be.conf to avoid this crash.
*** Check failure stack trace: ***
@ 0x562d9b5aa6a6 google::LogMessage::SendToLog()
@ 0x562d9b5a70f0 google::LogMessage::Flush()
@ 0x562d9b5aaee9 google::LogMessageFatal::~LogMessageFatal()
@ 0x562d7ebd1b7e doris::thread_context()
@ 0x562d7ec203b8 Allocator<>::sys_memory_check()
@ 0x562d7ec255a3 Allocator<>::memory_check()
@ 0x562d7ec274a1 Allocator<>::alloc_impl()
@ 0x562d7ec27227 Allocator<>::alloc()
@ 0x562d67a12207 doris::vectorized::PODArrayBase<>::alloc<>()
@ 0x562d67a11fde doris::vectorized::PODArrayBase<>::realloc<>()
@ 0x562d67a11e26 doris::vectorized::PODArrayBase<>::reserve<>()
@ 0x562d77331ee3 doris::vectorized::ColumnVector<>::reserve()
@ 0x562d7e64328e doris::vectorized::ColumnNullable::reserve()
@ 0x562d7ec79a84 doris::vectorized::Block::Block()
@ 0x562d6b86b81b doris::PInternalServiceImpl::_multi_get()
@ 0x562d6b8a4a07 doris::PInternalServiceImpl::multiget_data()::$_0::operator()()
```
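For reference, a minimal sketch of the pattern the check message describes: attach a MemTrackerLimiter at the starting position of the thread and switch (rather than re-attach) during execution. The argument types below are assumptions; the actual macro definitions and overloads live in be/src/runtime/thread_context.h.

```cpp
// Illustrative sketch only -- the argument types are assumptions; see
// be/src/runtime/thread_context.h for the real macro definitions.
#include <memory>

#include "runtime/memory/mem_tracker_limiter.h"
#include "runtime/thread_context.h"

void worker_thread_entry(std::shared_ptr<doris::MemTrackerLimiter> task_tracker,
                         std::shared_ptr<doris::MemTrackerLimiter> other_tracker) {
    // Bind a MemTrackerLimiter at the starting position of the thread.
    // Without this, any allocation made through the Doris Allocator on this
    // thread trips the __builtin_unreachable check above (unless
    // enable_memory_orphan_check=false is set in be.conf).
    SCOPED_ATTACH_TASK(task_tracker);

    // ... allocations here are charged to task_tracker ...

    {
        // To charge a different tracker mid-execution, switch instead of
        // attaching a second time.
        SCOPED_SWITCH_THREAD_MEM_TRACKER_LIMITER(other_tracker);
        // ... allocations here are charged to other_tracker ...
    }
}
```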
Fix a test failure in regression_test/suites/query_p0/group_concat/test_group_concat.groovy. The following query:

```sql
select
    group_concat( distinct b1, '?'), group_concat( distinct b3, '?')
from
    table_group_concat
group by
    b2
```

failed with the exception:

```
lowestCostPlans with physicalProperties(GATHER) doesn't exist in root group
```
The root cause is that NormalizeAggregate pushes the literal '?' down into a slot; AggregateStrategies then treats that slot as a distinct parameter and generates an invalid PhysicalHashAggregate, which is rejected by ChildOutputPropertyDeriver.
I fixed this bug by avoiding pushing literals down to slots in NormalizeAggregate, and by forbidding the generation of a streaming aggregate node when the group-by slots are empty.
Previously, strings_pool was allocated within each tree node. Because the Arena aligns its allocated chunks to at least 4 KB, this allocation was excessively large for a single tree node, so a SubcolumnTree with many nodes wasted a significant amount of memory. Moving strings_pool to the tree itself reduces this waste and improves overall memory efficiency.
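To make the layout change concrete, here is a simplified sketch; the types and fields are illustrative stand-ins, not the actual Doris SubcolumnTree code:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Stand-in for Doris' Arena: every chunk it allocates is at least ~4 KB,
// so an Arena embedded in a small object wastes most of its first chunk.
struct Arena {
    static constexpr size_t kMinChunkSize = 4096;
    // ... chunked allocation omitted ...
};

// Before: each node owned its own strings_pool, so N nodes paid for at
// least N * 4 KB even when each node stored only a few short strings.
struct NodeBefore {
    Arena strings_pool;  // >= 4 KB of chunk space per node
    std::vector<std::unique_ptr<NodeBefore>> children;
};

// After: the pool belongs to the tree and is shared by all nodes, so the
// 4 KB chunk granularity is amortized across the whole SubcolumnTree.
struct NodeAfter {
    std::vector<std::unique_ptr<NodeAfter>> children;
};

struct SubcolumnTreeSketch {
    Arena strings_pool;  // single pool shared by every node's strings
    std::unique_ptr<NodeAfter> root;
};
```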
* The default delete bitmap cache size is 100 MB, which can be insufficient and cause performance issues when the amount of user data is large. To mitigate this, the delete bitmap cache size is now the larger of 5% of total memory and 100 MB, as sketched below.
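A minimal sketch of that sizing rule; the function name and the way total memory is obtained are hypothetical, not the exact Doris code:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// Sketch of the sizing rule described above: the delete bitmap cache
// capacity is the larger of 5% of total memory and 100 MB.
int64_t delete_bitmap_cache_bytes(int64_t total_mem_bytes) {
    const int64_t kMinCacheBytes = 100LL * 1024 * 1024;     // 100 MB floor
    return std::max(total_mem_bytes / 20, kMinCacheBytes);  // 5% of memory
}

int main() {
    // Example: on a 64 GB machine, 5% is about 3.2 GB, well above 100 MB.
    std::cout << delete_bitmap_cache_bytes(64LL * 1024 * 1024 * 1024) << "\n";
}
```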