Jemalloc dirty pages are reclaimed only with madvise MADV_FREE, so the memory is not released back to the system and RSS does not drop in time.
Therefore, when the process memory exceeds the limit or the available system memory is insufficient,
manually convert dirty pages to muzzy pages, which calls MADV_DONTNEED to release the physical memory back to the system.
https://jemalloc.net/jemalloc.3.html#opt.dirty_decay_ms
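A minimal sketch of triggering such a purge through jemalloc's mallctl interface (assuming jemalloc >= 5.x and the unprefixed mallctl symbol; a build with a je_ prefix would call je_mallctl instead):

```cpp
#include <jemalloc/jemalloc.h>
#include <cstdio>

// Ask every arena to purge its unused dirty/muzzy pages so the physical
// memory is returned to the OS instead of waiting for lazy reclamation.
void purge_all_arenas() {
    char cmd[64];
    // MALLCTL_ARENAS_ALL addresses all arenas at once.
    std::snprintf(cmd, sizeof(cmd), "arena.%u.purge",
                  static_cast<unsigned>(MALLCTL_ARENAS_ALL));
    if (mallctl(cmd, nullptr, nullptr, nullptr, 0) != 0) {
        std::fprintf(stderr, "jemalloc purge failed\n");
    }
}
```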
Only switch to the bthread-local mem tracker when modifying the mem tracker in the thread context; no longer switch to bthread local by default when a bthread starts.
The mem tracker now also counts brpc IOBufBlockMemory memory.
Remove the thread mem tracker metrics.
Expect the vm overcommit_memory value to be 1, so the system no longer fails allocations and memory allocations are always accepted;
the memory limit check is handed over to the Doris Allocator, which keeps the position where the exception is thrown controllable;
otherwise bad_alloc could be thrown anywhere and it would be difficult to achieve exception safety.
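The idea in a minimal sketch (illustrative only, not the actual Doris Allocator API; the limit and counter names below are made up): the check happens inside the allocator, so bad_alloc is thrown from exactly one controlled place.

```cpp
#include <atomic>
#include <cstdlib>
#include <new>

// Hypothetical process-wide accounting standing in for the real mem tracker.
constexpr size_t kProcessMemLimit = 8ULL << 30;  // assume an 8 GB limit
std::atomic<size_t> g_process_used{0};

void* limited_alloc(size_t size) {
    if (g_process_used.load(std::memory_order_relaxed) + size > kProcessMemLimit) {
        throw std::bad_alloc();  // the single, controlled throw point
    }
    void* ptr = std::malloc(size);
    if (ptr != nullptr) {
        g_process_used.fetch_add(size, std::memory_order_relaxed);
    }
    return ptr;
}

void limited_free(void* ptr, size_t size) {
    std::free(ptr);
    g_process_used.fetch_sub(size, std::memory_order_relaxed);
}
```

Query/load execution can then wrap its work in try/catch and cancel the task on bad_alloc instead of letting the process crash.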
Support GC of the inverted index cache when there is not enough memory.
Previous problem: the inverted index caches (InvertedIndexSearcherCache and InvertedIndexQueryCache) may use 20% of memory that cannot be released.
In MoW, the primary key cache has a big impact on load performance, so we add a new cache type to separate
it from the page cache and make it more flexible in some cases.
Fix the mem_limit default value.
Rename memory_gc_sleep_time_s to memory_gc_sleep_time_ms.
LoadChannelMgr::_handle_mem_exceed_limit: change process_mem_limit to the process soft mem limit.
Fix the query mem tracker printout.
With the default config of 90%, BE may hit OOM when the load pressure is high.
When set to 80%, BE works well under the same load pressure in my cluster.
Currently there are some useless includes in the codebase. We can use a tool named include-what-you-use to optimize these includes; a strict include-what-you-use policy brings benefits such as faster compilation and clearer dependencies between files.
Fix the MacOS mem_limit parse result being 0.
Fix: run GC after env init; otherwise, when memory is insufficient, BE will fail to start.
```
*** Query id: 0-0 ***
*** Aborted at 1677833773 (unix time) try "date -d @1677833773" if you are using GNU date ***
*** Current BE git commitID: 8ee5f45 ***
*** SIGSEGV address not mapped to object (@0x70) received by PID 24145 (TID 0x7fa53c9fd700) from PID 112; stack trace: ***
0# doris::signal::(anonymous namespace)::FailureSignalHandler(int, siginfo_t*, void*) at be/src/common/signal_handler.h:420
1# os::Linux::chained_handler(int, siginfo*, void*) in /usr/local/jdk/jre/lib/amd64/server/libjvm.so
2# JVM_handle_linux_signal in /usr/local/jdk/jre/lib/amd64/server/libjvm.so
3# signalHandler(int, siginfo*, void*) in /usr/local/jdk/jre/lib/amd64/server/libjvm.so
4# 0x00007FA56295A400 in /lib64/libc.so.6
5# doris::MemTrackerLimiter::log_process_usage_str(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) at be/src/runtime/memory/mem_tracker_limiter.cpp:208
6# doris::MemTrackerLimiter::print_log_process_usage(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) at be/src/runtime/memory/mem_tracker_limiter.cpp:226
7# doris::Daemon::memory_maintenance_thread() at be/src/common/daemon.cpp:245
8# doris::Thread::supervise_thread(void*) at be/src/util/thread.cpp:455
9# start_thread in /lib64/libpthread.so.0
10# clone in /lib64/libc.so.6
```
Modify the default value of mem_limit to auto. auto means the process mem limit is equal to max(physical mem * 0.9, physical mem - 6.4G);
6.4G is the maximum memory reserved for the system.
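For example, assuming the formula above: a 16G machine gets max(16G * 0.9, 16G - 6.4G) = 14.4G, while a 128G machine gets max(128G * 0.9, 128G - 6.4G) = 121.6G, i.e. at most 6.4G is reserved for the system on large machines.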
Fix: Redhat 4.x /proc/meminfo has no MemAvailable, so disable using MemAvailable to control memory there.
Record vm_rss_str and mem_available_str when GC is triggered, to avoid memory changes during GC making the logs inaccurate.
Catch bad_alloc in the join probe, which may allocate 64G of memory at a time, to avoid OOM.
Modify the doris_be_all_segments_num and doris_be_all_rowsets_num names in the documentation.
1. When the process memory is insufficient, print the process memory statistics in a more timely and detailed manner.
2. Support regularly GC-ing caches; currently only the page cache and chunk allocator are included, because many people reported that memory does not drop after a query ends.
3. Reduce the system available memory warning water mark to reduce memory waste.
4. Optimize soft mem limit logging.
Overcommit memory means that when memory is sufficient, we no longer check whether the memory of a query/load exceeds the exec mem limit.
Instead, when the process memory exceeds the limit or the available system memory is insufficient, the top overcommitted query is canceled in minor GC, and the top memory-consuming query is canceled in full GC.
Previously only queries supported overcommit memory; this PR supports load as well, including insert into and stream load.
For a detailed explanation, I will update the memory document in the next couple of days~
15bd56cd43/docs/zh-CN/docs/admin-manual/maint-monitor/memory-management/memory-limit-exceeded-analysis.md
Add conf enable_query_memroy_overcommit.
If true, query memory is not limited while the process does not exceed the soft mem limit; when the process memory exceeds the soft mem limit, the query with the largest ratio of currently used memory to exec_mem_limit is canceled.
If false, a query is canceled when its memory usage exceeds exec_mem_limit, same as before.
When the system MemAvailable is less than the warning water mark, or the memory used by the BE process exceeds the mem soft limit, run minor GC and try to release caches.
When the system MemAvailable is less than the low water mark, or the memory used by the BE process exceeds the mem limit, run full GC: try to release caches, and cancel queries starting from the one with the largest memory usage until mem_limit * 20% of memory has been released.
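A rough sketch of that decision logic (function names and threshold values below are illustrative, not the actual Doris daemon code):

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative thresholds; in BE they come from be.conf and /proc/meminfo.
int64_t warning_water_mark = 3LL << 30;
int64_t low_water_mark     = 1LL << 30;
int64_t soft_mem_limit     = 7LL << 30;
int64_t mem_limit          = 8LL << 30;

void minor_gc() { std::puts("minor gc: release caches, cancel top overcommitted queries"); }
void full_gc(int64_t bytes_to_free) {
    std::printf("full gc: release caches, cancel largest queries until %lld bytes freed\n",
                static_cast<long long>(bytes_to_free));
}

void memory_maintenance_step(int64_t sys_mem_available, int64_t proc_mem_used) {
    if (sys_mem_available < low_water_mark || proc_mem_used > mem_limit) {
        full_gc(mem_limit / 5);  // free up to mem_limit * 20%
    } else if (sys_mem_available < warning_water_mark || proc_mem_used > soft_mem_limit) {
        minor_gc();
    }
}
```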
The mem tracker can be logically divided into 4 layers: 1) process, 2) type, 3) query/load/compaction task etc., 4) exec node etc.
Type includes:
```cpp
enum Type {
    GLOBAL = 0,        // Life cycle is the same as the process, e.g. Cache and default Orphan
    QUERY = 1,         // Count the memory consumption of all Query tasks.
    LOAD = 2,          // Count the memory consumption of all Load tasks.
    COMPACTION = 3,    // Count the memory consumption of all Base and Cumulative tasks.
    SCHEMA_CHANGE = 4, // Count the memory consumption of all SchemaChange tasks.
    CLONE = 5,         // Count the memory consumption of all EngineCloneTask. Note: does not include memory for make/release snapshots.
    BATCHLOAD = 6,     // Count the memory consumption of all EngineBatchLoadTask.
    CONSISTENCY = 7    // Count the memory consumption of all EngineChecksumTask.
};
```
Object pointers are no longer saved between each layer, and the values of process and each type are periodically aggregated.
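A toy illustration of that periodic aggregation (names are made up; the real logic lives in the BE mem tracker code): task trackers only bump their own counters, and a background refresh sums them into the type-level value.

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

// Task-level tracker: updated on every alloc/free, keeps no parent pointer.
struct TaskTracker {
    std::atomic<int64_t> consumption{0};
};

// Per-type group: a periodic refresh sums the registered task trackers into
// the type-level counter instead of propagating every consume() upward.
struct TypeTrackerGroup {
    std::mutex lock;
    std::vector<TaskTracker*> trackers;
    std::atomic<int64_t> type_consumption{0};

    void refresh() {
        int64_t sum = 0;
        std::lock_guard<std::mutex> guard(lock);
        for (const auto* t : trackers) {
            sum += t->consumption.load(std::memory_order_relaxed);
        }
        type_consumption.store(sum, std::memory_order_relaxed);
    }
};
```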
Other fix:
In "[fix](memtracker) Fix transmit_tracker null pointer because phmap is not thread safe #13528", I tried to separate the memory that is manually abandoned in a query from the orphan mem tracker. But in actual tests the accuracy of this part of the memory cannot be guaranteed, so it is put back into the orphan mem tracker.
# Proposed changes
This PR fixed lots of issues when building from source on macOS with Apple M1 chip.
## ATTENTION
The job for supporting macOS with Apple M1 chip is too big and there are lots of unresolved issues during runtime:
1. Some errors with memory tracker occur when BE (RELEASE) starts.
2. Some UT cases fail.
...
Temporarily, the following changes are made on macOS to start BE successfully.
1. Disable memory tracker.
2. Use tcmalloc instead of jemalloc.
This PR kicks off the job. Guys who are interested in this job can continue to fix these runtime issues.
## Use case
```shell
./build.sh -j 8 --be --clean
cd output/be/bin
ulimit -n 60000
./start_be.sh --daemon
```
## Something else
It takes around _**10+**_ minutes to build BE (with prebuilt third-parties) on macOS with M1 chip. We will improve the development experience on macOS greatly when we finish the adaptation job.
When the physical memory of the process reaches 90% of the mem limit, trigger the load channel mgr to flush data to disk.
The default value of be.conf mem_limit is changed from 90% to 80%, giving priority to stability.
Fix deadlock between arena_locks in BufferPool::BufferAllocator::ScavengeBuffers and _lock in DebugString.
The mem hook's recording into the tracker cannot guarantee that the final consumption is 0, nor that memory alloc and free are recorded in one-to-one correspondence.
In the life cycle of a memtable from insert to flush, the hook records more memory freed than allocated, resulting in tracker consumption less than 0.
In order to avoid cumulative error in the upper load channel tracker, the memtable tracker's consumption is reset to zero in its destructor.
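One possible shape of that reset, shown as a toy sketch (the types and the parent relationship are assumptions, not the actual MemTable code): whatever residual the memtable tracker still holds, positive or negative, is compensated in the parent before destruction so the load channel tracker does not drift.

```cpp
#include <atomic>
#include <cstdint>

// Toy trackers standing in for the load channel tracker and memtable tracker.
struct ToyTracker {
    std::atomic<int64_t> consumption{0};
    void consume(int64_t bytes) { consumption.fetch_add(bytes); }
};

struct ToyMemTable {
    ToyTracker tracker;        // per-memtable tracker fed by the mem hook
    ToyTracker* load_channel;  // assumed parent that also saw every hook update

    explicit ToyMemTable(ToyTracker* parent) : load_channel(parent) {}

    ~ToyMemTable() {
        // Reset consumption to zero on destruction: the residual may be
        // negative because the hook freed more than it allocated, so the same
        // residual is removed from the parent to cancel the accumulated error.
        int64_t residual = tracker.consumption.exchange(0);
        load_channel->consume(-residual);
    }
};
```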
The tcmalloc/jemalloc allocator cache no longer participates in the mem check as part of the process physical memory,
because new/malloc triggers the mem hook even when the request is served from the tcmalloc/jemalloc allocator cache and may not actually allocate physical memory, so failing the mem check in the hook in that case is not expected.
In addition:
The size of the tcmalloc/jemalloc allocator cache is recorded as a mem tracker whose parent is the process mem tracker, updated every 1s.
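As an illustration of how such a 1s refresh might read allocator statistics from jemalloc ("epoch", "stats.allocated", and "stats.resident" are standard jemalloc mallctl names; treating resident-minus-allocated as the "allocator cache" is an assumption for illustration only):

```cpp
#include <jemalloc/jemalloc.h>
#include <cstddef>
#include <cstdint>

int64_t read_jemalloc_cache_bytes() {
    // Refresh jemalloc's cached statistics snapshot.
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sz);

    size_t allocated = 0, resident = 0;
    sz = sizeof(size_t);
    mallctl("stats.allocated", &allocated, &sz, nullptr, 0);
    mallctl("stats.resident", &resident, &sz, nullptr, 0);

    // Rough proxy for memory held by the allocator but not handed to callers;
    // the tracker under the process mem tracker would be set to this value.
    return static_cast<int64_t>(resident) - static_cast<int64_t>(allocated);
}
```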
Modify the default process mem_limit to 90%, expecting the mem tracker to effectively limit the memory usage of the process.
1. MonoTime/MonoDelta
MonoTime: The MonoTime represents a particular point in time, relative to some fixed but unspecified reference point.
MonoDelta: The MonoDelta class represents an elapsed duration of time, the delta between two MonoTime instances.
2. CountDownLatch
This is a C++ implementation of the Java CountDownLatch
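A minimal sketch of such a latch (the interface is guessed from the Java counterpart, not necessarily the exact API added here): threads block in wait() until count_down() has been called the configured number of times.

```cpp
#include <condition_variable>
#include <mutex>

// Minimal CountDownLatch: wait() blocks until the counter reaches zero.
class SimpleCountDownLatch {
public:
    explicit SimpleCountDownLatch(int count) : _count(count) {}

    void count_down() {
        std::lock_guard<std::mutex> lock(_mu);
        if (_count > 0 && --_count == 0) {
            _cv.notify_all();
        }
    }

    void wait() {
        std::unique_lock<std::mutex> lock(_mu);
        _cv.wait(lock, [this] { return _count == 0; });
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    int _count;
};
```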
* Reduce UT binary size
Almost every module depends on ExecEnv, and ExecEnv contains all
singletons, which makes the UT binary contain all object files.
This patch separates ExecEnv's init and destroy into another file to
avoid other files' dependence on it. Also, status.cc includes debug_util.h, which
depends on tuple.h and tuple_row.h, so I moved get_stack_trace() to
stack_util.cpp to reduce status.cc's dependencies.
I added USE_RTTI=1 when building rocksdb to avoid problems linking librocksdb.a
Issue: #292
* Update