The mem hook record tracker cannot guarantee that the final consumption is 0, nor can it guarantee that memory allocs and frees are recorded in one-to-one correspondence.
Over a memtable's life cycle from insert to flush, the hook may record more freed memory than allocated memory, leaving the tracker's consumption below 0.
To avoid accumulating this error in the upper load channel tracker, the memtable tracker's consumption is reset to zero in the destructor.
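A minimal sketch of the reset, assuming a simple tracker with an atomic consumption counter (the real class differs):
```
#include <atomic>
#include <cstdint>
#include <memory>

// Stand-in for the real tracker class; illustrative only.
struct MemTracker {
    std::atomic<int64_t> consumption{0};
    void release(int64_t bytes) { consumption -= bytes; }
};

class MemTable {
public:
    explicit MemTable(std::shared_ptr<MemTracker> t) : _mem_tracker(std::move(t)) {}
    ~MemTable() {
        // Hook-recorded allocs and frees are not one-to-one, so the tracker
        // can end up non-zero (even negative). Zero it out here so the error
        // does not accumulate in the parent load channel tracker.
        _mem_tracker->release(_mem_tracker->consumption.load());
    }
private:
    std::shared_ptr<MemTracker> _mem_tracker;
};
```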
1. Use rlock instead of wrlock in most logic.
2. Filter stale rowsets' delete bitmaps when saving meta.
3. Add a delete_bitmap lock to handle conflicts between compaction and publish_txn.
Co-authored-by: yixiutt <yixiu@selectdb.com>
In our original design, we calculated the delete bitmap in publish_txn. This operation
cost too much time, because it loads segment data and looks up row keys in previous
rowsets and segments, and publish version tasks must run in order, so it led
to timeouts in publish_txn.
In this PR, we separate the delete bitmap calculation into two parts. One part is
done in memtable flush, so that work can run in parallel. We then calculate the final
delete bitmap in publish_txn: we take the set of rowset ids that should be included and
remove the rowsets that have been compacted. The rowset difference between memtable flush
and publish_txn is very small, so publish_txn becomes very fast. In our test,
publish_txn cost about 10ms.
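A rough sketch of the two-phase split; the names (RowsetId, the set bookkeeping, the phase functions) are illustrative, not Doris's actual API:
```
#include <set>
#include <string>

using RowsetId = std::string;

// Phase 1 (memtable flush, runs in parallel): calculate the delete bitmap
// against the rowsets visible at flush time and return the set of rowsets
// that were covered.
std::set<RowsetId> calc_delete_bitmap_on_flush();

// Phase 2 (publish_txn, runs in order): only the difference between the
// rowsets visible now and those already covered at flush time needs work.
void finish_delete_bitmap_on_publish(const std::set<RowsetId>& covered_on_flush,
                                     const std::set<RowsetId>& visible_on_publish) {
    for (const auto& id : visible_on_publish) {
        if (covered_on_flush.count(id) == 0) {
            // Newly visible rowset: calculate its delete bitmap incrementally.
        }
    }
    for (const auto& id : covered_on_flush) {
        if (visible_on_publish.count(id) == 0) {
            // Rowset was compacted away: drop its bitmap entries.
        }
    }
    // Both differences are usually tiny, which is why publish_txn drops to
    // about 10ms in the test above.
}
```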
Co-authored-by: yixiutt <yixiu@selectdb.com>
1. Added memory leak detection for the `DeltaWriter` and `MemTable` mem trackers.
2. Made the memtable mem tracker virtual, to avoid frequently and recursively consuming the parent tracker.
3. Disabled attaching the memtable tracker to the memtable flush thread, ensuring the memtable mem tracker is completely accurate.
4. Set `memory_verbose_track=false`. At present, frequently switching the thread mem tracker has a performance problem (see the sketch after this list).
- Because the mem tracker lives in thread local storage as a shared_ptr, each switch decrements the atomic use_count of the current tracker's shared_ptr and increments that of the replacement. Frequent changes to the same tracker's shared_ptr from many threads are slow.
- TODO: 1. Reduce unnecessary thread mem tracker switches; 2. Consider using raw pointers for the mem tracker in thread local storage.
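A minimal sketch contrasting the current thread-local shared_ptr switch with the raw-pointer idea from the TODO; names are illustrative:
```
#include <memory>

struct MemTracker {};

thread_local std::shared_ptr<MemTracker> tls_tracker;   // current design
thread_local MemTracker* tls_tracker_raw = nullptr;     // TODO idea

void switch_tracker_shared(const std::shared_ptr<MemTracker>& t) {
    // Atomic use_count traffic: old tracker -1, new tracker +1. Many threads
    // switching to/from the same tracker contend on the same cache line.
    tls_tracker = t;
}

void switch_tracker_raw(MemTracker* t) {
    tls_tracker_raw = t;  // plain store, no shared atomic traffic
}
```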
Introduced in stream-load-vec #9280: BetaRowset enables multi-threaded
memtable flush, and memtable flush calls rowset_writer.add_block, which
writes through the member variable _segment_writer. As a result, multiple
threads can operate on the same segment writer concurrently.
Co-authored-by: yixiutt <yixiu@selectdb.com>
Currently, there are two status codes in BE: one is common/Status.h,
and the other is OLAPStatus in olap/olap_define.h.
OLAPStatus is just an enum type; it is very simple and cannot carry much information.
I will unify these codes into common/Status.
Early Design Documentation: https://shimo.im/docs/DT6JXDRkdTvdyV3G
Implement a new way of memory statistics based on the TCMalloc New/Delete Hook,
MemTracker, and TLS; the goal is that all memory new/delete/malloc/free
in the BE process can be counted.
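As a rough illustration of the hook mechanism, the sketch below uses gperftools' MallocHook API; the process-level counter is a stand-in for the MemTracker plumbing, not the actual implementation:
```
#include <gperftools/malloc_extension.h>
#include <gperftools/malloc_hook.h>

#include <atomic>
#include <cstddef>
#include <cstdint>

static std::atomic<int64_t> g_process_consumption{0};

static void OnNew(const void* /*ptr*/, size_t size) {
    g_process_consumption.fetch_add(static_cast<int64_t>(size),
                                    std::memory_order_relaxed);
}

static void OnDelete(const void* ptr) {
    // The delete hook does not carry a size, so ask TCMalloc for it.
    size_t size = MallocExtension::instance()->GetAllocatedSize(ptr);
    g_process_consumption.fetch_sub(static_cast<int64_t>(size),
                                    std::memory_order_relaxed);
}

void install_memory_hooks() {
    MallocHook::AddNewHook(&OnNew);
    MallocHook::AddDeleteHook(&OnDelete);
}
```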
Modify the implementation of MemTracker:
1. Simplify a lot of useless logic;
2. Added MemTrackerTaskPool as the ancestor of all query and import trackers; it is used to track the local memory usage of all executing tasks;
3. Added a consume/release cache: a consume/release is only triggered once the accumulated memory exceeds the parameter mem_tracker_consume_min_size_bytes (see the sketch after this list);
4. Added a new memory leak detection mode (experimental feature, controlled by the parameter memory_leak_detection): when a MemTracker is destructed, throw an exception if the remaining statistical value exceeds the specified range, and print the accurate statistical value via HTTP;
5. Added a virtual MemTracker whose consume/release is not synced to the parent; it will be used later, when the TCMalloc Hook is introduced, to record specified memory independently;
6. Modified the GC logic: register the buffer cached in DiskIoMgr as a GC function, with more GC functions to be added later;
7. Changed the global root node from Root MemTracker to Process MemTracker, and removed Process MemTracker from exec_env;
8. Modified the macro that detects whether memory has reached the upper limit, modified the parameters and default behavior of creating a MemTracker, modified the error message format in mem_limit_exceeded, extended and applied transfer_to, removed Metric from MemTracker, etc.
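A minimal sketch of the consume/release cache from item 3; all names and the buffering policy are illustrative:
```
#include <atomic>
#include <cstdint>
#include <cstdlib>

static constexpr int64_t mem_tracker_consume_min_size_bytes = 1 << 20;  // e.g. 1MB

struct MemTracker {
    std::atomic<int64_t> consumption{0};
};

struct ThreadMemTrackerCache {
    MemTracker* tracker = nullptr;
    int64_t untracked = 0;  // buffered locally, not yet in the tracker

    void consume(int64_t bytes) {
        untracked += bytes;  // may go negative for frees
        // Only touch the shared atomic once the buffered delta is large
        // enough, so small allocations cause no contention.
        if (std::abs(untracked) >= mem_tracker_consume_min_size_bytes) {
            tracker->consumption.fetch_add(untracked, std::memory_order_relaxed);
            untracked = 0;
        }
    }
};
```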
Modify where MemTracker is used:
1. MemPool adds a constructor that creates a temporary tracker, to avoid a lot of redundant code;
2. Added trackers for global objects such as ChunkAllocator and StorageEngine;
3. Added more fine-grained trackers, such as in ExprContext;
4. RuntimeState removes FragmentMemTracker (the PlanFragmentExecutor mem_tracker), which was previously used to independently track scan memory, and replaces it with _scanner_mem_tracker in OlapScanNode;
5. MemTracker is no longer recorded in ReservationTracker, and ReservationTracker will be removed later;
## Case
In the load process, each tablet has a memtable to hold the incoming data,
and if the data in a memtable grows larger than 100MB, it is flushed to disk as a `segment` file. Then
a new memtable is created to hold the following data.
Assume the table has N buckets (tablets), so the max size of all memtables is `N * 100MB`.
If N is large, this costs too much memory.
So, to limit memory, when the total size of all memtables reaches a threshold (2GB by default), Doris
tries to flush all current memtables to disk (even if their sizes have not reached 100MB).
As a result, a memtable is flushed when its size reaches `2GB/N`, which may be much smaller
than 100MB, producing too many small segment files.
## Solution
When deciding to flush memtables to reduce memory consumption, do NOT flush all memtables, but only part
of them.
For example, say there are 50 tablets (with 50 memtables) and the memory limit is 1GB: when each memtable reaches
20MB, the total size reaches 1GB and a flush occurs.
If I only flush 25 of the 50 memtables, then the next time the total size reaches 1GB, there will be 25 memtables of
size 10MB and 25 memtables of size 30MB. So I can flush the memtables of size 30MB, which is larger
than 20MB.
The main idea is to introduce some jitter into flushing so that memtable sizes stay slightly uneven, ensuring that a flush is only triggered when a memtable is large enough.
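A minimal sketch of this selection, with stand-in types; the real policy may differ (e.g., in how many memtables are picked):
```
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

struct MemTable {
    int64_t memory_usage = 0;
    void flush() {}
};

void reduce_mem_usage(std::vector<std::shared_ptr<MemTable>>& memtables) {
    // Sort descending by size and flush only the larger half. The survivors
    // keep growing, so each flushed memtable ends up closer to its full size.
    std::sort(memtables.begin(), memtables.end(),
              [](const auto& a, const auto& b) {
                  return a->memory_usage > b->memory_usage;
              });
    for (std::size_t i = 0; i < memtables.size() / 2; ++i) {
        memtables[i]->flush();
    }
}
```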
In my test, loading a table with 48 buckets under a 2G mem limit: in the previous version, the average memtable size was 44MB;
after this modification, the average size is 82MB.
1. Make some MemTrackers have a reasonable parent MemTracker instead of the root tracker.
2. Make each MemTracker easy to trace.
3. Add a show level to MemTracker to reduce the number of trackers shown on the web page, giving a way to control how many trackers are displayed there.
In the previous implementation, within a load job,
multiple memtables of the same tablet were written to disk sequentially.
In fact, multiple memtables can be written out of order in parallel;
we only need to ensure that each memtable uses a different segment writer.
Sometimes we want to detect the hotspots of a cluster, for example hot scanned tablets or hot written tablets,
but we have no insight into the tablets in the cluster.
This patch introduces tablet-level metrics to help achieve this. Four metrics are now supported on tablets: `query_scan_bytes`, `query_scan_rows`, `flush_bytes`, `flush_count`.
However, one BE may hold hundreds of thousands of tablets, so I added a parameter to the metrics HTTP request,
and tablet-level metrics are not returned by default.
We make all MemTrackers shared in order to show MemTrackers' real-time consumption on the web.
As follows:
1. Nearly all MemTracker raw pointers become shared_ptr.
2. Use CreateTracker() to create a new MemTracker (so it adds itself to its parent).
3. RowBatch & MemPool still use raw pointers to MemTracker; it is easy to ensure that the RowBatch & MemPool destructors run
before the MemTracker's destructor, so we leave that code unchanged.
4. A MemTracker can use a RuntimeProfile counter to calculate consumption, so that counter needs to be shared
too. We add a shared counter pool to store the shared counters, without changing RuntimeProfile's other counters.
Note that this PR does not change the MemTracker tree structure, so there are still some orphan trackers, e.g. RowBlockV2's MemTracker. If you find that some shared MemTrackers consume little memory but are too time-consuming to track, you can make them orphans; then it is fine to use raw pointers.
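A sketch of the CreateTracker() pattern from item 2, with illustrative details (the actual registration and ownership rules may differ):
```
#include <memory>
#include <mutex>
#include <string>
#include <vector>

class MemTracker {
public:
    // Factory: the child registers itself with its parent, and shared_ptr
    // keeps the tracker alive while the web UI renders a consumption snapshot.
    static std::shared_ptr<MemTracker> CreateTracker(
            const std::string& label, const std::shared_ptr<MemTracker>& parent) {
        auto t = std::shared_ptr<MemTracker>(new MemTracker(label, parent));
        if (parent) {
            std::lock_guard<std::mutex> l(parent->_child_lock);
            parent->_children.push_back(t);  // parent can enumerate children
        }
        return t;
    }

private:
    MemTracker(std::string label, std::shared_ptr<MemTracker> parent)
            : _label(std::move(label)), _parent(std::move(parent)) {}

    std::string _label;
    std::shared_ptr<MemTracker> _parent;
    std::mutex _child_lock;
    // weak_ptr: children own parents, not the other way around, avoiding cycles.
    std::vector<std::weak_ptr<MemTracker>> _children;
};
```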
* Improve comparison and printing of Version
There are two members in `Version`: `first` and `second`.
In many places we need to print a `Version` object or
compare two `Version` objects, but in the current code these two members
are accessed directly, which makes the code very tedious.
This patch mainly does:
1. Adds an overloaded `operator<<()` for `Version`, so
we can print a `Version` object directly;
2. Adds the `contains()` method to determine whether one version range
contains another;
3. Uses `operator==()` to determine whether two `Version` objects are equal.
Because too many places need to be modified, some naked code
remains; it will be modified later.
This patch also removes some unnecessary header file references.
No functional changes in this patch.
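A sketch of the helpers described above, reconstructed from the two members `first` and `second`; the exact semantics (e.g., of containment) are assumptions:
```
#include <cstdint>
#include <iostream>

struct Version {
    int64_t first;
    int64_t second;

    // Whether this version range fully contains the other.
    bool contains(const Version& other) const {
        return first <= other.first && second >= other.second;
    }

    bool operator==(const Version& other) const {
        return first == other.first && second == other.second;
    }
};

std::ostream& operator<<(std::ostream& os, const Version& v) {
    return os << "[" << v.first << "-" << v.second << "]";
}

// Usage: instead of printing v.first and v.second by hand everywhere:
// Version v{0, 5};
// std::cout << v << std::endl;  // prints [0-5]
```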
Currently, HLL types (and OBJECT types) get special treatment.
When loading data, because there is no need to serialize the HLL content
(the upper layer has already done so), we directly save the pointer
to the `HyperLogLog` object in `Slice->data` (at the corresponding `Cell`
of each `Row`) and set `Slice->size` to 0. This differs
from reading the HLL column, where we need to deserialize
the HLL object from the `Slice` object. This leaves us with different
implementations of `copy_row()` for loading and reading.
In the optimization (commit: 177fec8917304e399aa7f3facc4cc4804e72ce8b),
a `copy_row()` call was added before a row is inserted into the
`MemTable`, but the current `copy_row()` treats the HLL column `Cell`
as a normal Slice (i.e. it memcpys the data according to its size).
So this change adds a `copy_object()` method to `TypeInfo`, which is
used to copy the HLL column while loading data.
Note: the ways of copying rows should be unified in the future; at that
point, we can delete the `copy_object()` method.
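A sketch of the `copy_object()` idea with stand-in types; it deep-copies the pointed-to `HyperLogLog` instead of memcpying the zero-sized `Slice`:
```
#include <cstddef>
#include <new>

struct HyperLogLog { /* registers ... */ };
struct Slice {
    char* data = nullptr;
    size_t size = 0;
};
struct MemPool {
    char* allocate(size_t n) { return new char[n]; }
};

// During load the cell's Slice holds a raw HyperLogLog* with size == 0, so a
// plain memcpy of `size` bytes (what copy_row() does for normal Slices)
// would copy nothing. copy_object() deep-copies the pointed-to object.
void copy_object(const void* src_cell, void* dest_cell, MemPool* pool) {
    auto* src = reinterpret_cast<const Slice*>(src_cell);
    auto* dest = reinterpret_cast<Slice*>(dest_cell);
    auto* hll = reinterpret_cast<const HyperLogLog*>(src->data);
    char* mem = pool->allocate(sizeof(HyperLogLog));
    dest->data = reinterpret_cast<char*>(new (mem) HyperLogLog(*hll));
    dest->size = 0;  // keep the load-time convention: pointer, size 0
}
```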
The problem with the current implementation is that all data to be
inserted is counted in memory, but for the aggregation model and
some other special cases, not all data is actually inserted into the `MemTable`,
and that data should not be counted.
This change makes the `SkipList` use an exclusive `MemPool`,
and only data that will be inserted into the `SkipList` uses this
`MemPool`. In other words, discarded rows are not
counted by the `SkipList`'s `MemPool`.
To avoid checking twice whether a row already exists in the
`SkipList`, this change also modifies the `SkipList` interface (a `Hint`
is fetched by `Find()` and then used in `InsertUseHint()`),
and makes the `SkipList` no longer aware of the aggregation logic.
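A sketch of the modified interface, with stand-in types; `Find()` fills a `Hint` that `InsertUseHint()` reuses, so the position is located only once and the copy into the exclusive `MemPool` happens only for rows that are actually inserted:
```
#include <cstddef>

struct Row {};
struct MemPool { Row* copy(const Row& r) { return new Row(r); } };

struct SkipList {
    static constexpr int kMaxHeight = 12;
    struct Node { Row* key; Node* next[kMaxHeight]; };
    struct Hint { Node* curr; Node* prev[kMaxHeight]; };

    bool Find(const Row& key, Hint* hint);          // fills hint; true if an equal key exists
    void InsertUseHint(Row* key, const Hint& hint); // inserts at the hinted position
};

void insert_or_aggregate(SkipList* list, MemPool* skiplist_pool, const Row& row) {
    SkipList::Hint hint;
    if (list->Find(row, &hint)) {
        // Aggregate `row` into hint.curr->key (MAX/SUM/...); this is done by
        // the caller, since SkipList no longer knows about aggregation.
    } else {
        // Copy into the SkipList's exclusive MemPool only now, so discarded
        // rows never consume that pool.
        list->InsertUseHint(skiplist_pool->copy(row), hint);
    }
}
```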
At present, because the data row (`Tuple`) generated by the upper layer
differs from the data row (`Row`) represented internally by the
engine, the data row must be copied when inserted into the `MemTable`.
If the row needs to be inserted into the `SkipList`, we copy it again
into the `SkipList`'s `MemPool`.
Also, the aggregation functions currently only support `MemPool` when
copying, so even if the data will not be inserted into the `SkipList`,
a `MemPool` is still used (in the future, it can be replaced with an
ordinary `Buffer`). However, we reuse the allocated memory in this MemPool,
that is, we do not reallocate new memory every time.
Note: due to the characteristics of `MemPool` (once allocated, memory cannot
be partially freed), the following scenario may still cause extra
flushes. For example, if a string column's aggregation is `MAX`
and the data arrives in ascending order, then every
data row must allocate memory from the `SkipList`'s `MemPool`;
that is, although the old rows in the `SkipList` are discarded,
the memory they occupy is still counted.
I did a test on my development machine using `STREAM LOAD`: a table with
only one tablet, all columns being keys; the original data was
1.1G (9318799 rows), with 377745 rows left after removing duplicates.
Both the number of files and the query efficiency are
greatly improved; the price paid is only a slight increase in load time.
before:
```
$ ll storage/data/0/10019/1075020655/
total 4540
-rw------- 1 dev dev 393152 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.dat
-rw------- 1 dev dev 1135 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.idx
-rw------- 1 dev dev 421660 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.dat
-rw------- 1 dev dev 1185 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.idx
-rw------- 1 dev dev 184214 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.dat
-rw------- 1 dev dev 610 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.idx
-rw------- 1 dev dev 329181 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.dat
-rw------- 1 dev dev 935 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.idx
-rw------- 1 dev dev 343813 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.dat
-rw------- 1 dev dev 985 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.idx
-rw------- 1 dev dev 315364 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.dat
-rw------- 1 dev dev 885 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.idx
-rw------- 1 dev dev 423806 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.dat
-rw------- 1 dev dev 1185 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.idx
-rw------- 1 dev dev 294811 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.dat
-rw------- 1 dev dev 835 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.idx
-rw------- 1 dev dev 403241 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.dat
-rw------- 1 dev dev 1135 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.idx
-rw------- 1 dev dev 350753 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.dat
-rw------- 1 dev dev 860 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.idx
-rw------- 1 dev dev 266966 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.dat
-rw------- 1 dev dev 735 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.idx
-rw------- 1 dev dev 451191 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.dat
-rw------- 1 dev dev 1235 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.idx
-rw------- 1 dev dev 398439 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.dat
-rw------- 1 dev dev 1110 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.idx
{
"TxnId": 16,
"Label": "cd9f8392-dfa0-4626-8034-22f7cb97044c",
"Status": "Success",
"Message": "OK",
"NumberTotalRows": 9318799,
"NumberLoadedRows": 9318799,
"NumberFilteredRows": 0,
"NumberUnselectedRows": 0,
"LoadBytes": 1079581477,
"LoadTimeMs": 46907
}
mysql> select count(*) from xxx_before;
+----------+
| count(*) |
+----------+
| 377745 |
+----------+
1 row in set (0.91 sec)
```
after:
```
$ ll storage/data/0/10013/1075020655/
total 3612
-rw------- 1 dev dev 3328992 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.dat
-rw------- 1 dev dev 8460 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.idx
-rw------- 1 dev dev 350576 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.dat
-rw------- 1 dev dev 985 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.idx
{
"TxnId": 12,
"Label": "88f606d5-8095-4f15-b61d-49b7080c16b8",
"Status": "Success",
"Message": "OK",
"NumberTotalRows": 9318799,
"NumberLoadedRows": 9318799,
"NumberFilteredRows": 0,
"NumberUnselectedRows": 0,
"LoadBytes": 1079581477,
"LoadTimeMs": 48771
}
mysql> select count(*) from xxx_after;
+----------+
| count(*) |
+----------+
| 377745 |
+----------+
1 row in set (0.38 sec)
```