MemTracker reports memory consumption so we can find out which
module consumes more memory, but it only shows the current value. This patch
adds metrics for some large memory consumers, so we can see how much
memory each module consumes over time, which is useful for
troubleshooting OOM problems and tuning configs.
* [Enhance] Make MemTracker more accurate (#5515)
This PR is mainly about:
1. Improve the readability of MemTrackers' names
2. Add the MemTracker of:
* Load
* Compaction
* SchemaChange
* StoragePageCache
* TabletManager
3. Change SchemaChange to a Singleton
* Revise some code based on code review
* Change the name of mem_tracker
* Keep reader_context's lifetime the same as rowset_reader's in schema change
* Change the VLOG notice to LOG(WARNING) in schema change
There is some redundant code for the report task, disk, and tablet in BE, and when FE returns an error report message, there is no warning log showing that the report failed.
Co-authored-by: caiconghui [蔡聪辉] <caiconghui@xiaomi.com>
In the previous broker load, multiple OlapTableSinks would send data to the same LoadChannel,
and because of the lock granularity, LoadChannel could only process these requests serially,
which made it impossible to make full use of cluster resources.
This CL modifies the related locks so that LoadChannel can process these requests in parallel.
In the test, loading 334 million rows (about 20 GB) on 3 nodes went from 9 min to 5 min,
and with a concurrency of 2 enabled it was further reduced to 3 min.
This CL also modifies the profile of the load job.
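A rough sketch of the locking change described above, under assumed class shapes (IndexChannel here and the method layout are illustrative, not the actual LoadChannel code): the channel-wide lock protects only the lookup, while each sub-channel serializes only its own writers, so sinks writing to different indexes can proceed in parallel.
```
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

// Illustrative only; the real LoadChannel/TabletsChannel code differs.
// The point: guard only per-index state with a fine-grained lock so that
// OlapTableSink requests targeting different indexes do not serialize on one
// channel-wide mutex.
class IndexChannel {
public:
    void add_batch(/* const RowBatch& batch */) {
        std::lock_guard<std::mutex> l(_lock);  // protects only this index's writers
        // ... append rows to this index's tablet writers ...
    }

private:
    std::mutex _lock;
};

class LoadChannel {
public:
    void add_batch(int64_t index_id /*, const RowBatch& batch */) {
        IndexChannel* index = nullptr;
        {
            // Short critical section: only the lookup/creation is guarded.
            std::lock_guard<std::mutex> l(_map_lock);
            auto& slot = _index_channels[index_id];
            if (!slot) slot = std::make_unique<IndexChannel>();
            index = slot.get();
        }
        index->add_batch();  // the heavy work runs outside the channel-wide lock
    }

private:
    std::mutex _map_lock;
    std::map<int64_t, std::unique_ptr<IndexChannel>> _index_channels;
};
```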
In version 0.13, we introduced a more efficient compaction logic.
This logic maintains multiple version paths of the tablet,
which avoids -230 errors and also supports incremental clone.
However, the previous incremental clone uses the incremental rowset meta recorded in `incr_rs_meta`.
At present, the incremental rowset meta recorded in `incr_rs_meta` duplicates the records
in `stale_rs_meta`, and the current clone logic does not adapt to the
new multi-version path, so in many cases an incremental clone is not triggered.
This CL mainly:
1. Removes the `incr_rs_meta` metadata
2. Modifies the clone logic: for an incremental clone, it tries to read the rowsets in `stale_rs_meta`
3. Deletes a lot of code that was previously used for version compatibility
At present, the use of VLOG in the code is quite confusing.
Part of it is inherited from Impala's VLOG_XX format, and there is also the VLOG(number) format.
The VLOG(number) format does not follow a unified specification, so this PR standardizes the use of VLOG.
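A minimal sketch of what such a standardization can look like, with named verbosity levels defined on top of glog's VLOG; the concrete macro names and level numbers chosen in the PR may differ:
```
#include <glog/logging.h>

// Illustrative convention only; the actual macro names and level numbers in
// the PR may differ. The goal is to replace ad-hoc VLOG(1)/VLOG(3)/VLOG(10)
// calls with named levels so every module uses the same verbosity scheme.
#define VLOG_CRITICAL VLOG(1)   // rare but important events
#define VLOG_NOTICE   VLOG(3)   // normal progress notices
#define VLOG_DEBUG    VLOG(7)   // detailed debugging output
#define VLOG_TRACE    VLOG(10)  // very verbose, per-row style tracing

void report_progress() {
    VLOG_NOTICE << "tablet report finished";
    VLOG_TRACE << "row-by-row details";
}
```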
For the task of rebalancing tablets among different disks on the same BE,
it may be an effective strategy to ensure that all tablets under the same partition
are evenly distributed across the different disks. Thus, it is necessary to obtain the
distribution of tablets under the same partition across different disks on a BE.
This patch adds a new HTTP interface for BE to acquire the distribution of tablets
under a partition across different disks on the same BE.
There are some long loops and sleeps in unit tests, so it takes a
very long time to run all unit tests, especially in TSAN mode.
This patch speeds up unit tests by shortening long loops and sleeps;
on my environment all unit tests finish in 1 minute. This is useful
for basic functional unit testing.
You can switch modes via a new environment variable
'DORIS_ALLOW_SLOW_TESTS'. For example, to allow the slow tests you can set:
export DORIS_ALLOW_SLOW_TESTS=1
and you can disable them again by setting:
export DORIS_ALLOW_SLOW_TESTS=0
#4996
When BE is restarting, an older tablet may have already been added to the garbage collection queue but not yet deleted.
In this case, since the data_dirs are loaded in parallel, a tablet loaded later may be older than a previously loaded one, which should not be treated as a failure.
It should be noted that the _add_tablet_unlocked() method is also called when creating a new tablet. In that case, the changes in this pull request are not reached, so there is no effect on the tablet creation process.
Add trace for create tablet tasks; it's a useful tool for admins to find
out the bottleneck when creating tablets times out.
For example, an admin could enlarge 'tablet_map_shard_size' when the
'got tablets shard lock' step is found to take too much time.
_tablets_under_clone in TabletManager is not sharded, but the lock
used to prevent concurrent access is sharded, so when the shard size
is not 1, it causes a core dump.
This patch fixes this bug and also does some refactoring to make shard
locks more convenient to use.
* Optimized the read performance of the table when there are multiple versions:
changed the merge method of the unique table to merge the cumulative version data first,
and then merge the result with the base version.
For data with only one base version, read directly without merging.
A large number of small segment files leads to low efficiency for scan operations.
Multiple small files can be merged into a large file by a compaction operation.
So we can take the tablet scan frequency into consideration when selecting a tablet for compaction,
and preferentially compact those tablets which are scanned frequently during the
latest period of time.
Borrowing from Kudu's compaction strategy, the scan frequency of a tablet during the latest
period of time can be calculated and taken into consideration when calculating the compaction score.
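A sketch of how scan frequency might feed into the score; the weight and the exact combination are assumptions for illustration, not the formula used by the patch:
```
#include <cstdint>

// Illustrative only: combine the existing version-based compaction score with
// how often the tablet was scanned in the latest time window. The weight is a
// hypothetical tunable; the patch's actual formula may differ.
double adjusted_compaction_score(double base_score,         // score from accumulated versions
                                 int64_t recent_scan_count, // scans during the latest period
                                 double scan_weight) {      // e.g. a config such as 0.1
    return base_score + scan_weight * static_cast<double>(recent_scan_count);
}
```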
(1) The current implementation logic of the `tablets web page` is:
Firstly, get all the tablets of the BE;
Secondly, return the first tablets to the web page according to the value of the request parameter `limit`.
This patch optimizes this logic so that only the requested number of tablets is fetched from the BE
according to the value of the request parameter `limit` and then returned to the web page.
(2) The HTTP interface `http://be_host:webserver_port/tablets_page` returns 1000 tablets by default.
If the number of tablets on the BE is less than 1000, there will be a `coredump` in the following code:
be/src/http/action/tablets_info_action.cpp
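A minimal sketch of the kind of bounds check that avoids reading past the tablet list when `limit` exceeds the number of tablets; this is illustrative, and the actual fix in tablets_info_action.cpp may look different:
```
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative only: clamp the requested limit to the number of tablets that
// actually exist, so iterating "the first `limit` entries" can never run past
// the end of the vector and crash.
std::vector<std::string> take_front(const std::vector<std::string>& tablets, size_t limit) {
    size_t n = std::min(limit, tablets.size());
    return std::vector<std::string>(tablets.begin(), tablets.begin() + n);
}
```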
Currently, there are M threads doing base compaction and N threads doing cumulative compaction for each disk.
Too many compaction tasks may run out of memory, so the max concurrency of running compaction tasks
is limited by a semaphore.
If the running threads cost too much memory, we cannot defend against it. In addition, reducing concurrency to avoid OOM
will lead to some compaction tasks not being executed in time, and we may then face heavier compactions.
Therefore, a concurrency limit alone is not enough.
The strategy proposed in #3624 may be effective in solving the OOM.
A CompactionPermitLimiter is used to limit compaction, using a single-producer/multi-consumer model.
The producer tries to generate compaction tasks and acquires `permits` for each task.
A compaction task that holds `permits` is executed in the thread pool, and each finished task
releases its `permits`.
`permits` must be acquired before a compaction task can execute. When the sum of `permits`
held by executing compaction tasks reaches a threshold, subsequent compaction tasks are no longer allowed
until some `permits` are released. The tablet compaction score is used as a task's `permits` here.
To some extent, memory consumption can be limited by setting an appropriate `permits` threshold.
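A minimal sketch of the permit mechanism under these assumptions (a blocking request/release pair built on a condition variable; the real CompactionPermitLimiter and its config names may differ):
```
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Minimal sketch, not the actual CompactionPermitLimiter: the producer must
// request() permits (the tablet's compaction score) before submitting a task
// to the thread pool, and each finished task release()s them. When the sum of
// held permits reaches the threshold, request() blocks until enough permits
// are released.
class PermitLimiter {
public:
    explicit PermitLimiter(int64_t threshold) : _threshold(threshold) {}

    void request(int64_t permits) {
        std::unique_lock<std::mutex> l(_lock);
        _cv.wait(l, [&] { return _used + permits <= _threshold; });
        _used += permits;
    }

    void release(int64_t permits) {
        {
            std::lock_guard<std::mutex> l(_lock);
            _used -= permits;
        }
        _cv.notify_all();
    }

private:
    int64_t _threshold;
    int64_t _used = 0;
    std::mutex _lock;
    std::condition_variable _cv;
};
```
A real limiter would also need to let through a single task whose `permits` already exceed the threshold, otherwise such a task could block forever.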
Sometimes we want to detect the hotspots of a cluster, for example hot scanned tablets or hot written tablets,
but we have no insight into the tablets in the cluster.
This patch introduces tablet-level metrics to help achieve this; it currently supports 4 metrics on tablets: `query_scan_bytes`, `query_scan_rows`, `flush_bytes`, `flush_count`.
However, one BE may hold hundreds of thousands of tablets, so I add a parameter to the metrics HTTP request
and do not return tablet-level metrics by default.
A new feature has been added to acquire the tablet id and schema hash of all the tablets on a particular BE node
via a web page, so that more detailed information about each tablet can be obtained based on these
tablet ids and schema hashes. Depending on the web request, there are two ways
(table and JSON) to show the acquired tablet ids and schema hashes on the web page.
Related issue #4017, main changes as follows:
1. Add expired_snapshot_rs_version_map and _expired_snapshot_rs_metas
2. Add VersionedRowsetTracker to record compacted path versions
3. Record the path version when rowsets are compacted
4. In the GC process, add expired snapshot rowsets to the unused set for removal
TabletMeta's _preferred_rowset_type is not initialized after object construction and
may hold a random value, and this field is not updated when creating an ALPHA_ROWSET tablet,
so it will not be serialized into the pb in this case. Therefore, if an ALPHA_ROWSET
tablet is cloned from another BE, the newly created local tablet's _preferred_rowset_type field
may happen to be BETA_ROWSET and cannot be overwritten after the clone; new input
rows will then be written in BETA_ROWSET format, which is not what we expect.
This patch fixes this bug by giving _preferred_rowset_type a default value and updating
this field when creating any type of tablet, and adds a unit test and the related
equality operator overloads.
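A tiny sketch of the shape of the fix; this is illustrative only (the real TabletMeta and the protobuf enum are more involved): give the field an in-class default instead of leaving it uninitialized, and set it explicitly whenever a tablet of any type is created.
```
// Illustrative only, not the real TabletMeta definition.
enum RowsetTypePB { ALPHA_ROWSET = 1, BETA_ROWSET = 2 };

class TabletMeta {
public:
    void set_preferred_rowset_type(RowsetTypePB type) { _preferred_rowset_type = type; }

private:
    // Default value instead of an uninitialized (random) member.
    RowsetTypePB _preferred_rowset_type = ALPHA_ROWSET;
};
```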
* [Fix] Fix bug that rowset meta is deleted after compaction
After compaction, the tablet's rowset meta is modified by
adding the new output rowsets and deleting the old input rowsets.
The output version may equal an input version.
So we should delete the "input" version from _rs_version_map
before adding the "output" version to _rs_version_map. Otherwise,
the new "output" version will be lost from _rs_version_map.
There are no functional changes in this patch.
Key refactor points are:
- Remove the meaningless return values of functions in class Tablet, and
of some related functions in other classes
- Allow RowsetGraph::capture_consistent_versions to accept nullptr
for the output parameter
- Use CHECK instead of LOG(FATAL) to simplify the code
Main refactor points are:
- Use a single get_absolute_tablet_path function instead of 3
independent functions
- Remove the meaningless return values of register_tablet and deregister_tablet
- Some typo and formatting fixes
This PR enhances the performance of txn management tasks. When there are many txns in
BE, the single txn_map_lock plus the additional _txn_locks may cause poor performance, so
we now remove the additional _txn_locks and split the txn_map_lock into many small locks, as sketched below.
It is not possible to insert duplicated transaction ids for a specific tablet, therefore we can use map<TabletInfo, vector<int64_t>> instead of map<TabletInfo, set<int64_t>> for expire_txn_map.
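A rough sketch of the lock split under these assumptions; the shard count, hashing, and map layout are illustrative, not the actual TxnManager:
```
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

// Illustrative only: the single txn_map_lock is replaced by one lock per
// shard, so transactions that hash to different shards never contend.
class ShardedTxnMap {
public:
    static constexpr size_t kShards = 128;  // hypothetical shard count

    void add_related_tablet(int64_t txn_id, int64_t tablet_id) {
        size_t s = shard_of(txn_id);
        std::lock_guard<std::mutex> l(_locks[s]);
        _maps[s][txn_id].push_back(tablet_id);
    }

private:
    size_t shard_of(int64_t txn_id) const { return static_cast<size_t>(txn_id) % kShards; }

    std::mutex _locks[kShards];
    std::unordered_map<int64_t, std::vector<int64_t>> _maps[kShards];
};
```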
Earlier we introduced `BlockManager` to separate the data access logic from the
underlying file read and write logic.
This CL further unifies all `SegmentV2` data access under the `BlockManager`,
removes the previous `FileManager` class, and moves the file cache into the `FileBlockManager`.
There are no logical changes in this CL.
After this CL, all user table data is read and written through the `ReadableBlock` and `WritableBlock`
returned by the `BlockManager`, and no file operations are performed directly.
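A simplified sketch of the layering; these interfaces are deliberately stripped down and are not the real WritableBlock/ReadableBlock signatures:
```
#include <cstddef>
#include <cstdint>
#include <string>

// Deliberately simplified; the real Doris interfaces differ.
class WritableBlock {
public:
    virtual ~WritableBlock() = default;
    virtual void append(const std::string& data) = 0;  // buffered write into the block
    virtual void close() = 0;                          // flush and finalize the block
};

class ReadableBlock {
public:
    virtual ~ReadableBlock() = default;
    virtual void read(uint64_t offset, size_t length, std::string* out) const = 0;
};

// SegmentV2 readers and writers only ever see these interfaces; whether the
// bytes end up in a local file (FileBlockManager) or elsewhere is hidden
// behind the BlockManager that hands out the blocks.
```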
1. Add some comments to make the code easier to understand;
2. Make the metric `create_tablet_requests_failed` accurate;
3. Make some internal methods take raw pointers directly instead of `shared_ptr`;
4. `using` declarations in `.h` files are contagious when the headers are included by other files,
so we should only use them in `.cpp` files;
5. Some formatting changes, such as wrapping lines that are too long;
6. For parameters that need to be modified, use pointers instead of references.
There are no functional changes in this patch.
Currently, the report from BE to FE is completed in the background
threads of `AgentServer` (`report_tablet_thread` and
`report_disk_stat_thread`). These two threads sleep and stay in
a standby state after each report; if there is any need to report
immediately, they are notified and wake up immediately to report.
For example, when a background thread (`disk_monitor_thread`) in
`StorageEngine` finds that some tablets were deleted, it notifies
`AgentServer` to trigger a report immediately.
In the current implementation, in order to report ASAP, a local variable
(`_is_drop_tables`) and two other flags are used to record whether
reporting is needed, and then `StorageEngine::disk_monitor_thread` checks
the value of this variable every time it runs, to determine whether a
report needs to be triggered. This is actually superfluous, and it
may result in untimely notifications, as shown below:
```
(thread_1) (thread_2)
disk-monitor disk-stat-reporter
| |
| reporting
| |
notify_1 |
| |
| wait_for_notify(will wait until timeout or next notification)
| |
V V
```
When `report_tablet_thread` has not yet started waiting,
`StorageEngine::disk_monitor_thread` triggers a notification, so this
notification will not be received by `report_tablet_thread`,
and the BE will not report to the FE until the wait times out
or the next round of `disk_monitor_thread` detection.
This change restructures the triggering implementation and solves the above problem.
This change also makes some methods (that do not need to be public) private.
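A minimal sketch of a trigger that cannot miss a notification; the class and its methods are hypothetical, not the actual restructured code. The request is latched in a flag under the mutex, so a notify that arrives while the worker is still reporting is still seen before the worker goes back to sleep.
```
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical sketch, not the actual Doris report worker: notify() records
// the request even if nobody is currently waiting, and wait() consumes it
// before the next report, so a notification sent during an ongoing report is
// never lost.
class ReportTrigger {
public:
    void notify() {
        {
            std::lock_guard<std::mutex> l(_lock);
            _pending = true;  // remembered even if the worker is busy reporting
        }
        _cv.notify_one();
    }

    // Returns when a report was requested or the regular interval elapsed.
    void wait(std::chrono::seconds interval) {
        std::unique_lock<std::mutex> l(_lock);
        _cv.wait_for(l, interval, [&] { return _pending; });
        _pending = false;  // consume the request before the next report
    }

private:
    std::mutex _lock;
    std::condition_variable _cv;
    bool _pending = false;
};
```
With something like this, the report worker simply loops on wait() followed by the actual report, and the extra flags checked by `disk_monitor_thread` become unnecessary.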