Commit Graph

34 Commits

Author SHA1 Message Date
60fddd56e7 [feature-wip](unique-key-merge-on-write) opt lock and only save valid delete_bitmap (#11953)
1. Use rlock instead of wrlock in most logic.
2. Filter out stale rowsets' delete bitmaps when saving meta.
3. Add a delete_bitmap lock to handle the conflict between compaction and publish_txn (see the sketch after this entry).

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-08-23 14:43:40 +08:00
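
A minimal C++ sketch of the locking pattern this entry describes: a shared (read) lock for the common read paths, the exclusive lock only where the bitmap is mutated, and a dedicated delete_bitmap lock to serialize compaction against publish_txn. The class and method names are hypothetical, not the actual Doris code.

```cpp
#include <cstdint>
#include <map>
#include <mutex>
#include <set>
#include <shared_mutex>

// Hypothetical stand-in for a tablet that owns a delete bitmap.
class TabletLike {
public:
    // Read path: most logic only needs a shared (read) lock on _meta_lock.
    size_t marked_rows(int64_t rowset_id) const {
        std::shared_lock rlock(_meta_lock);
        auto it = _delete_bitmap.find(rowset_id);
        return it == _delete_bitmap.end() ? 0 : it->second.size();
    }

    // Write path: compaction and publish_txn both mutate the bitmap, so they
    // serialize on a dedicated lock instead of holding the meta wrlock for long.
    void mark_deleted(int64_t rowset_id, uint32_t row_id) {
        std::lock_guard dlock(_delete_bitmap_lock);
        std::unique_lock wlock(_meta_lock);
        _delete_bitmap[rowset_id].insert(row_id);
    }

private:
    mutable std::shared_mutex _meta_lock;                   // read-mostly metadata lock
    std::mutex _delete_bitmap_lock;                         // serializes compaction vs publish_txn
    std::map<int64_t, std::set<uint32_t>> _delete_bitmap;   // rowset_id -> deleted row ids
};

int main() {
    TabletLike t;
    t.mark_deleted(/*rowset_id=*/1, /*row_id=*/42);
    return t.marked_rows(1) == 1 ? 0 : 1;
}
```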
0a5fd99d02 [feature-wip](unique-key-merge-on-write) speed up publish_txn (#11557)
In the original design, we calculated the delete bitmap in publish_txn, and this
operation cost too much time because it had to load segment data and look up row
keys in the previous rowsets and segments. Since publish version tasks must run
in order, this could lead to timeouts in publish_txn.

In this PR, we split the delete_bitmap calculation into two parts. The first part is
done while flushing the memtable, so it can run in parallel. We then compute the
final delete_bitmap in publish_txn: take the rowset_id set that should be included
and remove the rowsets that have been compacted. The rowset difference between
memtable flush and publish_txn is very small, so publish_txn becomes very fast;
in our test it takes about 10 ms (see the sketch after this entry).

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-08-08 18:57:55 +08:00
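
A rough sketch of the publish-time step described above, assuming the per-rowset delete-bitmap work was already done at memtable flush: only the (small) difference between the rowset set seen at flush and the rowset set that should be included at publish still needs processing. All names here are illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

using RowsetIdSet = std::set<int64_t>;

// Rowsets that still need delete-bitmap calculation at publish time:
// those that should be included now but were not seen at flush time
// (e.g. because they appeared or were rewritten by compaction since).
std::vector<int64_t> rowsets_to_recalculate(const RowsetIdSet& seen_at_flush,
                                            const RowsetIdSet& included_at_publish) {
    std::vector<int64_t> diff;
    std::set_difference(included_at_publish.begin(), included_at_publish.end(),
                        seen_at_flush.begin(), seen_at_flush.end(),
                        std::back_inserter(diff));
    return diff;
}

int main() {
    RowsetIdSet at_flush = {1, 2, 3, 4};
    RowsetIdSet at_publish = {2, 3, 4, 5};   // rowset 1 was compacted away, 5 is new
    for (int64_t id : rowsets_to_recalculate(at_flush, at_publish)) {
        std::cout << "recalculate delete bitmap against rowset " << id << "\n";
    }
    return 0;
}
```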
f730a048b1 [feature-wip](load) Support single replica load (#10298)
During the load process, the same operations, such as sort and aggregation, are performed on all replicas,
which is resource-intensive.
Concurrent data loads would consume a lot of CPU and memory.
It is better to perform the write process (writing data into the MemTable and then flushing it) on a single replica
and synchronize the data files to the other replicas before the transaction finishes (see the sketch after this entry).
2022-08-02 11:44:18 +08:00
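
A highly simplified sketch of the single-replica load idea, with made-up types: only the primary replica runs the expensive write path, and the other replicas just receive the produced segment files before the transaction finishes.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical placeholders for the load pipeline.
struct Replica { std::string host; bool is_primary = false; };

std::vector<std::string> write_and_flush_on_primary() {
    // Sort, aggregate, write the MemTable and flush segment files -- CPU/memory heavy.
    return {"segment_0.dat", "segment_1.dat"};
}

void sync_segments(const Replica& peer, const std::vector<std::string>& files) {
    // In the real system this would transfer the files over the network.
    for (const auto& f : files) {
        std::cout << "send " << f << " to " << peer.host << "\n";
    }
}

int main() {
    std::vector<Replica> replicas = {{"be1", true}, {"be2"}, {"be3"}};
    auto segments = write_and_flush_on_primary();          // only on the primary replica
    for (const auto& r : replicas) {
        if (!r.is_primary) sync_segments(r, segments);     // peers just receive files
    }
    // ... then the transaction can be finished on all replicas.
    return 0;
}
```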
01e108cb7b [feature-wip](unique-key-merge-on-write) update delete bitmap while publish version (#11195)
1. Make version publish work in version order.
2. Update the delete bitmap while publishing a version: load the primary keys of the
current version's rowset and search them in the previous rowsets.
3. Speed up the publish version task by publishing tablets in parallel (see the sketch after this entry).

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-07-27 16:26:42 +08:00
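
A small sketch of point 1, publishing versions strictly in order: out-of-order publish tasks are buffered and drained only when the next expected version arrives. The OrderedPublisher class is hypothetical.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>

// Buffer publish tasks and apply them strictly in version order, which is
// needed before version N's delete bitmap can be updated against N-1.
class OrderedPublisher {
public:
    explicit OrderedPublisher(int64_t start_version) : _next_version(start_version) {}

    void submit(int64_t version, std::function<void()> publish_task) {
        _pending[version] = std::move(publish_task);
        // Drain every task whose version is the next expected one.
        while (!_pending.empty() && _pending.begin()->first == _next_version) {
            _pending.begin()->second();
            _pending.erase(_pending.begin());
            ++_next_version;
        }
    }

private:
    int64_t _next_version;
    std::map<int64_t, std::function<void()>> _pending;
};

int main() {
    OrderedPublisher p(/*start_version=*/3);
    p.submit(5, [] { std::cout << "publish v5\n"; });  // buffered, v3 not done yet
    p.submit(3, [] { std::cout << "publish v3\n"; });  // runs v3, but not v5 (v4 missing)
    p.submit(4, [] { std::cout << "publish v4\n"; });  // runs v4, then v5
    return 0;
}
```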
89e56ea67f [refactor] remove alpha rowset related code and vectorized row batch related code (#10584) 2022-07-05 20:33:34 +08:00
c9961c9bb9 [style] clang-format all c++ code (#9305)
- Run `sh build-support/clang-format.sh` to clang-format all C++ code
2022-04-29 16:14:22 +08:00
e5e0dc421d [refactor] Change ALL OLAPStatus to Status (#8855)
Currently there are two status types in the BE: one is common/Status.h,
and the other, called OLAPStatus, is in olap/olap_define.h.
OLAPStatus is just an enum type; it is very simple and cannot carry much information.
This PR unifies them on common/Status (see the sketch after this entry).
2022-04-14 11:43:49 +08:00
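
A toy illustration of the motivation: an enum status can only encode an error code, while a Status object can also carry a message. Both types below are simplified stand-ins, not the real OLAPStatus or common/Status.

```cpp
#include <iostream>
#include <string>

// The old style: an enum can only say *that* something failed.
enum class OLAPStatusLike { OK, ERR_FILE_NOT_FOUND };

// The new style: a status object can also say *why* it failed.
class StatusLike {
public:
    static StatusLike OK() { return StatusLike(); }
    static StatusLike Error(std::string msg) { return StatusLike(std::move(msg)); }
    bool ok() const { return _msg.empty(); }
    const std::string& message() const { return _msg; }

private:
    StatusLike() = default;
    explicit StatusLike(std::string msg) : _msg(std::move(msg)) {}
    std::string _msg;
};

StatusLike open_rowset(const std::string& path) {
    if (path.empty()) return StatusLike::Error("rowset path is empty");
    return StatusLike::OK();
}

int main() {
    auto st = open_rowset("");
    if (!st.ok()) std::cout << "open failed: " << st.message() << "\n";
    return 0;
}
```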
290366787c [refactor] refactor code, replace some file with stl libs (#8759)
1. Replace ConditionVariable with std::condition_variable
2. Replace Mutex with std::mutex
3. Replace MonoTime with std::chrono (see the sketch after this entry)
2022-04-13 09:55:29 +08:00
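
A minimal sketch of the standard-library replacements adopted here: std::mutex, std::condition_variable, and std::chrono's steady_clock in place of the custom Mutex, ConditionVariable, and MonoTime.

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    std::mutex m;                     // instead of a custom Mutex
    std::condition_variable cv;       // instead of ConditionVariable
    bool ready = false;

    std::thread producer([&] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        {
            std::lock_guard<std::mutex> lock(m);
            ready = true;
        }
        cv.notify_one();
    });

    auto start = std::chrono::steady_clock::now();   // instead of MonoTime
    std::unique_lock<std::mutex> lock(m);
    cv.wait_for(lock, std::chrono::seconds(1), [&] { return ready; });
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);
    std::cout << "waited " << elapsed.count() << " ms\n";

    producer.join();
    return 0;
}
```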
c86d469baf [Refactor](storage_engine) Use std::shared_mutex to replace RWMutex (#8387) 2022-03-11 18:14:24 +08:00
26289c28b0 [fix](load)(compaction) Fix NodeChannel coredump bug and modify some compaction logic (#8072)
1. Fix a BE crash caused by destruction order. (close #8058)
2. Add a new BE config `compaction_task_num_per_fast_disk`.

    This config specifies the maximum number of concurrent compaction tasks on a fast disk (typically an SSD),
    so that on high-speed disks we can run more compaction tasks at the same time
    and compact the data as soon as possible (see the sketch after this entry).

3. Avoid frequently selecting unqualified tablets for compaction.
4. Lower some log levels to reduce the size of the BE logs.
5. Fix some clone logic to handle errors correctly.
2022-02-17 10:52:08 +08:00
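
A hedged sketch of how a per-disk-type limit like point 2 could be applied when scheduling compaction; the DataDir struct, the default values, and the scheduling check are illustrative only.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative stand-in for a data directory tracked by the storage engine.
struct DataDir {
    std::string path;
    bool is_ssd = false;
    int running_compaction_tasks = 0;
};

constexpr int compaction_task_num_per_disk = 2;       // slow disks (HDD), illustrative value
constexpr int compaction_task_num_per_fast_disk = 8;  // fast disks (SSD), illustrative value

bool can_schedule_compaction(const DataDir& dir) {
    int limit = dir.is_ssd ? compaction_task_num_per_fast_disk
                           : compaction_task_num_per_disk;
    return dir.running_compaction_tasks < limit;
}

int main() {
    std::vector<DataDir> dirs = {{"/data/hdd0", false, 2}, {"/data/ssd0", true, 2}};
    for (const auto& d : dirs) {
        std::cout << d.path << (can_schedule_compaction(d) ? " can" : " cannot")
                  << " take another compaction task\n";
    }
    return 0;
}
```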
aea3e4e59b [refactor] Remove version hash from BE and related test in BE (#8027) 2022-02-14 09:29:27 +08:00
61af76b8fb [Log] fix log error when commit transaction in txn manager (#5937)
Co-authored-by: weizuo <weizuo@xiaomi.com>
2021-06-06 22:05:40 +08:00
49b2bc39ae [Optimize] Reduce meaningless memory copies (#5748)
Reduce unnecessary memory copies of the rowset_meta protobuf (see the sketch after this entry).
2021-05-05 10:20:09 +08:00
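
A generic illustration of this kind of change: pass the message by const reference (or share it via shared_ptr) instead of by value. RowsetMetaPB below is a stand-in struct, not the real generated protobuf.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Stand-in for the generated RowsetMetaPB protobuf message.
struct RowsetMetaPB {
    std::vector<std::string> segment_keys;  // pretend this is large
};

// Before: taking the message by value copies it on every call.
size_t count_segments_copy(RowsetMetaPB meta) { return meta.segment_keys.size(); }

// After: pass by const reference (or share ownership) to avoid the copy.
size_t count_segments_ref(const RowsetMetaPB& meta) { return meta.segment_keys.size(); }

int main() {
    auto meta = std::make_shared<RowsetMetaPB>();
    meta->segment_keys.assign(1000, std::string(64, 'k'));
    std::cout << count_segments_copy(*meta) << " segments (copied)\n";
    std::cout << count_segments_ref(*meta) << " segments (no copy)\n";
    return 0;
}
```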
d641a26490 [Refactor] Remove boost filesystem (#5579)
* Use std::filesystem instead of boost (see the sketch after this entry)
Co-authored-by: Mingyu Chen <morningman.cmy@gmail.com>
2021-04-08 09:11:59 +08:00
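
A small sketch of the swap: std::filesystem mirrors boost::filesystem closely, so most call sites only change the namespace.

```cpp
#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;  // previously boost::filesystem

int main() {
    fs::path dir = fs::temp_directory_path() / "doris_demo";
    std::error_code ec;
    fs::create_directories(dir, ec);   // was boost::filesystem::create_directories
    std::cout << dir << (fs::exists(dir) ? " created" : " missing") << "\n";
    fs::remove_all(dir, ec);           // was boost::filesystem::remove_all
    return 0;
}
```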
93a4c7efc1 [LOG] Standardize the use of VLOG in code (#5264)
At present, the use of VLOG in the code is quite inconsistent:
some of it follows the VLOG_XX format inherited from Impala, while other code uses the plain VLOG(number) format,
which has no unified specification. This PR standardizes the use of VLOG (see the sketch after this entry).
2021-01-21 12:09:09 +08:00
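
A sketch of the standardization pattern, assuming glog-style VLOG: define named verbosity levels once and use them everywhere instead of ad-hoc VLOG(number) calls. The level names and numbers here are illustrative, not necessarily the ones Doris settled on.

```cpp
#include <glog/logging.h>

// Illustrative named verbosity levels, so code stops mixing bare VLOG(number)
// calls with ad-hoc conventions.
#define VLOG_NOTICE_DEMO VLOG(1)
#define VLOG_DEBUG_DEMO VLOG(5)
#define VLOG_TRACE_DEMO VLOG(10)

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);
    FLAGS_logtostderr = true;
    FLAGS_v = 5;  // enable verbosity levels 1..5
    VLOG_NOTICE_DEMO << "compaction picked tablet 42";  // logged
    VLOG_DEBUG_DEMO << "candidate rowsets: 7";          // logged
    VLOG_TRACE_DEMO << "per-row details";               // suppressed (level 10 > 5)
    return 0;
}
```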
6fedf5881b [CodeFormat] Clang-format cpp sources (#4965)
Clang-format all c++ source files.
2020-11-28 18:36:49 +08:00
03cf9b2a24 [Compaction] Add delayed deletion of rowsets function, fix -230 error. (#4039)
Related issue #4017; the main changes are as follows:
1. Add expired_snapshot_rs_version_map and _expired_snapshot_rs_metas.
2. Add VersionedRowsetTracker to record the versions of compacted paths.
3. Record the path versions when rowsets are compacted.
4. In the GC process, add expired snapshot rowsets to the unused set so they are removed (see the sketch after this entry).
2020-07-19 22:03:59 +08:00
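
A hypothetical sketch of delayed rowset deletion: compacted (stale) rowsets are remembered with a grace period and only handed to GC once that period has passed, instead of being deleted immediately. Names and structure are illustrative.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

using Clock = std::chrono::steady_clock;

struct ExpiredRowset {
    int64_t rowset_id;
    Clock::time_point expire_at;
};

class StaleRowsetTracker {
public:
    // Remember a compacted rowset instead of deleting it right away.
    void add_stale(int64_t rowset_id, std::chrono::seconds ttl) {
        _expired.push_back({rowset_id, Clock::now() + ttl});
    }

    // Called from the GC thread: returns rowsets whose grace period has passed.
    std::vector<int64_t> collect_unused(Clock::time_point now) {
        std::vector<int64_t> unused;
        auto it = _expired.begin();
        while (it != _expired.end()) {
            if (it->expire_at <= now) {
                unused.push_back(it->rowset_id);
                it = _expired.erase(it);
            } else {
                ++it;
            }
        }
        return unused;
    }

private:
    std::vector<ExpiredRowset> _expired;
};

int main() {
    StaleRowsetTracker tracker;
    tracker.add_stale(101, std::chrono::seconds(0));     // grace period already over
    tracker.add_stale(102, std::chrono::seconds(3600));  // still referenced by snapshots
    for (int64_t id : tracker.collect_unused(Clock::now())) {
        std::cout << "gc rowset " << id << "\n";
    }
    return 0;
}
```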
67b0da5652 Fix rowset_meta race condition for commit_txn in TxnManager (#3330) 2020-04-18 18:38:48 +08:00
a5703ef114 [Performance] Support sharding txn_map_lock into more small map locks to make good performance for txn manage task (#3222)
This PR improves the performance of txn management. When there are many txns on a
BE, the single txn_map_lock plus the additional _txn_locks can cause poor performance, so
we remove the additional _txn_locks and split the txn_map_lock into many small locks (see the sketch after this entry).
2020-04-09 22:35:15 +08:00
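
A minimal sketch of the lock-sharding idea: each transaction id hashes to one of many small locks, so unrelated transactions no longer contend on a single txn_map_lock. The shard count and class are made up for illustration.

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

constexpr size_t kTxnShards = 128;  // illustrative shard count

class ShardedTxnMap {
public:
    void add_txn(int64_t txn_id, std::string info) {
        size_t s = shard(txn_id);
        std::unique_lock lock(_locks[s]);   // only this shard is blocked
        _maps[s][txn_id] = std::move(info);
    }

    bool has_txn(int64_t txn_id) {
        size_t s = shard(txn_id);
        std::shared_lock lock(_locks[s]);
        return _maps[s].count(txn_id) > 0;
    }

private:
    static size_t shard(int64_t txn_id) { return static_cast<uint64_t>(txn_id) % kTxnShards; }
    std::array<std::shared_mutex, kTxnShards> _locks;
    std::array<std::map<int64_t, std::string>, kTxnShards> _maps;
};

int main() {
    ShardedTxnMap txns;
    txns.add_txn(1001, "load into tablet 7");
    std::cout << std::boolalpha << txns.has_txn(1001) << "\n";
    return 0;
}
```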
3557b12de5 [Bug] Avoid compacting recently added rowsets (#3271)
This CL fixes #3270 by skipping recently added versions when performing cumulative compaction. A new config named "cumulative_compaction_skip_window_seconds" is added to adjust the time window (see the sketch after this entry).
2020-04-08 18:58:12 +08:00
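
A sketch of the skip-window check under the stated config: versions created within the last cumulative_compaction_skip_window_seconds are left out of the compaction pick. The types are illustrative.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

struct RowsetLike {
    int64_t version;
    std::chrono::system_clock::time_point creation_time;
};

std::vector<RowsetLike> pick_for_cumulative_compaction(
        const std::vector<RowsetLike>& candidates, int skip_window_seconds) {
    auto cutoff = std::chrono::system_clock::now() -
                  std::chrono::seconds(skip_window_seconds);
    std::vector<RowsetLike> picked;
    for (const auto& rs : candidates) {
        if (rs.creation_time <= cutoff) picked.push_back(rs);  // old enough to compact
    }
    return picked;
}

int main() {
    auto now = std::chrono::system_clock::now();
    std::vector<RowsetLike> rowsets = {
        {10, now - std::chrono::seconds(120)},  // old enough
        {11, now - std::chrono::seconds(5)},    // recently added, skipped
    };
    for (const auto& rs : pick_for_cumulative_compaction(rowsets, /*skip_window_seconds=*/30)) {
        std::cout << "compact version " << rs.version << "\n";
    }
    return 0;
}
```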
d110629a5f Optimize performance of TxnManager::build_expire_txn_map (#3269)
It's not possible to insert duplicate transaction ids for a specific tablet, so we can use map<TabletInfo, vector<int64_t>> instead of map<TabletInfo, set<int64_t>> for expire_txn_map (see the sketch after this entry).
2020-04-07 23:54:05 +08:00
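
A tiny illustration of the data-structure change: because duplicate transaction ids cannot appear for a tablet while building the map, a vector value avoids set's ordering and deduplication cost. TabletInfoLike is a simplified stand-in for the real TabletInfo.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

// Simplified stand-in; the real TabletInfo also carries schema hash etc.
struct TabletInfoLike {
    int64_t tablet_id;
    bool operator<(const TabletInfoLike& o) const { return tablet_id < o.tablet_id; }
};

int main() {
    // Duplicates cannot occur while building this map, so a vector is enough:
    // no per-insert ordering or deduplication work.
    std::map<TabletInfoLike, std::vector<int64_t>> expire_txn_map;
    expire_txn_map[{42}].push_back(7001);
    expire_txn_map[{42}].push_back(7002);
    std::cout << "tablet 42 has " << expire_txn_map[{42}].size() << " expired txns\n";
    return 0;
}
```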
08e4035a41 1 (#3134) 2020-03-17 20:11:41 +08:00
42931d22cb [Bug] tablet meta is not updated correctly after compaction (#3098)
This CL tries to fix a potential bug described in issue #3097, but I'm not sure this is the root cause.

Also remove lots of verbose log, and fix a memory leak.
2020-03-14 23:39:11 +08:00
e90170a5d0 Fix bug: map erase in txn_manager (#2705) 2020-01-08 18:53:11 +08:00
5312e840d2 Fix heap-use-after-free in TxnManager::force_rollback_tablet_related_txns (#2435) 2019-12-11 21:49:26 +08:00
c5ce72215d Optimize tablet report with expired transaction. (#2215)
When there are lots of expired transactions on a BE with a large
number of tablets, the report thread can become too slow, because it
has to iterate the whole transaction map for each tablet.

But this is unnecessary. We should first build an expired-transaction
map keyed by tablet id; then, for each tablet, we only need to look up
that map once by tablet id instead of traversing
the whole transaction map (see the sketch after this entry).
2019-11-15 23:03:21 +08:00
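
A sketch of the described approach: build the tablet_id-keyed index of expired transactions once, then each tablet does a single lookup. The types and function names are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

using ExpiredTxnIndex = std::unordered_map<int64_t, std::vector<int64_t>>;

// Build the tablet_id -> expired txn ids index once, instead of scanning
// the whole transaction map for every tablet during report.
ExpiredTxnIndex build_expired_txn_index(
        const std::vector<std::pair<int64_t, int64_t>>& expired /* (tablet_id, txn_id) */) {
    ExpiredTxnIndex index;
    for (const auto& [tablet_id, txn_id] : expired) {
        index[tablet_id].push_back(txn_id);
    }
    return index;
}

int main() {
    auto index = build_expired_txn_index({{1, 100}, {1, 101}, {2, 200}});
    std::vector<int64_t> tablets_to_report = {1, 2, 3};
    for (int64_t tablet_id : tablets_to_report) {
        auto it = index.find(tablet_id);   // a single lookup per tablet
        size_t n = it == index.end() ? 0 : it->second.size();
        std::cout << "tablet " << tablet_id << ": " << n << " expired txns\n";
    }
    return 0;
}
```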
11872d5cf6 Sending clear txn task explicitly after transaction being aborted (#2182) 2019-11-13 11:22:45 +08:00
0e4b3755a2 Refactor txn manager methods (#1950) 2019-10-11 17:16:13 +08:00
e4f3e8fda7 Remove redundant method in rowset meta manager (#1949) 2019-10-10 19:29:59 +08:00
9aa2045987 Refactor alter job (#1695) 2019-09-12 16:31:29 +08:00
6f4feca3dc Add rowset id generator to FE and BE (#1678) 2019-09-02 18:51:31 +08:00
d938f9a6ea Implement the initial version of BetaRowset (#1568) 2019-08-06 10:40:16 +08:00
a88b55e649 Add more logs and metrics to trace the broker load process (#1530)
The operator wants to know when the job is scheduled into the PENDING
and LOADING states, and how long it takes to finish these sub-states.

Also add two metrics on the BE to monitor memtable flushes:
`memtable_flush_total` and `memtable_flush_duration_us` (see the sketch after this entry).
2019-07-23 21:42:44 +08:00
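
An illustrative sketch of the two memtable-flush metrics as plain counters; the real BE registers them in its metrics framework, which is not shown here.

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

// Illustrative counters mirroring the two metrics named above.
std::atomic<int64_t> memtable_flush_total{0};
std::atomic<int64_t> memtable_flush_duration_us{0};

void flush_memtable_demo() {
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(2));  // pretend to flush
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
    memtable_flush_total.fetch_add(1);
    memtable_flush_duration_us.fetch_add(us);
}

int main() {
    flush_memtable_demo();
    std::cout << "memtable_flush_total=" << memtable_flush_total.load()
              << " memtable_flush_duration_us=" << memtable_flush_duration_us.load() << "\n";
    return 0;
}
```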
0d48a3961c Refactor Storage Engine (#1478)
NOTE: This patch modifies all of the Backend's data,
and restarting the BE will take a very long time.
So if you don't want to disturb your production environment,
you should upgrade the Backends one by one.

1. Refactor the BE to clarify the structure of the code.
2. Use a unique id to identify a rowset.
   Naming a rowset with tablet_id and version leads to
   many conflicts among compaction, clone, and restore.
3. Extract a rowset interface to encapsulate rowsets
   with different formats (see the sketch after this entry).
2019-07-15 21:18:22 +08:00
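
A hypothetical sketch of the two design points above: rowsets addressed by a globally unique id rather than tablet_id plus version, and different rowset formats hidden behind one interface. Class names are illustrative.

```cpp
#include <iostream>
#include <memory>
#include <string>

// A rowset is addressed by a unique id, not by tablet_id + version,
// which would collide across compaction, clone and restore.
class RowsetInterface {
public:
    virtual ~RowsetInterface() = default;
    virtual std::string rowset_id() const = 0;   // globally unique id
    virtual std::string format() const = 0;
};

class AlphaRowsetLike : public RowsetInterface {
public:
    explicit AlphaRowsetLike(std::string id) : _id(std::move(id)) {}
    std::string rowset_id() const override { return _id; }
    std::string format() const override { return "alpha"; }
private:
    std::string _id;
};

class BetaRowsetLike : public RowsetInterface {
public:
    explicit BetaRowsetLike(std::string id) : _id(std::move(id)) {}
    std::string rowset_id() const override { return _id; }
    std::string format() const override { return "beta"; }
private:
    std::string _id;
};

int main() {
    std::unique_ptr<RowsetInterface> rs =
            std::make_unique<BetaRowsetLike>("020000000000000001");
    std::cout << rs->format() << " rowset " << rs->rowset_id() << "\n";
    return 0;
}
```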