Commit Graph

69 Commits

Author SHA1 Message Date
f329d33666 [chore](fix) Fix some spelling errors in be's comments. #13452 2022-10-20 08:56:01 +08:00
9e42804298 [feature-wip](unique-key-merge-on-write) support schema change for unique key tables with merge-on-write (#12886) 2022-10-09 11:31:53 +08:00
c55d08fa2f [fix](memtracker) Refactor load channel mem tracker to improve accuracy (#12791)
The mem-hook-based tracker cannot guarantee that the final consumption is 0, nor that memory allocs and frees are recorded in one-to-one correspondence.

Over a memtable's life cycle from insert to flush, the hook records more memory freed than allocated, driving the tracker's consumption below 0.

To avoid accumulating this error in the upper load channel tracker, the memtable tracker's consumption is reset to zero in its destructor.
2022-09-21 20:16:19 +08:00
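As a rough illustration of that reset, here is a minimal C++ sketch; the class and member names are hypothetical, not Doris's actual MemTracker API. Whatever residue the hooks leave behind is zeroed in the destructor so it never accumulates in the parent tracker.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical stand-in for the memtable mem tracker.
class MemTrackerSketch {
public:
    void consume(int64_t bytes) { _consumption.fetch_add(bytes); }
    void release(int64_t bytes) { _consumption.fetch_sub(bytes); }
    int64_t consumption() const { return _consumption.load(); }

    ~MemTrackerSketch() {
        // Hook-based accounting may leave a non-zero (even negative)
        // residue; zero it here so the error never reaches the parent.
        int64_t residue = _consumption.exchange(0);
        (void)residue;  // a real tracker might log or adjust the parent here
    }

private:
    std::atomic<int64_t> _consumption{0};
};
```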
bac58a4774 [feature-wip](unique-key-merge-on-write) fix delete bitmap calculation when flushing memtable (#12668) 2022-09-17 17:04:03 +08:00
554ba40b13 [feature-wip](unique-key-merge-on-write) update delete bitmap when incremental clone (#12364) 2022-09-09 17:03:27 +08:00
60fddd56e7 [feature-wip](unique-key-merge-on-write) opt lock and only save valid delete_bitmap (#11953)
1. use a read lock (rlock) in most logic instead of a write lock (wrlock)
2. filter stale rowsets' delete bitmaps when saving meta
3. add a delete_bitmap lock to handle compaction and publish_txn conflicts

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-08-23 14:43:40 +08:00
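The locking split described above might look roughly like this C++ sketch; all names are illustrative, not the actual Doris classes. Readers share a lock, while compaction and publish_txn serialize on a dedicated delete-bitmap mutex.

```cpp
#include <cstdint>
#include <mutex>
#include <shared_mutex>

class TabletSketch {
public:
    // Read path: many threads may inspect the delete bitmap at once.
    bool is_row_deleted(int64_t row_id) const {
        std::shared_lock rlock(_meta_lock);  // rlock instead of wrlock
        return _delete_bitmap_contains(row_id);
    }

    // Compaction / publish_txn: updates are mutually exclusive.
    void update_delete_bitmap() {
        std::lock_guard guard(_delete_bitmap_lock);  // the dedicated lock
        std::unique_lock wlock(_meta_lock);          // brief exclusive section
        // ... merge the newly calculated bitmap entries ...
    }

private:
    bool _delete_bitmap_contains(int64_t) const { return false; }
    mutable std::shared_mutex _meta_lock;
    std::mutex _delete_bitmap_lock;
};
```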
0a5fd99d02 [feature-wip](unique-key-merge-on-write) speed up publish_txn (#11557)
In our original design, we calculated the delete bitmap in publish_txn, and this operation
cost too much time because it had to load segment data and look up row keys in prior
rowsets and segments. Since publish version tasks must run in order, this led
to timeouts in publish_txn.

In this PR, we separate the delete bitmap calculation into two parts. One part is
done while flushing the memtable, so that work can run in parallel. We then calculate the final
delete bitmap in publish_txn: we take the set of rowset ids that should be included and
remove rowsets that have been compacted. The rowset difference between memtable flush
and publish_txn is very small, so publish_txn becomes very fast. In our test,
publish_txn cost about 10ms.

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-08-08 18:57:55 +08:00
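A minimal sketch of the two-phase idea; the types and function are assumptions, not the actual implementation. The heavy bitmap calculation happens per memtable flush (in parallel), and publish_txn only reconciles the small rowset difference.

```cpp
#include <set>
#include <string>

using RowsetIdSet = std::set<std::string>;

// Phase 1 (at memtable flush, parallel): calculate the delete bitmap
// against the rowsets visible at flush time.
// Phase 2 (publish_txn): only handle rowsets that changed since then.
RowsetIdSet rowsets_to_recalculate(const RowsetIdSet& visible_now,
                                   const RowsetIdSet& seen_at_flush) {
    RowsetIdSet diff;
    for (const auto& id : visible_now) {
        if (seen_at_flush.count(id) == 0) {
            diff.insert(id);  // appeared after flush, e.g. compaction output
        }
    }
    return diff;  // usually tiny, which is why publish_txn becomes fast
}
```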
18864ab7fe weak relationship between MemTracker and MemTrackerLimiter (#11347) 2022-07-30 18:33:54 +08:00
70c7e3d7aa [feature-wip](unique-key-merge-on-write) remove AggType on unique table with MoW, enable preAggregation, DSIP-018[5/2] (#11205)
remove AggType on unique table with MoW, enable preAggregation
2022-07-28 17:03:05 +08:00
b6bdb3bdbc [fix] (mem tracker) Fix MemTracker accuracy (#11190) 2022-07-27 18:59:24 +08:00
8551ceaa1b [Bug][Vectorized] Fix use-after-free bug of memtable shrink (#11197)
Co-authored-by: lihaopeng <lihaopeng@baidu.com>
2022-07-26 16:10:44 +08:00
4960043f5e [enhancement] Refactor to improve the usability of MemTracker (step2) (#10823) 2022-07-21 17:11:28 +08:00
f78db1d773 release memory allocated in agg function in vec stream load (#10739)
release memory allocated in agg function in vec stream load

When a load is cancelled, memory allocated by agg functions should
be freed.
2022-07-16 15:32:53 +08:00
Pxl 4190f7354c [Bug][Memtable] fix core dump on int128 because not aligned to 16 bytes (#10775)
* fix core dump on int128 because values were not aligned to 16 bytes

* update
2022-07-13 08:30:58 +08:00
502ac4e76b [Load][Vectorized] opt the mem use of aggregate function in load to speed up (#10448)
opt the mem use of aggregate function in load to speed up
2022-07-10 13:34:25 +08:00
89e56ea67f [refactor] remove alpha rowset related code and vectorized row batch related code (#10584) 2022-07-05 20:33:34 +08:00
6ad024a2bf [fix] (mem tracker) Refactor memtable mem tracker, fix flush memtable DCHECK failed (#10156)
1. Added memory leak detection for the `DeltaWriter` and `MemTable` mem trackers.
2. Made the memtable mem tracker virtual to avoid frequent recursive consumption of the parent tracker.
3. Disabled attaching the memtable tracker in the memtable flush thread, ensuring that the memtable mem tracker is completely accurate.
4. Changed `memory_verbose_track=false`. At present, frequently switching the thread mem tracker has a performance problem.
      - The mem tracker lives in thread-local storage as a shared_ptr. Each switch decrements the atomic use_count in the current tracker's shared_ptr and increments the replacement's; frequent multi-threaded changes to the same tracker shared_ptr are slow.
      - TODO: 1. Reduce unnecessary thread mem tracker switches. 2. Consider using raw pointers for the mem tracker in thread-local storage.
2022-06-19 16:48:42 +08:00
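The TODO's concern can be shown with a small hypothetical sketch: switching a thread-local shared_ptr costs two atomic use_count updates per switch, while a raw pointer is a plain store, at the price of manual lifetime management.

```cpp
#include <memory>

struct Tracker {};

thread_local std::shared_ptr<Tracker> tls_tracker_shared;
thread_local Tracker* tls_tracker_raw = nullptr;

void switch_tracker(const std::shared_ptr<Tracker>& t) {
    tls_tracker_shared = t;     // two atomic use_count updates per switch
    tls_tracker_raw = t.get();  // plain store; caller must keep t alive
}
```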
Pxl f2aa5f32b8 [Feature] [Vectorized] Some pre-refactorings or interface additions for schema change (#9811)
Some pre-refactorings or interface additions for schema change
2022-06-07 15:04:57 +08:00
c426c2e4b1 [Vectorized-Load] Support vectorized load table with materialized view (#9923)
* [Vectorized-Load] Support vectorized load table with materialized view

* fix ut

Co-authored-by: lihaopeng <lihaopeng@baidu.com>
2022-06-02 14:59:01 +08:00
7199102d7c [Opt][VecLoad] Opt the vec stream load performance (#9772)
Co-authored-by: lihaopeng <lihaopeng@baidu.com>
2022-05-31 11:53:32 +08:00
Pxl 13c1d20426 [Bug] [Vectorized] add padding when loading char type data (#9734) 2022-05-26 16:51:01 +08:00
73e31a2179 [stream-load-vec]: flush memtable only if necessary after aggregation (#9459)
Co-authored-by: weixiang <weixiang06@meituan.com>
2022-05-25 21:12:24 +08:00
73c4ec7167 Fix some typos in be/. (#9681) 2022-05-19 20:55:39 +08:00
c9ab5e22fe [fixbug](vec-load) fix core of segment_writer while it is not thread-safe (#9569)
Introduced in stream-load-vec (#9280): BetaRowset enables multi-threaded
memtable flush, and each memtable flush calls rowset_writer.add_block, which
writes through the shared member variable _segment_writer. This lets multiple
threads operate on the same segment writer, making segment writing
multi-threaded and unsafe.

Co-authored-by: yixiutt <yixiu@selectdb.com>
2022-05-18 11:29:15 +08:00
4cd579b155 [refactor] Check status precise_code instead of constructing OLAPInternalError (#9514)
* check status precise_code instead of constructing OLAPInternalError
* move is_io_error to Status
2022-05-12 15:39:29 +08:00
718a51a388 [refactor][style] Use clang-format to sort includes (#9483) 2022-05-10 21:25:35 +08:00
2ccaa6338c [enhancement](load) optimize load string data and dict page write (#9123)
* [enhancement](load) optimize load string data and dict page write
2022-05-07 10:27:27 +08:00
c9961c9bb9 [style] clang-format all c++ code (#9305)
- run `sh build-support/clang-format.sh` to clang-format all C++ code
2022-04-29 16:14:22 +08:00
d330bc3806 [Vectorized](stream-load-vec) Support stream load in vectorized engine (#8709) (#9280)
Implement vectorized stream load.
Added fe configuration option `enable_vectorized_load` to enable vectorized stream load.

Co-authored-by: tengjp@outlook.com
Co-authored-by: mrhhsg@gmail.com
Co-authored-by: minghong.zhou@163.com
Co-authored-by: HappenLee <happenlee@hotmail.com>
Co-authored-by: zhoubintao <35688959+zbtzbtzbt@users.noreply.github.com>
2022-04-29 09:50:51 +08:00
e5e0dc421d [refactor] Change ALL OLAPStatus to Status (#8855)
Currently, there are two status types in BE: one is common/Status.h,
and the other, in olap/olap_define.h, is called OLAPStatus.
OLAPStatus is just an enum type; it is very simple and cannot carry much information.
I will unify these into common/Status.
2022-04-14 11:43:49 +08:00
519305cb22 [feature-wip] (memory tracker) (step4) Switch TLS mem tracker to separate more detailed memory usage (#8669)
Based on #8605, separate out the memory usage of each operator from the Query/Load/StorageEngine mem trackers.
2022-04-08 09:02:26 +08:00
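Such a TLS switch might look like the following sketch; the names are illustrative, not the actual Doris classes. The more detailed tracker is attached for a scope and the previous one is restored on exit.

```cpp
struct Tracker {};

thread_local Tracker* tls_mem_tracker = nullptr;

// Attach a more specific tracker (e.g. an operator's) for the current
// scope; the previous tracker is restored when the scope ends.
class ScopedSwitchTracker {
public:
    explicit ScopedSwitchTracker(Tracker* t) : _prev(tls_mem_tracker) {
        tls_mem_tracker = t;
    }
    ~ScopedSwitchTracker() { tls_mem_tracker = _prev; }

private:
    Tracker* _prev;
};
```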
e285d09157 [Enhancement](load) speed up stream load for duplicate table, use template for faster get_type_info. (#8500) 2022-03-25 15:18:43 +08:00
eeae516e37 [Feature](Memory) Hook TCMalloc new/delete automatically counts to MemTracker (#8476)
Early Design Documentation: https://shimo.im/docs/DT6JXDRkdTvdyV3G

Implement a new way of collecting memory statistics based on the TCMalloc New/Delete Hook,
MemTracker, and TLS; it is expected that all memory new/delete/malloc/free
calls of the BE process can be counted.
2022-03-20 23:06:54 +08:00
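A hedged sketch of the hook idea, assuming gperftools' MallocHook/MallocExtension interfaces; the actual Doris implementation differs. Every new/delete is forwarded to a global counter standing in for the TLS mem tracker.

```cpp
#include <gperftools/malloc_extension.h>
#include <gperftools/malloc_hook.h>

#include <atomic>
#include <cstdint>

std::atomic<int64_t> g_tracked_bytes{0};

void on_new(const void* /*ptr*/, size_t size) {
    g_tracked_bytes.fetch_add(static_cast<int64_t>(size), std::memory_order_relaxed);
}

void on_delete(const void* ptr) {
    // The delete hook carries no size, so ask the allocator for it.
    size_t size = MallocExtension::instance()->GetAllocatedSize(ptr);
    g_tracked_bytes.fetch_sub(static_cast<int64_t>(size), std::memory_order_relaxed);
}

int main() {
    MallocHook::AddNewHook(&on_new);
    MallocHook::AddDeleteHook(&on_delete);
    auto* p = new int[1024];  // counted by on_new
    delete[] p;               // counted by on_delete
    MallocHook::RemoveDeleteHook(&on_delete);
    MallocHook::RemoveNewHook(&on_new);
}
```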
e17aef9467 [refactor] refactor the implement of MemTracker, and related usage (#8322)
Modify the implementation of MemTracker:
1. Simplify a lot of useless logic;
2. Add MemTrackerTaskPool as the ancestor of all query and load trackers; it is used to track the local memory usage of all executing tasks;
3. Add a consume/release cache: trigger a consume/release only when the accumulated memory exceeds the parameter mem_tracker_consume_min_size_bytes;
4. Add a new memory leak detection mode (experimental feature): if the remaining statistical value at MemTracker destruction is greater than the specified range, throw an exception and print the accurate statistics over HTTP; controlled by the parameter memory_leak_detection;
5. Add a virtual MemTracker whose consume/release is not synced to the parent; it will be used later, when introducing the TCMalloc hook, to record specified memory independently;
6. Modify the GC logic: register the buffer cached in DiskIoMgr as a GC function, with other GC functions to be added later;
7. Change the global root node from Root MemTracker to Process MemTracker, and remove Process MemTracker from exec_env;
8. Modify the macro that detects whether memory has reached the upper limit, the parameters and default behavior of creating a MemTracker, and the error message format in mem_limit_exceeded; extend and apply transfer_to; remove Metric from MemTracker; etc.

Modify where MemTracker is used:
1. MemPool adds a constructor that creates a temporary tracker, avoiding a lot of redundant code;
2. Added trackers for global objects such as ChunkAllocator and StorageEngine;
3. Added more fine-grained trackers such as ExprContext;
4. RuntimeState removes FragmentMemTracker (the PlanFragmentExecutor mem_tracker previously used to independently track scan memory) and replaces it with _scanner_mem_tracker in OlapScanNode;
5. MemTracker is no longer recorded in ReservationTracker, and ReservationTracker will be removed later.
2022-03-11 22:04:23 +08:00
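Point 3 of the first list, the consume/release cache, might look like this minimal sketch; the constant stands in for the `mem_tracker_consume_min_size_bytes` parameter and everything else is hypothetical.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdlib>

constexpr int64_t kConsumeMinSizeBytes = 1 << 20;  // stand-in for the config

std::atomic<int64_t> g_tracker{0};       // shared tracker (contended)
thread_local int64_t tls_untracked = 0;  // per-thread cache (cheap)

void consume_cached(int64_t bytes) {
    tls_untracked += bytes;  // negative values act as release
    if (std::abs(tls_untracked) >= kConsumeMinSizeBytes) {
        g_tracker.fetch_add(tls_untracked, std::memory_order_relaxed);
        tls_untracked = 0;
    }
}
```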
50864aca7d [refactor] fix warnings when compiling with clang (#8069) 2022-02-19 11:29:02 +08:00
dd36ccc3bf [feature](storage-format) Z-Order Implement (#7149)
Support sorting data by Z-order:

```
CREATE TABLE table2 (
siteid int(11) NULL DEFAULT "10" COMMENT "",
citycode int(11) NULL COMMENT "",
username varchar(32) NULL DEFAULT "" COMMENT "",
pv bigint(20) NULL DEFAULT "0" COMMENT ""
) ENGINE=OLAP
DUPLICATE KEY(siteid, citycode)
COMMENT "OLAP"
DISTRIBUTED BY HASH(siteid) BUCKETS 1
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"data_sort.sort_type" = "ZORDER",
"data_sort.col_num" = "2",
"in_memory" = "false",
"storage_format" = "V2"
);
```
2021-12-02 11:39:51 +08:00
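The `"data_sort.sort_type" = "ZORDER"` property sorts rows by an interleaved key over the z-ordered columns (presumably the first `data_sort.col_num` key columns, i.e. siteid and citycode above). A minimal C++ sketch of bit interleaving for two 32-bit values, for illustration only:

```cpp
#include <cstdint>

// Interleave the bits of a and b: bit i of a lands at position 2i,
// bit i of b at position 2i+1, so rows close in both columns sort
// near each other.
uint64_t interleave_bits(uint32_t a, uint32_t b) {
    uint64_t z = 0;
    for (int i = 0; i < 32; ++i) {
        z |= (static_cast<uint64_t>((a >> i) & 1) << (2 * i)) |
             (static_cast<uint64_t>((b >> i) & 1) << (2 * i + 1));
    }
    return z;
}
```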
db1c281be5 [Enhance][Load] Reduce the number of segments when loading a large volume of data in one batch (#6947)
## Case

In the load process, each tablet has a memtable to hold the incoming data,
and when the data in a memtable grows beyond 100MB, it is flushed to disk as a `segment` file. Then
a new memtable is created to hold the subsequent data.

Assume that this is a table with N buckets (tablets), so the maximum total size of all memtables is `N * 100MB`.
If N is large, this costs too much memory.

So, for memory limit purposes, when the total size of all memtables reaches a threshold (2GB by default), Doris will
try to flush all current memtables to disk (even if their sizes have not reached 100MB).

As a result, each memtable is flushed when its size reaches `2GB/N`, which may be much smaller
than 100MB, resulting in too many small segment files.

## Solution

When deciding to flush memtables to reduce memory consumption, do NOT flush all memtables, but only part
of them.
For example, say there are 50 tablets (with 50 memtables) and the memory limit is 1GB: when each memtable reaches
20MB, the total size reaches 1GB, and a flush occurs.

If only 25 of the 50 memtables are flushed, then the next time the total size reaches 1GB, there will be 25 memtables of
about 10MB and another 25 of about 30MB. We can then flush just those 30MB memtables, which are larger
than 20MB.

The main idea is to introduce some jitter during flush, keeping the memtables slightly uneven in size, so that a flush is only triggered when a memtable is large enough; a sketch of this selection policy follows below.

In my test, loading a table with 48 buckets under a 2GB mem limit, the average memtable size in the previous version was 44MB;
after this modification, the average size is 82MB.
2021-11-01 10:51:50 +08:00
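A minimal sketch of such a partial flush pick; the sizes and structs are illustrative. Flushing the largest memtables first frees the most memory with the fewest small segment files, leaving the rest to keep growing toward the 100MB target.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct MemTableInfo { int64_t tablet_id; int64_t bytes; };

std::vector<MemTableInfo> pick_memtables_to_flush(
        std::vector<MemTableInfo> tables, int64_t need_to_release) {
    // Largest first: big memtables free the most memory per flush.
    std::sort(tables.begin(), tables.end(),
              [](const auto& a, const auto& b) { return a.bytes > b.bytes; });
    std::vector<MemTableInfo> picked;
    int64_t released = 0;
    for (const auto& t : tables) {
        if (released >= need_to_release) break;
        picked.push_back(t);
        released += t.bytes;
    }
    return picked;  // the unpicked memtables keep accumulating data
}
```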
63662194ab [BUG] Fix Stream Load cost too much memory (#5875) 2021-05-25 10:34:10 +08:00
1a81b9e160 [MemTracker] Some enhancements of MemTracker (#5783)
1. Give some MemTrackers a reasonable parent MemTracker instead of the root tracker.
2. Make each MemTracker easy to trace.
3. Add a show level to MemTracker to limit how many trackers appear on the web page.
2021-05-19 09:27:50 +08:00
0131c33966 [Enhance] Improve the readability of memtrackers' name (#5455)
Improve the readability of the mem trackers' names; then you will be happy to read the web page be_ip:port/mem_tracker.
2021-03-11 22:33:31 +08:00
ab06e92021 [Load Parallel][2/3] Support parallel flushing memtable during load (#5163)
In the previous implementation, in a load job,
multiple memtables of the same tablet were written to disk sequentially.
In fact, multiple memtables can be written out of order in parallel;
we only need to ensure that each memtable uses a different segment writer.
2021-01-24 10:10:30 +08:00
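A minimal sketch of the parallel pattern; the class names are hypothetical. Each flush owns its own segment writer, so no writer is shared across threads.

```cpp
#include <thread>
#include <vector>

struct SegmentWriterSketch {
    explicit SegmentWriterSketch(int segment_id) : id(segment_id) {}
    void write_memtable() { /* serialize one memtable into one segment */ }
    int id;
};

int main() {
    std::vector<std::thread> flushes;
    for (int seg = 0; seg < 4; ++seg) {
        flushes.emplace_back([seg] {
            SegmentWriterSketch writer(seg);  // per-flush writer: nothing shared
            writer.write_memtable();
        });
    }
    for (auto& t : flushes) t.join();
}
```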
6fedf5881b [CodeFormat] Clang-format cpp sources (#4965)
Clang-format all c++ source files.
2020-11-28 18:36:49 +08:00
068707484d Support sequence column for UNIQUE_KEYS Table (#4256)
* add sequence  col

Co-authored-by: yangwenbo6 <yangwenbo3@jd.com>
2020-09-04 10:10:17 +08:00
498b06fbe2 [Metrics] Support tablet level metrics (#4428)
Sometimes we want to detect the hotspots of a cluster, for example hot scanned or hot written tablets,
but we have no insight into the tablets in the cluster.
This patch introduces tablet-level metrics to achieve this; it currently supports 4 metrics per tablet: `query_scan_bytes`, `query_scan_rows`, `flush_bytes`, `flush_count`.
However, one BE may hold hundreds of thousands of tablets, so I added a parameter to the metrics HTTP request,
and tablet-level metrics are not returned by default.
2020-09-02 10:39:41 +08:00
10f822eb43 [MemTracker] make all MemTrackers shared (#4135)
We make all MemTrackers shared in order to show MemTrackers' real-time consumption on the web.
As follows:
1. Nearly all MemTracker raw pointers become shared_ptr.
2. Use CreateTracker() to create a new MemTracker (so that it adds itself to its parent).
3. RowBatch & MemPool still use raw MemTracker pointers; it is easy to ensure that their destructors run
     before the MemTracker's destructor, so we leave that code unchanged.
4. A MemTracker can use a RuntimeProfile counter to calculate consumption, so that counter needs to be shared
    too. We add a shared counter pool to store the shared counter and do not change RuntimeProfile's other counters.
Note that this PR does not change the MemTracker tree structure, so some orphan trackers remain, e.g. RowBlockV2's MemTracker. If you find a shared MemTracker that consumes little memory but is too time-consuming, you can make it an orphan; then it is fine to use the raw pointer.
2020-07-31 21:57:21 +08:00
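Point 2's factory pattern could look like this minimal sketch (a hypothetical API, not the real one): creating a tracker through the factory guarantees it is a shared_ptr and is registered with its parent at creation time.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

class TrackerNode {
public:
    // The only way to obtain a tracker, so every tracker is shared and
    // reachable from its parent (e.g. for the web page's tree walk).
    static std::shared_ptr<TrackerNode> CreateTracker(
            std::string label, const std::shared_ptr<TrackerNode>& parent) {
        std::shared_ptr<TrackerNode> t(new TrackerNode(std::move(label)));
        if (parent) parent->_children.push_back(t);
        return t;
    }

private:
    explicit TrackerNode(std::string label) : _label(std::move(label)) {}
    std::string _label;
    std::vector<std::shared_ptr<TrackerNode>> _children;
};
```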
b58b1b3953 [metrics] Make DorisMetrics to be a real singleton (#3417) 2020-05-04 09:20:53 +08:00
7c4149cf27 Improve comparison and printing of Version (#2796)
* Improve comparison and printing of Version

There are two members in `Version`: `first` and `second`.
There are many places where we need to print a `Version` object and
compare two `Version` objects, but in the current code these two members
are accessed directly, which makes the code very tedious.

This patch mainly does the following:
1. Adds an overloaded `operator<<()` for `Version`, so
   we can print a Version object directly;
2. Adds a `contains()` method to determine whether one version
   contains another;
3. Uses `operator==()` to determine whether two `Version` objects are equal.

Because too many places need to be modified, some
naked code remains, which will be modified later.

This patch also removes some unnecessary header file references.

No functional changes in this patch.
2020-01-19 18:04:28 +08:00
913792ce2b Add copy_object() method for HLL columns when loading (#2422)
Currently, HLL types (and OBJECT types) receive special treatment.
When loading data, because there is no need to serialize the HLL content
(the upper layer has already done so), we directly save the pointer
to the `HyperLogLog` object in `Slice->data` (at the corresponding `Cell`
in each `Row`) and set `Slice->size` to 0. This logic differs
from reading the HLL column: when reading, we need to deserialize
the HLL object from the `Slice` object. This leaves us with different
implementations of `copy_row()` for loading and reading.

In the optimization (commit 177fec8917304e399aa7f3facc4cc4804e72ce8b),
a `copy_row()` call was added before a row can be added to the
`MemTable`, but the current `copy_row()` treats the `HLL column Cell`
as a normal Slice object (i.e. it memcpys the data according to its size).

So this change adds a `copy_object()` method to `TypeInfo`, which is
used to copy the HLL column while loading data; a minimal sketch follows below.

Note: the way rows are copied should be unified in the future. At that
point, we can delete the `copy_object()` method.
2019-12-11 22:07:51 +08:00
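A minimal sketch of why a size-based memcpy fails for HLL cells and what `copy_object()` must do instead; the types are simplified stand-ins for the real ones.

```cpp
#include <cstddef>

struct HyperLogLogSketch { /* registers ... */ };

struct Slice {
    char* data = nullptr;
    size_t size = 0;  // 0 for HLL cells, by the convention described above
};

// memcpy(dst, src.data, src.size) would copy 0 bytes here, because the
// cell stores a raw object pointer with size 0; deep-copy the object.
void copy_object(Slice* dst, const Slice& src) {
    auto* src_obj = reinterpret_cast<HyperLogLogSketch*>(src.data);
    dst->data = reinterpret_cast<char*>(new HyperLogLogSketch(*src_obj));
    dst->size = 0;
}
```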
177fec8917 Improve SkipList memory usage tracking (#2359)
The problem with the current implementation is that all incoming data
is counted in memory, but for the aggregation model and
some other special cases, not all data is inserted into the `MemTable`,
and the discarded data should not be counted.

This change makes the `SkipList` use an exclusive `MemPool`,
and only data that will be inserted into the `SkipList` may use this
`MemPool`. In other words, discarded rows are not
counted by the `SkipList`'s `MemPool`.

To avoid checking twice whether a row already exists in the
`SkipList`, this change also modifies the `SkipList` interface (a `Hint`
is fetched by `Find()` and then used in `InsertUseHint()`),
and makes `SkipList` no longer aware of the aggregation logic.
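A small illustrative sketch of that Find()/InsertUseHint() flow, using std::map in place of the real SkipList (whose interface differs): the position is located once, then reused either to aggregate in place or to insert without a second search.

```cpp
#include <algorithm>
#include <map>
#include <string>

std::map<std::string, long> mem_table;  // key -> aggregated value

void insert_or_aggregate(const std::string& key, long value) {
    auto hint = mem_table.lower_bound(key);  // "Find" once, keep the hint
    if (hint != mem_table.end() && hint->first == key) {
        hint->second = std::max(hint->second, value);  // e.g. MAX aggregation
    } else {
        mem_table.emplace_hint(hint, key, value);  // "InsertUseHint"
    }
}
```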

At present, because the data row (`Tuple`) generated by the upper layer
differs from the data row (`Row`) represented internally by the
engine, the row must be copied when inserted into the `MemTable`.
If the row needs to be inserted into the SkipList, we copy it again
into the `SkipList`'s `MemPool`.

Also, the aggregation functions currently only support `MemPool` when
copying, so even data that will not be inserted into the `SkipList`
still uses a `MemPool` (in the future, this can be replaced with an
ordinary buffer). However, we reuse the memory allocated in that MemPool;
that is, we do not reallocate new memory every time.

Note: due to the characteristics of `MemPool` (once memory is allocated, it cannot
be partially freed), the following scenarios may still cause extra
flushes. For example, if a string column's aggregation is `MAX`
and the data arrives in ascending order, then every
data row must request memory from the `SkipList`'s `MemPool`;
that is, although the old rows in the `SkipList` will be discarded,
the memory they occupy is still counted.

I did a test on my development machine using `STREAM LOAD`: a table with
only one tablet where all columns are keys; the original data was
1.1G (9,318,799 rows), leaving 377,745 rows after deduplication.

Both the number of files and the query efficiency are
greatly improved; the price paid is only a slight increase in load time.

before:
```
  $ ll storage/data/0/10019/1075020655/
  total 4540
  -rw------- 1 dev dev 393152 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.idx
  -rw------- 1 dev dev 421660 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.idx
  -rw------- 1 dev dev 184214 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.dat
  -rw------- 1 dev dev    610 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.idx
  -rw------- 1 dev dev 329181 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.dat
  -rw------- 1 dev dev    935 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.idx
  -rw------- 1 dev dev 343813 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.dat
  -rw------- 1 dev dev    985 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.idx
  -rw------- 1 dev dev 315364 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.dat
  -rw------- 1 dev dev    885 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.idx
  -rw------- 1 dev dev 423806 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.idx
  -rw------- 1 dev dev 294811 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.dat
  -rw------- 1 dev dev    835 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.idx
  -rw------- 1 dev dev 403241 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.idx
  -rw------- 1 dev dev 350753 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.dat
  -rw------- 1 dev dev    860 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.idx
  -rw------- 1 dev dev 266966 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.dat
  -rw------- 1 dev dev    735 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.idx
  -rw------- 1 dev dev 451191 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.dat
  -rw------- 1 dev dev   1235 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.idx
  -rw------- 1 dev dev 398439 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.dat
  -rw------- 1 dev dev   1110 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.idx

  {
    "TxnId": 16,
    "Label": "cd9f8392-dfa0-4626-8034-22f7cb97044c",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 46907
  }

  mysql> select count(*) from xxx_before;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
1 row in set (0.91 sec)

```

after:
```
  $ ll storage/data/0/10013/1075020655/
  total 3612
  -rw------- 1 dev dev 3328992 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.dat
  -rw------- 1 dev dev    8460 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.idx
  -rw------- 1 dev dev  350576 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.dat
  -rw------- 1 dev dev     985 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.idx

  {
    "TxnId": 12,
    "Label": "88f606d5-8095-4f15-b61d-49b7080c16b8",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 48771
  }

  mysql> select count(*) from xxx_after;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
  1 row in set (0.38 sec)

```
2019-12-06 17:31:18 +08:00
c5f7f7e0f4 Check the return status of _flush_memtable_async() (#2332)
This commit also contains some adjustments to the forward declarations
2019-11-29 21:05:17 +08:00