Commit Graph

19 Commits

Author SHA1 Message Date
16a394da0e [chore](build) Use include-what-you-use to optimize includes (PART III) (#18958)
Currently there are some unnecessary includes in the codebase. We can use a tool named include-what-you-use to remove them. Enforcing a strict include-what-you-use policy keeps header dependencies minimal and reduces compile times.
2023-04-24 14:51:51 +08:00
d9fe5f7b67 [enhancement](memory) Remove MemPool and replace it with Arena (#17820)
Arena can replace MemPool in most scenarios. The main exception is memory reuse: MemPool supports reusing previous memory chunks after clear(), while Arena does not.

Some comparisons between MemPool and Arena:

 1. Expansion
     Arena: below 128M, each new chunk grows by a factor of 2; for requests above 128M, it allocates `128M * n` bytes, where n is the smallest value satisfying `128M * n > size`.
     MemPool: below 512K, each new chunk grows by a factor of 2; for requests above 512K, it allocates a separate chunk of exactly `size` bytes. (A sketch of both policies follows after this list.)

     Once Arena has allocated a chunk larger than 128M, every subsequent chunk is at least 128M, which may waste memory. MemPool is similar: once a 512K chunk has been allocated, every subsequent chunk is at least 512K.

 2. Alignment
     MemPool defaults to 16-byte alignment, because memtable and other places that use int128 require 16-byte alignment;
     Arena has no default alignment;

 3. Memory reuse
     Arena only supports `rollback`, which reuses memory in the current chunk, usually the most recently requested memory.
     MemPool supports clear(), after which all chunks can be reused; ReturnPartialAllocation() can be called to roll back the last requested memory; and if the last chunk has no free space, it searches for the chunk with the most free space to allocate from.

 4. Realloc
     Arena supports realloc of contiguous memory; it can also extend contiguous memory from any position within the last allocation. The differences between `alloc_continue` and `realloc` are:
         1. `alloc_continue` does not need the old size to be specified; by default old size = head->pos - range_start
         2. `alloc_continue` supports expanding from range_start when additional_bytes fits between head and pos, which is equivalent to reusing part of the memory, while realloc always allocates entirely new memory
     MemPool does not support realloc, but it does support transferring or absorbing chunks between two MemPools

 5. Memory limit check
     MemPool checks the memory limit itself, while Arena relies on the Allocator layer for the check.

 6. Support for ASAN
     Arena does some extra handling for ASAN.

 7. Error handling
     MemPool can return allocation-failure errors directly through `Status`, while Arena throws an Exception.
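
For illustration, here is a minimal sketch of the two growth policies from item 1, assuming the 128M/512K thresholds and factor-2 growth described above; the function names are hypothetical and this is not the actual Arena/MemPool code.

```
#include <algorithm>
#include <cstddef>

// Illustrative next-chunk sizing under the policies described in item 1.
constexpr std::size_t kArenaLinearThreshold = 128 * 1024 * 1024;  // 128M
constexpr std::size_t kMemPoolLinearThreshold = 512 * 1024;       // 512K

// Arena-style growth: grow by factor 2 below 128M; for larger requests,
// allocate the smallest multiple of 128M that is strictly larger than `size`.
std::size_t arena_next_chunk_size(std::size_t prev_chunk, std::size_t size) {
    if (size < kArenaLinearThreshold) {
        return std::max(prev_chunk * 2, size);
    }
    std::size_t n = size / kArenaLinearThreshold + 1;  // smallest n with 128M * n > size
    return n * kArenaLinearThreshold;
}

// MemPool-style growth: grow by factor 2 below 512K; for larger requests,
// allocate a separate chunk of exactly `size` bytes.
std::size_t mempool_next_chunk_size(std::size_t prev_chunk, std::size_t size) {
    if (size <= kMemPoolLinearThreshold) {
        return std::max(prev_chunk * 2, size);
    }
    return size;
}
```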

Things Arena could consider:

 1. Once the last allocated chunk is larger than 128M, every subsequent chunk is at least 128M, which seems to waste memory;

 2. Support clear() for memory reuse;

 3. Add a large-allocation list: allocations larger than 128M get a chunk of exactly `size` bytes, so the current chunk is not left partially unused and wasted.

 4. In some cases, it may be possible to allocate backwards to find chunks t
2023-03-29 20:56:49 +08:00
295d887cf5 [improvement](thread) set name for priority thread pool (#13552) 2022-10-26 09:32:15 +08:00
e7f18e998a [chore](be-ut) Remove useless lines which cause compilation errors (#13053) 2022-09-30 11:26:25 +08:00
4960043f5e [enhancement] Refactor to improve the usability of MemTracker (step2) (#10823) 2022-07-21 17:11:28 +08:00
290366787c [refactor] refactor code, replace some file with stl libs (#8759)
1. replace ConditionVariables with std::condition_variable
2. replace Mutex with std::mutex
3. replace MonoTime with std::chrono
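
As a minimal, hypothetical illustration of the std primitives this refactor switches to (not actual Doris code): a timed wait on a std::condition_variable guarded by a std::mutex, with the timeout expressed through std::chrono instead of MonoTime arithmetic.

```
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical helper showing the std replacements in one place.
class Notifier {
public:
    void notify() {
        std::lock_guard<std::mutex> lock(_mu);  // std::mutex instead of Mutex
        _ready = true;
        _cv.notify_all();                       // std::condition_variable instead of ConditionVariable
    }

    // std::chrono duration instead of MonoTime-based deadline math.
    bool wait_for(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(_mu);
        return _cv.wait_for(lock, timeout, [this] { return _ready; });
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    bool _ready = false;
};
```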
2022-04-13 09:55:29 +08:00
5a44eeaf62 [refactor] Unify all unit tests into one binary file (#8958)
1. Solves the earlier problems that the unit test binaries were too large (1.7G+) and the unit test link time was too long
2. Unifies all unit tests into one binary, significantly reducing unit test execution time to under 3 minutes
3. Temporarily disables stream_load_test.cpp, metrics_action_test.cpp, and load_channel_mgr_test.cpp, because they re-implement part of the code and affect other tests
2022-04-12 15:30:40 +08:00
dd36ccc3bf [feature](storage-format) Z-Order Implement (#7149)
Support sorting data by Z-Order:

```
CREATE TABLE table2 (
    siteid int(11) NULL DEFAULT "10" COMMENT "",
    citycode int(11) NULL COMMENT "",
    username varchar(32) NULL DEFAULT "" COMMENT "",
    pv bigint(20) NULL DEFAULT "0" COMMENT ""
) ENGINE=OLAP
DUPLICATE KEY(siteid, citycode)
COMMENT "OLAP"
DISTRIBUTED BY HASH(siteid) BUCKETS 1
PROPERTIES (
    "replication_allocation" = "tag.location.default: 1",
    "data_sort.sort_type" = "ZORDER",
    "data_sort.col_num" = "2",
    "in_memory" = "false",
    "storage_format" = "V2"
);
```
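
For background, Z-ordering interleaves the bits of the chosen sort columns ("data_sort.col_num" = "2" above) so that rows that are close in the two-dimensional key space also stay close in the sorted order. A minimal Morton-code sketch for two 32-bit keys follows; it only illustrates the idea and is not the engine's actual encoding.

```
#include <cstdint>

// Interleave the bits of two 32-bit keys into one 64-bit Z-value:
// bit i of `a` goes to bit 2*i, bit i of `b` goes to bit 2*i + 1.
uint64_t z_value(uint32_t a, uint32_t b) {
    uint64_t z = 0;
    for (int i = 0; i < 32; ++i) {
        z |= static_cast<uint64_t>((a >> i) & 1) << (2 * i);
        z |= static_cast<uint64_t>((b >> i) & 1) << (2 * i + 1);
    }
    return z;
}
```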
2021-12-02 11:39:51 +08:00
6c6380969b [refactor] replace boost smart ptr with stl (#6856)
1. replace all boost::shared_ptr with std::shared_ptr
2. replace all boost::scoped_ptr with std::unique_ptr
3. replace all boost::scoped_array with std::unique_ptr<T[]>
4. replace all boost::thread with std::thread
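
A small self-contained sketch of the mapping (the struct and function here are hypothetical, purely to show the replacements side by side):

```
#include <memory>
#include <thread>

// boost::shared_ptr<T>    -> std::shared_ptr<T>
// boost::scoped_ptr<T>    -> std::unique_ptr<T>
// boost::scoped_array<T>  -> std::unique_ptr<T[]>
// boost::thread           -> std::thread
struct Foo { int value = 0; };

void example() {
    std::shared_ptr<Foo> shared = std::make_shared<Foo>();
    std::unique_ptr<Foo> owned(new Foo());
    std::unique_ptr<char[]> buffer(new char[1024]);
    std::thread worker([shared] { shared->value += 1; });
    worker.join();
}
```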
2021-11-17 10:18:35 +08:00
11c0aafa5c [UT] Speed up BE unit test (#5131)
There are some long loops and sleeps in the unit tests, so running all of
them takes a very long time, especially in TSAN mode.
This patch speeds up the unit tests by shortening those long loops and
sleeps; on my environment all unit tests finish in 1 minute. This is
useful for basic functional unit testing.
You can switch modes via the new environment variable
'DORIS_ALLOW_SLOW_TESTS'. For example, you can enable it with:
export DORIS_ALLOW_SLOW_TESTS=1
and disable it with:
export DORIS_ALLOW_SLOW_TESTS=0
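
A hypothetical sketch of how a test could gate its loop lengths on this variable (the helper names and iteration counts are illustrative, not the patch's actual code):

```
#include <cstdlib>
#include <string>

// True only when slow tests are explicitly allowed via the environment.
static bool allow_slow_tests() {
    const char* v = std::getenv("DORIS_ALLOW_SLOW_TESTS");
    return v != nullptr && std::string(v) == "1";
}

// Full-length loop only when slow tests are allowed; a short loop otherwise.
static int loop_iterations() {
    return allow_slow_tests() ? 100000 : 100;
}
```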
2020-12-27 22:19:56 +08:00
6fedf5881b [CodeFormat] Clang-format cpp sources (#4965)
Clang-format all C++ source files.
2020-11-28 18:36:49 +08:00
10f822eb43 [MemTracker] make all MemTrackers shared (#4135)
We make all MemTrackers shared, in order to show MemTrackers' real-time consumption on the web UI.
As follows:
1. nearly all MemTracker raw ptr -> shared_ptr
2. Use CreateTracker() to create a new MemTracker (so that it adds itself to its parent)
3. RowBatch & MemPool still use raw MemTracker pointers; it's easy to ensure that the RowBatch & MemPool destructors
     run before the MemTracker's destructor, so that code is unchanged.
4. MemTracker can use RuntimeProfile's counters to calculate consumption, so those counters need to be shared
    too. We add a shared counter pool to store the shared counters; the other RuntimeProfile counters are unchanged.
Note that this PR doesn't change the MemTracker tree structure, so there are still some orphan trackers, e.g. RowBlockV2's MemTracker. If some shared MemTrackers consume little memory but are too time-consuming, you can make them orphans, and then it's fine to use a raw pointer.
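
A greatly simplified, hypothetical sketch of the shared-ownership idea (not the actual MemTracker API): a factory returns a shared_ptr and registers the new tracker with its parent so the tree can later be walked, e.g. by the web page.

```
#include <cstdint>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical simplified tracker; illustrates points 1 and 2 above.
class SimpleTracker {
public:
    static std::shared_ptr<SimpleTracker> CreateTracker(
            const std::string& label, const std::shared_ptr<SimpleTracker>& parent) {
        std::shared_ptr<SimpleTracker> tracker(new SimpleTracker(label, parent));
        if (parent != nullptr) {
            parent->_children.push_back(tracker);  // adds itself to its parent
        }
        return tracker;
    }

    void consume(int64_t bytes) { _consumption += bytes; }
    int64_t consumption() const { return _consumption; }

private:
    SimpleTracker(std::string label, std::shared_ptr<SimpleTracker> parent)
            : _label(std::move(label)), _parent(std::move(parent)) {}

    std::string _label;
    std::shared_ptr<SimpleTracker> _parent;
    std::vector<std::shared_ptr<SimpleTracker>> _children;
    int64_t _consumption = 0;
};
```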
2020-07-31 21:57:21 +08:00
1cf0fb9117 Use ThreadPool to refactor MemTableFlushExecutor (#2931)
1. MemTableFlushExecutor maintains a ThreadPool to receive FlushTasks.
2. FlushToken is used to separate tasks from different tablets.
   Each tablet's DeltaWriter constructs a FlushToken;
   tasks within a FlushToken are handled serially, while tasks across
   FlushTokens are handled concurrently.
3. The thread limit on data_dir has been removed, because I/O is not the main
   time consumer of the flush threads; much of the time is spent on CPU decoding
   and compression.
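
The scheduling contract in point 2 can be illustrated with a toy token that funnels its tasks through a per-token lock on top of async workers; this sketch only guarantees that one task per token runs at a time (it does not preserve submission order the way the real FlushToken does) and is not the actual ThreadPool/FlushToken code.

```
#include <functional>
#include <future>
#include <mutex>
#include <vector>

// Toy token: tasks submitted through the same token run one at a time,
// while tasks from different tokens may run concurrently.
class ToyFlushToken {
public:
    void submit(std::function<void()> task) {
        // std::async stands in for the shared thread pool in this sketch.
        _futures.push_back(std::async(std::launch::async, [this, task] {
            std::lock_guard<std::mutex> serialize(_mu);  // serial within one token
            task();
        }));
    }

    void wait() {
        for (auto& f : _futures) f.wait();
        _futures.clear();
    }

private:
    std::mutex _mu;  // one lock per token
    std::vector<std::future<void>> _futures;
};
```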
2020-02-18 18:39:04 +08:00
9ee1704859 [util] Import util tools from KUDU (#2905)
1. MonoTime/MonoDelta
   MonoTime: The MonoTime represents a particular point in time, relative to some fixed but unspecified reference point.
   MonoDelta: The MonoDelta class represents an elapsed duration of time, the delta between two MonoTime instances.

2. CountDownLatch
   This is a C++ implementation of the Java CountDownLatch.
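
For reference, a latch of this kind can be built from a mutex and a condition variable; the sketch below is a generic illustration, not the imported Kudu implementation.

```
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Generic countdown latch: wait() blocks until count_down() has been
// called `count` times.
class SimpleCountDownLatch {
public:
    explicit SimpleCountDownLatch(int64_t count) : _count(count) {}

    void count_down() {
        std::lock_guard<std::mutex> lock(_mu);
        if (_count > 0 && --_count == 0) {
            _cv.notify_all();
        }
    }

    void wait() {
        std::unique_lock<std::mutex> lock(_mu);
        _cv.wait(lock, [this] { return _count == 0; });
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    int64_t _count;
};
```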
2020-02-14 18:01:16 +08:00
177fec8917 Improve SkipList memory usage tracking (#2359)
The problem with the current implementation is that all data to be
inserted is counted in memory, but for the aggregation model or
some other special cases, not all data will be inserted into the `MemTable`,
and that data should not be counted in memory.

This change makes the `SkipList` use an exclusive `MemPool`,
and only data that will actually be inserted into the `SkipList` can use
this `MemPool`. In other words, discarded rows are not
counted by the `SkipList`'s `MemPool`.

In order to avoid checking twice whether a row already exists in the
`SkipList`, this change also modifies the `SkipList` interface (a `Hint`
is fetched by `Find()` and then passed to `InsertUseHint()`),
and makes the `SkipList` no longer aware of the aggregation logic.
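
The calling pattern this enables (look up once, keep the position, then either aggregate in place or insert at that position) can be illustrated with a std::map analogue; the `MAX` aggregation and the container here are only stand-ins, not the Doris `SkipList` API.

```
#include <algorithm>
#include <cstdint>
#include <map>
#include <string>

// Toy analogue of Find()/InsertUseHint(): one lookup, position kept as a
// hint, aggregation done by the caller rather than by the container.
void insert_or_aggregate_max(std::map<std::string, int64_t>& table,
                             const std::string& key, int64_t value) {
    auto hint = table.lower_bound(key);  // single existence check, position saved
    if (hint != table.end() && hint->first == key) {
        hint->second = std::max(hint->second, value);  // aggregate in place (e.g. MAX)
    } else {
        table.emplace_hint(hint, key, value);  // insert using the hint, no second lookup
    }
}
```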

At present, because the data row (`Tuple`) generated by the upper layer
is different from the data row (`Row`) represented internally by the
engine, the data row must be copied when inserting into the `MemTable`.
If the row then needs to be inserted into the `SkipList`, we must copy it
again into the `SkipList`'s `MemPool`.

Also, at present, the aggregation functions only support `MemPool` when
copying, so even if the data will not be inserted into the `SkipList`,
a `MemPool` is still used (in the future it could be replaced with an
ordinary `Buffer`). However, we reuse the memory already allocated in that
`MemPool`; that is, we do not allocate new memory every time.

Note: due to the characteristics of `MemPool` (once memory is allocated, it
cannot be partially released), the following scenario may still cause multiple
flushes. For example, if a string column uses the `MAX` aggregation
and the data is inserted in ascending order, then every
data row must request memory from the `SkipList`'s `MemPool`;
that is, although the old rows in the `SkipList` are discarded,
the memory they occupy is still counted.

I did a test on my development machine using `STREAM LOAD`: a table with
only one tablet where all columns are keys; the original data was
1.1G (9318799 rows), and there were 377745 rows after removing duplicates.

Both the number of files and the query efficiency are
greatly improved; the price paid is only a slight increase in load time.

before:
```
  $ ll storage/data/0/10019/1075020655/
  total 4540
  -rw------- 1 dev dev 393152 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.idx
  -rw------- 1 dev dev 421660 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.idx
  -rw------- 1 dev dev 184214 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.dat
  -rw------- 1 dev dev    610 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.idx
  -rw------- 1 dev dev 329181 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.dat
  -rw------- 1 dev dev    935 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.idx
  -rw------- 1 dev dev 343813 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.dat
  -rw------- 1 dev dev    985 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.idx
  -rw------- 1 dev dev 315364 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.dat
  -rw------- 1 dev dev    885 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.idx
  -rw------- 1 dev dev 423806 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.idx
  -rw------- 1 dev dev 294811 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.dat
  -rw------- 1 dev dev    835 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.idx
  -rw------- 1 dev dev 403241 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.idx
  -rw------- 1 dev dev 350753 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.dat
  -rw------- 1 dev dev    860 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.idx
  -rw------- 1 dev dev 266966 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.dat
  -rw------- 1 dev dev    735 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.idx
  -rw------- 1 dev dev 451191 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.dat
  -rw------- 1 dev dev   1235 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.idx
  -rw------- 1 dev dev 398439 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.dat
  -rw------- 1 dev dev   1110 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.idx

  {
    "TxnId": 16,
    "Label": "cd9f8392-dfa0-4626-8034-22f7cb97044c",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 46907
  }

  mysql> select count(*) from xxx_before;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
  1 row in set (0.91 sec)

```

after:
```
  $ ll storage/data/0/10013/1075020655/
  total 3612
  -rw------- 1 dev dev 3328992 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.dat
  -rw------- 1 dev dev    8460 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.idx
  -rw------- 1 dev dev  350576 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.dat
  -rw------- 1 dev dev     985 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.idx

  {
    "TxnId": 12,
    "Label": "88f606d5-8095-4f15-b61d-49b7080c16b8",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 48771
  }

  mysql> select count(*) from xxx_after;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
  1 row in set (0.38 sec)

```
2019-12-06 17:31:18 +08:00
cafb9f1e62 Replace Arena with MemPool first step (#1899) 2019-09-28 01:12:22 +08:00
37b4cafe87 Change variable and namespace name in BE (#268)
Change 'palo' to 'doris'
2018-11-02 10:22:32 +08:00
2868793b6b Change license to Apache License 2.0 (#262) 2018-11-01 09:06:01 +08:00
5d3fc80067 Added:
* Add streaming load feature. You can execute 'help stream load;' to see more information.

Changed:
* The loading phase of a table can be parallelized, to reduce load job execution time when multiple load jobs target a single table.
* Use RocksDB to store the tablet header info in Backends, to reduce I/O operations and speed up restarts.

Fixed:
* A lot of bugs fixed.
2018-10-31 14:46:22 +08:00