Related issue: #3306
Note: this PR also just removes es_scan_node_test.cpp, which is useless.
For the moment, this just adds a simple explain syntax for EsTable without translating the native predicates to ES query DSL; that translation is better finished after moving the predicate translation from Doris BE to Doris FE. The whole work is still WIP.
Currently, a column with the REPLACE/REPLACE_IF_NOT_NULL aggregation type can be filtered by the ZoneMap/BloomFilter index when the rowset is a base rowset (its version starts from zero). We used to consider this an optimization, but in some cases it causes a bug.
```
create table test(
    k1 int,
    v1 int replace,
    v2 int sum
);
```
Suppose there are two records in two different versions:
- `1 2 2` in version [0-10]
- `1 3 1` in version 11
If I perform the query:
```
select * from test where k1 = 1 and v1 = 3;
```
The result will be `1 3 1`, which is wrong, because the first record is filtered out before aggregation. The right answer is `1 3 3`: v2 should be summed across both versions.
Removing this optimization is necessary to make the result correct.
Main refactor points:
- Use a single get_absolute_tablet_path function instead of 3 independent functions (see the sketch below)
- Remove the meaningless return values of register_tablet and deregister_tablet
- Fix some typos and formatting
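Purely for illustration, a minimal sketch of what collapsing the three helpers into one function could look like; the signature and the `data/shard/tablet_id/schema_hash` layout here are assumptions, not the actual Doris code:
```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical unified helper: every variant builds the same
// root/data/shard/tablet_id/schema_hash layout, so one function suffices.
std::string get_absolute_tablet_path(const std::string& data_root, int64_t shard_id,
                                     int64_t tablet_id, int32_t schema_hash) {
    std::ostringstream path;
    path << data_root << "/data/" << shard_id << "/" << tablet_id << "/" << schema_hash;
    return path.str();
}
```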
This PR enhances the performance of the txn manage task. When there are many txns in BE, the single txn_map_lock plus the additional _txn_locks may cause poor performance, so we now remove the additional _txn_locks and split txn_map_lock into many small locks. A sketch of the lock-splitting idea follows.
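A minimal sketch of the technique, assuming a hypothetical shard count of 128 and hashing by txn id (the shapes here are assumptions, not the actual Doris code):
```cpp
#include <cstdint>
#include <map>
#include <mutex>

constexpr int kTxnShards = 128;  // hypothetical shard count

struct TxnInfo {};  // stand-in for the real per-txn bookkeeping

class ShardedTxnMap {
public:
    void add_txn(int64_t txn_id, TxnInfo info) {
        size_t s = shard(txn_id);
        std::lock_guard<std::mutex> guard(_locks[s]);  // only this bucket is blocked
        _maps[s][txn_id] = info;
    }
    bool contains(int64_t txn_id) {
        size_t s = shard(txn_id);
        std::lock_guard<std::mutex> guard(_locks[s]);
        return _maps[s].count(txn_id) > 0;
    }
private:
    // Txns that hash to different buckets contend on different locks.
    static size_t shard(int64_t txn_id) { return static_cast<size_t>(txn_id) % kTxnShards; }
    std::mutex _locks[kTxnShards];
    std::map<int64_t, TxnInfo> _maps[kTxnShards];
};
```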
Related issue: https://github.com/apache/incubator-doris/issues/3248
SQL:
```
select * from test where (k2 = 6 and k3 = 1) or (k2 = 2 and k3 =3 and k4 = 'beijing');
```
Output filter:
```
((#k2:[6 TO 6] #k3:[1 TO 1]) (#(#k2:[2 TO 2] #k3:[3 TO 3]) #k4:beijing))~1
```
SQL:
```
select * from test where (k2 = 6 or k3 = 7) or (k2 = 2 and k3 =3 and (k4 = 'beijing' or k4 = 'zhaochun'));
```
Output filter:
```
(k2:[6 TO 6] k3:[7 TO 7] (#(#k2:[2 TO 2] #k3:[3 TO 3]) #((k4:beijing k4:zhaochun)~1)))~1
```
SQL:
```
select * from test where (k2 = 6 or k3 = 7) or (k2 = 2 and abs(k3) =3 and (k4 = 'beijing' or k4 = 'zhaochun'));
```
Output filter (`abs` cannot be pushed down to ES, so Doris-on-ES does not handle this scenario):
```
match_all
```
In the past, when we wanted to modify some BE configs, we had to modify be.conf and then restart the BE.
This patch provides a way to modify configs of the 'threshold', 'interval', and 'enable flag' kinds while BE is running, without restarting it.
You can update a single config at a time via BE's HTTP API: `be_host:be_http_port/api/update_config?config_name=new_value`
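For example, assuming the memtable flush threshold `write_buffer_size` were one of the mutable configs (a hypothetical illustration, not confirmed by this patch): `be_host:be_http_port/api/update_config?write_buffer_size=209715200`.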
This CL fixes a bug that could cause wrong results for beta rowsets with nullable columns. The root cause is that NullBitmapBuilder is not reset when the current page contains no NULLs, which leads to a wrong null map being written for the next page.
Added a test case to reproduce the problem.
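A minimal sketch of the bug pattern, with an assumed shape for the builder (not the actual Doris class): any per-page builder that carries state must be reset at every page boundary, including pages that contain no NULLs.
```cpp
#include <cstddef>
#include <vector>

// Stand-in for NullBitmapBuilder: records which rows of a page are NULL.
class NullBitmapBuilderSketch {
public:
    void add_run(bool is_null, size_t run_length) {
        _has_null = _has_null || is_null;
        _bits.insert(_bits.end(), run_length, is_null);
    }
    bool has_null() const { return _has_null; }
    // The fix: call reset() when a page is finished, even if the finished
    // page had no NULLs; otherwise stale bits leak into the null map
    // written for the next page.
    void reset() {
        _has_null = false;
        _bits.clear();
    }
private:
    bool _has_null = false;
    std::vector<bool> _bits;
};
```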
The main optimization points:
1. Use std::unordered_set instead of std::set, and use RowsetId.hi as RowsetId's hash value.
2. Minimize the scope of SpinLock in UniqueRowsetIdGenerator.
Performance comparison:
* Run UniqueRowsetIdGeneratorTest.GenerateIdBenchmark 10 times
| old version | new version |
| ----------- | ----------- |
| 6s962ms | 3s647ms |
| 6s139ms | 3s393ms |
| 6s234ms | 3s686ms |
| 6s060ms | 3s447ms |
| 5s966ms | 4s127ms |
| 5s786ms | 3s994ms |
| 5s778ms | 4s072ms |
| 6s193ms | 4s082ms |
| 6s159ms | 3s560ms |
| 5s591ms | 3s654ms |
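A minimal sketch of both optimization points, with assumed field names for RowsetId (not the actual Doris code): reuse the already-random `hi` field as the hash value, and keep the lock's critical section as short as possible.
```cpp
#include <cstdint>
#include <mutex>
#include <unordered_set>

struct RowsetId {
    int64_t hi = 0, mi = 0, lo = 0;
    bool operator==(const RowsetId& o) const {
        return hi == o.hi && mi == o.mi && lo == o.lo;
    }
};

// hi already carries enough entropy, so use it directly as the hash
// value instead of hashing all three fields.
struct RowsetIdHash {
    size_t operator()(const RowsetId& id) const { return static_cast<size_t>(id.hi); }
};

class UniqueRowsetIdGeneratorSketch {
public:
    void release_id(const RowsetId& id) {
        std::lock_guard<std::mutex> guard(_lock);  // a SpinLock in Doris
        _valid_ids.erase(id);  // O(1) on average with the unordered_set
    }
private:
    std::mutex _lock;
    std::unordered_set<RowsetId, RowsetIdHash> _valid_ids;
};
```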
Support the BE plugin framework, including:
* update the PluginManager to support a plugin lookup method
* support the builtin-plugin registration method
* the plugin install/uninstall process
* PluginLoader (a minimal sketch follows this list):
  * dynamically install and validate a plugin's .so file
  * dynamically uninstall a plugin and check its status
* PluginZip:
  * support downloading and extracting remote/local plugin .zip files
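A minimal sketch of the dynamic .so install step, assuming dlopen-based loading and a hypothetical `plugin_init` entry symbol (the actual PluginLoader may differ):
```cpp
#include <dlfcn.h>
#include <iostream>
#include <string>

// Hypothetical entry symbol every plugin is assumed to export.
using plugin_init_fn = int (*)();

bool install_plugin(const std::string& so_path) {
    void* handle = dlopen(so_path.c_str(), RTLD_NOW | RTLD_LOCAL);
    if (handle == nullptr) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return false;
    }
    auto init = reinterpret_cast<plugin_init_fn>(dlsym(handle, "plugin_init"));
    if (init == nullptr || init() != 0) {
        dlclose(handle);  // roll back a bad or failing plugin
        return false;
    }
    // Keep `handle` somewhere so uninstall can dlclose() it later.
    return true;
}
```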
TODO:
* We should provide a PluginContext to pass the necessary system variables when the plugin's init/close methods are invoked
* Add the entry points for the BE dynamic plugin install/uninstall process, including:
  * the FE sending install/uninstall plugin statements (via RPC)
  * the FE meta update request carrying the plugin list information
  * the FE operation requests (update/query) involving plugins (maybe not needed)
* Add a way to report plugin status
* Load already-installed plugins when BE starts
Earlier we introduced `BlockManager` to separate data access logic from
the underlying file read and write logic.
This CL further unifies all `SegmentV2` data access under the `BlockManager`,
removes the previous `FileManager` class, and moves the file cache into the `FileBlockManager`.
There are no logical changes in this CL.
After this CL, all user table data is read and written through the `WritableBlock` and `ReadableBlock`
returned by the `BlockManager`, and no file operations are performed directly.
1. BlockManager has been added to StorageEngine, so StorageEngine must be
initialized when starting the BetaRowset unit test.
2. The cache should not use the same buffer to store the value; otherwise
the address will be freed twice and crash.
The timestamp values loaded from ORC files are wrong: they have an offset compared with Hive and Spark.
Because the time zone of an ORC timestamp is stored inside the ORC stripe information, the timestamp obtained here is an offset timestamp, so parsing the timestamp as UTC yields the actual datetime literal.
e.g.:
```
select str_to_date('2014-12-21 12%3A34%3A56', '%Y-%m-%d %H%%3A%i%%3A%s');
select unix_timestamp('2007-11-30 10:30%3A19', '%Y-%m-%d %H:%i%%3A%s');
```
This also enables us to extract column fields from HDFS file paths that contain '%'.
The abstraction of the Block layer, inspired by Kudu, lies between the "business
layer" and the "underlying file storage layer" (`Env`), making them no longer
strongly coupled.
In this way, the business layer (such as `SegmentWriter`) does not need to
perform file operations directly, which brings better encapsulation. An ideal
situation in the future is: when we need to support a new file storage system,
we only need to add a corresponding type of BlockManager without modifying the
business code (such as `SegmentWriter`).
With the Block layer, there are some benefits:
1. First and foremost, the mapping relationship between data and `Env` is more
flexible. For example, in the storage engine, the data of a tablet can be
placed in multiple file systems (`Env`) at the same time; that is, one-to-many
relationships can be supported, such as one copy on local storage and one on
remote storage.
2. The mapping relationship between blocks and files can be adjusted; for
example, it need not be one-to-one. The data of multiple blocks can be stored
in a single physical file, which reduces the number of files that need to be
opened during querying. This is like the `LogBlockManager` in Kudu.
3. We can move the opened-file cache under the Block layer, which can automatically
close and reopen the files used by the upper layer, so that the upper business
level does not need to be aware of file handle limits at all (a problem that is
often encountered in production now).
4. Better automatic cleanup logic when there are exceptions. For example, a block
that is not closed explicitly can automatically clean up its corresponding file,
thereby avoiding generating most garbage files.
5. More convenient batch file creation and deletion. Some operations, such as
compaction, create multiple files. At present, these files are processed one
by one: 1) creation; 2) writing data; 3) fsync to disk. In fact this is not
necessary: we only need to fsync the whole batch of files at the end. This
gives the operating system more opportunities to merge IO, thereby improving
performance. However, this logic is relatively tedious and need not be coupled
with the business code; the Block layer is an ideal place to put it.
This is the first patch; it just adds the related classes, laying the groundwork
for later switching of the read and write logic.
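To make the abstraction concrete, here is a minimal sketch of the shapes involved, with trivial stand-ins for Status/Slice; these are assumed interfaces for illustration, not the actual Doris headers:
```cpp
#include <cstdint>
#include <memory>
#include <string>

// Trivial stand-ins so the sketch compiles; Doris has its own types.
struct Status { bool ok = true; };
struct Slice { const char* data = nullptr; size_t size = 0; };

// Business code (e.g. SegmentWriter) only sees these interfaces.
class WritableBlock {
public:
    virtual ~WritableBlock() = default;
    virtual Status append(const Slice& data) = 0;  // buffered write
    virtual Status close() = 0;                    // flush/fsync + release handle
};

class ReadableBlock {
public:
    virtual ~ReadableBlock() = default;
    virtual Status read(uint64_t offset, size_t length, char* out) const = 0;
};

// One BlockManager per storage backend; supporting a new file system
// means adding a BlockManager, not touching the business code.
class BlockManager {
public:
    virtual ~BlockManager() = default;
    virtual Status create_block(std::unique_ptr<WritableBlock>* block) = 0;
    virtual Status open_block(const std::string& block_id,
                              std::unique_ptr<ReadableBlock>* block) = 0;
};
```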
Fixes #2892
IMPORTANT NOTICE: this CL makes incompatible changes to V2 storage format, developers need to create new tables for test.
This CL refactors the metadata and page format for segment_v2 in order to
* make it easy to extend existing page types
* make it easy to add new page types without sacrificing code reuse
* make it possible to use SIMD to speed up page decoding
Here is a summary of the main code changes:
* Page and index metadata is redesigned, please see `segment_v2.proto`
* The new class `PageIO` is the single place for reading and writing all pages. This removes lots of duplicated code. `PageCompressor` and `PageDecompressor` are now useless and removed.
* The type of the value ordinal is changed from `rowid_t` to a 64-bit `ordinal_t`; this affects the ordinal index as well.
* A column's ordinal index is now implemented by IndexPage, the same as IndexedColumn.
* The zone map index is now implemented by IndexedColumn.
Remove unused LLVM-related code in directory (step 4): be/src/runtime (#2910)
There is a lot of LLVM-related code in the code base, but it is not really used.
Higher versions of GCC are not compatible with the LLVM 3.4.2 version currently used by Doris.
This PR deletes all LLVM-related code in the directory be/src/runtime.
Remove unused LLVM-related code in directory (step 3): be/src/exprs (#2910)
There is a lot of LLVM-related code in the code base, but it is not really used.
Higher versions of GCC are not compatible with the LLVM 3.4.2 version currently used by Doris.
This PR deletes all LLVM-related code in the directory be/src/exprs.
For a tablet, there may be multiple memtables, which are flushed to disk
one by one in the order of generation.
If a memtable flush fails, the load job must fail; but the previous
implementation would overwrite `_flush_status`, which could hide the error
and allow a failed load job to appear successful. A sketch of the fix
pattern follows.
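A minimal sketch of the fix pattern (assumed shape, not the exact Doris code): keep only the first error instead of overwriting `_flush_status` with the result of every flush.
```cpp
#include <mutex>

struct Status { bool ok = true; };

class FlushStatusHolder {
public:
    // Called after each memtable flush; later successes (or later
    // failures) never overwrite the first recorded error.
    void update(const Status& s) {
        std::lock_guard<std::mutex> guard(_lock);
        if (_flush_status.ok) {
            _flush_status = s;
        }
    }
    Status get() {
        std::lock_guard<std::mutex> guard(_lock);
        return _flush_status;
    }
private:
    std::mutex _lock;
    Status _flush_status;
};
```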
This patch also has two other changes:
1. Use `std::bind` to replace `boost::bind`;
2. Remove some unneeded headers.
1. MemTableFlushExecutor maintains a ThreadPool to receive FlushTasks.
2. FlushToken is used to separate tasks from different tablets.
Every tablet's DeltaWriter constructs a FlushToken; tasks within a
FlushToken are handled serially, while tasks across FlushTokens are
handled concurrently.
3. I have removed the thread limit on data_dir, because I/O is not the main
time consumer of the flush threads; much of the time is spent on CPU encoding
and compression.
1. Add some comments to make the code easier to understand;
2. Make the metric `create_tablet_requests_failed` accurate;
3. Make some internal methods use raw pointers directly instead of `shared_ptr`;
4. `using` declarations in `.h` files are contagious when included by other files,
so we should only use them in `.cpp` files;
5. Some formatting changes, such as wrapping lines that are too long;
6. For parameters that need to be modified, use pointers instead of references.
No functional changes in this patch.
Thread pool design point:
All tasks submitted directly to the thread pool enter a FIFO queue and are
dispatched to a worker thread when one becomes free. Tasks may also be
submitted via ThreadPoolTokens. The token wait() and shutdown() functions
can then be used to block on logical groups of tasks.
A token operates in one of two ExecutionModes, determined at token
construction time:
1. SERIAL: submitted tasks are run one at a time.
2. CONCURRENT: submitted tasks may be run in parallel.
This isn't unlike submitting without a token, but the logical grouping that tokens
impart can be useful when a pool is shared by many contexts (e.g. to
safely shut down one context, to derive context-specific metrics, etc.).
Tasks submitted without a token or via ExecutionMode::CONCURRENT tokens are
processed in FIFO order. On the other hand, ExecutionMode::SERIAL tokens are
processed in a round-robin fashion, one task at a time. This prevents them
from starving one another. However, tokenless (and CONCURRENT token-based)
tasks can starve SERIAL token-based tasks.
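The following self-contained toy (not the Doris/Kudu implementation) illustrates the semantics a SERIAL token provides: tasks submitted through the same token run one at a time, in FIFO order, while different tokens may still run concurrently on a shared pool.
```cpp
#include <deque>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>

class ToySerialToken {
public:
    // Submit a task; it runs after all earlier tasks of this token.
    void submit(std::function<void()> task) {
        std::lock_guard<std::mutex> guard(_lock);
        _queue.push_back(std::move(task));
        if (!_running) {  // at most one task of this token runs at a time
            _running = true;
            _worker = std::async(std::launch::async, [this] { drain(); });
        }
    }
    // Block on this logical group of tasks only.
    void wait() {
        if (_worker.valid()) _worker.wait();
    }
private:
    void drain() {
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> guard(_lock);
                if (_queue.empty()) {
                    _running = false;
                    return;
                }
                task = std::move(_queue.front());
                _queue.pop_front();
            }
            task();  // FIFO order within the token
        }
    }
    std::mutex _lock;
    std::deque<std::function<void()>> _queue;
    bool _running = false;
    std::future<void> _worker;
};

int main() {
    ToySerialToken token;
    for (int i = 0; i < 3; ++i) {
        token.submit([i] { std::cout << "task " << i << "\n"; });
    }
    token.wait();  // like the token wait() in the description above
}
```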
Thread design point:
1. It is a thin wrapper around pthread that can register itself with the singleton ThreadMgr
(a private class implemented entirely in thread.cpp, which tracks all live threads so
that they may be monitored via the debug web pages). This class has a limited subset of
boost::thread's API. Construction is almost the same, but clients must supply a
category and a name for each thread so that they can be identified in the debug web
UI. Otherwise, join() is the only supported method from boost::thread.
2. Each Thread object knows its operating system thread ID (TID), which can be used to
attach debuggers to specific threads, to retrieve resource-usage statistics from the
operating system, and to assign threads to resource control groups.
3. Threads are shared objects, but in a degenerate way. They may only have
up to two referents: the caller that created the thread (parent), and
the thread itself (child). Moreover, the only two methods to mutate state
(join() and the destructor) are constrained: the child may not join() on
itself, and the destructor is only run when there's one referent left.
These constraints allow us to access thread internals without any locks.
1. MonoTime/MonoDelta
MonoTime represents a particular point in time, relative to some fixed but unspecified reference point.
MonoDelta represents an elapsed duration of time, i.e., the delta between two MonoTime instances.
2. CountDownLatch
This is a C++ implementation of the Java CountDownLatch; a minimal sketch of its typical shape follows.
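A minimal sketch of the Java-style semantics described above, using std::condition_variable (the method names here are assumptions, not necessarily the Doris API): a coordinator blocks in wait() until the count reaches zero via count_down() calls from workers.
```cpp
#include <condition_variable>
#include <mutex>

class CountDownLatchSketch {
public:
    explicit CountDownLatchSketch(int count) : _count(count) {}
    // Decrement the count; wake all waiters when it reaches zero.
    void count_down() {
        std::lock_guard<std::mutex> guard(_lock);
        if (_count > 0 && --_count == 0) {
            _cond.notify_all();
        }
    }
    // Block until the count reaches zero.
    void wait() {
        std::unique_lock<std::mutex> guard(_lock);
        _cond.wait(guard, [this] { return _count == 0; });
    }
private:
    std::mutex _lock;
    std::condition_variable _cond;
    int _count;
};
```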
Mainly contains the following modifications:
1. Use `std::unique_ptr` to replace some raw pointers;
2. Change some member methods into local static functions;
3. Change some methods that do not need to be public to private;
4. Some formatting changes, such as wrapping lines that are too long;
5. Remove some useless variables;
6. Add or modify some comments for easier understanding.
No functional changes in this patch.