select date_format(k10, '%Y%m%d') as myk10 from baseall group by myk10;
The result of the `date_format` function in the query above is allocated from the MemPool during
query execution. If the query handles millions of rows, this consumes a lot of
memory, so the MemPool should be cleared at intervals.
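A minimal sketch of the idea, using a hypothetical arena in place of the actual MemPool API (names are illustrative): per-row temporaries are released every batch instead of accumulating for the whole query.
```
#include <cstddef>
#include <vector>

// Hypothetical arena standing in for the MemPool: allocations live
// until clear() is called.
struct Arena {
    std::vector<char*> _chunks;
    char* allocate(std::size_t n) {
        _chunks.push_back(new char[n]);
        return _chunks.back();
    }
    void clear() {
        for (char* c : _chunks) delete[] c;
        _chunks.clear();
    }
    ~Arena() { clear(); }
};

void process_rows(std::size_t num_rows) {
    Arena arena;
    const std::size_t kBatch = 1024;
    for (std::size_t i = 0; i < num_rows; ++i) {
        char* buf = arena.allocate(16);  // e.g. the formatted date string
        (void)buf;                       // ... evaluate expression, emit row ...
        if ((i + 1) % kBatch == 0) {
            arena.clear();               // release temporaries at an interval
        }
    }
}
```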
This CL fixes a bug that could cause wrong answers for a beta rowset with a nullable column. The root cause is that the NullBitmapBuilder is not reset when the current page contains no NULLs, which causes a wrong null map to be written for the next page.
Added a test case to reproduce the problem.
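A sketch of the fix pattern with simplified, hypothetical types (the actual Doris classes differ): the per-page null bitmap state has to be reset for every finished page, even when the page contains no NULLs.
```
#include <vector>

// Hypothetical per-page null bitmap builder.
struct NullBitmapBuilder {
    std::vector<bool> _bits;
    bool _has_null = false;

    void add(bool is_null) {
        _bits.push_back(is_null);
        _has_null = _has_null || is_null;
    }
    // Must run for every finished page; otherwise stale bits leak into
    // the null map written for the next page.
    void reset() {
        _bits.clear();
        _has_null = false;
    }
};

void finish_page(NullBitmapBuilder* builder, bool column_is_nullable) {
    if (column_is_nullable && builder->_has_null) {
        // ... write builder->_bits as this page's null map ...
    }
    // The bug: this reset was skipped when the page had no NULLs.
    builder->reset();
}
```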
The main optimization points:
1. Use std::unordered_set instead of std::set, and use RowsetId.hi as RowsetId's hash value.
2. Minimize the scope of SpinLock in UniqueRowsetIdGenerator.
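A rough sketch of both points, using a simplified RowsetId and std::mutex in place of the actual SpinLock (fields and names are illustrative):
```
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <unordered_set>

struct RowsetId {
    int64_t hi = 0;
    int64_t mi = 0;
    int64_t lo = 0;
    bool operator==(const RowsetId& o) const {
        return hi == o.hi && mi == o.mi && lo == o.lo;
    }
};

// The hi part is already distinct per id here, so using it directly as
// the hash value is cheaper than hashing the whole struct.
struct RowsetIdHash {
    std::size_t operator()(const RowsetId& id) const {
        return static_cast<std::size_t>(id.hi);
    }
};

class UniqueRowsetIdGenerator {
public:
    void release_id(const RowsetId& id) {
        // Keep the critical section minimal: only the container
        // mutation happens under the lock.
        std::lock_guard<std::mutex> guard(_lock);
        _valid_ids.erase(id);
    }

private:
    std::mutex _lock;
    std::unordered_set<RowsetId, RowsetIdHash> _valid_ids;
};
```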
Profile comparison:
* Run UniqueRowsetIdGeneratorTest.GenerateIdBenchmark 10 times
| old version | new version |
|:--:|:--:|
| 6s962ms | 3s647ms |
| 6s139ms | 3s393ms |
| 6s234ms | 3s686ms |
| 6s060ms | 3s447ms |
| 5s966ms | 4s127ms |
| 5s786ms | 3s994ms |
| 5s778ms | 4s072ms |
| 6s193ms | 4s082ms |
| 6s159ms | 3s560ms |
| 5s591ms | 3s654ms |
Support the BE plugin framework, including:
* update the Plugin Manager to support a plugin find method
* support the builtin-plugin register method
* the plugin install/uninstall process
* PluginLoader:
  * dynamically install and check the plugin .so file
  * dynamically uninstall and check the plugin status
* PluginZip:
  * support downloading and extracting remote/local plugin .zip files
TODO:
* We should support a PluginContext to transmit the necessary system variables when the plugin's init/close methods are invoked (see the sketch after this list)
* Add the entry for the BE dynamic plugin install/uninstall process, including:
  * The FE sends an install/uninstall plugin statement (via RPC)
  * The FE meta update request with the plugin list information
  * The FE operation request (update/query) with the plugin (maybe not needed)
* Add a way to upload the plugin status
* Load already-installed plugins when BE starts
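Purely as an illustration of the PluginContext idea from the TODO (not the actual plugin API), the contract between the BE and a dynamically loaded plugin could look like this:
```
#include <string>

// Hypothetical context handed to plugins so they can read the system
// variables they need without reaching into BE internals.
struct PluginContext {
    std::string install_path;  // where the extracted .so file lives
    std::string be_version;    // example system variable a plugin may need
};

// Minimal plugin contract: the .so exports an init/close pair that the
// PluginLoader resolves after installing and checking the file.
extern "C" {
int plugin_init(const PluginContext* ctx);   // return 0 on success
int plugin_close(const PluginContext* ctx);  // called before dynamic uninstall
}
```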
Earlier we introduced `BlockManager` to separate data access logic from
underlying file read and write logic.
This CL further unifies all `SegmentV2` data access under the `BlockManager`,
removes the previous `FileManager` class, and moves the file cache to the `FileBlockManager`.
There are no logical changes in this CL.
After this CL, all user table data is read through the `WritableBlock` and `ReadableBlock`
returned by the `BlockManager`, and no file operations are performed directly.
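Conceptually, callers now only see block handles instead of raw files. A simplified sketch of the pattern (the real `WritableBlock`/`ReadableBlock` interfaces differ):
```
#include <cstddef>
#include <memory>
#include <string>

// Simplified stand-ins for the real interfaces.
struct WritableBlock {
    virtual ~WritableBlock() = default;
    virtual void append(const std::string& data) = 0;
    virtual void close() = 0;  // flush and sync
};

struct ReadableBlock {
    virtual ~ReadableBlock() = default;
    virtual void read(std::size_t offset, std::size_t len, std::string* out) = 0;
};

struct BlockManager {
    virtual ~BlockManager() = default;
    virtual std::unique_ptr<WritableBlock> create_block(const std::string& path) = 0;
    virtual std::unique_ptr<ReadableBlock> open_block(const std::string& path) = 0;
};

// Segment code asks the BlockManager for a block and never touches the
// underlying file directly.
void write_segment(BlockManager* mgr, const std::string& path, const std::string& payload) {
    auto wblock = mgr->create_block(path);
    wblock->append(payload);
    wblock->close();
}
```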
1. BlockManager has been added to StorageEngine, so StorageEngine should be
initialized when starting the BetaRowset unit test.
2. The cache should not use the same buffer to store the value; otherwise the
address will be freed twice and cause a crash.
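A sketch of the second point (illustrative code, not the actual cache API): the value handed to the cache must own its own buffer, otherwise the caller and the cache eviction path end up freeing the same address.
```
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Copy the caller's data into a buffer that only the cache owns. The
// cache's deleter frees this copy; the caller remains responsible for
// its own `buf`, so no address is freed twice.
char* make_cache_value(const char* buf, std::size_t len) {
    char* owned = static_cast<char*>(std::malloc(len));
    if (owned != nullptr) {
        std::memcpy(owned, buf, len);
    }
    return owned;
}
```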
Improve the performance of hash join when the build table has too many duplicated rows, which causes hash table collisions and slows down probe performance.
In this PR, when the join type is semi join or anti join, we build the hash table without duplicated rows.
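A simplified sketch of the idea (not the actual HashTable code): for semi/anti join only the existence of a key matters, so duplicate build rows can be skipped when building the table.
```
#include <cstdint>
#include <unordered_set>
#include <vector>

// For LEFT SEMI / ANTI join the probe side only asks "does this key
// exist?", so inserting each build key once is enough. Skipping the
// duplicates keeps bucket chains short and speeds up probing.
std::unordered_set<int64_t> build_semi_join_table(const std::vector<int64_t>& build_keys) {
    std::unordered_set<int64_t> table;
    table.reserve(build_keys.size());
    for (int64_t key : build_keys) {
        table.insert(key);  // duplicates are ignored automatically
    }
    return table;
}

bool probe_semi(const std::unordered_set<int64_t>& table, int64_t probe_key) {
    return table.count(probe_key) > 0;  // semi join emits the probe row if found
}
```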
benchmark:
dataset: tpcds dataset `store_sales` and `catalog_sales`
```
mysql> select count(*) from catalog_sales;
+----------+
| count(*) |
+----------+
| 14401261 |
+----------+
1 row in set (0.44 sec)
mysql> select count(distinct cs_bill_cdemo_sk) from catalog_sales;
+------------------------------------+
| count(DISTINCT `cs_bill_cdemo_sk`) |
+------------------------------------+
| 1085080 |
+------------------------------------+
1 row in set (2.46 sec)
mysql> select count(*) from store_sales;
+----------+
| count(*) |
+----------+
| 28800991 |
+----------+
1 row in set (0.84 sec)
mysql> select count(distinct ss_addr_sk) from store_sales;
+------------------------------+
| count(DISTINCT `ss_addr_sk`) |
+------------------------------+
| 249978 |
+------------------------------+
1 row in set (2.57 sec)
```
test queries:
query1: `select count(*) from (select store_sales.ss_addr_sk from store_sales left semi join catalog_sales on catalog_sales.cs_bill_cdemo_sk = store_sales.ss_addr_sk) a;`
query2: `select count(*) from (select catalog_sales.cs_bill_cdemo_sk from catalog_sales left semi join store_sales on catalog_sales.cs_bill_cdemo_sk = store_sales.ss_addr_sk) a;`
benchmark result:
||query1|query2|
|:--:|:--:|:--:|
|before|14.76 sec|3 min 16.52 sec|
|after|12.64 sec|10.34 sec|
Fix a bug with const union queries like `select null union select null`. This happens because the type of the SlotDescriptor for a `select null` clause is null, which causes a BE core dump and makes the FE find the wrong cast function.
Related issue: #2663, #2828.
This CL supports loading data into specified temporary partitions.
```
INSERT INTO tbl TEMPORARY PARTITIONS(tp1, tp2, ..) ....;
curl .... -H "temporary_partition: tp1, tp, .. " ....
LOAD LABEL db1.label1 (
DATA INFILE("xxxx")
INTO TABLE `tbl2`
TEMPORARY PARTITION(tp1, tp2, ...)
...
```
NOTICE: this CL changes the FE meta version to 77.
There are 3 major changes in this CL:
## Syntax reorganization
Reorganized the syntax related to `specify-partitions`: removed some redundant syntax
definitions and unified the `specify-partitions` syntax under one entry.
## Meta refactor
In order to support specifying temporary partitions,
I made some changes to the way the partition information of a table is stored.
Partition information is now organized as follows:
The following two maps are reserved in OlapTable for storing formal partitions:
```
idToPartition
nameToPartition
```
Use the `TempPartitions` class for storing temporary partitions.
All the partition attributes of formal and temporary partitions,
such as the range, the number of replicas, and the storage medium, are stored
in the `partitionInfo` of the OlapTable.
In `partitionInfo`, we use two maps to store the ranges of formal
and temporary partitions:
```
idToRange
idToTempRange
```
Separate maps are used because the partition ranges of formal partitions and
temporary partitions may overlap; separate maps make it easier to check the partition ranges.
All partition attributes except the partition range are stored using the same map,
and the partition id is used as the map key.
## Method to get partition
A table may contain both formal and temporary partitions.
There are several methods to get a partition of a table,
typically divided into two categories:
1. Get partition by id
2. Get partition by name
Depending on the requirements, the caller may want to obtain
a formal partition or a temporary partition. These methods are
described below so that the partition is always obtained by the correct method.
1. Get by name
This type of request usually comes from a user with partition names, such as
`select * from tbl partition(p1);`.
This type of request has clear information to indicate whether to obtain a
formal or temporary partition.
Therefore, we need to get the partition through this method:
`getPartition(String partitionName, boolean isTemp)`
To avoid modifying too much code, we keep `getPartition(String partitionName)`,
which is the same as `getPartition(partitionName, false)`.
2. Get by id
This type of request usually means that the previous step has obtained
certain partition ids in some way,
so we only need to get the corresponding partition through this method:
`getPartition(long partitionId)`.
This method will try to get both formal partitions and temporary partitions.
3. Get all partition instances
Depending on the requirements, the caller may want to obtain all formal
partitions,
all temporary partitions, or all partitions. Therefore we provide 3 methods,
and the caller chooses according to its needs.
`getPartitions()`
`getTempPartitions()`
`getAllPartitions()`
While a `block` is in use, some methods of the block manager will be referenced.
So `file_block_mgr` should be a resident and globally unique object;
I put it in `StorageEngine`.
TODO: the `BlockManager` and `Env` need to be reorganized.
This CL tries to fix a potential bug described in issue #3097, but I'm not sure this is the root cause.
It also removes a lot of verbose logging and fixes a memory leak.
In a large-scale cluster, we may rolling-upgrade BEs. This patch adds a
column named 'Version' to the 'show backends;' command, as well as to the web page
'/system?path=//backends', to provide a way to check whether any
BE has not yet been upgraded.
```
be/src/olap/rowset/segment_v2/ordinal_page_index.cpp:103:22: warning: ‘ordinal’ may be used
uninitialized in this function [-Wmaybe-uninitialized]
_ordinals[i] = ordinal;
```
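The usual fix for this class of warning is to give the variable a defined initial value so the compiler can prove it is set on every path; a minimal illustration (not the actual Doris code):
```
#include <cstdint>
#include <vector>

using ordinal_t = uint64_t;

void fill_ordinals(std::vector<ordinal_t>* ordinals, bool have_value, ordinal_t value) {
    // Initializing unconditionally silences -Wmaybe-uninitialized, since
    // `ordinal` now has a well-defined value on every code path.
    ordinal_t ordinal = 0;
    if (have_value) {
        ordinal = value;
    }
    for (ordinal_t& slot : *ordinals) {
        slot = ordinal;
    }
}
```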
This bug occurred when the BE makes a snapshot: the version required by the FE had already been merged into the cumulative version, so the snapshot task could not complete even if it retried. To solve this problem, the BackupJob can be set to CANCELLED, and the user can then retry the job.
Fix #3057
If delete predicates exist in the meta of Doris 0.10, all of these predicates should
be retained. There is a confusing point in Doris 0.10: the delete predicate
only exists in OLAPHeaderMessage and PPendingDelta, not in PDelta.
This quirk results in this bug.