The tablet and disk information reporting threads need to report to the FE periodically. At the same time, these two reporting threads can also be triggered by certain events.
The modification in PR #4440 caused the two threads to be triggered only by events, so they no longer reported periodically.
1. Find the cache node by SQL Key, then find the corresponding partition data by Partition Key, and finally decide whether the cache is hit based on LastVersion and LastVersionTime.
2. Follow the classic LRU (least recently used) cache algorithm, implemented with a three-layer data structure.
3. The cache eviction algorithm keeps partition ranges contiguous as much as possible, because discontinuous partitions reduce the partition cache hit rate.
4. Use two thresholds, maximum memory and elastic memory, to avoid frequent eviction of data.
When the keys type of the mv and the base table differ, Doris should execute a sorting schema change instead of a linked schema change.
Otherwise, the data size of the mv is actually the same as that of the base table, so the mv has no pre-aggregation effect at all and queries will not choose it.
This commit fixes this problem. Fixes #4586
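A minimal sketch of the situation (table and column names are hypothetical): an aggregate materialized view built on a duplicate-keys base table has a different keys type than its base, so building it must go through a sorting schema change.
```
-- Base table uses DUP_KEYS.
CREATE TABLE site_visit (
    siteid INT,
    city   SMALLINT,
    pv     BIGINT
) DUPLICATE KEY(siteid, city)
DISTRIBUTED BY HASH(siteid) BUCKETS 10;

-- The materialized view aggregates pv, so its keys type is AGG_KEYS.
-- Its rows must be sorted and merged (sorting schema change), not just
-- linked to the base table's segments (linked schema change).
CREATE MATERIALIZED VIEW site_pv AS
SELECT siteid, city, SUM(pv)
FROM site_visit
GROUP BY siteid, city;
```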
After version 0.12, Doris removed the format conversion function that converted metas from the hdr_ format to the tabletmeta_ format when loading metas (commit 3bca253).
When we upgrade Doris and there are old-format metas in storage, the BE will not read the old-format tablets, which can lead to data loss.
So we add a meta format detection function to prevent data loss: when there are old-format metas in olap_meta, the BE can detect them and print a log or exit.
For [Spark Load]:
1. Support decimal and largeint
2. Add validation logic for char/varchar/decimal
3. Check data loaded from Hive in strict mode
4. Support decimal/date/datetime aggregators
Fix the stale path delete checking logic.
When the current main path has missing versions, the delete checking logic always core dumps. So we fix the checking logic to tolerate missing versions on the current main path.
After tablet-level metrics were supported, the HTTP metrics API may respond with a very large body when a BE holds a large number of tablets, causing heavy network traffic.
This patch introduces HTTP content compression to reduce network traffic.
BE cannot exit gracefully because some threads run in endless loops. This patch does the following optimizations:
- Use the well-encapsulated Thread and ThreadPool instead of std::thread and std::vector<std::thread>
- Use CountDownLatch in the threads' loop conditions to avoid endless loops
- Introduce a new class Daemon for daemon work, like tcmalloc_gc, memory_maintenance and calculate_metrics
- Decouple the statistics-type TaskWorkerPool from StorageEngine notification by submitting tasks to TaskWorkerPool's queue
- Reorder objects' stop and destruction in main(), i.e. stop network services first, then internal services
- Use libevent in pthreads mode, by calling evthread_use_pthreads(), so that EvHttpServer can exit gracefully with multiple threads
- Call brpc::Server's Stop() and ClearServices() explicitly
Fix a segment group zone map bug during schema change:
(1) Add a WrapperField null pointer check.
(2) In DUP_KEYS, keep the _zone_maps index consistent with the _schema column index.
Main CL:
1. Copy the code from the BE to implement the `str_to_date()` function in the FE.
2. `str_to_date("2020-08-08", "%Y-%m-%d %H:%i:%s")` will return `2020-08-08 00:00:00` instead of `2020-08-08`.
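For reference, the corrected behavior described above, written as a query:
```
-- The format string contains time parts, so the result is a DATETIME
-- padded with 00:00:00 rather than a bare DATE.
SELECT str_to_date('2020-08-08', '%Y-%m-%d %H:%i:%s');
-- expected: 2020-08-08 00:00:00
```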
The 'part' parameter of the parse_url function does not support lower case, and the protocol is not parsed correctly.
The function also does not support parsing 'port'.
This PR makes parse_url case-insensitive and adds support for parsing 'port'.
The issue: #4451
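For example, after this change both of the following should work (the URL is hypothetical):
```
-- Lower-case part names are now accepted.
SELECT parse_url('http://www.example.com:8080/index.html', 'host');
-- expected: www.example.com

-- 'PORT' is now supported as a part.
SELECT parse_url('http://www.example.com:8080/index.html', 'PORT');
-- expected: 8080
```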
(1) Fix the bug of recovering persisted stale rowsets when there are multiple single-version rowsets among the stale rowsets.
(2) In delete_expired_inc_rowsets, the consistent-version check is converted to [0, max_version].
Sometimes we want to detect hotspots in a cluster, for example, hot scanned tablets or hot written tablets,
but we have no insight into the tablets in the cluster.
This patch introduces tablet-level metrics to help achieve this goal. It currently supports 4 metrics per tablet: `query_scan_bytes`, `query_scan_rows`, `flush_bytes`, `flush_count`.
However, one BE may hold hundreds of thousands of tablets, so I add a parameter to the metrics HTTP request
and do not return tablet-level metrics by default.
1. When WITH_MYSQL is off, the load error hub does not support the MySQL load error hub, so we should check its return value.
2. Fix the misjudged return value of `change_row_block` in schema_change.cpp.
The segment index file content is not zero-initialized when it is constructed in the write procedure.
So when the index is loaded from this file and a null VARCHAR cell is encountered,
the null field of the cell is 0, but the uninitialized length field may be a large random number,
and the subsequent memory copy may overflow.
This patch fixes the bug, and also skips the useless memory copy to improve performance a bit.
Persist the stale rowsets meta. When the BE reboots, the stale rowsets meta
can be restored, and the stale versions remain readable before the stale GC time.
ISSUE: #4453
During the historical data transformation of materialized views, the transformation may fail due to data quality problems.
Add an error status code, OLAP_ERR_DATA_QUALITY_ERR, to determine whether a data problem caused the failure.
#3344
1. Support the convert(expr, target_type) function, which is the same as CastExpr.
2. Support cast(expr as signed/unsigned int).
This is just for compatibility; the signed/unsigned specification is meaningless.
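Both forms described above should now be accepted, for example:
```
-- convert(expr, type) behaves the same as CAST(expr AS type).
SELECT CONVERT('123', SIGNED);

-- SIGNED/UNSIGNED is accepted for MySQL compatibility only; the
-- qualifier itself has no effect on the result type.
SELECT CAST('123' AS SIGNED INT), CAST('123' AS UNSIGNED INT);
```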
1. Fix a core dump caused by a wild pointer in PlanFragmentExecutor, fixes issue #4447
2. Fix a core dump caused by a wild pointer in JSON load, fixes issue #4452
3. Change the declaration order of the ODBC type in thrift for compatibility
1. The base column of bitmap_union must be an integer; largeint is not supported either.
2. The base column of hll_union cannot be decimal.
Check the error message of const exprs in a Union Node.
If a user tries to insert a negative number into a bitmap mv, Doris throws the exception 'invalid input'.
The const values in the Union Node are checked in this commit.
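A hypothetical reproduction of the case described above (table and mv names are made up):
```
-- Base table with an aggregate materialized view using bitmap_union.
CREATE TABLE user_visit (
    dt      DATE,
    user_id INT
) DUPLICATE KEY(dt, user_id)
DISTRIBUTED BY HASH(dt) BUCKETS 1;

CREATE MATERIALIZED VIEW uv AS
SELECT dt, bitmap_union(to_bitmap(user_id))
FROM user_visit
GROUP BY dt;

-- The constant values of an INSERT are carried by a Union Node; with this
-- commit the negative value below is rejected with an 'invalid input'
-- error, because to_bitmap() only accepts non-negative integers.
INSERT INTO user_visit VALUES ('2020-08-08', -1);
```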
When creating a tablet, it is necessary to select a disk from all disks that
meet the requirements on the BE node to store the tablet.
In Doris, the current disk selection strategy is to randomly select a disk
from all disks that meet the requirements for tablet creation.
After the cluster has been running for a long time, we found that the
distribution of the number of tablets on different disks in a BE node is unbalanced.
To solve this problem, we introduce the "two random choices" algorithm
for disk selection when creating a tablet:
(1) Randomly select two disks from all disks on the BE node that meet the requirements;
(2) Choose the disk with the smaller number of tablets from the two disks selected in (1) for tablet creation.
1. Disable the MySQL client and LZO libraries by default when building Doris.
The MySQL client library is used for the MySQL external table feature,
which will soon be replaced by the new ODBC external table.
The LZO library is used to compress/decompress data in some old Doris data formats,
which are no longer used.
2. Add missing licenses to some files.
3. All non-Apache-License code is explained in the NOTICE file and the corresponding license is declared.
4. Remove the js source code from webroot; it will be downloaded as a third-party dependency.
Since Segment V2 has been released for a long time, we should make it the default storage format for newly created tables.
This CL mainly changes:
1. For all newly created tables, the default storage format is Segment V2.
2. For all existing tablets, the storage format remains unchanged.
3. Fix the bugs described in #4384 and #4385.
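A sketch of what this means for DDL, assuming the `storage_format` table property:
```
-- Previously, Segment V2 had to be requested explicitly:
CREATE TABLE t_v2 (k INT, v INT)
DUPLICATE KEY(k)
DISTRIBUTED BY HASH(k) BUCKETS 1
PROPERTIES ("storage_format" = "V2");

-- With this change, omitting the property also yields Segment V2;
-- existing tablets keep their previous storage format.
CREATE TABLE t_default (k INT, v INT)
DUPLICATE KEY(k)
DISTRIBUTED BY HASH(k) BUCKETS 1;
```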
* Implement the grammar of batch delete #4051
* Process create and alter table when the table has a delete sign column
* Support the syntax for enabling the delete column
* Automatically filter deleted data in select statements
* Automatically add the delete sign when creating a rollup table
TODO:
* Optimize the reading and compaction logic on the BE side, so that data marked as deleted is completely removed during base compaction
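A rough sketch of the user-facing side (the statement and the hidden column name, assumed here to be `__DORIS_DELETE_SIGN__`, follow the batch delete design and may differ in detail):
```
-- Enable the hidden delete-sign column on an existing table.
ALTER TABLE example_tbl ENABLE FEATURE "BATCH_DELETE";

-- Ordinary queries filter rows whose delete sign is set automatically,
-- roughly as if this predicate were appended:
SELECT * FROM example_tbl WHERE __DORIS_DELETE_SIGN__ = 0;
```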
```
SELECT *
FROM
(SELECT cs_order_number,
cs_warehouse_sk
FROM catalog_sales
WHERE cs_order_number = 125005
AND cs_warehouse_sk = 4) cs1
LEFT SEMI JOIN
(SELECT cs_order_number,
cs_warehouse_sk
FROM catalog_sales
WHERE cs_order_number = 125005) cs2
ON cs1.cs_order_number = cs2.cs_order_number
AND cs1.cs_warehouse_sk <> cs2.cs_warehouse_sk;
```
The above query has an equality predicate and a non-equality predicate.
If a non-equality predicate exists, the build table should be kept as it is, so the deduplication should be removed.
replace is a user-defined function that replaces all occurrences of an old substring with a new substring in a string, as follows:
mysql> select replace("http://www.baidu.com:9090", "9090", "");
+------------------------------------------------------+
| replace('http://www.baidu.com:9090', '9090', '') |
+------------------------------------------------------+
| http://www.baidu.com: |
+------------------------------------------------------+