This commit is the first stage of #6287.
In this commit, we support:
1. Sync job
    1) Creating sync jobs and data channels in FE.
    2) Pausing a sync job.
    3) Resuming a sync job.
    4) Stopping a sync job.
    5) Showing sync jobs.
2. Canal
    1) Subscribing to canal and receiving its binlog data when a sync job is created.
```
SELECT count(distinct products_id) FROM a_table as a WHERE 1=1 AND products_id in ( SELECT products_id from b_table );
```
Hash table construction errors may lead to unstable results for queries like the one above.
The system view names in information_schema are case-insensitive,
but we should not refer to one of these views using different cases within the same statement.
The following SQL is correct:
```
select * from information_schema.TAbles where TAbles.ENGINE = 'Doris';
```
The following SQL is wrong because both `TAbles` and `tables` are used:
```
select * from information_schema.TAbles order by tables.CREATE_TIME;
```
The problem I want to solve is described in #6355.
This CL mainly changes:
1. Support compacting tablets under alter operations.
On the BE side, the compaction logic will also select tablets whose state is "TABLET_NOTREADY" for cumulative compaction.
2. Remove the "alter_task" field from the tablet meta on the BE side.
The "alter_task" field has not been used for a long time.
3. Support delete operations while a table is under an alter operation.
Previously, when a table was under an alter operation, executing a delete returned the error: "Table's state is not NORMAL".
Now, a delete can be executed successfully as long as the condition column is not under schema change,
and the delete condition will be applied to all materialized indexes.
1. `StorageEngine::_delete_tablets_on_unused_root_path` will try to obtain each tablet shard's write lock in `TabletManager`:
```
StorageEngine::_delete_tablets_on_unused_root_path
TabletManager::drop_tablets_on_error_root_path
obtain each tablet shard's write lock
```
2. `TabletManager::build_all_report_tablets_info` and other methods obtain the tablet shard read lock frequently.
As a result, `StorageEngine::_delete_tablets_on_unused_root_path` may hold `_store_lock` for a long time,
which makes it difficult for other threads, such as `StorageEngine::get_stores_for_create_tablet`, to acquire the write `_store_lock`.
Since `drop_tablets_on_error_root_path` is a low-probability event, `TabletManager::drop_tablets_on_error_root_path` should return early when its parameter `tablet_info_vec` is empty.
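The early-return fix can be sketched as follows, with hypothetical simplified types (`TabletInfo`, a single shard lock) rather than the real Doris signatures:

```cpp
#include <mutex>
#include <shared_mutex>
#include <vector>

struct TabletInfo { long tablet_id; };

// Hypothetical sketch: when there is nothing to drop, return before taking
// any per-shard write lock, so _store_lock is not held while iterating shards.
bool drop_tablets_on_error_root_path(const std::vector<TabletInfo>& tablet_info_vec,
                                     std::shared_mutex& shard_lock) {
    if (tablet_info_vec.empty()) {
        return false;  // early return: no shard write lock acquired
    }
    std::unique_lock<std::shared_mutex> wlock(shard_lock);  // tablet shard write lock
    // ... drop each tablet in tablet_info_vec under the lock ...
    return true;
}
```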
Currently, Doris supports loading OSS/S3A files by using params like fs.s3a.access.key, but there is a bug when loading such files. The root cause is that the broker cannot handle an FSDataInputStream that does not implement ByteBufferReadable.
See Issue #6307
For S3A input stream support of ByteBufferReadable, see:
https://issues.apache.org/jira/browse/HADOOP-14603
Add `SHOW DATA SKEW FROM tbl PARTITION(p1)` to view the data distribution of a specified partition:
```
mysql> admin show data skew from tbl1 partition(tbl1);
+-----------+-------------+-------+---------+
| BucketIdx | AvgDataSize | Graph | Percent |
+-----------+-------------+-------+---------+
| 0 | 0 | | 100.00% |
+-----------+-------------+-------+---------+
1 row in set (0.01 sec)
```
Also modify the result of `admin show replica distribution` to add the replica size distribution:
```
mysql> admin show replica distribution from tbl1 partition(tbl1);
+-----------+------------+-------------+----------+------------+-----------+-------------+
| BackendId | ReplicaNum | ReplicaSize | NumGraph | NumPercent | SizeGraph | SizePercent |
+-----------+------------+-------------+----------+------------+-----------+-------------+
| 10002 | 1 | 0 | > | 100.00% | | 100.00% |
+-----------+------------+-------------+----------+------------+-----------+-------------+
```
* [Optimize] Optimize the speed of converting between integers and strings
* Use fmt and std::from_chars to make integer-to-string and string-to-integer conversion more efficient
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
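The two directions can be sketched with the standard `<charconv>` facilities. Doris uses the fmt library for integer-to-string formatting; `std::to_chars` stands in here so the sketch stays self-contained:

```cpp
#include <charconv>
#include <string>

// Locale-free, allocation-free integer formatting (std::to_chars stands in
// for fmt in this self-contained sketch).
std::string int_to_string(int value) {
    char buf[16];  // enough for a 32-bit int with sign
    auto [ptr, ec] = std::to_chars(buf, buf + sizeof(buf), value);
    (void)ec;      // cannot fail for an int in a 16-byte buffer
    return std::string(buf, ptr);
}

// The same call the patch uses for parsing: no locale, no copies.
int string_to_int(const std::string& s) {
    int value = 0;
    std::from_chars(s.data(), s.data() + s.size(), value);
    return value;
}
```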
In the RuntimeFilter BloomFilter, a decimal column got a wrong hash value because of a strict-aliasing violation:
```
decimal12_t decimal = { 12, 12 };
murmurhash3(decimal) in bloom filter: 2167721464
expect: 4203026776
```
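The usual fix for this class of bug is to copy the struct's bytes through `memcpy` before hashing instead of reinterpreting pointers. A sketch with hypothetical names and a stand-in hash (FNV-1a, not the actual Doris murmurhash3 code):

```cpp
#include <cstdint>
#include <cstring>

struct decimal12_t { int64_t integer; int32_t fraction; };  // 12 payload bytes

// Reading the struct through a differently-typed pointer violates strict
// aliasing, which is how a wrong hash input arises. The safe pattern is to
// memcpy the fields into a plain byte buffer first.
uint64_t hash_bytes_safely(const decimal12_t& d) {
    char buf[sizeof(d.integer) + sizeof(d.fraction)];  // 12 bytes, skips padding
    std::memcpy(buf, &d.integer, sizeof(d.integer));
    std::memcpy(buf + sizeof(d.integer), &d.fraction, sizeof(d.fraction));
    uint64_t h = 1469598103934665603ULL;               // FNV offset basis
    for (char c : buf) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ULL;                         // FNV prime
    }
    return h;
}
```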
Use `commitAsync` to commit offsets to kafka, instead of `commitSync`, which may block for a long time.
Also assign a group.id to a routine load job if the user has not specified the "property.group.id" property, so that all consumers of
this job use the same group.id instead of a random id for each consume task.
Support modifying the kafka data source of a routine load job:
```
alter routine load for cmy2 from kafka("kafka_broker_list" = "ip2:9094", "kafka_topic" = "my_topic");
```
This is useful when the kafka broker list or topic has been changed.
Also modify `show create routine load`, support showing "kafka_partitions" and "kafka_offsets".
Fix #6316
If the size of a memtable is greater than the max segment size, the memtable will flush more than
one segment file. A BE coredump was triggered when flushing such a memtable.
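The multi-segment path can be sketched as a splitter that starts a new segment whenever adding a row would exceed the limit (a toy model with illustrative names, not the real memtable flush code):

```cpp
#include <cstddef>
#include <vector>

// Split a memtable's row sizes into segments of at most max_segment_size
// bytes each. The crash was in the path where more than one segment results.
std::vector<std::vector<size_t>> split_into_segments(
        const std::vector<size_t>& row_sizes, size_t max_segment_size) {
    std::vector<std::vector<size_t>> segments(1);
    size_t current = 0;
    for (size_t sz : row_sizes) {
        if (current + sz > max_segment_size && !segments.back().empty()) {
            segments.emplace_back();  // close the current segment, open a new one
            current = 0;
        }
        segments.back().push_back(sz);
        current += sz;
    }
    return segments;
}
```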
[Update] Support update syntax
The current update syntax only supports updating filtered data of a single table.
Syntax:
```
UPDATE table_reference
SET assignment_list
[WHERE where_condition]

value:
    {expr}

assignment:
    col_name = value

assignment_list:
    assignment [, assignment] ...
```
Example:
```
UPDATE unique_table
SET v1 = 1
WHERE k1 = 1
```
New Frontend Config: enable_concurrent_update
This configuration controls whether multiple update statements can be executed concurrently on one table.
The default value is false, which means a table can have only one update task executing at a time.
If users want to update the same table concurrently,
they need to set this configuration to true and restart the master frontend.
Concurrent updates may cause write conflicts and uncertain results, so please be careful.
The main implementation principle:
1. Read the rows that match the conditions set by the where clause.
2. Modify those rows according to the set clause.
3. Write the modified rows back to the table.
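The three steps can be sketched on a toy unique-key table; a `std::map` stands in for the storage engine, and all names are illustrative:

```cpp
#include <functional>
#include <map>

// key column -> value column; a stand-in for a unique-key table.
using ToyTable = std::map<int, int>;

// Hypothetical sketch of UPDATE: scan rows matching the where-predicate,
// apply the set-clause, and write the modified value back. Returns the
// number of updated rows.
int update_rows(ToyTable& table,
                const std::function<bool(int, int)>& where_pred,
                const std::function<int(int)>& set_clause) {
    int updated = 0;
    for (auto& [k, v] : table) {
        if (where_pred(k, v)) {   // 1. read rows matching the condition
            v = set_clause(v);    // 2. apply the set clause
            ++updated;            // 3. write-back (in place in this toy model)
        }
    }
    return updated;
}
```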
Some restrictions on the use of update syntax:
1. Only unique tables can be updated.
2. Only the value columns of a unique table can be updated.
3. The where clause currently only supports a single table.
Possible risks:
1. Since the current implementation updates row by row,
concurrent updates to the same table may conflict and produce incorrect results.
2. If the conditions of the where clause cannot match an index, a full table scan is likely, which affects query performance.
Please pay attention to whether the columns in the where clause can match an index.
[Docs][Update] Add update document and sql-reference
Fixed #6229
#6206
At present, our image file has no file header or footer. When we need to change the image format (such as adding different journal versions to the image), there is no way to distinguish different image formats.
Therefore, we suggest adding a file header and footer to the image. With the new image format, we can freely distinguish and define different ways of reading the image.
The format of the image is as follows:
```
/**
* Image Format:
* |- Image --------------------------------------|
* | - Magic String (4 bytes) |
* | - Header Length (4 bytes) |
* | |- Header -----------------------------| |
* | | |- Json Header ---------------| | |
* | | | - version | | |
* | | | - other key/value(undecided)| | |
* | | |-----------------------------| | |
* | |--------------------------------------| |
* | |
* | |- Image Body -------------------------| |
* | | Object a | |
* | | Object b | |
* | | ... | |
* | |--------------------------------------| |
* | |
* | |- Footer -----------------------------| |
* | | | - Checksum (8 bytes) | |
* | | |- object index --------------| | |
* | | | - index a | | |
* | | | - index b | | |
* | | | ... | | |
* | | |-----------------------------| | |
* | | - other value(undecided) | |
* | |--------------------------------------| |
* | - Footer Length (8 bytes) |
* | - Magic String (4 bytes) |
* |----------------------------------------------|
*/
```
1. Magic Number
One image format is identified by one magic string and one version field. The magic string is saved in the first 4 bytes and the last 4 bytes of the image.
2. Image Header:
The version is now saved in the header in JSON format.
3. Image Body:
Equal to the original image.
4. Image Footer:
The image footer stores the file offsets (indexes) of the image objects. If necessary, we can read individual objects in the image via the footer.
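How a reader would use the trailing fixed-size fields to locate the footer can be sketched as follows (layout per the diagram above; function names, byte handling, and the native-endian length encoding are illustrative, not the actual Doris implementation):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append the trailing fixed-size fields to an image buffer:
// footer body, then footer length (8 bytes), then magic string (4 bytes).
// (Native endianness; a real format would pin this down.)
void write_tail(std::vector<char>& image, const std::string& footer,
                const char magic[4]) {
    image.insert(image.end(), footer.begin(), footer.end());
    uint64_t footer_len = footer.size();
    const char* p = reinterpret_cast<const char*>(&footer_len);
    image.insert(image.end(), p, p + sizeof(footer_len));
    image.insert(image.end(), magic, magic + 4);
}

// Read back from the end: check the magic, read the footer length,
// then slice the footer body out of the buffer.
std::string read_footer(const std::vector<char>& image, const char magic[4]) {
    size_t n = image.size();
    if (n < 12 || std::memcmp(&image[n - 4], magic, 4) != 0) return "";
    uint64_t footer_len = 0;
    std::memcpy(&footer_len, &image[n - 12], sizeof(footer_len));
    if (footer_len > n - 12) return "";  // corrupt length field
    return std::string(&image[n - 12 - footer_len], footer_len);
}
```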
The main purpose of this project is to provide a sample code framework that helps Doris developers get started quickly,
and to give sample code for the use of various new features as a reference.
The framework is submitted first; the sample code will be submitted in subsequent commits.
When the right table has NULL values in a string column, the runtime filter may cause a coredump:
```
select count(*) from baseall t1 join test t2 where t1.k7 = t2.k7;
```
Currently, the functions lower()/upper() can only handle one char at a time.
A vectorized implementation makes them about 2 times faster. Here is the performance test:
The length of char: 26, test 100 times
vectorized-function-cost: 99491 ns
normal-function-cost: 134766 ns
The length of char: 260, test 100 times
vectorized-function-cost: 179341 ns
normal-function-cost: 344995 ns
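The speedup comes from letting the compiler auto-vectorize a branch-free byte loop; a sketch of the idea for `lower()` (illustrative, not the actual Doris kernel):

```cpp
#include <cstddef>

// Branch-free ASCII lower-case: each byte is transformed independently, so
// the compiler can auto-vectorize this loop with SIMD compares and ors.
void to_lower_ascii(char* data, size_t len) {
    for (size_t i = 0; i < len; ++i) {
        unsigned char c = static_cast<unsigned char>(data[i]);
        int is_upper = (c >= 'A') & (c <= 'Z');            // 1 for 'A'..'Z', else 0
        data[i] = static_cast<char>(c | (is_upper << 5));  // 'A' | 0x20 == 'a'
    }
}
```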
Fix #6265
The reason for the error is that the `Grouping Func Exprs` are substituted twice. In the first substitution, `VirtualSlotRef` replaces the original `SlotRef`; in the second substitution, `VirtualSlotRef` causes a NullPointerException in `getTable()` at:
```
} else if (((SlotRef) child).getDesc().getParent().getTable().getType()
```
In the first substitution, the result exprs of the select clause have already been substituted:
```
groupingInfo = new GroupingInfo(analyzer, groupByClause.getGroupingType());
groupingInfo.substituteGroupingFn(resultExprs, analyzer);
```
In the second substitution, we actually only need to substitute the ordering exprs that do not appear in the select list.
```
createSortInfo(analyzer);
if (sortInfo != null && CollectionUtils.isNotEmpty(sortInfo.getOrderingExprs())) {
if (groupingInfo != null) {
groupingInfo.substituteGroupingFn(sortInfo.getOrderingExprs(), analyzer);
}
}
```
This is changed to:
```
createSortInfo(analyzer);
if (sortInfo != null && CollectionUtils.isNotEmpty(sortInfo.getOrderingExprs())) {
if (groupingInfo != null) {
List<Expr> orderingExprNotInSelect = sortInfo.getOrderingExprs().stream()
.filter(item -> !resultExprs.contains(item)).collect(Collectors.toList());
groupingInfo.substituteGroupingFn(orderingExprNotInSelect, analyzer);
}
}
```