* Change the label of the broker load txn
1. Put the broker load label into the txn label (see the sketch after this list)
2. Fix the `label is already used` bug
3. Fix the partition error in the new broker load
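A minimal sketch of the idea behind items 1 and 2, assuming a hypothetical `TxnLabelRegistry` (the class, map, and method names are placeholders, not the actual Doris API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TxnLabelRegistry {
    // Maps a user-visible load label to the txn that owns it.
    private final Map<String, Long> labelToTxnId = new ConcurrentHashMap<>();

    // Reuse the broker load label as the txn label, and reject duplicates
    // up front so the user sees "label is already used" once instead of
    // a half-created txn.
    public long beginTxn(String loadLabel, long newTxnId) {
        Long existing = labelToTxnId.putIfAbsent(loadLabel, newTxnId);
        if (existing != null) {
            throw new IllegalStateException("label is already used: " + loadLabel);
        }
        return newTxnId;
    }
}
```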
* Fix count errors in mini load and broker load
Three counters (num_rows_load_total, num_rows_load_filtered, num_rows_load_unselected) are used to compute dpp.norm.ALL and dpp.abnorm.ALL:
num_rows_load_total is the number of rows read from the source file.
num_rows_load_unselected is the number of those rows that do not satisfy the WHERE conjuncts.
num_rows_load_filtered is the number of rows, out of (num_rows_load_total - num_rows_load_unselected), whose data quality is not good enough. The resulting arithmetic is sketched below.
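A minimal sketch of that arithmetic, assuming from the definitions above that dpp.abnorm.ALL counts the quality-filtered rows and dpp.norm.ALL counts the rows actually loaded:

```java
public class LoadCounters {
    long numRowsLoadTotal;      // rows read from the source file
    long numRowsLoadUnselected; // rows dropped by WHERE conjuncts
    long numRowsLoadFiltered;   // rows dropped for bad data quality

    // dpp.abnorm.ALL: the quality-filtered rows.
    long dppAbnormAll() {
        return numRowsLoadFiltered;
    }

    // dpp.norm.ALL: what remains after dropping unselected and filtered rows.
    long dppNormAll() {
        return numRowsLoadTotal - numRowsLoadUnselected - numRowsLoadFiltered;
    }
}
```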
This change includes SHOW LOAD support for loadv2 and some loadv2 bug fixes.
First, SHOW LOAD now reports info for both load and loadv2 jobs. For loadv2, the ETL progress is shown as N/A while the job is in the loading phase.
Second, a loadv2 job is created when the `version` property is set to `v2` (sketched below).
This is a temporary property and does not affect the old broker load.
Once loadv2 is finished, it will become the default load implementation.
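A minimal sketch of that dispatch; `createLoadJobV2` and `createLoadJobV1` are hypothetical placeholders, not the actual Doris entry points:

```java
import java.util.Map;

public class LoadDispatcher {
    // Temporary switch: the "version" property selects the loadv2 path;
    // anything else falls back to the old broker load.
    public void submit(Map<String, String> properties) {
        String version = properties.getOrDefault("version", "v1");
        if ("v2".equalsIgnoreCase(version)) {
            createLoadJobV2(properties);
        } else {
            createLoadJobV1(properties);
        }
    }

    private void createLoadJobV2(Map<String, String> props) { /* loadv2 path */ }
    private void createLoadJobV1(Map<String, String> props) { /* old broker load */ }
}
```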
Finally, this change fixes some bugs in LoadingTaskPlanner.
* Enhance usability
1. Add metrics to monitor transactions and the stream load process in BE (a counter sketch follows this list).
2. Set the BE config 'result_buffer_cancelled_interval_time' to 300s.
3. Set the FE config 'enable_metric_calculator' to true.
4. Add more logging to trace the broker load process.
5. Modify the query report process to cancel a query immediately if any instance fails.
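One way such counters could look, sketched with plain atomics rather than the project's actual metrics framework; every name here is hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LoadMetrics {
    // Hypothetical counters for the kinds of events item 1 monitors.
    private final AtomicLong txnBegin = new AtomicLong();
    private final AtomicLong txnCommit = new AtomicLong();
    private final AtomicLong txnRollback = new AtomicLong();
    private final AtomicLong streamLoadRows = new AtomicLong();

    public void onTxnBegin()             { txnBegin.incrementAndGet(); }
    public void onTxnCommit()            { txnCommit.incrementAndGet(); }
    public void onTxnRollback()          { txnRollback.incrementAndGet(); }
    public void onStreamLoadRows(long n) { streamLoadRows.addAndGet(n); }
}
```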
* Fix bugs
1. Avoid a NullPointerException when colocation join is enabled with broker load
2. Return immediately when the pull load task coordinator fails to execute
1. Max I/O utilization across all disks (aggregation sketched after this list)
2. Max network send/receive byte rate across all network devices
3. Base/cumulative compaction request and failure counters
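A minimal sketch of the max-across-devices aggregation behind items 1 and 2, assuming per-device utilization samples are already available (the sampling itself is BE-internal and not shown):

```java
import java.util.Map;

public class DeviceMetrics {
    // Report the busiest device, so one gauge summarizes the whole host.
    public static double maxIoUtil(Map<String, Double> ioUtilPerDisk) {
        return ioUtilPerDisk.values().stream()
                .mapToDouble(Double::doubleValue)
                .max()
                .orElse(0.0);
    }
}
```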
1. Use a data consumer group to share a single stream load pipe among multiple data consumers. This increases the Kafka message consumption speed and reduces the number of tasks per routine load job. (The shared-pipe pattern is sketched after the test notes below.)
Test results:
* 1 consumer, 1 partition:
consume time: 4.469s, rows: 990140, bytes: 128737139. 221557 rows/s, 28M/s
* 1 consumer, 3 partitions:
consume time: 12.765s, rows: 2000143, bytes: 258631271. 156689 rows/s, 20M/s
blocking get time(us): 12268241, blocking put time(us): 1886431
* 3 consumers, 3 partitions:
consume time(all 3): 6.095s, rows: 2000503, bytes: 258631576. 328220 rows/s, 42M/s
blocking get time(us): 1041639, blocking put time(us): 10356581
The last two cases show that we can achieve higher speed by adding more consumers, but the bottleneck then shifts from the Kafka consumer to Doris ingestion, so 3 consumers per group is enough.
I also added a Backend config `max_consumer_num_per_group` to control the number of consumers in a data consumer group; the default value is 3.
In my test (1 Backend, 2 tablets, 1 replica), 1 routine load task can reach 10M/s, which is the same as raw stream load.
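A minimal sketch of the shared-pipe pattern, assuming hypothetical names; in Doris the pipe feeds the stream load execution path rather than a plain queue of strings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ConsumerGroupSketch {
    public static void main(String[] args) throws Exception {
        // The shared "stream load pipe": every consumer in the group
        // writes into the same bounded queue.
        BlockingQueue<String> pipe = new LinkedBlockingQueue<>(1024);

        int maxConsumerNumPerGroup = 3; // mirrors the BE config default
        ExecutorService group = Executors.newFixedThreadPool(maxConsumerNumPerGroup);
        for (int i = 0; i < maxConsumerNumPerGroup; i++) {
            final int id = i;
            group.submit(() -> {
                // Stand-in for polling one or more Kafka partitions.
                for (int n = 0; n < 5; n++) {
                    pipe.put("consumer-" + id + "-msg-" + n);
                }
                return null;
            });
        }
        group.shutdown();
        group.awaitTermination(10, TimeUnit.SECONDS);

        // Single downstream reader, standing in for Doris ingestion.
        List<String> drained = new ArrayList<>();
        pipe.drainTo(drained);
        System.out.println("ingested " + drained.size() + " messages");
    }
}
```

Bounding the queue gives natural backpressure: when ingestion falls behind, fast consumers block on put() instead of growing memory without limit.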
2. Add OFFSET_BEGINNING and OFFSET_END support for Kafka routine load
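A sketch of how such named offsets conventionally map to librdkafka's special values (RD_KAFKA_OFFSET_BEGINNING is -2, RD_KAFKA_OFFSET_END is -1); the parser itself is a hypothetical stand-in:

```java
public class KafkaOffsets {
    // librdkafka's conventional special offsets.
    public static final long OFFSET_BEGINNING = -2L; // RD_KAFKA_OFFSET_BEGINNING
    public static final long OFFSET_END = -1L;       // RD_KAFKA_OFFSET_END

    // Hypothetical parser for a user-supplied offset property.
    public static long parseOffset(String value) {
        switch (value.toUpperCase()) {
            case "OFFSET_BEGINNING": return OFFSET_BEGINNING;
            case "OFFSET_END":       return OFFSET_END;
            default:                 return Long.parseLong(value);
        }
    }
}
```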
1. The stream load executor aborts the txn when a task contains no valid data
2. Change the txn label to DebugUtil.print(UUID), which matches the task id printed by the BE
3. Print the UUID in hi-lo form (sketched below)
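A minimal sketch of hi-lo UUID printing, splitting the 128-bit id into its two 64-bit halves; DebugUtil.print here is a stand-in for the real helper:

```java
import java.util.UUID;

public class DebugUtilSketch {
    // Print a UUID as "<hi>-<lo>": the two 64-bit halves in hex,
    // so FE txn labels and BE task ids render identically.
    public static String print(UUID id) {
        return Long.toHexString(id.getMostSignificantBits())
                + "-" + Long.toHexString(id.getLeastSignificantBits());
    }

    public static void main(String[] args) {
        System.out.println(print(UUID.randomUUID()));
    }
}
```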
1. Initialize the committed offset in the stream load context
2. Set the default max error number to 5000 rows per 10000 rows (see the sketch after this list)
3. Add a log builder for routine load jobs and tasks
4. Clone the plan fragment params for every task
5. The BE no longer reports too many filtered rows when the initial max error ratio is 1
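A sketch of the error-budget arithmetic implied by items 2 and 5, with hypothetical names (the real check lives in the BE):

```java
public class ErrorBudget {
    static final long DEFAULT_MAX_ERROR_ROWS = 5000; // allowed errors...
    static final long SAMPLE_WINDOW_ROWS = 10000;    // ...per this many rows

    // Returns true when the filtered-row count exceeds the budget.
    // With maxErrorRatio == 1 every row may fail, so the job never
    // aborts for "too many filtered rows" (item 5).
    static boolean tooManyFilteredRows(long totalRows, long filteredRows,
                                       double maxErrorRatio) {
        if (maxErrorRatio >= 1.0) {
            return false;
        }
        return filteredRows > totalRows * maxErrorRatio;
    }

    public static void main(String[] args) {
        double defaultRatio = (double) DEFAULT_MAX_ERROR_ROWS / SAMPLE_WINDOW_ROWS; // 0.5
        System.out.println(tooManyFilteredRows(10000, 6000, defaultRatio)); // true
    }
}
```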
1. Check whether properties is null before checking the routine load properties
2. Change the transactionStateChange reason to a string
3. Calculate the current task num by beId
4. Add Kafka offset properties
5. Prefer to reuse the previous BE id
6. Add a before-commit listener to the txn: if the txn is committed after its task was aborted, the commit is aborted (sketched after this list)
7. Set the queryId of the stream load plan to the taskId
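A minimal sketch of item 6's before-commit hook, with hypothetical class, method, and exception names:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TxnGuard {
    // Task ids whose tasks were aborted before their txn committed.
    private final Set<Long> abortedTasks = ConcurrentHashMap.newKeySet();

    public void onTaskAborted(long taskId) {
        abortedTasks.add(taskId);
    }

    // Invoked just before a txn commits; vetoes the commit when the
    // owning task has already been aborted.
    public void beforeCommit(long taskId) {
        if (abortedTasks.contains(taskId)) {
            throw new IllegalStateException(
                "task " + taskId + " was aborted; abort commit");
        }
    }
}
```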
Add a session variable enable_insert_strict, whose default value is false. When it is set to true, an insert fails if any data is filtered. When it is false, the insert ignores filtered data and succeeds. The check is sketched below.
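A minimal sketch of the strict-mode decision, using hypothetical names (the real variable is read from the session):

```java
public class InsertResult {
    // Decide the insert outcome from the filtered-row count.
    static boolean insertSucceeds(boolean enableInsertStrict, long filteredRows) {
        if (enableInsertStrict && filteredRows > 0) {
            return false; // strict mode: any filtered row fails the insert
        }
        return true;      // non-strict: filtered rows are ignored
    }

    public static void main(String[] args) {
        System.out.println(insertSucceeds(true, 3));  // false
        System.out.println(insertSucceeds(false, 3)); // true
    }
}
```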