Currently, there are a number of unnecessary includes in the codebase. We can use the include-what-you-use (IWYU) tool to clean them up. Enforcing a strict include-what-you-use policy mainly gives us faster incremental builds and clearer dependencies between headers.
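For example, under a strict policy each header only includes what it directly uses and forward-declares the rest. A minimal sketch with hypothetical file and class names:

```cpp
// tablet_reader.h (hypothetical header, for illustration only)

// Before: the header pulled in the full Tablet definition even though it only
// stores a pointer, so every file including tablet_reader.h had to be
// recompiled whenever tablet.h changed.
// #include "olap/tablet.h"

// After: a forward declaration is enough in the header; tablet_reader.cpp,
// which actually dereferences the pointer, includes "olap/tablet.h" itself.
class Tablet;

class TabletReader {
public:
    explicit TabletReader(Tablet* tablet) : _tablet(tablet) {}

private:
    Tablet* _tablet;
};
```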
```
CREATE ROUTINE LOAD iaas.dws_nat ON dws_nat
WITH APPEND PROPERTIES (
    "desired_concurrent_number" = "2",
    "max_batch_interval" = "20",
    "max_batch_rows" = "400000",
    "max_batch_size" = "314572800",
    "format" = "json",
    "max_error_number" = "0"
)
FROM KAFKA (
    "kafka_broker_list" = "xxxx:xxxx",
    "kafka_topic" = "nat_nsq",
    "property.kafka_default_offsets" = "2022-04-19 13:20:00"
);
```
In the CREATE ROUTINE LOAD example above:
1. The user did not specify custom partitions, so the FE fetches all Kafka partitions from the Kafka server in the routine load scheduler.
2. The user set the default offset by datetime, so the FE fetches the Kafka offsets for that time from the Kafka server in the routine load scheduler.
If step 1 succeeds while step 2 fails, the progress of this routine load job may not contain any partitions or offsets. Worse, since newCurrentKafkaPartition, which is fetched from the Kafka server, is then always equal to currentKafkaPartitions, the wrong progress will never be updated.
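A minimal sketch of the problematic condition (illustrative C++ only; the real logic lives in the FE's Java scheduler and all names here are made up):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct RoutineLoadJobSketch {
    // Partitions already recorded in the job.
    std::vector<int32_t> currentKafkaPartitions;
    // <partition, offset> pairs; left EMPTY because step 2 (get offsets by time) failed.
    std::vector<std::pair<int32_t, int64_t>> progress;

    void scheduleOneRound(const std::vector<int32_t>& newCurrentKafkaPartition) {
        // newCurrentKafkaPartition is what step 1 fetched from the Kafka server.
        // The progress is only rebuilt when the partition list changes, but the
        // fetched list always equals what the job already holds, so the empty
        // progress is never repaired.
        if (newCurrentKafkaPartition != currentKafkaPartitions) {
            // rebuild the progress here -- never reached
        }
    }
};
```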
At present, the use of VLOG in the code is quite inconsistent: some call sites use the VLOG_XX macros inherited from Impala, while others use the raw VLOG(number) form. The VLOG(number) form follows no unified specification, so this PR standardizes the use of VLOG.
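As an illustration of what such a standard can look like (the macro names and levels below are hypothetical, not necessarily the ones this PR settles on), named verbosity macros can be defined once on top of glog's VLOG(n) and used everywhere instead of ad-hoc numbers:

```cpp
#include <cstdint>
#include <glog/logging.h>

// Hypothetical named verbosity levels mapped onto glog's VLOG(n); every module
// uses these macros instead of scattering raw VLOG(1), VLOG(7), ... literals.
#define VLOG_CRITICAL VLOG(1)
#define VLOG_NOTICE   VLOG(3)
#define VLOG_DEBUG    VLOG(7)
#define VLOG_TRACE    VLOG(10)

void open_tablet(int64_t tablet_id) {
    // Printed only when the process runs with verbosity 7 or higher
    // (glog's --v flag or the GLOG_v environment variable).
    VLOG_DEBUG << "open tablet, id=" << tablet_id;
}
```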
BE cannot exit gracefully because some threads run in endless loops. This patch makes the following optimizations:
- Use the well-encapsulated Thread and ThreadPool classes instead of std::thread
  and std::vector<std::thread>
- Use a CountDownLatch in each thread's loop condition to avoid endless loops
  (a minimal sketch of this pattern follows the list)
- Introduce a new class Daemon for daemon work such as tcmalloc_gc,
  memory_maintenance and calculate_metrics
- Decouple the statistics-type TaskWorkerPool from StorageEngine notification
  by submitting tasks to TaskWorkerPool's queue
- Reorder how objects are stopped and destructed in main(), i.e. stop network
  services first, then internal services
- Use libevent in pthreads mode by calling evthread_use_pthreads(), so that
  EvHttpServer can exit gracefully with multiple threads
- Call brpc::Server's Stop() and ClearServices() explicitly
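A minimal sketch of the CountDownLatch-in-the-loop-condition pattern (illustrative stand-ins, not Doris's actual CountDownLatch/Thread classes):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Simplified latch: the worker "sleeps" by waiting on it, so count_down()
// both flips the loop condition and interrupts the wait, letting the thread
// exit promptly instead of finishing a fixed sleep first.
class CountDownLatch {
public:
    explicit CountDownLatch(int count) : _count(count) {}

    void count_down() {
        std::lock_guard<std::mutex> lock(_mu);
        if (_count > 0 && --_count == 0) {
            _cv.notify_all();
        }
    }

    // Returns true once the latch has reached zero, false on timeout.
    bool wait_for(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(_mu);
        return _cv.wait_for(lock, timeout, [this] { return _count == 0; });
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    int _count;
};

// Instead of `while (true) { work(); sleep(10); }`, the loop condition is the latch.
void tcmalloc_gc_loop(CountDownLatch* stop_latch) {
    while (!stop_latch->wait_for(std::chrono::seconds(10))) {
        // ... release free memory back to the OS ...
    }
}

int main() {
    CountDownLatch stop_latch(1);
    std::thread worker(tcmalloc_gc_loop, &stop_latch);
    // ... start and run services ...
    stop_latch.count_down(); // request shutdown
    worker.join();           // returns within one wait interval
    return 0;
}
```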
1. Use a data consumer group to share a single stream load pipe among multiple data consumers. This increases the speed of consuming Kafka messages and reduces the number of tasks of a routine load job (see the sketch at the end of this description).
Test results:
* 1 consumer, 1 partition:
consume time: 4.469s, rows: 990140, bytes: 128737139. 221557 rows/s, 28M/s
* 1 consumer, 3 partitions:
consume time: 12.765s, rows: 2000143, bytes: 258631271. 156689 rows/s, 20M/s
blocking get time(us): 12268241, blocking put time(us): 1886431
* 3 consumers, 3 partitions:
consume time(all 3): 6.095s, rows: 2000503, bytes: 258631576. 328220 rows/s, 42M/s
blocking get time(us): 1041639, blocking put time(us): 10356581
The last two cases show that we can achieve higher throughput by adding more consumers, but the bottleneck then shifts from the Kafka consumers to Doris ingestion, so 3 consumers in a group is enough.
I also added a Backend config `max_consumer_num_per_group` to change the number of consumers in a data consumer group; the default value is 3.
In my test (1 Backend, 2 tablets, 1 replica), one routine load task can achieve 10M/s, which is the same as raw stream load.
2. Add OFFSET_BEGINNING and OFFSET_END support for Kafka routine load.
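For reference, a minimal sketch of the consumer-group structure from point 1 (illustrative names only; the real code uses librdkafka consumers and Doris's stream load pipe): several consumers push into one shared pipe, and a single ingestion thread drains it.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Stand-in for the shared stream load pipe (the real pipe is bounded, so
// producers can also block on put).
class SharedPipe {
public:
    void put(std::string msg) { // producer side, used by the consumers
        std::lock_guard<std::mutex> lock(_mu);
        _q.push(std::move(msg));
        _cv.notify_one();
    }
    bool get(std::string* msg) { // blocks until a message arrives or the pipe is finished
        std::unique_lock<std::mutex> lock(_mu);
        _cv.wait(lock, [this] { return !_q.empty() || _finished; });
        if (_q.empty()) return false; // finished and fully drained
        *msg = std::move(_q.front());
        _q.pop();
        return true;
    }
    void finish() {
        std::lock_guard<std::mutex> lock(_mu);
        _finished = true;
        _cv.notify_all();
    }

private:
    std::mutex _mu;
    std::condition_variable _cv;
    std::queue<std::string> _q;
    bool _finished = false;
};

int main() {
    const int max_consumer_num_per_group = 3; // mirrors the new Backend config
    SharedPipe pipe;

    // One consumer per subset of Kafka partitions, all writing to the same pipe.
    std::vector<std::thread> consumers;
    for (int i = 0; i < max_consumer_num_per_group; ++i) {
        consumers.emplace_back([&pipe, i] {
            for (int n = 0; n < 5; ++n) { // stand-in for polling messages from Kafka
                pipe.put("partition-" + std::to_string(i) + " msg-" + std::to_string(n));
            }
        });
    }

    // A single ingestion side (the stream load plan) reads from the shared pipe.
    std::thread ingester([&pipe] {
        std::string msg;
        while (pipe.get(&msg)) { /* parse the message and write the row */ }
    });

    for (auto& c : consumers) c.join();
    pipe.finish(); // all consumers done; unblock and stop the reader
    ingester.join();
    return 0;
}
```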