This work is at an early stage. The reported progress is not yet accurate, because the scan range granularity is too coarse for gathering progress information; moreover, only the file scan node and the import job support the new progress manager.
## How it works
For example, when we use the following load query:
```
LOAD LABEL test_broker_load
(
DATA INFILE("XXX")
INTO TABLE `XXX`
......
)
```
Initial progress: the query calls `BrokerLoadJob` to create the job, then the `Coordinator` is invoked to compute the scan ranges and their locations.
Progress update: the BE reports its `runtime_state` to the FE, and the FE updates the progress status according to the job ID and fragment ID.
We can use `show load` to see the progress:
PENDING:
```
State: PENDING
Progress: 0.00%
```
LOADING:
```
State: LOADING
Progress: 14.29% (1/7)
```
FINISHED:
```
State: FINISHED
Progress: 100.00% (7/7)
```
At present, the full output of `show load\G` looks like:
```
*************************** 1. row ***************************
JobId: 25052
Label: test_broker
State: LOADING
Progress: 0.00% (0/7)
Type: BROKER
EtlInfo: NULL
TaskInfo: cluster:N/A; timeout(s):250000; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2023-05-03 20:53:13
EtlStartTime: 2023-05-03 20:53:15
EtlFinishTime: 2023-05-03 20:53:15
LoadStartTime: 2023-05-03 20:53:15
LoadFinishTime: NULL
URL: NULL
JobDetails: {"Unfinished backends":{"5a9a3ecd203049bc-85e39a765c043228":[10080]},"ScannedRows":39611808,"TaskNumber":1,"LoadBytes":7398908902,"All backends":{"5a9a3ecd203049bc-85e39a765c043228":[10080]},"FileNumber":1,"FileSize":7895697364}
TransactionId: 14015
ErrorTablets: {}
User: root
Comment:
```
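The fractions shown in the progress column (e.g. `1/7`) are finished scan ranges over total scan ranges. Below is a minimal sketch of how such a counter could be kept on the FE side; the class and method names (`LoadProgress`, `markScanRangeFinished`) are illustrative assumptions, not the actual progress manager implementation.
```
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: tracks finished vs. total scan ranges for one load job.
public class LoadProgress {
    private final int totalScanRanges;                       // computed by the Coordinator at plan time
    private final AtomicInteger finishedScanRanges = new AtomicInteger(0);

    public LoadProgress(int totalScanRanges) {
        this.totalScanRanges = totalScanRanges;
    }

    // Called when a BE report indicates one more scan range has finished.
    public void markScanRangeFinished() {
        finishedScanRanges.incrementAndGet();
    }

    // Formats progress the way `show load` displays it, e.g. "14.29% (1/7)".
    public String format() {
        int finished = finishedScanRanges.get();
        double percent = totalScanRanges == 0 ? 0.0 : 100.0 * finished / totalScanRanges;
        return String.format("%.2f%% (%d/%d)", percent, finished, totalScanRanges);
    }
}
```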
## TODO:
1. The partition granularity of a scan range is currently too coarse, resulting in uneven progress updates during loading.
2. Only broker load supports the new progress manager; add progress support for other query types.
This PR enables periodic collection of statistics and is a precursor to automatic statistics collection. It mainly includes the following contents:
1. Support periodic collection of statistics (a rough sketch of the scheduling idea follows this list).
2. Change the Date type in the statistics p0 tests to DateV2 (see #19077, `[Enhancement](data-type) add FE config to prohibit create date and decimalv2 type`) for local testing; complement test cases (remove Chinese characters, optimize code, etc.) and improve stability.
3. Support configuring whether to keep records of statistics synchronization job info, which is convenient for p0 testing.
4. The statistics job table was modified, and some auxiliary checks were added so that users do not perceive the modification; this workaround will be removed once the table schema is stable.
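As a rough illustration of the periodic-collection idea only (the class name, method name, and 24-hour period below are assumptions for this sketch, not the actual Doris scheduler):
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: re-collect statistics on a fixed schedule.
public class PeriodicStatsCollector {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Submit a statistics collection job periodically; the period is an assumption.
        scheduler.scheduleAtFixedRate(this::collectStatistics, 0, 24, TimeUnit.HOURS);
    }

    private void collectStatistics() {
        // In the real job this would analyze tables whose statistics are stale;
        // here it is just a placeholder.
        System.out.println("Submitting periodic statistics collection job");
    }
}
```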
For files smaller than 5 MB, we don't need to use multipart upload, which takes at least three network round trips.
Instead we can just call PutObject, which completes in a single request.
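The BE implements this in C++ against its own S3 client; the Java sketch below (using the AWS SDK v2, with a placeholder `multipartUpload` helper) only illustrates the size-threshold decision:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class SmallFileUploader {
    private static final long MULTIPART_THRESHOLD = 5L * 1024 * 1024; // 5 MB

    public static void upload(S3Client s3, String bucket, String key, Path file) throws IOException {
        if (Files.size(file) < MULTIPART_THRESHOLD) {
            // Small file: a single PutObject, one network round trip.
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .build();
            s3.putObject(request, RequestBody.fromFile(file));
        } else {
            // Large file: fall back to multipart upload (create + upload parts + complete).
            multipartUpload(s3, bucket, key, file);
        }
    }

    private static void multipartUpload(S3Client s3, String bucket, String key, Path file) {
        // Placeholder for the existing multipart upload path, not shown in this sketch.
        throw new UnsupportedOperationException("multipart upload omitted");
    }
}
```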
When querying from a slave (follower) FE node, the stream load result may not be selectable even though the published version is already visible, so issue a sync SQL statement to make the result visible on the slave node.
The original method signature is `Block VExprContext::get_output_block_after_execute_exprs(const std::vector<vectorized::VExprContext*>& output_vexpr_ctxs, const Block& input_block, Status& status)`.
It returns the error status through an out parameter and the block as the return value, so callers have to check `block.rows() == 0` first and then check the error status.
This does not conform to the usual convention of returning a `Status`.
Previously, we used the RuntimeProfile class directly, and because a profile has multiple levels, several RuntimeProfile instances had to be maintained.
I created several new profile classes:
- `Profile`: the root profile of an execution task (query or load).
- `SummaryProfile`: contains summary info of an execution task, such as start time, end time, and query id.
- `ExecutionProfile`: the profile for a single Coordinator. Each Coordinator has an ExecutionProfile.
The profile structure is as follows:

    Profile:
        SummaryProfile:
        ExecutionProfile 1:
            Fragment 0:
                Instance 0
                Instance 1
                ...
            Fragment 1:
                ...
        ExecutionProfile 2:
            ...
As you can see, each Profile has one SummaryProfile and one or more ExecutionProfiles.
For most kinds of jobs, such as query/insert, there is only one ExecutionProfile, but a broker load job may have more than one ExecutionProfile, one for each sub-task of the load job.
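A minimal sketch of how these classes compose (field names and constructors below are simplified assumptions, not the actual FE code):
```
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Profile hierarchy described above.
public class Profile {
    private final SummaryProfile summaryProfile = new SummaryProfile();
    private final List<ExecutionProfile> executionProfiles = new ArrayList<>();

    // A Coordinator registers its ExecutionProfile as a child of the task's Profile.
    public void addExecutionProfile(ExecutionProfile executionProfile) {
        executionProfiles.add(executionProfile);
    }

    public SummaryProfile getSummaryProfile() {
        return summaryProfile;
    }
}

// Summary info of an execution task: start time, end time, profile id, etc.
class SummaryProfile {
    private long startTime;
    private long endTime;
    private String profileId;

    public void setProfileId(String profileId) {
        this.profileId = profileId;
    }
}

// Per-Coordinator profile; in the real implementation it holds the per-fragment
// and per-instance runtime profiles that BEs report during execution.
class ExecutionProfile {
    private final String coordinatorId;

    ExecutionProfile(String coordinatorId) {
        this.coordinatorId = coordinatorId;
    }
}
```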
## How to use

For query/insert, etc.:
- Each StmtExecutor has a Profile instance.
- Each Coordinator has an ExecutionProfile instance.
- StmtExecutor is responsible for the SummaryProfile and updates it during execution.
- Coordinator is responsible for the ExecutionProfile; it first adds the ExecutionProfile as a child of the Profile, then updates it periodically during execution.

For Load/Export, etc.:
- Each job has a Profile instance.
- For each Coordinator of the job, its ExecutionProfile is added to the children of the job's Profile.
## Behavior Change

The columns of `show load profile`/`show query profile` and the QueryProfile web UI have changed to:
| Profile ID | Task Type | Start Time | End Time | Total | Task State | User | Default Db | Sql Statement | Is Cached | Total Instances Num | Instances Num Per BE | Parallel Fragment Exec Instance Num | Trace ID |
The Query Id and Job Id columns are removed and replaced by a Profile ID column.
For a load job, the profile id is the job id; for query/insert, it is the query id.
Move the getSplits function to ScanNode and remove the Splitter interface.
For each kind of data source, create a specific ScanNode and implement the getSplits interface, for example HiveScanNode.
Remove FileScanProviderIf and move its code into each ScanNode.
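A simplified sketch of the resulting shape (the `Split` type and the exact signature below are assumptions for illustration):
```
import java.util.List;

// Each data source's scan node produces its own splits (work units to read).
abstract class ScanNode {
    public abstract List<Split> getSplits() throws Exception;
}

// Placeholder split description.
class Split {
    final String path;
    final long start;
    final long length;

    Split(String path, long start, long length) {
        this.path = path;
        this.start = start;
        this.length = length;
    }
}

// Example of a source-specific implementation.
class HiveScanNode extends ScanNode {
    @Override
    public List<Split> getSplits() throws Exception {
        // The real implementation lists partitions and files via the Hive metastore
        // and the underlying file system; this is just a placeholder.
        return List.of(new Split("hdfs://namenode/warehouse/t/part-00000", 0, 128L * 1024 * 1024));
    }
}
```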
After #19246, compiling the FE automatically generates the Config and Session Variables docs and overwrites the original ones.
We need to avoid this because the generated docs are not ready to use yet.
See #18960.
PR1: add a new storage file system template and move the old storage code to the new package.
PR2: extract some methods from the old storage into the new file system.
PR3: use storages to access remote object storage, and use file systems to access files in local or remote locations; some unit tests will be added.
This is PR3.
The documentation for configs (FE and BE) and session variables is hard to maintain, because developers need to modify both the code and the document, and some configs' documentation is missing.
So I plan to write the documentation of configs and variables directly in code and use a script to generate the documents automatically.
## How To

This CL mainly changes:
1. Add fields to the Config and Session Variables annotation (a sketch follows below):
   - description: the description of the config or variable item. It is a String array; the first element is in Chinese, the second in English.
   - options: the valid options if the config or variable is an enum.
2. Add a script, docs/generate-config-and-variable-doc.sh.
   Simply run `sh docs/generate-config-and-variable-doc.sh` and it will generate the docs of FE configs and variables and save them under docs/admin-manual/config/fe-config.md and docs/advanced/variables.md, in both Chinese and English.
   There are template markdowns for the script to read and fill with the real doc content.
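A sketch of what an annotated config item could look like (the annotation name `ConfField` and the example field below are illustrative assumptions):
```
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative annotation carrying the new doc fields.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface ConfField {
    // description[0] is the Chinese text, description[1] the English text.
    String[] description() default {};

    // Valid options when the item is an enum-like config.
    String[] options() default {};
}

class Config {
    @ConfField(description = {"示例超时时间，单位为秒", "Example timeout in seconds"},
            options = {})
    public static int example_timeout_second = 300;
}
```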
## TODO

1. Too many descriptions need to be filled in; I will finish them in the next PR. For now the original docs remain unchanged.
2. Find a way to check the description field of configs and variables, to make sure none are missing.
3. Generate docs for BE configs.
Add file cache metrics and management.
1. Get file cache metrics
> If the file cache performs poorly, there are currently no metrics to investigate the cause. In practice, the hit ratio, disk usage, and removed-segment status are very important information.
API: `http://be_host:be_webserver_port/metrics`
File cache metrics for each base path start with the `doris_be_file_cache_` prefix. `hits_ratio` is the cache hit ratio since BE startup; `removed_elements` is the number of segment files removed since BE startup. Every cache path has three queues: index, normal, and disposable; the capacity ratio of the three queues is 1:17:2.
```
doris_be_file_cache_hits_ratio{path="/mnt/datadisk1/gaoxin/file_cache"} 0.500000
doris_be_file_cache_hits_ratio{path="/mnt/datadisk1/gaoxin/small_file_cache"} 0.500000
doris_be_file_cache_removed_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 0
doris_be_file_cache_removed_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 0
doris_be_file_cache_normal_queue_max_size{path="/mnt/datadisk1/gaoxin/file_cache"} 912680550400
doris_be_file_cache_normal_queue_max_size{path="/mnt/datadisk1/gaoxin/small_file_cache"} 8500000000
doris_be_file_cache_normal_queue_max_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 217600
doris_be_file_cache_normal_queue_max_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 102400
doris_be_file_cache_normal_queue_curr_size{path="/mnt/datadisk1/gaoxin/file_cache"} 14129846
doris_be_file_cache_normal_queue_curr_size{path="/mnt/datadisk1/gaoxin/small_file_cache"} 14874904
doris_be_file_cache_normal_queue_curr_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 18
doris_be_file_cache_normal_queue_curr_elements{path="/mnt/datadisk1/gaoxin/small_file_cache"} 22
...
```
2. Release file cache
> Frequent swapping of segment files can seriously affect file cache performance. Adding a deletion interface helps users clean up the file cache.
API: `http://be_host:be_webserver_port/api/file_cache?op=release&base_path=${file_cache_base_path}`
Returns the number of released segment files. If `base_path` is not provided in the URL, all cache paths are released.
It is thread-safe to call this API: only segment files that are not currently being read will be released (a small usage sketch follows at the end of this list).
```
{"released_elements":22}
```
3. Specify the base path to store cache data
> Currently, regression testing lacks test cases for the file cache, so its stability cannot be guaranteed. This interface is mainly used in regression testing scenarios: different queries use different paths to verify different usage cases and performance.
Users can set the session variable `file_cache_base_path` to specify the base path for storing cache data. The default is `file_cache_base_path="random"`, which means choosing a random path from the configured cache paths. If `file_cache_base_path` is not one of the base paths in the BE configuration, a random path is used as well.
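A small usage sketch for the release interface in item 2 above, assuming the endpoint accepts a plain GET request; the host, port, and base path in the code are placeholders to replace with your own deployment values:
```
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Calls the release endpoint and prints the JSON response, e.g. {"released_elements":22}.
public class ReleaseFileCache {
    public static void main(String[] args) throws Exception {
        String beHost = "127.0.0.1";                    // replace with the BE host
        int webserverPort = 8040;                       // replace with be_webserver_port
        String basePath = "/mnt/datadisk1/file_cache";  // one of the configured cache base paths
        String url = "http://" + beHost + ":" + webserverPort
                + "/api/file_cache?op=release&base_path=" + basePath;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```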
Prevent BE from recompiling many files:
When we execute build.sh, it cleans the thrift-generated code, so many BE files have to be recompiled. This behavior was added by PR #19217.
We can use `build.sh --clean` to clean the thrift code; there is no need to clean it every time.