1. Add a new dynamic partition property `create_history_partition`.
If set to true, Doris will create all partitions from `start` to `end`.
2. Add a new FE config `max_dynamic_partition_num` to limit the number of partitions created when creating a single table.
1. Fix NPE in ReplicasProcNode when the backend does not exist.
2. Forbid the CREATE TABLE LIKE statement from specifying a view.
3. When starting the FE, check its own IP to see whether it still uses the original IP.
4. Modify the error message of the tablet sink to show more detailed errors.
1. Add /api/compaction/run_status to show the running compaction tasks.
2. Support running base and cumulative compaction on the same tablet at the same time.
3. Adjust some log levels.
4. Add a feedback document.
The previous replay logic did not record the size of the map, which eventually caused an EOF when reading the log.
This PR replaces the replay logic with JSON directly.
It also supplements the replay logic of the image,
ensuring that the backend attribute 'lastStreamLoadTime' is correctly recorded in the image.
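Purely for illustration, the benefit of a self-describing format can be sketched with a generic JSON library. The snippet below uses nlohmann::json and hypothetical field names in C++; the actual FE replay code is Java and uses its own persistence classes.

```cpp
// Illustrative only: serializing the whole map as one self-describing JSON
// document avoids having to record the entry count up front. Uses
// nlohmann::json and hypothetical names; the real FE replay code is Java.
#include <cstdint>
#include <map>
#include <string>
#include <nlohmann/json.hpp>

// Write: the JSON text carries its own structure, including how many entries exist.
std::string serialize_last_stream_load_times(const std::map<int64_t, int64_t>& be_to_time) {
    nlohmann::json j;
    for (const auto& [backend_id, last_time] : be_to_time) {
        j[std::to_string(backend_id)] = last_time;
    }
    return j.dump();
}

// Read: no ambiguity about where the map ends, so no surprise EOF.
std::map<int64_t, int64_t> deserialize_last_stream_load_times(const std::string& text) {
    std::map<int64_t, int64_t> result;
    for (const auto& [key, value] : nlohmann::json::parse(text).items()) {
        result[std::stoll(key)] = value.get<int64_t>();
    }
    return result;
}
```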
In Doris version 0.12, the V1 segment format only supports creating a zone map index for key columns.
Doris version 0.13 supports creating a zone map index for key columns and for columns of the duplicate model.
The latest version supports creating a zone map for key columns in AGG_KEYS tables, or for all columns in UNIQUE_KEYS or DUP_KEYS tables.
In a V1 segment file the tablet meta contains the zone map index, but in a V2 segment file it does not.
When upgrading Doris from version 0.12 to version 0.13, we found that memory usage increased noticeably for tables
with V1 segment files, because zone map indexes were created for many value columns.
Therefore, it is better to stay compatible with version 0.12: the config `enable_value_column_zonemap` defaults to false.
If you need zone map indexes for value columns, you can set it to true.
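The per-column decision can be sketched roughly as below; `need_zone_map`, `Column`, and `KeysType` are illustrative names only, not the BE's actual code.

```cpp
// Minimal sketch of the column-level decision, assuming hypothetical names;
// the real BE implementation differs.
#include <string>

enum class KeysType { AGG_KEYS, UNIQUE_KEYS, DUP_KEYS };

struct Column {
    std::string name;
    bool is_key = false;
};

// Hypothetical flag mirroring `enable_value_column_zonemap` (default false).
static bool enable_value_column_zonemap = false;

bool need_zone_map(const Column& col, KeysType keys_type) {
    if (col.is_key) {
        return true;  // key columns always get a zone map
    }
    // Value columns: only for UNIQUE_KEYS / DUP_KEYS tables, and only when the
    // compatibility switch is turned on explicitly.
    if (keys_type == KeysType::AGG_KEYS) {
        return false;
    }
    return enable_value_column_zonemap;
}
```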
Add a new library so that the Backend can read data from HDFS without a broker.
This patch includes libhdfs3.a, which can read files on HDFS,
and makes it possible to read Parquet data from HDFS.
Based on this, we will support more file formats on HDFS in the future,
and we will support other metadata in the future.
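For orientation, reading a file through the libhdfs-compatible C API that libhdfs3 provides might look roughly like the sketch below; the namenode host, port, path, and header location are placeholders, and this is not how the Doris BE actually wires the library in.

```cpp
// Small sketch of reading a file via the libhdfs-compatible C API exposed by
// libhdfs3 (header path may vary, e.g. "hdfs.h" or <hdfs/hdfs.h>).
#include <hdfs/hdfs.h>
#include <fcntl.h>
#include <cstdio>
#include <vector>

int main() {
    hdfsFS fs = hdfsConnect("namenode-host", 8020);  // placeholder host/port
    if (fs == nullptr) {
        fprintf(stderr, "failed to connect to HDFS\n");
        return 1;
    }
    hdfsFile file = hdfsOpenFile(fs, "/user/doris/demo.parquet", O_RDONLY, 0, 0, 0);
    if (file == nullptr) {
        fprintf(stderr, "failed to open file\n");
        hdfsDisconnect(fs);
        return 1;
    }
    std::vector<char> buf(1 << 20);
    tSize n;
    while ((n = hdfsRead(fs, file, buf.data(), buf.size())) > 0) {
        // hand the bytes to a Parquet reader, etc.
    }
    hdfsCloseFile(fs, file);
    hdfsDisconnect(fs);
    return 0;
}
```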
This CL mainly changes:
1. Add a config to control the expire time of load jobs
Add a new FE config "streaming_label_keep_max_second" to control
the expire time of some high-frequency load jobs such as INSERT and STREAM LOAD.
2. Remove expired txns in batches to avoid holding the transaction lock for a long time (see the sketch below)
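The "remove in batches" idea is sketched below for illustration: take the lock once per small batch instead of for one long sweep. The names (`expired_txn_ids`, `remove_txn`, `kBatchSize`) are hypothetical; the actual code lives in the FE transaction manager and is Java.

```cpp
// Illustrative pattern only: lock per batch so other transaction operations
// can interleave between batches.
#include <algorithm>
#include <cstdint>
#include <mutex>
#include <vector>

std::mutex txn_lock;

void remove_expired_txns_in_batches(const std::vector<int64_t>& expired_txn_ids,
                                    void (*remove_txn)(int64_t)) {
    constexpr size_t kBatchSize = 1000;  // hypothetical batch size
    for (size_t i = 0; i < expired_txn_ids.size(); i += kBatchSize) {
        std::lock_guard<std::mutex> guard(txn_lock);  // held only for one batch
        size_t end = std::min(i + kBatchSize, expired_txn_ids.size());
        for (size_t j = i; j < end; ++j) {
            remove_txn(expired_txn_ids[j]);
        }
        // lock released here before the next batch starts
    }
}
```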
Currently, tablet meta checkpoint is a memory-intensive operation.
If a host has 12 disks, it will start 12 threads to do tablet meta checkpoints.
In our experience, the data size of one tablet can be as large as 2 GB.
If 12 threads do checkpoints at the same time, it may cause an OOM.
Therefore, this PR tries to solve this problem.
First, it starts only one thread to produce tablet meta checkpoint tasks.
Second, it creates a thread pool to handle these tasks.
You can configure the size of the thread pool to limit the parallelism and avoid OOM.
It is a producer-consumer model, as sketched below.
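As a rough illustration of that producer-consumer shape, the sketch below shows a single producer submitting tasks to a fixed-size worker pool. `CheckpointWorkerPool`, `CheckpointTask`, and the pool size are hypothetical names for illustration, not Doris's actual classes or config.

```cpp
// Illustrative producer-consumer pool: one producer enqueues checkpoint tasks,
// a bounded number of workers drain them.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using CheckpointTask = std::function<void()>;

class CheckpointWorkerPool {
public:
    explicit CheckpointWorkerPool(size_t pool_size) {
        for (size_t i = 0; i < pool_size; ++i) {
            _workers.emplace_back([this] { work_loop(); });
        }
    }
    ~CheckpointWorkerPool() {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _stopped = true;
        }
        _cv.notify_all();
        for (auto& w : _workers) w.join();
    }
    // Called by the single producer thread for each pending checkpoint.
    void submit(CheckpointTask task) {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _tasks.push(std::move(task));
        }
        _cv.notify_one();
    }

private:
    void work_loop() {
        while (true) {
            CheckpointTask task;
            {
                std::unique_lock<std::mutex> lock(_mutex);
                _cv.wait(lock, [this] { return _stopped || !_tasks.empty(); });
                if (_stopped && _tasks.empty()) return;
                task = std::move(_tasks.front());
                _tasks.pop();
            }
            task();  // do the actual tablet meta checkpoint work
        }
    }

    std::mutex _mutex;
    std::condition_variable _cv;
    std::queue<CheckpointTask> _tasks;
    std::vector<std::thread> _workers;
    bool _stopped = false;
};
```

With a pool size of, say, 2, at most two checkpoints run concurrently no matter how many disks the host has.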
Before dropping a tablet, it will try to find the tablet in the tablet map.
But the tablet may no longer exist.
Therefore, it is better to print the error message and error status.
Record finished stream load jobs (both successful and failed) in the audit log,
so that we can see when a stream load job was executed and check its details.
* [Metrics] Add metrics to monitor BE's agent task queue size
Sometimes a user's DDL or a background task lasts a long time,
and it is not easy to find out which procedure has the problem.
This patch adds metrics to monitor the BE's agent task queue sizes,
which is helpful for troubleshooting.
The raw metrics on the BE look like:
doris_be_agent_task_queue_size{type="REPORT_OLAP_TABLE"} 0
doris_be_agent_task_queue_size{type="REPORT_DISK_STATE"} 0
doris_be_agent_task_queue_size{type="REPORT_TASK"} 0
doris_be_agent_task_queue_size{type="CHECK_CONSISTENCY"} 0
doris_be_agent_task_queue_size{type="DELETE"} 0
doris_be_agent_task_queue_size{type="CLEAR_TRANSACTION_TASK"} 0
doris_be_agent_task_queue_size{type="PUBLISH_VERSION"} 0
doris_be_agent_task_queue_size{type="UPLOAD"} 0
doris_be_agent_task_queue_size{type="DROP_TABLE"} 0
doris_be_agent_task_queue_size{type="CREATE_TABLE"} 39
doris_be_agent_task_queue_size{type="RELEASE_SNAPSHOT"} 0
doris_be_agent_task_queue_size{type="STORAGE_MEDIUM_MIGRATE"} 245
doris_be_agent_task_queue_size{type="CLONE"} 0
doris_be_agent_task_queue_size{type="MOVE"} 0
doris_be_agent_task_queue_size{type="ALTER_TABLE"} 0
doris_be_agent_task_queue_size{type="DOWNLOAD"} 0
doris_be_agent_task_queue_size{type="PUSH"} 0
doris_be_agent_task_queue_size{type="UPDATE_TABLET_META_INFO"} 0
doris_be_agent_task_queue_size{type="MAKE_SNAPSHOT"} 0
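As a toy illustration of how such per-type gauges might be maintained and rendered in the text format shown above, here is a hedged C++ sketch; `on_task_enqueued`, `on_task_finished`, and `dump_agent_queue_metrics` are hypothetical helpers, not the BE's real metrics framework.

```cpp
// Illustrative per-task-type queue gauges, dumped in Prometheus text format.
#include <cstdint>
#include <map>
#include <mutex>
#include <sstream>
#include <string>

static std::mutex g_metrics_mutex;
static std::map<std::string, int64_t> g_agent_task_queue_size;

void on_task_enqueued(const std::string& type) {
    std::lock_guard<std::mutex> lock(g_metrics_mutex);
    g_agent_task_queue_size[type]++;
}

void on_task_finished(const std::string& type) {
    std::lock_guard<std::mutex> lock(g_metrics_mutex);
    g_agent_task_queue_size[type]--;
}

std::string dump_agent_queue_metrics() {
    std::lock_guard<std::mutex> lock(g_metrics_mutex);
    std::ostringstream out;
    for (const auto& [type, size] : g_agent_task_queue_size) {
        out << "doris_be_agent_task_queue_size{type=\"" << type << "\"} "
            << size << "\n";
    }
    return out.str();
}
```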
* fix typo
MemTracker can report memory consumption to help us find out which
module consumes more memory, but it only gives a current value. This patch
adds metrics for some large memory consumers, so that we can see
which module consumes more memory over time, which is useful for
troubleshooting OOM problems and tuning configs.
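A minimal sketch of the idea, assuming hypothetical `MemConsumer` and `register_gauge` stand-ins rather than Doris's actual MemTracker or metrics API:

```cpp
// Illustrative only: turn a consumer's current memory consumption into a gauge
// that a scraper samples over time, giving a timeline instead of a snapshot.
#include <atomic>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct MemConsumer {
    std::string name;
    std::atomic<int64_t> consumption{0};
};

struct Gauge {
    std::string name;
    std::function<int64_t()> read;
};

static std::vector<Gauge> g_gauges;

void register_gauge(const std::string& name, std::function<int64_t()> read) {
    g_gauges.push_back({name, std::move(read)});
}

void expose_consumer(MemConsumer& consumer) {
    // Each scrape reads the live value.
    register_gauge("doris_be_" + consumer.name + "_mem_consumption",
                   [&consumer] { return consumer.consumption.load(); });
}
```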
1. The MaterializedViewSelector should be reset for each scan node
2. On the BE side, columns with delete conditions must be added to the return columns.
* [Enhance] Make MemTracker more accurate (#5515)
This PR is mainly about:
1. Improve the readability of MemTrackers' names
2. Add MemTrackers for:
* Load
* Compaction
* SchemaChange
* StoragePageCache
* TabletManager
3. Change SchemaChange to a singleton (see the sketch at the end of this entry)
* revise some code for Code Review
* change the name of mem_tracker
* keep reader_context's lifetime the same as rowset_reader's during schema change.
* change VLOG notices to LOG(WARNING) in schema change
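A hedged sketch of the two ideas above (per-module child trackers rolling up to a parent, and a lazily constructed singleton), using simplified `Tracker` and `SchemaChangeHandler` stand-ins rather than the real Doris classes:

```cpp
// Illustrative only: a named tracker that rolls consumption up to its parent,
// plus a Meyers singleton holding one such tracker for schema change.
#include <atomic>
#include <cstdint>
#include <string>

class Tracker {
public:
    Tracker(std::string name, Tracker* parent = nullptr)
        : _name(std::move(name)), _parent(parent) {}
    void consume(int64_t bytes) {
        _consumption += bytes;
        if (_parent != nullptr) _parent->consume(bytes);  // roll up to the parent
    }
    void release(int64_t bytes) { consume(-bytes); }
    int64_t consumption() const { return _consumption.load(); }
    const std::string& name() const { return _name; }

private:
    std::string _name;  // readable name, e.g. "Compaction" or "SchemaChange"
    Tracker* _parent;
    std::atomic<int64_t> _consumption{0};
};

class SchemaChangeHandler {
public:
    // Single process-wide instance, constructed lazily and thread-safely.
    static SchemaChangeHandler& instance() {
        static SchemaChangeHandler handler;
        return handler;
    }
    Tracker& tracker() { return _tracker; }

private:
    SchemaChangeHandler() : _tracker("SchemaChange") {}
    Tracker _tracker;
};
```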