1. Add a new dynamic partition property `create_history_partition`.
If set to true, Doris will create all partitions from `start` to `end` when the table is created.
2. Add a new FE config `max_dynamic_partition_num`
to limit the number of partitions created when creating a single table. Both behaviors are sketched below.
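A minimal, self-contained sketch of how the two features might interact; the class name, the DAY time unit, and the default limit of 500 are assumptions for the example, not the actual FE code:

```java
import java.time.LocalDate;

public class DynamicPartitionSketch {
    // assumed default for max_dynamic_partition_num; configurable in fe.conf
    static final int MAX_DYNAMIC_PARTITION_NUM = 500;

    // `start` is a negative (history) offset, `end` a positive (future) offset
    static void createPartitions(int start, int end, boolean createHistoryPartition) {
        int from = createHistoryPartition ? start : 0;
        if (end - from + 1 > MAX_DYNAMIC_PARTITION_NUM) {
            throw new IllegalStateException("too many dynamic partitions: "
                    + (end - from + 1) + ", limit: " + MAX_DYNAMIC_PARTITION_NUM);
        }
        for (int offset = from; offset <= end; offset++) {
            LocalDate day = LocalDate.now().plusDays(offset); // assuming a DAY time unit
            System.out.println("create partition p" + day);   // placeholder for the real DDL
        }
    }

    public static void main(String[] args) {
        createPartitions(-7, 3, true); // 11 daily partitions, 7 of them historical
    }
}
```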
1. Fix an NPE in ReplicasProcNode when the backend does not exist.
2. Forbid the CREATE TABLE LIKE statement from specifying a view.
3. Check the FE's own IP at startup to verify whether it still uses the original IP.
4. Improve the error message of the tablet sink to show more detailed errors.
In the previous code, the routine load task did not catch the exception thrown when opening a transaction.
As a result, although the task could not be executed,
no exception was visible in SHOW ROUTINE LOAD; the routine load job simply looked stuck.
This PR catches the QuotaExceedException thrown when opening a transaction.
If the routine load task cannot be executed because the quota is exhausted,
the routine load job will be paused and an error message will be presented to the user, as sketched below.
Similarly, other load methods will also catch similar exceptions and cancel the job.
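A simplified sketch of the new behavior; the class and method names here are illustrative stand-ins, not the exact FE code:

```java
class QuotaExceedException extends Exception {
    QuotaExceedException(String msg) { super(msg); }
}

public class RoutineLoadTaskSketch {
    void execute() {
        try {
            beginTransaction(); // may fail when the database quota is exhausted
        } catch (QuotaExceedException e) {
            // previously this escaped uncaught and the job just looked stuck;
            // now the job is paused with a reason visible in SHOW ROUTINE LOAD
            pauseJob("quota exceeded: " + e.getMessage());
        }
    }

    void beginTransaction() throws QuotaExceedException {
        throw new QuotaExceedException("database data quota exceeded");
    }

    void pauseJob(String reason) {
        System.out.println("job state -> PAUSED, reason: " + reason);
    }

    public static void main(String[] args) {
        new RoutineLoadTaskSketch().execute();
    }
}
```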
1. Add /api/compaction/run_status to show the running compaction tasks (an example request follows this list).
2. Support running base and cumulative compaction on the same tablet at the same time.
3. Adjust some log levels.
4. Add a feedback document.
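For example, the new endpoint can be queried as below; the host is a placeholder, 8040 is the default BE webserver_port, and the exact response fields are not specified here:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CompactionStatusClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://be_host:8040/api/compaction/run_status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON describing running compaction tasks
            }
        }
    }
}
```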
The distinct count result of a bitmap/hll column may be incorrect in spark load mode.
This change fixes several bugs in spark load to solve the above problem:
1. The FE (JVM) is big-endian but the BE is little-endian, so BitmapValues should be converted to little-endian in the FE's serialization.
2. BitmapUnionAggregator/HllUnionAggregator should not ignore `null` values.
3. Make sure encodeVarint64 in the FE is consistent with the BE (see the sketch below).
Co-authored-by: weixiang <weixiang06@meituan.com>
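For reference, here is a varint64 encoder in the usual LEB128 style, i.e. the kind of 7-bits-per-byte, low-bits-first encoding the consistency requirement refers to; this is an illustrative sketch, not the actual FE implementation:

```java
import java.io.ByteArrayOutputStream;

public class VarintSketch {
    // low 7 bits first, high bit set while more bytes follow (LEB128 style)
    static void encodeVarint64(long value, ByteArrayOutputStream out) {
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80)); // continuation bit set
            value >>>= 7;                             // unsigned shift
        }
        out.write((int) (value & 0x7F));
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        encodeVarint64(300, out); // encodes as 0xAC 0x02
        for (byte b : out.toByteArray()) {
            System.out.printf("0x%02X ", b);
        }
    }
}
```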
The previous replay logic did not record the size of the map, which eventually resulted in an EOF when reading the log.
This PR replaces the replay logic directly with JSON.
At the same time, the replay logic of the image is supplemented.
The PR ensures that the `lastStreamLoadTime` attribute of a backend can be correctly recorded in the image (a simplified illustration follows).
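A simplified illustration of the JSON approach, assuming Gson and a hypothetical key format; serializing the whole map as one self-describing JSON string removes the need to record its size up front:

```java
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.util.HashMap;
import java.util.Map;

public class ReplaySketch {
    public static void main(String[] args) {
        Map<String, Long> lastStreamLoadTime = new HashMap<>();
        lastStreamLoadTime.put("tbl1", 1617181920000L); // hypothetical key/value

        Gson gson = new Gson();
        // one self-describing record: no separate size field that can go missing
        String json = gson.toJson(lastStreamLoadTime);
        Map<String, Long> replayed = gson.fromJson(json,
                new TypeToken<Map<String, Long>>() {}.getType());
        System.out.println(replayed); // {tbl1=1617181920000}
    }
}
```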
In Doris version-0.12, the V1 segment only supports creating a zone map index for key columns.
Doris version-0.13 supports creating a zone map index for key columns and duplicate-model columns.
The latest version creates zone maps for key columns in AGG_KEYS tables and for all columns in UNIQUE_KEYS or DUP_KEYS tables.
In a V1 segment file, the tablet meta contains the zone map index, but in a V2 segment file it does not.
When upgrading Doris from version-0.12 to version-0.13, we found that memory usage increased noticeably for tables
with V1 segment files, because a lot of value columns create zone map indexes.
Therefore, it is better to stay compatible with version-0.12. The config `enable_value_column_zonemap` defaults to false.
If you need zone map indexes for value columns, you can set it to true. The resulting rule is sketched below.
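The decision rule described above can be summarized as follows; the enum and method are illustrative only, not the actual BE code (which is C++):

```java
public class ZoneMapRuleSketch {
    enum KeysType { AGG_KEYS, UNIQUE_KEYS, DUP_KEYS }

    static boolean shouldCreateZoneMap(boolean isKeyColumn, KeysType keysType,
                                       boolean enableValueColumnZonemap) {
        if (isKeyColumn) {
            return true; // key columns always get a zone map index
        }
        // value columns: only in UNIQUE_KEYS/DUP_KEYS tables, gated by the config
        return keysType != KeysType.AGG_KEYS && enableValueColumnZonemap;
    }

    public static void main(String[] args) {
        // value column in a DUP_KEYS table, config disabled: no zone map (0.12-compatible)
        System.out.println(shouldCreateZoneMap(false, KeysType.DUP_KEYS, false)); // false
        System.out.println(shouldCreateZoneMap(false, KeysType.DUP_KEYS, true));  // true
    }
}
```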
* [Apache] Change download link to archive page
Only the latest package should appear in svn.
The old release packages have already been archived, so their download links should point to the archive page.
Although the table lock can prevent different threads from modifying a table at the same time,
it cannot prevent other threads from dropping the table.
For example, a drop table and a table update may interleave like this:
1. thread 1 gets the table object
2. thread 2 drops the table while holding the table lock
3. thread 1 updates the table object
The above sequence is possible.
At step 3, thread 1 actually operates on a table that no longer exists, which will eventually cause wrong metadata to be recorded. A sketch of the fix pattern follows.
Fixes #5687
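A minimal sketch of the re-check-under-lock idea; the names and the single global lock are simplifications for illustration, not the actual FE fix:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TableUpdateSketch {
    static final Map<String, Object> tables = new ConcurrentHashMap<>();
    static final Object tableLock = new Object(); // stand-in for the per-table lock

    static void updateTable(String name) {
        Object table = tables.get(name);   // step 1: get the table object
        synchronized (tableLock) {         // acquire the table lock
            if (!tables.containsKey(name)) {
                // step 2 may have dropped the table in the meantime
                throw new IllegalStateException("table " + name + " does not exist");
            }
            // step 3: safe to apply the update to `table`, since the table is
            // known to still exist while we hold the lock
        }
    }

    public static void main(String[] args) {
        tables.put("t1", new Object());
        updateTable("t1"); // succeeds: t1 still exists when re-checked under the lock
    }
}
```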
Add a new lib so that the Backend can read data from HDFS without a broker.
This patch includes libhdfs3.a, which can read files on HDFS,
and makes it possible to read Parquet data from HDFS.
Based on this, we will support more file formats on HDFS in the future,
as well as other metadata.
This CL mainly changes:
1. Add a config to control the expire time of load jobs
Add a new FE config "streaming_label_keep_max_second" to control
the expire time of high-frequency load jobs such as INSERT and STREAM LOAD.
2. Remove expired txns in batches to avoid holding the transaction lock for a long time (sketched below)
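An illustration of the batching idea; the batch size and data structures here are assumptions for the example, not the actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

public class ExpiredTxnCleaner {
    static final int BATCH_SIZE = 1000; // assumed batch size, not a real config
    final ReentrantLock txnLock = new ReentrantLock();
    final Deque<Long> expiredTxnIds = new ArrayDeque<>();

    void removeExpiredTxns() {
        while (true) {
            txnLock.lock();
            try {
                if (expiredTxnIds.isEmpty()) {
                    return;
                }
                // remove at most one batch per lock acquisition
                for (int i = 0; i < BATCH_SIZE && !expiredTxnIds.isEmpty(); i++) {
                    expiredTxnIds.poll();
                }
            } finally {
                txnLock.unlock(); // release between batches so other threads can proceed
            }
        }
    }

    public static void main(String[] args) {
        ExpiredTxnCleaner cleaner = new ExpiredTxnCleaner();
        for (long id = 0; id < 2500; id++) {
            cleaner.expiredTxnIds.add(id);
        }
        cleaner.removeExpiredTxns(); // batches of at most 1000 per lock acquisition
    }
}
```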
Currently, the tablet meta checkpoint is a memory-intensive operation.
If a host has 12 disks, it will start 12 threads to do the tablet meta checkpoint.
In our experience, the data size of one tablet meta can be as high as 2 GB.
If 12 threads do the checkpoint at the same time, it may cause an OOM.
Therefore, this PR tries to solve the problem as follows.
First, it starts only one thread to produce tablet meta checkpoint tasks.
Second, it creates a thread pool to handle these tasks.
You can configure the size of the thread pool to control the parallelism and avoid OOM.
It is a producer-consumer model, sketched below.
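A sketch of the producer-consumer model in Java (the actual BE code is C++); the pool size and task body are placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CheckpointSketch {
    public static void main(String[] args) {
        int poolSize = 2; // configurable parallelism to bound memory use
        ExecutorService workers = Executors.newFixedThreadPool(poolSize);

        // single producer: one checkpoint task per disk, instead of one thread per disk
        for (int disk = 0; disk < 12; disk++) {
            final int d = disk;
            workers.submit(() ->
                    System.out.println("checkpoint tablet meta on disk " + d));
        }
        workers.shutdown(); // at most `poolSize` checkpoints ever run concurrently
    }
}
```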