The version information of a tablet is stored in memory
in an adjacency-graph data structure.
As new versions are written and old versions are deleted,
the graph accumulates vertices with no associated edges (orphan vertices).
These orphan vertices should be removed.
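A minimal sketch of the pruning idea, assuming the version graph is modeled as an adjacency map (the names `prune_orphans` and `adjacency` are illustrative, not actual Doris code): a vertex is dropped once it has no outgoing edges and is no longer the target of any edge.

```
def prune_orphans(adjacency):
    """Remove vertices that have no outgoing edges and are not
    the target of any edge (orphan vertices)."""
    referenced = {dst for dsts in adjacency.values() for dst in dsts}
    return {
        v: dsts
        for v, dsts in adjacency.items()
        if dsts or v in referenced
    }

# Versions 1 and 2 are linked by an edge; version 5 has no edges at all.
graph = {1: [2], 2: [], 5: []}
pruned = prune_orphans(graph)
# Version 2 survives (it is referenced by 1); orphan version 5 is dropped.
```

In the real structure the pruning would run as part of version GC rather than rebuilding the whole map, but the orphan condition is the same.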
Fix #5931
The reason is that sometimes the method `coordinate.exec()` is not called when the job times out,
so the query profile in this coordinator is never initialized,
which causes an NPE during the execution of `ExportExportingTask`.
1. The partitions specified by the admin repair command are prioritized,
so that the tablets of these partitions can be repaired as soon as possible.
2. Add an FE metric "query_begin" to monitor the number of queries submitted to Doris.
For PR #5792. This patch adds a new parameter `cache type` to distinguish the SQL cache from the partition cache.
When updating the SQL cache, we ensure that one SQL key has only one cached version.
When parsing memory parameters in `ParseUtil::parse_mem_spec`, convert the percentage to a `double` instead of an `int`.
The currently affected parameters are `mem_limit` and `storage_page_cache_limit`.
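A minimal sketch of why the type matters (the helper `parse_mem_percent` is hypothetical, not the actual `ParseUtil::parse_mem_spec`): parsing the percentage as a floating-point value preserves fractional percentages that an integer parse would lose.

```
def parse_mem_percent(spec, capacity):
    """Resolve a memory spec like '1.5%' against `capacity` bytes.
    Parsing the percentage as float keeps the fractional part,
    which an integer parse would discard."""
    if not spec.endswith('%'):
        return int(spec)
    percent = float(spec[:-1])  # an int parse would reject or lose '1.5'
    return int(capacity * percent / 100)

capacity = 10 * 1024 ** 3          # 10 GB
parse_mem_percent('1.5%', capacity)  # ~161 MB instead of rounding to 1%
```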
The Repeat node changes the data partition of a fragment
when the fragment's original data partition is HashPartition.
The Repeat node generates new rows
whose distribution is completely inconsistent with the original data distribution:
their distribution is RANDOM.
If the data distribution is not corrected,
an error occurs when the agg node decides whether to perform a colocate aggregation:
the wrong data distribution makes the agg node believe that the aggregation can be colocated,
leading to wrong results.
For example, the following query cannot be colocated even though the distribution column of the table is k1:
```
SELECT k1, k2, SUM( k3 )
FROM table
GROUP BY GROUPING SETS ( (k1, k2), (k1), (k2), ( ) )
```
According to LRU priority, the `lru list` is split into an `lru normal list` and an `lru durable list`,
and the two lists are traversed in sequence during LRU eviction, avoiding invalid cycles.
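A toy sketch of the two-list idea (the class and names are illustrative, not the actual cache code): eviction always drains the normal list first and only touches the durable list when no normal entry remains, so an eviction pass never cycles uselessly over durable entries.

```
from collections import OrderedDict

class TwoListLRU:
    """Toy cache: normal entries are evicted before durable ones."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.normal = OrderedDict()
        self.durable = OrderedDict()

    def put(self, key, value, durable=False):
        target = self.durable if durable else self.normal
        target[key] = value
        target.move_to_end(key)
        while len(self.normal) + len(self.durable) > self.capacity:
            # Traverse the normal list first; fall back to the
            # durable list only when the normal list is empty.
            victim_list = self.normal if self.normal else self.durable
            victim_list.popitem(last=False)

cache = TwoListLRU(2)
cache.put('a', 1, durable=True)
cache.put('b', 2)
cache.put('c', 3)   # evicts normal entry 'b', durable 'a' survives
```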
If a query exceeds its memory limit, detailed information about where the memory was exceeded is required.
However, it is not necessary to return the entire query memory stack to the end user;
the stack only needs to be printed in the BE log.
In some scenarios, users of dynamic partitions also want to use Doris' tiered storage
at the same time.
For example, with a dynamic partition rule that partitions by day, they may want the partitions
of the last 3 days stored on the SSD storage medium and automatically migrated to the HDD storage medium after expiration.
This CL adds a new dynamic partition property: "hot_partition_num".
This parameter specifies how many of the most recent partitions should be stored on the SSD storage medium.
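A hedged usage sketch (the surrounding property list is illustrative; only `dynamic_partition.hot_partition_num` is the property added by this change): with day-level dynamic partitions, setting it to 3 keeps the newest 3 partitions on SSD.

```
PROPERTIES
(
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "32",
    "dynamic_partition.hot_partition_num" = "3"
);
```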
In BE, when a problem happens, the log contains the database id, table id, and partition id,
but not the database name, table name, or partition name.
In FE, there is also no way to find the database/table/partition name according to the
database/table/partition id. Therefore, this patch adds 3 new commands:
1. show database id;
```
mysql> show database 10002;
+----------------------+
| DbName               |
+----------------------+
| default_cluster:test |
+----------------------+
```
2. show table id;
```
mysql> show table 11100;
+----------------------+-----------+-------+
| DbName               | TableName | DbId  |
+----------------------+-----------+-------+
| default_cluster:test | table2    | 10002 |
+----------------------+-----------+-------+
```
3. show partition id;
```
mysql> show partition 11099;
+----------------------+-----------+---------------+-------+---------+
| DbName               | TableName | PartitionName | DbId  | TableId |
+----------------------+-----------+---------------+-------+---------+
| default_cluster:test | table2    | p201708       | 10002 | 11100   |
+----------------------+-----------+---------------+-------+---------+
```
1. When an OOM error occurs while writing to bdbje, catch the error and exit the process.
2. Increase the timeout of the bdbje replica ack and make it configurable.
The buffered reader's `_cur_offset` should be initialized to the same value as the inner file reader's,
to make sure that the reader starts reading at the right position.
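A minimal sketch of the fix (the wrapper class is illustrative, not the actual BE reader): the buffered reader copies the inner reader's current offset at construction instead of assuming 0.

```
import io

class BufferedFileReader:
    """Toy buffered wrapper: start at the inner reader's offset."""
    def __init__(self, inner):
        self.inner = inner
        # Initialize from the inner reader, not from 0, so reads
        # start at the right position.
        self._cur_offset = inner.tell()

    def read(self, n):
        self.inner.seek(self._cur_offset)
        data = self.inner.read(n)
        self._cur_offset += len(data)
        return data

inner = io.BytesIO(b"header|payload")
inner.seek(7)                      # inner reader is already past the header
reader = BufferedFileReader(inner)
reader.read(7)                     # reads b"payload", not b"header|"
```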
Support starting consumption from a specified point in time, instead of a specific offset, when creating a Kafka routine load.
E.g.:
```
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "property.kafka_default_offsets" = "2021-10-10 11:00:00"
);
```
or
```
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "2021-10-10 11:00:00, 2021-10-10 11:00:00, 2021-10-10 12:00:00"
);
```
This PR also refactors the analysis of properties when creating or altering
routine load jobs, unifying the analysis process in the `RoutineLoadDataSourceProperties` class.
The old colocate aggregation only covers the case where the child is a scan node.
In fact, as long as the child's data distribution meets the requirements,
a colocate aggregation can be performed no matter what plan node the child is.
This PR also fixes the data partition attribute of fragments:
the data partition of a fragment that contains a scan node is HashPartition rather than Random.
The main purpose of this change is to determine the possibility of colocation
from the correct distribution of the child fragments.
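A hedged sketch of the eligibility check described above (the function and its parameters are illustrative, not the actual planner code): colocation depends only on the child's distribution, not on what kind of plan node produced it.

```
def can_colocate(child_partition_type, child_hash_cols, group_by_cols):
    """A colocate aggregation is legal when the child's data is
    hash-partitioned on a subset of the grouping columns,
    regardless of the child's plan node type."""
    return (child_partition_type == 'HASH'
            and set(child_hash_cols) <= set(group_by_cols))

can_colocate('HASH', ['k1'], ['k1', 'k2'])   # eligible
can_colocate('RANDOM', ['k1'], ['k1'])       # not eligible (e.g. after Repeat)
```

The second call mirrors the Repeat node case above: once the distribution becomes RANDOM, colocation must be rejected even if the column sets would match.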
The `@Expose` annotation is used in the persistence logic of the old backup/restore path.
This annotation is meant to make some fields be ignored during serialization and deserialization.
However, it was used incorrectly, and Gson did not ignore the fields that should have been ignored,
which results in duplicate initialization when FE is restarted.
This PR uses the Doris-wrapped Gson directly and eliminates the use of the `@Expose` annotation.
Fixed `sortedTabletInfoList` being repeatedly initialized, resulting in incorrect numbers.
Fixes #5852