In some scenarios, users of dynamic partitions also want to use Doris' hierarchical storage
feature at the same time.
For example, with a dynamic partition rule that partitions by day, the partitions of the last 3 days
should be stored on the SSD storage medium and automatically migrated to the HDD storage medium after expiration.
This CL adds a new dynamic partition property: `hot_partition_num`.
This parameter specifies how many of the most recent partitions should be stored on the SSD storage medium.
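For illustration, a minimal sketch of setting the property at table creation time; the table name, schema, and the other dynamic partition values are hypothetical, and only `dynamic_partition.hot_partition_num` comes from this CL:
```
CREATE TABLE example_db.tbl_day
(
    k1 DATE,
    v1 INT
)
DUPLICATE KEY(k1)
PARTITION BY RANGE(k1) ()
DISTRIBUTED BY HASH(k1) BUCKETS 10
PROPERTIES
(
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "10",
    -- keep the 3 most recent partitions on SSD
    "dynamic_partition.hot_partition_num" = "3"
);
```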
In BE, when a problem occurs, the log contains the database id, table id, and partition id,
but not the database name, table name, or partition name.
In FE, there is also no way to find the database/table/partition name according to
the database/table/partition id. Therefore, this patch adds 3 new commands:
1. show database id;
mysql> show database 10002;
+----------------------+
| DbName               |
+----------------------+
| default_cluster:test |
+----------------------+
2. show table id;
mysql> show table 11100;
+----------------------+-----------+-------+
| DbName               | TableName | DbId  |
+----------------------+-----------+-------+
| default_cluster:test | table2    | 10002 |
+----------------------+-----------+-------+
3. show partition id;
mysql> show partition 11099;
+----------------------+-----------+---------------+-------+---------+
| DbName               | TableName | PartitionName | DbId  | TableId |
+----------------------+-----------+---------------+-------+---------+
| default_cluster:test | table2    | p201708       | 10002 | 11100   |
+----------------------+-----------+---------------+-------+---------+
1. When an OOM error occurs while writing to BDBJE, catch the error and exit the process.
2. Increase the timeout of the BDBJE replica ack and make it configurable.
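A hedged sketch of the corresponding fe.conf entry; the exact config name and default shown here are assumptions based on this changelog entry, not confirmed values:
```
# fe.conf: assumed name of the new, configurable BDBJE replica ack timeout (seconds)
bdbje_replica_ack_timeout_second = 10
```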
Support starting consumption from a specified point in time, instead of a specific offset, when creating a Kafka routine load.
e.g.:
```
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "property.kafka_default_offsets" = "2021-10-10 11:00:00"
);

-- or

FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "2021-10-10 11:00:00, 2021-10-10 11:00:00, 2021-10-10 12:00:00"
);
```
This PR also refactors the analysis of properties when creating or altering
routine load jobs, unifying the analysis process in the `RoutineLoadDataSourceProperties` class.
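For context, a hedged sketch of a complete statement using the new time-based offsets; the database, job, and table names are hypothetical:
```
CREATE ROUTINE LOAD example_db.example_job ON example_tbl
PROPERTIES
(
    "desired_concurrent_number" = "1"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    -- consume from this point in time instead of a fixed offset
    "property.kafka_default_offsets" = "2021-10-10 11:00:00"
);
```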
1. Give some MemTrackers a reasonable parent MemTracker instead of the root tracker.
2. Make each MemTracker easy to trace.
3. Add a show level to MemTracker, to reduce the number of trackers shown on the web page and provide a way to control how many are shown.
Currently, `show data` does not support sorting, which becomes inconvenient to manage as the number of tables grows. This PR adds sorting support,
like:
```
mysql> show data order by ReplicaCount desc, Size asc;
+-----------+-------------+--------------+
| TableName | Size        | ReplicaCount |
+-----------+-------------+--------------+
| table_c   | 3.102 KB    | 40           |
| table_d   | .000        | 20           |
| table_b   | 324.000 B   | 20           |
| table_a   | 1.266 KB    | 10           |
| Total     | 4.684 KB    | 90           |
| Quota     | 1024.000 GB | 1073741824   |
| Left      | 1024.000 GB | 1073741734   |
+-----------+-------------+--------------+
```
* Fix the issue that the hardware information on the Web UI home page cannot be loaded
* Add Flink Doris Connector design document
* Add English version of the Flink Doris Connector design document
Co-authored-by: zhangjf@shuhaisc.com <zhangfeng800729>
1. Add a new dynamic partition property `create_history_partition`.
If set to true, Doris will create all partitions from `start` to `end` (see the sketch after this list).
2. Add a new FE config `max_dynamic_partition_num`
to limit the number of partitions created when creating one table.
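A hedged sketch of the new property inside a dynamic partition definition; everything other than `dynamic_partition.create_history_partition` is an illustrative value:
```
PROPERTIES
(
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-7",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    -- create all history partitions from start (-7) to end (3) up front;
    -- the number created is capped by the FE config max_dynamic_partition_num
    "dynamic_partition.create_history_partition" = "true"
);
```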
1. Add /api/compaction/run_status to show the running compaction tasks (see the example after this list).
2. Support doing base and cumulative compaction for one tablet at the same time.
3. Adjust some log levels.
4. Add a feedback document.
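A hedged example of querying the new endpoint; `be_host` is hypothetical, 8040 is only the default BE web server port, and extra parameters (e.g. a tablet id) may be required:
```
curl -X GET http://be_host:8040/api/compaction/run_status
```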
* [Apache] Change download link to archive page
Only the latest package should appear in svn.
The old release packages have already been archived, so the download links should point to the archive page.
1. Add a timer to count the time the transfer thread waits for the scanner thread to return a row batch.
2. Add a timer to count the time the scanner thread waits for available worker threads in the thread pool.
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
1. Users can export query results to local disk, like:
`select * from tbl into outfile "file:///disk1/result_";`
The return result is also modified to show the details of the export:
```
mysql> select * from tbl1 limit 10 into outfile "file:///home/work/path/result_";
+------------+-----------+----------+--------------+
| FileNumber | TotalRows | FileSize | URL          |
+------------+-----------+----------+--------------+
|          1 |         2 |        8 | 192.168.1.10 |
+------------+-----------+----------+--------------+
```
2. Support creating a mark file after the export finishes successfully.
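A hedged sketch of enabling the mark file; the `success_file_name` property name is an assumption for illustration:
```
SELECT * FROM tbl1
INTO OUTFILE "file:///home/work/path/result_"
-- assumed property controlling the mark (success) file written on completion
PROPERTIES ("success_file_name" = "SUCCESS");
```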
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
1. Support a where clause in the export statement, to export only selected rows.
The syntax is as follows (a concrete sketch appears after this list):
Export table [table name]
where [expr]
To xxx
xxxx
Only rows that meet the where condition will be exported.
2. Support utf8 separator.
3. Support export to local.
The syntax is as follows:
Export table [table name]
To (file:///xxx/xx/xx)
If the user exports rows to local, the broker properties are not required.
Users only need to create a local folder to store the data, and fill in the path of the folder starting with file://
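Putting items 1 and 3 together, a hedged concrete sketch; the table name, filter expression, and local path are hypothetical:
```
-- export only rows matching the where clause to a local folder; no broker needed
EXPORT TABLE example_tbl
WHERE k1 > 100
TO "file:///home/work/export/";
```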
Change-Id: Ib7e7ece5accb3e359a67310b0bf006d42cd3f6f5