1. Fix a bug of UNKNOWN Operation Type 91.
2. Support using the `resource_tag` property of a user to limit which BEs the user can use (see the sketch after this list).
3. Add new FE config `disable_tablet_scheduler` to disable the tablet scheduler.
4. Add documents for resource tag.
5. Modify the default value of FE config `default_db_data_quota_bytes` to 1PB.
6. Add a new BE config `disable_compaction_trace_log` to disable the trace log of compaction time cost.
7. Modify the default value of BE config `remote_storage_read_buffer_mb` to 16MB.
8. Fix an error in the results of `show backends`.
9. Add new BE config `external_table_connect_timeout_sec` to set the timeout when connecting to ODBC and MySQL external tables.
10. Modify the issue template to enable blank issues, for release notes or other specific usage.
11. Fix a bug in the `split_range()` function of alpha rowset.
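As a rough illustration of items 2 and 3 above, the sketch below shows how the new user property and FE config might be set. The statement forms are the usual `SET PROPERTY` / `ADMIN SET FRONTEND CONFIG` syntax; the user name, tag value, property key spelling, and whether `disable_tablet_scheduler` is changeable at runtime are assumptions.
```
-- Sketch: limit user 'jack' (hypothetical) to BEs tagged with resource tag 'group_a' (hypothetical)
SET PROPERTY FOR 'jack' 'resource_tags.location' = 'group_a';

-- Sketch: disable the tablet scheduler via the new FE config;
-- if the config is not runtime-mutable, set it in fe.conf instead
ADMIN SET FRONTEND CONFIG ("disable_tablet_scheduler" = "true");
```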
For issue #6474, support copying rollups when using `create table like`:
```sql
create table test.table1 like test.table with rollup r1,r2 -- copy some rollup
create table test.table1 like test.table with rollup all -- copy all rollup
create table test.table1 like test.table -- only copy base table
```
Add a new statement `SHOW DATA SKEW FROM tbl PARTITION(p1)` to view the data distribution of a specified partition:
```
mysql> admin show data skew from tbl1 partition(tbl1);
+-----------+-------------+-------+---------+
| BucketIdx | AvgDataSize | Graph | Percent |
+-----------+-------------+-------+---------+
| 0         | 0           |       | 100.00% |
+-----------+-------------+-------+---------+
1 row in set (0.01 sec)
```
Also modify the result of `admin show replica distribution` to add the replica size distribution:
```
mysql> admin show replica distribution from tbl1 partition(tbl1);
+-----------+------------+-------------+----------+------------+-----------+-------------+
| BackendId | ReplicaNum | ReplicaSize | NumGraph | NumPercent | SizeGraph | SizePercent |
+-----------+------------+-------------+----------+------------+-----------+-------------+
| 10002     | 1          | 0           | >        | 100.00%    |           | 100.00%     |
+-----------+------------+-------------+----------+------------+-----------+-------------+
```
Support modifying the Kafka broker list and topic of a routine load job with `alter routine load`:
```
alter routine load for cmy2 from kafka("kafka_broker_list" = "ip2:9094", "kafka_topic" = "my_topic");
```
This is useful when the Kafka broker list or topic has been changed.
Also modify `show create routine load` to support showing "kafka_partitions" and "kafka_offsets".
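For reference, a minimal sketch of the modified statement, reusing the job name `cmy2` from the example above (the `FOR` clause form is assumed from the usual SHOW CREATE syntax):
```
-- The output should now also contain "kafka_partitions" and "kafka_offsets"
show create routine load for cmy2;
```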
[Update] Support update syntax
The current update syntax only supports updating the filtered data of a single table.
Syntax:

    UPDATE table_reference
        SET assignment_list
        [WHERE where_condition]

    value:
        {expr}

    assignment:
        col_name = value

    assignment_list:
        assignment [, assignment] ...
Example:

    UPDATE unique_table
    SET v1 = 1
    WHERE k1 = 1
New Frontend Config: `enable_concurrent_update`
This configuration controls whether multiple UPDATE statements can be executed concurrently on one table.
The default value is false, which means a table can only have one update task executing at a time.
Users who want to update the same table concurrently need to set this configuration to true and restart the master Frontend.
Concurrent updates may cause write conflicts and make the result uncertain, so use this with care.
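As a sketch, enabling it means adding the line below to the FE configuration file (assuming the standard `fe.conf`) and restarting the master Frontend:
```
# fe.conf (hypothetical excerpt); takes effect after the master FE restarts
enable_concurrent_update = true
```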
The main implementation principle:
1. Read the rows that meet the conditions set by the WHERE clause.
2. Modify those rows according to the SET clause.
3. Write the modified rows back to the table.
Some restrictions on the use of the UPDATE syntax:
1. Only unique tables can be updated.
2. Only the value columns of a unique table can be updated.
3. The WHERE clause currently only supports a single table.
Possible risks:
1. Since the current implementation updates row by row, concurrent updates on the same table may conflict and produce incorrect results.
2. If the conditions in the WHERE clause cannot use an index, the update is likely to cause a full table scan and affect query performance, so pay attention to whether the columns in the WHERE clause can match an index.
[Docs][Update] Add update document and sql-reference
Fixes #6229
[Insert] Support transaction for insert
Add transaction support for INSERT operations. When inserting a large number of rows, running them in a transaction costs much less time than non-transactional inserts (roughly 1/1000 of the time).
### Syntax
```
BEGIN [ WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```
### Example
commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
rollback a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
commit a transaction with label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```
### Description
```
begin: begin a transaction, the next insert will execute in the transaction until commit/rollback;
commit: commit the transaction, the data in the transaction will be inserted into the table;
rollback: abort the transaction, nothing will be inserted into the table;
```
### The main implementation principle:
```
1. begin a transaction in the session; the following SQL statements are executed in this transaction;
2. the insert SQL is parsed to get the database name and table name, which are used to select a BE and create a pipe to accept the data;
3. all inserted values are sent to the BE and written into the pipe;
4. a thread reads the data from the pipe and writes it to disk;
5. commit completes the transaction and makes the data visible;
6. rollback aborts the transaction
```
### Some restrictions on the use of the transaction syntax:
1. Only `insert` can be called in a transaction.
2. If an error occurs, `commit` will not succeed and the transaction will be rolled back directly.
3. By default, if some inserts in the transaction are invalid, `commit` will still insert the other, correct data into the table.
4. If you need `commit` to fail when any insert in the transaction is invalid, execute `set enable_insert_strict = true` before `begin`, as sketched below.
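Putting restriction 4 together with the syntax above, a strict transaction might look like the sketch below (table, values, and label reuse the earlier examples; the label name is illustrative):
```
set enable_insert_strict = true;
begin with label strict_test_label;
insert into Tbl values(11, 22, 33);
-- with enable_insert_strict = true, commit fails if any insert in the transaction was invalid
commit;
```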
Fixes issue #5995.
Add the property `dynamic_partition.history_partition_num` to specify the number of history partitions to create when `dynamic_partition.create_history_partition` is enabled, fixing the invalid date format value,
and add these two properties to the docs.
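For illustration, a hedged sketch of setting these two properties on an existing dynamic-partition table (the table name `tbl1` and the value `5` are hypothetical; the `ALTER TABLE ... SET` form follows the usual way dynamic partition properties are modified):
```
ALTER TABLE tbl1 SET
(
    "dynamic_partition.create_history_partition" = "true",
    "dynamic_partition.history_partition_num" = "5"
);
```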
In BE, when a problem happens, the log shows the database id, table id, and partition id,
but not the database name, table name, or partition name.
In FE, there is also no way to find the database name/table name/partition name according to
the database id/table id/partition id. Therefore, this patch adds 3 new commands:
1. show database <database_id>;
mysql> show database 10002;
+----------------------+
| DbName               |
+----------------------+
| default_cluster:test |
+----------------------+
2. show table <table_id>;
mysql> show table 11100;
+----------------------+-----------+-------+
| DbName               | TableName | DbId  |
+----------------------+-----------+-------+
| default_cluster:test | table2    | 10002 |
+----------------------+-----------+-------+
3. show partition <partition_id>;
mysql> show partition 11099;
+----------------------+-----------+---------------+-------+---------+
| DbName               | TableName | PartitionName | DbId  | TableId |
+----------------------+-----------+---------------+-------+---------+
| default_cluster:test | table2    | p201708       | 10002 | 11100   |
+----------------------+-----------+---------------+-------+---------+
Support starting consumption from a specified point in time, instead of a specific offset, when creating a Kafka routine load.
e.g.:
```
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "property.kafka_default_offsets" = "2021-10-10 11:00:00"
);

or

FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092,broker2:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "2021-10-10 11:00:00, 2021-10-10 11:00:00, 2021-10-10 12:00:00"
);
```
This PR also refactors the analysis of properties when creating or altering
routine load jobs, unifying the analysis process in the `RoutineLoadDataSourceProperties` class.
Currently, `show data` does not support sorting, which becomes inconvenient to manage as the number of tables grows. Add support for sorting,
like:
```
mysql> show data order by ReplicaCount desc,Size asc;
+-----------+-------------+--------------+
| TableName | Size        | ReplicaCount |
+-----------+-------------+--------------+
| table_c   | 3.102 KB    | 40           |
| table_d   | .000        | 20           |
| table_b   | 324.000 B   | 20           |
| table_a   | 1.266 KB    | 10           |
| Total     | 4.684 KB    | 90           |
| Quota     | 1024.000 GB | 1073741824   |
| Left      | 1024.000 GB | 1073741734   |
+-----------+-------------+--------------+
```