Implement MySQL's `lower_case_table_names` variable. The values mean the following:
0: table names are stored as given and comparisons are case-sensitive.
1: table names are stored in lowercase and comparisons are case-insensitive.
2: table names are stored as given but compared case-insensitively.
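For example, with `lower_case_table_names = 1` the expected behavior looks like this (a sketch; the table name and abbreviated DDL are illustrative):
```
-- assuming lower_case_table_names = 1
create table MyTable (k1 int) distributed by hash(k1) buckets 1;
show tables;            -- the name is stored and shown as "mytable"
select * from MYTABLE;  -- succeeds: the comparison is case-insensitive
```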
Encapsulate some HTTP interfaces for better management and maintenance of Doris clusters.
The HTTP interfaces include getting cluster connection information, node information, and node configuration, batch-modifying node configuration, and getting query profiles.
For details, please refer to the document:
`docs/zh-CN/administrator-guide/http-actions/fe/manager/`
- Make load_parallelism configurable per load job.
- Different clusters should be configured with different load_parallelism values.
- Some users don't know how to set load_parallelism, or don't know the best value for it.
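One plausible way to set it per job is through the load statement's PROPERTIES clause (a sketch; the label, path, table, and broker name are hypothetical):
```
LOAD LABEL db1.label1
(DATA INFILE("hdfs://host:port/path/file") INTO TABLE tbl1)
WITH BROKER "broker_name"
PROPERTIES ("load_parallelism" = "4");
```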
Add `SHOW DATA SKEW FROM tbl PARTITION(p1)` to view the data distribution of a specified partition:
```
mysql> admin show data skew from tbl1 partition(tbl1);
+-----------+-------------+-------+---------+
| BucketIdx | AvgDataSize | Graph | Percent |
+-----------+-------------+-------+---------+
| 0         | 0           |       | 100.00% |
+-----------+-------------+-------+---------+
1 row in set (0.01 sec)
```
Also modify the result of `admin show replica distribution` to add the replica size distribution:
```
mysql> admin show replica distribution from tbl1 partition(tbl1);
+-----------+------------+-------------+----------+------------+-----------+-------------+
| BackendId | ReplicaNum | ReplicaSize | NumGraph | NumPercent | SizeGraph | SizePercent |
+-----------+------------+-------------+----------+------------+-----------+-------------+
| 10002     | 1          | 0           | >        | 100.00%    |           | 100.00%     |
+-----------+------------+-------------+----------+------------+-----------+-------------+
```
```
alter routine load for cmy2 from kafka("kafka_broker_list" = "ip2:9094", "kafka_topic" = "my_topic");
```
This is useful when the Kafka broker list or topic has changed.
Also modify `show create routine load` to support showing "kafka_partitions" and "kafka_offsets".
[Update] Support update syntax
The current UPDATE syntax only supports updating the filtered data of a single table.
Syntax:
```
UPDATE table_reference
    SET assignment_list
    [WHERE where_condition]

value:
    {expr}

assignment:
    col_name = value

assignment_list:
    assignment [, assignment] ...
```
Example:
```
update unique_table set v1 = 1 where k1 = 1;
```
New Frontend Config: enable_concurrent_update
This configuration controls whether multiple UPDATE statements can be executed concurrently on one table.
The default value is false, which means a table can have only one update task executing at a time.
Users who want to update the same table concurrently need to set this config to true and restart the master frontend.
Concurrent updates may cause write conflicts, and the result is then uncertain, so please be careful.
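A sketch of enabling the option (as noted above, a master FE restart is required for it to take effect):
```
# fe.conf on the master frontend
enable_concurrent_update = true
```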
The main implementation principle:
1. Read the rows that match the conditions in the WHERE clause.
2. Modify those rows according to the SET clause.
3. Write the modified rows back to the table.
Some restrictions on the use of the UPDATE syntax:
1. Only unique tables can be updated.
2. Only the value columns of a unique table can be updated.
3. The WHERE clause currently only supports a single table.
Possible risks:
1. Since the current implementation updates row by row, concurrent updates of the same table may conflict and produce incorrect results.
2. If the conditions in the WHERE clause cannot use an index, a full table scan is likely, which affects query performance.
Please pay attention to whether the columns in the WHERE clause can match an index.
[Docs][Update] Add UPDATE document and SQL reference.
Fixed #6229
## Proposed changes
Add transactions for INSERT operations. When inserting a large number of rows, a transactional insert takes far less time than non-transactional inserts (roughly 1/1000 of the time).
### Syntax
```
BEGIN [ WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```
### Example
commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
rollback a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
commit a transaction with label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```
### Description
```
begin: begin a transaction, the next insert will execute in the transaction until commit/rollback;
commit: commit the transaction, the data in the transaction will be inserted into the table;
rollback: abort the transaction, nothing will be inserted into the table;
```
### The main implementation principle:
```
1. begin a transaction in the session; subsequent SQL executes in the transaction;
2. the insert SQL is parsed to get the database and table name, which are used to select a BE and create a pipe to accept data;
3. all inserted values are sent to the BE and written into the pipe;
4. a thread reads the data from the pipe and writes it to disk;
5. commit completes the transaction and makes the data visible;
6. rollback aborts the transaction.
```
### Some restrictions on the use of the transaction syntax:
1. Only ```insert``` can be called in a transaction.
2. If an error occurs, ```commit``` will not succeed; the transaction is rolled back directly.
3. By default, if part of the inserts in the transaction are invalid, ```commit``` will insert only the remaining correct data into the table.
4. If you need ```commit``` to fail when any insert in the transaction is invalid, execute ```set enable_insert_strict = true``` before ```begin```.
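For instance, to make the whole transaction fail on any invalid row (a sketch, reusing the table `Tbl` from the examples above):
```
set enable_insert_strict = true;
begin;
insert into Tbl values(11, 22, 33);
insert into Tbl values(44, 55, 66);
commit;  -- fails if any insert above contained invalid data
```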
1. Use parallelStream to speed up tabletReport.
2. Add partitionIdInMemorySet to speed up tabletToInMemory check.
3. Add disable_storage_medium_check to disable the storage medium check when the user doesn't care about a tablet's storage medium, and remove the enable_strict_storage_medium_check config to fix some potential migration task failures.
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
At present, some constant expression calculations are implemented on the FE side,
but they are incomplete, and some expressions cannot produce values fully consistent
with those calculated by the BE (such as some time functions).
Therefore, we provide a way to send all constants in a SQL statement to the BE for calculation
before analyzing and planning the SQL. This also solves the problem that some
complex constant expressions issued by BI tools cannot be processed on the FE side.
This feature is controlled by the session variable enable_fold_constant_by_be,
which is disabled by default.
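A minimal sketch of enabling it (the query is illustrative; any constant sub-expression qualifies):
```
set enable_fold_constant_by_be = true;
-- constant expressions such as the interval arithmetic below
-- are now evaluated on the BE before analysis and planning
select now() + interval 1 day;
```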
When the FE config "enable_bdbje_debug_mode" is set to true,
starting FE enters debug mode.
In this mode, only the MySQL server and HTTP server are started.
After that, users can log in to Doris through the web frontend or a MySQL client,
and then use `show proc "/bdbje"` to view the data in bdbje.
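A sketch of the workflow described above:
```
# fe.conf
enable_bdbje_debug_mode = true
```
After restarting FE, connect with a MySQL client and run `show proc "/bdbje";` to browse the bdbje data.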
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
Fix issue #5995.
Add the property "dynamic_partition.history_partition_num" to specify the number of history partitions to create when "dynamic_partition.create_history_partition" is enabled, fixing the invalid date format value,
and add these two properties to the docs.
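One plausible way to set both properties on an existing table (a sketch; the table name and partition count are hypothetical):
```
ALTER TABLE tbl1 SET (
    "dynamic_partition.create_history_partition" = "true",
    "dynamic_partition.history_partition_num" = "10"
);
```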