## Proposed changes
Add transaction support for the INSERT operation. When inserting a large number of rows, it costs much less time than the non-transactional way (roughly 1/1000 of the time).
### Syntax
```
BEGIN [WITH LABEL label];
INSERT INTO table_name ...
[COMMIT | ROLLBACK];
```
### Example
Commit a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
commit;
```
Roll back a transaction:
```
begin;
insert into Tbl values(11, 22, 33);
rollback;
```
Commit a transaction with a label:
```
begin with label test_label;
insert into Tbl values(11, 22, 33);
commit;
```
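Assuming the label is registered with Doris's transaction manager, it can be used to look up the transaction afterwards. A minimal sketch using the standard `SHOW TRANSACTION` statement (existing Doris syntax, not part of this proposal):
```
show transaction where label = 'test_label';
```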
### Description
```
begin: starts a transaction; subsequent inserts execute in the transaction until commit/rollback;
commit: commits the transaction; the data in the transaction is inserted into the table;
rollback: aborts the transaction; nothing is inserted into the table;
```
### Main implementation principle
```
1. begin starts a transaction in the session; subsequent SQL statements are executed in the transaction;
2. the INSERT statement is parsed to get the database name and table name, which are used to select a BE and create a pipe to receive data;
3. all inserted values are sent to the BE and written into the pipe;
4. a thread reads the data from the pipe and writes it to disk;
5. commit completes the transaction and makes the data visible;
6. rollback aborts the transaction
```
### Some restrictions on the use of this syntax
1. Only ```insert``` can be called in a transaction.
2. If any error happens, ```commit``` will not succeed; the transaction is rolled back directly.
3. By default, if some inserts in the transaction are invalid, ```commit``` will still insert the remaining correct data into the table.
4. If you need ```commit``` to fail when any insert in the transaction is invalid, execute ```set enable_insert_strict = true``` before ```begin```, as in the sketch after this list.
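A minimal sketch of strict mode, using the same placeholder table `Tbl` as the examples above:
```
set enable_insert_strict = true;
begin;
-- with strict mode on, commit fails if any inserted row is invalid
insert into Tbl values(11, 22, 33);
commit;
```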
1. Use parallelStream to speed up tabletReport.
2. Add partitionIdInMemorySet to speed up the tabletToInMemory check.
3. Add disable_storage_medium_check to disable the storage medium check when users don't care which storage medium a tablet is on, and remove the enable_strict_storage_medium_check config to fix some potential migration task failures (see the sketch below).
Co-authored-by: caiconghui <caiconghui@xiaomi.com>
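A hedged sketch of turning the check off at runtime, assuming disable_storage_medium_check is a runtime-mutable FE config (the `ADMIN SET FRONTEND CONFIG` statement is existing Doris syntax):
```
admin set frontend config ("disable_storage_medium_check" = "true");
```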
At present, some constant expression calculations are implemented on the FE side,
but they are incomplete, and some expressions cannot produce values completely
consistent with those calculated by the BE (such as some of the time functions).
Therefore, we provide a way to pass all the constants in the SQL to the BE for calculation
before starting to analyze and plan the SQL. This method can also solve the problem that some
complex constant calculations issued by BI tools cannot be processed on the FE side.
The session variable enable_fold_constant_by_be controls this function;
it is disabled by default.
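A minimal usage sketch (the table and column names are illustrative):
```
set enable_fold_constant_by_be = true;
-- the constant expression below is folded on the BE before the query is planned
select * from Tbl where k1 = unix_timestamp('2021-01-01 00:00:00');
```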
When the FE config "enable_bdbje_debug_mode" is set to true,
FE starts in debug mode.
In this mode, only the MySQL server and http server are started.
After that, users can log in to Doris through the web frontend or a MySQL client,
and then use `show proc "/bdbje"` to view the data in bdbje.
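For example, after connecting with a MySQL client:
```
show proc "/bdbje";
```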
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
Fix issue #5995.
Add the property "dynamic_partition.history_partition_num" to specify the number of history
partitions created when create_history_partition is enabled, fixing the invalid date format value,
and add these two properties to the docs (a sketch follows).
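A hedged sketch of a table using both properties together (the schema and the other property values are illustrative):
```
create table example_tbl (k1 date, v1 int)
duplicate key(k1)
partition by range(k1) ()
distributed by hash(k1) buckets 8
properties (
    "dynamic_partition.enable" = "true",
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.create_history_partition" = "true",
    "dynamic_partition.history_partition_num" = "7"
);
```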
Document the construction of a Doris BE development and debugging environment.
Add installation instructions for Ubuntu, including dependency installation.
When compiling on an Ubuntu 20.04 physical machine, actual testing shows these dependencies
need to be installed: autoconf automake libtool autopoint
This PR mainly adds a rewrite rule 'ExtractCommonFactorsRule'
used to extract wide common factors from an 'Expr' in the planning stage.
The main purpose of this rule is to extract the (Range or In) expressions
that can be combined from each OR clause.
E.g:
Origin expr: (1<a<3 and b in ('a') ) or (2<a<4 and b in ('b'))
Rewritten expr: (1<a<4 ) and (b in ('a', 'b')) and ((1<a<3 and b in ('a') ) or (2<a<4 and b in ('b')))
Although the range of the wide common factors is larger than the real range,
the wide common factors only involve a single column, so they can be pushed down to the scan node,
thereby reducing the amount of scanned data in advance and improving query speed.
It should be noted that this optimization strategy does not suit all scenarios.
When the filter rate of the wide common factors is too low,
the query will spend extra time calculating the wide common factors.
So this strategy can be switched via the session variable 'extract_wide_range_expr'.
It is enabled by default, which means the strategy takes effect.
If you encounter an unsatisfactory filtering rate, you can set the variable to false
to turn the strategy off, as in the sketch below.
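A minimal sketch of toggling the rule off (the query mirrors the origin expr above; the table name is illustrative):
```
set extract_wide_range_expr = false;
select * from t where (a > 1 and a < 3 and b in ('a')) or (a > 2 and a < 4 and b in ('b'));
```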
Fixes #6082
* Organize FE configuration file description
* Delete redundant numbers
* Add two configuration parameters of spring boot upload file
* Add configuration instructions
* Fix typos
* Add English documentation of BE configuration
* Modify style
* Modify punctuation
* Correct the errors in the text
* Modify some ads and content issues