* [Bug] Filter out unavailable backends when getting scan range locations
In the previous implementation, non-surviving BEs were eliminated in the Coordinator phase.
But the Spark and Flink Connectors have no such logic, so when a BE node is down,
queries issued through the Connector would fail.
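For illustration, a minimal sketch of the intended behavior, with hypothetical names and types (the real change lives in the FE's scan range location code): replicas on dead backends are dropped before locations are handed out, so a connector never receives the address of a dead BE.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class ScanRangeLocationFilter {
    record Replica(long backendId, long tabletId) {}
    record Backend(long id, boolean alive) {}

    // Keep only replicas whose backend is currently alive.
    static List<Replica> pickAvailableReplicas(List<Replica> replicas, Map<Long, Backend> backends) {
        List<Replica> available = new ArrayList<>();
        for (Replica replica : replicas) {
            Backend be = backends.get(replica.backendId());
            if (be != null && be.alive()) {
                available.add(replica);
            }
        }
        return available;
    }
}
```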
* fix ut
* fix compile
1. Only the Master FE has this info. Also catch more exceptions from the dynamic partition scheduler.
2. Forward the `ADMIN SHOW FRONTEND CONFIG` stmt to the Master if `forward_to_master=true` is set (a hedged sketch follows).
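A sketch of how the forwarding decision might look inside the statement class, assuming Doris FE's `RedirectStatus` convention; the accessor name is this sketch's guess, not verified against the actual patch:
```java
// Assumed names: RedirectStatus/ConnectContext follow the FE convention;
// getForwardToMaster() is a guess at the session-variable accessor.
@Override
public RedirectStatus getRedirectStatus() {
    if (ConnectContext.get().getSessionVariable().getForwardToMaster()) {
        return RedirectStatus.FORWARD_NO_SYNC; // run on Master, which has the info
    }
    return RedirectStatus.NO_FORWARD;          // answer locally with this FE's config
}
```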
## Proposed changes
When a delete error occurs, the error message is ambiguous.
```sql
mysql> DELETE FROM nebula_trade_health_trade PARTITION q3_2021 WHERE event_day = '20210706';
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas: 4718319=7345841
```
We do not know what `4718319=7345841` means.
Actually, the former is the `BackendId` and the latter is the `TabletId`.
I'll add a hint here to help locate the problem quickly. The error message will become:
```sql
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas [BackendId=TabletId]: 4718319=7345841
```
Refactor the runtime filter bloom filter and eliminate some virtual function calls, which yields a performance improvement of about 5%.
Import the block bloom filter; the AVX version obtains about a 40% performance improvement.
Before: bloom filter size: default, about 20 million items cost about 1s400ms.
After: bloom filter size: 524288, about 20 million items cost about 400ms.
Fix issue #5995.
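For context, a minimal Java sketch of the split-block ("register-blocked") bloom filter idea; the salt constants follow the Impala/Parquet design, and this is not Doris's actual C++ BE code. Each key hashes to one 32-byte block and sets one bit per 32-bit lane, so a lookup touches a single cache line and the eight lanes vectorize naturally under AVX2:
```java
public final class BlockBloomFilter {
    // Eight odd salt constants from the standard split-block bloom filter design.
    private static final int[] SALT = {
            0x47b6137b, 0x44974d91, 0x8824ad5b, 0xa2b7289d,
            0x705495c7, 0x2df1424b, 0x9efc4947, 0x5c6bfb31};
    private final int[] blocks;   // each block = 8 ints = 32 bytes
    private final int numBlocks;  // must be a power of two

    public BlockBloomFilter(int numBlocksPow2) {
        this.numBlocks = numBlocksPow2;
        this.blocks = new int[numBlocksPow2 * 8];
    }

    public void insert(long hash) {
        int base = chooseBlock(hash);
        int key = (int) hash;
        for (int i = 0; i < 8; i++) {
            // Each lane sets exactly one bit; with AVX2 all eight lanes run at once.
            blocks[base + i] |= 1 << ((key * SALT[i]) >>> 27);
        }
    }

    public boolean find(long hash) {
        int base = chooseBlock(hash);
        int key = (int) hash;
        for (int i = 0; i < 8; i++) {
            if ((blocks[base + i] & (1 << ((key * SALT[i]) >>> 27))) == 0) {
                return false;
            }
        }
        return true;
    }

    private int chooseBlock(long hash) {
        // Use the high 32 bits to pick the block, keeping them independent of the lane bits.
        return (int) ((hash >>> 32) & (numBlocks - 1)) * 8;
    }
}
```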
Add the property `dynamic_partition.history_partition_num` to specify the number of history partitions to create when `dynamic_partition.create_history_partition` is enabled, fixing the invalid date format value,
and add these two properties to the docs.
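A minimal sketch of the semantics as I read them; the names and the cap are illustrative, and the assumption that an unbounded default `start` is what produced the invalid dates is this sketch's, not stated in the patch:
```java
final class HistoryPartitionWindow {
    static final int MAX_HISTORY_NUM = 100; // illustrative cap, not a real Doris default

    // Offset (in partition units, e.g. days) of the first history partition to create.
    // `start` is assumed to default to Integer.MIN_VALUE, which would make the date
    // arithmetic underflow into an invalid date format value.
    static int firstHistoryOffset(boolean createHistoryPartition, int start, int historyPartitionNum) {
        if (!createHistoryPartition) {
            return 0; // no history partitions; start from the current period
        }
        if (historyPartitionNum > 0) {
            return -historyPartitionNum; // explicit dynamic_partition.history_partition_num wins
        }
        return Math.max(start, -MAX_HISTORY_NUM); // clamp the unbounded default
    }
}
```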
Previously, Doris aligned memory to at most 8 bytes.
Types wider than 8 bytes, such as Date and Datetime, were not aligned.
This PR mainly relaxes the maximum 8-byte limitation.
Also, because the data type Decimal v1 no longer exists,
the logic for the 40-byte Decimal v1 is discarded as well.
Provides basic property-related classes that support create, query, read, write, etc.
Currently, Doris FE mostly uses `if` statements to check properties in SQL, which leads to a lot of redundant code.
The `PropertiesSet` class can be used in the analysis phase of a `Statement`. The validity and correctness of the input properties are verified automatically, which simplifies the code and improves its readability.
Usage:
1. Create a custom class that implements the `PropertySchema.SchemaGroup` interface.
2. Define the properties to be used. If a property is required, there is no need to set a default value.
3. According to the requirements, call `readFromStrMap` and the other helper functions in your logic to check and obtain the parameters.
Demo:
Class definition
```java
public class FileFormat implements PropertySchema.SchemaGroup {
    public static final PropertySchema<FileFormat.Type> FILE_FORMAT_TYPE =
            new PropertySchema.EnumProperty<>("type", FileFormat.Type.class).setDefaultValue(FileFormat.Type.CSV);
    public static final PropertySchema<String> RECORD_DELIMITER =
            new PropertySchema.StringProperty("record_delimiter").setDefaultValue("\n");
    public static final PropertySchema<String> FIELD_DELIMITER =
            new PropertySchema.StringProperty("field_delimiter").setDefaultValue("|");
    public static final PropertySchema<Integer> SKIP_HEADER =
            new PropertySchema.IntProperty("skip_header", true).setMin(0).setDefaultValue(0);

    private static final FileFormat INSTANCE = new FileFormat();

    private ImmutableMap<String, PropertySchema> schemas = PropertySchema.createSchemas(
            FILE_FORMAT_TYPE,
            RECORD_DELIMITER,
            FIELD_DELIMITER,
            SKIP_HEADER);

    public ImmutableMap<String, PropertySchema> getSchemas() {
        return schemas;
    }

    public static FileFormat get() {
        return INSTANCE;
    }
}
```
Usage
```java
public class CreateXXXStmt extends DdlStmt {
    private PropertiesSet<FileFormat> analyzedFileFormat = PropertiesSet.empty(FileFormat.get());
    private final Map<String, String> fileFormatOptions;
    ...

    public void analyze(Analyzer analyzer) throws UserException {
        ...
        if (fileFormatOptions != null) {
            try {
                analyzedFileFormat = PropertiesSet.readFromStrMap(FileFormat.get(), fileFormatOptions);
            } catch (IllegalArgumentException e) {
                ...
            }
        }
        // 1. Get a property value
        String recordDelimiter = analyzedFileFormat.get(FileFormat.RECORD_DELIMITER);
        // 2. Check the validity of the input keys
        PropertiesSet.verifyKey(FileFormat.get(), fileFormatOptions);
        ...
    }
}
```
1. Support in / bloom filter / min-max runtime filters (a minimal min-max sketch follows this list).
2. Support broadcast / shuffle / bucket shuffle / colocate joins.
3. Optimize memory use and CPU cache misses while building runtime filters.
4. Optimize memory use in left semi join (works well on TPC-DS q95).
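As referenced in item 1, a minimal sketch of the min-max flavor under assumed names (a bare long join key, no Doris types): the build side of the join collects the key's min/max, and the scan side uses them to discard non-matching rows before the probe:
```java
final class MinMaxRuntimeFilter {
    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;

    // Build side: called once per build-row join key.
    void insert(long key) {
        min = Math.min(min, key);
        max = Math.max(max, key);
    }

    // Probe/scan side: rows outside [min, max] cannot match any build row.
    boolean mightMatch(long key) {
        return key >= min && key <= max;
    }
}
```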
Doris BE development and debugging environment setup.
Add installation instructions for Ubuntu, including dependency installation.
When compiling on an Ubuntu 20.04 physical machine, actual testing shows these dependencies need to be installed:
`autoconf automake libtool autopoint`
In a previous version, we changed the db-level lock to a table-level lock
to reduce lock contention. But this change causes some metadata replay problems.
Because most operations on a table acquire only table locks, these operations
may occur concurrently with dropping the table.
This can make the metadata operation sequence inconsistent with the log replay sequence,
which may cause problems.
This PR mainly adds a rewrite rule, 'ExtractCommonFactorsRule',
used to extract wide common factors from an 'Expr' in the planning stage.
The main purpose of this rule is to extract the (Range or In) expressions
that can be combined from each OR clause.
E.g.:
Original expr: (1<a<3 and b in ('a')) or (2<a<4 and b in ('b'))
Rewritten expr: (1<a<4) and (b in ('a', 'b')) and ((1<a<3 and b in ('a')) or (2<a<4 and b in ('b')))
Although the range of a wide common factor is larger than the real range,
it only involves a single column, so it can be pushed down to the scan node,
thereby reducing the amount of scanned data in advance and improving query speed.
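An illustrative sketch of the widening step for one numeric column (not the planner's actual 'Expr' machinery): union the per-branch ranges, yielding the single-column predicate that is safe to AND in and push down to the scan node:
```java
import java.util.List;

final class WideRange {
    final double low;
    final double high;

    WideRange(double low, double high) {
        this.low = low;
        this.high = high;
    }

    // (1 < a < 3) OR (2 < a < 4)  =>  1 < a < 4:
    // the union over-approximates every branch, so AND-ing it in is always safe.
    static WideRange union(List<WideRange> branchRanges) {
        double low = Double.POSITIVE_INFINITY;
        double high = Double.NEGATIVE_INFINITY;
        for (WideRange r : branchRanges) {
            low = Math.min(low, r.low);
            high = Math.max(high, r.high);
        }
        return new WideRange(low, high);
    }
}
```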
Note that this optimization strategy does not suit all scenarios.
When the filter rate of the wide common factor is too low,
the query spends extra time computing the wide common factors.
So the strategy can be toggled via the session variable 'extract_wide_range_expr'.
It is enabled by default, which means the strategy takes effect.
If you encounter an unsatisfactory filter rate, you can set the variable to false
to turn the strategy off.
Fixed #6082