* Revert "[Optimize] Put _Tuple_ptrs into mempool when RowBatch is initialized (#6036)"
This reverts commit f254870aeb18752a786586ef5d7ccf952b97f895.
* [BUG][Timeout][QueryLeak] Fix memory not being released in time; fix core dump in BloomFilter
* [Bug][RoutineLoad] Fix bug that the routine load thread on BE may be blocked
This bug causes the routine load job to throw a TOO MANY TASK error, leaving
the job blocked.
* fix ut
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
1. To be compatible with the response body of GetLoadInfoAction in httpv1.
2. Do not drop partitions by force in the dynamic partition scheduler.
Change-Id: I50864ddadf1a1c25efa16a465940a1129f937d3d
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
The log4j-config.xml is generated at FE startup and also when the FE config is modified.
But in some deployment environments, such as k8s, the conf dir is not writable.
So change the dir of log4j-config.xml to Config.custom_conf_dir.
Also fix some small bugs:
1. Typo "less then" -> "less than"
2. Duplicated `exec_mem_limit` shown in SHOW ROUTINE LOAD
3. Allow MAXVALUE in tables with a single partition column.
4. Add IP info to the "intolerate index channel failure" msg.
Change-Id: Ib4e1182084219c41eae44d3a28110c0315fdbd7d
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
The current JoinReorder algorithm sorts mainly according to the star model,
and only considers the query association relationships between tables.
The problems are as follows:
1. It only applies to user data whose data model is a star model; data in other models cannot be sorted.
2. It ignores the cost of each table, so it cannot determine the relative sizes of the join inputs, and its real query optimization ability is weak.
3. It cannot avoid potentially time-consuming joins, such as cross joins, through sorting.
The new JoinReorder algorithm introduces a new sorting algorithm for joins,
which brings a cost evaluation model to Doris.
The sorting algorithm is based on the following three principles (a minimal sketch of the first follows the list):
1. The order is: largest node, smallest node, ..., second largest node
2. An inner join is preferred over a cross join
3. The right children of outer joins, semi joins, and anti joins are not moved
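To make the first principle concrete, here is a minimal, hypothetical sketch of the ordering rule by itself (assumed names; not the actual Doris planner code): the largest input goes first, then the rest from smallest to largest, so the second-largest input ends up last.
```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class JoinOrderSketch {
    // Order join inputs by cardinality: largest first, then ascending.
    static List<Long> order(List<Long> cardinalities) {
        List<Long> sorted = new ArrayList<>(cardinalities);
        sorted.sort(Comparator.reverseOrder());   // largest ... smallest
        List<Long> result = new ArrayList<>();
        result.add(sorted.remove(0));             // largest node first
        sorted.sort(Comparator.naturalOrder());   // smallest ... second largest
        result.addAll(sorted);
        return result;
    }

    public static void main(String[] args) {
        // e.g. inputs of 1e6, 1e3, 5e5, 10 rows -> [1000000, 10, 1000, 500000]
        System.out.println(order(List.of(1_000_000L, 1_000L, 500_000L, 10L)));
    }
}
```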
PlanNode's cost model evaluation relies mainly on two values: cardinality and selectivity.
cardinality: the estimated number of rows a PlanNode produces.
selectivity: a value between 0 and 1; each predicate generally has a selectivity.
The cost model calculates the final cardinality of a PlanNode from the pre-calculated
cardinality of the PlanNode and the selectivity of the predicates that apply to it.
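As a rough illustration of this rule (names are assumptions, not the actual cost model code), the final cardinality can be sketched as the input cardinality scaled by each predicate's selectivity:
```java
public class CardinalitySketch {
    // Each predicate keeps a 0..1 fraction of the input rows.
    static long estimate(long inputCardinality, double[] predicateSelectivities) {
        double card = inputCardinality;
        for (double selectivity : predicateSelectivities) {
            card *= selectivity;
        }
        return Math.round(card);
    }

    public static void main(String[] args) {
        // e.g. 1,000,000 input rows with predicates of selectivity 0.1 and 0.5
        System.out.println(estimate(1_000_000L, new double[] {0.1, 0.5})); // 50000
    }
}
```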
Currently, you can use "enable_cost_based_join_reorder" to turn JoinReorder on or off.
When it is turned on, the new sorting algorithm takes effect; when it is turned off,
the old sorting algorithm takes effect. It is turned off by default.
The new sorting algorithm currently has no cost-based evaluation for external tables (odbc, es)
or set operations (intersect, except), so enabling cost-based join reorder is not recommended for such queries.
At the code architecture level:
1. The new sorting algorithm runs in the single-node execution planning stage.
2. The init and finalize phases of PlanNode were refactored to ensure that PlanNode planning
and cost evaluation are completed before the sorting algorithm runs.
This is part of the array type support and is not yet fully complete.
The following functions are implemented:
1. FE array type support and implementation of array functions, including array syntax analysis and planning
2. Support importing array type data through INSERT INTO
3. Support selecting array type data
4. The array type is only supported on value columns of duplicate tables
This PR merges some code from #4655, #4650, #4644, #4643, #4623, #2979
- `RuntimeFilterGenerator` generates runtime filters and assigns them to the nodes in the query plan that use them.
- `RuntimeFilter` represents a filter in the query plan, including the filter's specific properties, how its expr is bound to a tuple slot, etc.
- `RuntimeFilterTarget` describes the filter information provided to a ScanNode, including the target expr, whether merging is needed, etc. (a simplified sketch of how these classes relate follows)
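As a rough illustration only, here is a simplified, hypothetical sketch of how these three classes might relate; the field names and structure are assumptions, not the actual Doris FE API.
```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical structure; only the three class names come from the text above.
class RuntimeFilterTarget {
    String scanNodeId;   // the ScanNode that will consume the filter
    String targetExpr;   // probe-side expr the filter is bound to
    boolean needMerge;   // whether partial filters from all instances must be merged first
}

class RuntimeFilter {
    String id;
    String srcExpr;                                         // build-side expr on the join node
    List<RuntimeFilterTarget> targets = new ArrayList<>();  // where the filter is applied
}

class RuntimeFilterGenerator {
    // Walks the query plan: for each eligible join, builds a RuntimeFilter from
    // the build side and assigns it to the scan nodes that can consume it.
    List<RuntimeFilter> generated = new ArrayList<>();
}
```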
* [Bug] Filter out unavailable backends when getting scan range locations
In the previous implementation, unavailable BEs were eliminated in the Coordinator phase.
But the Spark and Flink Connectors have no such logic, so when a BE node was down,
queries issued through the Connector would fail.
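A minimal sketch of the idea behind the fix, assuming a hypothetical Backend type with an isAlive() check (not the exact Doris code): drop backends that are not alive before assigning scan range locations, so connectors never receive a down BE.
```java
import java.util.List;
import java.util.stream.Collectors;

public class ScanRangeLocationSketch {
    interface Backend {
        boolean isAlive();
    }

    static List<Backend> selectCandidates(List<Backend> all) {
        return all.stream()
                .filter(Backend::isAlive)   // skip down BEs
                .collect(Collectors.toList());
    }
}
```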
* fix ut
* fix compile
1.
Only the Master FE has this info.
Also catch more exceptions in the dynamic partition scheduler.
2.
Forward the admin show frontend config stmt to Master if forward_to_master is set to true
## Proposed changes
When a delete error occurs, the error message is ambiguous.
```sql
mysql> DELETE FROM nebula_trade_health_trade PARTITION q3_2021 WHERE event_day = '20210706';
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas: 4718319=7345841
```
We do not know the meaning of `4718319=7345841`.
Actually, the former is the `BackendId` and the latter is the `TabletId`.
This change adds a hint to help locate the problem quickly. The error message becomes
```sql
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas [BackendId=TabletId]: 4718319=7345841
```
Refactor the runtime filter BloomFilter and eliminate some virtual function calls, which yields a performance improvement of about 5%.
Import a block Bloom filter; the AVX version obtains a 40% performance improvement.
before: BloomFilter size: default, about 20 million items cost about 1s400ms
after: BloomFilter size: 524288, about 20 million items cost about 400ms
Fixes issue #5995
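For illustration, here is a minimal Java sketch of the block Bloom filter idea (assumed names and parameters; the Doris BE implementation is C++ with AVX): the bit array is split into 32-byte blocks, and each insert or lookup touches exactly one block, so a probe costs a single cache line.
```java
public class BlockBloomFilterSketch {
    private static final int WORDS_PER_BLOCK = 4;  // 4 * 64 bits = one 32-byte block
    private final long[] words;
    private final int blockMask;                   // numBlocks must be a power of two

    public BlockBloomFilterSketch(int numBlocks) {
        if (Integer.bitCount(numBlocks) != 1) {
            throw new IllegalArgumentException("numBlocks must be a power of two");
        }
        this.words = new long[numBlocks * WORDS_PER_BLOCK];
        this.blockMask = numBlocks - 1;
    }

    public void insert(long hash) {
        int block = (int) (hash >>> 32) & blockMask;        // high bits pick the block
        for (int i = 0; i < WORDS_PER_BLOCK; i++) {
            int bit = (int) (hash >>> (6 * i)) & 63;        // low bits pick one bit per word
            words[block * WORDS_PER_BLOCK + i] |= 1L << bit;
        }
    }

    public boolean mayContain(long hash) {
        int block = (int) (hash >>> 32) & blockMask;
        for (int i = 0; i < WORDS_PER_BLOCK; i++) {
            int bit = (int) (hash >>> (6 * i)) & 63;
            if ((words[block * WORDS_PER_BLOCK + i] & (1L << bit)) == 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // 1 << 14 blocks * 32 bytes = 524288 bytes, the size quoted above
        BlockBloomFilterSketch f = new BlockBloomFilterSketch(1 << 14);
        f.insert(0x9E3779B97F4A7C15L);
        System.out.println(f.mayContain(0x9E3779B97F4A7C15L)); // true
    }
}
```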
Add the property "dynamic_partition.history_partition_num" to specify the number of history partitions to create when create_history_partition is enabled, fixing the invalid date format value,
and add these two properties to the docs.
Doris previously aligned memory to at most 8 bytes.
Types larger than 8 bytes, such as Date and Datetime, were therefore not aligned.
This PR mainly relaxes the maximum 8-byte limitation.
Also, because the Decimal v1 data type no longer exists,
the logic for the 40-byte Decimal v1 is discarded as well.
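As a small illustration (not the Doris allocator code), the usual align-up computation shows what relaxing the limit changes:
```java
public class AlignSketch {
    // Round an offset up to the next multiple of the type's alignment.
    static long alignUp(long offset, long alignment) {
        return (offset + alignment - 1) / alignment * alignment;
    }

    public static void main(String[] args) {
        System.out.println(alignUp(20, 8));   // 24: the old 8-byte cap
        System.out.println(alignUp(20, 16));  // 32: 16-byte types now align correctly
    }
}
```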
Provides basic property-related classes supporting create, query, read, write, etc.
Currently, Doris FE mostly uses `if` statements to check properties in SQL, which leaves a lot of redundancy in the code.
The `PropertiesSet` class can be used in the analysis phase of a `Statement`. The validation and correctness of the input properties are verified automatically, which simplifies the code and improves its readability.
Usage:
1. Create a custom class that implements the `PropertySchema.SchemaGroup` interface.
2. Define the properties to be used. For a required parameter, there is no need to set a default value.
3. In the relevant logic, call `readFromStrMap` and the other helper functions as required to check and obtain parameters.
Demo:
Class definition
```java
public class FileFormat implements PropertySchema.SchemaGroup {
    public static final PropertySchema<FileFormat.Type> FILE_FORMAT_TYPE =
            new PropertySchema.EnumProperty<>("type", FileFormat.Type.class).setDefauleValue(FileFormat.Type.CSV);
    public static final PropertySchema<String> RECORD_DELIMITER =
            new PropertySchema.StringProperty("record_delimiter").setDefauleValue("\n");
    public static final PropertySchema<String> FIELD_DELIMITER =
            new PropertySchema.StringProperty("field_delimiter").setDefauleValue("|");
    public static final PropertySchema<Integer> SKIP_HEADER =
            new PropertySchema.IntProperty("skip_header", true).setMin(0).setDefauleValue(0);

    private static final FileFormat INSTANCE = new FileFormat();

    private ImmutableMap<String, PropertySchema> schemas = PropertySchema.createSchemas(
            FILE_FORMAT_TYPE,
            RECORD_DELIMITER,
            FIELD_DELIMITER,
            SKIP_HEADER);

    public ImmutableMap<String, PropertySchema> getSchemas() {
        return schemas;
    }

    public static FileFormat get() {
        return INSTANCE;
    }
}
```
Usage
```java
public class CreateXXXStmt extends DdlStmt {
    private PropertiesSet<FileFormat> analyzedFileFormat = PropertiesSet.empty(FileFormat.get());
    private final Map<String, String> fileFormatOptions;
    ...

    public void analyze(Analyzer analyzer) throws UserException {
        ...
        if (fileFormatOptions != null) {
            try {
                analyzedFileFormat = PropertiesSet.readFromStrMap(FileFormat.get(), fileFormatOptions);
            } catch (IllegalArgumentException e) {
                ...
            }
        }
        // 1. Get property value
        String recordDelimiter = analyzedFileFormat.get(FileFormat.RECORD_DELIMITER);
        // 2. Check the validity of parameters
        PropertiesSet.verifyKey(FileFormat.get(), fileFormatOptions);
        ...
    }
}
```
1. Support in/bloomfilter/minmax filters (a sketch of the minmax variant follows this list)
2. Support broadcast/shuffle/bucket shuffle/colocate join
3. Optimize memory use and CPU cache misses while building the runtime filter
4. Optimize memory use in left semi join (works well on tpcds-95)
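As referenced in item 1, here is a minimal sketch of the min/max filter variant (hypothetical names; the Doris BE implementation is in C++): the build side tracks its min and max key, and the probe side discards rows outside that range.
```java
public class MinMaxFilterSketch {
    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;

    void insert(long buildKey) {        // called while building the hash table
        min = Math.min(min, buildKey);
        max = Math.max(max, buildKey);
    }

    boolean mayMatch(long probeKey) {   // pushed down to the probe-side scan
        return probeKey >= min && probeKey <= max;
    }
}
```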
Doris BE development and debugging environment setup.
Add installation steps for Ubuntu, including dependency installation.
Compiling on an Ubuntu 20.04 physical machine was actually tested and requires installing these dependencies:
autoconf automake libtool autopoint (e.g. `sudo apt-get install autoconf automake libtool autopoint`)
In a previous version, we changed the db-level lock to a table-level lock
to reduce lock contention. But this change causes some metadata replay problems.
Because most operations on a table acquire only table locks, these operations
may occur at the same time as a drop-table operation.
This can make the metadata operation sequence inconsistent with the log replay sequence,
which may cause problems.