* [Bug][RoutineLoad] Fix bug that routine load thread on BE may be blocked
This bug causes the routine load job to throw a TOO MANY TASK error, after which the routine
load job is blocked.
* fix ut
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
1. Be compatible with the response body of GetLoadInfoAction in httpv1.
2. Do not drop partitions by force in the dynamic partition scheduler.
Change-Id: I50864ddadf1a1c25efa16a465940a1129f937d3d
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
The log4j-config.xml is generated at FE startup and also when the FE config is modified.
But in some deployment environments, such as k8s, the conf dir is not writable.
So change the dir of log4j-config.xml to Config.custom_conf_dir.
Also fix some small bugs:
1. Typo "less then" -> "less than"
2. Duplicated `exec_mem_limit` shown in SHOW ROUTINE LOAD
3. Allow MAXVALUE in single partition column table.
4. Add IP info for "intolerate index channel failure" msg.
Change-Id: Ib4e1182084219c41eae44d3a28110c0315fdbd7d
Co-authored-by: chenmingyu <chenmingyu@baidu.com>
The current JoinReorder algorithm mainly sorts tables according to the star model,
and only considers the join relationships between tables.
Its problems are:
1. It only applies to data modeled as a star schema; data in other models cannot be sorted.
2. It ignores the cost of each table, so it cannot determine the relative sizes of the joined tables,
and its real query-optimization ability is weak.
3. Sorting cannot avoid potentially expensive joins such as cross joins.
The new JoinReorder algorithm introduces a new sorting algorithm for joins,
and the ranking is backed by a cost evaluation model added to Doris.
The sorting algorithm is based on the following three principles:
1. The order is: largest node, then smallest node, ..., ending with the second largest node.
2. An inner join is preferred over a cross join.
3. The right children of outer joins, semi joins, and anti joins are not moved.
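The three principles above can be sketched as a greedy ordering. This is only an illustrative sketch under assumed inputs, not Doris's actual planner code: each candidate is a (name, cardinality) pair, and `edges` marks pairs that have a real join predicate, so a missing edge would force a cross join.

```python
# Hypothetical sketch of the ordering principles above, not Doris's
# actual implementation.

def reorder(nodes, edges):
    """Greedy order: largest node first, then the smallest joinable node."""
    remaining = dict(nodes)
    # Principle 1: start from the node with the largest cardinality.
    current = max(remaining, key=remaining.get)
    order = [current]
    joined = {current}
    del remaining[current]
    while remaining:
        def key(n):
            # Principle 2: prefer inner joins (an edge to the joined set)
            # over cross joins, then prefer the smaller cardinality.
            has_edge = any((n, j) in edges or (j, n) in edges for j in joined)
            return (not has_edge, remaining[n])
        nxt = min(remaining, key=key)
        order.append(nxt)
        joined.add(nxt)
        del remaining[nxt]
    return order

nodes = {"fact": 1_000_000, "dim_a": 100, "dim_b": 5_000}
edges = {("fact", "dim_a"), ("fact", "dim_b")}
print(reorder(nodes, edges))  # ['fact', 'dim_a', 'dim_b']
```

Note that starting from the largest node and always picking the smallest remaining candidate naturally leaves the second largest node for last, matching principle 1.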
PlanNode's cost model evaluation mainly relies on two values: cardinality and selectivity.
cardinality: the estimated number of rows a node produces.
selectivity: a value between 0 and 1; a predicate generally has a selectivity.
The cost model generally calculates the final cardinality of a PlanNode from the pre-calculated
cardinality of its input PlanNode and the selectivity of the predicates attached to it.
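The estimate described above amounts to multiplying an input cardinality by each predicate's selectivity. A minimal sketch, assuming this simple multiplicative formula (the real code may clamp and adjust further):

```python
# Illustrative sketch of the cardinality estimate described above
# (assumed formula, not Doris's exact code).

def estimate_cardinality(input_cardinality, selectivities):
    card = float(input_cardinality)
    for s in selectivities:          # each selectivity is in [0, 1]
        card *= s
    return max(1, int(card))         # never estimate below one row

# A scan of 1,000,000 rows with two predicates of selectivity 0.1 and 0.5:
print(estimate_cardinality(1_000_000, [0.1, 0.5]))  # 50000
```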
Currently, you can configure "enable_cost_based_join_reorder" to enable or disable the new JoinReorder.
When the switch is on, the new sorting algorithm takes effect; when it is off,
the old sorting algorithm takes effect. It is off by default.
The new sorting algorithm currently has no cost-based evaluation for external tables (odbc, es)
or set operations (intersect, except). When using such queries, enabling cost-based join reorder is not recommended.
At the code architecture level:
1. The new sorting algorithm runs in the single-node execution planning stage.
2. The init and finalize phases of PlanNode are refactored to ensure that PlanNode planning
and cost evaluation are complete before the sorting algorithm runs.
This is part of the array type support and is not yet fully complete.
The following functions are implemented:
1. FE array type support and implementation of array functions, including array syntax analysis and planning
2. Importing array type data through insert into
3. Selecting array type data
4. The array type is only supported in the value columns of duplicate tables
This PR merges some code from #4655, #4650, #4644, #4643, #4623 and #2979.
- `RuntimeFilterGenerator` generates runtime filters and assigns them to the nodes in the query plan that use them.
- `RuntimeFilter` represents a filter in the query plan, including its specific properties and how its expr is bound to a tuple slot.
- `RuntimeFilterTarget` describes the filter information provided to the ScanNode, including the target expr, whether to merge, etc.
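The idea the classes above implement can be sketched in a few lines. The names and flow here are illustrative, not the actual Doris API: the join build side produces a filter on the join key, which the scan of the probe side applies to skip non-matching rows early.

```python
# Minimal, hypothetical sketch of the runtime-filter flow.

def build_filter(build_rows, key):
    """Collect the distinct join-key values seen on the join build side."""
    return {row[key] for row in build_rows}

def scan_with_filter(rows, key, runtime_filter):
    """Apply the runtime filter while scanning the probe side."""
    return [row for row in rows if row[key] in runtime_filter]

build = [{"id": 1}, {"id": 3}]
probe = [{"id": i, "v": i * 10} for i in range(5)]
rf = build_filter(build, "id")
print(scan_with_filter(probe, "id", rf))  # only rows with id 1 or 3 survive
```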
* [Bug] Filter out unavailable backends when getting scan range locations
In the previous implementation, non-surviving BEs were eliminated in the Coordinator phase.
But the Spark and Flink Connectors have no such logic, so when a BE node is down,
queries through the Connector fail with errors.
* fix ut
* fix compile
1. Only the Master FE has this info. Also catch more exceptions in the dynamic partition scheduler.
2. Forward the admin show frontend config stmt to the Master if forward_to_master=true.
## Proposed changes
When a delete error occurs, the error message is ambiguous.
```sql
mysql> DELETE FROM nebula_trade_health_trade PARTITION q3_2021 WHERE event_day = '20210706';
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas: 4718319=7345841
```
We do not know the meaning of `4718319=7345841`.
Actually, the former is the `BackendId` and the latter is the `TabletId`.
I'll add a hint here to help locate the problem quickly. The error message will be
```sql
ERROR 1064 (HY000): errCode = 2, detailMessage = failed to execute delete. transaction id 7215554, timeout(ms) 160000, unfinished replicas [BackendId=TabletId]: 4718319=7345841
```
Fixes issue #5995
Add the property "dynamic_partition.history_partition_num" to specify the number of history partitions created when create_history_partition is enabled, fixing the invalid date format value,
and add these two properties to the docs.
Previously, Doris aligned memory to at most 8 bytes,
so types wider than 8 bytes, such as Date and Datetime, were not aligned.
This PR mainly relaxes the maximum 8-byte limitation.
Also, because the data type Decimal v1 no longer exists,
the logic for the 40-byte Decimal v1 is discarded as well.
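Why the 8-byte cap matters can be seen from how aligned field offsets are computed. This helper is illustrative (assumed sizes, not Doris's real type layout): each field is placed at the next offset that is a multiple of its alignment, so a 16-byte type needs an alignment larger than the old 8-byte cap to be placed correctly.

```python
# Illustrative offset computation for the alignment discussion above.

def layout(fields):
    """fields: list of (name, size, alignment); returns name -> offset."""
    offsets, offset = {}, 0
    for name, size, align in fields:
        offset = (offset + align - 1) // align * align  # round up to alignment
        offsets[name] = offset
        offset += size
    return offsets

# A 16-byte type aligned to 16 must skip ahead after an 8-byte field:
print(layout([("a", 8, 8), ("b", 16, 16)]))  # {'a': 0, 'b': 16}
```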
In a previous version, we changed the db-level lock to a table-level lock
to reduce lock contention. But this change can cause metadata replay problems,
because most operations on a table only acquire the table lock, and these operations
may run concurrently with a drop-table operation.
The metadata operation sequence can then become inconsistent with the log replay sequence,
which may cause problems.
This PR mainly adds a rewrite rule 'ExtractCommonFactorsRule'
used to extract wide common factors from an 'Expr' in the planning stage.
The main purpose of this rule is to extract (range or IN) expressions
that can be combined from each OR clause.
E.g:
Origin expr: (1<a<3 and b in ('a') ) or (2<a<4 and b in ('b'))
Rewritten expr: (1<a<4 ) and (b in ('a', 'b')) and ((1<a<3 and b in ('a') ) or (2<a<4 and b in ('b')))
Although the range of the wide common factors is larger than the real range,
the wide common factors only involve a single column, so it can be pushed down to the scan node,
thereby reducing the amount of scanned data in advance and improving the query speed.
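The two combinations in the example above can be sketched directly: the union of the per-branch ranges and the union of the per-branch IN lists are each single-column predicates implied by the whole OR. The helpers below are illustrative only; the real rule lives in Doris's ExtractCommonFactorsRule.

```python
# Sketch of extracting wide common factors from OR branches.

def wide_range_factor(ranges):
    """ranges: list of (low, high) open intervals on the same column.
    The union is the widest enclosing interval."""
    return (min(lo for lo, _ in ranges), max(hi for _, hi in ranges))

def wide_in_factor(or_branches):
    """or_branches: list of IN-value sets for the same column."""
    union = set()
    for values in or_branches:
        union |= values
    return union

# (1<a<3) or (2<a<4)            ->  1<a<4 pushed down to the scan
print(wide_range_factor([(1, 3), (2, 4)]))        # (1, 4)
# (b in ('a')) or (b in ('b'))  ->  b in ('a', 'b') pushed down to the scan
print(sorted(wide_in_factor([{"a"}, {"b"}])))     # ['a', 'b']
```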
It should be noted that this optimization is not beneficial in all scenarios.
When the filter rate of the wide common factor is too low,
the query spends extra time computing the wide common factors.
So this strategy can be toggled with the session variable 'extract_wide_range_expr'.
It is enabled by default, which means the strategy takes effect.
If you encounter an unsatisfactory filtering rate, set the variable to false
to turn the strategy off.
Fixed #6082
* [Enhance] Support show unrecoverable tablets
Unrecoverable tablets are tablets none of whose replicas are healthy.
We should be able to find these tablets for manual intervention,
and they should not be added to the tablet scheduler.
* [Bug-fix] Fix wrong data distribution judgment
The Fragment where OlapScanNode is located has three data distribution possibilities.
1. UNPARTITIONED: The scan range of OlapScanNode contains only one instance(BE)
2. RANDOM: OlapScanNode involves a multi-partition table.
3. HASH_PARTITIONED: The involved table is in a colocate group.
For a multi-partition table, although the data in each individual partition is distributed according to the bucketing column,
the same bucket of different partitions is not necessarily on the same BE,
so the data distribution is RANDOM.
If Doris wrongly plans RANDOM as HASH_PARTITIONED, it produces a wrong colocate agg node,
and the query result is incorrect.
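The three cases above can be summarized as a small decision function. This is a simplified, hypothetical sketch of the judgment, not the actual FE planner code:

```python
# Hypothetical sketch of the data-distribution decision described above.

def scan_distribution(num_instances, is_colocate):
    if num_instances == 1:
        return "UNPARTITIONED"       # one instance (BE) holds the whole scan range
    if is_colocate:
        return "HASH_PARTITIONED"    # a colocate group pins buckets to fixed BEs
    # Otherwise the same bucket of different partitions may land on
    # different BEs, so only RANDOM is safe.
    return "RANDOM"

print(scan_distribution(1, False))   # UNPARTITIONED
print(scan_distribution(3, True))    # HASH_PARTITIONED
print(scan_distribution(3, False))   # RANDOM
```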
QueryDetail is used to record the details of the current query.
This attribute is only set when the query starts to execute,
so using it in the planning stage of the first query causes a 'NullPointerException'.
After that, the attribute retains the value of the previous query
until it is updated by the subsequent process.
Because the 'colocate agg' code uses this attribute incorrectly during planning,
a 'NullPointerException' is thrown when clients like pymysql connect to Doris and send their first query.
Fixed #6017