This PR is the first step toward making Doris stream load more robust with higher concurrency
(#3368). The main work is to support transaction management with database-level isolation
and to use an ArrayDeque to store final-status transactions.
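A minimal sketch of the idea (class, field, and method names are hypothetical, not the actual Doris implementation): each database gets its own manager and lock, and transactions that reach a final status are appended to an ArrayDeque so the oldest ones can be evicted cheaply.
```
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one manager per database, so stream loads on different
// databases no longer contend on a single global transaction lock.
class DbTransactionMgrSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    // Running transactions of this database, keyed by txn id.
    private final Map<Long, String> runningTxns = new HashMap<>();
    // Final-status (VISIBLE/ABORTED) transactions; ArrayDeque gives O(1) append
    // and O(1) eviction from the head when the history grows too large.
    private final ArrayDeque<Long> finalStatusTxns = new ArrayDeque<>();
    private final int maxFinalTxns = 1000; // assumed limit

    void beginTxn(long txnId, String label) {
        lock.writeLock().lock();
        try {
            runningTxns.put(txnId, label);
        } finally {
            lock.writeLock().unlock();
        }
    }

    void finishTxn(long txnId) {
        lock.writeLock().lock();
        try {
            runningTxns.remove(txnId);
            finalStatusTxns.addLast(txnId);
            while (finalStatusTxns.size() > maxFinalTxns) {
                finalStatusTxns.pollFirst(); // evict the oldest finished txn
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```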
The callback added to the CallbackFactory should not be removed until the
transaction is aborted or visible. Otherwise, some callback methods may fail
to be called.
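A hedged illustration of this rule (interface and method names are hypothetical, not the real CallbackFactory API): the callback stays registered for the whole life of the transaction and is removed only in the terminal paths, so the visible/aborted handlers can still find it.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the callback lifecycle rule described above.
interface TxnCallbackSketch {
    void afterVisible(long txnId);
    void afterAborted(long txnId);
}

class CallbackFactorySketch {
    private final Map<Long, TxnCallbackSketch> callbacks = new ConcurrentHashMap<>();

    void register(long callbackId, TxnCallbackSketch cb) {
        callbacks.put(callbackId, cb);
    }

    // Wrong place to remove: the txn may still become visible or aborted later,
    // and its callback would then be missing.
    // void onJobSubmitted(long callbackId) { callbacks.remove(callbackId); }

    // Correct place: remove only when a terminal state is handled.
    void onTxnVisible(long callbackId, long txnId) {
        TxnCallbackSketch cb = callbacks.remove(callbackId);
        if (cb != null) {
            cb.afterVisible(txnId);
        }
    }

    void onTxnAborted(long callbackId, long txnId) {
        TxnCallbackSketch cb = callbacks.remove(callbackId);
        if (cb != null) {
            cb.afterAborted(txnId);
        }
    }
}
```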
For SQL like:
```
select * from
join1 left semi join join2
on join1.id = join2.id and join2.id > 1;
```
the predicate `join2.id > 1` cannot be pushed down to table join1.
Add a new FE config `drop_backend_after_decommission`. If this config
is false, the BE will not be dropped after the decommission operation finishes.
This new config tries to solve the problem described in issue #3460.
TODO:
This method will generate a lot of data migration, so it is only a temporary solution.
After that, we should try to solve the problem of data balancing within the BE.
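A rough sketch of how the flag might gate the final step of decommission (simplified and hypothetical; in the FE, config items are static fields of the Config class, but the method names below are made up):
```
// Hypothetical sketch: when all tablets have been migrated off the BE,
// only drop it if the config allows; otherwise keep it in the cluster.
class DecommissionSketch {
    static boolean dropBackendAfterDecommission = false; // mirrors the new FE config

    void onDecommissionFinished(long backendId) {
        if (dropBackendAfterDecommission) {
            dropBackend(backendId);          // old behavior: remove the BE
        } else {
            markDecommissionDone(backendId); // new behavior: keep the BE, see #3460
        }
    }

    void dropBackend(long backendId) { /* remove the BE from the cluster */ }

    void markDecommissionDone(long backendId) { /* leave the BE, just finish the job */ }
}
```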
This CL also adds documentation for the FE and BE configuration.
The documentation is incomplete and can be extended later.
When there is a subquery in the WHERE clause, the query will be rewritten into a join,
and some auxiliary binary predicates will be generated. These binary predicates
do not go through the ExprRewriteRule, so they are not normalized into the
"column on the left, constant on the right" form.
We need to take this case into account so that the `canPushDownPredicate()` check
does not throw an exception.
The current implementation is very simple and conservative, because our query planner is error-prone.
After we implement the new query planner, we could do this work with a `Predicate Equivalence Class` and a `PredicatePushDown` rule, like Presto.
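A hedged sketch of the defensive check (simplified, hypothetical classes; not the actual Doris FE code): instead of assuming the slot is always the left child, inspect both children and return quietly when the shape is unexpected.
```
// Hypothetical, simplified model of a binary predicate, for illustration only.
class Expr {}
class SlotRef extends Expr { String column; }
class Literal extends Expr { String value; }

class BinaryPredicate extends Expr {
    Expr left;
    Expr right;
}

class PushDownSketch {
    // Returns the slot of the predicate if it compares a slot with a constant,
    // regardless of which side the slot is on; otherwise returns null instead
    // of throwing, which is the behavior the fix needs.
    static SlotRef getBoundSlot(BinaryPredicate p) {
        if (p.left instanceof SlotRef && p.right instanceof Literal) {
            return (SlotRef) p.left;
        }
        if (p.right instanceof SlotRef && p.left instanceof Literal) {
            return (SlotRef) p.right; // non-normalized form: constant on the left
        }
        return null; // not a slot-vs-constant predicate: do not push down
    }
}
```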
* [Fix] Fix bug that rowset meta is deleted after compaction
After compaction, the tablet rowset meta is modified by
adding the new output rowsets and deleting the old input rowsets.
The output version may be equal to one of the input versions.
So we should delete the "input" versions from _rs_version_map
before adding the "output" version to _rs_version_map. Otherwise,
the new "output" version will be lost from _rs_version_map.
Fix #3403
The new version of Guava has moved `toStringHelper` from `Objects` to `MoreObjects`.
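For reference, the call now looks like this (a minimal example of the new Guava API, not code taken from this CL):
```
import com.google.common.base.MoreObjects;

class LoadJobExample {
    private long id = 1;
    private String label = "job1";

    @Override
    public String toString() {
        // Previously Objects.toStringHelper(this); in newer Guava it lives in MoreObjects.
        return MoreObjects.toStringHelper(this)
                .add("id", id)
                .add("label", label)
                .toString();
    }
}
```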
This CL has passed our test environment and appears to be running well.
Fix #3390
This CL adds more info to the `JobDetails` column of the `SHOW LOAD` result for Broker Load jobs.
For example:
```
{
    "Unfinished backends": {
        "9c3441027ff948a0-8287923329a2b6a7": [10002]
    },
    "All backends": {
        "9c3441027ff948a0-8287923329a2b6a7": [10002, 10004, 10006]
    },
    "ScannedRows": 2390016,
    "TaskNumber": 1,
    "FileNumber": 1,
    "FileSize": 1073741824
}
```
Two newly added keys:
`Unfinished backends` indicates the BEs on which tasks are not yet finished.
`All backends` indicates the BEs on which this job has tasks.
One more thing: I pass the Backend ID along with the heartbeat message from FE to BE, so that
each BE can know its own ID.
1. The correlated slot ref should be bound to the agg tuple of the outer query.
However, the correlated HAVING clause cannot be analyzed correctly, so the result is incorrect.
For example:
```
SELECT k1 FROM test GROUP BY k1
HAVING EXISTS(SELECT k1 FROM baseall GROUP BY k1 HAVING SUM(test.k1) = k1);
```
The correlated predicate is not executed.
2. The limit offset should also be rewritten when there is a subquery in the HAVING clause.
For example:
```
select k1, count(*) cnt from test group by k1 having k1 in
(select k1 from baseall order by k1 limit 2) order by k1 limit 5 offset 3;
```
The new stmt should have a limit element with an offset.
This CL adds a new command to set the replication number of a table in one step.
```
alter table test_tbl set ("replication_num" = "3");
```
It changes the replication num of an unpartitioned table,
and
```
alter table test_tbl set ("default.replication_num" = "3");
```
It changes the default replication num of the specified table.
If not reset, all queries coming from the same session will have the same isQuery field value.
This bug causes all entries in fe.audit.log to have the same IsQuery=true.
This CL also fixes another bug:
the resolved IPs of a user's domain should not appear in another user's whitelist. Fix #3380
This CL mainly solves the problem that when the `OlapTableSink`
object is recycled, the GC thread will not reclaim it immediately because the class overrides
the `finalize` method, which can cause OOM.
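A minimal illustration of why overriding finalize() delays reclamation (generic Java behavior, not the OlapTableSink code itself): the object first goes onto the finalizer queue and needs at least two GC cycles, so if sinks are produced faster than the single finalizer thread can drain them, they pile up and cause OOM. The fix is simply not to override finalize() and to release resources explicitly.
```
// Generic illustration, not Doris code.
class SinkWithFinalizer {
    private final byte[] buffer = new byte[1 << 20];

    @Override
    protected void finalize() {
        // Any non-trivial finalizer forces the object through the finalizer
        // queue, so it survives at least one extra GC cycle before reclamation.
    }
}

class SinkWithoutFinalizer implements AutoCloseable {
    private final byte[] buffer = new byte[1 << 20];

    @Override
    public void close() {
        // Release resources explicitly; the object itself can be reclaimed
        // by the very next GC once it becomes unreachable.
    }
}
```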
This CL mainly makes the following modifications:
1. Reorganized the SegmentV2 upgrade document.
2. When the variable `use_v2_rollup` is set to true, the base rollup in v2 format is forcibly queried, to verify the data.
3. Fixed a problem where the storage format information was not persisted for the schema change operation that performs the v2 conversion.
4. Allow users to directly create v2 format tables.
When adding a partition with the storage_cooldown_time property like this:
```
alter table tablexxx ADD PARTITION p20200421 VALUES LESS THAN("1588262400") ("storage_medium" = "SSD", "storage_cooldown_time" = "2020-05-01 00:00:00");
```
and then running `show partitions from tablexxx;`,
the CooldownTime is wrong: 2610-02-17 10:16:40. What is more, the storage migration is based on this wrong timestamp.
The reason is that the result of DateLiteral.getLongValue is not a timestamp.
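A worked guess at where the strange date comes from (assumed values, for illustration only): getLongValue() returns the packed digits yyyyMMddHHmmss rather than an epoch value, and treating that packed number as a millisecond timestamp lands around the year 2610.
```
import java.time.Instant;

public class CooldownTimeIllustration {
    public static void main(String[] args) {
        // "2020-05-01 00:00:00" packed as digits, not as an epoch value.
        long packed = 20200501000000L;
        // Interpreted as milliseconds since the epoch, this is roughly the year 2610,
        // which matches the wrong CooldownTime shown by SHOW PARTITIONS.
        System.out.println(Instant.ofEpochMilli(packed)); // ~2610-02-17
    }
}
```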
related issue: #3306
Note: this PR just removes es_scan_node_test.cpp, which is useless.
For the moment, this just adds a simple EXPLAIN syntax for EsTable without translating the native predicates to ES query DSL; that part is better finished together with moving the predicate translation from Doris BE to Doris FE. The whole work is still WIP.
This CL:
- Fixes the problem that FE cannot detect that the coordinator BE is down and cancel the txns, so the txns can never finish.
- Does some code style refactoring.
NOTICE: FE meta version is upgraded to 83.
Our current DELETE strategy reuses the LoadChecker framework.
LoadChecker runs jobs in different stages by polling them every 5 seconds.
There are four stages of a load job, Pending/ETL/Loading/Quorum_finish,
each of which is handled by its own LoadChecker. For example, if a load job is submitted,
it is initialized to the Pending state and then waits to be run by the Pending LoadChecker.
After the pending job has run, its stage changes to ETL, and it then waits to be
run by the next LoadChecker (ETL). Because the polling interval of the LoadChecker is 5s,
in the worst case a job needs to wait for 20s during its life cycle.
In particular, DELETE jobs do not need to wait for polling; they can call the pushTask()
function directly to delete. In this commit, I add a delete handler to process
delete tasks concurrently.
All delete tasks are pushed to the BEs immediately, without waiting for the LoadChecker.
Since a delete job used to start in the LOADING state and go through 2 LoadChecker rounds,
at most 10s is saved (5s per LoadChecker). The delete process is now synchronous,
and users get a response only after the delete has finished or been cancelled.
If a delete runs over a certain period of time, it will be cancelled with a timeout exception.
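A rough sketch of the synchronous waiting part (hypothetical names, not the actual delete handler): push the tasks to the BEs, then block on a latch until every task reports back or the timeout fires, in which case the job is cancelled.
```
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a synchronous delete with a timeout.
class DeleteHandlerSketch {
    void process(int taskNum, long timeoutMs) throws Exception {
        CountDownLatch latch = new CountDownLatch(taskNum);

        // Push one delete task per tablet to the BEs immediately,
        // instead of waiting for a LoadChecker polling round.
        for (int i = 0; i < taskNum; i++) {
            pushTaskToBe(i, latch);
        }

        // The user's DELETE statement blocks here until all tasks are done.
        if (!latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
            cancel();
            throw new Exception("delete job timeout, cancelled");
        }
    }

    void pushTaskToBe(int taskId, CountDownLatch latch) {
        // The BE's finish report would call latch.countDown() on success.
    }

    void cancel() { /* abort the txn and clean up */ }
}
```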
NOTICE: this CL upgrades the FE meta version to 82.
I don't know why, but I found that sometimes when I use "==" to judge the equality of types,
it returns false, even if the types are exactly the same.
ISSUE: #3309
This CL only changes == to equals() to solve the problem; the root cause is still unknown.
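A generic illustration of the symptom (hypothetical type class, not the Doris Type implementation): two separately constructed type objects can be equal by equals() while == still returns false, because == compares object identity rather than value.
```
import java.util.Objects;

class ColumnType {
    final String name;

    ColumnType(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof ColumnType && ((ColumnType) o).name.equals(name);
    }

    @Override
    public int hashCode() { return Objects.hash(name); }

    public static void main(String[] args) {
        ColumnType a = new ColumnType("BIGINT");
        ColumnType b = new ColumnType("BIGINT");
        System.out.println(a == b);      // false: different instances
        System.out.println(a.equals(b)); // true: same logical type
    }
}
```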
This PR is to limit replica usage. Admins need to know the replica usage of every db and
table, and be able to set a replica quota for every db.
```
ALTER DATABASE db_name SET REPLICA QUOTA quota;
```
This PR is just a transitional approach, but it is better to move the predicate transformation from Doris BE to Doris FE; in this way, Doris BE is only responsible for fetching data from ES.
Add an `enable_keyword_sniff` configuration item for creating an external Elasticsearch table. It defaults to true, and will sniff the `keyword` type on the analyzed `text` field and return the `json_path` that substitutes for the original column name.
```
CREATE EXTERNAL TABLE `test` (
`k1` varchar(20) COMMENT "",
`create_time` datetime COMMENT ""
) ENGINE=ELASTICSEARCH
PROPERTIES (
"hosts" = "http://10.74.167.16:8200",
"user" = "root",
"password" = "root",
"index" = "test",
"type" = "doc",
"enable_keyword_sniff" = "true"
);
```
Note: `enable_keyword_sniff` defaults to "true".
Run this SQL:
```
select * from test where k1 = "wu yun feng"
```
Output predicate DSL:
```
{"term":{"k1.keyword":"wu yun feng"}}
```
Also, in this PR, I remove the Elasticsearch version detection logic, since it is useless for now; it may be needed again in the future.
Multiple subqueries in the HAVING clause need to be rewritten into joins over multiple tables, and the current rewriting rules would need to be reworked.
This kind of query is not common, and there is no strong requirement from the business side.
This function will be added later if it is required.