Support hdfs in select outfile clause without broker.
This PR implements an HDFS writer in BE that writes HDFS files directly without going through a broker.
A syntax check for the HDFS outfile clause has also been added in FE.
The syntax:
```
select * from xx into outfile "hdfs://user/outfile_" format as csv
properties ("hdfs.fs.dafultFS" = "xxx", "hdfs.hdfs_user" = "xxx");
```
Note that all HDFS configurations need to carry the prefix `hdfs.`.
1. Fix a memory leak in `collect_iterator.cpp`. (Fix #6700)
2. Add a new BE config `max_segment_num_per_rowset` to limit the number of segments in a new rowset. (Fix #6701)
3. Make the error message of stream load more user-friendly.
1. Fix a potential BE coredump when sending batches during data loading. (Fix [Bug] BE crash when loading data #6656)
2. Fix a potential BE coredump when doing schema change. (Fix [Bug] BE crash when doing alter task #6657)
3. Optimize the metric base_compaction_request_failed.
4. Add an Order column to the SHOW TABLET result. (Fix [Feature] Add order column in SHOW TABLET stmt result #6658)
5. Fix a bug where the tablet repair slot was not released. (Fix [Bug] Tablet scheduler stop working #6659)
6. Fix a bug where the REPLICA_MISSING error could not be handled. (Fix [Bug] REPLICA_MISSING error can not be handled. #6660)
7. Modify the column name of SHOW PROC "/cluster_balance/cluster_load_stat".
8. Optimize the result of SHOW PROC "/statistic" to show COLOCATE_MISMATCH tablets. (Fix [Feature] the health status of colocate table's tablet is not shown in show proc statistic #6663)
9. Fix a bug where show load where state='pending' could not be executed. (Fix [Bug] show load where state='pending' can not be executed. #6664)
This commit reduces the number of threads used by SyncJob.
1. Submit send tasks to a thread pool to send data.
2. Submit an EOF task to the thread pool to block and then wake up the client to commit the transaction.
3. Use a SerialExecutorService to ensure the correct order of sent data in every channel, as sketched below.
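The per-channel ordering idea can be illustrated with a minimal, self-contained sketch; `SerialExecutor` and the surrounding names are illustrative, not the actual Doris classes:

```java
import java.util.ArrayDeque;
import java.util.concurrent.Executor;

// Wraps a shared thread pool so that tasks submitted through one instance
// run one at a time, in submission order. Creating one instance per channel
// keeps each channel's data ordered while still sharing a small pool.
class SerialExecutor implements Executor {
    private final ArrayDeque<Runnable> tasks = new ArrayDeque<>();
    private final Executor sharedPool;
    private Runnable active;

    SerialExecutor(Executor sharedPool) { this.sharedPool = sharedPool; }

    @Override
    public synchronized void execute(Runnable task) {
        tasks.offer(() -> {
            try { task.run(); } finally { scheduleNext(); }
        });
        if (active == null) scheduleNext();
    }

    private synchronized void scheduleNext() {
        if ((active = tasks.poll()) != null) sharedPool.execute(active);
    }
}
```

Each channel gets its own `SerialExecutor` over the same underlying pool, so the total thread count stays bounded while per-channel order is preserved.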
In addition, some bugs have been fixed in this commit:
1. SyncJob failed to resume.
2. Data sync failed when multiple tables were set in one SyncJob.
3. In a cluster with multiple FEs, the master may hang after creating a SyncJob.
The current SqlCache sql_key is generated by taking the MD5 of selectStmt.toSql(). However, selectStmt.toSql() is reconstructed from the operator tree, and some specific parameters are sometimes not rendered in the output. As a result, SQL statements with different parameters can hit the same cache entry, and the query results are inconsistent with expectations.
For example, one of our users has a SQL statement of more than 300 lines containing many parameters, including partitions. But the result of selectStmt.toSql() is:
```
SELECT `tb`.`type` AS `type`, `tb`.`name` AS `name`, `tb`.`name1` AS `name1`, `tb`.`name2` AS `name2`, `tb`.`name3` AS `name3`
FROM (
    SELECT 3 AS `type`, `cc`.`name` AS `name`, `cc`.`name1` AS `name1`
        , coalesce(`bb`.`name`, '请联系您的品牌业务经理进行咨询。') AS `name2`, `bb`.`name1` AS `name3`
    FROM `cc`
    LEFT JOIN `bb` ON `cc`.`id` = `bb`.`id1`
    UNION ALL
    SELECT `dd`.`type` AS `type`, `dd`.`name` AS `name`, `dd`.`name1` AS `name1`, `dd`.`name2` AS `name2`, `dd`.`name3` AS `name3`
    FROM `dd`
    UNION ALL
    SELECT `ee`.`type` AS `type`, `ee`.`name` AS `name`, `ee`.`name1` AS `name1`, `ee`.`name2` AS `name2`, `ee`.`name3` AS `name3`
    FROM `ee`
) tb
LIMIT 10
```
Because the partitions the user specified do not appear in this output, queries against different partitions hit the same cache entry, returning results inconsistent with expectations. Therefore, it is recommended to use originStmt instead of selectStmt.toSql() to generate sql_key.
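A minimal sketch of the proposed key generation, assuming the key is simply an MD5 digest of the original statement text (class and method names are illustrative, not the actual FE code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SqlKeyDemo {
    // Digest the original statement text instead of selectStmt.toSql(), so
    // parameters that toSql() drops (e.g. partitions) still produce distinct keys.
    static String sqlKey(String originStmt) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(originStmt.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Different partitions now yield different keys.
        System.out.println(sqlKey("SELECT * FROM t PARTITION (p1) LIMIT 10"));
        System.out.println(sqlKey("SELECT * FROM t PARTITION (p2) LIMIT 10"));
    }
}
```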
1. Support the boolean data type in spark-doris-connector, since Doris already supports boolean.
2. Fix a Doris BE core dump when Spark requests data from BE.
* [BUG][Profile] Fix the problem that BE's profile could not add a child profile at the specified location.
Bug: with
`runtime_profile()->add_child(build_phase_profile, false, nullptr);`
the child profile is added at the second position instead of the specified one.
* Update runtime_profile.cpp
1. Inserting a very large string value may cause a coredump.
2. Some analytic function and aggregate function results may be incorrect.
3. String comparison may coredump when the string type is too large.
4. String types in delete conditions could not be processed correctly.
5. Add text/blob as aliases of string, for compatibility with MySQL.
6. Fix string type min/max aggregation that may be processed incorrectly.
There are many historical job records in Doris, such as load jobs, alter jobs, export jobs and so on.
These historical jobs are generally cleaned up periodically by the cleanup thread, to avoid taking too much memory.
This PR reorganizes the cleanup logic of historical jobs and optimizes the cleanup of some of them
to reduce the memory usage of historical jobs.
The following FE configuration items are related to historical job cleaning:
1. label_keep_max_second
Used to determine whether LoadJob, LoadJobV2, RoutineLoadJob or TransactionState are expired.
2. streaming_label_keep_max_second
Used to determine whether InsertJob, DeleteJob or TransactionState are expired.
Unlike label_keep_max_second, this config is used to clean up frequently submitted jobs and load transactions.
3. history_job_keep_max_second
Used to determine whether AlterJob or ExportJob is expired.
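For illustration, these items can be set in fe.conf; the values below are arbitrary examples, not recommendations:

```
# fe.conf
label_keep_max_second = 259200            # keep load/routine load labels for 3 days
streaming_label_keep_max_second = 43200   # keep insert/delete labels for 12 hours
history_job_keep_max_second = 604800      # keep alter/export jobs for 7 days
```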
The basic idea is to add a rewrite rule that rewrites a `SlotRef` expr
whose type is fixedPointValue (i.e., the integer types) into a `CastExpr` that casts the fixed-point value to a string.
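A minimal sketch of the rule's shape, using stand-in types rather than the actual FE `Expr`/`SlotRef`/`CastExpr` classes:

```java
// Stand-ins for the FE expression classes, just to show the rule's shape.
interface Expr {}
record SlotRef(String column, boolean isFixedPointType) implements Expr {}
record CastExpr(String targetType, Expr child) implements Expr {}

public class IntToStringRewriteDemo {
    // If the expr is an integer-typed SlotRef, wrap it in a cast to string;
    // otherwise leave it unchanged.
    static Expr apply(Expr e) {
        if (e instanceof SlotRef s && s.isFixedPointType()) {
            return new CastExpr("STRING", s);
        }
        return e;
    }

    public static void main(String[] args) {
        System.out.println(apply(new SlotRef("k1", true)));
        // CastExpr[targetType=STRING, child=SlotRef[column=k1, isFixedPointType=true]]
    }
}
```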
Fix a bug where, if a tablet belongs to a colocation table and one of its
replicas is in DECOMMISSION state, the tablet cannot be repaired.
Also fix a bug where quotes are not escaped in the show create table result:
```
COMMENT "a"bc" to COMMENT "a\"bc"
```
Fix #6551
A `Truncate` operation creates a new partition to replace the base partition.
Tablets in the new partition were created based on the table's default bucket number.
If the number of tablets in a partition differs from the table's default bucket number,
then after the truncate operation the newly created partition has a different number of tablets than the base partition had.
In any case, the new partition should have the same number of tablets as the base partition after a truncate operation,
so tablets in the new partition should be created based on the tablet number of the base partition rather than the table's default bucket number.
This PR mainly supports:
1. Exporting query result sets concurrently.
2. Exporting query result sets via the S3 protocol.
There are several preconditions for concurrently exporting query result sets:
1. The concurrent-export session variable is enabled.
2. The query itself can be exported concurrently
(some queries containing sort nodes at the top level cannot be exported concurrently).
3. The export uses the S3 protocol instead of a broker.
When the result set is exported concurrently,
the file prefix is changed to outfile_{query_instance_id}_filenumber.{file_format}.
Counter of image write success or failure:
image_write{"type" = "success"}
image_write{"type" = "failed"}
Counter of image push to other FE nodes success or failure:
image_push{"type" = "success"}
image_push{"type" = "failed"}
Counter of old image clean success or failure:
image_clean{"type" = "success"}
image_clean{"type" = "failed"}
Counter of old edit log clean success or failure:
edit_log_clean{"type" = "success"}
edit_log_clean{"type" = "failed"}
Add a rewrite rule for CompoundPredicate 'OR'/'AND' so queries can hit the prefix index.
Unit tests added.
```
case true AND expr ==> expr
case expr AND true ==> expr
case false OR expr ==> expr
case expr OR false ==> expr
case false AND expr ==> false
case expr AND false ==> false
case true OR expr ==> true
case expr OR true ==> true
```
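A minimal, runnable sketch of these folding cases (illustrative, not the actual Doris rewrite rule; a Boolean stands in for a literal child and any other object for a non-constant expr):

```java
public class PredicateFoldDemo {
    enum Op { AND, OR }

    // Fold a compound predicate when one child is a boolean literal,
    // per the cases listed above; returns null when nothing can be folded.
    static Object fold(Op op, Object left, Object right) {
        Boolean lit = left instanceof Boolean ? (Boolean) left
                    : right instanceof Boolean ? (Boolean) right : null;
        if (lit == null) return null;
        Object other = left instanceof Boolean ? right : left;
        if (op == Op.AND) return lit ? other : Boolean.FALSE;
        return lit ? Boolean.TRUE : other;
    }

    public static void main(String[] args) {
        Object expr = "k1 = 10"; // stand-in for a column predicate
        System.out.println(fold(Op.AND, Boolean.TRUE, expr));  // k1 = 10
        System.out.println(fold(Op.OR, expr, Boolean.FALSE));  // k1 = 10
        System.out.println(fold(Op.OR, Boolean.TRUE, expr));   // true
    }
}
```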
When no backend (BE) is available, e.g. on first setup or when all BEs are down, we cannot log in to FE using most SQL tools, e.g. DataGrip or Querious.
This is because these tools call `select version()` or `select @@version_comment …` right after login; when there is no backend available, the login fails.
This PR supports login when no BE is available, so we can add backends or modify configuration using GUI SQL tools, especially at first setup.
This PR works under the precondition that the SQL tools only query very simple information that FE can handle by itself, with no need to send the request to BE.
So we check the query type and the BE status: if FE can handle the query and no BE is available, we intercept the query and process it in FE.
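A minimal sketch of that interception check, with illustrative names (not the actual FE code):

```java
public class FeFallbackDemo {
    // Answer in FE only when no BE is alive and the statement is one of the
    // simple info queries that SQL tools send right after login.
    static boolean handleInFe(String sql, int aliveBackendNum) {
        String s = sql.trim().toLowerCase();
        boolean simpleInfoQuery = s.equals("select version()") || s.startsWith("select @@");
        return aliveBackendNum == 0 && simpleInfoQuery;
    }

    public static void main(String[] args) {
        System.out.println(handleInFe("SELECT @@version_comment LIMIT 1", 0)); // true
        System.out.println(handleInFe("SELECT * FROM t", 0));                  // false
    }
}
```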
Redirect the following http requests to master:
/rest/v2/api/cluster_overview
/rest/v2/manager/node/frontends
/rest/v2/manager/node/backends
/rest/v2/manager/node/brokers
If the colocate group of the table is not stable,
then even if it is a colocate table,
its data distribution is random,
so the distribution info of OlapScanNode should also be Random instead of Hash Partition.
Fixed #6558
#5902
This CL mainly changes:
1. Support setting tags for BE nodes:
```
alter system add backend "1272:9050, 1212:9050" properties("tag.location" = "zoneA");
alter system modify backend "1272:9050, 1212:9050" set ("tag.location" = "zoneB");
```
For compatibility, all BE nodes will be assigned a "default" tag when upgrading: `"tag.location": "default"`.
2. Create a new class `ReplicaAllocation` to replace the previous `replication_num`.
`ReplicaAllocation` represents the allocation of a tablet's replicas. It contains a map from
tag to number of replicas.
For example, if a user sets a table's replication num to 3, it will be converted to a ReplicaAllocation
like `"tag.location.default" : "3"`, which means the tablet will have 3 replicas, all of them
allocated on BE nodes with the "default" tag.
3. Support creating tables with a replication allocation:
```
CREATE TABLE example_db.table_hash
(
k1 TINYINT
)
DISTRIBUTED BY HASH(k1) BUCKETS 32
PROPERTIES (
"replication_allocation"="tag.location.zone1:1, tag.location.zone2:2"
);
```
Also support setting replica allocation for dynamic partition tables, and modifying replica allocation at runtime.
For compatibility, user can still set "replication_num" = "3", and it will be automatically converted to:
` "replication_allocation"="tag.location.default:3"`
4. Support tablet repair and balance based on tags.
1. For tablets of non-colocate tables, most of the logic is the same as before,
but when selecting the destination node for a clone, the node's tag is considered.
If the required tag does not exist, the tablet cannot be repaired.
Similarly, under the condition that the replicas remain complete, tablets will be
reallocated according to tag, or the replicas will be balanced.
Balancing is performed separately within each resource group.
2. For tablets of colocate tables, the backend sequence of buckets will be split by tag.
For example, if the replica allocation is "tag.location.zone1:1, tag.location.zone2:2",
and zone1 has 2 BEs: A, B; zone2 has 3 BEs: C, D, F,
there will be 2 backend sequences: one for zone1, the other for zone2.
One possible pair of sequences would be:
zone1: [A] [B] [A] [B]
zone2: [C, D][D, F][F, C][C, D]
5. Support setting tags for users and restricting execution nodes by tag:
```
set property for 'cmy' 'resource_tags.location' = 'zone1, zone2';
```
After setting this, the user 'cmy' can only query data stored on backends with tags zone1 and zone2,
and queries can only be executed on backends with tags zone1 and zone2.
For compatibility, after upgrading, the property `resource_tags.location` will be empty,
so users can still query data stored on any backend.
6. Modify the FE unit test framework so that we can create multiple backends with different mocked IPs in unit tests.
This helps us easily test distributed cases such as query, tablet repair, and balance.
The documentation will be added in another PR.
Also fix a bug described in #6194
Fix #5378 #5391 #5688 #5973 #6155 and all replay NPEs. All replay methods can now throw MetaNotFoundException, which is caught to log a warning for potentially inconsistent metadata cases.
This tries to establish a clear notice for future developers to check for null.
* fix(sparkload): bitmap deep copy in `or` operator
Fix multiple rollups holding the same reference to a BitmapValue, which may otherwise be updated repeatedly.
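The aliasing bug can be illustrated with a minimal, self-contained demo using java.util.BitSet as a stand-in for BitmapValue (illustrative only, not the actual spark load code):

```java
import java.util.BitSet;

public class BitmapAliasDemo {
    public static void main(String[] args) {
        BitSet source = new BitSet();
        source.set(1);
        BitSet delta = new BitSet();
        delta.set(2);

        // Buggy: two rollups alias the same bitmap, so OR-ing delta into
        // rollupA silently mutates rollupB as well.
        BitSet rollupA = source;
        BitSet rollupB = source;
        rollupA.or(delta);
        System.out.println(rollupB.get(2)); // true: rollupB was corrupted

        // Fixed: each rollup gets its own deep copy before OR-ing,
        // as this PR does for BitmapValue in the `or` operator.
        BitSet base = new BitSet();
        base.set(1);
        BitSet copyA = (BitSet) base.clone();
        copyA.or(delta);
        System.out.println(base.get(2)); // false: the original is untouched
    }
}
```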
Co-authored-by: weixiang <weixiang06@meituan.com>