Commit Graph

18263 Commits

Author SHA1 Message Date
5148bc6fa7 [fix](partial update)allow delete sign column in partial update in planForPipeline (#23034) 2023-08-16 14:20:39 +08:00
4510e16845 [improvement](delete) support delete predicate on value column for merge-on-write unique table (#21933)
Previously, delete statements with conditions on value columns were only supported on duplicate tables. After we introduced the delete sign mechanism for batch delete, a delete statement with conditions on value columns on a unique table is transformed into the corresponding insert into ..., __DELETE_SIGN__ select ... statement. However, for unique tables with merge-on-write enabled, the overhead of inserting these data can be eliminated. So this PR adds the ability to allow delete predicates on value columns for merge-on-write unique tables.
2023-08-16 12:18:05 +08:00
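Conceptually, the planner-side decision this change enables might look like the following minimal sketch (invented names, not Doris source):

```cpp
// Hypothetical sketch, not Doris source: choosing how to execute a DELETE
// whose predicate touches value columns.
enum class TableModel { Duplicate, UniqueMergeOnRead, UniqueMergeOnWrite };

// True if the delete predicate can be applied directly; false means the
// statement is rewritten into "insert into ..., __DELETE_SIGN__ select ...".
bool can_apply_delete_predicate_directly(TableModel model) {
    switch (model) {
    case TableModel::Duplicate:          return true;  // supported previously
    case TableModel::UniqueMergeOnWrite: return true;  // added by this PR
    case TableModel::UniqueMergeOnRead:  return false; // still uses the rewrite
    }
    return false;
}
```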
3efa06e63e [Fix](View) varchar type conversion error (#22987) 2023-08-16 11:49:04 +08:00
c41179b8e9 [fix](regression) Improve the robustness when close target connection (#23012) 2023-08-16 11:42:58 +08:00
221e7bdd17 [test](jdbc external) fix mysql and pg external regression test (#22998) 2023-08-16 10:44:47 +08:00
a2095b7d9e [fix](docs) add enable_single_replica_load to BE config doc (#22948) 2023-08-16 10:31:01 +08:00
Pxl
d5df3bae25 [Bug](exchange) fix dcheck fail when VDataStreamRecvr input empty block (#22992) 2023-08-16 10:21:19 +08:00
3b8981bee7 [chore](third-party) Speed up the download for aws-crt-cpp (#22997)
The package aws-sdk-cpp was upgraded in #20252. We can speed up the download for aws-crt-cpp.
2023-08-16 09:47:18 +08:00
da097629ea [chore](build) Fix the build with MySQL support (#23020) 2023-08-16 09:28:56 +08:00
cb6678adb9 [fix](case) Update repositoryAffinityList1.sql (#22941) 2023-08-16 09:23:46 +08:00
c8c46e042d [Improve](regress-test) add regression test for map_agg with nested type and insert into doris inner table (#23006) 2023-08-16 09:21:02 +08:00
d3dddeea8a [fix](load) remove incorrect DCHECK in BetaRowsetWriter dtor (#23016)
The DCHECK may not always hold in the case of vertical compaction.
Remove it to let DEBUG builds run.

Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
2023-08-15 23:55:02 +08:00
423002b20a [fix](nereids) partitionTopN & Window estimation (#22953)
* partitionTopN & winExpr estimation

* tpcds 44/47/57
2023-08-15 20:19:03 +08:00
fe08db191f [typo](docs) Optimize the release note 2.0.0 (#22926) 2023-08-15 20:09:56 +08:00
61d2f37bdc [fix](jdbc catalog) fix string type insert into odbc table (#22961) 2023-08-15 20:09:38 +08:00
f191736bfe [bug](shuffle) Fix DCHECK failure if exchange node has limit (#22993) 2023-08-15 19:14:37 +08:00
41a52d45d3 [pipeline](branch-2.0) PRs to branch-2.0 also run checks (#23004) 2023-08-15 19:13:13 +08:00
80566f7fed [stats](nereids)support partition stats (#22606) 2023-08-15 17:52:25 +08:00
9b2323b7fd [Pipeline](exec) support async writer in pipeline query engine (#22901) 2023-08-15 17:32:53 +08:00
50f66b1246 [fix](pipeline) fix bug of datastream sender when doing BUCKET_SHFFULE_HASH_PARTITIONED shuffle (#22988)
This issue was introduced by #22765; if #22765 is picked to 2.0, this PR also needs to be picked.

When the shuffle type is BUCKET_SHFFULE_HASH_PARTITIONED, data from multiple buckets may be sent to the same channel, so sending eos too early may cause data loss.
2023-08-15 17:30:27 +08:00
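To illustrate why an early eos loses data here, a minimal sketch (invented names, not Doris source) of the per-channel bookkeeping such a sender needs:

```cpp
#include <cstddef>

// Hypothetical sketch, not Doris source. Under BUCKET_SHFFULE_HASH_PARTITIONED,
// several buckets can map to one channel, so the channel may only receive eos
// after the last bucket assigned to it has flushed.
struct Channel {
    std::size_t pending_buckets = 0; // buckets that still owe data to this channel
    void send_eos() { /* mark the stream finished on the receiver side */ }
};

void on_bucket_finished(Channel& ch) {
    // Sending eos while pending_buckets > 0 would drop the remaining buckets' rows.
    if (--ch.pending_buckets == 0) {
        ch.send_eos();
    }
}
```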
d7a5c37672 [improvement](tablet clone) update the capacity coefficient for calculating backend load score (#22857)
Update the capacity coefficient used for calculating the backend load score:
1. Add FE config entry `backend_load_capacity_coeficient` to allow setting the capacity coefficient manually;
2. Adjust the calculation of the capacity coefficient as described below.

We emphasize disk usage when calculating the load score.
If a BE has a high used capacity percent, we should increase its load score.
So we increase the capacity coefficient along with a BE's used capacity percent.

But this is not enough. For example, suppose the tablets have a big difference in data size.
Then the following two BEs may get the same load score:
BE A:  disk usage = 60%,  replica number = 2000  (it contains the big tablets)
BE B:  disk usage = 30%,  replica number = 4000  (it contains the small tablets)

What we actually want is: first move some big tablets from A to B; once their disk usages are close,
move some small tablets from B to A, so that finally both their disk usages and replica numbers
are close.

To achieve this, when the max difference between all BEs' disk usages is >= 30%, we set the capacity coefficient to 1.0 to avoid the effect of replica number. As the disk usage difference decreases, we decrease the capacity coefficient to make replica number effective again.
2023-08-15 17:27:31 +08:00
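A minimal sketch of the scoring described above (the curve and names are illustrative assumptions, not the PR's exact code):

```cpp
#include <algorithm>

// Hypothetical sketch, not Doris source: pin the capacity coefficient to 1.0
// while disk usages across BEs still differ widely, then let replica number
// gradually regain weight as the usages converge.
double capacity_coefficient(double be_used_percent, double max_used_percent_diff) {
    if (max_used_percent_diff >= 0.30) {
        return 1.0; // disk usage dominates; replica number is ignored
    }
    // Illustrative curve: grow the coefficient with this BE's own used percent.
    return std::clamp(be_used_percent, 0.5, 1.0);
}

double load_score(double capacity_proportion, double replica_num_proportion,
                  double coefficient) {
    return coefficient * capacity_proportion +
           (1.0 - coefficient) * replica_num_proportion;
}
```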
7de362f646 [fix](Nereids): expand other join which has or condition (#22809) 2023-08-15 16:49:19 +08:00
dd09e42ca9 [enhancement](Nereids): unify expression constructors by using List (#22985) 2023-08-15 16:47:58 +08:00
140ab60a74 [Enhancement](multi-catalog) add a BE selection strategy for hdfs short-circuit-read. (#22697)
Sometimes BEs are deployed on the same nodes as HDFS DataNodes, so we can use a more reasonable BE selection policy to take advantage of HDFS short-circuit reads as much as possible.
2023-08-15 15:34:39 +08:00
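A minimal sketch of such a policy (invented types, not the PR's code): prefer a BE whose host matches one of the split's DataNode hosts.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch, not Doris source: pick a BE co-located with one of the
// split's DataNodes so the scan can use HDFS short-circuit reads.
struct Backend { std::string host; };

const Backend* pick_backend(const std::vector<Backend>& backends,
                            const std::vector<std::string>& datanode_hosts) {
    for (const Backend& be : backends) {
        for (const std::string& dn : datanode_hosts) {
            if (be.host == dn) return &be; // local read, no DataNode TCP hop
        }
    }
    // No co-located BE: fall back to any backend (e.g. the first one).
    return backends.empty() ? nullptr : &backends.front();
}
```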
a2e00727d6 [feature](auth) support column auth (#22629)
Support GRANT privilege[(col1,col2...)] [, privilege] ON db.tbl TO user_identity [ROLE 'role'];
2023-08-15 15:32:51 +08:00
f1864d9fcf [fix](function) fix str_to_date with specific format (#22981) 2023-08-15 15:30:48 +08:00
9b42093742 [feature](agg) Make 'map_agg' support array type as value (#22945) 2023-08-15 14:44:50 +08:00
1d825f57bc [fix](load) expose error root cause msg for load (#22968)
Currently, we only return an ambiguous "INTERNAL ERROR" to the user when a
load fails. This commit no longer hides the root cause.

Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
2023-08-15 13:22:45 +08:00
c2ff940947 [refactor](parquet) change decimal type to be exported as fixed-len-byte on parquet write (#22792)
Before, the parquet writer exported decimals as byte-binary,
but those fields could not be imported into Hive.
Now decimals are exported as fixed-len-byte-array so they can be imported into Hive directly.
2023-08-15 13:17:50 +08:00
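For reference, the Parquet format sizes a fixed-len-byte-array decimal by its precision; a small sketch of that computation (illustrative, not the PR's code):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical sketch, not Doris source: the smallest FIXED_LEN_BYTE_ARRAY
// length whose two's-complement big-endian encoding can hold a decimal of the
// given precision (the representation Hive reads).
int32_t decimal_fixed_len_bytes(int32_t precision) {
    // Smallest n with 10^precision - 1 <= 2^(8n - 1) - 1.
    int32_t bytes = 1;
    while (std::pow(2.0, 8.0 * bytes - 1) - 1.0 < std::pow(10.0, precision) - 1.0) {
        ++bytes;
    }
    return bytes; // e.g. precision 18 -> 8 bytes, precision 38 -> 16 bytes
}
```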
94bf8fb3c5 [performance](executor) optimize time_round function only one arg (#22855) 2023-08-15 13:16:42 +08:00
f6ca16e273 [fix](analysis) fix error msg (#22950) 2023-08-15 13:15:13 +08:00
1eab93b368 [chore](Nereids): remove useless code (#22960) 2023-08-15 13:14:20 +08:00
707a527775 [FIX](map) insert into doris table with array/map type by local tvf (#22955) 2023-08-15 13:11:23 +08:00
Pxl
34399e2965 [Bug](exchange) init _instance_to_rpc_ctx on register_sink (#22976) 2023-08-15 13:02:28 +08:00
13d24297a7 [fix](Nereids) type check could not work when root node is table or file sink (#22902)
The type check could not work because there were no expressions in the plan:
sink and scan nodes have no expressions at all, so types could not be checked.
This PR adds expressions on the logical sink to let the type check work well.
2023-08-15 11:45:16 +08:00
ce3267fcca [refactor](load) change segcompaction worker interface (#22928) 2023-08-15 11:29:57 +08:00
d431a35721 [Fix](inverted index) fix non-index match function core (#22959) 2023-08-15 11:27:12 +08:00
xy
b5ea3454a6 [Bug](aggregation) fix map_agg when columns[1] is nullable (#22932)
In the map_agg handler function, add a judgment on columns[1]->is_nullable().
2023-08-15 11:26:03 +08:00
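In the spirit of that fix, a self-contained sketch (stand-in types, not Doris source) of unwrapping a nullable value column before reading it:

```cpp
#include <cstddef>

// Minimal stand-ins for the column interfaces (hypothetical, not Doris source).
struct IColumn {
    virtual ~IColumn() = default;
    virtual bool is_nullable() const { return false; }
};

struct ColumnNullable : IColumn {
    const IColumn* nested = nullptr;
    const bool* null_map = nullptr; // true = the row is NULL
    bool is_nullable() const override { return true; }
    bool is_null_at(std::size_t row) const { return null_map[row]; }
    const IColumn& get_nested_column() const { return *nested; }
};

// The fix in spirit: judge columns[1]->is_nullable() before reading values.
void map_agg_add(const IColumn* const* columns, std::size_t row) {
    const IColumn* value_col = columns[1];
    if (value_col->is_nullable()) {
        const auto& nullable = static_cast<const ColumnNullable&>(*value_col);
        if (nullable.is_null_at(row)) return;      // skip NULL values
        value_col = &nullable.get_nested_column(); // read from the nested column
    }
    // ... build the (key, value) pair from columns[0] and value_col ...
}
```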
Pxl
3f55d5d4d5 [Chore](execution) change some log fatal and dcheck to exception (#22890) 2023-08-15 10:45:00 +08:00
bfc1efe1aa [fix](createTableStmt) fix bug in createTableStmt toSql (#22750)
Issue Number: https://github.com/apache/doris/issues/22749
2023-08-15 10:35:22 +08:00
8318dfa9a3 [fix](datastream sender) fix wrong result of BUCKET_SHFFULE_HASH_PARTITIONED shuffle (#22973) 2023-08-15 10:21:14 +08:00
911bd0e818 [bug](if) fix if function not handling const nullable values (#22823) 2023-08-15 10:16:48 +08:00
27f5b623e6 [Chore](docs) Add SSL FAQ (#22956) 2023-08-15 09:49:39 +08:00
b49dc8042d [feature](load) refactor CSV reading process during scanning, and support enclose and escape for stream load (#22539)
## Proposed changes

Refactor thoughts: close #22383
Descriptions about `enclose` and `escape`: #22385

## Further comments

2023-08-09: 
It's a pity that experiments show the original way of parsing plain CSV is faster. Therefore, the refactor is only applied to the enclose-related code; the plain CSV parser keeps the original logic.

Some performance fallback is unavoidable anyway. From the CSV reader's perspective, the real weak point may be the write-column behavior, as shown by the flame graph.
 
Trimming escape will be enabled after fix #22411 is merged.

Cases that should be discussed:

1. When an incomplete enclose appears at the beginning of large-scale data, the line delimiter is unreachable until EOF. Will the buffer become extremely large?
2. What if an infinitely long line occurs? Essentially, case 1 is equivalent to this.

Only stream load is supported as a trial in this PR, to avoid too many unrelated changes. Docs will be added when `enclose` and `escape` are available for all kinds of load.
2023-08-15 09:23:53 +08:00
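A minimal sketch of the enclose/escape-aware field splitting the commit describes (simplified, single-line input; not the reader's actual code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch, not Doris source: split one line into fields when an
// enclose character (e.g. '"') and an escape character (e.g. '\\') are set.
std::vector<std::string> split_fields(const std::string& line, char sep,
                                      char enclose, char escape) {
    std::vector<std::string> fields;
    std::string cur;
    bool in_enclose = false;
    for (std::size_t i = 0; i < line.size(); ++i) {
        char c = line[i];
        if (c == escape && i + 1 < line.size()) {
            cur += line[++i];         // escaped character is taken literally
        } else if (c == enclose) {
            in_enclose = !in_enclose; // separators inside an enclose are data
        } else if (c == sep && !in_enclose) {
            fields.push_back(cur);    // field boundary outside any enclose
            cur.clear();
        } else {
            cur += c;
        }
    }
    fields.push_back(cur);
    return fields;
}
```

An unterminated enclose is exactly the hazard raised in case 1 above: the reader cannot see a line delimiter until EOF, so the buffered line can grow without bound.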
7bc98748cf [fix](datastream sender) fix wrong result of broadcast join; fix wrong result of pipeline (#22942)
Fix bug of #22765
Close #22924
2023-08-14 18:59:19 +08:00
hzq
ad8a8203a2 [fix](mysql compatibility) add an internal database mysql to improve mysql compatibility (#22868) 2023-08-14 17:03:11 +08:00
45481f5fe2 [optimize](Nereids): optimize Nereids performance (#22885) 2023-08-14 15:21:29 +08:00
8f471a3a1f [fix](Nereids) push agg to meta scan does not work well (#22811) 2023-08-14 14:35:21 +08:00
fa6110accd [fix](catalog) paimon support more data types (#22899) 2023-08-14 13:48:33 +08:00
Pxl
d371101bfd [Improvement](aggregation) make fixed hashmap's bitmap_size flexible (#22573) 2023-08-14 10:47:06 +08:00