Commit Graph

7033 Commits

Author SHA1 Message Date
22b4c6af20 [feature](Nereids) support statement having aggregate function in order by list (#13976)
1. add a feature that supports statements with aggregate functions in the ORDER BY list (see the sketch below), such as:
    SELECT COUNT(*) FROM t GROUP BY c1 ORDER BY COUNT(*) DESC;
2. add clickbench analyze unit tests
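A minimal usage sketch of the new capability; the table and column names (`orders`, `user_id`, `amount`) are hypothetical and not from the commit.
```
-- group rows and order the groups by an aggregate function
SELECT user_id, SUM(amount)
FROM orders
GROUP BY user_id
ORDER BY SUM(amount) DESC;
```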
2022-11-07 17:01:31 +08:00
0031304015 [typo](docs)fix config doc #14010 2022-11-07 17:00:16 +08:00
bb9182d602 [fix](repeat)remove unmaterialized expr from repeat node (#13953) 2022-11-07 14:13:05 +08:00
7254999f02 [typo](docs) fix docs,delete redundant words #13849 2022-11-07 13:51:10 +08:00
3c8524b9d8 [security](fe jar) upgrade commons-codec:commons-codec to 1.13 #13951 2022-11-07 13:50:07 +08:00
32fea672b0 [chore](gutil) remove some gutil macros and solve some macro conflict with brpc (#13954)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2022-11-07 13:39:52 +08:00
e8d2fb6778 [feature](function)add search functions: multi_search_all_positions & multi_match_any (#13763)
Co-authored-by: yiliang qiu <yiliang.qiu@qq.com>
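A usage sketch for the two new functions, assuming ClickHouse-style semantics (positions are 1-based, 0 when the needle is absent; `multi_match_any` returns 1 if any pattern matches, else 0); the exact behavior should be checked against the Doris docs.
```
-- first-occurrence position of each needle in the haystack
SELECT multi_search_all_positions('Hello, World!', ['Hello', '!', 'xyz']);
-- expected shape: [1, 13, 0]

-- whether any of the patterns matches the haystack
SELECT multi_match_any('Hello, World!', ['abc', 'World']);
-- expected shape: 1
```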
2022-11-07 11:50:55 +08:00
7ffe88b579 [feature-array](array-type) Add array function array_popback (#13641)
Removes the last element from the array.

```
mysql> select array_popback(['test', NULL, 'value']);
+---------------------------------------------+
| array_popback(ARRAY('test', NULL, 'value')) |
+---------------------------------------------+
| [test, NULL]                                |
+---------------------------------------------+
```
2022-11-07 10:48:16 +08:00
c7b2b90504 [fix](memtracker) Fix DCHECK !std::count(_consumer_tracker_stack.begin(), _consumer_tracker_stack.end(), tracker) 2022-11-06 16:41:03 +08:00
27549564a7 [feature](table-valued-function) Support S3 tvf (#13959)
This PR does three things:

1. Modified the framework of table-valued functions (tvf).
2. BE now supports the `fetch_table_schema` RPC.
3. Implemented the `S3(path, AK, SK, format)` table-valued function (see the sketch below).
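A hypothetical invocation following the `S3(path, AK, SK, format)` signature named above; the bucket, credentials, and exact argument syntax are placeholders rather than the confirmed API.
```
-- query a file on S3-compatible storage directly, without loading it first
SELECT *
FROM S3(
    "s3://my-bucket/path/to/data.csv",  -- path (placeholder)
    "my_access_key",                    -- AK (placeholder)
    "my_secret_key",                    -- SK (placeholder)
    "csv"                               -- format
)
LIMIT 10;
```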
2022-11-06 11:04:26 +08:00
fb5a3e118a [feature-wip](dlf) prepare to support aliyun dlf (#13969)
[What is DLF](https://www.alibabacloud.com/product/datalake-formation)

This PR is a preparation for supporting DLF, with some changes to multi catalog:

1. Add RuntimeException for most Hive metastore and ES client access operations.
2. Add DLF related dependencies.
3. Move the checks of ES catalog properties to the analysis phase of creating an ES catalog.

TODO(in next PR):

1. Refactor the `getSplit` method to support not only HDFS but also S3-compatible object storage.
2. Finish the implementation of DLF support.
2022-11-06 10:01:57 +08:00
f29e43fee9 [fix](storage) rm unnecessary check (#13986) (#13988)
Signed-off-by: freemandealer <freeman.zhang1992@gmail.com>
2022-11-05 23:46:30 +08:00
1724faf9a5 [test](java-udf) add Java UDF regression tests for the currently supported data types #13972 2022-11-05 19:25:58 +08:00
d01f7c546a [refactor](iceberg-hudi) disable iceberg and hudi table by default (#13932) 2022-11-05 19:22:27 +08:00
wxy 620a137bd7 [enhancement](test) support tablet repair and balance process in ut (#13940) 2022-11-05 19:20:23 +08:00
380395a61f [doc](routineload)Common mistakes in adding routine load #13975 2022-11-05 19:17:33 +08:00
087488db3b [typo](doc) fixed spelling errors (#13974) 2022-11-05 15:40:55 +08:00
2ee7ba79a8 [Improvement](javaudf) improve java loader usage (#13962) 2022-11-05 13:20:04 +08:00
04830af039 [fix](tablet sink) fall back to the non-vectorized interface in tablet_sink if an upgrade from 1.1-lts to 1.2-lts is in progress (#13966) 2022-11-05 10:19:51 +08:00
f87be09d69 [fix](load) Fix load channel mgr lock (#13960)
hot fix load channel mgr lock
2022-11-05 00:48:30 +08:00
a19e6881c7 [chore](be web ui)upgrade jquery version to 3.6.0 (#13942)
* upgrade jquery version to 3.6.0

* update license dist
2022-11-04 16:20:17 +08:00
06a1efdb01 [fix](Nereids) fix tpch and support tracing plan's change events (#13957)
This PR fixes some bugs found when running TPC-H:
1. fix avg(decimal) crashing the backend. The fix is in `Avg.getFinalType()` and every child class of `ComputeSignature`.
2. fix the ReorderJoin dead loop. The fix is in `ReorderJoin.findInnerJoin()`.
3. fix TimestampArithmetic not being able to bind the functions in its children. The fix is in `BindFunction.FunctionBinder.visitTimestampArithmetic()`.

New feature: support tracing the plan's change events. Set `enable_nereids_trace=true` to enable the trace log and see output like this:
```
2022-11-03 21:07:38,391 INFO (mysql-nio-pool-0|208) [Job.printTraceLog():128] ========== RewriteBottomUpJob ANALYZE_FILTER_SUBQUERY ==========
before:
LogicalProject ( projects=[S_ACCTBAL#17, S_NAME#13, N_NAME#4, P_PARTKEY#19, P_MFGR#21, S_ADDRESS#14, S_PHONE#16, S_COMMENT#18] )
+--LogicalFilter ( predicates=((((((((P_PARTKEY#19 = PS_PARTKEY#7) AND (S_SUPPKEY#12 = PS_SUPPKEY#8)) AND (P_SIZE#24 = 15)) AND (P_TYPE#23 like '%BRASS')) AND (S_NATIONKEY#15 = N_NATIONKEY#3)) AND (N_REGIONKEY#5 = R_REGIONKEY#0)) AND (R_NAME#1 = 'EUROPE')) AND (PS_SUPPLYCOST#10 =  (SCALARSUBQUERY) (QueryPlan: LogicalAggregate ( phase=LOCAL, outputExpr=[min(PS_SUPPLYCOST#31) AS `min(PS_SUPPLYCOST)`#33], groupByExpr=[] )), (CorrelatedSlots: [P_PARTKEY#19, S_SUPPKEY#12, S_NATIONKEY#15, N_NATIONKEY#3, N_REGIONKEY#5, R_REGIONKEY#0, R_NAME#1]))) )
   +--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
      |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
      |  |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
      |  |  |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
      |  |  |  |--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.part, output=[P_PARTKEY#19, P_NAME#20, P_MFGR#21, P_BRAND#22, P_TYPE#23, P_SIZE#24, P_CONTAINER#25, P_RETAILPRICE#26, P_COMMENT#27], candidateIndexIds=[], selectedIndexId=11076, preAgg=ON )
      |  |  |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.supplier, output=[S_SUPPKEY#12, S_NAME#13, S_ADDRESS#14, S_NATIONKEY#15, S_PHONE#16, S_ACCTBAL#17, S_COMMENT#18], candidateIndexIds=[], selectedIndexId=11124, preAgg=ON )
      |  |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.partsupp, output=[PS_PARTKEY#7, PS_SUPPKEY#8, PS_AVAILQTY#9, PS_SUPPLYCOST#10, PS_COMMENT#11], candidateIndexIds=[], selectedIndexId=11092, preAgg=ON )
      |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.nation, output=[N_NATIONKEY#3, N_NAME#4, N_REGIONKEY#5, N_COMMENT#6], candidateIndexIds=[], selectedIndexId=11044, preAgg=ON )
      +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.region, output=[R_REGIONKEY#0, R_NAME#1, R_COMMENT#2], candidateIndexIds=[], selectedIndexId=11108, preAgg=ON )

after:
LogicalProject ( projects=[S_ACCTBAL#17, S_NAME#13, N_NAME#4, P_PARTKEY#19, P_MFGR#21, S_ADDRESS#14, S_PHONE#16, S_COMMENT#18] )
+--LogicalFilter ( predicates=((((((((P_PARTKEY#19 = PS_PARTKEY#7) AND (S_SUPPKEY#12 = PS_SUPPKEY#8)) AND (P_SIZE#24 = 15)) AND (P_TYPE#23 like '%BRASS')) AND (S_NATIONKEY#15 = N_NATIONKEY#3)) AND (N_REGIONKEY#5 = R_REGIONKEY#0)) AND (R_NAME#1 = 'EUROPE')) AND (PS_SUPPLYCOST#10 = min(PS_SUPPLYCOST)#33)) )
   +--LogicalProject ( projects=[P_PARTKEY#19, P_NAME#20, P_MFGR#21, P_BRAND#22, P_TYPE#23, P_SIZE#24, P_CONTAINER#25, P_RETAILPRICE#26, P_COMMENT#27, S_SUPPKEY#12, S_NAME#13, S_ADDRESS#14, S_NATIONKEY#15, S_PHONE#16, S_ACCTBAL#17, S_COMMENT#18, PS_PARTKEY#7, PS_SUPPKEY#8, PS_AVAILQTY#9, PS_SUPPLYCOST#10, PS_COMMENT#11, N_NATIONKEY#3, N_NAME#4, N_REGIONKEY#5, N_COMMENT#6, R_REGIONKEY#0, R_NAME#1, R_COMMENT#2, min(PS_SUPPLYCOST)#33] )
      +--LogicalApply ( correlationSlot=[P_PARTKEY#19, S_SUPPKEY#12, S_NATIONKEY#15, N_NATIONKEY#3, N_REGIONKEY#5, R_REGIONKEY#0, R_NAME#1], correlationFilter=Optional.empty )
         |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
         |  |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
         |  |  |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
         |  |  |  |--LogicalJoin ( type=CROSS_JOIN, hashJoinCondition=[], otherJoinCondition=[] )
         |  |  |  |  |--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.part, output=[P_PARTKEY#19, P_NAME#20, P_MFGR#21, P_BRAND#22, P_TYPE#23, P_SIZE#24, P_CONTAINER#25, P_RETAILPRICE#26, P_COMMENT#27], candidateIndexIds=[], selectedIndexId=11076, preAgg=ON )
         |  |  |  |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.supplier, output=[S_SUPPKEY#12, S_NAME#13, S_ADDRESS#14, S_NATIONKEY#15, S_PHONE#16, S_ACCTBAL#17, S_COMMENT#18], candidateIndexIds=[], selectedIndexId=11124, preAgg=ON )
         |  |  |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.partsupp, output=[PS_PARTKEY#7, PS_SUPPKEY#8, PS_AVAILQTY#9, PS_SUPPLYCOST#10, PS_COMMENT#11], candidateIndexIds=[], selectedIndexId=11092, preAgg=ON )
         |  |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.nation, output=[N_NATIONKEY#3, N_NAME#4, N_REGIONKEY#5, N_COMMENT#6], candidateIndexIds=[], selectedIndexId=11044, preAgg=ON )
         |  +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.region, output=[R_REGIONKEY#0, R_NAME#1, R_COMMENT#2], candidateIndexIds=[], selectedIndexId=11108, preAgg=ON )
         +--LogicalAggregate ( phase=LOCAL, outputExpr=[min(PS_SUPPLYCOST#31) AS `min(PS_SUPPLYCOST)`#33], groupByExpr=[] )
            +--LogicalFilter ( predicates=(((((P_PARTKEY#19 = PS_PARTKEY#28) AND (S_SUPPKEY#12 = PS_SUPPKEY#29)) AND (S_NATIONKEY#15 = N_NATIONKEY#3)) AND (N_REGIONKEY#5 = R_REGIONKEY#0)) AND (CAST(R_NAME AS STRING) = CAST(EUROPE AS STRING))) )
               +--LogicalOlapScan ( qualified=default_cluster:regression_test_tpch_sf1_p1_tpch_sf1.partsupp, output=[PS_PARTKEY#28, PS_SUPPKEY#29, PS_AVAILQTY#30, PS_SUPPLYCOST#31, PS_COMMENT#32], candidateIndexIds=[], selectedIndexId=11092, preAgg=ON )

```
2022-11-04 15:01:06 +08:00
554f566217 [enhancement](compaction) introduce segment compaction (#12609) (#12866)
## Design

### Trigger

Every time a rowset writer produces more than N (e.g. 10) segments, we trigger segment compaction. Note that at most one segment compaction job runs for a single rowset at a time, to avoid recursion/queuing problems.

### Target Selection

We collect segments on every trigger. We skip big segments whose row count > M (e.g. 10000) because compacting them gives little benefit compared to the effort. Hence, we only pick the "longest consecutive small" segment group for actual compaction.

### Compaction Process

A new thread pool is introduced to do the job. We submit the above-mentioned "longest consecutive small" segment group to the pool. Then the worker thread does the following:

- build a MergeIterator from the target segments
- create a new segment writer
- for each block read from the MergeIterator, the writer appends it to the new segment

### SegID handling

SegID must remain consecutive after segment compaction. 

If a rowset has small segments named seg_0, seg_1, seg_2, seg_3 and a big segment seg_4:

- we create a segment named "seg_0-3" to save compacted data for seg_0, seg_1, seg_2 and seg_3
- delete seg_0, seg_1, seg_2 and seg_3
- rename seg_0-3 to seg_0
- rename seg_4 to seg_1

It is worth noting that we should wait for in-flight segment compaction tasks to finish before building the rowset meta and committing the txn.
2022-11-04 14:12:51 +08:00
948e080b31 [minor](error msg) Fix wrong error message (#13950) 2022-11-04 13:49:46 +08:00
dc01fb4085 [enhancement](Nereids) remove unnecessary string cast (#13730)
convert string-like literals to the target cast type at plan time instead of running the cast at runtime (see the sketch below)
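A hypothetical illustration of the kind of rewrite this enables; the table `t` and the `DATE` column `dt` are assumptions, not from the commit.
```
-- before: comparing dt with a string literal could require a cast at runtime
-- after:  the literal '2022-11-04' is folded into a DATE constant during planning
SELECT * FROM t WHERE dt = '2022-11-04';
```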
2022-11-04 11:18:22 +08:00
9bf20a7b5d [enhancement](Nereids) remove unnecessary int cast (#13881) 2022-11-04 11:07:59 +08:00
efb2596c7a [enhancement](Nereids) enable push down filter through aggregation (#13938) 2022-11-04 11:04:00 +08:00
e09033276e [fix](runtime-filter) build thread being destructed first may cause probe thread coredump (#13911) 2022-11-04 09:29:37 +08:00
f2d84d81e6 [feature-wip][refactor](multi-catalog) Persist external catalog related metadata. (#13746)
Persist external catalog/db/table, including the columns of external tables.
After this change, external objects can have their own unique IDs throughout their lifetime,
which is required for statistics collection.
2022-11-04 09:04:00 +08:00
698541e58d [improvement](exec) add more debug info on fragment exec error (#13899) 2022-11-04 08:55:31 +08:00
5d56fe6d32 [fix](meta)(recover) fix recover info persist bug (#13948)
introduced from #13830
2022-11-04 07:40:21 +08:00
9869915279 [refactor](crossjoin) refactor cross join (#13896) 2022-11-03 22:42:56 +08:00
0a228a68d6 [Improvement](javaudf) support different date argument for date/datetime type (#13920) 2022-11-03 20:33:20 +08:00
5d7b51dcc2 [BugFix](Concat) output of string concat function exceeding UINT causes crash (#13916) 2022-11-03 19:44:44 +08:00
1b36843664 [doc](jsonb type)add documents for JSONB datatype (#13792) 2022-11-03 19:33:51 +08:00
ff935ca1a0 [enhancement](chore) remove debug log which is really too frequent #13909
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
2022-11-03 19:32:44 +08:00
d183199319 [Bug](array-type) Fix array product returning wrong result for decimal type (#13794) 2022-11-03 17:26:34 +08:00
8043418db4 [optimization](array-type) update the exception message when create table with array column (#13731)
This PR updates the exception message when creating a table with an array column.
Co-authored-by: hucheng01 <hucheng01@baidu.com>
2022-11-03 17:12:17 +08:00
c1438cbad6 [revert](Nereids): revert GroupExpression Children ImmutableList. (#13918) 2022-11-03 16:29:54 +08:00
ee934483eb [Enhancement](function) optimize the upper and lower functions using SIMD instructions (#13326)
optimize the `upper` and `lower` functions using SIMD instructions.
2022-11-03 15:12:25 +08:00
b1816d49e7 [fix](typo) check catalog enable exception message spelling mistake (#13925) 2022-11-03 14:44:37 +08:00
6ff306b1ea [docs](round) complement round function documentation (#13838) 2022-11-03 14:30:49 +08:00
5fe3342aa3 [Vectorized](function) support bitmap_to_array function (#13926) 2022-11-03 14:29:28 +08:00
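A usage sketch for the bitmap_to_array commit above; it assumes the companion function `bitmap_from_string` to build a bitmap literal, and the exact output formatting may differ.
```
-- expand a bitmap into an array of the values it contains
SELECT bitmap_to_array(bitmap_from_string('1,2,3,10'));
-- expected shape: [1, 2, 3, 10]
```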
29e01db7ce [Fix](Nereids) add comments to CostAndEnforcerJob and fix view test case (#13046)
1. add comments to CostAndEnforcerJob, as some of the code is hard to understand
2. fix nereids_syntax_p0/view.groovy's multi-answer bug.
2022-11-03 12:12:24 +08:00
bfba058ecf [Feature](join) Support null aware left anti join (#13871) 2022-11-03 12:11:25 +08:00
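For the null-aware left anti join commit above, a sketch of the query shape that typically needs it (NOT IN against a nullable subquery column); table and column names are hypothetical.
```
-- NOT IN must treat NULLs specially: if t2.c2 contains a NULL,
-- no row of t1 qualifies, which a plain anti join would get wrong
SELECT *
FROM t1
WHERE t1.c1 NOT IN (SELECT c2 FROM t2);
```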
57ee5c4a65 [feature](nereids) Support authentication (#13434)
Add a rule to check the permissions of the user who is executing a query. Forbid users who don't have SELECT_PRIV on a table from running queries on that table (see the sketch below).
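A hedged illustration of the privilege being checked; the user and table names are placeholders, and the exact GRANT syntax should be verified against the Doris docs.
```
-- without SELECT_PRIV on db1.tbl1, the query below is now rejected during analysis
GRANT SELECT_PRIV ON db1.tbl1 TO 'alice'@'%';
-- after the grant, alice may run:
SELECT * FROM db1.tbl1;
```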
2022-11-03 11:58:14 +08:00
31d8fdd9e4 [fix](Nereids) finalize local aggregate should not turn on stream pre agg (#13922) 2022-11-03 11:08:06 +08:00
a4a991207b [fix](agg)fix group by constant value bug (#13827)
* [fix](agg)fix group by constant value bug

* keep only one constant grouping expr if there are no agg exprs (see the sketch below)
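A hypothetical query of the shape this fix targets: all grouping expressions are constants and there is no aggregate function; the table name `t` is a placeholder.
```
-- every grouping key is a constant and no aggregate is present,
-- so the planner keeps only a single constant grouping expression
SELECT 'k1', 'k2'
FROM t
GROUP BY 'k1', 'k2';
```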
2022-11-03 10:26:59 +08:00
5a700223fe [fix](function) fix coredump cause by return type mismatch of vectorized repeat function (#13868)
The repeat function will not be supported in the vectorized engine while an upgrade is in progress.
2022-11-03 09:53:02 +08:00
32a029d9dc [enhancement](memtracker) Refactor load channel + memtable mem tracker (#13795) 2022-11-03 09:47:12 +08:00