Commit Graph

11987 Commits

Author SHA1 Message Date
32fce013f7 [feature](docs) add docs dbt-doris adapter (#22067) 2023-07-21 23:34:47 +08:00
0b1c82b021 [opt](nereids) enhance runtime filter pushdown (#21883)
The current runtime filter cannot be pushed down into complicated plan patterns, such as a set operation as a join child, or a CTE sender with a filter before shuffling. This PR refines the pushdown logic so that the filter can be pushed recursively into different layers of the plan tree, e.g. nested subqueries, set operations, and CTE senders.
2023-07-21 23:31:30 +08:00
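The recursive descent described in this commit can be sketched as follows. This is a toy illustration with made-up plan-node names, not the Nereids source: the filter is forwarded through intermediate nodes (set ops, CTE senders, subqueries) until it reaches the scans.

```python
# Hypothetical sketch (not Doris's actual classes): push a runtime filter
# down through intermediate plan nodes recursively until it reaches scans.
class PlanNode:
    def __init__(self, kind, children=None):
        self.kind = kind                 # e.g. "join", "set_op", "cte_sender", "scan"
        self.children = children or []
        self.runtime_filters = []        # filters attached to this node

def push_runtime_filter(node, rf):
    """Attach rf to every scan reachable from node, descending through
    set operations, CTE senders and nested subqueries recursively."""
    if node.kind == "scan":
        node.runtime_filters.append(rf)
        return 1
    # non-leaf nodes simply forward the filter to all children
    return sum(push_runtime_filter(child, rf) for child in node.children)

plan = PlanNode("join", [
    PlanNode("set_op", [PlanNode("scan"), PlanNode("scan")]),
    PlanNode("cte_sender", [PlanNode("scan")]),
])
pushed = push_runtime_filter(plan, "rf0")   # reaches all three scans
```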
afeac4419f [Bug](node) fix partition sort node forgetting to handle some key types in the hashmap (#22037)
* [enhancement](repeat) add filter in repeat node in BE

* update
2023-07-21 23:30:40 +08:00
f7ac827c90 [fix](compaction) fix time series compaction point policy (#21670) 2023-07-21 23:09:02 +08:00
ef01988ae1 [opt](inverted index) support the same column create different type index (#21972) 2023-07-21 23:02:39 +08:00
acf4aa2818 [fix](planner)shouldn't force push down conjuncts for union statement (#22079)
* [fix](planner)shouldn't force push down conjuncts for union statement
2023-07-21 21:12:56 +08:00
85cc044aaa [feature](create-table) support setting replication num for the create table operation globally (#21848)
Add a new FE config `force_olap_table_replication_num`.
If this config is larger than 0, the replication num of any newly created table will be forced to this value.
The default is 0, which has no effect.
This config only affects the create-olap-table operation; other operations such as `add partition` and
`modify table properties` are not affected.

The motivation for this config is that most regression test cases create tables with a single replica,
which lets the regression tests run well in the p0 and p1 pipelines.
But we also need to run these cases on a multi-backend Doris cluster, which requires test cases with multiple replicas,
and it is hard to modify each test case by hand. With this config we can simply force all tables to be created with the
specified replication number.
2023-07-21 19:36:04 +08:00
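The override rule above is small enough to state directly. A minimal sketch, where the config name comes from the commit message but the surrounding function is illustrative, not FE code:

```python
# 0 means "no effect"; any positive value forcibly overrides the
# replication num requested by CREATE TABLE (illustrative sketch).
FORCE_OLAP_TABLE_REPLICATION_NUM = 3

def effective_replication_num(requested: int) -> int:
    if FORCE_OLAP_TABLE_REPLICATION_NUM > 0:
        return FORCE_OLAP_TABLE_REPLICATION_NUM
    return requested

# a test case asking for a single replica gets 3 replicas instead
forced = effective_replication_num(1)
```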
e489b60ea3 [feature](load) support line delimiter for old broker load (#22030) 2023-07-21 19:31:19 +08:00
b76d0d84ac [enhancement](Nereids) support other join framework in DPHyper (#21835)
Implement the CD-A algorithm in order to support other joins in DPHyper.
The algorithm details are in the paper "On the Correct and Complete Enumeration of the Core Search Space".
2023-07-21 18:31:52 +08:00
bed940b7fc [fix](log) column index off-by-one error in scanner logs (#19747) 2023-07-21 18:30:01 +08:00
7cac36d9e8 [chore](Nereids) fix typo in some plan visitor (#21830) 2023-07-21 18:22:20 +08:00
37f230ee3e [pipeline](regression) do not run build if only modified regression conf (#22075)
in order to quickly exclude cases that block the regression pipeline.
2023-07-21 17:13:28 +08:00
c3663c5ff1 [Fix](Sonar)sonar not working due to changing thrift code generation … (#22076) 2023-07-21 17:08:48 +08:00
40299d280d [Fix](json reader) fix rapidjson array->PushBack may take ownership… (#21988)
With the following JSON paths
`["$.data","$.data.datatimestamp"]`

After `array_obj->PushBack`, ownership of the `data` value is transferred to `array_obj`, which leads to null values for the JSON path `$.data.datatimestamp`

Rapidjson doc:
```cpp
//! Append a GenericValue at the end of the array.
/*! \note The ownership of \c value will be transferred to this array on success.
 */
GenericValue& PushBack(GenericValue& value, Allocator& allocator);
```
2023-07-21 17:02:01 +08:00
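The ordering pitfall behind this fix can be mimicked in plain Python without rapidjson: once the parent value is "moved" out of the document, any deeper path under it resolves to null. The tiny `resolve` helper below is illustrative, not the reader's actual code.

```python
import json

def resolve(doc, path):
    """Minimal '$.a.b' resolver (illustrative only)."""
    node = doc
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

raw = '{"data": {"datatimestamp": 1690000000}}'
paths = ["$.data", "$.data.datatimestamp"]

# buggy order: "move" the parent out first (like PushBack taking ownership)
doc = json.loads(raw)
moved = doc.pop("data")
broken = resolve(doc, "$.data.datatimestamp")   # None: value already gone

# fixed order: resolve every path before handing any value over
doc = json.loads(raw)
values = [resolve(doc, p) for p in paths]
```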
d1c5025bce [Fix](compaction) add error message when loading an unsupported compression codec (#22033) 2023-07-21 16:50:46 +08:00
6b20cdb170 [Fix](compaction) Fix SizeBasedCumulativeCompactionPolicy pick_input_rowsets (#21732) 2023-07-21 16:48:53 +08:00
2b2ac10e93 [feature](partial update) add failure tolerance for strict mode partial update stream load 2023-07-21 16:46:44 +08:00
63b17bc7ba [typo](docs) fix some mistake in Doris & Spark Column Type Mapping (#19998) 2023-07-21 16:37:51 +08:00
67a3f37779 [doc](routineload)add routine load ssl example for access ali-kafka (#21877) 2023-07-21 16:03:10 +08:00
db69af1165 [fix](merge-on-write) fix wrong query result during schema change (#22044) 2023-07-21 15:29:04 +08:00
e4c6b9893a [improve](load) add more profiles in tablets channel (#21838) 2023-07-21 13:59:15 +08:00
732e0d14ff [Enhancement](window-funnel)add different modes for window_funnel() function (#20563) 2023-07-21 13:57:27 +08:00
94e2c3cf0f [fix](tablet clone) sched wait slot if has be path (#22015) 2023-07-21 13:27:40 +08:00
74313c7d54 [feature-wip](autoinc)(step-3) add auto increment support for unique table (#22036) 2023-07-21 13:24:41 +08:00
6512893257 [refactor](vectorized) Remove useless control variables to simplify aggregation node code (#22026)
* [refactor](vectorized) Remove useless control variables to simplify aggregation node code

* fix
2023-07-21 12:45:23 +08:00
fb5b412698 [fix](planner)fix bug of pushing conjuncts into inlineview (#21962)
1. The markConstantConjunct method shouldn't change the input conjunct.
2. Use Expr's comeFrom method to check whether the pushed expr is one of the group-by exprs; this is the correct way to check whether the conjunct can be pushed down through the agg node.
3. migrateConstantConjuncts should substitute the conjuncts using inlineViewRef's analyzer so that the analyzer recognizes the columns in the conjuncts in the following analyze phase.
2023-07-21 11:34:56 +08:00
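The comeFrom check in point 2 boils down to a containment test. A minimal sketch with hypothetical names (this is not the planner's API): a conjunct may move below an agg node only when every column it references comes from the GROUP BY expressions.

```python
def can_push_through_agg(conjunct_slots, group_by_slots):
    """A conjunct can be pushed below an aggregation only if all the
    slots it references are group-by expressions (illustrative check)."""
    return set(conjunct_slots) <= set(group_by_slots)

# GROUP BY k1, k2
ok = can_push_through_agg({"k1"}, {"k1", "k2"})    # references a group key
bad = can_push_through_agg({"v1"}, {"k1", "k2"})   # references an aggregated value
```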
b09c4d490a [fix](test) should not create and read internal table when use mock cluster in UT (#21660) 2023-07-21 11:30:26 +08:00
0b2b1cbd58 [improvement](multi-catalog)add last sync time for external catalog (#21873)
Operations that update this time:

1. When a catalog is refreshed, the catalog's lastUpdateTime is updated.
2. When a db is refreshed, the db's lastUpdateTime is updated.
3. When a table schema is reloaded into the cache, the table's lastUpdateTime is updated.
4. When an add/drop table event is received, the db's lastUpdateTime is updated.
5. When an alter table event is received, the table's lastUpdateTime is updated.
2023-07-21 09:42:35 +08:00
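The bookkeeping above maps each event to the object whose timestamp moves. A toy sketch (the classes and event names here are illustrative, not Doris's):

```python
import time

class CatalogObject:
    def __init__(self):
        self.last_update_time = 0            # epoch millis, 0 = never synced
    def touch(self):
        self.last_update_time = int(time.time() * 1000)

catalog, db, table = CatalogObject(), CatalogObject(), CatalogObject()

# event -> object whose lastUpdateTime is refreshed (per the list above)
EVENT_TARGETS = {
    "refresh_catalog": catalog,
    "refresh_db": db,
    "reload_table_schema": table,
    "add_or_drop_table": db,
    "alter_table": table,
}

def on_event(event):
    EVENT_TARGETS[event].touch()

on_event("refresh_db")   # only the db's timestamp moves
```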
6875ef4b8b [refactor](mem_reuse) refactor mem_reuse in MutableBlock (#21564) 2023-07-20 22:53:19 +08:00
f3d9a843dd [Fix](planner)fix ctas incorrect string types of the target table. (#21754)
String types from the src table were replaced with the text type in the CTAS table; we change them to match the src table.
2023-07-20 22:14:43 +08:00
a151326268 [Fix](planner)fix failure running an alias function whose original function contains another alias function. (#21024)
The following SQL fails to run:
```sql
create alias function f1(int) with parameter(n) as dayofweek(hours_add('2023-06-18', n))
create alias function f2(int) with parameter(n) as dayofweek(hours_add(makedate(year('2023-06-18'), f1(3)), n))

 select f2(f1(3))
```
it throws an exception: f1 is not a builtin-function.
Because f2's original function contains f1, and f1 is not a builtin function, f1 should be rewritten first.
We should avoid this for now, and we will support it later.
2023-07-20 22:12:10 +08:00
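The rewrite order the commit describes (inline alias functions recursively until only builtin calls remain) can be sketched with a toy expander. The parser below is deliberately simple and is not the FE implementation:

```python
ALIASES = {
    # name -> (parameter, body template), taken from the SQL above
    "f1": ("n", "dayofweek(hours_add('2023-06-18', n))"),
    "f2": ("n", "dayofweek(hours_add(makedate(year('2023-06-18'), f1(3)), n))"),
}

def find_call(expr, name):
    """Locate the first name(...) call, matching balanced parentheses."""
    i = expr.find(name + "(")
    while i != -1:
        if i == 0 or not (expr[i - 1].isalnum() or expr[i - 1] == "_"):
            depth = 0
            for j in range(i + len(name), len(expr)):
                if expr[j] == "(":
                    depth += 1
                elif expr[j] == ")":
                    depth -= 1
                    if depth == 0:
                        return i, j
        i = expr.find(name + "(", i + 1)
    return None

def expand(expr):
    """Inline alias calls until none remain, so nested aliases such as
    f1 inside f2's body are never mistaken for missing builtins."""
    changed = True
    while changed:
        changed = False
        for name, (param, body) in ALIASES.items():
            pos = find_call(expr, name)
            if pos:
                i, j = pos
                arg = expr[i + len(name) + 1 : j]
                expr = expr[:i] + body.replace(param, arg) + expr[j + 1 :]
                changed = True
    return expr

expanded = expand("f2(f1(3))")   # only builtin calls survive
```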
398defe10b [fix](publish) fix check_version_exist coredump (#22038) 2023-07-20 22:01:15 +08:00
ab11dea98d [Enhancement](config) optimize behavior of default_storage_medium (#20739) 2023-07-20 22:00:11 +08:00
7d488688b4 [fix](multi-catalog)fix minio default region and throw minio error msg, support s3 bucket root path (#21994)
1. Check the MinIO region, set a default region if the user does not provide one, and surface the MinIO error msg.
2. Support reading the root path s3://bucket1.
3. Fix MaxCompute public access.
2023-07-20 20:48:55 +08:00
eabd5d386b [Fix](multi catalog)Fix nereids context table always use internal catalog bug (#21953)
The getTable function in CascadesContext only handles the internal catalog case (it tries to find the table only in the internal
catalog and its dbs). However, it should take all the external catalogs into consideration; otherwise it will fail to find a
table, or get the wrong table, when querying an external table. This PR fixes this bug.
2023-07-20 20:32:01 +08:00
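The shape of the fix is a one-line change in where the lookup starts. A sketch with illustrative data (not the Nereids code): resolve the table through the catalog the session actually points at, instead of hard-coding the internal one.

```python
CATALOGS = {
    "internal": {"db1": {"t1": "internal.db1.t1"}},
    "hive":     {"db1": {"t1": "hive.db1.t1"}},
}

def get_table(current_catalog, db, table):
    # the buggy version was equivalent to CATALOGS["internal"][db][table];
    # the fix consults whichever catalog the query context names
    return CATALOGS[current_catalog][db][table]

resolved = get_table("hive", "db1", "t1")
```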
e4ac52b2aa [Improvement](profile)Add init and finalize external scan node time in profile (#21749)
Add more profile information for external table plan time, including init and finalize scan node time, getSplits time, create scan range time, get all partitions time, and get all files for all partitions time. Also adjusted the indentation to make it easier to read.

This is an example output of the new profile summary. 
```
    Execution  Summary:
          -  Analysis  Time:  3ms
          -  Plan  Time:  26s885ms
              -  JoinReorder  Time:  N/A
              -  CreateSingleNode  Time:  N/A
              -  QueryDistributed  Time:  N/A
              -  Init  Scan  Node  Time:  1ms
              -  Finalize  Scan  Node  Time:  26s868ms
                  -  Get  Splits  Time:  26s554ms
                      -  Get  PARTITIONS  Time:  20s189ms
                      -  Get  PARTITION  FILES  Time:  6s289ms
                  -  Create  Scan  Range  Time:  314ms
          -  Schedule  Time:  1s67ms
          -  Fetch  Result  Time:  56ms
          -  Write  Result  Time:  0ms
          -  Wait  and  Fetch  Result  Time:  57ms
```
2023-07-20 20:29:18 +08:00
0e8432526e [fix](multi-catalog) check properties when alter catalog (#20130)
When altering the catalog, we did not verify the catalog's new parameters; verification has now been added.
My changes:
When altering the catalog, a full validation is now performed, and if an exception occurs the parameters are rolled back.
2023-07-20 20:18:14 +08:00
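The validate-then-rollback pattern from this commit can be sketched as follows. The `Catalog` class and its required property are made up for illustration:

```python
import copy

class Catalog:
    def __init__(self, props):
        self.props = props

    def validate(self):
        # stand-in for the full property inspection described above
        if not self.props.get("uri"):
            raise ValueError("catalog property 'uri' must not be empty")

    def alter(self, new_props):
        backup = copy.deepcopy(self.props)
        self.props.update(new_props)
        try:
            self.validate()
        except ValueError:
            self.props = backup   # roll back the parameters on failure
            raise

cat = Catalog({"type": "hms", "uri": "thrift://host:9083"})
rolled_back = False
try:
    cat.alter({"uri": ""})        # invalid alteration is rejected
except ValueError:
    rolled_back = True
```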
aabe379527 [fix](stats) support utf-8 string range compare (#22024)
In the previous version, some UTF-8 string literals were mapped to negative doubles, which made our range check malfunction.
2023-07-20 18:39:41 +08:00
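The failure mode can be reconstructed with integers to keep it runnable (the stats code maps strings to doubles): treating the leading UTF-8 bytes as signed makes any multi-byte character, whose first byte is >= 0x80, map to a negative key and invert the ordering.

```python
def range_key(s: str, signed: bool) -> int:
    """Pack the first 8 UTF-8 bytes of s into an integer key (illustrative)."""
    b = s.encode("utf-8")[:8].ljust(8, b"\x00")
    return int.from_bytes(b, "big", signed=signed)

# 'abc' sorts before the CJK character byte-wise, but the signed key
# inverts the comparison because 0xE4 reads as a negative top byte:
buggy = range_key("abc", signed=True) < range_key("中", signed=True)
fixed = range_key("abc", signed=False) < range_key("中", signed=False)
```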
ee65e0a6b1 [fix](Nereids) should not remove any limit from uncorrelated subquery (#21976)
We should not remove any limit from an uncorrelated subquery. For example:
```sql
-- should return nothing, but returns all tuples of t if we remove the limit from exists
SELECT * FROM t WHERE EXISTS (SELECT * FROM t limit 0);

-- should return the tuple with the smallest c1 in t,
-- but reports an error if we remove the limit from the scalar subquery
SELECT * FROM t WHERE c1 = (SELECT * FROM t ORDER BY c1 LIMIT 1);
```
2023-07-20 18:37:04 +08:00
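Both cases are easy to check against SQLite, whose subquery-LIMIT semantics match what the commit expects (the scalar subquery here selects `c1` explicitly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# LIMIT 0 makes EXISTS always false, so no rows may come back
empty = conn.execute(
    "SELECT * FROM t WHERE EXISTS (SELECT * FROM t LIMIT 0)").fetchall()

# ORDER BY ... LIMIT 1 pins the scalar subquery to the smallest c1
smallest = conn.execute(
    "SELECT * FROM t WHERE c1 = (SELECT c1 FROM t ORDER BY c1 LIMIT 1)").fetchall()
```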
7947569993 [Bug][RegressionTest] fix the DCHECK failed in join code (#22021) 2023-07-20 18:12:20 +08:00
367ad9164a [feature-wip](auto-inc)(step-2) support auto-increment column for duplicate table (#19917) 2023-07-20 18:03:39 +08:00
c31e826756 [opt](config) rename alter_inverted_index_worker_count to alter_index_worker_count, and add docs (#21985) 2023-07-20 17:50:04 +08:00
650d7cfc8c [enhancement](repeat) add filter in repeat node in BE (#21984)
2023-07-20 17:25:13 +08:00
be2754e1a2 [fuzzy](modify) enable pipeline and nereids in regression env by default (#21824)
2023-07-20 17:12:21 +08:00
4cfe990095 [enhancement](Nereids) add test framework for otherjoin (#21887) 2023-07-20 16:35:55 +08:00
c65781d4b8 [feature](dbt) materialization table skips the backup process (#21993)
1. The materialization table skips the backup process.
2. The materialization table switches to full-refresh mode atomically.
3. Handle the case where `rename table` is null.
2023-07-20 15:59:55 +08:00
365afb5389 [fix](sparkdpp) Hive table properties do not take effect when creating the Spark session (#21881)
When creating a Hive external table for Spark loading, the Hive external table includes related information such as the Hive Metastore. However, when submitting a job, it is required to have the hive-site.xml file in the Spark conf directory; otherwise, the Spark job may fail with an error message indicating that the corresponding Hive table cannot be found.

The SparkEtlJob.initSparkConfigs method sets the properties of the external table into the Spark conf. However, at this point, the Spark session has already been created, and the Hive-related parameters will not take effect. To ensure that the Spark Hive catalog properly loads Hive tables, you need to set the Hive-related parameters before creating the Spark session.

Co-authored-by: zhangshixin <zhangshixin@youzan.com>
2023-07-20 14:36:00 +08:00
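The order-of-operations bug can be mimicked with a tiny mock instead of pyspark: a session snapshots its conf at creation time, which is why SparkEtlJob must apply the Hive properties before the SparkSession exists rather than after.

```python
class MockSparkSession:
    """Stand-in for SparkSession: conf is frozen at construction (illustrative)."""
    def __init__(self, conf):
        self._conf = dict(conf)

    def catalog_sees(self, key):
        return self._conf.get(key)

hive_props = {"hive.metastore.uris": "thrift://hms:9083"}

# too late: the session was built before the Hive properties were applied
late = MockSparkSession({})
seen_late = late.catalog_sees("hive.metastore.uris")   # the catalog sees nothing

# correct order: merge the external table's properties first, then create
good = MockSparkSession(hive_props)
seen_good = good.catalog_sees("hive.metastore.uris")
```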
2ae9bfa3b2 [typo](docs) add oracle jdbc catalog FAQ of orai18n.jar (#22016) 2023-07-20 14:10:58 +08:00
9182b8d3c2 [Refactor](exec) Remove the unused header of vresult_writer (#22011)
Remove unused code of vresult_writer.
2023-07-20 13:31:44 +08:00
86d7233b06 [fix](nereids) ExtractAndNormalizeWindowExpression rule should push down correct exprs to child (#21827)
consider the window function:
```sql
substr(
ref_1.cp_type,
sum(CASE WHEN ref_1.cp_type = 0 THEN 3 ELSE 2 END) OVER (),
1)
```
Before this PR, only "CASE WHEN ref_1.cp_type = 0 THEN 3 ELSE 2 END" was pushed down,
but both "ref_1.cp_type" and "CASE WHEN ref_1.cp_type = 0 THEN 3 ELSE 2 END"
should be pushed down.
This PR fixes it.
2023-07-20 11:47:55 +08:00
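The collection rule can be sketched over a toy expression tree (hypothetical node shapes, not the Nereids rule): when normalizing `substr(cp_type, sum(CASE ...) OVER (), 1)`, the child project must receive both the bare column reference and the window aggregate's input expression.

```python
def collect_pushdown(expr, out):
    """Gather everything the child project must produce (illustrative)."""
    kind = expr[0]
    if kind == "slot":
        out.add(expr[1])          # e.g. ref_1.cp_type used directly by substr
    elif kind == "window_agg":
        out.add(expr[1])          # the aggregate's input, pushed verbatim
    else:                         # generic function call: recurse into children
        for child in expr[2]:
            collect_pushdown(child, out)
    return out

CASE_WHEN = "CASE WHEN ref_1.cp_type = 0 THEN 3 ELSE 2 END"
tree = ("func", "substr", [
    ("slot", "ref_1.cp_type"),
    ("window_agg", CASE_WHEN),
    ("literal", "1", []),
])
pushed = collect_pushdown(tree, set())   # contains both required exprs
```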