Commit Graph

2218 Commits

Author SHA1 Message Date
2019bb3870 [fix](bitmap) fix wrong result of bitmap intersect functions (#22735)
* [fix](bitmap) fix wrong result of bitmap intersect functions

* fix test case
2023-08-09 18:31:24 +08:00
690a519742 [fix](Nereids) disable or expansion when pipeline engine is disable (#22719) 2023-08-09 14:33:50 +08:00
31a4d6059e [fix](case) if enable_feature_binlog=false in frontend config, no nee… (#22692)
if enable_feature_binlog=false in frontend config, no nee…
2023-08-09 13:15:50 +08:00
7890e464ee [fix](case) 1. disable unstable case window_function 2. add sync after stream load (#22677)
* Update test_dup_tab_auto_inc_with_null.groovy, add sync after stream load

* Update test_unique_table_auto_inc.groovy, add sync after stream load

* [fix](case) disable unstable case window_function
2023-08-09 11:03:31 +08:00
4608dcb2d9 [fix](agg) fix coredump caused by push down count aggregation (#22699)
fix coredump caused by push down count aggregation
2023-08-09 10:21:20 +08:00
66784cef71 [Enhancement](Load) Stream Load using SQL (#22509)
This PR was originally #16940, but it had not been updated for a long time by the original author @Cai-Yao. For now, we will merge some of the code into master first.

thanks @Cai-Yao @yiguolei
2023-08-08 13:49:04 +08:00
1617368ee1 [fix](planner) fix bug of push constant conjuncts through set operation node (#22695)
When pushing a constant conjunct down into a set operation node, we should assign the conjunct to the agg node if there is one. This is consistent with pushing a constant conjunct into an inline view.
2023-08-08 12:25:42 +08:00
91b15183e7 [enhance][external]enhance and fix external cases 0807 (#22689)
enhance and fix external cases 0807
2023-08-08 10:53:08 +08:00
e578e1e6a2 [opt](Nereids) turnoff pipeline when dml temporary (#22693)
The pipeline engine does not yet work well for DML.
2023-08-08 10:26:40 +08:00
36cea89c22 [Fix](planner)support delete conditions contain non-key columns and add check in analyze phase for delete. (#22673) 2023-08-07 21:49:53 +08:00
c9dc715c5d [fix](broker-load) fix error when using multi data description for same table in load stmt (#22666)
For a load request, there are two tuples on the scan node: the input tuple and the output tuple.
The input tuple is for reading the file, and it is converted to the output tuple based on the user-specified column mappings.

Broker load supports different column mappings in different data descriptions for the same table (or partition).
So for each scanner, the output tuple is the same but the input tuple can be different.

The previous implementation saved the input tuple at the scan node level, causing different scanners to use the same input tuple, which is incorrect.
This PR removes the input tuple from the scan node and saves it in each scanner.
2023-08-07 20:03:03 +08:00
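A minimal Python sketch (not Doris code; all names are hypothetical) of why the input tuple must live on each scanner rather than on the shared scan node: two data descriptions for the same table can have different file layouts, so each scanner keeps its own input schema and mapping while sharing one output schema.

```python
# The shared output tuple: the columns of the target table.
OUTPUT_COLUMNS = ["k1", "v1"]

def make_scanner(input_columns, mapping):
    """Each scanner keeps its own input tuple (file columns) and a
    user-specified column mapping from input values to output columns."""
    def scan(row_values):
        inputs = dict(zip(input_columns, row_values))
        # Convert the input tuple to the output tuple via the mapping.
        return {out: mapping[out](inputs) for out in OUTPUT_COLUMNS}
    return scan

# Two data descriptions for the same table, with different file layouts.
scanner_a = make_scanner(["c1", "c2"], {"k1": lambda r: r["c1"],
                                        "v1": lambda r: r["c2"]})
scanner_b = make_scanner(["x"], {"k1": lambda r: r["x"],
                                 "v1": lambda r: r["x"] * 2})

row_a = scanner_a([1, 10])
row_b = scanner_b([3])
```

If both scanners were forced to share one input tuple at the scan-node level, one of the two file layouts above could not be read correctly.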
bc697ca9d6 [fix](time) fix error in time_to_sec 2023-08-07 17:33:24 +08:00
d9c93aaa1c [fix](regression) fix failed test delete_p0 in branch-2.0 #22652 2023-08-07 16:42:19 +08:00
f036cdfde6 [feature](compaction) support delete in cumulative compaction (#19609) 2023-08-07 15:22:21 +08:00
c82b9bd76b [test](pipline) exclude case test_doris_jdbc_catalog (#22664) 2023-08-07 15:13:34 +08:00
c31226b144 [refactor](regression-test) sort out test cases of external tables (#22640)
Sort out the test cases of external tables.
After this change, there are 2 directories:

1. `external_table_p0`: all p0 cases of external tables: hive, es, jdbc and tvf
2. `external_table_p2`: all p2 cases of external tables: hive, es, mysql, pg, iceberg and tvf

So that we can run them with a one-line command like:

```
sh run-regression-test.sh --run -d external_table_p0,external_table_p2
```
2023-08-07 11:12:30 +08:00
023815a4b4 [fix](planner)runtime filter shouldn't be pushed through window function node (#22501) 2023-08-07 09:57:12 +08:00
1a8a1e5b16 [Feature](count_by_enum) support count_by_enum function (#22071)
count_by_enum(expr1, expr2, ... , exprN);

Treats the data in a column as an enumeration and counts the number of values in each enumeration. For each column, it returns the count of each enumerated value, along with the number of non-null versus null values.
2023-08-06 16:05:14 +08:00
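A rough Python model of the semantics described above (the real function is implemented in Doris and returns its result as JSON; the key names here are assumptions for illustration):

```python
from collections import Counter

def count_by_enum(*columns):
    """For each column, count occurrences of each distinct non-null
    value, plus how many values are non-null vs. null."""
    result = []
    for col in columns:
        non_null = [v for v in col if v is not None]
        result.append({
            "cbe": dict(Counter(non_null)),   # per-value counts
            "notnull": len(non_null),
            "null": len(col) - len(non_null),
            "all": len(col),
        })
    return result

stats = count_by_enum(["F", "F", "M", None])
```

For the sample column above, the model yields counts `{"F": 2, "M": 1}` with 3 non-null and 1 null value.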
3fe9f422e4 [fix](case) add order by in repositoryAffinityList1.sql (#22605) 2023-08-06 12:11:52 +08:00
75da60f1cc [regression-test](delete) should drop table test before create it (#22633)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-08-05 21:07:46 +08:00
c01b6e0ba7 [fix](regression test) redefine variable in the test_table_level_compaction_policy (#22604)
redefine variable in the test_table_level_compaction_policy
2023-08-05 12:55:06 +08:00
d3b50e3b2a [BUG](date_trunc) fix date_trunc function only handle lower string (#22602)
Fix the date_trunc function, which previously handled only lowercase unit strings.
2023-08-05 12:53:13 +08:00
fe6bae2924 [fix](invert index) supports utf8 and non-utf8 strings (#22570)
supports utf8 and non-utf8 strings: [fix] compatible with utf8 and invalid utf8 doris-thirdparty#110
2023-08-05 12:52:53 +08:00
3024b82918 [fix](load)Fix wrong default value for char and varchar of reading json data (#22626)
If a column is defined as `col VARCHAR/CHAR NULL` with no default value, and we load JSON data that is missing column `col`, the queried result is incorrect:
+------+
| col |
+------+
| 1 |
+------+
But the expected result is:
+------+
| col |
+------+
| NULL |
+------+


Co-authored-by: duanxujian <duanxujian@jd.com>
2023-08-05 12:47:27 +08:00
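A minimal sketch (not Doris code; the schema shape is a hypothetical stand-in) of the fixed behavior: when a nullable column with no default value is absent from a loaded JSON object, the loaded value should be NULL, not some spurious value.

```python
import json

def load_row(json_line, schema):
    """schema: {column_name: (nullable, default)}"""
    obj = json.loads(json_line)
    row = {}
    for col, (nullable, default) in schema.items():
        if col in obj:
            row[col] = obj[col]
        elif default is not None:
            row[col] = default
        elif nullable:
            row[col] = None          # the fixed behavior: NULL, not garbage
        else:
            raise ValueError(f"missing non-nullable column {col}")
    return row

schema = {"id": (False, None), "col": (True, None)}
row = load_row('{"id": 1}', schema)   # "col" is missing from the JSON
```

With this rule, the missing `col` loads as `None` (NULL), matching the expected query result above.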
ef0e0b7d79 [case](fix) add sync after stream load (#22601) 2023-08-05 08:28:26 +08:00
265cded7da [Fix](Planner) fix window function in aggregation (#22603)
Problem:
When a window function appears inside an aggregation function, the executor reports an error like: Required field 'node_type' was not present!

Example:
SELECT SUM(MAX(c1) OVER (PARTITION BY c2, c3)) FROM test_window_in_agg;

Reason:
When analyzing the aggregate, the analytic expr (the carrier of the window function during analysis) is transferred to a slot and loses information. So when serializing to the thrift package, TExpr cannot determine the node_type of the analytic expr.

Solved:
We do not support aggregate(window function) yet, so we report an error during analysis.
2023-08-04 19:15:51 +08:00
9f92861c91 [fix](stats) Load partition stats unexpectedly (#22589)
The syncLoadColStats method invoked a stale method to deserialize column stats after support for loading partition stats was added.
2023-08-04 18:50:38 +08:00
7fe08c74fe [fix](inverted index) return empty result instead of error for empty match query (#22592)
Return an empty result instead of an error for an empty match query, as follows:

`SELECT * FROM t WHERE msg MATCH ''`

`SELECT * FROM t WHERE msg MATCH 'stop_word'`
2023-08-04 17:36:32 +08:00
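A sketch of the fixed behavior in Python (not the inverted-index implementation; the tokenizer and stop-word list are hypothetical): a query that tokenizes to nothing, whether an empty string or only stop words, should yield an empty result set rather than raise an error.

```python
STOP_WORDS = {"stop_word", "the", "a"}  # assumed stop-word list

def tokenize(query):
    return [t for t in query.split() if t and t not in STOP_WORDS]

def match(rows, query):
    """Match rows whose text contains any query token."""
    tokens = tokenize(query)
    if not tokens:
        return []   # previously this case raised an error
    return [r for r in rows if any(t in r.split() for t in tokens)]

rows = ["hello world", "doris rocks"]
empty1 = match(rows, "")            # empty query
empty2 = match(rows, "stop_word")   # only stop words
hit = match(rows, "doris")
```

Both degenerate queries now return `[]`, while a normal token still matches.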
7d1e08eafa [Fix](Nereids) rand() and uuid() should not fold constant (#22492)
rand() and uuid() should not be constant-folded, and we change the default value of constant folding for non-deterministic functions to false.
2023-08-04 15:36:03 +08:00
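The rule above can be sketched in a toy constant-folding pass (hypothetical expression encoding, not the Nereids implementation): non-deterministic calls must be left unevaluated, otherwise every row would see the same "random" value.

```python
# Expressions: ("lit", value) or ("call", name, *args)
NON_DETERMINISTIC = {"rand", "uuid", "random"}

def fold(expr):
    """Fold a call to a literal only if it is deterministic and all of
    its arguments fold to literals; otherwise keep the call node."""
    if expr[0] == "lit":
        return expr
    _, name, *args = expr
    args = [fold(a) for a in args]
    foldable = (name not in NON_DETERMINISTIC
                and all(a[0] == "lit" for a in args))
    if foldable:
        fns = {"add": lambda x, y: x + y, "neg": lambda x: -x}
        return ("lit", fns[name](*(a[1] for a in args)))
    return ("call", name, *args)

folded = fold(("call", "add", ("lit", 1), ("lit", 2)))  # deterministic
kept = fold(("call", "rand"))                           # must survive
```

`add(1, 2)` collapses to a literal, while `rand()` stays a call node to be evaluated per row.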
ef53a27887 [fix](nereids) allow in or exists subquery in binary operator (#22391)
Support subqueries in binary operators, e.g. `if(xx in (subquery), 1, 0)`.
2023-08-04 15:35:19 +08:00
d379b04b39 [fix](planner) fix bug of push conjuncts through second phase agg (#22417)
If there is a second-phase agg, the output of the first-phase agg is its intermediate tuple, not the output tuple.
This PR fixes it.
2023-08-04 15:21:18 +08:00
658d75c816 [feature](Nereids): normalize join condition after expanding or condition NLJ (#22555) 2023-08-04 13:37:37 +08:00
d5a21de796 [Enhancement](planner)support fold constant for date_trunc() (#22122) 2023-08-04 13:32:48 +08:00
f828a3d826 [shape](nereids) ssb sf100 plan shape check (#22596) 2023-08-04 13:12:21 +08:00
62b1a7bcf3 [tpcds](nereids) add rule to eliminate empty relation #22203
1. eliminate empty relation
2. constant fold after filter pushdown
2023-08-04 12:49:53 +08:00
0e9fad4fe9 [stats](nereids) improve Anti join stats estimation #22444
No impact on TPC-H. TPC-DS queries 16, 69, and 94 improved.
2023-08-04 12:48:39 +08:00
0d9caaee7d not run workload group by default (#22497) 2023-08-04 10:12:01 +08:00
3447a70b25 [Fix](planner)fix delete stmt contains where but delete all data. (#22563) 2023-08-03 23:44:05 +08:00
ab0d01d2b4 [fix](case) add sync, test_range_partition.groovy (#22556)
add sync, test_range_partition.groovy
2023-08-03 19:41:54 +08:00
469886eb4e [FIX](array)fix if function for array() #22553
2023-08-03 19:40:45 +08:00
23a69e860d [fix](regression) fix flaky regression test delete_mow_partial_update (#22548) 2023-08-03 19:26:42 +08:00
c63e3e6959 [fix](regression) fix test_table_level_compaction_policy
2023-08-03 15:24:17 +08:00
22344d6e4a [test](pipline) exclude fail case (#22546)
exclude fail case
2023-08-03 15:18:26 +08:00
4322fdc96d [feature](Nereids): add or expansion in CBO(#22465) 2023-08-03 13:29:33 +08:00
205a0793e9 [fix](regression) fix flaky test test_partial_update_schema_change (#22500)
* update

* update
2023-08-03 09:32:48 +08:00
938f768aba [fix](parquet) resolve offset check failed in parquet map type (#22510)
Fix an error when reading empty map values in Parquet. The `offsets.back()` does not equal the number of elements in the map's key column.

### How does this happen
A map in Parquet is stored as a repeated group, and `repeated_parent_def_level` was set incorrectly when parsing the map node in the Parquet schema.
```
the map definition in parquet:
 optional group <name> (MAP) {
   repeated group map (MAP_KEY_VALUE) {
     required <type> key;
     optional <type> value;
   }
}
```

### How to fix
Set the `repeated_parent_def_level` of key/value node as the definition level of map node.

`repeated_parent_def_level` is the definition level of the first ancestor node whose `repetition_type` equals `REPEATED`. Empty array/map values are not stored in the Doris column, so we have to use `repeated_parent_def_level` to skip the empty or null values in the ancestor node.

For instance, consider an array of strings with 3 rows like the following:
`null, [], [a, b, c]`
We can store four elements in the data column: `null, a, b, c`
and the offsets column is: `1, 1, 4`
and the null map is: `1, 0, 0`
For the `i-th` row in the array column, the range from `offsets[i - 1]` to `offsets[i]` represents the elements in that row, so we can't store empty array/map values in the Doris data column. As a comparison, Spark does not require `repeated_parent_def_level`, because the Spark column stores empty array/map values and uses another length column to indicate empty values. Please reference: https://github.com/apache/spark/blob/master/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/ParquetColumnVector.java

Furthermore, we can also avoid storing null array/map values in the Doris data column. For the same three rows as above, we can store only three elements in the data column: `a, b, c`
and the offsets column is: `0, 0, 3`
and the null map is: `1, 0, 0`
2023-08-02 22:33:10 +08:00
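The offsets/null-map encoding described above can be sketched in a few lines of Python (a model of the layout, not Doris code), using the variant in which null rows, like empty rows, contribute nothing to the data column:

```python
def encode(rows):
    """Encode array rows into a flat data column, an offsets column
    (end index of each row's slice), and a null map."""
    data, offsets, null_map = [], [], []
    for row in rows:
        null_map.append(1 if row is None else 0)
        if row:                      # None and [] both add no elements
            data.extend(row)
        offsets.append(len(data))
    return data, offsets, null_map

def decode_row(i, data, offsets, null_map):
    """Recover row i: null rows come from the null map; otherwise the
    slice data[offsets[i-1]:offsets[i]] holds the row's elements."""
    if null_map[i]:
        return None
    start = offsets[i - 1] if i > 0 else 0
    return data[start:offsets[i]]

rows = [None, [], ["a", "b", "c"]]
data, offsets, null_map = encode(rows)
```

For the three sample rows this reproduces the columns from the commit message: data `a, b, c`, offsets `0, 0, 3`, null map `1, 0, 0`, and decoding round-trips each row.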
3a787b6684 [improvement](regression) syncer regression test (#22490) 2023-08-02 20:09:27 +08:00
8cac8df40c [Fix](Planner) fix create view tosql not include partition (#22482)
Problem:
When creating a view with a join on table partitions, an error like "Unknown column" is raised.

Example:
CREATE VIEW my_view AS SELECT t1.* FROM t1 PARTITION(p1) JOIN t2 PARTITION(p2) ON t1.k1 = t2.k1;
select * from my_view ==> errCode = 2, detailMessage = Unknown column 'k1' in 't2'

Reason:
When creating a view, we do toSql first in order to persist the view SQL. When doing toSql of a table reference, the PARTITION keyword was removed to keep the SQL string neat. But when we remove the PARTITION keyword here, the partition name is regarded as an alias. So the "PARTITION" keyword cannot be removed.

Solved:
Add the "PARTITION" keyword back to the toSql string.
2023-08-02 20:04:59 +08:00
0cd5183556 [Refactor](inverted index) refactor tokenize function for inverted index (#22313) 2023-08-02 19:12:22 +08:00
a4ef340777 [test](pipline) adjust mem limit to 90 & exclude some cases (#22445)
adjust mem limit to 90 & exclude some cases
2023-08-02 15:11:22 +08:00