Commit Graph

2127 Commits

Author SHA1 Message Date
8caa5a9ba4 [Fix](multi-catalog) Fix null partitions error in iceberg tables. (#22185)
### Issue
When a table has null partitions, querying it throws the error
`Failed to fill partition column: t_int=null`

### Resolution
- Fix the null partition error in Iceberg tables by replacing null partitions with '\N' (see the sketch below).
- Add a regression test for Hive null partitions.
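A minimal reproduction sketch, assuming a hypothetical Iceberg table `iceberg.demo_db.part_tbl` partitioned by `t_int` with a null partition; the catalog and table names are illustrative only:
```sql
-- hypothetical Iceberg table partitioned by t_int, where one partition has t_int = null
SELECT * FROM iceberg.demo_db.part_tbl WHERE t_int IS NULL;
-- before this fix: Failed to fill partition column: t_int=null
-- after this fix:  rows from the null partition are returned with t_int = NULL
```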
2023-07-27 23:57:35 +08:00
b5fa29e138 [fix](bitmap) incorrect result of function 'bitmap_from_array' (#22305) 2023-07-27 22:44:06 +08:00
e39d234db9 [opt](inverted index) add more check for create inverted index (#22297) 2023-07-27 20:33:24 +08:00
716d58f5ff [fix](Nereids) decimal divide should not return null if numerator is zero (#22309) 2023-07-27 20:23:04 +08:00
a87d34b19b [Fix](multi catalog statistics) Improve external table statistics collection (#22224)
Improve external table statistics collection, including logging, observability, and several bug fixes (a usage sketch follows the list).
1. Add a Running state for statistics jobs.
2. Add progress to SHOW ANALYZE for jobs (n/m tasks finished, n/m tasks failed, and so on).
3. Add the analyze time cost to SHOW ANALYZE for tasks.
4. Make task failure messages clearer.
5. Synchronize the job status updating code in updateTaskStatus.
6. Fix an NPE in HMSAnalyzeTask (avoid refreshing the statistics cache if the collection SQL failed).
7. Return an error message for synchronous (WITH SYNC) collection on timeout.
8. Improve log levels.
9. Fix misuse of logCreateAnalysisJob for tasks.
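A hedged usage sketch of the statements these items refer to, assuming the ANALYZE / SHOW ANALYZE syntax of Doris 2.x and a hypothetical external table `hms_ctl.db.tbl`; exact output columns may differ by version:
```sql
-- synchronous collection: errors (including timeouts) are returned to the client
ANALYZE TABLE hms_ctl.db.tbl WITH SYNC;
-- inspect job state and progress (running / finished / failed task counts, time cost)
SHOW ANALYZE;
```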
2023-07-27 20:01:14 +08:00
8a03766c58 [fix](test) change some names in regression tests to avoid name conflicts when running in parallel (#22273) 2023-07-27 19:38:01 +08:00
ae5e39ad26 [opt](Nereids) add double signature back for round like function (#22284)
add double signature back for round like function
2023-07-27 19:10:43 +08:00
Pxl
87b9425772 [Bug](materialized-view) fix where clause not analyzed after fe restart (#22268)
fix where clause not analyzed after fe restart
2023-07-27 18:34:44 +08:00
Pxl
05d18b2f68 [Chore](regression-test) remove insert into select on enable nereids dml (#22291)
remove insert into select on enable nereids dml
2023-07-27 17:18:49 +08:00
6f1c03c766 [fix](jdbc_catalog) fix int and bigint in mysql views when using the doris catalog (#22251) 2023-07-27 16:50:42 +08:00
0512e0b168 [test](regression) add cases for partial update with sequence_type (#22215) 2023-07-27 15:51:01 +08:00
4f6a3c5bf0 [feature](catalog) support clob type in oracle jdbc catalog (#21532) 2023-07-27 15:49:15 +08:00
0670e38d5c [pipeline](update) exclude block case test_round (#22290) 2023-07-27 15:38:00 +08:00
ddfdf62993 [opt](planner) support to parse scientific notation(aEb) (#22248) 2023-07-27 13:31:37 +08:00
8b51bfa384 [fix](planner) fix bug of unexpected nest loop join (#22236)
Use isLiteral instead of isConstant to check whether the expr is a literal. This prevents the unexpected nested loop join; see the test case for details.
2023-07-27 13:20:29 +08:00
41a230b721 [fix] iceberg catalog: support specifying the version and time (#22209)
Problem:
1. Create an iceberg-type catalog.
2. Use the iceberg catalog to specify a version:
```
mysql> show catalog iceberg;
+----------------------+--------------------------+
| Key                  | Value                    |
+----------------------+--------------------------+
| type                 | iceberg                  |
| iceberg.catalog.type | hms                      |
| hive.metastore.uris  | thrift://127.0.0.1:9083  |
| hadoop.username      | hadoop                   |
| create_time          | 2023-07-25 16:51:00.522  |
+----------------------+--------------------------+
5 rows in set (0.02 sec)

mysql> select * from iceberg.iceberg_db.tb1 FOR VERSION AS OF 8783036402036752909;
ERROR 5090 (42000): errCode = 2, detailMessage = Only iceberg/hudi external table supports time travel in current version
```

Change:
Add the `ICEBERG_EXTERNAL_TABLE` type to support specifying the version and time.
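A sketch of the queries this change is expected to enable, reusing the catalog and table from the example above; the snapshot id comes from the example and the timestamp is a placeholder:
```sql
-- time travel by snapshot version
SELECT * FROM iceberg.iceberg_db.tb1 FOR VERSION AS OF 8783036402036752909;
-- time travel by timestamp (placeholder value)
SELECT * FROM iceberg.iceberg_db.tb1 FOR TIME AS OF '2023-07-25 16:51:00';
```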
2023-07-27 12:04:41 +08:00
619a2857e1 [improvement](jdbc catalog) improve mysql jdbc catalog read of bytea's types & other improvements (#22233) 2023-07-27 10:18:37 +08:00
052a416d49 [Enhancement](binlog) db enable binlog (#22256)
* Improve DB-level update of binlog properties (binlog.enable = "true") by checking that all tables have binlog enabled

* Add more test_alter_database_property regression tests
2023-07-27 10:03:51 +08:00
341c45974c [round](decimalv2) round precise decimalv2 value (#22258) 2023-07-27 10:00:36 +08:00
163a38a527 [opt](Nereids) support sql cache (#22144)
1. Let Nereids support the SQL cache.
2. Let the legacy planner's SQL cache support UNION ALL (see the sketch below).
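A hedged usage sketch, assuming the `enable_sql_cache` session variable controls this cache and a hypothetical table `db.tbl`:
```sql
SET enable_sql_cache = true;
-- under Nereids, repeated identical queries can now hit the SQL cache
SELECT c1, count(*) FROM db.tbl WHERE dt = '2023-07-26' GROUP BY c1;
-- legacy planner: a UNION ALL query is now also cacheable
SELECT c1 FROM db.tbl WHERE dt = '2023-07-26'
UNION ALL
SELECT c1 FROM db.tbl WHERE dt = '2023-07-25';
```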
2023-07-27 09:57:31 +08:00
8fb28ecc9e [test](partial-update) add some cases for partial-update (#22210) 2023-07-27 09:52:40 +08:00
dcd6844ea5 [improvement](regression-test) add partial update with schema change case (#22213) 2023-07-27 09:51:42 +08:00
fb41265c27 [opt](Nereids) add boolean type signature for sum aggregate function (#21959) 2023-07-27 09:41:19 +08:00
12222eb145 forbid: regression-test/pipeline/p0/conf/regression-conf.groovy (#22271) 2023-07-26 23:27:18 +08:00
Pxl
560731f392 [Bug](runtime-filter) fix probe expr prepared twice on minmax runtime filter (#22229)
fix probe expr prepared twice on minmax runtime filter
2023-07-26 19:44:35 +08:00
9a07ae890a [fix](point query) Fix ArrayIndexOutOfBoundsException if close a prepare stmt (#22237) 2023-07-26 18:22:07 +08:00
8ff487cc4b [fix](cast) fix invalid value error when casting null date value to string then casting to date value (#22223) 2023-07-26 17:59:01 +08:00
14dcc53135 [fix](Nereids) cast time should turn nullable on all valid types (#22242)
valid types to cast to time/timev2:
- TINYINT
- SMALLINT
- INT
- BIGINT
- LARGEINT
- FLOAT
- DOUBLE
- CHAR
- VARCHAR
- STRING
2023-07-26 17:56:19 +08:00
be69025878 [opt](Nereids) add partial update support for delete stmt (#22184)
Currently, the new optimizer doesn't consider partial update at all.
This PR adds the ability to convert a delete statement into a partial-update insert statement
for merge-on-write unique tables, as sketched below.
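A minimal sketch of the scenario, using a hypothetical merge-on-write unique table; under this change the DELETE is planned by Nereids as a partial-update insert that touches only the key and delete-sign columns:
```sql
CREATE TABLE demo_uniq (
    k1 INT,
    v1 INT,
    v2 VARCHAR(32)
) UNIQUE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 1
PROPERTIES (
    "replication_num" = "1",
    "enable_unique_key_merge_on_write" = "true"
);

-- with Nereids DML enabled, this delete becomes a partial-update insert
DELETE FROM demo_uniq WHERE k1 = 1;
```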
2023-07-26 17:34:31 +08:00
bb67a1467a [fix](Nereids): mergeGroup should merge target Group into existed Group (#22123) 2023-07-26 13:13:25 +08:00
21a3593a9a [fix](Nereids) translate failed when enable topn two phase opt (#22197)
1. Should not add the rowid slot to resolvedTupleExprs.
2. Should set notMaterialize on the sort's tuple when doing the two-phase opt.
2023-07-26 11:38:50 +08:00
f4396ef8c7 [Fix](regression-test) nereids_p0/javaudf and nereids_p0/outfile cases cannot run on multi be cluster (#21929)
The cases in the title will not pass in a multi-BE environment because the BE being queried doesn't contain the outfile data. We fix it by copying the outfile to every instance.
2023-07-26 11:33:51 +08:00
b20af13966 [fix][jdbc_case] Change the method of obtaining the driver for case test_doris_jdbc_catalog 0724 #22164 2023-07-26 08:48:16 +08:00
cf677b327b [fix](jdbc catalog) Fixed mappings with type errors for bool and tinyint(1) (#22089)
First of all, MySQL does not have a boolean type; its boolean is actually tinyint(1). In the previous logic, we forced tinyint(1) to be a boolean by passing tinyInt1isBit=true, which causes an error if a tinyint(1) value is not 0 or 1. Therefore, we need to map tinyint(1) to tinyint instead of boolean. This change does not affect the correctness of `where k = 1` or `where k = true` queries, as sketched below.
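A sketch of the resulting behavior, assuming a hypothetical MySQL table behind a JDBC catalog named `jdbc_ctl`:
```sql
-- MySQL side (hypothetical): CREATE TABLE demo (k INT, flag TINYINT(1));
-- flag may hold values other than 0/1, e.g. 2
-- Doris side: flag is now mapped to TINYINT instead of BOOLEAN,
-- so non-0/1 values no longer error and both predicates below still work
SELECT * FROM jdbc_ctl.demo_db.demo WHERE flag = 1;
SELECT * FROM jdbc_ctl.demo_db.demo WHERE flag = true;
```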
2023-07-25 22:45:22 +08:00
f44660db1a [chore](merge-on-write) disable single replica load and compaction for mow table (#22188) 2023-07-25 22:05:22 +08:00
5c8eda8685 [enhancement](regression) add UPDATE & DELETE tests for MOW partial update (#22212) 2023-07-25 22:03:38 +08:00
fc2b9db0ad [Feature](inverted index) add tokenize function for inverted index (#21813)
In this PR, we introduce the TOKENIZE function for inverted indexes. It is used as follows:
```
SELECT TOKENIZE('I love my country', 'english');
```
It takes two arguments: the first is the text to be tokenized, the second is the parser type, which can be **english**, **chinese** or **unicode**.
It can also be used with an existing table, like this:
```
mysql> SELECT TOKENIZE(c,"chinese") FROM chinese_analyzer_test;
+---------------------------------------+
| tokenize(`c`, 'chinese')              |
+---------------------------------------+
| ["来到", "北京", "清华大学"]          |
| ["我爱你", "中国"]                    |
| ["人民", "得到", "更", "实惠"]        |
+---------------------------------------+
```
2023-07-25 15:05:35 +08:00
d96e31c4d7 [opt](Nereids) not push down global limit to avoid early gather (#21891)
The global limit will create a gather action, and all the data will be calculated in one instance. If we push down the global limit, the nodes running after the limit node will run slowly.
We fix it by pushing down only the local limit.

a join plan tree before fixing:

```
LogicalLimit(global)
    LogicalLimit(local)
        Plan()
            LogicalLimit(global)
                LogicalLimit(local)
                    LogicalJoin
                        LogicalLimit(global)
                            LogicalLimit(local)
                                Plan()
                        LogicalLimit(global)
                            LogicalLimit(local)
                                Plan()    

after fixing:
LogicalLimit(global)
    LogicalLimit(local)      
        Plan()
            LogicalLimit(local)
                LogicalJoin
                    LogicalLimit(local)
                        Plan()
                    LogicalLimit(local)
                        Plan()
```
2023-07-25 14:45:20 +08:00
2b4bfe5be7 [fix](autoinc) fix _fill_auto_inc_cols when the input column is ColumnConst (#22175) 2023-07-25 14:41:36 +08:00
c01230f99a [fix](match) Optimize the logic for match_phrase function filter (#21622) 2023-07-25 14:22:37 +08:00
0f439bb1ca [vectorized](udf) java udf support map type (#22059) 2023-07-25 11:56:20 +08:00
b41fcbb783 [feature](agg) add the aggregation function 'map_agg' (#22043)
New aggregation function: map_agg.

This function requires two arguments: a key and a value, which are used to build a map.

select map_agg(column1, column2) from t group by column3;
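A concrete hypothetical example, pivoting per-user key/value rows into one map per user; table, column names, and data are illustrative only:
```sql
-- hypothetical rows in user_props: (user_id, prop_name, prop_value)
-- (1, 'age', 30), (1, 'score', 7), (2, 'age', 25)
SELECT user_id, map_agg(prop_name, prop_value) AS props
FROM user_props
GROUP BY user_id;
-- expected shape: 1 -> {"age":30, "score":7}, 2 -> {"age":25} (exact rendering may differ)
```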
2023-07-25 11:21:03 +08:00
3c58e9bac9 [Fix](Nereids) Fix problem of infer predicates not completely (#22145)
Problem:
When inferring predicates in Nereids, newly inferred predicates cannot serve as the source of the next round. For example:

create table tt1(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
create table tt2(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
create table tt3(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
explain select * from tt1 left join tt2 on tt1.c1 = tt2.c1 left join tt3 on tt2.c1 = tt3.c1 where tt1.c1 = 123;

We expect to get tt3.c1 = 123, but we can only get tt2.c1 = 123, because when inferring from tt1.c1 = 123 and tt2.c1 = tt3.c1 we
cannot derive any relationship between these two predicates.

Solution:
We need to cache intermediate inferred predicates, like tt2.c1 = 123 in the example, so they can serve as sources for the next round.
2023-07-25 10:05:00 +08:00
a0463ea047 [round](decimalv2) round decimalv2 to precision value (#22138)
* [round](decimalv2) round decimalv2 to precision value

* update

* update
2023-07-25 03:29:48 +08:00
752cec9e19 [Fix](multi-catalog) Fix not single slot filter conjuncts with dict filter issue. (#22052)
### Issue
Dictionary filtering is a mechanism that compares a filter condition on a single string column directly against the column's dictionary encoding. But a dictionary-filtered single string column may also appear in other multi-column filter conditions, and this can cause problems.

For example:
`select * from multi_catalog.lineitem_string_date_orc where l_commitdate < l_receiptdate and l_receiptdate = '1995-01-01'  order by l_orderkey, l_partkey, l_suppkey, l_linenumber limit 10;`

`l_receiptdate` is a string filter column, and it is also included in the multi-column filter condition `l_commitdate < l_receiptdate`.

### Solution
Resolve it by separating out the multi-column filter conditions and executing them after the dictionary-filtered column has been converted back to string.
2023-07-24 22:31:18 +08:00
9fe470b273 [pipeline](check) update check-pr-if-need-run-build.sh (#22171)
No need to run the pipeline if only regression-test/pipeline/p0/conf/regression-conf.groovy or regression-test/pipeline/p1/conf/regression-conf.groovy is modified.
2023-07-24 21:04:23 +08:00
68bd4a1a96 [opt](Nereids) check multiple distinct functions that cannot be transformed into multi_distinct (#21626)
This commit introduces a transformation for SQL queries that contain multiple distinct aggregate functions. When more than one such function is present, they are converted into multi_distinct functions for more efficient handling.

Example:
```
SELECT COUNT(DISTINCT c1), SUM(DISTINCT c2) FROM tbl GROUP BY c3
-- Transformed to
SELECT MULTI_DISTINCT_COUNT(c1), MULTI_DISTINCT_SUM(c2) FROM tbl GROUP BY c3
```

The following functions can be transformed:
- COUNT
- SUM
- AVG
- GROUP_CONCAT

If any unsupported functions are encountered, an error is now reported during the optimization phase.

To ensure the absence of such cases, a final check has been implemented after the rewriting phase.
2023-07-24 16:34:17 +08:00
21deb57a4d [fix](Nereids) remove double sigature of ceil, floor and round (#22134)
We used to convert input parameters to double for the functions ceil, floor and round,
because DecimalV2 could not perform these operations. Since DecimalV3 was introduced,
we should convert all parameters to DecimalV3 to get the correct result.
For example, when we use double as the parameter type, we get a wrong result:
```sql
select round(341/20000,4),341/20000,round(0.01705,4);
+-------------------------+---------------+-------------------+
| round((341 / 20000), 4) | (341 / 20000) | round(0.01705, 4) |
+-------------------------+---------------+-------------------+
| 0.017                   | 0.01705       | 0.0171            |
+-------------------------+---------------+-------------------+
```
With DecimalV3 we get the correct result:
```sql
select round(341/20000,4),341/20000,round(0.01705,4);
+-------------------------+---------------+-------------------+
| round((341 / 20000), 4) | (341 / 20000) | round(0.01705, 4) |
+-------------------------+---------------+-------------------+
| 0.0171                  | 0.01705       | 0.0171            |
+-------------------------+---------------+-------------------+
```
2023-07-24 16:08:00 +08:00
d2531db1cf [fix](inverted index) fix regression case test_index_change_7 occasional failure (#22066) 2023-07-24 15:39:08 +08:00
ac9480123c [refactor](Nereids) push down all non-slot order key in sort and prune them upper sort (#22034)
According to the implementation in the execution engine, all order keys
in a SortNode will be output, so we must normalize LogicalSort accordingly.
We push down all non-slot order keys in the sort to materialize them
below the sort. Thus, all order keys become slots and the SortNode
does not need to do any projection itself.
This simplifies the translation of SortNode by avoiding the need to generate
resolvedTupleExprs and sortTupleDesc. A small illustration follows.
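A hedged illustration, assuming a hypothetical table `t(k INT, v INT)`; `abs(v)` is not a slot, so after this refactor it is materialized as a slot below the sort and the SortNode orders by that slot:
```sql
-- the non-slot order key abs(v) is projected below the sort,
-- so the SortNode itself only references slots and does no projection
SELECT k FROM t ORDER BY abs(v);
```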
2023-07-24 15:36:33 +08:00