Commit Graph

8289 Commits

9a07ae890a [fix](point query) Fix ArrayIndexOutOfBoundsException if close a prepare stmt (#22237) 2023-07-26 18:22:07 +08:00
14dcc53135 [fix](Nereids) cast to time should turn nullable for all valid types (#22242)
valid types to cast to time/timev2:
- TINYINT
- SMALLINT
- INT
- BIGINT
- LARGEINT
- FLOAT
- DOUBLE
- CHAR
- VARCHAR
- STRING
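As an illustration, casts from these types should now produce a nullable time result (a sketch assuming Doris's cast-to-time semantics; exact result values are not asserted here):

```sql
-- numeric source type (BIGINT)
SELECT CAST(123456 AS TIME);
-- string source type (VARCHAR); an unparsable value is expected to
-- yield NULL, which is why the result type must be nullable
SELECT CAST('not-a-time' AS TIME);
```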
2023-07-26 17:56:19 +08:00
be69025878 [opt](Nereids) add partial update support for delete stmt (#22184)
Currently, the new optimizer doesn't consider partial update at all.
This PR adds the ability to convert a delete statement into a partial-update insert statement
for merge-on-write unique tables
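A sketch of the conversion on a hypothetical merge-on-write unique table (`uniq_tbl` and its columns are illustrative; the hidden delete-sign column name follows Doris's convention):

```sql
-- user-issued delete on a merge-on-write unique table
DELETE FROM uniq_tbl WHERE k1 = 1;

-- conceptually rewritten by the planner into a partial-update insert
-- that writes only the key column plus the hidden delete-sign column:
-- INSERT INTO uniq_tbl (k1, __DORIS_DELETE_SIGN__)
-- SELECT k1, 1 FROM uniq_tbl WHERE k1 = 1;
```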
2023-07-26 17:34:31 +08:00
0b05c0b841 [chore](tools) add icon.svg to .idea to let idea present doris's logo (#22234) 2023-07-26 17:27:47 +08:00
582acad8a1 [feature](stats) Enable period time with cron expr (#22095)
Support the following grammar:

```sql
ANALYZE TABLE test WITH CRON "* * * * * ?"
```

Such a job is scheduled as the cron expression specifies, but natively supports minute-level scheduling only
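For instance, assuming the Quartz-style seconds-first field layout shown above, a job that should run at the start of every minute could be declared as:

```sql
-- fields: second minute hour day-of-month month day-of-week
ANALYZE TABLE test WITH CRON "0 * * * * ?"
```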
2023-07-26 17:25:57 +08:00
964ac4e601 [opt](nereids) Retry when async analyze task failed (#21889)
Retry at most 5 times when an async analyze task execution fails
2023-07-26 17:16:56 +08:00
af20d0c521 [fix](binlog) Fix BinlogUtils getExpiredMs overflow (#22174) 2023-07-26 15:15:34 +08:00
9f3960b460 [fix](kerberos)fix kerberos config read (#22081)
We should read the Kerberos config from properties first, so we use an overridden Hive conf to set it.
2023-07-26 13:36:12 +08:00
bb67a1467a [fix](Nereids): mergeGroup should merge target Group into existed Group (#22123) 2023-07-26 13:13:25 +08:00
21a3593a9a [fix](Nereids) translate failed when enabling topn two phase opt (#22197)
1. should not add the rowid slot to resolvedTupleExprs
2. should set notMaterialize on the sort's tuple when doing the two-phase opt
2023-07-26 11:38:50 +08:00
4c4f08f805 [fix](hudi) the required fields are empty if only reading partition columns (#22187)
1. If only the partition columns are read, the `JniConnector` will produce empty required fields, so `HudiJniScanner` should read at least the "_hoodie_record_key" field to know how many rows are in the current Hudi split. Even if the `JniConnector` doesn't read this field, the call to `releaseTable` in `JniConnector` will reclaim the resource.

2. To prevent BE failure and exit, `JniConnector` should call release methods after `HudiJniScanner` is initialized. Note that `VectorTable` is created lazily in `JniScanner`, so we don't need to reclaim the resource when `HudiJniScanner` fails to initialize.

## Remaining works
Other JNI readers like `paimon` and `maxcompute` may encounter the same problems; each JNI reader needs to handle this abnormal situation on its own. Currently, this fix only ensures that BE will not exit.
2023-07-26 10:59:45 +08:00
9abf32324b [improvement](jdbc) add timestamp put to datev2 (#21680) 2023-07-26 09:10:34 +08:00
5f846056f7 [fix](forward) fix MissingFormatArgumentException when failed to forward stmt to Master (#22142) 2023-07-26 09:00:04 +08:00
cf717882d8 [fix](jdbc catalog) fix hana jdbc table bug (#22190) 2023-07-26 08:45:06 +08:00
ba3a0922eb [fix](ipv6)Support IPV6 (#22219)
fe: remove IPv4-only restrictions
be: specify the binding address for the thrift server
be: restore changed code of "be/src/olap/task/engine_clone_task.cpp"
2023-07-26 08:40:32 +08:00
e8f4323e0f [Fix](jdbcCatalog) fix typo of some variable #22214 2023-07-26 08:34:45 +08:00
3414d1a61f [fix](hudi) table schema is not the same as parquet schema (#22186)
Upgrade the Hudi version from 0.13.0 to 0.13.1, and keep the Hudi version of the JNI scanner the same as that of FE.
This may fix the bug where the table schema is not the same as the Parquet schema.
2023-07-26 00:29:53 +08:00
cf677b327b [fix](jdbc catalog) Fixed mappings with type errors for bool and tinyint(1) (#22089)
First of all, MySQL does not have a boolean type; its boolean is actually tinyint(1). In the previous logic, we forced tinyint(1) to be a boolean by passing tinyInt1isBit=true, which causes an error if a tinyint(1) value is not 0 or 1. Therefore, we need to map tinyint(1) to tinyint instead of boolean. This change does not affect the correctness of `where k = 1` or `where k = true` queries.
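A sketch of the behavior, assuming a hypothetical JDBC catalog `mysql_cat` over a MySQL table whose `b` column is tinyint(1) and holds a value of 2:

```sql
-- before: with tinyInt1isBit=true, b was mapped to boolean, and a
-- stored value of 2 caused an error when read
-- after: b is mapped as tinyint, so all values read correctly and
-- both predicate forms below still return the same rows
SELECT * FROM mysql_cat.db1.t WHERE b = 1;
SELECT * FROM mysql_cat.db1.t WHERE b = true;
```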
2023-07-25 22:45:22 +08:00
b2be42c31c [fix](jdbc catalog) fix jdbc catalog like expr query error (#22141) 2023-07-25 22:30:28 +08:00
f44660db1a [chore](merge-on-write) disable single replica load and compaction for mow table (#22188) 2023-07-25 22:05:22 +08:00
999fbdc802 [improvement](jdbc) add new type 'object' of int (#21681) 2023-07-25 21:29:46 +08:00
20f180c4e1 [fix](iceberg) fix error when query iceberg v2 format (#22182)
This bug was introduced by #21771.
The fileType field of TFileScanRangeParams was missing, so the delete file of Iceberg v2 was treated as a local file
and failed to be read.
2023-07-25 21:15:46 +08:00
6dd0ca6d0b [fix](nereids) fix runtime filter on cte sender and set operation (#22181)
The current RF pushdown framework doesn't handle the CTE sender correctly. On the CTE consumer it just returns false, which causes the RF to be generated at the wrong place and makes the expr_order check fail; it should actually be pushed down to the CTE sender. Also, set-operation pushdown is unreachable if the outer stmt uses the alias of the set operation's output before the probeSlot's translation. Both issues are fixed in this PR.
2023-07-25 20:26:04 +08:00
1715a824dd [fix](nereids) fix partition dest overwrite bug when cte as bc right (#22177)
In the current CTE multicast fragment param computing logic in the coordinator, if the shared hash table for broadcast is enabled, the number of destinations equals the number of BE hosts. But the check for falling into the shared-hash-table broadcast code path is wrong: when a multicast's targets include both a broadcast and a partition, the first broadcast info overwrites the following partition's, i.e., the destination info becomes host-level when it should be per-instance. This causes the hash-partition part to hang.
2023-07-25 19:26:29 +08:00
28bbfdd590 [Fix](Nereids) fix minidump unit test failure caused by column status change (#22201)
Problem:
The minidump unit test failed because column statistic deserialization needs a new column schema that was not added to the minidump unit test file.

Solution:
Add the last update time to the unit test input file.
2023-07-25 19:23:12 +08:00
30965eed21 [fix](stats) Ignore complex type by default when collect column statistics (#21965)
Before this PR, an error would be thrown if the ANALYZE stmt submitted by the user contained any complex type; now such columns are ignored by default.
2023-07-25 18:26:49 +08:00
3b6702a1e3 [Bug](point query) cancel future when meet timeout in PointQueryExec (#21573)
1. cancel the future on timeout, and add a config to modify the RPC timeout
2. add a config to modify the number of BackendServiceProxy instances, since under a highly concurrent workload the gRPC channel can be blocked
2023-07-25 18:18:09 +08:00
f74f3e7944 [refactor](Nereids) add sink interface and abstract class (#22150)
1. add trait Sink
2. add abstract class LogicalSink and PhysicalSink
3. replace some sink visitors with visitLogicalSink and visitPhysicalSink
2023-07-25 17:51:49 +08:00
39ca91fc22 [opt](Nereids) always fallback when parse failed (#21865)
Always fall back to the legacy planner when parsing fails, even if enable_fallback_to_original_planner is set to false.
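In other words (session-variable name from the commit message; behavior as described above):

```sql
SET enable_fallback_to_original_planner = false;
-- a statement Nereids fails to parse still falls back to the
-- legacy planner instead of erroring out
```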
2023-07-25 17:08:57 +08:00
f84af95ac4 [feature](Nereids) Add minidump replay and refactor user feature of minidump (#20716)
### Two main changes:
1. add minidump replay
2. change minidump serialization of statistic messages and some interfaces between the main logic of the Nereids optimizer and minidump

### Use of Nereids UT:
1. save minidump files:
   Execute these commands via mysql-client:
```
set enable_nereids_planner=true;
set enable_minidump=true;
```
   Then execute the SQL in mysql-client.
2. use the nereids-ut script to execute a directory of minidump files:
```
cp -r ${DORIS_HOME}/minidump ${DORIS_HOME}/output/fe && cd ${DORIS_HOME}/output/fe
./nereids_ut --d ${directory_of_minidump_files}
```

### Refactor of minidump
- move statistics serialization into the serialization of inputs, and serialize it together with the catalogs
- generate the minidump file only when the enable_minidump flag is set; the minidump module interacts with the main optimizer only via:
serializeInputsToDumpFile(catalog, statistics, query) && serializeOutputsToDumpFile(outputplan).
2023-07-25 15:26:19 +08:00
fc2b9db0ad [Feature](inverted index) add tokenize function for inverted index (#21813)
In this PR, we introduce the TOKENIZE function for inverted index. It is used as follows:
```
SELECT TOKENIZE('I love my country', 'english');
```
It takes two arguments: the first is the text to be tokenized, the second is the parser type, which can be **english**, **chinese** or **unicode**.
It can also be used with an existing table, like this:
```
mysql> SELECT TOKENIZE(c,"chinese") FROM chinese_analyzer_test;
+---------------------------------------+
| tokenize(`c`, 'chinese')              |
+---------------------------------------+
| ["来到", "北京", "清华大学"]          |
| ["我爱你", "中国"]                    |
| ["人民", "得到", "更", "实惠"]        |
+---------------------------------------+
```
2023-07-25 15:05:35 +08:00
d96e31c4d7 [opt](Nereids) not push down global limit to avoid early gather (#21891)
The global limit creates a gather action, so all the data is computed in one instance; if we pushed down the global limit, the nodes running above the limit node would run slowly.
We fix it by pushing down only the local limit.

a join plan tree before fixing:

```
LogicalLimit(global)
    LogicalLimit(local)
        Plan()
            LogicalLimit(global)
                LogicalLimit(local)
                    LogicalJoin
                        LogicalLimit(global)
                            LogicalLimit(local)
                                Plan()
                        LogicalLimit(global)
                            LogicalLimit(local)
                                Plan()    

```

after fixing:

```
LogicalLimit(global)
    LogicalLimit(local)      
        Plan()
            LogicalLimit(local)
                LogicalJoin
                    LogicalLimit(local)
                        Plan()
                    LogicalLimit(local)
                        Plan()
```
2023-07-25 14:45:20 +08:00
28b714c371 [feature](executor) using fe version to set instance_num (#22047) 2023-07-25 14:37:42 +08:00
0f439bb1ca [vectorized](udf) java udf support map type (#22059) 2023-07-25 11:56:20 +08:00
f6b47c34b3 [improvement](stats) show stats with updated time (#21377)
Support viewing the stats updated time.

After

```sql
mysql> show column stats t1;
+-------------+-------+------+----------+-----------+---------------+------+------+---------------------+
| column_name | count | ndv  | num_null | data_size | avg_size_byte | min  | max  | updated_time        |
+-------------+-------+------+----------+-----------+---------------+------+------+---------------------+
| col2        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | 2    | 5    | 2023-06-30 15:50:24 |
| col3        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | 3    | 6    | 2023-06-30 15:50:48 |
| col1        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | '1'  | '4'  | 2023-06-30 15:50:48 |
+-------------+-------+------+----------+-----------+---------------+------+------+---------------------+
```

Before

```sql
mysql> show column stats t1;
+-------------+-------+------+----------+-----------+---------------+------+------+
| column_name | count | ndv  | num_null | data_size | avg_size_byte | min  | max  | 
+-------------+-------+------+----------+-----------+---------------+------+------+
| col2        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | 2    | 5    | 
| col3        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | 3    | 6    | 
| col1        | 2.0   | 2.0  | 0.0      | 0.0       | 0.0           | '1'  | '4'  | 
+-------------+-------+------+----------+-----------+---------------+------+------+
```
2023-07-25 11:22:08 +08:00
b41fcbb783 [feature](agg) add the aggregation function 'map_agg' (#22043)
New aggregation function: map_agg.

This function requires two arguments: a key and a value, which are used to build a map.

```sql
select map_agg(column1, column2) from t group by column3;
```
2023-07-25 11:21:03 +08:00
6a03a612a0 [opt](Nereids) add check msg for creating decimal type (#22172) 2023-07-25 11:19:41 +08:00
2e20ff8cab [feature](metric) Support collect query counter and error query counter metric in user level (#22125)
1. support collecting query counter and error query counter metrics at the user level
2. add sum and count for the histogram metric, which were mistakenly deleted in PR #22045
2023-07-25 11:16:38 +08:00
3c58e9bac9 [Fix](Nereids) Fix incomplete predicate inference (#22145)
Problem:
When inferring predicates in Nereids, newly inferred predicates cannot serve as the source of the next round. For example:

```sql
create table tt1(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
create table tt2(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
create table tt3(c1 int, c2 int) distributed by hash(c1) properties('replication_num'='1');
explain select * from tt1 left join tt2 on tt1.c1 = tt2.c1 left join tt3 on tt2.c1 = tt3.c1 where tt1.c1 = 123;
```

We expect to get tt3.c1 = 123, but we only get tt2.c1 = 123, because from tt1.c1 = 123 and tt2.c1 = tt3.c1 alone we
cannot derive any relationship between the two predicates.

Solution:
We need to cache intermediate results of the source predicates, like tt2.c1 = 123 in the example.
2023-07-25 10:05:00 +08:00
fc67929e34 [improvement](catalog) optimize ldap and support more characters in user and table names (#21968)
- common names support `-`; reason: MySQL's db names support `-`
- table names support `-`
- usernames support `.`; reason: LDAP usernames support `.`
- ldap doc
- ldap supports rbac
2023-07-24 22:04:37 +08:00
7fcf702081 [improvement](multi catalog)paimon support filesystem metastore (#21910)
1. support filesystem metastore

2. support predicate and projection pushdown when splitting

3. fix a partition table query error

TODO: for now you need to manually put paimon-s3-0.4.0-incubating.jar in be/lib/java_extensions when using the s3 filesystem

doc pr: #21966
2023-07-24 22:02:57 +08:00
82bdcb3da8 [fix](Nereids) translate partition topn order key on wrong tuple (#22168)
The partition key should be on the child's tuple; the sort key should be on the partition topn's tuple.
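Partition topn arises from queries of the following shape (table and column names hypothetical); the fix ties the partition key to the child's tuple and the sort key to the topn's own tuple:

```sql
SELECT * FROM (
    SELECT k1, k2,
           ROW_NUMBER() OVER (PARTITION BY k1 ORDER BY k2) AS rn
    FROM t
) v
WHERE rn <= 10;
```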
2023-07-24 20:46:27 +08:00
2d52d8d926 [opt](stats) Update stats table config and comment (#22070)
1. set the replica count for the stats table to Math.max(Config.statistic_internal_table_replica_num, Config.min_replication_num_per_tablet)
2. update the comment for the stats table to remove the `'` symbol
2023-07-24 20:43:55 +08:00
0677b261b5 [fix](Nereids) should not process prepare command by Nereids (#22167) 2023-07-24 20:11:40 +08:00
0205f540ac [enhancement](config) Enlarge broker scanner bytes conf to 500G, 5G is still not enough (#22126) 2023-07-24 19:49:39 +08:00
cf30ea914a [fix](Nereids) forbid gather sort with explicit shuffle (#22153)
A gather sort with an explicit shuffle is usually bad; forbid it.
2023-07-24 19:45:18 +08:00
3ba3690f93 [Fix](Http-API)Check and replace user sensitive characters (#22148) 2023-07-24 18:21:42 +08:00
68bd4a1a96 [opt](Nereids) check multiple distinct functions that cannot be transformed into multi_distinct (#21626)
This commit introduces a transformation for SQL queries that contain multiple distinct aggregate functions. When such a query contains more than one distinct aggregate, the functions are converted into multi_distinct functions for more efficient handling.

Example:
```
SELECT COUNT(DISTINCT c1), SUM(DISTINCT c2) FROM tbl GROUP BY c3
-- Transformed to
SELECT MULTI_DISTINCT_COUNT(c1), MULTI_DISTINCT_SUM(c2) FROM tbl GROUP BY c3
```

The following functions can be transformed:
- COUNT
- SUM
- AVG
- GROUP_CONCAT

If any unsupported functions are encountered, an error is now reported during the optimization phase.

To ensure the absence of such cases, a final check has been implemented after the rewriting phase.
2023-07-24 16:34:17 +08:00
21deb57a4d [fix](Nereids) remove double signature of ceil, floor and round (#22134)
We used to convert input parameters to double for the functions ceil, floor and round,
because DecimalV2 could not perform these operations. Since we introduced DecimalV3,
we should convert all parameters to DecimalV3 to get correct results.
For example, when we use double as the parameter type, we get a wrong result:
```sql
select round(341/20000,4),341/20000,round(0.01705,4);
+-------------------------+---------------+-------------------+
| round((341 / 20000), 4) | (341 / 20000) | round(0.01705, 4) |
+-------------------------+---------------+-------------------+
| 0.017                   | 0.01705       | 0.0171            |
+-------------------------+---------------+-------------------+
```
DecimalV3 gets the correct result:
```sql
select round(341/20000,4),341/20000,round(0.01705,4);
+-------------------------+---------------+-------------------+
| round((341 / 20000), 4) | (341 / 20000) | round(0.01705, 4) |
+-------------------------+---------------+-------------------+
| 0.0171                  | 0.01705       | 0.0171            |
+-------------------------+---------------+-------------------+
```
2023-07-24 16:08:00 +08:00
ac9480123c [refactor](Nereids) push down all non-slot order keys in sort and prune them above the sort (#22034)
According to the implementation in the execution engine, all order keys
in SortNode will be output. We must normalize LogicalSort to follow this.
We push down all non-slot order keys in the sort so they are materialized
below the sort. Then every order key is a slot, and SortNode does not need
to do any projection itself.
This simplifies the translation of SortNode by avoiding generating
resolvedTupleExprs and sortTupleDesc.
2023-07-24 15:36:33 +08:00