Commit Graph

5755 Commits

Author SHA1 Message Date
fadf3b906d [enhancement](planner) delete support between predicate (#17892) 2023-03-23 13:24:32 +08:00
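For the delete enhancement above, a minimal sketch of the newly supported shape, with a hypothetical table and column:

```sql
-- BETWEEN in the DELETE condition; table/column names are illustrative
DELETE FROM example_tbl WHERE k1 BETWEEN 10 AND 20;
```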
abeec4848a [Fix](Nereids) fix BE folding constants incorrectly on from_unixtime. (#18016) 2023-03-23 11:17:08 +08:00
089a91ecd5 [vectorized](function) support array_exists lambda function (#17931)
Co-authored-by: zhangyu209 <zhangyu209@meituan.com>
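A minimal usage sketch for the array_exists lambda form above, assuming it follows the arrow-lambda syntax of Doris's other higher-order array functions (table and column names are hypothetical):

```sql
-- evaluate the lambda predicate against each row's array column
SELECT array_exists(x -> x > 10, scores) FROM example_tbl;
```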
2023-03-23 11:11:39 +08:00
5a7d99e2f0 [Improvement](statistics) Support for collecting statistics at the granularity of partitions. (#17966)
* Support for collecting statistics at the granularity of partitions

* Add UT and fix some bugs
2023-03-23 09:05:42 +08:00
58b00858ab [Refactor](pipeline) Remove useless fe session variable enable_rpc_opt_for_pipeline (#18019) 2023-03-23 07:27:58 +08:00
7ed15ee8c9 [Fix](multi-catalog) invalidates the file cache when table is non-partitioned. (#17932)
Referring to `org.apache.doris.planner.external.HiveSplitter`, the file cache of `HiveMetaStoreCache`
may be created even when the table is non-partitioned,
so `RefreshTableStmt` should take this case into account and handle it.
2023-03-22 23:34:18 +08:00
5021c0f91a [feature-wip](MTMV) Support joining tables with views (#18026)
* [feature-wip](MTMV) Support joining tables with views

* Resolve comments
2023-03-22 23:21:50 +08:00
e2e806a5e7 [improve](clickhouse jdbc) support clickhouse array type (#17993)
In this PR, I map the ClickHouse array type to the array type of Doris's JDBC external table.
2023-03-22 19:42:32 +08:00
410907c940 [improvement](inverted index)UNIQUE_KEYS table only supports inverted index when merge_on_write is enabled. (#17827)
When an inverted index is added to a UNIQUE_KEYS table without merge_on_write enabled, match queries may fail before the segments are compacted.
So we add the restriction here.
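A sketch of the configuration this restriction allows: a UNIQUE KEY table with merge-on-write enabled plus an inverted index (names and property values are illustrative):

```sql
CREATE TABLE example_uk_tbl (
    k1 INT,
    v1 VARCHAR(100),
    INDEX idx_v1 (v1) USING INVERTED
)
UNIQUE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 1
PROPERTIES (
    "enable_unique_key_merge_on_write" = "true",  -- required for inverted index on UNIQUE_KEYS
    "replication_num" = "1"
);
```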
2023-03-22 17:47:30 +08:00
ebef0c038d Revert "[fix](function) fix AES/SM3/SM4 encrypt/ decrypt algorithm initialization vector bug (#17420)" (#17887)
This reverts commit 397cc011c4f1ba5a25c770258c13f1cd3f28b47d.
2023-03-22 13:28:25 +08:00
bd46d721e9 [feature](Nereids): pull up SEMI JOIN from INNER JOIN (#17765) 2023-03-22 12:48:04 +08:00
Pxl 40ca250678 [Feature](materialized-view) support where clause on create materialized view (#17534)
support where clause on create materialized view
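A sketch of the newly allowed form, with hypothetical table and column names (any additional restrictions from the PR still apply):

```sql
-- WHERE clause is now accepted in the materialized view definition
CREATE MATERIALIZED VIEW mv_k1_sum AS
SELECT k1, SUM(k2) FROM example_tbl WHERE k3 > 100 GROUP BY k1;
```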
2023-03-22 11:25:13 +08:00
Pxl 401836f523 [Bug](planner) fix core dump when lateral view above union node and have predicate (#17912)
fix core dump when lateral view above union node and have predicate
2023-03-22 11:24:45 +08:00
17a1ce5ed3 [fix](nereids) add a project node above sort node to eliminate unused order by keys (#17913)
If the order by keys are not simple slots in the sort node, the order by exprs have to be added to the sort node's output tuple. In that case, we need to add a project node above the sort node to eliminate the unused order by exprs. For example:

```sql
WITH t0 AS 
    (SELECT DATE_FORMAT(date,
         '%Y%m%d') AS date
    FROM cir_1756_t1 ), t3 AS 
    (SELECT date_format(date,
         '%Y%m%d') AS `date`
    FROM `cir_1756_t2`
    GROUP BY  date_format(date, '%Y%m%d')
    **ORDER BY  date_format(date, '%Y%m%d')** )
SELECT t0.date
FROM t0
LEFT JOIN t3
    ON t0.date = t3.date;
```

before:
```
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                   |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalProject[159] ( distinct=false, projects=[date#1], excepts=[], canEliminate=true )                                                         |
| +--LogicalJoin[158] ( type=LEFT_OUTER_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(date#1 = date#3)], otherJoinConjuncts=[] ) |
|    |--LogicalProject[151] ( distinct=false, projects=[date_format(date#0, '%Y%m%d') AS `date`#1], excepts=[], canEliminate=true )                |
|    |  +--LogicalOlapScan ( qualified=default_cluster:bugfix.cir_1756_t1, indexName=cir_1756_t1, selectedIndexId=412339, preAgg=ON )              |
|    +--LogicalSort[157] ( orderKeys=[date_format(cast(date#3 as DATETIME), '%Y%m%d') asc null first] )                                            |
|       +--LogicalAggregate[156] ( groupByExpr=[date#3], outputExpr=[date#3], hasRepeat=false )                                                    |
|          +--LogicalProject[155] ( distinct=false, projects=[date_format(date#2, '%Y%m%d') AS `date`#3], excepts=[], canEliminate=true )          |
|             +--LogicalOlapScan ( qualified=default_cluster:bugfix.cir_1756_t2, indexName=cir_1756_t2, selectedIndexId=412352, preAgg=ON )        |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
```

after:
```
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                   |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalProject[171] ( distinct=false, projects=[date#2], excepts=[], canEliminate=true )                                                         |
| +--LogicalJoin[170] ( type=LEFT_OUTER_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(date#2 = date#4)], otherJoinConjuncts=[] ) |
|    |--LogicalProject[162] ( distinct=false, projects=[date_format(date#0, '%Y%m%d') AS `date`#2], excepts=[], canEliminate=true )                |
|    |  +--LogicalOlapScan ( qualified=default_cluster:bugfix.cir_1756_t1, indexName=cir_1756_t1, selectedIndexId=1049812, preAgg=ON )             |
|    +--LogicalProject[169] ( distinct=false, projects=[date#4], excepts=[], canEliminate=false )                                                  |
|       +--LogicalSort[168] ( orderKeys=[date_format(cast(date#4 as DATETIME), '%Y%m%d') asc null first] )                                         |
|          +--LogicalAggregate[167] ( groupByExpr=[date#4], outputExpr=[date#4], hasRepeat=false )                                                 |
|             +--LogicalProject[166] ( distinct=false, projects=[date_format(date#3, '%Y%m%d') AS `date`#4], excepts=[], canEliminate=true )       |
|                +--LogicalOlapScan ( qualified=default_cluster:bugfix.cir_1756_t2, indexName=cir_1756_t2, selectedIndexId=1049825, preAgg=ON )    |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
```
2023-03-22 11:19:32 +08:00
f600f70619 [enhancement](fe) Tune for stats framework (#17860) 2023-03-22 11:07:56 +08:00
173d68409c [enhancement](planner) update and delete support using an alias for the target table (#17914) 2023-03-22 11:07:39 +08:00
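For the alias support above, a minimal sketch with hypothetical table/column names:

```sql
-- the target table can now carry an alias in UPDATE/DELETE
UPDATE example_tbl t SET t.v1 = t.v1 + 1 WHERE t.k1 = 10;
DELETE FROM example_tbl t WHERE t.k1 = 10;
```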
b91a3b5a72 [fix](planner) should not bind slot on brother's tuple in subquery (#17813)
consider the query like this:
```sql
SELECT
    k3, k4
FROM
    test
WHERE
    EXISTS( SELECT
            d.*
        FROM
            (SELECT
                k1 AS _1234, SUM(k2)
            FROM
                `test` d
            GROUP BY _1234) d
                LEFT JOIN
            (SELECT
                k1 AS _1234,
                    SUM(k2)
            FROM
                `test`
            GROUP BY _1234) temp ON d._1234 = temp._1234) 
ORDER BY k3, k4
```

When we analyze the group by exprs in the `temp` inline view, we bind `_1234` to `d._1234` by mistake.
That is because, when we analyze a **SUB-QUERY**, we resolve a SlotRef against its own tuple **AND** the parent's tuples; meanwhile, we register the child's tuple in the parent's analyzer. So, in a **SUB-QUERY**, a brother's tuple can affect the resolve result of the current inline view's slot.

This PR:

1. add a flag on the function `resolveColumnRef` in `Analyzer`
```java
private TupleDescriptor resolveColumnRef(String colName, boolean requestFromChild);
private TupleDescriptor resolveColumnRef(TableName tblName, String colName, boolean requestByChild);
``` 

2. add a flag to specify whether the tuple is from child.
```java
// alias name -> <from child, tupleDesc>
private final Multimap<String, Pair<Boolean, TupleDescriptor>> tupleByAlias;
```

When `requestByChild == true`, we **SKIP** the tuples from other children to avoid the resolve error.
2023-03-22 11:00:55 +08:00
a4b151e469 [fix](planner) should always execute projection plan (#17885)
1. should always execute the projection plan, whatever the statement is.
2. should always execute the projection plan, since we only have the vectorized engine now
2023-03-22 10:53:15 +08:00
6fa239384d [refactor](Nereids) remove tabletPruned flag in LogicalOlapScan. (#17983) 2023-03-22 10:45:14 +08:00
7ddba7bf54 [fix](multi-catalog) when checkProperties fails, dirty data is left behind (#17877) 2023-03-22 09:40:07 +08:00
545d3b1c3e [Enhancement](auth)support ranger col priv (#17915)
1. When querying data, it is no longer necessary to verify permissions on the entire table; instead, the
permissions of the queried columns are verified. Ranger already supports column permissions, and the internal
catalog provides a dummy column-permission implementation (the actually verified permissions are still table
permissions).

2. Delete roles in userIdentity.

3. Change the trigger logic of initAccessController.
2023-03-22 09:00:17 +08:00
8df4a94826 [fix](MTMV) Tasks leak when dropping job (#17984)
1. Divide MTMV regression tests into 4 suites
2. Try to remove tasks that were killed by the drop-job action from the running map.
2023-03-21 23:22:17 +08:00
cb79e42e5c [refactor](file-system)(step-1) refactor file system on BE and remove storage_backend (#17586)
See #17764 for details
I have tested:
- Unit test for local/s3/hdfs/broker file system: be/test/io/fs/file_system_test.cpp
- Outfile to local/s3/hdfs/broker.
- Load from local/s3/hdfs/broker.
- Query file on local/s3/hdfs/broker file system, with table value function and catalog.
- Backup/Restore with local/s3/hdfs/broker file system

Not tested:
- cold & hot data separation case.
2023-03-21 21:08:38 +08:00
82716ec99d [fix](Nereids) type coercion for subquery (#17661)
Complete the type coercion of subqueries in the Binder process.

Expressions generated when subqueries are nested are uniformly given implicit casts in the analyze stage.
Method: add a typeCoercionExpr field to the subquery expression to store the generated cast information.

Fix the scenario where a scalar subquery appears in an arithmetic expression and needs an implicit type conversion.
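A hypothetical query of the kind the last sentence describes, where a scalar subquery participates in an arithmetic expression and needs an implicit cast:

```sql
-- the scalar subquery result is combined arithmetically with a literal of a different type
SELECT * FROM t1
WHERE t1.k1 > (SELECT AVG(k2) FROM t2) + 1;
```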
2023-03-21 20:38:06 +08:00
4193884a32 [feature](array_zip) Support array_zip function (#17696) 2023-03-21 18:44:30 +08:00
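A minimal usage sketch for array_zip, assuming it pairs up the elements of its array arguments (the output shape is not shown, as it depends on the implementation):

```sql
SELECT array_zip([1, 2, 3], ['a', 'b', 'c']);
```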
861d9c985c [refactor](Nereids): refactor Join Reorder Rule. (#17809) 2023-03-21 16:12:07 +08:00
ed7c880e18 [fix](Nereids) should turn off parallel scan when do local finalize agg (#17961) 2023-03-21 11:55:35 +08:00
1f569b7a7d [enhancement](topn explain) display explain two phase read more precise (#17946) 2023-03-21 10:53:47 +08:00
4023670f35 [BugFix](DOE) Add http prefix when it's not set in hosts properties. (#17745)
* Add http prefix when it's not set in hosts properties
2023-03-21 10:08:20 +08:00
6c8ed9135d [fix](truncate) fix unable to truncate table due to wrong storage medium (#17917)
When the FE config default_storage_medium is set to SSD and all BE storage paths are set as SSD,
tables will be stored with storage medium SSD.
But there is an FE config storage_cooldown_second whose default value is 30 days,
so after 30 days the storage medium of the table is changed to HDD, which is unexpected.

This PR removes storage_cooldown_second and uses a max value as the cooldown time of the SSD
storage medium when default_storage_medium is SSD.
2023-03-21 10:04:47 +08:00
11a0ae9a87 [fix](ctas) fix show load throw NPE after ctas (#17937)
Missing userinfo

java.lang.NullPointerException: null
        at org.apache.doris.load.loadv2.LoadJob.getShowInfo(LoadJob.java:816) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.load.loadv2.LoadManager.getLoadJobInfosByDb(LoadManager.java:557) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.ShowExecutor.handleShowLoad(ShowExecutor.java:1094) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.ShowExecutor.execute(ShowExecutor.java:280) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.StmtExecutor.handleShow(StmtExecutor.java:1862) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:619) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.StmtExecutor.execute(StmtExecutor.java:435) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:414) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.ConnectProcessor.dispatch(ConnectProcessor.java:558) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.qe.ConnectProcessor.processOnce(ConnectProcessor.java:799) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.mysql.ReadListener.lambda$handleEvent$0(ReadListener.java:52) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
2023-03-21 08:49:51 +08:00
bae9d8d7f2 [Feature-Wip](MySQL LOAD)Add trim quotes property for mysql load (#17775)
Add trim quotes property for mysql load to trim double quotes in the load files.
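A hedged sketch of how the property might be used with MySQL LOAD; the property name and clause placement are assumptions based on the PR title, not confirmed syntax:

```sql
LOAD DATA LOCAL INFILE 'data.csv'
INTO TABLE example_tbl
COLUMNS TERMINATED BY ','
PROPERTIES ("trim_double_quotes" = "true");  -- assumed property name
```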
2023-03-21 00:32:58 +08:00
5cac64413a [Feature](ES): Support es get alias field type. (#17783)
Support es get alias field type.
2023-03-21 00:32:24 +08:00
dc284b62d9 [vectorized](function) support array_filter function (#17832) 2023-03-20 23:18:10 +08:00
8ffc85b6ff [fix](planner)project should be done inside inlineview (#17831)
* [fix](planner)project should be done inside inlineview

* add src column for slots in scan node's output tuple
2023-03-20 21:12:45 +08:00
Pxl a92115f709 [Bug](materialized-view) fix select mv rollback fail on left join (#17850)
fix select mv rollback fail on left join
2023-03-20 19:14:17 +08:00
b4634342aa [bug](txn) fix concurrent txns's status data race when aborting txn (#17893) 2023-03-20 17:55:03 +08:00
223d7a36eb adjust distribution stats derive, fix bug in join estimation (#17916) 2023-03-20 13:04:29 +08:00
93cfd5cd2b [Enhance](ComputeNode)support k8s watch (#17442)

1. Add a watch mechanism to listen for changes in the k8s statefulSet and update nodes in time.
2. For broker, there is only one name by default when using deployManager.
3. Refactor the code to make it easier to understand and maintain.
4. Fix jar package conflicts between okhttp-ws and okhttp.

Previously, k8sDeployManager.getGroupHostInfos called the endpoints() interface of k8s. As a result, if a pod
was unexpectedly restarted, k8sDeployManager would delete the pre-restart pod from the FE or BE list and add
the post-restart pod to that list, which obviously does not meet our expectations.
Now, after FQDN is enabled, we call the statefulSets() interface of k8s and listen to the number of replicas to
determine whether nodes need to go online or offline.
In addition, the watch mechanism is added to avoid the possible A-B-A problem caused by timed polling.
For the sake of stability, when the watch mechanism does not receive messages for a period of time,
it is degraded to polling mode.

Several environment variables have been added for the statefulset names: ENV_FE_STATEFULSET, ENV_FE_OBSERVER_STATEFULSET, ENV_BE_STATEFULSET, ENV_BROKER_STATEFULSET and ENV_CN_STATEFULSET. They correspond one-to-one with ENV_FE_SERVICE, ENV_FE_OBSERVER_SERVICE, ENV_BE_SERVICE, ENV_BROKER_SERVICE and ENV_CN_SERVICE. If a serviceName is configured, the corresponding statefulsetName must also be configured, otherwise the program cannot start.
2023-03-20 11:36:32 +08:00
5c990fb737 [fix](nereids) Analyze failed for SQL that has count distinct with same col (#17928)
This problem is caused by slots with the same hashcode being put into a HashSet, which resulted in the wrong rule being selected. Use a List instead of a Set as the return type of the getDistinctArguments method.
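A hypothetical query of the shape described, where multiple distinct aggregates reference the same column:

```sql
SELECT COUNT(DISTINCT k1), SUM(DISTINCT k1) FROM example_tbl;
```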
2023-03-19 21:31:47 +08:00
74dfdc00dc [nereids](statistics) remove lock in statistics cache loader #17833
Remove the redundant lock in the CacheLoader, since it uses the ForkJoinPool by default.
Add an execution time log for collecting stats.
Avoid submitting a duplicate task when there is already a task loading the same column.
2023-03-19 20:30:21 +08:00
295b26db00 [chore](fe) update aspectj-maven-plugin to 1.14.0 version (#17890)
In #17797, we introduced aspectj to help log exceptions easily.
However, plugin version 1.11 does not support jdk9 and later.
To support compiling FE with jdk11:

- update aspectj-maven-plugin to version 1.14.0
- add the new dependency org.aspectj.aspectjrt 1.9.7 to fe-core

According to:

- aspectj java version compatibility
- aspectj-maven-plugin issue
- aspectj release note
- intro to aspectj
2023-03-19 14:50:09 +08:00
e359e412e1 [vectorized](udaf) fix java udaf meet error of std::bad_alloc (#17848)
Previously, if the user code of a Java UDAF threw an exception, nothing in the C++ code of the agg function could
handle it, so it might surface as a std::bad_alloc error.
2023-03-19 11:52:15 +08:00
14dcdd188e [fix](fe) fix MetricRepo.THRIFT_COUNTER_RPC_ALL NullPointException (#17552) 2023-03-19 11:39:19 +08:00
b111f9a518 [fix](insert) Session variables don't work for transaction insert (#17551) 2023-03-19 10:43:02 +08:00
c5c89f3016 [Improve](hana catalog)Currently logged in users should only see the schemas they can access (#17918)
For the HANA catalog, the currently logged-in user should only see the schemas they can access.
2023-03-18 22:21:01 +08:00
c95eb8a67f [enhancement] Function(create/drop) support the global operation (#16973) (#17608)
Support creating/dropping global functions.
Previously, a custom function could only be used within one database; it could not be used in another database/catalog. With many databases/catalogs, the function had to be created one by one.

1. When a function is created or dropped, the GLOBAL keyword can be added.

CREATE [GLOBAL] [AGGREGATE] [ALIAS] FUNCTION function_name (arg_type [, ...]) [RETURNS ret_type] [INTERMEDIATE inter_type] [WITH PARAMETER(param [,...]) AS origin_function] [PROPERTIES ("key" = "value" [, ...]) ]

DROP [GLOBAL] FUNCTION function_name (arg_type [, ...])

2. A truly global function is added, and the global function metadata is stored in the image. The function lookup strategy is to look in the database first; if the function is not found there, it looks among the global functions.
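A concrete sketch following the syntax above, using an alias function similar to the one in the Doris docs (the function body is illustrative):

```sql
-- create an alias function visible across databases/catalogs
CREATE GLOBAL ALIAS FUNCTION id_masking(INT) WITH PARAMETER(id)
    AS CONCAT(LEFT(id, 3), '****', RIGHT(id, 4));

-- drop it again
DROP GLOBAL FUNCTION id_masking(INT);
```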
Co-authored-by: lexluo <lexluo@tencent.com>
2023-03-18 22:06:48 +08:00
88713037bf [Bug][Fix] pipeline exec engine get wrong result when run regression test (#17896)
Fix regression p1:regression-test/suites/datev2/tpcds_sf1_p1/sql/pipeline case
2023-03-18 20:41:10 +08:00
3593b82498 [fix](schema change) Fix FE restart failure caused by failed replay of a schema change alter job (#17825) 2023-03-17 20:54:50 +08:00
46d88ede02 [Refactor](Metadata tvf) Reconstruct Metadata table-value function into a more general framework. (#17590) 2023-03-17 19:54:50 +08:00