Commit Graph

4733 Commits

Author SHA1 Message Date
eea84ac36c [fix](Nereids): use == instead of id to identify PhysicalHashJoin (#24535) 2023-09-19 12:06:30 +08:00
b092bdaabf [feature](load) collect loaded rows on table level after txn published (#24346)
As title.

For example, after stream loading 20 rows:

```
2023-09-14 11:40:04,186 DEBUG (PUBLISH_VERSION|23) [DatabaseTransactionMgr.updateCatalogAfterVisible():1769] table id to loaded rows:{51016=20}
```

```
mysql> select count(*) from dup_tbl_basic;
+----------+
| count(*) |
+----------+
|       20 |
+----------+
1 row in set (0.05 sec)
```
2023-09-19 12:00:08 +08:00
80bcb43143 [Feature]Support external table sample stats collection (#24376)
Support hive table sample stats collection. The grammar is like:

`analyze table with sample percent 10`
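A concrete form of the grammar above, reusing a hive table that appears in a later example (catalog/table names illustrative):

```
analyze table hive.tpch100.region with sample percent 10;
```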
2023-09-19 11:20:27 +08:00
1ac7c8f14d [improvement](scan_queue_mem_limit) scan queue mem limit is too small for (#24553)
a wide table

Users rarely set scan_queue_mem_limit, so in practice it almost always works out to 2G/20. However,
in some cases we need to set it to a larger value, especially for insert into
select from a wide table.
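A hedged sketch of the tuning this enables, assuming scan_queue_mem_limit is a session variable taking bytes (value and table names illustrative):

```
-- raise the scan queue memory limit before an insert-select from a wide table
set scan_queue_mem_limit = 2147483648;
insert into wide_copy select * from wide_table;
```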
2023-09-18 20:22:03 +08:00
c54fc82031 [improve](nereids) expand runtime filter target by hashJoin's equal condition (#23274)
generate more runtime filters
example:

lineitem join partsupp on l_partkey = ps_partkey join filter(part) on ps_partkey = p_partkey
we need two RFs:
RF1: p_partkey->ps_partkey
RF2: p_partkey->l_partkey

This PR generates RF2, which the current version does not.
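The example written out as a full query (the projection and the predicate on part are illustrative):

```
select count(*)
from lineitem
join partsupp on l_partkey = ps_partkey
join part on ps_partkey = p_partkey
where p_size = 1; -- the filter on part makes it a runtime-filter source
```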

merge runtime filters
In the current version, if one source could affect 2 targets, we would generate 2 runtime filters.
After this PR, the two RFs are merged.
Refer to regression tests: ds_rf2/ds_rf5/ds_rf54
2023-09-18 18:27:01 +08:00
67e8951b72 [fix](stats) Fix analyze failed when there are thousands of partitions. (#24521)
It was caused by using the same query id for multiple queries of the same olap analyze task, while many structures related to query execution depend on the query id.
2023-09-18 17:27:10 +08:00
9a47e8fa73 [catalog lock](log) enable catalog lock log (#24530) 2023-09-18 16:56:01 +08:00
ef4ab106d8 [fix](security): non-static inner class should not implement the Serializable interface, or when it is serialized it will contain outer class info, which is not safe #24454
fix: a non-static inner class should not implement the Serializable interface; when serialized, it will also contain outer class info, which is not safe.

In this scenario, the class does not use any info of the outer class, so a static nested class should be used instead.
2023-09-18 15:55:43 +08:00
b9f1ac153a [improvement](profile) do not remove value 0 counter (#24487)
do not remove value 0 counter
2023-09-18 15:31:19 +08:00
b4432ce577 [Feature](statistics)Support external table analyze partition (#24154)
Enable collecting partition-level stats for hive external tables.
2023-09-18 14:59:26 +08:00
1153907897 [opt](nereids)add an explanation of why we always update col stats in StatsCalculator. 2023-09-18 13:47:37 +08:00
f3e350e8ec [Improvement](statistics)Improve statistics user experience (#24414)
Two improvements:
1. Move the `Job_Id` column of the `Analyze table` command's return info to the first column, to keep it consistent with `show analyze`.
```
mysql> analyze table hive.tpch100.region;
+--------+--------------+-------------------------+------------+--------------------------------+
| Job_Id | Catalog_Name | DB_Name                 | Table_Name | Columns                        |
+--------+--------------+-------------------------+------------+--------------------------------+
| 14403  | hive         | default_cluster:tpch100 | region     | [r_regionkey,r_comment,r_name] |
+--------+--------------+-------------------------+------------+--------------------------------+
1 row in set (0.03 sec)
```
2. Add an `analyze_timeout` session variable to control the timeout of `analyze table/database with sync`; see the sketch below.
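A hypothetical usage sketch (the timeout unit is assumed to be seconds):

```
set analyze_timeout = 600;
analyze table hive.tpch100.region with sync;
```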
2023-09-18 13:36:41 +08:00
c56d3237e8 [opt](Nereids) remove canEliminate flag on LogicalProject (#24362)
since we have three pieces of infrastructure to ensure that changing the input column order
does not lead to wrong results, we can remove this flag on LogicalProject to
eliminate projects as much as possible and keep the code clear:

1. output list in ResultSink node
2. regular children output in SetOperation node
3. producer to consumer slot id map in CteConsumer
2023-09-18 12:22:33 +08:00
7a8e3a6587 [fix](nereids) fix cte filter pushdown if the filters can be aggregated (#24489)
The current CTE common filter extraction doesn't work if the filters can be aggregated, which means the common filter can't be pushed down inside the CTE. Consider the following case:
with main as (select c1 from t1) select * from (select m1.* from main m1, main m2 where m1.c1 = m2.c1) abc where c1 = 1;
The common c1=1 filter can't be pushed down.

This PR changes the original extraction logic from set to list to make the logic work, which also makes the pattern in tpcds query4/11 work well.
2023-09-18 11:26:55 +08:00
932b639086 [refactor](point query) decouple PointQueryExec from the Coordinator (#24509)
In order to decouple PointQueryExec from the Coordinator, both PointQueryExec and Coordinator inherit from CoordInterface, and are collectively scheduled through StmtExecutor.
2023-09-18 11:25:40 +08:00
c746a89c72 [improvement](transaction) print txn edit log cost time #24501 2023-09-18 11:06:30 +08:00
a07f59de8c [Fix](multi-catalog) Fix hadoop viewfs issues. (#24507)
Error Msg:
Caused by: org.apache.doris.datasource.CacheException: failed to get input splits for FileCacheKey{location='viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table', inputFormat='org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'} in catalog test_viewfs_hive
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:466) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: org.apache.doris.common.UserException: errCode = 2, detailMessage = Failed to list located status for path: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:54) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: java.nio.file.AccessDeniedException: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Invoker.onceInTheFuture(Invoker.java:190) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.next(Listing.java:651) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.requestNextBatch(Listing.java:430) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.<init>(Listing.java:372) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:143) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:211) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4898) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$38(S3AFileSystem.java:4840) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2480) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2499) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4839) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:50) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
2023-09-18 09:51:33 +08:00
f1e049d4d6 [fix](java-udaf)Fix the need to restart BE after replacing the jar package in java-udaf (#24469) 2023-09-17 21:17:05 +08:00
ebe582758f [opt](Nereids): use LocalDate to replace Calendar (#24361) 2023-09-17 11:16:03 +08:00
4594fd25d8 [Fix](kerberos) Fix kerberos relogin bugs when using hdfs-load. (#24490) 2023-09-17 00:05:07 +08:00
88adab3114 [fix](Nereids): fix be core when array_map is not nullable (#24488)
fix BE core dump when array_map is not nullable
2023-09-16 20:39:15 +08:00
a2efa650ec [catalog lock](log) enable info log level on catalog lock (#24471) 2023-09-16 20:29:49 +08:00
b7a7a05eaa [UT](binlog) Add BinlogManager unit test #24486
add BinlogManager unit test
add DBBinlog unit test
add TableBinlog unit test
2023-09-16 18:39:52 +08:00
de50fb5a46 [enhancement](Tablet) rename pathHashToDishInfoRef to pathHashToDiskInfoRef (#24311) 2023-09-16 18:39:11 +08:00
990d6c02ec [Feature](new function) Add a uuid-numeric function, returns uuid in largeint type, 20x faster than uuid (#24395) 2023-09-16 18:26:13 +08:00
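A quick sketch of the new function from the commit above, assuming it is exposed in SQL as uuid_numeric():

```
select uuid_numeric(); -- returns a largeint instead of a string uuid
```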
0ccb032d79 [parameter](query timeout) change default query timeout to 15min (#24480)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-09-16 18:17:58 +08:00
ed8db3727c [feature](partial update) support MOW partial update for insert statement (#21597) 2023-09-16 17:11:59 +08:00
8012ac7661 [fix](bdbje) Remove improper check for journalId (#24464)
* Introduced by  https://github.com/apache/doris/pull/24259
2023-09-16 14:52:27 +08:00
81b6ab9b68 [Fix](topn opt) only allow duplicate key or MOW model to use 2 phase read opt in nereids planner (#24485)
The fetch phase does not support aggregation at present
2023-09-16 10:01:36 +08:00
4dad7c94da [fix](orc) fix the count(*) pushdown issue in orc format (#24446)
Previously, when querying a hive table in orc format where the file is split,
the result of select count(*) could be a multiple of the real row number.

This is because the number of rows should be obtained after orc stripe pruning;
otherwise, it may return a wrong result.
2023-09-16 09:57:39 +08:00
298bf0885d [fix](nereids) correlated anti join shouldn't be translated to null aware anti join (#24290)
original SQL
select t1.* from t1 where t1.k1 not in ( select t3.k1 from t3 where t1.k2 = t3.k2 );

rewrite SQL
before (wrong):
select t1.* from t1 null aware left anti join t3 on t1.k1 = t3.k1 and t1.k2 = t3.k2;
now (correct):
select t1.* from t1 left anti join t3 on t1.k2 = t3.k2 and (t1.k1 = t3.k1 or t3.k1 is null or t1.k1 is null);
2023-09-15 22:50:36 +08:00
1c142309a6 [refactor](jdbc catalog) refactor JdbcFunctionPushDownRule (#23826)
1. Change from string-based function matching to Expr-based matching
2. Replace the `nvl` function with `ifnull` when pushed down to MySQL (see the sketch after this list)
3. Adapt ClickHouse's `from_unixtime` function to push down
4. Non-function filtering can still be pushed down when `enable_func_pushdown` is set to false
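A sketch of item 2, with hypothetical catalog and table names:

```
-- a Doris query against a MySQL JDBC catalog
select * from mysql_catalog.db1.t1 where nvl(c1, 0) > 0;
-- now pushed down to MySQL as: SELECT * FROM db1.t1 WHERE ifnull(c1, 0) > 0
```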
2023-09-15 22:16:07 +08:00
ba4c738ac7 [Feature](Nereids) support values table (#23121)
support insert into table values(...) for Nereids.
SQL like:
insert into t values(1, 2, 3)
insert into t values(1 + 1, dayofweek(now()), 4), (4, 5, 6)
insert into t values('1', '6.5', cast(1.5 as int))
2023-09-15 21:46:37 +08:00
5b43969e35 [fix](profile) fix simply profile because counters may not be the same 2023-09-15 21:11:01 +08:00
b407f275c8 [fix](hive) fix partition prune issue and some external table test cases (#24338)
1. Fix a hive partition prune bug, introduced by #23845, which would fail the `test_hive_default_partition` test case.
2. Fix the `test_local_tvf.groovy` test case; the path of local tvf should be a relative path.
3. Fix the `test_external_catalog_hive` test case; `partitions` is now a reserved keyword.
4. Support `local` tvf in Nereids, and fix related issues like:

```
Caused by: java.lang.NullPointerException
        at org.apache.doris.nereids.stats.ExpressionEstimation.castMinMax(ExpressionEstimation.java:171) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitCast(ExpressionEstimation.java:167) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitCast(ExpressionEstimation.java:109) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.trees.expressions.Cast.accept(Cast.java:55) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitAlias(ExpressionEstimation.java:394) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitAlias(ExpressionEstimation.java:109) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.trees.expressions.Alias.accept(Alias.java:145) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.estimate(ExpressionEstimation.java:119) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.StatsCalculator.lambda$computeProject$7(StatsCalculator.java:785) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_341]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_341]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_341]
```
2023-09-15 20:57:04 +08:00
0742af70ea [Fix](planner) fix select from inline table return only the first row (#24365) 2023-09-15 18:14:54 +08:00
fa37a8bba8 [opt](stats) remove corresponding col stats status if the loading at the end of the analyze task fails (#24405) 2023-09-15 17:46:48 +08:00
4816ca6679 [fix](multi-catalog)fix mc decimal type parse, fix wrong obj location (#24242)
1. mc decimal type needs to be parsed correctly by the arrow vector method
2. fix wrong obj location when using oss, obs, or cosn

Will add test case in another PR
2023-09-15 17:44:56 +08:00
8297da56ad [fix](spark load)use lower case for expr (#24402)
Use lowercase uniformly for expr.
2023-09-15 17:30:03 +08:00
d24f3efd4a [pipelineX](profile) Phase 1: refactor pipelineX detailed profile (#24322) 2023-09-15 16:14:05 +08:00
a4b62eec63 (enhancement)[fe] Add isMaster() check for FrontendService (#24412)
* In the `FrontendServiceImpl` service, APIs that need to write editlog
  must add an `isMaster()` check
2023-09-15 15:01:08 +08:00
eb8ecf49bf [fix](planner) should set preserveRootTypes to true when call substituteList method in ExprSubstitutionMap's compose method (#24392)
If preserveRootTypes is set to false when calling substituteList, the root cast expr may be lost during substitution. For example, if the top cast expr is cast(decimal_col as double) and it is lost, the data types mismatch between the plan node and BE, and BE crashes.
2023-09-15 14:12:06 +08:00
32844b2a5b [fix](java-udf) Fix the need to restart BE after replacing the jar package in java-udf (#24372)
Fix the need to restart BE after replacing the jar package in java-udf
2023-09-15 13:30:08 +08:00
29fe87982f [improve](outfile) add file_suffix options for outfile (#24334) 2023-09-15 12:58:41 +08:00
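A sketch of how the new option from the commit above might be used, assuming file_suffix is passed through outfile properties (path and values illustrative):

```
select * from t1
into outfile "file:///tmp/result_"
format as csv
properties("file_suffix" = "csv");
```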
00bb32cfc0 [opt](nereids) enable two phase partition topn opt #23870
Enable two-phase partition topn optimization, instead of the original full sort at the second phase.
E.g., the partial plan of tpcds q67 is as follows; a full sort after the exchange will have a performance impact, especially if the window column's ndv is very high and the number of windows is huge.

------PhysicalTopN
--------filter((rk <= 100))
----------PhysicalWindow
------------PhysicalQuickSort
--------------PhysicalDistribute
----------------PhysicalPartitionTopN
------------------PhysicalProject

Under this scenario, the second-phase full sort can be transformed into a global PhysicalPartitionTopN, reducing the cost of the full sort. The plan will be optimized to the following:

------PhysicalTopN
--------filter((rk <= 100))
----------PhysicalWindow
------------PhysicalPartitionTopN
--------------PhysicalDistribute
----------------PhysicalPartitionTopN
------------------PhysicalProject
2023-09-15 10:30:34 +08:00
23f01ddf3a [feature](profile) support simply profile (#23377)
A Simplified Version of the Profile

Divided into three levels:
Level 2: The original profile.
Level 1: Instances with identical structures are merged, utilizing concatenation for info strings, and recording the extremum for time types.


Note that currently, this is purely experimental, simplifying the profile on the frontend (you can view profiles at any level).

Subsequently, we will transition the simplification process to the backend. At that point, due to the simplification being done on the backend, viewing profiles at other levels won't be possible.

Due to the issue with the pipeline structure, the active time does not accurately reflect the time of the operators.

```
set enable_simply_profile = false;
set enable_simply_profile = true;
```
2023-09-15 10:25:14 +08:00
320f1e9bbf [improve](routineload) improve show routine load output (#24264) 2023-09-15 10:22:47 +08:00
e0834b2f46 [chore](explain) add annotation in explain string whether nereids is ON #24394 2023-09-15 10:17:17 +08:00
9c681692bd Revert "[fix] fix http_stream retry mechanism (#23969)" (#24407)
This reverts commit 05e365ea137eb8c92b8e7eedc7d1435e83f065ae.
2023-09-15 10:07:53 +08:00
c5ef6cfea2 [fix](Table-Valued Function) fix be core when user specified empty column_separator using hdfs tvf (#24369) 2023-09-14 23:19:48 +08:00