Commit Graph

5955 Commits

Author SHA1 Message Date
7a8e3a6587 [fix](nereids) fix cte filter pushdown if the filters can be aggregated (#24489)
The current CTE common filter extraction doesn't work if the filters can be aggregated, which means the common filter can't be pushed down inside the CTE. Consider the following case:
with main as (select c1 from t1) select * from (select m1.* from main m1, main m2 where m1.c1 = m2.c1) abc where c1 = 1;
The common c1=1 filter can't be pushed down.

This PR changes the original extraction logic from a set to a list so that it works correctly, which also makes the tpcds query4/11 pattern work well.
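
For illustration, pushing the common filter inside the CTE is logically equivalent to the following rewrite (a sketch reusing the tables from the example above):

```sql
-- Both consumers of `main` (m1 and m2) end up filtered by c1 = 1,
-- because m1.c1 = m2.c1 propagates the predicate across the join,
-- so the filter can be evaluated once inside the CTE producer:
with main as (select c1 from t1 where c1 = 1)
select * from (select m1.* from main m1, main m2 where m1.c1 = m2.c1) abc;
```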
2023-09-18 11:26:55 +08:00
932b639086 [refactor](point query) decouple PointQueryExec from the Coordinator (#24509)
In order to decouple PointQueryExec from the Coordinator, both PointQueryExec and Coordinator inherit from CoordInterface, and are collectively scheduled through StmtExecutor.
2023-09-18 11:25:40 +08:00
c746a89c72 [improvement](transaction) print txn edit log cost time #24501 2023-09-18 11:06:30 +08:00
a07f59de8c [Fix](multi-catalog) Fix hadoop viewfs issues. (#24507)
Error Msg:
Caused by: org.apache.doris.datasource.CacheException: failed to get input splits for FileCacheKey{location='viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table', inputFormat='org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'} in catalog test_viewfs_hive
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:466) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: org.apache.doris.common.UserException: errCode = 2, detailMessage = Failed to list located status for path: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:54) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
Caused by: java.nio.file.AccessDeniedException: viewfs://my-cluster/ns1/usr/hive/warehouse/viewfs.db/parquet_table: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Invoker.onceInTheFuture(Invoker.java:190) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.next(Listing.java:651) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.requestNextBatch(Listing.java:430) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing$FileStatusListingIterator.<init>(Listing.java:372) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:143) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.Listing.getListFilesAssumingDir(Listing.java:211) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:4898) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listFiles$38(S3AFileSystem.java:4840) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449) ~[hadoop-common-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2480) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2499) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:4839) ~[hadoop-aws-3.3.6.jar:?]
        at org.apache.doris.fs.remote.RemoteFileSystem.listLocatedFiles(RemoteFileSystem.java:50) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.getFileCache(HiveMetaStoreCache.java:381) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.loadFiles(HiveMetaStoreCache.java:432) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache.access$400(HiveMetaStoreCache.java:112) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:210) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.datasource.hive.HiveMetaStoreCache$3.load(HiveMetaStoreCache.java:202) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.common.util.CacheBulkLoader.lambda$null$0(CacheBulkLoader.java:42) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]
        ... 3 more
2023-09-18 09:51:33 +08:00
f1e049d4d6 [fix](java-udaf)Fix need to restart BE after replacing the jar package in java-udaf (#24469) 2023-09-17 21:17:05 +08:00
ebe582758f [opt](Nereids): use LocalDate to replace Calendar (#24361) 2023-09-17 11:16:03 +08:00
4594fd25d8 [Fix](kerberos) Fix kerberos relogin bugs when using hdfs-load. (#24490) 2023-09-17 00:05:07 +08:00
88adab3114 [fix](Nereids): fix be core when array_map is not nullable (#24488)
fix be core when array_map is not nullable
2023-09-16 20:39:15 +08:00
a2efa650ec [catalog lock](log) enable info log level on catalog lock (#24471) 2023-09-16 20:29:49 +08:00
b7a7a05eaa [UT](binlog) Add BinlogManager unit test #24486
add BinlogManager unit test
add DBBinlog unit test
add TableBinlog unit test
2023-09-16 18:39:52 +08:00
de50fb5a46 [enhancement](Tablet) rename pathHashToDishInfoRef to pathHashToDiskInfoRef (#24311) 2023-09-16 18:39:11 +08:00
990d6c02ec [Feature](new function) Add a uuid-numeric function, returns uuid in largeint type, 20x faster than uuid (#24395) 2023-09-16 18:26:13 +08:00
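A quick usage sketch (assuming the new function is exposed as uuid_numeric(); the exact name is not shown in the commit title):

```sql
-- Returns a 128-bit unique value as LARGEINT rather than a 36-character
-- string, which is much cheaper to generate and compare than uuid():
SELECT uuid_numeric();
```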
0ccb032d79 [parameter](query timeout) change default query timeout to 15min (#24480)
Co-authored-by: yiguolei <yiguolei@gmail.com>
2023-09-16 18:17:58 +08:00
ed8db3727c [feature](partial update) support MOW partial update for insert statement (#21597) 2023-09-16 17:11:59 +08:00
8012ac7661 [fix](bdbje) Remove improper check for journalId (#24464)
* Introduced by https://github.com/apache/doris/pull/24259
2023-09-16 14:52:27 +08:00
81b6ab9b68 [Fix](topn opt) only allow duplicate key or MOW model to use 2 phase read opt in nereids planner (#24485)
The fetch phase does not support aggregation at present
2023-09-16 10:01:36 +08:00
4dad7c94da [fix](orc) fix the count(*) pushdown issue in orc format (#24446)
Previously, when querying a hive table in orc format whose file is split,
the result of select count(*) could be a multiple of the real row number.

This is because the number of rows should be obtained after orc stripe prune;
otherwise, a wrong result may be returned.
2023-09-16 09:57:39 +08:00
298bf0885d [fix](nereids) correlated anti join shouldn't be translated to null aware anti join (#24290)
Original SQL:
select t1.* from t1 where t1.k1 not in ( select t3.k1 from t3 where t1.k2 = t3.k2 );

Rewritten SQL,
before (wrong):
select t1.* from t1 null aware left anti join t3 on t1.k1 = t3.k1 and t1.k2 = t3.k2;
now (correct):
select t1.* from t1 left anti join t3 on t1.k2 = t3.k2 and (t1.k1 = t3.k1 or t3.k1 is null or t1.k1 is null);
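A minimal data example (illustrative rows, not taken from the PR) shows why the null-aware form is wrong:

```sql
-- Suppose t1 holds (k1 = 1, k2 = 10) and t3 holds (k1 = NULL, k2 = 20).
-- Correlated NOT IN: for the t1 row the subquery filters on t1.k2 = t3.k2,
-- i.e. 10 = 20, so it returns an empty set; `1 NOT IN (empty)` is true and
-- the row must be kept.
-- A null-aware anti join on t1.k1 = t3.k1 instead sees the NULL in t3.k1
-- globally and drops the row, ignoring that the correlation predicate
-- already excludes that t3 row.
select t1.* from t1
where t1.k1 not in (select t3.k1 from t3 where t1.k2 = t3.k2);
```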
2023-09-15 22:50:36 +08:00
1c142309a6 [refactor](jdbc catalog) refactor JdbcFunctionPushDownRule (#23826)
1. Change from string-based function matching to Expr-based matching
2. Replace the `nvl` function with `ifnull` when pushing down to MySQL (see the sketch below)
3. Adapt ClickHouse's `from_unixtime` function for pushdown
4. Non-function filters can still be pushed down when `enable_func_pushdown` is set to false
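
For example, for point 2, the rewritten predicate would look like this (a sketch; the exact pushed-down text depends on the rule, and the table name is a placeholder):

```sql
-- Filter written against a MySQL JDBC external table in Doris:
select * from mysql_catalog.db.t where nvl(c1, 0) = 1;
-- MySQL has no nvl(), so the filter pushed to MySQL becomes:
--   ifnull(c1, 0) = 1
```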
2023-09-15 22:16:07 +08:00
ba4c738ac7 [Feature](Nereids) support values table (#23121)
Support insert into table values(...) for Nereids.
SQL like:
insert into t values(1, 2, 3)
insert into t values(1 + 1, dayofweek(now()), 4), (4, 5, 6)
insert into t values('1', '6.5', cast(1.5 as int))
2023-09-15 21:46:37 +08:00
5b43969e35 [fix](profile) fix simply profile because counters may not be the same 2023-09-15 21:11:01 +08:00
b407f275c8 [fix](hive) fix partition prune issue and some external table test cases (#24338)
1. Fix hive partition prune bug, introduced from #23845, which failed the `test_hive_default_partition` test case.
2. Fix the `test_local_tvf.groovy` test case; the path for the local tvf should be a relative path (see the usage sketch after the stack trace).
3. Fix the `test_external_catalog_hive` test case; `partitions` is now a reserved keyword.
4. Support the `local` tvf in Nereids, and fix related issues like:

```
Caused by: java.lang.NullPointerException
        at org.apache.doris.nereids.stats.ExpressionEstimation.castMinMax(ExpressionEstimation.java:171) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitCast(ExpressionEstimation.java:167) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitCast(ExpressionEstimation.java:109) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.trees.expressions.Cast.accept(Cast.java:55) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitAlias(ExpressionEstimation.java:394) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.visitAlias(ExpressionEstimation.java:109) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.trees.expressions.Alias.accept(Alias.java:145) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.ExpressionEstimation.estimate(ExpressionEstimation.java:119) ~[doris-fe.jar:1.2-SNAPSHOT]
        at org.apache.doris.nereids.stats.StatsCalculator.lambda$computeProject$7(StatsCalculator.java:785) ~[doris-fe.jar:1.2-SNAPSHOT]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_341]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_341]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_341]
```
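
A usage sketch for the `local` tvf from items 2 and 4 (the backend id and file name are placeholders; argument names follow the usual tvf convention):

```sql
-- `file_path` should be a relative path under the BE-side directory:
select * from local(
    "file_path" = "test_data/example.csv",
    "backend_id" = "10001",
    "format" = "csv");
```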
2023-09-15 20:57:04 +08:00
0742af70ea [Fix](planner) fix select from inline table return only the first row (#24365) 2023-09-15 18:14:54 +08:00
fa37a8bba8 [opt](stats) remove corresponding col stats status if the loading at the end of the analyze task fails (#24405) 2023-09-15 17:46:48 +08:00
4816ca6679 [fix](multi-catalog)fix mc decimal type parse, fix wrong obj location (#24242)
1. The mc decimal type needs to be parsed correctly by the arrow vector method
2. Fix wrong obj location when using oss, obs, or cosn

Test cases will be added in another PR.
2023-09-15 17:44:56 +08:00
8297da56ad [fix](spark load) use lower case for expr (#24402)
Use lowercase uniformly for expr.
2023-09-15 17:30:03 +08:00
d24f3efd4a [pipelineX](profile) Phase 1: refactor pipelineX detailed profile (#24322) 2023-09-15 16:14:05 +08:00
a4b62eec63 (enhancement)[fe] Add isMaster() check for FrontendService (#24412)
* In the `FrontendServiceImpl` service, APIs that need to write the edit log
  must add an `isMaster()` check
2023-09-15 15:01:08 +08:00
eb8ecf49bf [fix](planner) should set preserveRootTypes to true when call substituteList method in ExprSubstitutionMap's compose method (#24392)
If preserveRootTypes is set to false when calling substituteList, the root cast expr may be lost during substitution. For example, if the top cast expr cast(decimal_col as double) is lost, the data type mismatch between the plan node and the BE causes a crash.
2023-09-15 14:12:06 +08:00
32844b2a5b [fix](java-udf) Fix need to restart BE after replacing the jar package in java-udf (#24372)
Fix need to restart BE after replacing the jar package in java-udf
2023-09-15 13:30:08 +08:00
29fe87982f [improve](outfile) add file_suffix options for outfile (#24334) 2023-09-15 12:58:41 +08:00
00bb32cfc0 [opt](nereids) enable two phase partition topn opt #23870
Enable the two-phase partition topn optimization, instead of the original full sort at the second phase.
E.g., the partial plan of tpcds q67 is as follows; a full sort after the exchange has a performance impact, especially if the window column's ndv is very high and the number of windows is large.

```
------PhysicalTopN
--------filter((rk <= 100))
----------PhysicalWindow
------------PhysicalQuickSort
--------------PhysicalDistribute
----------------PhysicalPartitionTopN
------------------PhysicalProject
```

Under this scenario, the second-phase full sort can be transformed into a global PhysicalPartitionTopN, reducing the cost of the full sort. The plan is optimized to the following:

```
------PhysicalTopN
--------filter((rk <= 100))
----------PhysicalWindow
------------PhysicalPartitionTopN
--------------PhysicalDistribute
----------------PhysicalPartitionTopN
------------------PhysicalProject
```
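
The shape being optimized corresponds to SQL like the following (a generic sketch of the tpcds q67 pattern, not the exact query):

```sql
-- Each partition only needs its top 100 rows by rank, so both the local
-- and the global phase can run a partition-local top-n instead of a full sort:
select * from (
    select c1, c2,
           rank() over (partition by c1 order by c2) as rk
    from t
) v
where rk <= 100;
```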
2023-09-15 10:30:34 +08:00
23f01ddf3a [feature](profile) support simply profile (#23377)
A Simplified Version of the Profile

Divided into three levels:
Level 2: The original profile.
Level 1: Instances with identical structures are merged, using concatenation for info strings and recording the extremum for time types.

Note that currently this is purely experimental, simplifying the profile on the frontend (you can view profiles at any level).

Subsequently, we will move the simplification to the backend. At that point, because the simplification is done on the backend, viewing profiles at other levels won't be possible.

Due to an issue with the pipeline structure, the active time does not accurately reflect the time of the operators.

```
set enable_simply_profile = false;
set enable_simply_profile = true;
```
2023-09-15 10:25:14 +08:00
320f1e9bbf [improve](routineload) improve show routine load output (#24264) 2023-09-15 10:22:47 +08:00
e0834b2f46 [chore](explain) add annotation in explain string whether nereids is ON #24394 2023-09-15 10:17:17 +08:00
9c681692bd Revert "[fix] fix http_stream retry mechanism (#23969)" (#24407)
This reverts commit 05e365ea137eb8c92b8e7eedc7d1435e83f065ae.
2023-09-15 10:07:53 +08:00
c5ef6cfea2 [fix](Table-Valued Function) fix be core when user specified empty column_separator using hdfs tvf (#24369) 2023-09-14 23:19:48 +08:00
5ba1f62da8 [enhancement](Nereids) make stats unchanged (#23737)
Keep stats unchanged when exploring plans
2023-09-14 22:18:54 +08:00
66bd2a4862 [test](Nereids) add test push down filter (#24250)
Add test for pushDownFilterThroughProject
2023-09-14 22:13:41 +08:00
d4756d3118 [feature](Nereids): fold Cast(s as date/datetime) on FE (#24353)
cast("20210101" as Date) -> DateLiteral(2021, 1, 1)
2023-09-14 22:08:26 +08:00
f61e6483bf [enhancement](broker-load) support compress type for old broker load, and split compress type from file format (#23882) 2023-09-14 21:42:28 +08:00
05e365ea13 [fix] fix http_stream retry mechanism (#23969)
Co-authored-by: yiguolei <676222867@qq.com>
2023-09-14 21:41:11 +08:00
07720d3ff9 [feature](replica version) Add admin set replica version statement (#23706) 2023-09-14 21:12:00 +08:00
d20365cdcf [fix](transaction) fix publish txn fake succ (#24273) 2023-09-14 21:04:59 +08:00
c6a92955ca [refactor](optimizer) Remove useless check #24237
Check stats table status first
Comment out the histgram_tbl check since it is useless for now
Do preheat in both master and follower
2023-09-14 19:35:56 +08:00
3ee89aea35 [Feature](merge-on-write)Support ignore mode for merge-on-write unique table (#21773) 2023-09-14 18:03:51 +08:00
9c6734e68e [bugfix](index) Fix build index limitations (#24358)
1. Skip an existing index on a column with a different id when building an index
2. Allow building an index in CANCELED or FINISHED state
2023-09-14 17:53:22 +08:00
eaa35649bc [fix](bdbje) handle ReplicaWriteException in BDBJEJournal.write (#24259)
* When BDBJEJournal.write meets `ReplicaWriteException`, we should not
  retry, because at that moment the bdbje node state is `REPLICA` (not `MASTER`).
  If we still retry the write while an election is triggered at the same time,
  the original `REPLICA` node may transfer to `MASTER`, which will cause an incorrect journalId

Co-authored-by: yiguolei <676222867@qq.com>
2023-09-14 17:49:28 +08:00
d035a58374 [feature](nereids) support unnest subquery in LogicalOneRowRelation (#24355)
select (select 1);
before:
ERROR 1105 (HY000): errCode = 2, detailMessage = Subquery is not supported in the select list.
after:
mysql> select (select 1);
+---------------------------------------------------------------------+
|  (SCALARSUBQUERY) (LogicalOneRowRelation ( projects=[1 AS `1`#0] )) |
+---------------------------------------------------------------------+
|                                                                   1 |
+---------------------------------------------------------------------+
1 row in set (0.61 sec)
2023-09-14 17:22:08 +08:00
0be0b8ff58 [opt](stats) Support display of auto analyze jobs (#24135)
### Support display of auto analyze jobs

After this PR, users and DBAs can use the following grammar to check the execution status of auto analyze jobs:

```sql
SHOW AUTO ANALYZE [tbl_name] [WHERE STATE='SOME STATE']
```
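
For example (hypothetical table name and state):

```sql
SHOW AUTO ANALYZE sales WHERE STATE='FINISHED';
```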

The record count of historical auto analyze jobs can be configured by setting the FE option auto_analyze_job_record_count; the default value is 2000.

### Enhance auto analyze

After this PR, analyze jobs created automatically will no longer execute beyond a specific time frame.
2023-09-14 17:10:04 +08:00