Commit Graph

5755 Commits

87c7f2fcc1 [Feature](profile) set sql and defaultDb fields in show-load-profile. (#15875)
When executing `show load profile '/'`, the values of the SQL and DefaultDb columns were all 'N/A', but we can fill these fields. The result of this PR is as follows:

Execute show load profile '/'\G:

MySQL [test_d]> show load profile '/'\G
*************************** 1. row ***************************
   QueryId: 652326
      User: N/A
 DefaultDb: default_cluster:test_d
       SQL: LOAD LABEL `default_cluster:test_d`.`xxx`  (APPEND DATA INFILE ('hdfs://xxx/user/hive/warehouse/xxx.db/xxx/*')  INTO TABLE xxx FORMAT AS 'ORC' (c1, c2, c3) SET (`c1` = `c1`, `c2` = `c2`, `c3` = `c3`))  WITH BROKER broker_xxx (xxx)  PROPERTIES ("max_filter_ratio" = "0", "timeout" = "30000")
 QueryType: Load
 StartTime: 2023-01-12 18:33:34
   EndTime: 2023-01-12 18:33:46
 TotalTime: 11s613ms
QueryState: N/A
1 row in set (0.01 sec)
2023-01-21 08:10:15 +08:00
8b40791718 [Feature](ES): catalog support mapping es _id #15943 2023-01-21 08:08:32 +08:00
01c001e2ac [refactor](javaudf) simplify UdfExecutor and UdafExecutor (#16050)
* [refactor](javaudf) simplify UdfExecutor and UdafExecutor

* update

* update
2023-01-21 08:07:28 +08:00
2daa5f3fef [fix](statistics) Fix statistics related threads continuously spawn as doing checkpoint #16088 2023-01-21 07:58:33 +08:00
7814d2b651 [Fix](Oracle External Table) fix that oracle external table can not insert batch values (#16117)
Issue Number: close #xxx

This PR fixes two bugs:

1. `_jdbc_scanner` may be nullptr in vjdbc_connector.cpp, so we use another method to count jdbc statistics. close [Enhencement](jdbc scanner) add profile for jdbc scanner #15914
2. In the batch insertion scenario, the Oracle database does not support the syntax `insert into table values (...),(...);`. What it supports is:
insert all
into table(col1,col2) values(c1v1, c2v1)
into table(col1,col2) values(c1v2, c2v2)
SELECT 1 FROM DUAL;
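The rewrite can be sketched as a small helper that assembles Oracle's `INSERT ALL` form from a batch of rows (a hypothetical illustration, not the connector's actual code, which would use JDBC parameter binding rather than string interpolation):

```python
def to_oracle_insert_all(table, columns, rows):
    """Rewrite a multi-row INSERT into Oracle's INSERT ALL form.

    Hypothetical helper for illustration only; values are interpolated
    directly, whereas real code would bind JDBC parameters.
    """
    col_list = ", ".join(columns)
    lines = ["INSERT ALL"]
    for row in rows:
        values = ", ".join(str(v) for v in row)
        lines.append(f"INTO {table}({col_list}) VALUES({values})")
    # Oracle's multi-table insert requires a trailing subquery.
    lines.append("SELECT 1 FROM DUAL")
    return "\n".join(lines)
```

For two rows this produces one `INSERT ALL` statement with two `INTO ... VALUES` clauses and the mandatory `SELECT 1 FROM DUAL` tail.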
2023-01-21 07:57:12 +08:00
5514b1c1b7 [enhancement](tablet_report) accelerate deleteFromBackend function to avoid tablet report task blocked (#16115) 2023-01-20 20:11:58 +08:00
0305aad097 [fix](privilege)fix grant resource bug (#16045)
GRANT USAGE_PRIV ON RESOURCE * TO user;
the user will see all databases

Set a PrivPredicate for show resources and remove USAGE from the SHOW_PRIV PrivPredicate
2023-01-20 19:00:44 +08:00
a4265fae70 [enhancement](query) Make query scan nodes more evenly distributed (#16037)
Take replicaNumPerHost into consideration when scheduling scan nodes to hosts, so that the final query scan nodes are more evenly distributed across the cluster
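The scheduling idea can be sketched as follows, with simplified per-host load maps (the helper and argument names are hypothetical, not the FE's actual code):

```python
def pick_host(candidate_hosts, assigned_scans, replica_num_per_host):
    """Pick the replica host with the lightest load.

    Illustrative sketch: prefer the host with the fewest scan ranges
    already assigned, breaking ties by fewer replicas on that host.
    """
    return min(
        candidate_hosts,
        key=lambda h: (assigned_scans.get(h, 0), replica_num_per_host.get(h, 0)),
    )
```

Adding the replica count as a tie-breaker keeps hosts that already hold many replicas from accumulating even more scan work.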
2023-01-20 16:24:49 +08:00
419f433d21 [fix](Nereids) topn arg check is not compatible with legacy planner (#16105) 2023-01-20 15:08:10 +08:00
72df283344 [fix](planner) extract common factor rule should consider not only where predicate (#16110)
PR #14381 limited `ExtractCommonFactorsRule` to handle only the `WHERE` predicate,
but predicates in the `ON` clause should also be considered. For example:

```
CREATE TABLE `nation` (
  `n_nationkey` int(11) NOT NULL,
  `n_name` varchar(25) NOT NULL,
  `n_regionkey` int(11) NOT NULL,
  `n_comment` varchar(152) NULL
)
DUPLICATE KEY(`n_nationkey`)
DISTRIBUTED BY HASH(`n_nationkey`) BUCKETS 1
PROPERTIES (
"replication_allocation" = "tag.location.default: 1"
);


select * from
nation n1 join nation n2
on (n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
```

There should be predicates:
```
PREDICATES: `n1`.`n_name` IN ('FRANCE', 'GERMANY')
PREDICATES: `n2`.`n_name` IN ('FRANCE', 'GERMANY')
```
On each scan node.

This PR fixes the issue by removing that restriction from `ExtractCommonFactorsRule`
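The rule's effect can be sketched in simplified form: from an OR of AND-ed equality conjuncts, any column that appears in every disjunct yields an IN predicate over the union of its values (illustrative only, not the planner's actual code):

```python
def extract_common_in_predicates(disjuncts):
    """Derive per-column IN predicates from an OR of AND-ed equalities.

    Simplified sketch of the ExtractCommonFactors idea. Each disjunct is
    a list of (column, value) equality pairs; only columns constrained in
    every disjunct can safely become IN predicates.
    """
    common = {col for col, _ in disjuncts[0]}
    for d in disjuncts[1:]:
        common &= {col for col, _ in d}
    result = {}
    for col in common:
        vals = set()
        for d in disjuncts:
            vals |= {v for c, v in d if c == col}
        result[col] = sorted(vals)
    return result
```

Applied to the FRANCE/GERMANY join condition above, both `n1.n_name` and `n2.n_name` get an `IN ('FRANCE', 'GERMANY')` predicate, matching the expected scan-node predicates.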
2023-01-20 14:53:48 +08:00
1638936e3f [fix](oracle catalog) oracle catalog support TIMESTAMP dateType of oracle (#16113)
Oracle's `TIMESTAMP` data type is mapped to Doris's `DateTime` data type
2023-01-20 14:47:58 +08:00
726427b795 [refactor](fe) refactor and upgrade dependency tree of FE and support AWS glue catalog (#16046)
1. Spark dpp
 
	Move `DppResult` and `EtlJobConfig` to the sparkdpp package in the `fe-common` module,
	so that `fe-core` no longer depends on the `spark-dpp` module and `spark-dpp.jar`
	is no longer copied into `fe/lib`, which reduces the size of the FE output.
	
2. Modify start_fe.sh

	Modify the CLASSPATH to make sure that doris-fe.jar comes first, so that
	when loading classes with the same qualified name, they are loaded from doris-fe.jar first.
	
3. Upgrade hadoop and hive version

	hadoop: 2.10.2 -> 3.3.3
	hive: 2.3.7 -> 3.1.3
	
4. Override the IHiveMetastoreClient implementations from dependency

	`ProxyMetaStoreClient.java` for Aliyun DLF.
	`HiveMetaStoreClient.java` for origin Apache Hive metastore.

	Because some of their methods need to be modified to make them compatible with
	different versions of Hive.
	
5. Exclude some unused dependencies to reduce the size of FE output

	Now it is only 370MB (it was 600MB before)
	
6. Upgrade aws-java-sdk version to 1.12.31

7. Support AWS Glue Data Catalog

8. Remove HudiScanNode (no longer supported)
2023-01-20 14:42:16 +08:00
3652cb3fe9 [test](Nereids)add test aboule dateType and dateTimeType (#16098) 2023-01-20 14:15:54 +08:00
101bc568d7 [fix](Nereids) fix bugs about date function (#16112)
1. when casting a constant, check whether the value is within the range of the target type
2. change the scale of dateTimeV2 to 6
2023-01-20 14:11:17 +08:00
cbb203efd2 [fix](nereids) fix test_join regression test for nereids (#16094)
1. add TypeCoercion for (string, decimal) and (date, decimal)
2. the equality of LogicalProject nodes should consider children in some cases
3. don't push down join conditions like `t1 join t2 on true/false`
4. add PUSH_DOWN_FILTERS after FindHashConditionForJoin
5. nested loop join should support all kinds of join
6. the intermediate tuple should contain slots from both children of a nested loop join.
2023-01-20 14:02:29 +08:00
116e17428b [Enhancement](point query optimize) improve performace of point query on primary keys (#15491)
1. support row format using codec of jsonb
2. short path optimize for point query
3. support prepared statement for point query
4. support mysql binary format
2023-01-20 13:33:01 +08:00
3ebc98228d [feature wip](multi catalog)Support iceberg schema evolution. (#15836)
Support Iceberg schema evolution for the parquet file format.
Iceberg uses a unique id for each column to support schema evolution.
To support this feature in Doris, the FE side needs to get the current column id for each column and send the ids to the BE side.
The BE reads the column ids from parquet key_value_metadata, sets the changed column names in the Block to match the names in the parquet file before reading data, and sets the names back after reading.
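The id-based name matching can be sketched like this (a hypothetical helper; both schemas are modeled as field-id-to-name maps):

```python
def resolve_read_names(current_schema, file_schema):
    """Map each current column name to the name recorded in the file.

    Sketch of schema-evolution matching by Iceberg field id. Both
    arguments map field id -> column name. Columns whose id is absent
    from the file keep their current name (they were added after the
    file was written).
    """
    return {
        name: file_schema.get(col_id, name)
        for col_id, name in current_schema.items()
    }
```

A column renamed since the file was written resolves to its old on-file name for reading, then the current name is restored afterwards.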
2023-01-20 12:57:36 +08:00
ab4127d0b2 [Fix][regression-test] Fix test_hdfs_tvf.groovy by update HDFS conf URI to uri and better error msg handling. (#16029)
Fix test_hdfs_tvf.groovy by updating the HDFS conf `URI` to `uri` and improving error message handling.
test_hdfs_tvf.groovy didn't pass.
2023-01-20 12:40:25 +08:00
ba71516eba [feature](jdbc catalog) support SQLServer jdbc catalog (#16093) 2023-01-20 12:37:38 +08:00
60231454cc [fix](nereids) fix bug in multiply return data type (#15949) 2023-01-20 11:44:24 +08:00
2ed4eac6f8 [feature](Nereids) add scalar function width_bucket (#16106) 2023-01-20 11:31:17 +08:00
73621bdb18 [enhance](Nereids) process DELETE_SIGN_COLUMN of OlapTable(#16030)
1. add DELETE_SIGN_COLUMN to the non-visible columns in LogicalOlapScan
2. when the table has a delete sign, add a filter `delete_sign_column = 0`
3. use output slots and non-visible slots to bind slots
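The implicit filter's effect can be sketched in miniature (`__DORIS_DELETE_SIGN__` is Doris's hidden delete-sign column; the helper itself is illustrative only):

```python
DELETE_SIGN_COLUMN = "__DORIS_DELETE_SIGN__"  # Doris hidden delete-sign column

def visible_rows(rows):
    """Keep only rows whose delete sign is 0, mirroring the implicit
    `delete_sign_column = 0` filter the planner adds on tables with a
    delete sign. Illustrative sketch, not engine code."""
    return [r for r in rows if r.get(DELETE_SIGN_COLUMN, 0) == 0]
```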
2023-01-20 11:27:35 +08:00
5b2191a496 [fix](multi-catalog)Make ES catalog and resource compatible (#16096)
close #16099 

1. Make ES resource compatible with `username` property. Keep the same behavior with ES catalog.
2. Change ES catalog `username` to `user` to avoid confusion.
3. Add log in ESRestClient and make debug easier.
2023-01-20 09:31:57 +08:00
85c2060862 [Minor](Nereids): minor fix (#16095) 2023-01-20 00:53:11 +08:00
379ba73675 [enhance](nereids) tightestCommonType of datetime and datev2 is datev2 (#16086)
In the original planner, the tightestCommonType of datetime and datev2 is datev2.
Make Nereids compatible with the original planner.
2023-01-19 19:55:19 +08:00
6cff651f71 [enhancement](statistics) add some methods to use histogram statistics (#15755)
1. Fix the histogram document
2. Add some methods for histogram statistics

TODO:
1. use histogram statistics for the optimizer
2023-01-19 19:20:18 +08:00
c1dd1fc331 [fix](nereids): fix all bugs in mergeGroup(). (#16079)
* [fix](Nereids): fix mergeGroup()

* polish code

* fix replace children of PhysicalEnforcer

* delete `deleteBestPlan`

* delete `getInputProperties`

* after merge GroupExpression, clear owner Group
2023-01-19 19:15:05 +08:00
dd869077f8 [fix](nereids) do not generate compare between Date to Date (#16061)
The BE storage engine has a bug in Date comparison, so if we push down predicates like `Date'x' < Date'y'`, we get wrong results.
This PR converts expressions like `Date'x' < Date'y'` to `DateTime'x' < DateTime'y'`.

TODO:
does the storage engine support comparing a date slot with a datetime?
If it does, we could avoid adding a cast on the slot,
and then this expression could be pushed down to the storage engine.
2023-01-19 15:56:51 +08:00
74c0677d62 [fix](planner) fix bugs in uncheckedCastChild (#15905)
1. `uncheckedCastChild` may generate a redundant `CastExpr` like `cast(cast(XXX as Date) as Date)`
2. generate a DateLiteral to replace `cast(IntLiteral as Date)`
2023-01-19 15:51:08 +08:00
21b78cb820 [fix](nereids) Fix bind failed of the slots in the group by clause (#16077)
A child's slot with the same name as a slot in the output expressions would be discarded, which caused the bind to fail, since the slots in the group by expressions could not find the corresponding bound slots in the child's output
2023-01-19 15:36:13 +08:00
0144c51ddb [fix](nereids) fix bug in CaseWhen.getDataType and add some missing case for findTightestCommonType (#15776) 2023-01-19 15:30:25 +08:00
47aa53fa72 [fix](multi-catalog)switching catalogs after dropping will get NPE. (#16067)
Issue Number: close #16066
2023-01-19 15:13:21 +08:00
c5beab39c0 [fix](nereids) Bind slot in having to its direct child instead of grand child (#16047)
For example, in the query below, the `date` in the having clause should be bound to the alias with the same name, instead of the `date` field of the relation:

SELECT date_format(date, '%x%v') AS `date` FROM `tb_holiday` WHERE `date` BETWEEN 20221111 AND 20221116 HAVING date = 202245 ORDER BY date;
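The binding order the fix establishes can be sketched as (a hypothetical helper, not the actual binder):

```python
def bind_having_slot(name, child_aliases, relation_slots):
    """Resolve a HAVING slot: prefer the direct child's output alias,
    falling back to the underlying relation's slot only when no alias
    matches. Simplified illustration of the binding order."""
    if name in child_aliases:
        return child_aliases[name]
    return relation_slots.get(name)
```

With this order, `HAVING date = 202245` resolves to the `date_format(...)` alias rather than the relation's raw `date` column.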
2023-01-19 13:19:16 +08:00
abdf56bfa5 [fix](Nereids) wrong result of group_concat with order by or null args (#16081)
1. signatures without the order element are wrong
2. the signature with one arg is missing
3. group_concat should be a NullableAggregateFunction
4. fold constant on FE should not fold a NullableAggregateFunction with a null arg

TODO
1. reorder rewrite rules, and then only forbid fold constant on NullableAggregateFunction with alwaysNullable == true
2023-01-19 11:22:30 +08:00
e846e8c0fd [enhance](Nereids): remove Group constructor for UT. (#16005) 2023-01-19 11:13:23 +08:00
3894de49d2 [Enhancement](topn) support two phase read for topn query (#15642)
This PR optimizes TopN queries like `SELECT * FROM tableX ORDER BY columnA ASC/DESC LIMIT N`.

TopN is composed of a SortNode and a ScanNode. When the user table is wide (100+ columns) and the order by clause uses just a few columns, the ScanNode still needs to scan all data from the storage engine even if the limit is very small. This can cause a lot of read amplification. So in this PR I divide the TopN query into two phases:
1. In the first phase we only read `columnA`'s data from the storage engine, along with an extra RowId column called `__DORIS_ROWID_COL__`. The other columns are pruned from the ScanNode.
2. The second phase is placed in the ExchangeNode because it is the central node for TopN in the cluster. The ExchangeNode spawns RPCs to other nodes using the RowIds (sorted and limited by the SortNode) read in the first phase, and reads the rows from the storage engine.

After the second-phase read, the Block contains all the data needed for the query
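The two phases can be modeled in miniature on an in-memory table (an illustrative sketch of the idea, not the actual BE implementation):

```python
def two_phase_topn(table, sort_col, limit):
    """Two-phase TopN sketch: phase 1 sorts only the sort column plus a
    row id; phase 2 fetches the full rows for the winning ids.

    `table` is a list of dict rows; the list index stands in for the
    storage engine's row id.
    """
    # Phase 1: read only (sort value, row id), sort, apply the limit.
    keyed = sorted(
        (row[sort_col], rid) for rid, row in enumerate(table)
    )[:limit]
    # Phase 2: fetch the complete rows by row id, keeping sort order.
    return [table[rid] for _, rid in keyed]
```

Phase 1 touches one column instead of 100+, and phase 2 fetches at most `limit` full rows, which is where the read amplification is saved.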
2023-01-19 10:01:33 +08:00
c7a72436e6 [Feature](multi-catalog)Add support for JuiceFS (#15969)
The broker implements the interface to JuiceFS, so data can be loaded from JuiceFS into Doris through the broker.
It also implements the multi catalog support needed to read Hive data stored in JuiceFS
2023-01-19 08:54:16 +08:00
7288f1f1d4 [Fix](profile) do not send export profile when enable_profile=false. (#15996) 2023-01-19 08:06:39 +08:00
c43edbdfea [bug](cooldown)fix bug for single cooldown (#16040)
* fix bug for single cooldown

* fix bug for single cooldown
2023-01-19 08:03:32 +08:00
76622bcab4 [enhance](FE): remove constructor just used for UT and useless ERROR code (#16080)
* [enhance](FE): remove constructor just used for UT.

* [enhance](FE): remove useless ERROR Code

* fix checkstyle
2023-01-19 08:00:48 +08:00
d8f598eeab [enhancement](Nereids) add timestampadd, timestampdiff functions (#16072) 2023-01-19 01:05:25 +08:00
2acf634f84 [CleanUp](FE): cleanup useless code in FE. (#16058) 2023-01-18 22:25:41 +08:00
78ba446487 [Enhancement](Nereids) add more clear message when parse failed (#16056) 2023-01-18 22:19:46 +08:00
cbcd5228b7 [enhance](nereids): polish code for mergeGroup(). (#16057) 2023-01-18 21:03:46 +08:00
feeb69438b [opt](Nereids) optimize DistributeSpec generator of OlapScan (#15965)
Use the number of selected partitions instead of the olap table's total partition count to decide whether to generate a hashDistributeSpec
2023-01-18 20:18:11 +08:00
34075368ec (improvement)[bucket] Add auto bucket implement (#15250) 2023-01-18 19:50:18 +08:00
0916cbcb10 [ehancement](nereids) Made the parse for named expression more complete (#16010)
After this PR, we support grammar such as:

SELECT SUBSTRING("dddd编", 0, 3) AS "测试";
SELECT SUBSTRING("dddd编", 0, 3) "测试";
2023-01-18 19:44:51 +08:00
4035bd83c3 [fix](jdbc) fix jdbc driver bug and external datasource p2 test case issue (#16033)
Fix a bug where creating a jdbc resource with only the jdbc driver file name failed to do the checksum.
This is because we forgot to pass the full driver url to JdbcClient.

Add ResultSet.FETCH_FORWARD and set AutoCommit to false on the jdbc connection, to avoid OOM when fetching a large amount of data.

Set useCursorFetch in the jdbc url for both MySQL and PostgreSQL.

Fix some p2 external datasource bugs
2023-01-18 17:48:06 +08:00
5265f5142f [fix](Nereids) add string and character type (#16044) 2023-01-18 17:27:45 +08:00
1fa2b662cf [opt](Nereids) add date_add/sub function (#16048)
1. add week_add week_diff function
2. register all date_add/date_diff function
2023-01-18 17:11:44 +08:00