Commit Graph

18601 Commits

Author SHA1 Message Date
fa35e54350 [fix](nereids)LogicalPlanDeepCopier will lose some info when copying logical relation (#34933) 2024-05-15 22:40:32 +08:00
e74b17c761 [Fix](Row store) support decimal256 type (#34887) 2024-05-15 19:01:18 +08:00
475d42f23e remove is cloud mode from regression test 2024-05-15 17:00:43 +08:00
91a154988d [feat](Nereids): Reject Commutativity Swap for Nested Loop Joins Affecting Parallelism (#34639) (#34798) 2024-05-15 16:52:26 +08:00
ab9ff0447d [Fix](regression-test) Fix test_hive_write_type. (#34667)
Because #34397 changed the error code of ARITHMETIC_OVERFLOW_ERRROR, the error msg is no longer what the test expects.
2024-05-15 15:21:57 +08:00
1a24895257 [opt](routine-load) optimize routine load task thread pool and related param(#32282) (#34896) 2024-05-15 12:42:02 +08:00
2cbe6740a5 [fix](reader) avoid BE coredump in block reader in abnormal situations (#34878) 2024-05-15 12:38:40 +08:00
3ead073905 [fix](arrow-flight-sql) Fix Arrow Flight bind wrong Host in Fqdn #34850 2024-05-15 12:38:40 +08:00
baf9a45e57 [fix](mtmv) check group-by in the agg-bottom-plan when rewriting an agg query by mv (#34274)
check group-by in the agg-bottom-plan when rewriting and rolling up an agg query by mv
2024-05-15 12:38:40 +08:00
f9c42f34dd [fix](auth)Compatible with previously enabled ldap configuration (#34891) 2024-05-15 12:36:47 +08:00
1f0c45204b [fix](iceberg) read the primary key columns when there are equality deletes (#34884)
backport: #34835
2024-05-15 11:37:25 +08:00
02084fd91f [fix](iceberg_orc)Fix a bug where the iceberg reader did not apply position deletes when reading an ORC file without a predicate. (#34814) (#34882)
bp #34814
2024-05-15 11:31:29 +08:00
e13ce905cf [Fix](hive-writer) Fix hive partition update file size and remove redundant column names. (#34651) (#34885)
Backport #34651.
2024-05-15 11:23:32 +08:00
30256195c3 fix column privilege check failure caused by hidden column (#34849)
fix column privilege check failure caused by hidden column: DORIS_DELETE_SIGN
2024-05-15 10:38:10 +08:00
5ce58ed773 [fix](nereids) runtime filter push-down-cte column mapping bug #34875 2024-05-15 10:31:39 +08:00
00ce05393a [fix](profile) Load profile need to be registered to get real-time profile #34852 2024-05-15 10:29:04 +08:00
c7134faea9 [Fix](outfile) Fix the timing of setting the _is_closed flag in Parquet/ORC writer (#34668) 2024-05-15 10:28:22 +08:00
d5ab2787ba [Fix](function) fix pad functions behaviour of empty pad string (#34796)
fix pad functions behaviour of empty pad string
2024-05-15 10:28:09 +08:00
7a878eb797 [fix](Export) fix some issues of Export (#34345)
1. Forbid falling back to the old optimizer in the `Export` task. Since originStmt is empty, reverting to the old optimizer when the new optimizer is enabled is meaningless.

2. Display `parallelism` in 'show export'.

3. Create an initial Map for resultAttachedInfo to avoid NullPointerException.

4. Remove the originStmt added in PR #31020; because the `Export` statement is implemented on top of the Outfile statement, it cannot be treated as `OriginStmt`.
2024-05-15 10:27:53 +08:00
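Item 3 above (an initial Map for resultAttachedInfo) is a defensive-initialization pattern; a minimal sketch, assuming a hypothetical surrounding class (only the field name comes from the commit message):

```java
import java.util.HashMap;
import java.util.Map;

public class ExportJobSketch {
    // Initialize eagerly so callers never observe a null map (item 3 above).
    private final Map<String, String> resultAttachedInfo = new HashMap<>();

    public void attachInfo(String key, String value) {
        // Safe even before any result has been attached: the map is never null.
        resultAttachedInfo.put(key, value);
    }

    public Map<String, String> getResultAttachedInfo() {
        return resultAttachedInfo;
    }

    public static void main(String[] args) {
        ExportJobSketch job = new ExportJobSketch();
        // Without eager initialization, this put would throw NullPointerException.
        job.attachInfo("state", "FINISHED");
        System.out.println(job.getResultAttachedInfo().size());
    }
}
```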
bd41341a97 [bugfix](tvf)catch exception for fetching SchemaTableData #34856 2024-05-14 23:40:50 +08:00
0b4d814598 [fix](decimal) Fix wrong result produced by decimal128 multiply (#34825)
* [fix](decimal) Fix wrong result produced by decimal128 multiply

* update
2024-05-14 23:34:11 +08:00
a0a025f763 [fix](regression test)fix test_hive_parquet_alter_column p2 case. (#34727) (#34859)
fix test_hive_parquet_alter_column p2 case.
Since this is a p2 case, the data is stored on EMR, not in Docker, so there is no need to consider hive2 and hive3.
2024-05-14 23:30:06 +08:00
4dd5379951 [bugfix](hive)fix error for writing to hive for 2.1 (#34518)
mirror #34520
2024-05-14 23:27:29 +08:00
7a480aab45 [fix](nereids) Push max rf into cte #34858 2024-05-14 21:05:18 +08:00
868949c8c0 fix some bugs (#34855) 2024-05-14 19:51:27 +08:00
c35fef631b fix bug in or expansion (#34851) 2024-05-14 19:39:33 +08:00
ff22128013 [fix](profile) Fix content missing of brokerload profile (#34839)
* Fix compile

* fix style

* [fix](profile) Fix content missing of brokerload profile (#33969)
2024-05-14 18:56:58 +08:00
bac51723e8 [fix](Nereids): fix some bugs in or expansion #34840
add unit test
2024-05-14 18:14:11 +08:00
47f0a6734b [fix][nereids] fix misunderstanding about bothSideShuffleKeysAreSameOrder (#34824)
Co-authored-by: zhongjian.xzj <zhongjian.xzj@zhongjianxzjdeMacBook-Pro.local>
2024-05-14 15:17:34 +08:00
0deb629d07 [fix](Nereids): clone the producer plan and put logicalAnchor generated by Or_Expansion above logicalSink (#34771)
* put cte anchor on the root

put logicalAnchor on root

clone plan of cte consumer

* fix unit test
2024-05-14 15:05:27 +08:00
5ece07ab8c [faultinjection](test) add some fault injection in pipeline task method 2024-05-14 15:01:32 +08:00
9491b7d422 [fix](iceberg) prevent coredump if read position delete file failed (#34802) 2024-05-14 14:03:33 +08:00
fcf26c9224 [fix](runtimefilter)slot comparison bug #34791 2024-05-14 11:07:19 +08:00
95b05928fd [fix](compaction) fix time series compaction merge empty rowsets priority #34562 (#34765) 2024-05-14 09:10:09 +08:00
104284bd6f [Refactor](jdbc catalog) Enhance Field Handling in JdbcFieldSchema Using Optional for Better Null Safety (#34730)
In some cases, `ResultSet.getXxx` cannot correctly handle null data, so `Optional` is used for the fields of JdbcFieldSchema to get better type mapping when some results are null.
2024-05-13 22:37:35 +08:00
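A minimal sketch of the Optional-wrapping pattern the commit describes, with hypothetical field and method names (the real JdbcFieldSchema fields differ):

```java
import java.util.Optional;

public class JdbcFieldSchemaSketch {
    // Wrapping a possibly-null JDBC metadata value in Optional forces callers
    // to handle the missing case instead of dereferencing null.
    private final Optional<Integer> columnSize;

    public JdbcFieldSchemaSketch(Integer rawColumnSize) {
        this.columnSize = Optional.ofNullable(rawColumnSize);
    }

    // Callers supply a fallback instead of risking a NullPointerException.
    public int getColumnSizeOrDefault(int fallback) {
        return columnSize.orElse(fallback);
    }

    public static void main(String[] args) {
        // A driver may return null for COLUMN_SIZE on some types.
        JdbcFieldSchemaSketch withNull = new JdbcFieldSchemaSketch(null);
        JdbcFieldSchemaSketch withSize = new JdbcFieldSchemaSketch(38);
        System.out.println(withNull.getColumnSizeOrDefault(-1)); // falls back
        System.out.println(withSize.getColumnSizeOrDefault(-1));
    }
}
```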
22d4543346 [refactor](jdbc catalog) split sap_hana jdbc executor (#34772) 2024-05-13 22:36:52 +08:00
0ae1b9c70a [chore](remove code) Remove dragonbox related (#34528)
* Revert "[refactor](mysql result format) use new serde framework to tuple convert (#25006)"

This reverts commit e5ef0aa6d439c3f9b1f1fe5bc89c9ea6a71d4019.

* run buildall

* MORE

* FIX
2024-05-13 22:16:57 +08:00
5046fa2bea [improvement](mtmv) Split the expression mapping in LogicalCompatibilityContext for performance (#34646)
The query-to-view expression mapping is needed when checking whether the logic of the hyper graphs is equal.
Getting all expression mappings at once may affect performance, so the expressions are split into three types,
JOIN_EDGE, NODE, and FILTER_EDGE, and retrieved step by step.
2024-05-13 22:15:35 +08:00
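The split-and-retrieve-on-demand idea above can be sketched with a lazily populated per-type cache; the type names come from the commit message, everything else (method names, string-based mappings) is a simplifying assumption:

```java
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class SplitExpressionMappingSketch {
    // The three mapping kinds named in the commit message.
    public enum MappingType { JOIN_EDGE, NODE, FILTER_EDGE }

    private final Map<MappingType, Map<String, String>> cache = new EnumMap<>(MappingType.class);

    // computeIfAbsent builds each mapping only on first request, so a caller
    // that never needs FILTER_EDGE mappings never pays for computing them.
    public Map<String, String> getMapping(MappingType type, Supplier<Map<String, String>> compute) {
        return cache.computeIfAbsent(type, t -> compute.get());
    }

    public static void main(String[] args) {
        SplitExpressionMappingSketch sketch = new SplitExpressionMappingSketch();
        Map<String, String> joinEdges = sketch.getMapping(MappingType.JOIN_EDGE, () -> {
            Map<String, String> m = new HashMap<>();
            m.put("t1.id = t2.id", "mv.id = mv.id2"); // hypothetical mapping
            return m;
        });
        System.out.println(joinEdges.size());
    }
}
```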
c62ff0b672 [fix](auth) Disable revoke 'admin' from 'admin'` (#34644) 2024-05-13 22:15:16 +08:00
db15c811f8 [opt](Nereids) enhance properties regulator checking (#34603)
Enhance properties regulator checking:
(1) the right bucket shuffle restriction takes effect only when either side has NATURAL shuffle type.
(2) enhance bothSideShuffleKeysAreSameOrder checking by taking EquivalenceExprIds into consideration.


Co-authored-by: zhongjian.xzj <zhongjian.xzj@zhongjianxzjdeMacBook-Pro.local>
2024-05-13 22:15:16 +08:00
d6f300ec62 [fix](Nereids) empty set handle in fe should use result output (#34676) 2024-05-13 22:15:16 +08:00
3ef5ed1ad0 [opt](Nereids) normalize column name of output file (#34650)
When exporting to an output file, normalize the column name.
For example:

> SELECT 1 > 2 INTO OUTFILE "..."

the column name of `1 > 2` will be `__greater_than_0`
2024-05-13 22:12:46 +08:00
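The naming scheme can be inferred from the example above; this sketch of the generated-name shape (`__<operator>_<index>`) is an assumption based only on that one example, not on the actual Nereids implementation:

```java
public class OutfileColumnNameSketch {
    // An expression without a user-visible alias gets a synthetic name built
    // from the operator and the expression's position in the select list.
    public static String normalizeColumnName(String operatorName, int index) {
        return "__" + operatorName + "_" + index;
    }

    public static void main(String[] args) {
        // e.g. SELECT 1 > 2 INTO OUTFILE "..." -> first unnamed expression
        System.out.println(normalizeColumnName("greater_than", 0)); // __greater_than_0
    }
}
```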
792d87c87c [bugfix](multi catalog) fixed an issue in master where the hive directory field separator being an empty string caused a core dump (#34594)
Fixed an issue in master where the hive directory field separator being an empty string caused a core dump.
2024-05-13 22:12:16 +08:00
e602048f73 [fix](Nereids) Fix materialized view rule to check struct info validity (#34665)
This PR improves the materialized view rule by adding additional validity checks for the struct info. The changes include:

  1. In AbstractMaterializedViewRule.java:
    * Modified the checkPattern method to also check the validity of the struct info using context.getStructInfo().isValid().
    * Updated the error message for invalid struct info to include the view plan string for better debugging.

  2. In StructInfo.java:
    * Enhanced the isValid() method to ensure that all nodes in the hyper graph have non-null expressions, in addition to the existing validity check.

These changes ensure that the materialized view rule properly validates the struct info before proceeding with the optimization: invalid struct info (including hyper graph nodes with null expressions) is rejected gracefully instead of being used, and the error messages carry more detail for debugging, improving the overall stability and reliability of the optimization process.
2024-05-13 22:11:53 +08:00
11175208df [fix](delete) storage engine delete do not support bitmap (#34710) 2024-05-13 22:07:53 +08:00
40a1041651 [fix](jdbc catalog) fix jdbc table checksum and query jdbc tvf (#34780) 2024-05-13 18:08:18 +08:00
20a6f2a659 [Cherry-Pick](branch-2.1) Pick "Fix partial update case introduced by #33656 (#34260)" (#34767) 2024-05-13 17:00:04 +08:00
4150971e07 2.1.3-rc07 2024-05-12 22:22:09 +08:00
cdc950f2c3 [improvement](spill) improve spill log printing 2024-05-12 19:33:27 +08:00
755757e57c check pushDownContext validation after withNewProbeExpression() (#34704) (#34737)
When the runtime filter targets a constant, PushDownContext.finalTarget is null;
in this case, do not push down the runtime filter.
Example:
select * from (select 1 as x from T1) T2 join T3 on T2.x=T3.x
When pushing RF(T3.x->T2.x) down into "select 1 as x from T1", the column x is a constant, and the pushdown stops.

(cherry picked from commit 7f06cf0a0125d84c41e1edbb71366e293f70239d)
2024-05-12 18:09:13 +08:00
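The validation described in this commit amounts to a null-target guard; a minimal sketch under the assumption of simplified types (the real PushDownContext holds plan nodes, not strings, and only the field/method names `finalTarget` and `withNewProbeExpression` come from the message above):

```java
public class PushDownContextSketch {
    // finalTarget is null when the probe expression resolved to a constant,
    // e.g. the column x in "select 1 as x from T1" in the example above.
    private final String finalTarget;

    public PushDownContextSketch(String finalTarget) {
        this.finalTarget = finalTarget;
    }

    // Validation check after withNewProbeExpression(): a runtime filter with
    // no concrete target column cannot be pushed down.
    public boolean isValid() {
        return finalTarget != null;
    }

    public static boolean tryPushDown(PushDownContextSketch ctx) {
        if (!ctx.isValid()) {
            return false; // stop the pushdown instead of dereferencing null
        }
        return true; // proceed with the pushdown (elided)
    }

    public static void main(String[] args) {
        System.out.println(tryPushDown(new PushDownContextSketch(null)));   // constant target
        System.out.println(tryPushDown(new PushDownContextSketch("T2.x")));
    }
}
```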