Commit Graph

2631 Commits

Author SHA1 Message Date
c919834df3 [fix](test) ckbench shape test failed because of parallel merge (#32224)
query36's shape was broken by PR #32186
2024-03-15 18:03:03 +08:00
4534300030 [fix](Operator) RepeatNode does not handle empty expressions. (#32112)
In the past, RepeatNode did not handle empty expressions.
It used DCHECK to check if the expression was non-empty.
In non-debug mode, this caused _child_block to remain unprocessed, resulting in a deadlock.
Now, if the expression list is empty, _child_block is output directly as the result block.
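A minimal query shape that exercises a RepeatNode with an empty grouping expression list, using a hypothetical table t (whether this exact query reproduced the original deadlock is an assumption):

    -- GROUPING SETS with only the empty set; the repeat expressions for that set are empty
    SELECT count(*) FROM t GROUP BY GROUPING SETS (());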
2024-03-15 18:02:33 +08:00
9150c82545 [test](Nereids) add ckbench shape test (#32191) 2024-03-15 18:02:01 +08:00
36a0b93c44 [Enhancement](mor) Add unique mor table min max push down case #32196 2024-03-15 18:02:01 +08:00
bede948029 [enhancement](nereids) support unnest subquery with group by and having clause (#32002) 2024-03-15 18:01:49 +08:00
3c4234111b [fix](nereids)EliminateSemiJoin should consider empty table (#32107)
* [fix](nereids)EliminateSemiJoin should consider empty table

* update out file
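A minimal sketch of the case the rule must now handle, with hypothetical tables (the IN-subquery is planned as a semi join, and t_empty has no rows):

    -- eliminating the semi join must still yield an empty result when the inner table is empty
    SELECT * FROM t1 WHERE t1.k1 IN (SELECT k1 FROM t_empty);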
2024-03-15 17:59:31 +08:00
c8c6e86386 [fix](Nereids): ignore project and distribute in test_cte_filter_pushdown and push_down_expression_in_hash_join (#32083) 2024-03-15 17:59:30 +08:00
99830b77a8 [feat](nereids) add merge aggregate rule (#31811) 2024-03-15 17:58:01 +08:00
df5ec16d7c [Refactor](executor) Add schema table active_queries (#32057)
* Add schema table active_queries
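A usage sketch, assuming the new table is exposed under information_schema like other schema tables:

    SELECT * FROM information_schema.active_queries;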
2024-03-15 17:57:28 +08:00
84af8e0a53 [enhance](mtmv)mtmv support hive default partition (#32051) 2024-03-12 22:51:11 +08:00
6acdd9cd48 The recently added auto-increment column case works fine on a single node but may produce discontinuous auto-increment IDs when running on a cluster. This PR changes the check to verify the uniqueness of the auto-increment column values instead of comparing them to fixed values. (#32115) 2024-03-12 21:51:36 +08:00
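A sketch of the kind of check described, with hypothetical table and column names:

    -- passes when every auto-increment value is unique, without assuming the values are contiguous
    SELECT count(id) = count(DISTINCT id) FROM t_auto_inc;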
473bd3ee64 [fix](function) incorrect result of eq_for_null (#32103) 2024-03-12 18:50:26 +08:00
4956d5de83 [fix](planner) remove input slot for aggregate slot which is not materialized (#32092)
introduced by #26886

Running this SQL:
SELECT
        caseId
    FROM
        (
            SELECT
                caseId,
                count(judgementDateId)
            FROM
                (
                    SELECT
                        abs(caseId) AS caseId,
                        id as judgementDateId
                    FROM
                        dr_user_test_t2
                ) AGG_RESULT
            GROUP BY
                caseId
        ) TOTAL
        order by 1;


produces:

ERROR 1105 (HY000): errCode = 2, detailMessage = (172.17.0.1)[INTERNAL_ERROR]couldn't resolve slot descriptor 1, desc: tuples:
Tuple(id=5 slots=[Slot(id=10 type=DOUBLE col=-1, colname=, nullable=1), Slot(id=11 type=VARCHAR col=-1, colname=id, nullable=1)] has_varlen_slots=1)
Tuple(id=4 slots=[Slot(id=8 type=DOUBLE col=-1, colname=, nullable=1)] has_varlen_slots=0)
Tuple(id=2 slots=[Slot(id=4 type=DOUBLE col=-1, colname=caseId, nullable=1)] has_varlen_slots=0)
Tuple(id=0 slots=[Slot(id=0 type=VARCHAR col=-1, colname=caseId, nu
2024-03-12 18:50:26 +08:00
781a45d93c [Fix](nereids) fix date function rewrite (#32060) 2024-03-12 18:50:26 +08:00
ccd21a6ea4 [Improve](InPredict) enhance IN predicate with array type (#31828) 2024-03-12 14:19:14 +08:00
31ee448c87 [test](fix) Fix one missing line of output in out file (#32036) 2024-03-12 14:17:55 +08:00
ab21d85e8c [nereids](topn-filter) support multi-topn filter (FE part) (#31485)
support multi-topn-filter
2024-03-12 14:17:48 +08:00
cf6b22c621 [fix](jdbc catalog) fix type conversion error in MySQL JDBC Driver 5.x (#31880) 2024-03-12 14:07:57 +08:00
27eed5399d [Fix](auto-inc) Fix partial update auto inc publish case failure #31987 2024-03-12 14:07:00 +08:00
c5390d00bb [Improvement] Add schema table backend_active_tasks (#31945) 2024-03-09 19:55:48 +08:00
263135c193 [fix](case) fix export data consistency case (#32005) 2024-03-09 19:45:50 +08:00
62db7094ea Revert "Problem: When the old optimizer processes an INSERT INTO statement that contains two quotation marks, it results in only one quotation mark being written into the database. (#31890)" (#31986)
This reverts commit 8c309652e04698f311b6c9158105352e8416c69a.
2024-03-09 19:45:46 +08:00
609761567c [Fix](partial-update) Fix wrong column number passed to BE for partial update when Nereids is enabled (#31461)
* Problem:
Inconsistent behavior occurs when executing partial column update `UPDATE` statements and `INSERT` statements on merge-on-write tables with the Nereids optimizer enabled. The number of columns passed to BE differs; `UPDATE` operations incorrectly pass all columns, while `INSERT` operations correctly pass only the updated columns.

Reason:
The Nereids optimizer does not handle partial column update `UPDATE` statements properly. The processing logic for `UPDATE` statements rewrites them as equivalent `INSERT` statements, which are then processed according to the logic of `INSERT` statements. For example, assuming a MoW table structure with columns k1, k2, v1, v2, the correct rewrite should be:
    UPDATE t1 SET v1 = v1 + 1 WHERE k1 = 1 AND k2 = 2
      =>
    INSERT INTO t1 (v1) SELECT v1 + 1 FROM t1 WHERE k1 = 1 AND k2 = 2

However, the actual rewriting process does not consider the logic for partial column updates, leading to all columns being included in the `INSERT` statement, i.e., the result is:
    INSERT INTO t1 (k1, k2, v1, v2) SELECT k1, k2, v1 + 1, v2 FROM t1 WHERE k1 = 1 AND k2 = 2

This results in `UPDATE` operations incorrectly passing all columns to BE.

Solution:
Having analyzed the cause, the solution is straightforward: when rewriting partial column update `UPDATE` statements to `INSERT` statements, only retain the updated columns and all key columns (as partial column updates must include all key columns). Additionally, this PR includes error injection cases to verify the number of columns passed to BE is correct.
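A sketch of the rewrite described in the Solution, for the same MoW table t1(k1, k2, v1, v2):

    UPDATE t1 SET v1 = v1 + 1 WHERE k1 = 1 AND k2 = 2
      =>
    -- keep only the updated column plus all key columns
    INSERT INTO t1 (k1, k2, v1) SELECT k1, k2, v1 + 1 FROM t1 WHERE k1 = 1 AND k2 = 2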

2024-03-09 19:45:42 +08:00
e8aa5ee7d5 [Improve](Variant) support bloom filter for variant subcolumns (#31347)
* [Improve](Variant) support bloom filter for variant subcolumns

* rebase
2024-03-09 19:45:03 +08:00
Pxl
19e6ebd09c [Feature](materialized-view) support mv with bitmap_union(bitmap_from_array()) case (#31962)
support mv with bitmap_union(bitmap_from_array()) case
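A sketch of the newly supported materialized view shape, with hypothetical table and column names:

    CREATE MATERIALIZED VIEW mv_bitmap AS
    SELECT k1, bitmap_union(bitmap_from_array(arr_col))
    FROM t
    GROUP BY k1;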
2024-03-09 19:45:03 +08:00
eb280d374b [case](Nereids) add leading tpc-h (#30405)
add TPC-H shape cases using the leading hint, except:

- q1, q6: single-table queries without joins
- q2, q16, q18, q20, q21, q22: involve tables introduced by subquery unnesting, which the leading hint does not yet support
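The general form of the leading hint used by these shape cases, with hypothetical tables:

    SELECT /*+ LEADING(t1 t2 t3) */ count(*)
    FROM t1 JOIN t2 ON t1.k = t2.k JOIN t3 ON t2.k = t3.k;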
2024-03-09 19:45:03 +08:00
93d298d34a [fix](agg) wrong result of two or more map_agg functions in query (#31928) 2024-03-09 19:45:03 +08:00
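The query shape described, with hypothetical names (two map_agg calls in the same query):

    SELECT map_agg(k, v1), map_agg(k, v2) FROM t;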
8801916675 [regression](spark) Add cases for Spark reading multiple Doris data types (#31861) 2024-03-09 19:43:21 +08:00
5b52812af2 Problem: When the old optimizer processes an INSERT INTO statement that contains two quotation marks, it results in only one quotation mark being written into the database. (#31890)
Reason: During syntax parsing, the old optimizer interprets two quotation marks as a single quotation mark.
Solution: Remove the logic that consolidates two quotation marks into one.
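The statement shape in question, with a hypothetical table (the string literal contains two consecutive quotation marks):

    INSERT INTO t VALUES ('a''b');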
2024-03-09 19:43:21 +08:00
fa411f88df [Fix](Nereids) fix hint cases with random result (#31865) 2024-03-07 16:53:49 +08:00
667b1fba04 [enhance](mtmv) MTMV Use partial partition of base table (#31632)
MTMV adds 3 properties:
partition_sync_limit: an integer
partition_sync_time_unit: DAY/MONTH/YEAR
partition_sync_date_format: e.g. "%Y-%m-%d" or "%Y%m%d"

For example, if the current time is 2020-02-03 20:10:10:
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to DAY, only partitions with a time greater than or equal to 2020-02-03 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to MONTH, only partitions with a time greater than or equal to 2020-02-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 1 and partition_sync_time_unit is set to YEAR, only partitions with a time greater than or equal to 2020-01-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 3 and partition_sync_time_unit is set to MONTH, only partitions with a time greater than or equal to 2019-12-01 00:00:00 will be synchronized to the MTMV
- If partition_sync_limit is set to 4 and partition_sync_time_unit is set to DAY, only partitions with a time greater than or equal to 2020-01-31 00:00:00 will be synchronized to the MTMV
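A sketch of setting these properties, assuming they can be applied like other MTMV properties via ALTER (the exact DDL surface is an assumption):

    ALTER MATERIALIZED VIEW mv1 SET (
        "partition_sync_limit" = "1",
        "partition_sync_time_unit" = "DAY",
        "partition_sync_date_format" = "%Y-%m-%d"
    );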
2024-03-07 16:53:49 +08:00
e91d16854b [fix](function) fix date_format function execution error on fe (#31645) 2024-03-07 16:53:19 +08:00
21ce85dc14 [fix](money_format) fix money_format #31883 2024-03-07 16:53:19 +08:00
5905ffa1da [enhancement](nereids) allow reorder mark join (#30644) 2024-03-07 16:53:19 +08:00
9bf22a872a [Bug](fix) fix coredump in query caused by OR combined with "<=>" (#31884) 2024-03-07 16:53:19 +08:00
5b00f4fbeb [improvement](jdbc catalog) opt get db2 schema list & xml type mapping (#31856)
1. Trim Schema Names: Adapted the system to remove trailing spaces from DB2 schema names, ensuring compatibility without affecting query operations.
2. XML Mapping: Implemented a feature to directly map XML types to String.
2024-03-07 16:53:19 +08:00
561709451c [fix](Nereids) fix group_concat(distinct) failed (#31873) 2024-03-07 16:16:05 +08:00
1d9e9fc884 [regression test] Test the unique model by modifying a value type from TINYINT to other types (#31682) 2024-03-07 16:16:05 +08:00
Pxl
3716f8a171 [Bug](partition) fix NPE when pruning partitions with a partition column that does not exist in the MV #31860 2024-03-07 16:16:05 +08:00
Pxl
dc9de4b6b5 [Bug](load) fix wrong data in MV when routine load uses function mapping (#31787) 2024-03-07 16:16:05 +08:00
4f174c4fb9 [feature](function) Support for aggregate function foreach combiner (#31526) 2024-03-06 13:08:30 +08:00
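A usage sketch, assuming the combiner is exposed as a _foreach suffix on existing aggregate functions and applied to a hypothetical array column:

    -- aggregates element-wise across rows, e.g. [1, 2, 3] and [10, 20, 30] combine to [11, 22, 33]
    SELECT sum_foreach(arr_col) FROM t;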
9af64d848f [fix](pipelineX) fix error distribution in DistinctStreamingAggOperatorX (#31804) 2024-03-06 13:08:30 +08:00
2e9bd268cd [improvement](jdbc catalog) support sqlserver timestamp type read (#31805) 2024-03-06 13:08:04 +08:00
aba58b0f7b [test](Nereids) add grouping sets test (#31675)
Co-authored-by: feiniaofeiafei <moailing@selectdb.com>
2024-03-06 13:08:04 +08:00
1d2d0bd411 [fix](update) Update set value should consider sequence column (#31626)
When using the UPDATE command to set a column value, if that column is the sequence column, the column 'DORIS_SEQUENCE_COL' should also be set to the same value.
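A sketch of the case described, assuming a merge-on-write table whose sequence column is mapped to v1:

    -- with this fix, the hidden sequence column is set to the same value as v1,
    -- so the updated row version is not discarded by sequence-based conflict resolution
    UPDATE t SET v1 = 100 WHERE k1 = 1;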
2024-03-06 13:08:04 +08:00
1434d3983b [enhancement](test) Test the unique model by modifying a key type from TINYINT to other types (#31713) 2024-03-06 13:07:59 +08:00
2d6e975d5a [fix](cast) fix wrong result while cast string to float (#31781)
Issue Number: close #31518
2024-03-06 13:07:59 +08:00
97640ee0e8 [test](leading) add leading tpc-ds regression test cases (#31681)
Co-authored-by: libinfeng <libinfeng@selectdb.com>
2024-03-06 13:07:49 +08:00
7998da4691 [fix](cast) wrong result while cast const to double then to string (#31657)
Issue Number: close #31514
2024-03-06 13:06:27 +08:00
7c30cb20fd [Fix](partial update) Fix partial update load failure when schema includes auto-increment column (#31725)
Problem:
When partially updating columns without specifying the auto-increment column, and the imported data contains new keys, an error occurs stating that the auto-increment column could not be found.

Reason:
The logic for partial column updates does not account for new keys in auto-increment columns. Since auto-increment columns can be generated by the system, it's possible to omit this column data during import. However, partial column updates treat this as a regular column, expecting it to be nullable or have a default value for automatic filling, overlooking the fact that auto-increment columns can also be auto-filled. This oversight leads to the error.

Solution:
Incorporate a check for auto-increment columns into the partial column update logic, and include the logic for generating auto-increment column values in the process of completing partial updates.
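A sketch of the failing scenario, assuming a merge-on-write table t(k1, id, v1) where id is the auto-increment column; the session variable name and load channel are assumptions:

    SET enable_unique_key_partial_update = true;
    -- the load omits the auto-increment column and introduces a new key;
    -- before this fix, the missing auto-increment column triggered the reported error
    INSERT INTO t (k1, v1) VALUES (42, 'new-row');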
2024-03-06 13:06:27 +08:00