This PR improves several parts of the group commit logic:
1. Improved the error handling when there is insufficient WAL space during import.
2. Accounted for cases where the content length is negative during import.
3. Added missing error log printing in `group_commit_mgr.cpp`.
1. Rename the old create/drop table methods to addMemoryTable/removeMemoryTable.
2. Add new create/drop table/db methods.
3. Support create/drop table/db in the HMS catalog (see the sketch below).
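A hedged sketch of the new HMS catalog DDL (the catalog name `hive` and the object names are hypothetical; required properties depend on the Hive environment):
```
-- Assumes an HMS catalog named `hive` is already configured in Doris
switch hive;
create database if not exists hive_db;
create table hive_db.hive_tbl (
    a int,
    b string
) properties('file_format' = 'parquet');
drop table hive_db.hive_tbl;
drop database hive_db;
```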
(cherry picked from commit b2e869c7414c68186de8d43b324ae736d7cc3463)
For SQL like the following:
```
create table test_default10(
    a int,
    b varchar(100) default current_timestamp
)
distributed by hash(a)
properties('replication_num' = '1');
```
this PR adds a check: types other than DATETIME and DATETIMEV2 cannot use `current_timestamp` as the default value. A valid counterpart is sketched below.
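As a hedged illustration (the table name is made up), the same default is accepted on a DATETIME column:
```
create table test_default_ok(
    a int,
    b datetime default current_timestamp
)
distributed by hash(a)
properties('replication_num' = '1');
```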
This PR supports a slot appearing in both an aggregate function and a grouping set, for SQL like the following:
```
select sum(a) from t group by grouping sets ((a));
```
Before this PR, Nereids threw an exception like the following:
```
col_int_undef_signed cannot both in select list and aggregate functions when using GROUPING SETS/CUBE/ROLLUP, please use union instead.
```
This PR removes that restriction and supports this case; a runnable sketch follows.
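A hedged, self-contained repro (the table name `t` and column `a` are illustrative):
```
create table t(a int) distributed by hash(a) properties('replication_num' = '1');
select sum(a) from t group by grouping sets ((a));
```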
* When a stream processing framework such as Flink imports data with group commit mode enabled, the size of each import is unpredictable, which makes rejecting oversized loads problematic. Previously, to simplify the process, the error message for an oversized stream load was shared with the group commit path, so users were confused by "data volume too large" errors during Flink imports. To address this, this PR adjusts the logic: if a user imports via a stream processing framework like Flink with group commit mode enabled, group commit mode is automatically disabled and the standard import mode is used instead. A minimal sketch of the group commit setting follows.
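A minimal sketch, assuming the Doris `group_commit` session variable and an illustrative table `t`:
```
-- Enable group commit for subsequent inserts in this session
set group_commit = async_mode;
insert into t values (1), (2);
-- With this PR, a stream-processing import (e.g. via the Flink connector)
-- running with group commit enabled falls back to the standard import
-- mode automatically instead of failing on oversized data.
```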
Issue Number: close #xxx
For example, suppose a Hive table is partitioned by `date` and `region`, with the following 6 partitions:
```
20200101
    beijing
    shanghai
20200102
    beijing
    shanghai
20200103
    beijing
    shanghai
```
If the MTMV is partitioned by `date`, then the MTMV will have three partitions: 20200101, 20200102, 20200103.
If the MTMV is partitioned by `region`, then the MTMV will have two partitions: beijing, shanghai.
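A hedged sketch of the `date`-partitioned case (the catalog, database, table, and view names are hypothetical, and the exact build/refresh options may differ):
```
CREATE MATERIALIZED VIEW mv_by_date
BUILD DEFERRED REFRESH AUTO ON MANUAL
PARTITION BY (`date`)
DISTRIBUTED BY HASH(`region`) BUCKETS 2
PROPERTIES ('replication_num' = '1')
AS SELECT `date`, `region` FROM hive_catalog.hive_db.hive_tbl;
```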
Workload groups should always be enabled because other features depend on them, for example MTMV and spill to disk.
The normal workload group should be created in the constructor.
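For reference, a hedged sketch of defining a workload group manually (the group name and property values are illustrative; with this change the normal group exists without any such statement):
```
-- Illustrative only: create a workload group and list existing ones
create workload group if not exists g1
properties('cpu_share' = '10', 'memory_limit' = '30%');
show workload groups;
```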