Support deleting from a partitioned table without a partition specified in the `DELETE` statement.
## Usage
If it is a partitioned table, you can specify a partition.
If no partition is specified, Doris will infer the partition from the given conditions.
In two cases, Doris cannot infer the partition from the conditions:
1) the conditions do not contain partition columns;
2) the operator on the partition column is `not in`.
When no partition is specified for a partitioned table,
or the partition cannot be inferred from the conditions,
the session variable `delete_without_partition` needs to be set to `true`
so that the delete statement is applied to all partitions.
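For example, a minimal sketch, assuming a table `tbl` partitioned on column `dt` with a key column `k1` (all names hypothetical):

```sql
-- Partition can be inferred: the conditions contain the partition column
-- with a supported operator, so no explicit partition is needed.
DELETE FROM tbl WHERE dt = '2022-10-01' AND k1 = 1;

-- Partition cannot be inferred: no partition column in the conditions,
-- so the session variable must be enabled first.
SET delete_without_partition = true;
DELETE FROM tbl WHERE k1 = 1;
```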
## Test case
A test case is added in `regression-test/suites/delete_p0/test_delete_from_partition.groovy`;
users can now delete from a partitioned table without specifying a partition.
Issue Number: close #12574
This PR adds `NewJsonReader`, which implements the `GenericReader` interface to support reading JSON format files.
TODO:
1. Modify `_scann_eof` later.
2. Rename `NewJsonReader` to `JsonReader` once the old `JsonReader` is deleted.
Currently, we do not support type derivation for array functions' arguments,
so the cases below return wrong values or even cause a BE core dump.
```
mysql> select array_union([1],[10000000]);
+----------------------------------------+
| array_union(ARRAY(1), ARRAY(10000000)) |
+----------------------------------------+
| [1, -128]                              |
+----------------------------------------+
1 row in set (0.03 sec)

mysql> select array_union([NULL],[1]);
ERROR 1105 (HY000): RpcException, msg: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason

mysql> select array_union([],[1]);
ERROR 1105 (HY000): RpcException, msg: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason
```
This commit makes a small fix to derive the argument types of array functions:
1. For a null type in the arguments, cast the null type to the boolean type, because the null type should not be seen on the BE.
2. For different types in the arguments, cast all argument types to their compatible type.
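With this fix, the same queries are expected to succeed; a minimal sketch (exact output formatting and ordering may vary):

```sql
-- Arguments are cast to a compatible type before execution,
-- so the small literal is no longer truncated to -128.
SELECT array_union([1], [10000000]);

-- The null-typed literal no longer reaches the BE as a null type,
-- so these queries no longer crash the BE.
SELECT array_union([NULL], [1]);
SELECT array_union([], [1]);
```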
This PR fixes the wrong result of the `array_join` function.
Before the change, `array_join` returned a wrong result:
```
MySQL [example_db]> select array_join(["", "1", "2"], '_');
+--------------------------------------+
| array_join(ARRAY('', '1', '2'), '_') |
+--------------------------------------+
| 1_2                                  |
+--------------------------------------+
```
After the change, `array_join` returns the correct result:
```
MySQL [example_db]> select array_join(["", "1", "2"], '_');
+--------------------------------------+
| array_join(ARRAY('', '1', '2'), '_') |
+--------------------------------------+
| _1_2                                 |
+--------------------------------------+
```
Issue Number: #7570
1. Modify the default behavior of `build.sh`
`BUILD_JAVA_UDF` now defaults to ON, so a JVM is needed for compilation and at runtime.
2. Add docker-compose files for MySQL 5.7, PostgreSQL 14, and Hive 2
See `docker/thirdparties/docker-compose`.
3. Add some regression test cases for JDBC queries on the MySQL, PG, and Hive catalogs
These cases are disabled by default (`false`); if set to `true`, you need to start the Docker containers for MySQL/PG/Hive first.
4. Support `if not exists` and `if exists` for create/drop resource and create/drop encryptkey, as sketched below.
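A minimal sketch of the new syntax (the key name and value are hypothetical):

```sql
-- With IF NOT EXISTS, creating an existing key is no longer an error.
CREATE ENCRYPTKEY IF NOT EXISTS my_key AS "ABCD123456789";
-- With IF EXISTS, dropping a missing key is no longer an error.
DROP ENCRYPTKEY IF EXISTS my_key;
```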
Fix for SQL with an array column:

```sql
delete from tbl where c_array is null;
```

For more info, please refer to #13360.
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
1. Remove the FE config `enable_array_type`.
2. Limit the nested depth of arrays on the FE side.
3. Fix a bug where, when loading an array from Parquet, the decimal type was treated as bigint.
4. Fix loading arrays from CSV (vec engine), handling null and "null".
5. Change the CSV array loading behavior: if the array string format in CSV is invalid, it will be converted to null.
6. Remove `check_array_format()`, because its logic is wrong and meaningless.
7. Add stream load CSV test cases and more Parquet broker load tests.
* [bugfix](VecDateTimeValue) consume the microsecond value in function `from_date_format_str` (see the sketch below)
* add a SQL-based regression test
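A minimal SQL sketch of the fixed behavior, assuming the user-visible entry point is `str_to_date` (the exact result formatting may differ):

```sql
-- The %f (microsecond) specifier is now consumed while parsing,
-- so the fractional part no longer corrupts the parsed datetime.
SELECT str_to_date('2022-10-01 11:30:02.123456', '%Y-%m-%d %H:%i:%s.%f');
```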
Co-authored-by: xiaojunjie <xiaojunjie@baidu.com>
Problem:
The `IColumn::is_date` property is lost after `ColumnDate::clone` is called.
Fix:
After a `ColumnDate` is created, also set `IColumn::is_date`.
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
This PR expands the supported data types of the `array_min`/`array_max` functions.
Before the change, `array_min`/`array_max` did not support the date/datetime types.
After the change, `array_min`/`array_max` support the date/datetime types.
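A minimal sketch of the newly supported usage (literal values and casts are illustrative):

```sql
-- date/datetime elements are now accepted by array_min/array_max.
SELECT array_min([CAST('2022-01-01' AS DATE), CAST('2022-06-01' AS DATE)]);
SELECT array_max([CAST('2022-01-01 10:00:00' AS DATETIME), CAST('2022-01-01 12:00:00' AS DATETIME)]);
```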
Co-authored-by: hucheng01 <hucheng01@baidu.com>
1. Refactor the file reader creation in `FileFactory` for simplicity.
Previously, `FileFactory` had too many `create_file_reader` interfaces.
They are now unified into two categories: the interface used by the previous `BrokerScanNode`,
and the interface used by the new `FileScanNode`.
Also, the creation of readers that read a `StreamLoadPipe` is separated from that of readers that read files.
2. Modify the `StreamLoadPlanner` on the FE side to support using `ExternalFileScanNode`.
3. For the generic reader, the file reader is now created inside the reader instead of being passed in from the outside.
4. Add some test cases for CSV stream load; the behavior is the same as the old broker scanner.
* [fix](agg) reset the content of the grouping exprs instead of replacing them with the original exprs
* keep the old behavior if the grouping type is not `GROUP_BY`