Push down limit-distinct through left/right outer join or cross join,
such as `select t1.c1 from t1 left join t2 on t1.c1 = t2.c1 order by t1.c1 limit 1;`.
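Conceptually, the limit can be pushed to the join's outer side: a left outer join preserves every row of t1, so taking the top rows of t1 first cannot change the result. A hedged plan sketch (not actual optimizer output):
TopN(t1.c1, limit 1)
+-- Join(LEFT OUTER, t1.c1 = t2.c1)
    +-- TopN(t1.c1, limit 1) on t1
    +-- t2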
1. Add support for Doris to parse ES date fields without time zone info, e.g. `2023-04-17T23:01:18.151`. Such a time is treated as UTC, since ES assumes the time zone of time fields without time zone info is UTC.
2. Change the local time zone conversion from the system local time zone to the session variable time zone.
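A minimal sketch of the combined behavior, assuming an ES catalog table `es_tbl` with a datetime field `ts` holding `2023-04-17T23:01:18.151` (the table and column names are hypothetical):
```
-- The ES value carries no time zone, so it is read as UTC,
-- then converted to the session variable time zone, not the system one.
SET time_zone = 'Asia/Shanghai';
SELECT ts FROM es_tbl; -- expected: 2023-04-18 07:01:18.151 (UTC+8, illustrative)
```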
1. To ensure compatibility with the original optimizer, expose the non-lambda signature of high-order functions externally.
2. Fix some bugs in the `toSql` function in the original optimizer.
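For illustration, `array_filter` is one high-order function with both signatures (a hedged sketch; the PR does not spell out which functions are covered):
```
-- lambda signature
SELECT array_filter(x -> x > 1, [1, 2, 3]);
-- non-lambda signature, usable without lambda support
SELECT array_filter([1, 2, 3], [false, true, true]);
```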
`ExternalFileTableValuedFunction` now has 3 derived classes:
- LocalTableValuedFunction
- HdfsTableValuedFunction
- S3TableValuedFunction
All these tvfs read data from files; the difference is where the file is read from, e.g., HDFS or the local filesystem.
So I refined the fields and methods of these classes.
These tvfs now have 3 kinds of properties:
1. File format properties
File format properties, such as `format` and `column_separator`, are common to all these tvfs,
so they should be analyzed in the parent class `ExternalFileTableValuedFunction`.
2. URI or file path
The URI or file path property indicates the file location. The URI format differs across storage systems,
so it should be analyzed in each derived class.
3. Other properties
All other properties are specific to a certain tvf,
so they should be analyzed in each derived class.
There are 2 new classes:
- `FileFormatConstants`: Define some common property names or variables related to file format.
- `FileFormatUtils`: Define some util methods related to file format.
After this PR, if we want to add a common property for all these tvfs, we only need to handle it in
`ExternalFileTableValuedFunction`, which avoids missing it in any one of them.
### Behavior change
1. Remove the `fs.defaultFS` property in `hdfs()`; it can be derived from `uri`.
2. Use `\t` as the default column separator of the csv format, same as stream load.
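A usage sketch under the new behavior (the namenode address and file path are hypothetical):
```
-- No `fs.defaultFS` property needed: it is derived from `uri`.
-- With csv format, columns split on the default `\t` separator.
SELECT * FROM hdfs(
    "uri" = "hdfs://namenode:8020/path/to/example.csv",
    "format" = "csv"
);
```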
When we do NormalizeToSlot, we push down complex expressions and keep only
their result slots. To do this, we collect each alias and its child,
compute the child in the bottom project, and keep the result slot in the current
node. For example:
Window(max(...), c1 as a1)
after normalization, we get
Window(max(...), a1)
+-- Project(..., c1 as a1)
But in some cases we removed some SlotReference by mistake. For example:
Window(max(...), c1, c1 as a1)
after normalization, we get
Window(max(...), a1)
+-- Project(..., c1 as a1)
we lose the SlotReference c1. This PR fixes the problem. After this PR,
we get
Window(max(...), c1, a1)
+-- Project(..., c1, c1 as a1)
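A hypothetical query shaped like the buggy case, where one slot appears both directly and under an alias next to a window function (table and column names are made up):
```
SELECT c1, c1 AS a1, max(c2) OVER (PARTITION BY c1) FROM t;
```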
Problem:
BE cores because of a bitmap calculation.
Reason:
When a BE check fails, it cores directly.
Example:
SELECT id_bitmap FROM test_bitmap WHERE id_bitmap IN (NULL) LIMIT 20;
Solution:
Forbid this kind of expression in FE during analysis, and also forbid comparing the bitmap type in other unsupported expressions.
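As a hedged sketch of a still-supported alternative, membership tests on a bitmap column can go through bitmap functions rather than comparison expressions (whether this rewrite fits depends on the query's intent):
```
-- Test membership with bitmap_contains instead of `id_bitmap IN (...)`.
SELECT bitmap_count(id_bitmap) FROM test_bitmap
WHERE bitmap_contains(id_bitmap, 1) LIMIT 20;
```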
Support complex types in the jni framework, and successfully run end-to-end on Hudi.
### How to Use
Other scanners only need to implement three interfaces in `ColumnValue`:
```
// Get array elements and append into values
void unpackArray(List<ColumnValue> values);
// Get map key array&value array, and append into keys&values
void unpackMap(List<ColumnValue> keys, List<ColumnValue> values);
// Get the struct fields specified by `structFieldIndex`, and append into values
void unpackStruct(List<Integer> structFieldIndex, List<ColumnValue> values);
```
Developers can take `HudiColumnValue` as an example.
Co-authored-by: sohardforaname <organic_chemistry@foxmail.com>
TODO:
1. support agg_state type
2. support implicit cast literal exception
3. use Nereids to execute DML for these regression cases:
- test_agg_state_nereids (for TODO 1)
- test_array_insert_overflow (for TODO 2)
- nereids_p0/json_p0/test_json_load_and_function (for TODO 2)
- nereids_p0/json_p0/test_json_unique_load_and_function (for TODO 2)
- nereids_p0/jsonb_p0/test_jsonb_load_and_function (for TODO 2)
- nereids_p0/jsonb_p0/test_jsonb_unique_load_and_function (for TODO 2)
- json_p0/test_json_load_and_function (for TODO 2)
- json_p0/test_json_unique_load_and_function (for TODO 2)
- jsonb_p0/test_jsonb_load_and_function (for TODO 2)
- jsonb_p0/test_jsonb_unique_load_and_function (for TODO 2)
- test_multi_partition_key (for TODO 2)
For SQL like:
SELECT JSONB_EXTRACT('{"k1":"v31","k2":300}','$.k1');
the result should be
+------------------------------------------------+
| jsonb_extract('{"k1":"v31","k2":300}', '$.k1') |
+------------------------------------------------+
| "v31" |
+------------------------------------------------+
but the current result is
+------------------------------------------------+
| jsonb_extract('{"k1":"v31","k2":300}', '$.k1') |
+------------------------------------------------+
| <null> |
+------------------------------------------------+
create table t1(c1 int, c2 int);
create table t2(c1 int, c2 int);
insert into t1 values (1,1);
insert into t2 values (1,1);
select * from t1 where exists (select * from t2 where t1.c1 = t2.c1 limit 0);
the result should be an empty set: `limit 0` makes the subquery return no rows, so `exists` evaluates to false for every row of t1.
The `resetSelectList` method uses `originSelectList` to recover the original select list. If `originSelectList` is lost in the constructor, `resetSelectList` fails to recover it, which makes the analyze process fail.