This PR makes the following changes to JdbcOracleClient:
#### Bug fixes:
1. Fix column confusion during schema synchronization when a table whose name contains `/` characters has a similarly named table; the columns of the two tables could be mixed up
2. Fix an NPE during metadata synchronization when the `lower_case_table_names` configuration is enabled
#### Improvements:
1. Change how the Oracle user to Doris database mapping is synchronized: use `metadata.getSchemas` instead of `SELECT DISTINCT OWNER FROM all_tables`
2. When synchronizing metadata, pass `conn.getCatalog()` instead of `null` for the catalog argument
FEATURE:
1. enable the array type in Nereids
2. support generics in function signatures
3. support array and map types in type coercion and type checking
4. add `element_at` and element-slice syntax to the Nereids parser (see the example below)
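A brief usage sketch (the values are illustrative; the bracket form is my reading of the new parser syntax and is an assumption, not confirmed by the list above):
```sql
-- element_at fetches the element at a 1-based position
SELECT element_at(array(10, 20, 30), 2);   -- expected: 20
-- subscript form, assumed to map to element access in the parser
SELECT array(10, 20, 30)[2];
```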
REFACTOR:
1. remove AbstractDataType
BUG FIX:
1. remove `FROM` from the nonReserved keyword list
TODO:
1. support lambda expressions
2. use Nereids' own approach to function type coercion
3. use `castIfNotSame` when doing implicit casts on `BoundFunction`
4. make `AnyDataType` type coercion do the same thing as function type coercion
5. add the array functions below:
- array_apply
- array_concat
- array_filter
- array_sortby
- array_exists
- array_first_index
- array_last_index
- array_count
- array_shuffle
- array_pushfront
- array_pushback
- array_repeat
- array_zip
- reverse
- concat_ws
- split_by_string
- explode
- bitmap_from_array
- bitmap_to_array
- multi_search_all_positions
- multi_match_any
- tokenize
If we write SQL like `select cast(array() as array<varchar(10)>)`, the CastExpr in the FE calls `analyze()`, which invokes `Type.matchExactType(childType, type, true);`. For array types this currently only checks `contains_null`, but it should also check the inner item type so that `matchExactType` is correct for arrays.
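A small sketch of why the inner type matters (the second statement is an illustrative case I am assuming, not taken from the report above): if only `contains_null` is compared, item types that differ could still be treated as an exact match and the cast handled incorrectly.
```sql
-- empty array literal cast to array<varchar(10)>: the item type must be checked
select cast(array() as array<varchar(10)>);
-- illustrative case: array<int> and array<varchar(10)> differ only in item type
select cast(array(1, 2, 3) as array<varchar(10)>);
```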
CTAS with an outer join, for example:
```sql
create table a (
id int not null,
name varchar(20) not null
)
distributed by hash(id) buckets 4
properties (
"replication_num"="1"
);
create table b (
id int not null,
age int not null
)
distributed by hash(id) buckets 4
properties (
"replication_num"="1"
);
insert into a values(1, 'ww'), (2, 'zs');
insert into b values(1, 22);
create table c properties("replication_num"="1") as select a.id, a.name, b.age from a left join b on a.id = b.id;
```
The column `age` in `c` is created as NOT NULL, but it should be nullable. We fix this by using the nullability of the outputs of the root plan fragment.
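To see the effect of the fix, the schema of `c` can be inspected after the CTAS (a verification sketch; the expectation in the comment follows from the description above):
```sql
-- with the fix, age should be reported as nullable,
-- because b is the null-producing side of the left join
DESC c;
```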
* [Improvement](meta) add a default_value column to the result of the SHOW VARIABLES statement
* add a Changed column to show whether the value has been modified
* fix code style issue
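A hedged usage sketch (the variable name is only an example): after this change, the result of SHOW VARIABLES is expected to also carry the default value and whether it was changed.
```sql
-- result rows should now include the default_value and Changed columns
SHOW VARIABLES LIKE 'query_timeout';
```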
If we write SQL like `select array(1.0, 2.0, null, null, 2.0)`, the null arguments are passed to the BE with type UInt8, which does not match the `array()` function signature with decimal arguments, and the BE core dumps. The fix is to cast on the BE side so that the null arguments are cast to the decimal type.
Iceberg maintains its own metadata, which includes count statistics for table data. If the table contains no equality deletes, we can obtain the row count of the current table directly from these count statistics.
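A sketch of the kind of query this optimizes (catalog, database, and table names are hypothetical): when the Iceberg table has no equality deletes, the count can be answered from the metadata statistics instead of scanning data files.
```sql
-- answered from Iceberg count statistics when there are no equality deletes
SELECT COUNT(*) FROM iceberg_catalog.sample_db.sample_table;
```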
1. We should not use `((LogicalPlanAdapter) parsedStmt).getStatementContext().getOriginStatement().originStmt.toLowerCase()` as the cache key (that is, do not invoke `toLowerCase()`). For example, `select * from tbl1 where k1 = 'a'` is different from `select * from tbl1 where k1 = 'A'`, so the cache should miss.
2. According to issue 6735, the cache key should contain the DDL SQL of all referenced views (including nested views); see the sketch below.
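A minimal sketch of the nested-view case (the view and table names are hypothetical): if the key only covered the outer query text, redefining `v1` would not invalidate cached results for queries on `v2`, so the DDL of every view in the chain has to be part of the key.
```sql
CREATE VIEW v1 AS SELECT k1, k2 FROM tbl1;
CREATE VIEW v2 AS SELECT k1 FROM v1 WHERE k2 > 0;
-- the cache key for this query should include the DDL of both v1 and v2
SELECT * FROM v2;
```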