`_do_evaluate` adds a temporary result column into the original table block, so in order to convert only the correct columns to nullable, `convert_block_to_null` must be called before `_do_evaluate`.
### Before:
An error was returned when a TVF queried an empty file or an invalid URI:
1. `get parsed schema failed, empty csv file`
2. `Can not get first file, please check uri.`
### Now:
We now simply return an empty set when a TVF queries an empty file or an invalid URI.
```sql
mysql> select * from s3(
"uri" = "https://error_uri/exp_1.csv",
"s3.access_key"= "xx",
"s3.secret_key" = "yy",
"format" = "csv") limit 10;
Empty set (1.29 sec)
```
Support transforming Trino dialect SQL to logical plan (#21854)
## Proposed changes
Issue Number: #21854
Use `io.trino.sql.tree.AstVisitor` as the visitor: visit the corresponding Trino AST node and transform it into a Doris logical plan.
## Further comments
Here are some examples of function transformation:
The **ascii('a')** function in Doris and the **codepoint('a')** function in Trino have the same behavior and the same signature, so we can handle them with [TrinoFnCallTransformer](3b37b76886/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/trino/TrinoFnCallTransformer.java).
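A minimal sketch of the one-to-one name-mapping idea (the class and method below are illustrative, not the actual TrinoFnCallTransformer API):

```java
import java.util.Map;

// Illustrative sketch only: maps a Trino function name to its Doris
// equivalent when the argument lists are identical. The real
// TrinoFnCallTransformer in fe-core works on Nereids expressions.
public class SimpleFnNameMapper {
    private static final Map<String, String> TRINO_TO_DORIS = Map.of(
            "codepoint", "ascii");

    /** Returns the Doris function name, or null if no simple mapping exists. */
    public static String mapFunctionName(String trinoName) {
        return TRINO_TO_DORIS.get(trinoName.toLowerCase());
    }
}
```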
Another example requires a complex transformer:
Trino's **date_diff('second', TIMESTAMP '2020-12-25 22:00:00', TIMESTAMP '2020-12-25 21:00:00')** corresponds to **seconds_diff('2020-12-25 22:00:00', '2020-12-25 21:00:00')** in Doris. The two functions have different signatures, so TrinoFnCallTransformer alone cannot handle them; an individual complex transformer, [DateDiffFnCallTransformer](3b37b76886/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/trino/DateDiffFnCallTransformer.java), handles this case, as sketched below.
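A rough sketch of the dispatch idea, assuming the unit literal selects the Doris function (names and types simplified; see DateDiffFnCallTransformer for the real implementation):

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only: rewrites Trino's date_diff(unit, t1, t2) into
// the matching unit-specific Doris function by dropping the unit literal
// and renaming the call. Arguments are simplified to strings for brevity.
public class DateDiffSketch {
    private static final Map<String, String> UNIT_TO_DORIS_FN = Map.of(
            "second", "seconds_diff",
            "minute", "minutes_diff",
            "hour", "hours_diff",
            "day", "days_diff");

    public static String rewrite(String unitLiteral, List<String> args) {
        String dorisName = UNIT_TO_DORIS_FN.get(unitLiteral);
        if (dorisName == null) {
            throw new IllegalArgumentException("unsupported unit: " + unitLiteral);
        }
        // date_diff('second', t1, t2) -> seconds_diff(t1, t2)
        return dorisName + "(" + String.join(", ", args) + ")";
    }
}
```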
At present, the `load_to_single_tablet` load implementation picks a tablet by taking a random number modulo the tablet count, which cannot achieve a truly even distribution. This leads to uneven disk IO and uneven use of cluster resources. To solve this problem, we implement round-robin over each partition's tablets on every load, so that data is loaded to each tablet evenly.
When generating the load query plan, the recorded tablet index for the current load is passed to the BE.
Add a daemon task in the FE to regularly clean up `loadTabletRecordMap`. In `getCurrentLoadTabletIndex`, the map looks up the partition's bucket number and advances `load_tablet_index`, as sketched below.
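A minimal sketch of the round-robin bookkeeping, assuming a per-partition cursor map (class and field names are illustrative, not the actual FE implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: per-partition round-robin tablet selection.
// The real FE code keeps this state in loadTabletRecordMap and cleans
// stale entries with a daemon task.
public class RoundRobinTabletSelector {
    // partitionId -> last tablet index used for a load
    private final Map<Long, Integer> loadTabletIndexMap = new ConcurrentHashMap<>();

    /** Returns the tablet index for this load, advancing the cursor atomically. */
    public int getCurrentLoadTabletIndex(long partitionId, int bucketNumber) {
        return loadTabletIndexMap.compute(partitionId,
                (id, idx) -> idx == null ? 0 : (idx + 1) % bucketNumber);
    }
}
```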
Disable inferring expression column names when querying on the old optimizer.
This bug was introduced in #24990.
If your query SQL is
```sql
select id, name, sum(target) FROM db_test.table_test2 group by id, name;
```
the result column names are:
|id|name |sum(cast(target as DOUBLE))|
But when you create a view:
```sql
CREATE VIEW v1 as select id, name, sum(target) FROM db_test.table_test2 group by id, name;
```
and then query the view v1, the result column names are:
|id|name |__sum_2|
Problem:
When running the pipeline, test_leading fails randomly.
Reason:
A physical distribute plan was generated and chosen as the best plan because we cannot get any statistics for an empty table. This produced unexpected results, because without statistics the order of plans in the memo cannot be relied on.
Solution:
Add statistics for the columns used in test_leading, and run it repeatedly in the pipeline to verify.
This PR:
1. Fixes a heap-use-after-free caused by calling PODArray `push_back()` with a reference obtained from `back()`: when the PODArray has reached capacity, `push_back` reallocates and frees the old buffer, so the reference returned by `back()` dangles.
2. Adds CSV-format cases for nested types. CSV files have two ways of encoding nested values: without quotes, or quoted like JSON text.
When the user has configured the `lower_case_table_names` parameter, the case of table names is ignored: in a SQL client, a table can be queried with any casing of its name.
But when using a Connector to read Doris data, the table name must match the stored case exactly, otherwise an error is reported.
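A minimal sketch of case-insensitive name resolution under this setting (illustrative only, not the actual FE code):

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only: when lower_case_table_names is enabled,
// resolve table names case-insensitively so connectors behave like
// the SQL client.
public class TableNameResolver {
    private final Map<String, String> tables =
            new TreeMap<>(String.CASE_INSENSITIVE_ORDER);

    public void register(String tableName) {
        tables.put(tableName, tableName);
    }

    /** Returns the stored table name for any casing, or null if absent. */
    public String resolve(String queriedName) {
        return tables.get(queriedName);
    }
}
```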