Before this PR, stream load could not support columns whose type is a map/array nested inside another map/array. The serde layer can handle this now, so we replace the old path with it.
Note: if an item in complex-type data is empty, we just return an error instead of making up a default value, because we cannot currently define a correct default for complex types.
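As a sketch of what is now accepted (table name, layout, and properties are illustrative, not from this PR), a stream load target column can be a map whose values are arrays:

```sql
-- a MAP nested with ARRAY, which stream load previously could not ingest
CREATE TABLE example_nested (
    id BIGINT,
    m  MAP<STRING, ARRAY<INT>>
)
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```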
We generate a project for each child of a set operation to ensure that the column order of the children is not changed. However, some rules, such as PushDownProjectThroughLimit, could remove these projects involuntarily. When that happens, the column order is wrong and leads to a BE core dump.
This PR uses a new variable in SetOperation to save the output order of the set operation's children. The children's output order can then be changed without affecting the SetOperation at all.
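A minimal sketch of the kind of shape involved (table and column names are made up): each UNION child is wrapped in a project that pins its output order to (a, b), and the LIMIT child is where a rule like PushDownProjectThroughLimit applies:

```sql
-- both children must output (a, b) in exactly this positional order;
-- removing the project above the LIMIT could reorder one child's output
SELECT a, b FROM t1
UNION ALL
(SELECT a, b FROM t2 LIMIT 10);
```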
Issue Number: close #24315
The root cause of this issue is that Elasticsearch's long type allows inserting floats and strings. Doris did not handle these cases when doing type conversion. The current strategy is to take the integer part before the decimal point when a float or string is encountered.
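For illustration (catalog, database, and field names are made up), a long field whose underlying document value is a float or a numeric string is now truncated at the decimal point:

```sql
-- ES doc {"my_long": "123.45"} -> Doris now reads my_long as 123
-- ES doc {"my_long": 7.89}     -> Doris now reads my_long as 7
SELECT my_long FROM es_catalog.es_db.es_table;
```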
Add a file cache regression test on TPC-H 1G in ORC and Parquet formats.
TPC-H will run 3 times:
1. running without the file cache
2. running with the file cache for the first time
3. running with the file cache for the second time
The file cache configuration is already added in `be/conf/be.conf` in the regression test environment, and the available capacity is 100MB. After running the TPC-H 1G test, the metrics introduced by https://github.com/apache/doris/pull/19177 look like:
```
doris_be_file_cache_normal_queue_curr_size{path="/mnt/datadisk1/gaoxin/file_cache"} 92808933
doris_be_file_cache_normal_queue_curr_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 59
doris_be_file_cache_normal_queue_max_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 102400
doris_be_file_cache_normal_queue_max_size{path="/mnt/datadisk1/gaoxin/file_cache"} 89128960
doris_be_file_cache_removed_elements{path="/mnt/datadisk1/gaoxin/file_cache"} 2132
doris_be_file_cache_segment_reader_cache_size{path="/mnt/datadisk1/gaoxin/file_cache"} 54
```
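For reference, a minimal sketch of what the file cache settings in `be/conf/be.conf` might look like (the path is illustrative and the exact keys may differ across versions; the total size here is 100MB):

```
enable_file_cache = true
file_cache_path = [{"path": "/mnt/datadisk1/file_cache", "total_size": 104857600}]
```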
```sql
CREATE EXTERNAL TABLE `dim_server` (
  `col1` varchar(50) NOT NULL,
  `col2` varchar(50) NOT NULL
);

CREATE VIEW ads_oreo_sid_report
(
  `col1`,
  `col2`
)
AS
SELECT
  tmp.col1, tmp.col2
FROM (
  SELECT 'abc' AS col1, 'def' AS col2
) tmp
INNER JOIN dim_server ds ON tmp.col1 = ds.col1 AND tmp.col2 = ds.col2;

SELECT * FROM ads_oreo_sid_report WHERE col1 = 'abc' AND col2 = 'def';
```
Before this PR, col1='abc' and col2='def' could not be pushed down to dim_server. Now the two predicates can be pushed down to the ODBC table.
Fix three bugs:
1. A Hudi slice may have only log files, so `new Path(filePath)` will throw errors.
2. Hive column names are lowercase only, so match column names in ignore-case mode.
3. Compatible with [Spark Datasource Configs](https://hudi.apache.org/docs/configurations/#Read-Options), so users can add `hoodie.datasource.merge.type=skip_merge` in catalog properties to skip merging log files, as shown in the sketch below.
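For example, a hedged sketch of passing the option through catalog properties (the metastore URI is a placeholder, and catalog syntax may vary by Doris version):

```sql
CREATE CATALOG hudi_catalog PROPERTIES (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",
    -- skip merging log files when reading merge-on-read tables
    "hoodie.datasource.merge.type" = "skip_merge"
);
```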
Before this PR, the lambda function Expr did not implement the toSqlImpl() function, so it called the parent class's implementation, which is not suitable for lambda functions and caused errors when creating a view.
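A minimal reproduction sketch (view name and expression are illustrative, assuming the `array_map` lambda syntax):

```sql
-- creating the view serializes the lambda back to SQL via toSqlImpl();
-- with the inherited implementation, the stored view definition was wrong
CREATE VIEW v_lambda AS
SELECT array_map(x -> x + 1, array(1, 2, 3)) AS arr;
```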
select rank() over (partition by A, B) as r, sum(x) over (partition by A, C) as s from T;
A is a common partition key for all window expressions; that is, A is the intersection of {A, B} and {A, C}.
We could push the filter A=1 through this window, since A is a common partition key:
select * from (select a, row_number() over (partition by a) from win) T where a=1;
original plan:
```
----filter((T.a = 1))
------PhysicalWindow
--------PhysicalQuickSort
----------PhysicalProject
------------PhysicalOlapScan[win]
```
transformed to:
```
----PhysicalWindow
------PhysicalQuickSort
--------PhysicalProject
----------filter((T.a = 1))
------------PhysicalOlapScan[win]
```
But C=1 cannot be pushed through the window, since C is not a common partition key.
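For contrast, a sketch reusing the first example's query (column names as above): C appears in only one window's partition list, so it is not in the intersection {A}, and the filter has to stay above PhysicalWindow:

```sql
select *
from (
    select A, C,
           rank() over (partition by A, B) as r,
           sum(x) over (partition by A, C) as s
    from T
) tmp
where C = 1;
```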
In the original logic, the `Export` statement generated a `SelectStmt` for execution, but there was no way to make the `SelectStmt` use the new optimizer.
Now we change the `Export` statement to generate the outfile SQL, and then use the new optimizer to parse that SQL so that outfile can use the new optimizer.
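A hedged sketch of the rewrite (paths, table names, and properties are illustrative): an `EXPORT` statement like the first one below is translated into an outfile query like the second, which the new optimizer can then plan:

```sql
EXPORT TABLE db1.t1 TO "hdfs://host:port/export/"
PROPERTIES ("format" = "csv");

-- roughly the generated outfile SQL
SELECT * FROM db1.t1
INTO OUTFILE "hdfs://host:port/export/"
FORMAT AS CSV;
```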