**Optimize**
PR #14470 used `Expr` to filter delete rows to match the current data file,
but the rows in a delete file are [sorted by file_path then position](https://iceberg.apache.org/spec/#position-delete-files)
to optimize filtering while scanning, so this PR removes `Expr` and uses binary search to filter delete rows.
In addition, delete files are likely to be dictionary-encoded, and decoding the `file_path`
column into `ColumnString` is time-consuming, so this PR uses `ColumnDictionary` to read the `file_path` column.
In testing, the performance of Iceberg v2 MOR (merge-on-read) improved by 30%+.
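A minimal sketch of the binary-search idea, assuming the delete rows are already loaded and sorted as the spec requires (`DeleteRow` and `positions_for_file` are hypothetical names, not the PR's actual code):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch: position-delete rows are sorted by (file_path, pos)
// per the Iceberg spec, so the rows belonging to one data file form a single
// contiguous range that binary search can locate without evaluating an Expr.
struct DeleteRow {
    std::string file_path;
    int64_t pos;
};

// Returns the delete positions that apply to `data_file`, assuming `rows`
// is sorted by file_path and then by pos.
std::vector<int64_t> positions_for_file(const std::vector<DeleteRow>& rows,
                                        const std::string& data_file) {
    auto lower = std::lower_bound(
            rows.begin(), rows.end(), data_file,
            [](const DeleteRow& r, const std::string& f) { return r.file_path < f; });
    auto upper = std::upper_bound(
            rows.begin(), rows.end(), data_file,
            [](const std::string& f, const DeleteRow& r) { return f < r.file_path; });
    std::vector<int64_t> positions;
    positions.reserve(static_cast<size_t>(upper - lower));
    for (auto it = lower; it != upper; ++it) {
        positions.push_back(it->pos);  // already in ascending order
    }
    return positions;
}
```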
**Fix Bug**
The lazy-read block may not contain the filter column if the whole group is filtered out by `Expr`
and the `batch_eof` is generated from the next batch.
This PR removes typeCoercion on the expected expr in `ExpressionRewriteTestHelper`, because we should not rewrite the expected expr at all; doing so would change it unexpectedly.
The cooldown time is wrong for data on SSD, because the cooldown time for all tables/partitions
is calculated only once, when the `DataProperty` class is loaded, and cannot be updated later.
This patch ensures that the cooldown time for each table/partition is calculated in real time
when the table/partition is created.
Co-authored-by: weizuo <weizuo@xiaomi.com>
Currently, we may fail to build the third-party libraries if outdated extracted data is kept.
Consider the following scenario: Bob adds patches to some libraries, then Alice updates the codebase and builds
the third-party libraries. If Alice keeps the outdated extracted data, she will fail to build the third-party libraries
because the patches are not applied, due to the outdated `patched_marks`.
This PR introduces a way to clean the outdated data before building the third-party libraries.
1. Add the IntegralDivide operator to support `DIV` semantics
2. Add more operator rewriters to keep expression types consistent between operators
3. Support conversion between float types and decimal types.
After this PR, the cases below can be executed normally, as in the legacy optimizer:
use test_query_db;
select k1, k5, 100000*k5 from test order by k1, k2, k3, k4;
select avg(k9) as a from test group by k1 having a < 100.0 order by a;
Refactor the usage of file cache
### Motivation
There may be many kinds of file cache for different scenarios,
so the logic of the file cache should be hidden inside the file reader.
That way, a change of file cache does not require any change to the
upper-layer calling logic.
### Details
1. Add a `FileReaderOptions` param to `fs->open_file()`; `FileReaderOptions` contains:
   1. `CachePathPolicy`
      Determines the cache file path for a given file path.
      We can implement a different `CachePathPolicy` for each kind of file cache.
   2. `FileCacheType`
      Specifies the file cache type: SUB_FILE_CACHE, WHOLE_FILE_CACHE, FILE_BLOCK_SIZE, etc.
2. Hide the cache logic inside the file reader.
   The `RemoteFileSystem` handles the `CacheOptions` and determines whether to
   return a `CachedFileReader` or a `RemoteFileReader`.
   The file cache itself is managed by the `CachedFileReader` (see the sketch after this list).
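A minimal sketch of the interface shape described above; the type names follow the description, but the members and signatures are illustrative, not Doris' actual declarations:

```cpp
#include <memory>
#include <string>

// Illustrative declarations only; the real Doris interfaces differ in detail.
enum class FileCacheType { NO_CACHE, SUB_FILE_CACHE, WHOLE_FILE_CACHE, FILE_BLOCK_SIZE };

// Maps a remote file path to the local path that holds its cached data.
class CachePathPolicy {
public:
    virtual ~CachePathPolicy() = default;
    virtual std::string get_cache_path(const std::string& file_path) const = 0;
};

// Passed to fs->open_file(); the upper layer only fills in these options.
struct FileReaderOptions {
    FileCacheType cache_type = FileCacheType::NO_CACHE;  // NO_CACHE is a made-up default
    std::shared_ptr<CachePathPolicy> cache_path_policy;
};

class FileReader {
public:
    virtual ~FileReader() = default;
    // read_at(), size(), close(), ...
};

class RemoteFileSystem {
public:
    // Decides internally whether to return a CachedFileReader wrapping a
    // RemoteFileReader or a plain RemoteFileReader; callers never branch on it.
    std::unique_ptr<FileReader> open_file(const std::string& path,
                                          const FileReaderOptions& opts);
};
```

The point of the design is that the upper layer only constructs `FileReaderOptions`; whether and how the returned reader caches data stays inside the file system and reader layer.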
Currently, the column default value is used only for load tasks. But in the case of an Iceberg schema change,
a query task may also need to read the default value for columns that do not exist in the old schema.
This PR supports default values for query tasks.
Manually tested the broker load and external EMR regression cases.
Currently, `show create function` is designed for native functions and has some unsuitable points.
This PR brings several improvements so that the output of `show create function` can be used to create the function again:
1. Add the type property.
2. Add the ALWAYS_NULLABLE property for java_udf.
3. Use the file property rather than object_file for java_udf, following the usage of create java_udf.
4. Remove the md5 property, because the file may vary when the function is created again.
5. Remove the INIT_FN, UPDATE_FN, MERGE_FN, SERIALIZE_FN, etc. properties for java_udf, because java_udf does not need them.
Test data: https://data.gharchive.org/2020-11-13-18.json.gz, 2 GB, 197696 lines
Before: String 13s vs. JSONB 28s
After: String 13s vs. JSONB 16s
**NOTICE: simdjson needs to be patched because its BOOL conflicts with the macro BOOL defined in odbc sqltypes.h**
Currently, the bitmap index can only apply pushed-down predicates that appear in AND conditions. Predicates in OR conditions and other complex compound conditions are not pushed down to the storage layer, which leads to reading more data.
Based on that situation, this PR does the following:
1. Support applying the bitmap index to compound predicates, for query SQL like the following (see the sketch after this list):
select * from tb where a > 'hello' or b < 100;
select * from tb where a > 'hello' or b < 100 or c > 'ok';
select * from tb where (a > 'hello' or b < 100) and (a < 'world' or b > 200);
select * from tb where (not a > 'hello') or b < 100;
...
In the SQL above, columns a, b, and c each have a bitmap_index created.
2. This optimization reduces the amount of data read by using the index.
3. Set the config `enable_index_apply_compound_predicates` to enable this optimization.
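A rough sketch of how such compound predicates can be evaluated on bitmap indexes, using CRoaring's `roaring::Roaring` as a stand-in for the engine's row-id bitmap (illustrative code, not the PR's implementation):

```cpp
#include <cstdint>
#include <roaring/roaring.hh>  // CRoaring C++ wrapper; Doris' internal bitmap type may differ

using RowBitmap = roaring::Roaring;

// Each leaf predicate (e.g. a > 'hello') is first answered by its column's
// bitmap index as a row-id bitmap; the AND/OR/NOT structure of the compound
// predicate is then applied to those bitmaps instead of to the row data.

// (a > 'hello' OR b < 100) AND (a < 'world' OR b > 200)
RowBitmap eval_and_of_ors(const RowBitmap& a_gt_hello, const RowBitmap& b_lt_100,
                          const RowBitmap& a_lt_world, const RowBitmap& b_gt_200) {
    return (a_gt_hello | b_lt_100) & (a_lt_world | b_gt_200);
}

// (NOT a > 'hello') OR b < 100 -- NOT is a complement over the segment's row ids.
RowBitmap eval_not_or(const RowBitmap& a_gt_hello, const RowBitmap& b_lt_100,
                      uint64_t num_rows) {
    RowBitmap not_a = a_gt_hello;
    not_a.flip(0, num_rows);  // flip [0, num_rows): rows NOT matching a > 'hello'
    return not_a | b_lt_100;
}
```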