The cases named in the title will not pass in a multi-BE environment because the queried BE does not contain the outfile data. We fix this by copying the outfile to every instance.
1. If only the partition columns are read, the `JniConnector` produces empty required fields, so `HudiJniScanner` should read at least the `_hoodie_record_key` field to know how many rows are in the current Hudi split (see the sketch after this list). Even if the `JniConnector` does not read this field, the call to `releaseTable` in `JniConnector` will reclaim the resource.
2. To prevent the BE from failing and exiting, `JniConnector` should call the release methods only after `HudiJniScanner` has been initialized. Note that `VectorTable` is created lazily in `JniScanner`, so there is no resource to reclaim when `HudiJniScanner` fails to initialize.
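A minimal sketch of a query that triggers the first case (catalog, database, table, and column names are hypothetical):
```
-- dt is the (hypothetical) partition column; no data columns are required,
-- so HudiJniScanner falls back to reading _hoodie_record_key for the row count.
SELECT dt FROM hudi_catalog.db.hudi_tbl;
```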
## Remaining work
Other JNI readers such as `paimon` and `maxcompute` may encounter the same problems. Each JNI reader needs to handle this abnormal situation on its own; for now, this fix only ensures that the BE will not exit.
* [Fix](multi-catalog) Do not throw exceptions when a file does not exist for a query of the HMS catalog.
---------
Co-authored-by: 王翔宇 <wangxiangyu@360shuke.com>
Upgrade the Hudi version from 0.13.0 to 0.13.1, and keep the Hudi version of the JNI scanner the same as that of the FE.
This may fix the bug where the table schema is not the same as the Parquet schema.
First of all, MySQL does not have a boolean type; its boolean type is actually tinyint(1). In the previous logic, we forced tinyint(1) to be treated as a boolean by passing `tinyInt1isBit=true`, which causes an error if a tinyint(1) value is not 0 or 1. Therefore, we need to map tinyint(1) to tinyint instead of boolean. This change does not affect the correctness of `where k = 1` or `where k = true` queries.
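A minimal sketch of the affected filters, assuming a hypothetical JDBC external table `t` with a MySQL tinyint(1) column `k`:
```
-- k is tinyint(1) on the MySQL side and is now mapped to tinyint instead of boolean
SELECT * FROM t WHERE k = 1;    -- numeric comparison, unaffected
SELECT * FROM t WHERE k = true; -- boolean literal still compares correctly
```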
This bug was introduced by #21771.
The `fileType` field of `TFileScanRangeParams` was missing, so the delete file of Iceberg v2 was treated as a local file and failed to be read.
The current runtime filter pushdown framework does not handle the CTE sender correctly. On the CTE consumer it simply returns false, so the runtime filter is generated at the wrong place and the `expr_order` check fails, when the filter should actually be pushed down to the CTE sender. Also, pushdown through a set operation is unreachable if the outer statement uses the alias of the set operation's output before the probe slot's translation. Both issues are fixed in this PR.
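Two hypothetical query shapes that hit these paths (all table and column names are illustrative):
```
-- (1) The runtime filter from the join should be pushed down to the CTE
--     sender instead of being rejected at the CTE consumer.
WITH cte AS (SELECT k1 FROM t1)
SELECT * FROM cte JOIN t2 ON cte.k1 = t2.k1;

-- (2) The outer statement uses the set operation's output alias, which
--     previously made pushdown through the set operation unreachable.
SELECT u.c1
FROM (SELECT k1 AS c1 FROM t1 UNION ALL SELECT k1 FROM t2) u
JOIN t3 ON u.c1 = t3.k1;
```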
In the current CTE multicast fragment parameter computation in the coordinator, if the shared hash table for broadcast joins is enabled, the number of destinations equals the number of BE hosts. But the check that decides whether to take the shared-hash-table broadcast code path is wrong: when a multicast's targets mix both broadcast and partitioned exchanges, the first broadcast's destination info overwrites the following partition's, i.e., the destination info ends up at host level when it should be per instance. This causes the hash partition part to hang.
Problem:
The Minidump unit test failed because column statistics deserialization requires a new column schema that had not been added to the minidump unit test file.
Solved:
Add the last update time to the unit test input file.
1. Cancel the future when a timeout occurs, and add a config to modify the RPC timeout.
2. Add a config to modify the number of `BackendServiceProxy` instances, since under a highly concurrent workload the gRPC channel can be blocked.
### Two main changes:
- 1. Add minidump replay.
- 2. Change the minidump serialization of statistics messages and some interfaces between the main logic of the Nereids optimizer and minidump.
### Use of Nereids UT:
- 1. Save minidump files:
Execute the following commands in mysql-client:
```
set enable_nereids_planner=true;
set enable_minidump=true;
```
Then execute SQL in mysql-client.
- 2. Use the nereids-ut script to run a directory of minidump files:
```
cp -r ${DORIS_HOME}/minidump ${DORIS_HOME}/output/fe && cd ${DORIS_HOME}/output/fe
./nereids_ut --d ${directory_of_minidump_files}
```
### Refactor of minidump
- Move the serialization used by statistics into the input serialization, and serialize it together with the catalogs.
- Generate the minidump file only when the `enable_minidump` flag is set; the minidump module interacts with the main optimizer only through `serializeInputsToDumpFile(catalog, statistics, query)` and `serializeOutputsToDumpFile(outputplan)`.
In this PR, we introduce the TOKENIZE function for the inverted index. It is used as follows:
```
SELECT TOKENIZE('I love my country', 'english');
```
It takes two arguments: the first is the text to be tokenized, and the second is the parser type, which can be **english**, **chinese**, or **unicode**.
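The **unicode** parser, not shown above, follows the same call shape (the input string below is illustrative):
```
SELECT TOKENIZE('Apache Doris 数据库', 'unicode');
```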
It can also be used with an existing table, like this:
```
mysql> SELECT TOKENIZE(c,"chinese") FROM chinese_analyzer_test;
+---------------------------------------+
| tokenize(`c`, 'chinese') |
+---------------------------------------+
| ["来到", "北京", "清华大学"] |
| ["我爱你", "中国"] |
| ["人民", "得到", "更", "实惠"] |
+---------------------------------------+
```
The global limit creates a gather action, so all the data is computed in one instance. If we push down the global limit, the nodes that run after the limit node will be slow.
We fix this by pushing down only the local limit.
A join plan tree before the fix:
```
LogicalLimit(global)
  LogicalLimit(local)
    Plan()
      LogicalLimit(global)
        LogicalLimit(local)
          LogicalJoin
            LogicalLimit(global)
              LogicalLimit(local)
                Plan()
            LogicalLimit(global)
              LogicalLimit(local)
                Plan()
```
After the fix, only the topmost limit keeps its global part; all pushed-down limits are local:
```
LogicalLimit(global)
  LogicalLimit(local)
    Plan()
      LogicalLimit(local)
        LogicalJoin
          LogicalLimit(local)
            Plan()
          LogicalLimit(local)
            Plan()
```
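A minimal sketch of a query shape that produces such a tree (table and column names are hypothetical):
```
-- A limit on top of a join: after the fix, only local limits are pushed
-- below the join, and the single global limit (the gather) stays on top.
SELECT * FROM t1 JOIN t2 ON t1.id = t2.id LIMIT 10;
```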
New aggregation function: `map_agg`.
This function takes two arguments, a key and a value, which are used to build a map:
```
SELECT map_agg(column1, column2) FROM t GROUP BY column3;
```
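An illustrative run, assuming a hypothetical table `orders(region, product, amount)`:
```
-- rows: ('us','apple',10), ('us','pear',20), ('eu','apple',30)
SELECT region, map_agg(product, amount)
FROM orders
GROUP BY region;
-- expected shape: ('us', {"apple":10, "pear":20}), ('eu', {"apple":30})
```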