Problem:
1. Create an Iceberg-type catalog.
2. Use the Iceberg catalog to query a specified version.
```
mysql> show catalog iceberg;
+----------------------+-------------------------+
| Key                  | Value                   |
+----------------------+-------------------------+
| type                 | iceberg                 |
| iceberg.catalog.type | hms                     |
| hive.metastore.uris  | thrift://127.0.0.1:9083 |
| hadoop.username      | hadoop                  |
| create_time          | 2023-07-25 16:51:00.522 |
+----------------------+-------------------------+
5 rows in set (0.02 sec)
mysql> select * from iceberg.iceberg_db.tb1 FOR VERSION AS OF 8783036402036752909;
ERROR 5090 (42000): errCode = 2, detailMessage = Only iceberg/hudi external table supports time travel in current version
```
Change:
Add the `ICEBERG_EXTERNAL_TABLE` type so that a version or time can be specified.
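A sketch of the intended usage after this change (the snapshot id is the one from the example above; the timestamp is illustrative):
```
-- query a specific snapshot of the Iceberg external table
SELECT * FROM iceberg.iceberg_db.tb1 FOR VERSION AS OF 8783036402036752909;

-- query the table as of a point in time (illustrative timestamp)
SELECT * FROM iceberg.iceberg_db.tb1 FOR TIME AS OF '2023-07-25 16:51:00';
```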
Configs
The BDBJE election timeout is 30 seconds, so we enlarge `thrift_rpc_timeout_ms`
and `txn_commit_rpc_timeout_ms` to 60s.
We also enlarge `bdbje_lock_timeout_second` from 1 to 5.
* Improve updating the database binlog property (`binlog.enable` = "true") by checking that all tables in the database have binlog enabled (see the sketch below)
* Add more `test_alter_database_property` regression tests
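A rough sketch of the statements involved (database and table names are hypothetical):
```
-- enable binlog on each table in the database first
ALTER TABLE db1.tbl1 SET ("binlog.enable" = "true");

-- enabling binlog on the database now checks that all tables have binlog enabled
ALTER DATABASE db1 SET PROPERTIES ("binlog.enable" = "true");
```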
1. Refactor in-predicate filter estimation (illustrated below).
Example: `A in (1, 2, 3, 4)`. After applying the in-predicate filter, `A.stats.max <= 4` and `A.stats.min >= 1`.
2. Maintain minExpr and maxExpr when deriving in-predicate stats.
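A small illustration under an assumed table `t` with an integer column `A`:
```
-- after applying the IN-predicate filter below, the derived stats for A
-- satisfy A.stats.min >= 1 and A.stats.max <= 4, and the corresponding
-- minExpr/maxExpr are kept alongside the numeric bounds
SELECT * FROM t WHERE A IN (1, 2, 3, 4);
```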
Currently, the new optimizer does not consider partial update at all.
This PR adds the ability to convert a delete statement into a partial-update insert statement
for merge-on-write unique tables.
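A minimal sketch, assuming a merge-on-write unique-key table `t1` (schema and values are illustrative):
```
-- a merge-on-write unique-key table
CREATE TABLE t1 (k INT, v1 INT, v2 INT)
UNIQUE KEY (k)
DISTRIBUTED BY HASH(k) BUCKETS 1
PROPERTIES (
    "enable_unique_key_merge_on_write" = "true",
    "replication_num" = "1"
);

-- with this PR, the new optimizer rewrites this delete into a partial-update
-- insert that writes only the key column and the hidden delete-sign column
DELETE FROM t1 WHERE k = 1;
```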
Support the following grammar:
ANALYZE TABLE test WITH CRON "* * * * * ?"
Such a job is scheduled as the cron expression specifies, but only minute-level scheduling is natively supported.
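For example (a sketch; the cron expression is illustrative):
```
-- collect statistics every 10 minutes; schedules finer than one minute
-- are not supported even though the cron expression has a seconds field
ANALYZE TABLE test WITH CRON "0 */10 * * * ?";
```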
1. If only the partition columns are read, the `JniConnector` will produce empty required fields, so `HudiJniScanner` should read at least the "_hoodie_record_key" field to know how many rows are in the current hoodie split. Even if the `JniConnector` does not read this field, the call to `releaseTable` in `JniConnector` will reclaim the resource.
2. To prevent BE failure and exit, `JniConnector` should call the release methods after `HudiJniScanner` is initialized. Note that `VectorTable` is created lazily in `JniScanner`, so we do not need to reclaim the resource when `HudiJniScanner` fails to initialize.
## Remaining work
Other jni readers like `paimon` and `maxcompute` may encounter the same problems. Each jni reader needs to handle this abnormal situation on its own; currently this fix only ensures that the BE will not exit.
Upgrade the hudi version from 0.13.0 to 0.13.1, and keep the hudi version of the jni scanner the same as that of the FE.
This may fix the bug where the table schema is not the same as the parquet schema.
First of all, MySQL does not have a real boolean type; its BOOLEAN type is actually tinyint(1). In the previous logic, we forced tinyint(1) to be a boolean by passing tinyInt1isBit=true, which causes an error if a tinyint(1) value is not 0 or 1. Therefore, we need to map tinyint(1) to tinyint instead of boolean. This change does not affect the correctness of `where k = 1` or `where k = true` queries.
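A small illustration of why the old mapping breaks (the MySQL-side table and values are hypothetical):
```
-- on the MySQL side, BOOLEAN is just a synonym for tinyint(1)
CREATE TABLE demo (k tinyint(1));
INSERT INTO demo VALUES (2);  -- 2 is a legal tinyint(1) value but not a valid boolean

-- mapping tinyint(1) to TINYINT on the Doris side avoids the conversion error,
-- while `WHERE k = 1` and `WHERE k = true` still return the same results
SELECT * FROM demo WHERE k = 1;
```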
This bug was introduced by #21771.
The fileType field of TFileScanRangeParams was missing, so the delete file of iceberg v2 was treated as a local file
and failed to read.
The current runtime filter pushdown framework does not handle the CTE sender correctly. On the CTE consumer it simply returns false, so the runtime filter is generated at the wrong place and the expr_order check fails, when it should actually be pushed down to the CTE sender. In addition, set-operation pushdown is unreachable if the outer statement uses the alias of the set operation's output before the probeSlot's translation. Both issues are fixed in this PR.
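A hedged sketch of the first case (table and column names are assumptions): the runtime filter built from the join with `t2` probes a CTE consumer and should be pushed further down to the CTE sender instead of being rejected at the consumer.
```
-- the CTE is consumed twice, so it is materialized with one sender and two consumers;
-- the runtime filter from the join with t2 should reach the CTE sender
WITH cte AS (SELECT k, v FROM t1)
SELECT *
FROM cte a
JOIN cte b ON a.k = b.k
JOIN t2 ON a.k = t2.k;
```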
In the current CTE multicast fragment parameter computation in the coordinator, if the shared hash table for broadcast join is enabled, the number of destinations equals the number of BE hosts. But the check for taking the shared-hash-table broadcast code path is wrong: when a multicast's targets include both a broadcast and a partitioned exchange, the first broadcast info overwrites the following partition's, i.e. the destination info becomes host-level when it should be per-instance. This causes the hash-partition part to hang.
Problem:
The minidump unit test failed because column statistics deserialization needs a new column schema field that had not been added to the minidump unit test file.
Solved:
Add the last update time to the unit test input file.
1. Cancel the future when a timeout occurs, and add a config to modify the rpc timeout.
2. Add a config to modify the number of BackendServiceProxy, since under a high-concurrency workload the GRPC channel can be blocked.
### Two main changes:
- 1. Add minidump replay
- 2. Change the minidump serialization of statistics messages and some interfaces between the main logic of the Nereids optimizer and minidump
### Use of nereids ut:
- 1. Save minidump files:
Execute the following commands in mysql-client:
```
set enable_nereids_planner=true;
set enable_minidump=true;
```
Then execute SQL in mysql-client.
- 2. Use the nereids-ut script to run a directory of minidump files:
```
cp -r ${DORIS_HOME}/minidump ${DORIS_HOME}/output/fe && cd ${DORIS_HOME}/output/fe
./nereids_ut --d ${directory_of_minidump_files}
```
### Refactor of minidump
- Move the serialization of statistics into the serialization of inputs, and serialize it together with catalogs
- Generate the minidump file only when the enable_minidump flag is set; the minidump module interacts with the main optimizer only through:
serializeInputsToDumpFile(catalog, statistics, query) and serializeOutputsToDumpFile(outputplan).
In this PR, we introduce the TOKENIZE function for the inverted index. It is used as follows:
```
SELECT TOKENIZE('I love my country', 'english');
```
It has two arguments: the first is the text to be tokenized, and the second is the parser type, which can be **english**, **chinese**, or **unicode**.
It can also be used with an existing table, like this:
```
mysql> SELECT TOKENIZE(c,"chinese") FROM chinese_analyzer_test;
+---------------------------------------+
| tokenize(`c`, 'chinese') |
+---------------------------------------+
| ["来到", "北京", "清华大学"] |
| ["我爱你", "中国"] |
| ["人民", "得到", "更", "实惠"] |
+---------------------------------------+
```
The global limit creates a gather action, and all the data is processed in one instance. If we push down the global limit, the nodes that run after the limit node will be slow.
We fix this by pushing down only the local limit.
A join plan tree before the fix:
```
LogicalLimit(global)
  LogicalLimit(local)
    Plan()
      LogicalLimit(global)
        LogicalLimit(local)
          LogicalJoin
            LogicalLimit(global)
              LogicalLimit(local)
                Plan()
            LogicalLimit(global)
              LogicalLimit(local)
                Plan()

after the fix:
LogicalLimit(global)
  LogicalLimit(local)
    Plan()
      LogicalLimit(local)
        LogicalJoin
          LogicalLimit(local)
            Plan()
          LogicalLimit(local)
            Plan()
```
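A hypothetical query shape that can produce plans like the ones above (table names are assumptions): with a limit above a cross join, the limit can be pushed into both join children, but after this fix only the local limit is pushed down.
```
-- only local limits are pushed below the join; the global limit stays on top
SELECT * FROM t1, t2 LIMIT 10;
```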