1. Do not use recursive parsing, to avoid stack overflow.
2. Create a balanced tree instead of a left-deep tree (see the sketch below).
TODO: add expr_depth_limit to Nereids' parser
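A minimal sketch of the balanced-tree idea, with hypothetical expression types rather than the actual Nereids classes: the parser collects the children of a long AND chain into a flat list iteratively, then combines them by splitting in the middle, so tree depth is O(log n) instead of O(n).
```java
import java.util.Arrays;
import java.util.List;

// Hypothetical expression types for illustration only.
interface Expr {}

class Leaf implements Expr {
    final String name;
    Leaf(String name) { this.name = name; }
}

class And implements Expr {
    final Expr left, right;
    And(Expr left, Expr right) { this.left = left; this.right = right; }
}

final class BalancedTreeBuilder {
    // Combine children[lo, hi) into a balanced binary tree. The recursion here
    // is safe: its depth is O(log n), unlike parsing a left-deep chain, which
    // recurses once per operator and can overflow the stack on long inputs.
    static Expr build(List<Expr> children, int lo, int hi) {
        if (hi - lo == 1) {
            return children.get(lo);
        }
        int mid = lo + (hi - lo) / 2;
        return new And(build(children, lo, mid), build(children, mid, hi));
    }

    public static void main(String[] args) {
        List<Expr> leaves = Arrays.asList(
                new Leaf("a"), new Leaf("b"), new Leaf("c"), new Leaf("d"));
        Expr root = build(leaves, 0, leaves.size()); // ((a AND b) AND (c AND d))
    }
}
```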
The file system cache key should contain `scheme://authority`, e.g. `hdfs://nameservices1`.
Otherwise it will hit errors like:
```
Wrong FS: hdfs//abc/xxxx, expected: hdfs://def
```
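A minimal sketch of the idea, assuming a key derived with `java.net.URI` (not the actual Doris code): two paths that differ in scheme or authority must never share one cached file system handle.
```java
import java.net.URI;

final class FileSystemCacheKeyDemo {
    // Derive the cache key from scheme + authority only, so that e.g.
    // hdfs://nameservices1 and hdfs://def map to different cache entries.
    static String cacheKey(String path) {
        URI uri = URI.create(path);
        return uri.getScheme() + "://" + uri.getAuthority();
    }

    public static void main(String[] args) {
        // prints "hdfs://nameservices1"
        System.out.println(cacheKey("hdfs://nameservices1/user/hive/warehouse"));
    }
}
```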
Improve external table statistics collection, including logging and observability, and fix some bugs.
1. Add a Running state for statistics jobs.
2. Add progress to `SHOW ANALYZE` for jobs (n/m tasks finished, n/m tasks failed, and so on).
3. Add analyze time cost to `SHOW ANALYZE` for tasks.
4. Make task failure messages clearer.
5. Synchronize the job status updating code in updateTaskStatus (see the sketch after this list).
6. Fix an NPE in HMSAnalyzeTask (avoid refreshing the statistics cache if the collection SQL failed).
7. Return an error message when a synchronous collection times out.
8. Improve log levels.
9. Fix misuse of logCreateAnalysisJob for tasks.
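A hedged sketch of item 5, with assumed class and field names rather than the actual Doris code: the per-task update and the derived job state change happen under one lock, so two tasks finishing concurrently cannot both observe a stale view and leave the job stuck in a non-terminal state.
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class AnalysisJobDemo {
    enum State { PENDING, RUNNING, FINISHED, FAILED }

    private final Map<Long, State> taskStates = new ConcurrentHashMap<>();
    private volatile State jobState = State.RUNNING;

    // Synchronized so that the task update and the "all tasks done" check
    // are atomic with respect to other finishing tasks.
    synchronized void updateTaskStatus(long taskId, State state) {
        taskStates.put(taskId, state);
        boolean allDone = taskStates.values().stream()
                .allMatch(s -> s == State.FINISHED || s == State.FAILED);
        if (allDone) {
            jobState = taskStates.containsValue(State.FAILED)
                    ? State.FAILED : State.FINISHED;
        }
    }
}
```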
The current DPhyper join reorder does not consider join conjuncts that reference only one side of the join. This is a common case for outer-join conjuncts, e.g. `t1 LEFT JOIN t2 ON t1.a = t2.a AND t1.b > 10`, where `t1.b > 10` references only the left side. So we need to disable outer join reorder in DPhyper until this problem is addressed.
Problem:
1. Create an iceberg-type catalog.
2. Use the iceberg catalog to query a specific version:
```
mysql> show catalog iceberg;
+----------------------+--------------------------+
| Key                  | Value                    |
+----------------------+--------------------------+
| type                 | iceberg                  |
| iceberg.catalog.type | hms                      |
| hive.metastore.uris  | thrift://127.0.0.1:9083  |
| hadoop.username      | hadoop                   |
| create_time          | 2023-07-25 16:51:00.522  |
+----------------------+--------------------------+
5 rows in set (0.02 sec)
mysql> select * from iceberg.iceberg_db.tb1 FOR VERSION AS OF 8783036402036752909;
ERROR 5090 (42000): errCode = 2, detailMessage = Only iceberg/hudi external table supports time travel in current version
```
Change:
Add an `ICEBERG_EXTERNAL_TABLE` type so that specifying a version or time works for iceberg tables (a sketch of the intended check follows).
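A hedged sketch of the check after this change; apart from `ICEBERG_EXTERNAL_TABLE`, the enum constants and exception type are assumptions, not the actual Doris identifiers.
```java
final class TimeTravelCheckDemo {
    enum TableType { OLAP, HMS_EXTERNAL_TABLE, ICEBERG_EXTERNAL_TABLE, HUDI_EXTERNAL_TABLE }

    static void checkTimeTravel(TableType type) {
        switch (type) {
            case ICEBERG_EXTERNAL_TABLE:
            case HUDI_EXTERNAL_TABLE:
                return; // FOR VERSION/TIME AS OF is supported
            default:
                throw new IllegalStateException(
                        "Only iceberg/hudi external table supports time travel in current version");
        }
    }

    public static void main(String[] args) {
        checkTimeTravel(TableType.ICEBERG_EXTERNAL_TABLE); // now passes instead of ERROR 5090
    }
}
```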
Configs:
The bdbje election timeout is 30 seconds, so we enlarge `thrift_rpc_timeout_ms`
and `txn_commit_rpc_timeout_ms` to 60s.
Also, enlarge `bdbje_lock_timeout_second` from 1 to 5.
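In `fe.conf` terms, the resulting values would look like this (a sketch; the numbers follow directly from the description above):
```
thrift_rpc_timeout_ms = 60000
txn_commit_rpc_timeout_ms = 60000
bdbje_lock_timeout_second = 5
```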
* Improve updating a database's binlog property (`binlog.enable = "true"`) by checking that all tables in the database have binlog enabled (see the sketch below)
* Add more test_alter_database_property regression tests
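A hedged sketch of the added check, with assumed interface and method names rather than the actual Doris code: enabling binlog at the database level is rejected unless every table already has binlog enabled.
```java
import java.util.List;

final class BinlogPropertyCheckDemo {
    interface Table {
        String getName();
        boolean isBinlogEnabled();
    }

    // Called before applying binlog.enable = "true" at the database level.
    static void checkAllTablesEnableBinlog(List<Table> tables) {
        for (Table table : tables) {
            if (!table.isBinlogEnabled()) {
                throw new IllegalStateException(
                        "table " + table.getName() + " does not enable binlog");
            }
        }
    }
}
```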
1. Refactor in-predicate filter estimation.
   Example: for `A in (1, 2, 3, 4)`, after the in-predicate filter, `A.stats.max <= 4` and `A.stats.min >= 1`.
2. Maintain minExpr and maxExpr in in-predicate stats derivation (see the sketch below).
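A minimal sketch of the numeric part of the derivation (not the actual Nereids code): the filtered range is the intersection of the column's existing range and the IN-list range; the real code would additionally keep the literal expressions (minExpr/maxExpr) that produced the bounds.
```java
import java.util.List;

final class InPredicateStatsDemo {
    // Returns { newMin, newMax } for a column with range [colMin, colMax]
    // filtered by an IN list.
    static double[] deriveRange(List<Double> inList, double colMin, double colMax) {
        double listMin = inList.stream().mapToDouble(d -> d).min().orElse(colMin);
        double listMax = inList.stream().mapToDouble(d -> d).max().orElse(colMax);
        return new double[] { Math.max(colMin, listMin), Math.min(colMax, listMax) };
    }

    public static void main(String[] args) {
        // A in (1, 2, 3, 4) on an unconstrained column yields [1, 4].
        double[] r = deriveRange(List.of(1.0, 2.0, 3.0, 4.0),
                Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY);
        System.out.println(r[0] + ".." + r[1]); // 1.0..4.0
    }
}
```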
Currently, the new optimizer does not consider partial update at all.
This PR adds the ability to convert a delete statement into a partial-update insert statement
for merge-on-write unique tables.
Support the following grammar:
ANALYZE TABLE test WITH CRON "* * * * * ?"
Such a job is scheduled as the cron expression specifies, but natively supports minute-level scheduling only.
1. If only the partition columns are read, `JniConnector` will produce empty required fields, so `HudiJniScanner` should read at least the "_hoodie_record_key" field to know how many rows are in the current hoodie split. Even if `JniConnector` doesn't read this field, the call to `releaseTable` in `JniConnector` will reclaim the resource.
2. To prevent the BE from failing and exiting, `JniConnector` should call release methods only after `HudiJniScanner` has been initialized. Note that `VectorTable` is created lazily in `JniScanner`, so we don't need to reclaim the resource when `HudiJniScanner` fails to initialize (see the sketch below).
## Remaining work
Other JNI readers like `paimon` and `maxcompute` may encounter the same problems; each JNI reader needs to handle this abnormal situation on its own, and currently this fix can only ensure that the BE will not exit.
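A hedged illustration of the ordering, written in Java with simplified, assumed names (the real `JniConnector` lives on the BE side): release is attempted only on a scanner that was actually created, because `VectorTable` is created lazily after initialization, so a scanner that threw during construction has nothing to reclaim.
```java
import java.util.function.Supplier;

final class JniScanDemo {
    interface JniScanner {
        void open() throws Exception;
        void releaseTable();
    }

    static void scan(Supplier<JniScanner> factory) {
        JniScanner scanner = null;
        try {
            scanner = factory.get(); // may throw; nothing to release in that case
            scanner.open();
            // ... fetch batches ...
        } catch (Exception e) {
            // Surface the error to the caller instead of crashing the process.
            System.err.println("scan failed: " + e.getMessage());
        } finally {
            if (scanner != null) {
                scanner.releaseTable(); // safe only after successful creation
            }
        }
    }
}
```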
Upgrade the hudi version from 0.13.0 to 0.13.1, and keep the hudi version of the JNI scanner the same as that of the FE.
This may fix the bug where the table schema is not the same as the parquet schema.
First of all, MySQL does not have a boolean type; its boolean type is actually tinyint(1). In the previous logic, we forced tinyint(1) to be a boolean by passing tinyInt1isBit=true, which causes an error if a tinyint(1) value is not 0 or 1. Therefore, we need to map tinyint(1) to tinyint instead of boolean. This change does not affect the correctness of `where k = 1` or `where k = true` queries.
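A hedged sketch of one way to get this mapping with MySQL Connector/J (the URL and credentials are placeholders, and whether the fix uses exactly this flag is an assumption): with `tinyInt1isBit=false` in the JDBC URL, the driver reports tinyint(1) columns as TINYINT rather than BIT/BOOLEAN, so values outside 0/1 come through intact.
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

final class TinyIntMappingDemo {
    public static void main(String[] args) throws Exception {
        // tinyInt1isBit=false: tinyint(1) is treated as an ordinary TINYINT.
        String url = "jdbc:mysql://127.0.0.1:3306/db?tinyInt1isBit=false";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select k from t where k = 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // a value of 2 stays 2, no error
            }
        }
    }
}
```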