Consider the following SQL:
```
SELECT *
FROM sub_query_correlated_subquery1 t1
WHERE coalesce(
          bitand(
              cast((SELECT sum(k1) FROM sub_query_correlated_subquery3) AS int),
              cast(t1.k1 AS int)),
          coalesce(t1.k1, t1.k2)) IS NULL
ORDER BY t1.k1, t1.k2;
```
The `IS NULL` conjunct is lost in the SubqueryToApply rule. This PR fixes it.
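A minimal Java sketch of the intended behavior, using a hypothetical, simplified expression model (not the actual Nereids types): when the rule extracts the scalar subquery into an Apply node, the enclosing conjunct must be rewritten to reference the Apply's output slot and kept in the Filter, rather than being dropped.
```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical expression classes; the real rule works on Nereids expressions.
sealed interface Expr permits SubqueryExpr, SlotRef, IsNull, Call {}
record SubqueryExpr(String sql) implements Expr {}
record SlotRef(String name) implements Expr {}
record IsNull(Expr child) implements Expr {}
record Call(String fn, List<Expr> args) implements Expr {}

public class SubqueryToApplySketch {
    // Rewrite the conjunct instead of dropping it: every subquery occurrence
    // becomes a reference to the Apply node's output slot.
    static Expr replaceSubquery(Expr e, SlotRef applyOutput) {
        return switch (e) {
            case SubqueryExpr s -> applyOutput;
            case IsNull n -> new IsNull(replaceSubquery(n.child(), applyOutput));
            case Call c -> new Call(c.fn(), c.args().stream()
                    .map(a -> replaceSubquery(a, applyOutput))
                    .collect(Collectors.toList()));
            case SlotRef s -> s;
        };
    }

    public static void main(String[] args) {
        Expr pred = new IsNull(new Call("coalesce", List.of(
                new Call("bitand", List.of(
                        new SubqueryExpr("SELECT sum(k1) FROM sub_query_correlated_subquery3"),
                        new SlotRef("t1.k1"))),
                new Call("coalesce", List.of(new SlotRef("t1.k1"), new SlotRef("t1.k2"))))));
        // The rewritten predicate, including IS NULL, must remain in the
        // Filter above the new Apply node.
        System.out.println(replaceSubquery(pred, new SlotRef("$apply_output")));
    }
}
```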
After auto-retry was merged, the number of times the doExecute method runs cannot be determined at compile time, and if the expected invocation count in the expectation block is missed, an unexpected-invocation exception is thrown. So just remove the expected execution count.
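A sketch of the change, assuming a JMockit-style expectation block (the mocked class here is a placeholder, not the actual test code): recording the call without a `times` constraint lets auto-retry invoke it any number of times.
```java
import mockit.Expectations;
import mockit.Mocked;

public class AutoRetryTestSketch {
    // Placeholder for the class whose doExecute is mocked in the real test.
    static class Executor {
        void doExecute() {}
    }

    @Mocked Executor executor;

    void recordExpectations() {
        new Expectations() {{
            executor.doExecute();
            // The fixed invocation count (e.g. `times = 2;`) is removed: with
            // auto-retry, doExecute may run any number of times, and a missed
            // exact count would throw an unexpected-invocation exception.
        }};
    }
}
```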
select c_name from customer union select c_name from customer
This SQL uses an agg node to get the distinct rows of c_name, so there is no need to wait until all data has been inserted into the hash map; a row can be output as soon as it has been inserted into the hash map successfully.
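An illustrative sketch of the streaming idea in Java (the actual operator is C++ in the Doris BE): each key is emitted the moment it is first inserted into the hash set, instead of buffering output until the whole input has been consumed.
```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

class StreamingDistinct {
    private final Set<String> seen = new HashSet<>();

    // Emit c_name the first time it is inserted successfully; duplicates
    // from either side of the UNION are dropped without blocking the stream.
    void consume(String cName, Consumer<String> downstream) {
        if (seen.add(cName)) {  // add() returns true only on first insertion
            downstream.accept(cName);
        }
    }
}
```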
Fix two bugs:
1. Unexpected null values in an array column. If 65535 consecutive values in a nullable array column are not null, this error is triggered, because the array parser did not handle the boundary condition (see the sketch after this list).
2. The number of rows of the key field and that of the value field in a map column are not equal. Similarly, the numbers of rows among the fields of a struct column are not the same. This is triggered when the numbers of rows are not equal among the parquet pages of different columns in a row group.
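An illustrative Java sketch of the first bug's boundary condition (the real parser is C++ in the Doris BE, and the exact mechanism may differ): if runs of non-null values are processed in chunks capped at 65535, the loop must keep consuming the remainder when a run hits exactly the cap, rather than treating a full chunk as the end of the run.
```java
import java.util.ArrayList;
import java.util.List;

class NullRunSplitter {
    static final int MAX_CHUNK = 65535;

    // Split a run of consecutive non-null values into chunks of at most
    // MAX_CHUNK rows. A run of exactly 65535 values must still terminate
    // cleanly and must not bleed into the following null run.
    static List<Integer> split(int runLength) {
        List<Integer> chunks = new ArrayList<>();
        int remaining = runLength;
        while (remaining > 0) {
            int chunk = Math.min(remaining, MAX_CHUNK);
            chunks.add(chunk);
            remaining -= chunk;
        }
        return chunks;
    }
}
```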
1. Do not use recursive parsing, to avoid stack overflow.
2. Create a balanced tree instead of a left-deep tree (see the sketch after this list).
TODO: add expr_depth_limit to Nereids' parser
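A Java sketch of the second point (class names are illustrative, not the Nereids ones): combining N conjuncts by splitting the list in half yields a tree of depth O(log N), whereas folding left produces an O(N)-deep chain that can overflow the stack in later recursive traversals.
```java
import java.util.List;

class BalancedConjunction {
    record And(Object left, Object right) {}

    static Object combine(List<?> conjuncts) {
        if (conjuncts.isEmpty()) {
            throw new IllegalArgumentException("need at least one conjunct");
        }
        return combine(conjuncts, 0, conjuncts.size());
    }

    // Divide and conquer: recursion depth is O(log N), and the resulting
    // AND tree is balanced rather than left-deep.
    private static Object combine(List<?> c, int from, int to) {
        if (to - from == 1) {
            return c.get(from);
        }
        int mid = (from + to) >>> 1;
        return new And(combine(c, from, mid), combine(c, mid, to));
    }
}
```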
### Issue
When a table has null partitions, it throws the error
`Failed to fill partition column: t_int=null`
### Resolution
- Fix the null partition error in iceberg tables by replacing a null partition value with '\N' (see the sketch after this list).
- Add regression test for hive null partition.
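A minimal Java sketch of the substitution (the method name is illustrative, not the actual Doris code): a null partition value is rendered as the `\N` null marker instead of the literal string "null", so filling the partition column no longer fails.
```java
class PartitionValues {
    // "\N" is the conventional text marker for NULL; passing "null" makes
    // the value look like a real string and breaks partition-column filling.
    static String toText(Object partitionValue) {
        return partitionValue == null ? "\\N" : partitionValue.toString();
    }
}
```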
The file system cache key should contain `scheme://authority`, e.g. `hdfs://nameservices1`.
Otherwise it will encounter an error like:
```
Wrong FS: hdfs//abc/xxxx, expected: hdfs://def
```
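A Java sketch of the key derivation (the helper name is illustrative): taking scheme and authority from the path URI keeps FileSystem instances for different clusters in separate cache entries.
```java
import java.net.URI;

class FileSystemCacheKey {
    // For "hdfs://nameservices1/user/a/b" this returns "hdfs://nameservices1",
    // so paths on another cluster ("hdfs://def/...") get a different key.
    static String of(String path) {
        URI uri = URI.create(path);
        return uri.getScheme() + "://" + uri.getAuthority();
    }
}
```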
Configs:
1. Because vertical compaction is enabled by default and consumes less memory, we can enlarge the default values of compaction-related configs.
2. Enlarge the default value of the lock-related shard size (see the sketch after this list).
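A Java sketch of why the shard size matters for locking (a generic sharded map, not the Doris implementation): a larger shard count spreads entries across more locks, so fewer threads contend on the same one.
```java
import java.util.HashMap;
import java.util.Map;

class ShardedMap<K, V> {
    private final Map<K, V>[] shards;

    @SuppressWarnings("unchecked")
    ShardedMap(int shardSize) {  // larger shardSize -> less contention per lock
        shards = new Map[shardSize];
        for (int i = 0; i < shardSize; i++) {
            shards[i] = new HashMap<>();
        }
    }

    private Map<K, V> shardFor(K key) {
        return shards[Math.floorMod(key.hashCode(), shards.length)];
    }

    void put(K key, V value) {
        Map<K, V> shard = shardFor(key);
        synchronized (shard) {  // only threads hashing to this shard block here
            shard.put(key, value);
        }
    }

    V get(K key) {
        Map<K, V> shard = shardFor(key);
        synchronized (shard) {
            return shard.get(key);
        }
    }
}
```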
Improve external table statistics collection, including logging and observability, and fix some bugs.
1. Add a Running state for statistics jobs.
2. Add progress to `show analyze job` (n/m tasks finished, n/m tasks failed, and so on).
3. Add the time cost of analyzing to `show analyze task`.
4. Make task failure messages clearer.
5. Synchronize the job status updating code in updateTaskStatus (see the sketch after this list).
6. Fix an NPE in HMSAnalyzeTask (avoid refreshing the statistics cache if the collection SQL failed).
7. Return an error message when a `with sync` collection times out.
8. Log level improvements.
9. Fix misuse of logCreateAnalysisJob for tasks.
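A Java sketch of item 5 (field and method names are illustrative, not the actual Doris classes): making the status update atomic prevents concurrent task callbacks from racing on the finished/failed counters and the final job-state transition.
```java
class AnalysisJobState {
    private final int totalTasks;
    private int finished;
    private int failed;

    AnalysisJobState(int totalTasks) {
        this.totalTasks = totalTasks;
    }

    // Serialized so two tasks finishing at once cannot both observe a stale
    // count and, e.g., both trigger (or both skip) the final state change.
    synchronized void updateTaskStatus(boolean success) {
        if (success) {
            finished++;
        } else {
            failed++;
        }
        if (finished + failed == totalTasks) {
            // transition job RUNNING -> FINISHED (or FAILED) exactly once
        }
    }
}
```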
The current DPhyp join reorder does not consider join conjuncts that reference only one side of the children, which is a common case for outer join conjuncts. So we need to disable outer join reorder in DPhyp until this problem is addressed.
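For example, in `t1 LEFT JOIN t2 ON t1.id = t2.id AND t2.flag = 1`, the conjunct `t2.flag = 1` references only `t2`. A Java sketch of the relevant check (helper and types are illustrative, not the actual Nereids code): a conjunct is a proper connecting edge in the DPhyp hypergraph only if it references slots from both children.
```java
import java.util.Set;

class JoinEdges {
    // A conjunct referencing only one side (e.g. `t2.flag = 1` above) is not
    // a connecting hyperedge and needs special handling during reordering.
    static boolean referencesBothSides(Set<String> conjunctSlots,
                                       Set<String> leftSlots,
                                       Set<String> rightSlots) {
        boolean usesLeft = conjunctSlots.stream().anyMatch(leftSlots::contains);
        boolean usesRight = conjunctSlots.stream().anyMatch(rightSlots::contains);
        return usesLeft && usesRight;
    }
}
```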