1. Add a feature that supports statements with an aggregate function in the ORDER BY list, such as:
SELECT COUNT(*) FROM t GROUP BY c1 ORDER BY COUNT(*) DESC;
2. Add ClickBench analyze unit tests.
This PR does three things:
1. Modified the framework of table-valued functions (TVF).
2. Supported the `fetch_table_schema` RPC on BE.
3. Implemented the `S3(path, AK, SK, format)` table-valued function.
[What is DLF](https://www.alibabacloud.com/product/datalake-formation)
This PR is a preparation for supporting DLF, with some changes to the multi-catalog feature:
1. Add RuntimeException for most Hive Metastore and ES client access operations.
2. Add DLF-related dependencies.
3. Move the checks of ES catalog properties to the analysis phase of creating an ES catalog.
TODO(in next PR):
1. Refactor the `getSplit` method to support not only HDFS but also S3-compatible object storage.
2. Finish the implementation of DLF support.
## Design
### Trigger
Every time a rowset writer produces more than N (e.g. 10) segments, we trigger segment compaction. Note that at most one segment compaction job runs for a single rowset at a time, so jobs cannot recurse or pile up in the queue.
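A minimal sketch of the trigger logic, assuming a simplified stand-in for the rowset writer; all names here (`kTriggerThreshold`, `_compacting`, etc.) are illustrative, not the actual BE API:

```cpp
#include <atomic>
#include <cstdint>

class RowsetWriterSketch {
public:
    // Called by the (single) writer thread after each segment flush.
    void on_segment_flushed() {
        ++_num_uncompacted_segments;
        bool expected = false;
        // Trigger only when enough segments have piled up AND no other
        // compaction job for this rowset is in flight (one job per rowset).
        if (_num_uncompacted_segments > kTriggerThreshold &&
            _compacting.compare_exchange_strong(expected, true)) {
            _num_uncompacted_segments = 0;
            submit_segcompaction_task();  // hand the group off to the pool
        }
    }

    // Called by the worker thread once its compaction job finishes.
    void on_segcompaction_done() { _compacting.store(false); }

private:
    void submit_segcompaction_task() { /* enqueue to the worker thread pool */ }

    static constexpr uint32_t kTriggerThreshold = 10;  // N in the text
    uint32_t _num_uncompacted_segments = 0;            // touched by writer thread only
    std::atomic<bool> _compacting{false};              // at most one job per rowset
};
```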
### Target Selection
We collect segments on every trigger. We skip big segments whose row count is greater than M (e.g. 10000), because compacting them yields little benefit relative to the effort. Hence, we only pick the "longest consecutive small" segment group for actual compaction.
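A sketch of that selection, assuming each candidate segment exposes its row count; the `Segment` struct is a stand-in for the real segment metadata:

```cpp
#include <cstddef>
#include <vector>

struct Segment {
    int id;
    size_t num_rows;
};

// Returns the longest run of consecutive segments whose row count is <= max_rows
// (M in the text). A big segment breaks the current run, since compacting it
// would buy little for the effort spent.
std::vector<Segment> pick_longest_small_run(const std::vector<Segment>& segments,
                                            size_t max_rows) {
    size_t best_begin = 0, best_len = 0;
    size_t begin = 0, len = 0;
    for (size_t i = 0; i < segments.size(); ++i) {
        if (segments[i].num_rows <= max_rows) {
            if (len == 0) begin = i;  // a new run starts here
            ++len;
            if (len > best_len) { best_len = len; best_begin = begin; }
        } else {
            len = 0;  // big segment: the run ends
        }
    }
    return {segments.begin() + best_begin, segments.begin() + best_begin + best_len};
}
```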
### Compaction Process
A new thread pool is introduced to do the job. We submit the above-mentioned "longest consecutive small" segment group to the pool, and the worker thread then does the following (see the sketch after this list):
- build a MergeIterator from the target segments
- create a new segment writer
- for each block read from the MergeIterator, append it via the new writer
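A self-contained sketch of that worker loop. `Block`, `MergeIterator`, and `SegmentWriter` below are simplified stand-ins, not Doris' actual BE types; a real MergeIterator performs a k-way merge over the sorted input segments, while this stand-in simply replays pre-merged blocks:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Block { std::vector<int> rows; };  // stand-in for a column block

class MergeIterator {  // stand-in interface for the real merge iterator
public:
    explicit MergeIterator(std::vector<Block> inputs) : _inputs(std::move(inputs)) {}
    // Returns false when all input blocks are exhausted.
    bool next_block(Block* out) {
        if (_pos >= _inputs.size()) return false;
        *out = _inputs[_pos++];
        return true;
    }
private:
    std::vector<Block> _inputs;
    size_t _pos = 0;
};

class SegmentWriter {  // stand-in for the new segment writer
public:
    void append_block(const Block& b) { _staged.push_back(b); }
    void finalize() { /* flush data, footer, indexes, etc. */ }
private:
    std::vector<Block> _staged;
};

// The worker thread's job: pull every block from the merge iterator and
// append it to the freshly created segment writer.
void compact_segment_group(MergeIterator* merge_iter, SegmentWriter* writer) {
    Block block;
    while (merge_iter->next_block(&block)) {
        writer->append_block(block);
    }
    writer->finalize();
}
```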
### SegID handling
SegID must remain consecutive after segment compaction.
If a rowset has small segments named seg_0, seg_1, seg_2, seg_3 and a big segment seg_4, we do the following (see the sketch after this list):
- we create a segment named "seg_0-3" to save compacted data for seg_0, seg_1, seg_2 and seg_3
- delete seg_0, seg_1, seg_2 and seg_3
- rename seg_0-3 to seg_0
- rename seg_4 to seg_1
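A sketch of that renumbering, assuming segment files named `seg_<id>` in a single directory; the file layout and function are illustrative only:

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// `num_compacted` small segments [0, num_compacted) were merged into the file
// "seg_0-<num_compacted-1>"; `num_total` is the segment count before compaction.
void renumber_segments(const fs::path& dir, int num_compacted, int num_total) {
    // 1. Delete the originals that were merged away.
    for (int i = 0; i < num_compacted; ++i) {
        fs::remove(dir / ("seg_" + std::to_string(i)));
    }
    // 2. The compacted output becomes the new seg_0.
    fs::rename(dir / ("seg_0-" + std::to_string(num_compacted - 1)),
               dir / "seg_0");
    // 3. Shift the remaining big segments down so IDs stay consecutive:
    //    seg_<num_compacted> -> seg_1, seg_<num_compacted+1> -> seg_2, ...
    for (int i = num_compacted; i < num_total; ++i) {
        fs::rename(dir / ("seg_" + std::to_string(i)),
                   dir / ("seg_" + std::to_string(i - num_compacted + 1)));
    }
}
```

With the example above, `renumber_segments(dir, 4, 5)` removes seg_0 through seg_3, renames seg_0-3 to seg_0, and renames seg_4 to seg_1.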
It is worth noting that we must wait for in-flight segment compaction tasks to finish before building the rowset meta and committing the transaction.
Persist external catalogs/databases/tables, including the columns of external tables.
After this change, external objects can have their own unique IDs throughout their lifetime,
which is required for statistics collection.
Add a rule that checks the permissions of the user executing a query: forbid users who do not have SELECT_PRIV on a table from executing queries against that table.