* [enhancement](test) unique model: test modifying a key column type from SMALLINT to other types
* [enhancement](test) unique model: test modifying a value column type from SMALLINT to other types
- fix syntax problems when only one table is used in leading, or when braces are misused
example: leading(t1), leading(t1 {t2})
- fix the case where a CTE is used in a subquery that also carries a leading hint
example: with cte as (select c1 from t1) select count(*) from t1 join (select /*+ leading(cte t2) */ c2 from t2 join cte on c2 = cte.c1) as alias on t1.c1 = alias.c2;
here the CTE is referenced inside a subquery, and that subquery also has its own leading hint
- fix data mismatch with the original plan caused by pushing an ON predicate down to the nullable side
example: select count(*) from t1 left join t2 on c1 > 500 and c2 > 500 cannot be rewritten as select count(*) from t1 left join t2 on c2 > 500 where c1 > 500
In #32112, handling of empty sets (the empty expression-list case) in grouping sets was addressed. However, multiple empty sets in one grouping sets clause (e.g. group by grouping sets ((c1), (), ())) have different grouping IDs.
Fix a problem where, if FE is upgraded from an older version, an error like the following occurs:
```
MySQL [test]> show full tables;
ERROR 1105 (HY000): NullPointerException, msg: java.lang.NullPointerException: Cannot invoke "org.apache.doris.thrift.TInvertedIndexStorageFormat.toString()" because the return value of "org.apache.doris.catalog.OlapTable.getInvertedIndexStorageFormat()" is null
```
The file meta cache on BE is used to cache the metadata of external tables' files, such as parquet footers.
This cache is limited by entry count, not by memory consumption.
So if a cached object is big (e.g. a large parquet footer), the total memory consumption of this cache
can become large and cause OOM.
This PR mainly changes:
1. Add a new method `exceed_prune_limit()` to `CachePolicy`
For `ObjLRUCache`, it always returns true so that every minor or full GC on BE will prune the cache (see the sketch after this list).
2. Reduce the default capacity of the file meta cache from 20000 to 1000
Also reduce the default capacity of the hdfs file handle cache from 20000 to 1000
3. Change the judgement of whether to enable the file meta cache for a query
If the number of files to be read is larger than 1/3 of the file meta cache's capacity, the file meta cache
will be disabled for this query, because the cache is useless when there are too many files.
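A minimal sketch of these two ideas, assuming simplified names and signatures (`exceed_prune_limit`, `CachePolicy` and `ObjLRUCache` come from the description above; the helper function, parameter names, and default values shown here are illustrative, not the actual Doris code):

```cpp
#include <cstdint>

// Hypothetical, simplified sketch of the cache policy change.
class CachePolicy {
public:
    virtual ~CachePolicy() = default;

    // New method: returns true if this cache should be pruned in the
    // current (minor or full) GC round regardless of memory pressure.
    virtual bool exceed_prune_limit() { return false; }
};

class ObjLRUCache : public CachePolicy {
public:
    // The file meta cache is limited by entry count, not by memory size,
    // so its real memory footprint is unknown. Always report "exceeded"
    // so every minor or full GC prunes it.
    bool exceed_prune_limit() override { return true; }
};

// Hypothetical query-time judgement: disable the file meta cache for a
// query that reads too many files, since most entries would be evicted
// before they could be reused. The capacity corresponds to the new
// default of 1000 described above.
bool should_use_file_meta_cache(int64_t num_files_to_read,
                                int64_t file_meta_cache_capacity /* = 1000 */) {
    return num_files_to_read <= file_meta_cache_capacity / 3;
}
```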
Previously, the counters in `profile` might be updated when closing the file reader,
and the file reader might be closed when the object is destructed.
But at that point the `profile` object may already have been deleted, causing an NPE and a BE crash.
This PR fixes the issue as follows:
1. Remove the "profile counter update" logic from all `close()` methods.
2. Add a new interface `ProfileCollector`
It has 2 methods (see the sketch after this list):
- `collect_profile_at_runtime()`
It can be called at runtime, e.g. in every `get_next_block()` call,
so that the counters in the profile can be updated at runtime.
- `collect_profile_before_close()`
Should be called before `close()` is called on the object, and it will only be called once.
3. Derive from `ProfileCollector`
All classes that may update profile counters in their `close()` method should extend
`ProfileCollector` (such as `GenericReader`, etc.) and implement `collect_profile_before_close()`.
`collect_profile_before_close()` will be called in `scanner->mark_to_need_to_close()`.
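A minimal sketch of what such an interface might look like. Only the two method names come from the description above; the guard flag, the protected hook, and the `RuntimeProfile` forward declaration are assumptions for illustration and may not match the actual Doris implementation:

```cpp
// Hypothetical sketch of the ProfileCollector interface described above.
class RuntimeProfile; // counters live in this object; opaque here

class ProfileCollector {
public:
    virtual ~ProfileCollector() = default;

    // May be called many times while the operator is running,
    // e.g. once per get_next_block() call.
    virtual void collect_profile_at_runtime() = 0;

    // Must be called before close() is called on the object.
    // Guarded so it only takes effect once.
    void collect_profile_before_close() {
        if (_profile_collected) {
            return;
        }
        _profile_collected = true;
        _collect_profile_before_close();
    }

protected:
    // Subclasses (e.g. a file reader) flush their remaining counters here
    // instead of in close(), where the profile may already be deleted.
    virtual void _collect_profile_before_close() = 0;

private:
    bool _profile_collected = false;
};
```

With this split, the scanner can trigger the final counter flush while the profile is still alive, and `close()` no longer needs to touch the profile at all.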