A failure in the FE may have caused the check to fail, and in that case the corresponding query ID may not be retrievable from be.out.
* [fix](compaction) Fix single compaction to get all local versions (#33849)
add test and comment
* remove single replica compaction prepare input rowsets
revised
* [enhancement](Nereids) Enable parse sql from sql cache (#33262)
Before this PR, every query had to pass through the parser, analyzer, rewriter, optimizer, and translator before we could check whether it could use the SQL cache; if the query is very long, or joins many tables, the plan time is usually >= 500ms.
This PR removes that cost by skipping the normal plan path, because we can reuse the previous physical plan and query result when nothing has changed. In some cases we must not serve the SQL from the cache at parse time, e.g. when the table structure changed, data changed, user policies changed, privileges changed, user variables changed, or the query contains non-deterministic functions.
In my test case, querying a view with many joins and unions over tables with empty partitions, the query latency is about 3ms; without parsing the SQL from the SQL cache, the plan time alone is about 550ms.
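As a hedged illustration of one of those exclusions (a hypothetical session; `now()` stands in for any non-deterministic function):
```sql
set enable_sql_cache=true;

-- Not eligible for the sql cache: now() is non-deterministic,
-- so each execution may produce a different result.
select now();
```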
## Features
1. Use `Config.sql_cache_manage_num` to control how many SQL cache entries one FE manages for reuse.
2. If the explain output contains a plan with `LogicalSqlCache` or `PhysicalSqlCache`, the query can use the SQL cache, like this:
```sql
mysql> set enable_sql_cache=true;
Query OK, 0 rows affected (0.00 sec)
mysql> explain physical plan select * from test.t;
+----------------------------------------------------------------------------------+
| Explain String(Nereids Planner) |
+----------------------------------------------------------------------------------+
| cost = 3.135 |
| PhysicalResultSink[53] ( outputExprs=[c1#0, c2#1] ) |
| +--PhysicalDistribute[50]@0 ( stats=3, distributionSpec=DistributionSpecGather ) |
|    +--PhysicalOlapScan[t]@0 ( stats=3 ) |
+----------------------------------------------------------------------------------+
4 rows in set (0.02 sec)
mysql> select * from test.t;
+------+------+
| c1 | c2 |
+------+------+
| 1 | 2 |
| -2 | -2 |
| NULL | 30 |
+------+------+
3 rows in set (0.05 sec)
mysql> explain physical plan select * from test.t;
+-------------------------------------------------------------------------------------------+
| Explain String(Nereids Planner) |
+-------------------------------------------------------------------------------------------+
| cost = 0.0 |
| PhysicalSqlCache[2] ( queryId=78511f515cda466b-95385d892d6c68d0, backend=127.0.0.1:9050 ) |
| +--PhysicalResultSink[52] ( outputExprs=[c1#0, c2#1] ) |
|    +--PhysicalDistribute[49]@0 ( stats=3, distributionSpec=DistributionSpecGather ) |
|       +--PhysicalOlapScan[t]@0 ( stats=3 ) |
+-------------------------------------------------------------------------------------------+
5 rows in set (0.01 sec)
```
(cherry picked from commit 03bd2a337d4a56ea9c91673b3bd4ae518ed10f20)
* fix
* [fix](Nereids) fix some sql cache consistency bugs between multiple frontends (#33722)
Fix some SQL cache consistency bugs between multiple frontends that were introduced by [enhancement](Nereids) Enable parse sql from sql cache #33262; the fix makes the row policy part of the SQL cache key.
Also support dynamically updating the number of SQL cache keys each FE manages.
(cherry picked from commit 90abd76f71e73702e49794d375ace4f27f834a30)
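Since the limit is now dynamic, it should be adjustable at runtime through the standard FE config statement; a minimal sketch (the value 300 is only illustrative):
```sql
-- Change how many sql cache entries this FE manages,
-- without restarting the frontend.
ADMIN SET FRONTEND CONFIG ("sql_cache_manage_num" = "300");
```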
* [fix](Nereids) fix bug of dry run query with sql cache (#33799)
1. A dry-run query should not use the SQL cache (see the sketch below).
2. Fix the SQL cache test in cloud mode.
3. Enable caching of OneRowRelation and EmptyRelation in the frontend, so SQL parsing can be skipped for them as well.
(cherry picked from commit dc80ecf7f33da7b8c04832dee88abd09f7db9ffe)
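As a hedged way to exercise item 1 above (assuming the `dry_run_query` session variable, which returns only a row count instead of the result set):
```sql
set enable_sql_cache=true;
set dry_run_query=true;

-- A dry run returns only the number of rows, so its result
-- must not be served from, or stored into, the sql cache.
select * from test.t;
```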
* remove cloud mode
* remove @NotNull
* [Fix](Variant Type) forbid distribution info from containing variant columns (#33707)
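A hedged illustration of DDL that this fix should now reject (`var_tbl` and its columns are hypothetical):
```sql
-- Expected to fail after this fix: variant columns may not
-- appear in the distribution (bucketing) columns.
CREATE TABLE var_tbl (
    id INT,
    v  VARIANT
)
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(v) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```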
* [Fix](Variant) VariantRootColumnIterator::read_by_rowids with wrong null map size (#33734)
`insert_range_from` should start at offset `size` and copy `count` elements for the null map.
* [Fix](Variant) check column index validation for extracted columns (#33766)
1. Rename `list` to `globList`. The path passed to this `list` must contain a wildcard, and the corresponding HDFS interface is `globStatus`, so the new name is `globList`.
2. To list files purely by path, use the `listFiles` operation instead.
3. Merge the `listLocatedFiles` function into the `listFiles` function.