* [enhancement](Nereids) Enable parse sql from sql cache (#33262)
Before this PR, a query had to pass through the parser, analyzer, rewriter, optimizer, and translator before we could check whether it could use the sql cache. If the query text is very long, or the query joins many tables, the plan time is usually >= 500ms.
This PR reduces that time by skipping the normal plan path, because we can reuse the previous physical plan and query result if nothing has changed. In some cases we must not parse the SQL from the sql cache, e.g. when the table structure changed, data changed, user policies changed, privileges changed, user variables changed, or the query contains non-deterministic functions.
In my test case, querying a view with many joins and unions over tables with empty partitions, the query latency is about 3ms; without parsing the SQL from the sql cache, the plan time alone is about 550ms.
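As a hedged illustration of the exclusion rules above (reusing the `test.t` table from the Features example below), a query that contains a non-deterministic function should never be answered from the sql cache, while a plain deterministic scan is eligible:

```sql
set enable_sql_cache=true;

-- contains a non-deterministic function: must be re-planned and re-executed every time
select c1, now() from test.t;

-- deterministic scan: eligible for the sql cache once its result has been cached
select c1, c2 from test.t;
```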
## Features
1. use Config.sql_cache_manage_num to control how many sql cache entries can be managed and reused on one FE
2. if the explain output contains `LogicalSqlCache` or `PhysicalSqlCache`, it means the query can use the sql cache, like this:
```sql
mysql> set enable_sql_cache=true;
Query OK, 0 rows affected (0.00 sec)
mysql> explain physical plan select * from test.t;
+----------------------------------------------------------------------------------+
| Explain String(Nereids Planner) |
+----------------------------------------------------------------------------------+
| cost = 3.135 |
| PhysicalResultSink[53] ( outputExprs=[c1#0, c2#1] ) |
| +--PhysicalDistribute[50]@0 ( stats=3, distributionSpec=DistributionSpecGather ) |
|    +--PhysicalOlapScan[t]@0 ( stats=3 ) |
+----------------------------------------------------------------------------------+
4 rows in set (0.02 sec)
mysql> select * from test.t;
+------+------+
| c1 | c2 |
+------+------+
| 1 | 2 |
| -2 | -2 |
| NULL | 30 |
+------+------+
3 rows in set (0.05 sec)
mysql> explain physical plan select * from test.t;
+-------------------------------------------------------------------------------------------+
| Explain String(Nereids Planner) |
+-------------------------------------------------------------------------------------------+
| cost = 0.0 |
| PhysicalSqlCache[2] ( queryId=78511f515cda466b-95385d892d6c68d0, backend=127.0.0.1:9050 ) |
| +--PhysicalResultSink[52] ( outputExprs=[c1#0, c2#1] ) |
|    +--PhysicalDistribute[49]@0 ( stats=3, distributionSpec=DistributionSpecGather ) |
|       +--PhysicalOlapScan[t]@0 ( stats=3 ) |
+-------------------------------------------------------------------------------------------+
5 rows in set (0.01 sec)
```
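A follow-up sketch of the invalidation behavior (assuming the same `test.t` table as above): once the data changes, the cached entry must not be reused, so the next explain falls back to the normal plan path instead of showing `PhysicalSqlCache`:

```sql
-- any data change invalidates the cached entry for queries over test.t
insert into test.t values (5, 6);

-- this explain should no longer show PhysicalSqlCache; the query is planned
-- through the normal path, and a fresh result can be cached again afterwards
explain physical plan select * from test.t;
```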
(cherry picked from commit 03bd2a337d4a56ea9c91673b3bd4ae518ed10f20)
* fix
* [fix](Nereids) fix some sql cache consistence bug between multiple frontends (#33722)
Fix some sql cache consistency bugs between multiple frontends introduced by [enhancement](Nereids) Enable parse sql from sql cache #33262; fixed by using the row policy as part of the sql cache key.
Support dynamically updating the number of sql cache keys managed by one FE, as sketched below.
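A minimal sketch of the dynamic update, assuming `sql_cache_manage_num` is exposed as a mutable FE config item (the value 100 is only an example):

```sql
-- change how many sql cache keys this FE manages, without restarting the FE
ADMIN SET FRONTEND CONFIG ("sql_cache_manage_num" = "100");

-- check the current value
ADMIN SHOW FRONTEND CONFIG LIKE "sql_cache_manage_num";
```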
(cherry picked from commit 90abd76f71e73702e49794d375ace4f27f834a30)
* [fix](Nereids) fix bug of dry run query with sql cache (#33799)
1. a dry run query should not use the sql cache (see the sketch after this list)
2. fix the sql cache test in cloud mode
3. cache OneRowRelation and EmptyRelation plans in the frontend to skip parsing the SQL
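A hedged sketch of the first point, assuming the `dry_run_query` session variable is what triggers dry-run execution:

```sql
set enable_sql_cache=true;

-- a dry run only reports how many rows the query would return;
-- its result must not be served from, or written into, the sql cache
set dry_run_query=true;
select * from test.t;
set dry_run_query=false;
```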
(cherry picked from commit dc80ecf7f33da7b8c04832dee88abd09f7db9ffe)
* remove cloud mode
* remove @NotNull
Currently in regression-test, when a BE crashes, the suite thread gets stuck because curl is invoked without a timeout.
To solve this, encapsulate the calls to the BE in a function that sets the timeout uniformly, so the suite no longer gets stuck.
This PR mainly supplements the statistics regression tests, including the following:
analyze stats p0 tests:
1. Universal analysis
analyze stats p1 tests:
1. Universal analysis
2. Sampled analysis
3. Incremental analysis
4. Automatic analysis
5. Periodic analysis
manage stats p0 tests:
1. Alter table stats
2. Show table stats
3. Alter column stats
4. Show column stats and histogram
5. Drop column stats
6. Drop expired stats
TODO:
1. Supplement related documents
2. Optimize for unstable cases encountered during testing
3. Add other cases
PRs related to statistics should ensure that all of these cases pass! (A sketch of the statements these cases exercise follows below.)
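The listed cases map onto Doris's statistics management statements; a minimal sketch, assuming a table `db.tbl` with a column `c1` (statement variants may differ slightly across versions):

```sql
-- universal analysis of a whole table
ANALYZE TABLE db.tbl;

-- sampled analysis
ANALYZE TABLE db.tbl WITH SAMPLE PERCENT 10;

-- show table-level and column-level stats
SHOW TABLE STATS db.tbl;
SHOW COLUMN STATS db.tbl;

-- manually alter column stats
ALTER TABLE db.tbl MODIFY COLUMN c1 SET STATS ('row_count'='1000', 'ndv'='1000');

-- drop column stats
DROP STATS db.tbl;
```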
Add p0 test cases, including:
aggregate
join
union
order by
group by
keyword
arithmetic operators
logical operators
case function
coalesce
between
in
like
limit
where
regexp
window function
runtime filter
schema change
Support registering suite plugins to add third-party functions.
See:
1. registration example: ${DORIS_HOME}/regression-test/plugins/plugin_example.groovy
2. usage: ${DORIS_HOME}/regression-test/suites/demo/test_plugin.groovy
3. doc: ${DORIS_HOME}/docs/zh-CN/developer-guide/regression-testing.md