1. create aggregation table
2. insert some data
3. drop the table and create it again
4. modify some parameters for some branch
5. insert some data
6. change the parameters back to their defaults
An error occurs when using routine load:
```
[INTERNAL_ERROR]Message at offset XXX might be too large to fetch, try increasing receive.message.max.bytes
```
Referring to https://github.com/confluentinc/librdkafka/issues/2993, we should upgrade the librdkafka version to avoid this bug.
For a left outer join or full outer join, when the build side is empty, null data is output for the build side; however, the nested column data of the nullable column is not properly initialized, which may cause a decimal arithmetic overflow.
Thanks to PR #21855 for providing a wonderful reference.
It may be very difficult and costly to implement a comprehensive logical plan adapter; often there are only small syntax variations between Doris and other engines (such as Hive/Spark), so we can just focus on those differences here.
This PR mainly focuses on the syntax differences between Doris and Spark SQL, for instance performing some function transformations and overriding some syntax validations.
- add a dialect named `spark_sql`
- move method `NereidsParser#parseSQLWithDialect` to `TrinoParser`
- extract the `FnCallTransformer`/`FnCallTransformers` classes, so the function-transformer logic can be reused
- allow derived tables without an alias when the dialect is set to `spark_sql` (both the legacy and Nereids parsers are supported)
- add some function transformers for hive/spark built-in functions
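The function-transformer idea above can be sketched as a simple name mapping. This is a minimal, hypothetical illustration, not the actual `FnCallTransformer` API (which also rewrites argument lists, since e.g. Spark's `split` takes a regex while Doris's `split_by_string` takes a literal separator):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of the dialect function-transformer idea: map a
// Spark SQL built-in function name to its Doris equivalent. The class name
// and mapping table here are simplified, hypothetical stand-ins for the
// real FnCallTransformer machinery in the Doris FE.
public class SparkFnTransformer {
    private static final Map<String, String> FN_MAP = new HashMap<>();

    static {
        FN_MAP.put("get_json_object", "json_extract");
        FN_MAP.put("split", "split_by_string");
    }

    /** Returns the Doris function name, or the input unchanged if unmapped. */
    public static String transform(String sparkFnName) {
        return FN_MAP.getOrDefault(sparkFnName.toLowerCase(), sparkFnName);
    }
}
```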
### Test cases (from our online Doris cluster)
- Test derived table without alias
```sql
MySQL [(none)]> show variables like '%dialect%';
+---------------+-----------+---------------+---------+
| Variable_name | Value     | Default_Value | Changed |
+---------------+-----------+---------------+---------+
| sql_dialect   | spark_sql | doris         | 1       |
+---------------+-----------+---------------+---------+
1 row in set (0.01 sec)
MySQL [(none)]> select * from (select 1);
+------+
| 1    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
MySQL [(none)]> select __auto_generated_subquery_name.a from (select 1 as a);
+------+
| a    |
+------+
|    1 |
+------+
1 row in set (0.03 sec)
MySQL [(none)]> set sql_dialect=doris;
Query OK, 0 rows affected (0.02 sec)
MySQL [(none)]> select * from (select 1);
ERROR 1248 (42000): errCode = 2, detailMessage = Every derived table must have its own alias
MySQL [(none)]>
```
- Test spark-sql/hive built-in functions
```sql
MySQL [(none)]> show global functions;
Empty set (0.01 sec)
MySQL [(none)]> show variables like '%dialect%';
+---------------+-----------+---------------+---------+
| Variable_name | Value     | Default_Value | Changed |
+---------------+-----------+---------------+---------+
| sql_dialect   | spark_sql | doris         | 1       |
+---------------+-----------+---------------+---------+
1 row in set (0.01 sec)
MySQL [(none)]> select get_json_object('{"a":"b"}', '$.a');
+----------------------------------+
| json_extract('{"a":"b"}', '$.a') |
+----------------------------------+
| "b" |
+----------------------------------+
1 row in set (0.04 sec)
MySQL [(none)]> select split("a b c", " ");
+-------------------------------+
| split_by_string('a b c', ' ') |
+-------------------------------+
| ["a", "b", "c"] |
+-------------------------------+
1 row in set (1.17 sec)
```
Group commit currently always chooses the first non-decommissioned BE among all BEs.
It would be better to choose BEs with `selectBackendIdsByPolicy`, as common stream load does, so that decommissioned BEs are never chosen and the load is spread across candidates.
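The proposed selection can be sketched as follows; this is a hypothetical illustration (class and method names are not the actual Doris FE code), showing the policy-filter-then-pick idea behind `selectBackendIdsByPolicy`:

```java
import java.util.List;
import java.util.Optional;
import java.util.Random;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed BE selection for group commit:
// filter out decommissioned BEs first, then pick one of the remaining
// candidates at random, similar in spirit to what selectBackendIdsByPolicy
// does for common stream load (instead of always taking the first BE).
public class GroupCommitBeSelector {

    public static final class Backend {
        public final long id;
        public final boolean decommissioned;

        public Backend(long id, boolean decommissioned) {
            this.id = id;
            this.decommissioned = decommissioned;
        }
    }

    /** Returns the id of a randomly chosen non-decommissioned BE, if any. */
    public static Optional<Long> selectBackend(List<Backend> backends, Random rand) {
        List<Backend> candidates = backends.stream()
                .filter(be -> !be.decommissioned)
                .collect(Collectors.toList());
        if (candidates.isEmpty()) {
            return Optional.empty();
        }
        return Optional.of(candidates.get(rand.nextInt(candidates.size())).id);
    }
}
```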
The user manually adjusted the 'name' field in the `__repo_info` file under the repo directory on S3, but did not modify the directory name. This led to an issue when the user created a repo with the same name as the directory in a certain cluster: the system parsed the 'name' field in the existing `__repo_info` and used the incorrect name, making the subsequent repo unusable. A check has been added here: the 'name' field in `__repo_info` must be the same as the new repo's name; otherwise, an error is reported.
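The added check amounts to a single consistency test; a minimal sketch (class and method names are hypothetical, not the actual Doris FE code):

```java
// Hypothetical sketch of the added validation: the 'name' field parsed
// from the __repo_info file on S3 must match the name of the repository
// being created; otherwise creation fails instead of silently adopting
// the stale name recorded in the file.
public class RepoInfoCheck {

    public static void validateRepoName(String nameInRepoInfo, String newRepoName) {
        if (!newRepoName.equals(nameInRepoInfo)) {
            throw new IllegalStateException(
                    "Repository name mismatch: __repo_info records '" + nameInRepoInfo
                            + "' but the new repository is named '" + newRepoName + "'");
        }
    }
}
```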
This enhancement extends the existing logic for `SHOW PARTITIONS FROM` to include:
- Limit/Offset
- Where (partition name only; `=` and `LIKE` operators)
- Order by (partition name only)
Issue Number: close #27834
Here is an example:
```
mysql> ALTER SYSTEM DROP FOLLOWER "127.0.0.1:19017";
ERROR 1105 (HY000): errCode = 2, detailMessage = Unable to drop this alive
follower, because the quorum requirements are not met after this command
execution. Current num alive followers 2, num followers 3, majority after
execution 2
```
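The check behind this error can be sketched as follows; a minimal illustration (class and method names are hypothetical, not the actual Doris FE code). With 3 followers of which 2 are alive, dropping one leaves 1 alive follower against a required majority of 2, so the command is refused:

```java
// Hypothetical sketch of the quorum check behind the error above:
// dropping an alive follower is refused when the remaining alive
// followers would no longer form a majority of the remaining follower set.
public class QuorumCheck {

    public static boolean canDropAliveFollower(int numAliveFollowers, int numFollowers) {
        int followersAfter = numFollowers - 1;      // e.g. 3 -> 2
        int aliveAfter = numAliveFollowers - 1;     // e.g. 2 -> 1
        int majorityAfter = followersAfter / 2 + 1; // e.g. 2 / 2 + 1 = 2
        return aliveAfter >= majorityAfter;         // 1 >= 2 -> false: refuse
    }
}
```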
Currently, `_flush_active_memtables()` uses stale memtracker data, especially when some other thread has just changed it.
Refresh the memtrackers before flushing to avoid this problem.