**Thanks to** PR #21855 for providing a wonderful reference.
Implementing **a comprehensive logical plan adapter** may be very difficult and costly, and often there are only some small syntax variations between Doris and other engines (such as Hive/Spark), so we can just **focus on** those **differences** here.
This PR mainly focuses on the **syntax differences between Doris and Spark SQL**, for instance, applying some function transformations and overriding some syntax validations.
- add a dialect named `spark_sql`
- move method `NereidsParser#parseSQLWithDialect` to `TrinoParser`
- extract the `FnCallTransformer`/`FnCallTransformers` classes so that the function-transformer logic can be reused
- allow derived tables without an alias when the dialect is set to `spark_sql` (both the legacy and Nereids parsers are supported)
- add some function transformers for Hive/Spark built-in functions (a rough sketch of the idea follows this list)
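
Since the transformers mostly rewrite a Spark SQL built-in call into the equivalent Doris call, here is a purely illustrative sketch of the idea; the class and method names below are hypothetical and not the actual `FnCallTransformer` API, and real transformers may also have to rewrite arguments.

```java
import java.util.Map;

/**
 * Hypothetical sketch of a Spark SQL -> Doris function-name transformer.
 * Only the simple name-mapping part is shown here.
 */
public class SparkFnNameTransformer {
    // Mappings taken from the test cases below.
    private static final Map<String, String> SPARK_TO_DORIS = Map.of(
            "get_json_object", "json_extract",
            "split", "split_by_string");

    /** Returns the Doris function name, or the original name if no mapping exists. */
    public static String transform(String sparkFnName) {
        return SPARK_TO_DORIS.getOrDefault(sparkFnName.toLowerCase(), sparkFnName);
    }

    public static void main(String[] args) {
        System.out.println(transform("get_json_object")); // json_extract
        System.out.println(transform("split"));           // split_by_string
        System.out.println(transform("abs"));             // abs (unchanged)
    }
}
```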
### Test cases (from our online Doris cluster)
- Test a derived table without an alias
```sql
MySQL [(none)]> show variables like '%dialect%';
+---------------+-----------+---------------+---------+
| Variable_name | Value     | Default_Value | Changed |
+---------------+-----------+---------------+---------+
| sql_dialect   | spark_sql | doris         | 1       |
+---------------+-----------+---------------+---------+
1 row in set (0.01 sec)
MySQL [(none)]> select * from (select 1);
+------+
| 1 |
+------+
| 1 |
+------+
1 row in set (0.03 sec)
MySQL [(none)]> select __auto_generated_subquery_name.a from (select 1 as a);
+------+
| a |
+------+
| 1 |
+------+
1 row in set (0.03 sec)
MySQL [(none)]> set sql_dialect=doris;
Query OK, 0 rows affected (0.02 sec)
MySQL [(none)]> select * from (select 1);
ERROR 1248 (42000): errCode = 2, detailMessage = Every derived table must have its own alias
MySQL [(none)]>
```
- Test Spark SQL/Hive built-in functions
```sql
MySQL [(none)]> show global functions;
Empty set (0.01 sec)
MySQL [(none)]> show variables like '%dialect%';
+---------------+-----------+---------------+---------+
| Variable_name | Value     | Default_Value | Changed |
+---------------+-----------+---------------+---------+
| sql_dialect   | spark_sql | doris         | 1       |
+---------------+-----------+---------------+---------+
1 row in set (0.01 sec)
MySQL [(none)]> select get_json_object('{"a":"b"}', '$.a');
+----------------------------------+
| json_extract('{"a":"b"}', '$.a') |
+----------------------------------+
| "b" |
+----------------------------------+
1 row in set (0.04 sec)
MySQL [(none)]> select split("a b c", " ");
+-------------------------------+
| split_by_string('a b c', ' ') |
+-------------------------------+
| ["a", "b", "c"] |
+-------------------------------+
1 row in set (1.17 sec)
```
Group commit currently always chooses the first non-decommissioned BE among all BEs.
It may be better to choose a BE with `selectBackendIdsByPolicy`, as a common stream load does, and avoid choosing decommissioned BEs; a sketch of that direction is shown below.
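
A minimal sketch of the suggested direction, assuming hypothetical `Backend` accessors (`isAlive()`, `isDecommissioned()`); the real implementation would go through `selectBackendIdsByPolicy` with the same constraints rather than this hand-rolled filter.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

/** Hypothetical sketch: pick a random alive, non-decommissioned BE instead of always the first one. */
public class GroupCommitBackendPicker {
    // Minimal stand-in for the FE Backend class; method names are assumptions.
    interface Backend {
        long getId();
        boolean isAlive();
        boolean isDecommissioned();
    }

    private static final Random RANDOM = new Random();

    /** Returns the id of a randomly chosen alive, non-decommissioned backend, or -1 if none exists. */
    public static long pickBackend(List<Backend> allBackends) {
        List<Backend> candidates = allBackends.stream()
                .filter(be -> be.isAlive() && !be.isDecommissioned())
                .collect(Collectors.toList());
        if (candidates.isEmpty()) {
            return -1;
        }
        return candidates.get(RANDOM.nextInt(candidates.size())).getId();
    }
}
```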
The user manually adjusted the 'name' field in the `__repo_info` file under the repo directory on S3, but did not rename the directory itself. When the user later created a repo in some cluster with the same name as the directory, the system parsed the 'name' field from the existing `__repo_info` and used the incorrect name, leaving the newly created repo unusable. A check has been added here: the 'name' field in `__repo_info` must match the new repo's name; otherwise, an error is reported.
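
A minimal sketch of the added check, using hypothetical names (the real code reads the 'name' field from the `__repo_info` file on S3 before comparing):

```java
/** Hypothetical sketch of validating the repo name stored in __repo_info. */
public class RepoInfoChecker {
    /**
     * @param nameInRepoInfo the 'name' field parsed from the existing __repo_info file
     * @param newRepoName    the name used when creating the new repository
     * @throws IllegalStateException if the two names do not match
     */
    public static void checkRepoName(String nameInRepoInfo, String newRepoName) {
        if (!newRepoName.equals(nameInRepoInfo)) {
            throw new IllegalStateException("Repository name mismatch: __repo_info contains '"
                    + nameInRepoInfo + "' but the new repository is named '" + newRepoName + "'");
        }
    }
}
```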
This enhancement extends the existing logic of `SHOW PARTITIONS FROM` to support the following (a rough sketch of the filtering semantics follows this list):
- Limit/Offset
- Where [partition name only] [equal and like operators]
- Order by [partition name only]
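
A rough sketch of the intended semantics, assuming the FE already has the list of partition names; the class name, method shape, and the simplified LIKE handling here are illustrative, not the actual implementation.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/**
 * Hypothetical sketch of the new SHOW PARTITIONS filtering:
 * WHERE (equal or LIKE on the partition name), ORDER BY partition name, LIMIT/OFFSET.
 */
public class ShowPartitionsFilter {
    public static List<String> apply(List<String> partitionNames, String equalTo, String likePattern,
                                     boolean orderByName, long offset, long limit) {
        // Translate the SQL LIKE pattern into a regex; only % and _ are handled, no escaping.
        String regex = likePattern == null ? null
                : likePattern.replace("%", ".*").replace("_", ".");
        Stream<String> stream = partitionNames.stream()
                .filter(name -> equalTo == null || name.equals(equalTo))
                .filter(name -> regex == null || name.matches(regex));
        if (orderByName) {
            stream = stream.sorted(Comparator.naturalOrder());
        }
        return stream.skip(offset).limit(limit).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> partitions = List.of("p20231101", "p20231102", "p20231201");
        // Partitions whose name matches 'p202311%', ordered by name, first 10 rows.
        System.out.println(apply(partitions, null, "p202311%", true, 0, 10));
    }
}
```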
Issue Number: close #27834
Here is an example:
```
mysql> ALTER SYSTEM DROP FOLLOWER "127.0.0.1:19017";
ERROR 1105 (HY000): errCode = 2, detailMessage = Unable to drop this alive
follower, because the quorum requirements are not met after this command
execution. Current num alive followers 2, num followers 3, majority after
execution 2
```
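
The numbers in the error message follow the usual majority rule. Below is a hedged sketch of the kind of check involved (not the exact FE code); with 3 followers of which 2 are alive, dropping an alive follower leaves 1 alive out of 2, which is below the majority of 2, so the command is rejected.

```java
/** Hypothetical sketch of the quorum check before dropping an alive FOLLOWER. */
public class DropFollowerCheck {
    public static void checkQuorum(int numAliveFollowers, int numFollowers) {
        // After dropping one alive follower, one fewer follower exists and one fewer is alive.
        int followersAfter = numFollowers - 1;
        int aliveAfter = numAliveFollowers - 1;
        int majorityAfter = followersAfter / 2 + 1;
        if (aliveAfter < majorityAfter) {
            throw new IllegalStateException(String.format(
                    "Unable to drop this alive follower, because the quorum requirements are not met "
                    + "after this command execution. Current num alive followers %d, num followers %d, "
                    + "majority after execution %d", numAliveFollowers, numFollowers, majorityAfter));
        }
    }

    public static void main(String[] args) {
        try {
            checkQuorum(2, 3); // matches the example above: 1 alive after drop < majority 2
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```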
PR #27515 changed the logic of the table's `isPartitioned()` method.
But this method has 2 usages:
1. To check whether a table is range- or list-partitioned, for some DML operations such as Alter and Export.
In this case, it should return true if the table is range- or list-partitioned, even if it has only
one partition and one bucket.
2. To check whether the data is distributed (either by partitions or by buckets), for the query planner.
In this case, it should return true if the table has more than one bucket: even if the table is not
range- or list-partitioned, having more than one bucket means it should return true.
So we should split this method into two, one for each usage; otherwise, it may produce an unreasonable plan shape in some cases. A sketch of the proposed split is shown below.
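
A minimal sketch of the proposed split, using hypothetical method and type names (the actual methods in the Doris table classes may be named differently):

```java
/** Hypothetical sketch of splitting isPartitioned() into two methods, one per usage. */
public interface PartitionedTable {
    enum PartitionType { RANGE, LIST, UNPARTITIONED }

    PartitionType getPartitionType();   // RANGE, LIST or UNPARTITIONED
    int getBucketNum();                 // total number of buckets across all partitions

    /** Usage 1: for DML such as ALTER/EXPORT, true if the table is range- or list-partitioned. */
    default boolean isRangeOrListPartitioned() {
        return getPartitionType() == PartitionType.RANGE || getPartitionType() == PartitionType.LIST;
    }

    /** Usage 2: for the query planner, true if the data is spread over more than one bucket. */
    default boolean isDataDistributed() {
        return getBucketNum() > 1;
    }
}
```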
In some cases, if the query parallel instance number is set incorrectly (e.g., to 0), it will cause a BE core dump:
```
10:18:19 *** SIGFPE integer divide by zero (@0x564853c204c8) received by PID 2132555
```
```cpp
int max_scanners =
    config::doris_scanner_thread_pool_thread_num / state->query_parallel_instance_num();
```
If `const_value` is specified, the `numbers()` table function returns a column of that constant; otherwise, it returns an incremental series as it always has. For example:
```sql
mysql> select * from numbers("number" = "5", "const_value" = "-123");
+--------+
| number |
+--------+
| -123   |
| -123   |
| -123   |
| -123   |
| -123   |
+--------+
5 rows in set (0.11 sec)
```
Previously, Arrow was temporarily upgraded to the dev version 15.0.0-SNAPSHOT, because the latest release version, Arrow 14.0.1, has a bug in jdbc:arrow-flight-sql that prevents it from being used normally; see apache/arrow#38785.
However, Arrow 15.0.0-SNAPSHOT is not published to the Maven Central repository and the snapshot repository is sometimes unreachable, so we roll back to Arrow 14.0.1. jdbc:arrow-flight-sql will be supported after upgrading to the Arrow 15.0.0 release version.