If the SQL in CREATE VIEW contains more than one COUNT DISTINCT and the column names are written explicitly, the generated view SQL contains the function multi_count_distinct.
This function cannot be analyzed, so every query referencing the view fails.
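A minimal sketch of the failing pattern, using hypothetical table, view, and column names:
```sql
-- Hypothetical names; the point is an explicit column list in CREATE VIEW
-- combined with more than one COUNT(DISTINCT ...).
CREATE VIEW v_cnt (cnt_k1, cnt_k2) AS
SELECT COUNT(DISTINCT k1), COUNT(DISTINCT k2) FROM t1;

-- Before the fix, the stored view definition contained multi_count_distinct,
-- so any query over the view failed during analysis:
SELECT * FROM v_cnt;
```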
BufferControlBlock may block all fragment handle threads, leaving them unable to do any work.
The modifications include:
1. BufferControlBlock cancels itself after the max timeout.
2. StmtExecutor notifies the BE to cancel the fragment when an unexpected error occurs.

For more details, see issue #16203.
* [fix](Nereids): fix scalar_function A-F.
* [fix](regression-test): fix the regression test framework so it can compare the double values NaN and Inf.
* revert dround()
Main subtask of [DSIP-28](https://cwiki.apache.org/confluence/display/DORIS/DSIP-028%3A+Suppot+MySQL+Load+Data)
## Problem summary
Support the MySQL LOAD DATA syntax as below:
```sql
LOAD DATA
[LOCAL]
INFILE 'file_name'
INTO TABLE tbl_name
[PARTITION (partition_name [, partition_name] ...)]
[COLUMNS TERMINATED BY 'string']
[LINES TERMINATED BY 'string']
[IGNORE number {LINES | ROWS}]
[(col_name_or_user_var [, col_name_or_user_var] ...)]
[SET (col_name={expr | DEFAULT} [, col_name={expr | DEFAULT}] ...)]
[PROPERTIES (key1 = value1 [, key2=value2]) ]
```
For example,
```sql
LOAD DATA
LOCAL
INFILE 'local_test.file'
INTO TABLE db1.table1
PARTITION (partition_a, partition_b, partition_c, partition_d)
COLUMNS TERMINATED BY '\t'
(k1, k2, v2, v10, v11)
SET (c1 = k1, c2 = k2, c3 = v10, c4 = v11)
PROPERTIES ("auth" = "root:", "strict_mode"="true")
```
Note that in this PR the property named `auth` must be set, since stream load needs authentication. I will optimize this later.
The internal statistics table column_statistics has a non-nullable column idx_id, but the insert SQL for Hive tables set its default value to NULL, so inserting the result failed. Change the default to -1.
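An illustrative sketch of the behavior, using a hypothetical demo table rather than the real internal column_statistics schema:
```sql
-- Hypothetical demo table: a NOT NULL column rejects NULL but accepts a sentinel value.
CREATE TABLE stats_demo (
    tbl_id BIGINT NOT NULL,
    idx_id BIGINT NOT NULL,   -- analogous to the non-nullable idx_id above
    col_id VARCHAR(64)
)
DUPLICATE KEY(tbl_id)
DISTRIBUTED BY HASH(tbl_id) BUCKETS 1
PROPERTIES ("replication_num" = "1");

INSERT INTO stats_demo VALUES (1, NULL, 'c1'); -- fails: idx_id is NOT NULL
INSERT INTO stats_demo VALUES (1, -1, 'c1');   -- succeeds: -1 used as the "no index" sentinel
```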
External HMS catalog table column names in Doris are all lower case,
while an Iceberg table or a Hive table created by Spark-SQL may contain upper-case column names,
which causes empty query results. This PR fixes that bug (see the symptom sketch after the list below):
1. For Parquet files, convert all column names to lower case while parsing the Parquet metadata.
2. For ORC files, store the original column names and the lower-case column names in two vectors, and use the suitable names in each case.
3. On the FE side, change the column name back to the original column name in Iceberg when doing convertToIcebergExpr.
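A sketch of the symptom before the fix, using hypothetical catalog, database, table, and column names:
```sql
-- Hypothetical names throughout. Suppose the external table was created via
-- spark-sql with an upper-case column:
--   spark-sql> CREATE TABLE hive_db.t_case (ID INT, NAME STRING) STORED AS PARQUET;

-- Doris exposes the column in lower case as `id`. Before the fix, the lower-case
-- name did not match the upper-case name stored in the Parquet/ORC/Iceberg metadata,
-- so queries like this returned an empty result even when matching rows existed:
SELECT * FROM hms_catalog.hive_db.t_case WHERE id = 1;
```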
When executing show load profile '/'\G, the values of the SQL and DefaultDb columns are both 'N/A', but we can fill these fields. The result with this PR is as follows:
```
MySQL [test_d]> show load profile '/'\G
*************************** 1. row ***************************
QueryId: 652326
User: N/A
DefaultDb: default_cluster:test_d
SQL: LOAD LABEL `default_cluster:test_d`.`xxx` (APPEND DATA INFILE ('hdfs://xxx/user/hive/warehouse/xxx.db/xxx/*') INTO TABLE xxx FORMAT AS 'ORC' (c1, c2, c3) SET (`c1` = `c1`, `c2` = `c2`, `c3` = `c3`)) WITH BROKER broker_xxx (xxx) PROPERTIES ("max_filter_ratio" = "0", "timeout" = "30000")
QueryType: Load
StartTime: 2023-01-12 18:33:34
EndTime: 2023-01-12 18:33:46
TotalTime: 11s613ms
QueryState: N/A
1 row in set (0.01 sec)
```