Iceberg table partition names may contain upper-case characters, for example: City=xxx, Nation=xxx.
But in Doris, all column names are lower case, so we convert the partition name to lower case to keep it consistent with the column names.
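As a rough illustration of the idea (the class and helper names below are hypothetical, not the actual Doris code), the conversion only lower-cases the column-name part of each partition entry and leaves the partition values untouched:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionNameNormalizer {
    // Hypothetical helper: lower-case the column-name part of an Iceberg
    // partition spec such as "City=Beijing/Nation=CN" so it matches
    // Doris's lower-case column names.
    public static Map<String, String> toLowerCaseKeys(String partitionName) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String part : partitionName.split("/")) {
            int idx = part.indexOf('=');
            String col = part.substring(0, idx).toLowerCase();
            String val = part.substring(idx + 1);
            result.put(col, val);
        }
        return result;
    }

    public static void main(String[] args) {
        // Prints {city=Beijing, nation=CN}
        System.out.println(toLowerCaseKeys("City=Beijing/Nation=CN"));
    }
}
```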
Make the VcompoundPred optimization work well.
#19818 tried to enable the VcompoundPred optimization but got wrong results on TPC-DS q28.
The reason is that some of MySQL's nullable logic needs special handling:
```
mysql [regression_test_tpcds_sf1_p1]>select null and false;
+----------------+
| NULL AND FALSE |
+----------------+
|              0 |
+----------------+
1 row in set (0.00 sec)

mysql [regression_test_tpcds_sf1_p1]>select null and true;
+---------------+
| NULL AND TRUE |
+---------------+
|          NULL |
+---------------+
1 row in set (0.00 sec)

mysql [regression_test_tpcds_sf1_p1]>select null or false;
+---------------+
| NULL OR FALSE |
+---------------+
|          NULL |
+---------------+
1 row in set (0.00 sec)

mysql [regression_test_tpcds_sf1_p1]>select null or true;
+--------------+
| NULL OR TRUE |
+--------------+
|            1 |
+--------------+
1 row in set (0.00 sec)
```
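The rule behind these results is SQL three-valued (Kleene) logic: NULL AND x is FALSE only when x is FALSE, and NULL OR x is TRUE only when x is TRUE; otherwise the result stays NULL. A minimal sketch of that rule (this is not the actual VcompoundPred code, which works on column data and null maps, just the truth table it has to respect):

```java
public class ThreeValuedLogic {
    // SQL three-valued AND: null means "unknown", so the result is only
    // decided when one side forces it to FALSE.
    static Boolean and(Boolean l, Boolean r) {
        if (Boolean.FALSE.equals(l) || Boolean.FALSE.equals(r)) {
            return Boolean.FALSE;
        }
        if (l == null || r == null) {
            return null;
        }
        return Boolean.TRUE;
    }

    // SQL three-valued OR: TRUE on either side forces the result to TRUE.
    static Boolean or(Boolean l, Boolean r) {
        if (Boolean.TRUE.equals(l) || Boolean.TRUE.equals(r)) {
            return Boolean.TRUE;
        }
        if (l == null || r == null) {
            return null;
        }
        return Boolean.FALSE;
    }

    public static void main(String[] args) {
        System.out.println(and(null, false)); // false, matches NULL AND FALSE = 0
        System.out.println(and(null, true));  // null,  matches NULL AND TRUE  = NULL
        System.out.println(or(null, false));  // null,  matches NULL OR FALSE  = NULL
        System.out.println(or(null, true));   // true,  matches NULL OR TRUE   = 1
    }
}
```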
Hudi external table has been deprecated since 1.2, so we remove it now.
It is recommended to use the "multi-catalog" feature to connect to Hudi instead.
Users can no longer create Hudi external tables.
When the FE restarts, existing Hudi external tables are still replayed but cannot be read, and when a checkpoint is done, all of these tables are discarded.
Sometimes the tablet scheduler cannot schedule a tablet, and there is no additional information for debugging.
So this adds some debug logs for the scheduling process.
No logic is changed.
In a scenario where multiple DBs are imported simultaneously with high concurrency, a large number of transactions are generated. Without a summary field, we cannot clearly see how many transactions there are in the current cluster. Therefore, a Total summary row is added to the output of `show proc "/transactions"`:
```
mysql> show proc "/transactions";
+-------+-----------------------------------+-----------------------+
| DbId | DbName | RunningTransactionNum |
+-------+-----------------------------------+-----------------------+
| 10002 | default_cluster:xxxx | 0 |
| 14005 | default_cluster:__internal_schema | 0 |
| Total | 2 | 0 |
+-------+-----------------------------------+-----------------------+
3 rows in set (0.02 sec)
```
When serializing the minidump input, we found that, while serializing the colocate table index, the size and the entries read from the hash map could mismatch when concurrent modifications occurred. So a write lock is added to ensure thread safety.
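A minimal sketch of the locking pattern (the class and field names are illustrative stand-ins, not the actual ColocateTableIndex code): serializing under the lock keeps the size written first consistent with the entries that follow.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ColocateIndexSnapshot {
    // Illustrative stand-in for the real colocate group mapping.
    private final Map<Long, Long> groupToDb = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void addMapping(long groupId, long dbId) {
        lock.writeLock().lock();
        try {
            groupToDb.put(groupId, dbId);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Serialize under the write lock so the recorded size always matches the
    // number of entries written, even if other threads are mutating the map.
    public String serialize() {
        lock.writeLock().lock();
        try {
            StringBuilder sb = new StringBuilder();
            sb.append(groupToDb.size()).append('\n');
            for (Map.Entry<Long, Long> e : groupToDb.entrySet()) {
                sb.append(e.getKey()).append('=').append(e.getValue()).append('\n');
            }
            return sb.toString();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ColocateIndexSnapshot snapshot = new ColocateIndexSnapshot();
        snapshot.addMapping(1001L, 10002L);
        System.out.print(snapshot.serialize());
    }
}
```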