Sometimes I find that the tablet scheduler cannot schedule a tablet, with no further information available for debugging.
So I added some debug logs to this process.
No logic is changed.
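For illustration, here is a minimal sketch of the kind of logging added; the class and method names are hypothetical, not the actual TabletScheduler code paths:
```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical sketch: record *why* a tablet could not be scheduled
// instead of failing silently. Names are illustrative only.
public class TabletSchedulerLogDemo {
    private static final Logger LOG = LogManager.getLogger(TabletSchedulerLogDemo.class);

    public void onScheduleFailure(long tabletId, String reason) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("tablet {} can not be scheduled: {}", tabletId, reason);
        }
    }
}
```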
In a scenario where multiple DBs are imported simultaneously with high concurrency, a large number of transactions is generated. Without a summary row, we cannot clearly see how many transactions there are in the current cluster. Therefore, I have enhanced this point:
```
mysql> show proc "/transactions";
+-------+-----------------------------------+-----------------------+
| DbId  | DbName                            | RunningTransactionNum |
+-------+-----------------------------------+-----------------------+
| 10002 | default_cluster:xxxx              | 0                     |
| 14005 | default_cluster:__internal_schema | 0                     |
| Total | 2                                 | 0                     |
+-------+-----------------------------------+-----------------------+
3 rows in set (0.02 sec)
```
When serializing the minidump input, we found that while serializing the colocate table index, the size and the entries read from the hash map were frequently mismatched when concurrent modification occurred. So a write lock is added to ensure safe concurrent access.
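A minimal sketch of the locking pattern, assuming the index is backed by a plain HashMap; the class and field names below are illustrative, not the real ColocateTableIndex:
```
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: hold the write lock while serializing so the
// recorded size and the iterated entries cannot diverge under
// concurrent modification.
public class ColocateIndexSnapshot {
    private final Map<Long, Long> groupToTable = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void addMapping(long groupId, long tableId) {
        lock.writeLock().lock();
        try {
            groupToTable.put(groupId, tableId);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Serialization path: lock so that size() and entrySet() match.
    public String serialize() {
        lock.writeLock().lock();
        try {
            StringBuilder sb = new StringBuilder();
            sb.append(groupToTable.size()).append(':');
            for (Map.Entry<Long, Long> e : groupToTable.entrySet()) {
                sb.append(e.getKey()).append("->").append(e.getValue()).append(';');
            }
            return sb.toString();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```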
1. Add a session variable max_execution_time as an alias of query_timeout: if the user sets max_execution_time, query_timeout is updated accordingly.
2. Add a setter attribute to session variables, so that extra logic can run in a setter method instead of relying on raw field reflection (see the sketch after this list).
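A minimal sketch of the alias behavior, with hypothetical field and method names (the real code goes through Doris's session variable machinery); max_execution_time follows MySQL's millisecond semantics, while query_timeout is in seconds:
```
// Illustrative sketch: the setter keeps the aliased variable in sync,
// which is why a setter hook is preferable to raw field reflection.
public class SessionVariableDemo {
    private int queryTimeoutS = 300;      // query_timeout, in seconds
    private int maxExecutionTimeMS = -1;  // max_execution_time, in milliseconds

    // Invoked by the variable manager when the user sets the variable.
    public void setMaxExecutionTimeMS(int ms) {
        this.maxExecutionTimeMS = ms;
        // Keep the alias in sync; round up to at least one second.
        this.queryTimeoutS = Math.max(1, ms / 1000);
    }

    public int getQueryTimeoutS() {
        return queryTimeoutS;
    }
}
```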
Add a new FE config: show_details_for_unaccessible_tablet.
The default is false. When set to true, if a query is unable to select a healthy replica,
the detailed information of all replicas of the tablet, including the specific reason why each one is unqueryable,
will be printed out.
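An illustrative sketch of the config gate (in Doris the flag would live in Config.java; the message format below is an assumption, not the exact output):
```
import java.util.List;

// Illustrative sketch: only build the per-replica detail string
// when the new FE config is enabled.
public class UnaccessibleTabletReporter {
    // Stand-in for the new FE config field.
    public static boolean show_details_for_unaccessible_tablet = false;

    public String buildErrorMsg(long tabletId, List<String> replicaDetails) {
        StringBuilder sb = new StringBuilder(
                "no healthy replica found for tablet: " + tabletId);
        if (show_details_for_unaccessible_tablet) {
            // Include why each replica is unqueryable (version miss, bad state, ...).
            for (String detail : replicaDetails) {
                sb.append('\n').append(detail);
            }
        }
        return sb.toString();
    }
}
```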
When forbid_unknown_col_stats is enabled and some column stats are unknown,
we check the BE status via StatisticsUtil.statsTblAvailable() and report an error according to that status.
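A hedged sketch of the check described above (apart from the statsTblAvailable() call named in the text, the helper and message shapes are hypothetical):
```
// Illustrative sketch: distinguish "stats table unreachable" from
// "this column simply has no stats" when reporting the error.
public class UnknownStatsCheckDemo {
    public void checkColumnStats(String column, boolean statsUnknown,
                                 boolean forbidUnknownColStats) {
        if (!forbidUnknownColStats || !statsUnknown) {
            return;
        }
        if (!statsTblAvailable()) {
            // The BE holding the stats table is unavailable: blame the backend.
            throw new IllegalStateException(
                    "stats table is not available, please check the BE status");
        }
        // Stats table is fine, but this column has no stats yet.
        throw new IllegalStateException(
                "column stats for '" + column + "' is unknown, please analyze first");
    }

    // Stand-in for StatisticsUtil.statsTblAvailable() in Doris.
    private boolean statsTblAvailable() {
        return true;
    }
}
```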
Why upgrade? Is anything wrong?
This tries to fix the problem with opentelemetry::v1::ext::http::client::curl::HttpOperation::Send(); I have updated the PR info.
A situation similar to #19344: because the HMS meta info is sometimes newer than the HMS events, if we invoke org.apache.doris.datasource.hive.PooledHiveMetaStoreClient#getTable and the table no longer exists, an error is thrown and the event cannot be handled.
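A sketch of the defensive handling this implies, with simplified stand-ins for the real event and client types (the exact exception type and recovery policy are assumptions):
```
// Illustrative sketch: if the table behind an event is already gone in
// HMS, skip the event instead of letting the whole event loop fail.
public class HmsEventHandlerDemo {
    interface MetaClient {
        Object getTable(String db, String tbl) throws Exception;
    }

    public void handleEvent(MetaClient client, String db, String tbl) {
        try {
            Object table = client.getTable(db, tbl);
            // ... apply the event to the cached table ...
        } catch (Exception e) {
            // HMS meta is newer than the event stream; the table no longer
            // exists, so this event can be safely ignored.
        }
    }
}
```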
To be more compatible with MySQL, rename the JSONB type name and function names to JSON.
The old JSONB type name and the jsonb_xx functions can still be used for backward compatibility.
One function, jsonb_extract, remains under its old name, since json_extract is already used by the JSON string functions and more work is needed to change that; it will be migrated in a follow-up.
1. Add json_unquote and json_extract functions.
2. Remove MV-related code in visitPhysicalOlapScan.
3. Forbid bitmap and hll types for the TopN node's sort exprs.
4. The HashDistributionInfo of the olap scan node should use the slots from the output, not the full schema.
5. SelectMaterializedIndexWithoutAggregate should use the filter node's output together with the predicate to get the correct MV.
6. Forbid SimplifyArithmeticRule for decimal types.
7. Make DecimalLiteral's type and value consistent with each other when the value is decimalv2.
8. json_array needs to support empty arguments.
The Nereids planner adds conjuncts to the ScanNode after calling finalize. This may cause an external table scan node to fail to filter out
useless partitions, because external tables do partition pruning in the finalize method.
This PR fixes that bug. In the rewrite stage, the conjuncts are passed to the LogicalFileScan object, and eventually to the
ScanNode when it is created, so that the ScanNode can use the conjuncts during finalize.
Why not do partition pruning in LogicalFileScan, the way LogicalOlapScan does?
Because the Iceberg API doesn't have a partition concept; it just accepts a list of conjuncts,
so it is easier to pass the conjuncts to the ScanNode (Hive, Iceberg, Hudi, ...) and do the partition pruning there.
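A simplified sketch of the new data flow (the class shapes below are illustrative stand-ins for the real Nereids/planner types): conjuncts are attached during rewrite, handed to the scan node at construction, and are therefore already visible when finalize runs its partition pruning.
```
import java.util.ArrayList;
import java.util.List;

public class ConjunctsFlowDemo {
    // Stand-in for LogicalFileScan: carries the conjuncts from the rewrite stage.
    static class LogicalFileScan {
        final List<String> conjuncts = new ArrayList<>();
    }

    // Stand-in for the physical file scan node.
    static class FileScanNode {
        private final List<String> conjuncts;

        FileScanNode(List<String> conjuncts) {
            // Conjuncts arrive at construction time, not after finalize.
            this.conjuncts = conjuncts;
        }

        void finalizeScan() {
            // Hive/Iceberg/Hudi partition pruning can now use this.conjuncts.
        }
    }

    public static void main(String[] args) {
        LogicalFileScan scan = new LogicalFileScan();
        scan.conjuncts.add("dt = '2023-01-01'"); // attached during rewrite
        FileScanNode node = new FileScanNode(scan.conjuncts);
        node.finalizeScan();
    }
}
```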