1. add json_unquote and json_extract functions
2. remove mv related code in visitPhysicalOlapScan
3. forbid bitmap and hll types for topn node's sort exprs
4. HashDistributionInfo of olap scan node should use the slots from the output, not the full schema
5. SelectMaterializedIndexWithoutAggregate should use the filter node's output together with the predicate to get the correct mv
6. forbid SimplifyArithmeticRule for decimal types
7. make DecimalLiteral's type and value consistent with each other if the value is decimalv2 (see the sketch after this list)
8. json_array needs to support empty arguments
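For item 7, a minimal sketch of what keeping a decimal literal's type and value consistent can look like, using plain java.math.BigDecimal; the `DecimalInfo` record and `deriveType` helper are hypothetical illustrations, not the actual DecimalLiteral API:

```java
import java.math.BigDecimal;

public class DecimalTypeConsistency {
    // Hypothetical holder for a decimal type: precision = total digits, scale = fraction digits.
    record DecimalInfo(int precision, int scale) {}

    // Derive the (precision, scale) type directly from the literal's value,
    // so the declared type and the stored value can never drift apart.
    static DecimalInfo deriveType(BigDecimal value) {
        int scale = Math.max(value.scale(), 0);
        // precision must be at least the scale, e.g. 0.05 needs precision >= 2
        int precision = Math.max(value.precision(), scale);
        return new DecimalInfo(precision, scale);
    }

    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("123.4500");
        System.out.println(deriveType(v)); // DecimalInfo[precision=7, scale=4]
    }
}
```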
fix mem_limit default value
rename memory_gc_sleep_time_s to memory_gc_sleep_time_ms
switch LoadChannelMgr::_handle_mem_exceed_limit from process_mem_limit to the process soft mem limit
fix query mem tracker printing
The Nereids planner adds conjuncts to the ScanNode after calling finalize, which may cause the external table scan node to fail to filter out
useless partitions, because external tables do partition pruning in the finalize method.
This PR fixes that bug. In the rewrite stage, the conjuncts are passed to the LogicalFileScan object, and eventually to the
ScanNode while creating it, so that the ScanNode can use the conjuncts when doing finalize.
Why not do the partition pruning in LogicalFileScan, as LogicalOlapScan does?
Because the Iceberg API doesn't have the partition concept; it just accepts a list of conjuncts,
so it's easier to pass the conjuncts to the ScanNode (Hive, Iceberg, Hudi, ...) and do the partition pruning there.
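A minimal, self-contained sketch of the intended flow; `LogicalFileScanSketch`, `FileScanNodeSketch`, and the string-based conjuncts are hypothetical stand-ins for the real planner classes, and the pruning logic is deliberately a toy:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConjunctPushdownSketch {
    // Hypothetical stand-in for LogicalFileScan: the rewrite stage stores the conjuncts here.
    record LogicalFileScanSketch(String table, List<String> conjuncts) {}

    // Hypothetical stand-in for the external ScanNode (Hive/Iceberg/Hudi).
    static class FileScanNodeSketch {
        private final List<String> conjuncts;
        private final List<String> partitions;

        // Conjuncts are handed over when the node is created, before finalize runs.
        FileScanNodeSketch(LogicalFileScanSketch scan, List<String> partitions) {
            this.conjuncts = scan.conjuncts();
            this.partitions = partitions;
        }

        // finalize can now see the conjuncts and prune useless partitions,
        // instead of receiving them only after finalize has already run.
        // Toy pruning: keep only partitions mentioned by some conjunct.
        List<String> finalizePartitions() {
            return partitions.stream()
                    .filter(p -> conjuncts.stream().anyMatch(p::contains))
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        LogicalFileScanSketch scan =
                new LogicalFileScanSketch("orders", List.of("dt=2023-01-01"));
        FileScanNodeSketch node = new FileScanNodeSketch(
                scan, List.of("dt=2023-01-01", "dt=2023-01-02"));
        System.out.println(node.finalizePartitions()); // [dt=2023-01-01]
    }
}
```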
Fix three bugs of timestampv2 precision:
1. The Hive catalog doesn't set the precision of timestampv2 and can't get the precision from the Hive metastore, so the largest precision is used for timestampv2 (see the sketch after this list);
2. The JDBC catalog uses datetimev1 to parse timestamps and then converts to timestampv2, so the precision is lost.
3. TVF doesn't use the precision from the file format's metadata.
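For bug 1, a minimal sketch of the fallback idea, assuming a maximum timestampv2 precision of 6 fractional digits; the `MAX_PRECISION` constant and `resolvePrecision` helper are illustrative, not the actual catalog code:

```java
public class TimestampPrecisionFallback {
    // Assumption for illustration: timestampv2 supports at most 6 fractional digits.
    static final int MAX_PRECISION = 6;

    // If the external metadata (e.g. the Hive metastore) does not report a precision,
    // fall back to the largest precision instead of silently truncating fractional seconds.
    static int resolvePrecision(Integer precisionFromMetadata) {
        if (precisionFromMetadata == null
                || precisionFromMetadata < 0
                || precisionFromMetadata > MAX_PRECISION) {
            return MAX_PRECISION;
        }
        return precisionFromMetadata;
    }

    public static void main(String[] args) {
        System.out.println(resolvePrecision(null)); // 6
        System.out.println(resolvePrecision(3));    // 3
    }
}
```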
Change the Nereids minidump switch from "Dump_nereids" to "enable_minidump", which is more exact and neat. Also fix a Double.NaN (not a number) serialization bug in column statistic serialization.
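A minimal sketch of the NaN handling idea, in plain Java without the actual statistics classes; writing NaN as JSON null is an assumption about the desired behavior, not the exact fix in this PR:

```java
public class NanSafeSerialization {
    // JSON has no representation for NaN or Infinity, so a naive Double.toString(value)
    // would produce invalid JSON. Emit null (or any agreed sentinel) instead.
    static String toJsonNumber(double value) {
        return Double.isNaN(value) || Double.isInfinite(value)
                ? "null"
                : Double.toString(value);
    }

    // Hypothetical column-statistic snippet: ndv may be NaN when stats are missing.
    static String serializeColumnStat(String column, double ndv) {
        return "{\"column\":\"" + column + "\",\"ndv\":" + toJsonNumber(ndv) + "}";
    }

    public static void main(String[] args) {
        System.out.println(serializeColumnStat("c1", 42.0));       // {"column":"c1","ndv":42.0}
        System.out.println(serializeColumnStat("c2", Double.NaN)); // {"column":"c2","ndv":null}
    }
}
```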
# Proposed changes
In Nereids, before this PR, accessing a table that does not exist reports the following exception:
```
mysql> select * from tt;
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: null
```
After this PR, it reports the following:
```
mysql> select * from tt;
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: Table [tt] does not exist in database [default_cluster:test].
```
## Problem summary
This is because in this [function](f5af07f7b2/fe/fe-core/src/main/java/org/apache/doris/nereids/CascadesContext.java (L328)) we ignore the exception, so the `tables` collection in `CascadesContext` has size zero rather than being null, and we can only get null from `table = cascadesContext.getTableByName(tableName);`.
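A minimal sketch of the difference, with a plain map standing in for the `tables` collection in `CascadesContext`; the class and the exact exception type are illustrative, not the real code, though the message mirrors the one shown above:

```java
import java.util.HashMap;
import java.util.Map;

public class TableLookupSketch {
    private final Map<String, Object> tables = new HashMap<>();
    private final String dbName;

    TableLookupSketch(String dbName) {
        this.dbName = dbName;
    }

    // Before: the lookup failure was swallowed during collection, so `tables`
    // stayed empty and a later get() simply returned null, which surfaced as
    // "Unexpected exception: null".
    Object getTableOrNull(String tableName) {
        return tables.get(tableName);
    }

    // After: report a meaningful error as soon as the table cannot be resolved.
    Object getTableOrThrow(String tableName) {
        Object table = tables.get(tableName);
        if (table == null) {
            throw new RuntimeException("Table [" + tableName
                    + "] does not exist in database [" + dbName + "].");
        }
        return table;
    }

    public static void main(String[] args) {
        TableLookupSketch ctx = new TableLookupSketch("default_cluster:test");
        System.out.println(ctx.getTableOrNull("tt")); // null
        ctx.getTableOrThrow("tt"); // throws with a clear message
    }
}
```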
1. Make a span correspond to an exec node instead of an exec node method;
2. Fix some problems with tracing in the pipeline engine.