# Proposed changes
- Refactor AggregateFunction
1. Make AggregateFunction implement ComputeSignature
2. Add a CustomSignature to compute signatures dynamically; we can check input types and compute implicit cast types in the `customSignature` method
3. Add PartialAggType to record some type information before disassembling the aggregate
4. Refine and create a custom catalog function when translating AggregateFunction, without `finalizeForNereids`
- Support explain plan
1. explain parsed plan select ...
2. explain analyzed plan select ...
3. explain rewritten/logical plan select ...
4. explain optimized/physical plan select ...
5. explain all plan select ... (example session below)
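A hypothetical session exercising each level (the query and table name are illustrative):

```sql
-- Plan right after parsing, before analysis:
explain parsed plan select id, count(*) from t group by id;

-- Plan after each subsequent phase:
explain analyzed plan select id, count(*) from t group by id;
explain rewritten plan select id, count(*) from t group by id;
explain optimized plan select id, count(*) from t group by id;

-- Plans of all phases at once:
explain all plan select id, count(*) from t group by id;
```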
Before, casting a string to DECIMAL(10,3) did not truncate the value to the declared scale:

```
MySQL [test]> select cast('135.759999999' as DECIMAL(10,3));
+----------------------------------------+
| CAST('135.759999999' AS DECIMAL(10,3)) |
+----------------------------------------+
|                          135.759999999 |
+----------------------------------------+
1 row in set (0.00 sec)
```
Now the fractional part is truncated to the target scale:

```
MySQL [stage]> select cast('135.759999999' as DECIMAL(10,3));
+----------------------------------------+
| CAST('135.759999999' AS DECIMAL(10,3)) |
+----------------------------------------+
|                                135.759 |
+----------------------------------------+
1 row in set (0.01 sec)
```
When executing a CREATE TABLE AS SELECT (CTAS) statement,
varchar/char/string columns in the created table will be unified to the string type.
This is because when selecting from an external table (MySQL/PostgreSQL, etc.), the varchar length
in the external database is measured in characters, not bytes.
So a varchar(10) column in the external table would yield a varchar(10) column in the created table,
but the byte length of the external data may exceed 10, causing the CTAS to fail.
Changing to string does not impact performance or disk storage capacity.
Note that if a string-type column is the first column, it will be changed to varchar(65535),
because we do not allow a string-type column as a sort key column.
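A minimal sketch of the resulting behavior (catalog, table, and column names are illustrative):

```sql
-- src has: name varchar(10), note varchar(10) (lengths in characters)
CREATE TABLE ctas_target
DISTRIBUTED BY HASH(name) BUCKETS 1
PROPERTIES ("replication_num" = "1")
AS SELECT name, note FROM mysql_catalog.db1.src;

-- Resulting schema, per the rules above:
--   name  varchar(65535)  -- first column: string is not allowed as a sort key
--   note  string          -- unified to string
```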
#13195 left some unresolved issues. One of them is that some BE unit tests fail.
This PR fixes that issue. Now we can run `./run-be-ut.sh --run` successfully on macOS.
When creating a Uniq table, you can now specify the mapping of the sequence column to another column,
so you no longer need to specify the mapped column when importing.
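A sketch of table creation with such a mapping, assuming it is expressed through a table property named `function_column.sequence_col` (the property name and schema here are assumptions):

```sql
CREATE TABLE uniq_tbl (
    k1 INT,
    v1 INT,
    modify_time DATETIME
)
UNIQUE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 1
PROPERTIES (
    "replication_num" = "1",
    -- map the hidden sequence column to modify_time at table-creation
    -- time, so each load no longer has to specify it
    "function_column.sequence_col" = "modify_time"
);
```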
Invalidate the catalog/db/table cache when executing
refresh catalog/db/table.
Tested on a table with 10000 partitions: the refresh operation costs about 10-20 ms.
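The refresh statements that trigger the invalidation (catalog/db/table names are illustrative):

```sql
REFRESH CATALOG hive_ctl;
REFRESH DATABASE hive_ctl.db1;
REFRESH TABLE hive_ctl.db1.tbl1;
```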
Collect statistics for HMS external tables from the external metadata,
and insert the results into `__internal_schema.column_statistics` using INSERT INTO SQL.
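A sketch of the write path; the column list of `column_statistics` shown here is illustrative, not the actual schema, and the values come from HMS metadata (e.g. row counts, total size) rather than a table scan:

```sql
-- Hypothetical column list and values; the real schema may differ.
INSERT INTO __internal_schema.column_statistics
    (id, catalog_id, db_id, tbl_id, col_id, count, ndv, null_count, min, max, data_size)
VALUES
    ('hive_ctl-db1-tbl1-c1', 10001, 10002, 10003, 'c1',
     1000000, 9876, 0, '0001', '9999', 8000000);
```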
Add a regression test for external Hive ORC tables. This PR generates data for all basic types supported by Hive ORC, and creates a Hive external table to exercise them in the Docker environment.
Functions to be tested (illustrative queries below):
1. Ensure that all types are parsed correctly
2. Ensure that the null maps of all types are parsed correctly
3. Ensure that the `SearchArgument` of `OrcReader` works well
4. Only select partition columns
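Example queries for each case (table and column names are assumptions):

```sql
-- 1/2. All basic types, and their null maps, parse correctly:
SELECT * FROM hive_orc_tbl ORDER BY tinyint_col;

-- 3. A pushed-down predicate exercises OrcReader's SearchArgument:
SELECT * FROM hive_orc_tbl WHERE int_col = 100;

-- 4. Touch only partition columns:
SELECT DISTINCT p_date FROM hive_orc_tbl;
```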
If a table has already been analyzed and we analyze it again, the new statistics will be larger than
expected: the incremental statistics also pick up the values from the table-level statistics, because
the SQL lacks a predicate on the nullability of `part_id`. A sketch of the fix follows.
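Assuming partition-level rows carry a non-null `part_id` while the table-level row has a NULL `part_id` (the table id and column names are illustrative), the aggregation needs to exclude the table-level row:

```sql
-- Without the IS NOT NULL predicate, the sum also includes the
-- table-level statistics row, inflating the re-analyzed result.
SELECT SUM(count)
FROM __internal_schema.column_statistics
WHERE tbl_id = 10003
  AND part_id IS NOT NULL;
```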
In the previous implementation of list partition pruning, we had to generate `rangeToId`
every time we pruned.
But `rangeToId` is actually static data that should be created once and used everywhere.
So for Hive partitions, I create `rangeToId` and all the other data structures needed for partition pruning
in the partition cache, so that we can use them directly.
In my test, the cost of partition pruning for 10000 partitions dropped from 8s to 0.2s.
Also add "partition" info to the explain string for Hive tables:
```
| 0:VEXTERNAL_FILE_SCAN_NODE |
| predicates: `nation` = '0024c95b' |
| inputSplitNum=1, totalFileSize=4750, scanRanges=1 |
| partition=1/10000 |
| numNodes=1 |
| limit: 10 |
```
Bug fix:
1. Fix a bug where the ES scan node cannot filter data
2. Fix a bug where querying ES with a predicate like `where substring(test2,2) = "ext2";` fails in the planner phase with:
`Unexpected exception: org.apache.doris.analysis.FunctionCallExpr cannot be cast to org.apache.doris.analysis.SlotRef`
TODO:
1. Some problem when querying ES version 8: `Unexpected exception: Index: 0, Size: 0`, will be fixed later.
1. Split DateLiteral and DateTimeLiteral into V1 and V2
2. Add a type coercion rule for DateLikeType: DateTimeV2Type > DateTimeType > DateV2Type > DateType
3. Add a rule to remove unnecessary CASTs on DateLikeType in ComparisonPredicate (sketch below)
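A sketch of the kind of predicate rule 3 targets (the column type and rewrite shown are illustrative):

```sql
-- Suppose dt is DATEV2. Naive analysis would compare
--     CAST(dt AS DATETIMEV2) = CAST('2022-01-01' AS DATETIMEV2)
-- The rule drops the unnecessary cast on the column, leaving a plain
-- comparison on DATEV2 that remains usable for pruning:
SELECT * FROM t WHERE dt = '2022-01-01';
```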
PR https://github.com/apache/doris/pull/13917 supported lazy read for non-predicate columns in ParquetReader,
but lazy read could not be triggered when the predicate columns were partition columns or missing columns.
This PR supports such cases, and fills partition and missing columns in `FileReader`.
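An illustrative query that previously could not trigger lazy read, because its only predicate column is a partition column (names are assumptions):

```sql
-- p_date is a partition column; c1..c3 are wide non-predicate columns.
-- FileReader now fills p_date, so rows are filtered on it first and
-- c1..c3 are materialized only for the surviving rows.
SELECT c1, c2, c3 FROM hive_parquet_tbl WHERE p_date = '2022-11-01';
```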
Before this PR, trying to load an ORC file with native list (or array) type data would crash the BE.
Because complex types in an ORC file span multiple physical columns, we need to filter columns by column name;
otherwise we could not read all the columns we need.
Arrow release-7.0.0 only supports creating a stripe reader by column index, so we patch it to support creating a stripe reader by column names.
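An illustrative query that crashed before this change (table and column names are assumptions):

```sql
-- tags is a native ORC list<string> column; selecting it requires the
-- reader to pick up the list type's child columns by name rather than
-- by a single column index.
SELECT id, tags FROM hive_orc_list_tbl LIMIT 10;
```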
Co-authored-by: cambyzju <zhuxiaoli01@baidu.com>
In concurrent load, publish timeouts happen occasionally. This is
caused by the meta lock being held by another thread, so the publish task's
add-rowset step hangs for several seconds.
StorageEngine::start_delete_unused_rowset holds gc_mutex for a long time,
so add_unused_rowset waits for the lock; meanwhile compaction's modify_rowsets
(or other tablet methods) hold the meta_lock while calling add_unused_rowset,
which keeps the meta_lock occupied for too long and finally causes the publish timeout.
In this PR, I copy unused_rowsets under the lock and delete those rowsets outside the lock,
making gc_mutex much more lightweight so the meta lock can be acquired immediately in the publish thread.
My test shows no publish timeouts in concurrent stream load.