In some cases, ResultSet.getXxx cannot correctly handle null data, so I use Optional for the fields of JdbcFieldSchema to get better type mapping when some results are NULL.
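As an illustration only (field and method names below are assumptions, not the actual JdbcFieldSchema code), the idea is to check ResultSet.wasNull() after each getter and wrap the value in Optional:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

// Simplified sketch of a field-schema holder that keeps nullable JDBC metadata
// in Optional instead of relying on the 0/false defaults returned by getXxx.
public class NullableFieldSchema {
    private final Optional<Integer> dataType;      // java.sql.Types code, absent if NULL
    private final Optional<Integer> columnSize;    // precision, absent if NULL
    private final Optional<Integer> decimalDigits; // scale, absent if NULL

    public NullableFieldSchema(ResultSet rs) throws SQLException {
        // getInt() returns 0 for SQL NULL, indistinguishable from a real 0,
        // so consult wasNull() before wrapping the value.
        int type = rs.getInt("DATA_TYPE");
        this.dataType = rs.wasNull() ? Optional.empty() : Optional.of(type);

        int size = rs.getInt("COLUMN_SIZE");
        this.columnSize = rs.wasNull() ? Optional.empty() : Optional.of(size);

        int digits = rs.getInt("DECIMAL_DIGITS");
        this.decimalDigits = rs.wasNull() ? Optional.empty() : Optional.of(digits);
    }

    public Optional<Integer> getDataType() { return dataType; }
    public Optional<Integer> getColumnSize() { return columnSize; }
    public Optional<Integer> getDecimalDigits() { return decimalDigits; }
}
```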
* Revert "[refactor](mysql result format) use new serde framework to tuple convert (#25006)"
This reverts commit e5ef0aa6d439c3f9b1f1fe5bc89c9ea6a71d4019.
* run buildall
* MORE
* FIX
We need the query-to-view expression mapping when checking whether the logic of two hyper graphs is equal.
Getting all expression mappings in one pass may hurt performance, so the expressions are split into three types,
JOIN_EDGE, NODE, and FILTER_EDGE, and fetched step by step.
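A minimal sketch of the idea, with illustrative names (MappingType and LazyExpressionMapping are not the actual Doris classes): tag expressions by where they come from and resolve each group lazily, so comparing join edges does not pay for node/filter mappings.

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

enum MappingType { JOIN_EDGE, NODE, FILTER_EDGE }

class LazyExpressionMapping {
    private final Map<MappingType, Supplier<List<String>>> loaders = new EnumMap<>(MappingType.class);
    private final Map<MappingType, List<String>> cache = new EnumMap<>(MappingType.class);

    void register(MappingType type, Supplier<List<String>> loader) {
        loaders.put(type, loader);
    }

    // Only the requested group is computed and cached; the other groups stay untouched.
    List<String> get(MappingType type) {
        return cache.computeIfAbsent(type, t -> loaders.get(t).get());
    }
}
```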
Enhance properties regulator checking:
(1) the right bucket shuffle restriction takes effect only when either side has the NATURAL shuffle type.
(2) enhance the bothSideShuffleKeysAreSameOrder check by taking EquivalenceExprIds into consideration.
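A hedged sketch of check (2), assuming ExprIds are represented as ints and an exprId-to-equivalence-set map is available (names are illustrative, not the actual Doris code):

```java
import java.util.List;
import java.util.Map;

final class ShuffleOrderCheck {
    // Two shuffle keys at the same position match if they are the same expr
    // or belong to the same equivalence set.
    static boolean bothSideShuffleKeysAreSameOrder(List<Integer> leftKeys, List<Integer> rightKeys,
            Map<Integer, Integer> equivalenceSetOf) {
        if (leftKeys.size() != rightKeys.size()) {
            return false;
        }
        for (int i = 0; i < leftKeys.size(); i++) {
            int l = leftKeys.get(i);
            int r = rightKeys.get(i);
            boolean sameExpr = l == r;
            boolean sameEquivalenceSet = equivalenceSetOf.containsKey(l)
                    && equivalenceSetOf.containsKey(r)
                    && equivalenceSetOf.get(l).equals(equivalenceSetOf.get(r));
            if (!sameExpr && !sameEquivalenceSet) {
                return false;
            }
        }
        return true;
    }
}
```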
Co-authored-by: zhongjian.xzj <zhongjian.xzj@zhongjianxzjdeMacBook-Pro.local>
This PR improves the materialized view rule by adding additional validity checks for the struct info. The changes include:
1. In AbstractMaterializedViewRule.java:
* Modified the checkPattern method to also check the validity of the struct info using context.getStructInfo().isValid().
* Updated the error message for invalid struct info to include the view plan string for better debugging.
2. In StructInfo.java:
* Enhanced the isValid() method to ensure that all nodes in the hyper graph have non-null expressions, in addition to the existing validity check.
These changes ensure that the materialized view rule validates the struct info before proceeding with the optimization, so invalid struct info is rejected early and the error message carries enough detail for debugging. The extra check in StructInfo.isValid() verifies that every node in the hyper graph carries a non-null expression, which the rule relies on to work correctly.
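The shape of the extra check, sketched with assumed class and interface names (not the actual StructInfo code):

```java
import java.util.List;
import java.util.Objects;

class StructInfoSketch {
    interface HyperGraphNode {
        Object getExpression();
    }

    private final boolean basicValid;         // result of the pre-existing validity check
    private final List<HyperGraphNode> nodes; // nodes of the hyper graph

    StructInfoSketch(boolean basicValid, List<HyperGraphNode> nodes) {
        this.basicValid = basicValid;
        this.nodes = nodes;
    }

    // Valid only if the original check passes and every node still carries an expression.
    boolean isValid() {
        return basicValid
                && nodes.stream().map(HyperGraphNode::getExpression).allMatch(Objects::nonNull);
    }
}
```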
When the runtime filter targets a constant, PushDownContext.finalTarget is null.
In this case, do not push down the runtime filter.
example:
select * from (select 1 as x from T1) T2 join T3 on T2.x=T3.x
When pushing down RF(T3.x->T2.x) into "select 1 as x from T1", the column x is a constant, so the push-down stops.
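A minimal, self-contained sketch of the guard; PushDownContext and finalTarget follow the names in the text, the rest is illustrative:

```java
class PushDownContext {
    String finalTarget; // null when the target column resolves to a constant
}

class RuntimeFilterPushDown {
    boolean pushDown(PushDownContext context) {
        if (context == null || context.finalTarget == null) {
            // e.g. RF(T3.x -> T2.x) hitting "select 1 as x": no real column to filter, stop here.
            return false;
        }
        // ... generate and attach the runtime filter to context.finalTarget ...
        return true;
    }
}
```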
(cherry picked from commit 7f06cf0a0125d84c41e1edbb71366e293f70239d)
Problem and Cause:
In the original planner, the date_add function could resolve to either datetime or datev2. When the original planner chose datev2 as the constant date type, FE could not fold the date_add call because the constant-folding signature date_add(datev2, int) was missing.
Solution:
Add the corresponding function signatures of date_add/months_add/years_add in the original planner.
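An illustration of why the fold could not fire (this is not Doris code; the registry below is a hypothetical stand-in for the real signature lookup): folding only happens when a (function, argument types) signature is registered, so a lookup for date_add(DATEV2, INT) missed before the fix.

```java
import java.util.HashSet;
import java.util.Set;

class FoldSignatureRegistry {
    private final Set<String> signatures = new HashSet<>();

    void register(String name, String... argTypes) {
        signatures.add(name + "(" + String.join(",", argTypes) + ")");
    }

    boolean canFold(String name, String... argTypes) {
        return signatures.contains(name + "(" + String.join(",", argTypes) + ")");
    }
}

// Usage sketch:
//   registry.register("date_add", "DATETIME", "INT"); // existed before
//   registry.register("date_add", "DATEV2", "INT");   // the kind of signature this change adds
//   registry.canFold("date_add", "DATEV2", "INT");    // now true, so the constant can be folded
```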
baseTables in MTMVRelation stores all base tables of the nested materialized view;
baseTablesOneLevel is added to store only the base tables at the current level.
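A sketch of the resulting shape (field names follow the text; the class body is illustrative, not the actual MTMVRelation):

```java
import java.util.HashSet;
import java.util.Set;

class MTMVRelationSketch {
    // every base table referenced anywhere in the nested materialized view definition
    private final Set<String> baseTables = new HashSet<>();
    // only the base tables referenced directly at the current level
    private final Set<String> baseTablesOneLevel = new HashSet<>();

    Set<String> getBaseTables() { return baseTables; }
    Set<String> getBaseTablesOneLevel() { return baseTablesOneLevel; }
}
```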
Remove the is_append mode from the sink component for the following reasons:
1. The performance improvement from this mode is relatively minor, approximately 10%, as demonstrated in previous benchmarks.
2. The mode complicates maintenance. It requires a separate data writing path to avoid copying, which increases complexity and poses a risk of potential data loss.
I've already tested the compatibility with previous versions.
* [refactor](Nereids)refactor runtime filter generator (#34275)
1. unify the process of generating runtime filters for hash join and nested loop join
2. fix some bugs in runtime filter generation
3. remove some duplicated checks
(cherry picked from commit 07267faac0d9c6ef3bb1fd4ee101b4c761c8a2f2)
* [refactor](nereids) do not deny a runtime filter by removing an entry in aliasMap (#34559)
In the current version, there are two approaches to verify whether a join condition can be used to generate a runtime filter:
1. remove the output slot from aliasMap
2. pushDownVisitor.visit(...) returns false
The first approach has some drawbacks, so we prefer the second. In this PR, all the cases are handled by the second approach, and the related code for the first approach is removed.
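A rough sketch of the remaining check, with assumed names and a deliberately simplified plan shape (the real visitor is more involved): the visitor returns false when the join condition cannot reach a scan, instead of silently dropping an aliasMap entry.

```java
class PushDownVisitorSketch {
    interface PlanNode {
        boolean producesConstant(String slot);
        boolean isScan();
        PlanNode child();
    }

    // Returns true only if the slot produced by the join condition can reach a scan node;
    // any constant (or, in the real code, unsupported operator) on the path denies the filter.
    boolean visit(PlanNode node, String targetSlot) {
        if (node == null || node.producesConstant(targetSlot)) {
            return false;
        }
        if (node.isScan()) {
            return true;
        }
        return visit(node.child(), targetSlot);
    }
}
```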
(cherry picked from commit a29082bf31e66efa2df193b38347e610f2bf7464)
* rebase
We have an order-preserving mapping from string to double.
For a string column A, we have double values for A.min and A.max.
When estimating A < "abc", A.min/A.max can be used to judge whether "abc" lies between A.min and A.max, but they cannot be used for range estimation. Suppose "abc" is mapped to double x; if we compute selectivity by the formula sel = (x - A.min) / (A.max - A.min), we are likely to obtain extreme values.
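A worked illustration, assuming (purely for the example) that the mapping packs the leading bytes of the string into a double; the actual encoding in Doris may differ:

```java
public class StringRangeEstimate {
    // Order-preserving toy encoding: pack the first 7 bytes of the string into a double.
    static double encode(String s) {
        long v = 0;
        for (int i = 0; i < 7; i++) {
            v = (v << 8) | (i < s.length() ? (s.charAt(i) & 0xFF) : 0);
        }
        return (double) v;
    }

    public static void main(String[] args) {
        double min = encode("a");       // A.min
        double max = encode("zzzzzzz"); // A.max
        double x = encode("abc");       // the literal in A < "abc"
        double sel = (x - min) / (max - min);
        // Prints roughly 0.015: the encoded doubles cluster by leading byte, so the
        // range formula yields an extreme selectivity regardless of the real data distribution.
        System.out.println(sel);
    }
}
```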