Query rewrite by materialized view now supports bitmap_union and bitmap_union_count roll-up. The aggregate functions that support roll-up are listed as follows:
| Function in query | Function in materialized view | Function after roll-up |
|------------------|--------------|--------------------|
| max | max | max |
| min | min | min |
| sum | sum | sum |
| count | count | sum |
| count(distinct) | bitmap_union | bitmap_union_count |
| bitmap_union | bitmap_union | bitmap_union|
| bitmap_union_count | bitmap_union | bitmap_union_count |
This depends on https://github.com/apache/doris/pull/29256.
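As an illustration, here is a minimal Java sketch of the roll-up mapping in the table above; all class and method names are hypothetical, and this is not Doris's actual rewrite code:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the roll-up mapping in the table above.
// Key: aggregate function used in the query; value: the pair
// (function expected in the materialized view, function used after roll-up).
public class AggRollupMapping {
    record Rollup(String mvFunction, String rolledUpFunction) {}

    private static final Map<String, Rollup> MAPPING = Map.of(
            "max",                new Rollup("max", "max"),
            "min",                new Rollup("min", "min"),
            "sum",                new Rollup("sum", "sum"),
            "count",              new Rollup("count", "sum"),
            "count(distinct)",    new Rollup("bitmap_union", "bitmap_union_count"),
            "bitmap_union",       new Rollup("bitmap_union", "bitmap_union"),
            "bitmap_union_count", new Rollup("bitmap_union", "bitmap_union_count"));

    // Returns the rolled-up function if the query aggregate can be answered
    // by the given MV aggregate, otherwise empty (no rewrite possible).
    static Optional<String> rollup(String queryAgg, String mvAgg) {
        Rollup r = MAPPING.get(queryAgg);
        return (r != null && r.mvFunction().equals(mvAgg))
                ? Optional.of(r.rolledUpFunction())
                : Optional.empty();
    }
}
```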
test_unique_table.groovy and test_unique_table_like.groovy both use database test_unique_db.
If they run at the same time, we may get the following error:
java.sql.SQLException: errCode = 2, detailMessage = There are still some transactions in the COMMITTED state waiting to be completed. The database [default_cluster:test_unique_db] cannot be dropped. If you want to forcibly drop(cannot be recovered), please use "DROP database FORCE".
Both the master branch and branch-2.0 have this problem.
An error occurred when starting BE with JDK 17:
```java
Exception: java.lang.StackOverflowError thrown from the UncaughtExceptionHandler in thread "process reaper"
```
This error occurs when BE's Java code calls Runtime.exec() to fork a child process.
It turned out that the problem was caused by Doris's use of the `glog` library in the C++ layer.
The solution comes from: https://github.com/google/glog/issues/975
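For context, the following is a hypothetical minimal sketch of the triggering call pattern only, not Doris's actual code; the real fix is on the C++/glog side, per the issue linked above:

```java
// Hypothetical minimal reproduction of the call pattern only; the real fix
// is on the C++/glog side, per https://github.com/google/glog/issues/975.
public class ForkFromBeJvm {
    public static void main(String[] args) throws Exception {
        // Runtime.exec() forks a child process; the JVM's internal
        // "process reaper" thread (which waits for the child to exit) runs
        // with a small stack, which is where the StackOverflowError surfaced
        // while glog's signal handling was in effect.
        Process p = Runtime.getRuntime().exec(new String[]{"echo", "hello"});
        p.waitFor();
    }
}
```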
"operator_id" should be invisible, but the local shuffle is a planned operator in the BE (Backend), without a plan node ID. We use it in profiles and other places, and there might be duplicates. Therefore, we switch it to a negative number here to distinguish it as a plan node ID.
The current logic for SQL dialect conversion is all in the `fe-core` module, which may lead to the following issues:
- Changes to the dialect conversion logic may occur frequently; because the logic lives in the fe-core module, users have to upgrade their Doris version to pick up those changes, leading to a long change cycle.
- The cost of customized development is high, requiring users to replace the fe-core JAR package.
Turning it into a plugin addresses these issues.
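A minimal sketch of what such a plugin interface might look like, assuming a ServiceLoader-based discovery mechanism; the names are hypothetical and the actual Doris plugin API may differ:

```java
// Hypothetical sketch of a SQL dialect conversion plugin interface.
// The actual Doris plugin API may differ; names here are illustrative only.
public interface DialectConverterPlugin {
    // The dialect this plugin accepts as input, e.g. "presto" or "spark".
    String sourceDialect();

    // Convert a statement written in the source dialect into Doris SQL.
    // Returning the input unchanged means "no conversion needed".
    String convert(String sql);
}

// Loading converters from separately deployed JARs keeps the conversion
// logic out of fe-core, so it can be updated without upgrading Doris.
class DialectConverterRegistry {
    static String convert(String dialect, String sql) {
        for (DialectConverterPlugin plugin :
                java.util.ServiceLoader.load(DialectConverterPlugin.class)) {
            if (plugin.sourceDialect().equalsIgnoreCase(dialect)) {
                return plugin.convert(sql);
            }
        }
        return sql; // unknown dialect: pass the SQL through unchanged
    }
}
```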
Problem:
The FE UT failed because of a null pointer error.
Cause:
Getting the statement context from the connection context failed in the FE UT.
Resolved:
Add a null pointer check.
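A minimal sketch of the kind of null check described above, with stand-in interfaces instead of the real fe-core classes:

```java
// Hypothetical sketch: guard against a missing statement context when it is
// fetched from the connection context, as happens in some FE unit tests
// where no statement has been set up.
class QueryInfoHelper {
    static String queryIdOrUnknown(ConnectContextLike ctx) {
        StatementContextLike stmtCtx = ctx.getStatementContext();
        if (stmtCtx == null) {
            // In FE UTs the connection context may have no statement context;
            // fall back instead of dereferencing null.
            return "unknown";
        }
        return stmtCtx.getQueryId();
    }

    // Stand-ins for the real fe-core types, just to keep the sketch compilable.
    interface ConnectContextLike { StatementContextLike getStatementContext(); }
    interface StatementContextLike { String getQueryId(); }
}
```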
Force the use of zonemap for collecting string-type min/max.
String types do not use zonemap for min/max, because the zonemap value on the BE side is truncated at 512 bytes, which may make the value inaccurate. But this is acceptable for statistics min/max, and it also avoids scanning the whole table while sampling.
BE does not support runtime filters (RF) for NullSafeEquals, so the FE does not generate RFs for them.
However, after we added support for NullSafeEquals as a hash join condition,
the order of the RFs generated in the FE became wrong. This PR fixes it.