Fix: #3946
CL:
1. Add a prepare phase for the `from_unixtime()`, `date_format()` and `convert_tz()` functions, so that the format string is handled only once for all rows.
2. Look up the cctz timezone when initializing the `runtime state`, so that we don't need to look up the timezone for each row.
3. Add constant rewrite rule for `utc_timestamp()`
4. Add doc for `to_date()`
5. Comment out `push_handler_test`; it cannot run in DEBUG mode and will be fixed later.
6. Remove `timezone_db.h/cpp` and add `timezone_utils.h/cpp`
Performance results on 11,000,000 rows are shown below:
SQL1: `select count(from_unixtime(k1)) from tbl1;`
Before: 8.85s
After: 2.85s
SQL2: `select count(from_unixtime(k1, '%Y-%m-%d %H:%i:%s')) from tbl1 limit 1;`
Before: 10.73s
After: 4.85s
Date string formatting still seems slow; we may need a further enhancement for it.
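Assuming the prepare step kicks in for constant format/timezone arguments (not stated explicitly in this CL; table and column names below are illustrative), queries like these would parse the format or timezone only once per query instead of once per row:
```
SELECT from_unixtime(k1, '%Y-%m-%d') FROM tbl1;
SELECT date_format(dt, '%Y-%m-%d %H:%i:%s') FROM tbl1;
SELECT convert_tz(dt, 'Asia/Shanghai', 'UTC') FROM tbl1;
```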
In a large-scale cluster we may rolling-upgrade BEs. This patch adds a
column named 'Version' to the 'show backends;' command, as well as to the web page
'/system?path=//backends', to provide a way to check whether any
BE has not been upgraded yet.
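For example, after an upgrade you can inspect the new column to spot nodes still on the old version:
```
SHOW BACKENDS;
-- compare the 'Version' column across the BE rows
```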
1. Fix the bug introduced by https://github.com/apache/incubator-doris/pull/2947.
The following SQL returns 0000, which is wrong; the result should be 1601 (a quick check appears after this list):
```
select date_format('2020-02-19 16:01:12','%H%i');
```
2. Add a constant expression plan test to ensure the FE constant expression computation result is correct.
3. Remove the `castToInt` function in `FEFunctions`, which duplicates `CastExpr::getResultValue`.
4. Implement the `getNodeExplainString` method for `UnionNode`.
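As a quick sanity check, after the fix the query from item 1 should produce the expected value:
```
select date_format('2020-02-19 16:01:12','%H%i');
-- expected result: 1601
```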
The logic chain is as follows:
1. `date_format(if(, NULL, `dt`), '%Y%m%d')` is used as the HASH_PARTITIONED expr, which is not right; we should use the Agg intermediate materialized slot instead.
2. We don't use the Agg intermediate materialized slot as the HASH_PARTITIONED expr because
```
// the parent fragment is partitioned on the grouping exprs;
// substitute grouping exprs to reference the *output* of the agg, not the input
partitionExprs = Expr.substituteList(partitionExprs,
node.getAggInfo().getIntermediateSmap(), ctx_.getRootAnalyzer(), false);
parentPartition = DataPartition.hashPartitioned(partitionExprs);
```
the partitionExprs substitution failed.
3. The partitionExprs substitution failed because partitionExprs has a cast-to-date child, but the agg info's getIntermediateSmap has a cast-to-datetime child.
4. The cast-to-date or cast-to-datetime child exists because `TupleIsNullPredicate` inserts an `if` Expr. We don't have an `if` fn for the date type, so Doris uses the `if` fn for int.
5. The `date` in the cast-to-date depends on the date type of slot `dt`; the `datetime` in the cast-to-datetime depends on the datetime argument type of the `date_format` function.
So we could fix this issue by making the `if` fn support the date type, or by making the `date_format` fn support the date type.
This CL modifies `evalExpr()` in ExpressionFunctions so that it won't change the
`FunctionCallExpr` to `NullLiteral` when there is a null parameter in a UDF. This fixes the
problem described in issue #2913.
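For instance (with a hypothetical UDF name), a call like the following should still be dispatched to the UDF rather than being constant-folded to NULL just because one argument is NULL:
```
SELECT my_udf('abc', NULL);
```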
Support Grouping Sets, Rollup and Cube to extend the GROUP BY statement.
Support GROUPING SETS syntax:
```
SELECT a, b, SUM( c ) FROM tab1 GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
```
CUBE or ROLLUP like:
```
SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY ROLLUP|CUBE(a,b,c)
```
[ADD] Support grouping functions in exprs like grouping(a) + grouping(b) (#2039)
[FIX] Fix analyzer error in window functions (#2039)
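A minimal sketch of a grouping function used inside an expression, assuming the same tab1 table as above:
```
SELECT a, b, grouping(a) + grouping(b), SUM( c )
FROM tab1
GROUP BY CUBE(a, b);
```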
This CL changes:
1. Add functions bitmap_to_string and bitmap_from_string, which convert a bitmap
to/from a string containing all the bits in the bitmap.
2. Add function murmur_hash3_32, which computes the murmur hash of the
input strings.
3. Make the float-to-string cast function consistent with the user-visible result
logic.
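A quick illustration of the new functions (the bitmap_to_string output is assumed to use the same comma-separated form that bitmap_from_string accepts):
```
SELECT bitmap_to_string(bitmap_from_string('1,3,5'));  -- expected: '1,3,5'
SELECT murmur_hash3_32('hello');
```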
This CL makes bitmap_count, bitmap_union, and bitmap_union_count accept any expression whose return type is bitmap as input, so that we can support flexible bitmap expressions such as bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(2))).
This CL also creates separate documentation for each bitmap UDF to conform with the other functions.
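For example, the nested expression from the description can now be used directly:
```
SELECT bitmap_count(bitmap_and(to_bitmap(1), to_bitmap(2)));
```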
This commit fixes the issue [ISSUE-2002].
It changes the priority of the coalesce, ifnull, nullif functions, etc.
The priority of decimal is higher than varchar in the IS_SUPERTYPE_OF compare mode.
Example:
select coalesce(decimal_column, 1) from table;
the return type of coalesce should be decimal instead of varchar.
Add supertypes for datetime and date.
The supertypes of datetime are bigint, largeint, etc.
In IS_SUPERTYPE_OF compare mode, function(bigint, bigint, bigint) is a supertype of function(datetime, bigint, int).
Example:
select coalesce(now(), 1) from web_returns;
the return type of coalesce should be bigint instead of varchar.
In this change list:
1. Validate the HLL column when loading data; if the data is invalid, the row
will be filtered.
2. Treat invalid types of HLL data as an empty HLL when serializing. With
this change, all ingested data will be valid.
3. Treat nullptr or invalid types of HLL data as an empty HLL when deserializing.
With this change, dirty data can be handled normally.
4. Rename the function empty_hll to hll_empty (see the example after this list).
5. Disable memtable_flush_execute_test because it fails
sometimes. When tearing down, some threads are not joined, and they will
visit destroyed resources, which is invalid.
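A small check of the renamed function (assuming hll_cardinality is available to read the result):
```
SELECT hll_cardinality(hll_empty());  -- expected: 0
```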
1. get_json_xxx() now supports using quotes to escape dots in keys (see the example below).
2. Implement the json_path_prepare() function to preprocess the json_path.
The performance of get_json_string() on 1,000,000 rows drops from 2.27s to 0.27s.
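A minimal sketch of the quoted-key form, assuming keys containing a dot can be quoted in the path so the dot is treated as part of the key rather than a separator:
```
SELECT get_json_string('{"a.b": "v"}', '$."a.b"');  -- expected: 'v'
```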