1. Upgrade log4j to 2.12.1
2. Add 2 new FE configs:
'sys_log_delete_age', default '7d', for the sys log.
'audit_log_delete_age', default '30d', for the audit log.
This means a log file will be deleted if its last modification time is more than 7/30 days ago.
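A minimal fe.conf sketch with the new entries at their defaults (the key = value form follows the usual FE config file style; file location and comments are illustrative only):

    # delete sys logs whose last modification time is more than 7 days ago
    sys_log_delete_age = 7d
    # delete audit logs whose last modification time is more than 30 days ago
    audit_log_delete_age = 30d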
Currently, in the date_add/date_sub functions (DATE_ADD(DATETIME date, INTERVAL expr type)), the expr parameter is the interval you want to add.
Doris converts these functions to xxx_sub/xxx_add. However, only the days_add function exists in the FE, so other date_add forms, such as select date_add('2010-11-30 23:59:59', INTERVAL 2 DAY), cannot be pruned.
So the other functions are added to support FE partition pruning.
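For illustration, a hypothetical partition-pruning query of this form (table and column names are made up):

    select * from orders
    where dt >= date_add('2010-11-30 23:59:59', INTERVAL 2 DAY);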
* Change Null-safe equal operator from cross join to hash join
ISSUE-2136
This commit changes the join method from cross join to hash join when the equality operator is the null-safe operator '<=>'.
It improves the speed of queries that use the null-safe equal operator.
The finds_nulls field records whether each equality conjunct is null-safe:
finds_nulls[i] is true if the i-th equality conjunct is null-safe.
When finds_nulls[i] is true, the equal function of the hash table returns true if both val and loc are NULL.
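For example, a query of the following shape now runs as a hash join instead of a cross join (table and column names are hypothetical):

    select * from t1 join t2 on t1.k1 <=> t2.k1;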
Some stmts, such as DDL and DML stmts, are forwarded from a non-master FE
to the Master FE. But these stmts are logged in the non-master FE's audit log
with the original stmt id generated on the non-master FE.
So we should also pass this original stmt id to the Master, so that we can track
the stmt's execution process more easily.
The precision argument of the cast function is ignored. For example,
Query: select cast(10 as decimal(1,0));
Result: 10
Although the value overflows the target precision, it is still returned.
So a cast from decimal(a,b) to decimal(c,d) cannot take effect.
Some users have the requirement that only some columns be updated in
one load operation, while the others retain their original values. However, Doris
can't handle this situation, because the user must specify values for all
columns. So if a column's aggregation method is REPLACE, the user must query the
original value and write it back unchanged. This often means extra work for the
user.
With this CL, the user can use REPLACE_IF_NOT_NULL instead of
REPLACE. Then, when loading data into the table, if the user does not intend to change
the value of a column, they can specify NULL for that column and Doris will
retain the original value of that column.
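A minimal DDL sketch of such a table (table/column names and the distribution clause are made up; the point is only the REPLACE_IF_NOT_NULL aggregation on v2):

    CREATE TABLE example_tbl (
        k1 INT,
        v1 INT REPLACE,
        v2 INT REPLACE_IF_NOT_NULL NULL
    )
    AGGREGATE KEY (k1)
    DISTRIBUTED BY HASH (k1) BUCKETS 1;

Loading a row with NULL in v2 then keeps the previously stored v2 value, while v1 is overwritten as before.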
At present, we do not support SQL MODE similar to MySQL's. In MySQL, SQL MODE is stored in the global session and the session as a 64-bit value, and every bit of this value represents one mode state. Besides, MySQL supports a combined mode composed of several modes.
We should support SQL MODE to deal with SQL dialect problems. We can follow the MySQL way: store SQL MODE in the session as a bit set and parse it into a string when we need to return it to the client.
This commit suggests a solution to support SQL MODE. But it's just a sample, and the mode types in SqlModeHelper.java are not really meaningful for now.
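A hypothetical example of the intended client-facing behavior, following MySQL's convention (the concrete mode names are placeholders, since the mode types in SqlModeHelper.java are only samples):

    SET sql_mode = 'MODE_A,MODE_B';   -- stored internally as a 64-bit value with two bits set
    SELECT @@sql_mode;                -- parsed back into the string 'MODE_A,MODE_B'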
Mainly fix the following issues:
1. A null pointer exception is raised when a database or table is dropped. The expected behavior is that the routine load job is stopped.
2. Memory leaks. Batch routine load task submissions are no longer performed, and modifications are submitted separately for each task.
3. Unreasonable task timeout.
Routine load tasks should not be queued in the BE thread pool for execution. A task sent to the BE should be executed immediately; otherwise the task will time out on the FE side first, which eventually leads to constant timeouts for all subsequent tasks.
4. All routine load jobs should be scheduled as soon as they are submitted, rather than waiting for an available BE slot. Otherwise, jobs submitted later may never be scheduled.
Add a new type: Object. Currently, it is mainly for complex aggregate metrics (HLL, Bitmap).
The Object type has the following constraints:
1. Object type cannot be used as a key column type.
2. Object type does not support any index (BloomFilter, short key, zone map, inverted index).
3. Object type does not support filter and group by.
In the implementation:
The Object type reuses StringValue and StringVal, because in the storage engine the Object type is binary: it has a pointer and a length.
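For context, a minimal DDL sketch of a table using such a metric column (the HLL_UNION aggregation and all table/column names are assumptions for illustration):

    CREATE TABLE uv_tbl (
        dt DATE,
        uv HLL HLL_UNION
    )
    AGGREGATE KEY (dt)
    DISTRIBUTED BY HASH (dt) BUCKETS 1;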
[QUERY]
If the return type of a function differs from the type of the result expr, the query returns an incorrect result.
Example:
the type of the expr is date
the return type of the function is int
So the upper fragment receives an int value instead of a date, while the result expr is a date.
Without a cast function, the result of the query is incorrect.
Fix ISSUE-2017.
This commit enables the cast function for date.
A date literal can be cast to any target type to which it is implicitly castable, such as int, bigint, largeint.
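For illustration, a hypothetical query exercising such a cast:

    select cast(cast('2010-11-30' as date) as bigint);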
This commit fixes [ISSUE-2002].
It changes the priority of the coalesce, ifnull, nullif functions, etc.
The priority of decimal is higher than varchar in the IS_SUPERTYPE_OF compare mode.
Example:
select coalesce(decimal_column, 1) from table;
the return type of coalesce should be decimal instead of varchar.
Add supertypes for datetime and date.
The supertypes of datetime are bigint, largeint, etc.
In IS_SUPERTYPE_OF compare mode, the function(bigint, bigint, bigint) is a supertype of function(datetime, bigint, int).
Example:
select coalesce(now(), 1) from web_returns;
the return type of coalesce should be bigint instead of varchar.
If the FE is restarted between a txn being committed and becoming visible, the load job will be rescheduled and fail with 'label already exists'.
The reason is that the transaction state of the load job and the meta of the load job are inconsistent.
So the replay of the txn attachment needs to be done in the replayOnCommitted function.
The load job state and progress are correct after that.
The prepare/close steps of scalar functions are already supported in the execution framework. All we need to do is support them in the syntax and metadata in the frontend.
In addition, the 'Hive' binary type of scalar function does NOT support the prepare/close steps; we need to make it support them.
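For illustration, a hypothetical DDL sketch of a scalar UDF declaring prepare/close symbols (the property names prepare_fn/close_fn and all names, symbols and paths here are assumptions, not necessarily the exact syntax introduced by this commit):

    CREATE FUNCTION my_udf(INT) RETURNS INT PROPERTIES (
        "symbol" = "MyUdf",
        "prepare_fn" = "MyUdfPrepare",
        "close_fn" = "MyUdfClose",
        "object_file" = "http://host/path/libmyudf.so"
    );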
This commit enables the avg operator in the FE instead of converting the avg function into sum/count.
Also, this commit fixes the bug in decimalv2 avg which caused a core dump in the BE:
the int128 could not be assigned directly.
The speed of the avg function is similar to the sum function after this enhancement.