Currently, in the date_add/date_sub functions (DATE_ADD(DATETIME date, INTERVAL expr type)), the expr parameter is the interval you want to add.
Doris rewrites these functions to the corresponding xxx_add/xxx_sub form. However, FE only implements days_add, so other date_add forms, such as select date_add('2010-11-30 23:59:59', INTERVAL 2 DAY), cannot be pruned.
So I've added the other functions to support FE partition pruning.
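For example, with a table range-partitioned on a datetime column (the table and column names below are made up for illustration), a predicate built from date_add should still be prunable:

    -- hypothetical table `events`, range-partitioned on `event_time`
    SELECT count(*)
    FROM events
    WHERE event_time < date_add('2010-11-30 23:59:59', INTERVAL 2 DAY);
    -- FE can now fold date_add(...) to the constant '2010-12-02 23:59:59'
    -- and scan only the partitions covering that range.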
* Use hash join instead of cross join for the Null-safe equal operator
ISSUE-2136
This commit changes the join method from cross join to hash join when the equal operator is the Null-safe operator '<=>'.
This improves the speed of queries that use the Null-safe equal operator.
The finds_nulls field records whether each equal operator is Null-safe:
finds_nulls[i] is true if the i-th equal operator is Null-safe.
When finds_nulls[i] is true, the equal function in the hash table returns true if both val and loc are NULL.
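As a sketch (the tables t1 and t2 are made up), such a join can now be executed as a hash join in which NULL keys match each other:

    SELECT *
    FROM t1 JOIN t2 ON t1.k1 <=> t2.k1;
    -- '<=>' evaluates to true when both sides are NULL,
    -- so NULL keys can be matched inside the hash table
    -- instead of falling back to a cross join.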
In a production environment, the HDFS broker should read its config from hdfs-site.xml instead of relying on manual input, because HDFS configuration is complex.
Add the dfs.namenode.kerberos.principal.pattern config for Kerberos auth.
Simplify some of the config-reading code for brevity.
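For reference, the Kerberos principal pattern is a standard HDFS client setting in hdfs-site.xml (the wildcard value below is only an example):

    <property>
        <name>dfs.namenode.kerberos.principal.pattern</name>
        <value>*</value>
    </property>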
Some stmts, such as DDL and DML stmts, are forwarded from a non-Master FE
to the Master FE. But these stmts are logged in the non-Master FE's audit log
with the origin stmt id generated on the non-Master FE.
So we should also pass this origin stmt id to the Master, so that we can track
the stmt's execution process more easily.
The precision of the cast function is ignored. For example,
Query: select cast(10 as decimal(1,0));
Result: 10
Although the numeric value overflows the target precision, it is still returned.
So the cast function from decimal(a,b) to decimal(c,d) need not be executed.
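To make the behavior concrete, both of the following return the input unchanged, even though the second should overflow:

    select cast(10 as decimal(2,0));  -- 10, fits in 2 digits
    select cast(10 as decimal(1,0));  -- also 10, although decimal(1,0) holds only 1 digit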
Some users have the requirement that only some of the columns be updated in
one load operation, while the others retain their original values. However, Doris
can't handle this situation, because the user must specify a value for all
columns. So if a column's aggregation method is REPLACE, the user must query the
original value and write it back, which often takes extra work.
If this CL is applied, the user can use REPLACE_IF_NOT_NULL instead of
REPLACE. Then, when loading data into the table, if the user doesn't intend to change
the value of a column, they can specify NULL for that column, and Doris will
retain the original value of that column.
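A minimal sketch (the table and columns are made up; exact DDL options may differ):

    -- v2 keeps its previous value whenever NULL is loaded for it
    CREATE TABLE user_profile (
        user_id BIGINT,
        v1 INT REPLACE,
        v2 INT REPLACE_IF_NOT_NULL NULL
    )
    AGGREGATE KEY(user_id)
    DISTRIBUTED BY HASH(user_id) BUCKETS 1;
    -- loading the row (1, 100, NULL) updates v1 to 100
    -- but leaves the stored value of v2 unchanged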
At present, we do not support SQL MODE similar to MySQL's. In MySQL, SQL MODE is stored in the global and session scopes as a 64-bit value, where each bit represents the state of one mode. Besides, MySQL supports combined modes, which are composed of several individual modes.
We should support SQL MODE to deal with SQL dialect problems. We can follow the MySQL approach: store SQL MODE in the session as a bitmask and parse it into a string when we need to return it to the client.
This commit suggests a solution to support SQL MODE. But it's just a sample, and the mode types in SqlModeHelper.java are not really meaningful for now.
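The intended usage would mirror MySQL (the mode name below is a placeholder, since the sample modes in SqlModeHelper.java are not yet meaningful):

    SET sql_mode = 'PIPES_AS_CONCAT';  -- stored internally as one bit in a 64-bit mask
    SELECT @@session.sql_mode;         -- the bitmask is parsed back into a string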
[STORAGE]
1. Fix a memory leak when calling the string builder's get_dictionary_page.
2. Fix deleting an invalid memory address in bitshuffleBuilder when no array growth happens.
When bitshuffleBuilder doesn't grow its array, a data page that was not allocated with new is
returned to ColumnWriter.
When ColumnWriter destructs, it deletes the data page, and this causes a core dump.
Mainly fix the following issues:
1. A null pointer exception is raised when a database or table is dropped. The expected behavior is that the routine load job is stopped.
2. Memory leaks. Batch routine load task submissions are no longer performed, and modifications are submitted separately for each task.
3. Unreasonable task timeouts.
Routine load tasks should not be queued in the BE thread pool for execution. A task sent to a BE should be executed immediately; otherwise the task will time out on the FE side first, which eventually leads to constant timeouts for all subsequent tasks.
4. Every routine load job should be scheduled as soon as it is submitted, instead of waiting for an available BE slot. Otherwise, jobs submitted later may never be scheduled.
Add a new type: Object. Currently, it's mainly for complex aggregate metrics (HLL, Bitmap).
The Object type has the following constraints:
1. An Object type column cannot be a key column.
2. The Object type does not support any index (BloomFilter, short key, zone map, inverted index).
3. The Object type does not support filtering or group by.
In the implementation:
The Object type reuses StringValue and StringVal, because in the storage engine an Object value is binary: it has a pointer and a length.
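For example (illustrative DDL using the aggregate-table syntax; the table and column names are made up), HLL and Bitmap metrics are declared as value columns with their aggregate functions:

    CREATE TABLE page_uv (
        page_id INT,
        visitors HLL HLL_UNION,
        user_ids BITMAP BITMAP_UNION
    )
    AGGREGATE KEY(page_id)
    DISTRIBUTED BY HASH(page_id) BUCKETS 1;
    -- the HLL/Bitmap columns cannot be keys, cannot be indexed,
    -- and cannot appear in filters or GROUP BY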
[QUERY]
If the return type of a function differs from the type of the result expr, the query returns an incorrect result.
Example:
the type of the expr is date
the return type of the function is int
In that case, the upper fragment will receive an int value instead of a date, even though the result expr is a date.
If no cast function is inserted, the result of the query will be incorrect.
ISSUE-2069: This kind of query could get stuck.
The sender fails to send the last packet to the receiver.
Also, the failure is not reported to the FE, so the query is not cancelled.
The error log looks like "body_size=xxxx from xxx:xxx is too large".
The root cause is that a packet of the query is bigger than brpc's max_body_size.
This commit adds a config named brpc_max_body_size, which is used to change brpc's max_body_size.
Also, users can change max_body_size directly on-the-fly via "http://host:brpc_port/flags".
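For example, in be.conf (the value below is only an illustration):

    # raise brpc's maximum packet size to 200 MB
    brpc_max_body_size = 209715200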