A new class named 'AuthorizationInfo' is used to save the auth info in jobs.
With it, a job no longer needs to retrieve the auth info by meta id, which may throw an exception when the db or table has been dropped or renamed.
The persistence of 'AuthorizationInfo' takes effect as of META_VERSION 56.
This can happen if the Doris cluster is deployed with, for example, an all-SSD medium,
but all tables are created with the HDD storage medium property. Then getLoadScore(SSD) will
always return 0.0, so no replica will be chosen when trying to delete redundant
replicas.
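A minimal sketch of the failure mode (names are illustrative, not Doris source): when every replica's load score for the queried medium is 0.0, no candidate is ever selected for deletion.

```python
def choose_redundant_replica(replicas, medium):
    # Pick the replica on the backend with the highest load score for
    # the given medium; a score of 0.0 never qualifies, so with a
    # medium mismatch (all-SSD cluster, HDD tables) nothing is chosen.
    best = None
    for r in replicas:
        score = r["load_score"].get(medium, 0.0)
        if score > 0.0 and (best is None or score > best["load_score"].get(medium, 0.0)):
            best = r
    return best
```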
Before changing the default insert operation to streaming load, if the select result
of an insert stmt was empty, a label was still returned to the user, and the user
could use this label to check the insert load job's status.
After changing the insert operation, if the select result is empty, an exception
is thrown to the user client directly, without any label.
This new usage pattern is not friendly to existing users, as it forces
them to change the way they use the insert operation.
So I add a new FE config 'using_old_load_usage_pattern', default false.
If set to true, a label is returned to the user even if the select result is empty.
The issue is the following:
Request 1:
BE aborts the txn.
The attachment of the txn is set.
The attachment of the txn is set to null without holding the txn lock, because the task has been aborted by FE.
Request 2:
BE aborts the txn again.
The attachment of the txn is set again.
Request 1:
The attachment is not null, so the job tries to find the task and commit it.
The job cannot find the task, so it is paused (NullPointerException).
In this commit, the commit request checks whether the task exists instead of checking that the txn attachment
is not null.
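The fix can be sketched like this (a Python sketch, not the actual Java code; `tasks`, `attachments`, and the method names are illustrative):

```python
class TxnCommitHandler:
    def __init__(self):
        self.tasks = {}        # task_id -> task state
        self.attachments = {}  # txn_id -> attachment, may be set/cleared racily

    def handle_commit(self, txn_id, task_id):
        # Old behavior: proceed when attachments.get(txn_id) is not None,
        # which races with another request re-setting the attachment.
        # New behavior: only commit when the task itself still exists.
        if task_id not in self.tasks:
            return False  # task already aborted/removed; nothing to commit
        # ... commit the task under the txn lock ...
        return True
```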
Without the dbId parameter, the backend report version cannot be
updated when the publish task reports to FE, which may cause an incorrect
order of reports.
Related commit: 5c1b4f6
If one child of a binary predicate is a column and the other is a constant expr,
set the compatible type to the column's type.
e.g.:
k1(int):
... WHERE k1 = "123" --> k1 = cast("123" as int);
k2(varchar):
... WHERE 123 = k2 --> cast(123 as varchar) = k2
This optimization is for the case where some users use an int column to save dates, e.g. 20190703,
but query with the predicate: col = "20190703".
Without casting "20190703" to int, the query optimizer cannot do partition pruning correctly.
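The rule can be sketched with a toy function (illustrative only; the real planner rewrites expression trees, not raw values):

```python
def cast_constant_to_column_type(col_type, const_value):
    # When one side is a column and the other a constant, the constant
    # is cast to the column's type, so the predicate stays usable for
    # partition pruning on the column.
    if col_type == "int":
        return int(const_value)   # k1 = "123"  ->  k1 = 123
    if col_type == "varchar":
        return str(const_value)   # 123 = k2    ->  "123" = k2
    return const_value
```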
When a rollup table contains value columns of the REPLACE aggregation type,
all key columns of the base table should be included in this rollup.
Without all key columns, the order of rows is undefined.
For example, given table schema k1,k2,k3,v1(REPLACE) with rows:
1,2,3,A
1,2,4,B
1,2,5,C
a rollup with columns (k1,k2,v1) sees:
1,2,A
1,2,B
1,2,C
No matter which of A, B, or C is the last winner, the result is meaningless to the user.
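The nondeterminism can be sketched as follows (illustrative, not the storage engine's code): REPLACE keeps the last value seen per rollup key, and once base key columns are dropped, "last" depends only on arrival order.

```python
def rollup_replace(rows, key_cols):
    # REPLACE keeps the last value per rollup key; when the rollup drops
    # base key columns (k3 here), the rows 1,2,3,A / 1,2,4,B / 1,2,5,C
    # collapse into one key (1,2) and the winner is order-dependent.
    out = {}
    for row in rows:
        key = tuple(row[i] for i in key_cols)
        out[key] = row[-1]
    return out
```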
Also fix a bug where setting the password for the current user on a non-master FE
throws a NullPointerException.
This commit changes idToTable to a ConcurrentHashMap in Database, so there is no need to lock the database before getTable().
The database is locked in GlobalTxnManager, and the load job is locked after that.
So the lock order is: database, load manager, load job.
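The lock order can be illustrated with a small sketch (Python threading as a stand-in; the FE is Java, and the lock names here are assumptions):

```python
import threading

# Always acquire in this fixed order to avoid deadlock:
# database -> load manager -> load job
db_lock = threading.RLock()
load_mgr_lock = threading.RLock()
load_job_lock = threading.RLock()

def commit_load_txn():
    with db_lock:                # GlobalTxnManager locks the database first
        with load_mgr_lock:      # then the load manager
            with load_job_lock:  # then the individual load job
                return "committed"
```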
The previous setting of the timeout of a publish version task was a mess.
I change it to a configurable time, defaulting to 30 seconds.
And when the table is under rollup or schema change, this timeout is doubled.
This is a kind of best-effort optimization, because with a short timeout,
a replica's publish version task is more likely to fail, and if a quorum of replicas
of a tablet fails to publish, the alter job fails.
If the table is not under rollup or schema change, the failure of a replica's
publish version task has only a minor effect, because the replica can be repaired
by the tablet repair process very soon. But the tablet repair process does not
repair rollup replicas.
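The timeout rule boils down to the following sketch (the default of 30 seconds comes from this description; the names are illustrative):

```python
PUBLISH_VERSION_TIMEOUT_SECOND = 30  # configurable, default 30 seconds

def publish_version_timeout(under_alter):
    # Double the timeout while the table is under rollup or schema
    # change: a quorum of failed publishes there fails the whole alter
    # job, and the repair process will not fix rollup replicas.
    if under_alter:
        return PUBLISH_VERSION_TIMEOUT_SECOND * 2
    return PUBLISH_VERSION_TIMEOUT_SECOND
```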
* This commit contributes streaming mini load
The operation of streaming mini load is the same as before, and the user can still check the load via the frontend.
The difference is that streaming mini load finishes the task before the REST API replies, while the non-streaming version only registers a load.
* When upgrading Doris
Upgrading either FE or BE first is supported. After both FE and BE are upgraded, streaming mini load takes effect.
* For multi mini load
Multi mini load still uses the non-streaming mini load; the behavior of multi mini load has not been changed.
* Add an interface named isSupportedFunction
This function protects the correctness of a new feature that spans BE and FE during upgrading.
* Add streaming jobs in LoadProc
* Add a config named desired_max_waiting_jobs
1. If the number of pending load jobs is more than desired_max_waiting_jobs, the create load stmt will be rejected.
2. If the number of need_scheduler load jobs is more than desired_max_waiting_jobs, the new routine load job will be rejected.
3. The desired max size is only an expected number, so the size of the queue may sometimes exceed it.
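The admission check is best-effort, roughly as sketched below (the default value here is an assumption for illustration):

```python
DESIRED_MAX_WAITING_JOBS = 100  # FE config; default value is illustrative

def admit_new_load_job(num_waiting):
    # Reject new load stmts / routine load jobs once the waiting queue
    # reaches the desired size. The check and the enqueue are not
    # atomic, so the queue may still briefly exceed this number.
    return num_waiting < DESIRED_MAX_WAITING_JOBS
```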
* Merge load manager and load jobs in the jobs proc dir
Currently, historical alter jobs are kept for a while before being removed.
This time is configured by label_keep_max_second, which is also used for
load jobs.
But to avoid keeping too many historical load jobs in memory,
'label_keep_max_second' is always set to a short time, causing alter jobs to be
removed very soon.
Add a new FE config 'history_job_keep_max_second' to configure the keep time of
alter jobs. The default is 7 days.
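The eviction decision then splits by job type, roughly as below (the 7-day default comes from this description; the label_keep_max_second value is an assumption for illustration):

```python
LABEL_KEEP_MAX_SECOND = 3 * 24 * 3600        # short keep time for load labels (value illustrative)
HISTORY_JOB_KEEP_MAX_SECOND = 7 * 24 * 3600  # new config for alter jobs, default 7 days

def is_expired(finish_time, now, is_alter_job):
    # Alter jobs use the new, longer keep time; load jobs keep the
    # short label_keep_max_second to bound memory usage.
    keep = HISTORY_JOB_KEEP_MAX_SECOND if is_alter_job else LABEL_KEEP_MAX_SECOND
    return now - finish_time > keep
```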
If there are only 3 backends and the replication num is 3, then when one replica of a
tablet is bad, there is no 4th backend for tablet repair. So we need to delete
the bad replica first to make room for a new replica.
This change adds a load property named strict_mode, which is used to reject incorrect data.
When it is set to false, incorrect data is loaded as NULL, just like before.
When it is set to true, incorrect data in a column without a transform expr is filtered out.
strict_mode is supported in broker load v2 now. It will be supported in stream load later.
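The filtering rule can be sketched as follows (illustrative only, not the loader's actual code):

```python
def handle_cell(parse_ok, has_transform_expr, strict_mode):
    # Decide how a source value is handled per column.
    if parse_ok:
        return "keep"
    if strict_mode and not has_transform_expr:
        return "filter_row"   # strict mode rejects bad data on plain columns
    return "load_as_null"     # previous (non-strict) behavior
```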
For example, we start the process for the first time and the pid is 12345. Due to an accident, the process is killed but fe.pid remains. Then we start the process a second time, and the pid is 6789. fe.pid then shows 67895, because file.write only overwrites the first four digits. This can easily happen when we use supervise. So I add file.setLength(0) to delete the old data before writing the new pid.
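The bug and the fix can be reproduced with a small sketch (Python as a stand-in for the Java file.write / file.setLength(0) calls):

```python
import os
import tempfile

def write_pid_buggy(path, pid):
    # Writes over the old content without truncating: writing "6789"
    # over "12345" leaves "67895" behind.
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.write(fd, str(pid).encode())
    finally:
        os.close(fd)

def write_pid_fixed(path, pid):
    # The fix: truncate to length 0 first (file.setLength(0) in Java),
    # so stale digits from a longer old pid are removed.
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, 0)
        os.write(fd, str(pid).encode())
    finally:
        os.close(fd)
```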