This commit raises the precedence of the || operator above the + - * / MOD operators.
It solves problem 2.1 mentioned in issue #2396.
Problem 2.2 in issue #2396 is actually the same problem mentioned in issue #2142. As stated earlier in PR #2398, modifying that logic would cause semantic errors in INSERT and load, so this commit leaves that bug unsolved for now.
Appendix:
In MySQL 5.7.27:
|| and |
select 23|1||7;
23
select (23|1)||7;
237
select 23|(1||7);
23
Priority : || > |
|| and &
select 10&1||7;
0
select (10&1)||7;
7
select 10&(1||7);
0
Priority : || > &
|| and ^
select 10^1||7;
27
select (10^1)||7;
117
select 10^(1||7);
27
Priority : || > ^
|| and ~
select ~1||7;
184467440737095516147
select ~(1||7);
18446744073709551598
Priority : || < ~
[Tag System]
This CL includes 2 parts:
1. Add classes related to "tag":
Resource: the collective name for the nodes that provide various service capabilities in a Doris cluster.
Tag: a Tag consists of a type and a name.
TagSet: represents a set of tags.
TagManager: maintains 2 indexes:
one from tag to resources;
one from resource to tags (see the sketch below).
ISSUE #1723
2. Use JSON as the serialization method for metadata.
Introduce the GSON library to serialize the new classes mentioned above.
ISSUE #2415, #2389
The GSON version is updated to 2.8.6.
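For the TagManager indexes described in part 1, here is a minimal sketch (the actual FE code is Java; this C++ rendering and all names in it are illustrative only):
```
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <tuple>

// Illustrative stand-ins: a Tag is a (type, name) pair; a resource is a node id.
struct Tag {
    std::string type, name;
    bool operator<(const Tag& o) const {
        return std::tie(type, name) < std::tie(o.type, o.name);
    }
};
using ResourceId = int64_t;

class TagManager {
public:
    // Keep both indexes in sync on every update.
    void add_tag(ResourceId res, const Tag& tag) {
        _tag_to_resources[tag].insert(res);
        _resource_to_tags[res].insert(tag);
    }
private:
    std::map<Tag, std::set<ResourceId>> _tag_to_resources; // index 1: tag -> resources
    std::map<ResourceId, std::set<Tag>> _resource_to_tags; // index 2: resource -> tags
};
```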
1. Because we don't support the array type currently, variable arguments are used instead.
2. intersect_count directly returns the final count, not a bitmap like bitmap_union, because returning a bitmap from intersect_count is more complex and needs more serialization. If we really need a bitmap result from intersect_count, we can do that in another PR, and it won't have compatibility problems.
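Conceptually, the aggregation keeps one union bitmap per filter value (the variable arguments) and intersects them only at the end, returning just the cardinality. A minimal sketch, with std::set standing in for the real bitmap type:
```
#include <cstdint>
#include <set>
#include <vector>

using Bitmap = std::set<uint64_t>;  // stand-in for the real bitmap type

// Finalize step of intersect_count: intersect the per-filter-value bitmaps
// and return only the count, so no result bitmap needs to be serialized.
uint64_t intersect_count_finalize(const std::vector<Bitmap>& per_filter) {
    if (per_filter.empty()) return 0;
    Bitmap acc = per_filter[0];
    for (size_t i = 1; i < per_filter.size(); ++i) {
        Bitmap next;
        for (uint64_t v : acc) {
            if (per_filter[i].count(v) > 0) next.insert(v);
        }
        acc = std::move(next);
    }
    return acc.size();
}
```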
The multi cluster feature will be deprecated soon.
Add an FE config "disable_cluster_feature" (default: true) to
forbid any cluster-related operations, including:
* create/drop cluster
* add free backend/add backend to cluster/decommission cluster balance
* change the backend number of a cluster
* link/migrate db
* fix ut
Doris uses RLE to encode/decode integers.
The RLE encoding/decoding algorithm comprises four sub-types:
Short Repeat : used for short repeating integer sequences.
Direct : used for integer sequences whose values have a relatively constant bit width.
Patched Base : used for integer sequences whose bit widths vary a lot.
Delta : used for monotonically increasing or decreasing sequences.
This bug occurs in the Patched Base type for large negative numbers.
In Patched Base, the base value is stored in 1 to 8 bytes, and the byte count is encoded as 0 to 7 (bytes minus 1).
If the base value is 8 bytes, the encoded value for the base width should be 7.
But it is currently encoded as 8, which is the problem.
This wrong encoding makes the read-back data inconsistent with the loaded data.
In extreme cases, the BE process will core dump because of an illegal address.
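A minimal sketch of the fix (helper names are hypothetical): the byte count 1 to 8 must be stored as the count minus 1, so an 8-byte base still fits the encoded range 0 to 7:
```
#include <cstdint>

// The base value occupies 1..8 bytes; the header field stores bytes - 1.
uint8_t encode_base_width(uint8_t base_bytes) {
    return base_bytes - 1;  // 1 byte -> 0, ..., 8 bytes -> 7 (the buggy code produced 8)
}

uint8_t decode_base_width(uint8_t encoded) {
    return encoded + 1;     // 0 -> 1 byte, ..., 7 -> 8 bytes
}
```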
Add a flag in RowsetMeta to record whether the rowset has already been deleted from the rowset meta.
Before this PR, processing 37156 rowsets cost 1642 s.
With this PR, processing 37319 rowsets costs just 1 s.
[Storage][V2 Format]
Currently all columns use DEFAULT_ENCODING as ColumnMetaPB.encoding. However, we may change the default encoding type for a data type in the future; therefore, a concrete encoding type such as PLAIN_ENCODING/BIT_SHUFFLE should be stored in the column meta to support encoding evolution.
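A hedged sketch of the idea (the enums below are simplified stand-ins for Doris's FieldType/EncodingTypePB, and the defaults shown are assumptions): resolve the concrete encoding at write time and persist that, instead of DEFAULT_ENCODING:
```
// Simplified stand-ins for the real FieldType / EncodingTypePB enums.
enum FieldType { TYPE_INT, TYPE_BIGINT, TYPE_VARCHAR };
enum EncodingTypePB { PLAIN_ENCODING, BIT_SHUFFLE, DICT_ENCODING };

// Map the *current* default to a concrete encoding and store that in
// ColumnMetaPB, so changing the default later cannot reinterpret old files.
EncodingTypePB resolve_default_encoding(FieldType type) {
    switch (type) {
        case TYPE_INT:
        case TYPE_BIGINT:  return BIT_SHUFFLE;   // assumed numeric default
        case TYPE_VARCHAR: return DICT_ENCODING; // assumed string default
        default:           return PLAIN_ENCODING;
    }
}
```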
This commit fixes the bug below:
FE throws an unexpected exception when encountering a query like:
Set sql_mode = '0,PIPES_AS_CONCAT'.
It also makes some changes to the sql_mode analysis process: the analysis is no longer placed in SetVar.class, but in VariableMgr.class.
[Metric] Add tablet compaction score metrics
Backend:
Add metric "tablet_max_compaction_score" to monitor the current max compaction
score of tablets on this Backend. This metric will be updated each time
the compaction thread picking tablets to compact.
Frontend:
Add metric "tablet_max_compaction_score" for each Backend. These metrics will
be updated when backends report tablet.
And also add a calculated metric "max_tablet_compaction_core" to monitor the
max compaction core of tablets on all Backends.
The control framework is implemented through the heartbeat message, using a uint64_t as flags to control different functions.
Now a flag is added to set the default rowset type to beta.
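A minimal sketch of the flag mechanism (the bit assignment here is illustrative):
```
#include <cstdint>

// Each bit of the uint64_t carried in the heartbeat toggles one function.
enum HeartbeatFlag : uint64_t {
    SET_DEFAULT_ROWSET_TYPE_TO_BETA = 1ULL << 0,
    // future control bits: 1ULL << 1, 1ULL << 2, ...
};

inline bool is_set_default_rowset_to_beta(uint64_t flags) {
    return (flags & SET_DEFAULT_ROWSET_TYPE_TO_BETA) != 0;
}
```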
Currently, special treatment is used for HLL types (and OBJECT types).
When loading data, because there is no need to serialize HLL content
(the upper layer has already done so), we directly save the pointer
to the `HyperLogLog` object in `Slice->data` (at the corresponding `Cell`
in each `Row`) and set `Slice->size` to 0. This logic is different
from when reading the HLL column: when reading, we need to deserialize
the HLL object from the `Slice` object. This causes us to have different
implementations of `copy_row()` for loading and reading.
In the optimization (commit 177fec8917304e399aa7f3facc4cc4804e72ce8b),
a `copy_row()` call was added before a row can be added into the
`MemTable`, but the current `copy_row()` treats the HLL column `Cell`
as a normal Slice object (i.e., it will memcpy its data according to its size).
So this change adds a `copy_object()` method to `TypeInfo`, which is
used to copy the HLL column when loading data.
Note: The way of copying rows should be unified in the future. At that
time, we can delete the `copy_object()` method.
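A minimal sketch of the difference (types reduced to stand-ins): the generic path copies `size` bytes, which loses the HLL pointer because `size` is 0, while `copy_object()` copies the `HyperLogLog` object that the `Slice` points to:
```
#include <cstddef>
#include <cstring>
#include <new>

struct HyperLogLog { /* registers ... */ };
struct Slice { char* data; size_t size; };
struct MemPool { char* allocate(size_t n); };  // stand-in for the real MemPool

// Generic path used by copy_row() for ordinary Slices: memcpy by size.
// For an HLL cell, size is 0, so the HyperLogLog* in `data` would be lost.
void direct_copy(Slice* dst, const Slice* src) {
    memcpy(dst->data, src->data, src->size);
    dst->size = src->size;
}

// HLL-aware path: copy the object behind the pointer and keep size == 0.
void copy_object(Slice* dst, const Slice* src, MemPool* pool) {
    auto* src_hll = reinterpret_cast<HyperLogLog*>(src->data);
    char* mem = pool->allocate(sizeof(HyperLogLog));
    dst->data = reinterpret_cast<char*>(new (mem) HyperLogLog(*src_hll));
    dst->size = 0;
}
```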
1. Reduce the publish version interval.
2. Move the visible version check from `getReadyToPublishTransactions` to `finishTransaction`, and make the publish version tasks concurrent instead of serial.
3. In `getReadyToPublishTransactions`, sort the transactionStates by commit time so that low-version transactions are published first, reducing the wait time in `finishTransaction`.
This CL fixes the following problems:
1. check whether TabletsChannel has been closed/cancelled in `reduce_mem_usage` to avoid using a closed DeltaWriter
2. make `FlushHandle.wait` wait for all submitted tasks to finish so that memtable is deallocated before its delta writer
3. make `~MemTracker()` release its remaining consumption bytes, to handle cases in aggregate_func.h where bitmap and hll call `MemTracker::consume` without a corresponding `MemTracker::release`, which caused the root tracker's consumption to never drop to zero
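A minimal sketch of the `~MemTracker()` change in item 3 (simplified; the real tracker hierarchy is more involved):
```
#include <atomic>
#include <cstdint>

class MemTracker {
public:
    explicit MemTracker(MemTracker* parent = nullptr) : _parent(parent) {}

    // On destruction, hand any unreleased bytes back to the parent, so the
    // root tracker's consumption can drop back to zero.
    ~MemTracker() {
        if (_parent != nullptr) _parent->release(_consumption.load());
    }

    void consume(int64_t bytes) {
        _consumption += bytes;
        if (_parent != nullptr) _parent->consume(bytes);
    }

    void release(int64_t bytes) {
        _consumption -= bytes;
        if (_parent != nullptr) _parent->release(bytes);
    }

private:
    MemTracker* _parent;
    std::atomic<int64_t> _consumption{0};
};
```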
The 'isRestore' flag is for the old version of the backup and restore process,
which was deprecated a long time ago. Remove it.
This commit is also for making a further step to ISSUE #1723.
[Load]
When performing a long-running load job, the following error may occur and cause the load to fail:
load channel manager add batch with unknown load id: xxx
One cause of this error is that Doris opened an unrelated channel during the load
process. This channel receives no data during the entire load process. Therefore,
after a fixed timeout, the channel is released.
After the entire load job completes, Doris tries to close all opened channels. When it tries to
close this channel, it finds that the channel no longer exists, and an error is reported.
This CL passes the load job's timeout to the load channels, so that the timeout of the load channels
is the same as the load job's.
All classes that implement the Writable interface need only implement the write() method.
The read() method should be implemented by each class itself, according to its own
situation.
**Authorization checking logic**
There are some problems with the current password and permission checking logic. For example:
First, we create a user by:
`create user cmy@"%" identified by "12345";`
And then 'cmy' can log in with password '12345' from any host.
Second, we create another user by:
`create user cmy@"192.168.%" identified by "abcde";`
Because "192.168.%" has a higher priority in the permission table than "%". So when "cmy" try
to login in by password "12345" from host "192.168.1.1", it should match the second permission
entry, and will be rejected because of invalid password.
But in current implementation, Doris will continue to check password on first entry, than let it pass. So we should change it.
**Permission checking logic**
After a user logs in, it should have a unique identity, which is obtained from the permission table. For example,
when "cmy" logs in from host "192.168.1.1", its identity should be `cmy@"192.168.%"`. And Doris
should use this identity to check other permissions, not the user's real identity, which is
`cmy@"192.168.1.1"`.
**Black list**
Functionally speaking, Doris only supports a WHITE LIST, which allows users to log in from
the hosts in the white list. But in some cases, we do need a BLACK LIST function.
Fortunately, by changing the logic described above, we can simulate the effect of a BLACK LIST.
For example, First we add a user by:
`create user cmy@'%' identified by '12345';`
And now user 'cmy' can log in from any host. If we don't want 'cmy' to log in from host A, we
can add a new user by:
`create user cmy@'A' identified by 'other_passwd';`
Because "A" has a higher priority in the permission table than "%". If 'cmy' try to login from A using password '12345', it will be rejected.
The problem with the current implementation is that all data to be
inserted is counted in memory, but for the aggregation model or
some other special cases, not all data will be inserted into the `MemTable`,
and such data should not be counted in memory.
This change makes the `SkipList` use an exclusive `MemPool`,
and only the data that will be inserted into the `SkipList` can use this
`MemPool`. In other words, discarded rows are not
counted by the `SkipList`'s `MemPool`.
In order to avoid checking twice whether a row already exists in the
`SkipList`, this change also modifies the `SkipList` interface (a `Hint`
is fetched by `Find()` and then used in `InsertUseHint()`),
and makes the `SkipList` no longer aware of the aggregation logic.
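A minimal sketch of the new insert flow (the surrounding types are simplified stand-ins, declared but not defined here):
```
struct Row {};
struct Hint { Row* curr = nullptr; };

// Stand-ins for the real SkipList / MemPool interfaces.
struct SkipList {
    bool Find(const Row& key, Hint* hint);          // fills hint; true if key exists
    void InsertUseHint(Row* row, const Hint* hint); // insert without re-searching
};
struct MemPool { Row* copy_into(const Row& row); };
void aggregate_two_rows(const Row& src, Row* dst);  // apply SUM/MAX/... per column

void memtable_insert(SkipList* list, MemPool* skiplist_pool, const Row& row) {
    Hint hint;
    if (list->Find(row, &hint)) {
        // Duplicate key: aggregate into the existing row. The incoming row is
        // discarded and never copied into the SkipList's exclusive MemPool.
        aggregate_two_rows(row, hint.curr);
    } else {
        // Only rows that really enter the SkipList are copied into (and thus
        // counted against) its MemPool; the Hint avoids a second search.
        list->InsertUseHint(skiplist_pool->copy_into(row), &hint);
    }
}
```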
At present, because the data row (`Tuple`) generated by the upper layer
is different from the data row (`Row`) internally represented by the
engine, the data row must be copied when inserting into the `MemTable`.
If the row needs to be inserted into the `SkipList`, we need to copy it again
into the `MemPool` of the `SkipList`.
Also, at present, the aggregation functions only support `MemPool` when
copying, so even if the data will not be inserted into the `SkipList`,
a `MemPool` is still used (in the future, it can be replaced with an
ordinary `Buffer`). However, we reuse the allocated memory in the `MemPool`;
that is, we do not reallocate new memory every time.
Note: due to the characteristics of `MemPool` (once memory is allocated, it cannot
be partially freed), the following scenario may still cause extra
flushes. For example, if the aggregation of a string column is `MAX`,
and the data is inserted in ascending order, then every
data row must request memory from the `SkipList`'s `MemPool`;
that is, although the old rows in the `SkipList` will be discarded,
the memory they occupy is still counted.
I did a test on my development machine using `STREAM LOAD`: a table with
only one tablet and all columns being keys; the original data was
1.1 GB (9318799 rows), with 377745 rows left after removing duplicates.
Both the number of files and the query efficiency are
greatly improved; the price paid is only a slight increase in load time.
Before:
```
$ ll storage/data/0/10019/1075020655/
total 4540
-rw------- 1 dev dev 393152 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.dat
-rw------- 1 dev dev 1135 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.idx
-rw------- 1 dev dev 421660 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.dat
-rw------- 1 dev dev 1185 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.idx
-rw------- 1 dev dev 184214 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.dat
-rw------- 1 dev dev 610 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.idx
-rw------- 1 dev dev 329181 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.dat
-rw------- 1 dev dev 935 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.idx
-rw------- 1 dev dev 343813 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.dat
-rw------- 1 dev dev 985 Dec 2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.idx
-rw------- 1 dev dev 315364 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.dat
-rw------- 1 dev dev 885 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.idx
-rw------- 1 dev dev 423806 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.dat
-rw------- 1 dev dev 1185 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.idx
-rw------- 1 dev dev 294811 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.dat
-rw------- 1 dev dev 835 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.idx
-rw------- 1 dev dev 403241 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.dat
-rw------- 1 dev dev 1135 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.idx
-rw------- 1 dev dev 350753 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.dat
-rw------- 1 dev dev 860 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.idx
-rw------- 1 dev dev 266966 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.dat
-rw------- 1 dev dev 735 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.idx
-rw------- 1 dev dev 451191 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.dat
-rw------- 1 dev dev 1235 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.idx
-rw------- 1 dev dev 398439 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.dat
-rw------- 1 dev dev 1110 Dec 2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.idx
{
"TxnId": 16,
"Label": "cd9f8392-dfa0-4626-8034-22f7cb97044c",
"Status": "Success",
"Message": "OK",
"NumberTotalRows": 9318799,
"NumberLoadedRows": 9318799,
"NumberFilteredRows": 0,
"NumberUnselectedRows": 0,
"LoadBytes": 1079581477,
"LoadTimeMs": 46907
}
mysql> select count(*) from xxx_before;
+----------+
| count(*) |
+----------+
| 377745 |
+----------+
1 row in set (0.91 sec)
```
After:
```
$ ll storage/data/0/10013/1075020655/
total 3612
-rw------- 1 dev dev 3328992 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.dat
-rw------- 1 dev dev 8460 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.idx
-rw------- 1 dev dev 350576 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.dat
-rw------- 1 dev dev 985 Dec 2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.idx
{
"TxnId": 12,
"Label": "88f606d5-8095-4f15-b61d-49b7080c16b8",
"Status": "Success",
"Message": "OK",
"NumberTotalRows": 9318799,
"NumberLoadedRows": 9318799,
"NumberFilteredRows": 0,
"NumberUnselectedRows": 0,
"LoadBytes": 1079581477,
"LoadTimeMs": 48771
}
mysql> select count(*) from xxx_after;
+----------+
| count(*) |
+----------+
| 377745 |
+----------+
1 row in set (0.38 sec)
```
Fix arithmetic operations between numeric and non-numeric values causing unexpected results.
After this patch you will get:
mysql> select 1 + "kks";
+-----------+
| 1 + 'kks' |
+-----------+
| 1 |
+-----------+
1 row in set (0.02 sec)
mysql> select 1 - "kks";
+-----------+
| 1 - 'kks' |
+-----------+
| 1 |
+-----------+
1 row in set (0.01 sec)
This commit adds a new plan node named AssertNumRowsNode,
which is used to determine whether the number of rows exceeds a limit.
An AssertNumRowsNode should be added to the subquery in a binary predicate or a between-and predicate
to check whether the number of rows in the subquery is more than 1.
If the number of rows in the subquery is more than 1, the query will be cancelled.
For example:
There are 4 rows in table t1.
Query: select c1 from t1 where c1=(select c2 from t1);
Result: ERROR 1064 (HY000): Expected no more than 1 to be returned by expression select c2 from t1
ISSUE-2270
TPC-DS 6,54,58
For #2383
1. Limit the concurrent transactions of a routine load job.
2. Create new routine load tasks when the txn is VISIBLE, not after it is COMMITTED.
For #2267
1. All non-master daemon threads should also be started after the catalog is ready.
For #2354
1. `fixLoadJobMetaError()` should be called after all metadata is read, including the image and edit logs.
2. Mini load jobs should be set to CANCELLED when the corresponding transaction is not found, instead
of UNKNOWN.