This PR supports the following features:
1. Support a `content` property in the backup statement, so the user can back up metadata only
or metadata plus data, distinguished by the `content` value (`METADATA_ONLY` | `ALL`).
2. Support excluding some tables in the backup and restore statements, so that
very large or unimportant tables can be skipped when the entire database is backed up.
3. Support backing up and restoring a whole database without declaring each table name
in the backup and restore statement.
The backup and restore API has changed as follows (a usage example follows the syntax block):
```
BACKUP SNAPSHOT [db_name].{snapshot_name}
TO 'repo_name'
[ON|EXCLUDE (
'table_name' [partition (p1,...)]
)]
[properties (
"content" = "metadata_only|all"
)]

RESTORE SNAPSHOT [db_name].{snapshot_name}
FROM 'repo_name'
[ON|EXCLUDE (
'table_name' [partition (p1,...)]
)]
[properties (
)]
```
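For illustration, a hypothetical statement following the syntax above (the database, snapshot, repository, and table names are invented): back up the whole database, skip one very large table, and keep metadata only.
```
BACKUP SNAPSHOT example_db.snapshot_20210901
TO 'example_repo'
EXCLUDE ('big_log_table')
PROPERTIES ("content" = "METADATA_ONLY");
```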
Support DELETE statements like:
1. delete from table partitions (p1, p2) where xxx; // applies to p1, p2
2. delete from table where xxx; // applies to all partitions
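A concrete form of the new syntax (the table, partitions, and predicate are hypothetical):
```
DELETE FROM order_tbl PARTITIONS (p202101, p202102) WHERE status = 'CANCELLED';
DELETE FROM order_tbl WHERE status = 'CANCELLED';
```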
Also remove the code for the deprecated sync/async delete job.
This CL changes the FE meta version to 94.
1. Add a graceful exit mechanism for the compaction producer thread.
2. If a compaction task fails to be submitted, it should be popped from `_tablet_submitted_compaction`.
[BackupAndRestore] Support backup and restore of views and external ODBC tables
1. Support backing up and restoring views and ODBC tables. The syntax is the same as for backing up and restoring tables.
2. If a table referenced by a view does not exist in the snapshot,
the view can still be backed up successfully, but a TableNotFound exception will be thrown when the view is queried.
3. If an ODBC table is associated with an ODBC resource, the ODBC resource will be backed up and restored together with the table.
4. If a view, ODBC table, or resource with the same name already exists in the database, the restore checks whether its metadata is consistent with the snapshot.
If it is inconsistent, the restore fails.
5. This PR also modifies the JSON format of the backup information.
A `new_backup_objects` object is added to the root node to store backup meta-information other than OLAP tables,
such as views and external tables.
```
{
"backup_objects": {},
"new_backup_objects": {
"view": [
{"name": "view1", "id": "10001"}
],
"odbc_table": [
{"name":"xxx", xxx}
],
"odbc_resources": [
{"name": "bj_oracle"}
]
}
}
```
6. This PR changes the serialization and deserialization of backup information
from manual construction to automatic handling by Gson.
Change-Id: I216469bf2a6484177185d8354dcca2dc19f653f3
If a table has very large fields, there may be only one row in each page,
and each such row also carries a zone map index entry.
This causes the stored data to expand to three times the original size,
and it also takes up more memory when reading those segments.
Therefore, we need to disable the creation of zone map indexes for segments with too few rows.
In the previous implementation, within a load job,
multiple memtables of the same tablet were written to disk sequentially.
In fact, multiple memtables can be flushed in parallel and out of order,
as long as each memtable uses a different segment writer.
* [Load] Broker Load supports setting the load parallelism
Similar to the parallel_fragment_exec_instance_num parameter,
it allows the user to set the parallelism of the load execution plan
on a single node when a broker load is submitted.
e.g.:
```
...
properties (
"load_parallelism" = "4";
...
)
```
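For context, a complete broker load statement that sets this property might look like the following sketch (the label, HDFS path, table, broker name, and credentials are invented for illustration):
```
LOAD LABEL example_db.label_20210901
(
    DATA INFILE("hdfs://host:port/user/doris/data/file.csv")
    INTO TABLE example_tbl
)
WITH BROKER "hdfs_broker"
(
    "username" = "hdfs_user",
    "password" = "hdfs_passwd"
)
PROPERTIES
(
    "load_parallelism" = "4"
);
```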
This parameter currently only adds the load parallelism setting
and does not yet significantly improve load speed.
The speed improvement will be completed in subsequent code submissions.
Documentation will also be added in subsequent submissions.
This PR also updates the FE meta version.
The essence of the problem is the behavior of negative zero (-0.0) in comparison with positive zero (+0.0).
Currently, in GroupBy and HashPartition, -0.0 is not equal to 0.0 (as a result of the hash function),
so -0.0 and 0.0 are divided into two partitions.
In the row_number analytic function, for sorted data, a new partition is opened when the values of
adjacent rows are not equal. But in C++ the comparison 0.0 == -0.0 is true, so 0.0 and -0.0
are divided into the same partition for row_number.
(Floating-point arithmetic in C++ usually follows IEEE 754. This standard defines two different representations of
the value zero, positive zero and negative zero, and also requires that the two representations
compare equal. Refer to https://stackoverflow.com/questions/45795397.)
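A hypothetical pair of queries that shows where the two code paths diverge (the table and column names are invented; before this fix the two statements could disagree on whether +0.0 and -0.0 belong to the same group):
```
-- hash-based grouping: 0.0 and -0.0 may be hashed into different groups
SELECT d, COUNT(*) FROM double_tbl GROUP BY d;
-- sort-based analytic partitioning: the comparison 0.0 == -0.0 is true,
-- so both values fall into the same partition
SELECT d, ROW_NUMBER() OVER (PARTITION BY d ORDER BY d) AS rn FROM double_tbl;
```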
At present, the use of VLOG in the code is quite confusing.
It inherits the VLOG_XX format from Impala, and there is also the VLOG(number) format.
The VLOG(number) format does not follow a unified specification, so this PR standardizes the use of VLOG.
1. Schema hash has been useless for a long time.
Currently, the schema hash can simply be generated as a random integer; it does not need to be calculated
from the real schema.
2. The CRC32 algorithm is not sufficient to generate a table's signature.
A table's signature is used to determine whether two tables have the same schema,
and the current CRC32 algorithm may return the same signature even if the tables' schemas are different.
So I change it to calculate the MD5 of a signature string assembled from a table's schema info.
Currently, the FE's SystemMetrics only supports TCP metrics. I add system memory metrics for the FE.
Then we can get system memory metrics, which are useful for troubleshooting memory problems.
A rewording suggestion.
Reasons: before my change, the statement was "If this change need a document change, I have updated the document",
and it evidently contains a grammar error: "change" cannot be paired with "need".
Either "changes need" or "change needs" would be correct at the grammar level.
According to the context, "changes need" is better.
Now the statement is "If these changes need document changes, I have updated the document".
When Doris is in debug mode, the function `Coordinator#traceInstance` is used to print
the physical execution plan of a fragment instance for debugging.
`Coordinator#traceInstance` uses the parameter `scanRangeAssignment` to print
the details of a fragment, but bucket shuffle join and colocate shuffle join do not fill this parameter.
That causes debugging to not work well.
This patch fills the assignment parameter for bucket shuffle and colocate shuffle for debugging.
Based on PR #4475, this patch adds a new feature: single tablet migration between different disks via HTTP.
Co-authored-by: weizuo <weizuo@xiaomi.com>
Currently, when a scan node scans many tablets, Doris ensures load balance when choosing which replica a scan task is executed on. But it does not take other scan nodes into consideration to achieve a global load balance. This patch tries to make all tables of all scan nodes load balanced.
Co-authored-by: wangxixu <wangxixu@xiaomi.com>