This PR optimizes TopN queries like `SELECT * FROM tableX ORDER BY columnA ASC/DESC LIMIT N`.
A TopN plan is composed of a SortNode and a ScanNode. When the user table is wide (100+ columns), the ORDER BY clause usually touches only a few columns, but the ScanNode still has to scan all columns from the storage engine even if the limit is very small. This can cause a lot of read amplification. So in this PR I divide the TopN query into two phases:
1. In the first phase we only read `columnA` from the storage engine, along with an extra row-id column called `__DORIS_ROWID_COL__`. All other columns are pruned from the ScanNode.
2. The second phase is placed in the ExchangeNode, because it is the central node for the TopN nodes in the cluster. The ExchangeNode spawns RPCs to the other nodes using the row ids (sorted and limited by the SortNode) read in the first phase, and reads the required rows one by one from the storage engine.
After the second-phase read, the Block contains all the data needed by the query.
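Conceptually (illustrative only, not the actual plan output), the two phases behave as if the engine executed:
```sql
-- Phase 1: only the sort column and the hidden row-id column are read and sorted.
SELECT columnA, __DORIS_ROWID_COL__ FROM tableX ORDER BY columnA LIMIT N;
-- Phase 2: the ExchangeNode fetches the remaining columns of the N surviving rows
-- from the storage engine by row id over RPC.
```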
* [Schema Change] support fast add/drop column (#49)
* [feature](schema-change) support fast schema change. coauthor: yixiutt
* [schema change] Using columns desc from fe to read data. coauthor: Lchangliang
* [feature](schema change) schema change optimize for add/drop columns.
1. add uniqueId field for class column.
2. schema change for add/drop columns directly updates schema meta
Co-authored-by: yixiutt <yixiu@selectdb.com>
Co-authored-by: SWJTU-ZhangLei <1091517373@qq.com>
[Feature](schema change) fix write and add regression test (#69)
Co-authored-by: yixiutt <yixiu@selectdb.com>
[schema change] BE supports delete using the newest schema
add delete regression test
fix regression case (#107)
tmp
[feature](schema change) light schema change exclude rollup and agg/uniq/dup key type.
[feature](schema change) fe olapTable maxUniqueId write in disk.
[feature](schema change) add rpc iface for sc add column.
[feature](schema change) add columnsDesc to TPushReq for light sc.
resolve the deadlock when schema change (#124)
fix columns from FE don't have bitmap_index flag (#134)
add update/delete case
construct MATERIALIZED schema from origin schema when insert
fix non-vectorized compaction coredump
use segment cache
choose newest schema by schema version when compaction (#182)
[bugfix](schema change) fix light schema change problem.
[feature](schema change) light schema change add alter job. (#1)
fix be ut
[bug](schema change) unique table drop key column should not use light schema change
[feature](schema change) add schema change regression-test.
fix regression test
[bugfix](schema change) fix multi alter clauses for light schema change. (#2)
[bugfix](schema change) fix multi clauses calculate column unique id (#3)
modify PushTask process (#217)
[Bugfix](schema change) fix jobId replay cause bdbje exception.
[bug](schema change) fix duplicated max col unique id. (#232)
[optimize](schema change) modify pendingMaxColUniqueId generate rule.
fix compaction error
* fix be ut
* fix snapshot load core
fix unique_id error (#278)
[refact](fe) remove redundant code for light schema change. (#4)
format fe core
format be core
fix be ut
modify fe meta version
fix rebase error
flush schema into rowset_meta in old table
[refactor](schema change) refact fe light schema change. (#5)
delete the change of schemahash and support get max version schema
* modify for review
* fix be ut
* fix schema change test
Supported:
1. Change FeMetaVersion to 111, compatible with upgrade from 110.
2. Add catalog level privileges, and degrade global level privileges to catalog level if FeMetaVersion < 111.
3. Support 'show all grants', 'show roles' statements (see the example below).
4. Previous version of SQL syntax.
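A minimal example of the newly supported statements:
```sql
SHOW ALL GRANTS;
SHOW ROLES;
```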
Todo:
1. three-segment format catalog.database.table in SQL syntax.
2. User document for the unified authority management of datalake.
3. LDAP services to provide authentication.
1. fix all checkstyle warnings
2. change all checkstyle rules to error
3. remove some java doc rules
a. RequireEmptyLineBeforeBlockTagGroup
b. JavadocStyle
c. JavadocParagraph
4. suppress some rules for old code
a. all java doc rules only affect Nereids
b. DeclarationOrder only affects Nereids
c. OverloadMethodsDeclarationOrder only affects Nereids
d. VariableDeclarationUsageDistance only affects Nereids
e. suppress OneTopLevelClass on org/apache/doris/load/loadv2/dpp/ColumnParser.java
f. suppress OneTopLevelClass on org/apache/doris/load/loadv2/dpp/SparkRDDAggregator.java
g. suppress LineLength on org/apache/doris/catalog/FunctionSet.java
h. suppress LineLength on org/apache/doris/common/ErrorCode.java
This CL mainly changes:
1. Broker Load
When assigning backends, use the user-level resource tags to find available backends.
If no user-level resource tag is set, the broker load task can be assigned to any BE node;
otherwise, the task can only be assigned to BE nodes that match the user-level tags.
2. Routine Load
The current routine load job does not have user info, so it cannot get user-level tags when assigning tasks.
So there are 2 ways:
1. For old routine load jobs, use the tags of the replica allocation info to select BE nodes.
2. For new routine load jobs, the user info will be added and persisted in the routine load job.
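For reference, a user-level resource tag is set through a user property; a minimal sketch (assuming the `resource_tags.location` property, shown for illustration only):
```sql
-- Illustrative: mark user 'jack' with resource tag 'group_a', so that jack's broker
-- load tasks are only assigned to BE nodes carrying that tag.
SET PROPERTY FOR 'jack' 'resource_tags.location' = 'group_a';
```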
Issue Number: close #9403
Set the below rules' severity to error and format the code according to the check info.
a. Merge conflicts unresolved
b. Avoid using corresponding octal or Unicode escape
c. Avoid Escaped Unicode Characters
d. No Line Wrap
e. Package Name
f. Type Name
g. Annotation Location
h. Interface Type Parameter
i. CatchParameterName
j. Pattern Variable Name
k. Record Component Name
l. Record Type Parameter Name
m. Method Type Parameter Name
n. Redundant Import
o. Custom Import Order
p. Unused Imports
q. Avoid Star Import
r. tab character in file
s. Newline At End Of File
t. Trailing whitespace found
There is a lot of old code in FE for old FE meta versions, such as `if (FeMetaVersion < VERSION_45) xxxxx`,
but the latest FE meta version is 107, so this code may never be reached.
We have not removed it, however, because old meta versions may "sometimes" still appear.
Add a minimum required version check to allow us to remove this old code.
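A rough sketch of what such a check could look like (the constant and method names below are hypothetical, not the actual implementation):
```
// Hypothetical sketch: MINIMUM_REQUIRED_META_VERSION is an assumed constant, not
// necessarily present in FeMetaVersion.
public static void checkMetaVersion(int imageMetaVersion) throws IOException {
    if (imageMetaVersion < FeMetaVersion.MINIMUM_REQUIRED_META_VERSION) {
        throw new IOException("image meta version " + imageMetaVersion
                + " is lower than the minimum required version "
                + FeMetaVersion.MINIMUM_REQUIRED_META_VERSION
                + ", please upgrade step by step from a supported version");
    }
}
```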
Close related #7389
Support creating Iceberg external tables in Doris.
This is the first step to support Iceberg external tables.
### Create Iceberg external table
This PR provides two ways to create Iceberg external tables. Neither way requires explicitly specifying column definitions; Doris automatically converts them based on Iceberg's column definitions.
1. Create an Iceberg external table directly
```sql
CREATE [EXTERNAL] TABLE table_name
ENGINE = ICEBERG
[COMMENT "comment"]
PROPERTIES (
"iceberg.database" = "iceberg_db_name",
"iceberg.table" = "icberg_table_name",
"iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
"iceberg.catalog.type" = "HIVE_CATALOG"
);
```
2. Create an Iceberg database and automatically create all the tables under that db.
```sql
CREATE DATABASE db_name
[COMMENT "comment"]
PROPERTIES (
"iceberg.database" = "iceberg_db_name",
"iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
"iceberg.catalog.type" = "HIVE_CATALOG"
);
```
### Show table creation
1. For individual tables, you can view them with `help show create table`.
```sql
mysql> show create table iceberg_db.logs_1;
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| logs_1 | CREATE TABLE `logs_1` (
`level` varchar(-1) NOT NULL COMMENT "null",
`event_time` datetime NOT NULL COMMENT "null",
`message` varchar(-1) NOT NULL COMMENT "null"
) ENGINE=ICEBERG
COMMENT "ICEBERG"
PROPERTIES (
"iceberg.database" = "doris",
"iceberg.table" = "logs_1",
"iceberg.hive.metastore.uris" = "thrift://10.10.10.10:9087",
"iceberg.catalog.type" = "HIVE_CATALOG"
) |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
2. For an Iceberg database, you can view it with `help show table creation`.
```sql
mysql> show table creation from iceberg_db;
+--------+---------+---------------------+---------------------------------------------------------+
| Table | Status | Create Time | Error Msg |
+--------+---------+---------------------+---------------------------------------------------------+
| logs | fail | 2021-12-14 13:50:10 | Cannot convert unknown type to Doris type: list<string> |
| logs_1 | success | 2021-12-14 13:50:10 | |
+--------+---------+---------------------+---------------------------------------------------------+
2 rows in set (0.00 sec)
```
This is a new syntax.
Show table creation records in an Iceberg database:
Syntax:
```sql
SHOW TABLE CREATION [FROM db] [LIKE mask]
```
#6206
At present, our image file does not have a file header/footer. When we need to change the image format (such as adding different journal versions to the image), there is no way to distinguish different image formats.
Therefore, we suggest adding a file header and footer to the image. With the new image format, we can freely distinguish and define different ways of reading the image.
The format of the image is as follows:
```
/**
* Image Format:
* |- Image --------------------------------------|
* | - Magic String (4 bytes) |
* | - Header Length (4 bytes) |
* | |- Header -----------------------------| |
* | | |- Json Header ---------------| | |
* | | | - version | | |
* | | | - other key/value(undecided)| | |
* | | |-----------------------------| | |
* | |--------------------------------------| |
* | |
* | |- Image Body -------------------------| |
* | | Object a | |
* | | Object b | |
* | | ... | |
* | |--------------------------------------| |
* | |
* | |- Footer -----------------------------| |
* | | | - Checksum (8 bytes) | |
* | | |- object index --------------| | |
* | | | - index a | | |
* | | | - index b | | |
* | | | ... | | |
* | | |-----------------------------| | |
* | | - other value(undecided) | |
* | |--------------------------------------| |
* | - Footer Length (8 bytes) |
* | - Magic String (4 bytes) |
* |----------------------------------------------|
*/
```
1. Magic Number
One image format is identified by one magic string and one version field. The magic string is saved in the first 4 bytes and the last 4 bytes of the image.
2. Image Header:
The version is now saved in the header in JSON format.
3. Image Body:
Equal to the original image.
4. Image Footer:
The image footer stores the file offsets (indexes) of the image objects. If necessary, we can read individual objects in the image through the footer.
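As a hedged sketch (not the actual implementation; `imageFile` is an assumed `java.io.File`), the new header could be read roughly like this:
```
// Illustrative sketch of reading the image header described above.
try (RandomAccessFile raf = new RandomAccessFile(imageFile, "r")) {
    byte[] magic = new byte[4];
    raf.readFully(magic);                  // leading magic string (4 bytes)
    int headerLength = raf.readInt();      // header length (4 bytes)
    byte[] headerBytes = new byte[headerLength];
    raf.readFully(headerBytes);            // JSON header, e.g. {"version": ...}
    String jsonHeader = new String(headerBytes, StandardCharsets.UTF_8);
    // Parse the version from jsonHeader and choose the matching image reader.
    // The image body follows; the footer (checksum + object index) can be located
    // via the footer length and trailing magic string at the end of the file.
}
```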
Provides basic property-related classes that support create, query, read, write, etc.
Currently, Doris FE mostly uses `if` statements to check properties in SQL, which leads to a lot of redundancy in the code.
The `PropertiesSet` class can be used in the analysis phase of a `Statement`. The validation and correctness of the input properties are verified automatically, which simplifies the code and improves its readability.
Usage:
1. Create a custom class that implements the `PropertySchema.SchemaGroup` interface.
2. Define the properties to be used. If a property is a required parameter, there is no need to set a default value.
3. According to the requirements, call `readFromStrMap` and other functions in the analysis logic to check and obtain the parameters.
Demo:
Class definition
```
public class FileFormat implements PropertySchema.SchemaGroup {
    public static final PropertySchema<FileFormat.Type> FILE_FORMAT_TYPE =
            new PropertySchema.EnumProperty<>("type", FileFormat.Type.class).setDefauleValue(FileFormat.Type.CSV);
    public static final PropertySchema<String> RECORD_DELIMITER =
            new PropertySchema.StringProperty("record_delimiter").setDefauleValue("\n");
    public static final PropertySchema<String> FIELD_DELIMITER =
            new PropertySchema.StringProperty("field_delimiter").setDefauleValue("|");
    public static final PropertySchema<Integer> SKIP_HEADER =
            new PropertySchema.IntProperty("skip_header", true).setMin(0).setDefauleValue(0);

    private static final FileFormat INSTANCE = new FileFormat();

    private ImmutableMap<String, PropertySchema> schemas = PropertySchema.createSchemas(
            FILE_FORMAT_TYPE,
            RECORD_DELIMITER,
            FIELD_DELIMITER,
            SKIP_HEADER);

    public ImmutableMap<String, PropertySchema> getSchemas() {
        return schemas;
    }

    public static FileFormat get() {
        return INSTANCE;
    }
}
```
Usage
```
public class CreateXXXStmt extends DdlStmt {
    private PropertiesSet<FileFormat> analyzedFileFormat = PropertiesSet.empty(FileFormat.get());
    private final Map<String, String> fileFormatOptions;
    ...

    public void analyze(Analyzer analyzer) throws UserException {
        ...
        if (fileFormatOptions != null) {
            try {
                analyzedFileFormat = PropertiesSet.readFromStrMap(FileFormat.get(), fileFormatOptions);
            } catch (IllegalArgumentException e) {
                ...
            }
        }
        // 1. Get property value
        String recordDelimiter = analyzedFileFormat.get(FileFormat.RECORD_DELIMITER);
        // 2. Check the validity of parameters
        PropertiesSet.verifyKey(FileFormat.get(), fileFormatOptions);
        ...
    }
}
```
Currently, FeMetaVersion.java is in fe-common, and users may forget to copy fe-common.jar when upgrading the service.
This is really dangerous because the data may be corrupted and cannot be recovered.
The IO-related code may be used by new modules, so it's better to move it to fe-common.
Modifications to fe-core are frequent, but the many Java files generated by thrift slow down compilation,
so it's better to move the thrift generation process to fe-common.
Currently both log4j1 and log4j2 are used, which leads to logs being written to the wrong files.
Our modification removes log4j1 from the dependencies and uses slf4j with the slf4j -> log4j2 binding instead.
This CL mainly changes:
1. Add 2 new FE modules
1. fe-common
save all common classes for other modules; currently it contains only `jmockit`
2. spark-dpp
The Spark DPP application for Spark Load. All DPP-related classes, including unit tests, have been moved into this module.
2. Change the `build.sh`
Add a new param `--spark-dpp` to compile the `spark-dpp` module alone, while `--fe` compiles all FE modules (see the usage example after this list).
The output of the `spark-dpp` module is `spark-dpp-1.0.0-jar-with-dependencies.jar`, and it will be installed to `output/fe/spark-dpp/`.
3. Modify some bugs of spark load
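For illustration only (assuming the standard `build.sh` at the repository root), the new parameter can be used as follows:
```
sh build.sh --spark-dpp      # build only the spark-dpp module
sh build.sh --fe             # build all FE modules
```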