Commit Graph

106 Commits

Author SHA1 Message Date
4b49d05e97 [refactor](fe) remove type related class to fe-common to reduce java-udf jar size (#15808) 2023-01-17 00:01:15 +08:00
525f990d2b [feature-wip](multi-catalog) upgrade iceberg pom version to 1.1.0, for rest catalog api (#15964)
Co-authored-by: jinzhe <jinzhe@selectdb.com>
2023-01-16 23:10:41 +08:00
e979cc444a [improvement](multi-catalog) support hive 1.x (#15886)
The interface of Hive Metastore changes from version to version.
Currently, Doris uses Hive 2.3.7 as the HMS client version.
When connecting to Hive 1.x, some interfaces such as get_table_req do not
exist, so we can't get metadata from Hive 1.x.

In this PR, I copied the HiveMetastoreClient from the Hive 2.3.7 release and modified some of the
interface implementations, so that it uses the old interfaces to connect to Hive 1.x.

And when creating hms catalog, you can specify the hive version, eg:

CREATE CATALOG `hive` PROPERTIES (
  "hive.metastore.uris" = "thrift://127.0.0.1:9083",
  "type" = "hms",
  "hive.version" = "1.1"
);
If hive.version is not specified, Doris will use the Hive 2.3.x compatible interface to access HMS.
2023-01-13 18:32:12 +08:00
89c21af87d [chore](fe) update fe snapshot to 1.2 and fix auditloader compile error (#15787)
PR #14925 changed some fields of AuditEvent, so we need to upgrade fe-core's SNAPSHOT version to 1.2,
because auditloader depends on fe-core.

The 1.2-SNAPSHOT has already been pushed to
https://repository.apache.org/content/repositories/snapshots/org/apache/doris/fe-core/1.2-SNAPSHOT/
2023-01-11 08:46:48 +08:00
d48abd91df [deps](fe) upgrade deps version (#15262)
Upgrade hadoop to 2.10.2 and jackson-databind to 2.14.1.
2022-12-24 22:18:10 +08:00
6151a43e9c [Thirdparty](Protobuf) update protobuf from 3.14.0 to 3.15.0 (#15055) 2022-12-24 20:45:11 +08:00
ef1bb9819a [feature-wip](MTMV) Support mapping the partition rule of base table to the materialized view (#14930)
When we create a materialized view over multiple tables, users may not be able to figure out a partition rule for the materialized view, because the query result can be too complex. If the query result doesn't match one of the partition rules, the insertion will fail.

We can resolve this issue by mapping the partition rule of a base table onto the materialized view. As a result, users don't need to specify the partition rules, and query results are always valid because they are retrieved from the partitions of the base table.

## Use case

mysql> CREATE TABLE t1 (pk INT NOT NULL, v1 INT SUM) PARTITION BY RANGE(pk) (
    ->   PARTITION p1 VALUES LESS THAN ('10'),
    ->   PARTITION p2 VALUES LESS THAN ('90')
    -> )
    -> DISTRIBUTED BY HASH(pk)
    -> PROPERTIES ('replication_num' = '1');
Query OK, 0 rows affected (0.04 sec)

mysql> CREATE TABLE t2 (pk INT NOT NULL, v2 INT SUM) PARTITION BY LIST(pk) (
    ->   PARTITION odd VALUES IN ('10', '30', '50', '70', '90'),
    ->   PARTITION even VALUES IN ('20', '40', '60', '80')
    -> )
    -> DISTRIBUTED BY HASH(pk)
    -> PROPERTIES ('replication_num' = '1');
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE MATERIALIZED VIEW mv BUILD IMMEDIATE REFRESH COMPLETE
    -> KEY (mpk) PARTITION BY (t1.pk) DISTRIBUTED BY HASH(mpk) PROPERTIES ('replication_num' = '1')
    -> AS SELECT t1.pk AS mpk, v1, v2 FROM t1, t2 WHERE t1.pk = t2.pk;
Query OK, 0 rows affected (0.10 sec)

mysql> SHOW CREATE TABLE mv;
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Materialized View | Create Materialized View                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mv                | CREATE MATERIALIZED VIEW `mv`
BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
KEY(`mpk`)
PARTITION BY RANGE(`mpk`)
(PARTITION p1 VALUES [("-2147483648"), ("10")),
PARTITION p2 VALUES [("10"), ("90")))
DISTRIBUTED BY HASH(`mpk`) BUCKETS 10
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"in_memory" = "false",
"storage_format" = "V2",
"disable_auto_compaction" = "false"
)
AS SELECT `t1`.`pk` AS `mpk`, `v1` AS `v1`, `v2` AS `v2` FROM `default_cluster:dev`.`t1` , `default_cluster:dev`.`t2` WHERE `t1`.`pk` = `t2`.`pk`; |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
2022-12-09 22:47:21 +08:00
ed96442b85 [fix](multi-catalog) fix persist issue about jdbc catalog and class loader issue #14794
Fix a bug that the JDBC catalog/database/table classes should be added to GsonUtil (see the sketch below).

Fix a class loader issue that sometimes causes ClassNotFoundException.

Fix regression tests to use different catalog names.

Comment out 2 regression tests, to be fixed later:

regression-test/suites/query_p0/system/test_query_sys.groovy
regression-test/suites/statistics/alter_col_stats.groovy
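For context, a minimal sketch of the kind of JDBC catalog whose metadata is persisted in FE and therefore needs Gson registration; the `jdbc.`-prefixed property names and all values here are illustrative assumptions, not taken from this PR:

```sql
-- Illustrative only: a JDBC catalog like this is persisted in FE metadata,
-- which is why its catalog/database/table classes must be registered with GsonUtil.
-- Property names and values are assumptions for this era of the code.
CREATE CATALOG jdbc_demo PROPERTIES (
    "type" = "jdbc",
    "jdbc.user" = "root",
    "jdbc.password" = "",
    "jdbc.jdbc_url" = "jdbc:mysql://127.0.0.1:3306/demo",
    "jdbc.driver_url" = "file:///path/to/mysql-connector-java-8.0.25.jar",
    "jdbc.driver_class" = "com.mysql.cj.jdbc.Driver"
);
```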
2022-12-05 09:05:13 +08:00
3c8524b9d8 [security](fe jar) upgrade commons-codec:commons-codec to 1.13 #13951 2022-11-07 13:50:07 +08:00
fb5a3e118a [feature-wip](dlf) prepare to support aliyun dlf (#13969)
[What is DLF](https://www.alibabacloud.com/product/datalake-formation)

This PR is a preparation for supporting DLF, with some changes to multi-catalog:

1. Throw RuntimeException for most Hive Metastore and ES client access operations.
2. Add DLF related dependencies.
3. Move the checks of ES catalog properties to the analysis phase of creating an ES catalog.

TODO (in next PR):

1. Refactor the `getSplit` method to support not only HDFS, but also S3-compatible object storage.
2. Finish the implementation of DLF support.
2022-11-06 10:01:57 +08:00
3c95106d45 [Bug](jdbc) Fix memory leak for JDBC datasource (#13657) 2022-10-27 00:02:25 +08:00
57b7a416d2 [chore](build) add apache snapshot maven repo to repositories (#11549) 2022-08-06 07:15:28 +08:00
95091256b0 [chore](deps) update bdbje to doris bdbje, update libhdfs3 to improve performance (#11497) 2022-08-04 17:10:56 +08:00
388db05ef9 [bugfix](log4j) Upgrade log4j to 2.18.0 (#11368) 2022-07-31 22:21:33 +08:00
6963c41a04 [dependency] Upgrade Apache Commons Validator version to the latest one (#10508) 2022-07-22 17:03:46 +08:00
68b9a2936a [improvement](doe) Step1: Fe generates the DSL and is used to explain (#9895)
For the first step, I will only change FE and then change BE once I make sure the DSL is ok.
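As a hedged illustration of how this step can be verified (the table and column names here are hypothetical), the FE-generated DSL should surface in the plan produced by EXPLAIN:

```sql
-- Sketch: EXPLAIN on an ES external table should show the generated DSL
-- in the ES scan node of the plan. Table/column names are made up.
EXPLAIN SELECT * FROM es_table WHERE k1 = 10 AND k2 LIKE 'doris%';
```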
2022-07-18 23:20:58 +08:00
e769597fd2 [Improvement] (datetime) support microsecond for date literal (#10917)
* [Improvement] (datetime) support microsecond for date literal

* remove joda dependency
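A hedged example of what this enables; the exact result depends on the target type and version:

```sql
-- Sketch: a date literal carrying microseconds can now be parsed directly.
SELECT CAST('2022-07-18 21:39:39.123456' AS DATETIME);
```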
2022-07-18 21:39:39 +08:00
67f341f44e [TLP](step-1) Remove incubator prefix (#10230)
Remove some `incubator-` prefixes in the source code.
The documentation is not modified; that will be done in the next PR.
2022-06-19 19:34:52 +08:00
b7b78ae707 [style](fe) the last step of fe CheckStyle (#10134)
1. fix all checkstyle warnings
2. change all checkstyle rules to error
3. remove some java doc rules
    a. RequireEmptyLineBeforeBlockTagGroup
    b. JavadocStyle
    c. JavadocParagraph
4. suppress some rules for old code
    a. all javadoc rules only affect Nereids
    b. DeclarationOrder only affects Nereids
    c. OverloadMethodsDeclarationOrder only affects Nereids
    d. VariableDeclarationUsageDistance only affects Nereids
    e. suppress OneTopLevelClass on org/apache/doris/load/loadv2/dpp/ColumnParser.java
    f. suppress OneTopLevelClass on org/apache/doris/load/loadv2/dpp/SparkRDDAggregator.java
    g. suppress LineLength on org/apache/doris/catalog/FunctionSet.java
    h. suppress LineLength on org/apache/doris/common/ErrorCode.java
2022-06-17 21:02:45 +08:00
24ad11af6a [deps] upgrade fabric8 k8s client to be compatible with new k8s clusters (#9933) 2022-06-06 10:00:36 +08:00
8092439634 [feature](hudi) Step2: Support querying hudi external table (including cow and mor tables) (#9752)
Support querying COW and MOR Hudi tables.
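A hedged usage sketch, assuming a Hudi external table created as in the Step1 commit below; the table name is hypothetical:

```sql
-- Sketch: once the Hudi external table exists, both COW and MOR tables
-- can be queried like ordinary tables.
SELECT * FROM hudi_table_name LIMIT 10;
```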
2022-05-30 09:43:36 +08:00
72e0042efb [feature-wip](hudi) Step1: Support create hudi external table (#9559)
Support creating a Hudi table.
Support SHOW CREATE TABLE for Hudi tables.

### Design
1. create hudi table without schema (recommended)
```sql
    CREATE [EXTERNAL] TABLE table_name
    ENGINE = HUDI
    [COMMENT "comment"]
    PROPERTIES (
    "hudi.database" = "hudi_db_in_hive_metastore",
    "hudi.table" = "hudi_table_in_hive_metastore",
    "hudi.hive.metastore.uris" = "thrift://127.0.0.1:9083"
    );
```

2. create hudi table with schema
```sql
    CREATE [EXTERNAL] TABLE table_name
    [(column_definition1[, column_definition2, ...])]
    ENGINE = HUDI
    [COMMENT "comment"]
    PROPERTIES (
    "hudi.database" = "hudi_db_in_hive_metastore",
    "hudi.table" = "hudi_table_in_hive_metastore",
    "hudi.hive.metastore.uris" = "thrift://127.0.0.1:9083"
    );
```
When creating a hudi table with a schema, the columns must exist in the corresponding table in the hive metastore.
2022-05-17 11:30:23 +08:00
122cc3b772 [chore](fe code style)add suppressions to fe check style (#9429)
Currently, FE checkstyle checks all files, but some rules should only be applied to production files.
Add suppressions to disable those rules on test files.
2022-05-12 12:16:55 +08:00
d1b85d51a0 [code style](fe) Include test sources (#9366)
Include test sources; we also need to check them.
2022-05-09 09:40:44 +08:00
1746f61388 [refactor](test) Refactor FE unit test framework that starts a FE server. (#9388)
Currently, we use `UtFrameUtils` to start a FE server in FE unit tests.
Each test class has to do some initialization and cleanup stuff with the JUnit4
`@BeforeClass` and `@AfterClass` annotations. It's redundant and boring.
Besides, almost all the APIs in `UtFrameUtils` have a `ConnectContext` parameter, which is not easy to use.

This PR proposes an inheritance-based approach, i.e., wrapping all the common logic in the base class `TestWithFeService`,
leveraging the JUnit5 `@BeforeAll` and `@AfterAll` annotations to narrow down the setup and cleanup lifecycle to each test class instance.
At the same time, a derived concrete test class can directly use utility methods inherited from the base class,
without calling a util class and passing a `ConnectContext` argument.

`UtFrameUtils` and `DorisAssert` are marked as deprecated. We can remove these two classes
once this refactor has worked well for a while.
2022-05-07 21:28:42 +08:00
784681f106 [FE Code Style][step 0] add github action to check incremental code in pr (#9328)
1. add rules to checkstyle
2. add github action to check incremental code in pr
2022-05-01 17:30:29 +08:00
b6b6e17eb7 [chore] (workflow) add sonarcloud workflow to check code quality and security (#9252) 2022-04-28 11:09:56 +08:00
af2295f971 MOD: remove <scope>provided</scope> (#9177) 2022-04-25 10:00:57 +08:00
13f1f94f86 [chore] upgrade log4j version to 2.17.2 (#8774)
upgrade log4j version to 2.17.2
2022-04-02 21:29:25 +08:00
b98da02611 [chore][fix](httpv2) Use mariadb-java-client for http query api (#8716)
In #8319, I removed the mysql-connector-java dependency because of license incompatibility.
But we need a MySQL-compatible driver for the HTTP query API, so I chose mariadb-java-client,
which is under the LGPL.
2022-03-30 09:59:45 +08:00
22cf6ea17c [chore] Modify build.sh and refactor dependency of FE submodules (#8732)
This PR fixes #8731 and refactors the `build.sh` script.

The build.sh script is currently responsible for the compilation of the following Doris components.
1. FE
    - fe-common
    - fe-core
    - spark-dpp
    - hive-udf
    - java-udf
    - ui
2. BE
    - palo_be
    - meta_tool
3. broker

In the FE module:
- The 4 submodules `fe-common, fe-core, spark-dpp and ui` together form Frontend.
- `spark-dpp, hive-udf and java-udf` can be compiled separately to produce jar packages for individual use.

In the BE module:
- `palo_be` can start the BE process separately.
- `meta_tool` can be compiled separately to produce binaries.

The modified build.sh script has the following changes:

1. There is no longer an option to compile `ui` separately; it is built together with `--fe`.
2. `fe/be/spark-dpp/hive-udf/java-udf/palo_be/meta_tool` can be compiled separately.
3. All components except `java-udf` are compiled by default (`java-udf` is still in development).

Remaining issues:

Several FE submodules have messy dependencies.
For example, `java-udf` depends on `fe-core`, and `fe-core` depends on `spark-dpp`,
resulting in a large `java-udf` jar.
This needs to be reorganized afterwards.
2022-03-30 00:13:24 +08:00
b2861f36c4 [chore] optimize aws thirdparty package download. (#8637) 2022-03-28 09:35:51 +08:00
b89e4c7bba [feature-wip](java-udf) support java UDF with fixed-length input and output (#8516)
This feature is proposed in [DSIP-1](https://cwiki.apache.org/confluence/display/DORIS/DSIP-001%3A+Java+UDF).
This PR supports Java UDFs with fixed-length input and output. Phase I of DSIP-1 is done after this PR.

To support Java UDFs efficiently, no data is copied in the JNI call, and all compute operations in Java are off-heap.
To achieve that, a UdfExecutor is used.

For users, a UDF class must have a public evaluate method.
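A hedged sketch of registering such a UDF from SQL; the jar path, class name, and return/argument types are illustrative assumptions, not taken from this PR:

```sql
-- Sketch: registering a fixed-length Java UDF whose class exposes a public
-- evaluate method. The jar location and symbol here are hypothetical.
CREATE FUNCTION add_int(INT, INT) RETURNS INT PROPERTIES (
    "file" = "file:///path/to/java-udf-demo.jar",
    "symbol" = "org.example.AddUdf",
    "type" = "JAVA_UDF"
);
```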
2022-03-23 10:32:50 +08:00
f3c44bcd75 [chore][fix](librdkafka) disable librdkafka assert and update some thirdparty (#8425)
1. comment out librdkafka `rd_assert(thrd_is_current(rkb->rkb_thread));` to avoid a core dump
2. upgrade arrow to 7.0.0
3. upgrade aws sdk to 1.9
4. upgrade orc to 1.7.2
2022-03-12 22:09:06 +08:00
50a59f3f86 [license] Organize third-party dependent licenses for binary releases (#8350) 2022-03-07 23:18:58 +08:00
9961b2c860 [refactor] Remove mysql-connector and replace org.json with com.googlecode.json-simple (#8319)
1. mysql-connector-java
    mysql-connector-java is under the GPLv2 license, which is not compatible with APLv2, and Doris does not use it.

2. org.json
    org.json is under the JSON license, which is not compatible with APLv2. I used `json-simple` to replace it.
2022-03-05 14:41:04 +08:00
315bfe2d0e Revert "[chore](dependency) upgrade-grpc-version (#8218)" (#8250)
This reverts commit df7e848cbbc8170c7bd83d812d7cac58b5574570.

Reverts apache/incubator-doris#8218

Because when using grpc 1.44.1, the corresponding `protoc-gen-grpc-java` plugin
required GLIBC_2.14, which is not available on CentOS 6.

So I suggest reverting this commit for now, and considering an upgrade of this component
after most systems have reached glibc 2.14.

For Mac M1, you may have to change this version manually for now.
2022-03-02 10:16:25 +08:00
93c638f3a2 [fix][chore](insert)(fe) Fix analysis error of insert stmt and modify grpc-netty dependency (#8265)
This bug was introduced by #8112.

Also, I changed the `grpc-netty` dependency to `grpc-netty-shaded` to avoid a dependency conflict:
```
java.lang.NoSuchMethodError: io.netty.buffer.PooledByteBufAllocator.
```
2022-03-01 11:12:10 +08:00
87b96cfcd6 [feature](iceberg) Step3: Support query iceberg external table (#8179)
1. Add Iceberg scan node 
2. Add Iceberg/Hive table type in thrift 
3. Support querying Iceberg tables of format types `parquet` and `orc`
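A hedged usage sketch, reusing the `iceberg_db.logs_1` table shown in the Step1 commit below:

```sql
-- Sketch: once the Iceberg external table exists, it can be queried directly,
-- provided the underlying data files are parquet or orc.
SELECT count(*) FROM iceberg_db.logs_1;
```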
2022-02-26 17:04:11 +08:00
df7e848cbb [chore](dependency) upgrade-grpc-version (#8218)
Upgrade grpc.version from 1.30.0 to 1.44.1, so macOS with the M1 chip can build FE correctly.
2022-02-24 23:17:32 +08:00
264f38471c [feature](spark-load) add Hive Bitmap UDFs (#8036)
Hive Bitmap UDFs are used for generating bitmaps and performing bitmap operations in Hive tables.
The bitmap in Hive is exactly the same as the Doris bitmap.
The bitmap in Hive can be imported into Doris through Spark bitmap load.
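A hedged sketch of the Hive side; the jar path, function name, and UDAF class name follow the pattern of this feature but should be treated as illustrative assumptions:

```sql
-- Sketch (runs in Hive, not Doris): build bitmaps in a Hive table so they can
-- later be loaded into Doris via Spark bitmap load. Names are assumptions.
ADD JAR hdfs://path/to/hive-udf-jar-with-dependencies.jar;
CREATE TEMPORARY FUNCTION to_bitmap AS 'org.apache.doris.udf.ToBitmapUDAF';
SELECT k1, to_bitmap(user_id) AS user_bitmap
FROM hive_src_table
GROUP BY k1;
```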
2022-02-17 10:45:20 +08:00
3b8d48f08b [feature-wip](iceberg) Step1: Support create Iceberg external table (#7391)
Close related #7389

Support creating Iceberg external tables in Doris.

This is the first step to support Iceberg external table.

### Create Iceberg external table
This PR provides two ways to create Iceberg external tables. Neither requires explicitly specifying column definitions; Doris automatically converts them based on Iceberg's column definitions.

1. Create an Iceberg external table directly

```sql
    CREATE [EXTERNAL] TABLE table_name 
    ENGINE = ICEBERG
    [COMMENT "comment"]
    PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.table" = "icberg_table_name",
    "iceberg.hive.metastore.uris"  =  "thrift://192.168.0.1:9083",
    "iceberg.catalog.type"  =  "HIVE_CATALOG"
    );
```

2. Create an Iceberg database and automatically create all the tables under that db.

```sql
    CREATE DATABASE db_name 
    [COMMENT "comment"]
    PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.hive.metastore.uris" = "thrift://192.168.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
    );
```

### Show table creation

1. For individual tables you can view them with `help show create table`.

```sql 
mysql> show create table iceberg_db.logs_1;
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                                                                                                                                                                                                                 |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| logs_1 | CREATE TABLE `logs_1` (
  `level` varchar(-1) NOT NULL COMMENT "null",
  `event_time` datetime NOT NULL COMMENT "null",
  `message` varchar(-1) NOT NULL COMMENT "null"
) ENGINE=ICEBERG
COMMENT "ICEBERG"
PROPERTIES (
"iceberg.database" = "doris",
"iceberg.table" = "logs_1",
"iceberg.hive.metastore.uris"  =  "thrift://10.10.10.10:9087",
"iceberg.catalog.type"  =  "HIVE_CATALOG"
) |
+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```

2. For Iceberg database, you can view it with `help show table creation`.

```sql
mysql> show table creation from iceberg_db;
+--------+---------+---------------------+---------------------------------------------------------+
| Table  | Status  | Create Time         | Error Msg                                               |
+--------+---------+---------------------+---------------------------------------------------------+
| logs   | fail    | 2021-12-14 13:50:10 | Cannot convert unknown type to Doris type: list<string> |
| logs_1 | success | 2021-12-14 13:50:10 |                                                         |
+--------+---------+---------------------+---------------------------------------------------------+
2 rows in set (0.00 sec)
```

  This is a new syntax for showing table creation records in an Iceberg database:

  ```sql
      SHOW TABLE CREATION [FROM db] [LIKE mask]
  ```
2022-01-27 10:22:47 +08:00
4bdeef3b64 [chore][fix][doc](fe-plugin)(mysqldump) fix build auditlog plugin error (#7804)
1. fix problems when building fe_plugins
2. format
3. add docs about dumping data with mysqldump
2022-01-26 09:11:23 +08:00
4ac8b3c9a9 [fix][s3] Fix a bug that Aliyun OSS can not be visited with the AWS S3 SDK (#7691)
Close #7690

1. Exclude httpclient and httpcore dependencies from thrift@0.13

    Explicitly use httpclient@4.5.13 and httpcore@4.4.15
    https://stackoverflow.com/questions/59265959/java-lang-bootstrapmethoderror-call-site-initialization-exception-from-athena-j

2. Exclude aws-java-sdk-s3 dependency from hadoop-aws

    Explicitly use aws-java-sdk-s3@1.11.95
    https://github.com/aws/aws-sdk-java/issues/1032
2022-01-11 15:00:31 +08:00
ad35067a2a [chore][docs] add deploy spark/flink connectors to maven release repo docs (#7616) 2022-01-06 23:23:33 +08:00
738d2d2e07 [refactor] update parent pom version and optimize build scripts (#7548) 2022-01-05 10:45:11 +08:00
85c30fc720 [deps] Upgrade Log4j to 2.17.1 to solve the CVE-2021-44832 security vulnerability (#7536)
Upgrade Log4j to 2.17.1 to solve the CVE-2021-44832 security vulnerability.

Co-authored-by: Zhengguo Yang <yangzhgg@gmail.com>
2021-12-30 10:21:37 +08:00
2872dbfeb8 [refactor] Standardize the writing of pom files, prepare for deployment to maven (#7477) 2021-12-30 10:16:37 +08:00
7a1bb5b335 log4j upgrade to 2.17.0 (#7440)
Solves the third discovered security vulnerability, CVE-2021-45105.
2021-12-21 09:28:02 +08:00
e64da03866 [deps](log4j) Upgrade log4j 2 to 2.16.0 (#7394)
Upgrade log4j 2 to 2.16.0; the Log4j project strongly recommends upgrading to this version.
2021-12-14 15:57:16 +08:00