[feature](cold-hot) support s3 resource (#8808)

Add cold hot support in FE meta, support alter resource DDL in FE
This commit is contained in:
qiye
2022-04-13 09:52:03 +08:00
committed by GitHub
parent 64cf64d1f8
commit bca121333e
45 changed files with 1871 additions and 471 deletions

View File

@ -25,114 +25,117 @@ under the License.
-->
# ALTER SYSTEM
## Description
This statement is used to operate on nodes in a system. (Administrator only!)
Syntax:
1) Adding nodes (without multi-tenant functionality, add in this way)
ALTER SYSTEM ADD BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];
2) Adding idle nodes (that is, adding BACKEND that does not belong to any cluster)
ALTER SYSTEM ADD FREE BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];
3) Adding nodes to a cluster
ALTER SYSTEM ADD BACKEND TO cluster_name "host:heartbeat_port"[,"host:heartbeat_port"...];
4) Delete nodes
ALTER SYSTEM DROP BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];
5) Node offline
ALTER SYSTEM DECOMMISSION BACKEND "host:heartbeat_port"[,"host:heartbeat_port"...];
6) Add Broker
ALTER SYSTEM ADD BROKER broker_name "host:port"[,"host:port"...];
7) Drop Broker
ALTER SYSTEM DROP BROKER broker_name "host:port"[,"host:port"...];
8) Delete all Brokers
ALTER SYSTEM DROP ALL BROKER broker_name
9) Set up a Load error hub for centralized display of import error information
ALTER SYSTEM SET LOAD ERRORS HUB PROPERTIES ("key" = "value"[, ...]);
10) Modify property of BE
ALTER SYSTEM MODIFY BACKEND "host:heartbeat_port" SET ("key" = "value"[, ...]);
Explain:
1) Host can be hostname or IP address
2) heartbeat_port is the heartbeat port of the node
3) Adding and deleting nodes are synchronous operations. These two operations do not take the existing data on the node into account; the node is removed from the metadata directly, so use them with caution.
4) The node offline operation is used to take a node offline safely. This operation is asynchronous. If it succeeds, the node will eventually be removed from the metadata; if it fails, the node will not be taken offline.
5) The offline operation of the node can be cancelled manually. See CANCEL DECOMMISSION for details
6) Load error hub:
Currently, two types of Hub are supported: Mysql and Broker. You need to specify "type" = "mysql" or "type" = "broker" in PROPERTIES.
If you need to delete the current load error hub, you can set type to null.
1) When using the Mysql type, the error information generated during import is inserted into the specified MySQL database table, and the error information can then be viewed directly with the show load warnings statement.
Hub of Mysql type needs to specify the following parameters:
host: mysql host
port: mysql port
user: mysql user
password: mysql password
database: mysql database
table: mysql table
2) When the Broker type is used, the error information generated during import is written as a file to the designated remote storage system through the broker. Make sure the corresponding broker is deployed.
Hub of Broker type needs to specify the following parameters:
Broker: Name of broker
Path: Remote Storage Path
Other properties: Other information necessary to access remote storage, such as authentication information.
7) Modify BE node attributes currently supports the following attributes:
1. tag.location: Resource tag
2. disable_query: Query disabled attribute
3. disable_load: Load disabled attribute
## Example
1. Add a node
ALTER SYSTEM ADD BACKEND "host:port";
2. Adding an idle node
ALTER SYSTEM ADD FREE BACKEND "host:port";
3. Delete two nodes
ALTER SYSTEM DROP BACKEND "host1:port", "host2:port";
4. Take two nodes offline
ALTER SYSTEM DECOMMISSION BACKEND "host1:port", "host2:port";
5. Add two HDFS Brokers
ALTER SYSTEM ADD BROKER hdfs "host1:port", "host2:port";
6. Add a load error hub of Mysql type
ALTER SYSTEM SET LOAD ERRORS HUB PROPERTIES
("type"= "mysql",
"host" = "192.168.1.17"
"port" = "3306",
"User" = "my" name,
"password" = "my_passwd",
"database" = "doris_load",
"table" = "load_errors"
);
7. Add a load error hub of Broker type
ALTER SYSTEM SET LOAD ERRORS HUB PROPERTIES
("type"= "broker",
"Name" = BOS,
"path" = "bos://backup-cmy/logs",
"bos_endpoint" ="http://gz.bcebos.com",
"bos_accesskey" = "069fc278xxxxxx24ddb522",
"bos_secret_accesskey"="700adb0c6xxxxxx74d59eaa980a"
);
8. Delete the current load error hub
ALTER SYSTEM SET LOAD ERRORS HUB PROPERTIES
("type"= "null");
9. Modify BE resource tag
ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("tag.location" = "group_a");
10. Modify the query disabled attribute of BE
ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("disable_query" = "true");
11. Modify the load disabled attribute of BE
ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("disable_load" = "true");
## keyword
ALTER, SYSTEM, BACKEND, BROKER, FREE

View File

@ -0,0 +1,48 @@
---
{
"title": "ALTER RESOURCE",
"language": "en"
}
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# ALTER RESOURCE
## Description
This statement is used to modify an existing resource. Only the root or admin user can modify resources.
Syntax:
ALTER RESOURCE 'resource_name'
PROPERTIES ("key"="value", ...);
Note: The resource type does not support modification.
## Example
1. Modify the working directory of the Spark resource named spark0:
ALTER RESOURCE 'spark0' PROPERTIES ("working_dir" = "hdfs://127.0.0.1:10000/tmp/doris_new");
2. Modify the maximum number of connections to the S3 resource named remote_s3:
ALTER RESOURCE 'remote_s3' PROPERTIES ("s3_max_connections" = "100");
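3. Modify the access credentials of the S3 resource named remote_s3 (a hedged sketch: the property names follow the CREATE RESOURCE page, the new values are placeholders, and whether credential properties can be changed this way may depend on the version):
ALTER RESOURCE 'remote_s3' PROPERTIES ("s3_access_key" = "new_ak", "s3_secret_key" = "new_sk");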
## keyword
ALTER, RESOURCE

View File

@ -71,6 +71,7 @@ under the License.
1) The following attributes of the modified partition are currently supported.
- storage_medium
- storage_cooldown_time
- remote_storage_cooldown_time
- replication_num
- in_memory
2) For single-partition tables, partition_name is the same as the table name.
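As a minimal sketch of modifying the attributes listed above on a partition (table and partition names are placeholders):
ALTER TABLE example_db.example_table MODIFY PARTITION p1 SET ("remote_storage_cooldown_time" = "2023-01-01 00:00:00");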

View File

@ -0,0 +1,134 @@
---
{
"title": "CREATE RESOURCE",
"language": "en"
}
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# CREATE RESOURCE
## Description
This statement is used to create a resource. Only the root or admin user can create resources. Currently supports Spark, ODBC, S3 external resources.
In the future, other external resources may be added to Doris for use, such as Spark/GPU for query, HDFS/S3 for external storage, MapReduce for ETL, etc.
Syntax:
CREATE [EXTERNAL] RESOURCE "resource_name"
PROPERTIES ("key"="value", ...);
Explanation:
1. The resource type must be specified in PROPERTIES via "type" = "[spark|odbc_catalog|s3]"; spark, odbc_catalog, and s3 are currently supported.
2. The PROPERTIES vary by resource type; see the examples for details.
## Example
1. Create a Spark resource named spark0 in yarn cluster mode.
````
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
"type" = "spark",
"spark.master" = "yarn",
"spark.submit.deployMode" = "cluster",
"spark.jars" = "xxx.jar,yyy.jar",
"spark.files" = "/tmp/aaa,/tmp/bbb",
"spark.executor.memory" = "1g",
"spark.yarn.queue" = "queue0",
"spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
"spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
"working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
"broker" = "broker0",
"broker.username" = "user0",
"broker.password" = "password0"
);
````
Spark related parameters are as follows:
- spark.master: Required, currently supports yarn, spark://host:port.
- spark.submit.deployMode: The deployment mode of the Spark program, required, supports both cluster and client.
- spark.hadoop.yarn.resourcemanager.address: Required when master is yarn.
- spark.hadoop.fs.defaultFS: Required when master is yarn.
- Other parameters are optional, refer to http://spark.apache.org/docs/latest/configuration.html
working_dir and broker need to be specified when Spark is used for ETL, as described below:
working_dir: The directory used by the ETL. Required when spark is used as an ETL resource. For example: hdfs://host:port/tmp/doris.
broker: broker name. Required when spark is used as an ETL resource. Configuration needs to be done in advance using the `ALTER SYSTEM ADD BROKER` command.
broker.property_key: The authentication information that the broker needs to specify when reading the intermediate file generated by ETL.
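For reference, the broker mentioned above can be registered in advance with a statement like the following (a minimal sketch; the broker name and address are placeholders, and the syntax is the one documented under ALTER SYSTEM):
````
ALTER SYSTEM ADD BROKER broker0 "broker_host:8000";
````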
2. Create an ODBC resource
````
CREATE EXTERNAL RESOURCE `oracle_odbc`
PROPERTIES (
"type" = "odbc_catalog",
"host" = "192.168.0.1",
"port" = "8086",
"user" = "test",
"password" = "test",
"database" = "test",
"odbc_type" = "oracle",
"driver" = "Oracle 19 ODBC driver"
);
````
The relevant parameters of ODBC are as follows:
- host: IP address of the external database
- driver: The driver name of the ODBC appearance, which must be the same as the Driver name in be/conf/odbcinst.ini.
- odbc_type: the type of the external database, currently supports oracle, mysql, postgresql
- user: username of the foreign database
- password: the password information of the corresponding user
3. Create S3 resource
````
CREATE RESOURCE "remote_s3"
PROPERTIES
(
"type" = "s3",
"s3_endpoint" = "http://bj.s3.com",
"s3_region" = "bj",
"s3_root_path" = "/path/to/root",
"s3_access_key" = "bbb",
"s3_secret_key" = "aaaa",
"s3_max_connections" = "50",
"s3_request_timeout_ms" = "3000",
"s3_connection_timeout_ms" = "1000"
);
````
S3 related parameters are as follows:
- required
- s3_endpoint: s3 endpoint
- s3_region: s3 region
- s3_root_path: s3 root directory
- s3_access_key: s3 access key
- s3_secret_key: s3 secret key
- optional
- s3_max_connections: the maximum number of s3 connections, the default is 50
- s3_request_timeout_ms: s3 request timeout, in milliseconds, the default is 3000
- s3_connection_timeout_ms: s3 connection timeout, in milliseconds, the default is 1000
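Once created, the S3 resource can be referenced as cold storage for a table, for example through the remote_storage_resource table property (a minimal sketch; see the CREATE TABLE page for the full property list):
````
CREATE TABLE example_db.cold_hot_table
(
    k1 DATE,
    v1 INT SUM
)
ENGINE=olap
AGGREGATE KEY(k1)
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES(
    "storage_medium" = "SSD",
    "storage_cooldown_time" = "2022-06-01 00:00:00",
    "remote_storage_resource" = "remote_s3",
    "remote_storage_cooldown_time" = "2022-12-01 00:00:00"
);
````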
## keyword
CREATE, RESOURCE

View File

@ -297,18 +297,24 @@ Syntax:
PROPERTIES (
"storage_medium" = "[SSD|HDD]",
["storage_cooldown_time" = "yyyy-MM-dd HH:mm:ss"],
["remote_storage_resource" = "xxx"],
["remote_storage_cooldown_time" = "yyyy-MM-dd HH:mm:ss"],
["replication_num" = "3"],
["replication_allocation" = "xxx"]
["replication_allocation" = "xxx"]
)
```
storage_medium: SSD or HDD. The default initial storage medium can be specified by `default_storage_medium=xxx` in the FE configuration file `fe.conf`; if it is not set, HDD is used by default.
Note: when the FE configuration `enable_strict_storage_medium_check` is `true` and the cluster has no backend with the corresponding storage medium, table creation fails with 'Failed to find enough host in all backends with storage medium is SSD|HDD'.
storage_cooldown_time: If storage_medium is SSD, data will be automatically moved to HDD after this time.
Default is 30 days.
Format: "yyyy-MM-dd HH:mm:ss"
remote_storage_resource: The name of the remote storage resource; it needs to be used together with the storage_cold_medium parameter.
remote_storage_cooldown_time: Used together with remote_storage_resource. Indicates when the locally stored data of the partition expires.
Does not expire by default. Must be later than storage_cooldown_time if both are set.
Format: "yyyy-MM-dd HH:mm:ss"
replication_num: Replication number of a partition. Default is 3.
replication_allocation: Specify the distribution of replicas according to the resource tag.
If the table is not range partitioned, these properties take effect at the table level; otherwise they take effect at the partition level.
Users can specify different properties for different partitions with the `ADD PARTITION` or `MODIFY PARTITION` statements.
@ -353,7 +359,7 @@ Syntax:
dynamic_partition.reserved_history_periods: Used to specify the range of reserved history periods
```
5) You can create multiple Rollups in bulk when building a table
Syntax:
```
ROLLUP (rollup_name (column_name1, column_name2, ...)
@ -405,68 +411,89 @@ Syntax:
"storage_cooldown_time" = "2015-06-04 00:00:00"
);
```
3. Create an olap table, distributed by hash, with aggregation type. Also set the storage medium and cooldown time, and configure the remote storage resource and cold data cooldown time.
```
CREATE TABLE example_db.table_hash
(
k1 BIGINT,
k2 LARGEINT,
v1 VARCHAR(2048) REPLACE,
v2 SMALLINT SUM DEFAULT "10"
)
ENGINE=olap
AGGREGATE KEY(k1, k2)
DISTRIBUTED BY HASH (k1, k2) BUCKETS 32
PROPERTIES(
"storage_medium" = "SSD", "storage_cooldown_time" = "2015-06-04 00:00:00"
"storage_medium" = "SSD",
"storage_cooldown_time" = "2015-06-04 00:00:00",
"remote_storage_resource" = "remote_s3",
"remote_storage_cooldown_time" = "2015-12-04 00:00:00"
);
```
4. Create an olap table, with range partitioning, distributed by hash. Records with the same key can coexist. Set the initial storage medium and cooldown time; the default columnar storage is used.
1) LESS THAN
```
CREATE TABLE example_db.table_range
(
k1 DATE,
k2 INT,
k3 SMALLINT,
v1 VARCHAR(2048),
v2 DATETIME DEFAULT "2014-02-04 15:36:00"
)
ENGINE=olap
DUPLICATE KEY(k1, k2, k3)
PARTITION BY RANGE (k1)
(
PARTITION p1 VALUES LESS THAN ("2014-01-01"),
PARTITION p2 VALUES LESS THAN ("2014-06-01"),
PARTITION p3 VALUES LESS THAN ("2014-12-01")
)
DISTRIBUTED BY HASH(k2) BUCKETS 32
PROPERTIES(
"storage_medium" = "SSD", "storage_cooldown_time" = "2015-06-04 00:00:00"
);
```
Explain:
This statement will create 3 partitions:
```
( { MIN }, {"2014-01-01"} )
[ {"2014-01-01"}, {"2014-06-01"} )
[ {"2014-06-01"}, {"2014-12-01"} )
```
Data outside these ranges will not be loaded.
2) Fixed Range
```
CREATE TABLE table_range
(
k1 DATE,
k2 INT,
k3 SMALLINT,
v1 VARCHAR(2048),
v2 DATETIME DEFAULT "2014-02-04 15:36:00"
)
ENGINE=olap
DUPLICATE KEY(k1, k2, k3)
PARTITION BY RANGE (k1, k2, k3)
(
PARTITION p1 VALUES [("2014-01-01", "10", "200"), ("2014-01-01", "20", "300")),
PARTITION p2 VALUES [("2014-06-01", "100", "200"), ("2014-07-01", "100", "300"))
)
DISTRIBUTED BY HASH(k2) BUCKETS 32
PROPERTIES(
"storage_medium" = "SSD"
);
```
5. Create an olap table, with list partitioning, distributed by hash. Records with the same key can coexist. Set the initial storage medium and cooldown time; the default columnar storage is used.
1) Single column partition
@ -540,9 +567,9 @@ Syntax:
Data that is not within these partition enumeration values will be filtered as illegal data
6. Create a mysql table
6.1 Create MySQL table directly from external table information
```
CREATE EXTERNAL TABLE example_db.table_mysql
(
k1 DATE,
@ -561,21 +588,20 @@ Syntax:
"database" = "mysql_db_test",
"table" = "mysql_table_test"
)
```
6.2 Create MySQL table with external ODBC catalog resource
```
CREATE EXTERNAL RESOURCE "mysql_resource"
PROPERTIES
(
"type" = "odbc_catalog",
"user" = "mysql_user",
"password" = "mysql_passwd",
"host" = "127.0.0.1",
"port" = "8239"
);
CREATE EXTERNAL TABLE example_db.table_mysql
(
k1 DATE,
@ -590,10 +616,10 @@ Syntax:
"odbc_catalog_resource" = "mysql_resource",
"database" = "mysql_db_test",
"table" = "mysql_table_test"
);
```
7. Create a broker table, with file on HDFS, column separated by "|", line delimited by "\n"
```
CREATE EXTERNAL TABLE example_db.table_broker (
@ -616,7 +642,7 @@ Syntax:
);
```
8. Create a table with an HLL column
```
CREATE TABLE example_db.example_table
@ -631,7 +657,7 @@ Syntax:
DISTRIBUTED BY HASH(k1) BUCKETS 32;
```
9. Create a table with a BITMAP_UNION column
```
CREATE TABLE example_db.example_table
@ -645,21 +671,21 @@ Syntax:
AGGREGATE KEY(k1, k2)
DISTRIBUTED BY HASH(k1) BUCKETS 32;
```
10. Create a table with a QUANTILE_UNION column (the original values of the **v1** and **v2** columns must be **numeric** types)
```
CREATE TABLE example_db.example_table
(
k1 TINYINT,
k2 DECIMAL(10, 2) DEFAULT "10.5",
v1 QUANTILE_STATE QUANTILE_UNION,
v2 QUANTILE_STATE QUANTILE_UNION
)
ENGINE=olap
AGGREGATE KEY(k1, k2)
DISTRIBUTED BY HASH(k1) BUCKETS 32;
```
11. Create two colocate join tables.
```
CREATE TABLE `t1` (
@ -682,7 +708,7 @@ Syntax:
);
```
12. Create a broker table, with file on BOS.
```
CREATE EXTERNAL TABLE example_db.table_broker (
@ -700,7 +726,7 @@ Syntax:
);
```
13. Create a table with a bitmap index
```
CREATE TABLE example_db.table_hash
@ -717,7 +743,7 @@ Syntax:
DISTRIBUTED BY HASH(k1) BUCKETS 32;
```
14. Create a dynamic partitioning table (dynamic partitioning needs to be enabled in the FE configuration), which creates partitions 3 days in advance every day. For example, if today is '2020-01-08', partitions named 'p20200108', 'p20200109', 'p20200110', 'p20200111' will be created.
```
[types: [DATE]; keys: [2020-01-08]; ‥types: [DATE]; keys: [2020-01-09]; )
@ -726,29 +752,29 @@ Syntax:
[types: [DATE]; keys: [2020-01-11]; ‥types: [DATE]; keys: [2020-01-12]; )
```
```
CREATE TABLE example_db.dynamic_partition
(
k1 DATE,
k2 INT,
k3 SMALLINT,
v1 VARCHAR(2048),
v2 DATETIME DEFAULT "2014-02-04 15:36:00"
)
ENGINE=olap
DUPLICATE KEY(k1, k2, k3)
PARTITION BY RANGE (k1) ()
DISTRIBUTED BY HASH(k2) BUCKETS 32
PROPERTIES(
"storage_medium" = "SSD",
"dynamic_partition.time_unit" = "DAY",
"dynamic_partition.end" = "3",
"dynamic_partition.prefix" = "p",
"dynamic_partition.buckets" = "32"
);
```
15. Create a table with rollup index
```
CREATE TABLE example_db.rolup_index_table
(
event_day DATE,
@ -765,11 +791,11 @@ Syntax:
r3(event_day)
)
PROPERTIES("replication_num" = "3");
```
16. Create an in-memory table:
```
CREATE TABLE example_db.table_hash
(
k1 TINYINT,
@ -783,10 +809,10 @@ Syntax:
COMMENT "my first doris table"
DISTRIBUTED BY HASH(k1) BUCKETS 32
PROPERTIES ("in_memory"="true");
```
17. Create a hive external table
```
CREATE TABLE example_db.table_hive
(
k1 TINYINT,
@ -800,11 +826,11 @@ Syntax:
"table" = "hive_table_name",
"hive.metastore.uris" = "thrift://127.0.0.1:9083"
);
```
18. Specify the replica distribution of the table through replication_allocation
```
CREATE TABLE example_db.table_hash
(
k1 TINYINT,
@ -812,9 +838,9 @@ Syntax:
)
DISTRIBUTED BY HASH(k1) BUCKETS 32
PROPERTIES (
"replication_allocation"="tag.location.group_a:1, tag.location.group_b:2"
);
"replication_allocation"="tag.location.group_a:1, tag.location.group_b:2"
);
CREATE TABLE example_db.dynamic_partition
(
k1 DATE,
@ -833,11 +859,11 @@ Syntax:
"dynamic_partition.buckets" = "32",
"dynamic_partition."replication_allocation" = "tag.location.group_a:3"
);
```
19. Create an Iceberg external table
```
CREATE TABLE example_db.t_iceberg
ENGINE=ICEBERG
PROPERTIES (
@ -846,7 +872,7 @@ Syntax:
"iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
"iceberg.catalog.type" = "HIVE_CATALOG"
);
```
## keyword

View File

@ -0,0 +1,46 @@
---
{
"title": "DROP RESOURCE",
"language": "en"
}
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# DROP RESOURCE
## Description
This statement is used to delete an existing resource. Only the root or admin user can delete resources.
Syntax:
DROP RESOURCE 'resource_name'
Note: ODBC/S3 resources that are in use cannot be deleted.
## Example
1. Delete the Spark resource named spark0:
DROP RESOURCE 'spark0';
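2. Delete the S3 resource named remote_s3 (a sketch using the resource name from the CREATE RESOURCE examples):
DROP RESOURCE 'remote_s3';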
## keyword
DROP, RESOURCE

View File

@ -25,17 +25,19 @@ under the License.
-->
# SHOW RESOURCES
## Description
This statement is used to display the resources that the user has permission to use.
Ordinary users can only see the resources they have permission to use, while root or admin users can see all resources.
Syntax:
SHOW RESOURCES
[
WHERE
[NAME [ = "your_resource_name" | LIKE "name_matcher"]]
[RESOURCETYPE = ["SPARK"]]
[RESOURCETYPE = ["[spark|odbc_catalog|s3]"]]
]
[ORDER BY ...]
[LIMIT limit][OFFSET offset];
@ -48,7 +50,8 @@ under the License.
5) If LIMIT is specified, at most limit matching records are displayed; otherwise all matching records are displayed.
6) If OFFSET is specified, the query results are displayed starting from the given offset. The default offset is 0.
## Example
1. Display all resources that the current user has permissions on
SHOW RESOURCES;
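A minimal sketch of filtering by the newly supported s3 type (the filter value follows the syntax shown above):
SHOW RESOURCES WHERE RESOURCETYPE = "s3";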
@ -60,5 +63,5 @@ under the License.
## keyword
SHOW RESOURCES, RESOURCES