Modify create and drop docs in DDL module. (#9030)
@ -36,14 +36,23 @@ This statement is used to restore a previously deleted database, table or partition

Syntax:

1. restore database

```sql
RECOVER DATABASE db_name;
```

2. restore table

```sql
RECOVER TABLE [db_name.]table_name;
```

3. restore partition

```sql
RECOVER PARTITION partition_name FROM [db_name.]table_name;
```

Notes:
@ -26,13 +26,63 @@ under the License.

## CREATE-DATABASE

### Name

CREATE DATABASE

### Description

This statement is used to create a new database.

Syntax:

```sql
CREATE DATABASE [IF NOT EXISTS] db_name
[PROPERTIES ("key"="value", ...)];
```

`PROPERTIES`: Additional information about the database; can be omitted.

- If you create an Iceberg database, you need to provide the following information in properties:

```sql
PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
)
```

Notes:

- `iceberg.database`: the database name corresponding to Iceberg;
- `iceberg.hive.metastore.uris`: the hive metastore service address;
- `iceberg.catalog.type`: defaults to `HIVE_CATALOG`; currently only `HIVE_CATALOG` is supported, and more Iceberg catalog types will be supported in the future.

### Example

1. Create a new database db_test

```sql
CREATE DATABASE db_test;
```

2. Create a new Iceberg database iceberg_test

```sql
CREATE DATABASE `iceberg_test`
PROPERTIES (
    "iceberg.database" = "doris",
    "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
);
```

### Keywords

    CREATE, DATABASE

### Best Practice
@ -26,13 +26,60 @@ under the License.

## CREATE-ENCRYPT-KEY

### Name

CREATE ENCRYPTKEY

### Description

This statement creates a custom key. Executing this command requires the user to have `ADMIN` privileges.

Syntax:

```sql
CREATE ENCRYPTKEY key_name AS "key_string"
```

Notes:

`key_name`: The name of the key to be created; it may be qualified with a database name. For example: `db1.my_key`.

`key_string`: The string from which to create the key.

If `key_name` contains a database name, the custom key is created in that database; otherwise it is created in the database of the current session. The new key's name cannot be the same as an existing key in the corresponding database, or the creation will fail.

### Example

1. Create a custom key

```sql
CREATE ENCRYPTKEY my_key AS "ABCD123456789";
```

2. Use a custom key

To use a custom key, add the keyword `KEY`/`key` before the key, separated from `key_name` by a space.

```sql
mysql> SELECT HEX(AES_ENCRYPT("Doris is Great", KEY my_key));
+------------------------------------------------+
| hex(aes_encrypt('Doris is Great', key my_key)) |
+------------------------------------------------+
| D26DB38579D6A343350EDDC6F2AD47C6               |
+------------------------------------------------+
1 row in set (0.02 sec)

mysql> SELECT AES_DECRYPT(UNHEX('D26DB38579D6A343350EDDC6F2AD47C6'), KEY my_key);
+--------------------------------------------------------------------+
| aes_decrypt(unhex('D26DB38579D6A343350EDDC6F2AD47C6'), key my_key) |
+--------------------------------------------------------------------+
| Doris is Great                                                     |
+--------------------------------------------------------------------+
1 row in set (0.01 sec)
```

### Keywords

    CREATE, ENCRYPTKEY

### Best Practice
@ -26,10 +26,203 @@ under the License.

## CREATE-EXTERNAL-TABLE

### Name

CREATE EXTERNAL TABLE

### Description

This statement is used to create an external table; see [CREATE TABLE](./CREATE-TABLE.html) for the full syntax.

The type of the external table is identified by the ENGINE type; currently MYSQL, BROKER, HIVE, and ICEBERG are supported.

1. If it is mysql, you need to provide the following information in properties:

```sql
PROPERTIES (
    "host" = "mysql_server_host",
    "port" = "mysql_server_port",
    "user" = "your_user_name",
    "password" = "your_password",
    "database" = "database_name",
    "table" = "table_name"
)
```

Notice:

- "table_name" in the "table" entry is the real table name in mysql, while the table_name in the CREATE TABLE statement is the name of the mysql table in Doris; the two can be different.

- The purpose of creating a mysql table in Doris is to access the mysql database through Doris. Doris itself does not maintain or store any mysql data.

2. If it is a broker, it means that access to the table goes through the specified broker, and the following information needs to be provided in properties:

```sql
PROPERTIES (
    "broker_name" = "broker_name",
    "path" = "file_path1[,file_path2]",
    "column_separator" = "value_separator",
    "line_delimiter" = "value_delimiter"
)
```

In addition, you need to provide the property information required by the broker, passed through BROKER PROPERTIES. For example, HDFS needs:

```sql
BROKER PROPERTIES(
    "username" = "name",
    "password" = "password"
)
```

The content that needs to be passed in differs depending on the broker type.

Notice:

- If there are multiple files in "path", separate them with commas [,]. If a filename contains a comma, use %2c instead. If a filename contains %, use %25 instead.
- Currently the file content format supports CSV, and the GZ, BZ2, LZ4, and LZO (LZOP) compression formats.

3. If it is hive, you need to provide the following information in properties:

```sql
PROPERTIES (
    "database" = "hive_db_name",
    "table" = "hive_table_name",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083"
)
```

Where database is the name of the database corresponding to the hive table, table is the name of the hive table, and hive.metastore.uris is the address of the hive metastore service.

4. If it is iceberg, you need to provide the following information in properties:

```sql
PROPERTIES (
    "iceberg.database" = "iceberg_db_name",
    "iceberg.table" = "iceberg_table_name",
    "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
)
```

Where database is the database name corresponding to Iceberg;
table is the corresponding table name in Iceberg;
hive.metastore.uris is the hive metastore service address;
catalog.type defaults to HIVE_CATALOG. Currently only HIVE_CATALOG is supported; more Iceberg catalog types will be supported in the future.

### Example

1. Create a MYSQL external table

Create a mysql external table directly from the external table information

```sql
CREATE EXTERNAL TABLE example_db.table_mysql
(
    k1 DATE,
    k2 INT,
    k3 SMALLINT,
    k4 VARCHAR(2048),
    k5 DATETIME
)
ENGINE=mysql
PROPERTIES
(
    "host" = "127.0.0.1",
    "port" = "8239",
    "user" = "mysql_user",
    "password" = "mysql_passwd",
    "database" = "mysql_db_test",
    "table" = "mysql_table_test"
)
```

Create a mysql external table through an External Catalog Resource

```sql
# Create the Resource first
CREATE EXTERNAL RESOURCE "mysql_resource"
PROPERTIES
(
    "type" = "odbc_catalog",
    "user" = "mysql_user",
    "password" = "mysql_passwd",
    "host" = "127.0.0.1",
    "port" = "8239"
);

# Then create the mysql external table through the Resource
CREATE EXTERNAL TABLE example_db.table_mysql
(
    k1 DATE,
    k2 INT,
    k3 SMALLINT,
    k4 VARCHAR(2048),
    k5 DATETIME
)
ENGINE=mysql
PROPERTIES
(
    "odbc_catalog_resource" = "mysql_resource",
    "database" = "mysql_db_test",
    "table" = "mysql_table_test"
)
```

2. Create a broker external table with data files stored on HDFS, where the data is split with "|" and "\n" is the newline

```sql
CREATE EXTERNAL TABLE example_db.table_broker (
    k1 DATE,
    k2 INT,
    k3 SMALLINT,
    k4 VARCHAR(2048),
    k5 DATETIME
)
ENGINE=broker
PROPERTIES (
    "broker_name" = "hdfs",
    "path" = "hdfs://hdfs_host:hdfs_port/data1,hdfs://hdfs_host:hdfs_port/data2,hdfs://hdfs_host:hdfs_port/data3%2c4",
    "column_separator" = "|",
    "line_delimiter" = "\n"
)
BROKER PROPERTIES (
    "username" = "hdfs_user",
    "password" = "hdfs_password"
)
```

3. Create a hive external table

```sql
CREATE TABLE example_db.table_hive
(
    k1 TINYINT,
    k2 VARCHAR(50),
    v INT
)
ENGINE=hive
PROPERTIES
(
    "database" = "hive_db_name",
    "table" = "hive_table_name",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083"
);
```

4. Create an Iceberg external table

```sql
CREATE TABLE example_db.t_iceberg
ENGINE=ICEBERG
PROPERTIES (
    "iceberg.database" = "iceberg_db",
    "iceberg.table" = "iceberg_table",
    "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "iceberg.catalog.type" = "HIVE_CATALOG"
);
```

### Keywords
@ -26,13 +26,74 @@ under the License.

## CREATE-FILE

### Name

CREATE FILE

### Description

This statement is used to create and upload a file to the Doris cluster.
This function is usually used to manage files that are needed by other commands, such as certificates, public keys, and private keys.

This command can only be executed by users with `admin` privileges.
A file belongs to a database, and can be used by any user with access rights to that database.

A single file is limited to 1MB in size.
A Doris cluster can hold up to 100 uploaded files.

Syntax:

```sql
CREATE FILE "file_name" [IN database]
[properties]
```

Notes:

- file_name: custom file name.
- database: the db the file belongs to; if not specified, the db of the current session is used.
- properties supports the following parameters:
  - url: Required. Specifies the download path for the file. Currently only unauthenticated http download paths are supported. After the command executes successfully, the file is saved in Doris and the url is no longer needed.
  - catalog: Required. The classification name of the file, which can be customized. Some commands look up files in a specific catalog; for example, in routine load, when the data source is kafka, the files under the catalog named kafka are searched.
  - md5: Optional. The md5 of the file. If specified, verification is performed after the file is downloaded.
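After a file is created, it can be listed with the companion `SHOW FILE` statement; a minimal sketch, assuming the database name `my_database` (see the SHOW statements documentation for the full syntax):

```sql
-- Sketch: list the files uploaded to a database (my_database is a placeholder).
SHOW FILE FROM my_database;
```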
### Example

1. Create a file ca.pem, classified as kafka

```sql
CREATE FILE "ca.pem"
PROPERTIES
(
    "url" = "https://test.bj.bcebos.com/kafka-key/ca.pem",
    "catalog" = "kafka"
);
```

2. Create a file client.key, classified as my_catalog

```sql
CREATE FILE "client.key"
IN my_database
PROPERTIES
(
    "url" = "https://test.bj.bcebos.com/kafka-key/client.key",
    "catalog" = "my_catalog",
    "md5" = "b5bb901bf10f99205b39a46ac3557dd9"
);
```

### Keywords

    CREATE, FILE

### Best Practice

1. This command can only be executed by users with admin privileges. A file belongs to a database and can be used by any user with access rights to that database.

2. File size and quantity restrictions.

   This function is mainly used to manage small files such as certificates, so a single file is limited to 1MB in size, and a Doris cluster can hold up to 100 uploaded files.
@ -26,13 +26,129 @@ under the License.

## CREATE-FUNCTION

### Name

CREATE FUNCTION

### Description

This statement creates a custom function. Executing this command requires the user to have `ADMIN` privileges.

If `function_name` contains a database name, the custom function is created in that database; otherwise the function is created in the database of the current session. The name and parameters of the new function cannot be the same as an existing function in the current namespace, or the creation will fail; a function with the same name but different parameters can be created successfully.

Syntax:

```sql
CREATE [AGGREGATE] [ALIAS] FUNCTION function_name
    (arg_type [, ...])
    [RETURNS ret_type]
    [INTERMEDIATE inter_type]
    [WITH PARAMETER(param [,...]) AS origin_function]
    [PROPERTIES ("key" = "value" [, ...]) ]
```

Parameter description:

- `AGGREGATE`: If present, the created function is an aggregate function.

- `ALIAS`: If present, the created function is an alias function.

  If neither of the above is present, the created function is a scalar function.

- `function_name`: The name of the function to be created, which may be qualified with a database name. For example: `db1.my_func`.

- `arg_type`: The parameter type of the function, the same as a type defined when creating a table. Variable-length parameters can be represented by `, ...`. For a variable-length type, the type of the variable-length parameters is the same as that of the last non-variable-length parameter.

  **NOTE**: `ALIAS FUNCTION` does not support variable-length arguments and must have at least one argument.

- `ret_type`: Required when creating a new function. Not needed when aliasing an existing function.

- `inter_type`: The data type used to represent the intermediate stage of an aggregate function.

- `param`: The parameters of an alias function; there must be at least one.

- `origin_function`: The original function corresponding to an alias function.

- `properties`: Properties related to aggregate functions and scalar functions. The properties that can be set include:

  - `object_file`: The URL path of the custom function's dynamic library. Currently only the HTTP/HTTPS protocol is supported. This path must remain valid for the entire life cycle of the function. Required.

  - `symbol`: The function signature of the scalar function, used to find the function entry in the dynamic library. Required for scalar functions.

  - `init_fn`: The initialization function signature of the aggregate function. Required for aggregate functions.

  - `update_fn`: The update function signature of the aggregate function. Required for aggregate functions.

  - `merge_fn`: The merge function signature of the aggregate function. Required for aggregate functions.

  - `serialize_fn`: The serialize function signature of the aggregate function. Optional for aggregate functions; if not specified, the default serialization function is used.

  - `finalize_fn`: The function signature of the aggregate function that gets the final result. Optional for aggregate functions; if not specified, the default get-result function is used.

  - `md5`: The MD5 value of the function's dynamic library, used to verify that the downloaded content is correct. Optional.

  - `prepare_fn`: The function signature of the custom function's prepare function, used to find the prepare function entry in the dynamic library. Optional for custom functions.

  - `close_fn`: The function signature of the custom function's close function, used to find the close function entry in the dynamic library. Optional for custom functions.

### Example

1. Create a custom scalar function

```sql
CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
    "symbol" = "_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_",
    "object_file" = "http://host:port/libmyadd.so"
);
```

2. Create a custom scalar function with prepare/close functions

```sql
CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
    "symbol" = "_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_",
    "prepare_fn" = "_ZN9doris_udf14AddUdf_prepareEPNS_15FunctionContextENS0_18FunctionStateScopeE",
    "close_fn" = "_ZN9doris_udf12AddUdf_closeEPNS_15FunctionContextENS0_18FunctionStateScopeE",
    "object_file" = "http://host:port/libmyadd.so"
);
```

3. Create a custom aggregate function

```sql
CREATE AGGREGATE FUNCTION my_count (BIGINT) RETURNS BIGINT PROPERTIES (
    "init_fn"="_ZN9doris_udf9CountInitEPNS_15FunctionContextEPNS_9BigIntValE",
    "update_fn"="_ZN9doris_udf11CountUpdateEPNS_15FunctionContextERKNS_6IntValEPNS_9BigIntValE",
    "merge_fn"="_ZN9doris_udf10CountMergeEPNS_15FunctionContextERKNS_9BigIntValEPS2_",
    "finalize_fn"="_ZN9doris_udf13CountFinalizeEPNS_15FunctionContextERKNS_9BigIntValE",
    "object_file"="http://host:port/libudasample.so"
);
```

4. Create a scalar function with variable-length arguments

```sql
CREATE FUNCTION strconcat(varchar, ...) RETURNS varchar properties (
    "symbol" = "_ZN9doris_udf6StrConcatUdfEPNS_15FunctionContextERKNS_6IntValES4_",
    "object_file" = "http://host:port/libmyStrConcat.so"
);
```

5. Create a custom alias function

```sql
CREATE ALIAS FUNCTION id_masking(INT) WITH PARAMETER(id) AS CONCAT(LEFT(id, 3), '****', RIGHT(id, 4));
```

### Keywords

    CREATE, FUNCTION

### Best Practice
@ -26,13 +26,36 @@ under the License.

## CREATE-INDEX

### Name

CREATE INDEX

### Description

This statement is used to create an index.

Syntax:

```sql
CREATE INDEX [IF NOT EXISTS] index_name ON table_name (column [, ...]) [USING BITMAP] [COMMENT 'balabala'];
```

Notice:
- Currently only bitmap indexes are supported.
- A BITMAP index can only be created on a single column.

### Example

1. Create a bitmap index for siteid on table1

```sql
CREATE INDEX IF NOT EXISTS index_name ON table1 (siteid) USING BITMAP COMMENT 'balabala';
```

### Keywords

    CREATE, INDEX

### Best Practice
@ -26,13 +26,201 @@ under the License.

## CREATE-MATERIALIZED-VIEW

### Name

CREATE MATERIALIZED VIEW

### Description

This statement is used to create a materialized view.

This is an asynchronous operation. After the submission succeeds, you need to check the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.html). After it shows FINISHED, you can use the `desc [table_name] all` command to view the schema of the materialized view.

Syntax:

```sql
CREATE MATERIALIZED VIEW [MV name] as [query]
[PROPERTIES ("key" = "value")]
```

Notes:

- `MV name`: The name of the materialized view. Required. Materialized view names for the same table cannot be repeated.

- `query`: The query statement used to construct the materialized view; its result is the data of the materialized view. Currently supported query formats are:

  ```sql
  SELECT select_expr[, select_expr ...]
  FROM [Base view name]
  GROUP BY column_name[, column_name ...]
  ORDER BY column_name[, column_name ...]
  ```

  The syntax is the same as the query syntax.

  - `select_expr`: All columns in the schema of the materialized view.
    - Only single columns without expression calculation and aggregate columns are supported.
    - The aggregate function currently only supports SUM, MIN, and MAX, and its parameter can only be a single column without expression calculation.
    - Must contain at least one single column.
    - Each involved column can only appear once.
  - `base view name`: The name of the materialized view's base table. Required.
    - Must be a single table, not a subquery.
  - `group by`: The grouping columns of the materialized view. Optional.
    - If not specified, the data is not grouped.
  - `order by`: The sort columns of the materialized view. Optional.
    - The declaration order of the sort columns must be the same as the column declaration order in select_expr.
    - If order by is not declared, sort columns are supplemented automatically according to the following rules: if the materialized view is an aggregate type, all grouping columns are automatically supplemented as sort columns; if it is a non-aggregate type, the first 36 bytes of columns are automatically supplemented as sort columns.
    - If the number of automatically supplemented sort columns is less than 3, the first three columns are used as the sort sequence. If the query contains grouping columns, the sort columns must be the same as the grouping columns.

- properties

  Declares some configuration of the materialized view. Optional.

  ```text
  PROPERTIES ("key" = "value", "key" = "value" ...)
  ```

  The following configurations can be declared here:

  ```text
  short_key: The number of sort columns.
  timeout: The timeout for building the materialized view.
  ```
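A minimal sketch of passing these properties, assuming a base table `base_table` with columns `k1` and `v1` (both names and the `3600` timeout value are placeholders, not from this document):

```sql
-- Sketch: create an aggregate materialized view with an explicit build timeout.
-- "timeout" is the build-timeout property listed above; 3600 is an assumed value.
create materialized view k1_sum_v1 as
select k1, sum(v1) from base_table group by k1
PROPERTIES ("timeout" = "3600");
```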
### Example

The base table structure is

```sql
mysql> desc duplicate_table;
+-------+--------+------+------+---------+-------+
| Field | Type   | Null | Key  | Default | Extra |
+-------+--------+------+------+---------+-------+
| k1    | INT    | Yes  | true | N/A     |       |
| k2    | INT    | Yes  | true | N/A     |       |
| k3    | BIGINT | Yes  | true | N/A     |       |
| k4    | BIGINT | Yes  | true | N/A     |       |
+-------+--------+------+------+---------+-------+
```

1. Create a materialized view that contains only the columns (k1, k2) of the original table

```sql
create materialized view k1_k2 as
select k1, k2 from duplicate_table;
```

The schema of the materialized view is as follows; it contains only the two columns k1 and k2, without any aggregation.

```text
+-----------+-------+------+------+------+---------+-------+
| IndexName | Field | Type | Null | Key  | Default | Extra |
+-----------+-------+------+------+------+---------+-------+
| k1_k2     | k1    | INT  | Yes  | true | N/A     |       |
|           | k2    | INT  | Yes  | true | N/A     |       |
+-----------+-------+------+------+------+---------+-------+
```

2. Create a materialized view with k2 as the sort column

```sql
create materialized view k2_order as
select k2, k1 from duplicate_table order by k2;
```

The schema of the materialized view is shown below. It contains only the two columns k2 and k1, where k2 is the sort column, without any aggregation.

```text
+-----------+-------+------+------+-------+---------+-------+
| IndexName | Field | Type | Null | Key   | Default | Extra |
+-----------+-------+------+------+-------+---------+-------+
| k2_order  | k2    | INT  | Yes  | true  | N/A     |       |
|           | k1    | INT  | Yes  | false | N/A     | NONE  |
+-----------+-------+------+------+-------+---------+-------+
```

3. Create a materialized view grouped by k1 and k2, with the k3 column aggregated by SUM

```sql
create materialized view k1_k2_sumk3 as
select k1, k2, sum(k3) from duplicate_table group by k1, k2;
```

The schema of the materialized view is shown below. It contains the columns k1, k2, and sum(k3), where k1 and k2 are the grouping columns and sum(k3) is the sum of the k3 column grouped by k1 and k2.

Since the materialized view does not declare a sort column and it contains aggregated data, the system supplements the grouping columns k1 and k2 as sort columns by default.

```text
+-------------+-------+--------+------+-------+---------+-------+
| IndexName   | Field | Type   | Null | Key   | Default | Extra |
+-------------+-------+--------+------+-------+---------+-------+
| k1_k2_sumk3 | k1    | INT    | Yes  | true  | N/A     |       |
|             | k2    | INT    | Yes  | true  | N/A     |       |
|             | k3    | BIGINT | Yes  | false | N/A     | SUM   |
+-------------+-------+--------+------+-------+---------+-------+
```

4. Create a materialized view that removes duplicate rows

```sql
create materialized view deduplicate as
select k1, k2, k3, k4 from duplicate_table group by k1, k2, k3, k4;
```

The materialized view schema is shown below. It contains the columns k1, k2, k3, and k4, with no duplicate rows.

```text
+-------------+-------+--------+------+------+---------+-------+
| IndexName   | Field | Type   | Null | Key  | Default | Extra |
+-------------+-------+--------+------+------+---------+-------+
| deduplicate | k1    | INT    | Yes  | true | N/A     |       |
|             | k2    | INT    | Yes  | true | N/A     |       |
|             | k3    | BIGINT | Yes  | true | N/A     |       |
|             | k4    | BIGINT | Yes  | true | N/A     |       |
+-------------+-------+--------+------+------+---------+-------+
```

5. Create a non-aggregate materialized view that does not declare a sort column

The schema of all_type_table is as follows:

```text
+-------+--------------+------+-------+---------+-------+
| Field | Type         | Null | Key   | Default | Extra |
+-------+--------------+------+-------+---------+-------+
| k1    | TINYINT      | Yes  | true  | N/A     |       |
| k2    | SMALLINT     | Yes  | true  | N/A     |       |
| k3    | INT          | Yes  | true  | N/A     |       |
| k4    | BIGINT       | Yes  | true  | N/A     |       |
| k5    | DECIMAL(9,0) | Yes  | true  | N/A     |       |
| k6    | DOUBLE       | Yes  | false | N/A     | NONE  |
| k7    | VARCHAR(20)  | Yes  | false | N/A     | NONE  |
+-------+--------------+------+-------+---------+-------+
```

The materialized view contains the columns k3, k4, k5, k6, and k7 and does not declare a sort column; the creation statement is as follows:

```sql
create materialized view mv_1 as
select k3, k4, k5, k6, k7 from all_type_table;
```

The sort columns added by the system by default are the three columns k3, k4, and k5. The sum of the byte sizes of these three column types is 4 (INT) + 8 (BIGINT) + 16 (DECIMAL) = 28 < 36, so these three columns are added as sort columns. In the schema of the materialized view below, the Key field of the k3, k4, and k5 columns is true, meaning they are sort columns; the Key field of the k6 and k7 columns is false, meaning they are non-sort columns.

```sql
+-----------+-------+--------------+------+-------+---------+-------+
| IndexName | Field | Type         | Null | Key   | Default | Extra |
+-----------+-------+--------------+------+-------+---------+-------+
| mv_1      | k3    | INT          | Yes  | true  | N/A     |       |
|           | k4    | BIGINT       | Yes  | true  | N/A     |       |
|           | k5    | DECIMAL(9,0) | Yes  | true  | N/A     |       |
|           | k6    | DOUBLE       | Yes  | false | N/A     | NONE  |
|           | k7    | VARCHAR(20)  | Yes  | false | N/A     | NONE  |
+-----------+-------+--------------+------+-------+---------+-------+
```

### Keywords

    CREATE, MATERIALIZED, VIEW

### Best Practice
@ -26,13 +26,121 @@ under the License.

## CREATE-RESOURCE

### Name

CREATE RESOURCE

### Description

This statement is used to create a resource. Only the root or admin user can create resources. Currently Spark, ODBC, and S3 external resources are supported.
In the future, other external resources may be added to Doris, such as Spark/GPU for query, HDFS/S3 for external storage, MapReduce for ETL, etc.

Syntax:

```sql
CREATE [EXTERNAL] RESOURCE "resource_name"
PROPERTIES ("key"="value", ...);
```

Notes:

- The type of the resource needs to be specified in PROPERTIES with "type" = "[spark|odbc_catalog|s3]"; currently spark, odbc_catalog, and s3 are supported.
- PROPERTIES differs depending on the resource type; see the examples for details.

### Example

1. Create a Spark resource named spark0 in yarn cluster mode.

```sql
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.jars" = "xxx.jar,yyy.jar",
    "spark.files" = "/tmp/aaa,/tmp/bbb",
    "spark.executor.memory" = "1g",
    "spark.yarn.queue" = "queue0",
    "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
    "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
    "working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
    "broker" = "broker0",
    "broker.username" = "user0",
    "broker.password" = "password0"
);
```

The Spark-related parameters are as follows:
- spark.master: Required; currently yarn and spark://host:port are supported.
- spark.submit.deployMode: The deployment mode of the Spark program. Required; supports both cluster and client.
- spark.hadoop.yarn.resourcemanager.address: Required when master is yarn.
- spark.hadoop.fs.defaultFS: Required when master is yarn.
- Other parameters are optional; refer to [here](http://spark.apache.org/docs/latest/configuration.html)

working_dir and broker need to be specified when Spark is used for ETL, as described below (a usage sketch follows this list):

- working_dir: The directory used by the ETL. Required when Spark is used as an ETL resource. For example: hdfs://host:port/tmp/doris.
- broker: The broker name. Required when Spark is used as an ETL resource. It needs to be configured in advance using the `ALTER SYSTEM ADD BROKER` command.
- broker.property_key: The authentication information that the broker needs when reading the intermediate files generated by the ETL.
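A resource configured this way is referenced later by Spark Load jobs; a minimal sketch of that reference (the label, table, and file path are placeholders; see the Spark Load documentation for the full statement):

```sql
-- Sketch: a Spark Load job that runs on the spark0 resource defined above.
LOAD LABEL example_db.label1
(
    DATA INFILE("hdfs://127.0.0.1:10000/input/file.csv")
    INTO TABLE my_table
)
WITH RESOURCE 'spark0'
(
    "spark.executor.memory" = "2g"  -- per-job override of a resource property
)
PROPERTIES ("timeout" = "3600");
```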
2. Create an ODBC resource

```sql
CREATE EXTERNAL RESOURCE `oracle_odbc`
PROPERTIES (
    "type" = "odbc_catalog",
    "host" = "192.168.0.1",
    "port" = "8086",
    "user" = "test",
    "password" = "test",
    "database" = "test",
    "odbc_type" = "oracle",
    "driver" = "Oracle 19 ODBC driver"
);
```

The ODBC-related parameters are as follows:
- host: the IP address of the external database
- driver: the driver name of the ODBC external table, which must be the same as the driver name in be/conf/odbcinst.ini
- odbc_type: the type of the external database; currently oracle, mysql, and postgresql are supported
- user: the username of the external database
- password: the password of the corresponding user
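Once created, such a resource can be referenced from an external table, mirroring the Resource-based mysql example in CREATE-EXTERNAL-TABLE; a minimal sketch (the table and column names, and the `ENGINE=odbc` engine choice, are assumptions for illustration):

```sql
-- Sketch: an ODBC external table backed by the oracle_odbc resource above.
CREATE EXTERNAL TABLE example_db.table_oracle
(
    k1 INT,
    k2 VARCHAR(64)
)
ENGINE=odbc
PROPERTIES
(
    "odbc_catalog_resource" = "oracle_odbc",
    "database" = "test",
    "table" = "oracle_table_test"
);
```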
3. Create an S3 resource

```sql
CREATE RESOURCE "remote_s3"
PROPERTIES
(
    "type" = "s3",
    "s3_endpoint" = "http://bj.s3.com",
    "s3_region" = "bj",
    "s3_root_path" = "/path/to/root",
    "s3_access_key" = "bbb",
    "s3_secret_key" = "aaaa",
    "s3_max_connections" = "50",
    "s3_request_timeout_ms" = "3000",
    "s3_connection_timeout_ms" = "1000"
);
```

The S3-related parameters are as follows:
- Required parameters
    - s3_endpoint: s3 endpoint
    - s3_region: s3 region
    - s3_root_path: s3 root directory
    - s3_access_key: s3 access key
    - s3_secret_key: s3 secret key
- Optional parameters
    - s3_max_connections: the maximum number of s3 connections; the default is 50
    - s3_request_timeout_ms: the s3 request timeout, in milliseconds; the default is 3000
    - s3_connection_timeout_ms: the s3 connection timeout, in milliseconds; the default is 1000

### Keywords

    CREATE, RESOURCE

### Best Practice
@ -26,10 +26,71 @@ under the License.

## CREATE-TABLE-LIKE

### Name

CREATE TABLE LIKE

### Description

This statement is used to create an empty table with exactly the same table structure as another table, and can optionally replicate some rollups.

Syntax:

```sql
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [database.]table_name LIKE [database.]table_name [WITH ROLLUP (r1,r2,r3,...)]
```

Notes:

- The copied table structure includes Column Definition, Partitions, Table Properties, etc.
- The user needs `SELECT` permission on the original table being copied.
- Copying external tables such as MySQL is supported.
- Copying the rollups of an OLAP table is supported.

### Example

1. Under the test1 database, create an empty table named table2 with the same table structure as table1

```sql
CREATE TABLE test1.table2 LIKE test1.table1
```

2. Under the test2 database, create an empty table named table2 with the same table structure as test1.table1

```sql
CREATE TABLE test2.table2 LIKE test1.table1
```

3. Under the test1 database, create an empty table named table2 with the same table structure as table1, and also copy the r1 and r2 rollups of table1

```sql
CREATE TABLE test1.table2 LIKE test1.table1 WITH ROLLUP (r1,r2)
```

4. Under the test1 database, create an empty table named table2 with the same table structure as table1, and also copy all the rollups of table1

```sql
CREATE TABLE test1.table2 LIKE test1.table1 WITH ROLLUP
```

5. Under the test2 database, create an empty table named table2 with the same table structure as test1.table1, and also copy the r1 and r2 rollups of table1

```sql
CREATE TABLE test2.table2 LIKE test1.table1 WITH ROLLUP (r1,r2)
```

6. Under the test2 database, create an empty table named table2 with the same table structure as test1.table1, and also copy all the rollups of table1

```sql
CREATE TABLE test2.table2 LIKE test1.table1 WITH ROLLUP
```

7. Under the test1 database, create an empty table named table2 with the same table structure as the MySQL external table table1

```sql
CREATE TABLE test1.table2 LIKE test1.table1
```

### Keywords

    CREATE, TABLE, LIKE
@ -28,7 +28,7 @@ under the License.

### Description

This command is used to create a table. This document mainly describes the syntax for creating Doris self-maintained tables. For external table syntax, please refer to the [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.html) document.

```sql
CREATE TABLE [IF NOT EXISTS] [database.]table
@ -149,7 +149,7 @@ distribution_info

* `engine_type`

    Table engine type. All types in this document are OLAP. For other external table engine types, see the [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.html) document. Example:

    `ENGINE=olap`

@ -298,12 +298,12 @@ distribution_info

* `dynamic_partition.reserved_history_periods`: Used to specify the range of reserved history periods.

* Data Sort Info

    The relevant parameters of data sort info are as follows (a usage sketch follows this list):

    * `data_sort.sort_type`: the method of data sorting; options are z-order/lexical, and the default is lexical
    * `data_sort.col_num`: the number of leading columns to sort by; col_num must be less than the total key count
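A minimal sketch of these properties in a CREATE TABLE statement (the table and column names are placeholders, and the `"zorder"` value spelling is an assumption; check your version's documentation):

```sql
-- Sketch: a duplicate-key table sorted by z-order on the first two key columns.
CREATE TABLE example_db.table_zorder
(
    k1 INT,
    k2 INT,
    v1 VARCHAR(32)
)
DUPLICATE KEY(k1, k2)
DISTRIBUTED BY HASH(k1) BUCKETS 8
PROPERTIES (
    "data_sort.sort_type" = "zorder",  -- assumed spelling of the z-order option
    "data_sort.col_num" = "2"          -- sort by the first 2 key columns
);
```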
### Example

1. Create a detailed model table

@ -499,34 +499,33 @@ distribution_info

```sql
CREATE TABLE example_db.table_hash
(
    k1 TINYINT,
    k2 DECIMAL(10, 2) DEFAULT "10.5"
)
DISTRIBUTED BY HASH(k1) BUCKETS 32
PROPERTIES (
    "replication_allocation"="tag.location.group_a:1, tag.location.group_b:2"
);

CREATE TABLE example_db.dynamic_partition
(
    k1 DATE,
    k2 INT,
    k3 SMALLINT,
    v1 VARCHAR(2048),
    v2 DATETIME DEFAULT "2014-02-04 15:36:00"
)
PARTITION BY RANGE (k1) ()
DISTRIBUTED BY HASH(k2) BUCKETS 32
PROPERTIES(
    "dynamic_partition.time_unit" = "DAY",
    "dynamic_partition.start" = "-3",
    "dynamic_partition.end" = "3",
    "dynamic_partition.prefix" = "p",
    "dynamic_partition.buckets" = "32",
    "dynamic_partition.replication_allocation" = "tag.location.group_a:3"
);
```
### Keywords

    CREATE, TABLE

@ -535,7 +534,7 @@ distribution_info

#### Partitioning and bucketing

A table must specify the bucket columns, but it does not need to specify partitions. For a detailed introduction to partitioning and bucketing, please refer to the [Data Division](../../../../data-table/data-partition.html) document.

Tables in Doris can be divided into partitioned tables and non-partitioned tables. This attribute is determined when the table is created and cannot be changed afterwards. That is, for partitioned tables, you can add or delete partitions later, while for non-partitioned tables, you can no longer perform operations such as adding partitions.

@ -545,7 +544,7 @@ Therefore, it is recommended to confirm the usage method to build the table reasonably.

#### Dynamic Partition

The dynamic partition function is mainly used to help users automatically manage partitions. By setting certain rules, the Doris system regularly adds new partitions or deletes historical partitions. Please refer to the [Dynamic Partition](../../../../advanced/partition/dynamic-partition.html) document for more help.

#### Materialized View

@ -555,7 +554,7 @@ If the materialized view is created when the table is created, all subsequent data

If you add a materialized view later and the table already contains data, the creation time of the materialized view depends on the current amount of data.

For an introduction to materialized views, please refer to the [materialized views](../../../../advanced/materialized-view.html) document.

#### Index
@ -26,13 +26,57 @@ under the License.

## CREATE-VIEW

### Name

CREATE VIEW

### Description

This statement is used to create a logical view.

Syntax:

```sql
CREATE VIEW [IF NOT EXISTS]
[db_name.]view_name
(column1[ COMMENT "col comment"][, column2, ...])
AS query_stmt
```

Notes:

- A view is a logical view and has no physical storage. All queries on the view are equivalent to the subquery corresponding to the view.
- query_stmt is any supported SQL statement.

### Example

1. Create the view example_view on example_db

```sql
CREATE VIEW example_db.example_view (k1, k2, k3, v1)
AS
SELECT c1 as k1, k2, k3, SUM(v1) FROM example_table
WHERE k1 = 20160112 GROUP BY k1,k2,k3;
```

2. Create a view with a comment

```sql
CREATE VIEW example_db.example_view
(
    k1 COMMENT "first key",
    k2 COMMENT "second key",
    k3 COMMENT "third key",
    v1 COMMENT "first value"
)
COMMENT "my first view"
AS
SELECT c1 as k1, k2, k3, SUM(v1) FROM example_table
WHERE k1 = 20160112 GROUP BY k1,k2,k3;
```

### Keywords

    CREATE, VIEW

### Best Practice
@ -26,13 +26,34 @@ under the License.

## DROP-DATABASE

### Name

DROP DATABASE

### Description

This statement is used to delete a database.

Syntax:

```sql
DROP DATABASE [IF EXISTS] db_name [FORCE];
```

Notes:

- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details. A sketch of the difference follows this list.
- If you execute DROP DATABASE FORCE, the system will not check the database for unfinished transactions; the database will be deleted directly and cannot be recovered. This operation is generally not recommended.
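A minimal sketch contrasting the two forms (db_test is a placeholder; the RECOVER syntax is described in the RECOVER statement documentation):

```sql
-- A normal drop can be undone with RECOVER (within the retention window).
DROP DATABASE db_test;
RECOVER DATABASE db_test;

-- A force drop skips the unfinished-transaction check and cannot be recovered.
DROP DATABASE IF EXISTS db_test FORCE;
```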
### Example

1. Delete the database db_test

```sql
DROP DATABASE db_test;
```

### Keywords

    DROP, DATABASE

### Best Practice
@ -26,13 +26,36 @@ under the License.

## DROP-ENCRYPT-KEY

### Name

DROP ENCRYPTKEY

### Description

This statement deletes a custom key. The key name must match exactly for the key to be deleted.

Syntax:

```sql
DROP ENCRYPTKEY key_name
```

Parameter description:

- `key_name`: The name of the key to delete; it may be qualified with a database name. For example: `db1.my_key`.

Executing this command requires the user to have `ADMIN` privileges.

### Example

1. Delete a key

```sql
DROP ENCRYPTKEY my_key;
```

### Keywords

    DROP, ENCRYPT, KEY

### Best Practice
@ -26,13 +26,38 @@ under the License.

## DROP-FILE

### Name

DROP FILE

### Description

This statement is used to delete an uploaded file.

Syntax:

```sql
DROP FILE "file_name" [FROM database]
[properties]
```

Notes:

- file_name: file name.
- database: the db the file belongs to; if not specified, the db of the current session is used.
- properties supports the following parameters:
  - `catalog`: Required. The category the file belongs to.

### Example

1. Delete the file ca.pem

```sql
DROP FILE "ca.pem" properties("catalog" = "kafka");
```

### Keywords

    DROP, FILE

### Best Practice
@ -26,13 +26,36 @@ under the License.

## DROP-FUNCTION

### Name

DROP FUNCTION

### Description

This statement deletes a custom function. The function's name and parameter types must match exactly for it to be deleted.

Syntax:

```sql
DROP FUNCTION function_name
    (arg_type [, ...])
```

Parameter description:

- `function_name`: the name of the function to delete
- `arg_type`: the argument type list of the function to delete

### Example

1. Delete a function

```sql
DROP FUNCTION my_add(INT, INT)
```

### Keywords

    DROP, FUNCTION

### Best Practice
@ -26,13 +26,29 @@ under the License.

## DROP-INDEX

### Name

DROP INDEX

### Description

This statement is used to delete the index of the specified name from a table. Currently only bitmap indexes are supported.

Syntax:

```sql
DROP INDEX [IF EXISTS] index_name ON [db_name.]table_name;
```

### Example

1. Delete the index index_name from table1

```sql
DROP INDEX IF EXISTS index_name ON table1;
```

### Keywords

    DROP, INDEX

### Best Practice
@ -26,13 +26,94 @@ under the License.

## DROP-MATERIALIZED-VIEW

### Name

DROP MATERIALIZED VIEW

### Description

This statement is used to drop a materialized view. This is a synchronous operation.

Syntax:

```sql
DROP MATERIALIZED VIEW [IF EXISTS] mv_name ON table_name;
```

1. IF EXISTS:
   Do not throw an error if the materialized view does not exist. If this keyword is not declared, an error is reported when the materialized view does not exist.

2. mv_name:
   The name of the materialized view to delete. Required.

3. table_name:
   The name of the table to which the materialized view to be deleted belongs. Required.

### Example

The table structure is

```sql
mysql> desc all_type_table all;
+----------------+-------+----------+------+-------+---------+-------+
| IndexName      | Field | Type     | Null | Key   | Default | Extra |
+----------------+-------+----------+------+-------+---------+-------+
| all_type_table | k1    | TINYINT  | Yes  | true  | N/A     |       |
|                | k2    | SMALLINT | Yes  | false | N/A     | NONE  |
|                | k3    | INT      | Yes  | false | N/A     | NONE  |
|                | k4    | BIGINT   | Yes  | false | N/A     | NONE  |
|                | k5    | LARGEINT | Yes  | false | N/A     | NONE  |
|                | k6    | FLOAT    | Yes  | false | N/A     | NONE  |
|                | k7    | DOUBLE   | Yes  | false | N/A     | NONE  |
|                |       |          |      |       |         |       |
| k1_sumk2       | k1    | TINYINT  | Yes  | true  | N/A     |       |
|                | k2    | SMALLINT | Yes  | false | N/A     | SUM   |
+----------------+-------+----------+------+-------+---------+-------+
```

1. Drop the materialized view named k1_sumk2 of the table all_type_table

```sql
drop materialized view k1_sumk2 on all_type_table;
```

The table structure after the materialized view is deleted:

```text
+----------------+-------+----------+------+-------+---------+-------+
| IndexName      | Field | Type     | Null | Key   | Default | Extra |
+----------------+-------+----------+------+-------+---------+-------+
| all_type_table | k1    | TINYINT  | Yes  | true  | N/A     |       |
|                | k2    | SMALLINT | Yes  | false | N/A     | NONE  |
|                | k3    | INT      | Yes  | false | N/A     | NONE  |
|                | k4    | BIGINT   | Yes  | false | N/A     | NONE  |
|                | k5    | LARGEINT | Yes  | false | N/A     | NONE  |
|                | k6    | FLOAT    | Yes  | false | N/A     | NONE  |
|                | k7    | DOUBLE   | Yes  | false | N/A     | NONE  |
+----------------+-------+----------+------+-------+---------+-------+
```

2. Drop a non-existent materialized view from the table all_type_table

```sql
drop materialized view k1_k2 on all_type_table;
ERROR 1064 (HY000): errCode = 2, detailMessage = Materialized view [k1_k2] does not exist in table [all_type_table]
```

The delete request reports an error directly.

3. Delete the materialized view k1_k2 from the table all_type_table, without reporting an error if it does not exist.

```sql
drop materialized view if exists k1_k2 on all_type_table;
Query OK, 0 rows affected (0.00 sec)
```

If it exists, it is deleted; if it does not exist, no error is reported.

### Keywords

    DROP, MATERIALIZED, VIEW

### Best Practice
@ -26,13 +26,31 @@ under the License.

## DROP-RESOURCE

### Name

DROP RESOURCE

### Description

This statement is used to delete an existing resource. Only the root or admin user can delete resources.

Syntax:

```sql
DROP RESOURCE 'resource_name'
```

Note: ODBC/S3 resources in use cannot be deleted.

### Example

1. Delete the Spark resource named spark0:

```sql
DROP RESOURCE 'spark0';
```

### Keywords

    DROP, RESOURCE

### Best Practice
@ -26,13 +26,41 @@ under the License.

## DROP-TABLE

### Name

DROP TABLE

### Description

This statement is used to drop a table.

Syntax:

```sql
DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
```

Notes:

- For a period of time after executing DROP TABLE, the dropped table can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details. A sketch of the difference follows this list.
- If you execute DROP TABLE FORCE, the system will not check the table for unfinished transactions; the table will be deleted directly and cannot be recovered. This operation is generally not recommended.
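A minimal sketch contrasting the two forms (the names are placeholders; the RECOVER syntax is described in the RECOVER statement documentation):

```sql
-- A normal drop can be undone with RECOVER (within the retention window).
DROP TABLE example_db.my_table;
RECOVER TABLE example_db.my_table;

-- A force drop skips the unfinished-transaction check and cannot be recovered.
DROP TABLE IF EXISTS example_db.my_table FORCE;
```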
### Example

1. Delete a table

```sql
DROP TABLE my_table;
```

2. If it exists, delete the table in the specified database

```sql
DROP TABLE IF EXISTS example_db.my_table;
```

### Keywords

    DROP, TABLE

### Best Practice
@ -26,13 +26,43 @@ under the License.

## TRUNCATE-TABLE

### Name

TRUNCATE TABLE

### Description

This statement is used to clear the data of the specified table and partitions.

Syntax:

```sql
TRUNCATE TABLE [db.]tbl[ PARTITION(p1, p2, ...)];
```

Notes:

- The statement clears the data but keeps the table or partition.
- Unlike DELETE, this statement can only clear the specified table or partitions as a whole and cannot add filter conditions.
- Unlike DELETE, clearing data this way does not affect query performance.
- The data deleted by this operation cannot be recovered.
- When using this command, the table status must be NORMAL; that is, operations such as SCHEMA CHANGE must not be in progress. A sketch of this check follows this list.
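One way to confirm that no schema change is running before truncating is to check the pending ALTER jobs first; a minimal sketch (example_db and tbl are placeholders; see the SHOW ALTER documentation for the full syntax):

```sql
-- Sketch: verify no running schema change, then truncate.
SHOW ALTER TABLE COLUMN FROM example_db;
TRUNCATE TABLE example_db.tbl;
```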
### Example

1. Clear the table tbl under example_db

```sql
TRUNCATE TABLE example_db.tbl;
```

2. Clear the p1 and p2 partitions of the table tbl

```sql
TRUNCATE TABLE tbl PARTITION(p1, p2);
```

### Keywords

    TRUNCATE, TABLE

### Best Practice
@ -36,14 +36,25 @@ RECOVER

Syntax:

1. restore database

   ```sql
   RECOVER DATABASE db_name;
   ```

2. restore table

   ```sql
   RECOVER TABLE [db_name.]table_name;
   ```

3. restore partition

   ```sql
   RECOVER PARTITION partition_name FROM [db_name.]table_name;
   ```
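
For instance, a minimal sketch that restores a recently dropped table (assuming example_db.my_table is still within the recovery window):

```sql
RECOVER TABLE example_db.my_table;
```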

Notes:

@ -26,13 +26,63 @@ under the License.

## CREATE-DATABASE

### Name

CREATE DATABASE

### Description

This statement is used to create a new database.

Syntax:

```sql
CREATE DATABASE [IF NOT EXISTS] db_name
[PROPERTIES ("key"="value", ...)];
```

`PROPERTIES`: additional information about the database; can be omitted.

- To create an Iceberg database, the following information must be provided in properties:

  ```sql
  PROPERTIES (
      "iceberg.database" = "iceberg_db_name",
      "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
      "iceberg.catalog.type" = "HIVE_CATALOG"
  )
  ```

Parameter description:

- `iceberg.database`: the database name in Iceberg;
- `iceberg.hive.metastore.uris`: the hive metastore service address;
- `iceberg.catalog.type`: defaults to `HIVE_CATALOG`; currently only `HIVE_CATALOG` is supported, and more Iceberg catalog types will be supported in the future.

### Example

1. Create a new database db_test

   ```sql
   CREATE DATABASE db_test;
   ```

2. Create a new Iceberg database iceberg_test

   ```sql
   CREATE DATABASE `iceberg_test`
   PROPERTIES (
       "iceberg.database" = "doris",
       "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
       "iceberg.catalog.type" = "HIVE_CATALOG"
   );
   ```

### Keywords

CREATE, DATABASE

### Best Practice

@ -24,15 +24,63 @@ specific language governing permissions and limitations

under the License.
-->

## CREATE-ENCRYPTKEY

### Name

CREATE ENCRYPTKEY

### Description

This statement creates a custom key. Executing this command requires the user to have `ADMIN` privileges.

Syntax:

```sql
CREATE ENCRYPTKEY key_name AS "key_string"
```

Notes:

`key_name`: the name of the key to create; it may be qualified with a database name, e.g. `db1.my_key`.

`key_string`: the string from which to create the key.

If `key_name` contains a database name, the custom key is created in that database; otherwise it is created in the database of the current session. The new key's name cannot be the same as an existing key in the corresponding database, or the creation will fail.

### Example

1. Create a custom key

   ```sql
   CREATE ENCRYPTKEY my_key AS "ABCD123456789";
   ```

2. Use a custom key

   To use a custom key, prefix the key name with the keyword `KEY`/`key`, separated from `key_name` by a space.

   ```sql
   mysql> SELECT HEX(AES_ENCRYPT("Doris is Great", KEY my_key));
   +------------------------------------------------+
   | hex(aes_encrypt('Doris is Great', key my_key)) |
   +------------------------------------------------+
   | D26DB38579D6A343350EDDC6F2AD47C6               |
   +------------------------------------------------+
   1 row in set (0.02 sec)

   mysql> SELECT AES_DECRYPT(UNHEX('D26DB38579D6A343350EDDC6F2AD47C6'), KEY my_key);
   +--------------------------------------------------------------------+
   | aes_decrypt(unhex('D26DB38579D6A343350EDDC6F2AD47C6'), key my_key) |
   +--------------------------------------------------------------------+
   | Doris is Great                                                     |
   +--------------------------------------------------------------------+
   1 row in set (0.01 sec)
   ```

### Keywords

CREATE, ENCRYPTKEY

### Best Practice

@ -26,12 +26,203 @@ under the License.

## CREATE-EXTERNAL-TABLE

### Name

CREATE EXTERNAL TABLE

### Description

This statement is used to create an external table; see [CREATE TABLE](./CREATE-TABLE.html) for the full syntax.

The ENGINE type identifies which kind of external table is created; MYSQL, BROKER, HIVE, and ICEBERG are currently available.

1. For mysql, the following information must be provided in properties:

   ```sql
   PROPERTIES (
       "host" = "mysql_server_host",
       "port" = "mysql_server_port",
       "user" = "your_user_name",
       "password" = "your_password",
       "database" = "database_name",
       "table" = "table_name"
   )
   ```

   Note:

   - The "table_name" in the "table" entry is the real table name in mysql, while the table_name in the CREATE TABLE statement is the name of this mysql table in Doris; the two can differ.

   - The purpose of creating a mysql table in Doris is to access the mysql database through Doris. Doris itself does not maintain or store any mysql data.

2. For broker, meaning the table is accessed through the specified broker, the following information must be provided in properties:

   ```sql
   PROPERTIES (
       "broker_name" = "broker_name",
       "path" = "file_path1[,file_path2]",
       "column_separator" = "value_separator",
       "line_delimiter" = "value_delimiter"
   )
   ```

   In addition, the properties the broker itself needs must be supplied through BROKER PROPERTIES; for example, HDFS requires:

   ```sql
   BROKER PROPERTIES(
       "username" = "name",
       "password" = "password"
   )
   ```

   What needs to be passed in differs depending on the broker type.

   Note:

   - If "path" contains multiple files, separate them with commas [,]. If a file name contains a comma, replace it with %2c; if a file name contains %, replace it with %25.
   - The file content format currently supports CSV, with GZ, BZ2, LZ4, and LZO (LZOP) compression.

3. For hive, the following information must be provided in properties:

   ```sql
   PROPERTIES (
       "database" = "hive_db_name",
       "table" = "hive_table_name",
       "hive.metastore.uris" = "thrift://127.0.0.1:9083"
   )
   ```

   Here database is the name of the database the hive table belongs to, table is the name of the hive table, and hive.metastore.uris is the hive metastore service address.

4. For iceberg, the following information must be provided in properties:

   ```sql
   PROPERTIES (
       "iceberg.database" = "iceberg_db_name",
       "iceberg.table" = "iceberg_table_name",
       "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
       "iceberg.catalog.type" = "HIVE_CATALOG"
   )
   ```

   Here database is the database name in Iceberg;
   table is the corresponding table name in Iceberg;
   hive.metastore.uris is the hive metastore service address;
   catalog.type defaults to HIVE_CATALOG. Currently only HIVE_CATALOG is supported; more Iceberg catalog types will be supported in the future.

### Example

1. Create a MYSQL external table

   Create the mysql table directly from the external table information

   ```sql
   CREATE EXTERNAL TABLE example_db.table_mysql
   (
       k1 DATE,
       k2 INT,
       k3 SMALLINT,
       k4 VARCHAR(2048),
       k5 DATETIME
   )
   ENGINE=mysql
   PROPERTIES
   (
       "host" = "127.0.0.1",
       "port" = "8239",
       "user" = "mysql_user",
       "password" = "mysql_passwd",
       "database" = "mysql_db_test",
       "table" = "mysql_table_test"
   )
   ```

   Create the mysql table through an External Catalog Resource

   ```sql
   # Create the Resource first
   CREATE EXTERNAL RESOURCE "mysql_resource"
   PROPERTIES
   (
       "type" = "odbc_catalog",
       "user" = "mysql_user",
       "password" = "mysql_passwd",
       "host" = "127.0.0.1",
       "port" = "8239"
   );

   # Then create the mysql external table through the Resource
   CREATE EXTERNAL TABLE example_db.table_mysql
   (
       k1 DATE,
       k2 INT,
       k3 SMALLINT,
       k4 VARCHAR(2048),
       k5 DATETIME
   )
   ENGINE=mysql
   PROPERTIES
   (
       "odbc_catalog_resource" = "mysql_resource",
       "database" = "mysql_db_test",
       "table" = "mysql_table_test"
   )
   ```

2. Create a broker external table whose data files are stored on HDFS, with columns separated by "|" and lines delimited by "\n"

   ```sql
   CREATE EXTERNAL TABLE example_db.table_broker (
       k1 DATE,
       k2 INT,
       k3 SMALLINT,
       k4 VARCHAR(2048),
       k5 DATETIME
   )
   ENGINE=broker
   PROPERTIES (
       "broker_name" = "hdfs",
       "path" = "hdfs://hdfs_host:hdfs_port/data1,hdfs://hdfs_host:hdfs_port/data2,hdfs://hdfs_host:hdfs_port/data3%2c4",
       "column_separator" = "|",
       "line_delimiter" = "\n"
   )
   BROKER PROPERTIES (
       "username" = "hdfs_user",
       "password" = "hdfs_password"
   )
   ```

3. Create a hive external table

   ```sql
   CREATE TABLE example_db.table_hive
   (
       k1 TINYINT,
       k2 VARCHAR(50),
       v INT
   )
   ENGINE=hive
   PROPERTIES
   (
       "database" = "hive_db_name",
       "table" = "hive_table_name",
       "hive.metastore.uris" = "thrift://127.0.0.1:9083"
   );
   ```

4. Create an Iceberg external table

   ```sql
   CREATE TABLE example_db.t_iceberg
   ENGINE=ICEBERG
   PROPERTIES (
       "iceberg.database" = "iceberg_db",
       "iceberg.table" = "iceberg_table",
       "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
       "iceberg.catalog.type" = "HIVE_CATALOG"
   );
   ```

### Keywords

@ -26,13 +26,74 @@ under the License.

## CREATE-FILE

### Name

CREATE FILE

### Description

This statement is used to create and upload a file to the Doris cluster.
This function is usually used to manage files that other commands need, such as certificates, public keys, private keys, and so on.

Only users with `admin` privileges can execute this command.
Each file belongs to a database; any user with access privileges to that database can use the file.

A single file is limited to 1 MB in size.
A Doris cluster can hold at most 100 uploaded files.

Syntax:

```sql
CREATE FILE "file_name" [IN database]
[properties]
```

Notes:

- file_name: custom file name.
- database: the db the file belongs to; if not specified, the db of the current session is used.
- properties supports the following parameters:
  - url: required. Specifies the download path of the file. Currently only unauthenticated http download paths are supported. After the command succeeds, the file is saved in Doris and the url is no longer needed.
  - catalog: required. The category the file belongs to; can be customized. However, some commands look up files in a specific catalog. For example, in routine load, when the data source is kafka, files are looked up under the catalog named kafka.
  - md5: optional. The md5 of the file. If specified, the file is verified after download.

### Example

1. Create the file ca.pem, categorized as kafka

   ```sql
   CREATE FILE "ca.pem"
   PROPERTIES
   (
       "url" = "https://test.bj.bcebos.com/kafka-key/ca.pem",
       "catalog" = "kafka"
   );
   ```

2. Create the file client.key, categorized as my_catalog

   ```sql
   CREATE FILE "client.key"
   IN my_database
   PROPERTIES
   (
       "url" = "https://test.bj.bcebos.com/kafka-key/client.key",
       "catalog" = "my_catalog",
       "md5" = "b5bb901bf10f99205b39a46ac3557dd9"
   );
   ```

### Keywords

CREATE, FILE

### Best Practice

1. Only users with admin privileges can execute this command. Each file belongs to a database; any user with access privileges to that database can use the file.

2. File size and count limits.

   This function is mainly used to manage small files such as certificates. Therefore a single file is limited to 1 MB, and a Doris cluster can hold at most 100 uploaded files.
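
3. Referencing uploaded files.

   As a hedged sketch of how such files are consumed, a routine load job with an SSL-secured Kafka source can reference the certificate created in example 1 by name through a `FILE:` prefix (the job name, target table, broker list, and topic below are placeholder assumptions):

   ```sql
   CREATE ROUTINE LOAD db1.job1 ON tbl1
   FROM KAFKA (
       "kafka_broker_list" = "broker1:9092",
       "kafka_topic" = "my_topic",
       "property.security.protocol" = "ssl",
       "property.ssl.ca.location" = "FILE:ca.pem"
   );
   ```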
@ -26,10 +26,127 @@ under the License.

## CREATE-FUNCTION

### Name

CREATE FUNCTION

### Description

This statement creates a custom function. Executing this command requires the user to have `ADMIN` privileges.

If `function_name` contains a database name, the custom function is created in that database; otherwise it is created in the database of the current session. The name and parameters of the new function cannot be the same as a function that already exists in the current namespace, or the creation will fail. A function with the same name but different parameters can still be created successfully.

Syntax:

```sql
CREATE [AGGREGATE] [ALIAS] FUNCTION function_name
(arg_type [, ...])
[RETURNS ret_type]
[INTERMEDIATE inter_type]
[WITH PARAMETER(param [,...]) AS origin_function]
[PROPERTIES ("key" = "value" [, ...]) ]
```

Parameter description:

- `AGGREGATE`: if present, the function created is an aggregate function.

- `ALIAS`: if present, the function created is an alias function.

If neither of the above is present, the function created is a scalar function.

- `function_name`: the name of the function to create; it may be qualified with a database name, e.g. `db1.my_func`.

- `arg_type`: the parameter types of the function, the same as the types defined when creating a table. Variadic parameters can be expressed with `, ...`; in the variadic case, the types of the variadic parameters are the same as the type of the last non-variadic parameter.

  **Note**: `ALIAS FUNCTION` does not support variadic parameters and must have at least one parameter.

- `ret_type`: required when creating a new function. It can be omitted when aliasing an existing function.

- `inter_type`: the data type of the intermediate stage of an aggregate function.

- `param`: the parameters of an alias function; at least one is required.

- `origin_function`: the original function the alias function corresponds to.

- `properties`: sets properties of aggregate and scalar functions. The settable properties include:

  - `object_file`: the URL of the custom function's dynamic library. Currently only the HTTP/HTTPS protocol is supported, and the path must remain valid for the entire lifetime of the function. Required.

  - `symbol`: the function signature of the scalar function, used to find the function entry point in the dynamic library. Required for scalar functions.

  - `init_fn`: the initialization function signature of the aggregate function. Required for aggregate functions.

  - `update_fn`: the update function signature of the aggregate function. Required for aggregate functions.

  - `merge_fn`: the merge function signature of the aggregate function. Required for aggregate functions.

  - `serialize_fn`: the serialization function signature of the aggregate function. Optional for aggregate functions; if not specified, the default serialization function is used.

  - `finalize_fn`: the signature of the function that produces the aggregate function's final result. Optional for aggregate functions; if not specified, the default finalize function is used.

  - `md5`: the MD5 value of the function's dynamic library, used to verify that the downloaded content is correct. Optional.

  - `prepare_fn`: the signature of the custom function's prepare function, used to find the prepare entry point in the dynamic library. Optional for custom functions.

  - `close_fn`: the signature of the custom function's close function, used to find the close entry point in the dynamic library. Optional for custom functions.

### Example

1. Create a custom scalar function

   ```sql
   CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
       "symbol" = "_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_",
       "object_file" = "http://host:port/libmyadd.so"
   );
   ```

2. Create a custom scalar function with prepare/close functions

   ```sql
   CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
       "symbol" = "_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_",
       "prepare_fn" = "_ZN9doris_udf14AddUdf_prepareEPNS_15FunctionContextENS0_18FunctionStateScopeE",
       "close_fn" = "_ZN9doris_udf12AddUdf_closeEPNS_15FunctionContextENS0_18FunctionStateScopeE",
       "object_file" = "http://host:port/libmyadd.so"
   );
   ```

3. Create a custom aggregate function

   ```sql
   CREATE AGGREGATE FUNCTION my_count (BIGINT) RETURNS BIGINT PROPERTIES (
       "init_fn"="_ZN9doris_udf9CountInitEPNS_15FunctionContextEPNS_9BigIntValE",
       "update_fn"="_ZN9doris_udf11CountUpdateEPNS_15FunctionContextERKNS_6IntValEPNS_9BigIntValE",
       "merge_fn"="_ZN9doris_udf10CountMergeEPNS_15FunctionContextERKNS_9BigIntValEPS2_",
       "finalize_fn"="_ZN9doris_udf13CountFinalizeEPNS_15FunctionContextERKNS_9BigIntValE",
       "object_file"="http://host:port/libudasample.so"
   );
   ```

4. Create a scalar function with variadic parameters

   ```sql
   CREATE FUNCTION strconcat(varchar, ...) RETURNS varchar properties (
       "symbol" = "_ZN9doris_udf6StrConcatUdfEPNS_15FunctionContextERKNS_6IntValES4_",
       "object_file" = "http://host:port/libmyStrConcat.so"
   );
   ```

5. Create a custom alias function

   ```sql
   CREATE ALIAS FUNCTION id_masking(INT) WITH PARAMETER(id) AS CONCAT(LEFT(id, 3), '****', RIGHT(id, 4));
   ```
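
   Once created, the alias function can be called like a built-in function (a minimal sketch; the result follows from the CONCAT expression above):

   ```sql
   SELECT id_masking(1234567);
   -- expected to return '123****4567'
   ```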

### Keywords

CREATE, FUNCTION

@ -26,13 +26,36 @@ under the License.

## CREATE-INDEX

### Name

CREATE INDEX

### Description

This statement is used to create an index.

Syntax:

```sql
CREATE INDEX [IF NOT EXISTS] index_name ON table_name (column [, ...]) [USING BITMAP] [COMMENT 'balabala'];
```

Note:

- Currently only bitmap indexes are supported.
- A BITMAP index can only be created on a single column.

### Example

1. Create a bitmap index on siteid of table1

   ```sql
   CREATE INDEX IF NOT EXISTS index_name ON table1 (siteid) USING BITMAP COMMENT 'balabala';
   ```

### Keywords

CREATE, INDEX

### Best Practice

@ -26,10 +26,199 @@ under the License.

## CREATE-MATERIALIZED-VIEW

### Name

CREATE MATERIALIZED VIEW

### Description

This statement is used to create a materialized view.

This operation is asynchronous; after the submission succeeds, check the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.html). Once FINISHED is shown, the materialized view's schema can be viewed with the `desc [table_name] all` command.

Syntax:

```sql
CREATE MATERIALIZED VIEW [MV name] as [query]
[PROPERTIES ("key" = "value")]
```

Notes:

- `MV name`: the name of the materialized view. Required. Materialized view names within the same table must be unique.

- `query`: the query statement used to build the materialized view; the result of the query is the data of the materialized view. The currently supported query format is:

  ```sql
  SELECT select_expr[, select_expr ...]
  FROM [Base view name]
  GROUP BY column_name[, column_name ...]
  ORDER BY column_name[, column_name ...]
  ```

  The syntax is the same as query syntax.

  - `select_expr`: all columns in the materialized view's schema.
    - Only single columns without expression computation and aggregate columns are supported.
    - Only the aggregate functions SUM, MIN, and MAX are currently supported, and an aggregate function's argument can only be a single column without expression computation.
    - At least one single column must be included.
    - Every column involved may appear only once.
  - `base view name`: the name of the materialized view's base table. Required.
    - Must be a single table, not a subquery.
  - `group by`: the grouping columns of the materialized view. Optional.
    - If not specified, the data is not grouped.
  - `order by`: the sort columns of the materialized view. Optional.
    - The declaration order of the sort columns must be the same as the column declaration order in select_expr.
    - If order by is not declared, sort columns are filled in automatically by rule. If the materialized view is of aggregate type, all grouping columns are automatically filled in as sort columns. If the materialized view is of non-aggregate type, the first 36 bytes of columns are automatically filled in as sort columns.
    - If the number of automatically filled sort columns is less than 3, the first three columns are used as sort columns. If the query contains grouping columns, the sort columns must be the same as the grouping columns.

- properties

  Declares some configuration of the materialized view. Optional. See the sketch after this list.

  ```text
  PROPERTIES ("key" = "value", "key" = "value" ...)
  ```

  The following configurations can be declared here:

  ```text
  short_key: the number of sort columns.
  timeout: the timeout for building the materialized view.
  ```
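
  For instance, a minimal sketch that passes both properties when creating a materialized view (the values are illustrative assumptions, and timeout is taken here to be in seconds):

  ```sql
  create materialized view k1_k2_props as
  select k1, k2 from duplicate_table
  PROPERTIES ("short_key" = "2", "timeout" = "3600");
  ```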

### Example

The Base table schema is

```sql
mysql> desc duplicate_table;
+-------+--------+------+------+---------+-------+
| Field | Type   | Null | Key  | Default | Extra |
+-------+--------+------+------+---------+-------+
| k1    | INT    | Yes  | true | N/A     |       |
| k2    | INT    | Yes  | true | N/A     |       |
| k3    | BIGINT | Yes  | true | N/A     |       |
| k4    | BIGINT | Yes  | true | N/A     |       |
+-------+--------+------+------+---------+-------+
```

1. Create a materialized view containing only the columns (k1, k2) of the base table

   ```sql
   create materialized view k1_k2 as
   select k1, k2 from duplicate_table;
   ```

   The materialized view's schema is shown below; it contains only the two columns k1 and k2, without any aggregation.

   ```text
   +-----------------+-------+--------+------+------+---------+-------+
   | IndexName       | Field | Type   | Null | Key  | Default | Extra |
   +-----------------+-------+--------+------+------+---------+-------+
   | k1_k2           | k1    | INT    | Yes  | true | N/A     |       |
   |                 | k2    | INT    | Yes  | true | N/A     |       |
   +-----------------+-------+--------+------+------+---------+-------+
   ```

2. Create a materialized view with k2 as the sort column

   ```sql
   create materialized view k2_order as
   select k2, k1 from duplicate_table order by k2;
   ```

   The materialized view's schema is shown below; it contains only the two columns k2 and k1, where k2 is the sort column, without any aggregation.

   ```text
   +-----------------+-------+--------+------+-------+---------+-------+
   | IndexName       | Field | Type   | Null | Key   | Default | Extra |
   +-----------------+-------+--------+------+-------+---------+-------+
   | k2_order        | k2    | INT    | Yes  | true  | N/A     |       |
   |                 | k1    | INT    | Yes  | false | N/A     | NONE  |
   +-----------------+-------+--------+------+-------+---------+-------+
   ```

3. Create a materialized view grouped by k1 and k2, with the k3 column aggregated by SUM

   ```sql
   create materialized view k1_k2_sumk3 as
   select k1, k2, sum(k3) from duplicate_table group by k1, k2;
   ```

   The materialized view's schema is shown below; it contains the columns k1, k2, and sum(k3), where k1 and k2 are the grouping columns and sum(k3) is the sum of the k3 column grouped by k1 and k2.

   Since the materialized view declares no sort columns and carries aggregated data, the system fills in the grouping columns k1 and k2 as sort columns by default.

   ```text
   +-----------------+-------+--------+------+-------+---------+-------+
   | IndexName       | Field | Type   | Null | Key   | Default | Extra |
   +-----------------+-------+--------+------+-------+---------+-------+
   | k1_k2_sumk3     | k1    | INT    | Yes  | true  | N/A     |       |
   |                 | k2    | INT    | Yes  | true  | N/A     |       |
   |                 | k3    | BIGINT | Yes  | false | N/A     | SUM   |
   +-----------------+-------+--------+------+-------+---------+-------+
   ```

4. Create a materialized view that removes duplicate rows

   ```sql
   create materialized view deduplicate as
   select k1, k2, k3, k4 from duplicate_table group by k1, k2, k3, k4;
   ```

   The materialized view's schema is shown below; it contains the columns k1, k2, k3, and k4, with no duplicate rows.

   ```text
   +-----------------+-------+--------+------+-------+---------+-------+
   | IndexName       | Field | Type   | Null | Key   | Default | Extra |
   +-----------------+-------+--------+------+-------+---------+-------+
   | deduplicate     | k1    | INT    | Yes  | true  | N/A     |       |
   |                 | k2    | INT    | Yes  | true  | N/A     |       |
   |                 | k3    | BIGINT | Yes  | true  | N/A     |       |
   |                 | k4    | BIGINT | Yes  | true  | N/A     |       |
   +-----------------+-------+--------+------+-------+---------+-------+
   ```

5. Create a non-aggregate materialized view that declares no sort columns

   The schema of all_type_table is as follows

   ```text
   +-------+--------------+------+-------+---------+-------+
   | Field | Type         | Null | Key   | Default | Extra |
   +-------+--------------+------+-------+---------+-------+
   | k1    | TINYINT      | Yes  | true  | N/A     |       |
   | k2    | SMALLINT     | Yes  | true  | N/A     |       |
   | k3    | INT          | Yes  | true  | N/A     |       |
   | k4    | BIGINT       | Yes  | true  | N/A     |       |
   | k5    | DECIMAL(9,0) | Yes  | true  | N/A     |       |
   | k6    | DOUBLE       | Yes  | false | N/A     | NONE  |
   | k7    | VARCHAR(20)  | Yes  | false | N/A     | NONE  |
   +-------+--------------+------+-------+---------+-------+
   ```

   The materialized view contains the columns k3, k4, k5, k6, and k7 and declares no sort columns; the creation statement is as follows:

   ```sql
   create materialized view mv_1 as
   select k3, k4, k5, k6, k7 from all_type_table;
   ```

   The sort columns the system fills in by default are the three columns k3, k4, and k5. The byte sizes of these three column types sum to 4 (INT) + 8 (BIGINT) + 16 (DECIMAL) = 28 < 36, so these three columns are filled in as the sort columns. In the materialized view's schema below, the Key field of the k3, k4, and k5 columns is true, that is, they are sort columns; the Key field of the k6 and k7 columns is false, that is, they are non-sort columns.

   ```text
   +----------------+-------+--------------+------+-------+---------+-------+
   | IndexName      | Field | Type         | Null | Key   | Default | Extra |
   +----------------+-------+--------------+------+-------+---------+-------+
   | mv_1           | k3    | INT          | Yes  | true  | N/A     |       |
   |                | k4    | BIGINT       | Yes  | true  | N/A     |       |
   |                | k5    | DECIMAL(9,0) | Yes  | true  | N/A     |       |
   |                | k6    | DOUBLE       | Yes  | false | N/A     | NONE  |
   |                | k7    | VARCHAR(20)  | Yes  | false | N/A     | NONE  |
   +----------------+-------+--------------+------+-------+---------+-------+
   ```

### Keywords

CREATE, MATERIALIZED, VIEW

@ -26,10 +26,119 @@ under the License.

## CREATE-RESOURCE

### Name

CREATE RESOURCE

### Description

This statement is used to create a resource. Only the root or admin user can create resources. Spark, ODBC, and S3 external resources are currently supported.
Other external resources may be added to Doris in the future, such as Spark/GPU for queries, HDFS/S3 for external storage, and MapReduce for ETL.

Syntax:

```sql
CREATE [EXTERNAL] RESOURCE "resource_name"
PROPERTIES ("key"="value", ...);
```

Notes:

- The resource type must be specified in PROPERTIES as "type" = "[spark|odbc_catalog|s3]"; spark, odbc_catalog, and s3 are currently supported.
- PROPERTIES differs depending on the resource type; see the examples for details.

### Example

1. Create a Spark resource named spark0 in yarn cluster mode.

   ```sql
   CREATE EXTERNAL RESOURCE "spark0"
   PROPERTIES
   (
       "type" = "spark",
       "spark.master" = "yarn",
       "spark.submit.deployMode" = "cluster",
       "spark.jars" = "xxx.jar,yyy.jar",
       "spark.files" = "/tmp/aaa,/tmp/bbb",
       "spark.executor.memory" = "1g",
       "spark.yarn.queue" = "queue0",
       "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
       "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
       "working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
       "broker" = "broker0",
       "broker.username" = "user0",
       "broker.password" = "password0"
   );
   ```

   The Spark-related parameters are as follows:

   - spark.master: required; yarn and spark://host:port are currently supported.
   - spark.submit.deployMode: the deploy mode of the Spark program; required; cluster and client are supported.
   - spark.hadoop.yarn.resourcemanager.address: required when master is yarn.
   - spark.hadoop.fs.defaultFS: required when master is yarn.
   - Other parameters are optional; see [here](http://spark.apache.org/docs/latest/configuration.html).

   When Spark is used for ETL, working_dir and broker must be specified, as follows:

   - working_dir: the directory used by the ETL. Required when spark is used as an ETL resource. For example: hdfs://host:port/tmp/doris.
   - broker: the broker name. Required when spark is used as an ETL resource. The broker must be configured in advance with the `ALTER SYSTEM ADD BROKER` command.
   - broker.property_key: authentication information and the like needed when the broker reads the intermediate files generated by the ETL.

2. Create an ODBC resource

   ```sql
   CREATE EXTERNAL RESOURCE `oracle_odbc`
   PROPERTIES (
       "type" = "odbc_catalog",
       "host" = "192.168.0.1",
       "port" = "8086",
       "user" = "test",
       "password" = "test",
       "database" = "test",
       "odbc_type" = "oracle",
       "driver" = "Oracle 19 ODBC driver"
   );
   ```

   The ODBC-related parameters are as follows:

   - host: the IP address of the external database
   - driver: the driver name of the ODBC external table, which must match the driver name in be/conf/odbcinst.ini.
   - odbc_type: the type of the external database; oracle, mysql, and postgresql are currently supported
   - user: the user name of the external database
   - password: the password of the corresponding user
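
   A hypothetical sketch of the matching be/conf/odbcinst.ini entry (the section name must equal the "driver" value above; the library path is an assumption that depends on the driver installation):

   ```text
   [Oracle 19 ODBC driver]
   Description = ODBC driver for Oracle 19
   Driver      = /usr/lib/oracle/19/client64/lib/libsqora.so.19.1
   ```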

3. Create an S3 resource

   ```sql
   CREATE RESOURCE "remote_s3"
   PROPERTIES
   (
       "type" = "s3",
       "s3_endpoint" = "http://bj.s3.com",
       "s3_region" = "bj",
       "s3_root_path" = "/path/to/root",
       "s3_access_key" = "bbb",
       "s3_secret_key" = "aaaa",
       "s3_max_connections" = "50",
       "s3_request_timeout_ms" = "3000",
       "s3_connection_timeout_ms" = "1000"
   );
   ```

   The S3-related parameters are as follows:

   - Required parameters
     - s3_endpoint: s3 endpoint
     - s3_region: s3 region
     - s3_root_path: s3 root directory
     - s3_access_key: s3 access key
     - s3_secret_key: s3 secret key
   - Optional parameters
     - s3_max_connections: the maximum number of s3 connections; defaults to 50
     - s3_request_timeout_ms: the s3 request timeout in milliseconds; defaults to 3000
     - s3_connection_timeout_ms: the s3 connection timeout in milliseconds; defaults to 1000

### Keywords

CREATE, RESOURCE

@ -26,10 +26,71 @@ under the License.

## CREATE-TABLE-LIKE

### Name

CREATE TABLE LIKE

### Description

This statement is used to create an empty table whose structure is exactly the same as another table, and it can optionally copy some rollups as well.

Syntax:

```sql
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [database.]table_name LIKE [database.]table_name [WITH ROLLUP (r1,r2,r3,...)]
```

Notes:

- The copied table structure includes Column Definitions, Partitions, Table Properties, and so on.
- The user needs `SELECT` privileges on the source table being copied.
- Copying external tables such as MySQL is supported.
- Copying the rollups of an OLAP Table is supported.

### Example

1. Under the test1 database, create an empty table named table2 with the same structure as table1

   ```sql
   CREATE TABLE test1.table2 LIKE test1.table1
   ```

2. Under the test2 database, create an empty table named table2 with the same structure as test1.table1

   ```sql
   CREATE TABLE test2.table2 LIKE test1.table1
   ```

3. Under the test1 database, create an empty table named table2 with the same structure as table1, copying the r1 and r2 rollups of table1 as well

   ```sql
   CREATE TABLE test1.table2 LIKE test1.table1 WITH ROLLUP (r1,r2)
   ```

4. Under the test1 database, create an empty table named table2 with the same structure as table1, copying all rollups of table1 as well

   ```sql
   CREATE TABLE test1.table2 LIKE test1.table1 WITH ROLLUP
   ```

5. Under the test2 database, create an empty table named table2 with the same structure as test1.table1, copying the r1 and r2 rollups of table1 as well

   ```sql
   CREATE TABLE test2.table2 LIKE test1.table1 WITH ROLLUP (r1,r2)
   ```

6. Under the test2 database, create an empty table named table2 with the same structure as test1.table1, copying all rollups of table1 as well

   ```sql
   CREATE TABLE test2.table2 LIKE test1.table1 WITH ROLLUP
   ```

7. Under the test1 database, create an empty table named table2 with the same structure as the MySQL external table table1

   ```sql
   CREATE TABLE test1.table2 LIKE test1.table1
   ```

### Keywords

CREATE, TABLE, LIKE

@ -28,7 +28,7 @@ under the License.

### Description

This command is used to create a table. This document mainly describes the syntax for creating tables maintained by Doris itself. For external table syntax, see the [CREATE-EXTERNAL-TABLE] document.
This command is used to create a table. This document mainly describes the syntax for creating tables maintained by Doris itself. For external table syntax, see the [CREATE-EXTERNAL-TABLE](./CREATE-EXTERNAL-TABLE.html) document.

```sql
CREATE TABLE [IF NOT EXISTS] [database.]table
@ -116,15 +116,15 @@ distribution_info

Example:

```
k1 TINYINT,
k2 DECIMAL(10,2) DEFAULT "10.5",
k4 BIGINT NULL DEFAULT VALUE "1000" COMMENT "This is column k4",
v1 VARCHAR(10) REPLACE NOT NULL,
v2 BITMAP BITMAP_UNION,
v3 HLL HLL_UNION,
v4 INT SUM NOT NULL DEFAULT "1" COMMENT "This is column v4"
```
```text
k1 TINYINT,
k2 DECIMAL(10,2) DEFAULT "10.5",
k4 BIGINT NULL DEFAULT VALUE "1000" COMMENT "This is column k4",
v1 VARCHAR(10) REPLACE NOT NULL,
v2 BITMAP BITMAP_UNION,
v3 HLL HLL_UNION,
v4 INT SUM NOT NULL DEFAULT "1" COMMENT "This is column v4"
```

* `index_definition_list`

@ -150,7 +150,7 @@ distribution_info

* `engine_type`

  Table engine type. All types in this document are OLAP. For other external table engine types, see the [CREATE EXTERNAL TABLE](DORIS/SQL手册/语法帮助/DDL/CREATE-EXTERNAL-TABLE.md) document. Example:
  Table engine type. All types in this document are OLAP. For other external table engine types, see the [CREATE EXTERNAL TABLE](./CREATE-EXTERNAL-TABLE.html) document. Example:

  `ENGINE=olap`

@ -272,7 +272,7 @@ distribution_info

* `in_memory`

  This property sets whether the table is an [in-memory table](DORIS/操作手册/内存表.md).
  This property sets whether the table is an in-memory table.

  `"in_memory" = "true"`

@ -536,7 +536,7 @@ distribution_info

#### Partitioning and Bucketing

A table must specify bucketing columns, but it does not have to specify partitions. For a detailed introduction to partitioning and bucketing, see the [data partitioning](DORIS/开始使用/关系模型与数据划分.md) document.
A table must specify bucketing columns, but it does not have to specify partitions. For a detailed introduction to partitioning and bucketing, see the [data partitioning](../../../../data-table/data-partition.html) document.

Tables in Doris can be divided into partitioned tables and non-partitioned tables. This property is determined when the table is created and cannot be changed afterwards. That is, for a partitioned table, partitions can be added or dropped later in use, while for a non-partitioned table, no partition operations such as adding partitions can be performed later.

@ -546,7 +546,7 @@ Tables in Doris can be divided into partitioned and non-partitioned tables. This property is determined when the

#### Dynamic Partitioning

The dynamic partitioning feature mainly helps users manage partitions automatically. Given configured rules, the Doris system periodically adds new partitions or removes historical partitions. See the [Dynamic Partitioning](DORIS/操作手册/动态分区.md) document for more help.
The dynamic partitioning feature mainly helps users manage partitions automatically. Given configured rules, the Doris system periodically adds new partitions or removes historical partitions. See the [Dynamic Partitioning](../../../../advanced/partition/dynamic-partition.html) document for more help.

#### Materialized Views

@ -556,7 +556,7 @@ Tables in Doris can be divided into partitioned and non-partitioned tables. This property is determined when the

If a materialized view is added later during use and the table already contains data, the creation time of the materialized view depends on the current data volume.

For an introduction to materialized views, see the [Materialized View](DORIS/操作手册/物化视图.md) document.
For an introduction to materialized views, see the [Materialized View](../../../../advanced/materialized-view.html) document.

#### Indexes

@ -26,10 +26,55 @@ under the License.

## CREATE-VIEW

### Name

CREATE VIEW

### Description

This statement is used to create a logical view.

Syntax:

```sql
CREATE VIEW [IF NOT EXISTS]
[db_name.]view_name
(column1[ COMMENT "col comment"][, column2, ...])
AS query_stmt
```

Notes:

- A view is a logical view and has no physical storage. All queries on the view are equivalent to queries on the subquery the view corresponds to.
- query_stmt can be any supported SQL.

### Example

1. Create the view example_view on example_db

   ```sql
   CREATE VIEW example_db.example_view (k1, k2, k3, v1)
   AS
   SELECT c1 as k1, k2, k3, SUM(v1) FROM example_table
   WHERE k1 = 20160112 GROUP BY k1,k2,k3;
   ```

2. Create a view with comments

   ```sql
   CREATE VIEW example_db.example_view
   (
       k1 COMMENT "first key",
       k2 COMMENT "second key",
       k3 COMMENT "third key",
       v1 COMMENT "first value"
   )
   COMMENT "my first view"
   AS
   SELECT c1 as k1, k2, k3, SUM(v1) FROM example_table
   WHERE k1 = 20160112 GROUP BY k1,k2,k3;
   ```

### Keywords

CREATE, VIEW

@ -26,10 +26,33 @@ under the License.

## DROP-DATABASE

### Name

DROP DATABASE

### Description

This statement is used to drop a database.

Syntax:

```sql
DROP DATABASE [IF EXISTS] db_name [FORCE];
```

Notes:

- For a period of time after executing DROP DATABASE, the dropped database can be restored through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details.
- If you execute DROP DATABASE FORCE, the system does not check the database for unfinished transactions; the database is dropped immediately and cannot be recovered. This operation is generally not recommended.

### Example

1. Drop the database db_test

   ```sql
   DROP DATABASE db_test;
   ```
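
2. Forcibly drop the database db_test, skipping the check for unfinished transactions (a minimal sketch of the FORCE variant described above; the database cannot be restored afterwards):

   ```sql
   DROP DATABASE IF EXISTS db_test FORCE;
   ```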

### Keywords

DROP, DATABASE

@ -24,12 +24,36 @@ specific language governing permissions and limitations

under the License.
-->

## DROP-ENCRYPTKEY

### Name

DROP ENCRYPTKEY

### Description

Syntax:

```sql
DROP ENCRYPTKEY key_name
```

Parameter description:

- `key_name`: the name of the key to drop; it may be qualified with a database name, e.g. `db1.my_key`.

Drops a custom key. A key can only be dropped when its name matches exactly.

Executing this command requires the user to have `ADMIN` privileges.

### Example

1. Drop a key

   ```sql
   DROP ENCRYPTKEY my_key;
   ```
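
2. Drop a key qualified with a database name (a minimal sketch; assumes a key named db1.my_key was created beforehand):

   ```sql
   DROP ENCRYPTKEY db1.my_key;
   ```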

### Keywords

DROP, ENCRYPT, KEY

@ -26,10 +26,36 @@ under the License.

## DROP-FILE

### Name

DROP FILE

### Description

This statement is used to delete an uploaded file.

Syntax:

```sql
DROP FILE "file_name" [FROM database]
[properties]
```

Notes:

- file_name: the file name.
- database: the db the file belongs to; if not specified, the db of the current session is used.
- properties supports the following parameter:
  - `catalog`: required. The category the file belongs to.

### Example

1. Delete the file ca.pem

   ```sql
   DROP FILE "ca.pem" properties("catalog" = "kafka");
   ```

### Keywords

DROP, FILE

@ -26,10 +26,34 @@ under the License.

## DROP-FUNCTION

### Name

DROP FUNCTION

### Description

Drops a custom function. A function can only be dropped when its name and parameter types match exactly.

Syntax:

```sql
DROP FUNCTION function_name
(arg_type [, ...])
```

Parameter description:

- `function_name`: the name of the function to drop
- `arg_type`: the parameter list of the function to drop

### Example

1. Drop a function

   ```sql
   DROP FUNCTION my_add(INT, INT)
   ```

### Keywords

DROP, FUNCTION

@ -26,10 +26,27 @@ under the License.

## DROP-INDEX

### Name

DROP INDEX

### Description

This statement is used to drop the index with the specified name from a table; currently only bitmap indexes are supported.

Syntax:

```sql
DROP INDEX [IF EXISTS] index_name ON [db_name.]table_name;
```

### Example

1. Drop an index

   ```sql
   DROP INDEX IF EXISTS index_name ON table1;
   ```

### Keywords

DROP, INDEX

@ -26,10 +26,92 @@ under the License.

## DROP-MATERIALIZED-VIEW

### Name

DROP MATERIALIZED VIEW

### Description

This statement is used to drop a materialized view. Synchronous syntax.

Syntax:

```sql
DROP MATERIALIZED VIEW [IF EXISTS] mv_name ON table_name;
```

1. IF EXISTS:
   If the materialized view does not exist, do not throw an error. If this keyword is not declared, an error is reported when the materialized view does not exist.

2. mv_name:
   The name of the materialized view to drop. Required.

3. table_name:
   The name of the table the materialized view to drop belongs to. Required.

### Example

The table schema is

```sql
mysql> desc all_type_table all;
+----------------+-------+----------+------+-------+---------+-------+
| IndexName      | Field | Type     | Null | Key   | Default | Extra |
+----------------+-------+----------+------+-------+---------+-------+
| all_type_table | k1    | TINYINT  | Yes  | true  | N/A     |       |
|                | k2    | SMALLINT | Yes  | false | N/A     | NONE  |
|                | k3    | INT      | Yes  | false | N/A     | NONE  |
|                | k4    | BIGINT   | Yes  | false | N/A     | NONE  |
|                | k5    | LARGEINT | Yes  | false | N/A     | NONE  |
|                | k6    | FLOAT    | Yes  | false | N/A     | NONE  |
|                | k7    | DOUBLE   | Yes  | false | N/A     | NONE  |
|                |       |          |      |       |         |       |
| k1_sumk2       | k1    | TINYINT  | Yes  | true  | N/A     |       |
|                | k2    | SMALLINT | Yes  | false | N/A     | SUM   |
+----------------+-------+----------+------+-------+---------+-------+
```

1. Drop the materialized view named k1_sumk2 of the table all_type_table

   ```sql
   drop materialized view k1_sumk2 on all_type_table;
   ```

   The table schema after the materialized view is dropped:

   ```text
   +----------------+-------+----------+------+-------+---------+-------+
   | IndexName      | Field | Type     | Null | Key   | Default | Extra |
   +----------------+-------+----------+------+-------+---------+-------+
   | all_type_table | k1    | TINYINT  | Yes  | true  | N/A     |       |
   |                | k2    | SMALLINT | Yes  | false | N/A     | NONE  |
   |                | k3    | INT      | Yes  | false | N/A     | NONE  |
   |                | k4    | BIGINT   | Yes  | false | N/A     | NONE  |
   |                | k5    | LARGEINT | Yes  | false | N/A     | NONE  |
   |                | k6    | FLOAT    | Yes  | false | N/A     | NONE  |
   |                | k7    | DOUBLE   | Yes  | false | N/A     | NONE  |
   +----------------+-------+----------+------+-------+---------+-------+
   ```

2. Drop a materialized view that does not exist in the table all_type_table

   ```sql
   drop materialized view k1_k2 on all_type_table;
   ERROR 1064 (HY000): errCode = 2, detailMessage = Materialized view [k1_k2] does not exist in table [all_type_table]
   ```

   The drop request reports an error directly.

3. Drop the materialized view k1_k2 in the table all_type_table; if it does not exist, no error is reported.

   ```sql
   drop materialized view if exists k1_k2 on all_type_table;
   Query OK, 0 rows affected (0.00 sec)
   ```

   If it exists it is dropped; if it does not exist, no error is reported.

### Keywords

DROP, MATERIALIZED, VIEW

@ -26,10 +26,29 @@ under the License.

## DROP-RESOURCE

### Name

DROP RESOURCE

### Description

This statement is used to drop an existing resource. Only the root or admin user can drop resources.

Syntax:

```sql
DROP RESOURCE 'resource_name'
```

Note: ODBC/S3 resources that are in use cannot be dropped.

### Example

1. Drop the Spark resource named spark0:

   ```sql
   DROP RESOURCE 'spark0';
   ```

### Keywords

DROP, RESOURCE

@ -26,10 +26,40 @@ under the License.

## DROP-TABLE

### Name

DROP TABLE

### Description

This statement is used to drop a table.

Syntax:

```sql
DROP TABLE [IF EXISTS] [db_name.]table_name [FORCE];
```

Notes:

- For a period of time after executing DROP TABLE, the dropped table can be restored through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.html) statement for details.
- If you execute DROP TABLE FORCE, the system does not check the table for unfinished transactions; the table is dropped immediately and cannot be recovered. This operation is generally not recommended.

### Example

1. Drop a table

   ```sql
   DROP TABLE my_table;
   ```

2. If it exists, drop the table in the specified database

   ```sql
   DROP TABLE IF EXISTS example_db.my_table;
   ```
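
3. Drop the table directly, skipping the check for unfinished transactions (a minimal sketch of the FORCE variant described above; the table cannot be recovered afterwards):

   ```sql
   DROP TABLE IF EXISTS example_db.my_table FORCE;
   ```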

### Keywords

DROP, TABLE

@ -26,10 +26,41 @@ under the License.

## TRUNCATE-TABLE

### Name

TRUNCATE TABLE

### Description

This statement is used to clear the data of the specified table and partitions.

Syntax:

```sql
TRUNCATE TABLE [db.]tbl[ PARTITION(p1, p2, ...)];
```

Notes:

- The statement clears the data but keeps the table or partition.
- Unlike DELETE, this statement can only clear the specified table or partitions as a whole; filter conditions cannot be added.
- Unlike DELETE, clearing data this way does not affect query performance.
- The data deleted by this operation cannot be recovered.
- When using this command, the table status must be NORMAL, that is, operations such as SCHEMA CHANGE must not be in progress.

### Example

1. Clear the table tbl under example_db

   ```sql
   TRUNCATE TABLE example_db.tbl;
   ```

2. Clear partitions p1 and p2 of table tbl

   ```sql
   TRUNCATE TABLE tbl PARTITION(p1, p2);
   ```

### Keywords

TRUNCATE, TABLE
