[Improvement](docs) Update EN doc (#9228)
@ -40,7 +40,7 @@ key:
Super user rights:

max_user_connections: Maximum number of connections.
max_query_instances: Maximum number of query instances a user can use when querying.
sql_block_rules: Set SQL block rules. After setting, if a query executed by the user matches the rules, it will be rejected.
cpu_resource_limit: Limit the CPU resource usage of a query. See the session variable `cpu_resource_limit` for details.
exec_mem_limit: Limit the memory usage of the query. See the description of the session variable `exec_mem_limit` for details. -1 means not set.
load_mem_limit: Limit memory usage for imports. See the introduction of the session variable `load_mem_limit` for details. -1 means not set.
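For instance, a minimal sketch of setting one of these keys, assuming the SET PROPERTY statement (the user name jack and the value are hypothetical):

    -- limit user jack to at most 1000 concurrent connections (hypothetical values)
    SET PROPERTY FOR 'jack' 'max_user_connections' = '1000';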
@ -78,7 +78,7 @@ under the License.
Other properties: Other information necessary to access remote storage, such as authentication information.

7) Modifying BE node attributes currently supports the following attributes:

1. tag.location: Resource tag
2. disable_query: Query disabled attribute
3. disable_load: Load disabled attribute
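As a hedged illustration, modifying one of these attributes on a backend might look like this, assuming the ALTER SYSTEM MODIFY BACKEND form (host and heartbeat port are placeholders):

    -- tag the BE at host1 with resource tag "group_a" (placeholder host/port)
    ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("tag.location" = "group_a");
    -- take the same BE out of query scheduling
    ALTER SYSTEM MODIFY BACKEND "host1:9050" SET ("disable_query" = "true");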
@ -199,7 +199,7 @@ under the License.
9. Modify default bucket number of a partition

grammar:
MODIFY DISTRIBUTION DISTRIBUTED BY HASH (k1[,k2 ...]) BUCKETS num

note:
1) Only supports non-colocate tables with RANGE partitions and HASH distribution
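A minimal sketch, assuming an existing RANGE-partitioned table example_db.my_table hashed on k1 (table name and bucket count are placeholders):

    -- change the default bucket number used by newly created partitions to 20
    ALTER TABLE example_db.my_table MODIFY DISTRIBUTION DISTRIBUTED BY HASH(k1) BUCKETS 20;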

10. Modify table comment
@ -51,12 +51,12 @@ Grammar:
## example

[CANCEL ALTER TABLE COLUMN]
1. Cancel the ALTER COLUMN operation on my_table.

    CANCEL ALTER TABLE COLUMN
    FROM example_db.my_table;

[CANCEL ALTER TABLE ROLLUP]
1. Cancel the ADD ROLLUP operation on my_table.

    CANCEL ALTER TABLE ROLLUP
    FROM example_db.my_table;
@ -79,7 +79,7 @@ CREATE [AGGREGATE] [ALIAS] FUNCTION function_name
> "prepare_fn": Function signature of the prepare function for finding the entry from the dynamic library. This option is optional for custom functions
>
> "close_fn": Function signature of the close function for finding the entry from the dynamic library. This option is optional for custom functions
>
> "type": Function type, RPC for remote UDF, NATIVE for C++ native UDF
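A hedged sketch of how these properties combine in a CREATE FUNCTION statement (the function name, symbol and object file URL below are hypothetical):

    CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
        -- entry symbol of the UDF in the dynamic library (hypothetical)
        "symbol" = "add_int_udf",
        -- where the BE can download the compiled library from (hypothetical URL)
        "object_file" = "http://host:port/libmyadd.so",
        -- NATIVE marks a C++ native UDF, as described above
        "type" = "NATIVE"
    );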
@ -36,7 +36,7 @@ under the License.
2. Baidu AFS: AFS for Baidu. Can only be used inside Baidu.
3. Baidu Object Storage (BOS): BOS on Baidu Cloud.
4. Apache HDFS.
5. Amazon S3: Amazon S3.

### Syntax:
@ -137,14 +137,14 @@ under the License.
read_properties:

Used to specify some special parameters.
Syntax:
[PROPERTIES ("key"="value", ...)]

You can specify the following parameters:

line_delimiter: Used to specify the line delimiter in the load file. The default is `\n`. A combination of multiple characters can be used as the line delimiter.

fuzzy_parse: Boolean type. If true, the json schema is parsed from the first row, which can make the import faster, but all keys must keep the same order as in the first row. The default value is false. Only used for the json format.

jsonpaths: There are two ways to import json: simple mode and matched mode.
simple mode: it is simple mode without setting the jsonpaths parameter. In this mode, the json data is required to be the object type. For example:
@ -152,7 +152,7 @@ under the License.
matched mode: the json data is relatively complex, and the corresponding value needs to be matched through the jsonpaths parameter.

strip_outer_array: Boolean type. If true, it indicates that the json data starts with an array object and the objects in the array are flattened. The default value is false. For example:
[
    {"k1" : 1, "v1" : 2},
    {"k1" : 3, "v1" : 4}
@ -207,9 +207,9 @@ under the License.
dfs.client.failover.proxy.provider: Specify the provider that the client uses to connect to the namenode. Default: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.

4.4. Amazon S3

fs.s3a.access.key: Amazon S3 access key
fs.s3a.secret.key: Amazon S3 secret key
fs.s3a.endpoint: Amazon S3 endpoint

4.5. If using the S3 protocol to directly connect to the remote storage, you need to specify the following attributes

(
@ -230,7 +230,7 @@ under the License.
)
fs.defaultFS: defaultFS
hdfs_user: hdfs user
namenode HA:
By configuring namenode HA, the new namenode can be automatically identified when the namenode is switched
dfs.nameservices: hdfs service name, customized, e.g. "dfs.nameservices" = "my_ha"
dfs.ha.namenodes.xxx: Customize the names of the namenodes, separated by commas. xxx is the custom name from dfs.nameservices, e.g. "dfs.ha.namenodes.my_ha" = "my_nn"
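Putting the HA keys together, a hedged sketch of such a property block (nameservice, namenode names, hosts and ports are placeholders; dfs.namenode.rpc-address.* are the standard HDFS client keys for each namenode):

    (
        "fs.defaultFS" = "hdfs://my_ha",
        "hdfs_user" = "hdfs",
        "dfs.nameservices" = "my_ha",
        "dfs.ha.namenodes.my_ha" = "my_nn1,my_nn2",
        "dfs.namenode.rpc-address.my_ha.my_nn1" = "nn1_host:rpc_port",
        "dfs.namenode.rpc-address.my_ha.my_nn2" = "nn2_host:rpc_port",
        "dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
    )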
@ -76,12 +76,12 @@ under the License.
7. hdfs
Specify to use libhdfs to export to hdfs
Grammar:
WITH HDFS ("key"="value"[,...])

The following parameters can be specified:
fs.defaultFS: Set the fs, such as: hdfs://ip:port
hdfs_user: Specify the hdfs user name
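A hedged sketch of an export that uses this clause (table, path, host and user are placeholders):

    -- export a table to HDFS via libhdfs; every value below is a placeholder
    EXPORT TABLE example_db.my_table
    TO "hdfs://hdfs_host:port/a/b/c/"
    WITH HDFS (
        "fs.defaultFS" = "hdfs://hdfs_host:port",
        "hdfs_user" = "work"
    );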
## example
@ -162,10 +162,10 @@ Date class (DATE/DATETIME): 2017-10-03, 2017-06-13 12:34:03.
NULL value: \N

6. S3 Storage
fs.s3a.access.key: user AK, required
fs.s3a.secret.key: user SK, required
fs.s3a.endpoint: user endpoint, required
fs.s3a.impl.disable.cache: whether to disable the cache, default true, optional
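For instance, a sketch of how these keys might appear in the export properties (all values are placeholders):

    (
        "fs.s3a.access.key" = "your_ak",
        "fs.s3a.secret.key" = "your_sk",
        "fs.s3a.endpoint" = "http://s3.example.com",
        "fs.s3a.impl.disable.cache" = "true"
    )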
## example
@ -29,7 +29,7 @@ under the License.
The `SELECT INTO OUTFILE` statement can export the query results to a file. It currently supports exporting to remote storage, such as HDFS, S3, BOS and COS (Tencent Cloud), through the Broker process, or directly through the S3 or HDFS protocol. The syntax is as follows:

Grammar:
query_stmt
INTO OUTFILE "file_path"
[format_as]
@ -50,7 +50,7 @@ under the License.
3. properties
Specify the relevant attributes. Currently, exporting through the Broker process, or through the S3 or HDFS protocol, is supported.

Grammar:
[PROPERTIES ("key"="value", ...)]
The following parameters can be specified:
column_separator: Specifies the exported column separator, defaulting to `\t`. Invisible characters are supported, such as '\x07'.
@ -173,7 +173,7 @@ under the License.
|
||||
"AWS_SECRET_KEY" = "xxx",
|
||||
"AWS_REGION" = "bd"
|
||||
)
|
||||
The final generated file prefix is `my_file_{fragment_instance_id}_`。
|
||||
The final generated file prefix is `my_file_{fragment_instance_id}_`.
|
||||
|
||||
7. Use the s3 protocol to export to bos, and enable concurrent export of session variables.
|
||||
set enable_parallel_outfile = true;
|
||||
|
||||
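A hedged sketch of the statement that might follow (bucket, endpoint and credentials are placeholders, reusing the AWS_* keys shown above):

    -- export query results to an S3-compatible bucket; all values are placeholders
    select k1, k2 from tbl1 limit 1000
    into outfile "s3://my_bucket/export/my_file_"
    format as csv
    properties (
        "AWS_ENDPOINT" = "http://s3.bd.bcebos.com",
        "AWS_ACCESS_KEY" = "xxx",
        "AWS_SECRET_KEY" = "xxx",
        "AWS_REGION" = "bd"
    );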
@ -30,11 +30,11 @@ under the License.
The kafka partition and offset in the result show the currently consumed partitions and the corresponding offsets to be consumed.

grammar:
SHOW [ALL] CREATE ROUTINE LOAD for load_name;

Description:
`ALL`: optional. If specified, all jobs are returned, including historical jobs.
`load_name`: routine load name
## example
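A minimal sketch of the statement described above (the job name test1 is hypothetical):

    SHOW ALL CREATE ROUTINE LOAD for test1;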