From e83bab4e446055156606ffe37f3f52ee65fc426d Mon Sep 17 00:00:00 2001
From: lexluo09 <39718951+lexluo09@users.noreply.github.com>
Date: Wed, 21 Dec 2022 14:12:41 +0800
Subject: [PATCH] [typo](docs) add spark-doris-connector config (#15214)

---
 docs/en/docs/ecosystem/spark-doris-connector.md    | 3 ++-
 docs/zh-CN/docs/ecosystem/spark-doris-connector.md | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/en/docs/ecosystem/spark-doris-connector.md b/docs/en/docs/ecosystem/spark-doris-connector.md
index d2b8dcdaf3..8a7545f478 100644
--- a/docs/en/docs/ecosystem/spark-doris-connector.md
+++ b/docs/en/docs/ecosystem/spark-doris-connector.md
@@ -273,7 +273,8 @@ kafkaSource.selectExpr("CAST(key AS STRING)", "CAST(value as STRING)")
 | sink.batch.size | 10000 | Maximum number of rows in a single write to BE |
 | sink.max-retries | 1 | Number of retries after a failed write to BE |
 | sink.properties.* | -- | The Stream Load parameters.<br/>e.g.:<br/>'sink.properties.column_separator' = ',' |
-
+| doris.sink.task.partition.size | -- | The number of partitions used by the write task. After filtering and other operations, the Spark RDD may end up with many partitions that each hold relatively few records, which increases the write frequency and wastes computing resources. The smaller this value, the lower the write frequency to Doris and the lighter the Doris compaction pressure. Generally used together with doris.sink.task.use.repartition (see the usage sketch below). |
+| doris.sink.task.use.repartition | false | Whether to use repartition to control the number of partitions written to Doris. The default is false, which uses coalesce (note: if there is no Spark action before the write, the parallelism of the whole computation may be reduced). If set to true, repartition is used (note: the final number of partitions can be set exactly, at the cost of a shuffle). |
 
 ### SQL & Dataframe Configuration
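The two rows added above are a config knob and its companion switch. As a point of reference, here is a minimal sketch of how they might be passed through the connector's DataFrame write interface, in the same Scala register as the `kafkaSource` example in the hunk context; the FE address, table identifier, credentials, and the target partition count of 2 are placeholder assumptions, not values from this patch.

```scala
// Minimal sketch (not part of this patch): a DataFrame write through the
// Spark Doris Connector using the two new options. FE address, table
// identifier, credentials, and the target partition count are placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("doris-sink-sketch").getOrCreate()
val df = spark.read.json("/path/to/input") // any DataFrame to be written

df.write
  .format("doris")
  .option("doris.table.identifier", "example_db.example_table")
  .option("doris.fenodes", "fe_host:8030")
  .option("user", "root")
  .option("password", "")
  // Stream Load parameter forwarded via sink.properties.*
  .option("sink.properties.column_separator", ",")
  // Write through at most 2 partitions to lower the write frequency
  // and Doris compaction pressure.
  .option("doris.sink.task.partition.size", "2")
  // true = repartition (extra shuffle, exact partition count, upstream
  // parallelism preserved); false (default) = coalesce (no shuffle).
  .option("doris.sink.task.use.repartition", "true")
  .save()
```

Leaving doris.sink.task.use.repartition at its default of false would apply the same target partition count via coalesce instead, avoiding the shuffle but risking lower upstream parallelism.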
diff --git a/docs/zh-CN/docs/ecosystem/spark-doris-connector.md b/docs/zh-CN/docs/ecosystem/spark-doris-connector.md
index 2a7c093a87..209e53d34e 100644
--- a/docs/zh-CN/docs/ecosystem/spark-doris-connector.md
+++ b/docs/zh-CN/docs/ecosystem/spark-doris-connector.md
@@ -280,7 +280,9 @@ kafkaSource.selectExpr("CAST(key AS STRING)", "CAST(value as STRING)")
 | doris.write.fields | -- | Specifies the fields (and their order) written to the Doris table, separated by commas.<br/>By default, all fields are written in the order of the Doris table schema. |
 | sink.batch.size | 10000 | Maximum number of rows in a single write to BE |
 | sink.max-retries | 1 | Number of retries after a failed write to BE |
-| sink.properties.* | -- | Stream Load import parameters.<br/>e.g.: 'sink.properties.column_separator' = ',' |
+| sink.properties.* | -- | Stream Load import parameters.<br/>e.g.: 'sink.properties.column_separator' = ',' |
+| doris.sink.task.partition.size | -- | The number of partitions for the Doris write task. After filtering and other operations, the Spark RDD may end up with many partitions that each hold relatively few records, which increases the write frequency and wastes computing resources.<br/>The smaller this value, the lower the write frequency to Doris and the lighter the Doris compaction pressure. Used together with doris.sink.task.use.repartition. |
+| doris.sink.task.use.repartition | false | Whether to use repartition to control the number of partitions written to Doris. The default is false, which uses coalesce (note: if there is no Spark action before the write, the parallelism of the whole computation may be reduced).<br/>If set to true, repartition is used (note: the final number of partitions can be set exactly, at the cost of an extra shuffle; illustrated below). |
 
 ### SQL & Dataframe Configuration
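As background for the coalesce-versus-repartition trade-off both tables describe, the following sketch shows the plain Spark behavior the switch chooses between; the DataFrame, the filter, and the partition counts are illustrative assumptions, not connector code.

```scala
// Illustrative sketch (plain Spark, not connector code): the behavior
// doris.sink.task.use.repartition selects between.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-demo").getOrCreate()
import spark.implicits._

// Many partitions, each with few surviving records after a heavy filter.
val df = (1 to 1000000).toDF("id").repartition(200).filter($"id" % 100 === 0)

// coalesce(2): no shuffle; the narrow dependency can collapse the whole
// upstream computation down to 2 tasks if no action ran before the write.
val viaCoalesce = df.coalesce(2)

// repartition(2): adds a shuffle; upstream stages keep their parallelism
// and exactly 2 partitions reach the sink.
val viaRepartition = df.repartition(2)

println(viaCoalesce.rdd.getNumPartitions)    // 2
println(viaRepartition.rdd.getNumPartitions) // 2
```

Because coalesce creates a narrow dependency, Spark can fuse it with upstream stages; repartition inserts an exchange, which costs a shuffle but isolates the sink's partition count from upstream parallelism.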