[typo](doc)spark load add task timeout parameter #20115
@@ -186,6 +186,7 @@ REVOKE USAGE_PRIV ON RESOURCE resource_name FROM ROLE role_name
- `spark.master`: required; `yarn` and `spark://host:port` are supported at present.
- `spark.submit.deployMode`: the deployment mode of the Spark program. Required; supports `cluster` and `client`.
- `spark.hadoop.fs.defaultFS`: required when the master is yarn.
- `spark.submit.timeout`: the timeout for the Spark task; the default is 5 minutes.
- Other parameters are optional; refer to `http://spark.apache.org/docs/latest/configuration.html`.
- YARN RM related parameters are as follows:
- If Spark uses a single-point RM, you need to configure `spark.hadoop.yarn.resourcemanager.address`, the address of the single-point ResourceManager; see the example resource definition after this list.
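Taken together, these properties are set when the Spark resource is created. The following is a minimal sketch of a `CREATE EXTERNAL RESOURCE` statement assuming a single-point RM; the resource name, addresses, paths, and broker name are placeholders, and the `spark.submit.timeout` line is left as a comment because this page only states its default (5 minutes), not its value format.

```sql
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    -- "spark.submit.timeout" = "...",  task timeout; default is 5 minutes (value format not shown on this page)
    "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
    "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
    "working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
    "broker" = "broker0"
);
```

Access to such a resource is then managed with the `GRANT`/`REVOKE USAGE_PRIV ON RESOURCE ...` statements shown in the surrounding context.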
@@ -159,6 +159,7 @@ REVOKE USAGE_PRIV ON RESOURCE resource_name FROM ROLE role_name
- `spark.master`: required; `yarn` and `spark://host:port` are supported at present.
- `spark.submit.deployMode`: the deployment mode of the Spark program. Required; supports `cluster` and `client`.
- `spark.hadoop.fs.defaultFS`: required when the master is yarn.
- `spark.submit.timeout`: the timeout for the Spark task; the default is 5 minutes.
- The YARN RM related parameters are as follows:
- If Spark uses a single-point RM, you need to configure `spark.hadoop.yarn.resourcemanager.address`, the address of the single-point ResourceManager.
- If Spark uses RM-HA, you need to configure the following (configure either the hostname or the address form); a sketch follows after this list:
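For the RM-HA case, the same kind of resource definition carries the standard YARN HA keys through the `spark.hadoop.` prefix. The sketch below assumes the hostname form is used; the resource name, nameservice, rm-ids, hostnames, and broker name are placeholders.

```sql
CREATE EXTERNAL RESOURCE "spark1"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.hadoop.fs.defaultFS" = "hdfs://nameservice1",
    -- enable ResourceManager HA and list the logical rm-ids
    "spark.hadoop.yarn.resourcemanager.ha.enabled" = "true",
    "spark.hadoop.yarn.resourcemanager.ha.rm-ids" = "rm1,rm2",
    -- configure either the hostname or the address key for each rm-id (hostname form shown)
    "spark.hadoop.yarn.resourcemanager.hostname.rm1" = "host1",
    "spark.hadoop.yarn.resourcemanager.hostname.rm2" = "host2",
    "working_dir" = "hdfs://nameservice1/tmp/doris",
    "broker" = "broker0"
);
```

With the address form, `spark.hadoop.yarn.resourcemanager.address.rm1` and `spark.hadoop.yarn.resourcemanager.address.rm2` (host:port values) would replace the two hostname lines.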