Spark Administration in Kubernetes
In Fusion 5.x, Spark operates in native Kubernetes mode, rather than the standalone mode used in Fusion 4.x. This topic describes Spark operations in Fusion 5.x.
Node Selectors
You can control which nodes Spark executors are scheduled on by setting a Spark configuration property on the job:
spark.kubernetes.node.selector.<LABEL>=<LABEL_VALUE>
Use the node's label key as LABEL and the label's value as LABEL_VALUE. For example, if a node is labeled with fusion_node_type=spark_only, schedule Spark executor pods to run on that node using:
spark.kubernetes.node.selector.fusion_node_type=spark_only
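If the node does not already carry the label, you can apply it with kubectl before configuring the job. A minimal sketch, assuming a node named <node-name> and the fusion_node_type=spark_only label from the example above:

kubectl label node <node-name> fusion_node_type=spark_only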
Spark version 2.4.x does not support tolerations for Spark pods. As a result, Spark pods can’t be scheduled on any nodes with taints.
Cluster mode
Fusion 5 ships with Spark and operates in "cluster mode" on top of Kubernetes. In cluster mode, each Spark driver runs in a separate pod, and resources can be managed per job. Each executor also runs in its own pod.
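Because drivers and executors are ordinary pods, you can inspect running jobs with kubectl. Spark on Kubernetes adds a spark-role label to the pods it creates, so a sketch like the following lists drivers and executors separately (add -n <namespace> if Fusion runs in a dedicated namespace):

kubectl get pods -l spark-role=driver     # one driver pod per running Spark job
kubectl get pods -l spark-role=executor   # executor pods spawned by those drivers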
Spark config defaults
The tables below show the default configurations for Spark. These settings are configured in the job-launcher config map, which you can view using kubectl get configmaps <release-name>-job-launcher. Some of these settings are also configurable via Helm.
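For example, to print the full contents of the config map, including the defaults listed below, add the -o yaml flag (replace <release-name> with your Helm release name):

kubectl get configmaps <release-name>-job-launcher -o yaml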
Spark Configuration | Default value | Helm Variable
---|---|---
 | 3g |
 | 2 |
 | 3g |
 | 6 |
 | 3 |
 | true |

Spark Configuration | Default value | Helm Variable
---|---|---
 | Always |
 | <name>-job-launcher-spark |
 | fusion-dev-docker.ci-artifactory.lucidworks.com |
 | fusion-dev-docker.ci-artifactory.lucidworks.com |
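To change one of these defaults for all jobs, one option is to edit the config map in place; note that the job-launcher pod may need to be restarted before it picks up the change:

kubectl edit configmap <release-name>-job-launcher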