Install the Spark History Server
While logs from the Spark driver and executor pods can be viewed using `kubectl logs [POD_NAME]`, executor pods are deleted at the end of their execution, and driver pods are deleted by Fusion on a default schedule of every hour. To preserve and view Spark logs, install the Spark History Server into your Kubernetes cluster and configure Spark to write logs in a manner that suits your needs.
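For the History Server to have anything to display, Spark must be configured to write event logs to a shared location. A minimal sketch of the relevant settings is shown below; `spark.eventLog.enabled` and `spark.eventLog.dir` are standard Spark properties, but the bucket path is hypothetical and should point at storage reachable from both your Spark pods and the History Server:

```properties
# Enable Spark event logging so the History Server can replay completed jobs.
spark.eventLog.enabled  true
# Hypothetical destination; use a bucket or volume your cluster can reach.
spark.eventLog.dir      s3a://my-spark-logs/events
```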
Spark History Server can be installed via its publicly available Helm chart. To do this, first create a `values.yaml` file to configure it.
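A minimal sketch of such a file is shown below. The key names here follow the layout of the community spark-history-server chart, but they vary between charts and versions, so check your chart's documented values; the bucket path is hypothetical:

```yaml
# Hypothetical values.yaml for a spark-history-server Helm chart.
# Key names vary by chart; run `helm show values [chart]` to see yours.
s3:
  enableS3: true
  # Must match the location Spark writes event logs to (spark.eventLog.dir).
  logDirectory: s3a://my-spark-logs/events
```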
Then install the chart, passing the values file:

```bash
helm install [release-name] [chart] --namespace [fusion-namespace] --values [spark-history-values-yaml]
```
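For example, with hypothetical names substituted for the release, chart, namespace, and values file, the command might look like this:

```bash
# All names below are hypothetical; substitute your own release name,
# chart reference, Fusion namespace, and values file.
helm install spark-history-server stable/spark-history-server \
  --namespace fusion \
  --values spark-history-values.yaml
```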
For related topics, see Spark Operations.