Subject: k8s orchestrating Spark service


Thanks Matt,

Actually, I can’t use spark-submit; we submit the Driver programmatically
through the API. But that is not the issue, and neither is using k8s as the
master. You may be right that it would be easier, but it doesn’t quite get
to the heart of the question.

We want to orchestrate a bunch of services, including Spark. The rest work;
we are asking whether anyone has seen a good starting point for adding
Spark as a k8s-managed service.

From: Matt Cheah <[EMAIL PROTECTED]>
Reply: Matt Cheah <[EMAIL PROTECTED]>
Date: July 1, 2019 at 3:26:20 PM
To: Pat Ferrel <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
Subject:  Re: k8s orchestrating Spark service

I would recommend looking into Spark’s native support for running on
Kubernetes. You can start the application against Kubernetes directly,
either with spark-submit in cluster mode or by starting the Spark context
with the right parameters in client mode. See
https://spark.apache.org/docs/latest/running-on-kubernetes.html
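For reference, a minimal cluster-mode submission against Kubernetes (per the doc linked above) looks roughly like the sketch below; the API server address, container image name, and example jar path are placeholders, not details from this thread:

```shell
# Sketch of spark-submit in cluster mode against Kubernetes.
# <k8s-apiserver-host> and <your-spark-image> are placeholders you must fill in.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  local:///opt/spark/examples/jars/spark-examples.jar
```

Kubernetes schedules the driver and executor pods itself, which is why no standalone Master/Worker service is needed in this mode.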

I would think that building Helm around this architecture of running Spark
applications would be easier than running a Spark standalone cluster. But
admittedly I’m not very familiar with Helm – we just use spark-submit.

-Matt Cheah

*From: *Pat Ferrel <[EMAIL PROTECTED]>
*Date: *Sunday, June 30, 2019 at 12:55 PM
*To: *"[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
*Subject: *k8s orchestrating Spark service

We're trying to set up a system that includes Spark. The rest of the
services have good Docker containers and Helm charts to start from.

Spark, on the other hand, is proving difficult. We forked a container and
have tried to create our own chart, but have run into several problems
with this.

So back to the community… Can anyone recommend a Docker container + Helm
chart for use with Kubernetes to orchestrate:

   - Spark standalone Master
   - several Spark Workers/Executors

This is not a request to use k8s to orchestrate Spark jobs, but to manage
the Spark service cluster itself.

Thanks