Analytics Data Compute
powered by Apache Spark
Launch your Apache Spark jobs on the fly; we take care of the infrastructure!
Do you need Apache Spark computation on a big cluster, but don't have the computers?
Don't have the time to build a cluster and handle all the installation and configuration?
Need a cluster for just a few hours, not forever?
Or do you simply want an easy way to try out the power of Apache Spark?
Subscribe to our Lab and run your big data computation on an Apache Spark cluster in one step!
How it works
With one command, you send your Apache Spark job jar file and the spark-submit command-line options, and we take care of the rest.
ovh-spark-submit --token $TOKEN --class org.apache.spark.examples.SparkPi --name SparkJob1 --executor-memory 2G --total-executor-cores 4 spark-examples_2.11-2.3.1.jar 1000
Upon receiving the command, OVH creates the whole Apache Spark cluster on the fly, executes the Spark job, collects the results, and deletes the entire cluster once the job has finished.
During your computation you can read and write your input and output data from and to any network storage system, such as OpenStack Swift.
- The cluster is only accessible through HTTPS.
- A new, dedicated cluster is created for each request and deleted once the job has finished.
- Your cluster is isolated from the internet.
- Your cluster computers are created in your own OpenStack project.
- Results and output logs are saved in the Swift storage of your OpenStack project.
You can monitor the whole process in complete detail.
A dashboard shows which resources you have consumed, and for how long.
The following link will lead you to the getting-started documentation.
There is no complex billing. You pay only for what you consume. The minimum billing time is 1 hour.
|Spark software:||Free|
|Network inbound / outbound traffic:||Free|
Hourly billing, based on OVH Public Cloud instances on Linux.
Please refer to the OVH Public Cloud pricing.
The total price depends on the amount of CPU and RAM you request.
Example: you request 60 GB of RAM, so 3 x R3-20 instances are deployed. The hourly usage depends on your Apache Spark job: if it takes 1 hour and 20 minutes, you pay for 3 x R3-20 instances for 2 hours (every started hour is billed in full).
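As an illustration, the rounding rules of the example above can be sketched in Python. The per-instance RAM (20 GB for an R3-20) is an assumption inferred from the example, and actual flavors and prices are on the OVH Public Cloud pricing page:

```python
import math

def billed_instance_hours(requested_ram_gb, job_minutes, ram_per_instance_gb=20):
    """Sketch of the billing example: instances are sized from the requested RAM
    (assuming an R3-20 flavor with 20 GB of RAM each, as in the example above),
    and every started hour is billed in full."""
    instances = math.ceil(requested_ram_gb / ram_per_instance_gb)
    billed_hours = math.ceil(job_minutes / 60)
    return instances, billed_hours, instances * billed_hours

# 60 GB of RAM for a job of 1 h 20 min -> 3 instances, each billed for 2 hours
print(billed_instance_hours(60, 80))  # (3, 2, 6)
```

This reproduces the worked example: 3 instances, billed 2 hours each, for 6 instance-hours in total.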
What is the delivery time?
In a few minutes you can create a big cluster of computers with Spark ready to use. The time varies slightly with the number of requested nodes.
What version of Apache Spark do you use in the cluster?
You can specify any version of Spark that you need in the command-line options. All officially released versions of Apache Spark, from 1.6.3 to 2.3.2, are supported.
Can I create a Spark cluster and send several Spark jobs to the same cluster?
Yes. A command-line option lets you keep your cluster and send as many jobs as you like to it; another command-line option deletes the cluster when you are done.
Am I an administrator of my cluster?
Yes, the cluster is visible in your OpenStack CLI and in the OVH Public Cloud control panel. However, there is no need to administer it: OVH starts, deploys, configures, and deletes the cluster for you.
How many machines will be deployed?
The number of machines is calculated from the number of cores and the amount of memory you specify in the command-line options.
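As a rough sketch of such a sizing rule (an assumed illustration, not the exact OVH algorithm, and the per-instance core/RAM figures are hypothetical): take whichever of the two requirements demands more instances:

```python
import math

def instances_needed(total_cores, total_ram_gb,
                     cores_per_instance=8, ram_per_instance_gb=20):
    """Hypothetical sizing: deploy enough instances to satisfy both the
    requested total cores and the requested total memory.
    The flavor specs (8 cores / 20 GB per instance) are assumptions."""
    by_cores = math.ceil(total_cores / cores_per_instance)
    by_ram = math.ceil(total_ram_gb / ram_per_instance_gb)
    return max(by_cores, by_ram)

print(instances_needed(total_cores=4, total_ram_gb=60))  # 3: memory dominates here
```

With 4 cores and 60 GB of RAM requested, memory is the binding constraint, so three 20 GB instances are deployed, matching the pricing example above.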
How can I create a TOKEN for my Openstack project?
You can create a token in your OVH Public Cloud control panel or with the official OpenStack command-line client.
Can I have access to Apache Spark UI and Dashboard during computation?
Yes. During the computation you have access to the Spark UI and dashboard, and after the job finishes you can find the output logs (stderr, stdout) in your OpenStack Swift storage.