How can I integrate Apache Spark with the Play Framework to display predictions in real time?

I'm doing some testing with Apache Spark for my final project in college. I have a data set that I use to generate a decision tree and make predictions on new data.

In the future, I plan to put this project into production: I would generate a decision tree periodically (batch processing), receive new data through a web interface or mobile application, predict the class of each new entry, and return the result to the user instantly. I would also store these new entries so that, after a while, I can generate a new decision tree (batch processing again) and repeat this process continuously.

Although Apache Spark is designed for batch processing, its streaming API allows you to receive data in real time. In my application, that data would only be run through a model already built by a batch process (the decision tree), and since the prediction itself is quite fast, the user would get an answer quickly.

My question is: what are the best ways to integrate Apache Spark with a web application? (I plan to use the Scala version of the Play Framework.)

Javier answered 10/5, 2015 at 3:33 Comment(1)
'Best' by what criteria?Amadou

One of the issues you will run into with Spark is that it takes some time to start up and build a SparkContext. If you want to serve Spark queries via web calls, it will not be practical to fire up spark-submit every time. Instead, you will want to turn your driver application (these terms will make more sense later) into an RPC server.

In my application I am embedding a web server (http4s) so I can do XmlHttpRequests in JavaScript to directly query my application, which will return JSON objects.
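As a rough illustration of that pattern, here is a minimal sketch of a long-lived process that builds its "model" once at startup and answers HTTP requests with JSON. The JDK's built-in `HttpServer` stands in for http4s, and the stub `model` function stands in for a Spark-trained decision tree; both substitutions are assumptions, not the answerer's actual code.

```scala
// Sketch: a long-lived driver process exposing predictions over HTTP.
// Assumptions: the stub `model` stands in for a Spark DecisionTreeModel,
// and the JDK's HttpServer stands in for http4s.
import com.sun.net.httpserver.{HttpExchange, HttpServer}
import java.net.InetSocketAddress

object PredictionServer {
  // Built once at startup and kept warm, like a SparkContext/model pair.
  val model: Double => Int = x => if (x > 0.5) 1 else 0

  def main(args: Array[String]): Unit = {
    val server = HttpServer.create(new InetSocketAddress(8080), 0)
    // GET /predict?x=0.7 returns a small JSON object.
    server.createContext("/predict", (exchange: HttpExchange) => {
      val feature = Option(exchange.getRequestURI.getQuery)
        .flatMap(_.stripPrefix("x=").toDoubleOption)
        .getOrElse(0.0)
      val body = s"""{"prediction": ${model(feature)}}"""
      exchange.getResponseHeaders.add("Content-Type", "application/json")
      exchange.sendResponseHeaders(200, body.getBytes.length)
      exchange.getResponseBody.write(body.getBytes)
      exchange.close()
    })
    server.start()
  }
}
```

Because the model is built once and held in memory, each request pays only the cost of a function call, not a Spark startup.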

Downatheel answered 10/5, 2015 at 4:18 Comment(1)
I am quite interested in your solution. Is there a significant performance penalty when running such a long-lived Spark job? I mean, will the application's performance degrade over time because of GC pressure?Grano

Spark is a fast, large-scale data processing platform. The key here is large-scale data. In most cases, the time to process that data will not be fast enough to meet the expectations of your average web app user. It is far better practice to perform the processing offline and write the results of your Spark processing to, e.g., a database. Your web app can then efficiently retrieve those results by querying that database.
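That offline pattern can be sketched in a few lines: a batch job precomputes results into a store, and the per-request web path is only a cheap lookup. The in-memory `Map` standing in for the database, and the toy scoring rule, are assumptions for illustration only.

```scala
// Sketch of the offline pattern: a batch job (think nightly Spark run)
// precomputes predictions into a store; the web layer only does lookups.
// Assumption: an in-memory Map stands in for a real database.
object OfflineResults {
  private var store: Map[String, String] = Map.empty

  // Pretend Spark batch job: userId -> predicted label.
  def runBatchJob(rows: Seq[(String, Double)]): Unit =
    store = rows.map { case (id, score) =>
      id -> (if (score > 0.5) "likely" else "unlikely")
    }.toMap

  // What the web app calls per request: no Spark involved.
  def lookup(userId: String): Option[String] = store.get(userId)
}
```

The web app never touches Spark at request time; it only reads the batch job's output.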

That being said, spark-jobserver provides a REST API for submitting Spark jobs.

Forgetful answered 22/5, 2015 at 4:15 Comment(1)
Yes, I will create the model offline and use it to make predictions in real time.Javier

Spark (< v1.6) uses Akka underneath. So does Play. You should be able to write a Spark action as an actor that communicates with a receiving actor in the Play system (that you also write).

You can let Akka worry about de/serialization, which will work as long as both systems have the same class definitions on their classpaths.

If you want to go further than that, you can write Akka Streams code that tees the data stream to your Play application.

Profiterole answered 6/8, 2016 at 1:33 Comment(2)
As of 1.6.0, Spark has moved away from AkkaMisdemeanant
Yes, that is true. I'll leave this for posterity's sake. Or until somebody decides it should be deleted ;) Meanwhile, I'll adjust the answer to indicate.Profiterole

Check this link out. You need to run Spark in local mode (on your web server), and the offline ML model should be saved in S3 so the web app can access it. Cache the model just once, and you will have a Spark context running continuously in local mode:

https://commitlogs.com/2017/02/18/serve-spark-ml-model-using-play-framework-and-s3/

Another approach is to use Livy (REST API calls to Spark):

https://index.scala-lang.org/luqmansahaf/play-livy-module/play-livy/1.0?target=_2.11

The S3 option is the way forward, I guess; if the batch model changes, you need to refresh the website cache (a few minutes of downtime).
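The "cache the model just once" idea can be sketched with a lazy singleton: the model is loaded on first use and reused for every subsequent request. Here `loadFromS3()` is a hypothetical stand-in for fetching and deserializing the saved model from S3, and the returned function stands in for the model's `predict` method.

```scala
// Sketch: cache the model once per JVM, as the answer suggests.
// Assumption: loadFromS3() is a hypothetical stand-in for fetching
// the saved ML model from S3 and deserializing it.
object ModelCache {
  var loads = 0 // exposed only to demonstrate the load happens once

  private def loadFromS3(): Double => Int = {
    loads += 1
    x => if (x > 0.5) 1 else 0 // stand-in for model.predict
  }

  // Loaded lazily on first access, then reused for every request.
  lazy val model: Double => Int = loadFromS3()
}
```

Refreshing after a batch retrain would mean replacing this cached instance, which is the brief "down time" the answer mentions.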

Look into these links:

https://github.com/openforce/spark-mllib-scala-play/blob/master/app/modules/SparkUtil.scala

https://github.com/openforce/spark-mllib-scala-play

Thanks, Sri

Vanitavanity answered 12/5, 2018 at 12:32 Comment(0)
