Spark: how to run a Spark file from spark-shell
I am using CDH 5.2. I am able to use spark-shell to run commands interactively.

  1. How can I run a file (file.spark) that contains Spark commands?
  2. Is there any way to run/compile Scala programs in CDH 5.2 without sbt?
Leora answered 31/12, 2014 at 6:52 Comment(0)

To load an external file from spark-shell, simply do

:load PATH_TO_FILE

This will execute everything in your file.
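For example, a minimal sketch of what such a file could contain (the file name and the input path below are hypothetical; sc is the SparkContext that spark-shell already provides):

// /home/user/file.spark -- hypothetical contents; inside spark-shell run:
//   scala> :load /home/user/file.spark
val lines = sc.textFile("/tmp/input.txt")          // example input path
val wordCounts = lines
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
wordCounts.take(10).foreach(println)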

I don't have a solution for your SBT question though sorry :-)

Addams answered 31/12, 2014 at 16:59 Comment(2)
Hi, this command works if the file is on the local machine, but is it possible to point it at an HDFS path, i.e. :load hdfs://localhost:9000/file? – Western
It is not working for me. I am using the CDH 5.7 quick start VM. – Aglaia

From the command line, you can use

spark-shell -i file.scala

to run code that is written in file.scala
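A sketch of what file.scala could look like for this approach (the file name and the computation are just examples; sc is the SparkContext spark-shell provides). As noted in the comments below, ending the script with System.exit(0) makes spark-shell exit instead of dropping into the interactive prompt:

// file.scala -- run with: spark-shell -i file.scala
val rdd = sc.parallelize(1 to 100)
println(s"sum = ${rdd.sum()}")
System.exit(0)   // without this, spark-shell stays open after the script finishes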

Octavus answered 17/3, 2015 at 2:17 Comment(9)
I have tried the command but it does not run the code from the file; instead it starts the Scala shell. – Aglaia
@AlexRajKaliamoorthy I might be late, just trying to help with your comment/question. It does execute, however you need to include System.exit(0) at the end of the script in order to exit the spark-shell. – Retardment
Still not working; it only defines the object and then starts the Scala shell. – Hinojosa
How to run .py (Python) files? – Armilla
@AbhishekPansotra spark-submit [options] <app jar | python file> [app arguments] – Hartebeest
How to send arguments to file.scala in this case? – Bruno
In the Scala file, if you define an object SparkTest {...} you need to call SparkTest.main(args = Array()) and then System.exit(0), as mentioned above. – Bhagavadgita
@Ziyao Li, when I typed spark-shell --help it didn't show me the -i option, why so? – Cottonade
Where is the doc for that? Maybe I'll look into the code, that's faster. – Polak

You can use either sbt or Maven to compile Spark programs. Apache Spark is published to Maven Central, so simply add it as a dependency:

<dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.2.0</version>
</dependency>
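With that dependency in place, a minimal Spark application you could build with Maven (no sbt needed) might look like the following sketch; the object name and input path are hypothetical, and the resulting jar would be run with spark-submit:

// src/main/scala/SimpleApp.scala -- hypothetical example
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleApp")
    val sc = new SparkContext(conf)
    val lines = sc.textFile("/tmp/input.txt")   // example input path
    println(s"line count = ${lines.count()}")
    sc.stop()
  }
}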

In terms of running a file with Spark commands: you can simply do this:

echo '
   import org.apache.spark.sql._
   val ssc = new SQLContext(sc)
   ssc.sql("select * from mytable").collect
' > spark.input

(single quotes around the block keep the inner double quotes intact). Now run the commands script:

cat spark.input | spark-shell
Sibship answered 31/12, 2014 at 16:43 Comment(1)
Downvoting an apparently useful answer would at least merit an explanation for your concern. – Sibship

Just to give more perspective to the answers:

spark-shell is a Scala REPL.

You can type :help to see the list of operations that are possible inside the shell:

scala> :help
All commands can be abbreviated, e.g., :he instead of :help.
:edit <id>|<line>        edit history
:help [command]          print this summary or command-specific help
:history [num]           show the history (optional num is commands to show)
:h? <string>             search the history
:imports [name name ...] show import history, identifying sources of names
:implicits [-v]          show the implicits in scope
:javap <path|class>      disassemble a file or class name
:line <id>|<line>        place line(s) at the end of history
:load <path>             interpret lines in a file
:paste [-raw] [path]     enter paste mode or paste a file
:power                   enable power user mode
:quit                    exit the interpreter
:replay [options]        reset the repl and replay all previous commands
:require <path>          add a jar to the classpath
:reset [options]         reset the repl to its initial state, forgetting all session entries
:save <path>             save replayable session to a file
:sh <command line>       run a shell command (result is implicitly => List[String])
:settings <options>      update compiler options, if possible; see reset
:silent                  disable/enable automatic printing of results
:type [-v] <expr>        display the type of an expression without evaluating it
:kind [-v] <expr>        display the kind of expression's type
:warnings                show the suppressed warnings from the most recent line which had any

The relevant command here is :load, which interprets the lines in a file.

Cottonade answered 7/11, 2017 at 23:26 Comment(0)

Tested on both spark-shell version 1.6.3 and spark2-shell version 2.3.0.2.6.5.179-4: you can pipe directly to the shell's stdin, like

spark-shell <<< "1+1"

or in your use case,

spark-shell < file.spark
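As with :load and -i, the piped file is just ordinary spark-shell input. A hypothetical file.spark for this could be as small as the sketch below; no System.exit(0) is needed, since the shell terminates once stdin is exhausted:

// file.spark -- hypothetical contents, fed to the shell with: spark-shell < file.spark
val rdd = sc.parallelize(Seq("a", "b", "c"))
println(s"count = ${rdd.count()}")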
Meyers answered 22/11, 2019 at 11:50 Comment(1)
It works, but the output to stdout is basically a replay of everything you would see if you entered spark-shell and typed all the lines from the file yourself. – Portative

You can run it the way you run a shell script. This example runs from the command-line environment:

./bin/spark-shell is the path of your spark-shell under bin, and /home/fold1/spark_program.py is the path of your program file.

So:

./bin/spark-shell /home/fold1/spark_program.py
Philous answered 21/1, 2020 at 13:48 Comment(0)
