pyspark : NameError: name 'spark' is not defined

I am copying the pyspark.ml example from the official documentation: http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.Transformer

from pyspark.ml.linalg import Vectors
from pyspark.ml.clustering import KMeans

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),), (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)

However, the example above wouldn't run and gave me the following error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-28-aaffcd1239c9> in <module>()
      1 from pyspark import *
      2 data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
----> 3 df = spark.createDataFrame(data, ["features"])
      4 kmeans = KMeans(k=2, seed=1)
      5 model = kmeans.fit(df)

NameError: name 'spark' is not defined

What additional configuration/variable needs to be set to get the example running?

Denicedenie asked 16/9, 2016 at 23:5 Comment(1)
change to sqlContext works. thanks! – Denicedenie

Since you are calling createDataFrame(), you need to do this:

df = sqlContext.createDataFrame(data, ["features"])

instead of this:

df = spark.createDataFrame(data, ["features"])

Here, spark plays the role of the sqlContext.


In general, some people have that as sc, so if that didn't work, you could try:

df = sc.createDataFrame(data, ["features"])
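
If sqlContext itself is not defined in your session, a minimal sketch of creating one (this assumes the pre-SparkSession SQLContext entry point, which later releases still accept):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()   # reuse a running context, or start a local one
sqlContext = SQLContext(sc)       # provides the createDataFrame() used above
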
Sula answered 16/9, 2016 at 23:12 Comment(2)
If I use sc, it doesn't work. But if I use sqlContext, it works. Is this expected? – Denicedenie
Yes @Edamame, it all depends on how you import stuff.. :) – Sula

You can add

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)

to the beginning of your code to define a SparkSession; then spark.createDataFrame() should work.
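
Put together with the question's snippet, a minimal sketch (the pyspark.ml imports are assumed to match the linked example):

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.linalg import Vectors      # assumed import, as in the linked docs
from pyspark.ml.clustering import KMeans   # assumed import, as in the linked docs

sc = SparkContext('local')
spark = SparkSession(sc)

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),), (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])  # works now that spark is defined
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)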

Darkling answered 5/4, 2017 at 12:39 Comment(0)

The answer by 率怀一 is good and will work the first time. But the second time you try it, it will throw the following exception:

ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=pyspark-shell, master=local) created by __init__ at <ipython-input-3-786525f7559f>:10 

There are two ways to avoid it.

1) Using SparkContext.getOrCreate() instead of SparkContext():

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)

2) Calling sc.stop() at the end, or before you start another SparkContext, as sketched below.
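
A minimal sketch of the second option, assuming the same local setup as above:

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

sc = SparkContext('local')
spark = SparkSession(sc)
# ... do your work with spark ...
sc.stop()  # releases the context so a new SparkContext() can be created later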

Duke answered 30/3, 2019 at 21:51 Comment(0)

If it gives you an error about another open session, do this:

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
scraped_data = spark.read.json("/Users/reihaneh/Desktop/nov3_final_tst1/")
Schist answered 4/11, 2021 at 21:34 Comment(0)

If you are using Python, you can import spark as follows; this will create a Spark session for you. Bear in mind that this is an older method, but it still works.

from pyspark.shell import spark
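
For instance, a quick sketch (assuming a local installation; the import starts a Spark session as a side effect and prints the usual shell banner):

from pyspark.shell import spark  # also defines sc, just like the interactive pyspark shell

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()
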
Fescennine answered 15/1, 2022 at 21:41 Comment(0)

The situation may be different now, but the following

from pyspark.sql import SparkSession
..
spark = SparkSession(sc)

works.

Lonilonier answered 30/11, 2023 at 8:47 Comment(1)
This question has already been adequately answered and you must use a code block in your reply. – Assiduity

spark is a variable that usually denotes the Spark session. If the variable is not defined, you can instantiate one:

from pyspark.sql import SparkSession
spark = SparkSession.builder \
                    .appName('My PySpark App') \
                    .getOrCreate()

Alternatively, you can use the pyspark shell where spark (the Spark session) as well as sc (the Spark context) are predefined (see also NameError: name 'spark' is not defined, how to solve?).

Spokeswoman answered 22/4 at 18:45 Comment(0)
