How do I convert an RDD with a SparseVector Column to a DataFrame with a column as Vector

I have an RDD of (String, SparseVector) tuples and I want to create a DataFrame from it, so that I get a (label: string, features: vector) DataFrame, which is the schema required by most of the ml algorithm libraries. I know it can be done, because HashingTF from the ml library produces a features column of type vector when it transforms a DataFrame:

# assuming temp_rdd is an RDD of (double, array<string>) tuples
from pyspark.sql.types import (StructType, StructField,
                               DoubleType, ArrayType, StringType)

temp_df = sqlContext.createDataFrame(temp_rdd, StructType([
        StructField("label", DoubleType(), False),
        StructField("tokens", ArrayType(StringType()), False)
    ]))

from pyspark.ml.feature import HashingTF

hashingTF = HashingTF(numFeatures=COMBINATIONS, inputCol="tokens", outputCol="features")

ndf = hashingTF.transform(temp_df)
ndf.printSchema()

#outputs 
#root
#|-- label: double (nullable = false)
#|-- tokens: array (nullable = false)
#|    |-- element: string (containsNull = true)
#|-- features: vector (nullable = true)

So my question is: given an RDD of (String, SparseVector), can I somehow convert it to a DataFrame of (String, vector)? I tried the usual sqlContext.createDataFrame, but there is no DataType that fits my needs:

df = sqlContext.createDataFrame(rdd, StructType([
        StructField("label", StringType(), True),
        StructField("features", ?Type(), True)
    ]))
Pagas asked 23/9, 2015 at 16:47

You have to use VectorUDT here:

# In Spark 1.x:
# from pyspark.mllib.linalg import SparseVector, VectorUDT
from pyspark.ml.linalg import SparseVector, VectorUDT
from pyspark.sql.types import StructType, StructField, DoubleType

temp_rdd = sc.parallelize([
    (0.0, SparseVector(4, {1: 1.0, 3: 5.5})),
    (1.0, SparseVector(4, {0: -1.0, 2: 0.5}))])

schema = StructType([
    StructField("label", DoubleType(), True),
    StructField("features", VectorUDT(), True)
])

temp_rdd.toDF(schema).printSchema()

## root
##  |-- label: double (nullable = true)
##  |-- features: vector (nullable = true)
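
For what it's worth, the explicit schema is optional here: ml vectors carry their own UDT, so PySpark can usually infer VectorUDT on its own. A minimal sketch, assuming Spark 2.x and the same temp_rdd as above:

# schema inference picks up VectorUDT automatically from the ml vectors
inferred_df = temp_rdd.toDF(["label", "features"])
inferred_df.printSchema()

## root
##  |-- label: double (nullable = true)
##  |-- features: vector (nullable = true)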

Just for completeness, the Scala equivalent:

import org.apache.spark.sql.Row
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.{DoubleType, StructType}
// In Spark 1.x:
// import org.apache.spark.mllib.linalg.{Vectors, VectorUDT}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.linalg.SQLDataTypes.VectorType

val schema = new StructType()
  .add("label", DoubleType)
  // In Spark 1.x:
  // .add("features", new VectorUDT())
  .add("features", VectorType)

val temp_rdd: RDD[Row]  = sc.parallelize(Seq(
  Row(0.0, Vectors.sparse(4, Seq((1, 1.0), (3, 5.5)))),
  Row(1.0, Vectors.sparse(4, Seq((0, -1.0), (2, 0.5))))
))

spark.createDataFrame(temp_rdd, schema).printSchema

// root
// |-- label: double (nullable = true)
// |-- features: vector (nullable = true)
Allege answered 23/9, 2015 at 17:33
Wow, I was looking for this for ages! Almost crying with happiness :,) +1 (Lewin)
This worked, thank you very much! Can you tell me where that is in the documentation? I can't find any VectorUDT in the linalg Apache Spark docs. (Pagas)
@OrangelMarquez maybe a pull request is required. (Lewin)
I don't know about the docs, but the Spark source is a useful resource: github.com/apache/spark/blob/master/examples/src/main/python/ml/… (Allege)

While @zero323's answer (https://mcmap.net/q/856946/-how-do-i-convert-an-rdd-with-a-sparsevector-column-to-a-dataframe-with-a-column-as-vector) makes sense, and I wish it worked for me, the RDD underlying the DataFrame created with sqlContext.createDataFrame(temp_rdd, schema) still contained SparseVector types. I had to do the following to convert them to DenseVector types; if someone has a shorter/better way, I want to know.

from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, DoubleType
from pyspark.ml.linalg import SparseVector, DenseVector, VectorUDT

temp_rdd = sc.parallelize([
    (0.0, SparseVector(4, {1: 1.0, 3: 5.5})),
    (1.0, SparseVector(4, {0: -1.0, 2: 0.5}))])

schema = StructType([
    StructField("label", DoubleType(), True),
    StructField("features", VectorUDT(), True)
])

df_w_ftr = temp_rdd.toDF(schema)
df_w_ftr.printSchema()

print('original conversion method: ', df_w_ftr.take(5))
print('\n')

# convert each SparseVector to a DenseVector via toArray()
temp_rdd_dense = temp_rdd.map(
    lambda x: Row(label=x[0], features=DenseVector(x[1].toArray())))
print(type(temp_rdd_dense), type(temp_rdd))
print('using map and toArray:', temp_rdd_dense.take(5))

temp_rdd_dense.toDF().show()

root
 |-- label: double (nullable = true)
 |-- features: vector (nullable = true)

original conversion method:  [Row(label=0.0, features=SparseVector(4, {1: 1.0, 3: 5.5})), Row(label=1.0, features=SparseVector(4, {0: -1.0, 2: 0.5}))]


<class 'pyspark.rdd.PipelinedRDD'> <class 'pyspark.rdd.RDD'>
using map and toArray: [Row(features=DenseVector([0.0, 1.0, 0.0, 5.5]), label=0.0), Row(features=DenseVector([-1.0, 0.0, 0.5, 0.0]), label=1.0)]

+------------------+-----+
|          features|label|
+------------------+-----+
| [0.0,1.0,0.0,5.5]|  0.0|
|[-1.0,0.0,0.5,0.0]|  1.0|
+------------------+-----+
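
On the request for a shorter way: one possible sketch, assuming Spark 2.x, is to densify inside the DataFrame with a UDF instead of round-tripping through the RDD. The to_dense name below is a hypothetical helper, not a built-in:

from pyspark.sql.functions import udf
from pyspark.ml.linalg import DenseVector, VectorUDT

# hypothetical helper: toArray() turns each SparseVector into a dense one
to_dense = udf(lambda v: DenseVector(v.toArray()), VectorUDT())

df_dense = df_w_ftr.withColumn("features", to_dense("features"))
df_dense.show()
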
Karlakarlan answered 16/1, 2016 at 6:45

This is an example in Scala for Spark 2.1:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.DataFrame

// sparkSession is assumed to be an existing SparkSession in scope
def featuresRDD2DataFrame(features: RDD[Vector]): DataFrame = {
  import sparkSession.implicits._
  val rdd: RDD[(Double, Vector)] = features.map(x => (0.0, x))
  val df = rdd.toDF("label", "features").select("features")
  df
}

toDF() was not recognized by the compiler directly on the features RDD (an RDD[Vector] is not a Product, so no implicit encoder is available for it), which is why each vector is first wrapped in a (Double, Vector) tuple.

Terpsichorean answered 6/11, 2017 at 8:58
