Dataframe transpose with pyspark in Apache Spark
I have a dataframe df that has the following structure:

+-----+-----+-----+-------+
|  s  |col_1|col_2|col_...|
+-----+-----+-----+-------+
| f1  |  0.0|  0.6|  ...  |
| f2  |  0.6|  0.7|  ...  |
| f3  |  0.5|  0.9|  ...  |
|  ...|  ...|  ...|  ...  |

And I want to calculate the transpose of this dataframe so it will look like:

+-------+-----+-----+-------+------+
|  s    | f1  | f2  | f3    |   ...|
+-------+-----+-----+-------+------+
|col_1  |  0.0|  0.6|  0.5  |   ...|
|col_2  |  0.6|  0.7|  0.9  |   ...|
|col_...|  ...|  ...|  ...  |   ...|

I tried these two solutions, but each one fails with an error saying the DataFrame does not have the method being used:

Method 1:

for x in df.columns:
    df = df.pivot(x)

Method 2:

df = sc.parallelize([(k,) + tuple(v[0:]) for k, v in df.items()]).toDF()

How can I fix this?

Brechtel asked 27/9, 2017 at 16:38 Comment(1)
You could take a look here: #36216255 – Recumbent

If the data is small enough to be transposed (not pivoted with aggregation), you can just convert it to a Pandas DataFrame:

df = sc.parallelize([
    ("f1", 0.0, 0.6, 0.5),
    ("f2", 0.6, 0.7, 0.9)]).toDF(["s", "col_1", "col_2", "col_3"])

df.toPandas().set_index("s").transpose()
s       f1   f2
col_1  0.0  0.6
col_2  0.6  0.7
col_3  0.5  0.9

If it is too large for this, Spark won't help: a Spark DataFrame distributes data by row (although it uses columnar storage locally), so the size of an individual row is limited by local memory.

Feucht answered 27/9, 2017 at 16:57 Comment(3)
You might want to reset the index before converting it back to a Spark DataFrame, so you don't lose the column names into the rows. You can do this with reset_index, e.g.: df.toPandas().set_index("s").transpose().reset_index() – Fiftieth
How can we convert the result to a DataFrame again? – Impedance
@Impedance spark.createDataFrame(result) – Feucht
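
Putting those two comments together, a minimal round-trip sketch (this assumes an active SparkSession bound to the name spark, as in a standard PySpark shell):

# transpose in pandas, keep the former column names as a regular
# column via reset_index, then convert back to a Spark DataFrame
pdf_t = df.toPandas().set_index("s").transpose().reset_index()
df_t = spark.createDataFrame(pdf_t)

Note that after reset_index the former column names land in a column named "index", which you can rename if you want the original "s" label back.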

You can try Koalas by Databricks. Koalas is similar to Pandas, but made for distributed processing and available in PySpark (at least from 3.0.0).

kdf = df.to_koalas()     # Spark DataFrame -> Koalas DataFrame
kdf_t = kdf.transpose()  # pandas-style transpose, computed on Spark
df_T = kdf_t.to_spark()  # back to a Spark DataFrame

Edit: to use Koalas efficiently you need to define partitions; otherwise there can be serious performance degradation.
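
For reference, a hedged end-to-end sketch: this assumes the databricks.koalas package is installed (in Spark 3.2+ the same API ships as pyspark.pandas) and that the compute.max_rows option behaves as described in the Koalas documentation.

import databricks.koalas as ks  # in Spark 3.2+: import pyspark.pandas

# transpose() turns every row into a column, so Koalas caps it with the
# compute.max_rows option (1000 by default) to protect the driver;
# raise the cap only when the row count is known to be manageable
ks.set_option("compute.max_rows", 2000)

df_T = df.to_koalas().transpose().to_spark()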

Macguiness answered 22/12, 2021 at 11:54 Comment(0)
