I read that the Kryo serializer can provide faster serialization when used in Apache Spark. However, I'm using Spark through Python.
Do I still get notable benefits from switching to the Kryo serializer?
Kryo won't make a major impact on PySpark, because PySpark just stores data as byte[] objects, which are fast to serialize even with the default Java serializer. But it may be worth a try: just set the spark.serializer configuration and don't register any classes.
What might make more of an impact is storing your data as MEMORY_ONLY_SER and enabling spark.rdd.compress, which will compress your data.
In Java this can add some CPU overhead, but Python runs quite a bit slower, so it might not matter. It might also speed up computation by reducing GC or letting you cache more data.
Reference: Matei Zaharia's answer on the mailing list.
"You would just set the spark.serializer configuration" — do you just set an empty value? If there is a value to set, what might it be? – Toxin

conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"), per the documentation. Saikat doesn't yet have the reputation to leave this as a comment, so instead left it as an answer. Since that answer will likely get deleted, I'm reiterating their guidance here. – Entomo

"PySpark stores data as byte[]" — do you mean the bytes produced by the Python serializer, i.e. Pickle or Marshal? – Almonte

It all depends on what you mean when you say PySpark. In the last two years, PySpark development, like Spark development in general, has shifted from the low-level RDD API towards high-level APIs like DataFrame or ML.
These APIs are natively implemented on the JVM, and the Python code is mostly limited to a bunch of RPC calls executed on the driver. Everything else is pretty much the same code as is executed from Scala or Java, so it should benefit from Kryo in the same way as native applications.
I will argue that at the end of the day there is not much to lose when you use Kryo with PySpark and potentially something to gain when your application depends heavily on the "native" APIs.