Initial situation
Avro-serialized events are sent to an Azure Event Hub. These events are stored persistently using the Azure Event Hubs Capture feature. The captured data, along with Event Hub metadata, is written in Apache Avro format. The original events contained in the capture Avro file shall be analyzed using (py)Spark.
Question
How can an Avro-serialized event that is contained within a field/column of an Avro file be deserialized using (py)Spark? (Annotation: the Avro schema of the event is not known to the reader application, but it is contained within the message as an Avro header.)
Background
The background is an analytical platform for an IoT scenario. Messages are provided by an IoT platform running on Kafka. To be more flexible with schema changes, the strategic decision was to stick with the Avro format. To enable the use of Azure Stream Analytics (ASA), the Avro schema is shipped with each message (otherwise ASA is not able to deserialize the message).
Capture file Avro schema
The schema of the Avro files generated by the Event Hubs Capture feature is listed below:
{
  "type": "record",
  "name": "EventData",
  "namespace": "Microsoft.ServiceBus.Messaging",
  "fields": [
    {"name": "SequenceNumber", "type": "long"},
    {"name": "Offset", "type": "string"},
    {"name": "EnqueuedTimeUtc", "type": "string"},
    {"name": "SystemProperties", "type": {"type": "map", "values": ["long", "double", "string", "bytes"]}},
    {"name": "Properties", "type": {"type": "map", "values": ["long", "double", "string", "bytes"]}},
    {"name": "Body", "type": ["null", "bytes"]}
  ]
}
(Note that the actual message is stored in the Body field as bytes.)
Example event Avro schema
For illustration, I sent events with the following Avro schema to the Event Hub:
{
  "type": "record",
  "name": "twitter_schema",
  "namespace": "com.test.avro",
  "fields": [
    {"name": "username", "type": "string"},
    {"name": "tweet", "type": "string"},
    {"name": "timestamp", "type": "long"}
  ]
}
Example event
{
  "username": "stackoverflow",
  "tweet": "please help deserialize me",
  "timestamp": 1366150681
}
Example Avro message payload
(rendered as a string; note that the Avro schema is included at the start)
Objavro.schema�{"type":"record","name":"twitter_schema","namespace":"com.test.avro","fields":[{"name":"username","type":"string"},{"name":"tweet","type":"string"},{"name":"timestamp","type":"long"}]}
In the end, this payload is stored as bytes in the 'Body' field of the capture Avro file.
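For reference, a payload of that shape could be produced with the standard avro Python package's DataFileWriter, which automatically writes the schema header in front of the data. This is only an illustrative sketch, not the actual producer code of the IoT platform:

import io
import json
import avro.schema
import avro.datafile
import avro.io

# The example event schema from above.
schema = avro.schema.parse(json.dumps({
    "type": "record",
    "name": "twitter_schema",
    "namespace": "com.test.avro",
    "fields": [
        {"name": "username", "type": "string"},
        {"name": "tweet", "type": "string"},
        {"name": "timestamp", "type": "long"}
    ]
}))

buf = io.BytesIO()
writer = avro.datafile.DataFileWriter(buf, avro.io.DatumWriter(), schema)
writer.append({"username": "stackoverflow",
               "tweet": "please help deserialize me",
               "timestamp": 1366150681})
writer.flush()

# payload now starts with the "Obj" magic bytes followed by the embedded
# schema, just like the Body contents shown above.
payload = buf.getvalue()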
My current approach
For ease of use, testing, and debugging, I currently use a PySpark Jupyter notebook.
Config of Spark Session:
%%configure
{
  "conf": {
    "spark.jars.packages": "com.databricks:spark-avro_2.11:4.0.0"
  }
}
Reading the Avro file into a DataFrame and showing the result:
capture_df = spark.read.format("com.databricks.spark.avro").load("[pathToCaptureAvroFile]")
capture_df.show()
result:
+--------------+------+--------------------+----------------+----------+--------------------+
|SequenceNumber|Offset| EnqueuedTimeUtc|SystemProperties|Properties| Body|
+--------------+------+--------------------+----------------+----------+--------------------+
| 71| 9936|11/4/2018 4:59:54 PM| Map()| Map()|[4F 62 6A 01 02 1...|
| 72| 10448|11/4/2018 5:00:01 PM| Map()| Map()|[4F 62 6A 01 02 1...|
Getting the content of the Body field and casting it to a string:
msgRdd = capture_df.select(capture_df.Body.cast("string")).rdd.map(lambda x: x[0])
That's how far I got the code working. I spent a lot of time trying to deserialize the actual message, but without success. I would appreciate any help!
Some additional info: Spark is running on a Microsoft Azure HDInsight 3.6 cluster. The Spark version is 2.2 and the Python version is 2.7.12.
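For illustration, the rough shape of what I am trying to achieve looks like the sketch below. It assumes that each Body value is a complete Avro object container file (it starts with the "Obj" magic bytes and carries its writer schema in the header) and that the avro Python package is available on the worker nodes:

import io
import avro.datafile
import avro.io

def decode_body(body_bytes):
    # Body is a nullable bytes field in the capture schema.
    if body_bytes is None:
        return []
    # Read the payload as an Avro object container file; the writer
    # schema is taken from the header, so the reader needs no schema.
    buf = io.BytesIO(bytes(body_bytes))
    reader = avro.datafile.DataFileReader(buf, avro.io.DatumReader())
    records = list(reader)
    reader.close()
    return records

# Keep the raw bytes (no cast to string) so the binary payload is not
# altered by a character set conversion.
events_rdd = capture_df.select("Body").rdd.flatMap(lambda row: decode_body(row.Body))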
Comment: How do you convert an RDD[Byte[]], where the bytes are an Avro payload, into a DataFrame? For instance, with an RDD[String] where the strings are JSON you can call spark.read.json(rdd) in PySpark, but for Avro, spark.read.format('avro').load(rdd) doesn't work, because read.load() only accepts a path whereas read.json() also accepts an RDD[String]. If I find a solution I'll post it, but so far I have none... – Mithridate
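Regarding the comment above: since read.load() only accepts a path, one workaround might be to decode the payloads into Python dictionaries first (as in the sketch further up) and then build the DataFrame from that RDD, for example (assuming events_rdd holds the decoded dictionaries):

from pyspark.sql import Row

# Convert each decoded dictionary into a Row and let Spark infer the
# schema; an explicit schema could also be passed to createDataFrame.
events_df = events_rdd.map(lambda rec: Row(**rec)).toDF()
events_df.show()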