PySpark: Deserializing an Avro serialized message contained in an eventhub capture avro file

Initial situation

AVRO serialized events are sent to an Azure Event Hub. These events are stored persistently using the Azure Event Hubs Capture feature. Captured data, along with event hub metadata, is written in Apache Avro format. The original events contained in the capture Avro file shall be analyzed using (py)Spark.


Question

How can an AVRO serialized event that is contained within a field / column of an AVRO file be deserialized using (py)Spark? (Note: the Avro schema of the event is not known by the reader application, but it is contained within the message as an Avro header.)


Background

The background is an analytical platform for an IoT scenario. Messages are provided by an IoT platform running on Kafka. To be more flexible with regard to schema changes, the strategic decision was to stick with the Avro format. To enable the use of Azure Stream Analytics (ASA), the Avro schema is included with each message (otherwise ASA is not able to deserialize the message).

capture file avro schema

The schema of the avro files generated by the event hub capture feature is as listed below:

{
    "type":"record",
    "name":"EventData",
    "namespace":"Microsoft.ServiceBus.Messaging",
    "fields":[
                 {"name":"SequenceNumber","type":"long"},
                 {"name":"Offset","type":"string"},
                 {"name":"EnqueuedTimeUtc","type":"string"},
                 {"name":"SystemProperties","type":{"type":"map","values":["long","double","string","bytes"]}},
                 {"name":"Properties","type":{"type":"map","values":["long","double","string","bytes"]}},
                 {"name":"Body","type":["null","bytes"]}
             ]
}

(note that the actual message is stored in the body field as bytes)

example event avro schema

For illustration, I sent events with the following Avro schema to the event hub:

{
    "type" : "record",
    "name" : "twitter_schema",
    "namespace" : "com.test.avro",
    "fields" : [ 
                {"name" : "username","type" : "string"}, 
                {"name" : "tweet","type" : "string"},
                {"name" : "timestamp","type" : "long"}
    ]
}

example event

{
    "username": "stackoverflow",
    "tweet": "please help deserialize me",
    "timestamp": 1366150681
}

example avro message payload

(shown here encoded as a string; note that the Avro schema is included)

Objavro.schema�{"type":"record","name":"twitter_schema","namespace":"com.test.avro","fields":[{"name":"username","type":"string"},{"name":"tweet","type":"string"},{"name":"timestamp","type":"long"}]}

In the end, this payload will be stored as bytes in the 'Body' field of the capture Avro file.
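To illustrate the format only: since the payload starts with the Avro object container magic ("Obj") and carries its own schema, a single Body value can in principle be read outside of Spark with a plain Python Avro reader. This is just a local sketch (it assumes the fastavro package and a hypothetical file holding one payload); the open question is how to do the same for the Body column within (py)Spark:

import io
from fastavro import reader

# hypothetical local copy of a single captured payload (the raw bytes of one 'Body' value)
with open("[pathToSingleBodyPayload]", "rb") as f:
    raw_payload = f.read()

# fastavro picks up the schema from the embedded header, no reader schema needed
for record in reader(io.BytesIO(raw_payload)):
    print(record)  # e.g. {'username': 'stackoverflow', 'tweet': 'please help deserialize me', 'timestamp': 1366150681}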



My current approach

For ease of use, testing, and debugging, I currently use a PySpark Jupyter notebook.

Config of Spark Session:

%%configure
{
    "conf": {
        "spark.jars.packages": "com.databricks:spark-avro_2.11:4.0.0"
    }
}
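(The %%configure magic is specific to the Jupyter/Livy setup on HDInsight. As an assumed equivalent when building the session yourself, the same package could be requested via the builder, provided this runs before the SparkContext is created:)

from pyspark.sql import SparkSession

# pull in the spark-avro package at session startup (only effective before the SparkContext exists)
spark = SparkSession.builder \
    .config("spark.jars.packages", "com.databricks:spark-avro_2.11:4.0.0") \
    .getOrCreate()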

Reading the Avro file into a DataFrame and showing the result:

capture_df = spark.read.format("com.databricks.spark.avro").load("[pathToCaptureAvroFile]")
capture_df.show()

result:

+--------------+------+--------------------+----------------+----------+--------------------+
|SequenceNumber|Offset|     EnqueuedTimeUtc|SystemProperties|Properties|                Body|
+--------------+------+--------------------+----------------+----------+--------------------+
|            71|  9936|11/4/2018 4:59:54 PM|           Map()|     Map()|[4F 62 6A 01 02 1...|
|            72| 10448|11/4/2018 5:00:01 PM|           Map()|     Map()|[4F 62 6A 01 02 1...|

Getting the content of the Body field and casting it to a string:

msgRdd = capture_df.select(capture_df.Body.cast("string")).rdd.map(lambda x: x[0])

That's as far as I have gotten the code working. I have spent a lot of time trying to deserialize the actual message, but without success. I would appreciate any help!

Some additional info: Spark is running on a Microsoft Azure HDInsight 3.6 cluster. The Spark version is 2.2 and the Python version is 2.7.12.

Supermarket answered 7/11, 2018 at 21:6 Comment(0)

What you want to do is apply .decode('utf-8') to each element in the Body column. You have to create a UDF from decode so that you can apply it. The UDF can be created with:

from pyspark.sql import functions as f

decodeElements = f.udf(lambda a: a.decode('utf-8'))

Here is a complete example for parsing avro files stored by the IoT Hub to a custom Blob Storage endpoint:

storage_account_name = "<YOUR STORAGE ACCOUNT NAME>"
storage_account_access_key = "<YOUR STORAGE ACCOUNT KEY>"

# Read all files from one day. All PartitionIds are included. 
file_location = "wasbs://<CONTAINER>@"+storage_account_name+".blob.core.windows.net/<IoT Hub Name>/*/2018/11/30/*/*"
file_type = "avro"

# Read raw data
spark.conf.set(
  "fs.azure.account.key."+storage_account_name+".blob.core.windows.net",
  storage_account_access_key)

reader = spark.read.format(file_type).option("inferSchema", "true")
raw = reader.load(file_location)

# Decode Body into strings
from pyspark.sql import functions as f

decodeElements = f.udf(lambda a: a.decode('utf-8'))

jsons = raw.select(
    raw['EnqueuedTimeUtc'],
    raw['SystemProperties.connectionDeviceId'].alias('DeviceId'), 
    decodeElements(raw['Body']).alias("Json")
)

# Parse Json data
from pyspark.sql.functions import from_json

json_schema = spark.read.json(jsons.rdd.map(lambda row: row.Json)).schema
data = jsons.withColumn('Parsed', from_json('Json', json_schema)).drop('Json')
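A possible follow-up, assuming the data DataFrame from above, is to flatten the parsed struct into top-level columns for inspection:

# expand the parsed JSON struct into individual columns
data.select('EnqueuedTimeUtc', 'DeviceId', 'Parsed.*').show(truncate=False)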

Disclaimer: I am new to both Python and Databricks, and my solution is probably less than perfect. But I spent more than a day getting this to work, and I hope it can be a good starting point for someone.

Nephoscope answered 4/12, 2018 at 8:56 Comment(4)
This answer explains how to read Avro data from Azure Blob Storage (or HDFS). The question asked is how to turn an RDD[Byte[]], where the bytes are an Avro payload, into a DataFrame. For instance, with an RDD[String] where the strings are JSON you can do spark.read.json(rdd) in pyspark, but for Avro, spark.read.format('avro').load(rdd) doesn't work, because read.load() only accepts a path whereas read.json() accepts an RDD[String]. If I find a solution I'll post it, but so far I have none...Mithridate
No. The question is explicitly about event "stored [...] using azure event hubs capture feature". However, this may not be your specific scenario. Thanks for the down vote.Nephoscope
You're right, I misread... I'm so sorry for the downvote! I tried removing it, but it won't let me unless you edit the answer. However, I still don't think your answer works for the use case described in the question. I can confirm that it works if the captured message (the one in the Body field) is JSON, but in this question the captured message seems to be written in Avro too, so we get an Avro body inside an Avro capture file. I still haven't found a good solution for that. It looks like Spark 2.4 has introduced a from_avro function that could help, but I didn't test it.Mithridate
I know it is a Python question, but today I needed to figure it out in Scala. One way was from_json using gist.github.com/geoHeil/b1be2ec09f9c5e9a3b887073fe8bf004; obviously, spark.apache.org/docs/latest/… would be better, but I did not have a JSON schema at hand right now for .select(from_avro('value, jsonFormatSchema)Monarda

I suppose you could also do something like:

jsonRdd = raw.select(raw.Body.cast("string"))
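If the Body payload is plain JSON (not the Avro-in-Avro case from the question), the decoded column could then be handed to the JSON reader, which infers the schema. A sketch, assuming the jsonRdd DataFrame from above (despite its name, it is a DataFrame with a single string column):

# infer the JSON schema from the decoded strings and parse them into columns
parsed = spark.read.json(jsonRdd.rdd.map(lambda row: row[0]))
parsed.show()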
Denudate answered 7/12, 2018 at 20:48 Comment(0)

I had the same issue.

Upgrading to Spark 2.4 solved the issue for me.

You find the documentation here: https://databricks.com/blog/2018/11/30/apache-avro-as-a-built-in-data-source-in-apache-spark-2-4.html

Remark: you need to know what your AVRO file looks like in order to create your schema (in the linked example they just load it).

The downside: it's currently only available in Scala and Java. As far as I know, it's not possible in Python yet.
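That said, newer releases (PySpark 3.0 and later, to my knowledge) do expose from_avro to Python via pyspark.sql.avro.functions. Note that it decodes plain Avro binary against a schema string you supply; as far as I know it does not read a whole object container file with an embedded header, so the capture Body from the question may still need extra handling. A rough sketch, with a hypothetical schema file path:

from pyspark.sql.avro.functions import from_avro  # requires the external spark-avro package on the classpath

# writer schema as a JSON string; it must be known up front, from_avro does not read it from the data
with open("/path/to/twitter_schema.avsc") as f:
    json_format_schema = f.read()

decoded = capture_df.select(from_avro(capture_df.Body, json_format_schema).alias("event"))
decoded.select("event.*").show()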

Repertory answered 19/4, 2019 at 12:26 Comment(0)
