Join 2 JSON inputs linked by Primary Key
I am trying to merge 2 JSON inputs (in this example they come from files, but later they will come from Google Pub/Sub inputs), from these:

orderID.json:
    {"orderID":"test1","orderPacked":"Yes","orderSubmitted":"Yes","orderVerified":"Yes","stage":1}



combined.json:
    {"barcode":"95590","name":"Ash","quantity":6,"orderID":"test1"}
    {"barcode":"95591","name":"Beat","quantity":6,"orderID":"test1"}
    {"barcode":"95592","name":"Cat","quantity":6,"orderID":"test1"}
    {"barcode":"95593","name":"Dog","quantity":6,"orderID":"test2"}
    {"barcode":"95594","name":"Scar","quantity":6,"orderID":"test2"}

To something like this (using orderID as the unique and primary key):

output.json:
    {"orderID":"test1","orderPacked":"Yes","orderSubmitted":"Yes","orderVerified":"Yes","stage":1,"barcode":"95590","name":"Ash","quantity":6}
    {"orderID":"test1","orderPacked":"Yes","orderSubmitted":"Yes","orderVerified":"Yes","stage":1,"barcode":"95591","name":"Beat","quantity":6}
    {"orderID":"test1","orderPacked":"Yes","orderSubmitted":"Yes","orderVerified":"Yes","stage":1,"barcode":"95592","name":"Cat","quantity":6}

My code currently looks like this; it was adapted from join two json in Google Cloud Platform with dataflow:

from __future__ import absolute_import
import argparse
import apache_beam as beam
import json
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import StandardOptions
from google.api_core import datetime_helpers
from google.api_core.exceptions import InternalServerError
from google.api_core.exceptions import ServiceUnavailable
from google.api_core.exceptions import TooManyRequests
from google.cloud import bigquery

def run(argv=None):
    """Build and run the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--topic',
        type=str,
        help='Pub/Sub topic to read from')
    parser.add_argument(
        '--topic2',
        type=str,
        help='Pub/Sub topic to match with'
    )
    parser.add_argument(
        '--output',
        help=('Output local filename'))

    args, pipeline_args = parser.parse_known_args(argv)
    options = PipelineOptions(pipeline_args)
    options.view_as(SetupOptions).save_main_session = True
    options.view_as(StandardOptions).streaming = True

    p = beam.Pipeline(options=options)

    orderID = (p | 'read from text1' >> beam.io.ReadFromText('orderID.json') 
    #'Read from orderID PubSub' >> beam.io.ReadFromPubSub(topic=args.topic2)
                | 'Parse JSON to Dict' >> beam.Map(lambda e: json.loads(e))
                | 'key_orderID' >> beam.Map(lambda orders: (orders['orderID'], orders))
                )

    orders_si = beam.pvalue.AsDict(orderID) 

    orderDetails = (p | 'read from text' >> beam.io.ReadFromText('combined.json') 
                      | 'Parse JSON to Dict1' >> beam.Map(lambda e: json.loads(e)))
    #'Read from PubSub' >> beam.io.ReadFromPubSub(topic=args.topic))

    def join_orderID_orderDetails(order, order_dict):
        return order.update(order_dict[order['orderID']])

    joined_dicts = orderDetails | beam.Map(join_orderID_orderDetails, order_dict=orders_si)

    joined_dicts | beam.io.WriteToText('beam.output')

    p.run()
    #result.wait_until_finish()

if __name__ == '__main__':
    run()

But my output now in beam.output just shows:

None
None
None

Can someone point out what I am doing wrong here?

What makes this question different from the reported duplicate post:

  1. Why are my results "None"?
  2. What am I doing wrong here?
  3. I suspect the issues are one of these (see the quick standalone check below):

    • the "order" variable - is it correctly referenced in "join_orderID_orderDetails"?
    • "join_orderID_orderDetails" in "joined_dicts" - is it correctly referenced there too?
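
For reference, here is a quick standalone check of the join function (plain Python with hard-coded sample dicts, no Beam involved); it prints None, which matches what ends up in beam.output:

# Standalone reproduction of the join step; variable names are just for illustration.
order_detail = {"barcode": "95590", "name": "Ash", "quantity": 6, "orderID": "test1"}
orders_by_id = {"test1": {"orderID": "test1", "orderPacked": "Yes",
                          "orderSubmitted": "Yes", "orderVerified": "Yes", "stage": 1}}

def join_orderID_orderDetails(order, order_dict):
    return order.update(order_dict[order['orderID']])

print(join_orderID_orderDetails(order_detail, orders_by_id))  # prints: None
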
Damiandamiani asked 7/8, 2019 at 5:52
Possible duplicate of join two json in Google Cloud Platform with dataflow – Victoria
Hi @Victoria, as I mentioned in my question, I did adapt the code from your answer there, but I am still unable to get a working solution. I also asked some questions in the comment section of the topic you marked as a duplicate. – Damiandamiani

Try the code below; I hope this helps a little.

Here I have used arrays for your order and combined data, instead of reading from files.

order = [{"orderID":"test1","orderPacked":"Yes","orderSubmitted":"Yes","orderVerified":"Yes","stage":1}]

combined = [
   {"barcode":"95590","name":"Ash","quantity":6,"orderID":"test1"},
   {"barcode":"95591","name":"Beat","quantity":6,"orderID":"test1"},
   {"barcode":"95592","name":"Cat","quantity":6,"orderID":"test1"},
   {"barcode":"95593","name":"Dog","quantity":6,"orderID":"test2"},
   {"barcode":"95594","name":"Scar","quantity":6,"orderID":"test2"}
   ]


def joinjson(repl, tobeCombined):
    # Merge each detail dict with its matching order dict (if one exists).
    newarr = []
    for data in tobeCombined:
        replData = getOrderData(repl, data['orderID'])
        if replData is not None:
            data.update(replData)
        newarr.append(data)
    return newarr

def getOrderData(order, orderID):
    # Return the order dict whose orderID matches, or None if there is no match.
    for data in order:
        print("Data OrderID : ", data['orderID'])
        if data['orderID'] == orderID:
            return data

print(joinjson(order, combined))
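
If you only want the rows that actually have a matching order (as in the desired output.json, where the "test2" rows are gone), a small variant of the same idea can build a merged copy and skip unmatched rows. A rough sketch, reusing getOrderData, order and combined from above (the name joinjson_matched_only is just illustrative):

def joinjson_matched_only(orders, tobeCombined):
    # Keep only the detail rows that have a matching order; unmatched rows are dropped.
    newarr = []
    for data in tobeCombined:
        replData = getOrderData(orders, data['orderID'])
        if replData is None:
            continue
        merged = dict(data)  # copy so the input list stays untouched
        merged.update(replData)
        newarr.append(merged)
    return newarr

print(joinjson_matched_only(order, combined))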
Goodbye answered 7/8, 2019 at 6:46
Thanks @Shishir - I tried your example and it works on text, although not quite as I intended, since it still emits the "test2" rows, albeit without the order data appended. However, when I tried to apply it to the Pub/Sub / Google Cloud flow, the list returned by joinjson does not have wait_until_finish() support, unfortunately. – Damiandamiani
What I'm looking for is code that supports wait_until_finish() (using Apache Beam). Thanks for your example anyway; I learnt a bit more about Python. – Damiandamiani
I understand your point, but if this answer has helped you in any way, you can upvote it. Thanks in advance. – Goodbye
Yup, done! Please upvote my question for more visibility too :-) – Damiandamiani
Hello @ShishirNaresh, can you please look at the below and advise: #59427350 – Arundel
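
Since the comments above ask for a version that works inside an Apache Beam pipeline (so wait_until_finish() is available), here is a minimal, untested sketch that keeps the side-input approach from the question but still reads the local files; the key point is that the join function builds and returns a new dict rather than returning the result of dict.update(), which is None. For a streaming Pub/Sub source the side input would also need windowing, which is not shown here.

import json
import apache_beam as beam

def join_orderID_orderDetails(detail, orders_by_id):
    # Build and return a new merged dict; dict.update() itself returns None.
    merged = dict(detail)
    merged.update(orders_by_id.get(detail['orderID'], {}))
    return merged

# The "with" block runs the pipeline and waits for it to finish on exit.
with beam.Pipeline() as p:
    orders = (p
              | 'read orders' >> beam.io.ReadFromText('orderID.json')
              | 'parse orders' >> beam.Map(json.loads)
              | 'key by orderID' >> beam.Map(lambda o: (o['orderID'], o)))

    orders_si = beam.pvalue.AsDict(orders)

    details = (p
               | 'read details' >> beam.io.ReadFromText('combined.json')
               | 'parse details' >> beam.Map(json.loads))

    joined = (details
              | 'keep matched' >> beam.Filter(lambda d, o: d['orderID'] in o, o=orders_si)
              | 'join' >> beam.Map(join_orderID_orderDetails, orders_by_id=orders_si)
              | 'to json' >> beam.Map(json.dumps))

    joined | 'write' >> beam.io.WriteToText('beam.output')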
