Difference between beam.ParDo and beam.Map in the output type?

I am using Apache Beam to run some data transformations, which include extracting data from txt, csv, and other data sources. One thing I noticed is the difference in results when using beam.Map and beam.ParDo.

In the following sample, I am reading csv data. In the first case I pass it to a DoFn using beam.ParDo, which extracts the first element (the date), and then print it. In the second case I use beam.Map directly to do the same thing, and then print it.

import apache_beam as beam
from apache_beam.io import ReadFromText

class Printer(beam.DoFn):
    def process(self, data_item):
        print data_item

class DateExtractor(beam.DoFn):
    def process(self, data_item):
        return (str(data_item).split(','))[0]

data_from_source = (p
                    | 'ReadMyFile 01' >> ReadFromText('./input/data.csv')
                    | 'Splitter using beam.ParDo 01' >> beam.ParDo(DateExtractor())
                    | 'Printer the data 01' >> beam.ParDo(Printer())
                    )

copy_of_the_data =  (p
                    | 'ReadMyFile 02' >> ReadFromText('./input/data.csv')
                    | 'Splitter using beam.Map 02' >> beam.Map(lambda record: (record.split(','))[0])
                    | 'Printer the data 02' >> beam.ParDo(Printer())
                    )

What I noticed in the two outputs is the following:

##With beam.ParDo##
2
0
1
7
-
0
4
-
0
3
2
0
1
7

##With beam.Map##
2017-04-03
2017-04-03
2017-04-10
2017-04-10
2017-04-11
2017-04-12
2017-04-12

I find this strange. I wondered if the problem was in the printing function, but after using different transformations it shows the same results. For example, running:

| 'Group it 01' >> beam.Map(lambda record: (record, 1))

still shows the same issue:

##With beam.ParDo##
('8', 1)
('2', 1)
('0', 1)
('1', 1)

##With beam.Map##
(u'2017-04-08', 1)
(u'2017-04-08', 1)
(u'2017-04-09', 1)
(u'2017-04-09', 1)

Any idea what the reason is? What am I missing about the difference between beam.Map and beam.ParDo?

Hoff answered 24/12, 2018 at 11:35

Short Answer

You need to wrap the return value of a ParDo into a list.

Longer Version

ParDos in general can return any number of outputs for a single input, i.e. for a single input string you can emit zero, one, or many results. For this reason the Beam SDK treats the output of a ParDo not as a single element but as a collection of elements.
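For illustration, here is a minimal sketch (the SplitWords name and the word-splitting logic are invented for this example) of a DoFn that emits a variable number of elements per input:

import apache_beam as beam

class SplitWords(beam.DoFn):
    # Hypothetical DoFn: each input line may produce zero, one, or many
    # output elements, one per whitespace-separated word.
    def process(self, line):
        for word in line.split():
            yield word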

In your case the ParDo emits a single string instead of a collection. The Beam Python SDK still tries to interpret the output of that ParDo as if it were a collection of elements, and it does so by interpreting the string you emitted as a collection of characters. Because of that, your ParDo now effectively produces a stream of single characters, not a stream of strings.
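This is the same behavior you see when iterating over a string in plain Python:

list("2017-04-03")
# ['2', '0', '1', '7', '-', '0', '4', '-', '0', '3']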

What you need to do is wrap your return value into a list:

class DateExtractor(beam.DoFn):
    def process(self, data_item):
        return [(str(data_item).split(','))[0]]

Notice the square brackets. See the programming guide for more examples.
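As a side note (an equivalent alternative, not the only fix), process can also be written as a generator that yields the single value, since Beam iterates over whatever process produces:

class DateExtractor(beam.DoFn):
    def process(self, data_item):
        # Yielding emits elements one at a time; here exactly one per input line.
        yield data_item.split(',')[0]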

Map, on the other hand, can be thought of as a special case of ParDo. Map is expected to produce exactly one output for each input, so in this case you can just return a single value from the lambda and it works as expected.
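Conceptually (this is a rough sketch for intuition, not Beam's actual implementation), beam.Map(f) behaves like a ParDo whose DoFn wraps f's single return value in a one-element list:

def map_like_pardo(f):
    # Rough conceptual sketch only: emit exactly one element per input by
    # wrapping the single return value of f in a one-element list.
    class _MapFn(beam.DoFn):
        def process(self, element):
            return [f(element)]
    return beam.ParDo(_MapFn())

# Conceptually similar to: beam.Map(lambda record: record.split(',')[0])
splitter = map_like_pardo(lambda record: record.split(',')[0])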

And you probably don't need to wrap data_item in str: according to the docs, the ReadFromText transform produces strings.

Hurst answered 27/12, 2018 at 19:20
