Using Dataflow vs. Cloud Composer [closed]

I'd like to get some clarification on whether Cloud Dataflow or Cloud Composer is the right tool for the job; it wasn't clear to me from the Google documentation.

Currently, I'm using Cloud Dataflow to read a non-standard CSV file, do some basic processing, and load it into BigQuery.

Let me give a very basic example:

# file.csv
type\x01date
house\x0112/27/1982
car\x0111/9/1889

From this file we detect the schema and create a BigQuery table, something like this:

`table`
type (STRING)
date (DATE)

And we also format our data (in Python) to insert into BigQuery:

DATA = [
    ("house", "1982-12-27"),
    ("car", "1889-9-11")
]

This is a vast simplification of what's going on, but this is how we're currently using Cloud Dataflow.
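
For concreteness, here is a minimal sketch of the kind of Beam pipeline this describes (the bucket, table name and delimiter handling below are illustrative, not our actual code):

# sketch.py -- hypothetical simplification of our Dataflow (Apache Beam) job
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line):
    # Split on the \x01 delimiter and normalize MM/DD/YYYY to YYYY-MM-DD.
    kind, date = line.split("\x01")
    month, day, year = date.split("/")
    return {"type": kind, "date": f"{year}-{int(month):02d}-{int(day):02d}"}

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/file.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.my_table",
            schema="type:STRING,date:DATE",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )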

My question then is, where does Cloud Composer come into the picture? What additional features could it provide on the above? In other words, why would it be used "on top of" Cloud Dataflow?

Gunslinger answered 11/1, 2019 at 22:20 Comment(1)
Two different tools that solve different problems. Dataflow allows you to build scalable data processing pipelines (batch & stream). Composer is used to schedule, orchestrate and manage data pipelines. – Fidgety

Cloud Composer (which is backed by Apache Airflow) is designed for small-scale task scheduling.

Here is an example to help you understand:

Say you have a CSV file in GCS and, using your example, you use Cloud Dataflow to process it and insert the formatted data into BigQuery. If this is a one-off thing, you have just finished it and it's perfect.

Now let's say your CSV file is overwritten at 01:00 UTC every day, and you want to run the same Dataflow job to process it every time it's overwritten. If you don't want to manually run the job at exactly 01:00 UTC every day, weekends and holidays included, you need something to run the job for you periodically (in our example, at 01:00 UTC every day). Cloud Composer can help you in this case. You give Cloud Composer a config that describes which jobs to run (operators), when to start running them (a start time), and how frequently to run them (daily, weekly or even yearly).
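
To make this concrete, a minimal sketch of such a config (an Airflow DAG) might look like the following; the template path, project and region are placeholders, and the operator is the one shipped in the apache-airflow-providers-google package:

# Hypothetical DAG: run an existing Dataflow template every day at 01:00 UTC.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

with DAG(
    dag_id="daily_csv_to_bigquery",
    start_date=datetime(2019, 1, 1),
    schedule_interval="0 1 * * *",  # every day at 01:00 UTC
    catchup=False,
) as dag:
    run_dataflow = DataflowTemplatedJobStartOperator(
        task_id="run_csv_to_bq_dataflow_job",
        template="gs://my-bucket/templates/csv_to_bq",  # placeholder template path
        project_id="my-project",                        # placeholder project
        location="us-central1",
        parameters={"input": "gs://my-bucket/file.csv"},
    )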

That already seems cool. However, what if the CSV file is overwritten not at 01:00 UTC but at some arbitrary time during the day? How would you pick the daily run time then? Cloud Composer provides sensors, which can monitor a condition (in this case, the CSV file's modification time), and it can guarantee that the job only kicks off once that condition is satisfied.
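
For example, the Google provider for Airflow ships GCS sensors. A sketch, assuming it is added inside the DAG from the previous snippet and that the bucket/object names are placeholders, could look like this:

# Hypothetical: wait until file.csv has been modified after this run's logical
# date before kicking off the Dataflow task defined above.
from airflow.providers.google.cloud.sensors.gcs import GCSObjectUpdateSensor

wait_for_new_csv = GCSObjectUpdateSensor(
    task_id="wait_for_new_csv",
    bucket="my-bucket",       # placeholder bucket
    object="file.csv",        # placeholder object name
    poke_interval=300,        # re-check every 5 minutes
    timeout=60 * 60 * 12,     # give up after 12 hours
)

wait_for_new_csv >> run_dataflow  # Dataflow only starts once the file has changed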

There are a lot more features that Cloud Composer/Apache Airflow provides, including DAGs that chain multiple jobs together, retries for failed tasks, failure notifications and a nice dashboard. You can learn more from their documentation.
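
As a small illustration, retries and failure notifications are just settings you pass to the DAG; the values below are arbitrary:

# Hypothetical default_args showing retry and failure-notification settings.
from datetime import timedelta

default_args = {
    "retries": 3,                          # re-run a failed task up to 3 times
    "retry_delay": timedelta(minutes=10),  # wait 10 minutes between attempts
    "email": ["data-team@example.com"],    # placeholder address
    "email_on_failure": True,              # notify when a task ultimately fails
}

# Passed as DAG(..., default_args=default_args), these apply to every task in the DAG.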

Merchant answered 11/1, 2019 at 22:58 Comment(1)
I wouldn't agree with "small" scale. They say one may create workflows of arbitrary complexity. Also, as part of a workflow one may query terabytes of data in BigQuery and run jobs on huge Dataproc/Dataflow clusters, which doesn't sound like "small scale". – Ames

For the basics of your described task, Cloud Dataflow is a good choice. Big data that can be processed in parallel is a natural fit for Cloud Dataflow.

The real world of processing big data is usually messy. Data is usually somewhat to very dirty, arrives constantly or in big batches, and needs to be processed in time-sensitive ways. Usually it takes the coordination of more than one task or system to extract the desired data. Think of load, transform, merge, extract and store types of tasks. Big data processing is often glued together with shell scripts and/or Python programs. This makes automation, management, scheduling and control difficult.

Google Cloud Composer is a big step up from Cloud Dataflow. Cloud Composer is a cross-platform orchestration tool that supports AWS, Azure and GCP (and more) with management, scheduling and processing abilities.

Cloud Dataflow handles tasks. Cloud Composer manages entire processes coordinating tasks that may involve BigQuery, Dataflow, Dataproc, Storage, on-premises, etc.
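
As a hedged sketch of what that coordination can look like (all resource names below are placeholders), an Airflow DAG in Cloud Composer can chain a Dataflow job with a follow-up BigQuery query:

# Hypothetical DAG: a Dataflow load followed by a BigQuery aggregation step.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

with DAG(
    dag_id="load_then_aggregate",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_raw_data = DataflowTemplatedJobStartOperator(
        task_id="load_raw_data",
        template="gs://my-bucket/templates/csv_to_bq",  # placeholder template
        location="us-central1",
    )

    aggregate = BigQueryInsertJobOperator(
        task_id="aggregate_daily_counts",
        configuration={
            "query": {
                "query": "SELECT type, COUNT(*) AS n "
                         "FROM `my_dataset.my_table` GROUP BY type",
                "useLegacySql": False,
            }
        },
    )

    load_raw_data >> aggregate  # BigQuery step runs only after the Dataflow job succeeds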

My question then is, where does Cloud Composer come into the picture? What additional features could it provide on the above? In other words, why would it be used "on top of" Cloud Dataflow?

If you need more management, control and scheduling of your big data tasks, then Cloud Composer adds significant value. If you are just running a simple Cloud Dataflow task on demand once in a while, Cloud Composer might be overkill.

Conductive answered 11/1, 2019 at 22:55 Comment(0)

Cloud Composer = Apache Airflow: designed for task scheduling.

Cloud Dataflow = Apache Beam: handles the data processing tasks themselves.

For me, Cloud Composer is a (big) step up from Dataflow. If I had a single task, say processing my CSV file from Storage to BQ, I would/could use Dataflow. But if I wanted to run the same job daily, I would use Composer.

Sudor answered 7/7, 2021 at 18:28 Comment(0)

Also consider costs. Cloud Composer has a standing cost of $500 per month (Composer 2) or $250 per month (Composer 1), whereas Dataflow has no standing cost; you only pay while a job runs.

Dataflow has also been extended to include "batch data pipelines", which give you improved means for scheduling and monitoring and make it a viable option for simple data transformation jobs.

You don't have an orchestra when you only have two trumpets and a trombone... I would only consider using Cloud Composer/Airflow if you have hundreds of jobs to manage, or a few jobs with many interconnected steps.

(There are also other options to consider, for example Cloud Data Fusion, Cloud Workflows, Datastream, Cloud Functions, Cloud Run, App Engine.)

Muslin answered 26/4, 2023 at 12:34 Comment(0)
