PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

Background:

I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP.

I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 minutes) and feed it into a data warehouse.

How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4,000 bytes per row. The OLTP side is active 24x5.5.

Best Solution?

From what I can piece together the most practical solution is as follows:

  • Create a TRIGGER to write all DML activity to a rotating CSV log file (a sketch follows below this list)
  • Perform whatever transformations are required
  • Use the native DW data pump tool to efficiently pump the transformed CSV into the DW
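
A minimal sketch of the trigger idea, with hypothetical names (an orders table and an etl_capture table). Since a plpgsql trigger cannot write files directly, an intermediate capture table stands in for the rotating CSV file; the 15-minute job would flush it with COPY ... TO ... WITH CSV and then truncate it:

    -- Hypothetical capture table; flushed to CSV and truncated by the
    -- periodic extraction job.
    CREATE TABLE etl_capture (
        captured_at timestamp NOT NULL DEFAULT now(),
        op          char(1)   NOT NULL,   -- 'I', 'U' or 'D'
        row_data    text      NOT NULL    -- whole row rendered as text
    );

    CREATE OR REPLACE FUNCTION capture_dml() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO etl_capture (op, row_data) VALUES ('D', OLD::text);
            RETURN OLD;
        ELSE
            -- substr(TG_OP, 1, 1) maps INSERT/UPDATE to 'I'/'U'
            INSERT INTO etl_capture (op, row_data)
            VALUES (substr(TG_OP, 1, 1), NEW::text);
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_capture
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE capture_dml();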

Why this approach?

  • TRIGGERs allow selective tables to be targeted rather than being system wide, the output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable
  • CSV is easy and fast to transform
  • It is easy to pump the CSV into the DW

Alternatives considered ....

  • Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed, and it was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier, as it is system wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log). (A configuration sketch follows after this list.)
  • Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... the problem is that the OLTP schema would need to be tweaked to support this, and that has many negative side-effects
  • Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner
  • Using the WAL
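
For reference, the native-logging alternative comes down to a few postgresql.conf settings, which is why it is system wide rather than per table. A rough sketch; 8.3's CSV-format log output (csvlog) makes the parsing a little less painful:

    # postgresql.conf -- statement logging is system wide, not per table
    logging_collector = on          # capture server log output to files
    log_destination   = 'csvlog'    # write the log in CSV format
    log_statement     = 'mod'       # log INSERT/UPDATE/DELETE/TRUNCATE statements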

Has anyone done this before? Want to share your thoughts?

Bulbul answered 25/3, 2010 at 22:45 Comment(1)
What did you end up using?Cymophane

Assuming that your tables of interest have (or can be augmented with) a unique, indexed, sequential key, then you will get much better value out of simply issuing SELECT ... FROM table ... WHERE key > :last_max_key with output to a file, where last_max_key is the last key value from the previous extraction (0 on the first extraction). This incremental, decoupled approach avoids introducing trigger latency into the insertion datapath (be it custom triggers or modified Slony), and depending on your setup it could scale better with the number of CPUs etc. (However, if you also have to track UPDATEs, and the sequential key was added by you, then your UPDATE statements should SET the key column to DEFAULT so it gets a new sequence value and is picked up by the next extraction. You would not be able to track DELETEs without a trigger.) Is this what you had in mind when you mentioned Talend?
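
A minimal sketch of that incremental pull, assuming a hypothetical orders table with a sequential id column; :last_max_key stands for the watermark value the extraction job remembers between runs (e.g. a psql variable):

    -- Export everything added since the previous run.
    COPY (
        SELECT *
        FROM   orders
        WHERE  id > :last_max_key      -- 0 on the very first run
        ORDER  BY id
    ) TO '/var/tmp/orders_increment.csv' WITH CSV;

    -- Remember the new watermark (e.g. the highest id just exported) for the next run.
    SELECT max(id) FROM orders;

    -- If UPDATEs must be tracked and you added the key yourself, reassign it
    -- on update so the row is picked up again by the next extraction, e.g.:
    -- UPDATE orders SET id = DEFAULT, status = 'shipped' WHERE order_ref = 42;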

I would not use the logging facility unless you cannot implement the solution above; logging most likely involves locking overhead to ensure log lines are written sequentially and do not overlap/overwrite each other when multiple backends write to the log (check the Postgres source.) The locking overhead may not be catastrophic, but you can do without it if you can use the incremental SELECT alternative. Moreover, statement logging would drown out any useful WARNING or ERROR messages, and the parsing itself will not be instantaneous.

Unless you are willing to parse WALs (including transaction-state tracking, and being ready to rewrite the code every time you upgrade Postgres) I would not necessarily use the WALs either -- that is, unless you have the extra hardware available, in which case you could ship WALs to another machine for extraction (on the second machine you can use triggers shamelessly -- or even statement logging -- since whatever happens there does not affect INSERT/UPDATE/DELETE performance on the primary machine). Note that performance-wise (on the primary machine), unless you can write the logs to a SAN, you'd get a comparable performance hit (in terms of thrashing the filesystem cache, mostly) from shipping WALs to a different machine as from running the incremental SELECT.

Paxwax answered 30/3, 2010 at 5:27 Comment(2)
The Talend option was going to adopt the approach you suggested ... maybe I should revisit. But you highlighted the key issue, tracking the INSERTs and UPDATEs and DELETEs. So no matter what I do there is some work to be done to get it to work cleanly and efficiently ... surprised this is not a very common issue with lots of examples on web. Thanks for your well thought through response.Bulbul
A potential issue with the primary-key-threshold idea is that postgres sequences are non-transactional. That is, a transaction that inserted with a lower PK may commit after a transaction that inserted with a higher PK. Your ETL strategy could therefore "miss" inserts (assuming a read committed isolation level). This would rarely be an issue (unless you have huge insert volume or long transactions), but it is something to consider if you cannot tolerate data loss during ETL.Cystolith

If you maintain a 'checksum table' that contains only the IDs and the 'checksum' of each row, you can not only do a quick select of the new records but also find the changed and deleted records.

The checksum could be computed with any checksum function you like, e.g. crc32.
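
A minimal sketch of the idea, with hypothetical names: an orders source table with primary key id, and md5 of the row text standing in for crc32:

    -- Snapshot of what was last loaded: one row per source id.
    CREATE TABLE orders_checksum (
        id       integer PRIMARY KEY,
        checksum text    NOT NULL
    );

    -- New or changed rows: missing from the snapshot, or checksum differs.
    SELECT o.*
    FROM   orders o
    LEFT   JOIN orders_checksum c ON c.id = o.id
    WHERE  c.id IS NULL
       OR  c.checksum <> md5(o::text);

    -- Deleted rows: present in the snapshot but gone from the source.
    SELECT c.id
    FROM   orders_checksum c
    LEFT   JOIN orders o ON o.id = c.id
    WHERE  o.id IS NULL;

After each extraction the snapshot table is refreshed with the current ids and checksums.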

Fraise answered 17/4, 2010 at 21:17 Comment(1)
I don't know why there isn't more talk about this solution; it is a very common approach on lots of platforms.Pipe

The new ON CONFLICT clause in PostgreSQL has changed the way I do many updates. I pull the new data (based on a row_update_timestamp) into a temp table, then in one SQL statement INSERT into the target table with ON CONFLICT ... DO UPDATE. If your target table is partitioned then you need to jump through a couple of hoops (i.e. hit the partition table directly). The ETL can happen as you load the temp table (most likely) or in the ON CONFLICT SQL (if trivial). Compared to other "UPSERT" systems (update, then insert if zero rows were affected, etc.) this shows a huge speed improvement. In our particular DW environment we don't need/want to accommodate DELETEs. Check out the ON CONFLICT docs - it gives Oracle's MERGE a run for its money!
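
A minimal sketch of that pattern, with hypothetical table and column names (requires PostgreSQL 9.5 or later for ON CONFLICT, and a unique constraint or primary key on the conflict column):

    -- Stage the incremental extract.
    CREATE TEMP TABLE stage (LIKE dw.fact_orders INCLUDING DEFAULTS);
    COPY stage FROM '/var/tmp/orders_increment.csv' WITH CSV;

    -- One statement: insert new rows, update existing ones.
    INSERT INTO dw.fact_orders AS t (id, amount, row_update_timestamp)
    SELECT id, amount, row_update_timestamp
    FROM   stage
    ON CONFLICT (id) DO UPDATE
    SET    amount               = EXCLUDED.amount,
           row_update_timestamp = EXCLUDED.row_update_timestamp;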

Palembang answered 3/1, 2017 at 16:37 Comment(0)

My view on this topic as of 2023...

Option 1 (Batch Approach):

  • Staging using an incremental extract, keeping an integer or timestamp watermark of the maximum row transferred per table in each iteration. We can always use ON CONFLICT to avoid any unexpected key violation after an iteration crashes unexpectedly. This approach cannot track row deletions, but we can use a deletion-flag column and filter on it in the data warehouse.
  • Data warehouse using calculated tables: stored procedures perform the complex joins / calculations and insert the results into new precalculated tables.

Option 2 (Pipeline Approach):

  • Staging using Logical Replication for real-time extraction. Logical replication captures and replicates changes in the same sequence in which they occurred, so the target database is always consistent. This approach can also track deletions (a sketch follows after this list).
  • Data warehouse using a mix of incremental materialized views (IVM) for real-time precalculated lightweight joins / calculations, and calculated tables built by stored procedures for heavier joins / calculations, since IVM currently does not support outer joins and all types of aggregations.
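
A minimal sketch of the logical-replication setup, with hypothetical names (requires PostgreSQL 10 or later and wal_level = logical on the publisher; matching tables must already exist on the subscriber):

    -- On the OLTP (publisher) side: publish only the tables of interest.
    CREATE PUBLICATION etl_pub FOR TABLE orders, customers;

    -- On the staging (subscriber) side: subscribe to that publication.
    CREATE SUBSCRIPTION etl_sub
        CONNECTION 'host=oltp-host dbname=oltp user=replicator'
        PUBLICATION etl_pub;
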
Detribalize answered 3/5, 2023 at 7:31 Comment(0)