On Redshift you can create a table to receive the data:
CREATE TABLE redshift_schema.redshift_table (...);
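For illustration only, a minimal column layout (the column names here are hypothetical; substitute your real ones for the elided list above):
CREATE TABLE redshift_schema.redshift_table (
    id      BIGINT,
    payload VARCHAR(256)
);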
Then create a foreign data wrapper, a server, and a foreign table (a local stand-in for the Redshift table) in PostgreSQL RDS:
CREATE EXTENSION redshift_fdw;
-- Optional alternative: if no redshift_fdw extension is available, install
-- postgres_fdw (Redshift speaks the PostgreSQL wire protocol) and define the
-- wrapper from its handler and validator instead:
--CREATE EXTENSION postgres_fdw;
--CREATE FOREIGN DATA WRAPPER redshift_fdw
--  HANDLER postgres_fdw_handler
--  VALIDATOR postgres_fdw_validator;
CREATE SERVER redshift_server_mydb
FOREIGN DATA WRAPPER redshift_fdw
OPTIONS (dbname 'mydb', port '5439', connect_timeout '200000', host 'myhost.redshift.amazonaws.com');
CREATE USER MAPPING FOR mypguser
SERVER redshift_server_mydb
OPTIONS (user 'myrsuser', password 'mypassword');
IMPORT FOREIGN SCHEMA redshift_schema
LIMIT TO (redshift_table)
FROM SERVER redshift_server_mydb
INTO postgresql_schema;
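If IMPORT FOREIGN SCHEMA is not available (it requires PostgreSQL 9.5 or later), you can declare the foreign table by hand using the postgres_fdw options schema_name and table_name; the column list below reuses the same hypothetical layout as above:
CREATE FOREIGN TABLE postgresql_schema.redshift_table (
    id      BIGINT,
    payload VARCHAR(256)
)
SERVER redshift_server_mydb
OPTIONS (schema_name 'redshift_schema', table_name 'redshift_table');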
Now in PostgreSQL you can load the Redshift table from the PostgreSQL table, without using dblink; SELECT, INSERT, UPDATE, and DELETE all work against the foreign table, and you can wrap the load in a function if you like:
INSERT INTO postgresql_schema.redshift_table
SELECT *
FROM postgresql_schema.postgresql_table;
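A minimal sketch of the function variant (the name load_redshift_table is made up):
CREATE OR REPLACE FUNCTION load_redshift_table() RETURNS void AS $$
BEGIN
    -- Push all rows from the local table through the foreign table into Redshift.
    INSERT INTO postgresql_schema.redshift_table
    SELECT * FROM postgresql_schema.postgresql_table;
END;
$$ LANGUAGE plpgsql;
Bear in mind that postgres_fdw typically sends rows to the remote side one INSERT at a time (newer versions can batch via the batch_size option), so this suits modest volumes; for bulk loads, Redshift's COPY from S3 is usually much faster.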
Now when you look at the Redshift table, all the data is there, and you can UNLOAD the table to S3 as required.
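A sketch of such an UNLOAD, run on Redshift (the bucket name and IAM role ARN are placeholders; by default UNLOAD writes pipe-delimited files):
UNLOAD ('SELECT * FROM redshift_schema.redshift_table')
TO 's3://mybucket/redshift_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole';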
COPY ... TO PROGRAM – Carmencita
copy to file and aws s3 cp file to bucket – Gal
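Sketches of both suggestions (the bucket name is a placeholder; note that COPY ... TO PROGRAM and COPY to a server-side file run on the database server and need superuser or the pg_execute_server_program / pg_write_server_files roles, so on RDS you would use a client-side \copy instead):
-- Carmencita's variant: stream straight to S3 via the AWS CLI on the server
COPY (SELECT * FROM postgresql_schema.postgresql_table)
TO PROGRAM 'aws s3 cp - s3://mybucket/postgresql_table.csv'
WITH (FORMAT csv);
-- Gal's variant: write a server-side file, then upload it
COPY (SELECT * FROM postgresql_schema.postgresql_table)
TO '/tmp/postgresql_table.csv'
WITH (FORMAT csv);
-- then, from a shell on the server:
--   aws s3 cp /tmp/postgresql_table.csv s3://mybucket/
Either way, the file landing in S3 can then be loaded into Redshift with its COPY command.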