I have data in a .csv file that I want to import into my Hasura cluster's PostgreSQL database instance. What's the best way to do this?
Create table_name with a schema that matches your CSV columns.
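As a minimal sketch, suppose the CSV has three columns, id, name, and price (these names and types are only assumptions; use whatever matches your file):

CREATE TABLE table_name (
    id    integer,
    name  text,
    price numeric
);

With the table in place, use psql to stream the data into Postgres. Execute this command: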
$ psql <postgres-url> -c \
  "copy table_name from STDIN with delimiter as ',';" \
  < /path/to/file.csv
(The connection URL already carries the database name and user; if you connect by host instead, use -h <host> -d <database-name> -U <user-name>.)
The data from the CSV file is now in table_name.
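To sanity-check the import, you can count the rows (table_name again being whatever you created above):

$ psql <postgres-url> -c "select count(*) from table_name;"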
$ hasura ms cp /path/to/file.csv hasura/postgres:/data.csv
and then
$ hasura ms exec postgres -n hasura -- psql -U admin -d hasuradb -c "copy table_name from '/data.csv' with delimiter as ',';"
(the file now lives inside the container, so the copy reads it from there rather than from a local redirect) – Blameless

hasura microservice command? I can't see it in the Hasura CLI docs – Anny
Adding my answer here for reference. When deploying Hasura on Heroku, you can get temporary credentials for the Postgres database by opening the Postgres add-on from the Heroku resources dashboard. You can then connect directly with the URL shown on its Settings tab:
psql 'postgres://UUUUUU:PPPPPP@HOST:5432/DBNAME'
Then in the Postgres console you can do something like:
\copy countryinfo from 'countryinfo.csv' with delimiter as E'\t';
The above is for a tab-delimited file downloaded from Geonames.org. Note: I deleted the comment lines before importing.
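If you want to script that cleanup step, Geonames prefixes its comment lines with #, so something like the following would work (the filenames here are just examples):

$ grep -v '^#' countryInfo.txt > countryinfo.csv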
You can use pgAdmin to simplify the task. Connect to your Postgres instance, select the desired table in the left sidebar, then open Import/Export Data under the Tools menu.
Select the .csv file and click "OK".
You should now have all your data imported.
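For reference, pgAdmin's import dialog issues a COPY behind the scenes; a rough psql equivalent, assuming your file has a header row, would be:

\copy table_name from '/path/to/file.csv' with (format csv, header true)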