psycopg2 error: DatabaseError: error with no message from the libpq
I have an application that parses CSV files and loads their data into a Postgres 9.3 database. When executed serially, the insert statements/cursor executions work without issue.

I added Celery to the mix so the data files are parsed and inserted in parallel. Parsing works fine. However, when I go to run the insert statements I get:

[2015-05-13 11:30:16,464:  ERROR/Worker-1] ingest_task.work_it: Exception
    Traceback (most recent call last):
    File "ingest_tasks.py", line 86, in work_it
        rowcount = ingest_data.load_data(con=con, statements=statements)
    File "ingest_data.py", line 134, in load_data
        ingest_curs.execute(statement)
    DatabaseError: error with no message from the libpq
Kovacev answered 14/5, 2015 at 16:2 Comment(0)
I encountered a similar problem when calling engine.execute() from multiple processes. I finally solved it by adding engine.dispose() as the very first line of the function the subprocess enters, as suggested in the official documentation:

When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries.
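A minimal sketch of what that looks like, using SQLite as a stand-in for the Postgres URL; the engine URL, table, and function names here are illustrative, not from the original code:

```python
import multiprocessing

from sqlalchemy import create_engine, text

# Module-level engine, copied into each child process by fork()
engine = create_engine("sqlite:///ingest_example.db")

def work_it(statement):
    # First line in the child: discard connections inherited from the
    # parent so this process opens brand new ones.
    engine.dispose()
    with engine.begin() as con:
        con.execute(text(statement))

if __name__ == "__main__":
    statements = ["CREATE TABLE IF NOT EXISTS rows (x INTEGER)"]
    with multiprocessing.Pool(processes=2) as pool:
        pool.map(work_it, statements)
```

The key point is that dispose() runs inside the child, after the fork, so the inherited connection pool is thrown away before any query is issued.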

Desert answered 25/10, 2015 at 16:18 Comment(3)
Exactly. And this will occur with any multiprocessing/worker and database connection setup. You'll generally need to create a new db connection on a per-process basis. – Heraclid
This should be the accepted answer, thanks for the documentation link. – Octofoil
In my case, it was as simple as passing engine.dispose as the initializer argument for the multiprocessing pool, e.g. pool = multiprocessing.Pool(processes=42, initializer=db.engine.dispose) – Bronchi
