How to upsert pandas DataFrame to PostgreSQL table?
I've scraped some data from web sources and stored it all in a pandas DataFrame. Now, in order to harness the powerful db tools afforded by SQLAlchemy, I want to convert said DataFrame into a Table() object and eventually upsert all the data into a PostgreSQL table. If this is practical, what is a workable method of accomplishing this task?

Stylistic answered 22/4, 2020 at 13:43 Comment(0)

Update: You can save yourself some typing by using this method.


If you are using PostgreSQL 9.5 or later you can perform the UPSERT using a temporary table and an INSERT ... ON CONFLICT statement:

import pandas as pd
import sqlalchemy as sa

# …

with engine.begin() as conn:
    # step 0.0 - create test environment
    conn.exec_driver_sql("DROP TABLE IF EXISTS main_table")
    conn.exec_driver_sql(
        "CREATE TABLE main_table (id int primary key, txt varchar(50))"
    )
    conn.exec_driver_sql(
        "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')"
    )
    # step 0.1 - create DataFrame to UPSERT
    df = pd.DataFrame(
        [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"]
    )
    
    # step 1 - create temporary table and upload DataFrame
    conn.exec_driver_sql(
        "CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false"
    )
    df.to_sql("temp_table", conn, index=False, if_exists="append")

    # step 2 - merge temp_table into main_table
    conn.exec_driver_sql(
        """\
        INSERT INTO main_table (id, txt) 
        SELECT id, txt FROM temp_table
        ON CONFLICT (id) DO
            UPDATE SET txt = EXCLUDED.txt
        """
    )

    # step 3 - confirm results
    result = conn.exec_driver_sql("SELECT * FROM main_table ORDER BY id").all()
    print(result)  # [(1, 'row 1 new text'), (2, 'new row 2 text')]
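
The ON CONFLICT mechanics above can be tried without a PostgreSQL server: SQLite 3.24+ supports an equivalent clause, so a minimal stdlib sketch of the upsert step (reusing the table and column names from the example) looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_table (id INTEGER PRIMARY KEY, txt TEXT)")
conn.execute("INSERT INTO main_table VALUES (1, 'row 1 old text')")

# one new row and one conflicting row, as in the DataFrame above
rows = [(2, "new row 2 text"), (1, "row 1 new text")]
conn.executemany(
    """
    INSERT INTO main_table (id, txt) VALUES (?, ?)
    ON CONFLICT (id) DO UPDATE SET txt = excluded.txt
    """,
    rows,
)

print(conn.execute("SELECT * FROM main_table ORDER BY id").fetchall())
# [(1, 'row 1 new text'), (2, 'new row 2 text')]
```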
Seringapatam answered 14/6, 2020 at 23:36 Comment(3)
Rather than having the schema for "main table" in your code twice, you can create your temporary table like this: `CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false`Cognition
thank you GordThompson this is perfect. Good job on the match_column addition (for cases where unique constraints are different from the index). I was using a delete/insert using COPY and this method is giving similar perf. This is safer and shorter. Small suggestion: drop the temp table in the end, and give the temp table a unique name like @pedrovgp does below.Loving
@GordThompson will this work for mysql too?Dunnite

I have needed this so many times, I ended up creating a gist for it.

The function is below. It will create the table if this is the first time the dataframe is persisted, and will update the table if it already exists:

import pandas as pd
import sqlalchemy
import uuid

def upsert_df(df: pd.DataFrame, table_name: str, engine: sqlalchemy.engine.Engine):
    """Implements the equivalent of pd.DataFrame.to_sql(..., if_exists='update')
    (which does not exist). Creates or updates the db records based on the
    dataframe records.

    Conflicts to determine an update are based on the dataframe's index.
    This will set a unique key constraint on the table equal to the index names.

    1. Create a temp table from the dataframe
    2. Insert/update from the temp table into table_name

    Returns: True if successful
    """

    # If the table does not exist, we should just use to_sql to create it
    if not engine.execute(
        f"""SELECT EXISTS (
            SELECT FROM information_schema.tables 
            WHERE  table_schema = 'public'
            AND    table_name   = '{table_name}');
            """
    ).first()[0]:
        df.to_sql(table_name, engine)
        return True

    # If it already exists...
    temp_table_name = f"temp_{uuid.uuid4().hex[:6]}"
    df.to_sql(temp_table_name, engine, index=True)

    index = list(df.index.names)
    index_sql_txt = ", ".join([f'"{i}"' for i in index])
    columns = list(df.columns)
    headers = index + columns
    headers_sql_txt = ", ".join(
        [f'"{i}"' for i in headers]
    )  # index1, index2, ..., column 1, col2, ...

    # col1 = EXCLUDED.col1, col2 = EXCLUDED.col2
    update_column_stmt = ", ".join([f'"{col}" = EXCLUDED."{col}"' for col in columns])

    # For the ON CONFLICT clause, postgres requires that the columns have unique constraint
    query_pk = f"""
    ALTER TABLE "{table_name}" DROP CONSTRAINT IF EXISTS unique_constraint_for_upsert;
    ALTER TABLE "{table_name}" ADD CONSTRAINT unique_constraint_for_upsert UNIQUE ({index_sql_txt});
    """
    engine.execute(query_pk)

    # Compose and execute upsert query
    query_upsert = f"""
    INSERT INTO "{table_name}" ({headers_sql_txt}) 
    SELECT {headers_sql_txt} FROM "{temp_table_name}"
    ON CONFLICT ({index_sql_txt}) DO UPDATE 
    SET {update_column_stmt};
    """
    engine.execute(query_upsert)
    engine.execute(f'DROP TABLE "{temp_table_name}"')

    return True
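
The SQL fragments the function composes can be checked without a database. Here is the same string-building logic standalone, with made-up index and column names standing in for `df.index.names` and `df.columns`:

```python
# Hypothetical names standing in for df.index.names / df.columns
index = ["date", "ticker"]
columns = ["open", "close"]
headers = index + columns

# quoted, comma-separated identifier lists, as built inside upsert_df
index_sql_txt = ", ".join(f'"{i}"' for i in index)
headers_sql_txt = ", ".join(f'"{h}"' for h in headers)
update_column_stmt = ", ".join(f'"{c}" = EXCLUDED."{c}"' for c in columns)

print(index_sql_txt)       # "date", "ticker"
print(headers_sql_txt)     # "date", "ticker", "open", "close"
print(update_column_stmt)  # "open" = EXCLUDED."open", "close" = EXCLUDED."close"
```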
Whyte answered 18/10, 2021 at 14:7 Comment(5)
Magic, this works beautifully! Easily the best answer on SO. As mentioned in the comment, it is the perfect equivalent to pd.DataFrame.to_sql(..., if_exists='update'), and it even adds an index-level duplicates constraint so duplicates cannot possibly appear in the table.Rovner
@Whyte does this work for mysql?Dunnite
@NicholasHansen-Feruch, I did not test it. Since the syntax is sometimes different, it is not guaranteed to work.Whyte
This is great but just want to point out some things that weren't initially obvious to me. This approach assumes your dataframe has a named index that is unique, if your pandas df has the default index then you can create one with df.set_index([col1,col2,...]). I also had an issue where I had to wrap the first sql to find the table in a sqlalchemy.text but this might be version dependent.Foreskin
Great script. Thanks for sharing. If you're using Sqlalchemy 2.0, you'll need to change the engine.execute to with engine.connect() as conn: ... conn.execute(); conn.commit()Casa

Here is my code for a bulk insert & insert-on-conflict-update query for PostgreSQL from a pandas dataframe:

Let's say id is the unique key for both the PostgreSQL table and the pandas df, and you want to insert and update based on this id.

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://username:pass@host:port/dbname")
query = text(f""" 
                INSERT INTO schema.table(name, title, id)
                VALUES {','.join([str(i) for i in list(df.to_records(index=False))])}
                ON CONFLICT (id)
                DO  UPDATE SET name= excluded.name,
                               title= excluded.title
         """)
engine.execute(query)

Make sure that your df columns are in the same order as your table columns.

EDIT 1:

Thanks to Gord Thompson's comment, I realized that this query won't work if there is a single quote in the columns. Here is a fix for that case:

import pandas as pd
from sqlalchemy import create_engine, text

df.name = df.name.str.replace("'", "''")
df.title = df.title.str.replace("'", "''")
engine = create_engine("postgresql://username:pass@host:port/dbname")
query = text(""" 
            INSERT INTO author(name, title, id)
            VALUES %s
            ON CONFLICT (id)
            DO  UPDATE SET name= excluded.name,
                           title= excluded.title
     """ % ','.join([str(i) for i in list(df.to_records(index=False))]).replace('"', "'"))
engine.execute(query)
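
As the comments below point out, string interpolation breaks on quotes and NULL values. A parameterized variant side-steps both problems by letting the driver bind the values. Sketched here with the stdlib sqlite3 driver since SQLite 3.24+ shares the ON CONFLICT syntax; with psycopg2 the placeholders would be %s instead of ?:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT, title TEXT)")

# Quotes and NULLs are fine because the driver binds the values itself
records = [(1, "O'Brien", 'The "Best" Title'), (2, "Smith", None)]
conn.executemany(
    """
    INSERT INTO author (id, name, title) VALUES (?, ?, ?)
    ON CONFLICT (id) DO UPDATE SET name = excluded.name, title = excluded.title
    """,
    records,
)

print(conn.execute("SELECT * FROM author ORDER BY id").fetchall())
# [(1, "O'Brien", 'The "Best" Title'), (2, 'Smith', None)]
```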
Bur answered 4/12, 2020 at 8:42 Comment(4)
SQL Injection issue: The above code will fail if either name or title contains a single quote. Example here.Seringapatam
@GordThompson thank you for your comment. I've edited my solution aboveBur
Now the code fails if either name or title contains double quotes. :(Seringapatam
Is there a version of this that uses sql parameters instead of % string interpolation? NULL values in your df will break string interpolation, as is the case here.Turnover

Consider this function if your DataFrame and SQL table already contain the same column names and types. Advantages:

  • Good if you have a long dataframe to insert (batching)
  • Avoids writing a long SQL statement in your code
  • Fast

from sqlalchemy import Table
from sqlalchemy.engine.base import Engine as sql_engine
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.ext.automap import automap_base
import pandas as pd


def upsert_database(list_input: pd.DataFrame, engine: sql_engine, table: str, schema: str) -> None:
    if len(list_input) == 0:
        return None
    # one dict per row, e.g. [{'id': 1, 'name': 'a'}, ...]
    flattened_input = list_input.to_dict('records')
    with engine.connect() as conn:
        # reflect the existing table so its schema is never restated in code
        base = automap_base()
        base.prepare(engine, reflect=True, schema=schema)
        target_table = Table(table, base.metadata,
                             autoload=True, autoload_with=engine, schema=schema)
        # insert in batches of 1000 rows
        chunks = [flattened_input[i:i + 1000] for i in range(0, len(flattened_input), 1000)]
        for chunk in chunks:
            stmt = insert(target_table).values(chunk)
            # on conflict with the primary key, update every non-primary-key column
            update_dict = {c.name: c for c in stmt.excluded if not c.primary_key}
            conn.execute(stmt.on_conflict_do_update(
                constraint=f'{table}_pkey',
                set_=update_dict)
            )
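
The batching step in the function is plain list slicing and can be seen standalone (the 2500-element list is just a stand-in for `df.to_dict('records')`):

```python
flattened_input = list(range(2500))  # stand-in for the list of row dicts
chunks = [flattened_input[i:i + 1000] for i in range(0, len(flattened_input), 1000)]
print([len(c) for c in chunks])  # [1000, 1000, 500]
```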
Iodate answered 19/2, 2021 at 10:8 Comment(1)
I want to use this, but am a bit intimidated by all of the functions that are new to me from sqlalchemy. If you ever get a chance to explain or comment this answer, I think it could be a great one for those of us who need to upsert from dataframes.Lissotrichous

This is the cleanest way I have found to upsert using pandas and postgres:

from sqlalchemy.dialects.postgresql import insert


def postgres_upsert(table, conn, keys, data_iter):
    data = [dict(zip(keys, row)) for row in data_iter]

    insert_statement = insert(table.table).values(data)
    upsert_statement = insert_statement.on_conflict_do_update(
        constraint=f"{table.table.name}_pkey",
        set_={c.key: c for c in insert_statement.excluded},
    )
    conn.execute(upsert_statement)


engine = create_sqlalchemy_engine(db_params)

df.to_sql(name="your_existing_table_name", con=engine, if_exists='append', index=False,
          method=postgres_upsert, chunksize=1000)  # Adjust chunksize as necessary
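
For context on how this hooks in: pandas calls the method= callable once per chunk with the arguments (table, conn, keys, data_iter). The record-shaping step inside postgres_upsert is plain zip, shown here standalone with made-up values for those arguments:

```python
# Stand-ins for the arguments pandas passes to the callable
keys = ["id", "name"]             # column names of the DataFrame being written
data_iter = [(1, "a"), (2, "b")]  # the rows in the current chunk

# same expression as inside postgres_upsert
data = [dict(zip(keys, row)) for row in data_iter]
print(data)  # [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```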
Stonecrop answered 4/4, 2024 at 6:4 Comment(2)
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.Contumacious
Some more context the Community Bot comment: given this question has an accepted answer with many upvotes, and other upvoted answers, it is important to explain how your code works and how it differs from the previous answers. Doing so will make your answer more valuable to future users, and may increase the chances of attracting upvotes. Thanks.Kourtneykovac

If you already have a pandas dataframe, you could use df.to_sql to push the data directly through SQLAlchemy:

from sqlalchemy import create_engine

# create an engine from a Postgres URI
cnxn = create_engine("postgresql+psycopg2://username:password@host:port/database")
# write dataframe to database
df.to_sql("my_table", con=cnxn, schema="myschema")
Amazement answered 22/4, 2020 at 13:59 Comment(3)
Indeed, that is certainly a viable option, and thank you for your input! However, I am looking to upsert data - not just insert or replace a table. That's where I think sqlalchemy could be a better option.Stylistic
#25955700 Maybe you could use this wrapper for SqlAlchemy Insert that implements upsert using the on conflict clause dynamically?Amazement
Yes, this works only if one has a Table() sqlalchemy object. In order to do this, I first need to convert the pandas df to a Table() object. - which is the main and first thing I want to doStylistic

© 2022 - 2025 — McMap. All rights reserved.