Improve INSERT-per-second performance of SQLite

Optimizing SQLite is tricky. Bulk-insert performance of a C application can vary from 85 inserts per second to over 96,000 inserts per second!

Background: We are using SQLite as part of a desktop application. We have large amounts of configuration data stored in XML files that are parsed and loaded into an SQLite database for further processing when the application is initialized. SQLite is ideal for this situation because it's fast, it requires no specialized configuration, and the database is stored on disk as a single file.

Rationale: Initially I was disappointed with the performance I was seeing. It turns out that the performance of SQLite can vary significantly (both for bulk inserts and selects) depending on how the database is configured and how you're using the API. It was not a trivial matter to figure out what all of the options and techniques were, so I thought it prudent to create this community wiki entry to share the results with Stack Overflow readers and save others the trouble of the same investigation.

The Experiment: Rather than simply talking about performance tips in the general sense (i.e. "Use a transaction!"), I thought it best to write some C code and actually measure the impact of various options. We're going to start with some simple data:

  • A 28 MB TAB-delimited text file (approximately 865,000 records) of the complete transit schedule for the city of Toronto
  • My test machine is a 3.60 GHz P4 running Windows XP.
  • The code is compiled with Visual C++ 2005 as "Release" with "Full Optimization" (/Ox) and Favor Fast Code (/Ot).
  • I'm using the SQLite "Amalgamation", compiled directly into my test application. The SQLite version I happen to have is a bit older (3.6.7), but I suspect these results will be comparable to the latest release (please leave a comment if you think otherwise).

Let's write some code!

The Code: A simple C program that reads the text file line-by-line, splits the string into values and then inserts the data into an SQLite database. In this "baseline" version of the code, the database is created, but we won't actually insert data:

/*************************************************************
    Baseline code to experiment with SQLite performance.

    Input data is a 28 MB TAB-delimited text file of the
    complete Toronto Transit System schedule/route info
    from http://www.toronto.ca/open/datasets/ttc-routes/

**************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include "sqlite3.h"

#define INPUTDATA "C:\\TTC_schedule_scheduleitem_10-27-2009.txt"
#define DATABASE "c:\\TTC_schedule_scheduleitem_10-27-2009.sqlite"
#define TABLE "CREATE TABLE IF NOT EXISTS TTC (id INTEGER PRIMARY KEY, Route_ID TEXT, Branch_Code TEXT, Version INTEGER, Stop INTEGER, Vehicle_Index INTEGER, Day INTEGER, Time TEXT)"
#define BUFFER_SIZE 256

int main(int argc, char **argv) {

    sqlite3 * db;
    sqlite3_stmt * stmt;
    char * sErrMsg = 0;
    char * tail = 0;
    int nRetCode;
    int n = 0;

    clock_t cStartClock;

    FILE * pFile;
    char sInputBuf [BUFFER_SIZE] = "\0";

    char * sRT = 0;  /* Route */
    char * sBR = 0;  /* Branch */
    char * sVR = 0;  /* Version */
    char * sST = 0;  /* Stop Number */
    char * sVI = 0;  /* Vehicle */
    char * sDT = 0;  /* Date */
    char * sTM = 0;  /* Time */

    char sSQL [BUFFER_SIZE] = "\0";

    /*********************************************/
    /* Open the Database and create the Schema */
    sqlite3_open(DATABASE, &db);
    sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);

    /*********************************************/
    /* Open input file and import into Database*/
    cStartClock = clock();

    pFile = fopen (INPUTDATA,"r");
    while (!feof(pFile)) {

        fgets (sInputBuf, BUFFER_SIZE, pFile);

        sRT = strtok (sInputBuf, "\t");     /* Get Route */
        sBR = strtok (NULL, "\t");            /* Get Branch */
        sVR = strtok (NULL, "\t");            /* Get Version */
        sST = strtok (NULL, "\t");            /* Get Stop Number */
        sVI = strtok (NULL, "\t");            /* Get Vehicle */
        sDT = strtok (NULL, "\t");            /* Get Date */
        sTM = strtok (NULL, "\t");            /* Get Time */

        /* ACTUAL INSERT WILL GO HERE */

        n++;
    }
    fclose (pFile);

    printf("Imported %d records in %4.2f seconds\n", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);

    sqlite3_close(db);
    return 0;
}

The "Control"

Running the code as-is doesn't actually perform any database operations, but it will give us an idea of how fast the raw C file I/O and string processing operations are.

Imported 864913 records in 0.94 seconds

Great! We can do 920,000 inserts per second, provided we don't actually do any inserts :-)


The "Worst-Case-Scenario"

We're going to generate the SQL string using the values read from the file and invoke that SQL operation using sqlite3_exec:

sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, '%s', '%s', '%s', '%s', '%s', '%s', '%s')", sRT, sBR, sVR, sST, sVI, sDT, sTM);
sqlite3_exec(db, sSQL, NULL, NULL, &sErrMsg);

This is going to be slow because the SQL will be compiled into VDBE code for every insert and every insert will happen in its own transaction. How slow?

Imported 864913 records in 9933.61 seconds

Yikes! 2 hours and 45 minutes! That's only 85 inserts per second.

Using a Transaction

By default, SQLite will evaluate every INSERT / UPDATE statement within its own transaction. If performing a large number of inserts, it's advisable to wrap your operation in a transaction:

sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    ...

}
fclose (pFile);

sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);

Imported 864913 records in 38.03 seconds

That's better. Simply wrapping all of our inserts in a single transaction improved our performance to 23,000 inserts per second.

Using a Prepared Statement

Using a transaction was a huge improvement, but recompiling the SQL statement for every insert doesn't make sense if we're using the same SQL over and over. Let's use sqlite3_prepare_v2 to compile our SQL statement once and then bind our parameters to that statement using sqlite3_bind_text:

/* Open input file and import into the database */
cStartClock = clock();

sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, @RT, @BR, @VR, @ST, @VI, @DT, @TM)");
sqlite3_prepare_v2(db,  sSQL, BUFFER_SIZE, &stmt, &tail);

sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    fgets (sInputBuf, BUFFER_SIZE, pFile);

    sRT = strtok (sInputBuf, "\t");   /* Get Route */
    sBR = strtok (NULL, "\t");        /* Get Branch */
    sVR = strtok (NULL, "\t");        /* Get Version */
    sST = strtok (NULL, "\t");        /* Get Stop Number */
    sVI = strtok (NULL, "\t");        /* Get Vehicle */
    sDT = strtok (NULL, "\t");        /* Get Date */
    sTM = strtok (NULL, "\t");        /* Get Time */

    sqlite3_bind_text(stmt, 1, sRT, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, sBR, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 3, sVR, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 4, sST, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 5, sVI, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 6, sDT, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 7, sTM, -1, SQLITE_TRANSIENT);

    sqlite3_step(stmt);

    sqlite3_clear_bindings(stmt);
    sqlite3_reset(stmt);

    n++;
}
fclose (pFile);

sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);

printf("Imported %d records in %4.2f seconds\n", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);

sqlite3_finalize(stmt);
sqlite3_close(db);

return 0;

Imported 864913 records in 16.27 seconds

Nice! There's a little bit more code (don't forget to call sqlite3_clear_bindings and sqlite3_reset), but we've more than doubled our performance to 53,000 inserts per second.

PRAGMA synchronous = OFF

By default, SQLite will pause after issuing an OS-level write command. This guarantees that the data is written to the disk. By setting synchronous = OFF, we are instructing SQLite to simply hand off the data to the OS for writing and then continue. There's a chance that the database file may become corrupted if the computer suffers a catastrophic crash (or power failure) before the data is written to the platter:

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);

Imported 864913 records in 12.41 seconds

The improvements are now smaller, but we're up to 69,600 inserts per second.

PRAGMA journal_mode = MEMORY

Consider storing the rollback journal in memory by evaluating PRAGMA journal_mode = MEMORY. Your transaction will be faster, but if you lose power or your program crashes during a transaction your database could be left in a corrupt state with a partially-completed transaction:

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);

Imported 864913 records in 13.50 seconds

A little slower than the previous optimization at 64,000 inserts per second.

PRAGMA synchronous = OFF and PRAGMA journal_mode = MEMORY

Let's combine the previous two optimizations. It's a little more risky (in case of a crash), but we're just importing data (not running a bank):

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);

Imported 864913 records in 12.00 seconds

Fantastic! We're able to do 72,000 inserts per second.

Using an In-Memory Database

Just for kicks, let's build upon all of the previous optimizations and redefine the database filename so we're working entirely in RAM:

#define DATABASE ":memory:"

Imported 864913 records in 10.94 seconds

It's not super-practical to store our database in RAM, but it's impressive that we can perform 79,000 inserts per second.

Refactoring C Code

Although not specifically an SQLite improvement, I don't like the extra char* assignment operations in the while loop. Let's quickly refactor that code to pass the output of strtok() directly into sqlite3_bind_text(), and let the compiler try to speed things up for us:

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    fgets (sInputBuf, BUFFER_SIZE, pFile);

    sqlite3_bind_text(stmt, 1, strtok (sInputBuf, "\t"), -1, SQLITE_TRANSIENT); /* Get Route */
    sqlite3_bind_text(stmt, 2, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Branch */
    sqlite3_bind_text(stmt, 3, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Version */
    sqlite3_bind_text(stmt, 4, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Stop Number */
    sqlite3_bind_text(stmt, 5, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Vehicle */
    sqlite3_bind_text(stmt, 6, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Date */
    sqlite3_bind_text(stmt, 7, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);    /* Get Time */

    sqlite3_step(stmt);        /* Execute the SQL Statement */
    sqlite3_clear_bindings(stmt);    /* Clear bindings */
    sqlite3_reset(stmt);        /* Reset VDBE */

    n++;
}
fclose (pFile);

Note: We are back to using a real database file. In-memory databases are fast, but not necessarily practical.

Imported 864913 records in 8.94 seconds

A slight refactoring to the string processing code used in our parameter binding has allowed us to perform 96,700 inserts per second. I think it's safe to say that this is plenty fast. As we start to tweak other variables (i.e. page size, index creation, etc.) this will be our benchmark.


Summary (so far)

I hope you're still with me! The reason we started down this road is that bulk-insert performance varies so wildly with SQLite, and it's not always obvious what changes need to be made to speed-up our operation. Using the same compiler (and compiler options), the same version of SQLite and the same data we've optimized our code and our usage of SQLite to go from a worst-case scenario of 85 inserts per second to over 96,000 inserts per second!


CREATE INDEX then INSERT vs. INSERT then CREATE INDEX

Before we start measuring SELECT performance, we know that we'll be creating indices. It's been suggested in one of the answers below that when doing bulk inserts, it is faster to create the index after the data has been inserted (as opposed to creating the index first then inserting the data). Let's try:

Create Index then Insert Data

sqlite3_exec(db, "CREATE  INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
...

Imported 864913 records in 18.13 seconds

Insert Data then Create Index

...
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "CREATE  INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);

Imported 864913 records in 13.66 seconds

As expected, bulk-inserts are slower if one column is indexed, but it does make a difference if the index is created after the data is inserted. Our no-index baseline is 96,000 inserts per second. Creating the index first then inserting data gives us 47,700 inserts per second, whereas inserting the data first then creating the index gives us 63,300 inserts per second.


I'd gladly take suggestions for other scenarios to try... And will be compiling similar data for SELECT queries soon.

Millicent answered 10/11, 2009 at 22:16 Comment(28)
You don't mention what you mean by large amounts of data: a few GB? TB? Or MB?Scarito
Good point! In our case we are dealing with approximately 1.5 million key/value pairs read from XML and CSV text files into 200k records. Small by comparison to databases that run sites like SO - but big enough that tuning SQLite performance becomes important.Tomlinson
Perhaps each tip added to the question, could attempt to give some idea of the sizes (or database structures) it's likely to help with. Of course it's not possible to give precise figures, or to be accurate for every possible case, but something along the lines of "this hint is pretty pointless until you have at least a few million rows in each table", or whatever.Favrot
https://mcmap.net/q/40711/-sqlite3-bulk-insert-from-c ?Berard
When do you experience the performance problems? During load, simple queries or complex joins? This really makes a big difference on how to optimise the DB.Sudoriferous
When you call something like MS SQL or Oracle you call another process, but SQLite always runs in-process. That makes SQLite fast. Sadly, the commit takes a long time in SQLite. It is important to use parameterized queries if you want good performance (just like with Oracle). When I use File Monitor (Sysinternals) I see that my virus scanner investigates the journal file again and again. I wonder whether that slows things down or not? I should investigate that issue further.Carnatic
"We have large amounts of configuration data stored in XML files that are parsed and loaded into an SQLite database for further processing when the application is initialized." why don't you keep everything in the sqlite database in the first place, instead of storing in XML and then loading everything at initialization time?Default
"The code is compiled with MSVC 2005 as "Release" with "Full Optimization" (/Ox) and Favor Fast Code (/Ot)." You may try using PGO when compiling.Default
You may also want to try avoiding having fgets in the loop: just mmap the whole file into memory and iterate on that. Hint: pass MAP_POPULATE to mmap and MADV_SEQUENTIAL to madvise. This should raise the performance of the control version.Default
Have you tried not calling sqlite3_clear_bindings(stmt);? You set the bindings every time through which should be enough: Prior to calling sqlite3_step() for the first time or immediately after sqlite3_reset(), the application can invoke one of the sqlite3_bind() interfaces to attach values to the parameters. Each call to sqlite3_bind() overrides prior bindings on the same parameter (see: sqlite.org/cintro.html). There is nothing in the docs for that function saying you must call it.Marceline
ahcox: the binding is to the address pointed at and not the variable, so that wouldn't work since strtok returns a new pointer each time. You'd either have to strcpy after each strtok or make your own tokenizer which always copies as it reads along the string.Abuttals
Did you do repeated measurements? The 4s "win" for avoiding 7 local pointers is strange, even assuming a confused optimizer.Carolyn
It would be interesting to see how your final case performs with the step removed. My guess is you're burning a lot of CPU time by using SQLITE_TRANSIENT; you could probably improve the time with some more C refactoring.Pouf
Updated test with results in: codereview.stackexchange.com/questions/26822/…Tab
@nemetroid: there is still no need to call sqlite3_clear_bindings, simply rebind the new pointers before the next iteration.Ace
Mike - I think some optimization may be had with how a DateTime column is created and how it is used. I suspect some casting might occur that delays the result. Your input is welcome here, and or here: https://mcmap.net/q/40712/-what-39-s-the-most-efficient-way-to-handle-datetimes-especially-since-a-column-of-datetime-results-in-a-numeric/328397Botryoidal
Don't use feof() to control the termination of your input loop. Use the result returned by fgets(). https://mcmap.net/q/40713/-reaching-eof-with-fgetsCatchy
You should also add to the benchmark whether threading is enabled in SQLite or not, because enabling threads in SQLite has an impact on performance.Plasia
Why don't you run the code without the inserts, to see what the "theoretical" baseline is? It can be done in several steps: only reading and parsing the data; then reading, parsing and binding.Ehrenburg
Another thing is the db file size - the physical file in which the db stores the table data. Make sure it has sufficient storage reserved for all the data you want to insert. This way you will avoid auto-growth operations during inserts. The same applies to the transaction log that keeps all changes. Make sure it has space too - a db backup beforehand usually empties the transaction log.Armelda
Use BEGIN TRANSACTION and END TRANSACTION to insert in batches.Hindoo
I generally find just reading the entire file in one go and then parsing the lines in memory is faster than reading line by line (unless you parallelize it).Revolutionary
I want to add a note about choosing the clock() function to measure time in the benchmark. The function implementation varies on different OSes. While on Windows it measures amount of wall-time elapsed since the start of the program, on other OSes it measures CPU time used by the program. Luckily TS had Windows XP and it worked for him, but on other OSes it may not be what you really want. This may help with choosing the appropriate function.Master
This isn't actually a question, is it? Perhaps it would be wise to put all this infomation in an answer?Viscid
so my situation is a little different. I have an application that, upon system crash, it wouldn't be so bad to lose the last few changes, but it would be bad to corrupt the database. I assume that if I use the journal=memory setting the transactions will either make it to disk or they won't, but the database will remain sane. The other settings like synchronous off, I presume can create a corrupt database on system crash. Which settings will get me the most speed without corrupting the database even if I lose some transactions upon system crash? Thanks.Scibert
Is it safe to use sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg); sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg); if I'm not using a transaction? What can happen on a program crash?Suzannesuzerain
There's still disk buffers that may stall your write operation for a few split seconds. This is unrelated to using transactions.Herbherbaceous
I'm surprised the refactoring led to a performance improvement. Seems more like variation between run, assuming you compiled with optimization.Dugan

Several tips:

  1. Put inserts/updates in a transaction.
  2. For older versions of SQLite - Consider a less paranoid journal mode (pragma journal_mode). There is NORMAL, and then there is OFF, which can significantly increase insert speed if you're not too worried about the database possibly getting corrupted if the OS crashes. If your application crashes the data should be fine. Note that in newer versions, the OFF/MEMORY settings are not safe for application level crashes.
  3. Playing with page sizes makes a difference as well (PRAGMA page_size). Having larger page sizes can make reads and writes go a bit faster as larger pages are held in memory. Note that more memory will be used for your database.
  4. If you have indices, consider calling CREATE INDEX after doing all your inserts. This is significantly faster than creating the index and then doing your inserts.
  5. You have to be quite careful if you have concurrent access to SQLite, as the whole database is locked when writes are done, and although multiple readers are possible, writes will be locked out. This has been improved somewhat with the addition of a WAL in newer SQLite versions.
  6. Take advantage of saving space...smaller databases go faster. For instance, if you have key value pairs, try making the key an INTEGER PRIMARY KEY if possible, which will replace the implied unique row number column in the table.
  7. If you are using multiple threads, you can try using the shared page cache, which will allow loaded pages to be shared between threads, which can avoid expensive I/O calls.
  8. Don't use !feof(file) to control your read loop; test the return value of fgets() instead (see the sketch after this list).
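
As a minimal sketch of tip 8 (assuming the parsing loop and variables from the question), the loop can be driven by the return value of fgets() instead of feof(), which also avoids processing the final line twice when the last read fails:

pFile = fopen (INPUTDATA,"r");
while (fgets(sInputBuf, BUFFER_SIZE, pFile) != NULL) {

    sqlite3_bind_text(stmt, 1, strtok (sInputBuf, "\t"), -1, SQLITE_TRANSIENT); /* Route */
    sqlite3_bind_text(stmt, 2, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Branch */
    /* ... bind the remaining columns exactly as in the question ... */

    sqlite3_step(stmt);
    sqlite3_reset(stmt);

    n++;
}
fclose (pFile);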

I've also asked similar questions here and here.

Kmeson answered 10/11, 2009 at 22:16 Comment(5)
Docs don't know a PRAGMA journal_mode NORMAL sqlite.org/pragma.html#pragma_journal_modeDdt
It's been a while, my suggestions applied for older versions before a WAL was introduced. Looks like DELETE is the new normal setting, and now there's OFF and MEMORY settings as well. I suppose OFF/MEMORY will improve write performance at the expense of database integrity, and OFF disables rollbacks completely.Kmeson
for #7, do you have an example on how to enable shared page cache using the c# system.data.sqlite wrapper?Euonymus
#4 brought back ages old memories--There was at least one case back in the before-times where dropping an index before a group of adds and re-creating it afterwards sped inserts significantly. May still work out quicker on modern systems for some adds where you know you have sole access to the table for the period.Defalcation
@Kmeson perhaps were you speaking about synchronous = NORMAL? sqlite.org/pragma.html#pragma_synchronousHoneymoon

Try using SQLITE_STATIC instead of SQLITE_TRANSIENT for those inserts.

SQLITE_TRANSIENT will cause SQLite to copy the string data before returning.

SQLITE_STATIC tells it that the memory address you gave it will be valid until the query has been performed (which in this loop is always the case). This will save you several allocate, copy and deallocate operations per loop. Possibly a large improvement.
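
A minimal sketch of that change in the question's loop; since sInputBuf is not modified between the bind calls and sqlite3_step(), the pointers returned by strtok() remain valid for the duration of the step, which is what SQLITE_STATIC requires:

sqlite3_bind_text(stmt, 1, sRT, -1, SQLITE_STATIC);
sqlite3_bind_text(stmt, 2, sBR, -1, SQLITE_STATIC);
/* ... parameters 3 through 7 bound the same way ... */

sqlite3_step(stmt);
sqlite3_reset(stmt);   /* sInputBuf is only overwritten by the next fgets() */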

Alban answered 10/11, 2009 at 22:16 Comment(0)

Avoid sqlite3_clear_bindings(stmt).

The code in the test sets the bindings every time through which should be enough.

The C API intro from the SQLite docs says:

Prior to calling sqlite3_step() for the first time or immediately after sqlite3_reset(), the application can invoke the sqlite3_bind() interfaces to attach values to the parameters. Each call to sqlite3_bind() overrides prior bindings on the same parameter

There is nothing in the docs for sqlite3_clear_bindings saying you must call it in addition to simply setting the bindings.

More detail: Avoid_sqlite3_clear_bindings()
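
A minimal sketch of the question's inner loop with the sqlite3_clear_bindings() call simply removed; each iteration overwrites the previous bindings before stepping again:

while (fgets(sInputBuf, BUFFER_SIZE, pFile) != NULL) {

    sqlite3_bind_text(stmt, 1, strtok (sInputBuf, "\t"), -1, SQLITE_TRANSIENT);
    /* ... bind parameters 2 through 7 as in the question ... */

    sqlite3_step(stmt);
    sqlite3_reset(stmt);    /* reset the statement; the bindings are kept and overwritten next time */

    n++;
}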

Marceline answered 10/11, 2009 at 22:16 Comment(1)
Wonderfully right: "Contrary to the intuition of many, sqlite3_reset() does not reset the bindings on a prepared statement. Use this routine to reset all host parameters to NULL." - sqlite.org/c3ref/clear_bindings.htmlMak

On bulk inserts

Inspired by this post and by the Stack Overflow question that led me here -- Is it possible to insert multiple rows at a time in an SQLite database? -- I've posted my first Git repository:

https://github.com/rdpoor/CreateOrUpdate

which bulk loads an array of ActiveRecords into MySQL, SQLite or PostgreSQL databases. It includes an option to ignore existing records, overwrite them or raise an error. My rudimentary benchmarks show a 10x speed improvement compared to sequential writes -- YMMV.

I'm using it in production code where I frequently need to import large datasets, and I'm pretty happy with it.

Salena answered 10/11, 2009 at 22:16 Comment(3)
@Jess: If you follow the link, you'll see that he meant the batch insert syntax.Benzene
@afaulconbridge: Probably just as well: I suspect you'll attain comparable speedups -- but safer and easier -- simply by wrapping your inserts within a single transaction.Salena
I concur with these cats. You have 28 MB of data? I have 130 terabytes of binary data.Payday

Bulk imports seem to perform best if you can chunk your INSERT/UPDATE statements. A value of 10,000 or so has worked well for me on a table with only a few rows, YMMV...
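
A rough sketch of that chunking, layered on the prepared-statement loop from the question; the chunk size of 10,000 is simply the value reported above as working well:

#define CHUNK_SIZE 10000

sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
while (fgets(sInputBuf, BUFFER_SIZE, pFile) != NULL) {

    /* ... bind and step the prepared INSERT as before ... */

    if (++n % CHUNK_SIZE == 0) {
        /* Commit the current chunk and immediately open the next one. */
        sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
        sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
    }
}
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);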

Garlandgarlanda answered 10/11, 2009 at 22:16 Comment(1)
You'd want to tune x = 10,000 so that x = cache [= cache_size * page_size] / average size of your insert.Benzene

If you care only about reading, a somewhat faster (but potentially stale) approach is to read from multiple connections in multiple threads (one connection per thread).

First, find the number of items in the table:

SELECT COUNT(*) FROM table

then read in pages (LIMIT/OFFSET):

SELECT * FROM table ORDER BY _ROWID_ LIMIT <limit> OFFSET <offset>

where <limit> and <offset> are calculated per-thread, like this:

int limit = (count + n_threads - 1)/n_threads;

for each thread:

int offset = thread_index * limit
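
A small sketch of how those two formulas might be combined to build each thread's query; the table name and the count, n_threads and thread_index variables are placeholders for whatever your application provides:

/* Sketch only: compute this thread's page and format its SELECT. */
int limit  = (count + n_threads - 1) / n_threads;   /* rows handled per thread */
int offset = thread_index * limit;                  /* first row for this thread */

char sQuery[256];
snprintf(sQuery, sizeof(sQuery),
         "SELECT * FROM table ORDER BY _ROWID_ LIMIT %d OFFSET %d",
         limit, offset);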

For our small (200 MB) db this made a 50-75% speed-up (3.8.0.2 64-bit on Windows 7). Our tables are heavily non-normalized (1000-1500 columns, roughly 100,000 or more rows).

Too many or too few threads won't help; you need to benchmark and profile it yourself.

Also, for us SHAREDCACHE made performance slower, so I manually set PRIVATECACHE (because shared cache was enabled globally for us).

Girvin answered 10/11, 2009 at 22:16 Comment(0)

I couldn't get any gain from transactions until I raised cache_size to a higher value, i.e. PRAGMA cache_size=10000;

Blend answered 10/11, 2009 at 22:16 Comment(1)
Note that using a positive value for cache_size sets the number of pages to cache, not the total RAM size. With the default page size of 4kB, this setting will hold up to 40MB of data per open file (or per process, if running with shared cache).Heifetz

After reading this tutorial, I tried to implement it in my program.

I have 4-5 files that contain addresses. Each file has approximately 30 million records. I am using the same configuration you are suggesting, but my number of INSERTs per second is much lower (~10,000 records per second).

Here is where your suggestion falls short. You use a single transaction for all the records and a single insert with no errors/failures. Let's say that you are splitting each record into multiple inserts on different tables. What happens if a record is broken?

The ON CONFLICT clause does not help here, because if you have 10 elements in a record and each element needs to be inserted into a different table, then if element 5 gets a CONSTRAINT error, all 4 previous inserts need to be undone too.

So this is where the rollback comes in. The only issue with a rollback is that you lose all your inserts and start from the top. How can you solve this?

My solution was to use multiple transactions. I begin and end a transaction every 10,000 records (don't ask why that number; it was the fastest one I tested). I created an array sized 10,000 and insert the successful records there. When an error occurs, I do a rollback, begin a transaction, insert the records from my array, commit, and then begin a new transaction after the broken record (a rough sketch follows).
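
A rough sketch of that recovery pattern. The Record type and the insert_record() helper are hypothetical placeholders standing in for the per-record inserts described above, not code from this answer:

#include "sqlite3.h"

#define CHUNK_SIZE 10000

/* Hypothetical record type: the parsed columns of one input record. */
typedef struct {
    const char *fields[10];
} Record;

/* Hypothetical helper that runs the prepared INSERT(s) for one record
   and returns SQLITE_DONE on success. */
int insert_record(sqlite3 *db, const Record *r);

void import_chunked(sqlite3 *db, Record *records, int count) {
    char *sErrMsg = 0;
    int chunk_start = 0;    /* index of the first record in the open transaction */

    sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
    for (int i = 0; i < count; i++) {
        if (insert_record(db, &records[i]) != SQLITE_DONE) {
            /* Broken record: discard the partial chunk, then replay its good rows. */
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, &sErrMsg);
            sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
            for (int j = chunk_start; j < i; j++)
                insert_record(db, &records[j]);
            sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
            /* Start a fresh transaction just after the broken record. */
            sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
            chunk_start = i + 1;
        } else if ((i + 1) % CHUNK_SIZE == 0) {
            /* Commit every CHUNK_SIZE records and open the next chunk. */
            sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
            sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
            chunk_start = i + 1;
        }
    }
    sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
}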

This solution helped me bypass the issues I had when dealing with files containing bad/duplicate records (I had almost 4% bad records).

The algorithm I created reduced my process by 2 hours. The final loading time for a file is 1 hr 30 min, which is still slow, but not compared to the 4 hrs it initially took. I managed to speed up the inserts from 10,000/s to ~14,000/s.

If anyone has any other ideas on how to speed it up, I am open to suggestions.

UPDATE:

In addition to my answer above, you should keep in mind that inserts per second also depend on the hard drive you are using. I tested it on 3 different PCs with different hard drives and got massive differences in times: PC1 (1 hr 30 min), PC2 (6 hrs), PC3 (14 hrs). So I started wondering why that would be.

After two weeks of research into multiple resources (hard drive, RAM, cache), I found out that some settings on your hard drive can affect the I/O rate. By clicking Properties on your desired output drive you can see two options on the General tab. Option 1: Compress this drive; Option 2: Allow files on this drive to have contents indexed.

By disabling these two options, all 3 PCs now take approximately the same time to finish (1 hr 20 to 40 min). If you encounter slow inserts, check whether your hard drive is configured with these options. It will save you lots of time and headaches trying to find the solution.

Sciamachy answered 10/11, 2009 at 22:16 Comment(2)
I suggest the following: * Use SQLITE_STATIC vs SQLITE_TRANSIENT to avoid a string copy; you must ensure the string will not be changed before the statement is executed. * Use bulk insert: INSERT INTO stop_times VALUES (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?), (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?), (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?), (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?), (NULL, ?, ?, ?, ?, ?, ?, ?, ?, ?) * mmap the file to reduce the number of syscalls.Crystalloid
Doing that I am able to import 5,582,642 records in 11.51 secondsCrystalloid

The answer to your question is that newer versions of SQLite 3 have improved performance; use one of those.

This answer, Why is SQLAlchemy insert with sqlite 25 times slower than using sqlite3 directly?, by the SQLAlchemy ORM author shows 100k inserts in 0.5 seconds, and I have seen similar results with python-sqlite and SQLAlchemy. This leads me to believe that performance has improved with SQLite 3.

Designing answered 10/11, 2009 at 22:16 Comment(0)

Using PRAGMA journal_mode = WAL doubled the speed of INSERTs in my case since internally it is the same as batching INSERTS as suggested here.

In my case I needed to import data into an index and not just a table. SQLite has this awesome WITHOUT ROWID feature which allows combining a table and an index. By default, a table in SQLite is also a B-tree and any indexes are stored in separate B-tree pages. Using WITHOUT ROWID uses just one B-tree for the table and index.

I also used PRAGMA auto_vacuum = 0, which should stop SQLite from reclaiming free space (at the expense of a larger database file), but it did not seem to make any visible difference in performance.
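
A minimal sketch of that setup in the sqlite3_exec style used in the question; the key/value schema here is only an illustrative example, not the answerer's actual table:

/* Enable WAL, disable auto-vacuum (before any tables exist), and create a
   clustered key/value table where the table itself is the B-tree index. */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, "PRAGMA journal_mode = WAL", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA auto_vacuum = 0", NULL, NULL, &sErrMsg);
sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID",
    NULL, NULL, &sErrMsg);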

Although my case is a bit different from OP's requirement, the first suggestion of using WAL should make a difference for his case.

Veats answered 10/11, 2009 at 22:16 Comment(0)

Splitting up the task into multiple transactions like @Jimmy_A did is the way to go. Otherwise, you may saturate your RAM with a monster transaction and a heavy COMMIT task.

For further performance tuning, you may also enable the write-back cache on your hard drive, provided you use some kind of battery-backed system (laptop, UPS, RAID controller with battery, ...).

Egarton answered 10/11, 2009 at 22:16 Comment(0)

Use a ContentProvider for inserting bulk data into the db. The method below is used for inserting bulk data into the database; it should improve the INSERT-per-second performance of SQLite.

private SQLiteDatabase database = dbHelper.getWritableDatabase();

public int bulkInsert(@NonNull Uri uri, @NonNull ContentValues[] values) {

    database.beginTransaction();
    try {
        for (ContentValues value : values)
            database.insert("TABLE_NAME", null, value);

        database.setTransactionSuccessful();
    } finally {
        database.endTransaction();
    }

    return values.length;
}

Call bulkInsert method :

App.getAppContext().getContentResolver().bulkInsert(contentUriTable,
            contentValuesArray);

Link: https://www.vogella.com/tutorials/AndroidSQLite/article.html check Using ContentProvider Section for more details

Trenton answered 10/11, 2009 at 22:16 Comment(2)
This question is about C++ sqlite, and is not Android specificAlejandro
Most ContentProviders do not properly implement bulkInsert, and they are not actually notably faster than just calling insert in a loop :(Alejandro
