OperationalError: database is locked
A

28

161

I have made some repetitive operations in my application (testing it), and suddenly I’m getting a weird error:

OperationalError: database is locked

I've restarted the server, but the error persists. What can it be all about?

Additive answered 3/7, 2010 at 21:26 Comment(0)
D
149

From Django's documentation:

SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than sqlite can handle in default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released.

Python's SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error.

If you're getting this error, you can solve it by:

  • Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.

  • Rewriting your code to reduce concurrency and ensure that database transactions are short-lived.

  • Increasing the default timeout value by setting the timeout database option:

    "OPTIONS": {
        # ...
        "timeout": 20,
        # ...
    }
    

    This will make SQLite wait a bit longer before throwing "database is locked" errors; it won't really do anything to solve them.
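
For reference, that option lives inside the DATABASES setting in settings.py. A minimal sketch (the BASE_DIR / "db.sqlite3" path is Django's conventional default; adjust it to your project):

```python
# settings.py -- minimal sketch; the database path is Django's
# conventional default and should be adjusted to your project.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
        "OPTIONS": {
            # Seconds SQLite waits on a lock before raising
            # "database is locked" (the default is 5).
            "timeout": 20,
        },
    }
}
```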

Dinosaurian answered 3/7, 2010 at 21:33 Comment(9)
Specifying a longer-than-default timeout may help relieve the problem: create_engine('sqlite:///{}'.format(xxx), connect_args={'timeout': 15})Posy
@kawing-chiu: How do you do that for running Django tests?Rising
Two concurrent transactions from different threads on the same process that both attempt to write to the database is more concurrency than sqlite can handle. My answer below has additional detail about this.Johnsson
Worked for me: Kill processes w/ a DB connection (e.g. PyCharm, Shell, etc.) & restartCup
This is a terrible answer to be at the top without additional clarification. SQLite is EXTREMELY robust for the overwhelming majority of local storage usages, or even for small websites with hundreds of visitors. Basj's answer is way more relevant for most people.Fallingout
@FrederickNord The Django documentation now describes how to increase the timeout for the SQLite database backend: docs.djangoproject.com/en/4.0/ref/databases/…Airdrie
That has been there for a long time but doesn't help. This https://mcmap.net/q/151964/-sqlite-database-table-is-locked-on-tests helped me.Rising
In my case, my API testing tool was competing with the frontend for DB access, which caused the error. So see if you have any test tool interfering with your frontend or vice versaDovekie
I have set the timeout to 500 seconds, but I get a database is locked error after about 1 second already. Is it a problem if multiple threads wait at the same time for the database?Bumpkin
A
82

In my case, it was because I had opened the database in DB Browser for SQLite. When I closed it there, the problem was gone.

Alurd answered 12/9, 2016 at 11:21 Comment(3)
Yeah this worked for me too amazingly. I guess DB browser must have been making the extra connection that was causing it to crash.Euphonious
I added a column to a table through DB Browser for SQLite and it had locked the database. Closing it solved the issue for me.Bernetta
Exactly this helped me as well. Not sure what the reason was.Mcfadden
S
77

I slightly disagree with the accepted answer which, by quoting this doc, implicitly links OP's problem (Database is locked) to this:

Switching to another database backend. At a certain point SQLite becomes too "lite" for real-world applications, and these sorts of concurrency errors indicate you've reached that point.

It is a bit "too easy" to incriminate SQLite for this problem. SQLite is very powerful when correctly used; it's not only a toy for small databases (fun fact: an SQLite database is limited in size to 140 terabytes).

Unless you have a very busy server with thousands of connections in the same second, the reason for this Database is locked error is probably a misuse of the API rather than a problem inherent to SQLite that would make it "too light". Here is more information about Implementation Limits for SQLite.


Now the solution:

I had the same problem when I was using two scripts using the same database at the same time:

  • one was accessing the DB with write operations
  • the other was accessing the DB in read-only mode

Solution: always call cursor.close() as soon as possible after executing a query (even a read-only one).

Here are more details.
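
As an illustration of that habit, here is a minimal sketch using the standard sqlite3 module (the file path is a throwaway example). contextlib.closing guarantees the cursor is closed even if the query raises:

```python
import os
import sqlite3
import tempfile
from contextlib import closing

db_path = os.path.join(tempfile.mkdtemp(), "example.db")  # throwaway db

conn = sqlite3.connect(db_path, timeout=10)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

# Close the cursor as soon as the query is done, even for a read-only
# SELECT, so it does not hold the database up for other connections.
with closing(conn.cursor()) as cur:
    cur.execute("SELECT COUNT(*) FROM t")
    count = cur.fetchone()[0]

conn.close()
```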

Steamship answered 25/11, 2018 at 17:43 Comment(8)
A few hours ago, I agreed with you that it seems too easy to incriminate sqlite for this problem. But now I know that two simultaneous connections that start a transaction that includes a write operation is enough to get "database is locked". See my answer for more details.Johnsson
@evan sqlite has a "busy timeout" . If you set it to nonzero, you will never see this message even if many threads are accessing the db... unless those threads fail to close a transaction. holding transactions and connections open kills sqlite "concurrency"Cyrus
can you suggest one or two cases that qualify as "bad use of the API"? @SteamshipSimultaneous
@Simultaneous not closing (even read-only) cursors as soon as possible would be such an example. See the link "more details" at the end of the answer to see a complete illustration.Steamship
Thank you: the top answer is absolutely terrible to be there without additional clarification: the first part of your answer covers it well. SQlite is extremely robust for the overwhelming majority of local storage usage cases. Even for small websites with hundreds of visitors it might not be worth it going further than it.Fallingout
So happy you wrote this answer; I was about to write one but found you had already provided this feedback. I came here because I was facing this error; I had a hunch that MY code had a problem rather than sqlite, and I found that to be true (fixed). I have been running a very busy mission-critical warehouse on a single sqlite db behind my custom REST-based .net app server for 4 years now and never had a problem (1 table even has about a million rows). People are too quick to dismiss sqlite; if I could, I would run this damn database on supercomputers.Maddiemadding
Hey, I am getting this error in django, where django handles all the db queries. Is there a way to manually close the cursor in django?Talus
@MeetGondaliya django is calling it for you unless you are interacting with the db yourself.Elexa
M
49

The practical reason for this is often that the Python or Django shell has opened a connection to the DB and it wasn't closed properly; killing your terminal access often frees it up. I had this error while running command-line tests today.

Edit: I get periodic upvotes on this. If you'd like to kill access without rebooting the terminal, then from commandline you can do:

from django import db
db.connections.close_all()
Mope answered 22/10, 2013 at 11:52 Comment(5)
how to fix it without killing terminal? Any idea?Amersham
@neuronet close your connection in shell?Calix
I had to set DJANGO_SETTINGS_MODULE before the db function call: os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<subfolder_with_setings.json>.settings") Otherwise, IMHO best answer hereDispend
+1 for the db.connections.close_all() tip. I was looking for something that would unlock the database before I shell out to a cleanup script in tearDown(). This fixed it. Thanks.Squire
I'm not sure what this snippet does and it did not solve my problem, but in order to run it without getting errors I had to run from django.conf import settings settings.configure() from here.Misname
E
10

As others have said, there is another process that is using the SQLite file and has not closed the connection. If you are using Linux, you can see which processes are using the file (for example db.sqlite3) with the fuser command as follows:

$ sudo fuser -v db.sqlite3
                     USER        PID ACCESS COMMAND
/path/to/db.sqlite3:
                     user        955 F....  apache2

If you want to stop the processes to release the lock, use fuser -k which sends the KILL signal to all processes accessing the file:

sudo fuser -k db.sqlite3

Note that this is dangerous as it might stop the web server process in a production server.

Thanks to @cz-game for pointing out fuser!

Earful answered 6/5, 2019 at 7:34 Comment(2)
This works fine, Thanks :) sudo fuser -k app.db in my caseLeveille
sudo fuser -k app.db works in my case. Thanks a lotSpline
J
8

I encountered this error message in a situation that is not (clearly) addressed by the help info linked in patrick's answer.

When I used transaction.atomic() to wrap a call to FooModel.objects.get_or_create() and called that code simultaneously from two different threads, only one thread would succeed, while the other would get the "database is locked" error. Changing the timeout database option had no effect on the behavior.

I think this is due to the fact that sqlite cannot handle multiple simultaneous writers, so the application must serialize writes on its own.

I solved the problem by using a threading.RLock object instead of transaction.atomic() when my Django app is running with a sqlite backend. That's not entirely equivalent, so you may need to do something else in your application.

Here's my code that runs FooModel.objects.get_or_create simultaneously from two different threads, in case it is helpful:

from concurrent.futures import ThreadPoolExecutor

import configurations
configurations.setup()

from django.db import transaction
from submissions.models import ExerciseCollectionSubmission

def makeSubmission(user_id):
    try:
        with transaction.atomic():
            e, _ = ExerciseCollectionSubmission.objects.get_or_create(
                student_id=user_id, exercise_collection_id=172)
    except Exception as e:
        return f'failed: {e}'

    e.delete()

    return 'success'


futures = []

with ThreadPoolExecutor(max_workers=2) as executor:
    futures.append(executor.submit(makeSubmission, 296))
    futures.append(executor.submit(makeSubmission, 297))

for future in futures:
    print(future.result())
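
For comparison, here is a minimal standalone sketch of the fix. It uses plain sqlite3 and threading rather than my Django models, so the table and user ids are made up; the essence is the same: all writes go through one shared lock, which serializes them:

```python
import os
import sqlite3
import tempfile
import threading
from concurrent.futures import ThreadPoolExecutor

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")  # throwaway db
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE submissions (user_id INTEGER)")
conn.commit()
conn.close()

write_lock = threading.RLock()  # one lock shared by all writer threads

def make_submission(user_id):
    # Serialize writes: only one thread touches SQLite at a time.
    with write_lock:
        conn = sqlite3.connect(db_path)
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO submissions VALUES (?)", (user_id,))
        conn.close()
    return "success"

with ThreadPoolExecutor(max_workers=2) as executor:
    results = list(executor.map(make_submission, [296, 297]))
```

With the lock in place, both submissions succeed instead of one of them dying with "database is locked".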
Johnsson answered 6/9, 2019 at 18:14 Comment(2)
sqlite has a "busy timeout" . If you set it to nonzero, you will never see this message even if many threads are accessing the db... unless those threads fail to close a transaction. holding transactions and connections open kills sqlite "concurrency"Cyrus
I have the same problem: I use transaction.atomic() and a 500 s timeout, but in my test I get the database locked error after 2 seconds already. I have not understood why. If I make a few requests, I can see that transaction.atomic() does work...Bumpkin
H
7

I got this error when using a database file saved under WSL (\\wsl$ ...) while running a Windows Python interpreter.

You can either not save the database in your WSL tree, or use a Linux-based interpreter in your distro.

How answered 15/5, 2021 at 14:44 Comment(1)
This worked for me too, copied the sqlite file from WSL to a Windows directory and it started working.Maddis
R
6

I was facing this issue in my flask app because I opened the database in SQLite Browser and forgot to write the changes.

If you have also made any changes in SQLite Browser, then click on Write Changes and everything will be fine.


Rociorock answered 19/10, 2021 at 17:19 Comment(0)
T
3

For me, it was resolved once I closed the Django shell that had been opened using python manage.py shell

Tecu answered 6/2, 2019 at 2:59 Comment(0)
B
2

This can also happen if you are connected to your sqlite db via the DB browser plugin in PyCharm. Disconnecting will solve the problem.

Benevolence answered 7/1, 2019 at 15:21 Comment(0)
R
2

I got the same error! One of the reasons was that the DB connection was not closed, so check for unclosed DB connections. Also, check that you committed the DB before closing the connection.

Reseta answered 2/9, 2019 at 13:11 Comment(0)
T
2

I had a similar error, right after the first instantiation of Django (v3.0.3). None of the recommendations here worked, apart from:

  • deleting the db.sqlite3 file (losing the data there, if any),
  • python manage.py makemigrations
  • python manage.py migrate

Btw, if you want to just test PostgreSQL:

docker run --rm --name django-postgres \
  -e POSTGRES_PASSWORD=mypassword \
  -e PGPORT=5432 \
  -e POSTGRES_DB=myproject \
  -p 5432:5432 \
  postgres:9.6.17-alpine

Change the settings.py to add this DATABASES:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproject',
        'USER': 'postgres',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

...and add database adapter:

pip install psycopg2-binary

Then the usual:

python manage.py makemigrations
python manage.py migrate
Targett answered 14/2, 2020 at 22:50 Comment(0)
S
2

Check if your database is opened on another DB Browser.

If it is opened on an other application, then close the application and run the program again.

Sabba answered 5/6, 2021 at 9:3 Comment(0)
P
1

Just close (stop) and open (start) the database. This solved my problem.

Phares answered 6/5, 2020 at 21:35 Comment(0)
N
1

I found this worked for my needs (thread locking), YMMV:

conn = sqlite3.connect(database, timeout=10)

https://docs.python.org/3/library/sqlite3.html

sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])

When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).
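
To see the parameter in action, here is a small self-contained sketch (the file path is a throwaway example): one connection holds a write transaction while a second one, given a very short timeout, gives up quickly with the familiar error:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")  # throwaway db

writer = sqlite3.connect(db_path, timeout=10)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()
writer.execute("BEGIN IMMEDIATE")          # take the write lock and hold it
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(db_path, timeout=0.1)  # give up after ~0.1 s
try:
    other.execute("BEGIN IMMEDIATE")       # second writer: waits, then fails
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

writer.commit()                            # release the lock...
other.execute("BEGIN IMMEDIATE")           # ...and now this succeeds
other.rollback()
```

A larger timeout only makes the second connection wait longer; if the first transaction never ends, it still fails eventually.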

Novick answered 18/8, 2020 at 4:4 Comment(0)
H
1

In my case, I had added a new record manually and saved it; then, when I tried to add a new record through the shell again, this time it worked perfectly. Check it out:

In [7]: from main.models import Flight

In [8]: f = Flight(origin="Florida", destination="Alaska", duration=10)

In [9]: f.save()

In [10]: Flight.objects.all() 
Out[10]: <QuerySet [<Flight: Flight object (1)>, <Flight: Flight object (2)>, <Flight: Flight object (3)>, <Flight: Flight object (4)>]>
Headstand answered 21/11, 2020 at 8:54 Comment(0)
A
1

Actually, I faced the same problem: when I used transaction.atomic() with select_for_update(), I got the error "OperationalError: database is locked".

After many tries, searches, and readings of the Django docs, I found that the problem comes from SQLite itself: it does not support the select_for_update method, as the Django docs say. Kindly have a look at the following URL and read it deeply:

https://docs.djangoproject.com/en/dev/ref/databases/#database-is-locked-errors

When I moved to MySQL, everything went fine.

As the Django docs also say, "database is locked" may happen when a database timeout occurs; they recommend changing the database timeout by setting up the following option:

'OPTIONS': {
    # ...
    'timeout': 20,
    # ...
}

Finally, I recommend using MySQL/PostgreSQL even if you are working in a development environment.

I hope this is helpful for you.

Andrew answered 23/3, 2021 at 4:27 Comment(0)
C
0

In my case, I had not saved a database operation I performed within the SQLite Browser. Saving it solved the issue.

Casserole answered 13/9, 2018 at 21:41 Comment(0)
D
0

A very unusual scenario, which happened to me.

There was infinite recursion, which kept creating the objects.

More specifically, using DRF, I was overriding create method in a view, and I did

def create(self, request, *args, **kwargs):
    ....
    ....

    return self.create(request, *args, **kwargs)
Dysphemism answered 19/4, 2020 at 20:33 Comment(0)
E
0

A lot of answers are already available here, but I want to share my case as well; it may help someone.

I had opened a connection in my Python API to update values, and I would close it only after receiving the server's response. The problem was that I opened another connection to do some other operation on the server before closing the first one.

Estis answered 27/6, 2020 at 9:2 Comment(0)
F
0

If you get this error while using manage.py shell, one possible reason is that you have a development server running (manage.py runserver) which is locking the database. Stopping the server while using the shell has always fixed the problem for me.

Fancied answered 4/8, 2020 at 13:9 Comment(0)
D
0

I got this error when attempting to create a new table in SQLite but the session object contained uncommitted (though flushed) changes.

Make sure to either:

  1. Commit the session(s) before creating a new table
  2. Close all sessions and perform the table creation in a new connection
  3. ...
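
The session here is an ORM concept, but the underlying effect is easy to reproduce with plain sqlite3 (the file path below is a throwaway example): an uncommitted write on one connection blocks table creation on another until it is committed:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")  # throwaway db

a = sqlite3.connect(db_path)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()
a.execute("INSERT INTO t VALUES (1)")        # written, but NOT committed

b = sqlite3.connect(db_path, timeout=0.1)
try:
    b.execute("CREATE TABLE u (y INTEGER)")  # blocked by a's open transaction
    blocked = False
except sqlite3.OperationalError:
    blocked = True

a.commit()                                   # commit the pending changes first...
b.execute("CREATE TABLE u (y INTEGER)")      # ...then creating the table succeeds
b.commit()
```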
Drachm answered 11/5, 2021 at 8:39 Comment(0)
H
0

@Shilp Thapak's answer is correct: the reason for the error is that you did not write your manual changes to the data in your DB Browser for SQLite before running your application.

If you didn't write the changes in whatever SQL client you are using, you can still create the engine but

engine.connect()

will throw the operational error about the database being locked.

You can check whether your engine can connect by checking for the existence of a rollback journal. By default, a rollback journal is created at the start of a transaction and deleted at its end.

It exists in the same directory as your database, has the same name as the database file, and the suffix "-journal" appended.

(That is, unless the journal mode has been changed, under Journal mode in the Edit Pragmas panel in DB Browser for SQLite.)

You can check the existence of the temp file like so:

if os.path.isfile('your-database.sqlite-journal'):
    print("The database is locked. Please write your changes in your SQL client before proceeding.\n")

Read more about temporary files here.

So no need to close the server or DB Browser for SQLite for that sake. In fact, as long as all the changes are written, you can have several clients connected to the database simultaneously and still run your application at the same time.

Hypocrite answered 21/1, 2022 at 12:52 Comment(0)
D
0

In my case, I had deployed the database to a host using gunicorn, and I had used only one worker, so the database was locked. Add this to gunicorn_config.py:

workers=3

Dilks answered 18/12, 2023 at 20:20 Comment(0)
C
0

I met the very same problem on a production Django server. We fixed the problem by shrinking the SQLite DB using a DB tool such as DB Browser.

Copal answered 16/1 at 9:50 Comment(1)
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.Kenzie
H
-2

UPDATE django version 2.1.7

I got this error sqlite3.OperationalError: database is locked using pytest with django.

Solution:

If we are using the @pytest.mark.django_db decorator, what it does is create an in-memory DB for testing.

It is named file:memorydb_default?mode=memory&cache=shared. We can get this name with:

from django.db import connection
db_path = connection.settings_dict['NAME']

To access this database and also edit it, do:

Connect to the database:

with sqlite3.connect(db_path, uri=True) as conn:
    c = conn.cursor()

Use uri=True to interpret the path as a URI specifying the SQLite database file to be opened.

To avoid the error, activate transactions in the decorator:

@pytest.mark.django_db(transaction=True)

Final function:

from django.db import connection

@pytest.mark.django_db(transaction=True)
def test_mytest():
    db_path = connection.settings_dict['NAME']
    with sqlite3.connect(db_path, uri=True) as conn:
        c = conn.cursor()
        c.execute('my amazing query')
        conn.commit()
    assert ... == ....
Hypophosphite answered 17/4, 2019 at 21:38 Comment(1)
Are you saying that in-memory sqlite databases never raise the "database is locked" error? This answer is confusing because the original question doesn't involve pytest and I don't know what pytest.mark.django_db does. The sqlite docs don't say that in-memory databases have any different concurrency constraints.Johnsson
U
-2

I just needed to add alias sqlite='sqlite3' to my ~/.zshrc

I then deleted the partially-failed creation of the virtualenv in ~/.pyenv/versions/new-virtualenv and reran pyenv virtualenv <name>, and it worked swimmingly.

Untie answered 9/10, 2022 at 0:3 Comment(0)
F
-10

try this command:

sudo fuser -k 8000/tcp
Fugazy answered 15/1, 2018 at 11:56 Comment(1)
-1, Downvoted as it offers no explanation as what this solution does and how, while also making assumptions about the port that is being usedCountermeasure

© 2022 - 2024 — McMap. All rights reserved.