Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

13

166

From a script I sent a query like this thousands of times to my local database:

update some_table set some_column = some_value

I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table. This was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.

I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:

Lock wait timeout exceeded; try restarting transaction

It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?

Looming answered 4/5, 2010 at 15:43 Comment(8)
What is the output of SHOW FULL PROCESSLIST?Jimmie
It shows only the SHOW FULL PROCESSLIST command, nothing else. It's a local development database. Nothing is running on it. I got the 'lock wait..' error message on the command line when I tried dropping the index from there.Looming
In that case you are probably creating 2 separate connections in different transactions that have to wait for each other.Jimmie
I didn't create any transactions afterwards. I killed the script, rebooted the machine and logged in from the command line to look around. Nothing else used the database except for the mysql command line client, so something must have been stuck in the table.Looming
Thanks for this - just saved me a bunch of headache; I didn't have to pull my hair out and reinstall the server. I thought that something was seriously wrong :)Auraaural
Related question: How to debug Lock wait timeout exceeded?Tartuffe
I have the same problem, except all my data is backed up in XML files. None of these solutions work for me (they tend to assume a running system, not one that rebooted), so I guess I'll have to wait out the restoration of millions of records via load xml local infile. [sigh]Fidele
great answer for similar question, listed at dba.stackexchange.com/questions/100984/…Fluxmeter
163

I had a similar problem and solved it by checking the threads that are running. To see the running threads, use the following command in the MySQL command-line interface:

SHOW PROCESSLIST;

It can also be sent from phpMyAdmin if you don't have access to the MySQL command-line interface.
This will display a list of threads with their corresponding IDs and execution times, so you can KILL the threads that are taking too long to execute. In phpMyAdmin you will have a button for stopping threads with KILL; if you are using the command-line interface, just use the KILL command followed by the thread ID, as in the following example:

KILL 115;

This will terminate the connection for the corresponding thread.
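If the offending thread is not obvious from SHOW PROCESSLIST (as the comments below note, it can even show up as Sleep), a query against the processlist table can help surface long-running or idle connections. This is just an illustrative sketch; the 60-second threshold is an assumption, not a recommendation:

SELECT id, user, host, db, command, time, state, info
FROM information_schema.processlist
WHERE time > 60
  AND id <> CONNECTION_ID()
ORDER BY time DESC;

Each id returned can then be passed to KILL as shown above.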

Hyperbolic answered 31/5, 2013 at 16:13 Comment(6)
TAKE NOTE! This was mentioned by someone on one of the many SO threads concerning this problem: Sometimes the process that has locked the table shows up as sleeping in the processlist! I was tearing my hair out until I killed all the threads that were open in the database in question, sleeping or not. That finally unlocked the table and let the update query run. The commenter mentioned something like "Sometimes a MySQL thread locks a table, then sleeps while it waits for something non-MySQL-related to happen."Hyunhz
(I added my own answer to flesh out that thought fragment here, on a related question)Hyunhz
This helped me. I had a couple of sleeping threads created by PHPStorm database management/query feature.Polacre
Great! I had a hidden process that had been there for 5 hours, which I could identify and kill.Admission
this isn't a solution any more than duct taping over an infected wound is a solution. You're not addressing the underlying root problem.Strand
The same happened in my case. I had a query that was stuck, and only after I killed all the 'Sleep' processes did the query execute. Can anybody explain why?Overbearing
71

You can check the currently running transactions with

SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`

Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:

KILL 1234;

If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
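If it is still unclear which connection each transaction belongs to, a sketch like the following (assuming MySQL 5.5 or later) joins innodb_trx against the processlist so you can see the user, host and current statement for every open transaction:

SELECT t.trx_id, t.trx_started, t.trx_mysql_thread_id,
       p.user, p.host, p.info
FROM information_schema.innodb_trx t
JOIN information_schema.processlist p ON p.id = t.trx_mysql_thread_id
ORDER BY t.trx_started;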

Heliacal answered 1/3, 2016 at 10:56 Comment(1)
You might need to use the root account to run that SQL to see which transaction is actually blocking others from accessing the tableSwig
55

Check InnoDB status for locks

SHOW ENGINE InnoDB STATUS;

Check MySQL open tables

SHOW OPEN TABLES WHERE In_use > 0;

Check pending InnoDB transactions

SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`; 

Check lock dependency - what blocks what

SELECT * FROM `information_schema`.`innodb_locks`;

After investigating the results above, you should be able to see what is locking what.
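On MySQL 5.5-5.7 you can also combine these tables to see directly which connection is blocking which; a sketch (note that, as a comment below points out, innodb_locks and innodb_lock_waits were removed in MySQL 8.0, where performance_schema.data_locks and sys.innodb_lock_waits replace them):

SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;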

The root cause of the issue might be in your code too - check the related functions, and especially their annotations, if you use a JPA provider like Hibernate.

For example, as described here, the misuse of the following annotation might cause locks in the database:

@Transactional(propagation = Propagation.REQUIRES_NEW) 
Nadbus answered 14/11, 2016 at 10:23 Comment(2)
Thank you so much! Running SELECT * FROM information_schema.innodb_trx t JOIN information_schema.processlist p ON t.trx_mysql_thread_id = p.id revealed the culprit: the locking thread came from my own IP address ... I had forgotten to close a debug console that I had left in the middle of a transaction...Mucoviscidosis
The INNODB_LOCKS table is deprecated as of MySQL 5.7.14 and is removed in MySQL 8.0. Try SELECT * FROM performance_schema.data_locks; instead.Incomer
50

This started happening to me when my database size grew and I was doing a lot of transactions on it.

The truth is there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround.

Run this:

SET GLOBAL innodb_lock_wait_timeout = 5000; 

And then this:

SET innodb_lock_wait_timeout = 5000; 
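As a comment below suggests, it may be worth noting the current value first so you can restore it later (the default is 50 seconds); a quick check, for illustration:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
SELECT @@innodb_lock_wait_timeout;

Also keep in mind that SET GLOBAL only affects connections opened after the change, while the plain SET only affects the current session.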
Chute answered 10/8, 2014 at 14:41 Comment(3)
Reference link: innodb_lock_wait_timeoutClicker
Just remember to change the values to its previous values if you don't want to keep them there forever.Fulton
This means that some of my users will have a slower experience, right?Skunk
11

When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you cannot acquire the lock, you retry for some time; if the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection, so when you try to take the lock a second time, your previous connection is still open and still holding it.

Solution: close the connection or call setAutoCommit(true) (according to your design) to release the lock.
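At the SQL level the same idea looks like this; some_table and some_column are just the placeholder names from the question, and the row id is made up for illustration:

-- with autocommit disabled, the UPDATE's row locks are held
-- until the transaction is explicitly ended
SET autocommit = 0;
UPDATE some_table SET some_column = 1 WHERE id = 42;
-- any other connection touching this row now waits, and eventually
-- fails with "Lock wait timeout exceeded; try restarting transaction"
COMMIT;  -- or ROLLBACK; either one releases the locks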

Iatrics answered 10/6, 2014 at 14:50 Comment(0)
9

Restart MySQL, it works fine.

BUT beware: if such a query gets stuck, there is a problem somewhere, for example:

  • in your query (misplaced char, cartesian product, ...)
  • very numerous records to edit
  • complex joins or tests (MD5, substrings, LIKE %...%, etc.)
  • data structure problem
  • foreign key model (chain/loop locking)
  • misindexed data

As @syedrakib said, it works, but this is not a long-term solution for production.

Beware: doing the restart can leave your data in an inconsistent state.

Also, you can check how MySQL handles your query with the EXPLAIN keyword and see whether anything there can speed up the query (indexes, complex tests, ...).
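For example, a minimal sketch against the placeholder table from the question (the literal values and the id column are made up):

EXPLAIN SELECT * FROM some_table WHERE some_column = 'x';
-- on MySQL 5.6+ the UPDATE itself can be explained:
EXPLAIN UPDATE some_table SET some_column = 'x' WHERE id = 123;

A full table scan ("type: ALL" with no usable key) on a large table is a hint that an index is missing.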

Ventricle answered 4/3, 2013 at 13:41 Comment(1)
THANK YOU. You did not solve my issue directly, but I was suffering from table locks and it was so bad. This is the only post on the whole internet that led me to the idea of checking the foreign keys. I found out that the primary key column was defined with length 11 and the foreign key columns with 10. I don't know how that could happen or why everything worked before.Thorma
3

Go to the list of processes in MySQL.

There you can see that a task is still running.

Kill that particular process, or wait until it completes.

Nit answered 25/4, 2012 at 12:2 Comment(2)
It solves the problem. But this can't be a solution for a LIVE production server...... How can we output this deadlock handling? Or how can we AVOID this deadlock from happening?Hitlerism
well, it so happens that even with PERFECTLY well-formed and PERFECTLY escaped MySQL queries, which are not even written by the developer but by the active-record interface of the framework, lock wait timeout issues can STILL happen. So I don't think writing a proper MySQL query is the factor here.Hitlerism
1

I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (a cascading delete of several related records streamlines removing all related records for a single record deletion).

I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.

I tried running the delete query for this group of records through the cPanel MySQL access on the web. I got the same error message.

My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away. Problem solved. I was then able to once again perform the operation from the Access script via the ODBC connection to MySQL.

Teetotum answered 4/4, 2012 at 23:20 Comment(0)
0

I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table: I optimized, flushed and defragmented the table (not in that order). There was no need to drop the table and restore it from backup. :)

Fascinating answered 10/5, 2011 at 13:19 Comment(1)
It might solve the problem. But having to do this every time such error occurs can't be a solution for a LIVE production server...... How can we output this deadlock handling? Or how can we AVOID this deadlock from happening?Hitlerism
0

I had the same issue. I think it was a deadlock issue with MySQL. You can just force-close the MySQL process from Task Manager. If that doesn't fix it, restart your computer. You don't need to drop the table and reload the data.

Astereognosis answered 16/3, 2012 at 16:17 Comment(2)
It solves the problem. But this can't be a solution for a LIVE production server...... How can we output this deadlock handling? Or how can we AVOID this deadlock from happening?Hitlerism
restart Apache and its services (or at least the MySQL), no need to rebootScarborough
0

Issue in my case: some rows were updated within a transaction, and before that transaction was committed, the same rows were being updated elsewhere, outside the transaction. Ensuring that all updates to those rows are made within the same transaction resolved my issue.
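A minimal two-session sketch of this situation, using the placeholder table from the question (the row id and values are made up):

-- session 1: updates a row inside a transaction and does not commit
START TRANSACTION;
UPDATE some_table SET some_column = 1 WHERE id = 42;

-- session 2: updates the same row outside that transaction; it waits on
-- session 1's row lock and eventually fails with
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
UPDATE some_table SET some_column = 2 WHERE id = 42;

-- committing (or rolling back) session 1 releases the lock
COMMIT;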

Alkmaar answered 10/8, 2020 at 18:49 Comment(1)
This is not answering the OP, since initially it is clear that the use case is running, in this case mistakenly, multiple updates in multiple, unrelated transactions.Olivarez
0

Issue resolved in my case by changing DELETE to TRUNCATE.

Issue - query:

query = "DELETE FROM Survey1.sr_survey_generic_details"
mycursor.execute(query)

Fix - query:

query = "TRUNCATE TABLE Survey1.sr_survey_generic_details"
mycursor.execute(query)
Regale answered 1/11, 2020 at 19:54 Comment(0)
0

This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and from control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and reopen them.

Drayton answered 8/4, 2022 at 15:43 Comment(0)
