Why does an insert query occasionally take so long to complete?

This is a pretty simple problem. Inserting data into the table normally works fine, except for a few times when the insert query takes a few seconds. (I am not trying to bulk insert data.) So I set up a simulation of the insert process to find out why the insert query occasionally takes more than 2 seconds to run. Joshua suggested that the index file may be being adjusted; I removed the id (primary key) field, but the delay still happens.

I have a MyISAM table: daniel_test_insert (this table starts completely empty):

create table if not exists daniel_test_insert ( 
    id int unsigned auto_increment not null, 
    value_str varchar(255) not null default '', 
    value_int int unsigned default 0 not null, 
    primary key (id) 
) engine=MyISAM;

I insert data into it, and sometimes an insert query takes > 2 seconds to run. There are no reads on this table; only writes, in serial, by a single-threaded program.

I ran the exact same query 100,000 times to find out why the query occasionally takes a long time. So far, it appears to be a random occurrence.

This query for example took 4.194 seconds (a very long time for an insert):

Query: INSERT INTO daniel_test_insert SET value_int=12345, value_str='afjdaldjsf aljsdfl ajsdfljadfjalsdj fajd as f' - ran for 4.194 seconds
status               | duration | cpu_user  | cpu_system | context_voluntary | context_involuntary | page_faults_minor
starting             | 0.000042 | 0.000000  | 0.000000   | 0                 | 0                   | 0                
checking permissions | 0.000024 | 0.000000  | 0.000000   | 0                 | 0                   | 0                
Opening tables       | 0.000024 | 0.001000  | 0.000000   | 0                 | 0                   | 0                
System lock          | 0.000022 | 0.000000  | 0.000000   | 0                 | 0                   | 0                
Table lock           | 0.000020 | 0.000000  | 0.000000   | 0                 | 0                   | 0                
init                 | 0.000029 | 0.000000  | 0.000000   | 1                 | 0                   | 0                
update               | 4.067331 | 12.151152 | 5.298194   | 204894            | 18806               | 477995           
end                  | 0.000094 | 0.000000  | 0.000000   | 8                 | 0                   | 0                
query end            | 0.000033 | 0.000000  | 0.000000   | 1                 | 0                   | 0                
freeing items        | 0.000030 | 0.000000  | 0.000000   | 1                 | 0                   | 0                
closing tables       | 0.125736 | 0.278958  | 0.072989   | 4294              | 604                 | 2301             
logging slow query   | 0.000099 | 0.000000  | 0.000000   | 1                 | 0                   | 0                
logging slow query   | 0.000102 | 0.000000  | 0.000000   | 7                 | 0                   | 0                
cleaning up          | 0.000035 | 0.000000  | 0.000000   | 7                 | 0                   | 0

(This is an abbreviated version of the SHOW PROFILE command, I threw out the columns that were all zero.)
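
For reference, a profile like the one above can be captured with MySQL's built-in session profiling (the INSERT shown is an abbreviated version of the one from the question):

```sql
SET profiling = 1;   -- enable profiling for this session

INSERT INTO daniel_test_insert SET value_int=12345, value_str='test';

SHOW PROFILES;                 -- list recent statements with their durations
SHOW PROFILE ALL FOR QUERY 1;  -- per-stage breakdown, including CPU time and page faults
```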

Now the update step has an incredible number of context switches and minor page faults. Opened_tables increases by about 1 per 10 seconds on this database (we are not running out of table_cache space).

Stats:

  • MySQL 5.0.89

  • Hardware: 32 Gigs of ram / 8 cores @ 2.66GHz; raid 10 SCSI harddisks (SCSI II???)

  • I have had the hard drives and raid controller queried: No errors are being reported. CPUs are about 50% idle.

  • iostat -x 5 reports less than 10% utilization for the hard disks; top reports a load average of about 10 over 1 minute (normal for our db machine)

  • Swap space has 156k used (32 gigs of ram)

I'm at a loss to find out what is causing this performance lag. This does NOT happen on our low-load slaves, only on our high-load master. This also happens with MEMORY and InnoDB tables. Does anyone have any suggestions? (This is a production system, so nothing exotic!)

Disengagement answered 15/9, 2010 at 23:5 Comment(4)
Do other tables/databases work fine? Might there be a bad drive (any I/O errors in system.log)? How long does the query take from the mysql command line? What about mysql mydb < lots-of-inserts.sql?Haiti
No we have this problem on other tables. I created a sample table and just dumped on it to see what happened. If I use mysql mydb < lots-of-inserts.sql, that won't give me a "profile" on it. This only happens once in a while, but our monitoring software catches it.Disengagement
What do you mean by "This same row; 100,000 times"? Do you mean a single insert takes 2 seconds, or doing the same insert 100,000 times takes 2 seconds?Nobile
I insert the same row, 100,000 times. Occasionally a single insert will take 2 or more seconds. The inserting of 100,000 rows is fine; it's that 1 in 20,000 times the query takes longer than 2 seconds. I will update the questionDisengagement

I have noticed the same phenomenon on my systems. Queries which normally take a millisecond will suddenly take 1-2 seconds. All of my cases are simple, single-table INSERT/UPDATE/REPLACE statements, never SELECTs. No load, locking, or thread build-up is evident.

I had suspected that it's due to clearing out dirty pages, flushing changes to disk, or some hidden mutex, but I have yet to narrow it down.

Also Ruled Out

  • Server load -- no correlation with high load
  • Engine -- happens with InnoDB/MyISAM/Memory
  • MySQL Query Cache -- happens whether it's on or off
  • Log rotations -- no correlation in events

The only other observation I have at this point is derived from the fact that I'm running the same db on multiple machines. I have a heavy read application, so I'm using an environment with replication; most of the load is on the slaves. I've noticed that even though there is minimal load on the master, the phenomenon occurs more there. Even though I see no locking issues, maybe it's InnoDB/MySQL having trouble with (thread) concurrency? Recall that the updates on the slave will be single-threaded.

MySQL Version 5.1.48

Update

I think I have a lead on the problem in my case. On some of my servers, I noticed this phenomenon more than on the others. Seeing what was different between the servers, and tweaking things around, I was led to the MySQL InnoDB system variable innodb_flush_log_at_trx_commit.

I found the doc a bit awkward to read, but innodb_flush_log_at_trx_commit can take the values 1, 2, or 0:

  • For 1, the log buffer is flushed to the log file for every commit, and the log file is flushed to disk for every commit.
  • For 2, the log buffer is flushed to the log file for every commit, and the log file is flushed to disk approximately every 1-2 seconds.
  • For 0, the log buffer is flushed to the log file every second, and the log file is flushed to disk every second.

Effectively, in the order (1, 2, 0), as reported and documented, you're supposed to get increasing performance in trade for increased risk.

Having said that, I found that the servers with innodb_flush_log_at_trx_commit=0 were performing worse (i.e. having 10-100 times more "long updates") than the servers with innodb_flush_log_at_trx_commit=2. Moreover, things immediately improved on the bad instances when I switched it to 2 (note you can change it on the fly).
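
For anyone checking their own servers: the variable can be inspected and changed at runtime with standard MySQL commands (changing it globally requires the SUPER privilege):

```sql
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';  -- current value

SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- takes effect immediately, lost on restart
```

To make the change permanent, set it under [mysqld] in my.cnf as well.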

So, my question is, what is yours set to? Note that I'm not blaming this parameter, but rather highlighting that its setting is related to this issue.

Tortfeasor answered 9/11, 2010 at 22:38 Comment(6)
I recently upgraded to 5.1.48. I think it was the query cache. I turned it off and things started performing much better.Disengagement
I did have a similar problem about a year ago when my query_cache was too large, so when a big chunk of data was invalidated, the whole server would lock up for up to 30 seconds. On that note, see: dom.as/2009/07/08/query-cache-tuningTortfeasor
I found the same problem on my CentOS system. With innodb_flush_log_at_trx_commit=1, my UPDATEs were taking 0.040 seconds to complete, 99% of that time was in the 'closing tables' state. When I switched to either 0 or 2, my entire UPDATE completes in 0.0002 seconds!Tightfisted
that was VERY helpful! I ran a script that inserted rows into the DB and took around 15 minutes to insert less than 20k tuples. Changing that variable from 1 to 0 inserted more than 30k rows in about 3-4 seconds!Zumstein
Maybe I'm a bit late on this, but since this reply is still suggested from the search engines, using the value 2 breaks the ACID compliance of mysql, be careful!Dashiell
I had a similar problem. Any simple queries (INSERT INTO, SELECT, UPDATE, etc.) which run on average in 5 to 15 ms would occasionally and randomly take about 1 to 2 seconds! This is unrelated to the parameter values, for sure. I do use InnoDB, but no replication. The parameters in my.cnf are all on default, and there was no specific high load on the server for these spikes to occur. When I added innodb_flush_log_at_trx_commit=2 to my.cnf, there were about 95% fewer of those spikes. This suggests that it is a disk I/O problem, due to occasional/random disk usage, I guess. Thanks!!Stepson

I had this problem using InnoDB tables. (And InnoDB indexes are even slower to rewrite than MyISAM's.)

I suppose you are running multiple other queries on some other tables, so the problem would be that MySQL has to handle disk writes to files that keep growing and needs to allocate additional space to those files.

If you use MyISAM tables I strongly suggest using

LOAD DATA INFILE 'file-on-disk' INTO TABLE `tablename` 

command; MyISAM is sensationally fast with this (even with primary keys), the file can be formatted as CSV, and you can specify the column names (or you can put NULL as the value for the auto-increment field).

See the MySQL documentation for LOAD DATA INFILE.
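
As a sketch (the file path and contents are hypothetical), loading a CSV into the question's table could look like this; the id column is omitted from the column list so auto_increment fills it in:

```sql
LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE daniel_test_insert
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(value_str, value_int);
```

Alternatively, include id in the column list and put \N (NULL) in the file for that field.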

Canicular answered 16/9, 2010 at 8:35 Comment(0)

The first tip I would give you is to disable the autocommit functionality and then commit manually.

LOCK TABLES a WRITE;
... DO INSERTS HERE
UNLOCK TABLES;

This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally, there would be as many index buffer flushes as there are INSERT statements.
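
For InnoDB, the autocommit variant of the same idea looks like this (MyISAM does not support transactions, so for MyISAM the LOCK TABLES approach above is the one that applies):

```sql
SET autocommit = 0;  -- stop committing (and flushing) after every statement

INSERT INTO daniel_test_insert SET value_int=1, value_str='a';
INSERT INTO daniel_test_insert SET value_int=2, value_str='b';
-- ... more inserts ...

COMMIT;              -- one commit for the whole batch
SET autocommit = 1;
```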

But probably the best you can do, if it is possible in your application, is a bulk insert with one single statement.

This is done via Vector Binding and it's the fastest way you can go.

Instead of:

INSERT INTO tableName VALUES ()

do:

INSERT INTO tableName VALUES (), (), (), ..., (n)

But consider this option only if parameter vector binding is possible with the MySQL driver you're using.

Otherwise I would tend toward the first possibility and LOCK the table for every 1000 inserts. Don't lock it for 100k inserts, because you'll get a buffer overflow.

Melano answered 16/9, 2010 at 8:46 Comment(6)
What programming language are you using? Perhaps I can give you an example of how to do it.Melano
I'm using Ruby. However, I'm doing the same insert 100,000 times because a few of them take so long. I'm trying to find the reason that a few of them take so long.Disengagement
So did you try to LOCK, UNLOCK the table and check how performance behaved?Melano
I removed the index so that there would not be an issue of the index. But I still had the same performance problem.Disengagement
I'm not having this problem on non-production machines, which makes me think this is due to temporary "spikes" or other global locks which are randomly interfering with the inserts. That would explain the periodic insert delays. Or perhaps it's just a badly timed "bulk" write or two somewhere else in the code.Disengagement
It helped me to use an explicit transaction instead of autocommit. Thank you. I was having problems with "query end" in the INSERT.Goodard

Can you create one more table with 400 (not null) columns and run your test again? If the number of slow inserts becomes higher, this could indicate MySQL is wasting time writing your records. (I don't know how it works, but it may be allocating more blocks, or moving something to avoid fragmentation... I really don't know.)

Odilia answered 22/9, 2010 at 14:37 Comment(3)
I added more columns, and I found that writes were pretty slow. I'm wondering if the delay has something to do with writing data to the hard disk. Our hard disk(s) appear to have spare performance capacity.Disengagement
i.e., 34 writes per second, which for a non-indexed table is pretty slow!Disengagement
I don't know, but you could run a MySQL defrag tool and see if it helps. I think MySQL is wasting your time looking for good places to put your records...Deutero

We upgraded to MySQL 5.1, and during this event the query cache became an issue with a lot of "Freeing items?" thread states. We then removed the query cache.

Either the upgrade to MySQL 5.1 or removing the query cache resolved this issue.

FYI, to future readers.

-daniel

Disengagement answered 9/12, 2010 at 23:2 Comment(0)

We hit exactly the same issue and reported here: http://bugs.mysql.com/bug.php?id=62381

We are using 5.1.52 and don't have a solution yet. We may need to turn the QC off to avoid this perf hit.

Tanta answered 22/9, 2011 at 10:28 Comment(0)

If you are doing multiple insertions at once using a for loop, then please take a break after every loop using PHP's sleep() function.

Hickson answered 22/7, 2016 at 5:39 Comment(0)

Read this on MyISAM performance: http://adminlinux.blogspot.com/2010/05/mysql-allocating-memory-for-caches.html

Search for:

'The MyISAM key block size The key block size is important' (minus the single quotes); this could be what's going on. I think they fixed some of these types of issues in 5.1.

Demilitarize answered 21/9, 2010 at 19:49 Comment(2)
The problem is that this still happens even if the keys are removed, i.e., on a table without any indexes.Disengagement
Understandable. (I think) MySQL blocks off space to write records. So, for example (a number I'm making up, since I don't know), say it blocks off enough space to write 5 records; after that space is used, MySQL needs to reserve another block of space for the next set of records. This might be where the extra execution time is happening.Demilitarize

Can you check the stats on the disk subsystem? Is the I/O saturated? This sounds like internal DB work going on, flushing stuff to disk/log.

Endurable answered 24/9, 2010 at 15:10 Comment(2)
Well, I'm not able to get precise measurements on I/O. But I have run "iostat -x 5" and I have not seen anything above 5-10% while running. There may be temporary spikes that are hidden due to "averaging". I have no idea how to query the system for finer measurements other than watching vmstat, which also has not yielded any useful informationDisengagement
I've seen issues in SQLServer in which performance does strange things; when it's not my code (lol) it usually is the disk subsystem. SQLServer has a recovery interval parameter at which SQLServer flushes things to disk. If you are really pounding in the inserts and the recovery interval is hit, performance takes a hit for a bit, which bubbles up to the application as some insert taking a long time, when it actually has nothing to do with the insert at all but with internal housekeeping in SQLServer. It could also be that the disk channel, spindles and/or disk controllers, is just at 100%Endurable

To check whether your disk is behaving badly, if you're on Windows, you can create a batch cmd file that creates 10,000 files:

@echo OFF
FOR /L %%G IN (1, 1, 10000) DO TIME /T > out%%G.txt

save it in a temp dir, like test.cmd

Enable command extensions by running CMD with the /E:ON parameter

CMD.exe /E:ON

Then run your batch and see whether the times of the first and the last out file differ by seconds or minutes.

On Unix/Linux you can write a similar shell script.
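
A minimal POSIX sh sketch of the same test (file count and names are arbitrary): it writes many small files and reports the elapsed wall-clock time, so a large or widely varying result between runs hints at disk stalls.

```shell
#!/bin/sh
# Write $count small files into a temp dir and time the whole run.
count=${1:-10000}
dir=$(mktemp -d)

start=$(date +%s)
i=1
while [ "$i" -le "$count" ]; do
    echo "test" > "$dir/out$i.txt"   # one small write per file, like TIME /T > outN.txt
    i=$((i + 1))
done
end=$(date +%s)

elapsed=$((end - start))
echo "wrote $count files in $elapsed seconds"
rm -rf "$dir"
```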

Na answered 27/9, 2010 at 15:55 Comment(0)

By any chance is there an SSD drive in the server? Some SSD drives suffer from "stutter", which could cause your symptom.

In any case, I would try to find out if the delay is occurring in MySQL or in the disk subsystem.

What OS is your server, and what file system is the MySQL data on?

Trail answered 2/10, 2010 at 5:29 Comment(1)
We have real RAIDed SCSI hard drives. We are using CentOS 5.4 and XFS for the file system / file snapshots. I've thought about using FusionIO but it never really gained traction.Disengagement

© 2022 - 2024 — McMap. All rights reserved.