Very Slow Eloquent Insert/Update Queries in Laravel

I have a Laravel application that must insert/update thousands of records per second inside a for loop, but my database manages only 100-150 writes per second. I have increased the amount of RAM dedicated to the database, with no luck.

Is there any way to increase MySQL's write rate to thousands of records per second?

Please suggest optimal configuration settings for performance tuning.

And please do not downvote the question; my code is correct. It is not a code problem, because I have no such issue with MongoDB, but I have to use MySQL.

My storage engine is InnoDB.

Norsworthy answered 14/6, 2017 at 18:58 Comment(11)
dev.mysql.com/doc/refman/5.7/en/insert-optimization.html #9819771 serverfault.com/questions/118504/…Aluminiferous
@Aluminiferous Thanks, I will check these out.Norsworthy
Batch processing is your friend. Maybe do 10 to 100 inserts in a single statement? ON DUPLICATE KEY UPDATE could work for updates unless better is needed. NOTE: fewer transactions is also generally good if you're using transactions.Edmonds
What makes you think it's slow? Also, you are comparing Mongo with MySQL, which is apples to oranges. Show some sample data for the inserts, or better yet the loop you mentioned.Voncile
@Voncile It does hundreds of operations per second and Ramin wants (or needs) thousands; therefore, by definition, it's an order of magnitude "too slow".Edmonds
@Voncile Because it's not even 1,000 transactions/s on a machine with 8 GB of RAM and a Core i5 CPU. It should not be like this, in my opinion.Norsworthy
I have read in a Stack Overflow question that MySQL is capable of thousands of transactions/s.Norsworthy
@Edmonds I have tried transactions for my inserts & updates, but the results were no different. It has to be some misconfiguration.Norsworthy
@Ramin If you can isolate to mostly insert transactions and just modify the insert code to do 2 inserts per execute call, that might give you an idea of whether batching would help. Of course, you're right that if it's a tuning problem it'll be a lot harder for us to help from a generic standpoint. Also, are you 100% sure autocommit is off?Edmonds
@Edmonds thanks for the helpNorsworthy
Dup of dba.stackexchange.com/questions/176301/…Homeostasis

Inserting rows one at a time, and autocommitting each statement, has two overheads.

Each transaction has overhead, probably more than the cost of a single insert. So inserting multiple rows in one transaction is the trick. This requires a code change, not a configuration change.

Each INSERT statement has overhead; a single-row insert is roughly 90% overhead and 10% actual insert.

The optimum is 100-1000 rows inserted per transaction.
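As a rough Laravel sketch of that idea (the measurements table and its columns are made up for illustration; they are not from the question):

    <?php

    use Illuminate\Support\Facades\DB;

    // $rows is an array of associative arrays, e.g.
    // ['sensor_id' => 1, 'reading' => 42.5, 'created_at' => '2017-06-14 18:58:00'].
    // Chunk the rows so each transaction commits a few hundred of them,
    // instead of paying the transaction overhead once per row.
    foreach (array_chunk($rows, 500) as $chunk) {
        DB::transaction(function () use ($chunk) {
            // One multi-row INSERT for the whole chunk, rather than
            // one Eloquent save() per record.
            DB::table('measurements')->insert($chunk);
        });
    }

Each chunk is one multi-row statement and one commit, so the per-statement and per-transaction overheads are amortised over 500 rows instead of being paid 500 times.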

For rapid inserts:

  • Best is LOAD DATA -- if you are starting with a .csv file. If you must build the .csv file first, then it is debatable whether the overhead of writing the file cancels out this approach's benefit.
  • Second best is multi-row INSERT statements: INSERT INTO t (a,b) VALUES (1,2), (2,3), (44,55), .... I recommend 1000 per statement, and COMMIT each statement. This is likely to get you past 1000 rows per second being inserted.

Another problem... Since each index is updated as the row is inserted, you may run into I/O thrashing while doing this. InnoDB automatically "delays" updates to non-unique secondary indexes (no need for INSERT DELAYED), but the work is eventually done. (So RAM size and innodb_buffer_pool_size come into play.)
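Since the question also asks for configuration, a starting point for an 8 GB machine might look something like the snippet below. These values are illustrative, not a tuned recommendation, and innodb_flush_log_at_trx_commit = 2 trades up to a second of durability for write throughput:

    # my.cnf -- illustrative values for an 8 GB host dedicated to MySQL
    [mysqld]
    innodb_buffer_pool_size        = 4G    # roughly 50-70% of RAM on a dedicated DB server
    innodb_log_file_size           = 512M  # a larger redo log smooths heavy write bursts
    innodb_flush_log_at_trx_commit = 2     # flush to the OS at each commit, fsync about once per second

No amount of tuning, though, substitutes for batching the inserts as described above.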

If the "thousands" of rows/second is a one time task, then you can stop reading here. If you expect to do this continually 'forever', there are other issues to contend with. See High speed ingestion .

Homeostasis answered 29/6, 2017 at 3:17 Comment(0)

For inserts, you might want to look into the INSERT DELAYED syntax. It will increase insert performance, but it won't help with updates, and the syntax will eventually be deprecated. This post offers an alternative for updates, but it involves custom replication.

One way my company has succeeded in speeding up inserts is by writing the data to a file and then using a MySQL LOAD DATA INFILE command, though I believe we found that required the mysql client to be installed on the server.
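A rough sketch of that approach from Laravel, writing the rows out as CSV and issuing LOAD DATA as raw SQL through PDO (the measurements table is hypothetical, and LOCAL INFILE must be enabled on both the server and the PDO connection for this to run):

    <?php

    use Illuminate\Support\Facades\DB;

    // Dump the rows to a CSV file; fputcsv handles quoting and escaping.
    // The column order in the file must match the column list in the statement.
    $path = storage_path('app/measurements.csv');
    $handle = fopen($path, 'w');
    foreach ($rows as $row) {
        fputcsv($handle, [$row['sensor_id'], $row['reading'], $row['created_at']]);
    }
    fclose($handle);

    // LOAD DATA is plain SQL, so it can be issued through PDO without
    // shelling out to the mysql command-line client.
    $pdo = DB::connection()->getPdo();
    $pdo->exec(sprintf(
        "LOAD DATA LOCAL INFILE %s INTO TABLE measurements
         FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
         LINES TERMINATED BY '\\n'
         (sensor_id, reading, created_at)",
        $pdo->quote($path)
    ));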

I've also found that inserting and updating in batches is often faster. So if you're calling INSERT 2k times, you might be better off running 10 inserts of 200 rows each. This reduces locking and cuts down the amount of data and the number of calls sent over the wire.
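A sketch of batched updates using one INSERT ... ON DUPLICATE KEY UPDATE per chunk (again with a hypothetical measurements table keyed on sensor_id + created_at; newer Laravel releases also ship an upsert() helper that builds the same kind of statement for you):

    <?php

    use Illuminate\Support\Facades\DB;

    // 2,000 rows become 10 statements of 200 rows each.
    foreach (array_chunk($rows, 200) as $chunk) {
        // One "(?, ?, ?)" placeholder group per row in the chunk.
        $placeholders = implode(', ', array_fill(0, count($chunk), '(?, ?, ?)'));

        $bindings = [];
        foreach ($chunk as $row) {
            $bindings[] = $row['sensor_id'];
            $bindings[] = $row['created_at'];
            $bindings[] = $row['reading'];
        }

        // Rows that hit an existing unique key are updated instead of inserted.
        DB::statement(
            "INSERT INTO measurements (sensor_id, created_at, reading)
             VALUES {$placeholders}
             ON DUPLICATE KEY UPDATE reading = VALUES(reading)",
            $bindings
        );
    }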

Teddman answered 14/6, 2017 at 20:36 Comment(4)
Thanks, buddy. After a while I resolved this by using transactions and sending all the data in one go to the database server.Norsworthy
I will write it here laterNorsworthy
INSERT DELAYED applies to MyISAM and not InnoDB. LOAD DATA is an SQL statement, so if you can issue "raw" SQL, it should not need the mysql client. However, the overhead of writing the file may wipe out most of LOAD's benefit.Homeostasis
Unfortunately, delayed inserts and replaces were deprecated from MySQL version 5.6 onwards. Grouping queries into batches or using staging tables seems to be the best approach to increasing performance now when inserting a lot of records.Ennead
