MySQL maximum memory usage
Asked Answered
S

7

127

I would like to know how it is possible to set an upper limit on the amount of memory MySQL uses on a Linux server.

Right now, MySQL keeps taking up more memory with every new query, so that it eventually runs out of memory. Is there a way to place a limit so that MySQL never uses more than a specified amount of memory?

Sisal answered 24/7, 2009 at 16:28 Comment(5)
MySQL does not "take up memory for every new query and eventually run out". Memory usage is much more complex than that.Lillian
For me this blog post helped to see what affects the minimum and maximum amount of RAM dedicated to MySQL: tech.labelleassiette.com/…Shutdown
It is now 13 years later. MySQL is [mostly] well configured "out of the box".Lillian
@RickJames When I see max_connect_errors at 1 Million still from time to time, it makes me wonder how the publishing organization ever expects to earn 'credibility'. Why would anyone EVER allow 1 Million attempts to guess a password? I advise 10 for the limit to frustrate the hacker/cracker after a reasonable number of mistakes. They will be locked out until server restart or FLUSH HOSTS enables another attempt.Comanche
Simple: Hackers. I agree that 1M is unreasonably high. Most servers get very few connection errors. I have seen few get past half a million, more specifically: Connection_errors_internalLillian
B
194

MySQL's maximum memory usage very much depends on hardware, your settings and the database itself.

Hardware

The hardware is the obvious part. The more RAM the merrier, faster disks ftw. Don't believe those monthly or weekly newsletters though. MySQL doesn't scale linearly - not even on Oracle hardware. It's a little trickier than that.

The bottom line is: there is no general rule of thumb for what is recommended for your MySQL setup. It all depends on the current usage and the projections.

Settings & database

MySQL offers countless variables and switches to optimize its behavior. If you run into issues, you really need to sit down and read the (f'ing) manual.

As for the database -- a few important constraints:

  • table engine (InnoDB, MyISAM, ...)
  • size
  • indices
  • usage

Most MySQL tips on stackoverflow will tell you about 5-8 so-called important settings. First off, not all of them matter - e.g. allocating a lot of resources to InnoDB and then not using InnoDB doesn't make a lot of sense because those resources are wasted.

Or - a lot of people suggest upping the max_connections variable -- well, little do they know it also implies that MySQL will allocate more resources to cater to those connections -- if they are ever needed. The more obvious solution might be to close the database connection in your DBAL, or to lower the wait_timeout to free those threads.
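A rough sketch of that, with placeholder values you'd want to tune against your own workload:

-- check how many connections you actually reach before raising max_connections
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW VARIABLES LIKE 'max_connections';

-- free idle threads sooner instead of reserving room for more of them
SET GLOBAL wait_timeout = 60;        -- seconds an idle connection may linger
SET GLOBAL max_connections = 150;    -- placeholder; only as many as you really need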

If you catch my drift -- there's really a lot, lot to read up on and learn.

Engines

Table engines are a pretty important decision; many people forget about them early on and then suddenly find themselves fighting with a 30 GB MyISAM table which locks up and blocks their entire application.

I don't mean to say MyISAM sucks, but InnoDB can be tweaked to respond nearly as fast as MyISAM, and it offers things like row-level locking on UPDATE, whereas MyISAM locks the entire table when it is written to.
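If you want to check which engine a table uses, or convert it, something along these lines works (the table name is just an example):

SHOW TABLE STATUS LIKE 'orders';     -- the Engine column tells you InnoDB vs. MyISAM
ALTER TABLE orders ENGINE = InnoDB;  -- rewrites the table; can take a while on big tables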

If you're at liberty to run MySQL on your own infrastructure, you might also want to check out Percona Server because, along with a lot of contributions from companies like Facebook and Google (they know fast), it also includes Percona's own drop-in replacement for InnoDB, called XtraDB.

See my gist for percona-server (and -client) setup (on Ubuntu): http://gist.github.com/637669

Size

Database size is very, very important -- believe it or not, most people on the Intarwebs have never handled a large and write-intensive MySQL setup, but those do really exist. Some people will troll and say something like, "Use PostgreSQL!!!111", but let's ignore them for now.

The bottom line is: judging from the size, decisions about the hardware have to be made. You can't really make an 80 GB database run fast on 1 GB of RAM.

Indices

It's not: the more, the merrier. Only the indices that are actually needed should be created, and their usage has to be checked with EXPLAIN. Granted, MySQL's EXPLAIN is really limited, but it's a start.
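A quick sketch of that check (table and column names are made up):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- look at the key column: NULL means no index is used and the query scans the whole table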

Suggested configurations

About these my-large.cnf and my-medium.cnf files -- I don't even know who those were written for. Roll your own.

Tuning primer

A great start is the tuning primer. It's a bash script (hint: you'll need Linux) which takes the output of SHOW VARIABLES and SHOW STATUS and wraps it into hopefully useful recommendations. If your server has been running for some time, the recommendations will be better since there will be data to base them on.
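For reference, the raw data it works from is plain MySQL that you can look at yourself:

SHOW VARIABLES;        -- the current configuration
SHOW GLOBAL STATUS;    -- counters accumulated since the server started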

The tuning primer is not magic sauce though. You should still read up on all the variables it suggests changing.

Reading

I really like to recommend the mysqlperformanceblog. It's a great resource for all kinds of MySQL-related tips. And it's not just MySQL; they also know a lot about the right hardware and recommend setups for AWS, etc. These guys have years and years of experience.

Another great resource is planet-mysql, of course.

Brietta answered 21/10, 2010 at 13:14 Comment(1)
I don't know about tuning primer, how does it compare to mysqltuner ?Deluxe
S
39

We use these settings:

/etc/my.cnf
innodb_buffer_pool_size = 384M
key_buffer = 256M
query_cache_size = 1M
query_cache_limit = 128M
thread_cache_size = 8
max_connections = 400
innodb_lock_wait_timeout = 100

for a server with the following specifications:

Dell Server
CPU cores: Two
Processor(s): 1x Dual Xeon
Clock Speed: >= 2.33GHz
RAM: 2 GBytes
Disks: 1×250 GB SATA
Sciurine answered 14/9, 2010 at 13:11 Comment(5)
I think you (and the author you link to) have query_cache_size and query_cache_limit the wrong way around. You're telling MySQL: allocate a 1MB cache but don't put any queries in it that are larger than 128MB. dev.mysql.com/doc/refman/5.0/en/query-cache-configuration.htmlTrachoma
I would lower max_connections. I would decide which engine to use, and not allocate lots of space for both.Lillian
some of these are deprecated with mysql 8, not sure what to update with. just adding innodb_buffer_pool_size and table_definition_cache helped a lot thoughHumblebee
query cache is deprecated, those settings were off by default, and now that feature is removed from mysql altogetherHumblebee
Ain't key_buffer_size instead of key_buffer ?Clinic
A
20

mysqld.exe was using 480 MB of RAM. I found that adding this parameter to my.ini

table_definition_cache = 400

reduced memory usage from 400,000+ KB down to 105,000 KB.
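For what it's worth, on a stock installation this belongs in the [mysqld] section of my.ini/my.cnf (bundles like WAMP may use a different section name, see the comments below):

[mysqld]
table_definition_cache = 400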

Audiology answered 27/6, 2014 at 8:29 Comment(3)
What section does this go under? I added it to mine and the service refused to start.Cana
Nevermind, I moved it under [wampmysqld] and it worked great and reduced the memory I was using significantly. I think it may have sped up my localhost pageloads in the process as well, they seem faster now.Cana
Though the default and minimum is 400, what brought it higher than 400 in your case?Wine
R
19

Database memory usage is a complex topic. The MySQL Performance Blog does a good job of covering your question, and lists many reasons why it's hugely impractical to "reserve" memory.

If you really want to impose a hard limit, you could do so, but you'd have to do it at the OS level as there is no built-in setting. In Linux, you could utilize ulimit, but you'd likely have to modify the way MySQL starts in order to impose this.
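A minimal sketch of that idea, assuming a wrapper script around mysqld_safe; the value is a placeholder, and be aware that hitting the limit makes allocations fail, so mysqld may abort rather than degrade gracefully:

# hypothetical start-up wrapper: cap the address space before mysqld starts
ulimit -v 1048576    # virtual memory limit in KB, roughly 1 GB here
exec mysqld_safe "$@"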


The best solution is to tune your server down, so that a combination of the usual MySQL memory settings will result in generally lower memory usage by your MySQL installation. This will of course have a negative impact on the performance of your database, but some of the settings you can tweak in my.ini are:

key_buffer_size
query_cache_size
query_cache_limit
table_cache
max_connections
tmp_table_size
innodb_buffer_pool_size

I'd start there and see if you can get the results you want. There are many articles out there about adjusting MySQL memory settings.
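As a starting point only, a pared-down my.cnf/my.ini for a small box might look like the following. The numbers are illustrative assumptions, not recommendations, so adjust them against your own workload (and the query cache settings only apply to versions that still have a query cache):

[mysqld]
key_buffer_size         = 16M
query_cache_size        = 8M
query_cache_limit       = 1M
table_cache             = 64    # table_open_cache from 5.1 on, see the edit below
max_connections         = 50
tmp_table_size          = 16M
innodb_buffer_pool_size = 128M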


Edit:

Note that some variable names have changed in the newer 5.1.x releases of MySQL.

For example:

table_cache

Is now:

table_open_cache
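A quick way to see which spelling your server understands:

SHOW VARIABLES LIKE 'table%cache';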
Randle answered 24/7, 2009 at 16:55 Comment(4)
Hi! Thanks for your answer. I have noticed that the equation that people quote is the following: key_buffer_size+(read_buffer_size+sort_buffer_size)*max_connections=Total Memory. I have set the following: key_buffer_size=128M, read_buffer_size=1M, sort_buffer_size=2M, max_connections=120, and the total memory on the server is 512M. However, after many queries, the free memory has gone as low as 12M and would probably continue to go down with further usage. Is there a reason why this is so and can it be prevented? Thanks!Sisal
Or maybe I need to take into consideration not the total memory on the server (512M) but the free memory (i.e. memory available after loading all OS related and other programs)?Sisal
If you're going to modify tmp_table_size with the intention of increasing the size of temp tables that can be stored in RAM, remember to also increase the max_heap_table_size - as MySQL uses the minimum of the two...Untaught
@TimothyMilsud - No formula like that really works. And most servers run quite fine when a formula claims too much RAM is being used.Lillian
P
7

About how MySQL uses memory: https://dev.mysql.com/doc/refman/8.0/en/memory-use.html

in /etc/my.cnf:

[mysqld]
...

performance_schema = 0

table_cache = 0
table_definition_cache = 0
max_connect_errors = 10

query_cache_size = 0
query_cache_limit = 0

...

Works well on a server with 256 MB of memory.

Pittel answered 29/8, 2015 at 14:42 Comment(5)
Why is the table_definition_cache = 0? Some explanation would be nice. And you're basically not caching queries...same effect if you query_cache_type = 0 :)Cockchafer
@KhomNazid Cache is eating memory, in my case server had only 256Mb memory, so this config helps to save it. More about memory dev.mysql.com/doc/refman/8.0/en/memory-use.html "MySQL also requires memory for the table definition cache. The table_definition_cache system variable defines the number of table definitions that can be stored in the table definition cache. If you use a large number of tables, you can create a large table definition cache to speed up the opening of tables. The table definition cache takes less space and does not use file descriptors, unlike the table cache."Heavyduty
This didn't start for me, i had to comment out table_cache / query_cache_size / query_cache_limit, then it started.Adamson
@shilovk Do you have any good reason to enable a hacker/cracker 10000 attempts to guess a password trying to break in to your system? That is what you have allowed when you use max_connect_errors = 10000. Suggestion, limit the devil to 10 to retain your sanity.Comanche
@shilovk Thanks for a more reasonable suggestion.Comanche
B
1

Since I do not have enough reputation points to upvote a previous answer, I concur that the "table_definition_cache = 400" answer worked on my old CentOS server.

Bennir answered 14/4, 2022 at 8:49 Comment(0)
A
0

If you are looking to optimize your Docker MySQL container, the command below may help. I was able to bring a MySQL Docker container down from a default 480 MB to a mere 100 MB.

docker run -d -p 3306:3306 \
  -e MYSQL_DATABASE=test -e MYSQL_ROOT_PASSWORD=tooor \
  -e MYSQL_USER=test -e MYSQL_PASSWORD=test \
  -v /mysql:/var/lib/mysql \
  --name mysqldb mysql \
  --table_definition_cache=100 \
  --performance_schema=0 \
  --default-authentication-plugin=mysql_native_password

Altheta answered 18/1, 2020 at 10:17 Comment(0)
