Reduce memory usage in Google Cloud SQL

We have a simple MySQL db with a few tables, managed by Google Cloud SQL. The database has a few hundred million rows, and fairly light reading/writing. We are nowhere near the limits for storage, yet memory usage is climbing linearly with storage, and will soon be an issue for scalability.

Should memory usage be rising linearly with storage? What should we consider to lower our memory usage? We haven't customized any database flags.

I'm not seeing any information about memory usage and best practices in the Google Cloud documentation. It seems way too early for us to be thinking about horizontal scaling/sharding.

Flowering answered 13/12, 2018 at 0:40 Comment(4)
Memory is an important part of SQL performance. Databases will try to use as much memory as possible, which is good. If you see CPU usage become high and/or heavy disk I/O with average response times increasing, it means you are starting to reach, or have reached, the instance's capacity.Evangelia
So if our memory usage is cruising linearly towards the limit of the machine we've selected in CloudSQL, it won't cause issues and will self-correct? And we should watch CPU and I/O for indications we need to scale up?Flowering
Yes, you should monitor CPU as concurrency rises, in terms of both QPS and connection count. High disk I/O is how you know you are performing outside-of-memory operations (see the sketch below these comments). Storage, of course, is not a big issue with Cloud SQL.Coarse
@JohnHanley is there a metric that can be used for average response time? I don't think I've seen one in Stackdriver.Kilpatrick
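
Not a Stackdriver metric, but a minimal sketch of how to check the "performing outside of memory operations" condition mentioned above, using standard InnoDB status counters (plain MySQL 5.7+, nothing Cloud SQL specific):

```sql
-- Logical read requests vs. reads that actually had to go to disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

-- The same as a single miss-ratio query: a value near 0 means the
-- working set fits in the buffer pool; a rising value means queries
-- are increasingly spilling to disk I/O.
SELECT
  (SELECT VARIABLE_VALUE FROM performance_schema.global_status
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
  /
  (SELECT VARIABLE_VALUE FROM performance_schema.global_status
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
  AS buffer_pool_miss_ratio;
```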

MySQL will automatically allocate around 0.8 of the instance's memory to store data & indexes (this is the InnoDB buffer pool). This is in order to avoid heavy disk I/O and provide good response times.

So, by default, MySQL will try to hold as much data as possible in memory, which is why memory usage appears to scale linearly as more data is inserted.

This does not necessarily mean you are close to facing issues; it is really a matter of what resources your MySQL machine has. Give it 128 GB of RAM and insert ~120 GB worth of data, and it will hold ~102 GB of that data in memory (maintaining performance), while a 64 GB machine with the same 120 GB of data will surely show slower response times.
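
As a quick sanity check of the 0.8 figure on your own instance, you can inspect the buffer pool with standard MySQL commands (a minimal sketch; the size is reported in bytes, and on Cloud SQL the default is set for you):

```sql
-- How large the buffer pool is on this instance:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- How much of the pool is currently occupied by cached pages:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
```

Once Innodb_buffer_pool_pages_data approaches Innodb_buffer_pool_pages_total, the instance's reported memory usage should flatten out rather than keep climbing with storage.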

A side note:

If you store hundreds of millions of rows and are willing to compromise a little on query times (milliseconds to seconds), I would suggest checking out BigQuery. You might end up paying less than with Cloud SQL, without worrying about scale (ever..) or DBA/DevOps maintenance (memory, CPU, indexing, etc.).

Coarse answered 13/12, 2018 at 12:16 Comment(4)
We are also using BigQuery for internal analytics. My feeling is that it can't support a web app with many concurrent connections?Flowering
Yes, you are correct. It wouldn't work as you expect as the first data layer under a web app. Although it depends on what's going on in your app :)Coarse
@Coarse Could you provide a source for this: "MySQL will automatically allocate around 0.8 of the instance's memory to store data & indexes."Navel
@MAcabot, these two are good reads on the subject: dev.mysql.com/doc/refman/8.0/en/…. dev.mysql.com/doc/refman/8.0/en/…Coarse
