Google Cloud SQL storage keeps growing until the disk is full, for no apparent reason

I don't know how to explain this well, but I will try.

  • I use a Google Cloud SQL Second Generation instance with a 20 GB disk.
  • I have several WordPress databases with a total size of 166.5 MB.

Right now my storage usage is 9.52 GB (with only 166.5 MB of SQL data), and it keeps growing faster and faster.

What can I do?!

[Screenshot: Cloud SQL storage usage increasing steadily over time]

UPDATE:

I solved this by:

  • Exporting the databases to a Cloud Storage bucket
  • Creating a new Cloud SQL instance
  • Importing from the bucket
  • Deleting the problematic instance

(I also updated the IP address in my applications.)
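
For reference, here is a rough sketch of those steps with today's gcloud CLI. The instance, bucket, and database names are placeholders, the tier and version flags are examples only, and the instance's service account needs write access to the bucket for the export to work:

    # Export the database from the problematic instance to a Cloud Storage bucket.
    gcloud sql export sql old-instance gs://my-bucket/dump.sql --database=mydb

    # Create a fresh Cloud SQL instance.
    gcloud sql instances create new-instance --database-version=MYSQL_5_7 --tier=db-n1-standard-1

    # Import the dump into the new instance.
    gcloud sql import sql new-instance gs://my-bucket/dump.sql --database=mydb

    # After verifying the new instance, delete the old one.
    gcloud sql instances delete old-instance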

I'm not sure where the problem came from, but it could be storage overhead from binary logs.

Next time I will check the binary logs with: mysql> SHOW BINARY LOGS;
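
For example, from any MySQL client (the host and user below are placeholders), you can list the logs and total their sizes:

    # List each binary log and its size in bytes.
    mysql -h INSTANCE_IP -u root -p -e "SHOW BINARY LOGS;"

    # Sum the File_size column (the second column) for a rough total.
    mysql -h INSTANCE_IP -u root -p -N -e "SHOW BINARY LOGS;" | awk '{ total += $2 } END { print total, "bytes" }'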

What I think Google is missing is an easy way to purge binary logs!

FINAL UPDATE:

With binary logs active, the storage of your Cloud SQL instance will grow continuously.

For anyone in the same situation: you can edit the instance and uncheck binary logging; after that, the existing binary logs will be purged.
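
If you prefer the command line over the console, the same toggle is available via gcloud; this is a minimal sketch with a placeholder instance name, and note that applying it restarts the instance:

    # Disable binary logging; Cloud SQL purges the existing binary logs
    # once the setting is applied. This triggers an instance restart.
    gcloud sql instances patch my-instance --no-enable-bin-log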

Sorry for my noob problem! :D (I'm a beginner at server administration.)

Thanks Vadim!

Guarantee answered 29/1, 2016 at 21:23 Comment(9)
There is really no way we could help with this; we have no way of knowing what your code is doing. You should probably contact Google.Aldenalder
Ok, I will do that, thanks. But what I don't understand is where the size comes from and how to see those files...Guarantee
Considering it's cloud based, I don't think there are any real "files" to speak of. In any case, something as simple as forgetting to close a connection could cause your database to increase in size. You've got to look at your code.Aldenalder
Do you have binary logs enabled? Can you show us what other flags you have enabled? For example, do you have the mysql general log enabled?Riffle
You can use "SHOW BINARY LOGS;" to see how much space is used by the binary logs.Riffle
No flags, and I have checked "Enable binary logging (for point-in-time recovery and replication)".Guarantee
I don't know where to run "SHOW BINARY LOGS"; in a shell? I manage my database with HeidiSQL... After some research, it could be "storage overhead from binary logs".Guarantee
If you don't need binary logs, disabling them will purge existing binary logs.Riffle
Yes, you're right! Thank you Vadim!Guarantee

If you have binary logs enabled, MySQL will keep a record of all changes, which is required for replication or point-in-time recovery.
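
To check whether this applies to your instance, here is a minimal check from any MySQL client (the connection details are placeholders):

    # log_bin reports ON when binary logging is enabled.
    mysql -h INSTANCE_IP -u root -p -e "SHOW VARIABLES LIKE 'log_bin';"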

If you have no need for these features, you can disable binary logs, which will purge any existing logs from your instance.

If binary logs are enabled, they will not grow indefinitely. Binary logs older than the oldest automatic backup (7 days) are purged automatically.
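
Since purging is tied to the backup window, it can help to see which backups currently exist for your instance; one way is via gcloud (the instance name below is a placeholder):

    # Binary logs older than the oldest backup listed here are
    # eligible for automatic purging.
    gcloud sql backups list --instance=my-instance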

Riffle answered 30/1, 2016 at 1:9 Comment(7)
In Google Cloud SQL Second Generation, point-in-time recovery is not active, and I don't know why the size suddenly started increasing without any change in settings. (Maybe Google is still working on that.) At some point the logs must be purged, but when? In the screenshot you can see that it just keeps increasing in size. They should specify a minimum disk size when you enable binary logs, explain how it interacts with storage growth, and maybe purge after X days.Guarantee
Binary log growth depends on your write queries. Did you start writing more data to the database around that time? Did you receive more traffic around that time?Riffle
Traffic increased around 40%, but nothing that big anyway... So the growth was too big and too sudden! I think it's something Google may still be working on (maybe a bug). I think it's a bug because point-in-time recovery is not currently active in Second Generation, and maybe that feature isn't ready yet.Guarantee
I am an engineer on Cloud SQL and I'm not aware of any reports of issues relating to binary logs. Binary log growth is related to how much data is changed in a statement, not necessarily the number of statements executed. For example, executing a single update statement that touches many rows in a large table will make an entry for every row that is affected.Riffle
@Riffle So why is there no finer control over the binary logs than just on/off? Given the huge amount of space they may take up, it ought to be possible to automatically purge binary logs after X days. "They will be purged after some time" is much too vague; it means that as the binary logs keep growing in size, it is impossible to estimate whether the space they will eventually take up is worth the storage cost, especially since it is impossible to revert any increases in storage size.Caricature
I've updated my answer to be more specific. Binary logs are purged as older backups roll off.Riffle
@vadim, I'd also tell users how to disable the feature in your answer. For those interested, you have two options using the Cloud Console: 1) go to console.cloud.google.com/sql/instances?project=<your-project> and select the database instance; then either 2a) select Edit in the top action bar, expand "Enable auto backups", uncheck "Enable binary logging", and save your changes, or 2b) open the Backups tab, click "Manage automated backups", uncheck "Enable binary logging", and save. Keep in mind that these changes require a restart of your database instance!Gastineau

I had the same issue with our GCP Cloud SQL instance. In my case the storage went from the 10 GB we first configured to 728 GB (with a corresponding increase in billing).

In my case the culprit was archived WAL Logs:

[Screenshot: Metrics Explorer showing archived WAL log growth]

Just viewing the current log size did not show this because the logs had already been cleared.

The issue is that GCP has a hidden option:

[Screenshot: the "Automatically increase storage" option for Cloud SQL]

This is checked by default (IIRC), so the storage will increase automatically during periods of high log output (in my case, this happened while upgrading the PostgreSQL server version).

The catch is that once increased, the storage cannot be decreased, leaving you with higher monthly operational costs.
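
If you would rather cap costs and accept the risk of the disk filling up, the auto-resize setting can be inspected and turned off with gcloud; this is a sketch with a placeholder instance name:

    # Show the current auto-resize setting.
    gcloud sql instances describe my-instance --format="value(settings.storageAutoResize)"

    # Disable automatic storage increases; writes can then fail if the disk fills up.
    gcloud sql instances patch my-instance --no-storage-auto-increase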

Norinenorita answered 2/6, 2023 at 4:41 Comment(0)
