Amazon RDS (PostgreSQL): Optimize memory usage
There is something I did not really understand with Amazon RDS (the PostgreSQL version). Some queries take a lot of time to return their results. I have set all the relevant indexes (as verified with EXPLAIN), so I don't think it's due to my schema design.

I do not use a big machine (an m3.xlarge), as bigger ones are too expensive. My database size is about 300GB.

It seems that Postgres does not use all the available memory (only ~5GB; the "Freeable memory" metric in the console shows that there is always ~10GB freeable). I tried to tune my parameter group as proposed by tune-your-postgres-rds-instance, in particular setting effective_cache_size to 70%, but it does not change anything.

I'm probably wrong somewhere... Any ideas?

Prologue answered 6/1, 2016 at 10:57 Comment(0)

To make more memory available to your queries, you would tune work_mem. There are implications to doing that, since that much memory can be allocated per sort or hash operation, and a single query may run several of them at once.
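A safe way to evaluate this is to raise work_mem for a single session only and re-run the slow query; the sizes below are purely illustrative, not recommendations:

```sql
SHOW work_mem;                  -- see the current value first
SET work_mem = '64MB';          -- applies to this session only
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;                     -- your slow query here

-- Look for "Sort Method: external merge  Disk: ..." turning into
-- "Sort Method: quicksort  Memory: ..." - that means the sort now
-- fits in work_mem instead of spilling to disk.
```

Only once a session-level value proves helpful would you consider raising it in the RDS parameter group, where it applies to every backend.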

effective_cache_size actually doesn't allocate any memory at all. It's a planner parameter: it tells the optimizer how much data it can expect to find already cached (shared_buffers plus the OS cache), which influences plan choice.
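You can see its effect as a pure plan change, again at session level (values illustrative):

```sql
-- A low value makes repeated index page fetches look expensive,
-- pushing the planner toward sequential scans; a high value makes
-- index scans look cheaper. Neither setting allocates anything.
SET effective_cache_size = '1GB';
EXPLAIN SELECT ...;   -- note the chosen plan

SET effective_cache_size = '10GB';
EXPLAIN SELECT ...;   -- the plan may switch to an index scan
```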

"Freeable memory" being high is a good thing - it most likely means that the memory is currently being used by the operating system's page cache, which caches Postgres data files on Postgres's behalf.

You can increase shared_buffers to allow Postgres to use more of its own memory for caching, but there are limits to its effectiveness, which is why you usually don't want to dedicate more than about 25% of available memory to it.
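For reference, in an RDS parameter group shared_buffers is expressed in 8kB pages and, if I recall the default formula correctly, already works out to roughly 25% of instance memory:

```
# RDS parameter group entry (sketch - verify the default in your own
# parameter group before changing it):
#   DBInstanceClassMemory is in bytes; dividing by 32768 yields a page
#   count equal to 1/4 of RAM (32768 bytes = 4 pages of 8kB).
shared_buffers = {DBInstanceClassMemory/32768}
```

So on RDS there is usually little to gain from pushing shared_buffers higher; the "missing" memory is already doing useful work in the OS cache.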

Recreation answered 6/1, 2016 at 16:33 Comment(2)
Is there any similar answer for MS SQL Server? I want to optimize the memory usage. Currently the memory usage is a bit too high. – Dola
> that's memory per backend ... this seems to be not true, as per other answers I've just seen: each backend might use multiples of that (!) ...see dba.stackexchange.com/a/33616/166815 for details. – Sublunary

© 2022 - 2024 — McMap. All rights reserved.