Need to load the whole PostgreSQL database into RAM
How do I put my whole PostgreSQL database into RAM for faster access? I have 8 GB of memory and I want to dedicate 2 GB to the DB. I have read about the shared_buffers setting, but it only caches the most accessed fragments of the database. I need a solution where the whole DB is put into RAM, any read happens from the RAM copy, and any write first goes to the RAM copy and is then written to the DB on the hard drive (something like the default fsync = on with shared_buffers in the PostgreSQL configuration settings).

Flub answered 2/1, 2009 at 15:4 Comment(2)
Btw, it's for a website for which I am using Python and the Django framework.Flub
Does anyone have a 2022 answer for this? RAM prices have fallen dramatically, so it's actually feasible. Mainly because I don't want to run both pgsql and Redis.Frankenstein

I have asked myself the same question for a while. One of the disadvantages of PostgreSQL is that it does not seem to support an IN MEMORY storage engine the way MySQL does...

Anyway, I ran into an article a couple of weeks ago describing how this could be done, although it only seems to work on Linux. I can't vouch for it since I have not tried it myself, but it does seem to make sense, because a PostgreSQL tablespace is indeed assigned to a mounted location.

However, even with this approach, I am not sure you could put your index(es) into RAM as well; I do not think MySQL forces HASH index use with its IN MEMORY tables for nothing...

I also wanted to do something similar to improve performance, since I am also working with huge data sets. I am using Python; it has dictionary data types, which are basically hash tables of {key: value} pairs. Using these is very efficient and effective. Basically, to get my PostgreSQL table into RAM, I load it into such a Python dictionary, work with it, and persist it back to the database once in a while; it's worth it if used well.
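A minimal sketch of that load-into-a-dict pattern. The stdlib sqlite3 module stands in for PostgreSQL here so the example is self-contained; with PostgreSQL you would use a driver such as psycopg2 and the same pattern (with `INSERT ... ON CONFLICT` instead of `INSERT OR REPLACE`). The table name `items` and its `(id, value)` schema are made up for illustration.

```python
import sqlite3

def load_table(conn):
    """Read the whole table into a {key: value} dict held in RAM."""
    return dict(conn.execute("SELECT id, value FROM items"))

def persist(conn, cache):
    """Write the in-RAM dict back to the database in one batch."""
    conn.executemany(
        "INSERT OR REPLACE INTO items (id, value) VALUES (?, ?)",
        cache.items(),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'a'), (2, 'b')")

cache = load_table(conn)   # all reads from here on hit RAM
cache[2] = "b-updated"     # work on the dict...
cache[3] = "c"
persist(conn, cache)       # ...and flush to disk once in a while
```

The trade-off is the usual one: reads and writes against the dict are fast, but anything changed since the last `persist` call is lost on a crash.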

If you are not using Python, I am pretty sure there is a similar dictionary-mapping data structure in your language.

Hope this helps!

Isla answered 4/1, 2009 at 6:29 Comment(1)
The article is no longer available at that link. Archive.org has it: web.archive.org/web/20080517112936/http://…Dives

If you are pulling data by id, use memcached (http://www.danga.com/memcached/) in front of PostgreSQL.
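A sketch of the read-through cache pattern this suggests. A plain dict stands in for the memcached client here so the example is self-contained (a real client exposes the same get/set shape), and `fetch_from_db` is a made-up placeholder for an actual `SELECT ... WHERE id = %s` against PostgreSQL.

```python
cache = {}  # stand-in for a memcached client

def fetch_from_db(item_id):
    # placeholder for a real PostgreSQL lookup by primary key
    return f"row-{item_id}"

def get_item(item_id):
    value = cache.get(item_id)          # 1. try the cache first
    if value is None:
        value = fetch_from_db(item_id)  # 2. miss: hit PostgreSQL
        cache[item_id] = value          # 3. populate for next time
    return value

get_item(42)   # first call misses and goes to the database
get_item(42)   # second call is served from memory
```

Only the hot, id-addressed rows end up in RAM, which is usually what you want rather than the whole database.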

Skiing answered 14/1, 2009 at 15:7 Comment(0)

Set up an old-fashioned RAM disk and tell Postgres to store its data there.

Be sure you back it up well, though.
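A hedged sketch of what that looks like on Linux: mount a tmpfs and create a tablespace on it. The paths, size, and names are illustrative only, and anything placed in this tablespace disappears on reboot or unmount, so treat it as disposable.

```shell
sudo mkdir -p /mnt/pg_ram
sudo mount -t tmpfs -o size=2G tmpfs /mnt/pg_ram
sudo chown postgres:postgres /mnt/pg_ram

# then, in psql:
#   CREATE TABLESPACE ramspace LOCATION '/mnt/pg_ram';
#   CREATE TABLE hot_data (...) TABLESPACE ramspace;
```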

Sarsen answered 2/6, 2009 at 22:33 Comment(1)
wouldn't they lose the data on a server restart?Knitwear

Perhaps something like a Tangosol Coherence cache, if you're using Java.

Jerrylee answered 2/1, 2009 at 15:9 Comment(0)

With only an 8 GB database, if you've already optimized all the SQL activity and you're ready to solve query problems with hardware, I suggest you're in trouble. This is just not a scalable solution in the long term. Are you sure there is nothing you can do to make substantial differences on the software and database design side?

Bemire answered 4/1, 2009 at 6:59 Comment(0)

I haven't tried this myself (yet) but:

Theoretically, it should be possible to combine the two.

If you do this, you might also want to tweak seq_page_cost and random_page_cost to reflect the relative storage costs. See https://www.postgresql.org/docs/current/runtime-config-query.html
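Roughly what that tweak looks like; the values below are illustrative only. With data on a RAM disk, a random page read costs about the same as a sequential one, so the two settings can be brought close together.

```sql
-- Illustrative values, not a recommendation:
ALTER SYSTEM SET seq_page_cost = 1.0;
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();  -- apply without a restart
```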

The pre-existing advice about query optimization and increasing shared_buffers still stands, though. Chances are that if you're having these problems on a database this small, simply putting it into RAM isn't the right fix.

Guildsman answered 26/8, 2022 at 17:13 Comment(0)

One solution is to use Fujitsu's version of PostgreSQL, which supports in-memory columnstore indexes... https://www.postgresql.fastware.com/in-memory-columnar-index-brochure

But it costs a lot....

Or run MS SQL Server with the In-Memory tables feature.... Even the free Express edition has it!

Overvalue answered 31/8, 2022 at 14:43 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.