How do I make Rails.cache (in-memory cache) work with Puma?
I'm using Rails 5.1. I have application-wide memory_store caching happening with Rails. This is set up in my config/environments/development.rb file

  # Enable/disable caching. By default caching is disabled.
  if Rails.root.join('tmp/caching-dev.txt').exist?
    config.action_controller.perform_caching = true

    config.cache_store = :memory_store
    config.public_file_server.headers = {
      'Cache-Control' => 'public, max-age=172800'
    }
  else
    config.action_controller.perform_caching = true
    config.cache_store = :memory_store
  end

This allows me to do things like

      Rails.cache.fetch(cache_key) do
        msg_data
      end

in one part of my application (a web socket) and access that data in another part of my application (a controller). However, what I'm noticing is that if I start my Rails server with Puma running (e.g. with the file below at config/puma.rb) ...

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port        ENV.fetch("PORT") { 3000 }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 4 }

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!

# The code in `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run. If you are using the `preload_app!`
# option, you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot; Ruby
# cannot share connections between processes.
#
on_worker_boot do
  require "active_record"
  begin
    ActiveRecord::Base.connection.disconnect!
  rescue ActiveRecord::ConnectionNotEstablished
  end
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end

# Allow puma to be restarted by the `rails restart` command.
plugin :tmp_restart

... in-memory caching no longer works. In other words,

Rails.cache.fetch(cache_key)

always returns nothing. I would like to have a multi-threaded puma environment (eventually) to gracefully handle many requests. However I'd also like my cache to work. How can I get them to both play together?

Thermosetting answered 1/6, 2018 at 22:13 Comment(0)
You can't use memory_store with Puma running in clustered mode (i.e. with multiple workers); the Rails guide says so explicitly. Separate processes cannot share memory, so this stands to reason: each worker ends up with its own private copy of the cache.
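The process-isolation point can be seen with a few lines of plain Ruby, no Rails required. This sketch uses `fork` the way clustered Puma does: the child gets a copy of the parent's memory, so a write in one process never shows up in the other.

```ruby
# A plain-Ruby sketch of why :memory_store breaks under clustered Puma:
# fork copies the parent's memory, so each process mutates its own copy.
cache = {}

pid = fork do
  # The child (think: one Puma worker) writes to its private copy.
  cache[:greeting] = "hello"
  exit!(0)
end
Process.wait(pid)

# The parent (think: another worker) never sees the child's write.
puts cache.key?(:greeting)
```

This prints `false`: the child's write is invisible to the parent, just as one Puma worker's `Rails.cache.write` is invisible to the others under memory_store.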

If reducing Puma workers down to 1 is not an option, then consider using Redis or Memcached instead. The documentation in the Rails guide is quite complete in this regard - you'll need to add a gem or two to your Gemfile and update config.cache_store. You will need to install the relevant service on the box, or alternatively there are plenty of hosted service providers that will manage it for you (Heroku Redis, Redis To Go, MemCachier, etc.)
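For reference, a minimal sketch of the Redis variant. Note that the built-in :redis_cache_store only ships with Rails 5.2+ (the question uses 5.1, where the redis-rails gem provides an equivalent store); the URL below is an assumption about where Redis is running:

```ruby
# Gemfile (Rails 5.2+):
#   gem "redis"
#
# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  url: ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" }
}
```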

Outsize answered 7/6, 2018 at 16:24 Comment(8)
I am using Redis. You're saying I can pull off this memory sharing thing with Redis? – Thermosetting
@Dave: yes. Redis is ideal for this. You probably already have all the gems you need in that case, and the only thing to give some consideration to is whether you want to introduce namespacing to keep the various parts separated (not sure where else you are using it in your app. Sessions perhaps?) – Outsize
Oh, that's great news. I'm using Redis to help manage web sockets, publishing messages to a channel and whatnot. With regard to the posted question, what do I need to change "config.cache_store = :memory_store" to in order to take advantage of Redis? – Thermosetting
@Dave: just scroll down a bit more on that Rails guide page :-) guides.rubyonrails.org/…. You will want :redis_cache_store instead – Outsize
Thanks, brother! I'm going to give this a whirl. It might take me a couple of days to come back here and accept, but I'll keep you updated on my progress. – Thermosetting
To be clear, it's very possible to share memory between processes on the same machine - if more than one process does overlapping mmaps on the same file with MAP_SHARED, they share memory, or with shm_open. But if you go with Redis, then you get to share the cache across machines, not just processes. – Churrigueresque
@BarryKelly - I would be interested to see how to make this work with Rails. Do you have any reference? – Boutin
@BSeven In Ruby, low-level things that require specific OS support are idiomatically written as extensions, since gem installation can build them. I'm not aware of a gem that uses shared memory to share a cache across different processes on the same machine, but that's not surprising; it wouldn't be super useful in Rails, since people generally scale to multiple FE machines. (I'm assuming you mean shared memory, since using Redis is fairly easy.) – Churrigueresque

I don't know if you can, but don't do that in any case. Use a real cache service - memcached, for example.

http://guides.rubyonrails.org/caching_with_rails.html

config.cache_store = :mem_cache_store, "localhost" # assuming you run memcached on localhost

And... That's about it.
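A minimal sketch of the setup being suggested, assuming memcached is listening on its default port (11211); the dalli gem is the usual client:

```ruby
# Gemfile:
#   gem "dalli"
#
# config/environments/production.rb
config.cache_store = :mem_cache_store, "localhost:11211"
```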

Granitite answered 1/6, 2018 at 23:20 Comment(4)
I'm willing to entertain your suggestion, but I need some code examples and to know what I need to do to set that up given the code I posted. – Thermosetting
I'm only running my application on a single server, so it seems like overkill to install a memcached application to handle Rails cache data. If there's absolutely no way to make this work through traditional Rails, I'll come back and accept. – Thermosetting
You're thinking that memcached is overhead. It is - but it's trivial. Remember, we're talking about huge machines with tons of CPU/RAM. Running memcached is trivial and highly performant. It is what everyone uses for caching (or Redis or something) because that's what it does, and it does it well. Don't go solving problems that are solved. You have enough of your own :-) – Granitite
@Thermosetting - Using Memcached is simple. You wouldn't need any other code besides config.cache_store. It is a similar solution to Redis. The biggest drawback is that you have to make sure it is running, which may or may not be an issue for you. – Boutin

Although Redis is a great solution, another possibility is to use the FileStore cache. This could be desirable if you don't want to run Redis and would rather simplify the environment.

With this cache store, multiple server processes on the same host can share a cache.

https://guides.rubyonrails.org/caching_with_rails.html#activesupport-cache-filestore

Also, this could be backed by a RAM drive (e.g. tmpfs), which may be faster than an SSD.
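A minimal sketch of this approach; the path is hypothetical, and pointing it at a tmpfs mount gives the RAM-backed variant:

```ruby
# config/environments/production.rb
# All Puma workers on this host share the cache through the filesystem.
config.cache_store = :file_store, "/mnt/ramdisk/rails-cache"
```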

Boutin answered 7/1, 2023 at 5:2 Comment(0)
