I'm using Rails 5.1. I have application-wide memory_store caching happening in Rails, set up in my config/environments/development.rb file:
# Enable/disable caching. By default caching is disabled.
if Rails.root.join('tmp/caching-dev.txt').exist?
  config.action_controller.perform_caching = true
  config.cache_store = :memory_store

  config.public_file_server.headers = {
    'Cache-Control' => 'public, max-age=172800'
  }
else
  config.action_controller.perform_caching = true
  config.cache_store = :memory_store
end
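With that in place, the cache works as expected within a single process — e.g. something along these lines in a Rails console (the keys and values here are just illustrative):

Rails.cache.write("example_key", "example value")
Rails.cache.read("example_key")               # => "example value"
Rails.cache.fetch("example_key") { "fallback" } # => "example value" (block not evaluated)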
This allows me to do things like
Rails.cache.fetch(cache_key) do
msg_data
end
in one part of my application (a web socket handler) and access that data in another part of my application (a controller); there's a stripped-down sketch of that pattern at the end of this question. However, what I'm noticing is that if I start my Rails server with Puma running (e.g. by including the file below as config/puma.rb) ...
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port ENV.fetch("PORT") { 3000 }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 4 }
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!

# The code in the `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run, if you are using `preload_app!`
# option you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, Ruby
# cannot share connections between processes.
#
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
... in-memory caching no longer works. In other words,
Rails.cache.fetch(cache_key)
always returns nil. I would like to have a multi-threaded Puma environment (eventually) to gracefully handle many requests. However, I'd also like my cache to work. How can I get them both to play together?
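For reference, here's a stripped-down sketch of the write/read pattern I described above (the channel and controller names are simplified stand-ins, not my actual classes):

# A web socket handler (an ActionCable channel here) caches the incoming payload.
class MessagesChannel < ApplicationCable::Channel
  def receive(data)
    cache_key = "msg/#{data['id']}"
    # fetch with a block writes the data to the store if the key is missing.
    Rails.cache.fetch(cache_key) { data }
  end
end

# A controller later tries to read the same key back.
class MessagesController < ApplicationController
  def show
    msg_data = Rails.cache.fetch("msg/#{params[:id]}")
    # With the puma.rb above, this always comes back nil,
    # even though the channel already cached the data.
    render json: msg_data
  end
end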