I want the following:
- During startup, the master process loads a large table from a file and saves it into a shared variable. The table has 9 columns and 12 million rows, 432 MB in size.
- The worker processes run an HTTP server, accepting real-time queries against the large table.
Here is my code, which obviously does not achieve my goal.
var my_shared_var;
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Load a large table from file and save it into my_shared_var,
  // hoping the worker processes can access this shared variable,
  // so that they do not need to reload the table from file.
  // The loading typically takes 15 seconds.
  my_shared_var = load('path_to_my_large_table');
  // Fork worker processes
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // The following line actually outputs "undefined".
  // It seems each process has its own copy of my_shared_var.
  console.log(my_shared_var);
  // Then perform the query against my_shared_var.
  // The query should be performed by the worker processes,
  // otherwise the master process will become a bottleneck.
  var result = query(my_shared_var);
}
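One workaround I can see is to keep the table only in the master and have the workers forward each query over the cluster's built-in IPC channel. A minimal sketch (load and query are the same placeholders as above; the message shape is made up for illustration):

var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  var table = load('path_to_my_large_table'); // loaded once, in the master only
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork().on('message', function (msg) {
      // 'this' is the worker that sent the request; answer it with
      // a result computed against the master's copy of the table.
      this.send({ id: msg.id, result: query(table) });
    });
  }
} else {
  process.on('message', function (msg) {
    console.log(msg.result); // reply from the master
  });
  process.send({ id: 1 }); // issue a query
}

But this funnels every lookup through the master's event loop, which is exactly the bottleneck I want to avoid.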
I have tried saving the large table into MongoDB so that each process can easily access the data. But the table is so huge that it takes MongoDB about 10 seconds to complete my query, even with an index. This is too slow and not acceptable for my real-time application. I have also tried Redis, which holds data in memory. But Redis is a key-value store and my data is a table. I also wrote a C++ program to load the data into memory, and the query took less than 1 second, so I want to emulate this in node.js.
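For comparison, the brute-force fallback below (again with the placeholder load and query, and an assumed port 8000) gives C++-like in-memory speed, but every worker pays the 15-second load and holds its own 432 MB copy:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Each worker loads and keeps its own private copy of the table.
  var my_table = load('path_to_my_large_table');
  http.createServer(function (req, res) {
    // Query the in-process copy; no IPC or database round trip.
    res.end(JSON.stringify(query(my_table)));
  }).listen(8000);
}

With numCPUs workers, that is numCPUs × 432 MB of RAM, which is why I am looking for a genuinely shared, read-only structure instead.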
Is memcached a suitable choice for this data? – Poirer