To be more specific about my application: the shared data are mostly persistent data such as monitoring status and configurations -- no more than a few hundred items -- updated and read frequently, but at no more than 1 or 2 Hz. The processes are all local to the same machine.
EDIT 1: more info

- Processes are expected to poll on the set of data they are interested in (i.e. monitoring).
- Most of the data persist for the lifetime of the program, but some (e.g. configs) must be restored after a software restart.
- Each piece of data is updated by its owner only (assume one owner per datum).
- The number of processes is small too (no more than 10).
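To make the single-owner, poll-to-read model concrete, here is a minimal sketch (class and method names are hypothetical, and this is only an in-process analogue -- real cross-process sharing would need shared memory, an embedded database, or a broker in place of the `AtomicReference`):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: one owner publishes immutable snapshots at ~1-2 Hz;
// any number of readers poll at their own rate without blocking the owner.
public class StatusBoard {
    private final AtomicReference<Map<String, String>> snapshot =
            new AtomicReference<>(Collections.emptyMap());

    // Called only by the single owner of this data set.
    public void publish(Map<String, String> updated) {
        // Store a defensive, read-only copy so readers never see a partial update.
        snapshot.set(Collections.unmodifiableMap(new HashMap<>(updated)));
    }

    // Readers poll this; each call returns a consistent snapshot.
    public Map<String, String> read() {
        return snapshot.get();
    }
}
```

The point of the sketch is that with one owner per datum there is no write contention to arbitrate, which is part of why a full database feels heavy here.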
Although using a database is notably more reliable and scalable, it has always seemed to me like overkill, or too heavy, when all I do with it is share data within an application. Message passing with e.g. JMS also involves a middleware part, but it is more lightweight and has a more natural, flexible communication API. I also think implementing event notification and the command pattern is easier with messaging.
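As an illustration of the event-notification point, a publish/subscribe topic can be sketched in a few lines (this is an in-process stand-in for a JMS topic, not actual JMS code; the `Topic` class and its methods are hypothetical):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical in-process analogue of a JMS topic: each subscriber gets its
// own queue, and an event published once is delivered to every subscriber.
public class Topic {
    private final List<BlockingQueue<String>> subscribers =
            new CopyOnWriteArrayList<>();

    // A subscriber registers and then blocks or polls on its own queue.
    public BlockingQueue<String> subscribe() {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        subscribers.add(q);
        return q;
    }

    // Fan the event out to all current subscribers.
    public void publish(String event) {
        for (BlockingQueue<String> q : subscribers) {
            q.offer(event); // unbounded queue, so offer never rejects
        }
    }
}
```

The contrast with the database approach is that here notification is push-based; with a database, consumers would have to poll tables for changes.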
It would be a great help if anyone could give me an example of circumstances under which one would be preferable to the other.
For example, I know persistent data are more readily shared between processes with a database, although it is also doable with messaging by distributing the data across processes and/or storing it in, say, XML files.
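For the XML-file option, the standard library already covers the simple case: `java.util.Properties` can round-trip a small config map through an XML file, which would handle the restore-after-restart requirement without a database (a minimal sketch; the `ConfigFile` wrapper class is hypothetical):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Sketch: persist a small configuration map as an XML file so it survives restarts.
public class ConfigFile {
    public static void save(Properties props, File file) throws IOException {
        try (OutputStream out = new FileOutputStream(file)) {
            props.storeToXML(out, "app config"); // standard XML properties format
        }
    }

    public static Properties load(File file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(file)) {
            props.loadFromXML(in);
        }
        return props;
    }
}
```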
And according to http://en.wikipedia.org/wiki/Database-as-IPC and http://tripatlas.com/Database_as_an_IPC, using a database in place of message passing is an anti-pattern, but neither elaborates on, e.g., how bad the performance hit of a database can be compared to message passing.
I have gone through several previous posts that asked a similar question, but I am hoping for an answer focused on design justification. That said, from the questions I have read so far, a lot of people do use a database for IPC (or implement a message queue on top of a database).
Thanks!