I have an MSMQ-based location application, where I receive position updates from units in the field; the updates are processed and written to a database.
The update process has no dependencies outside the DB, so my app can be configured with a variable number of threads. As I want the process to be robust under failure, I want to process as many messages as I can, but no more (so that if the system fails, I can pick up where I left off).
I have the app working correctly, but I've noticed that if I raise the number of threads used to process messages, my average message throughput plateaus (I measure this with performance counters) and the system uses only about 50% of the available CPU time (on a Core i7-820QM with 4 physical cores and 8 logical cores). If instead of adding threads I launch the same number of separate processes, CPU utilization does reach 100%, and the average number of events processed is much higher.
Could it be a lock contention problem? Something to do with the way Windows 7 schedules work on hyper-threaded processors? I'd like to understand the nature of the problem, and any pointers would be really appreciated.
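For context, here is a minimal sketch (in Python rather than .NET, purely for brevity) of how a single shared lock held across the actual work can produce exactly this symptom: threads stop scaling while separate processes, each with their own lock, would not. The function name and the simulated workload are hypothetical, not part of my app.

```python
import threading
import time

def measure_max_concurrency(hold_lock_during_work, n_threads=4, work_s=0.05):
    """Run n_threads workers; return the peak number of workers
    observed inside the simulated 'work' section at the same time."""
    shared_lock = threading.Lock()   # models a shared resource, e.g. one DB connection
    counter_lock = threading.Lock()  # only protects the bookkeeping counters
    state = {"active": 0, "peak": 0}

    def work():
        with counter_lock:
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
        time.sleep(work_s)           # simulated message processing
        with counter_lock:
            state["active"] -= 1

    def worker():
        if hold_lock_during_work:
            with shared_lock:        # contended path: all work is serialized
                work()
        else:
            work()                   # uncontended path: work overlaps freely

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["peak"]

print(measure_max_concurrency(True))   # lock held across the work: peak is 1
print(measure_max_concurrency(False))  # lock released: peak approaches n_threads
```

If something in the threaded version of my pipeline behaves like the first case (one lock, connection, or context shared across workers), that would explain why throughput and CPU usage only rise when I run separate processes instead.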
Note: I'm using MSMQ, Rx and Entity Framework in this project.