MQ Queue with multiple consumers but only one active
We have one MQ queue which receives messages from an external system outside our control. Our system, which processes the incoming messages, is a critical one and needs to be up and running 24x7 no matter what.

The order in which the incoming messages are processed is also non-negotiable, which means we must process them in exactly the order they arrived.

To make sure our system is highly available, we deployed it across a number of physical machines, each able to process those messages.

Once the messages reach our system, we need a mechanism that keeps their processing in order while also gaining some performance from the parallel processing. For us the performance gain is nice to have, but it is rather a side effect: our main goal is high availability while preserving the correct processing order.

My idea was to have, on every single machine, an MDB able to process the incoming messages, but with only one consumer active at a time.

We are using WebSphere MQ as a JMS provider and WebSphere Application Server 8.5 to deploy our application.

Having multiple consumers listen to the same queue does not seem to be a workable solution: when messages arrive in bulk they are distributed round-robin to all consumers, there is no way to control how that happens, and the messages easily go out of sequence.

When I manually stopped all the listeners but one, the messages were obviously processed in order. But manually shutting listeners down and starting them up is definitely not an HA solution.

We could put monitoring processes in place to check the health of the system and shut things down or start them up as required, but this still looks too fragile to me. What we actually want is to have all listeners up and running but only one receiving messages. If that one goes down for whatever reason, another one standing by becomes active and starts processing messages.

Initially we considered using a topic rather than a queue, but this comes with other issues:

  1. we cannot control the source of our messages
  2. given our high message volume, the subscribers would have to be durable, and a subscriber coming back up after an outage would face a large backlog of pending messages
  3. the input queues are already part of a cluster, and changing all the infrastructure would require a lot of work

Anyway, in my view there has to be an existing pattern for situations like this. Any help or suggestion would be greatly appreciated.

The solution does not have to be a specific MQ one, any idea is welcome.

Thanks in advance

Avaria answered 19/9, 2013 at 6:59 Comment(0)

Create a second queue, we'll call it the "control queue." Into this queue, put a single message, we'll call it the "token." Change application processing as follows:

  1. Listen on the control queue for a message.
  2. Get the token from the control queue under syncpoint.
  3. Put the same token message back on the control queue, also under syncpoint.
  4. Process a transaction from the normal input queue, also under syncpoint.
  5. COMMIT the messages.
  6. Loop.

The COMMIT completes the transaction on the input queue and makes the token available to the other MDBs. No processing of the input queue can occur except by the MDB that has the token under syncpoint. However you can have any number of MDBs waiting on the token. A failure of any one of them allows the others to take over instantly.

No need to use XA, by the way. WMQ's single-phase COMMIT works great with this.
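The steps above can be illustrated without a broker. The sketch below simulates the token mechanism with `java.util.concurrent` queues standing in for the MQ control and input queues (the names `TokenDemo` and `drain` are illustrative, not from the answer): several workers compete, but only the one holding the token may consume, so output order matches input order.

```java
import java.util.*;
import java.util.concurrent.*;

public class TokenDemo {
    // Many workers, one token: only the worker holding the token
    // may take a message from the input queue.
    static List<Integer> drain(List<Integer> input, int workers) throws InterruptedException {
        BlockingQueue<Object> control = new ArrayBlockingQueue<>(1);
        control.put(new Object());                 // the single token message
        BlockingQueue<Integer> in = new LinkedBlockingQueue<>(input);
        List<Integer> out = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                while (true) {
                    Object token = control.take();  // step 2: get the token
                    Integer msg = in.poll();        // step 4: take next input message
                    if (msg != null) out.add(msg);  // ...and "process" it
                    control.put(token);             // steps 3+5: COMMIT frees the token
                    if (msg == null) return null;   // input drained; let others exit too
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return new ArrayList<>(out);
    }
}
```

In the real JMS version, getting the token, putting it back, and consuming the input message all happen in one transacted session, so a crashed consumer automatically returns the token to the control queue.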

Gorizia answered 19/9, 2013 at 18:42 Comment(2)
Thanks a lot, Rob. This looks like a very good idea, and quite an interesting one too. I need to create a POC spike, and as soon as I get it working I will come back with my findings. – Avaria
Thanks Rob. Everything worked exactly as desired. I noticed the time to process the whole test lot doubled compared with a single MDB consumer, and was about eight times slower compared with all four of my consumers running in parallel (but getting messages out of sequence). Anyway, it works, and I think there are ways to improve the processing time. Still, it's a pity there is no way to attach a sequence number to the message when it is written to the queue; that would have solved my problem very nicely. Great suggestion, and thank you very much. – Avaria

When applications consume from the queue through their MDB listeners, we can restrict them by defining the queue with DEFSOPT(EXCL). This makes sure that only one application can consume messages from that queue.

If we wish to restrict access to only one instance of the application, define the queue as NOSHARE, so that only one instance of one application can get hold of messages on the queue at a time. The others get their turn when the current one releases it.
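In MQSC terms, and assuming a hypothetical queue name of APP.INPUT.QUEUE, the two settings described above would be applied roughly like this (a sketch, not a verified configuration for this environment):

```
ALTER QLOCAL(APP.INPUT.QUEUE) DEFSOPT(EXCL)
ALTER QLOCAL(APP.INPUT.QUEUE) NOSHARE
```

DEFSOPT(EXCL) sets the default share option used when an application opens the queue with MQOO_INPUT_AS_Q_DEF, while NOSHARE forbids shared input outright.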

Emissivity answered 15/11, 2016 at 13:35 Comment(1)
Thanks for your suggestion. I wish I were in a position to verify how it works. Unfortunately, it is not easy for me to get a WAS instance and an MQ queue manager installed on my laptop to test your solution. – Avaria

In my opinion, synchronizing multiple consumers is not a big problem and is the most efficient solution. I don't know where the processing result has to be recorded (maybe a JMS queue again?), but I would try to use a lightweight agent before that point. You can use timestamps or implement a counter over JMS to preserve order. The consumers could execute in parallel and then post to a support queue, and a single agent can then reorder the messages using a QueueBrowser and a transaction. This agent should be watchdogged.

Alessandro
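The counter-based reordering this answer suggests can be sketched as a small resequencer: messages may arrive out of order, but each carries a sequence number, and the agent releases them strictly in sequence. This is a minimal illustration (the `Resequencer` class and its methods are made up for the example, not part of any JMS API):

```java
import java.util.*;

public class Resequencer {
    // Buffers out-of-order arrivals; releases them strictly by sequence number.
    private final PriorityQueue<long[]> pending =
        new PriorityQueue<>(Comparator.comparingLong(a -> a[0]));
    private long next = 1;                        // next sequence number to release
    private final List<Long> released = new ArrayList<>();

    void accept(long seq, long payload) {
        pending.add(new long[]{seq, payload});
        // Release every contiguous run starting at the expected number.
        while (!pending.isEmpty() && pending.peek()[0] == next) {
            released.add(pending.poll()[1]);
            next++;
        }
    }

    List<Long> released() { return released; }
}
```

A sequence number works where a timestamp cannot, because (as noted in the comments below) several messages can share the same millisecond, while a counter is strictly monotonic.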

Mundane answered 19/9, 2013 at 9:54 Comment(2)
Thanks Alessandro. Can you please be more specific about what you meant by a lightweight agent? Is this a small consumer that will attach the counter or analyze the timestamps? – Avaria
Just a short note to mention that I tried to use the JMSTimestamp to resequence the messages, but it did not work. Analyzing one month's worth of data showed quite a lot of cases in which we receive more than one message in the same millisecond; the maximum was 22 messages per millisecond. – Avaria

© 2022 - 2024 — McMap. All rights reserved.