How to implement Message Queuing Solution
I have a scenario where about 10 different message types will be enqueued and then dequeued and processed. One subscriber will need all 10 messages, but another will only need 8 of the 10. I am trying to understand the best way to set up this type of architecture. Do you create a queue for each message type so that each subscriber can subscribe to only the relevant queues, or do you put them all on the same queue and have each subscriber ignore the messages that are not relevant to it? I want to ensure the solution is flexible and scalable.

Process:

  1. 10 different XML messages will be enqueued to an IBM WebSphere MQ server.
  2. We will use .Net (most likely WCF, since WebSphere MQ 7.1 has added WCF support).
  3. We will dequeue the messages and load them into another backend DB (most likely SQL Server).
  4. The solution needs to scale well because we will be processing a very large number of messages, and this could grow (probably 40-50,000 per hour). At least a large amount for us.
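For illustration, step 3 (dequeue and load into a backend DB) can be sketched in Python, with sqlite3 standing in for SQL Server and a plain list standing in for the MQ queue; the table schema is an assumption, not from the question:

```python
import sqlite3

# Minimal sketch of step 3: drain dequeued XML messages into a database.
# sqlite3 stands in for SQL Server; the schema here is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (msg_type TEXT, payload TEXT)")

def load_into_db(batch):
    # Insert one row per message, all inside a single transaction.
    with conn:
        conn.executemany(
            "INSERT INTO messages (msg_type, payload) VALUES (?, ?)", batch)
    batch.clear()

batch = [("A", "<msg type='A'/>"), ("B", "<msg type='B'/>")]
load_into_db(batch)
```

Batching the inserts per transaction is what usually keeps the DB side from becoming the bottleneck at the volumes mentioned.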

As always greatly appreciate the info.

--S

Tjader answered 30/10, 2011 at 18:47 Comment(2)
What is different about the messages that need to be ignored? There are several different options here - selectors, topics, properties. Which to use depends on how the app or QMgr would distinguish which messages are relevant.Canikin
Hi @Canikin, the header of the message will be the same for all 10, but the content will differ. So we could look at the header to determine whether the content of the message is relevant or not. Bottom line: for two of the messages, we don't want one of the subscribers to get them.Tjader
OK, based on the comments, here's a suggestion that will scale and doesn't require much change on the apps.

On the producer side, I'd copy the message selection criteria to a message property and then publish the message to a topic. The only change that is required here to the app is the message property. If for some reason you don't want to make it publish using the native functionality, you can define an alias over a topic. The app thinks it is sending messages but they are really publications.
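The producer-side change can be sketched like this (illustrative Python, not the IBM MQ API; the `type` attribute and `MsgType` property name are assumptions standing in for whatever field the apps agree on):

```python
import xml.etree.ElementTree as ET

def build_publication(payload_xml: str) -> dict:
    """Copy the routing field out of the XML body into a message
    property, then hand the result to the publish step."""
    root = ET.fromstring(payload_xml)
    # Assumes the distinguishing value is a 'type' attribute on the root
    # element; in reality it is whatever header field the apps agree on.
    return {
        "properties": {"MsgType": root.get("type")},
        "body": payload_xml,
    }

msg = build_publication('<order type="A"><item>widget</item></order>')
```

The payload itself is untouched; only the property is added, which is why the app change is so small.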

On the consumer side you have a couple of choices. One is to create administrative subscriptions for each app and use a selector in the subscription. The messages are then funneled to a dedicated queue per consumer, based on the selection criteria. The apps think that they are simply consuming messages.

Alternatively the app can simply subscribe to the topic. This gives you the option of a dynamic subscription that doesn't receive messages when the app is disconnected (if in fact you wanted that) or a durable subscription that is functionally equivalent to the administrative subscription.
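The fan-out behavior of selector-based subscriptions can be modeled in a few lines (a language-agnostic sketch, not the WMQ API; the two excluded types are placeholders):

```python
class Topic:
    """Minimal pub/sub model: each subscription pairs a destination
    queue with an optional selector predicate over message properties."""
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, queue, selector=None):
        # selector=None behaves like an unrestricted subscription
        self.subscriptions.append((queue, selector))

    def publish(self, properties, body):
        # Fan out to every subscription whose selector matches.
        for queue, selector in self.subscriptions:
            if selector is None or selector(properties):
                queue.append((properties, body))

topic = Topic()
all_msgs, most_msgs = [], []
topic.subscribe(all_msgs)                          # subscriber wanting all 10
topic.subscribe(most_msgs,
                lambda p: p["MsgType"] not in ("I", "J"))  # skips 2 of 10

for t in "ABCDEFGHIJ":
    topic.publish({"MsgType": t}, f"<msg type='{t}'/>")
```

The key point is that the filtering happens in the messaging layer, so neither consuming app ever sees a message it has to discard.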

This solution will easily scale to the volumes you cited. Another option is for the producer not to use properties: the consumer application consumes all messages, breaks open each message payload, and decides whether to process or ignore it. In that solution the producer is still publishing to a topic. Any solution involving straight queueing forces the producer to know all the destinations: add another consumer and you must change the producer. There is also a PUT for each destination.

The worst case is a producer putting multiple messages and a consumer having to read each one to decide whether to ignore it. That option may have problems scaling, depending on how deep in the payload the selection-criteria field lies: a really long XPath expression means poor performance, and there is no way to tune WMQ to compensate, since the latency is all in the application at that point.
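The cost difference is easy to see side by side (an illustrative sketch; the envelope structure and `MsgType` property name are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical payload where the routing field is buried several levels
# deep, so the worst-case consumer must parse every body just to decide.
PAYLOAD = ("<envelope><header/><body><detail><classification>"
           "<code>TypeI</code>"
           "</classification></detail></body></envelope>")

def decide_by_payload(xml_text, ignored=("TypeI", "TypeJ")):
    # Worst case: a full XML parse plus a deep path lookup per message.
    code = ET.fromstring(xml_text).find("./body/detail/classification/code")
    return code is not None and code.text not in ignored

def decide_by_property(properties, ignored=("TypeI", "TypeJ")):
    # Best case: a single header/property lookup, no parsing at all.
    return properties.get("MsgType") not in ignored
```

At tens of thousands of messages per hour, parsing every payload adds up, while the property lookup stays effectively free.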

Best case, producer sets a message property and publishes. Consumers select on property in their subscription or an administrative subscription does this for them. Whether this solution uses application subscriptions or administrative subscriptions doesn't make any difference as far as scalability is concerned.

Canikin answered 30/10, 2011 at 19:30 Comment(0)
Creating queues is relatively 'cheap' from a resource perspective, and it is generally better to use a queue for each specific purpose, so in this case it is probably better to separate them by target client if possible. Pulling messages selectively from a queue based on some criterion (a correlation ID or similar) is usually a bad idea. The best-performing scenario in messaging is the most straightforward one: simply pull messages from the queue as they arrive, rather than peeking and receiving selectively.
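A queue-per-subscriber setup can be sketched like this (illustrative Python, not a real MQ client; the consumer names and routing rules are made up). Note the trade-off it implies: the producer itself has to know every destination.

```python
# Straight-queueing sketch: the producer routes to a named queue per
# consumer, so adding a consumer means changing the producer's table.
ROUTING = {
    "consumer_a": lambda msg_type: True,                     # wants all types
    "consumer_b": lambda msg_type: msg_type not in ("I", "J"),
}
queues = {name: [] for name in ROUTING}

def produce(msg_type, body):
    # One PUT per matching destination; cost grows with each consumer.
    for name, wants in ROUTING.items():
        if wants(msg_type):
            queues[name].append((msg_type, body))

for t in "ABCDEFGHIJ":
    produce(t, f"<msg type='{t}'/>")
```

Each consumer then drains its own queue with no peeking or selecting, which is the straightforward fast path described above.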

As to scaling, I can't speak for WebSphere MQ or other IBM products, but 40-50K messages per hour works out to only about 11-14 messages per second, which isn't particularly hard for MSMQ on Windows Server to handle, so I'd assume IBM can do that as well. Usually the bottleneck isn't the queuing platform itself but rather the process of dequeuing and processing individual messages.

Richthofen answered 30/10, 2011 at 18:55 Comment(3)
Makes sense, thanks a lot @kprobst. So it would probably be better to just create a queue for each subscriber, as you suggest above. That does seem like a good strategy. That is what I was worried about: having to partially process the message to see whether it should be pulled in or not.Tjader
Then the question becomes how do you get that message into each of the multiple queues. Do you plan to have the producer app create multiple msgs and know where each one goes? That's why I asked about how the messages can be distinguished in my comment above.Canikin
@Canikin yeah we would have to have the producer decide what goes where. Which could potentially be a bottleneck I suppose. Can only distinguish by a value (property) in the header that tells us what kind of message is contained within the body of the message.Tjader
