I am having a hard time figuring out a reliable and scalable solution for a webhook dispatch system.
The current system uses RabbitMQ with a single queue for webhooks (let's call it `events`), which consumers read from and dispatch. This worked for some time, but now there are a few problems:
- If one user generates too many events, they fill up the queue and other users don't receive their webhooks for a long time.
- If I split events across multiple queues (by URL hash), it reduces the likelihood of the first problem, but it still happens from time to time when a very busy user lands in the same queue as others.
- If I try to put each URL into its own queue, the challenge is dynamically creating those queues and assigning consumers to them. As far as the RabbitMQ documentation goes, the API is very limited in filtering for non-empty queues or for queues that have no consumers assigned.
- As far as Kafka goes, from everything I have read about it, the situation within a single partition would be the same.
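To make the second point concrete, here is a minimal sketch of the URL-hash routing I'm describing (the shard count and queue naming are hypothetical, and I'm using a stable hash rather than Python's process-salted `hash()`):

```python
import hashlib

NUM_QUEUES = 8  # hypothetical shard count

def queue_for_url(url: str) -> str:
    """Pick a shard queue by hashing the webhook URL.

    A stable hash keeps the URL-to-queue mapping consistent across
    producer processes and restarts, so per-URL ordering is preserved.
    """
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    shard = int.from_bytes(digest[:4], "big") % NUM_QUEUES
    return f"events.{shard}"

# The problem: a hot URL shares its shard with whichever other URLs
# happen to hash into the same bucket, so those users still get
# blocked behind the busy one.
```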
So, the question is: is there a better way or system for this purpose? Maybe I am missing a very simple solution that would keep one user from interfering with another?
Thanks in advance!