Getting "PreconditionFailed - inequivalent arg 'x-max-priority' for queue" error when trying to set up priority queues with Celery+RabbitMQ

I have RabbitMQ set up with two queues: low and high. I want my Celery workers to consume from the high-priority queue before consuming tasks from the low-priority queue. I get the following error when trying to push a message to RabbitMQ:

>>> import tasks
>>> tasks.high.apply_async()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/task.py", line 570, in apply_async
    **options
  File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/base.py", line 756, in send_task
    amqp.send_task_message(P, name, message, **options)
  File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/amqp.py", line 552, in send_task_message
    **properties
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 181, in publish
    exchange_name, declare,
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/connection.py", line 510, in _ensured
    return fun(*args, **kwargs)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 194, in _publish
    [maybe_declare(entity) for entity in declare]
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 194, in <listcomp>
    [maybe_declare(entity) for entity in declare]
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 102, in maybe_declare
    return maybe_declare(entity, self.channel, retry, **retry_policy)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/common.py", line 121, in maybe_declare
    return _maybe_declare(entity, channel)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/common.py", line 145, in _maybe_declare
    entity.declare(channel=channel)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 609, in declare
    self._create_queue(nowait=nowait, channel=channel)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 618, in _create_queue
    self.queue_declare(nowait=nowait, passive=False, channel=channel)
  File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 653, in queue_declare
    nowait=nowait,
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/channel.py", line 1154, in queue_declare
    spec.Queue.DeclareOk, returns_tuple=True,
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/abstract_channel.py", line 80, in wait
    self.connection.drain_events(timeout=timeout)
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 500, in drain_events
    while not self.blocking_read(timeout):
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 506, in blocking_read
    return self.on_inbound_frame(frame)
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame
    callback(channel, method_sig, buf, None)
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 510, in on_inbound_method
    method_sig, payload, content,
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/abstract_channel.py", line 126, in dispatch_method
    listener(*args)
  File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/channel.py", line 282, in _on_close
    reply_code, reply_text, (class_id, method_id), ChannelError,
amqp.exceptions.PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'high' in vhost '/': received none but current is the value '10' of type 'signedint'

Here is my Celery configuration (celeryconfig.py):

import ssl

from kombu import Exchange, Queue

broker_url = "amqps://"
result_backend = "amqp://"
include = ["tasks"]
task_acks_late = True
task_default_rate_limit = "150/m"
task_time_limit = 300
worker_prefetch_multiplier = 1
worker_max_tasks_per_child = 2
timezone = "UTC"
broker_use_ssl = {
    'keyfile': '/usr/local/share/private/my_key.key',
    'certfile': '/usr/local/share/ca-certificates/my_cert.crt',
    'ca_certs': '/usr/local/share/ca-certificates/rootca.crt',
    'cert_reqs': ssl.CERT_REQUIRED,
    'ssl_version': ssl.PROTOCOL_TLSv1_2,
}

task_default_priority = 5
task_queue_max_priority = 10
task_queues = [
    Queue('high', Exchange('high'), routing_key='high',
          queue_arguments={'x-max-priority': 10}),
]
task_routes = {'tasks.high': {'queue': 'high'}}

I have a tasks.py script with the following tasks defined:

from __future__ import absolute_import, unicode_literals
from celery_app import celery_app

@celery_app.task
def low(queue='low'):
    print("Low Priority")

@celery_app.task(queue='high')
def high():
    print("HIGH PRIORITY")

And my celery_app.py script:

from __future__ import absolute_import, unicode_literals
from celery import Celery
from celery_once import QueueOnce
import celeryconfig

celery_app = Celery("test")
if __name__ == '__main__':
    celery_app.start()

I am starting the Celery workers with this command:

celery -A celery_app worker -l info --config celeryconfig --concurrency=16 -n "%h:celery" -O fair -Q high,low

I'm using:

  • RabbitMQ: 3.7.17
  • Celery: 4.3.0
  • Python: 3.6.7
  • OS: Ubuntu 18.04.3 LTS bionic
Wolffish asked 27/8, 2020 at 0:02

I recently got stuck on the same issue and found this question, so I decided to post a possible solution for anyone else who finds it in the future.

The error message means that the queue was originally declared with x-max-priority 10, but the current declaration contains no x-max-priority at all. A similar issue with x-expires has a good explanation of the same behaviour:

Celery insists that every client know in advance how a queue was created.

To fix this issue you can try one of the following:

  • change task_queue_max_priority (which defines the default value of a queue's x-max-priority) or get rid of it;
  • declare the low queue with queue_arguments={'x-max-priority': 10}, as you did for the high queue (see the sketch below).

For me the problem was solved once all queue declarations matched the previously created queues.
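
For illustration, here is a minimal sketch of how the queue section of celeryconfig.py could look with both queues declared consistently. The low exchange, its routing key, and the tasks.low route are assumptions that simply mirror the existing high queue:

from kombu import Exchange, Queue

# Default priority settings applied by Celery to queues it declares.
task_default_priority = 5
task_queue_max_priority = 10

# Declare both queues explicitly with the same x-max-priority argument,
# so every producer and worker sends RabbitMQ an identical declaration.
task_queues = [
    Queue('high', Exchange('high'), routing_key='high',
          queue_arguments={'x-max-priority': 10}),
    Queue('low', Exchange('low'), routing_key='low',  # assumed to mirror 'high'
          queue_arguments={'x-max-priority': 10}),
]

task_routes = {
    'tasks.high': {'queue': 'high'},
    'tasks.low': {'queue': 'low'},  # assumed route for the low-priority task
}

Whichever values you pick, they have to match any queue that already exists on the broker; otherwise the existing queue must be deleted (or the broker reset) before the new declaration will be accepted.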

Linage answered 18/10, 2022 at 13:35. Comments (2):
Thanks @zok, I think I solved the issue by re-creating the queues. - Wolffish
Re-creating the queues helps. If you're using RabbitMQ in Docker, you may want to run docker exec rabbitmq rabbitmqctl stop_app && docker exec rabbitmq rabbitmqctl reset && docker exec rabbitmq rabbitmqctl start_app, because the RabbitMQ Docker container stores queue information on a mounted volume. - Michalmichalak
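
If a full rabbitmqctl reset is too drastic, a rough kombu sketch along these lines can delete just the mismatched queues and re-declare them with the expected arguments. The broker URL below is a placeholder; the queue names, exchanges, and x-max-priority value are taken from the question:

from kombu import Connection, Exchange, Queue

# Placeholder URL - substitute your real amqps:// broker URL and SSL options.
with Connection('amqp://guest:guest@localhost//') as conn:
    channel = conn.channel()
    for name in ('high', 'low'):
        queue = Queue(name, Exchange(name), routing_key=name,
                      queue_arguments={'x-max-priority': 10})
        bound = queue.bind(channel)
        bound.delete()   # drops the old queue and any messages still in it
        bound.declare()  # re-creates it with the expected x-max-priority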
