KafkaTimeoutError: Failed to update metadata after 60.0 secs

I have a high-throughput Kafka producer use case where I want to push thousands of JSON messages every second.

I have a 3-node Kafka cluster, I am using the latest kafka-python library, and I have the following method to produce messages:

from json import dumps
import logging

from kafka import KafkaProducer

logger = logging.getLogger(__name__)


def publish_to_kafka(topic):
    data = get_data(topic)  # get_data() is my own helper that yields dicts
    producer = KafkaProducer(bootstrap_servers=['b1', 'b2', 'b3'],
                             value_serializer=lambda x: dumps(x).encode('utf-8'),
                             compression_type='gzip')
    try:
        for obj in data:
            producer.send(topic, value=obj)
    except Exception as e:
        logger.error(e)
    finally:
        producer.close()

My topic has 3 partitions.

The method works correctly sometimes, but at other times it fails with the error "KafkaTimeoutError: Failed to update metadata after 60.0 secs."

What settings do I need to change to get it to work smoothly?

Spermous answered 4/6, 2020 at 15:37 Comment(1)
Can you share your Kafka broker configuration (server.properties)? Also, when you say that it sometimes fails, do you mean with the exact same topic? – Pigskin

  1. If the topic does not exist, you are producing to it, and automatic topic creation is disabled, this error can occur.

    Possible resolution: in the broker configuration (server.properties) set auto.create.topics.enable=true (note: this is the default in Confluent Kafka). Alternatively, create the topic explicitly before producing.

  2. Another cause can be network congestion or slow links, if updating metadata from the Kafka brokers takes more than 60 seconds.

    Possible resolution: increase the producer configuration max.block.ms (max_block_ms in kafka-python), for example to 120000 (120 seconds); see the sketch after this list.

  3. Check whether your broker(s) are going down for some reason (for example, too much load) and why they cannot respond to metadata requests. You can typically see this in the broker's server.log file.
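
For the kafka-python client from the question, a minimal sketch of points 1 and 2 might look like the following. It reuses the 'b1'/'b2'/'b3' brokers from the question; the topic name 'my-topic', the partition count, and the replication factor are placeholders, and kafka-python spells the producer setting max_block_ms:

from kafka import KafkaProducer
from kafka.admin import KafkaAdminClient, NewTopic

brokers = ['b1', 'b2', 'b3']
topic_name = 'my-topic'  # placeholder

# Point 1: create the topic explicitly instead of relying on auto.create.topics.enable
admin = KafkaAdminClient(bootstrap_servers=brokers)
if topic_name not in admin.list_topics():
    admin.create_topics([NewTopic(name=topic_name, num_partitions=3, replication_factor=3)])
admin.close()

# Point 2: let the producer wait longer than the default 60 s for metadata
producer = KafkaProducer(bootstrap_servers=brokers,
                         max_block_ms=120000)  # 120 seconds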

Superorder answered 4/6, 2020 at 18:22 Comment(0)

This is not exactly an answer to the original question (which describes the error happening only intermittently), but you can also get the "KafkaTimeoutError: Failed to update metadata after 60.0 secs." error if the Kafka topic is blocked by Kafka ACLs. You can list your ACLs using

./kafka-acls.sh --bootstrap-server <kafka_server_1>:<port>,<kafka_server_2>:<port> --list

OR

./kafka-acls.sh --authorizer-properties zookeeper.connect=<zookeeper_server_1>:<port>,<zookeeper_server_2>:<port> --list
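
If you just want to check quickly whether metadata for a topic can be fetched at all (whether the failure comes from ACLs, a missing topic, or unreachable brokers), a small kafka-python probe like the following can help. This is a sketch only, with placeholder broker and topic names:

from kafka import KafkaProducer
from kafka.errors import KafkaTimeoutError

producer = KafkaProducer(bootstrap_servers=['b1', 'b2', 'b3'], max_block_ms=10000)
try:
    # partitions_for() blocks for up to max_block_ms waiting for topic metadata and
    # raises the same "Failed to update metadata" KafkaTimeoutError on failure
    print(producer.partitions_for('my-topic'))
except KafkaTimeoutError as e:
    print('Metadata fetch failed (check ACLs and topic existence):', e)
finally:
    producer.close()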

Regards!

Major answered 12/12, 2023 at 17:7 Comment(0)
