Apache Kafka: Failed to Update Metadata/java.nio.channels.ClosedChannelException

I'm just getting started with Apache Kafka/Zookeeper and have been running into issues trying to set up a cluster on AWS. Currently I have three servers: one running Zookeeper and two running Kafka.

I can start the Kafka servers without issue and can create topics on both of them. However, the trouble comes when I try to start a producer on one machine and a consumer on the other:

On the Kafka producer:

kafka-console-producer.sh --broker-list <kafka server 1 aws public dns>:9092,<kafka server 2 aws public dns>:9092 --topic samsa

On the Kafka consumer:

kafka-console-consumer.sh --zookeeper <zookeeper server ip>:2181 --topic samsa

I type in a message on the producer ("hi") and nothing happens for a while. Then I get this message:

ERROR Error when sending message to topic samsa with key: null, value: 2 bytes
with error: Failed to update metadata after 60000 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

On the consumer side I get this message, which repeats periodically:

WARN Fetching topic metadata with correlation id # for topics [Set(samsa)] from broker [BrokerEndPoint(<broker.id>,<producer's advertised.host.name>,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

After a while, the producer then starts rapidly throwing this warning, with the correlation id # increasing each time:

WARN Error while fetching metadata with correlation id # : {samsa=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

I'm not sure where to go from here. Let me know if more details about my configuration files are needed.

Asked 14/12, 2015 at 2:54

This was a configuration issue.

To get it running, several changes to the config files were needed:

In config/server.properties on each Kafka server:

  • host.name: <Public IP>
  • advertised.host.name: <AWS Public DNS Address>
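
For illustration, a minimal sketch of how those two settings fit into config/server.properties. The broker id, port, and ZooKeeper address below are assumptions based on the commands in the question; the angle-bracket values are placeholders, not real hostnames:

# config/server.properties (placeholder values, one broker shown)
broker.id=1
port=9092
# interface the broker binds to
host.name=<Public IP>
# hostname registered in ZooKeeper and returned to clients in metadata responses
advertised.host.name=<AWS Public DNS Address>
# ZooKeeper connection string
zookeeper.connect=<zookeeper server ip>:2181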

In config/producer.properties on each Kafka server:

  • metadata.broker.list: <Producer Server advertised.host.name>:<Producer Server port>,<Consumer Server advertised.host.name>:<Consumer Server port>
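
A corresponding sketch for config/producer.properties, assuming both brokers listen on port 9092 as in the console-producer command above (placeholders again, not real hostnames):

# config/producer.properties (placeholder values)
# brokers the producer contacts to bootstrap topic metadata
metadata.broker.list=<kafka server 1 advertised.host.name>:9092,<kafka server 2 advertised.host.name>:9092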

In /etc/hosts on each Kafka server, change 127.0.0.1 localhost localhost.localdomain to:

<Public IP>  localhost localhost.localdomain
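
Since the brokers only read config/server.properties at startup, each Kafka server has to be restarted after these edits. Assuming the standard Kafka distribution layout, something like:

bin/kafka-server-stop.sh
bin/kafka-server-start.sh config/server.properties
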
Answered 14/12, 2015 at 22:24
Comment: @kellanburker Should I restart any service after editing the /etc/hosts file?
