Kafka + Zookeeper: Connection to node -1 could not be established. Broker may not be available

I am running both Zookeeper and Kafka on my localhost (one instance each).

I successfully create a topic from Kafka:

./bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Hello-Nicola

Created topic "Hello-Nicola".

Kafka logs show:

[2017-12-06 16:00:17,753] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2017-12-06 16:03:19,347] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Hello-Nicola-0 (kafka.server.ReplicaFetcherManager)
[2017-12-06 16:03:19,393] INFO Loading producer state from offset 0 for partition Hello-Nicola-0 with message format version 2 (kafka.log.Log)
[2017-12-06 16:03:19,406] INFO Completed load of log Hello-Nicola-0 with 1 log segments, log start offset 0 and log end offset 0 in 35 ms (kafka.log.Log)
[2017-12-06 16:03:19,408] INFO Created log for partition [Hello-Nicola,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-12-06 16:03:19,409] INFO [Partition Hello-Nicola-0 broker=0] No checkpointed highwatermark is found for partition Hello-Nicola-0 (kafka.cluster.Partition)
[2017-12-06 16:03:19,411] INFO Replica loaded for partition Hello-Nicola-0 with initial high watermark 0 (kafka.cluster.Replica)
[2017-12-06 16:03:19,413] INFO [Partition Hello-Nicola-0 broker=0] Hello-Nicola-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)

But Zookeeper logs show:

2017-12-06 16:03:19,299 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000177fb3d0001 type:create cxid:0x43 zxid:0x26 txntype:-1 reqpath:n/a Error Path:/brokers/topics/Hello-Nicola/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/Hello-Nicola/partitions/0
2017-12-06 16:03:19,302 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000177fb3d0001 type:create cxid:0x44 zxid:0x27 txntype:-1 reqpath:n/a Error Path:/brokers/topics/Hello-Nicola/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/Hello-Nicola/partitions

If I try to produce messages:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Hello-Nicola
>ciao
[2017-12-06 16:04:21,897] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-12-06 16:04:22,000] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

server.properties (in Kafka) is:

broker.id=0
listeners=PLAINTEXT://mylocal-0:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

It seems that Zookeeper didn't register any broker.
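One way to check this, assuming the stock zookeeper-shell.sh that ships with the Kafka distribution, is to list the registered broker ids:

./bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids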

Any suggestion, please?

Harvin answered 6/12, 2017 at 15:12 Comment(4)
From the broker logs all looks fine: it detected the topic creation and successfully created the logs on disk. Can you please post your broker configs? - Virgiliovirgin
I added that info to the post. Thanks. - Harvin
Any quick help here - #67763576? - Schmaltz
See this answer in case it helps: https://mcmap.net/q/323212/-connection-to-node-1-127-0-0-1-9092-could-not-be-established-broker-may-not-be-available - Depoliti
I found the error. Observing the Zookeeper logs when the server started, I noticed:

server.1=mylocal-0.:2888:3888

with a dot (.) after the name of the host.

The script that produces Zookeeper's config comes from https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/zkGenConfig.sh

Looking inside, I see that DOMAIN is not filled:

HOST=`hostname -s`      # short host name, e.g. mylocal-0
DOMAIN=`hostname -d`    # DNS domain; empty on a plain localhost setup

function print_servers() {
    for (( i=1; i<=$ZK_REPLICAS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT"
    done
}

In my case (localhost) I don't need a domain, so I removed that variable.
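For reference, a sketch of the function with just the .$DOMAIN part dropped (everything else unchanged):

function print_servers() {
    # build the entries from the short host name only, so no trailing dot is produced
    for (( i=1; i<=$ZK_REPLICAS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)):$ZK_SERVER_PORT:$ZK_ELECTION_PORT"
    done
}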

Now Zookeeper and Kafka communicate with no errors.

Harvin answered 6/12, 2017 at 16:7 Comment(0)
Update: if you are running in single-node mode:

I have seen this message in the Spark console log while trying to deploy an application. It was solved by changing this parameter in server.properties:

listeners=PLAINTEXT://myhostname:9092

to

listeners=PLAINTEXT://localhost:9092

Make sure that you have a Java process listening on 9092, e.g. with netstat -lptu.
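For example, on a Linux host (assuming the port number is enough to identify the process):

netstat -lptu | grep 9092
# or, with iproute2:
ss -ltnp | grep 9092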

Dur answered 21/8, 2018 at 23:36 Comment(5)
With localhost it means you have just one broker, right? - Harvin
If you are trying to run standalone/single-node and you have some VPN software, it's very likely that the InetAddress.getLocalHost().getCanonicalHostName() used by default when you don't specify listeners resolves to some invalid IP address. In that case you need to explicitly set listeners=PLAINTEXT://localhost:9092. - Autotype
When I installed Kafka via brew, this file was located at /usr/local/etc/kafka/server.properties. - Clancy
If you are running Kafka in Docker, add the flag -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092. So the full command will look like: docker run -d --name kafka -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 --link zookeeper:zookeeper confluent/kafka - Affiliate
You just need to use host.docker.internal instead of localhost on Docker for Mac and Docker for Windows. - Laux
Change:

#listeners=PLAINTEXT://:9092

in server.properties to:

listeners=PLAINTEXT://localhost:9092

Note: you also need to uncomment this line, i.e. remove the # symbol.
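If you prefer to script the change, something like the following should work with GNU sed (the path to server.properties is an assumption; adjust it to your installation, and on macOS use sed -i ''):

sed -i 's|^#listeners=PLAINTEXT://:9092|listeners=PLAINTEXT://localhost:9092|' config/server.properties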

Jaredjarek answered 18/9, 2019 at 13:8 Comment(2)
This worked for me on macOS Catalina [version 10.15.4 (19E287)]. - Maribelmaribelle
This did not work for me, please help. - Raasch
If you want to set it up locally, you need to uncomment the line below in path_to_kafka_folder\kafka_2.13-2.6.0\config\server.properties:

listeners=PLAINTEXT://localhost:9092

Immure answered 15/8, 2020 at 15:28 Comment(0)
If this happens suddenly after it was working, you should try to restart Kafka first.

In my case, restarting solved the problem:

$ docker-compose down && docker-compose up -d
Aesthetic answered 23/5, 2019 at 9:53 Comment(0)
If you are running the Kafka client in Docker (docker-compose) and getting "Broker may not be available", the solution is to add this to docker-compose.yml:

network_mode: host

This enables the Kafka client in Docker to see the locally running Kafka (localhost:9092).

You don't need to change listeners=* if the container can see the host network and resolve localhost:9092 to the host.
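A minimal sketch of where the setting goes (the service name and image are placeholders, not from a real setup):

# docker-compose.yml (sketch)
services:
  kafka-client:
    image: my-kafka-client:latest   # placeholder for your client application image
    network_mode: host              # share the host network so localhost:9092 reaches the host's Kafka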

Profusive answered 20/10, 2022 at 10:29 Comment(0)
