Kafka broker doesn't find cluster id and creates a new one after Docker restart
I've created a Docker container with a Kafka broker and ZooKeeper, started by a run script. On a fresh start it comes up and runs fine (Windows -> WSL -> two tmux windows, one session). If I shut down Kafka or ZooKeeper and start it again, it reconnects normally.

The problem occurs when I stop the Docker container (docker stop my_kafka_container). Then I start it with my script ./run_docker. Before starting, that script deletes the old container with docker rm my_kafka_container and then does docker run.

ZooKeeper starts normally, and the meta.properties file still has the old cluster id from the previous start-up, but for some reason the Kafka broker cannot find that id under the znode cluster/id and creates a new one, which does not match the id stored in meta.properties. And I get:

  ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m1Ze6AjGRwqarkcxJscgyQ doesn't match stored clusterId Some(1TGYcbFuRXa4Lqojs4B9Hw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
        at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)
[2020-01-04 15:58:43,303] INFO shutting down (kafka.server.KafkaServer)

How can I stop the broker from changing its cluster id?

Omnifarious answered 4/1, 2020 at 16:14 Comment(3)
Are you using Kafka 2.4.0? I have the same issue, and this also seems to be related: serverfault.com/questions/997762/… – Cowpuncher
I managed to solve this issue. Do you still need an answer? – Polyamide
@Dorian in any case it would be helpful to post it. Please, post it. – Omnifarious
I had the same issue when using Docker. It occurs since Kafka 2.4, because a check was added to verify that the cluster id stored locally in meta.properties matches the one in ZooKeeper.

This can be fixed by making the ZooKeeper data persistent, and not only the ZooKeeper logs, e.g. with the following docker-compose config:

volumes:
  - ~/kafka/data/zookeeper_data:/var/lib/zookeeper/data
  - ~/kafka/data/zookeeper_log:/var/lib/zookeeper/log

You should also remove the meta.properties file from the Kafka log directory once, so that Kafka retrieves the right cluster id from ZooKeeper. After that the ids match and you don't have to do this again.

You may also run into a snapshot.trust.empty error, which likewise appeared with the 2.4 upgrade. You can solve this either by adding the snapshot.trust.empty=true setting to the ZooKeeper configuration or by making the ZooKeeper data persistent before doing the upgrade to 2.4.
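If you hit that error, the setting goes into the ZooKeeper configuration file (for the ZooKeeper bundled with Kafka that is config/zookeeper.properties; the exact file name depends on your setup):

```properties
# Existing entries stay as they are; this line tells newer ZooKeeper versions
# to tolerate a data directory that has transaction logs but no snapshot yet.
snapshot.trust.empty=true
```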

Cutlass answered 26/3, 2020 at 10:2 Comment(7)
Making the Zookeeper data persistent fixes the root cause – Toscano
Persisting just the data dir is sufficient to resolve the issue (checked on 2.5) – Generosity
I had the same issue with the Confluent Kafka 5.1.0 to 6.0.1 migration. I needed to add volumes for ZooKeeper. – Ashok
@Cutlass thanks, that worked. Didn't get back to that project for a long time ;) – Omnifarious
Where do I add these lines? – Nonrecognition
@TalhaAkbar the example I used is in a docker-compose yaml file. If you use docker directly then you need to use the --volume argument for each row – Cutlass
The problem remains if you have volumes for Kafka and ZooKeeper but then clean all data with docker-compose down --volumes. After a restart the cluster id will be different, but the old value will remain in the Kafka volume folder (~/kafka/data/kafka1_volume:/bitnami/kafka for example). The solution here is to update (or remove) the cluster.id value in meta.properties. In my case I'm fine with removing all data, because I want to drop all volumes anyway. – Singlebreasted
If you are 100% sure you are connecting to the right ZooKeeper and the right Kafka log directories, but for some reason things don't match and you don't feel like losing all your data while trying to recover:

The Kafka data directory (check config/server.properties for the log.dirs property; it defaults to /tmp/kafka-logs) contains a file called meta.properties, which holds the cluster id. That id should match the one registered in ZooKeeper. Either edit the file to match ZK, edit ZK to match the file, or delete the file (it contains the cluster id and the broker id; the first is currently mismatched, and the second normally also lives in the config file). After this minor surgery, Kafka will start with all your existing data, since you didn't delete any data files.

Like this: mv /tmp/kafka-logs/meta.properties /tmp/kafka-logs/meta.properties_old
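If you'd rather edit the file to match ZooKeeper than move it aside, here is a minimal sketch; the id and the path are placeholders, so substitute the "Cluster ID ..." value from your own error message and your own log.dirs path:

```shell
#!/bin/sh
# Placeholders: use the cluster id printed in the error message and your log.dirs path.
NEW_ID="m1Ze6AjGRwqarkcxJscgyQ"
META="/tmp/kafka-logs/meta.properties"
# Rewrite the cluster.id line in place, keeping a backup of the original file.
sed -i.bak "s/^cluster\.id=.*/cluster.id=${NEW_ID}/" "$META"
```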

Dusk answered 9/2, 2020 at 1:45 Comment(1)
It's important to stress that you should make sure your ZooKeeper state is intact, since this error can indicate potentially catastrophic data loss on the ZooKeeper cluster. Updating the cluster id on the brokers in that scenario leaves the Kafka cluster essentially blank, with no record of which partitions exist or where the replica logs reside. – Fitly
There is a cluster.id property in meta.properties; just replace the id with the one stated in the error log.
The meta.properties file lives in the directory given by log.dirs, which you can find in the Kafka config file server.properties. An example below.

cat /opt/kafka/config/server.properties | grep log.dirs
Expected output:
log.dirs=/data/kafka-logs

Once you find the meta.properties file, change it. After the change it should look like:

#
#Tue Apr 14 12:06:31 EET 2020
cluster.id=m1Ze6AjGRwqarkcxJscgyQ
version=0
broker.id=0
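To double-check which id ZooKeeper actually holds before editing the file, you can read the /cluster/id znode with the zookeeper-shell tool shipped with Kafka (the host and port here are assumptions about your setup). The znode value is JSON, so a small pipeline extracts just the id:

```shell
#!/bin/sh
# Reads Kafka's cluster id from ZooKeeper; the znode value looks like
# {"version":"1","id":"m1Ze6AjGRwqarkcxJscgyQ"}.
bin/zookeeper-shell.sh localhost:2181 get /cluster/id 2>/dev/null \
  | grep -o '"id":"[^"]*"' | cut -d'"' -f4
```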
Resistive answered 14/4, 2020 at 10:16 Comment(3)
Simple solution, good solution. – Springe
How come I changed the cluster id just as you explained and got the same error again? – Wideangle
It's been a long time since I used Kafka on Windows, but it worked at the time. – Elenor

I have tried most of the answers and found out the hard way (losing all my data and records) what actually works.
For the WINDOWS operating system only.
As suggested by others, we do need to change and set the default paths for the data directories of both:

Kafka in server.properties and
Zookeeper in zookeeper.properties

// Remember, this is important: if you are on Windows, use double slashes.
For Kafka:
log.dirs=C://kafka_2.13-2.5//data//kafka

Same goes for Zookeeper:
dataDir=C://kafka_2.13-2.5//data//zookeeper

And obviously you need to create the folders listed above before setting anything.

Then try to run ZooKeeper and Kafka; I haven't faced the issue since changing the path.
Prior to this I had a single "/", which worked only once; then I changed to "\", which also worked, but again only once.

EDIT: And don't forget to properly kill the processes with
kafka-server-stop.bat and
zookeeper-server-stop.bat

Militant answered 27/4, 2020 at 16:28 Comment(4)
I was about to go bat-shit crazy... thanks... a lot... Kafka needs more developer-friendly documentation – Rosebay
@NipunDavid Glad that this was helpful to you. – Militant
Not sure why this is for Windows only when the same steps apply to Unix. Windows uses slashes in the other direction anyway, and Java would properly detect that. – Alienism
This is not working on Windows – Nonrecognition

To solve this issue:

  1. Delete all the log/data files created (or generated) by Zookeeper and Kafka.
  2. Run Zookeeper
  3. Run Kafka
Polyamide answered 21/1, 2020 at 1:13 Comment(4)
This is not advised if you actually want to preserve any existing data – Alienism
@cricket_007 Of course, but it is the only way I found. If someone has a better answer I would be pleased to know it... – Polyamide
I'll add: if you're using Docker, you need to remove the volume. – Salience
@ErnestasKardzys Since there are four volumes involved, which volume are you referring to? And must the whole volume be removed, or just certain files in the volume? Remember that the goal is to retain all actual data and let the Docker services reuse that data. – Haematothermal

Kafka was started in the past with another instance of ZooKeeper, so the old cluster id is still registered in it. In the Kafka config directory, open the Kafka config properties file, say server.properties. Find the log path directory via the log.dirs parameter, then go to that directory and find the meta.properties file in it. Open meta.properties and update cluster.id=, or delete this file (or all the log files from the log path directory), then restart Kafka.

Teammate answered 13/3, 2020 at 12:15 Comment(0)

Edit meta.properties, remove the line with cluster.id, and restart Kafka.

On linux servers it is located in /var/lib/kafka/meta.properties

Do this on all servers. A new cluster id will be provided by ZooKeeper to the brokers.
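A sketch of that edit, assuming the Linux path mentioned above (run on each broker; the sed keeps a backup in case you need to roll back):

```shell
#!/bin/sh
# Delete the cluster.id line; the broker picks the id up from ZooKeeper on next start.
META="/var/lib/kafka/meta.properties"
sed -i.bak '/^cluster\.id=/d' "$META"
```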

Vienna answered 28/9, 2020 at 11:20 Comment(1)
That's what you need. In my case the issue appeared when I added static volumes to my Kafka instances' yaml: ~/kafka/data/kafka1_volume:/bitnami/kafka. Now when you clean volumes with docker-compose down --volumes it removes them, but all data remains persisted in ~/kafka/data/kafka1_volume, along with the meta.properties file that contains the cluster id. You can edit that specific file in your volume's folder, as mentioned. Or, as in my case with an experimental project, just clean all the folders with all the data. – Singlebreasted

This is due to a check introduced in the Kafka 2.4.0 release: [KAFKA-7335] - Store clusterId locally to ensure broker joins the right cluster. When the Docker restart happens, Kafka compares the locally stored clusterId to ZooKeeper's clusterId (which changed because of the Docker restart); because of this mismatch, the above error is thrown. Please refer to this link for more information.

Connaught answered 30/3, 2020 at 7:14 Comment(0)

In my case this was due to missing configuration of the ZooKeeper cluster; more precisely, each ZooKeeper node was working independently, so data such as the cluster id was not shared between the Kafka nodes. When a Kafka node started after the other nodes were already running, it did not see via ZooKeeper that a cluster id had already been established, so it created a new cluster id and tried communicating with other nodes that had similarly given themselves different ids.

To resolve this:

  1. Clear the ZooKeeper dir defined by dataDir in the kafka/config/zookeeper.properties file
  2. In this folder add a file called myid containing a unique id for each ZooKeeper node
  3. Add the following configuration to each kafka/config/zookeeper.properties file:
tickTime=2000
initLimit=5
syncLimit=2
server.1=<zookeeper node #1 address>:2888:3888
server.2=<zookeeper node #2 address>:2888:3888
server.3=<zookeeper node #3 address>:2888:3888
  4. Remove the cluster.id line from the meta.properties file, which resides in the path given by the log.dirs property in the kafka/config/server.properties file, or delete this file altogether
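Steps 1 and 2 can be sketched like this; the dataDir path and the ids are placeholders, and each node gets the id matching its server.N line:

```shell
#!/bin/sh
# On ZooKeeper node #1; use 2 and 3 on the other nodes.
DATA_DIR="/var/lib/zookeeper"     # must match dataDir in zookeeper.properties
rm -rf "$DATA_DIR"/*              # step 1: clear the old, independent state
echo 1 > "$DATA_DIR/myid"         # step 2: this node's unique id
```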

You can refer to the zookeeper documentation for more info: https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

Relict answered 17/8, 2021 at 19:28 Comment(1)
Do you know if your solution prevents the broker from changing its cluster id? – Wideangle

Try the following...

  1. Enable the following line in ./config/server.properties

    listeners=PLAINTEXT://:9092

  2. Modify the default ZooKeeper dataDir

  3. Modify the default Kafka log dir

Cochrane answered 6/2, 2020 at 10:54 Comment(0)

For Windows, renaming or deleting this meta.properties file helped Kafka launch, and I observed the file was recreated once Kafka started:

{kafka-installation-folder}\softwareskafkalogs\meta.properties
Doak answered 14/2, 2020 at 3:36 Comment(0)

For this error:

ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$) kafka.common.InconsistentClusterIdException: The Cluster ID 77PZKMMvRVuedQzKixTIQA doesn't match stored clusterId Some() in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

Solution for a Homebrew Kafka installation on Mac:

  1. Open /opt/homebrew/var/lib/kafka-logs/meta.properties and replace cluster.id with YOUR cluster id from the message, as above.
  2. Delete the logs folder: rm -r /opt/homebrew/var/run/zookeeper/data/
  3. Restart your ZooKeeper and Kafka.
Manager answered 9/10, 2023 at 4:6 Comment(0)

I encountered the same issue while running Kafka server on my Windows Machine.

You can try the following to resolve this issue:

  1. Open the server.properties file, which is located in your Kafka folder kafka_2.11-2.4.0\config (depending on your Kafka version, the folder name may differ)
  2. Search for the entry log.dirs
  3. If your log.dirs path contains a Windows directory path like E:\Shyam\Software\kafka_2.11-2.4.0\kafka-logs, which uses single backslashes (\), change them to double backslashes (\\)

Hope it helps. Cheers

Simdars answered 27/2, 2020 at 20:32 Comment(0)

Try this:

  • Open the server.properties file, which is located in your Kafka folder kafka_2.11-2.4.0\config
  • Search for the entry log.dirs
  • If you have the directory specified as C:....... change it to be relative to the current directory, for example log.dirs=../../logs

This worked for me :)

Pacheco answered 4/3, 2020 at 18:19 Comment(0)


This is how I solved it: I searched for the meta.properties file, renamed it, and started Kafka successfully; a new file was created.

My Kafka was installed by brew on a Mac.

Hope this helps you.

Glaucescent answered 7/4, 2020 at 3:34 Comment(0)

If, during testing, you are trying to launch an EmbeddedKafka broker and your test case doesn't clean up the temp directory, you will have to manually delete the Kafka log directory to get past this error.

Semination answered 24/8, 2020 at 3:15 Comment(0)

Error -> The Cluster ID Ltm5IhhbSMypbxp3XZ_onA doesn't match stored clusterId Some(sAPfAIxcRZ2xBew78KDDTg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

Linux ->

Go to /tmp/kafka-logs and check the meta.properties file.

Use vi meta.properties and change the cluster id to the required id.

Reservoir answered 30/7, 2021 at 11:4 Comment(1)
Did this work for anyone? – Wideangle

For me, as mentioned above, deleting meta.properties helped. Since I had Kafka and ZooKeeper running in a terminal, and I had installed both through Homebrew, the directory where the file lived was /opt/homebrew/var/lib/kafka-logs. Once there, I ran an rm command to delete the file.

Kaminsky answered 25/5, 2023 at 23:16 Comment(0)

I just deleted the images of ZooKeeper and Kafka. But this will result in data loss, so don't do this on a real project.

Naxos answered 7/3 at 11:18 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Michellmichella

© 2022 - 2024 — McMap. All rights reserved.