Unable to start kafka with zookeeper (kafka.common.InconsistentClusterIdException)

Below are the steps I did that led to this issue:

  1. Launch ZooKeeper
  2. Launch Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties

The error happens at the second step:

ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID Reu8ClK3TTywPiNLIQIm1w doesn't match stored clusterId Some(BaPSk1bCSsKFxQQ4717R6Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
    at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)

When I run .\bin\windows\kafka-server-start.bat .\config\server.properties, the ZooKeeper console shows:

INFO [SyncThread:0:FileTxnLog@216] - Creating new log file: log.1

How can I fix this issue and get Kafka running?

Edit: You can access the same question on the proper site (Server Fault) here.

Edit: Here is the answer.

Melda answered 25/12, 2019 at 21:10 Comment(6)
#59593018 – Proudman
Voting to reopen in order to close for the right reason, since: [1] The question is clearly a duplicate of Kafka Broker doesn't find cluster id and creates new one after docker restart, as noted by the OP. [2] The current reason for closing is invalid since the question is not about "professional server or networking-related infrastructure administration" at all; it is about a Kafka exception on startup. (And if this question really was off topic then thousands of other questions tagged Kafka on SO would be as well.) – Indicant
@Indicant This issue is slightly different from the other one, since it doesn't use Docker. And please also note that my issue was posted before the issue you are talking about ... – Melda
@Dorian: I'm really confused now!... You have updated this question and linked to another answer written by yourself as the solution! If you are now claiming that it is not a solution then delete the text "Edit: Here is the answer" from your question above. – Indicant
@Indicant Yes, because I wasn't allowed to ask to reopen until today... and I wanted to share with the community how I solved my issue ... – Melda
@Dorian Well, your question got reopened! Do you care to post an answer to it now? – Indicant

I managed to solve this issue with the following steps:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka (see the sketch below).
  2. Run ZooKeeper
  3. Run Kafka
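
On the Windows setup from the question, the cleanup roughly translates to the commands below (a sketch: the C:\tmp paths are the defaults mentioned in other answers here, so check log.dirs in server.properties and dataDir in zookeeper.properties before deleting anything):

rem Delete the generated data (assumed default /tmp locations on the C: drive)
rmdir /s /q C:\tmp\kafka-logs
rmdir /s /q C:\tmp\zookeeper

rem Start ZooKeeper first, then Kafka
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
.\bin\windows\kafka-server-start.bat .\config\server.properties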

[Since this post is open again, I post my answer here so you have it all in the same post.]

Melda answered 11/5, 2020 at 9:34 Comment(2)
After you deleted all the log files (including their directory, I assume), didn't Kafka prompt you that it couldn't find the logs/logs.log file? – Workbench
@Workbench You need to run ZooKeeper first, then Kafka. – Melda

1. The easiest solution is to remove all Kafka logs and start again. This is enough to solve the problem, e.g.:

rm -f /tmp/kafka-logs/*

2. How to find the Kafka log path:

  • Open the server.properties file, which is located in your Kafka folder, e.g. kafka_2.11-2.4.0\config\server.properties (depending on your version of Kafka, the folder name could be kafka_<kafka_version>).

  • Then search for the entry log.dirs to check where the logs are located: log.dirs=/tmp/kafka-logs

3. Why: the root cause is that Kafka saved the stale cluster ID in meta.properties.

Try to delete kafka-logs/meta.properties from your tmp folder, which is C:/tmp by default on Windows and /tmp/kafka-logs on Linux.
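
Putting steps 1–3 together, a minimal sketch for a default Linux install (the server.properties path is an example; adjust it to your Kafka folder):

# Find the configured log directory
grep "^log.dirs" kafka_2.11-2.4.0/config/server.properties
# -> log.dirs=/tmp/kafka-logs

# Remove only the stored cluster ID (keeps topic data), or wipe the logs entirely
rm /tmp/kafka-logs/meta.properties
# rm -rf /tmp/kafka-logs/*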

If Kafka is running in Docker containers, the log path may be specified by the volume config in the docker-compose file - see docs.docker.com/compose/compose-file/compose-file-v2/#volumes -- Chris Halcrow

Davilman answered 9/8, 2020 at 7:28 Comment(4)
If you need to know where your log directory is first, look at <your-kafka-install-directory>/config/server.properties and search for the log.dirs=.. row. – Congratulate
Note that if Kafka is running in Docker containers, the log path may be specified by the volume config in the docker-compose file - see docs.docker.com/compose/compose-file/compose-file-v2/#volumes – Mardellmarden
In my case, this solved the problem: rm -f /tmp/kafka-logs/* – Discordant
Or we can just find the file and delete it wherever it is: locate kafka-logs/meta.properties gives you <path>/kafka-logs/meta.properties, and then rm <path>/kafka-logs/meta.properties – Gibbet

For Mac, the following steps are needed (see the combined sketch after the list).

  • Stop the Kafka service: brew services stop kafka
  • Open the Kafka server.properties file: vim /usr/local/etc/kafka/server.properties
  • Find the value of log.dirs in this file. For me, it is /usr/local/var/lib/kafka-logs
  • Delete the <log.dirs>/meta.properties file
  • Start the Kafka service: brew services start kafka
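
Combined, on a Homebrew install this looks roughly like the following (a sketch; the log.dirs value is the one found above and may differ on your machine):

brew services stop kafka
# Confirm the configured log directory
grep "^log.dirs" /usr/local/etc/kafka/server.properties
# Remove the stale cluster ID
rm /usr/local/var/lib/kafka-logs/meta.properties
brew services start kafka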
Cabernet answered 4/11, 2020 at 16:15 Comment(0)

There is no need to delete the log/data files for Kafka. Check the Kafka error logs to find the new cluster ID, update the meta.properties file with that cluster ID, then restart Kafka.

/home/kafka/logs/meta.properties

To resolve this issue permanently, follow the steps below.

Check your zookeeper.properties file, look for the dataDir path, and change it from the tmp location to some other location that is not removed after a server restart.

/home/kafka/kafka/config/zookeeper.properties

Copy the ZooKeeper folder and files to the new (non-tmp) location, then restart ZooKeeper and Kafka.

cp -r /tmp/zookeeper /home/kafka/zookeeper

Now a server restart won't affect Kafka startup.
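
A rough sketch of both parts (the cluster ID shown is the "Cluster ID" from the question's error message, so use the one your own broker reports; the sed one-liners assume GNU sed and are just one way to edit the files):

# Make the stored ID match the one Kafka reports at startup
sed -i 's/^cluster.id=.*/cluster.id=Reu8ClK3TTywPiNLIQIm1w/' /home/kafka/logs/meta.properties

# Move ZooKeeper data out of /tmp so it survives restarts
cp -r /tmp/zookeeper /home/kafka/zookeeper
sed -i 's|^dataDir=.*|dataDir=/home/kafka/zookeeper|' /home/kafka/kafka/config/zookeeper.properties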

Disembogue answered 20/7, 2021 at 17:34 Comment(0)

If you use Embedded Kafka with Testcontainers in your Java project, as I do, then simply delete your build/kafka folder and Bob's your uncle.

The mentioned meta.properties can be found under build/kafka/out/embedded-kafka.
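
For example (a sketch assuming a Gradle-style build directory at the project root, as described above):

# Wipe the embedded Kafka state kept under the build directory
rm -rf build/kafka
# or remove only the stale cluster ID
rm -f build/kafka/out/embedded-kafka/meta.properties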

Katrinka answered 5/2, 2021 at 14:21 Comment(0)

I had some old volumes lingering around. I checked the volumes like this:

docker volume list

And pruned old volumes:

 docker volume prune

And also removed the ones that were Kafka-related, for example:

docker volume rm test_kafka
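
To see just the Kafka-related volumes before removing them, something like this can help (a sketch; the name filter is an assumption about how your volumes are named):

# List volumes whose name contains "kafka", then remove the stale ones
docker volume ls --filter name=kafka
docker volume rm test_kafka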
Amourpropre answered 27/4, 2021 at 18:39 Comment(0)

I deleted the following directories:

a.) The logs directory from the Kafka server's configured location, i.e. the log.dirs property path.

b.) The tmp directory from the Kafka broker's location.

log.dirs=../tmp/kafka-logs-1

Tugman answered 2/10, 2020 at 6:21 Comment(0)

I was using docker-compose to re-set up Kafka on a Linux server, with a known, working docker-compose config that sets up a number of Kafka components (broker, zookeeper, connect, rest proxy), and I was getting the issue described in the OP. I fixed this for my dev server instance by doing the following:

  • docker-compose down
  • back up the kafka-logs directory using cp -r kafka-logs kafka-logs-bak
  • delete the kafka-logs/meta.properties file
  • docker-compose up -d

Note for users of docker-compose:

My log files weren't in the default location (/tmp/kafka-logs). If you're running Kafka in Docker containers, the log path can be specified by the volume config in the docker-compose file, e.g.

volumes:
      - ./kafka-logs:/tmp/kafka-logs

This is specifying SOURCE:TARGET. ./kafka-logs is the source (i.e. a directory named kafka-logs, in the same directory as the docker-compose file). This is then mapped to /tmp/kafka-logs as the mounted volume within the Kafka container. So the logs can either be deleted from the source folder on the host machine, or from the mounted volume after doing a docker exec into the Kafka container.

see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes
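
Concretely, either of these removes the stored cluster ID (a sketch; the container name kafka is an assumption, so use the service/container name from your compose file):

# Option 1: delete on the host side of the bind mount
rm ./kafka-logs/meta.properties

# Option 2: delete inside the container while it is running
docker exec kafka rm /tmp/kafka-logs/meta.properties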

Mardellmarden answered 21/6, 2021 at 4:23 Comment(1)
This made the cluster eventually start, but the existing topics' data didn't load properly, so I lost all the topics. – Metencephalon

For me, meta.properties was in /usr/local/var/lib/kafka-logs. After removing it, Kafka started working.

Gaylene answered 26/10, 2021 at 0:58 Comment(0)

I also deleted all the content of the folder containing the data generated by Kafka. I could find the folder in my .yml file:

 kafka:
    image: confluentinc/cp-kafka:7.0.0
    ports:
      - '9092:9092'
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - ./kafka-data/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - default

The location is given under volumes:. So, in my case, I deleted all the files in the data folder located under kafka-data.
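
Based on the volume mapping above, that boils down to something like this (a sketch; run it from the directory containing the compose file, with the stack stopped first):

docker-compose down
# ./kafka-data/data is the host side of the volume mapped to /var/lib/kafka/data
rm -rf ./kafka-data/data/*
docker-compose up -d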

Benoite answered 7/2, 2022 at 20:1 Comment(0)

I tried deleting the meta.properties file, but it didn't work.

In my case, it was solved by deleting the old Docker containers and dangling images.

The problem with this is that it deletes all previous data, so be careful: if you want to keep the old data, this is not the right solution for you.

docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")
Decode answered 24/11, 2022 at 4:15 Comment(0)

I ran it in my Windows environment and had the same issue. I tried deleting the logs from C:/tmp/logs and restarting, but it still failed.

Then I tried to manually match the cluster ID and it worked, although I don't know if it's safe or not. Once you locate meta.properties somewhere in the Kafka directory, you can replace the cluster ID so that it matches the one the Kafka server reports, and then you are good to go.

Eduard answered 14/6, 2023 at 19:48 Comment(0)

Error

 ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID xE_GQIvjRqOtq2SsAn0Ghw doesn't match stored clusterId Some(6RVi4Sz4QuyXQyfJym83TQ) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
    at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
    at kafka.Kafka$.main(Kafka.scala:109)
    at kafka.Kafka.main(Kafka.scala)
[2024-02-14 14:40:26,552] INFO shutting down (kafka.server.KafkaServer)

Then find the log.dirs location:

 grep -irn "log.dirs" /usr/odp/3.2.2.0-1/kafka/conf/

Output:
/usr/odp/3.2.2.0-1/kafka/conf/server.properties:40:log.dirs=/kafka-logs

Go to this location and edit the meta.properties file with the valid cluster ID:


[root@nonkrb2 kafka]# cat /kafka-logs/meta.properties
#
#Wed Feb 14 14:42:22 IST 2024
cluster.id=6RVi4Sz4QuyXQyfJym83TQ
version=0
broker.id=1001

Restart the Kafka broker; it should work now.
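
To double-check which ID the broker is expected to use before editing the file, you can read it from ZooKeeper (a sketch; /cluster/id is the znode where Kafka stores it, and the connect string is an example):

# Print the cluster ID currently registered in ZooKeeper
bin/zookeeper-shell.sh localhost:2181 get /cluster/id
# e.g. {"version":"1","id":"xE_GQIvjRqOtq2SsAn0Ghw"}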

Nyssa answered 14/2 at 9:17 Comment(0)
