Kafka "cached zkVersion not equal to that in zookeeper", broker not recovering
I have a Kafka cluster with 3 brokers. Lately I have started facing issues with brokers dropping out of the cluster and producers/consumers throwing "leader not available" errors.

On examining the logs I see the following sequence of events:

//Lots of replica fetcher threads starting/stopping

[2017-10-09 14:48:50,600] INFO [ReplicaFetcherManager on broker 6] Removed fetcher for partitions

[2017-10-09 14:48:50,608] INFO [ReplicaFetcherThread-0-7], Shutting down (kafka.server.ReplicaFetcherThread)
[2017-10-09 14:48:50,918] INFO [ReplicaFetcherThread-0-7], Stopped  (kafka.server.ReplicaFetcherThread)
[2017-10-09 14:48:50,918] INFO [ReplicaFetcherThread-0-7], Shutdown completed (kafka.server.ReplicaFetcherThread)

//Continuously expanding/shrinking ISR

[2017-10-09 14:48:51,037] INFO Partition [__consumer_offsets,8] on broker 6: Expanding ISR for partition __consumer_offsets-8 from 6,8 to 6,8,7 (kafka.cluster.Partition)
[2017-10-09 14:48:51,038] INFO Partition [__consumer_offsets,35] on broker 6: Expanding ISR for partition __consumer_offsets-35 from 6,8 to 6,8,7 (kafka.cluster.Partition)

[2017-10-09 14:49:01,702] INFO Partition [t1,1] on broker 6: Shrinking ISR for partition [t1,1] from 6,7 to 6 (kafka.cluster.Partition)
[2017-10-09 14:49:01,702] INFO Partition [__consumer_offsets,41] on broker 6: Shrinking ISR for partition [__consumer_offsets,41] from 6,8,7 to 6,8 (kafka.cluster.Partition)

//Re-registration of broker and leader re-election

[2017-10-09 14:51:54,380] INFO re-registering broker info in ZK for broker 6

[2017-10-09 14:51:54,405] INFO New leader is 7 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)

//ControllerMovedException errors

[2017-10-09 14:56:39,746] ERROR [KafkaApi-6] Error when handling request.. org.apache.kafka.common.errors.ControllerMovedException: Broker 6 received update metadata request with correlation id 59 from an old controller 7 with epoch 301. Latest known controller epoch is 302

[2017-10-09 14:57:59,210] INFO re-registering broker info in ZK for broker 6 (kafka.server.KafkaHealthcheck$SessionExpireListener)
[2017-10-09 14:57:59,210] INFO Creating /brokers/ids/6 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-10-09 14:57:59,213] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-10-09 14:57:59,213] INFO Registered broker 6 at path /brokers/ids/6 with addresses: EndPoint(kafka03,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2017-10-09 14:57:59,213] INFO done re-registering broker (kafka.server.KafkaHealthcheck$SessionExpireListener)
[2017-10-09 14:57:59,213] INFO Subscribing to /brokers/topics path to watch for new topics (kafka.server.KafkaHealthcheck$SessionExpireListener)
[2017-10-09 14:57:59,224] INFO New leader is 7 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2017-10-09 14:58:11,697] INFO Partition [testing1,2] on broker 6: Shrinking ISR for partition [testing1,2] from 6,8 to 6 (kafka.cluster.Partition)
[2017-10-09 14:58:11,700] INFO Partition [testing1,2] on broker 6: Cached zkVersion [199] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)

These errors then occur in a loop, and the cluster cannot recover:

[2017-10-09 16:17:26,769] INFO Partition [__consumer_offsets,14] on broker 6: Shrinking ISR for partition [__consumer_offsets,14] from 7,6,8 to 7,6 (kafka.cluster.Partition)
[2017-10-09 16:17:26,771] INFO Partition [__consumer_offsets,14] on broker 6: Cached zkVersion [306] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)

On the clients, I receive "leader not available" errors.

It's not clear why the cluster is entering this invalid state. Any ideas?
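For reference, here is a minimal sketch of how the znode version ZooKeeper actually holds can be compared against the zkVersion the broker logs as cached (assuming the kazoo Python client and ZooKeeper reachable on localhost:2181; the topic and partition below are placeholders):

# Sketch: assumes kazoo (pip install kazoo) and ZooKeeper at localhost:2181.
# Reads a partition's state znode and prints the version ZooKeeper holds,
# which is what the broker compares its cached zkVersion against.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()

topic, partition = "__consumer_offsets", 14   # placeholders
data, stat = zk.get(f"/brokers/topics/{topic}/partitions/{partition}/state")
state = json.loads(data)

print("znode version in ZooKeeper:", stat.version)
print("leader:", state["leader"], "ISR:", state["isr"],
      "leader_epoch:", state["leader_epoch"])

zk.stop()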

Charmion answered 9/10, 2017 at 10:55 Comment(1)
I had the same issue, but additionally one of my consumers started reading messages from offset 0 of a topic (a topic with a single partition). I am trying to figure it out but have no clue why this happened. Also, this topic has only one consumer, so I'm not sure whether it was a problem in that consumer or would have happened to all consumers for this topic. Any help appreciated. – Georgena

This issue is known and tracked in KAFKA-2729, but it has not been solved yet. As far as I know it happens on networks with large delays (due to peak traffic) or after short network outages within a small timeframe. The only solution (AFAIK) is to restart all brokers.
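After the rolling restart you can confirm that every broker has re-registered by listing the ephemeral znodes under /brokers/ids (a sketch assuming the kazoo Python client and ZooKeeper on localhost:2181):

# Sketch: assumes kazoo and ZooKeeper at localhost:2181.
# Lists the brokers ZooKeeper currently considers alive; a broker whose
# session expired (as in the logs above) is missing here until it re-registers.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()

for broker_id in sorted(zk.get_children("/brokers/ids"), key=int):
    data, _ = zk.get("/brokers/ids/" + broker_id)
    print(broker_id, json.loads(data).get("endpoints"))

zk.stop()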

Barrister answered 9/10, 2017 at 12:1 Comment(1)
Was able to resolve these error messages by restarting ZooKeeper and the broker on a single node, the one emitting the ISR error messages. – Stygian
  1. Kill the current controller (see the sketch below for finding which broker that is); another controller will then be elected.

  2. Restart the controller you killed in step 1.

The partition state will then be fixed.
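To find which broker currently holds the controller role (and therefore which one to kill in step 1), you can read the /controller znode; a sketch assuming the kazoo Python client and ZooKeeper on localhost:2181:

# Sketch: assumes kazoo and ZooKeeper at localhost:2181.
# /controller is an ephemeral znode owned by the active controller;
# its payload names the controller's broker id.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()

data, _ = zk.get("/controller")
print("current controller broker id:", json.loads(data)["brokerid"])

zk.stop()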

Salmi answered 25/1, 2018 at 14:23 Comment(0)
