Can I ignore org.apache.kafka.common.errors.NotLeaderForPartitionExceptions?

My Apache Kafka producer (0.9.0.1) intermittently throws a

org.apache.kafka.common.errors.NotLeaderForPartitionException

My code that performs the Kafka send resembles this:

final Future<RecordMetadata> futureRecordMetadata = KAFKA_PRODUCER.send(
        new ProducerRecord<String, String>(kafkaTopic, UUID.randomUUID().toString(), jsonMessage));

try {
    futureRecordMetadata.get();
} catch (final InterruptedException interruptedException) {
    interruptedException.printStackTrace();
    throw new RuntimeException("sendKafkaMessage(): Failed due to InterruptedException(): " + sourceTableName + " " + interruptedException.getMessage());
} catch (final ExecutionException executionException) {
    executionException.printStackTrace();
    throw new RuntimeException("sendKafkaMessage(): Failed due to ExecutionException(): " + sourceTableName + " " + executionException.getMessage());
}

I catch NotLeaderForPartitionException within the catch (final ExecutionException executionException) {} block.
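
Roughly, the check inside that catch block looks like this (a sketch only; the helper method is just for illustration):

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.NotLeaderForPartitionException;

// Waits for the send to complete and reports whether it failed because the broker was not the leader.
static boolean failedBecauseNotLeader(final Future<RecordMetadata> future) throws InterruptedException {
    try {
        future.get();
        return false;
    } catch (final ExecutionException executionException) {
        return executionException.getCause() instanceof NotLeaderForPartitionException;
    }
}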

Is it OK to ignore this particular exception?

Has my Kafka message been sent successfully?

Anselme answered 28/4, 2016 at 14:23 Comment(0)

If you receive NotLeaderForPartitionException, your data was not written successfully.

Depending on your replication factor, each topic partition is stored on one or more brokers: one of them is the leader and the remaining brokers are followers. A producer must send new messages to the leader broker; replication of the data to the followers happens internally.

Your producer client connected to the wrong broker, i.e., to a follower (or to a broker that is no longer even a follower) instead of the leader, and that broker rejected your send request. This can happen when the leader changes but the producer still holds outdated cached metadata about which broker is the leader for the partition.
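
As a mitigation, you can let the producer retry internally so that it refreshes its metadata and finds the new leader. A minimal configuration sketch (the broker addresses and retry values are placeholders, not recommendations):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

final Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092,broker-2:9092"); // placeholder addresses
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
// Allow internal retries; each retry refreshes the cached metadata so the new leader is picked up.
props.put(ProducerConfig.RETRIES_CONFIG, 10);
// Wait between retries to give the leader election time to complete.
props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500L);

final KafkaProducer<String, String> producer = new KafkaProducer<>(props);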

Tallinn answered 30/4, 2016 at 12:42 Comment(2)
What is the solution for it? -- Repress
@VinitGaikwad The producer first retries sending the data internally (refreshing its metadata to learn which broker is the new leader) -- so the application only sees this exception once all retries are exhausted. Hence, you might want to increase the retries config parameter for the producer. If you want to handle it at the application level instead (which seems the second-best option compared to increasing producer retries), you would need to call Producer.send() again to resend the data. -- Tallinn

I encountered the same issue when using a Kafka cluster in Kubernetes, hosted on a cloud instance, while running my Kafka producer code from my local machine. On the first run I got an UnknownHostException, which I resolved by adding the Kafka Kubernetes service addresses to my /etc/hosts file.

Example:

10.160.160.60    kafka-controller-0.kafka-controller-headless.default.svc.cluster.local kafka-controller-1.kafka-controller-headless.default.svc.cluster.local kafka-controller-2.kafka-controller-headless.default.svc.cluster.local

However, I then encountered a NotLeaderOrFollowerException. I tried several solutions, but none worked reliably: restarting the local application would resolve the issue temporarily, only for the problem to reappear after subsequent restarts.

Ultimately, I moved my local application into the Kafka cluster, which solved the problem. The issue occurs because all Kafka broker addresses in my /etc/hosts file point to the same IP address: when the producer tried to connect to the partition leader, the Kubernetes proxy routed the request to a random Kafka node, producing this error.

For local development, I now run a single-node Kafka in Docker and avoid connecting to a Kafka cluster on a remote server. If you do need remote access, expose a separate public IP:port for each Kafka broker and list all of the brokers in your local application's configuration, as sketched below.
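
For example, a client configuration along those lines might look like this (the hostnames and ports are placeholders; each must map to a distinct, externally reachable broker address):

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

final Properties props = new Properties();
// Placeholder endpoints: one externally reachable address per broker, matching each
// broker's advertised listeners, so the client can reach whichever broker is the leader.
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
        "kafka-0.example.com:9094,kafka-1.example.com:9094,kafka-2.example.com:9094");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");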

Crosscut answered 20/3, 2024 at 6:59 Comment(0)