Kafka Producer error Expiring 10 record(s) for TOPIC:XXXXXX: 6686 ms has passed since batch creation plus linger time

Kafka version: 0.10.2.1

org.apache.kafka.common.errors.TimeoutException: Expiring 10 record(s) for TOPIC:XXXXXX: 6686 ms has passed since batch creation plus linger time
Forme answered 14/10, 2017 at 23:54 Comment(3)
You get this error when the producer can't send data to the broker that it thinks is responsible for the messages, according to the metadata it has. Did the Kafka broker die, or did your producer have connection issues at that time? – Lorianne
I am also getting this error intermittently throughout the day. Searching for an answer. – Jahdol
It stopped occurring when I changed my Kafka producer settings: "max.request.size": "4713360", "acks": "all", "timeout.ms": "18000", "batch.size": "100000" (size in bytes), "linger.ms": "100", "retries": "5", "min.insync.replicas": "2", "buffer.memory": "66554432", "request.timeout.ms": "90000", "block.on.buffer.full": "true". Basically linger.ms, batch.size, and block.on.buffer.full play the major roles here. – Forme

This exception occurs because you are queueing records at a much faster rate than they can be sent.

When you call the send method, the ProducerRecord is stored in an internal buffer to be sent to the broker. The method returns as soon as the record has been buffered, regardless of whether it has actually been sent.
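
A minimal sketch of this behaviour (the broker address and topic name are placeholders, not from the original question):

import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AsyncSendDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() returns as soon as the record is in the internal buffer;
            // nothing has gone over the network yet.
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("demo-topic", "hello"));
            // get() blocks until the batch containing the record has been sent
            // and acknowledged, or has expired with a TimeoutException.
            System.out.println("Acked at offset " + future.get().offset());
        }
    }
}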

Records are grouped into batches for sending to the broker, to reduce the transport overhead per message and increase throughput.

Once a record is added to a batch, there is a time limit for sending that batch, to ensure it is sent within a specified duration. This is controlled by the producer configuration parameter request.timeout.ms, which defaults to 30 seconds. See related answer

If a batch sits in the queue for longer than the timeout limit, the exception is thrown, and the records in that batch are removed from the send queue.
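
One way to see which records expired is to pass a callback to send(); a sketch, assuming producer and record are already constructed (e.g., as in the example above):

import org.apache.kafka.common.errors.TimeoutException;

// Assuming 'producer' and 'record' already exist.
producer.send(record, (metadata, exception) -> {
    if (exception instanceof TimeoutException) {
        // The batch holding this record expired before it could be sent;
        // the record was removed from the queue and must be re-sent
        // by the application if it still matters.
        System.err.println("Record expired: " + exception.getMessage());
    }
});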

Producer configs block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms have been removed. They were initially deprecated in Kafka 0.9.0.0.

Therefore, try increasing request.timeout.ms.
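
For illustration only (the values are examples, not recommendations), a sketch of how that might look for a plain Java producer:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class TunedProducer {
    public static Producer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Allow each batch up to 60s (default is 30s) to be sent and
        // acknowledged before it is expired with a TimeoutException.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000");
        return new KafkaProducer<>(props);
    }
}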

If you still have throughput problems, you can also refer to the following blog post.

Trask answered 4/12, 2017 at 15:17 Comment(6)
Unfortunately this link is dead for me. – Marr
@CristianoFontes yes, unfortunately it's down, but pretty much everything in it has already been covered above. – Trask
One situation where this might occur is when you start up jobs and move through a backlog faster than you normally would. That spike in traffic can be difficult for Kafka to handle. – Annulment
@Trask does request.timeout.ms mean the network time taken by the sender thread to send the buffered messages? – Leotaleotard
@CristianoFontes will the Kafka producer retry (if retries > 3 is set in the producer config) in the case of "xxx ms has passed since batch creation plus linger time"? – Incongruity
For those looking for the blog post, it can be found on the web archive: web.archive.org/web/20181027004036/http://ingest.tips/2015/07/… – Collettecolletti

This issue arises either when the brokers/topics/partitions cannot be reached by the producer, or when the producer times out before the queued records can be sent.

I found that you can encounter this issue even with live brokers. In my case, the topic's partition leaders were pointing to inactive broker IDs. To fix the issue, you have to migrate those leaders to active brokers.

Use the partition-reassignment tool for the impacted topics. Topic migration: https://kafka.apache.org/21/documentation.html#basic_ops_automigrate
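
The linked documentation covers the kafka-reassign-partitions.sh CLI tool. As a rough alternative sketch, clusters and clients on Kafka 2.4+ can also do this through the AdminClient API (the topic, partition, and broker IDs below are placeholders):

import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class MovePartitionReplicas {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Move partition 0 of "my-topic" onto live brokers 1 and 2;
            // the first replica in the list is the preferred leader.
            admin.alterPartitionReassignments(Map.of(
                    new TopicPartition("my-topic", 0),
                    Optional.of(new NewPartitionReassignment(Arrays.asList(1, 2)))
            )).all().get();
        }
    }
}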

Cybil answered 21/4, 2020 at 21:51 Comment(0)

I had the same message, and I fixed it by cleaning the Kafka data from ZooKeeper. After that it worked.

Noblewoman answered 27/6, 2018 at 20:58 Comment(3)
Did you just clear certain things out of ZK, or drop all Kafka-managed data from ZK? – Adoptive
In my case, I cleaned all ZK data. – Noblewoman
Can anyone please guide me on how to do that? I am facing the exact same problem and, except for cleaning the data, have tried it all! I am willing to clean all ZK data but can't work out how! – Taperecord

I faced the same issue in an AKS cluster; just restarting the Kafka and ZooKeeper servers resolved it.

Catbird answered 29/11, 2018 at 13:57 Comment(0)

For the Kafka Docker case:

I spent a lot of time figuring out what happened, including changing server.properties, producer.properties, and my code (Eclipse). None of that worked for me (I was sending messages from my laptop to Kafka Docker on a Linux server).

I cleaned Kafka and ZooKeeper and reinstalled them via docker-compose.yml (I'm a newbie). Please look at my docker-compose.yml files and note how I changed the advertised IPs to my Linux server's IP address, 10.5.1.30, for both the bitnami/kafka and wurstmeister/kafka images.

[Screenshots in the original post: the bitnami/kafka compose file before and after the change, the wurstmeister/kafka compose file, and the program output after I ran my code.]

Full code:

import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SimpleProducer {
    public static void main(String[] args) {
        String topicName = "demo";
        Properties props = new Properties();
        // bootstrap.servers must point at the broker's advertised listener.
        props.put("bootstrap.servers", "10.5.1.30:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources closes the producer, flushing buffered records.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() only buffers the record; get() waits for the broker's ack,
            // so any TimeoutException surfaces here instead of being lost.
            Future<RecordMetadata> f = producer.send(new ProducerRecord<>(topicName, "Eclipse3"));
            System.out.println("Message sent successfully, metadata: " + f.get().toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Hope that helps. Peace!

Mashburn answered 8/11, 2021 at 7:17 Comment(1)
Changing the Kafka containers doesn't fix the error in the title. It's having f.get() and producer.close() that keeps the batch from expiring. – Piatt

Say a topic has 100 partitions (0-99). Kafka lets you produce records to a topic by specifying a particular partition. I faced this issue when trying to produce to a partition > 99, because the brokers reject those records.
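
A sketch of the mistake (the topic name is hypothetical; the partition is the second constructor argument):

import org.apache.kafka.clients.producer.ProducerRecord;

// Valid: partitions 0-99 exist for this hypothetical topic.
ProducerRecord<String, String> ok =
        new ProducerRecord<>("my-topic", 99, "key", "value");
// Invalid: partition 100 does not exist; per this answer, such records
// are never delivered and can surface as the batch-expiry TimeoutException.
ProducerRecord<String, String> bad =
        new ProducerRecord<>("my-topic", 100, "key", "value");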

Stratosphere answered 16/1, 2019 at 17:26 Comment(0)

We tried everything, but no luck.

  1. Decreased the producer batch size and increased request.timeout.ms.
  2. Restarted the target Kafka cluster; still no luck.
  3. Checked replication on the target Kafka cluster; that was working fine as well.
  4. Added retries and retry.backoff.ms to the producer properties.
  5. Added linger.ms to the producer properties as well.

Finally, in our case the issue was with the Kafka cluster itself: metadata could not be fetched between 2 of the servers.

When we changed the target Kafka cluster to our dev box, it worked fine.

Recall answered 27/11, 2019 at 11:3 Comment(0)
