Kafka: The message when serialized is larger than the maximum request size you have configured with the max.request.size configuration
Getting the following error (Kafka 2.1.0):

2018-12-03 21:22:37.873 ERROR 37645 --- [nio-8080-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{82, 73, 70, 70, 36, 96, 19, 0, 87, 65, 86, 69, 102, 109, 116, 32, 16, 0, 0, 0, 1, 0, 1, 0, 68, -84,...' to topic recieved_sound: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1269892 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

I tried all the suggestions in various SO posts.

My Producer.properties:

max.request.size=41943040
message.max.bytes=41943040
replica.fetch.max.bytes=41943040
fetch.message.max.bytes=41943040

Server.properties:

socket.request.max.bytes=104857600
message.max.bytes=41943040
max.request.size=41943040
replica.fetch.max.bytes=41943040
fetch.message.max.bytes=41943040

ProducerConfig (Spring Boot):

configProps.put("message.max.bytes", "41943040");
configProps.put("max.request.size", "41943040");
configProps.put("replica.fetch.max.bytes", "41943040");
configProps.put("fetch.message.max.bytes", "41943040");

ConsumerConfig (SpringBoot):

props.put("fetch.message.max.bytes", "41943040");
props.put("message.max.bytes", "41943040");
props.put("max.request.size", "41943040");
props.put("replica.fetch.max.bytes", "41943040");
props.put("fetch.message.max.bytes", "41943040");

I also changed the Strings to numbers in the last two files, restarted the brokers multiple times, and created new topics. Initially I was getting the org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept error, which these changes fixed, but I still have no luck with this new error.
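
For reference, a minimal sketch of a Spring Boot producer factory that raises the producer-side limit programmatically (the bootstrap address, serializers, and bean wiring here are illustrative assumptions, not taken from my setup):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class LargeMessageProducerConfig {

    @Bean
    public ProducerFactory<String, byte[]> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        // the producer-side check that throws RecordTooLargeException before anything is sent
        configProps.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "41943040");
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, byte[]> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}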

Selfregulated answered 4/12, 2018 at 3:27 Comment(4)
Your configurations look fine. I am guessing maybe you did not deploy the changes to all brokers? Can you check the broker config using bin/kafka-configs.sh to make sure your configurations are correct on all brokers?Hyde
also add max.partition.fetch.bytesSaguache
max.partition.fetch.bytes is a soft limit. from documentation: If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.Aparicio
You might need to add kafka.max.partition.fetch.bytes instead of max.partition.fetch.bytes to the client propertiesShanell
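
A minimal sketch of the consumer-side settings mentioned in the comments above (bootstrap address, group id, and sizes are illustrative assumptions):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LargeMessageConsumerProps {

    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-message-group");     // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        // per-partition cap per fetch; a soft limit, as the comment above notes
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "41943040");
        // cap for the whole fetch response; also a soft limit
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "41943040");
        return props;
    }
}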
Set a breakpoint in KafkaProducer.ensureValidRecordSize() to see what's going on.

With this app

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;

@SpringBootApplication
public class So53605262Application {

    public static void main(String[] args) {
        SpringApplication.run(So53605262Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so53605262", 1, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        // send a 2 MB payload to trigger the producer-side size check
        return args -> template.send("so53605262", new String(new byte[1024 * 1024 * 2]));
    }

}

I get

The message is 2097240 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

as expected; when I add

spring.kafka.producer.properties.max.request.size=3000000

(which is the equivalent of your config but using Spring Boot properties), I get

The request included a message larger than the max message size the server will accept.
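
That second error comes from the broker rather than the producer, so the broker-side limits must be raised as well; a sketch of the relevant server.properties entries (example values; every broker needs them, followed by a restart):

# server.properties on every broker (example values)
message.max.bytes=41943040
# keep replication able to copy the larger records between brokers
replica.fetch.max.bytes=41943040

Note that message.max.bytes can also be overridden per topic via the topic-level max.message.bytes setting.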

If debugging doesn't help, perhaps you can post a complete small app that exhibits the behavior you see.

Viscardi answered 4/12, 2018 at 14:29 Comment(0)
You can change the message size in the Kafka property files on the server.

For the default server.properties file:

# in /usr/local/kafka/config/server.properties (commented out by default)
message.max.bytes=26214400

producer.properties:

# the maximum size of a request in bytes
max.request.size=26214400

The same applies for the consumer, as sketched below.
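
For the current (Java) consumer, the analogous entry would be max.partition.fetch.bytes; a sketch, with the path and value as assumptions:

# /usr/local/kafka/config/consumer.properties
# the maximum amount of data per partition the server will return
max.partition.fetch.bytes=26214400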

Corpulent answered 1/3, 2019 at 10:44 Comment(0)
I realise the thread is old; nonetheless, for those who, like myself, find this post when grappling with publishing large messages to Kafka:

To get max_request_size=3145764 (3 MB) to work for the local producer, I needed to set compression_type='lz4' in the KafkaProducer settings:

self.producer = KafkaProducer(
    compression_type='lz4',
    max_request_size=3145764,
    ...,
)

Note: compression_type may be 'lz4', 'gzip', or 'snappy'.

Without setting compression_type, the local producer was unable to deliver messages larger than 1 MB to Kafka, irrespective of having max_request_size set to 3 MB.
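
Note: assuming this is the kafka-python client (the snake_case arguments above suggest it), lz4 compression additionally requires the separate lz4 package to be installed, e.g. pip install lz4.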

Quicksilver answered 27/2 at 5:5 Comment(0)
You should set the producer config this way:

Props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, "41943040");
Deaf answered 1/3, 2019 at 11:40 Comment(1)
It's Props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "41943040");Chongchoo
