getting "org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)"
10

I have installed ZooKeeper and Kafka. First step: start ZooKeeper with the following commands:

bin/zkServer.sh start
bin/zkCli.sh

Second step: start the Kafka server:

bin/kafka-server-start.sh config/server.properties

Kafka should run at localhost:9092,

but I am getting the following error:

WARN Unexpected error from /0:0:0:0:0:0:0:1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)

I am following these links: Link1 Link2

I am new to Kafka, please help me set it up.

Hix answered 19/3, 2018 at 19:31 Comment(3)
This might help. It could be an issue with how your consumer connects to the broker.Chafee
The receive size by default is 1 MB. You may also want to look at max.message.bytes=20000000 and message.max.bytes=20000000.Sclerous
possible duplicate of #57141850Ium
51

1195725856 is GET[space] encoded as a big-endian, four-byte integer (see here for more information on how that works). This indicates that HTTP traffic is being sent to Kafka's port 9092, but Kafka doesn't accept HTTP traffic; it only accepts its own protocol, which takes the first four bytes of a connection as the receive size, hence the error.
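
You can verify the decoding yourself; a quick shell sketch (assumes printf and xxd are available):

printf '%08x' 1195725856 | xxd -r -p; echo
# prints "GET " - the first four bytes of an HTTP request line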

Since the error is received on startup, it is likely benign and may indicate a scanning service or similar on your network probing ports with protocols that Kafka doesn't understand.

In order to find the cause, you can trace where the HTTP traffic is coming from using tcpdump:

# capture all traffic destined for the Kafka port (may require root)
tcpdump -i any -w trap.pcap dst port 9092
# ...wait for the log messages to appear again, then ^C...
# dump the capture as hex+ASCII; less jumps to the first match of "HEAD"
tcpdump -qX -r trap.pcap | less +/HEAD

Overall though, this is probably annoying but harmless. At least Kafka isn't actually allocating/dirtying the memory. :-)

Placable answered 14/1, 2020 at 8:54 Comment(3)
That was exactly my case. This error can happen when Prometheus is configured to scrape data on the Kafka port (9092 by default) but should be scraping the JMX exporter port (8080 by default).Carlist
Same for me. I run ZooKeeper and Kafka with docker-compose. I suspect it was because I defined a healthcheck that scraped localhost:9092 with curl, so I removed that part.Burin
This was exactly my issue as well. It's amazing how you found the value.Sigma
22

Try raising the socket.request.max.bytes value in the $KAFKA_HOME/config/server.properties file to more than your packet size, then restart the Kafka server.
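
For example, in server.properties (the value below is illustrative; size it to your largest expected request):

# default is 104857600 (100 MB)
socket.request.max.bytes=209715200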

Transpose answered 17/7, 2018 at 11:30 Comment(1)
But how can one get this error while starting Kafka? It says the received message is bigger than the set size, but we haven't yet fully started Kafka to start receiving. Am I missing something?Pinfeather
12

My initial guess would be that you might be trying to receive a request that is too large. The maximum size is the default value of socket.request.max.bytes, which is 100 MB (104857600 bytes = 100 × 1024 × 1024, the exact number in the error). So if you have a message bigger than 100 MB, try increasing the value of this property in server.properties, and make sure to restart the cluster before trying again.


If the above doesn't work, then most probably you are trying to connect to a non-SSL listener. If you are using the default broker port, you need to verify that :9092 is the SSL listener port on that broker.

For example,

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL

should do the trick for you (Make sure you restart Kafka after re-configuring these properties).
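
For reference, a minimal client-side sketch that matches an SSL listener (the truststore path and password are placeholders):

# consumer.properties / producer.properties
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit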

Yellowtail answered 17/7, 2018 at 13:28 Comment(2)
But how can one get this error while starting Kafka? It says the received message is bigger than the set size, but we haven't yet fully started Kafka to start receiving. Am I missing something? Also, we were getting (size = 1195725856 larger than 104857600). The received size is 1195725856 B ≈ 1.1 GB. Increasing socket.request.max.bytes to 2 GB gave java.lang.OutOfMemoryError. Should I set KAFKA_HEAP_OPTS="-Xms512m -Xmx2g"? Notice -Xmx2g. Is that how we specify a 2 GB max Java heap size?Pinfeather
@Pinfeather I would suggest you check this SO question: #41120028. There is a possibility that some other application is sending data to port 9092.Kutaisi
2

This is how I resolved this issue after installing a Kafka, ELK, and Kafdrop setup:

  1. First, stop each application that interfaces with Kafka, one by one, to track down the offending service.

  2. Resolve the issue with that application.

In my setup it was Metricbeat.

It was resolved by editing the Metricbeat kafka.yml settings file, located in the modules.d subfolder:

  1. Ensuring the Kafka advertised.listeners value in server.properties was referenced in the hosts property.

  2. Uncommenting the metricsets and client_id properties.

The resulting kafka.yml looks like:

# Module: kafka
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.6/metricbeat-module-kafka.html

# Kafka metrics collected using the Kafka protocol
- module: kafka
  metricsets:
    - partition
    - consumergroup
  period: 10s
  hosts: ["[your advertised.listener]:9092"]
  client_id: metricbeat
Unhandy answered 19/5, 2020 at 7:46 Comment(0)
2

I recently encountered this error, but the "size" was 369295617.

Converting this to hexadecimal gives 0x16030101, which is the SSL/TLS handshake magic bytes 0x16, 0x03, 0x01 followed by part of the SSL record length (0x01).
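
The same check in shell:

printf '0x%08x\n' 369295617
# prints 0x16030101 - a TLS handshake record header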

In my instance I needed to override the consumer's security.protocol config (the consumer was in production mode, expecting SSL) to "SASL_PLAINTEXT".
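
In client properties terms, the override is one line:

security.protocol=SASL_PLAINTEXT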

Roman answered 4/3 at 17:12 Comment(0)
1

The answer is most likely in one of these 2 areas:

a. socket.request.max.bytes

b. you are using a non-SSL endpoint to connect the producer and the consumer.

Note: the port you run it on really does not matter. If you have an ELB, make sure the ELB is reporting all healthchecks as successful.

In my case I had an AWS ELB fronting Kafka. I had specified the listener protocol as TCP instead of Secure TCP. This caused the issue.

#listeners=PLAINTEXT://:9092
inter.broker.listener.name=INTERNAL
listeners=INTERNAL://:9093,EXTERNAL://:9092
advertised.listeners=EXTERNAL://<AWS-ELB>:9092,INTERNAL://<EC2-PRIVATE-DNS>:9093

listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

Here is a snippet of my producer.properties and consumer.properties for testing externally

bootstrap.servers=<AWS-ELB>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
Eklund answered 26/1, 2019 at 15:20 Comment(0)
1

In my case, some other application was already sending data to port 9092, so the server failed to start. Closing that application resolved the issue.
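
To see which process is bound to the port before starting Kafka (may require sudo; assumes lsof is installed):

lsof -i :9092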

Facia answered 27/1, 2021 at 10:57 Comment(0)
0

Please make sure that you use security.protocol=PLAINTEXT; otherwise you may have a server security mismatch compared to the clients trying to connect.

African answered 14/8, 2021 at 3:46 Comment(0)
0

For us it was kube-prom-stack trying to scrape metrics. Once we deleted it, we stopped receiving those messages.

Letta answered 4/7, 2023 at 11:50 Comment(0)
-1

I was getting the same error when I enabled SSL for my local Kafka testing in YAML. It worked after removing the following:

kafka:
  security:
    protocol: "SSL"
  producer:
    ssl:
      protocol: "SSL"
  consumer:
    ssl:
      protocol: "SSL"
  properties:
    security.protocol: "SSL"

Freiburg answered 15/5 at 21:18 Comment(0)
