Java heap space - Out of memory error - Kafka Broker with SASL_SSL
When I run the /usr/bin/kafka-delete-records command below against the Kafka broker's PLAINTEXT port 9092, it works fine, but against the SASL_SSL port 9094 it throws the error below. Does anyone know how to make the command work against the broker's SASL_SSL port 9094?

$ ssh **** ****@<IP address> /usr/bin/kafka-delete-records --bootstrap-server localhost:9094 --offset-json-file /kafka/records.json

[2019-10-14 04:15:49,891] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:390)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:351)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1125)
    at java.lang.Thread.run(Thread.java:748)
Executing records delete operation
Records delete operation completed:

NOTE: -Xmx is set to 8 GB, and the total memory of the server is 16 GB.

Please see the current heap settings of the running Kafka process below:

$ ps -ef | grep kafka
cp-kafka 11419     1  3 10:07 ?        00:05:27 java -Xms8g -Xmx8g  -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35  ........ io.confluent.support.metrics.SupportedKafka /etc/kafka/server.properties
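Worth noting: the -Xms8g/-Xmx8g shown in the ps output belongs to the broker JVM only. kafka-delete-records starts its own JVM via kafka-run-class.sh, whose heap is controlled by the KAFKA_HEAP_OPTS environment variable and defaults to a much smaller value. A sketch, assuming the stock launch scripts:

```shell
# The 8 GB heap above applies only to the broker process. The CLI tool
# runs in a separate JVM and honors KAFKA_HEAP_OPTS per invocation:
KAFKA_HEAP_OPTS="-Xmx1g" /usr/bin/kafka-delete-records \
  --bootstrap-server localhost:9094 --offset-json-file /kafka/records.json
```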
Brusa answered 14/10, 2019 at 11:13 Comment(8)
This error message means the JVM your application runs in is out of memory. You should try searching a bit. #37835Avitzur
8 GB of memory has been given to the JVM, and it works with port 9092 as mentioned above.Brusa
Did you increase -Xmx in KAFKA_HEAP_OPTS?Pasteurization
@ASR any idea why it works on port 9092?Brusa
-Xmx is set to 8 GB, and the total memory of the server is 16 GB.Brusa
It shouldn't be related to any specific port, but I wonder whether the SASL configuration is referring to some other configuration. Grep the process and check whether the new -Xmx is reflected in the running Kafka process.Pasteurization
@ASR, yes, "ps -ef | grep kafka" shows -Xmx8g.Brusa
I have updated the question with more details.Brusa

Most likely, the OOM exception is just a red herring; see JIRA KAFKA-4493. The real issue is the SASL_SSL connection, which your client is unable to establish properly. Enable SSL debug on the client side and proceed from there:

$ export KAFKA_OPTS="-Djavax.net.debug=handshake"
$ /usr/bin/kafka-delete-records ...
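For context, the mechanism reported in KAFKA-4493 is roughly this: when a client speaks the plaintext Kafka protocol to a TLS-enabled port, the broker answers with a TLS record (for example an alert, whose record header starts 0x15 0x03 0x03), and the client interprets the first four bytes of that response as a big-endian Kafka message size, then tries to allocate a buffer that large. The exact bytes, and thus the bogus size, vary with the handshake state, but they decode to hundreds of megabytes:

```shell
# First bytes of a TLS alert record: 0x15 0x03 0x03 ...
# Read as a big-endian 32-bit Kafka message size, this is ~336 MB,
# far larger than the CLI tool's default heap:
printf '%d\n' 0x15030300
# prints 352518912
```

That allocation, not any real memory pressure, is what blows the heap, which is why raising -Xmx on the broker makes no difference.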
Backcourt answered 14/10, 2019 at 14:4 Comment(2)
In our case, it usually occurs when we forget to add the credentials or use the wrong ones. It is very misleading.Isma
@Isma Excellent point. I'm assuming the OP is using a Kerberos ticket cache to log in, but maybe he just omitted his jaas.conf to avoid bloating the question.Backcourt
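Following up on the comments about missing or wrong credentials: the Kafka CLI tools accept a --command-config properties file carrying the client-side security settings. A minimal sketch, assuming the PLAIN SASL mechanism; the mechanism, truststore path, username, and password are all placeholders that must match your broker setup:

```shell
# Hypothetical client security settings; mechanism, paths, and
# credentials below are assumptions, not values from the question.
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myuser" password="mypassword";
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
EOF

/usr/bin/kafka-delete-records --bootstrap-server localhost:9094 \
  --command-config client.properties --offset-json-file /kafka/records.json
```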

© 2022 - 2024 — McMap. All rights reserved.