We have a 3-node Kafka cluster deployment with 35 topics, each with 50 partitions, and a replication factor of 2.
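For scale, a rough back-of-the-envelope count of what each broker hosts (our own arithmetic, assuming an even spread of replicas):

# 35 topics x 50 partitions x replication factor 2 = 3500 partition replicas,
# spread across 3 brokers => roughly 1166 replicas per broker
echo $(( 35 * 50 * 2 / 3 ))
# Each replica keeps its active segment plus .index/.timeindex files open,
# so a steady-state FD count in the low thousands is plausible.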
We are seeing a strange problem: intermittently, a Kafka node stops responding with the following error:
ERROR Error while accepting connection (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at kafka.network.Acceptor.accept(SocketServer.scala:460)
at kafka.network.Acceptor.run(SocketServer.scala:403)
at java.lang.Thread.run(Thread.java:745)
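To see whether the real descriptor count actually climbs toward the limit before a crash, a simple sampler over /proc can help (a sketch; the pgrep pattern assumes the standard kafka.Kafka main class):

KAFKA_PID=$(pgrep -f kafka.Kafka | head -1)
# /proc/<pid>/fd is the kernel's authoritative per-process descriptor table
while true; do
    echo "$(date) $(ls /proc/$KAFKA_PID/fd | wc -l)"
    sleep 10
done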
We have deployed the latest Kafka version and are using spring-kafka as the client:
kafka_2.12-2.1.0 (CentOS Linux release 7.6.1810 (Core))
There are three observations:
- If we run `lsof -p <kafka_pid> | wc -l`, we get only around 7,000 open descriptors in total.
- If we run `lsof | grep kafka | wc -l`, we get around 1.5 million open FDs. We have checked that they all belong to the Kafka process (a deduplication cross-check is sketched after this list).
- If we downgrade the system to CentOS 6, the output of `lsof | grep kafka | wc -l` comes back to around 7,000.
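One hedged explanation for the 1.5 million figure: if `lsof` on CentOS 7 emits one row per task (thread) sharing the same descriptor table, the apparent count is the real FD count multiplied by the thread count. A cross-check (a sketch; the awk field positions assume the default lsof column layout):

KAFKA_PID=$(pgrep -f kafka.Kafka | head -1)
# Real FD count vs. thread count
ls /proc/$KAFKA_PID/fd | wc -l
ls /proc/$KAFKA_PID/task | wc -l
# Deduplicate lsof rows by (PID, FD); if this drops back to ~7000,
# the 1.5M lines were per-thread duplicates, not real descriptors
lsof | grep kafka | awk '{print $2, $4}' | sort -u | wc -l

If the deduplicated number matches `lsof -p`, the CentOS 7 discrepancy is a reporting artifact rather than a leak.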
We have tried setting the file limits very high, but we still hit this issue. The following limits are set for the Kafka process:
cat /proc/<kafka_pid>/limits
Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            8388608      unlimited    bytes
Max core file size        0            unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             513395       513395       processes
Max open files            500000       500000       files
Max locked memory         65536        65536        bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       513395       513395       signals
Max msgqueue size         819200       819200       bytes
Max nice priority         0            0
Max realtime priority     0            0
Max realtime timeout      unlimited    unlimited    us
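Note that on CentOS 7 a systemd-managed service ignores /etc/security/limits.conf; the binding knob is LimitNOFILE in the unit. A minimal override sketch (assuming the service is named kafka.service; yours may differ):

sudo mkdir -p /etc/systemd/system/kafka.service.d
printf '[Service]\nLimitNOFILE=500000\n' | sudo tee /etc/systemd/system/kafka.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart kafka

Since /proc/<kafka_pid>/limits above already shows 500000, the limit is being applied; the open question is what consumes it.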
We have a few questions here:
- Why does the broker go down intermittently when we have already configured such large process limits? Does Kafka require even more available file descriptors? (A breakdown of what is consuming them is sketched after this list.)
- Why does the output of `lsof` differ from `lsof -p` on CentOS 7 but not on CentOS 6?
- Are 3 broker nodes too few? With a replication factor of 2, we have around 100 partition replicas per topic distributed among the 3 nodes, i.e. around 33 per node per topic.
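To see what is actually consuming descriptors (log segments vs. sockets), grouping lsof output by its TYPE column helps (a sketch, again assuming the default lsof column layout):

KAFKA_PID=$(pgrep -f kafka.Kafka | head -1)
# REG = regular files (log segments, indexes); IPv4/IPv6 = sockets
lsof -p $KAFKA_PID | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn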
Edit 1: It seems we are hitting this Kafka issue: https://issues.apache.org/jira/browse/KAFKA-7697
We plan to downgrade the Kafka version to 2.0.1.
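KAFKA-7697 is a broker-side deadlock (fixed in 2.1.1), so a thread dump can confirm we are actually hitting it before committing to the downgrade; jstack prints a "Found one Java-level deadlock" section when it detects one:

KAFKA_PID=$(pgrep -f kafka.Kafka | head -1)
jstack $KAFKA_PID > /tmp/kafka-threads.txt
grep -i -A 20 deadlock /tmp/kafka-threads.txt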
Our relevant log settings:
log.retention.check.interval.ms=300000
log.retention.ms=3600000
log.roll.ms=3600000
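With log.roll.ms=3600000 every partition rolls a new segment hourly, and retention only prunes segments at each check interval, so segment (and index) files accumulate between checks. A quick on-disk count (the path is an assumption; substitute your configured log.dirs):

# Count segment files under the Kafka log directory
find /tmp/kafka-logs -name '*.log' | wc -l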