Is there a way to limit or define the maximum memory usage of a Kafka Streams application? I have enabled caching on my state stores, but when I deploy to OpenShift my pods get OOM killed. I have checked that I have no memory leaks and that all of my state store iterators are being closed.
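For reference, this is roughly how I have caching enabled in my streams config (the application id, broker address, and cache size below are illustrative placeholders, not my exact values):

```java
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class StreamsProps {

    static Properties streamsConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");    // placeholder
        // Upper bound for the record caches shared across all stream threads (illustrative value)
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
        return props;
    }
}
```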
I have updated my RocksDBConfigSetter to follow the recommendations in https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning#other-general-options, with no luck.
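My setter currently looks roughly like the sketch below. The class name and byte sizes are placeholders; the shared `LRUCache`/`WriteBufferManager` is my attempt at putting a hard bound on RocksDB's off-heap memory across all state stores, on top of the general options from the wiki:

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedRocksDBConfig implements RocksDBConfigSetter {

    // Shared across all state stores in the JVM so total off-heap usage is bounded,
    // instead of each store allocating its own block cache and memtables.
    private static final long TOTAL_OFF_HEAP_BYTES = 512 * 1024 * 1024L; // illustrative
    private static final long TOTAL_MEMTABLE_BYTES = 128 * 1024 * 1024L; // illustrative

    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_BYTES);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_BYTES, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Kafka Streams uses a BlockBasedTableConfig by default, so this cast is safe
        final BlockBasedTableConfig tableConfig =
                (BlockBasedTableConfig) options.tableFormatConfig();

        // Route the block cache, index/filter blocks, and memtables through the shared budget
        tableConfig.setBlockCache(CACHE);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        tableConfig.setPinL0FilterAndIndexBlocksInCache(true);
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);

        // General options suggested by the RocksDB tuning wiki linked above
        tableConfig.setBlockSize(16 * 1024L);
        options.setLevelCompactionDynamicLevelBytes(true);
        options.setBytesPerSync(1024 * 1024L);

        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // CACHE and WRITE_BUFFER_MANAGER are static and shared, so they must not be closed here
    }
}
```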
When I look at the state store directory, its size is only about 2 GB. The pod currently has 50 GB of memory allocated to the application, and it still OOMs.