I've got a NAT setup with thousands of devices connected to it. The gateway's internet connection is on eth0, and the LAN-side devices connect to eth1 on the gateway.
I have the following setup with iptables:
/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
/sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
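For reference, this is roughly how I confirm the rules are loaded and are matching traffic (just listing the chains with packet counters; nothing here is specific to my setup):

/sbin/iptables -t nat -L POSTROUTING -v -n
/sbin/iptables -L FORWARD -v -n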
eth1 is configured as follows:
ip: 192.168.0.1
netmask: 255.255.0.0 (/16)
Clients are assigned IPs from 192.168.0.2 through 192.168.255.254.
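For clarity, the equivalent ip command for this addressing would be something like the following (assuming a plain /16 directly on eth1, which is all I'm doing here):

ip addr add 192.168.0.1/16 dev eth1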
In /etc/sysctl.conf I have the following setting for ip_conntrack_tcp_timeout_established:
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=1200
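To make sure the value is actually taking effect at runtime, I reload sysctl and read the value back, roughly like this:

/sbin/sysctl -p
/sbin/sysctl net.ipv4.netfilter.ip_conntrack_tcp_timeout_established
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established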
Because of the number of client devices that connect to this gateway, I can't use the default 5-day timeout.
This seems to work well, and I have tested the setup with over 10,000 client devices.
However, the issue I am seeing is that the TCP established timeout of 1200 seconds is only being applied to devices in the range 192.168.0.2 through 192.168.0.255. Devices with IPs in the 192.168.1.x through 192.168.255.x range are still using the 5-day default timeout.
This leaves far too many ESTABLISHED connections in the /proc/net/ip_conntrack table, and it eventually fills up: even though those entries should time out within 20 minutes, they show that they will time out in 5 days.
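Here is roughly how I am observing this, by grepping the table for clients in the affected range (the third column of /proc/net/ip_conntrack should be the remaining timeout in seconds, so the stuck entries show values near 432000 instead of 1200; exact field positions may vary by kernel version):

wc -l /proc/net/ip_conntrack
grep ESTABLISHED /proc/net/ip_conntrack | grep 'src=192\.168\.1\.' | head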
Obviously I am missing a setting somewhere or have something configured incorrectly.
Any suggestions?
Thanks