Site-to-site OpenSWAN VPN tunnel issues with AWS
We have a VPN tunnel with Openswan between two AWS regions and our colo facility (we followed AWS's guide: http://aws.amazon.com/articles/5472675506466066). Regular usage works OK (ssh, etc.), but we are having some MySQL issues over the tunnel between all areas. Whether we use the mysql command-line client on a Linux server or connect with MySQL Connector/J, it basically stalls… it seems to open the connection, but then gets stuck. It doesn't get denied or anything, it just hangs there.

After some initial research I thought this was an MTU issue, but I've adjusted that a lot with no luck.

Connecting to the server works fine, and we can choose a database to use and so on, but with the Java connector it appears that the client doesn't receive any network traffic after the query is sent.

When running a SELECT in the MySQL client on Linux we can get a maximum of 2 or 3 rows back before the connection goes dead.
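
In case it helps with diagnosis, this is roughly how I've been testing the path MTU across the tunnel (10.1.2.x is just a placeholder for a host on the colo side; 1372 bytes of payload plus 28 bytes of ICMP/IP headers makes a 1400-byte packet):

# send a 1400-byte packet with the DF bit set; failures suggest an MTU/fragmentation problem
ping -M do -s 1372 10.1.2.x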

With that said, I also have a separate Openswan VPN on the AWS side for client (Mac and iOS) VPN connections. Everything works great through the client VPN and it seems more stable in general. The main difference I've noticed is that the static connection uses "tunnel" as its type while the client VPN uses "transport", but when I switch the static connection to transport it reports something like 30 open connections and doesn't work.

I'm very new to Openswan, so I'm hoping someone can point me in the right direction to get the static tunnel working as well as the client VPN does.

As always, here are my config files:

ipsec.conf for BOTH static tunnel servers:

# basic configuration
config setup
    # Debug-logging controls:  "none" for (almost) none, "all" for lots.
    # klipsdebug=none
    # plutodebug="control parsing"
    # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
    protostack=netkey
    nat_traversal=yes
    virtual_private=
    oe=off
    # Enable this if you see "failed to find any available worker"
    # nhelpers=0

#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf
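
(If it helps anyone reproduce this, the usual sanity check for NAT traversal support and the related kernel sysctls on this kind of setup is Openswan's built-in verifier, run on each static-tunnel host:)

# sanity-check the Openswan install, NAT-T support and kernel settings
ipsec verify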

VPC1-to-colo tunnel conf

conn vpc1-to-DT
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=54.213.24.xxx
    leftnexthop=%defaultroute
    leftsubnet=10.1.4.0/24
    right=72.26.103.xxx
    rightsubnet=10.1.2.0/23
    pfs=yes
    auto=start

colo-to-VPC1 tunnel conf

conn DT-to-vpc1
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=72.26.103.xxx
    leftnexthop=%defaultroute
    leftsubnet=10.1.2.0/23
    right=54.213.24.xxx
    rightsubnet=10.1.4.0/24
    pfs=yes
    auto=start
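
For what it's worth, both tunnels do establish and basic traffic flows; something like the following is what I've been using to confirm the SAs on either endpoint (the grep pattern is just the conn name on that side):

# show negotiated state for the static tunnel and the kernel SAs
ipsec auto --status | grep vpc1-to-DT
ip xfrm state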

Client point VPN ipsec.conf

# basic configuration

config setup
    interfaces=%defaultroute
    klipsdebug=none
    nat_traversal=yes
    nhelpers=0
    oe=off
    plutodebug=none
    plutostderrlog=/var/log/pluto.log
    protostack=netkey
    virtual_private=%v4:10.1.4.0/24

conn L2TP-PSK
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    type=transport
    forceencaps=yes
    right=%any
    rightsubnet=vhost:%any,%priv
    rightprotoport=17/0
    # Using the magic port of "0" means "any one single port". This is
    # a work around required for Apple OSX clients that use a randomly
    # high port, but propose "0" instead of their port.
    left=%defaultroute
    leftprotoport=17/1701
    # Apple iOS doesn't send delete notify so we need dead peer detection
    # to detect vanishing clients
    dpddelay=10
    dpdtimeout=90
    dpdaction=clear
Unbutton answered 13/2, 2014 at 17:52 Comment(1)
Speculation: the security group for return traffic toward the MySQL server isn't allowing ICMP from 0.0.0.0/0, possibly breaking path MTU discovery. en.m.wikipedia.org/wiki/Path_MTU_discovery#Problems_with_PMTUD – Spray

Found the solution. I needed to add the following iptables rule on both ends:

iptables -t mangle -I POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

With this plus an MTU of 1400 we're looking very solid.
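
Two small follow-up notes (assuming a RHEL/CentOS-style box, as in the AWS guide): you can confirm the rule is actually matching SYN packets by watching its counters, and remember that the rule won't survive a reboot unless you save it.

# check hit counters on the mangle POSTROUTING chain
iptables -t mangle -L POSTROUTING -n -v
# persist the rule across reboots (RHEL/CentOS)
service iptables save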

Unbutton answered 20/2, 2014 at 8:29 Comment(1)
As far as I could understand, --clamp-mss-to-pmtu auto-calculates the maximum segment size, though I don't know based on what exactly. Anyway, it turns out that it doesn't always work. I needed to force it statically with the --set-mss option and a value I found by trial and error until it worked. – Hyetology
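
(For anyone following that suggestion, the static variant of the rule looks roughly like this; 1360 is only an example value and needs to be tuned to your tunnel's overhead:)

# clamp the MSS to a fixed value instead of deriving it from the path MTU
iptables -t mangle -I POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1360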

We had the same issue with a server connecting from the EU region to an RDS instance in the US. This appears to be a known issue: RDS instances don't respond to the ICMP messages that are needed to auto-discover the MTU. As a workaround, you'll need to configure a smaller MTU on the instance that is performing the query.

On the server that is making the connection to the RDS instance (not on the VPN tunnel instances), run the following command to set an MTU of 1422 (which worked for us):

sudo ifconfig eth0 mtu 1422
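
Note that an ifconfig change like this doesn't survive a reboot; ip link set dev eth0 mtu 1422 is the iproute2 equivalent and has the same limitation. To make it permanent you'd put it in the interface configuration; on a RHEL/CentOS-style system that would be something like the line below (the exact file and syntax depend on your distro):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (RHEL/CentOS; adjust for your distro)
MTU=1422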
Hamlen answered 20/4, 2014 at 23:24 Comment(1)
This worked for me for the same kind of issue, but I still don't get why lowering the client's MTU solves the problem, since the problem (as shown by tcpdump) is actually the size of the response packets (from an ElastiCache node, in my case). – Hyetology
