I'm trying to set up a pod running a pptp-client.
I want to access a single machine behind the VPN, and this works fine locally: my Docker container adds records to my localhost's routing table, and all is well.
ip route add x.x.x.x dev ppp0
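(For context, a route like that would typically be added from a pppd ip-up hook once the tunnel comes up. A minimal sketch, assuming such a hook — the script path and the REMOTE_HOST value are illustrative, not the actual contents of my image:)

#!/bin/sh
# Illustrative /etc/ppp/ip-up.d/ hook: pppd passes the interface name (e.g. ppp0) as $1
REMOTE_HOST=x.x.x.x   # placeholder for the single machine behind the VPN
ip route add "$REMOTE_HOST" dev "$1"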
I am only able to establish a connection to the VPN server as long as privileged is set to true and network_mode is set to "host".
The production environment is a bit different: the "localhost" would be one of the three nodes in our Google Container Engine cluster.
I don't know whether a route added after the connection is established would only be accessible to the containers running on that node, but that is a later problem.
docker-compose.yml
version: '2'
services:
  pptp-tunnel:
    build: ./
    image: eu.gcr.io/project/image
    environment:
      - VPN_SERVER=X.X.X.X
      - VPN_USER=XXXX
      - VPN_PASSWORD=XXXX
    privileged: true
    network_mode: "host"
This seems to be more difficult to achieve with Kubernetes, though both options (hostNetwork, privileged) exist and are declared, as you can see in my manifest.
Kubernetes Version
Version 1.6.6
pptp-tunnel.yml
apiVersion: v1
kind: Service
metadata:
  name: pptp-tunnel
  namespace: default
  labels:
spec:
  type: ClusterIP
  selector:
    app: pptp-tunnel
  ports:
  - name: pptp
    port: 1723
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pptp-tunnel
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: pptp-tunnel
  template:
    metadata:
      labels:
        app: pptp-tunnel
    spec:
      hostNetwork: true
      containers:
      - name: pptp-tunnel
        env:
        - name: VPN_SERVER
          value: X.X.X.X
        - name: VPN_USER
          value: XXXX
        - name: VPN_PASSWORD
          value: 'XXXXX'
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN"]
        image: eu.gcr.io/project/image
        imagePullPolicy: Always
        ports:
        - containerPort: 1723
I've also tried adding capabilities: NET_ADMIN, as you can see, without effect. Setting the container to privileged mode should disable the security restrictions anyway, so I shouldn't need both.
It would be nice not to have to set the container to privileged mode and to rely on capabilities alone to bring the ppp0 interface up and add the route, roughly as sketched below.
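This is only a sketch of what I'm aiming for, and I'm not sure NET_ADMIN alone is enough, since pppd also needs access to /dev/ppp inside the container:

        securityContext:
          capabilities:
            add: ["NET_ADMIN"]   # no privileged: true; may still fail if /dev/ppp is unavailable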
What happens when the pod starts is that the pptp-client simply keeps sending requests and times out. (This happens with my Docker container locally as well, until I turn network_mode "host" on.)
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xa43cd4b4> <pcomp> <accomp>]
LCP: timeout sending Config-Requests
But this is without hostNetwork enabled; if I enable it, I simply get a single request sent, followed by a modem hangup.
using channel 42
Using interface ppp0
Connect: ppp0 <--> /dev/pts/0
sent [LCP ConfReq id=0x7 <asyncmap 0x0> <magic 0xcdae15b8> <pcomp> <accomp>]
Script ?? finished (pid 59), status = 0x0
Script pptp XX.XX.XX.XX --nolaunchpppd finished (pid 60), status = 0x0
Script ?? finished (pid 67), status = 0x0
Modem hangup
Connection terminated.
Setting hostNetwork: true lets me see the interfaces shared from the host, so that part is working, but somehow I'm not able to establish a connection, and I can't figure out why.
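(For what it's worth, the host interfaces can be listed from inside the running pod like this — the pod name below is just an example:)

kubectl exec -it pptp-tunnel-1234567890-abcde -- ip addr show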
Perhaps there is a better approach? I will still need to establish a connection to the VPN server, but adding a routing record to the host may not be the best solution.
Any help is greatly appreciated!