Establishing a PPTP connection in a Kubernetes pod
I'm trying to set up a pod running a pptp-client.

I want to access a single machine behind the VPN, and this works fine locally: my docker container adds a record to my localhost's routing table, and all is well.

ip route add x.x.x.x dev ppp0
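As an aside, the route can also be added automatically whenever the tunnel comes up by dropping a hook script into pppd's ip-up.d directory, so it survives reconnects. A minimal sketch, assuming a Debian-style /etc/ppp/ip-up.d/ layout; the file name is hypothetical and x.x.x.x is the placeholder target:

#!/bin/sh
# /etc/ppp/ip-up.d/10-vpn-route (hypothetical name)
# pppd passes the new interface name (e.g. ppp0) as the first argument.
ip route add x.x.x.x dev "$1"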

I am only able to establish a connection to the VPN server as long as privileged is set to true and network_mode is set to "host".

The production environment is a bit different: the "localhost" would be one of the three nodes in our Google Container Engine cluster.

I don't know whether the route added after the connection is established would be accessible only to the containers running on that node... but that is a problem for later.

docker-compose.yml

version: '2'
services:
  pptp-tunnel:
    build: ./
    image: eu.gcr.io/project/image
    environment:
     - VPN_SERVER=X.X.X.X
     - VPN_USER=XXXX
     - VPN_PASSWORD=XXXX
    privileged: true
    network_mode: "host"
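For comparison, this is roughly the equivalent one-off docker run invocation (a sketch; the image name comes from the compose file above and the values are placeholders):

docker run --rm --privileged --net=host \
  -e VPN_SERVER=X.X.X.X \
  -e VPN_USER=XXXX \
  -e VPN_PASSWORD=XXXX \
  eu.gcr.io/project/image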

This seems to be more difficult to achieve with Kubernetes, though both options exist and are declared in my manifest, as you can see (hostNetwork, privileged).

Kubernetes Version

Version 1.6.6

pptp-tunnel.yml

apiVersion: v1
kind: Service
metadata:
  name: pptp-tunnel
  namespace: default
  labels:
    app: pptp-tunnel
spec:
  type: ClusterIP
  selector:
    app: pptp-tunnel
  ports:
    - name: pptp
      port: 1723
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pptp-tunnel
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: pptp-tunnel
  template:
    metadata:
      labels:
        app: pptp-tunnel
    spec:
      hostNetwork: true
      containers:
      - name: pptp-tunnel
        env:
        - name: VPN_SERVER
          value: X.X.X.X
        - name: VPN_USER
          value: XXXX
        - name: VPN_PASSWORD
          value: 'XXXXX'
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN"]
        image: eu.gcr.io/project/image
        imagePullPolicy: Always
        ports:
        - containerPort: 1723

I've also tried adding capabilities: NET_ADMIN, as you can see, without effect. Setting the container to privileged mode should disable the security restrictions anyway; I shouldn't need both.

It would be nice not to have to set the container to privileged mode and instead rely on capabilities alone to bring the ppp0 interface up and add the routing.
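A minimal sketch of what that might look like, assuming NET_ADMIN alone were sufficient and /dev/ppp were exposed to the container through a hostPath volume (a hypothetical fragment of the pod template above; note that the device cgroup normally denies unprivileged containers access to raw devices, so this may still fail without privileged: true):

      containers:
      - name: pptp-tunnel
        # ...image, env and ports as above...
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]   # no privileged: true
        volumeMounts:
        - name: dev-ppp
          mountPath: /dev/ppp    # the ppp control device from the node
      volumes:
      - name: dev-ppp
        hostPath:
          path: /dev/ppp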

What happens when the pod starts is that the pptp-client simply keeps sending requests and times out. (This happens with my docker container locally as well, until I turn network_mode: "host" on.)

sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xa43cd4b4> <pcomp> <accomp>]
LCP: timeout sending Config-Requests

But this is without hostNetwork enabled; if I enable it, I get a single request sent and then a modem hangup.

using channel 42
Using interface ppp0
Connect: ppp0 <--> /dev/pts/0
sent [LCP ConfReq id=0x7 <asyncmap 0x0> <magic 0xcdae15b8> <pcomp> <accomp>]
Script ?? finished (pid 59), status = 0x0
Script pptp XX.XX.XX.XX --nolaunchpppd finished (pid 60), status = 0x0
Script ?? finished (pid 67), status = 0x0
Modem hangup
Connection terminated.

Declaring the hostNetwork boolean lets me see the multiple interfaces shared from the host, so that part is working, but somehow I'm not able to establish a connection, and I can't figure out why.

Perhaps there is a better solution? I will still need to establish a connection to the VPN server, but adding a routing record to the host may not be the best approach.

Any help is greatly appreciated!

Transitive answered 2/8, 2017 at 17:16 Comment(8)
Are you able to say why you are interested in establishing a VPN connection to just one Pod inside your cluster? The description reads as if you want to connect to the Node itself, which would be a substantially easier task than the above. – Affray
That said, to solve this we'll need to know which software-defined network, if any, you're using in-cluster; knowing which version of docker the Node(s) are running would be handy too. – Affray
This existing question leads me to believe the way kubernetes handles Services (iptables trickery) is in conflict with the way pptp expects the world to work, setting aside whether modprobe needs to be involved, which is a whole other ball of wax. – Affray
I don't want to establish a VPN connection to a pod inside my cluster; the machine I want to connect to is not inside the cluster at all. I simply want to establish a connection to that outside machine. – Transitive
I think hostNetwork is just for that: it shares the interfaces from the host, and the kernel modules should be shared as well (?). Otherwise I'd have to mount the host's modules folder and use that in the container... well, it gets ugly. – Transitive
Then perhaps we have two nomenclature problems at work here: the first is that a Pod is an in-cluster docker container, which has its own [traditionally] non-routable IP address, not a Node, which is the outer host upon which Pods are scheduled. The second is that this question, based on what you said, has nothing to do with kubernetes. – Affray
Yes, it is an in-cluster docker container. It's probably a routing or a kernel module/interface problem. So yes, it has to do with kubernetes, since I'm declaring services to run in/on/with kubernetes and I want to know how to define them correctly in order for this to work (?). – Transitive
As you can see in the question you posted, he works around the problem by appending --net host; the corresponding option in Kubernetes is hostNetwork. – Transitive
