Redirect from http port (80) to https port(443) on Kubernetes

I'm new to this.

How can I redirect from HTTP port (80) to HTTPS port (443) on the same domain (service) in Kubernetes?

I've tried putting nginx in the same pod (container) and redirecting from HTTP to HTTPS, but it didn't work.

I tried this in the same pod:

    # nginx
    server {
        listen         80;
        server_name    example.com;
        return         301 https://$server_name$request_uri;
    }

Example Kubernetes service file:

    # Jupyterhub is running on port 8000.
    spec:
      ports:
      - port: 443
        name: https
        protocol: TCP
        targetPort: 8000
      - port: 80
        name: http
        protocol: TCP
        targetPort: 433

Is there a default way to do this within Kubernetes?

Any help is much appreciated.

Incarnadine answered 27/8, 2018 at 16:6 Comment(7)
The target port in your Kubernetes service file is incorrect. It should be 443. – Hallowed
Can you shed some more light on what you are trying to achieve? Is Jupyterhub running within k8s as a pod? (If so, what is the service manifest for it?) Is your nginx within the same pod as Jupyterhub (it seems so according to the question), and if so, how is it installed? There are more ways to skin a cat, and the more relevant details you give us (or a minimal reproducible example), the more likely you are to get a proper answer... – Building
I also did some research and found out that the nginx-ingress-controller can be used. Can it also be used for my scenario? – Incarnadine
Hello. 1. When anyone hits example.com (HTTP) it should redirect to example.com (HTTPS). I have already enabled SSL on example.com. This way I won't have to type https:// for example.com specifically, as it would redirect automatically. 2. Yes, Jupyterhub is running within a k8s pod using a service of type LoadBalancer. 3. I've created a Dockerfile and installed Jupyterhub and nginx (apt-get). I've checked inside the pod; both the nginx service and my nginx.conf changes are intact. Internally Jupyterhub is running on port 8000, but I've exposed it on 443. – Incarnadine
@Building any help? – Incarnadine
Could you share the full configuration for your deployment, also for the service, and the Jupyter configuration? – Schmid
@Incarnadine Just added the answer with an example configuration. It is a lot of text and information, with a minimal running example of your desired target, if I got your goal right... – Building

Disclaimer: This is not a production setup by any means; it is aimed mainly at shedding some light on the overall bits and pieces to help you orient yourself. Also, it will be quite a wall of text.

Target: run JupyterHub over HTTPS in a Kubernetes cluster.

Initial consideration: Running both nginx and JupyterHub in one pod is not really in line with the k8s philosophy. Containers should be placed together in the same pod only if they naturally scale together, which is not the case here. Hence the suggestion is to run them separately...

Creating a minimal example for JupyterHub in a k8s cluster.

Step 1: Create namespace for this example

This is rather straightforward and is here mainly as an additional safeguard not to mix things up.

Manifest file: ns-example.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ns-example

Simply run kubectl create -f ns-example.yaml and the namespace is there. From now on, resources can be easily created/deleted this way.

Step 2: Create basic JupyterHub setup

To achieve this, the official public jupyterhub/jupyterhub Docker image is used. No customization whatsoever, just a simple multi-user JupyterHub up and responding, so we can wrap it in a service.

We start off with the service. Nothing fancy, just a handy name and port 8000 exposed to the local cluster. The official documentation recommends that services be created before sts/deploy/pod resources, so we are in line with that.

Manifest file: svc-jupyterhub.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: ns-example
  name: svc-jupyterhub
  labels:
    name: jupyterhub
spec:
  selector:
    name: jupyterhub
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000

Now for the actual deployment of JupyterHub that the above service will expose. Again, nothing fancy; this simply mimics the default docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub as noted in the official JupyterHub repository. This is without any customization, just a basic example...

Manifest file: dep-jupyterhub.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: ns-example
  name: dep-jupyterhub
  labels:
    name: jupyterhub
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: jupyterhub
    spec:
      containers:
      - name: jupyterhub
        image: jupyterhub/jupyterhub
        command: ['jupyterhub']
        ports:
        - containerPort: 8000

Note: in my local test run, the initial image pull from the net took quite a few minutes, but YMMV...

After these resources are created, JupyterHub should be up and running, but visible only inside the local k8s cluster.

Step 3: Creating nginx server

Now we need nginx to expose our JupyterHub and terminate TLS in front of it. There are more ways to skin a cat, but since you shared only a portion of your nginx setup, here are some, again sketchy, parts to get you started.

To create a minimal nginx and to mimic TLS, we need some configuration files.

We start with the nginx.conf file that will hold our nginx configuration. This is a natural candidate for a ConfigMap. Also, note that this is by no means a perfect, complete, or production-ready setup; it is just a quick hacked-together way to get nginx up for the example run. There are repetitions that can and should be optimized, the redirection on port 80 does not work properly since it would send you off to a non-existent domain, the given server domain is imaginary, the wildcard certificate is self-signed, yada, yada, yada... But it illustrates the idea: nginx terminates TLS and sends traffic to the upstream service around JupyterHub.

Manifest file: cm-nginx.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  namespace: ns-example
  name: cm-nginx
data:
  nginx.conf: |     
     # Example nginx configuration file
     #
     # Commented out parts are left for pointers

     upstream jupyterhub {
        server svc-jupyterhub:8000 fail_timeout=0;
     }

     # jupyterhub.my-domain.com https request sent to upstream jupyterhub proxy
     server {
        listen 443 ssl;
        server_name jupyterhub.my-domain.com;

        ssl_certificate      /etc/nginx/ssl/wildcard.my-domain.com.crt;
        ssl_certificate_key  /etc/nginx/ssl/wildcard.my-domain.com.key;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # redirection from http to https for jupyterhub.my-domain.com
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 80;
        server_name jupyterhub.my-domain.com;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com$request_uri permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # if none of named servers is matched on http...
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 80 default_server;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # if none of named server is matched on https...
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 443 default_server;

        ssl_certificate      /etc/nginx/ssl/wildcard.my-domain.com.crt;
        ssl_certificate_key  /etc/nginx/ssl/wildcard.my-domain.com.key;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

Now we need those certificates for the example to function...

Granted, certificates (especially private keys) are perfect candidates for Secret k8s resources, but this is a self-signed certificate (generated on the fly just for this post) for a non-existent example domain... Next, I'd like to illustrate a ConfigMap with two files here as well, and finally, perhaps most importantly, I'm too lazy to type two more commands to get everything into base64 for the example's sake. So here it goes as a ConfigMap again... (yes, it should be a Secret instead, and yes, a REAL certificate/key should not be public, but pssst, don't tell anyone)...
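For reference, a throwaway key/certificate pair like the one pasted below can be generated with a single openssl command; the domain and subject fields here are placeholders matching this example, not anything real:

```shell
# Generate a self-signed wildcard certificate and key for the imaginary
# example domain (no passphrase, roughly 4-year validity).
openssl req -x509 -nodes -newkey rsa:2048 -days 1460 \
  -keyout wildcard.my-domain.com.key \
  -out wildcard.my-domain.com.crt \
  -subj "/C=US/ST=NY/O=Example/CN=*.my-domain.com"

# In a real cluster you would load it as a TLS Secret, not a ConfigMap, e.g.:
# kubectl create secret tls wildcard-my-domain-com \
#   --cert=wildcard.my-domain.com.crt \
#   --key=wildcard.my-domain.com.key -n ns-example
```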

Manifest file: cm-wildcard-certificate-my-domain-com.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  namespace: ns-example
  name: cm-wildcard-certificate-my-domain-com
data:
  wildcard.my-domain.com.key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCtU3Yk+tKSnPFC
    l+0Iutma0xI79MiWEf8Z2vacyfgMUNvthqFxTfTIeeySzzFh1KVx8pYJbfL1Gkxx
    iDfYZbKwQxhlV363bx8J+j2YnIIQ4uZGQ0MlxMlb65e0JfLayLOIffo7vSPqqBDa
    6MY4qjqVuiJ7zW9/X9h+38Y76fHyEzde03cHihKnkW0smNKZcwYBLz5oa1D39zv5
    WTqQrq+2GXEGfHvArDc06azbAm3o55iRmFPhIWEJcX6oCs0nd5jLIpycy43ayIKv
    HvjEmChDnsrQDkMImFk0nDsMn0Leu0DAsyPopm3TIGqoPwZY4Sk+zn7ttjU/6VUI
    pndJDVd5AgMBAAECggEAH6mTd4XqWaYZ3JRsVJ/tiH7uYc2Bpwh6lXqOem3axkUv
    J+DkNRKMmOLM+LSozLpPztUF24seSvAW7tZ3fSx2zAQ1vK2TFGdUQDpabjqI+BS7
    BDLdXVTpg8Ux3VLhXl4zjceVorwWh5NUIOlM7KUMNrXd/se0iowzvFmcmO1PqWzU
    O6KI5EKz6LTUpEU/7RSl+wt/Ix4yTRYblkHlzWL1GXmQ50HYFZtC3iFEk4H4yDiQ
    Z4VI+gGSpQGKDBQdR9OIXc3seVPOPnSd5NjDXQU8IR36VWHE8xG6k9/+TeU8r9ue
    zNecjieWbFny4UE+uELXdeaRcmH+M8MTrKDApDj+QQKBgQDZ0WdOZ1O8QqILMwuR
    Up2+oT88A6JZjfUICpDlsXgCaitT4YyBXkBwQyyQiTVspo6+ENHSBS584JdmjRpe
    rqXazlwimY0vdINcm4O1279gmHOGaKffLzik1AKNSQEm52rNhle8xoXWD/cmLjvc
    NYgzpPPFIWwXG0dniCCnbfR8tQKBgQDLtXpuckotb8guCGThFn6nb01Hcsit9OfC
    QG9KXd8fpRV+YKqKF2wx1KeVgMoXMbmT78LRl0wArCQZsh16cqS/abH8S5k2v9jx
    L5q+YYVcXC1U7Oolekoddob8af0qp4FnVDjRU9GiMtv2UQoX4yoX4kHkdWZqqFNr
    q12VlksuNQKBgQCC6odq6lO7zVjT3mRPfhZto0D8czq7FMV3hdI9HAODgAh2rBPl
    FZ8pWlaIsM85dIpK1pUl5BNi3yJgcuKskdAByRI7gYsIQMFLgfUR8vf9uOOGn5R2
    Yk1rVDoMbRqSJXld+ib1wWRjmsjzW8qCunIYiEYz77il0rGCGqF1wHK4GQKBgQCN
    RCTLQua9667efWO31GmwozbsPWV9fUDbLOQApmh9AXaOVWruqJ+XTumIe++pdgpD
    1Rk9T7adIMNILoTSzX4CX8HWPHbbyN8hIuok7GwXSLUHF+SoaM3M8M1bbgTq9459
    oaJlR8MwwCRaBIkDV71xIq6fR+rmPCTdndEgU0F/oQKBgQCWC1K5FySXxaxzomsZ
    eM3Ey6cQ36QnidjuHAEiEcaJ+E/YmG/s9MPbLCRI8tn6KGvOW3zKzrHXOsoeXsMU
    SCmRUpB0J5PqVbbTdj12kggX3x6I7TIkXucopCA3Nparhlnqx7amski2EB/EVE0C
    YWkjEAMUCquUmJeEg2dELIiGOw==
    -----END PRIVATE KEY-----
  wildcard.my-domain.com.crt: |
    -----BEGIN CERTIFICATE-----
    MIIDNjCCAh4CCQCUtoVaGZH/NDANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJV
    UzELMAkGA1UECAwCTlkxCzAJBgNVBAcMAk5ZMQwwCgYDVQQKDANOL0ExDDAKBgNV
    BAsMA04vQTEYMBYGA1UEAwwPKi5teS1kb21haW4uY29tMB4XDTE4MDgyODA5Mzkz
    N1oXDTIyMDUyNDA5MzkzN1owXTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAk5ZMQsw
    CQYDVQQHDAJOWTEMMAoGA1UECgwDTi9BMQwwCgYDVQQLDANOL0ExGDAWBgNVBAMM
    DyoubXktZG9tYWluLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
    AK1TdiT60pKc8UKX7Qi62ZrTEjv0yJYR/xna9pzJ+AxQ2+2GoXFN9Mh57JLPMWHU
    pXHylglt8vUaTHGIN9hlsrBDGGVXfrdvHwn6PZicghDi5kZDQyXEyVvrl7Ql8trI
    s4h9+ju9I+qoENroxjiqOpW6InvNb39f2H7fxjvp8fITN17TdweKEqeRbSyY0plz
    BgEvPmhrUPf3O/lZOpCur7YZcQZ8e8CsNzTprNsCbejnmJGYU+EhYQlxfqgKzSd3
    mMsinJzLjdrIgq8e+MSYKEOeytAOQwiYWTScOwyfQt67QMCzI+imbdMgaqg/Bljh
    KT7Ofu22NT/pVQimd0kNV3kCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAI+G44qo6
    BPTC+bLm+2SAlr6oEC09JZ8Q/0m8Se1MLJnzhIXrWJZIdvEB1TtXPYDChz8TPKTd
    QQCh7xNPZahMkVQWwbsknNCPdaLp0SAHMNs3nfTQjZ3cE/RRITqFkT0LGSjXkhtj
    dTZdzKvcP8YEYnDhNn3ZBK04djEsAoIyordRATFQh1B7/0I3BsUAwItDEwH+Mv5G
    rvSYkoi+yw7/koijxJHDbH0+WXYdcsmbWrMEh6H92Z64TMOFS+N6ZQRsNvzfiSwZ
    KM2yEtU9c74CPKS+UleQLjDufk8epmNHx6+80aHj7R9z3mbw4dL7yKwlbGws2GAW
    TE+Fk0HB+9W7fw==
    -----END CERTIFICATE-----

Now we need service around nginx.

There are more ways to skin a cat, but here, again for simplicity's sake, the easiest is taken: the NodePort approach. You could use an Ingress, an externalIP, or whatnot, but this is an example, so NodePort it is.
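As an aside, since the comments mention the nginx-ingress-controller: if your cluster runs that controller, the HTTP-to-HTTPS redirect from the question can be handled by an Ingress instead of a hand-rolled nginx pod. The sketch below assumes the nginx ingress controller is installed and that a TLS Secret named tls-my-domain-com exists in the namespace (once a tls: section is present, the controller redirects HTTP to HTTPS by default; the annotation just makes it explicit):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ns-example
  name: ing-jupyterhub
  annotations:
    # explicit http -> https redirect (default when tls: is set)
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - jupyterhub.my-domain.com
    secretName: tls-my-domain-com
  rules:
  - host: jupyterhub.my-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-jupyterhub
          servicePort: 8000
```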

Manifest file: svc-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: ns-example
  name: svc-nginx
  labels:
    name: nginx
spec:
  type: NodePort
  selector:
    name: nginx
  ports:
  - protocol: TCP
    name: http-port
    port: 80
    targetPort: 80
  - protocol: TCP
    name: ssl-port
    port: 443
    targetPort: 443

Finally, after everything is created, we can fire up our nginx deployment. Again, nothing fancy, just gluing all the ConfigMaps together with the official nginx image (yes, it is a bad idea to use "latest" or to omit tags for a Docker image as was done here, but, again, this is an example; keep in mind not to get bitten by it in a production deployment...)

Manifest file: dep-nginx.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: ns-example
  name: dep-nginx
  labels:
    name: nginx
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/tls-acme: "true"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /etc/nginx/conf.d
            name: nginx-conf
          - mountPath: /etc/nginx/ssl
            name: wildcard-certificate
      volumes:
      - name: nginx-conf
        configMap:
          name: cm-nginx
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: wildcard-certificate
        configMap:
          name: cm-wildcard-certificate-my-domain-com

Final notes:

  • As mentioned before, these are not meant to be used in production; a lot of tiny details, from resource handling to versioning, can bite you back. This is just an example.
  • The certificates are self-signed, and browsers will complain about this when you navigate to the nginx.
  • Everything was pasted from a tested setup on Docker CE Edge Version 18.06.0-ce-mac69 (26398) with k8s 1.9.3, so it should be more or less error-free.
  • Typing kubectl get cm,deploy,svc,pod -n ns-example -o wide should display all info about the created resources (of which the ports to point your browser at for svc-nginx will be of particular interest).
  • Finally, since everything is encased in yaml manifest files, cleanup is simply a question of orderly deletion of resources (taking care to delete the namespace last).
Building answered 28/8, 2018 at 12:0 Comment(0)
