Microk8s, MetalLB, ingress-nginx - How to route external traffic?

Kubernetes/Ubuntu newbie here!

I'm setting up a k8s cluster using a single Raspberry Pi (hoping to have more in the future). I'm using microk8s v1.18.8 and Ubuntu Server 20.04.1 LTS (GNU/Linux 5.4.0-1018-raspi aarch64).

I'm trying to access one of my k8s services on port 80, but I haven't been able to set it up correctly. I've also set a static IP address for accessing the service, and I'm routing traffic from the router to the service's IP address.

I would like to know what I'm doing wrong, or if there's a better approach for what I'm trying to do!

The steps I'm following:

  1. I've run microk8s enable dns metallb. I've given MetalLB IP addresses that are not handled by the DHCP server (192.168.0.90-192.168.0.99).
  2. I've installed ingress-nginx by running kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml. This creates a NodePort service for the ingress-nginx-controller, which doesn't work with MetalLB. As mentioned here, I edit the spec.type of the service from NodePort to LoadBalancer by running kubectl edit service ingress-nginx-controller -n ingress-nginx (a non-interactive alternative is noted after the YAML below). MetalLB then assigns IP 192.168.0.90 to the service.
  3. Then I apply the following configuration file:
apiVersion: v1
kind: Service
metadata:
  name: wow-ah-api-service
  namespace: develop
spec:
  selector:
    app: wow-ah-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: wow-ah-api
  namespace: develop
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  selector:
    matchLabels:
      app: wow-ah-api
  template:
    metadata:
      namespace: develop
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: wow-ah-api
    spec:
      imagePullSecrets:
        - name: some-secret
      containers:
        - name: wow-ah-api
          # Run this image
          image: some-image
          imagePullPolicy: Always
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    servicePort: 3000
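
For reference, regarding step 2: an equivalent non-interactive way to switch the service type (same service name and namespace as above) would be:

microk8s kubectl patch service ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'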

These are some outputs I'm seeing:

microk8s kubectl get all --all-namespaces

NAMESPACE        NAME                                            READY   STATUS      RESTARTS   AGE
develop          pod/wow-ah-api-6c4bff88f9-2x48v                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-ccw9z                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-rd6lp                 1/1     Running     4          4h21m
ingress-nginx    pod/ingress-nginx-admission-create-mnn8g        0/1     Completed   0          4h27m
ingress-nginx    pod/ingress-nginx-admission-patch-x5r6d         0/1     Completed   1          4h27m
ingress-nginx    pod/ingress-nginx-controller-7896b4fbd4-nglsd   1/1     Running     4          4h27m
kube-system      pod/coredns-588fd544bf-576x5                    1/1     Running     4          4h26m
metallb-system   pod/controller-5f98465b6b-hcj9g                 1/1     Running     4          4h23m
metallb-system   pod/speaker-qc9pc                               1/1     Running     4          4h23m

NAMESPACE       NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
default         service/kubernetes                           ClusterIP      10.152.183.1     <none>         443/TCP                      21h
develop         service/wow-ah-api-service                   ClusterIP      10.152.183.88    <none>         80/TCP                       4h21m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.152.183.216   192.168.0.90   80:32151/TCP,443:30892/TCP   4h27m
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.152.183.41    <none>         443/TCP                      4h27m
kube-system     service/kube-dns                             ClusterIP      10.152.183.10    <none>         53/UDP,53/TCP,9153/TCP       4h26m

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           beta.kubernetes.io/os=linux   4h23m

NAMESPACE        NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
develop          deployment.apps/wow-ah-api                 3/3     3            3           4h21m
ingress-nginx    deployment.apps/ingress-nginx-controller   1/1     1            1           4h27m
kube-system      deployment.apps/coredns                    1/1     1            1           4h26m
metallb-system   deployment.apps/controller                 1/1     1            1           4h23m

NAMESPACE        NAME                                                  DESIRED   CURRENT   READY   AGE
develop          replicaset.apps/wow-ah-api-6c4bff88f9                 3         3         3       4h21m
ingress-nginx    replicaset.apps/ingress-nginx-controller-7896b4fbd4   1         1         1       4h27m
kube-system      replicaset.apps/coredns-588fd544bf                    1         1         1       4h26m
metallb-system   replicaset.apps/controller-5f98465b6b                 1         1         1       4h23m

NAMESPACE       NAME                                       COMPLETIONS   DURATION   AGE
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           27s        4h27m
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           29s        4h27m

microk8s kubectl get ingress --all-namespaces

NAMESPACE   NAME                 CLASS    HOSTS   ADDRESS         PORTS   AGE
develop     wow-ah-api-ingress   <none>   *       192.168.0.236   80      4h23m

I have been thinking it could be related to my iptables configuration, but I'm not sure how to configure them to work with microk8s.

sudo iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
ACCEPT     all  --  10.1.0.0/16          anywhere             /* generated for MicroK8s pods */
ACCEPT     all  --  anywhere             10.1.0.0/16          /* generated for MicroK8s pods */
ACCEPT     all  --  10.1.0.0/16          anywhere            
ACCEPT     all  --  anywhere             10.1.0.0/16         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !localhost/8          localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-SERVICES (3 references)
target     prot opt source               destination 
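
Note that the listing above only shows the filter table; kube-proxy programs the actual service translation (DNAT) rules in the nat table, which can be inspected separately:

sudo iptables -t nat -L KUBE-SERVICES -n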

UPDATE #1

metallb ConfigMap (microk8s kubectl edit ConfigMap/config -n metallb-system)

apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.90-192.168.0.99\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config","namespace":"metallb-system"}}
  creationTimestamp: "2020-09-19T21:18:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-09-19T21:18:45Z"
  name: config
  namespace: metallb-system
  resourceVersion: "133422"
  selfLink: /api/v1/namespaces/metallb-system/configmaps/config
  uid: 774f6a73-b1e1-4e26-ba73-ef71bc2e1060

I'd appreciate any help you could give me!

Iconolatry answered 20/9, 2020 at 1:54 Comment(2)
Could you also provide your MetalLB configuration YAML? How many services do you want to reach (your Ingress has only one)? I assume you configured the firewall? – Incomprehensive
Hi @PjoterS, sorry for answering this late. I've edited my question to include the MetalLB YAML. I only plan to access one service (service/wow-ah-api-service). I didn't configure a firewall. I tried configuring a firewall with UFW before and opened port 80, but when I did that, service/ingress-nginx-controller started giving "Liveness/Readiness probe failed" errors. – Iconolatry

Short answer:

  1. You only need (and probably have) one IP address. You must be able to ping it from outside the MicroK8s machine.
  2. Step 2 of your setup (installing ingress-nginx manually and switching it to a LoadBalancer) is where the error is. Remove that step and use the MicroK8s ingress addon instead (see below).

Long answer by example:

Start from a clean MicroK8s installation, with only one public IP (or local machine IP; for your use case I'll use 192.168.0.90).

How do you test? For example

curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP

from outside the machine.

Run the test. It must fail.

Enable microk8s dns and ingress

microk8s.enable dns ingress
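
To confirm the ingress controller pod came up after enabling the addon (a quick check that works regardless of which namespace your MicroK8s version places it in):

microk8s kubectl get pods --all-namespaces | grep -i ingress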

Run the test again. Does it still fail?

If it fails with the same error, then you need MetalLB:

  • With Internet public IP

    microk8s.enable metallb:$(curl ipinfo.io/ip)-$(curl ipinfo.io/ip)

  • With LAN IP 192.168.0.90

    microk8s.enable metallb:192.168.0.90-192.168.0.90

Run the test again

If the test does NOT return a 503 or 404, you can't continue with the next steps. You probably have a network problem or a firewall filtering the traffic.
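
One quick way to check basic TCP reachability from outside the machine (nc is netcat; substitute your real address for PUBLIC_IP, e.g. 192.168.0.90):

nc -vz PUBLIC_IP 80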

The Ingress layer

Our test request reached the MicroK8s ingress controller. It doesn't know what to do with the request yet, so it returns a 404 error (sometimes a 503).

That's OK. On to the next step!
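
To see the status code explicitly, repeat the test as a HEAD request; at this stage the controller should answer with a 404 or 503:

curl -I -H "Host: blue.nginx.example.com" http://PUBLIC_IP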

I'll use an example (starting at 16:24) from this video: https://youtu.be/A_PjjCM1eLA?t=984

[ Kube 32 ] Set up Traefik Ingress on kubernetes Bare Metal Cluster

Set the kubectl alias:

alias kubectl=microk8s.kubectl

Deploy the apps:

kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-main.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-blue.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-green.yaml

Expose the apps on the internal cluster network (ClusterIP by default):

kubectl expose deploy nginx-deploy-main --port 80
kubectl expose deploy nginx-deploy-blue --port 80
kubectl expose deploy nginx-deploy-green --port 80
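
To confirm the three ClusterIP services were created before adding the Ingress rule (a verification step, not part of the original walkthrough):

kubectl get svc nginx-deploy-main nginx-deploy-blue nginx-deploy-green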

Run the test. It doesn't work... yet.

Ingress rule example: how to route by host name

Configure the hosts nginx.example.com, blue.nginx.example.com, and green.nginx.example.com and distribute requests to the exposed deployments:

kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/ingress-resource-2.yaml
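
For reference, a host-based Ingress of the shape that file creates looks roughly like this (a sketch only, not the exact contents of ingress-resource-2.yaml; the v1beta1 API matches the Kubernetes 1.18 cluster in the question):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
spec:
  rules:
    # requests whose Host header matches a rule are sent to that rule's backend service
    - host: nginx.example.com
      http:
        paths:
          - backend:
              serviceName: nginx-deploy-main
              servicePort: 80
    - host: blue.nginx.example.com
      http:
        paths:
          - backend:
              serviceName: nginx-deploy-blue
              servicePort: 80
    - host: green.nginx.example.com
      http:
        paths:
          - backend:
              serviceName: nginx-deploy-green
              servicePort: 80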

Run this test:

curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP

Now you'll have a response like

<h1>I am <font color=blue>BLUE</font></h1>

You can play with

curl -H "Host: nginx.example.com" http://PUBLIC_IP
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
curl -H "Host: green.nginx.example.com" http://PUBLIC_IP

Conclusion:

  • We only have 1 IP address and multiple hosts.
  • We have 3 different services using the same port.
  • Request distribution is done by the Ingress.
Ake answered 23/10, 2020 at 16:14

I just started with MicroK8s, and it appears to have great promise. After combing through info sites and docs, I was able to implement a bare-metal demonstration with the Traefik ingress controller (using Custom Resource Definitions and IngressRoutes), the Linkerd service mesh, and the MetalLB load balancer. This was done on a VirtualBox guest VM running Ubuntu 20.04. The GitHub repo also includes a way to expose the Traefik ingress controller's external IP (provided by MetalLB) outside the guest VM. See https://github.com/msb1/microk8s-traefik-linkerd-whoami.

I prefer this implementation to the one shown in the YouTube link because it includes a working service mesh and uses Custom Resource Definitions for ingress (which are unique to Traefik and one of the reasons to choose Traefik over other ingress controllers).
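
For anyone unfamiliar with Traefik's CRD approach, a minimal IngressRoute looks roughly like this (the name, host, and backend service are illustrative, not taken from the linked repo):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-route
spec:
  entryPoints:
    - web                                # Traefik's HTTP entry point
  routes:
    - match: Host(`whoami.example.com`)  # route by Host header
      kind: Rule
      services:
        - name: whoami                   # backend Service name
          port: 80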

Hope this helps others. You should be able to build awesome deployments with MicroK8s by following this demo (which is my current focus).

Kapok answered 6/5, 2021 at 16:50
