Kubernetes/Ubuntu newbie here!
I'm setting up a k8s cluster using a single Raspberry Pi (hoping to have more in the future). I'm using microk8s v1.18.8 and Ubuntu Server 20.04.1 LTS (GNU/Linux 5.4.0-1018-raspi aarch64).

I'm trying to access one of my k8s services on port 80, but I haven't been able to set it up correctly. I've also set a static IP address for accessing the service, and I'm routing traffic from the router to the service's IP address.

I would like to know what I'm doing wrong, or if there's a better approach for what I'm trying to do!
The steps I'm following:
- I've run `microk8s enable dns metallb` and given MetalLB a range of IP addresses not handled by the DHCP server (192.168.0.90-192.168.0.99).
- I've installed `ingress-nginx` by running `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml`. This creates a `NodePort` service for the `ingress-nginx-controller`, which doesn't work with MetalLB. As mentioned here, I edit the `spec.type` of the service from `NodePort` to `LoadBalancer` by running `kubectl edit service ingress-nginx-controller -n ingress-nginx`. MetalLB then assigns IP `192.168.0.90` to the service.
- Then I apply the following configuration file:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wow-ah-api-service
  namespace: develop
spec:
  selector:
    app: wow-ah-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: wow-ah-api
  namespace: develop
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  selector:
    matchLabels:
      app: wow-ah-api
  template:
    metadata:
      namespace: develop
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: wow-ah-api
    spec:
      imagePullSecrets:
        - name: some-secret
      containers:
        - name: wow-ah-api
          # Run this image
          image: some-image
          imagePullPolicy: Always
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    servicePort: 3000
```
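One thing I'm unsure about in the Ingress above: should `servicePort` match the Service's exposed `port` (80) rather than the container's `targetPort` (3000)? The variant I'd also try (same names as above) would be:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    # servicePort refers to the Service's own port (80), which
    # the Service then forwards to the pods' targetPort (3000).
    servicePort: 80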
These are some outputs I'm seeing:
`microk8s kubectl get all --all-namespaces`:

```
NAMESPACE        NAME                                            READY   STATUS      RESTARTS   AGE
develop          pod/wow-ah-api-6c4bff88f9-2x48v                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-ccw9z                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-rd6lp                 1/1     Running     4          4h21m
ingress-nginx    pod/ingress-nginx-admission-create-mnn8g        0/1     Completed   0          4h27m
ingress-nginx    pod/ingress-nginx-admission-patch-x5r6d         0/1     Completed   1          4h27m
ingress-nginx    pod/ingress-nginx-controller-7896b4fbd4-nglsd   1/1     Running     4          4h27m
kube-system      pod/coredns-588fd544bf-576x5                    1/1     Running     4          4h26m
metallb-system   pod/controller-5f98465b6b-hcj9g                 1/1     Running     4          4h23m
metallb-system   pod/speaker-qc9pc                               1/1     Running     4          4h23m

NAMESPACE       NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
default         service/kubernetes                           ClusterIP      10.152.183.1     <none>         443/TCP                      21h
develop         service/wow-ah-api-service                   ClusterIP      10.152.183.88    <none>         80/TCP                       4h21m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.152.183.216   192.168.0.90   80:32151/TCP,443:30892/TCP   4h27m
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.152.183.41    <none>         443/TCP                      4h27m
kube-system     service/kube-dns                             ClusterIP      10.152.183.10    <none>         53/UDP,53/TCP,9153/TCP       4h26m

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           beta.kubernetes.io/os=linux   4h23m

NAMESPACE        NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
develop          deployment.apps/wow-ah-api                 3/3     3            3           4h21m
ingress-nginx    deployment.apps/ingress-nginx-controller   1/1     1            1           4h27m
kube-system      deployment.apps/coredns                    1/1     1            1           4h26m
metallb-system   deployment.apps/controller                 1/1     1            1           4h23m

NAMESPACE        NAME                                                  DESIRED   CURRENT   READY   AGE
develop          replicaset.apps/wow-ah-api-6c4bff88f9                 3         3         3       4h21m
ingress-nginx    replicaset.apps/ingress-nginx-controller-7896b4fbd4   1         1         1       4h27m
kube-system      replicaset.apps/coredns-588fd544bf                    1         1         1       4h26m
metallb-system   replicaset.apps/controller-5f98465b6b                 1         1         1       4h23m

NAMESPACE       NAME                                       COMPLETIONS   DURATION   AGE
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           27s        4h27m
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           29s        4h27m
```
`microk8s kubectl get ingress --all-namespaces`:

```
NAMESPACE   NAME                 CLASS    HOSTS   ADDRESS         PORTS   AGE
develop     wow-ah-api-ingress   <none>   *       192.168.0.236   80      4h23m
```
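To check whether traffic reaches the ingress controller at all, I've been probing it with curl from another machine on the LAN — a rough sketch (192.168.0.90 is the MetalLB-assigned IP, 32151 is the HTTP NodePort from the output above, and `<node-ip>` stands for the Pi's LAN address):

```
# Hit the MetalLB-assigned LoadBalancer IP directly on port 80
curl -v http://192.168.0.90/

# Bypass MetalLB and hit the controller's NodePort on the node itself
curl -v http://<node-ip>:32151/
```

If the NodePort responds but the MetalLB IP doesn't, the problem would be in the layer-2 announcement rather than in the ingress itself.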
I have been thinking it could be related to my iptables configuration, but I'm not sure how to configure them to work with microk8s.
`sudo iptables -L`:

```
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
ACCEPT all -- 10.1.0.0/16 anywhere /* generated for MicroK8s pods */
ACCEPT all -- anywhere 10.1.0.0/16 /* generated for MicroK8s pods */
ACCEPT all -- 10.1.0.0/16 anywhere
ACCEPT all -- anywhere 10.1.0.0/16

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination

Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination

Chain KUBE-SERVICES (3 references)
target prot opt source destination
```
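If I understand correctly, `iptables -L` only lists the filter table, while kube-proxy programs the Service DNAT rules into the nat table, so the output above doesn't actually show how service traffic is rewritten. To inspect those rules:

```
# Service-related DNAT rules live in the nat table, not the filter table
sudo iptables -t nat -L KUBE-SERVICES
sudo iptables -t nat -L KUBE-NODEPORTS
```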
UPDATE #1
MetalLB ConfigMap (`microk8s kubectl edit ConfigMap/config -n metallb-system`):
```yaml
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.90-192.168.0.99\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config","namespace":"metallb-system"}}
  creationTimestamp: "2020-09-19T21:18:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-09-19T21:18:45Z"
  name: config
  namespace: metallb-system
  resourceVersion: "133422"
  selfLink: /api/v1/namespaces/metallb-system/configmaps/config
  uid: 774f6a73-b1e1-4e26-ba73-ef71bc2e1060
```
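For reference, stripped of the fields the API server adds (`managedFields`, `resourceVersion`, and so on), the pool definition I applied reduces to this minimal manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
```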
I'd appreciate any help you could give me!
I didn't configure a firewall. I did try setting up a firewall with UFW earlier and opened port 80, but when I did that, `service/ingress-nginx-controller` started giving `Liveness/Readiness probe failed` errors. – Iconolatry