How to allow access to the Kubernetes API using an egress NetworkPolicy?
An init container running kubectl get pod is used to check the ready status of another pod.

After an egress NetworkPolicy was turned on, the init container could no longer reach the Kubernetes API: Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout. The CNI is Calico.

Several rules were tried, but none of them work (service and master host IPs, different CIDR masks):

...
  egress:
  - to:
    - ipBlock:
        cidr: 10.96.0.1/32
    ports:
    - protocol: TCP
      port: 443
...

or using a namespaceSelector (tried with both the default and kube-system namespaces):

...
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: default
    ports:
    - protocol: TCP
      port: 443
...

It looks like the ipBlock rules simply don't match, presumably because kube-proxy DNATs the service IP 10.96.0.1 to the API server's real endpoint address before the policy is evaluated, and the namespaceSelector rules don't work because the API server is not an ordinary pod.

Can it be configured? Kubernetes is 1.9.5, Calico is 3.1.1.

The problem still exists with GKE 1.13.7-gke.8 and Calico 3.2.7.

Scrappy answered 30/4, 2018 at 14:46 Comment(4)
Did you solve this problem?Starrstarred
Same issue on GKE 1.11.6-gke.3 (using Calico v3.2.4)Cresa
Did you label your default namespace with the label name=default? For me it wasn't obvious that the namespace required labeling - I learned it from TGI Kubernetes 085: Network PoliciesEllery
Hi. Did you manage to solve this? I'm stuck; I tried everything: CIDR, labels.Doolie
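
As one comment above points out, a namespaceSelector matches labels on the Namespace object itself, and the default namespace carries no name label out of the box, so a rule like name: default only matches after labeling the namespace:

kubectl label namespace default name=default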
You need to get the real IP of the master using kubectl get endpoints --namespace default kubernetes and create an egress policy that allows traffic to it.
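
For example (the address and age below are placeholders; substitute whatever your cluster reports for x.x.x.x in the policy that follows):

$ kubectl get endpoints --namespace default kubernetes
NAME         ENDPOINTS          AGE
kubernetes   203.0.113.10:443   42d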

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1 
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32
Hair answered 7/6, 2019 at 12:38 Comment(3)
Is it possible for the master's IP to change? If so, this configuration may break when it does.Mansur
Make sure that you are also using the correct port. The 443 port used inside the pod may have been changed outside the pod to something like 4443. kubectl get endpoints --namespace default kubernetes -o wide lists the IP address and port.Elberfeld
This works, but I'm a bit skeptical: it will potentially break if the IP updates. In that case, for a less strict range, you can use cidr: 10.0.0.0/8 to allow access generally inside the clusterNoyade
Had the same issue when using a CiliumNetworkPolicy with Helm. For anyone having a similar issue, something like this should work:

{{- $kubernetesEndpoint := lookup "v1" "Endpoints" "default" "kubernetes" -}}
{{- $kubernetesAddress := (first $kubernetesEndpoint.subsets).addresses -}}
{{- $kubernetesIP := (first $kubernetesAddress).ip -}}
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  ...
spec:
  ...
  egress:
    - toCIDRSet:
        - cidr: {{ $kubernetesIP }}/32
    ...
Kitchens answered 30/3, 2023 at 10:12 Comment(1)
toEntities: [ kube-apiserver ] – no need to look up the IP range if you can use the built-in functionality. Only works for Cilium.Hissing
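
A minimal sketch of that toEntities approach, assuming a Cilium version that ships the built-in kube-apiserver entity:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-apiserver
spec:
  endpointSelector: {}
  egress:
    - toEntities:
        - kube-apiserver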
You can allow egress traffic to the Kubernetes API endpoints' IPs and ports.

You can get the endpoints by running $ kubectl get endpoints kubernetes -oyaml.

I don't understand why it doesn't work to just allow traffic to the cluster IP of the kubernetes service in the default namespace (which is what the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT env vars contain), but in any case, allowing traffic to the underlying endpoints works.
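
For reference, the object returned by that command has roughly this shape (the IP below is a placeholder; yours will differ):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 203.0.113.10    # placeholder for the real API server endpoint IP
  ports:
  - name: https
    port: 443
    protocol: TCP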

To do this in a Helm chart template, you could do something like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ...
spec:
  podSelector: ...
  policyTypes:
    - Egress
  egress:
    {{- range (lookup "v1" "Endpoints" "default" "kubernetes").subsets }}
    - to:
        {{- range .addresses }}
        - ipBlock:
            cidr: {{ .ip }}/32
        {{- end }}
      ports:
        {{- range .ports }}
        - protocol: {{ .protocol }}
          port: {{ .port }}
        {{- end }}
    {{- end }}
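
Rendered against a cluster whose API server endpoint is, say, 203.0.113.10:443, the loop above would produce:

  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32
      ports:
        - protocol: TCP
          port: 443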
Abhorrence answered 24/4, 2023 at 14:54 Comment(0)
We aren't on GCP, but the same should apply.

We query AWS for the CIDR of our master nodes and use that data as values for the Helm charts that create the NetworkPolicy for k8s API access.

In our case the masters are part of an auto-scaling group, so we need the CIDR. In your case the IP might be enough.
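
A sketch of that lookup with the AWS CLI (the subnet ID and the apiServerCidr values key are placeholders for illustration):

# Query the CIDR block of the subnet holding the master nodes
# (the subnet ID below is a placeholder)
CIDR=$(aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[0].CidrBlock' --output text)

# Pass it to the chart that renders the NetworkPolicy
# (the values key is hypothetical)
helm upgrade --install my-app ./chart --set apiServerCidr="$CIDR"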

Letta answered 17/12, 2019 at 6:54 Comment(0)
