Kubernetes RBAC unable to upgrade connection: Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)

I'm running Kubernetes 1.6.2 with RBAC enabled. I've created a user kube-admin with the following ClusterRoleBinding:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: k8s-admin
subjects:
- kind: User
  name: kube-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

When I attempt to kubectl exec into a running pod, I get the following error:

kubectl -n kube-system exec -it kubernetes-dashboard-2396447444-1t9jk -- /bin/bash
error: unable to upgrade connection: Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)

My guess is that I'm missing a ClusterRoleBinding ref. Which role am I missing?

Pahl answered 1/6, 2017 at 16:26 Comment(0)

The connection between kubectl and the API server is fine, and the request is being authorized correctly.
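
On reasonably recent kubectl versions you can confirm this with kubectl auth can-i (a quick sanity check, assuming your kubeconfig is using the kube-admin user; the namespace is the one from the question):

kubectl -n kube-system auth can-i create pods --subresource=exec

If this prints yes, the RBAC binding is doing its job and the problem lies further along the request path.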

To satisfy an exec request, the apiserver contacts the kubelet running the pod, and that connection is what is being forbidden.

Your kubelet is configured to authenticate/authorize requests, and the apiserver is not providing authentication information recognized by the kubelet.
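
Concretely, a locked-down kubelet looks something like this (a sketch; the CA path is an assumption that varies by install):

kubelet \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authorization-mode=Webhook

If anonymous auth is enabled instead (the kubelet default), requests the kubelet cannot authenticate are treated as system:anonymous (exactly the user in the error above), and Webhook authorization then denies them the nodes/proxy permission.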

The way the apiserver authenticates to the kubelet is with a client certificate and key, configured with the --kubelet-client-certificate=... --kubelet-client-key=... flags provided to the API server.
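
For example (a sketch; these particular paths are the kubeadm defaults and are only an assumption for other installs):

kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  ...

The certificate must be signed by a CA in the kubelet's --client-ca-file for the kubelet to accept it.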

See https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview for more information.

Monteiro answered 1/6, 2017 at 17:44 Comment(2)
This was it for me as well. I had configured kubeadm (on my master) to use a custom CA, cert, and key, but I used the "self-hosting" option, which didn't take my changes to "10-kubeadm.conf" into account. Meanwhile, my worker nodes had their kubelet configured to specify their own key/cert, which didn't work because the master wasn't using my CA. I updated the workers to use the default cert configuration and things are better; I still need to revisit the custom CA stuff later. – Chigoe
Thanks for the answer/insights. If this is the case, what are the recommended steps to proceed? Would I need to restart the API server with a different client cert/key? Or maybe restart the kubelets with new values? – Wilford

I had this exact same error, but in my case the problem was due to my kops setup. I'm sharing my result here because it may help someone in the future.

I was using kops 1.19 and upgrading my cluster from Kubernetes v1.11 to v1.19. After the upgrade I started seeing this error when trying to run kubectl port-forward, kubectl logs, kubectl exec, helm list, etc. The issue was a combination of a bug in kops and the anonymousAuth configuration either being unset or set to true: with the bug present in kops v1.19.1, the kubelet anonymousAuth setting must be explicitly set to false.
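
A quick way to check a node directly (an illustrative probe; <node-ip> is a placeholder, and 10250 is the kubelet's default secure port):

curl -sk https://<node-ip>:10250/pods

A 401 Unauthorized means anonymous requests are being rejected; a 403 Forbidden for user system:anonymous (or actual pod data) means the kubelet is still accepting anonymous requests.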

To Fix

Edit the cluster

$ kops edit cluster

Add the config under spec.kubelet.anonymousAuth, i.e.

spec:
  kubelet:
    anonymousAuth: false

Update the cluster

$ kops update cluster --yes

$ kops rolling-update cluster --yes
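
Then verify the rollout before retrying the failing commands (kops validate cluster is a standard kops command; the exec target is just an example pod):

$ kops validate cluster

$ kubectl -n kube-system exec -it <pod-name> -- /bin/sh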

Related

PR that fixes the immediate issue

PR that fixes a related issue

Related kops docs

Wilford answered 7/4, 2021 at 19:41 Comment(0)
