Kubernetes RBAC - forbidden attempt to grant extra privileges

I'm using Kubernetes v1.8.14 on a custom-built CoreOS cluster:

$ kubectl version --short 
Client Version: v1.10.5
Server Version: v1.8.14+coreos.0

When trying to create the following ClusterRole:

$ cat ClusterRole.yml 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch

I get the following error:

$ kubectl create -f ClusterRole.yml 
Error from server (Forbidden): error when creating "ClusterRole.yml": clusterroles.rbac.authorization.k8s.io "system:coredns" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]}] user=&{cluster-admin  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

As far as I can tell I'm connecting as cluster-admin, and therefore should have sufficient permissions for what I'm trying to achieve. Below is the relevant cluster-admin configuration:

$ cat ~/.kube/config
apiVersion: v1
kind: Config
current-context: dev
preferences:
  colors: true

clusters:
- cluster:
    certificate-authority: cluster-ca.pem
    server: https://k8s.loc:4430
  name: dev

contexts:
- context:
    cluster: dev
    namespace: kube-system
    user: cluster-admin
  name: dev

users:
- name: cluster-admin
  user:
    client-certificate: cluster.pem
    client-key: cluster-key.pem


$ kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-07-30T14:44:44Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "1164791"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
  uid: 196ffecc-9407-11e8-bd67-525400ac0b7d
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'


$ kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-07-30T14:44:45Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "1164832"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 19e516a6-9407-11e8-bd67-525400ac0b7d
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters


$ kubectl get serviceaccount cluster-admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-07-30T13:32:13Z
  name: cluster-admin
  namespace: kube-system
  resourceVersion: "1158783"
  selfLink: /api/v1/namespaces/kube-system/serviceaccounts/cluster-admin
  uid: f809e079-93fc-11e8-8b85-525400546bcd
secrets:
- name: cluster-admin-token-t7s4c

I understand this is an RBAC problem, but I have no idea how to debug it further.

Edit-1.

I tried the suggested approach; no joy unfortunately...

$ kubectl get clusterrolebinding cluster-admin-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-07-31T09:21:34Z
  name: cluster-admin-binding
  resourceVersion: "1252260"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding
  uid: 1e1c0647-94a3-11e8-9f9b-525400ac0b7d
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cluster-admin
  namespace: default


$ kubectl describe secret $(kubectl get secret | awk '/cluster-admin/{print $1}')
Name:         cluster-admin-token-t7s4c
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=cluster-admin
              kubernetes.io/service-account.uid=f809e079-93fc-11e8-8b85-525400546bcd

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1785 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVyLWFkbWluLXRva2VuLXQ3czRjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmODA5ZTA3OS05M2ZjLTExZTgtOGI4NS01MjU0MDA1NDZiY2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3Rlci1hZG1pbiJ9.rC1x9Or8GArkhC3P0s-l_Pc0e6TEUwfbJtXAN2w-cOaRUCNCo6r4WxXKu32ngOg86TXqCho2wBopXtbJ2CparIb7FWDXzri6O6LPFzHWNzZo3b-TON2yxHMWECGjpbbqjDgkPKDEldkdxJehDBJM_GFAaUdNyYpFFsP1_t3vVIsf2DpCjeMlOBSprYRcEKmDiE6ehF4RSn1JqB7TVpvTZ_WAL4CRZoTJtZDVoF75AtKIADtVXTxVv_ewznDCKUWDupg5Jk44QSMJ0YiG30QYYM699L5iFLirzD5pj0EEPAoMeOqSjdp7KvDzIM2tBiu8YYl6Fj7pG_53WjZrvlSk5pgPLS-jPKOkixFM9FfB2eeuP0eWwLO5wvU5s--a2ekkEhaqHTXgigeedudDA_5JVIJTS0m6V9gcbE4_kYRpU7_QD_0TR68C5yxUL83KfOzj6A_S6idOZ-p7Ni6ffE_KlGqqcgUUR2MTakJgimjn0gYHNaIqmHIu4YhrT-jffP0-5ZClbI5srj-aB4YqGtCH9w5_KBYD4S2y6Rjv4kO00nZyvi0jAHlZ6el63TQPWYkjyPL2moF_P8xcPeoDrF6o8bXDzFqlXLqda2Nqyo8LMhLxjpe_wFeGuwzIUxwwtH1RUR6BISRUf86041aa2PeJMqjTfaU0u_SvO-yHMGxZt3o

Then I amended ~/.kube/config:

$ cat ~/.kube/config
apiVersion: v1
kind: Config
current-context: dev
preferences:
  colors: true

clusters:
- cluster:
    certificate-authority: cluster-ca.pem
    server: https://k8s.loc:4430
  name: dev

contexts:
- context:
    cluster: dev
    namespace: kube-system
    user: cluster-admin-2
  name: dev

users:
- name: cluster-admin
  user:
    client-certificate: cluster.pem
    client-key: cluster-key.pem
- name: cluster-admin-2
  user:
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVyLWFkbWluLXRva2VuLXQ3czRjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmODA5ZTA3OS05M2ZjLTExZTgtOGI4NS01MjU0MDA1NDZiY2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3Rlci1hZG1pbiJ9.rC1x9Or8GArkhC3P0s-l_Pc0e6TEUwfbJtXAN2w-cOaRUCNCo6r4WxXKu32ngOg86TXqCho2wBopXtbJ2CparIb7FWDXzri6O6LPFzHWNzZo3b-TON2yxHMWECGjpbbqjDgkPKDEldkdxJehDBJM_GFAaUdNyYpFFsP1_t3vVIsf2DpCjeMlOBSprYRcEKmDiE6ehF4RSn1JqB7TVpvTZ_WAL4CRZoTJtZDVoF75AtKIADtVXTxVv_ewznDCKUWDupg5Jk44QSMJ0YiG30QYYM699L5iFLirzD5pj0EEPAoMeOqSjdp7KvDzIM2tBiu8YYl6Fj7pG_53WjZrvlSk5pgPLS-jPKOkixFM9FfB2eeuP0eWwLO5wvU5s--a2ekkEhaqHTXgigeedudDA_5JVIJTS0m6V9gcbE4_kYRpU7_QD_0TR68C5yxUL83KfOzj6A_S6idOZ-p7Ni6ffE_KlGqqcgUUR2MTakJgimjn0gYHNaIqmHIu4YhrT-jffP0-5ZClbI5srj-aB4YqGtCH9w5_KBYD4S2y6Rjv4kO00nZyvi0jAHlZ6el63TQPWYkjyPL2moF_P8xcPeoDrF6o8bXDzFqlXLqda2Nqyo8LMhLxjpe_wFeGuwzIUxwwtH1RUR6BISRUf86041aa2PeJMqjTfaU0u_SvO-yHMGxZt3o

And then I tried to apply the same ClusterRole, which failed with the same kind of error (note that the user is now the service account):

$ kubectl apply -f ClusterRole.yml 
Error from server (Forbidden): error when creating "ClusterRole.yml": clusterroles.rbac.authorization.k8s.io "system:coredns" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]}] user=&{system:serviceaccount:kube-system:cluster-admin f809e079-93fc-11e8-8b85-525400546bcd [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

Below are the flags I use to start the apiserver:

  containers:
    - name: kube-apiserver
      image: quay.io/coreos/hyperkube:${K8S_VER}
      command:
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --etcd-servers=${ETCD_ENDPOINTS}
        - --allow-privileged=true
        - --service-cluster-ip-range=${SERVICE_IP_RANGE}
        - --secure-port=443
        - --advertise-address=${ADVERTISE_IP}
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --runtime-config=extensions/v1beta1/networkpolicies=true
        - --anonymous-auth=false
        - --authorization-mode=AlwaysAllow,RBAC,Node

And here are the scripts I use to generate my TLS certs:

root ca:

openssl genrsa -out ca-key.pem 4096
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

apiserver:

cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = ${MASTER_LB_DNS}
IP.1 = ${K8S_SERVICE_IP}
IP.2 = ${MASTER_HOST}
EOF

openssl genrsa -out apiserver-key.pem 4096
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf

cluster-admin:

openssl genrsa -out cluster-admin-key.pem 4096
openssl req -new -key cluster-admin-key.pem -out cluster-admin.csr -subj "/CN=cluster-admin"
openssl x509 -req -in cluster-admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cluster-admin.pem -days 3650

I hope this gives you more insight into what's wrong with my system.

Edit-2.

I noticed a slight discrepancy between my system configuration and what @MarcinRomaszewicz suggested: the namespace of the cluster-admin ServiceAccount. In my case it lives in kube-system rather than default, so I recreated the binding accordingly:

$ kubectl delete clusterrolebinding cluster-admin-binding 
clusterrolebinding.rbac.authorization.k8s.io "cluster-admin-binding" deleted

$ kubectl create clusterrolebinding cluster-admin-binding \
 --clusterrole=cluster-admin --serviceaccount=kube-system:cluster-admin
clusterrolebinding.rbac.authorization.k8s.io "cluster-admin-binding" created

$ kubectl apply -f ClusterRole.yml 
clusterrole.rbac.authorization.k8s.io "system:coredns" created

However, it still doesn't work with my certificates...

Edit-3.

As suggested in the comments, in order for the apiserver to recognize the user as a cluster admin, the Subject line in that user's certificate must contain both of the following: Subject: CN = cluster-admin, O = system:masters. One way to generate such a certificate is as follows:

openssl genrsa -out cluster-admin-key.pem 4096
openssl req -new -key cluster-admin-key.pem -out cluster-admin.csr -subj "/CN=cluster-admin/O=system:masters"
openssl x509 -req -in cluster-admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cluster-admin.pem -days 3650
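
To double-check that the resulting certificate carries the group claim, I can inspect its subject (the exact formatting of the output differs between openssl versions):

$ openssl x509 -in cluster-admin.pem -noout -subject
# should list both CN = cluster-admin and O = system:masters
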
Dorotea answered 30/7, 2018 at 16:29

There isn't enough information here to answer your question.

It sounds like you are running into privilege escalation prevention: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping

This would mean you aren't actually running as cluster-admin. Check your kubectl config; you might be running as "admin" constrained to a particular namespace, for example.
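
For example, these standard kubectl config subcommands show which context and user kubectl is actually pointed at:

# the context currently in use
$ kubectl config current-context

# only the cluster/user/namespace details relevant to that context
$ kubectl config view --minify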

(edit based on comment below)

Your identity to Kubernetes is established by the contents of your cluster.pem certificate, not by the user name in your kubeconfig; that name is only meaningful inside the kubeconfig file. Your actual user is determined by the certificate.
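
One quick way to see what that certificate actually asserts is to dump its subject; for certificate users, the CN becomes the user name and any O (Organization) entries become group memberships. Assuming cluster.pem is the client certificate referenced in your kubeconfig:

$ openssl x509 -in cluster.pem -noout -subject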

I see that you have a service account named cluster-admin, but it is not a member of "system:masters"; group membership is a property of the authentication system that authenticates the user. You need to create an explicit ClusterRoleBinding to bind your cluster-admin service account to the cluster-admin ClusterRole:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:cluster-admin

You should now see the ClusterRole bound to your service account:

$ kubectl get clusterrolebinding cluster-admin-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-07-30T22:02:33Z
  name: cluster-admin-binding
  resourceVersion: "71152"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding
  uid: 42a2862c-9444-11e8-8b71-080027de17da
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cluster-admin
  namespace: default

Note at the bottom that the binding's subject is a ServiceAccount, not a group.

Your service account has an access token; use that to authenticate instead of your certificate. I made myself a cluster-admin service account, and this is how I get its token:

$ kubectl describe secret $(kubectl get secret | grep cluster-admin | awk '{print $1}')
Name:         cluster-admin-token-96vdz
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=cluster-admin
              kubernetes.io/service-account.uid=f872f08b-9442-11e8-8b71-080027de17da

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tdG9rZW4tOTZ2ZHoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2x1c3Rlci1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY4NzJmMDhiLTk0NDItMTFlOC04YjcxLTA4MDAyN2RlMTdkYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNsdXN0ZXItYWRtaW4ifQ.<signature snipped>
ca.crt:     1066 bytes
namespace:  7 bytes

Update your kubeconfig to authenticate with that token instead of the certificate you are currently using, and you should be authenticated as the cluster-admin service account.
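
On my cluster that looks roughly like this; adjust the secret name and the user entry name (cluster-admin-sa is just a label I made up) for your setup:

# the token is stored base64-encoded inside the secret object
$ TOKEN=$(kubectl get secret cluster-admin-token-96vdz -o jsonpath='{.data.token}' | base64 --decode)

# add a token-based user and point the current context at it
$ kubectl config set-credentials cluster-admin-sa --token="$TOKEN"
$ kubectl config set-context $(kubectl config current-context) --user=cluster-admin-sa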

(edit 2) It turned out that the certificate being used to authenticate to Kubernetes did not carry the group claim Kubernetes expected. Kubernetes relies on authentication modules to authenticate users, in this case client certificates, and it expected the certificate to place the user in the "system:masters" group by setting the Organization field to "system:masters".

There are many moving pieces here. The problem had nothing to do with service accounts or roles, but rather with user authentication, which is fairly opaque.

Rowena answered 30/7, 2018 at 17:56
Comments:
I agree, it is most likely a privilege escalation prevention problem, but like I mentioned, I don't know how to debug it any further. To the best of my knowledge everything on my system is configured appropriately. I added ~/.kube/config to the question; please let me know if you need more information. – Azal
I think you're getting user accounts and service accounts confused. I updated my explanation; hopefully it helps. – Rowena
Your certificate needs to put you in the system:masters group. My x509 cert that I use to connect to my local cluster contains "Subject: O=system:masters, CN=minikube-user". Try adding those fields to your cert. – Rowena
