kube-apiserver not authenticating correctly in multi master cluster

I am attempting to create an HA Kubernetes cluster in Azure using kubeadm, as documented here: https://kubernetes.io/docs/setup/independent/high-availability/

Everything works when using only 1 master node, but when I change to 3 master nodes, kube-dns keeps crashing with apiserver issues.

Running kubectl get nodes shows that the 3 master nodes are Ready:

NAME           STATUS    ROLES     AGE       VERSION
k8s-master-0   Ready     master    3h        v1.9.3
k8s-master-1   Ready     master    3h        v1.9.3
k8s-master-2   Ready     master    3h        v1.9.3

but the DNS and dashboard pods keep crashing:

NAME                                    READY     STATUS             RESTARTS   AGE
kube-apiserver-k8s-master-0             1/1       Running            0          3h
kube-apiserver-k8s-master-1             1/1       Running            0          2h
kube-apiserver-k8s-master-2             1/1       Running            0          3h
kube-controller-manager-k8s-master-0    1/1       Running            0          3h
kube-controller-manager-k8s-master-1    1/1       Running            0          3h
kube-controller-manager-k8s-master-2    1/1       Running            0          3h
kube-dns-6f4fd4bdf-rmqbf                1/3       CrashLoopBackOff   88         3h
kube-proxy-5phhf                        1/1       Running            0          3h
kube-proxy-h5rk8                        1/1       Running            0          3h
kube-proxy-ld9wg                        1/1       Running            0          3h
kube-proxy-n947r                        1/1       Running            0          3h
kube-scheduler-k8s-master-0             1/1       Running            0          3h
kube-scheduler-k8s-master-1             1/1       Running            0          3h
kube-scheduler-k8s-master-2             1/1       Running            0          3h
kubernetes-dashboard-5bd6f767c7-d8kd7   0/1       CrashLoopBackOff   42         3h

The logs from kubectl -n kube-system logs kube-dns-6f4fd4bdf-rmqbf -c kubedns indicate there is an API server issue:

I0521 14:40:31.303585       1 dns.go:48] version: 1.14.6-3-gc36cb11
I0521 14:40:31.304834       1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0521 14:40:31.304989       1 server.go:112] FLAG: --alsologtostderr="false"
I0521 14:40:31.305115       1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0521 14:40:31.305164       1 server.go:112] FLAG: --config-map=""
I0521 14:40:31.305233       1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0521 14:40:31.305285       1 server.go:112] FLAG: --config-period="10s"
I0521 14:40:31.305332       1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0521 14:40:31.305394       1 server.go:112] FLAG: --dns-port="10053"
I0521 14:40:31.305454       1 server.go:112] FLAG: --domain="cluster.local."
I0521 14:40:31.305531       1 server.go:112] FLAG: --federations=""
I0521 14:40:31.305596       1 server.go:112] FLAG: --healthz-port="8081"
I0521 14:40:31.305656       1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0521 14:40:31.305792       1 server.go:112] FLAG: --kube-master-url=""
I0521 14:40:31.305870       1 server.go:112] FLAG: --kubecfg-file=""
I0521 14:40:31.305960       1 server.go:112] FLAG: --log-backtrace-at=":0"
I0521 14:40:31.306026       1 server.go:112] FLAG: --log-dir=""
I0521 14:40:31.306109       1 server.go:112] FLAG: --log-flush-frequency="5s"
I0521 14:40:31.306160       1 server.go:112] FLAG: --logtostderr="true"
I0521 14:40:31.306216       1 server.go:112] FLAG: --nameservers=""
I0521 14:40:31.306267       1 server.go:112] FLAG: --stderrthreshold="2"
I0521 14:40:31.306324       1 server.go:112] FLAG: --v="2"
I0521 14:40:31.306375       1 server.go:112] FLAG: --version="false"
I0521 14:40:31.306433       1 server.go:112] FLAG: --vmodule=""
I0521 14:40:31.306510       1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0521 14:40:31.306806       1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0521 14:40:31.306926       1 dns.go:146] Starting endpointsController
I0521 14:40:31.306996       1 dns.go:149] Starting serviceController
I0521 14:40:31.307267       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0521 14:40:31.307350       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0521 14:40:31.807301       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:40:32.307629       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0521 14:41:01.307985       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0521 14:41:01.308227       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0521 14:41:01.807271       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:02.307301       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:02.807294       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:03.307321       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:03.807649       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
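
For context, 10.96.0.1 is the in-cluster kubernetes service VIP that fronts the API servers. A rough way to confirm this is a plain connectivity problem rather than something specific to kube-dns is to probe that VIP from a throwaway pod (a sketch; any image that ships curl will do, curlimages/curl is just an example):

kubectl run -it --rm apiserver-probe --image=curlimages/curl --restart=Never --command -- \
  curl -vk https://10.96.0.1:443/healthz
# Any HTTP response (even 401/403) proves the VIP is reachable;
# a connect timeout reproduces the error kube-dns is logging.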

The output from kubectl -n kube-system logs kube-apiserver-k8s-master-0 looks relatively normal, apart from all the TLS handshake errors:

I0521 11:09:53.982465       1 server.go:121] Version: v1.9.7
I0521 11:09:53.982756       1 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
I0521 11:09:55.934055       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.935038       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.938929       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0521 11:09:55.938945       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0521 11:09:55.942042       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.948001       1 master.go:225] Using reconciler: lease
W0521 11:10:01.032046       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0521 11:10:03.333423       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0521 11:10:03.340119       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0521 11:10:04.188602       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi
[restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi
[restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/
I0521 11:10:06.424379       1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:10:10.910296       1 serve.go:96] Serving securely on [::]:6443
I0521 11:10:10.919244       1 crd_finalizer.go:242] Starting CRDFinalizer
I0521 11:10:10.919835       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0521 11:10:10.919940       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0521 11:10:10.920028       1 controller.go:84] Starting OpenAPI AggregationController
I0521 11:10:10.921417       1 available_controller.go:262] Starting AvailableConditionController
I0521 11:10:10.922341       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0521 11:10:10.927021       1 logs.go:41] http: TLS handshake error from 10.240.0.231:49208: EOF
I0521 11:10:10.932960       1 logs.go:41] http: TLS handshake error from 10.240.0.231:49210: EOF
I0521 11:10:10.937813       1 logs.go:41] http: TLS handshake error from 10.240.0.231:49212: EOF
I0521 11:10:10.941682       1 logs.go:41] http: TLS handshake error from 10.240.0.231:49214: EOF
I0521 11:10:10.945178       1 logs.go:41] http: TLS handshake error from 127.0.0.1:56640: EOF
I0521 11:10:10.949275       1 logs.go:41] http: TLS handshake error from 127.0.0.1:56642: EOF
I0521 11:10:10.953068       1 logs.go:41] http: TLS handshake error from 10.240.0.231:49442: EOF
---
I0521 11:10:19.912989       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/admin
I0521 11:10:19.941699       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/edit
I0521 11:10:19.957582       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/view
I0521 11:10:19.968065       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0521 11:10:19.998718       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0521 11:10:20.015536       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0521 11:10:20.032728       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0521 11:10:20.045918       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node
I0521 11:10:20.063670       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0521 11:10:20.114066       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0521 11:10:20.135010       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0521 11:10:20.147462       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0521 11:10:20.159892       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0521 11:10:20.181092       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0521 11:10:20.197645       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0521 11:10:20.219016       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0521 11:10:20.235273       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0521 11:10:20.245893       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0521 11:10:20.257459       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0521 11:10:20.269857       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0521 11:10:20.286785       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 11:10:20.298669       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 11:10:20.310573       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 11:10:20.347321       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 11:10:20.364505       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 11:10:20.365888       1 trace.go:76] Trace[1489234739]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2018-05-21 11:10:15.961686997 +0000 UTC m=+22.097873350) (total time: 4.404137704s):
Trace[1489234739]: [4.000707016s] [4.000623216s] About to store object in database
Trace[1489234739]: [4.404137704s] [403.430688ms] END
E0521 11:10:20.366636       1 client_ca_hook.go:112] configmaps "extension-apiserver-authentication" already exists
I0521 11:10:20.391784       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 11:10:20.404492       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
W0521 11:10:20.405827       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.240.0.231 10.240.0.233]
I0521 11:10:20.423540       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 11:10:20.476466       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 11:10:20.495934       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0521 11:10:20.507318       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 11:10:20.525086       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0521 11:10:20.538631       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 11:10:20.558614       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 11:10:20.586665       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 11:10:20.600567       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 11:10:20.617268       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 11:10:20.628770       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0521 11:10:20.655147       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 11:10:20.672926       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0521 11:10:20.694137       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 11:10:20.718936       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 11:10:20.731868       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 11:10:20.752910       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 11:10:20.767297       1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 11:10:20.788265       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0521 11:10:20.801791       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0521 11:10:20.815924       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0521 11:10:20.828531       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0521 11:10:20.854715       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0521 11:10:20.864554       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0521 11:10:20.875950       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0521 11:10:20.900809       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0521 11:10:20.913751       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0521 11:10:20.924284       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 11:10:20.940075       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 11:10:20.969408       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 11:10:20.980017       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 11:10:21.016306       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 11:10:21.047910       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 11:10:21.058829       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0521 11:10:21.083536       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 11:10:21.100235       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 11:10:21.127927       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0521 11:10:21.146373       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 11:10:21.160099       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0521 11:10:21.184264       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 11:10:21.204867       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 11:10:21.224648       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 11:10:21.742427       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 11:10:21.758948       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 11:10:21.801182       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0521 11:10:21.832962       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 11:10:21.860369       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0521 11:10:21.892241       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 11:10:21.931450       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 11:10:21.963364       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 11:10:21.980748       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 11:10:22.003657       1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 11:10:22.434855       1 controller.go:538] quota admission added evaluator for: { endpoints}
...
I0521 11:12:06.609728       1 logs.go:41] http: TLS handshake error from 168.63.129.16:64981: EOF
I0521 11:12:21.611308       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65027: EOF
I0521 11:12:36.612129       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65095: EOF
I0521 11:12:51.612245       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65141: EOF
I0521 11:13:06.612118       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65177: EOF
I0521 11:13:21.612170       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65235: EOF
I0521 11:13:36.612218       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65305: EOF
I0521 11:13:51.613097       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65354: EOF
I0521 11:14:06.613523       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65392: EOF
I0521 11:14:21.614148       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65445: EOF
I0521 11:14:36.614143       1 logs.go:41] http: TLS handshake error from 168.63.129.16:65520: EOF
I0521 11:14:51.614204       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49193: EOF
I0521 11:15:06.613995       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49229: EOF
I0521 11:15:21.613962       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49284: EOF
I0521 11:15:36.615026       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49368: EOF
I0521 11:15:51.615991       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49413: EOF
I0521 11:16:06.616993       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49454: EOF
I0521 11:16:21.616947       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49510: EOF
I0521 11:16:36.617859       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49586: EOF
I0521 11:16:51.618921       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49644: EOF
I0521 11:17:06.619768       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49696: EOF
I0521 11:17:21.620123       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49752: EOF
I0521 11:17:36.620814       1 logs.go:41] http: TLS handshake error from 168.63.129.16:49821: EOF

The output from a second API server, however, looks a lot more broken:

E0521 11:11:15.035138       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.040764       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.717294       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.721875       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.728534       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.734572       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.036398       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.041735       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.730094       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.736057       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.741505       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.741980       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:17.037722       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:17.042680       1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
Faery answered 15/2, 2018 at 13:51. Comments (2):
How did you copy the certs from master0 to master1 and master2? – Kimberelykimberlee
I have a deployment script that copies the CA public and private key to each server, but I don't copy any other certificates. Are there other certificates that need to be the same on every server? – Faery

I eventually got to the bottom of this: I had not copied the same service account signing keys (sa.key, sa.pub) onto each master node.

These keys are documented here: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.7.md

a private key for signing ServiceAccount Tokens (sa.key) along with its public key (sa.pub)
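
A quick sanity check that these keys really are identical across the masters (a sketch; the paths are the standard kubeadm locations and the commands need to be run on every master) is to compare fingerprints:

# The checksums must match on every master node.
sudo md5sum /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub

# Optionally confirm sa.pub is the public half of sa.key
# (no diff output should normally mean they correspond).
sudo openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null | sudo diff - /etc/kubernetes/pki/sa.pub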

And the step that I had missed is documented here: https://kubernetes.io/docs/setup/independent/high-availability/

Copy the contents of /etc/kubernetes/pki/ca.crt, /etc/kubernetes/pki/ca.key, /etc/kubernetes/pki/sa.key and /etc/kubernetes/pki/sa.pub and create these files manually on master1 and master2
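
For reference, a minimal sketch of that copy step run from k8s-master-0 (the hostnames, root SSH access and an existing /etc/kubernetes/pki directory on the targets are assumptions; adapt it to your own deployment script):

# Copy the CA and service account key pair to the other masters
# before running kubeadm init on them.
for host in k8s-master-1 k8s-master-2; do
  scp /etc/kubernetes/pki/ca.crt \
      /etc/kubernetes/pki/ca.key \
      /etc/kubernetes/pki/sa.key \
      /etc/kubernetes/pki/sa.pub \
      root@"${host}":/etc/kubernetes/pki/
done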

Faery answered 22/5, 2018 at 10:56. Comments (2):
That's why I had asked you yesterday how you copied the CAs: I saw Option 1 and Option 2 in the article and thought you had copied them wrongly, but you said they were copied using a script, so I didn't realize you had copied the wrong files :-). So the solution was in the link you had posted. – Kimberelykimberlee
Man, you are GREAT. I was stuck on this for two days :-| – Mailand
