I've been trying to enable token auth for HTTP REST API server access from a remote client.
I installed my CoreOS/K8S cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
My cluster works fine. This is a TLS installation, so I need to configure any kubectl clients with the client certs to access the cluster.
I then tried to enable token auth by running:
echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
This gives me a token. I then added the token and a default user to a token file on my controller:
$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
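Putting the generation and the file together, this is roughly how the file gets written (my understanding from the docs is that the static token file format is token,user,uid with an optional groups column; the two "default" values are just names I picked):

TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
# columns: token,user,uid
echo "${TOKEN},default,default" > /etc/kubernetes/token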
I then modified /etc/kubernetes/manifests/kube-apiserver.yaml to add:
- --token-auth-file=/etc/kubernetes/token
to the API server's startup parameter list.
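For reference, this is how I'd sanity-check that change on the controller (the mount check is an assumption on my part: the token file has to be readable inside the apiserver container, and I'm not sure whether this manifest mounts /etc/kubernetes itself or only /etc/kubernetes/ssl):

# confirm the flag actually made it into the manifest
grep -- '--token-auth-file' /etc/kubernetes/manifests/kube-apiserver.yaml
# confirm the token file is visible inside the running apiserver container
docker exec container_id ls -l /etc/kubernetes/token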
I then rebooted (I'm not sure of the best way to restart the API server by itself?).
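As an aside, my understanding is that the kubelet watches /etc/kubernetes/manifests, so moving the manifest out and back in should restart just the API server pod; I haven't verified that, which is why I just rebooted:

# restart only the apiserver static pod (assumes the kubelet re-reads the manifests dir)
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 15
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/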
At this point, kubectl from a remote server stops working (it won't connect). I then run docker ps
on the controller and see the API server container. I run docker logs container_id
and get no output. If I look at the other docker containers, I see output like:
E0327 20:05:46.657679 1 reflector.go:188]
pkg/proxy/config/api.go:33: Failed to list *api.Endpoints:
Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0:
dial tcp 127.0.0.1:8080: getsockopt: connection refused
So it appears that my kube-apiserver.yaml change is preventing the API server from starting properly.
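What I was planning to try next to dig further (the kubelet unit name and the grep filter are guesses on my part for this CoreOS install):

# look for an exited or crash-looping apiserver container
docker ps -a | grep apiserver
# check whether the kubelet complains about the static pod manifest
journalctl -u kubelet --no-pager | tail -n 50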
Any suggestions on the proper way to configure the API server for bearer token REST auth?
It is possible to have both TLS and bearer token auth configured at the same time, right?
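For completeness, this is how I plan to authenticate from the remote client once the API server comes back up (the controller address, cluster name, and user/context names below are placeholders, and -k is only to skip CA verification while testing):

# raw REST call with the bearer token
curl -k -H "Authorization: Bearer 3XQ8W6IAourkXOLH2yfpbGFXftbH0vn" https://<controller-ip>/api/v1/namespaces
# or wire the token into kubectl
kubectl config set-credentials token-user --token=3XQ8W6IAourkXOLH2yfpbGFXftbH0vn
kubectl config set-context token-context --cluster=<my-cluster> --user=token-user
kubectl --context=token-context get nodes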
Thanks!