Is there a way to share secrets across namespaces in Kubernetes?
My use case is: I have the same private registry for all my namespaces and I want to avoid creating the same secret for each.
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Basically, you will have to create the secret for every namespace.
For more details, see this: Kubernetes Documentation / Concepts / Configuration / Secrets
They can only be referenced by pods in that same namespace. But you can just copy a secret from one namespace to another. Here is an example of copying the localdockerreg secret from the default namespace to dev:
kubectl get secret localdockerreg --namespace=default --export -o yaml | kubectl apply --namespace=dev -f -
###UPDATE###
In Kubernetes v1.14 the --export flag is deprecated. So the following command, with the -o yaml flag, will work without a warning in forthcoming versions.
kubectl get secret localdockerreg --namespace=default -oyaml | kubectl apply --namespace=dev -f -
or the following if the source namespace is not necessarily default:
kubectl get secret localdockerreg --namespace=default -oyaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=dev -f -
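As a local sanity check (no cluster needed; the manifest below is made up), you can see exactly what the grep -v filter removes:

```shell
# A made-up sample of what `kubectl get secret localdockerreg -oyaml` might print.
manifest='apiVersion: v1
kind: Secret
metadata:
  name: localdockerreg
  namespace: default
type: kubernetes.io/dockerconfigjson'

# Same filter as the pipeline above: drop the namespace line so that
# `kubectl apply --namespace=dev` can assign the target namespace itself.
filtered=$(printf '%s\n' "$manifest" | grep -v '^\s*namespace:\s')
printf '%s\n' "$filtered"
```

The rest of the manifest passes through untouched, which is why apply can re-create it in the destination namespace.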
(Without the --export flag) I get an error saying "the namespace from the provided option does not match" (kubectl version 1.15). I think you may need to use sed or something in between those two kubectl commands to remove the namespace from the output YAML. – Copp
$ kubectl get secret <SECRET> --namespace <NS-SRC> -oyaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace <NS-DST> -f -
P.S. not tested with other object types, but it should work. P.P.S. don't forget to delete the source if you're moving. – Wideman
The accepted answer is correct: Secrets can only be referenced by pods in that same namespace. So here is a hint if you are looking to automate the "sync", or just copy the secret between namespaces.
To automate sharing or syncing a secret across namespaces, use the ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
kubectl get secret <secret-name> -n <source-namespace> -o yaml \
  | sed s/"namespace: <source-namespace>"/"namespace: <destination-namespace>"/ \
  | kubectl apply -n <destination-namespace> -f -
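To preview what that sed substitution does without touching a cluster, here is the same rewrite applied to a made-up manifest (namespace names are hypothetical):

```shell
# Hypothetical namespaces; the manifest mimics `kubectl get secret ... -o yaml` output.
src_ns=team-a
dst_ns=team-b
manifest="apiVersion: v1
kind: Secret
metadata:
  name: registry-creds
  namespace: ${src_ns}
type: kubernetes.io/dockerconfigjson"

# Same substitution as above: rewrite the namespace field before the result
# would be piped into `kubectl apply -n ${dst_ns}`.
rewritten=$(printf '%s\n' "$manifest" | sed "s/namespace: ${src_ns}/namespace: ${dst_ns}/")
printf '%s\n' "$rewritten"
```

Note that a plain text substitution like this assumes the string "namespace: <source-namespace>" appears only where you expect it.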
If you have jq, you can use @Evans Tucker's solution:
kubectl get secret cure-for-covid-19 -n china -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n rest-of-world -f -
Secrets are namespaced resources, but you can use a Kubernetes extension to replicate them. We use this to propagate credentials or certificates stored in secrets to all namespaces automatically and keep them in sync (modify the source and all copies are updated). See Kubernetes Reflector (https://github.com/EmberStack/kubernetes-reflector).
The extension allows you to automatically copy and keep in sync a secret across namespaces via annotations:
On the source secret add the annotations:
annotations:
reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
This will create a copy of the secret in all namespaces. You can limit the namespaces in which a copy is created using:
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
The extension supports ConfigMaps and cert-manager certificates as well. Disclaimer: I am the author of the Kubernetes Reflector extension.
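For illustration, a complete source Secret carrying those annotations might look like the following (the secret name, data, and the allowed-namespaces list are made up for this sketch, not taken from the Reflector docs):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds              # illustrative name
  namespace: default
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "dev,staging"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319   # placeholder: base64 of {"auths":{}}
```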
Here's an example that uses jq to delete the namespace and other metadata we don't want:
kubectl get secret cure-for-covid-19 -n china -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n rest-of-world -f -
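To see what that del() filter strips without needing a cluster, here is the same filter run against a made-up JSON document shaped like kubectl get secret -o json output:

```shell
# Hypothetical JSON resembling `kubectl get secret -o json` output.
secret_json='{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "cure-for-covid-19",
    "namespace": "china",
    "resourceVersion": "12345",
    "uid": "abc-123",
    "creationTimestamp": "2020-01-01T00:00:00Z"
  },
  "type": "Opaque",
  "data": {"key": "dmFsdWU="}
}'

# Same jq filter as above: strip the fields that pin the object to its
# source namespace and cluster identity before re-applying elsewhere.
cleaned=$(printf '%s' "$secret_json" \
  | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])')
printf '%s\n' "$cleaned"
```

Only name (and any labels/annotations) survive in metadata, which is exactly what apply needs in the destination namespace.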
Note: the --export flag is no longer supported in kubectl, and sed is not the appropriate tool for editing YAML or JSON.
The Secret may also carry ownerReferences, so the new jq string looks like jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid","ownerReferences"])' – Boche
Use RBAC to authorize the serviceaccount to use the secret in the original namespace. But it is not recommended to have a shared secret between namespaces.
I want to help clear up two pieces of misinformation I found in other answers.
1. RBAC cannot share an imagePullSecret across namespaces. Sure, RBAC can allow an application running in a Pod to read a Secret from another namespace using the Kubernetes API or client libraries. (See the minimal working example in the bonus section below.) However, this approach does not work for sharing a Secret to be used in the imagePullSecrets field of a Pod specification. The reasons being:
- Images are pulled by the kubelet, which itself is not running in a Pod and has no associated ServiceAccount.
- imagePullSecrets can only reference Secrets within the same namespace. See the API reference.
2. Copying a Secret to another namespace can be done with pure kubectl. (No jq or sed needed.) The trick is to dry-run application of a JSON patch to the existing Secret to get a new Secret document with the namespace edited:
kubectl patch secret \
-n SOURCE-NAMESPACE SECRET-NAME \
--type=json -p='[{"op": "replace", "path": "/metadata/namespace", "value": "DESTINATION-NAMESPACE"}]' \
-o yaml --dry-run=client |
kubectl apply -f -
I'm not going to explain the example further, because it does not apply to the kind of Secret the OP is talking about. However, I think it will be helpful for those misled by the RBAC solution, as I was.
apiVersion: v1
kind: Namespace
metadata:
name: secret-owner-namespace
---
apiVersion: v1
kind: Namespace
metadata:
name: secret-reader-namespace
---
apiVersion: v1
kind: Secret
metadata:
name: example-secret
namespace: secret-owner-namespace
type: Opaque
data:
# echo -n secret-value | base64
SECRET_KEY: c2VjcmV0LXZhbHVl
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-reader-role
# NOTE: Reader's role is defined inside owner's namespace.
# That is how `resourcesNames` below can be resolved.
namespace: secret-owner-namespace
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["example-secret"]
verbs: ["get", "watch", "list"]
---
# Allow pods in secret-reader-namespace to read the example-secret by:
# binding the default ServiceAccount of the secret-reader-namespace
# to the Role of secret-reader-role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-reader-rolebinding
namespace: secret-owner-namespace
subjects:
- kind: ServiceAccount
name: default
apiGroup: ""
namespace: secret-reader-namespace
roleRef:
kind: Role
name: secret-reader-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: secret-reading-code-configmap
namespace: secret-reader-namespace
data:
read-secret.py: |
from kubernetes import client, config
config.load_incluster_config()
with client.ApiClient() as api_client:
api_instance = client.CoreV1Api(api_client)
name = 'example-secret'
namespace = 'secret-owner-namespace'
api_response = api_instance.read_namespaced_secret(name, namespace)
print(api_response)
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: secret-reader-deployment
namespace: secret-reader-namespace
labels:
app: secret-reader-example
spec:
replicas: 1
selector:
matchLabels:
app: secret-reader-example
template:
metadata:
labels:
app: secret-reader-example
spec:
containers:
- name: secret-reading-container
image: python
command: ["/bin/sh","-c"]
args: ["pip install --no-cache-dir --upgrade kubernetes ; python read-secret.py ; sleep infinity"]
workingDir: /opt/code
volumeMounts:
- name: secret-reading-code-volume
mountPath: "/opt/code"
readOnly: true
volumes:
- name: secret-reading-code-volume
configMap:
name: secret-reading-code-configmap
Another option would be to use kubed, one of many recommended options from the kind folks at Jetstack who gave us cert-manager. Here is what they link to.
They don't recommend kubed particularly, but rather mention it among 3 recommendations. – Cameron
Improving on @NicoKowe's answer, here is a one-liner to copy all secrets from one namespace to another:
$ for i in `kubectl get secrets -n <source-namespace> | awk 'NR>1 {print $1}'`; do kubectl get secret $i -n <source-namespace> -o yaml | sed s/"namespace: <source-namespace>"/"namespace: <target-namespace>"/ | kubectl apply -n <target-namespace> -f - ; done
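To see what the awk part of that loop extracts (no cluster needed), here it is applied to a made-up kubectl get secrets listing; note that a plain '{print $1}' would also emit the header row NAME, so skipping the first line with NR>1 is safer:

```shell
# Fake `kubectl get secrets` output: a header line plus two rows.
listing='NAME                  TYPE                             DATA   AGE
registry-creds        kubernetes.io/dockerconfigjson   1      9d
tls-cert              kubernetes.io/tls                2      3d'

# Extract column 1 (the secret names), skipping the header row.
names=$(printf '%s\n' "$listing" | awk 'NR>1 {print $1}')
printf '%s\n' "$names"
```

Without NR>1, the loop would also run kubectl get secret NAME, which fails with a not-found error.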
Based on @Evans Tucker's answer but uses whitelisting rather than deletion within the jq filter to only keep what we want.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name"}' | kubectl apply -n rest-of-world -f -
Essentially the same thing but preserves labels.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name", "labels"}' | kubectl apply -n rest-of-world -f -
Well, the question is good, but all the solutions are bad!
Secrets contain sensitive data and, as you understand, by design you can't use a secret from another namespace. So I don't recommend using a fancy "cluster scope" operator that will "push" your secret into namespaces "toto-*". That sounds like a bad usage of secrets and of the Kubernetes declarative model.
This is the easiest approach: create a Helm chart that creates the namespace and sets it up, creating the resources you want to share.
I love https://external-secrets.io/, which is a pull, declarative approach. As you can read at https://external-secrets.io/v0.7.2/provider/kubernetes/, you declare an ExternalSecret to pull data from a Secret in another namespace.
external-secrets.io is production ready, battle tested, and supports several providers (Vault, ...).
To share a CA easily, see https://cert-manager.io/docs/projects/trust-manager/. This is a push approach ;-/ but the tool is production ready.
As answered by Innocent Anigbo, you need to have the secret in the same namespace. If you need to support that dynamically, or want to avoid forgetting secret creation, it might be possible to create an initializer for the namespace object: https://kubernetes.io/docs/admin/extensible-admission-controllers/ (I have not done that on my own, so I can't tell for sure).
Solution for copying all secrets:
kubectl delete secret --namespace $TARGET_NAMESPACE --all;
kubectl get secret --namespace $SOURCE_NAMESPACE --output yaml \
| sed "s/namespace: $SOURCE_NAMESPACE/namespace: $TARGET_NAMESPACE/" \
| kubectl apply --namespace $TARGET_NAMESPACE --filename -;
yq is a helpful command-line tool for editing YAML files. I utilized this in conjunction with the other answers to get this:
kubectl get secret <SECRET> -n <SOURCE_NAMESPACE> -o yaml | yq write - 'metadata.namespace' <TARGET_NAMESPACE> | kubectl apply -n <TARGET_NAMESPACE> -f -
For yq v4 the syntax is: kubectl get secret <SECRET> -n <SOURCE_NAMESPACE> -o yaml | yq eval '.metadata.namespace = "<TARGET_NAMESPACE>"' - | kubectl apply -n <TARGET_NAMESPACE> -f - – Alarum
You may also think about using GoDaddy's Kubernetes External Secrets! Here you store your secrets in AWS Secrets Manager (ASM), and GoDaddy's secret controller creates the Kubernetes secrets automatically and keeps ASM and the K8s cluster in sync.
For me the method suggested by @Hansika Weerasena didn't work; I got the following error:
error: the namespace from the provided object "ns_source" does not match the namespace "ns_dest". You must pass '--namespace=ns_source' to perform this operation.
To get around this problem I did the following:
kubectl get secret my-secret -n ns_source -o yaml > my-secret.yaml
This file needs to be edited and the namespace changed to your desired destination namespace. Then simply do:
kubectl apply -f my-secret.yaml -n ns_destination
Export from one k8s cluster:
mkdir <namespace>; cd <namespace>; for i in `kubectl get secrets -n <namespace> | awk 'NR>1 {print $1}'`; do kubectl get secret $i -n <namespace> -o yaml > $i.yaml; done
Import to the second k8s cluster:
cd <namespace>; find . -type f -exec kubectl apply -f '{}' -n <namespace> \;
With helm, I usually define a (group) variable (e.g. $REGISTRY_PASS) in my CD pipeline and add a template file to the helm chart:
apiVersion: v1
data:
.dockerconfigjson: |
{{ .Values.registryPassword }}
kind: Secret
metadata:
name: my-registry
namespace: {{ .Release.Namespace }}
type: kubernetes.io/dockerconfigjson
When deploying the chart, I set the variable registryPassword on the command line like so:
helm install foo/ --values values.yaml \
--set registryPassword="$REGISTRY_PASS" \
--namespace whatever \
--create-namespace
This is fully compatible with local testing and CD.
To get the correctly formatted value for $REGISTRY_PASS, I use kubectl create secret
kubectl create secret docker-registry secret-tiger-docker \
[email protected] \
--docker-username=tiger \
--docker-password=pass1234 \
--docker-server=my-registry.example:5000
to create the initial secret, and then use kubectl get secret to get the base64-encoded string (.dockerconfigjson):
kubectl get secret secret-tiger-docker -o yaml
No matter what namespace the application gets installed to, it will always have access to the local registry, since the secret gets installed before the image gets pulled.
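If you'd rather not round-trip through kubectl at all, the value can also be built locally. This is a sketch under the assumption that the registry, username and password are the example values from above; the "auth" field is just base64("user:password"), and the surrounding JSON mirrors what kubectl create secret docker-registry generates:

```shell
# Example credentials (same as in the kubectl create secret example above).
user=tiger
pass=pass1234
server=my-registry.example:5000

# The "auth" field of .dockerconfigjson is base64("user:password").
auth=$(printf '%s' "${user}:${pass}" | base64)

# Assemble the .dockerconfigjson document itself.
dockerconfig=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$server" "$user" "$pass" "$auth")

# Base64-encode the whole JSON; this is the value to pass as registryPassword.
# tr strips the line wrapping that GNU base64 inserts at 76 columns.
registry_pass=$(printf '%s' "$dockerconfig" | base64 | tr -d '\n')
printf '%s\n' "$registry_pass"
```

Decoding the result should give back the same JSON that kubectl would have stored in the secret.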
Use RBAC to access secrets across namespaces: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
e.g. using a ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: secret-reader
rules:
- apiGroups: [""]
#
# at the HTTP level, the name of the resource for accessing Secret
# objects is "secrets"
resources: ["secrets"]
verbs: ["get", "watch", "list"]
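A ClusterRole by itself grants nothing until it is bound. A sketch of a matching binding (the ServiceAccount name and namespace here are placeholders, not from the original answer):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: ServiceAccount
  name: default              # placeholder ServiceAccount
  namespace: some-app-ns     # placeholder namespace
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

As pointed out in an earlier answer, this only lets code running in Pods read the Secret via the API; it does not make the Secret usable as an imagePullSecret in another namespace.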
kubectl get secret gitlab-registry --namespace=revsys-com --export -o yaml | kubectl apply --namespace=devspectrum-dev -f -