Kustomize - "failed to find unique target for patch ..."

I just started using Kustomize. I have the following YAML files for it:

ls -l ./kustomize/base/
816 Apr 18 21:25 deployment.yaml
110 Apr 18 21:31 kustomization.yaml
310 Apr 18 21:25 service.yaml

where deployment.yaml and service.yaml are files generated with Jib, and they run fine on their own. The content of kustomization.yaml is the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:  
- service.yaml
- deployment.yaml  

And in another directory:

ls -l ./kustomize/qa
133 Apr 18 21:33 kustomization.yaml
95 Apr 18 21:37 update-replicas.yaml

where kustomization.yaml is:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

patchesStrategicMerge:
- update-replicas.yaml

and update-replicas.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2

After running "kustomize build ./kustomize/base", I run

~/kustomize build ./kustomize/qa
Error: no matches for OriginalId ~G_~V_Deployment|~X|my-app; no matches for CurrentId ~G_~V_Deployment|~X|my-app; failed to find unique target for patch ~G_~V_Deployment|my-app

I have had a look at the related files and don't see any typo in the application name.

And here is the deployment.yaml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
        - image: docker.io/[my Docker ID]/my-app
          name: my-app
          resources: {}
          readinessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/readiness
          livenessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/liveness
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]
status: {}

Again, the above file was generated with Jib, with some modifications, and it runs on Kubernetes directly.

How can I resolve this problem?

Colourable answered 19/4, 2020 at 6:49 Comment(3)
It's stating it is not finding the object my-app to be patched. Please post the original deployment.yaml, as it's crucial to check for inconsistencies.Debility
@willrof Thanks very much. Based on your suggestion, I added the deployment.yaml file to my original post.Colourable
Whilst the answers below solve this specific problem, failed to find unique target for patch is a really weak and unhelpful error message. Personally I'd welcome any insights into how to get better diagnostic information from kustomize with a view to solving errors more efficiently.Comport

I got the same issue and fixed it. This issue is related to the Kustomize version installed; check it with kustomize version. From Kustomize v3.0.x and above, we need to mention the namespace in patches too. After adding the namespace to the patch YAML files, the issue got resolved.

In your example, add namespace under metadata in the update-replicas.yaml patch file, as sketched below.
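
A minimal sketch of the adjusted patch, assuming the target Deployment lives in a namespace called my-namespace (substitute your own):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace  # must match the namespace of the Deployment in the base
spec:
  replicas: 2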

For more details regarding Kustomize version-related issues (like "...failed to find unique target for patch..."), see: https://github.com/kubernetes-sigs/kustomize/issues/1351

Zircon answered 5/8, 2021 at 6:29 Comment(1)
This fixed my issue.Impudent

I was able to reproduce your scenario and didn't get any error.

I will post a step-by-step example so you can double-check yours.

  • I'll use a simple nginx server as an example; here is the file structure:
$ tree Kustomize/
Kustomize/
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── qa
    ├── kustomization.yaml
    └── update-replicas.yaml
2 directories, 5 files
  • Base YAMLs:
$ cat Kustomize/base/kustomization.yaml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml
$ cat Kustomize/base/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        ports:
        - containerPort: 80
$ cat Kustomize/base/service.yaml 
kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  • QA YAMLs:
$ cat Kustomize/qa/kustomization.yaml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

patchesStrategicMerge:
- update-replicas.yaml
$ cat Kustomize/qa/update-replicas.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  • Now I'll build base and apply:
$ kustomize build ./Kustomize/base | kubectl apply -f -
service/nginx-svc created
deployment.apps/my-app created

$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-64778f875b-7gsg4   1/1     Running   0          52s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc    NodePort    10.96.114.118   <none>        80:31880/TCP   52s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   1/1     1            1           52s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-64778f875b   1         1         1       52s

Everything got deployed as intended: pod, deployment, service and replicaset, with 1 replica. Now let's deploy the qa update:

$ kustomize build ./Kustomize/qa/ | kubectl apply -f -
service/nginx-svc unchanged
deployment.apps/my-app configured

$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-64778f875b-7gsg4   1/1     Running   0          3m26s
pod/my-app-64778f875b-zlvfm   1/1     Running   0          27s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc    NodePort    10.96.114.118   <none>        80:31880/TCP   3m26s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   2/2     2            2           3m26s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-64778f875b   2         2         2       3m26s
  • This is the expected behavior and the number of replicas was scaled to 2.
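
To verify the replica count directly, a quick sketch (the deployment name comes from the example above):

$ kubectl get deployment my-app -o jsonpath='{.spec.replicas}'
2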

Suggestions:

  • I noticed you added the deployment to the question after it was deployed (through kubectl get deploy <name> -o yaml), but maybe the issue is in the original file and it gets changed somewhat when applied; see the sketch after this list for a way to compare them.
  • Try to reproduce it with the example files I provided to see if you get the same output.
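
To compare the live object with the original file, a rough sketch (resource name and path taken from the question; expect some noise in the diff from server-defaulted fields and status):

$ kubectl get deployment my-app -o yaml > deployed.yaml
$ diff deployed.yaml ./kustomize/base/deployment.yaml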

Let me know your results!

Debility answered 24/4, 2020 at 14:54 Comment(0)

In Kustomize version 3 and above (including the kustomize built into kubectl), we need to mention the namespace as well. Your replica patch should look like the one below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  # In Kustomize version 3 and above we need to mention the namespace as well.
  namespace: your-namespace
spec:
  replicas: 2

Now, this should work.

Terrific answered 3/3, 2023 at 8:33 Comment(0)

The issue I faced was that I accidentally ran kubectl apply -f <directory> on the directory containing kustomization.yaml, in order to create a namespace beforehand.

You need to do

kubectl apply -f <directory>/namespace_manifest.yaml
kubectl apply -k <directory>

instead.
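
For reference, a minimal namespace manifest for that first step might look like this (the file and namespace names here are placeholders):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace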

Pulchia answered 1/8 at 12:40 Comment(0)
