Helm Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists
I have this error when the previous upgrade failed.

I cannot upgrade without deleting manually all my pods and services.

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists.
Unable to continue with update: existing resource conflict: namespace: ns-xy, name: svc-xy, existing_kind: /v1, Kind=Service, new_kind: /v1, Kind=Service

I tried helm upgrade --force, but with no success.

One solution is to manually delete all the updated services and deployments, but that is tedious and causes a long service interruption.

How can I force the upgrade?

Selfpronouncing answered 21/4, 2020 at 17:10 Comment(0)
The OP doesn't mention which version of Helm is currently being used. So, assuming you are using a version earlier than 3.1.0:

  • Upgrade Helm to 3.2.4 (the current 3.2 version at the time of writing)
  • Label and annotate the resource you want to upgrade (as per #7649):
    KIND=deployment
    NAME=my-app-staging
    RELEASE=staging
    NAMESPACE=default
    kubectl -n $NAMESPACE annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE --overwrite
    kubectl -n $NAMESPACE annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE --overwrite
    kubectl -n $NAMESPACE label $KIND $NAME app.kubernetes.io/managed-by=Helm --overwrite

  • Run your helm upgrade command as before.

This should tell Helm that it is okay to take over the existing resource and begin managing it. The same procedure also works for API version upgrades (such as "apps/v1beta2" changing to "apps/v1") or for onboarding pre-existing resources in a namespace.
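When a release contains many resources, running the annotate/label commands by hand for each one gets tedious. A minimal sketch of scripting the adoption step, assuming a release named staging in namespace default (both are placeholders, as is the helper name adopt_cmds, which is not part of Helm or kubectl); printing the commands first lets you review them before touching the cluster:

```shell
# Hypothetical helper: print the three adoption commands for one resource,
# so they can be reviewed (or piped to sh) before running against a cluster.
adopt_cmds() {
  local res="$1" release="$2" ns="$3"
  echo "kubectl -n $ns annotate $res meta.helm.sh/release-name=$release --overwrite"
  echo "kubectl -n $ns annotate $res meta.helm.sh/release-namespace=$ns --overwrite"
  echo "kubectl -n $ns label $res app.kubernetes.io/managed-by=Helm --overwrite"
}

# Generate the commands for every Deployment and Service in the namespace,
# then pipe to sh once they look right:
#   kubectl get deploy,svc -n default -o name | while read -r r; do
#     adopt_cmds "$r" staging default
#   done | sh
adopt_cmds deployment/my-app-staging staging default
```

This only prints commands; nothing is changed until the output is actually executed.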

Tonguing answered 30/7, 2020 at 19:53 Comment(4)
That helped a lot. NOTE: I had to use --overwrite. E.g. `kubectl annotate --overwrite $KIND` ...Shapiro
Helped me a lot, so "like", but it is also important to state that -n $NAMESPACE needs to be added, since not all deployments run in the default namespace.Washedout
What if you don't want Helm to take over the resource, but want it to install a new resource instead?Torrey
This solution works, but I was wondering if there's a way to do the annotate step as a parameter of the helm upgrade command?Anabelle
  • List the services
kubectl get service

  • Delete them one at a time
kubectl delete service <service-name>

Then run helm upgrade as normal.
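If many services are involved, they can also be deleted in a single command by label selector instead of one at a time. A sketch, assuming the resources carry a shared instance label (the label key/value and namespace below are placeholders; check what is actually set with `kubectl get service -n ns-xy --show-labels` before deleting anything):

```shell
# Hypothetical one-liner: delete every Service in the namespace that carries
# the release's instance label, instead of deleting each by name.
kubectl delete service -n ns-xy -l app.kubernetes.io/instance=staging
```

Note this still causes an interruption until helm upgrade recreates the services.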

Condemnation answered 21/5, 2020 at 17:54 Comment(1)
Unfortunately the best answer I found. That can take long if it involves multiple services, deployments, ...Selfpronouncing
I was having this issue when running more than one ingress-nginx controller with different classes. Adding this parameter with a unique value solved the issue in my helm upgrade/install command. You can also set this property in the Helm chart's values to make it work.

--set controller.ingressClassResource.name=

This answer helped to find the solution: https://github.com/kubernetes/ingress-nginx/issues/6100#issuecomment-925129859
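Put together, the command might look like the sketch below; the release name, namespace, and class name nginx-internal are hypothetical placeholders, and the class name only needs to be unique per controller:

```shell
# Hypothetical example: give a second ingress-nginx controller its own class
# name so its cluster-scoped IngressClass resource does not collide with the
# one created by the first controller's release.
helm upgrade --install ingress-internal ingress-nginx/ingress-nginx \
  --namespace ingress-internal --create-namespace \
  --set controller.ingressClassResource.name=nginx-internal \
  --set controller.ingressClass=nginx-internal
```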

Annulment answered 24/5, 2023 at 7:20 Comment(0)
The issue has already been answered; I'm just adding my experience of why it happened for me.

I had first created a resource manually, for testing purposes.

Later I added that same resource to be installed through my Helm chart.

So, during my Helm installation, Helm complained that the resource already exists. Since I hadn't added the Helm-specific annotations when creating the resource manually, Helm treated it as a resource it doesn't manage. So, do take a look at your resources' annotations.

The solution was either to edit the resource and add Helm's annotations, or simply to delete the resource so that Helm recreates it with the required annotations.

Jews answered 20/10, 2023 at 8:6 Comment(1)
This is basically what the accepted answer describes.....Calcar

© 2022 - 2024 — McMap. All rights reserved.