I am fairly new to Helm and Kubernetes, so I'm not sure whether this is a bug or whether I'm doing something wrong. I did look everywhere for an answer before posting, but couldn't find anything that addresses my question.
I have a deployment that uses a persistent volume and an init container. I pass in values so Helm knows whether the init container image or the main application container image has changed.
Possibly relevant, but probably not: I need one deployment for each of a range of web sources (which I call collectors), so the template loops over a list of them. I don't know whether this part matters; if I did, I probably wouldn't be asking here.
When I run
helm upgrade --install my-release helm_chart/ --values values.yaml --set init_image_tag=$INIT_IMAGE_TAG --set image_tag=$IMAGE_TAG
the first time, everything works fine. However, when I run it a second time with INIT_IMAGE_TAG unchanged but IMAGE_TAG changed:
- a) it tries to re-initialise the pod
- b) it fails to re-initialise the pod because it can't mount the volume (roughly how I inspect this is sketched below)
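For reference, the mount failure can be inspected with something like the following (the collector name is illustrative; adjust the label and namespace as needed):
# list the pods for one collector's deployment
kubectl get pods -l app=mycollector-ingest
# show the pod's events, including any volume attach/mount errors
kubectl describe pod <pod-name>
# or look at recent events across the namespace, newest last
kubectl get events --sort-by=.metadata.creationTimestamp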
Expected behaviour:
- a) don't re-initialise the pod, since the init container hasn't changed (the manifest diff sketch below is how I'd check what actually changed between revisions)
- b) mount the volume
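To see what actually changed between the two revisions, the rendered manifests that Helm stored for each revision can be diffed (a rough sketch; the revision numbers are illustrative):
# manifest rendered by the initial install
helm get manifest my-release --revision 1 > rev1.yaml
# manifest rendered by the upgrade with the new IMAGE_TAG
helm get manifest my-release --revision 2 > rev2.yaml
# only the main container's image tag should differ
diff rev1.yaml rev2.yaml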
My values.yaml just contains a list called collectors
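Something along these lines (the collector names are made up):
collectors:
  - collector-a
  - collector-b
  - collector-c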
My template is just:
{{ $env := .Release.Namespace }}
{{ $image_tag := .Values.image_tag }}
{{ $init_image_tag := .Values.init_image_tag }}
{{- range $colname := .Values.collectors }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $colname }}-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $colname }}-ingest
  labels:
    app: {{ $colname }}-ingest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $colname }}-ingest
  template:
    metadata:
      labels:
        app: {{ $colname }}-ingest
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: {{ $colname }}-main
          image: xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/main_image:{{ $image_tag }}
          env:
            - name: COLLECTOR
              value: {{ $colname }}
          volumeMounts:
            - name: storage
              mountPath: /home/my/dir
      initContainers:
        - name: {{ $colname }}-init
          image: xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/init_image:{{ $init_image_tag }}
          volumeMounts:
            - name: storage
              mountPath: /home/my/dir
          env:
            - name: COLLECTOR
              value: {{ $colname }}
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: {{ $colname }}-claim
---
{{ end }}
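For completeness, rendering the chart locally with the same values shows what Helm will apply (same flags as the upgrade command above):
helm template my-release helm_chart/ --values values.yaml --set init_image_tag=$INIT_IMAGE_TAG --set image_tag=$IMAGE_TAG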
Output of helm version:
version.BuildInfo{Version:"v3.2.0-rc.1", GitCommit:"7bffac813db894e06d17bac91d14ea819b5c2310", GitTreeState:"clean", GoVersion:"go1.13.10"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-f459c0", GitCommit:"f459c0672169dd35e77af56c24556530a05e9ab1", GitTreeState:"clean", BuildDate:"2020-03-18T04:24:17Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): EKS
Does anyone know whether this is a bug, or whether I'm misusing Helm/Kubernetes somehow?
Thanks