I have a cluster with numerous services running as pods, and I want to pull their logs with fluentd. All of the services show logs when I run kubectl logs <service-pod>. However, some logs don't show up in these folders:
- /var/log
- /var/log/containers
- /var/log/pods
although log files for the other containers are there. The containers that ARE there were created by a CronJob or installed via a Helm chart, like a MongoDB installation.
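For reference, this is roughly how I check those folders on the node (the service name is just a placeholder):

# on the node where the pod is scheduled
ls -l /var/log/containers/ | grep my-service
# each file there should be a symlink into /var/log/pods, which in turn
# points at the runtime's log file (e.g. /var/lib/docker/containers/<id>/<id>-json.log with Docker)
readlink -f /var/log/containers/my-service-*.log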
The containers that aren't logging are created by me with a Deployment file like so:
kind: Deployment
metadata:
  namespace: {{ .Values.global.namespace | quote }}
  name: {{ .Values.serviceName }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.serviceName }}
  template:
    metadata:
      labels:
        app: {{ .Values.serviceName }}
      annotations:
        releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC" | quote }}
    spec:
      containers:
        - name: {{ .Values.serviceName }}
          # local: use skaffold, dev: use passed tag, test: use released version
          image: {{ .Values.image }}
          {{- if (eq .Values.global.env "dev") }}:{{ .Values.imageConfig.tag }}{{ end }}
          imagePullPolicy: {{ .Values.global.imagePullPolicy }}
          envFrom:
            - configMapRef:
                name: {{ .Values.serviceName }}-config
          {{- if .Values.resources }}
          resources:
            {{- if .Values.resources.requests }}
            requests:
              memory: {{ .Values.resources.requests.memory }}
              cpu: {{ .Values.resources.requests.cpu }}
            {{- end }}
            {{- if .Values.resources.limits }}
            limits:
              memory: {{ .Values.resources.limits.memory }}
              cpu: {{ .Values.resources.limits.cpu }}
            {{- end }}
          {{- end }}
      imagePullSecrets:
        - name: {{ .Values.global.imagePullSecret }}
      restartPolicy: {{ .Values.global.restartPolicy }}
{{- end }}
and a Dockerfile CMD like so:
CMD ["node", "./bin/www"]
One assumption might be that the CMD doesn't write to STDOUT, but then why would the logs show up in kubectl logs?
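In case it's relevant, this is a rough way I would check where the runtime actually writes a container's logs (pod and namespace names are placeholders, and it assumes the Docker runtime; with containerd/CRI-O the log files live directly under /var/log/pods):

# get the container ID Kubernetes knows about (strip the docker:// prefix)
kubectl get pod my-service-deployment-abc123 -n my-namespace -o jsonpath='{.status.containerStatuses[0].containerID}'
# then, on the node that runs the pod:
docker info --format '{{.LoggingDriver}}'
docker inspect --format '{{.LogPath}}' <container-id>
# an empty LogPath (e.g. with the journald driver) would mean there is no json log file for /var/log/containers to point at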