Gunicorn issues on gcloud. Memory faults and restarts thread
I am deploying a Django application to Google Cloud (GKE) using Gunicorn, without Nginx in front.

Running the container locally works fine: the application boots and runs a memory-consuming job on startup in its own thread (building a cache). Approximately 900 MB of memory is in use after the job finishes.

Gunicorn is started with the following Dockerfile CMD:

CMD gunicorn -b 0.0.0.0:8080 app.wsgi:application -k eventlet --workers=1 --threads=4 --timeout 1200 --log-file /gunicorn.log --log-level debug --capture-output --worker-tmp-dir /dev/shm

Now I want to deploy this to gcloud, creating the container with the following manifest:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: default
spec:
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - image: gcr.io/app-numbers/app:latest
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "2Gi"
        name: app
        ports:
        - containerPort: 8080
          protocol: TCP

Giving the container 2 GB of memory.
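One way to double-check the limit the kernel actually enforces (my own sanity check, not from the original post; the pod name is a placeholder, and the path assumes cgroup v1 nodes, which GKE used at the time):

```
# should print 2147483648 (2 Gi) if the limit from the manifest is applied
kubectl exec <app-pod-name> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```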

Looking at the logs, Gunicorn is booting workers:

[2019-09-01 11:37:48 +0200] [17] [INFO] Booting worker with pid: 17

Running free -m inside the container shows the memory slowly being consumed, and dmesg shows:

[497886.626932] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[497886.636597] [1452813]     0 1452813      256        1       4       2        0          -998 pause
[497886.646332] [1452977]     0 1452977      597      175       5       3        0           447 sh
[497886.656064] [1452989]     0 1452989    10195     7426      23       4        0           447 gunicorn
[497886.666376] [1453133]     0 1453133      597      360       5       3        0           447 sh
[497886.676959] [1458304]     0 1458304   543235   520309    1034       6        0           447 gunicorn
[497886.686727] Memory cgroup out of memory: Kill process 1458304 (gunicorn) score 1441 or sacrifice child
[497886.697411] Killed process 1458304 (gunicorn) total-vm:2172940kB, anon-rss:2075432kB, file-rss:5804kB, shmem-rss:0kB
[497886.858875] oom_reaper: reaped process 1458304 (gunicorn), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
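Reading the kill line back against the 2 Gi limit (my own arithmetic, not from the original post): the resident set alone was already at roughly 99% of the cgroup limit, and the virtual size was past it, so the cgroup OOM kill is consistent with the numbers:

```python
limit_bytes = 2 * 1024**3                    # "2Gi" from the manifest
anon_rss_kb, file_rss_kb = 2075432, 5804     # anon-rss / file-rss from the kill line
total_vm_kb = 2172940                        # total-vm from the kill line

rss_bytes = (anon_rss_kb + file_rss_kb) * 1024
print(round(rss_bytes / limit_bytes, 3))     # → 0.992
print(total_vm_kb * 1024 > limit_bytes)      # → True
```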

What could be causing a memory leak on gcloud but not locally?

Primitivism answered 1/9, 2019 at 10:13 Comment(2)
Were you able to resolve this issue? I am facing the same issue. Kindly let me know. - Scammon
Hi, I do not remember exactly, but I think the solution lies in the worker class setting: "-k WORKERCLASS, --worker-class=WORKERCLASS - The type of worker process to run. You’ll definitely want to read the production page for the implications of this parameter. You can set this to $(NAME) where $(NAME) is one of sync, eventlet, gevent, tornado, gthread. sync is the default. See the worker_class documentation for more information." - Primitivism
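For reference, switching the worker class as that comment suggests would look like this (a sketch, not a verified fix; all other flags are kept exactly as in the question):

```
CMD gunicorn -b 0.0.0.0:8080 app.wsgi:application -k gthread --workers=1 --threads=4 --timeout 1200 --log-file /gunicorn.log --log-level debug --capture-output --worker-tmp-dir /dev/shm
```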

© 2022 - 2024 — McMap. All rights reserved.