Restarting Play application Docker container results in 'This application is already running' - RUNNING_PID is not deleted
Edit: There is a related issue being discussed on GitHub, but for another mode of deployment (Typesafe Activator UI, not Docker).

I was trying to simulate a system reboot in order to verify the Docker restart policy, which is supposed to re-run containers in the correct order.

I have a Play framework application written in Java.

The Dockerfile looks like this:

FROM ubuntu:14.04
#
#  [Java8, ...]
#
RUN chmod +x /opt/bin/playapp
CMD ["/bin/bash"]

I start it using $ docker run --restart=always -d --name playappcontainer "./opt/bin/playapp".

When I run $ service docker stop && service docker restart and then $ docker attach playappcontainer, the console tells me:

Play server process ID is 7
This application is already running (Or delete /opt/RUNNING_PID file)

Edit: Same result when I follow the recommendation of the Play documentation to change the location of the file to /var/run/play.pid with -Dpidfile.path=/var/run/play.pid.

Play server process ID is 7
This application is already running (Or delete /var/run/play.pid file).

So: why is the file containing the RUNNING_PID not deleted when the Docker daemon stops, gets restarted, and restarts the previously run containers?


When I $ docker inspect playappcontainer, it tells me:

"State": {
    "ExitCode": 255,
    "FinishedAt": "2015-02-05T17:52:39.150013995Z",
    "Paused": false,
    "Pid": 0,
    "Restarting": true,
    "Running": true,
    "StartedAt": "2015-02-05T17:52:38.479446993Z"
},

Although:

The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.

from the Docker reference on $ docker stop

To kill a running Play server, it is enough to send a SIGTERM to the process to properly shutdown the application.

from the Play Framework documentation on stopping a Play application

Compute answered 5/2, 2015 at 18:29 Comment(0)
I sorted out a working workaround based on the answers and my further work on this question. If I start the containers as follows, they'll come back up after an (un)expected stop/restart. The conflicting RUNNING_PID file won't prevent the container from restarting.

$ sudo docker run --restart=on-failure:5 -d \
--name container my_/container:latest \
sh -c "rm -f /var/run/play.pid && ./opt/bin/start \
-Dpidfile.path=/var/run/play.pid"

Every time before running the binary, this deletes the PID file, whose location is pinned to a known path with the -Dpidfile.path option.
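The same cleanup can also be baked into the image instead of the docker run command, so every container started from it gets the fix for free. A sketch extending the Dockerfile from the question (the /opt/bin/playapp path is the one used there):

```dockerfile
FROM ubuntu:14.04
#
#  [Java8, ...]
#
RUN chmod +x /opt/bin/playapp
# Drop any stale PID file before starting, so a restarted container
# never fails with "This application is already running".
ENTRYPOINT ["sh", "-c", "rm -f /var/run/play.pid && /opt/bin/playapp -Dpidfile.path=/var/run/play.pid"]
```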

Compute answered 16/3, 2015 at 13:30 Comment(0)
I've just dockerized a Play! application and was also running into this issue - restarting the host caused the Play! application to fail to start in its container because RUNNING_PID had not been deleted.

It occurred to me that as the Play! application is the only process within its container, always has the same PID, and is taken care of by Docker, the RUNNING_PID file is (to the best of my knowledge) not actually needed.

As such I overrode pidfile.path to /dev/null by placing

javaOptions in Universal ++= Seq(
  "-Dpidfile.path=/dev/null"
)

in my project's build.sbt. And it works - I can reboot the host (and container) and my Play! application starts up fine.

The appeal of this approach, for me, is that it does not require changing the way the image itself is produced by sbt-native-packager, just the way the application runs within it.

This works with sbt-native-packager 1.0.0-RC2 and higher (because that release includes https://github.com/sbt/sbt-native-packager/pull/510).

Protero answered 24/3, 2015 at 22:24 Comment(5)
+1 This solution worked for me. Alternatively after updating the native packager you can add an application.ini file instead of having this in the build script. I went with that option.Nexus
+1 I really don't understand why they can't keep up without breaking Upstart on every new release.Kellyekellyn
Just a check, should setting pidfile.path=/dev/null be enough in reference.conf / application.conf?Patois
I just tested it, and can confirm it works in that no RUNNING_PID file is created at /opt/docker/RUNNING_PID when I run a container that has such a configuration. However, the difference is that setting it in a .conf like this will make it apply in all run/deployment modes (even those where the PID file is required), whereas doing it in build.sbt, qualified by in Universal means it is only applied to sbt-native-packager deployments.Protero
It seems you can also set play.server.pidfile.path=/dev/null in application.confCamisado
I don't know much about Docker, but as far as I have tested, Play does not remove RUNNING_PID on stopping the server. When I deployed my app in prod mode and tried to stop it with Ctrl+D and Ctrl+C, it didn't remove the RUNNING_PID file from the project directory, so I had to delete it manually. From the Play docs:

Normally this (RUNNING_PID) file is placed in the root directory of your play project, however it is advised that you put it somewhere where it will be automatically cleared on restart, such as /var/run:

So - apart from manual deletion - the workaround is to change the path of RUNNING_PID and delete it every time the server starts, via some script.

$ /path/to/bin/<project-name> -Dpidfile.path=/var/run/play.pid

Make sure that the directory exists and that the user that runs the Play application has write permission for it.

Using this file, you can stop your application using the kill command, for example:

$ kill $(cat /var/run/play.pid)

You can also try the Docker command $ sudo docker rm --force redis.

Maybe that could help.

Source1 Source2 Source3

Sousaphone answered 11/2, 2015 at 7:58 Comment(1)
Hello singhakash, thank you for your answer. I already developed a workaround by an external script which kills the container on restart and restarts it. But this is not satisfying for me. Also, as you can see in my question, I already changed the location of the RUNNING_PID file. As I pointed out in my expectations I would like to have a restarting Docker container, not to kill it and then run it again. It is part of a larger infrastructure and other components rely on its presence.Compute
I had the exact same problem and worked around it by deleting the file every time the container runs. To do that, I added the following line to a companion start.bash file that I use to start the Play process from the output of the SBT dist task:

find . -type f -name RUNNING_PID -exec rm -f {} \;

Hope it helps.

Housebroken answered 12/2, 2015 at 23:29 Comment(4)
Hi Julian, thanks for this. I actually do use a start.sh script to kill and run the containers (the application is stateless, so that's not a problem in terms of data). But this is some kind of dirty. Do you know if there is a way to run this line inside the container automatically upon every restart?Compute
Well... actually that's what I intend to do with that line. My applications are stateless too, so every time a container starts, this RUNNING_PID file gets removed and a new one with a different identifier gets created. I know it's not the most elegant thing in the world though. I don't know as of now of any other way of accomplishing the same in a more natural way using docker.Housebroken
But we agree on the fact that this is not a misbehaviour of Docker but of Play, right?Compute
Yes, we definitely do :)Housebroken
I ran into the same problem after a Ctrl+C failed. I resolved the issue by running docker-compose down -v and then, of course, docker-compose up. The -v option indicates that you want to remove the volumes associated with your container. Maybe docker-compose down alone would have sufficed.

Here's a rundown of some down options:

Stop services only

docker-compose stop

Stop and remove containers, networks..

docker-compose down 

Down and remove volumes

docker-compose down --volumes 

Down and remove images

docker-compose down --rmi <all|local>
Greatniece answered 23/7, 2019 at 9:25 Comment(0)
