starting remote script via ssh containing nohup

I want to start a script remotely via ssh like this:

ssh [email protected] -t 'cd my/dir && ./myscript data [email protected]'

The script does various things which work fine until it comes to a line with nohup:

nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &
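
Spelled out over several lines (this is just my reading of it; the behavior should be identical), that is:

    # Run "time ./myprog" immune to hangups, with stdout going to my.log;
    # if myprog succeeds, mail the data files, using my.log as the message body.
    nohup time ./myprog $1 >my.log \
        && mutt -a ${1%.*}/`basename $1` \
                -a ${1%.*}/`basename ${1%.*}`.plt \
                $2 <my.log 2>&1 &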

It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments, using the log as the message body. But when the script reaches this line, ssh outputs:

Connection to remote.org closed.

What is the problem here?

Thanks for any help

Hannahannah answered 6/11, 2010 at 12:58 Comment(6)
No, neither myprog is started nor is mutt sending anything. For testing I ssh'ed onto remote to check what is happening. Also, my.log is empty (it is touch'ed beforehand by the script).Hannahannah
What would ./myprog write to stdout if its arguments were incorrect? What does myerr.log contain when you write ./myprog $1 >my.log 2>myerr.log?Upbraiding
myprog first does some things that don't require the arguments, i.e. limiting the system resources available to itself to 64GB, and reports success; then it checks whether the arguments are valid filenames, tries to open them, reports success, and so on. If any of this fails it outputs an error, first to stdout, then to stderr. But myprog is not the problem here, as everything works fine when I manually log on to remote and run nohup myprog && mutt.Hannahannah
I should add that myscript works as expected when I manually log on to remote and start it. It only fails when I try to launch it via ssh as described above.Hannahannah
Did you try running it without nohup to see what happens? I suspect the script myprog is erroring out, which should be captured in the nohup.out file.Goad
Can you try running your command manually when you open ssh with the -t option? Maybe it collides with nohup.Subsolar

Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
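
You can see the same effect with a much smaller example (hypothetical user name; the host as in the question): a backgrounded job that is not protected by nohup dies as soon as the session ends:

    # The remote shell backgrounds sleep and exits straight away, so ssh returns;
    # the terminal created by -t goes away and sleep is sent SIGHUP, so a
    # subsequent "pgrep sleep" on the remote host should find nothing.
    ssh -t user@remote.org 'sleep 60 &'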

Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:

nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &

so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with file descriptors - for instance, mutt may have other issues with the terminal not being around - or the quoting may need tweaking depending on the parameters - but give that a try for now...)
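
If mutt does turn out to care about the missing terminal, a more defensive variant of the same command (just a sketch, with the same assumptions about $1 and $2) detaches all three standard streams of the whole sh -c as well:

    # stdin from /dev/null, stdout and stderr to nohup.out, so nothing in the
    # pipeline still refers to the pty that disappears when ssh disconnects.
    nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 <my.log" </dev/null >nohup.out 2>&1 &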

Predate answered 12/12, 2010 at 21:42 Comment(0)

This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:

  1. Redirect all I/O on the remote nohup'ed command
  2. Tell your local SSH command to exit as soon as it's done starting the remote process(es).

Quoting the answer I already mentioned, which in turn quotes Wikipedia:

Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:

nohup myprogram > foo.out 2> foo.err < /dev/null &

UPDATE

I've just had success with this pattern:

ssh -f user@host 'sh -c "( (nohup command-to-nohup 2>&1 >output.file </dev/null) & )"'
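
Applied to the question's command, that pattern would look roughly like this (the directory and script name are from the question; the user name, recipient address and log file name are placeholders):

    # -f sends ssh to the background locally once the remote command is started;
    # with all three streams detached, nothing keeps the session open.
    ssh -f user@remote.org 'cd my/dir && sh -c "( (nohup ./myscript data recipient@example.org >myscript.log 2>&1 </dev/null) & )"'
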
Agincourt answered 28/10, 2012 at 19:2 Comment(1)
What does < /dev/null mean? Thanks.Constitutional

Managed to solve this for a use case where I need to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly closing all three streams using the stream-close redirection syntax, as discussed at the following locations:

  1. https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs

  2. https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21

  3. http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD

  4. https://www.gnu.org/software/bash/manual/html_node/Redirections.html

This is done rather than the more widely used but (IMHO) hackier "redirect to/from /dev/null", and results in the deceptively simple:

    nohup script.sh >&- 2>&- <&-&

2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly more clear. ;) Most people might have a space preceding the final "background job" ampersand, but since it is not required (as the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
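
For the original question, the same idea would be something along these lines (again with placeholder user name and recipient address; whether myprog and mutt are happy with closed descriptors is something you would have to test):

    # No -t needed, since nothing interactive runs; all three streams are closed,
    # so the backgrounded script keeps no reference to the ssh session.
    ssh user@remote.org 'cd my/dir && nohup ./myscript data recipient@example.org >&- 2>&- <&-&'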

Quigley answered 21/6, 2018 at 20:38 Comment(0)
