Start script after another one (already running) finishes
73

So I have a process running, and it will take several hours to complete. I would like to start another process right after that one finishes, automatically. Note that I can't add a call to the second script in the first one, nor create another script that runs both sequentially. Is there any way to do this in Linux?

Edit: One option is to poll every x minutes using pgrep to check whether the process has finished; if it has, start the other one. However, I don't like this solution.

PS: Both are bash scripts, if that helps.

Minicam asked 20/9, 2011 at 13:16 Comment(1)
This has probably been asked before, but I can't find it.Rivalee
34

Polling is probably the way to go, but it doesn't have to be horrible.

pid=$(ps -opid= -C your_script_name)
while [ -d /proc/$pid ] ; do
    sleep 1
done && ./your_other_script
Scabby answered 20/9, 2011 at 13:28 Comment(13)
@Tom Zych: Your oneliner relies on the user entering both commands at the same time. This will work after the first command has been independently executed (e.g. by a cron job, or spawned from a noninteractive process).Scabby
Oh. I thought you were complaining about it being a script. Sorry.Rivalee
This is the solution I was thinking of, however using wait seems better wait $PID && second_script. I don't know how wait is actually implemented so it may be the same.Minicam
@skd: wait is better if you can use it, but IIRC it only works for sub-processes, not arbitrary PIDs (which is too bad).Scabby
why do you need to have sleep 1 here, can you use something like echo > /dev/null 2>&1 ? Also,Cappella
@olala: it depends on how many cycles you want to burn. With GNU sleep you can also specify a fraction, e.g. 0.1. How important is reacting quickly vs. efficiency? You decide.Scabby
I disagree. Polling is almost never the way to go. With polling, there will always be discussion about the delay in the waiting loop: one second? A tenth of a second? Your choice will probably be OK for some use case but less than optimal for other use cases. However, UNIX/bash provides the way to do it right: wait. wait suspends your job until another process ends and then your job will be resumed immediately. No arbitrary delay. Subito. No loop necessary. Except if the job you want to wait for is not started in the same shell.Joviality
@Minicam wait in bash is implemented using the system call wait4. You can see it if you strace bash.Joviality
I accepted this answer since this is what I ended up doing. However, I agree that wait is probably always the way to go. This seems to be a popular question still, so if someone comes here looking for an answer just use wait as @Joviality and more suggested.Minicam
This doesn't work if the process has already stopped or never ran. And this doesn't seem to work if I have multiple processes with the same name/script_name.Asper
@SharkIng: if the process may have already terminated you can replace && with ;. If it hasn't run yet there's really no way to be sure unless you can check for some side effect. If you may have multiple instances of the script running you must make some decisions about whether you're waiting for all to finish, for any, etc. This will complicate the script.Scabby
Does while consume any CPU or memory? @SorpigalBello
@Avatar: All software uses CPU and memory. The loop above will not use much of either; it will not spin up one core to 100% if that's what you're worried about. The sleep interval prevents this.Scabby
67

Given the PID of the first process, the loop

while ps -p $PID; do sleep 1; done ; script2

should do the trick. This is a little more robust than relying on pgrep and process names.
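
A minimal end-to-end sketch, where long_running.sh and script2.sh are hypothetical names and pgrep is used just once to look up the PID (a pid file would work equally well):

PID=$(pgrep -o -f long_running.sh)   # -o picks the oldest matching process
while ps -p "$PID" > /dev/null; do
    sleep 1
done
./script2.sh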

Slone answered 20/9, 2011 at 13:20 Comment(1)
I would modify this to not flood the terminal: echo Waiting...; while ps -p $PID > /dev/null; do sleep 1; done; script2Tully
61

Maybe you can press Ctrl+Z first and then enter

fg; echo "first job finished"
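
For example (long_job.sh and script2.sh are hypothetical names), to run the second script only if the first one succeeds:

# in the terminal where long_job.sh is running, press Ctrl+Z, then:
fg && ./script2.sh    # resume the suspended job; start script2.sh only if it exits with status 0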
Hadji answered 23/2, 2016 at 20:39 Comment(6)
A pragmatic (and probably very widely used) approach.Joviality
I like this approach because it is simple. Can you explain how the semi-colon works with fg? Is this the same as having run "myjob && mysecondjob" initially?Monamonachal
@ScheissSchiesser the ; separates commands, whatever they may be. So in this case, fg brings the suspended job to the foreground, and completes it; then the second command is run. Like you mention, you could also do fg && mysecondjob which would launch the second job only if the first (resumed) job returns a 0 exit code (i.e. completes successfully).Embay
Much better than polling every second to see if the job has finished or not. And works without having to fish for the pid!Annabel
This is really great!Mertens
This should have been the accepted best answer rather than the polling suggestion.Textualism
23

You can wait for an already running process using the bash built-in command wait. From man bash:

wait [n ...] Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
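
As the comments below point out, wait only works for children of the current shell, so a minimal sketch (with hypothetical script names) has to start the job from that same shell:

./long_job.sh &       # must be launched from this shell for wait to see it
pid=$!
wait "$pid" && ./script2.sh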

Oryx answered 20/9, 2011 at 13:26 Comment(5)
This may be what I was looking forMinicam
I think this is the most elegant thanks. Use jobs to get the job number and then (assuming job 2) >wait %2 && php run.phpIze
I just want to point out that wait only works with child processes of the same shell.Priscillaprise
Only available within the same shell as your running pid is in: https://mcmap.net/q/125399/-wait-for-a-process-to-finish/1695680Heaviness
If it's a different shell, you could use strace like here: askubuntu.com/a/1071915Lightfingered
2

Often it happens that your program is running several daemons. In that case your PID will be an array. Just use:

PID=($(pidof -x process_name))  # saves all the PIDs of the given process in the PID array

Now, just modify thiton's code as follows:

while ps -p ${PID[*]}; do sleep 1; done ; script2
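
If handing the whole array to ps in one call ever proves fragile, an alternative sketch (process_name and script2 are the same placeholders as above) is to check each PID against /proc instead:

PID=($(pidof -x process_name))
for p in "${PID[@]}"; do
    while [ -d "/proc/$p" ]; do
        sleep 1
    done
done
script2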

Frambesia answered 2/12, 2012 at 17:59 Comment(0)
1

watch -g ps -opid -p {targetPID}; command

You can use this command to run a specific command after a process finishes. The process is identified via its PID.
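
watch's -g (--chgexit) option makes it exit as soon as the output of the watched command changes, which here happens when the PID disappears from ps. For instance, assuming a target PID of 12345 and a hypothetical follow-up script:

watch -g ps -opid -p 12345; ./script2.sh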

Denson answered 2/11, 2023 at 12:25 Comment(0)
0

I had a similar problem and solved it this way:

nohup bash script1.sh &

wait

nohup bash script2.sh &

Reyna answered 10/3, 2016 at 3:25 Comment(0)
0

I had the same requirement and solved it in the following way:

while [[ "$exp" != 0 ]]; do
exp=$(ps -ef |grep -i "SCRIPT_1" |grep -v grep |wc -l)
sleep 5;
done

call SCRIPT_2

Rough answered 28/5, 2018 at 10:50 Comment(0)
-2

The easiest way:

./script1.sh && ./script2.sh

The && says wait for the successful completion of script1 before proceeding to script2.

Unreligious answered 24/6, 2017 at 13:14 Comment(1)
This doesn't answer the question, namely, that script1.sh is already running, and OP doesn't want to restart that process.Embay
