redirect COPY of stdout to log file from within bash script itself

I know how to redirect stdout to a file:

exec > foo.log
echo test

This will put 'test' into the foo.log file.

Now I want to redirect the output into the log file AND keep it on stdout.

i.e. it can be done trivially from outside the script:

script | tee foo.log

but I want to declare it within the script itself.

I tried

exec | tee foo.log

but it didn't work.

Drunken answered 3/7, 2010 at 23:4 Comment(3)
Your question is poorly phrased. When you invoke 'exec > foo.log', the stdout of the script is the file foo.log. I think you mean that you want the output to go to foo.log and to the tty, since going to foo.log is going to stdout.Wiles
What I'd like to do is to use the | on the 'exec'; that would be perfect for me, i.e. "exec | tee foo.log". Unfortunately you cannot use pipe redirection on the exec call.Drunken
Related: How do I redirect the output of an entire shell script within the script itself?Hermineherminia
#!/usr/bin/env bash

# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)

# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1

echo "foo"
echo "bar" >&2

Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.

If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)


Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will see the above construct as writing to a pipe, and adjust their output accordingly.

There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
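
One possible way to keep color on the terminal but a clean log file is to strip the ANSI sequences on the way into the file (an untested sketch, not part of the recipe above; it relies on bash's $'...' quoting to embed the escape character):

exec > >(tee >(sed $'s/\e\[[0-9;]*m//g' > logfile.txt))

ls -C --color=always    # colored on the terminal, plain text in logfile.txt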

Rondarondeau answered 4/8, 2010 at 8:24 Comment(37)
Tee on most systems is buffered, so output may not arrive until after the script has finished. Also, since this tee is running in a subshell, not a child process, wait cannot be used to synchronize output to the calling process. What you want is an unbuffered version of tee similar to bogomips.org/rainbows.git/commit/…Patrolman
This is also likely to leak tee processes.Patrolman
@Barry: Would you care to elaborate how you make this "leak tee processes"?Rondarondeau
@Barry: POSIX specifies that tee should not buffer its output. If it does buffer on most systems, it's broken on most systems. That's a problem of the tee implementations, not of my solution.Rondarondeau
The main reason this is fragile is all of the extra processes that are started. If one ever has to kill or restart it, all of the related script processes will need to be killed one-by-one (HUP is not sent to them if backgrounded). It also allows multiple concurrent writers and doesn't handle any errors. Consider adding -e to the hashbang.Patrolman
How can I stop logging using this method? i.e. reset stdout to only terminal (and not logfile.txt). I might ask new question as well, but it is very related.Albarran
@Sebastian: exec is very powerful, but also very involved. You can "back up" the current stdout to a different filedescriptor, then recover it later on. Google "bash exec tutorial", there's lots of advanced stuff out there.Rondarondeau
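
For reference, a minimal sketch of the back-up-and-restore idea mentioned above (fd 3 is an arbitrary choice for the backup descriptor):

exec 3>&1                     # back up the original stdout on fd 3
exec > >(tee -i logfile.txt)  # start logging

echo "this goes to the terminal and to logfile.txt"

exec 1>&3 3>&-                # restore stdout and close the backup fd
echo "this goes to the terminal only"
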
@Barry: I cannot make this approach leak tee processes no matter what I try. Please provide a test case.Scutage
@AdamSpiers: I'm not sure what Barry was about, either. Bash's exec is documented not to start new processes, >(tee ...) is a standard named pipe / process substitution, and the & in the redirection of course has nothing to do with backgrounding... ?:-)Rondarondeau
I copied this snippet into a file and ran it using 'bash myfile.sh' in a new terminal window. It does the trick, but it keeps hanging until I press ctrl+c or put an exit 1 at the end of the script. Why, and how can I avoid this? ThanksAllo
When I try this, I receive an error message objecting to one or the other of the ">" characters: syntax error near unexpected token `>'. I'm running GNU bash, version 4.1.2(1). Any ideas?Undone
@ChrisJohnson: This works for me on various bash versions ranging from 3.1.17 to 4.1.10. I have no idea where your problem comes from.Rondarondeau
Same "unexpected token" problem here with bash 4.2.37Muss
@LucaBorrione: The script is not hanging, you just get the output of the script after the new prompt. exit 1 doesn't actually change that.Rondarondeau
@Rondarondeau the problem was that the calling script was invoking sh myscript.sh instead of bash myscript.sh. Sorry for not checking before posting.Muss
Then, would there be a way to restore the output, or to force something to be output to the real original STDOUT?Nobel
@abourget: Yes there is, but that's a different (and separate) question.Rondarondeau
This strips colors from stdout output. Is there any way to keep color?Abrogate
@AndyRay: This is an issue of tools (like grep) auto-detecting whether their output is to a terminal or a file, and adjusting their output accordingly. Since you are piping your output, these tools detect "not a terminal" and do not generate ANSI escapes. In the case of grep, you can give the option --color=always to enforce color. Other tools have similar options.Rondarondeau
I suggest passing -i to tee. Otherwise, signal interrupts (traps) will disrupt stdout in the main script. For example, if you have a trap 'echo foo' EXIT and then press ctrl+c, you will not see "foo". So I would modify the answer to exec &> >(tee -ia file).Unity
What could be the reason if it always writes an error message /dev/fd/<Number>: no such file? In the end the log file exists but is empty; streams seem to get printed as normal, not redirected and buffered by tee.Trafalgar
@erikb85: Please don't post questions as comments. Post a question instead. (I'd welcome it if you'd delete that comment.)Rondarondeau
@Rondarondeau This is debugging the proposed solution, not a separate question. I thought instead of just down voting and saying it doesn't work it would be better to discuss what doesn't work. Copy&Paste results in the error message.Trafalgar
@erikb85: man bash, section Process Substitution: "Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. It takes the form of <(list) or >(list). The process list is run with its input or output connected to a FIFO or some file in /dev/fd." --- At which point I'd look at your system and why bash thinks there should be /dev/fd/... when there is not. A problem of your system, not the solution presented here (and elsewhere, this is by no means an invention of myself).Rondarondeau
Yeah but why should my Ubuntu be different than others? Wouldn't believe that this only works on some weird Unix versions people don't use.Trafalgar
@erikb85: This is no "trick" that only works on some weird Unix version, this is a documented bash feature. Tested to work on Cygwin bash 4.3, Ubuntu/Mint bash 4.3, AIX bash 4.3, and SLES bash 3.2 (!!), just by myself and just this morning. I don't know which Unix the other 160 upvoters have been using over the last five years, or why it's so hard for you to understand that you're barking up the wrong tree. Please post your own question and delete your comments here. You are not adding any value to this answer, just noise. You keep this up, and I flag it for mod attention.Rondarondeau
Please note that with this solution, tee will keep running even after the script finished. This may result in e.g. a SSH connection not finishing after termination of the script.Septuple
@LarsNoschinski: I note that Barry had the same comment to make, was asked by Adam Spiers to provide a test case, and has fallen silent. Also note my comment from Aug 10 '12 at 10:56. I would welcome a test case.Rondarondeau
this was fairly nasty for me - it changed the output slightly, losing the initial carriage return - but also for some reason then required you to hit enter to continue. I tried the {... } 2>&1 | tee the.log from below - much cleaner and for me behaved as the original script didFlacon
Also see here on how to pipe the output to another program using exec, ts for instance.Linkous
Is there a way to also log everything sent through stdin? Like I have a few reads in my script and those are not captured...Enterectomy
@BrainStone: I'd suggest posting that as a separate question.Rondarondeau
@Rondarondeau fair point. Here it is: https://mcmap.net/q/113060/-redirect-copy-of-stdin-to-file-from-within-bash-script-itself/1996022Enterectomy
Err yeah so the fact that it keeps capturing stdout to the file after the script finishes is a pretty serious problem and makes the current solution completely unusable. The answer needs to include the step of ending the logging!Plyler
@BenFarmer: And with an answer standing for eleven years, you didn't care to double-check your assertion, or come up with a MCRE? Because this solution doesn't "keep capturing stdout". What it does is printing the new command prompt before the stdout from the script, which might catch some people unaware. But the "solution" is to just carry on (or press Enter once again). There is no "ending the logging".Rondarondeau
@DevSolar: Untrue. It continues capturing if you run the script by sourcing it into the current shell, i.e. with ".", which is the only way I can run scripts on my current machine due to security settings. If you run it in a subshell it is fine, but this can catch people out.Plyler
@BenFarmer If you can only run in your current shell "for security reasons" your system is pretty much FUBAR to begin with. Note that many other kinds of resource acquisition will also fail to release in your case, because Unix environments rely on process cleanup. That is only one point where your system's "security" setup compromises on your security. But at least I understand now where you are coming from. I might add a cleanup if I find the time. -- Note the comment from August 2012 that already touches on the issue.Rondarondeau

The accepted answer does not preserve STDERR as a separate file descriptor. That means

./script.sh >/dev/null

will not output bar to the terminal, only to the logfile, and

./script.sh 2>/dev/null

will output both foo and bar to the terminal. Clearly that's not the behaviour a normal user is likely to expect. This can be fixed by using two separate tee processes both appending to the same log file:

#!/bin/bash

# See (and upvote) the comment by JamesThomasMoon1979 
# explaining the use of the -i option to tee.
exec >  >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)

echo "foo"
echo "bar" >&2

(Note that the above does not initially truncate the log file - if you want that behaviour you should add

>foo.log

to the top of the script.)

The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
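
A sketch of that two-log variant, using ts from moreutils to add the per-line date stamps (the log file names are placeholders):

#!/bin/bash

exec >  >(tee -i >(ts '%F %.T' >> stdout.log))
exec 2> >(tee -i >(ts '%F %.T' >> stderr.log) >&2)

echo "foo"
echo "bar" >&2

# Chronological reassembly later, merging the two pre-sorted logs:
# sort -m stdout.log stderr.log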

Scutage answered 9/8, 2012 at 15:33 Comment(4)
For some reason, in my case, when the script is executed from a c-program system() call, the two tee sub-processes continue to exist even after the main script exits. So I had to add traps like this: exec > >(tee -a $LOG) trap "kill -9 $! 2>/dev/null" EXIT exec 2> >(tee -a $LOG >&2) trap "kill -9 $! 2>/dev/null" EXITJasik
I suggest passing -i to tee. Otherwise, signal interrupts (traps) will disrupt stdout in the script. For example, if you trap 'echo foo' EXIT and then press ctrl+c, you will not see "foo". So I would modify the answer to exec > >(tee -ia foo.log).Unity
I made some little "sourceable" scripts based on this. Can use them in a script like . log or . log foo.log: sam.nipl.net/sh/log sam.nipl.net/sh/log-aUnruh
The problem with this method is that messages going to STDOUT appear first as a batch, and then messages going to STDERR appear. They are not interleaved as usually expected.Truck
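
The trap workaround from the first comment, laid out readably. Note that two separate traps on EXIT would overwrite each other, so this sketch registers a single one (and uses plain kill instead of kill -9, to give tee a chance to flush); it relies on bash setting $! to the PID of the last process substitution:

#!/bin/bash

LOG=foo.log
exec >  >(tee -a "$LOG")
TEE_OUT=$!
exec 2> >(tee -a "$LOG" >&2)
TEE_ERR=$!
trap 'kill "$TEE_OUT" "$TEE_ERR" 2>/dev/null' EXIT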

Solution for busybox, macOS bash, and non-bash shells

The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.

Hopefully this would be useful to someone else who doesn't have bash.

Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.

Note that the use of $* is not necessarily safe; a quoted variant is shown after the script.

#!/bin/sh

if [ "$SELF_LOGGING" != "1" ]
then
    # The parent process will enter this branch and set up logging

    # Create a named piped for logging the child's output
    PIPE=tmp.fifo
    mkfifo $PIPE

    # Launch the child process with stdout redirected to the named pipe
    SELF_LOGGING=1 sh $0 $* >$PIPE &

    # Save PID of child process
    PID=$!

    # Launch tee in a separate process
    tee logfile <$PIPE &

    # Unlink $PIPE because the parent process no longer needs it
    rm $PIPE    

    # Wait for child process, which is running the rest of this script
    wait $PID

    # Return the error code from the child process
    exit $?
fi

# The rest of the script goes here
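
As noted above, $* loses argument boundaries when arguments contain spaces; a quoted variant of the relaunch line would be:

SELF_LOGGING=1 sh "$0" "$@" >"$PIPE" &
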
Cherimoya answered 5/3, 2011 at 0:31 Comment(1)
This is the only solution I've seen so far that works on macMarkson

Inside your script file, put all of the commands within parentheses, like this:

(
echo start
ls -l
echo end
) | tee foo.log
Jonell answered 3/7, 2010 at 23:48 Comment(5)
pedantically, could also use braces ({})Melitamelitopol
well yeah, I considered that, but this is not redirection of the current shell's stdout; it's kind of a cheat, you're actually running a subshell and doing a regular pipe redirection on it. Works though. I'm split between this and the "tail -f foo.log &" solution. Will wait a little to see if maybe a better one surfaces; if not, probably going to settle ;)Drunken
{ } executes a list in the current shell environment. ( ) executes a list in a subshell environment.Patrolman
Damn. Thank you. The accepted answer up there didn't work for me, trying to schedule a script to run under MinGW on a Windows system. It complained, I believe, about unimplemented process substitution. This answer worked just fine, after one change to capture both stderr and stdout: replace ) | tee foo.log with ) 2>&1 | tee foo.log.Escamilla
For me this answer is way simpler and easier to understand than the accepted one, and also doesn't keep redirecting output after the script finishes like the accepted answer does!Plyler
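
For convenience, the stderr-capturing variant suggested in the comments:

(
echo start
ls -l
echo end
) 2>&1 | tee foo.log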

An easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.

Add this line at the top:

exec &> >(logger -t myscript -s)

Alternatively, send the log to a separate file:

exec &> >(ts |tee -a /tmp/myscript.output >&2 )

This requires moreutils (for the ts command, which adds timestamps).

Perambulate answered 7/12, 2014 at 18:22 Comment(1)
It seems your solutions sends only stdout to a separate file. How do I send stdout and stderr to a separate file?Viewer

Using the accepted answer, my script kept returning unexpectedly early (right after 'exec > >(tee ...)'), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution/workaround to the problem:

# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe

# Rest of my script

This makes the script's output go through the named pipe into the backgrounded 'tee' process, which logs everything to disk and to the original stdout of the script.

Note that 'exec &>' redirects both stdout and stderr; we could redirect them separately if we like, or change to 'exec >' if we just want stdout.

Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.

Hombre answered 9/7, 2011 at 14:1 Comment(3)
Similar answer as the second idea from David Z. Have a look at its comments. +1 ;-)Gillies
Works well. I'm not understanding the $logfile part of tee < ${logfile}.pipe $logfile &. Specifically, I tried to alter this to capture full expanded command log lines (from set -x) to file while only showing lines without leading '+' in stdout by changing to (tee | grep -v '^+.*$') < ${logfile}.pipe $logfile & but received an error message regarding $logfile. Can you explain the tee line in a little more detail?Undone
I tested this out and it seems this answer doesn't preserve STDERR (it is merged with STDOUT), so if you rely on the streams being separate for error detection or other redirection, you should look at Adam's answer.Pilate

Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
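
A rough sketch of that idea (bash 4+; the coproc name and the use of fd 3 to keep hold of the terminal are arbitrary choices, not an established recipe):

#!/bin/bash

exec 3>&1                          # keep the original stdout on fd 3
coproc TEE { tee -i foo.log >&3; }
exec >&"${TEE[1]}"                 # our stdout now feeds the tee coprocess

echo "this goes to the terminal and to foo.log"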

Sakai answered 4/7, 2010 at 1:15 Comment(0)

Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:

# my script: 

check_tee_output()
{
    # copy (append) stdout and stderr to log file if TEE is unset or true
    if [[ -z $TEE || "$TEE" == true ]]; then 
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 $@ >> log.txt
        TEE=false "$0" "$@" 2>&1 | tee --append log.txt
        exit "${PIPESTATUS[0]}"  # propagate the script's exit status, not tee's
    fi 
}

check_tee_output "$@"

rest of my script

This allows you to do this:

your_script.sh args           # tee 
TEE=true your_script.sh args  # tee 
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args           # don't tee

You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file name instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.

Epicure answered 26/7, 2018 at 22:44 Comment(0)

Neither of these is a perfect solution, but here are a couple things you could try:

exec >foo.log
tail -f foo.log &
# rest of your script

or

PIPE=tmp.fifo
mkfifo $PIPE
exec >$PIPE
tee foo.log <$PIPE &
# rest of your script
rm $PIPE

The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).

Kidderminster answered 3/7, 2010 at 23:25 Comment(4)
tail will leave a running process behind. In the 2nd script, tee will block, or you will need to run it with &, in which case it will leave a process behind as in the 1st one.Drunken
@Vitaly: oops, forgot to background tee - I've edited. As I said, neither is a perfect solution, but the background processes will get killed when their parent shell terminates, so you don't have to worry about them hogging resources forever.Kidderminster
Yikes: these look appealing, but the output of tail -f is also going to foo.log. You can fix that by running tail -f before the exec, but the tail is still left running after the parent terminates. You need to explicitly kill it, probably in a trap 0.Wiles
Yeap. If the script is backgrounded, it leaves processes all over.Patrolman
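
Putting the comments together, a sketch of the first approach with tail started before the redirection and cleaned up by a trap:

#!/bin/bash

: > foo.log                   # create (or empty) the log before tail opens it
tail -f foo.log &             # started before the exec, so it writes to the tty
TAIL_PID=$!
trap 'kill "$TAIL_PID"' EXIT  # stop the tail when the script ends

exec >> foo.log               # append, so tail's view of the file stays consistent
# rest of your script
echo "this is logged and shown"

Note that tail may be killed before it has printed the script's last lines; this sketch trades that race for simplicity.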
