How to redirect stdout+stderr to one file while keeping streams separate?

Redirecting stdout+stderr such that both get written to a file while still outputting to stdout is simple enough:

cmd 2>&1 | tee output_file

But now both stdout and stderr from cmd come out on stdout. I'd like to write stdout+stderr to the same file (so ordering is preserved, assuming cmd is single-threaded) but still be able to redirect them separately, something like this:

some_magic_tee_variant combined_output cmd > >(command-expecting-stdout) 2> >(command-expecting-stderr)

So combined_output contains both, with order preserved, but command-expecting-stdout gets only stdout and command-expecting-stderr gets only stderr. Basically, I want to log stdout+stderr while still allowing stdout and stderr to be separately redirected and piped. The problem with the tee approach is that it lumps them together. Is there a way to do this in bash/zsh?
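The closest I've come with process substitution alone is something like this, but the relative order of lines inside combined_output is not actually guaranteed when writes to the two streams race:

cmd > >(tee -a combined_output | command-expecting-stdout) 2> >(tee -a combined_output | command-expecting-stderr)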

Sulphurbottom asked 20/9, 2012 at 17:10 Comment(5)
It seems like this isn't so much a shell issue as a *nix issue: there's no way to "split" a file-descriptor to point to two separate things, so the only way to reliably preserve the ordering of stdout and stderr is to dupe the "original" file-descriptors so they're pointing to the same place -- after which point you can't re-distinguish them anymore, because they're actually identical.Hubblebubble
You could write some_magic_tee_variant in C I think. It would poll stdout/stderr, and when it received data on either would immediately write it to the file and then output it on the respective stream. Technically it might not exactly preserve ordering if the scheduler puts some_magic_tee_variant to sleep and then it wakes up from poll with data waiting both on stdout and stderr -- but I imagine that reordering exists even with the shell outputting both stdout and stderr to the tty? That might be a good separate question for me to post...Sulphurbottom
Re: "I imagine that reordering exists even with the shell outputting both stdout and stderr to the tty": I'm not sure what behaviors are allowed by POSIX/SUS/etc., but in normal implementations, no, there's no reordering. The way it works is, the shell just points file-descriptors 1 and 2 (stdout and stderr) at e.g. /dev/tty17 and runs the command. It's not as though the shell had to poll for the command's output and forward that output to the TTY.Hubblebubble
Interesting requirement. Are you going to prefix each line in the combined file with 1: for the stdout lines and 2: for the stderr lines? I think the most nearly reasonable way to do it is to have a process analogous to nohup which takes the command and arguments and runs it under supervision: magic_trick -- cmd -o whatever. The magic_trick program would perform some minor miracles with ptys (pseudo-ttys) if its own output is going to a tty, because the behaviour of stdout and stderr depends on whether the output is going to a terminal or a pipe or a file. Lots of details TBS.Stepdaughter
@JosephGarvin Any luck? I would be interested in this also. I want to have three files: stdout_and_stderr.log, stdout.log, and stderr.logNicotinism

From what I understand, this is what you are looking for. First I made a little script that writes to stdout and stderr. It looks like this:

$ cat foo.sh 
#!/bin/bash

echo foo 1>&2
echo bar

Then I ran it like this:

$ ./foo.sh 2> >(tee stderr | tee -a combined) 1> >(tee stdout | tee -a combined)
foo
bar

The results in my bash look like this:

$ cat stderr
foo
$ cat stdout 
bar
$ cat combined 
foo
bar

Note that the -a flag is required so that the two tees writing to combined don't truncate each other's output.

Jacklynjackman answered 20/9, 2012 at 17:30 Comment(4)
That won't preserve ordering; if you change your foo.sh to wrap the echo statements in a loop, you'll find that combined doesn't end up with foo and bar strictly alternating.Hubblebubble
Are you sure? That sounds like a timing issue. It might be enough to first tee -a combined and then tee std(out|err). I'll give it a try later today. Otherwise I don't see any way around named pipes.Jacklynjackman
Most readable answer I found! I wished to keep stdout and stderr separate on a terminal (my terminal colors them differently), so I did ./foo.sh 2> >(tee -a combined.log >&2) 1> >(tee -a combined.log) - note >&2 in the stderr's tee.Carvalho
@keks, I'm completely sure that the issue ruakh described is legitimate. There's no guarantee that a write to either descriptor will completely flush through the pipeline before the parent process writes to the other.Eradicate
This keeps the streams separate: stdout is teed to the file out and stderr to the file err, while both still reach the terminal:

{ { cmd | tee out >&3; } 2>&1 | tee err >&2; } 3>&1

Or, to be pedantic:

{ { cmd 3>&- | tee out >&3 2> /dev/null; } 2>&1 | tee err >&2 3>&- 2> /dev/null; } 3>&1

Note that it's futile to try to preserve order. It is basically impossible. The only solution would be to modify cmd, or use some LD_PRELOAD or gdb hack.
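For example, with a stand-in command group that writes one line to each stream (a sketch, not a captured transcript):

{ { { echo to-stdout; echo to-stderr >&2; } | tee out >&3; } 2>&1 | tee err >&2; } 3>&1

Both lines still appear on the terminal, out ends up containing to-stdout, and err ends up containing to-stderr.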

Kimber answered 14/10, 2012 at 20:52 Comment(1)
That should work (I didn't try it); however, it writes to two files (out and err), and the question was whether it is possible to get them into one file.Dragline

Order can indeed be preserved. Here's an example which captures the standard output and error, in the order in which they are generated, to a logfile, while displaying only the standard error on any terminal screen you like. Tweak to suit your needs.

1. Open two windows (shells)

2. Create some test files:

touch /tmp/foo /tmp/foo1 /tmp/foo2

3. In window 1:

mkfifo /tmp/fifo
</tmp/fifo cat - >/tmp/logfile

4. Then, in window 2:

(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/1

Where /dev/pts/1 can be whatever terminal display you want. The subshell runs some "ls" and "echo" commands in sequence, some succeed (providing stdout) and some fail (providing stderr) in order to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file.
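If you only have one terminal available, roughly the same setup can be run from a single shell by sending tee's copy back to the current terminal's stderr instead of to another pty (a sketch along the same lines; the ordering caveats raised in the comments still apply):

mkfifo /tmp/fifo
cat /tmp/fifo >/tmp/logfile &
(ls -l /tmp/foo /tmp/nofile /tmp/foo1; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo >&2
wait
rm /tmp/fifo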

Evolution answered 2/8, 2013 at 20:10 Comment(2)
The time needed for ls to start up is much longer than the time taken to flush a pipeline under low-load conditions. This test is completely invalid at proving that order is preserved in close races.Eradicate
Also, the OP wants stdout to also be displayed on the terminal.Eradicate

Here's how I do it:

exec 3>log ; example_command 2>&1 1>&3 | tee -a log ; exec 3>&-

Worked Example

bash$ exec 3>log ; { echo stdout ; echo stderr >&2 ; } 2>&1 1>&3 | \
      tee -a log ; exec 3>&-
stderr
bash$ cat log
stdout
stderr

Here's how that works:

exec 3>log sets up file descriptor 3 to redirect into the file called log, until further notice.

example_command: to make this a working example, I used { echo stdout ; echo stderr >&2 ; }. You could instead use ls /tmp doesnotexist to provide output on both streams.

We need to jump ahead to the pipe | at this point, because bash sets it up first. The pipe redirects file descriptor 1 into it, so now STDOUT is going into the pipe.

Now we can go back to where we were in our left-to-right interpretation: 2>&1 says errors from the program go to wherever STDOUT currently points, i.e. into the pipe we just set up.

1>&3 means STDOUT is redirected into file descriptor 3, which we earlier set up to output to the log file. So STDOUT from the command just goes into the log file, not to the terminal's STDOUT.

tee -a log takes its input from the pipe (which, you'll remember, now carries the errors from the command), writes it to STDOUT and also appends it to the log file.

exec 3>&- closes the file descriptor 3.
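If you also want STDOUT to show up on the terminal (as the original question asks), file descriptor 3 can itself point at a tee through a process substitution. A sketch along the same lines, with the usual caveat that ordering between the two tees is not guaranteed:

exec 3> >(tee -a log) ; { echo stdout ; echo stderr >&2 ; } 2>&1 1>&3 | tee -a log ; exec 3>&-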

Coper answered 15/4, 2014 at 10:57 Comment(1)
Try replacing your pair of echos with a for loop that alternates them. Running this code with 10,000 iterations, the results are not remotely close to the even out/err/out/err result one would hope for.Eradicate

Victor Sergienko's comment is what worked for me; adding exec to the front of it makes this work for the entire script (instead of having to put it after individual commands):

exec 2> >(tee -a output_file >&2) 1> >(tee -a output_file)
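For example, placed near the top of a script (a sketch; output_file and the echo lines are placeholders):

#!/bin/bash

exec 2> >(tee -a output_file >&2) 1> >(tee -a output_file)

echo "this line goes to stdout and is appended to output_file"
echo "this line goes to stderr and is appended to output_file" >&2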

Dunleavy answered 7/4, 2015 at 21:37 Comment(1)
This doesn't actually guarantee that ordering is preserved. If you have writes to stderr and stdout coming very near each other, they can show up in the opposite order in output_file (or on the TTY)!Eradicate
