How can I pipe stderr, and not stdout?

11

1270

I have a program that writes information to stdout and stderr, and I need to process the stderr with grep, leaving stdout aside.

Using a temporary file, one could do it in two steps:

command > /dev/null 2> temp.file
grep 'something' temp.file

But how can this be achieved without temp files, using one command and pipes?

Padding answered 26/2, 2010 at 15:53 Comment(8)
A similar question, but retaining stdout: unix.stackexchange.com/questions/3514/…Uniformize
This question was for Bash but it's worth mentioning this related article for Bourne / Almquist shell.Rainer
@Rolf What do you mean? Bash gets updates fairly regularly; the syntax you propose is not very good, because it conflicts with existing conventions, but you can actually use |& to pipe both stderr and stdout (which isn't what the OP is asking exactly, but pretty close to what I guess your proposal could mean).Leeanneleeboard
@Leeanneleeboard I mean that the development of features or syntax seems to have ended or is happening at a very slow pace therefore we seem to be stuck with syntax that was determined decades ago.Exurbanite
@Exurbanite the syntax you proposed is ambiguous in bash. But even if we ignore the ambiguity, is the backwards incompatibility and user frustration really worth it to save a few keystrokes, for a 'feature' that replicates existing behavior?Vocalist
@Vocalist how is it backwards incompatible and ambiguous? what does 2| otherwise mean in bash?Exurbanite
@Exurbanite These commands would have different behavior: echo 2 | tee my_file versus echo 2|tee my_file.Vocalist
@Vocalist Thanks. 2 | is not 2| indeed, I would not call it ambiguous, more like potentially error-inducing, just like echo 2 > /myfile and echo 2> /myfile which is even more of an issue. Anyway it's not about saving a few keystrokes, I find the other solutions convoluted and quirky and have yet to wrap my head around them which is why I would just fire up rc which has a straightforward syntax for determining the stream that you want to redirect.Exurbanite
1551

First redirect stderr to stdout — the pipe; then redirect stdout to /dev/null (without changing where stderr is going):

command 2>&1 >/dev/null | grep 'something'

For the details of I/O redirection in all its variety, see the chapter on Redirections in the Bash reference manual.

Note that the sequence of I/O redirections is interpreted left-to-right, but pipes are set up before the I/O redirections are interpreted. File descriptors such as 1 and 2 are references to open file descriptions. The operation 2>&1 makes file descriptor 2 aka stderr refer to the same open file description as file descriptor 1 aka stdout is currently referring to (see dup2() and open()). The operation >/dev/null then changes file descriptor 1 so that it refers to an open file description for /dev/null, but that doesn't change the fact that file descriptor 2 refers to the open file description which file descriptor 1 was originally pointing to — namely, the pipe.
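To see the effect concretely, here is a small sketch using a hypothetical `emit` helper that stands in for any program writing one line to each stream; only the stderr line survives the pipe:

```shell
# emit is a stand-in for any program writing to both streams
emit() {
  echo "to-stdout"
  echo "to-stderr" >&2
}

# 2>&1 points stderr at the pipe; >/dev/null then discards stdout
result=$(emit 2>&1 >/dev/null | grep 'stderr')
echo "$result"   # only the stderr line gets through
```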

Kim answered 26/2, 2010 at 15:55 Comment(29)
i just stumbled across /dev/stdout /dev/stderr /dev/stdin the other day, and I was curious if those are good ways of doing the same thing? I always thought 2>&1 was a bit obfuscated. So something like: command 2> /dev/stdout 1> /dev/null | grep 'something'Richella
You could use /dev/stdout et al, or use /dev/fd/N. They will be marginally less efficient unless the shell treats them as special cases; the pure numeric notation doesn't involve accessing files by name, but using the devices does mean a file name lookup. Whether you could measure that is debatable. I like the succinctness of the numeric notation - but I've been using it for so long (more than a quarter century; ouch!) that I'm not qualified to judge its merits in the modern world.Kim
haha, fair enough Jonathan. I didn't realize that there was an efficiency gain with 2>&1. Thanks for pointing that outRichella
@Jonathan Leffler: I take a little issue with your plain text explanation 'Redirect stderr to stdout and then stdout to /dev/null' -- Since one has to read redirection chains from right to left (not from left to right), we should also adapt our plain text explanation to this: 'Redirect stdout to /dev/null, and then stderr to where stdout used to be'.Vitalize
@KurtPfeifle: au contraire! One must read the redirection chains from left to right since that is the way the shell processes them. The first operation is the 2>&1, which means 'connect stderr to the file descriptor that stdout is currently going to'. The second operation is 'change stdout so it goes to /dev/null', leaving stderr going to the original stdout, the pipe. The shell splits things at the pipe symbol first, so, the pipe redirection occurs before the 2>&1 or >/dev/null redirections, but that's all; the other operations are left-to-right. (Right-to-left wouldn't work.)Kim
You need to parse each redirection operation from right to left. 2>&1 (or 2>& 1) consists of an operator 2>& and a file descriptor (fd) argument 1. The shell then 'dupes' the target of the fd argument to the file descriptor embedded in the operator (2 in this case).Vanhorn
The thing that really surprises me about this is that it works on Windows, too (after renaming /dev/null to the Windows equivalent, nul).Suite
@KurtPfeifle: I used to have difficulties with understanding the required order of the redirections too. Until I wrote a little shell, and realized how a redirection is done with the fd_redirect_into = open("file"); close(fd_to_redirect); dup(fd_redirect_into); close(fd_redirect_into); system-call sequence.Engler
@J.F.Sebastian could you please explain |& ?Nellnella
@VassilisGr it is a bash syntax that merges stdout/stderr (like 2>&1). It is used to capture stderr here: due to |&: command's stderr is redirected to stdout, then stdout is redirected to /dev/null, then grep receives only stderr from the command on its stdin.Wootan
@J.F.Sebastian thanks for your reply. It's a very interesting approach! Someone might check the bash versions though. For example, in GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13) it raises syntax error near unexpected token &. With GNU Bash 4.3 on Linux, no problem!Nellnella
@J.F.Sebastian Reading from left-to-right (as Jonathan explains) wouldn't >/dev/null |& first redirect stdout to /dev/null and then 2>&1 redirects stderr also to /dev/null?Muliebrity
@legends2k: Note that first a pipeline is split into a sequence of commands connected by pipes. |& is a special case of |; it redirects both standard output and standard error to the pipe. Then the plain redirections are processed left-to-right. So, command > /dev/null |& grep 'something' splits the pipeline at the |&. On the LHS, the standard output and standard error are redirected to the pipe; then the >/dev/null redirection sends standard output to /dev/null, so only standard error is going to the pipe. The grep reads from the pipe, looking for 'something'.Kim
@legends2k: Overall, command >/dev/null |& grep 'something' is equivalent to command 2>&1 >/dev/null | grep 'something'. See also pipelines in the Bash manual. In fact, that says: If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command. Ugh! Well, it is what it is; RTFM applies.Kim
And I should add that the RTFM applies at least as much to me as anyone else; it wasn't until I read the fine print that I found the slightly odd behaviour of |&.Kim
Yes. I quoted the manual. It supersedes my interpretation of it. I won't be using |&; it is unnecessary and does not conform to my idea of what is useful to me. Others may do as they wish.Kim
@JonathanLeffler Found it! Both our interpretations that |& is a special case is based on Sebastian's comment (and partly due to the number of upvotes it garnered). However, that comment is incorrect. See here. I've come to tell you that your interpretation, without the special case, and as the manual explains is actually correct. So >/dev/null |& first redirects stdout to NUL and stderr also gets pointed to NUL (left-to-right); this it's not what the OP wants.Muliebrity
@legends2k: I've removed the incorrect comment. I thought that |& processing happens before the individual redirections, but as the manual says it happens after, and the actual behavior confirms that it happens after the redirections, i.e., command >/dev/null |& grep 'something' is equivalent to command >/dev/null 2>&1 | grep 'something' (all standard output goes to /dev/null; grep sees nothing)Wootan
@KurtPfeifle I'm amused by "we have to read from right to left" and "no, we must read from left to right". Both statements are wrong, of course, since there are valid interpretations that could reasonably be labelled each way. Personally my brain always wants command $a>&$b $c>&d $e>&$f to do ((command $a>&$b) $c>&d) $e>&$f (i.e. "left-to-right" in a sense), but that's wrong. Instead, it does ((command $e>&$f) $c>&d) $a>&$b (i.e. a "right-to-left" reading of the unparenthesized pipeline, in a sense). Others have given "left-to-right" explanations which are valid as well.Simper
@JonathanLeffler; Is command &>1 >/dev/null | grep 'something' equivalent to command 2>&1 >/dev/null | grep 'something'?Vite
&>1 is a Bash neologism; in my (archaic, cranky) view, it's horrid and I'd not touch it with a barge-pole. If my reading of the Bash manual on &> is to be trusted, &>1 means 'redirect standard output and standard error to a file called 1' — and not to file descriptor number 1. I've not experimented with it; I have no plans to do so at the moment. I don't find it a useful addition to the shell syntax.Kim
OK. Actually I have seen a test like if kill -0 &>1 > /dev/null $pid to check whether a process, with process id stored in variable pid, is running or not. This was the source of confusion. I guess it should be either if kill -0 &> /dev/null $pid or if kill -0 $pid > /dev/null 2>&1.Vite
Conversely, if you want to see only standard output and not standard error you can do "command 1>&2 2> /dev/null". This is implied by the answer but wanted to spell it out for those simply seeking an answer for how to do this.Sibley
@GeorgeColpitts: Normally, if you want to lose standard error, you use just command 2>/dev/null. Using command 1>&2 2>/dev/null means that the standard output of the command goes to where the standard error was going (probably the terminal) — that's the 1>&2 part — and then standard error (but not the standard output) is sent to /dev/null. It isn't 100% wrong to use both, but it is unusual. Note, too, that the sequence matters. command 2>/dev/null 1>&2 sends standard error to /dev/null and then sends standard output to the same place.Kim
I have come back and read @JonathanLeffler's Jul 1 '12 comment every 3 to 6 months since he first made it. Today, for the first time, it… not only made sense, but felt… rhetorical. Like, "duh, of course." I think I'm finally done understanding it. I'll check back in a few months.Supranational
If this hangs and doesn't continue after execution, try wrapping your command in curly brackets. Example: { command; } 2>&1 >/dev/null | grep 'something'Diagnostic
Note that you can replace /dev/null with a filename to redirect stdout to a file and stderr to a process.Hobbism
Note that this doesn't work (for some reason) in zsh: you need to wrap the whole thing in a subshell. See #58020428Mildew
On syntax: It might be useful to note that >/dev/null is equivalent to 1>/dev/null. Furthermore, without the & in 2>&1, it would write to a file named 1 rather than the file descriptor 1.Tecla
413

Or to swap the output from standard error and standard output over, use:

command 3>&1 1>&2 2>&3

This creates a new file descriptor (3) and assigns it to the same place as 1 (standard output), then assigns fd 1 (standard output) to the same place as fd 2 (standard error) and finally assigns fd 2 (standard error) to the same place as fd 3 (standard output).

Standard error is now available as standard output and the old standard output is preserved in standard error. This may be overkill, but it hopefully gives more details on Bash file descriptors (there are nine available to each process).
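A quick way to verify the swap, using a hypothetical `emit` helper that writes one line to each stream. After the three redirections, capturing stdout yields the original stderr line:

```shell
emit() {
  echo "out"
  echo "err" >&2
}

# Inside the group, fd 1 and fd 2 are swapped; the outer 2>/dev/null
# silences what is now on stderr (the original stdout line).
swapped=$( { emit 3>&1 1>&2 2>&3 3>&-; } 2>/dev/null )
echo "$swapped"   # the original stderr line, now on stdout
```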

Cherie answered 4/3, 2010 at 18:18 Comment(13)
A final tweak would be 3>&- to close the spare descriptor that you created from stdoutKim
Can we create a file descriptor that has stderr and another that has the combination of stderr and stdout? In other words can stderr go to two different files at once?Benzidine
The following still prints errors to stdout. What am I missing? ls -l not_a_file 3>&1 1>&2 2>&3 > errors.txtConciliar
@Conciliar - I'm adding /etc/passwd to your command so it'll have non-empty stdout, to make things clearer. If your mind is like mine, you're assuming your ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3 > errors.txt should give you the same as (ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3) > errors.txt, which is wrong. You can get the latter if desired by typing exactly that. On the other hand if your goal is simply to redirect 2 to a file, that's way easier: ls -l /etc/passwd not_a_file 2> errors.txt .Simper
@Conciliar - To understand what your proposed command ls -l /etc/passwd not_a_file 3>&1 1>&2 2>&3 > errors.txt actually does, start by following Kramish's description; by the end of it, you've effectively swapped 1 and 2 which when run from command line isn't very interesting since they were originally both pointing at the terminal, so again they both point at the terminal. Your final > errors.txt, i.e. 1> errors.txt, means the prog's output 1 (listing /etc/passwd) gets finally redirected to errors.txt, with its output 2 (complaining about not_a_file) still pointed at the terminal.Simper
@JonathanLeffler Out of curiosity, does your tweak serve any purpose performance-wise, other than perhaps clarifying the role of file descriptor (3) for an observer?Tithonus
@JonasDahlbæk: the tweak is primarily an issue of tidiness. In truly arcane situations, it might make the difference between a process detecting and not detecting EOF, but that requires very peculiar circumstances.Kim
I think the main difference with 3>&- is that write(3, "blarg") in the command will immediately fail rather than eventually lock up the app.Nona
Caution: this assumes FD 3 is not already in use, doesn't close it, and doesn't undo the swapping of file descriptors 1 and 2, so you can't go on to pipe this to yet another command. See this answer for further detail and work-around. For a much cleaner syntax for {ba,z}sh, see this answer.Kamakura
@Tom What is the scope for closing a FD? So is >&- just closing the stdout for one command and therefore shorter than > /dev/null?Berrie
Or is it doing real harm? @Jonathan How would you undo this?Berrie
@Berrie — I don't understand your question. How would I undo what?Kim
Closing an FD with >&-. Or is it only for the scope of one command?Berrie
278

In Bash, you can also redirect to a subshell using process substitution:

command > >(stdout pipe)  2> >(stderr pipe)

For the case at hand:

command 2> >(grep 'something') >/dev/null
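As a sketch (Bash-only; `emit` is a hypothetical helper writing one line to each stream): the stderr line reaches grep through the process substitution while stdout is discarded. Capturing grep's stdout also makes the command substitution wait for grep to finish.

```shell
emit() {
  echo "normal output"
  echo "warning: disk full" >&2
}

# stderr feeds grep via process substitution; stdout goes to /dev/null
matched=$(emit 2> >(grep 'disk') >/dev/null)
echo "$matched"
```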
Blas answered 9/2, 2012 at 19:14 Comment(6)
Works very well for output to the screen. Do you have any idea why the ungrepped content appears again if I redirect the grep output into a file? After command 2> >(grep 'something' > grep.log) grep.log contains the same the same output as ungrepped.log from command 2> ungrepped.logCrowe
Use 2> >(stderr pipe >&2). Otherwise the output of the "stderr pipe" will go through the "stdlog pipe".Distributary
Yeah! 2> >(...) works; I tried 2>&1 > >(...) but it didn'tSutton
Here's a small example that may help me next time I look up how to do this. Consider the following ... awk -f /new_lines.awk <in-content.txt > out-content.txt 2> >(tee new_lines.log 1>&2 ) In this instance I wanted to also see what was coming out as errors on my console. But STDOUT was going to the output file. So inside the sub-shell, you need to redirect that STDOUT back to STDERR inside the parentheses. While that works, the STDOUT output from the tee command winds up at the end of the out-content.txt file. That seems inconsistent to me.Hedvah
@datdinhquoc I did it somehow like 2>&1 1> >(dest pipe)Untwine
@Alireza from my understanding, you then get both the stderr and stdout in your pipe. To all: pay attention that you have to type ... >(..., not ... > (... (the space is wrong)Berrie
234

Combining the best of these answers, if you do:

command 2> >(grep -v something 1>&2)

...then all stdout is preserved as stdout and all stderr is preserved as stderr, but you won't see any lines in stderr containing the string "something".

This has the unique advantage of not reversing or discarding stdout and stderr, nor smushing them together, nor using any temporary files.
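A sketch of the filtering (Bash-only; `emit` is a hypothetical helper, and the `1>&2` is dropped here only so the filtered stream is easy to capture on stdout):

```shell
emit() {
  echo "stdout line"
  echo "keep this error" >&2
  echo "something noisy" >&2
}

# stderr passes through grep -v, which drops lines containing "something";
# stdout is discarded so only the filtered stderr is captured
filtered=$(emit 2> >(grep -v something) >/dev/null)
echo "$filtered"
```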

Garald answered 10/4, 2013 at 21:5 Comment(5)
Isn't command 2> >(grep -v something) (without 1>&2) the same?Lucialucian
No, without that, the filtered stderr ends up being routed to stdout.Garald
This is what I needed - tar outputs "file changed as we read it" for a directory always, so just want to filter out that one line but see if any other errors occur. So tar cfz my.tar.gz mydirectory/ 2> >(grep -v 'changed as we read it' 1>&2) should work.Discus
this the only valid answer to the question.Gca
@Garald are you sure your comment still stands? I'm doing some tests and it seems that is not the case, at least not anymore.Hoskinson
123

It's much easier to visualize things if you think about what's really going on with "redirects" and "pipes." Redirects and pipes in bash do one thing: modify where the process file descriptors 0, 1, and 2 point to (see /proc/[pid]/fd/*).

When a pipe or "|" operator is present on the command line, the first thing to happen is that bash creates a fifo and points the left side command's FD 1 to this fifo, and points the right side command's FD 0 to the same fifo.

Next, the redirect operators for each side are evaluated from left to right, and the current settings are used whenever duplication of the descriptor occurs. This is important: since the pipe was set up first, FD 1 (left side) and FD 0 (right side) are already changed from what they might normally have been, and any duplication of these will reflect that fact.

Therefore, when you type something like the following:

command 2>&1 >/dev/null | grep 'something'

Here is what happens, in order:

  1. a pipe (fifo) is created. "command FD1" is pointed to this pipe. "grep FD0" also is pointed to this pipe
  2. "command FD2" is pointed to where "command FD1" currently points (the pipe)
  3. "command FD1" is pointed to /dev/null

So, all output that "command" writes to its FD 2 (stderr) makes its way to the pipe and is read by "grep" on the other side. All output that "command" writes to its FD 1 (stdout) makes its way to /dev/null.

If instead, you run the following:

command >/dev/null 2>&1 | grep 'something'

Here's what happens:

  1. a pipe is created and "command FD 1" and "grep FD 0" are pointed to it
  2. "command FD 1" is pointed to /dev/null
  3. "command FD 2" is pointed to where FD 1 currently points (/dev/null)

So, all stdout and stderr from "command" go to /dev/null. Nothing goes to the pipe, and thus "grep" will close out without displaying anything on the screen.

Also note that redirects (file descriptors) can be read-only (<), write-only (>), or read-write (<>).

A final note. Whether a program writes something to FD1 or FD2, is entirely up to the programmer. Good programming practice dictates that error messages should go to FD 2 and normal output to FD 1, but you will often find sloppy programming that mixes the two or otherwise ignores the convention.
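The two orderings described above can be compared directly, using a hypothetical `emit` helper that writes one line to each stream:

```shell
emit() {
  echo "out"
  echo "err" >&2
}

# 2>&1 before >/dev/null: grep sees the stderr line
only_stderr=$(emit 2>&1 >/dev/null | grep '.')

# >/dev/null before 2>&1: both streams end at /dev/null, grep sees nothing
nothing=$(emit >/dev/null 2>&1 | grep '.') || true

echo "first: $only_stderr"
echo "second: ${nothing:-<empty>}"
```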

Satanic answered 20/8, 2013 at 18:9 Comment(3)
Really nice answer. My one suggestion would be to replace your first use of "fifo" with "fifo (a named pipe)". I've been using Linux for a while but somehow never managed to learn that is another term for named pipe. This would have saved me from looking it up, but then again I wouldn't have learned the other stuff I saw when I found that out!Whaley
@MarkEdington Please note that FIFO is only another term for named pipe in the context of pipes and IPC. In a more general context, FIFO means First in, first out, which describes insertion and removal from a queue data structure.Narial
@Narial Of course. The point of my comment was that even as a seasoned developer, I had never seen FIFO used as a synonym for named pipe. In other words, I didn't know this: en.wikipedia.org/wiki/FIFO_(computing_and_electronics)#Pipes - Clarifying that in the answer would have saved me time.Whaley
55

If you are using Bash, then use:

command >/dev/null |& grep "something"

https://www.gnu.org/software/bash/manual/bashref.html#Pipelines

>/dev/null first redirects stdout to /dev/null, then |& redirects stderr to stdout.

Hence you're left with stderr --> stdout and the original stdout is stripped.

The same thing can be achieved with

command >/dev/null 2>&1 | grep "something"

However, I have found through testing that |& can grab additional output that isn't redirected with 2>&1 alone, such as when running certain processes under Wine (which can include many subshells/subprocesses and redirects). This doesn't seem to be documented anywhere.

Struma answered 18/4, 2014 at 21:56 Comment(5)
„If ‘|&’ is used, the standard error of command1 is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |” Taken verbatim from the fourth paragraph at your link.Bilberry
@Profpatsch: Ken's answer is correct; note that he redirects stdout to null before combining stdout and stderr, so the pipe gets only stderr, because stdout was previously dropped to /dev/null.Rhynchocephalian
I used mplayer a 2>/dev/null |& grep i (got output) and mplayer a >/dev/null |& grep i (no output at all!) to test, where file name a doesn't exist, and wondered why your answer doesn't work; maybe it's due to the bash version? Then I figured out I need to use fd 3, i.e. mplayer a 3>/dev/null |& grep i, to get the output lol.Ammonium
But I still found your answer is wrong: >/dev/null |& expands to >/dev/null 2>&1 |, which means nothing goes to the pipe from stdout, because nobody (#1 and #2 are both tied to the /dev/null inode) is tied to the stdout inode (e.g. ls -R /tmp/* >/dev/null 2>&1 | grep i gives empty output, but ls -R /tmp/* 2>&1 >/dev/null | grep i lets #2, which is tied to the stdout inode, go to the pipe).Ammonium
Ken Sharp, I tested, and ( echo out; echo err >&2 ) >/dev/null |& grep "." gives no output (where we want "err"). man bash says If |& is used … is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command. So first we redirect command's FD1 to null, then we redirect command's FD2 to where FD1 pointed, ie. null, so grep's FD0 gets no input. See https://mcmap.net/q/22879/-how-can-i-pipe-stderr-and-not-stdout for a more in-depth explanation.Algy
12

For those who want to redirect stdout and stderr permanently to files, grep on stderr, but keep stdout for writing messages to the tty:

# save tty-stdout to fd 3
exec 3>&1
# switch stdout and stderr, grep (-v) stderr for nasty messages and append to files
exec 2> >(grep -v "nasty_msg" >> std.err) >> std.out
# goes to the std.out
echo "my first message" >&1
# goes to the std.err
echo "an error message" >&2
# goes nowhere
echo "this nasty_msg won't appear anywhere" >&2
# goes to the tty
echo "a message on the terminal" >&3
Christopher answered 14/11, 2012 at 8:59 Comment(0)
11

This will redirect command1 stderr to command2 stdin, while leaving command1 stdout as is.

exec 3>&1
command1 2>&1 >&3 3>&- | command2 3>&-
exec 3>&-

Taken from LDP

Pernambuco answered 7/10, 2014 at 7:39 Comment(2)
So if I'm understanding this correctly, we start by duplicating the stdout of the current process (3>&1). Next redirect command1's error to its output (2>&1), then point stdout of command1 to the parent process's copy of stdout (>&3). Clean up the duplicated file descriptor in the command1 (3>&-). Over in command2, we just need to also delete the duplicated file descriptor (3>&-). These duplicates are caused when the parent forked itself to create both processes, so we just clean them up. Finally in the end, we delete the parent process's file descriptor (3>&-).Ringler
In the end, we have command1's original stdout pointer, now pointing to the parent process's stdout, while its stderr is pointing to where its stdout used to be, making it the new stdout for command2.Ringler
4

I just came up with a solution for sending stdout to one command and stderr to another, using named pipes.

Here goes.

mkfifo stdout-target
mkfifo stderr-target
cat < stdout-target | command-for-stdout &
cat < stderr-target | command-for-stderr &
main-command 1>stdout-target 2>stderr-target

It's probably a good idea to remove the named pipes afterward.
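A slightly more self-contained sketch of the same idea, using a temporary directory so the FIFOs are easy to clean up afterwards; the line-counting consumers here are just placeholders for command-for-stdout and command-for-stderr:

```shell
dir=$(mktemp -d)
mkfifo "$dir/out" "$dir/err"

# placeholder consumers: count the lines arriving on each stream
grep -c '' < "$dir/out" > "$dir/out_count" &
grep -c '' < "$dir/err" > "$dir/err_count" &

# a stand-in main command writing two stdout lines and one stderr line
{ echo "o1"; echo "o2"; echo "e1" >&2; } > "$dir/out" 2> "$dir/err"
wait

out_lines=$(cat "$dir/out_count")
err_lines=$(cat "$dir/err_count")
rm -r "$dir"
echo "stdout: $out_lines lines, stderr: $err_lines lines"
```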

Caterpillar answered 9/8, 2018 at 18:35 Comment(1)
It's worth remembering that if either command fails then it could end up blocking, but sometimes this may be a desired result.Struma
1

You can use the rc shell.

First install the package (it's less than 1 MB).

This is an example of how you would discard standard output and pipe standard error to grep in rc:

find /proc/ >[1] /dev/null |[2] grep task

You can do it without leaving Bash:

rc -c 'find /proc/ >[1] /dev/null |[2] grep task'

As you may have noticed, you can specify which file descriptor you want piped by using brackets after the pipe.

Standard file descriptors are numerated as such:

  • 0 : Standard input
  • 1 : Standard output
  • 2 : Standard error
Exurbanite answered 17/3, 2019 at 19:17 Comment(2)
Suggesting installing an entirely different shell seems kindof drastic to me.Incarnadine
@Incarnadine What's so drastic about it? It does not replace the default shell and the software only takes up a few K of space. The rc syntax for piping stderr is way better than what you would have to do in bash so I think it is worth a mention.Exurbanite
-5

I tried the following and found that it works as well:

command > /dev/null 2>&1 | grep 'something'
Encephalic answered 11/4, 2014 at 3:27 Comment(1)
Doesn't work. It just sends stderr to the terminal. Ignores the pipe.Caterpillar

© 2022 - 2024 — McMap. All rights reserved.