Answer
import sys
for i in range(4000):
    try:
        print(i, flush=True)
    except BrokenPipeError:
        sys.stdout = None
Explanation
Even if you catch the BrokenPipeError exception, it will be raised again when your program exits and Python tries to flush stdout. If you set sys.stdout to None, Python will not attempt to flush it.
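To see the problem the answer avoids, here is a minimal sketch of the failing variant: it catches the exception but leaves stdout alone, so when its output is piped into a command that exits early (such as head), the exit-time flush fails again and Python prints an "Exception ignored" BrokenPipeError message to stderr.

import sys
for i in range(4000):
    try:
        print(i, flush=True)
    except BrokenPipeError:
        # Catching the exception here is not enough: when the program exits,
        # Python flushes stdout again and the BrokenPipeError resurfaces as
        # an "Exception ignored" message.
        break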
Drawbacks
While Python routines such as print() correctly check whether stdout is None and will not fail, it is not uncommon to see programs that do not check. If your program attempts to use stdout.write(), or similar, after setting stdout to None, Python will raise an AttributeError.
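For illustration, a minimal sketch of both behaviours:

import sys

sys.stdout = None

print("silently dropped")        # print() checks for a None stdout and simply returns

try:
    sys.stdout.write("boom\n")   # direct attribute access does no such check
except AttributeError as exc:
    print("write() failed:", exc, file=sys.stderr)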
Other answers (and why not)
No answer is shorter or simpler than sys.stdout = None, but some of the common answers have significant problems.
/dev/null
The Python developers have their own suggested code for dealing with the BrokenPipeError.
import os
import sys

def main():
    try:
        # simulate large output (your code replaces this loop)
        for x in range(10000):
            print("y")
        # flush output here to force SIGPIPE to be triggered
        # while inside this try block.
        sys.stdout.flush()
    except BrokenPipeError:
        # Python flushes standard streams on exit; redirect remaining output
        # to devnull to avoid another BrokenPipeError at shutdown
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        sys.exit(1)  # Python exits with error code 1 on EPIPE

if __name__ == '__main__':
    main()
While that is the canonical answer, it is rather grotesque in that it needlessly opens a new file descriptor to /dev/null just so that Python has somewhere to flush stdout when the program exits.
Why not: For most people, it is pointless. This problem is caused by Python flushing a handle that we have already caught a BrokenPipeError on. We know it will fail, so the solution should be for Python to simply not flush that handle. To allocate a new file descriptor just to appease Python is silly.
Why (maybe): Redirecting stdout to /dev/null may actually be the right solution for some people whose programs, after receiving a BrokenPipeError, will continue manipulating stdout without checking it first. However, that is not the common case.
sys.stderr.close()
Some people have suggested closing stderr to hide the spurious BrokenPipeError message that Python prints at exit.
Why not: It also prevents any legitimate errors from being shown.
signal(SIGPIPE, SIG_DFL)
Another common answer is to use SIG_DFL, the default signal handler, to cause the program to die whenever a SIGPIPE signal is received.
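In code, that approach typically looks something like this (Unix-only) sketch:

import signal

# Restore the default SIGPIPE disposition: a broken pipe now kills the
# whole process instead of raising BrokenPipeError inside Python.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)

for i in range(4000):
    print(i, flush=True)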
Why not: SIGPIPE can be sent for any file descriptor, not just stdout, so your entire program would suddenly and mysteriously die if, for example, it was writing to a network socket whose connection gets interrupted.
pipe.py | something | head
One non-Python solution is to first pipe stdout to a program that will continue reading data from the Python program even when its own standard output is closed. For example, presuming you have the GNU version of tee, this works:
pipe.py | tee -p /dev/null | head
Why not: The question was looking for an answer in Python. Also, it is suboptimal in that it keeps pipe.py running longer than it needs to, possibly consuming significant resources.