It's the vagaries of scheduling.
Your producer (let's call it alphabeta) is able to run for some amount of time before head is able to read and exit (thus breaking the pipe). That "some amount of time", of course, is variable. Sometimes alphabeta runs 20 times before head can read stdin and exit. Sometimes 200 times. On my system, sometimes 300 or 1000 or 2000 times. Indeed, it can theoretically keep looping until it has filled the pipe connecting producer and consumer to capacity.
For demonstration, let's introduce some delay so we can be reasonably sure that head is stuck in a read() before alphabeta produces a single line of output:
so$ { sleep 5; ./alphabeta; } | head -n 1
ABCDEFGHIJKLMNOPQRSTUVWXYZ
Iteration 0 done
(N.B. it's not guaranteed that alphabeta will only iterate once in the above. However, on an unloaded system, this is more-or-less always going to be the case: head will be ready, and its read/exit will happen more-or-less immediately.)
Watch instead what happens when we artificially delay head:
so$ ./alphabeta | { sleep 2; head -n 1; }
Iteration 0 done
...
Iteration 2415 done # <--- My system *pauses* here as pipe capacity is reached ...
Iteration 2416 done # <--- ... then it resumes as head completes its first read()
...
Iteration 2717 done # <--- pipe capacity reached again; head didn't drain the pipe
ABCDEFGHIJKLMNOPQRSTUVWXYZ
As an aside, @R.. is quite right in his remarks that SIGPIPE is synchronous. In your case, the first fflush-induced write to the broken pipe (after head has exited) will synchronously generate the signal. This is documented behavior.