bash: how to propagate errors in process substitution?

I want my shell scripts to fail whenever a command executed within them fails.

Typically I do that with:

set -e
set -o pipefail

(typically I add set -u also)

The thing is that none of the above works with process substitution. This code prints “ok” and exits with return code 0, while I would like it to fail:

#!/bin/bash -e
set -o pipefail
cat <(false) <(echo ok)

Is there anything equivalent to “pipefail” but for process substitution? Any other way of passing to a command the output of other commands as if they were files, while raising an error whenever any of those programs fails?

A poor man’s solution would be detecting whether those commands write to stderr (but some commands write to stderr in successful scenarios).

Another, more POSIX-compliant solution would be using named pipes, but I need to launch those commands-that-use-process-substitution as one-liners built on the fly from compiled code, and creating named pipes would complicate things (extra commands, trapping errors to delete them, etc.).



Method 1

You can work around the issue, for example, like this:

cat <(false || kill $$) <(echo ok)

The shell running the script is SIGTERMed and the script aborts. The echo ok command is executed only “sometimes”: process substitutions are asynchronous, so there is no guarantee whether the kill $$ command runs before or after the echo ok command. It is a matter of the operating system’s scheduling.

Consider a bash script like this:

set -e
set -o pipefail
cat <(echo pre) <(false || kill $$) <(echo post)
echo "you will never see this"

The output of that script can be:

$ ./script
$ echo $?
143           # 128 + 15 (the signal number of SIGTERM)

$ ./script
pre
$ echo $?
143
You can try it, and after a few runs you will see the two different outputs. In the first one the script was terminated before the other two echo commands could write to their file descriptors. In the second one the false or the kill command was probably scheduled after the echo commands.

Or, to be more precise: the kill() system call made by the kill utility, which sends the SIGTERM signal to the shell process, was delivered later or earlier than the echo commands’ write() syscalls.

Either way, the script stops and the exit code is not 0. It should therefore solve your issue.

Another solution is, of course, to use named pipes for this. But, it depends on your script how complex it would be to implement named pipes or the workaround above.


Method 2

For the record, and even though the answers and comments were good and helpful, I ended up implementing something a little different (I had some restrictions about receiving signals in the parent process that I did not mention in the question).

Basically, I ended up doing something like this:

command <(subcommand 2>error_file && rm error_file) <(....) ...

Then I check the error file. If it exists, I know which subcommand failed (and the contents of the error_file can be useful). More verbose and hackish than I originally wanted, but less cumbersome than creating named pipes in a one-liner bash command.

Method 3

This example shows how to use kill together with trap.

#! /bin/bash
failure() {
  echo 'sub process failed' >&2
  exit 1
}
trap failure SIGUSR1
cat < <( false || kill -SIGUSR1 $$ )

But kill cannot pass a return code from your sub process to the parent process.

Method 4

In a similar way to how you would implement $PIPESTATUS/$pipestatus in a POSIX shell that doesn’t support it, you can obtain the exit status of the commands by passing them around via a pipe:

unset -v false_status echo_status
{ code=$(
    exec 3>&1 >&4 4>&-
    cat 3>&- <(false 3>&-; echo >&3 "false_status=$?") \
             <(echo ok 3>&-; echo >&3 "echo_status=$?")
    echo >&3 "cat_status=$?"
);} 4>&1
eval "$code"
printf '%s_status=%d\n' cat   "$cat_status" \
                        false "$false_status" \
                        echo  "$echo_status"

Which gives:

ok
cat_status=0
false_status=1
echo_status=0

Or you could use pipefail and implement the process substitution by hand, as you would with shells that don’t support it:
set -o pipefail
{
  false <&5 5<&- | {
    echo OK <&5 5<&- | {
      cat /dev/fd/3 /dev/fd/4
    } 4<&0 <&5 5<&-
  } 3<&0
} 5<&0
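As a self-contained check (assuming a system with /dev/fd, such as Linux), running this hand-rolled version and printing `$?` afterwards shows the failure of false propagating through pipefail:

```shell
#!/bin/bash
set -o pipefail
{
  false <&5 5<&- | {
    echo OK <&5 5<&- | {
      # fd 3 carries false's (empty) output, fd 4 carries echo's "OK"
      cat /dev/fd/3 /dev/fd/4
    } 4<&0 <&5 5<&-
  } 3<&0
} 5<&0
echo "status=$?"    # non-zero, because false failed under pipefail
```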

Method 5

The most reliable way I’ve found is to store the error code of the subprocess in a temp file, like this in the context of a function:

my_function() {
  local ERR
  ERR=$(mktemp)    # temp file to hold the sub shell's exit status
  VAR=$(some_command; echo $? > "$ERR")
  if [[ $(cat "$ERR" && rm "$ERR") -gt 0 ]]; then
    echo "An error occurred in the sub shell" >&2
    return 1
  fi
}
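Here is a self-contained variant of the same pattern, with `ls /nonexistent` standing in for a failing command (the function and variable names are illustrative, not from the original answer):

```shell
#!/bin/bash
capture() {
  local err out
  err=$(mktemp)
  # The inner echo records the command's status before the subshell exits.
  out=$(ls /nonexistent 2>/dev/null; echo $? > "$err")
  if [[ $(cat "$err" && rm "$err") -gt 0 ]]; then
    echo "An error occurred in the sub shell" >&2
    return 1
  fi
  printf '%s\n' "$out"
}

capture || echo "capture failed with $?"
```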

All methods were sourced from content licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0
