Redirecting bash script output to syslog

That’s right, a post about logging from bash scripts. Cool as fuck.

Anyway, this was prompted by a tweet recommending the following line at the top of bash scripts:

exec 1> >(logger -s -t $(basename $0)) 2>&1

I could see roughly what was going on here, but I didn’t quite understand how it worked. In particular, what on earth was the 1> >(logger ...) bit all about?

What does it do?

Adding this line to the top of your bash script will cause anything printed on stdout and stderr to be sent to the syslog¹, as well as being echoed back to the original shell’s stderr.

It’s certainly desirable to have your script output sent to a predictable location, so how does this work?

Deconstructing the command

The whole line again:

exec 1> >(logger -s -t $(basename $0)) 2>&1

exec is a bash builtin, so to see what it does, we run help exec:

exec: exec [-cl] [-a name] [command [args ...]] [redirection ...]
  Replace the shell with the given command.

  Execute COMMAND, replacing this shell with the specified program.
  ARGUMENTS become the arguments to COMMAND.  If COMMAND is not
  specified, any redirections take effect in the current shell.

In this case exec is being used without COMMAND – this line is redirecting I/O for the current shell. So what do the redirections do? To simplify things let’s clear out the nested commands and just look at the redirection:
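To see that behaviour in isolation, here's a minimal sketch that redirects into a temp file rather than the syslog:

```shell
#!/bin/bash
# exec with no COMMAND: the redirection applies to the current shell
# from this point onwards, not just to a single command.
tmp=$(mktemp)
exec > "$tmp"    # every subsequent write to stdout lands in the temp file

echo "this goes to the file, not the terminal"

# Show what the file collected (on stderr, since stdout is redirected),
# then clean up.
cat "$tmp" >&2
rm -f "$tmp"
```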

exec 1> >(some-command) 2>&1

This is pretty simple redirection, obscured by that charming and easily google-able bash syntax that we all know and tolerate. There are two redirections, the first being:

1> >(some-command)

This redirects file descriptor 1 (stdout) to the location >(some-command). That’s not a normal filename, though: it’s a process substitution, which is a non-POSIX bash feature². >(some-command) expands to a path, and anything written to that path becomes some-command’s stdin. This is exactly the same as piping our script’s stdout into some-command.
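You can convince yourself of this with a small sketch, using tr as a stand-in for some-command:

```shell
#!/bin/bash
# >(cmd) expands to a path like /dev/fd/63; anything written to that
# path becomes cmd's stdin. The line below is equivalent to:
#   echo "process substitution" | tr 'a-z' 'A-Z' > "$out"
out=$(mktemp)
echo "process substitution" > >(tr 'a-z' 'A-Z' > "$out")

sleep 0.2    # the substituted process runs asynchronously; crudely
             # wait for it to finish writing before reading the result
cat "$out"   # PROCESS SUBSTITUTION
rm -f "$out"
```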

Then we redirect file descriptor 2 (stderr) to the same location as file descriptor 1:

2>&1
In summary, we’ve redirected both stdout and stderr for our script to the same place: the stdin for another process, which is a command running in the background. In effect, this is the same as running the script at the command line like so:

$ ./some-script 2>&1 | some-command

In this case, some-command is:

logger -s -t $(basename $0)

From the logger(1) manpage we can see that this writes entries to the syslog, tagging them (-t) with the filename of our script ($(basename $0)) and echoing them to standard error (-s).

So the full line takes both stdout and stderr from our script, and redirects them to the logger command, which sends them to the syslog and echoes them back to stderr.
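We can watch this happen without touching the syslog by swapping logger for a stand-in; here sed just prefixes each line it receives (a sketch, not part of the original one-liner):

```shell
#!/bin/bash
# The same redirection, with sed standing in for logger so we can see
# both streams flow through a single background process.
exec 1> >(sed 's/^/tagged: /') 2>&1

echo "a stdout message"
echo "a stderr message" >&2

sleep 0.2    # give the background sed a moment to flush before exiting
```

Both lines come out prefixed with tagged:, confirming that stdout and stderr now share a single destination.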

Testing it out

Let’s write a very simple test script, logger_test:

#!/bin/bash

exec 1> >(logger -s -t $(basename $0)) 2>&1

echo "writing to stdout"
echo "writing to stderr" >&2

When we run this on an Ubuntu system we see the following:

$ ./logger_test
logger_test: writing to stdout
logger_test: writing to stderr

We can also inspect the syslog (you may need root privileges):

$ grep logger_test /var/log/syslog
Sep  9 15:39:37 my-machine logger_test: writing to stdout
Sep  9 15:39:37 my-machine logger_test: writing to stderr

Great! We’ve got our output in the syslog and in our own console. What’s the downside?

Mixed stderr and stdout

Because we’re redirecting both stdout and stderr to a logger process, and getting them back on stderr, we can no longer distinguish between normal and error output, either in the syslog or in our terminal.

We could address this by using two background processes:

#!/bin/bash

exec 1> >(logger -s -t $(basename $0) 2>&1)
exec 2> >(logger -s -t $(basename $0))

echo "writing to stdout"
echo "writing to stderr" >&2

Here we send stdout to a separate logger process, but redirect that process’s stderr back to stdout. We can now distinguish between stderr and stdout in our terminal, but we run into a second problem…
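With sed once again standing in for the two logger processes, we can check that the streams really are separate now (a sketch under the same assumptions as before):

```shell
#!/bin/bash
# Two background taggers: stdout lines go through one, stderr lines
# through the other (routed back to stderr), so the streams stay separate.
exec 1> >(sed 's/^/out: /')
exec 2> >(sed 's/^/err: /' >&2)

echo "to stdout"
echo "to stderr" >&2

sleep 0.2    # let both background processes flush
```

Running this script with 2>/dev/null now shows only the out: line, which the single-process version couldn't do.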

Out-of-order messages

If we run our new script several times we are very likely to see the following:

$ ./logger_test
logger_test: writing to stderr
logger_test: writing to stdout

This is because two separate background processes are handling our stdout and stderr messages, and there’s no guarantee that they’ll write their output in order.

We’re caught: if we’re getting our terminal output from background processes, we can either use one background process, and receive ordered messages but lose the ability to distinguish stdout and stderr; or we can use separate background processes, and distinguish between stdout and stderr but lose guaranteed message ordering.

An alternative approach

Ideally we would like the following:

- our output sent to the syslog
- our output echoed back to the terminal
- stdout and stderr distinguishable from one another
- messages delivered in the order they were written

We’ve established that we can’t have all of this by simply redirecting our output to background processes. An alternative approach would be to use helper functions for logging:

#!/bin/bash

readonly SCRIPT_NAME=$(basename $0)

log() {
  echo "$@"
  logger -p user.notice -t $SCRIPT_NAME "$@"
}

err() {
  echo "$@" >&2
  logger -p user.error -t $SCRIPT_NAME "$@"
}

log "writing to stdout"
err "writing to stderr"

This way we get our normal terminal output via the shell’s own stdout and stderr, but we can still send messages to syslog and tag them with appropriate priorities (we can also fancy up our terminal output with timestamps and colours if we want to).

The downside is that we have to explicitly log everything we want sent to syslog. If we want the output of a command our script runs to be sent to syslog, then we have to capture that output and log it, too.
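For example, to forward another command's output through the helper, we have to capture it explicitly first (a sketch reusing the log helper above; df is just an arbitrary command):

```shell
#!/bin/bash
# Explicitly capturing a command's output (stdout and stderr) so the
# helper can forward it to syslog as well as the terminal.
readonly SCRIPT_NAME=$(basename "$0")

log() {
  echo "$@"
  logger -p user.notice -t "$SCRIPT_NAME" "$@"
}

# Anything df prints must be captured and logged by hand.
output=$(df -h / 2>&1)
log "$output"
```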


Conclusion

I’m not very good at these. Um…


  1. Per Wikipedia, Syslog is a standard for computer message logging. In practice, this means that on any Unix-like system there will be a daemon running that will accept syslog messages, storing them in a central repository. This makes it a useful place to go looking for information about system processes and other processes that aren’t necessarily important enough for dedicated log files. 

  2. The same effect could be achieved in a POSIX-compliant way using named pipes.
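For instance (a sketch of that POSIX approach, with sed again standing in for logger; mktemp -u is just used here to get a throwaway pipe name):

```shell
#!/bin/sh
# The same redirection built from a named pipe instead of bash's
# process substitution.
fifo=$(mktemp -u)
mkfifo "$fifo"

# Background reader, consuming everything written to the pipe.
sed 's/^/tagged: /' < "$fifo" &

# Point the script's stdout and stderr at the pipe.
exec > "$fifo" 2>&1

echo "writing to stdout"
echo "writing to stderr" >&2

# Close our end so the reader sees EOF, then wait for it to finish.
exec >&- 2>&-
wait
rm -f "$fifo"
```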
