trap 'caller 1' ERR
should do the same thing. Also you should set "errtrace" (-E) and possibly "nounset" (-u) and "pipefail".
set -e
is-oil()
{
test -n "$OIL_VERSION"
}
set -E || is-oil
trap 'echo "$BASH_SOURCE:$LINENO: error: failure during early startup! Details unavailable."' ERR
magic_exitvalue=$(($(kill -l CONT)+128))
backtrace()
{
{
local status=$?
if [ "$status" -eq "$magic_exitvalue" ]
then
echo '(omit backtrace)'
exit "$magic_exitvalue"
fi
local max file line func argc argvi i j
echo
echo 'Panic! Something failed unexpectedly.' "(status $status)"
echo 'While executing' "$BASH_COMMAND"
echo
echo Backtrace:
echo
max=${#BASH_LINENO[@]}
let max-- # The top-most frame is "special".
argvi=${BASH_ARGC[0]}
for ((i=1;i<max;++i))
do
file=${BASH_SOURCE[i]}
line=${BASH_LINENO[i-1]}
func=${FUNCNAME[i]}
argc=${BASH_ARGC[i]}
printf '%s:%d: ... in %q' "$file" "$line" "$func"
# BASH_ARGV: ... bar foo ...
# argvi ^
# argvi+argc ^
for ((j=argc-1; j>=0; --j))
do
printf ' %q' "${BASH_ARGV[argvi+j]}"
done
let argvi+=argc || true
printf '\n'
done
if true
then
file=${BASH_SOURCE[i]}
line=${BASH_LINENO[i-1]}
printf '%s:%d: ... at top level\n' "$file" "$line"
fi
} >&2
exit "$magic_exitvalue"
unreachable
}
shopt -s extdebug
trap 'backtrace' ERR
The Bash code is not only fast but pretty easy to understand (other than perhaps the header, which I never have to change).
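For illustration, here is a hypothetical way to use it (the file name backtrace.bash is invented): keep everything from "set -e" through "trap 'backtrace' ERR" in one file, source it, and any unexpected failure prints the trace.
# Hypothetical usage sketch; backtrace.bash holds the header above.
source ./backtrace.bash
deep()   { false; }   # an unexpected failure somewhere down the stack
middle() { deep; }
middle                # triggers the ERR trap and prints the backtrace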
There are some ways around this:
#!/bin/sh
[ "${DEBUG:-0}" = "1" ] && set -x
set -u
foo="$( my-external-program | pipe1 | pipe2 | pipe3 )"
if [ -z "$foo" ] ; then
echo "Error: I didn't get any output; exiting!"
exit 1
fi
echo "Well I got something back. Was it right?"
if ! printf "%s\n" "$foo" | grep -q -E 'some-extended-regex' ; then
echo "Error: '$foo' didn't match what I was looking for; exiting!"
exit 1
fi
echo "Do the thing now..."
A lot of programs will either produce valid output on STDOUT or, if they encounter an error, produce no STDOUT at all. So for the most part you just need to 1) look for any STDOUT at all, and then 2) filter it for the specific output you're looking for. For anything else, just die with an error. If you need to find out why it didn't run, re-run with DEBUG=1.
Advanced diagnosis code won't make your program work better, but it will make it more complicated. Re-running with tracing enabled works just as well 99% of the time.
Lots of programs produce nothing in the success case and only print in the failure case.
My solution just silently sits in the background for unexpected, unpredictable bugs.
$ osh -c 'set -E; set -o |grep errtrace'
set -o errtrace
I'd be interested in any bug reports if it doesn't behave the same way.
(The Oils runtime supports FUNCNAME, BASH_SOURCE, and all that, but there is room for a much better introspection API. It actually has a JSON crash report with a shell stack dump, but it probably needs some polish.)
What's the point? You can't fix them anyway
(FWIW my take away from issues like this is always: Bash is not a serious programming language. If you are running up against these limitations in real life it's time to switch language. The challenge is really in predicting when this will happen _before_ you write the big script!)
Here's an excerpt that shows how to set PS4 from a main() in a .env shell script for configuring devcontainer userspace:
for arg in "${@}"; do
case "$arg" in
--debug)
export __VERBOSE=1 ;
#export PS4='+${LINENO}: ' ;
#export PS4='+ #${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:+${FUNCNAME[0]}()}:$(date +%T)\n+ ' ;
#export PS4='+ ${LINENO} ${FUNCNAME[0]:+${FUNCNAME[0]}()}: ' ;
#export PS4='+ $(printf "%-4s" ${LINENO}) | '
export PS4='+ $(printf "%-4s %-24s " ${LINENO} ${FUNCNAME[0]:+${FUNCNAME[0]}} )| '
#export PS4='+ $(printf "%-4s %-${SHLVL}s %-24s" ${LINENO} " " ${FUNCNAME[0]:+${FUNCNAME[0]}} )| '
;;
--debug-color|--debug-colors)
export __VERBOSE=1 ;
# red=31
export ANSI_FG_BLACK='\e[30m'
#export MID_GRAY_256='\e[38;5;244m' # Example: a medium gray
export _CRESET='\e[0m'
export _COLOR="${ANSI_FG_BLACK}"
printf "${_COLOR}DEBUG: --debug-color: This text is ANSI gray${_CRESET}\n" >&2
export PS4='+ $(printf "${_COLOR}%-4s %-24s%s |${_CRESET} " ${LINENO} "${FUNCNAME[0]:+${FUNCNAME[0]}}" )'
;;
esac
done
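PS4 only has a visible effect once xtrace is enabled; a minimal sketch of that step (the __VERBOSE check and its placement are assumptions, not part of the original excerpt):
# Enable tracing only when --debug/--debug-color was passed,
# so the custom PS4 set above actually shows up.
if [ "${__VERBOSE:-0}" = "1" ]; then
    set -x
fi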
This, too:
function error_handler {
echo "Error occurred on line $(caller)" >&2
awk 'NR>L-4 && NR<L+4 { printf "%-5d%3s%s\n",NR,(NR==L?">>>":""),$0 }' L=$1 $0 >&2
}
if (echo "${SHELL}" | grep -q "bash"); then
trap 'error_handler $LINENO' ERR
fi
As an aside, I actually wonder if Bash's caller() was inspired by Perl's.
There is also Carp and friends, plus Data::Dumper when you not only need the stack trace but also the state of objects and data structures. Which is something that I don't think Bash can really do at all.
You can iterate over an array's values like this:
for value in "${SOMEARRAY[@]}"; do
echo "${value}"
done
or with the help of the keys:
for key in "${!SOMEARRAY[@]}"; do
echo "key: ${key} - value: ${SOMEARRAY["${key}"]}"
done
If you want to dump the data of any variable you can just use declare -p:
declare -p SOMEARRAY
and you get something like this:
declare -a SOMEARRAY=([0]="a" [1]="b" [2]="c" [3]="d" [4]="e" [5]="f")
What you can do, if you have a set of variables and you want them to be "dumped", is this. Let's "dump" all variables that start with "BASH":
for k in "${!BASH@}"; do
declare -p "${k}"
done
Or one could do something like this:
for k in "${!BASH@}"; do
echo "${k}: ${!k}"
done
But the declare option is much more reliable as you don't have to test for the variable's type.
And, POSIX shell can only shift and unshift on the $@ array; so it would be necessary to implement hashmaps or associative arrays with shell string methods and/or eval.
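A minimal sketch of that eval-based approach (the function names are made up, and keys are assumed to contain only characters that are valid in variable names):
# hash_set NAME KEY VALUE / hash_get NAME KEY -- emulate an associative
# array in POSIX sh by mangling NAME and KEY into an ordinary variable.
hash_set() { eval "${1}_${2}=\$3"; }
hash_get() { eval "printf '%s\n' \"\${${1}_${2}}\""; }
hash_set colors red '#ff0000'
hash_get colors red   # prints #ff0000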
The awk and printf are as obscure and unreadable as Perl, but still probably faster than just starting Perl.
Ironically, in terms of portability, it's probably more likely that awk and printf are installed than Python (or Perl). This application doesn't need Python in the (devcontainer) container, and nobody does sysadmin scripts with Lua (which can't `export VARNAME` into the calling shell), so shell scripting is justified, though indeed arcane.
Getopt is hardly more understandable than a few loops through $@ with case statements.
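A minimal sketch of that loop-plus-case pattern (the option names here are hypothetical):
# Walk the positional parameters by hand instead of using getopt.
verbose=0
output=""
while [ "$#" -gt 0 ]; do
    case "$1" in
        -v|--verbose) verbose=1 ;;
        -o|--output)  output="$2"; shift ;;
        --)           shift; break ;;
        -*)           echo "unknown option: $1" >&2; exit 2 ;;
        *)            break ;;   # first non-option argument
    esac
    shift
done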
I don't understand the relevance of other tools to "getting decent error reports in Bash"?
There are great logging (and TAP testing) libraries in Python, but that doesn't solve for debugging Bash?
There is at least one debugger for Bash scripts.
vscode-bash-debug is a frontend for bashdb: https://github.com/rogalmic/vscode-bash-debug
inetknght•6mo ago
For what it's worth, I think `set -euo pipefail` should be default for every script, and thoroughly checked with shellcheck.net.
scns•6mo ago
This
mananaysiempre•6mo ago
(I rather dislike shellcheck because it combines genuine smells with opinions, such as insisting on $(...) instead of `...`. For the same reason, with Python I regularly use pyflakes but can’t stand flake8. But to each their own.)
koolba•6mo ago
Only one of those can be (sanely) nested. Why would you ever want to use backticks?
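For example (the paths are chosen just for illustration):
# $(...) nests without any extra quoting:
outer=$(basename "$(dirname /usr/local/bin)")    # -> local
# Backticks need the inner pair escaped, which gets unreadable fast:
outer=`basename \`dirname /usr/local/bin\``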
imcritic•6mo ago
https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
larkost•6mo ago
It is not that `set -e` is bad, it is that bash is a bit weird in this area and you have to know when things eat errors and when they don't. This is not really changed by `set -e`: you already had to know them to make safe code. `set -e` does not wave a magic wand saying you don't have to understand bash error control.
But having `set -e` is almost universally better for people who do not understand it (and I would argue also for people who do). Without it you are responsible for implementing error handling on almost every line.
As others have already said: this is one of those things that generally pushes me to other languages (in my case often Python), as the error handling is much more intuitive and much less tricky to get right.
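A short sketch of the usual places where `set -e` quietly does nothing (these are well-known bash behaviours, not anything specific to the scripts above):
set -e
if false; then :; fi    # a failing command used as an if condition is "eaten"
false || true           # only the last command of a && / || chain counts
false | cat             # without pipefail, only cat's exit status matters
hide() { local out=$(false); }   # the exit status of `local` masks the failure
hide
echo "still running"    # every line above reaches this point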
inetknght•6mo ago
My favorite is `docker images`.
It produces non-empty output (a header line) even when there are no images, and it returns zero (success) whether any images matched or not. So you have to both do error checking and parse its output before you know whether there are any matching images in the cache.
Contrast that with grep. You can do `if grep -q foo` or `if grep -qv foo`, you can do `if [ "foo" = "$(grep foo)" ]`, etc. Much more versatile and easy to use.
Then there are apps that report to stderr that some argument isn't used but then also exit with success without doing anything. Then there are similar apps that do the same thing but... exit after also doing things. Ugh.
After some time you get to learn to test your apps before you write up scripts around them. That's a good thing though, it means that you know how to mix together a diverse toolset.
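A sketch of the resulting double check (the image name is made up): `docker images -q` prints matching image IDs or nothing, and exits zero either way, so both the exit status and the output have to be inspected.
if out=$(docker images -q "myrepo/myimage" 2>&1); then
    if [ -n "$out" ]; then
        echo "image is in the local cache"
    else
        echo "docker ran fine, but no matching image is cached"
    fi
else
    echo "docker itself failed: $out" >&2
    exit 1
fi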