for i in {0..60}; do
    true -- "$i" # shellcheck suppression (SC2034: unused variable)
    if eventually_succeeds; then break; fi
    sleep 1s
done
Not super elegant, but relatively correct; the next level is exponential backoff. It generally leaves a bit of composability around.

In Bash, or literally whenever you are dealing with POSIX/IO/processes, you need to work with defensive coding practices.
Whatever you do has consequences
Could easily prefix eventually_succeeds with timeout.
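A rough sketch of what that "next level" might look like, reusing the hypothetical eventually_succeeds command from above, capping each attempt with timeout as suggested, and doubling the delay between attempts (the 5-second per-attempt limit and 64-second delay cap are arbitrary):

delay=1
until timeout 5 eventually_succeeds; do
    sleep "$delay"
    # double the delay each round, capping it at 64 seconds
    if [ "$delay" -lt 64 ]; then
        delay=$((delay * 2))
    fi
done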
Or just use `for _ in`, so the loop variable is never flagged as unused in the first place.
https://github.com/koalaman/shellcheck/wiki/SC2034#intention...

gnu_sed=gsed
if ! command -v "$gnu_sed"; then
    gnu_sed=$(detector_wizardry)
fi
$gnu_sed -Ee ...
The until keyword is part of the POSIX.2 shell specification, which does not include any sort of timeout functionality. It could be implemented in bash, but it would not be portable to other shells (Debian dash being the main concern).
This is the reason that it is implemented as a separate utility.
Search for "The until loop" below to see the specification.
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...
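For illustration, a plain POSIX until loop with no timeout of its own, wrapped in the external utility to get one (the URL and the 60-second limit are placeholders):

# The loop itself is pure POSIX sh; the time limit comes from the
# external timeout(1) utility, not from the shell.
timeout 60 sh -c 'until curl -sf http://localhost:8080/health; do sleep 1; done'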
$ strace -e trace=clone -e fault=clone:error=EAGAIN
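A fuller (hypothetical) invocation of that idea: inject EAGAIN into every clone() call made by a script to see how its retry handling copes — ./myscript.sh is a placeholder.

# Trace only clone() and make each call fail with EAGAIN
strace -f -e trace=clone -e fault=clone:error=EAGAIN ./myscript.sh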
Random link: https://medium.com/@manav503/using-strace-to-perform-fault-i...

Thanks!
It's only intended for native/unmanaged code though.
timeout 1800 mplayer show.mp4 ; sudo pm-suspend
As my poor man's parental control, to let my kids watch a show for 30 minutes without manual supervision when they were younger.

Useful command.

Is there an attempt anywhere to build a slightly modern standard library for bash scripts?
You know, besides Stack Overflow?
And I like messy langs. My favorite language is Groovy.
PowerShell is a missed opportunity. A project with a ton of resources dedicated by a company with bottomless coffers... which ended up being sub-par.
I wish there was a sensible alternative, but I haven't found one yet.
"builtins" are primitives that Bash can use internally without calling fork()/exec(). In fact, builtins originated in the Bourne shell to operate on the current shell process, because they would have no effect in a subprocess or subshell.
In addition to builtins and commands, Bash also defines "reserved words", which are keywords to make loops and control the flow of the script.
https://www.gnu.org/software/bash/manual/bash.html#Reserved-...
Many distros will ship a default or skeleton .bashrc which includes some useful aliases and functions. This is sort of like a "standard library", if you like having 14 different standards.
https://gist.github.com/marioBonales/1637696
'[' is an external binary in order to catch any shell or script that does not interpret it as a builtin operator. There may be a couple more. Under normal circumstances, it won't actually be invoked, as a Bash script would interpret '[' as the 'test' builtin.
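You can see both versions with type; the output below is typical for a Linux system with bash and coreutils, though the path may differ:

$ type -a [
[ is a shell builtin
[ is /usr/bin/[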
I write a lot of shell scripts and they tend to be POSIX-compliant. For dependencies, you can use the `command` command to fail elegantly if they're not installed.
There's probably more.
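A minimal sketch of that dependency-check pattern, assuming the script needs curl and jq (swap in whatever your script actually depends on):

for dep in curl jq; do
    if ! command -v "$dep" >/dev/null 2>&1; then
        echo "error: required command '$dep' is not installed" >&2
        exit 1
    fi
done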
The shell has a *wide* userbase with many kinds of users. Depending on your goal, the rabbit hole can go very deep (how portable across interpreters, how dependent on other binaries, how early it can work in a bootstrap scenario, etc.).
These are mine:
https://github.com/alganet/coral
Bash scripts are so hacky. With any other language, my thought process is "what's the syntax again? Oh right..." but with bash it's "how can I do this and avoid shooting myself in the foot?" when doing anything moderately complex like a for loop.
#!/usr/bin/env bash

runUntilDoneOrTimeout () {
    local -i timeout=0
    OPTIND=1
    while getopts "t:" opt; do
        case $opt in
            t) timeout=$OPTARG;;
        esac
    done
    shift $((OPTIND - 1))

    runCommand="$*"
    $runCommand &
    runPID=$!

    echo checking jobs
    jobs # just to prove there are some
    echo job check complete

    while jobs %- >& /dev/null && ((timeout > 0)); do
        echo "waiting for $runCommand for $timeout seconds"
        sleep 1
        ((timeout--))
    done

    if (( timeout == 0 )); then
        echo "$runCommand timed out"
        kill -9 $runPID
        wait $runPID
    else
        echo "$runCommand completed"
    fi

    echo checking jobs
    jobs # just to prove there are none
    echo job check complete
}

declare -i timeopt=10
declare -i sleepopt=100

OPTIND=1
while getopts "t:s:" opt; do
    case $opt in
        t) timeopt=$OPTARG;;
        s) sleepopt=$OPTARG;;
    esac
done
shift $((OPTIND - 1))

runUntilDoneOrTimeout -t $timeopt sleep $sleepopt
bash -c 'some command "$1" "$2"' -- "$var1" "$var2"
I use "--" because I like the way it looks but the first parameter goes in argv[0] which doesn't expand in "$@" so IMO something other than an argument should go there for clarity.Note that bash specifically has printf %q which could alternatively be used, but I prefer to use bourne-compatible things when the bash version isn't significantly cleaner.
A double hyphen ('--') has a very specific meaning to bash and to most every Unix/Linux CLI program. From getopts(1p)[0]:
    Any of the following shall identify the end of options: the
    first "--" argument that is not an option-argument, finding
    an argument that is not an option-argument and does not
    begin with a '-', or encountering an error.

0 - https://www.man7.org/linux/man-pages/man1/getopts.1p.html

#!/usr/bin/env bash
long_fn () { # this can contain anything, like the OP's until-curl loop
    sleep "$1"
}

# to TIMEOUT_DURATION BASH_FN_NAME BASH_FN_ARGS...
to () {
    local duration="$1"; shift
    local fn_name="$1"; shift
    export -f "$fn_name"
    timeout "$duration" bash -c "$fn_name"' "$@"' _ "$@"
}

time to 1s long_fn 5 # will report it ran for 1 second
long_fn() {
    echo "$1"
    sleep "$2"
}

to 1s long_fn "This has spaces in it" 5
function retry {
    until "$@"; do :; done
    alert
}
export -f retry
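Typical interactive use might look like this (the command is just an example; alert is assumed to be the usual notification alias/function):

retry curl -sfO https://example.com/some-large-file.iso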
Works reasonably well for non-scripting use cases.

This is very complex, because if you write lots of functions that call functions, you really just want to run something that inherits the whole environment from your process; that's why there is a control process and a sleep process, and a naive race to decide which finished first...

That's probably the reason I ignored the timeout utility...
Been working flawlessly for 20 years: so flawlessly that I don't remember how it works.
I haven’t quite been able to do it using _only_ builtins, but if you allow the sleep command (which has been standardised since the first version of POSIX, so it should be available pretty much anywhere that makes any sort of attempt to be POSIX compliant), then this seems ok:
# TIMEOUT SYSTEM
#
# Defines a timeout function:
#
#   Usage: timeout <num_seconds> <command>
#
# which runs <command> after <num_seconds> have elapsed, if the script
# has not exited by then.

_alarm() {
    local timeout=$1
    # Spawn a subshell that sleeps for $timeout seconds
    # and then sends us SIGALRM
    (
        sleep "$timeout"
        kill -ALRM $$
    ) &
    # If this shell exits before the timeout has fired,
    # clean up by killing the subshell
    subshell_pid=$!
    trap _cleanup EXIT
}

_cleanup() {
    if [ -n "$subshell_pid" ]
    then
        kill "$subshell_pid"
    fi
}

timeout() {
    local timeout=$1
    local command=$2
    trap "$command" ALRM
    _alarm "$timeout"
}
# MAIN PROGRAM

times_up() {
    echo 'TIME OUT!'
    subshell_pid=
    exit 1
}

timeout 10 times_up

for i in {1..20}
do
    sleep 1
    echo $i
done
https://github.com/gentoo/genkernel/commit/a21728ae287e988a1...
With that (minus the gen_die() line unless you copy that helper function too), you can do:
doSomething() {
    for i in {1..20}
    do
        sleep 1
        echo $i
    done
}

if ! call_func_timeout doSomething 10; then
    echo 'TIME OUT!'
    exit 1
fi
Similarly to you, I only used shell builtins, plus the sleep command. The genkernel code is run by busybox ash, so the script had to be POSIX conformant. Note that both your script and my example reimplementing it with my code from 12 years ago use {1..20}, which I believe is a bashism and not POSIX conformant, but that is fine for your use case.

My innovation over the Stack Overflow post was to have the exit status return true when the timeout did not trigger and false when it did, so that error handling could be done inline in the main script (even if that error handling is just printing a message and exiting). I felt that made code using this easier to read.
<command> & sleep <timeout>; kill -SIGALRM %1
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/google.com/80'
Replace google.com and port 80 with your web or TCP server (ssh too!). The command will error/time out if there isn't a server listening or you have some firewall/proxy in the way.

That's for a standard service health check anyway. That service and health check shouldn't be started until the container it depends on has started and is healthy. In Kubernetes that's an Init Container in a Pod, in AWS ECS that's a dependsOn stanza in your Task Container Definition, and for Docker Compose it's the depends_on stanza in a Services entry.
set -eu

nowtime="$(date +%s)"
maxwait=300
maxloop=5
c=0

while [ $c -lt $maxloop ] ; do
    if timeout "$maxwait" curl --silent --fail-with-body 10.0.0.1:8080/health ; then
        exit 0
    else
        sleep 1
    fi
    if [ "$(date +%s)" -gt "$((nowtime+maxwait))" ] ; then
        echo "$0: Error: max wait time $maxwait exceeded"
        exit 1
    fi
    c=$((c+1))
done
However, curl already supports this natively, so there's no need to write a script:

curl --silent --fail-with-body --connect-timeout 5 --retry-all-errors --retry-delay 1 --retry-max-time 300 --retry 300 10.0.0.1:8080/health
It’s more useful if you are implementing this in a general programming language, not in the shell, or if you want to know how it works under the hood.
curl has --connect-timeout <seconds> and --retry <num>, so you could do:

curl --retry 5 --connect-timeout 10
You don't need timeout here, and you won't need to spawn another bash subshell just to get the timeout to work.
But in the general case, where the command being invoked does not have such an option, it does make a lot of sense to do a check like that via the `timeout` utility.
--connect-timeout = Times out if the connection wasn't established within N seconds
--max-time = Times out if the entire request wasn't completed within N seconds
But then I don't remember if --connect-timeout takes DNS lookups into account, or TLS handshakes. I seem to remember there is another sort of timeout that tends to be hard to get right when the connection is flaky or drops a lot of packets, so you end up having to wrap curl anyway if you want a hard limit on the timeout.
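One way to get that hard limit is the pattern from earlier in the thread: per-attempt limits on curl plus an outer timeout around the whole retry loop (the URL, port, and numbers are placeholders):

# Each attempt gets at most 5 s to connect and 30 s total;
# the outer timeout caps the whole retry loop at 5 minutes.
timeout 300 sh -c '
    until curl --silent --fail --connect-timeout 5 --max-time 30 http://10.0.0.1:8080/health; do
        sleep 1
    done
'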
These variables seem "under known". EDIT: For example, you can get a quickie wall time measurement from a Zsh shell function like this:
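(The original snippet did not survive the copy; a sketch of what it presumably looked like, using zsh's EPOCHREALTIME from the zsh/datetime module — the function name and exact form are guesses:)

zmodload zsh/datetime          # provides $EPOCHREALTIME
# dt: print how long a command took, using only shell variables for the timing;
# if the timed command is a builtin or function, nothing forks at all
dt() {
    local -F t0=$EPOCHREALTIME
    "$@"
    print -r -- "elapsed: $(( EPOCHREALTIME - t0 )) seconds"
}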
And then you can actually run it in your shell without even a single fork/clone (which you can confirm with an off-to-the-side strace -fv -o/dev/shm/dt.st -p WHATEVER_PID, although I guess the culture these days is often to have even prompt printing launch a zoo of activity).
To do it properly, you'd need some code before the loop to start a separate process that would check on the parent process... but, really, you don't want to go there, not in Bash anyways.
But, assuming curl won't hang, you could compare timestamps. It's better than counting iterations (in terms of emulating the timeout command).

But then, you might want to get fancy and implement exponential backoff or whatever other strategy you fancy so as not to overload whatever thing you are polling... again, probably not in Bash.
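For what it's worth, a sketch of the timestamp-comparison idea (a wall-clock deadline rather than an iteration count) with a simple exponential backoff, in shell anyway; the endpoint and the limits are placeholders:

deadline=$(( $(date +%s) + 300 ))    # hard wall-clock limit: 5 minutes
delay=1
until curl -sf http://10.0.0.1:8080/health; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "gave up after 300 seconds" >&2
        exit 1
    fi
    sleep "$delay"
    # double the delay between polls, capping it at 32 seconds
    if [ "$delay" -lt 32 ]; then
        delay=$((delay * 2))
    fi
done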