sed -n 2p file
prints the second line of file. The advantage sed has over this line script is that it can also print more than one line, should you need to:
sed -n 2,4p file
prints lines 2 through 4, inclusive.

There's also some very niche stuff that I won't use but found funny.
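A minimal sketch of wrapping this into a reusable script (the `line` name and interface are my assumption, not from the post):

  #!/bin/sh
  # line -- print line N (or lines N through M) of a file, quitting early for speed
  case $# in
    2) sed -n "${1}p;${1}q" "$2" ;;
    3) sed -n "${1},${2}p;${2}q" "$3" ;;
    *) echo "usage: line N [M] file" >&2; exit 1 ;;
  esac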
Which often just confuses things further.
Me: My name is "Farb" F-A-R-B. B as in Baker.
Them: Farb-Baker, got it.
"S-T-E-V-E @ gmail.com, S as in sun, T as in taste, ..." "Got it, fpeve."
I once had someone sound out a serial number over a spotty phone connection years ago and they said "N as in NAIL". You know what sounds a lot like NAIL? MAIL.
And that is why we don't just arbitrarily make up phonetic alphabets.
It worked with me, and I guess it must have worked for him in most of his customer interactions.
What's always interesting to me is how many of these I'll see and initially think, "I don't really need that." Because I'm well aware of the effect (which I'm sure has a name - I suppose it's similar to induced demand) of "make $uncommon_task much cheaper" -> "$uncommon_task becomes the basis of an entirely new workflow/skill". So I'm going to try out most of them and see what sticks!
Also: really love the style of the post. It's very clear but also includes super valuable information about how often the author actually uses each script, to get a sense ahead of time for which ones are more likely to trigger the effect described above.
A final aside about my own workflows which betrays my origins... for some of these operations, and for others I occasionally need, I'll just open a browser dev tools window and use JS to do it, for example lowercasing a string :)
For example, the "saves 5 seconds on a task that I do once a month" one from the blog post. Hopefully the author did not spend more than 5 minutes writing and maintaining said script, or they're losing time in the long run.
1. even if it costs more time, it could also save more annoyance which could be a benefit
2. by publishing the scripts, anyone else who comes across them can use them and save time without the initial cost. similarly, making and sharing these can encourage others to share their own scripts, some of which the author could save time with
The annoyance of all these factors far outweighs the benefits, in my experience. It's just that the scripts feel good at first; the annoyance doesn't come until later, and eventually you abandon them.
If something is time sensitive, it is worth spending a disproportionate amount of time ahead of time to speed things up later. For example, if you’re debugging something live, in a live presentation, working on something with a tight deadline, etc.
Also you don’t necessarily know how often you’ll do something anyways.
The xkcd doesn't seem to be pushing an agenda, just providing a lookup table. Time spent vs time saved is factual.
To take a concrete example, if I spend 30 minutes on a task every six months, over 5 years that’s 5 hours of “work”. So the implication is that it’s not worth automating if it takes more than 5 hours to automate.
But if those are 5 hours of application downtime, it’s pretty clearly worth it even if I have to spend way more than 5 hours to reduce downtime.
> YOU DON'T UNDERSTAND. I NEED TO BE CONSTANTLY OPTIMIZING MY UPTIME. THE SCIENCE DEMANDS IT. TIMEMAXXING. I CAN'T FREELY EXPLORE OR BRAINSTORM, IT'S NOT XKCD 1205 COMPLIANT. I MUST EVALUATE EVERY PROPOSED ACTIVITY AGAINST THE TIME-OPTIMIZATION-PIVOT-TABLE.

If you have to do that, the script needs improvement. Always add a `--help` which explains what it does and what arguments it takes.
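For instance, a minimal pattern for small scripts (my sketch, not from the post):

  case "$1" in
    -h|--help)
      echo "usage: myscript [ARGS]   # one line on what it does and what arguments it takes"
      exit 0 ;;
  esac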
Of course, I _do_ have some custom shell scripts and aliases, but those are only for things I only ever do locally.
A password or token generator, simple or complicated random text.
Scripts to list, view and delete mail messages inside POP3 servers
n, to start Nautilus from terminal in the current directory.
lastpdf, to open the last file I printed as PDF.
lastdownload, to view the names of the n most recent files in the Downloads directory.
And many more, but those are the ones that I use often and remember without looking at ~/bin.
- When I was a fresh engineer I used a pretty vanilla shell environment
- When I got a year or two of experience, I wrote tons of scripts and bash aliases and had a 1k+ line .bashrc the same as OP
- Now, as a more tenured engineer (15 years of experience), I basically just want a vanilla shell with zero distractions, aliases or scripts and use native UNIX implementations. If it's more complicated than that, I'll code it in Python or Go.
Now I have many *nix computers and I want them consistent and with only the most necessary packages installed.

*may not be applicable to all wives, ymmv.
(had to use a double backslash to render that correctly)
Some are desktops, some laptops, some servers. Different packages installed, different hardware. Three more variants.
Yes, I do have a script to set up my environment, but it already has a lot of conditional behavior to handle these five total variants. And I don't want to have to re-test the scripts and re-sync often.
Personally I tend to agree... there is a very small subset of things I find worth aliasing. I have a very small set and probably only use half of them regularly. Frankly, I wonder how my use case is so different.
edit: In the case of the author, I guess he probably wants to live in the terminal full time, and perhaps offline. There is a lot of static data he's stored, like HTTP status codes: https://codeberg.org/EvanHahn/dotfiles/src/commit/843b9ee13d...
In my case I'd start typing it in my browser and then just click something I've visited 100 times before. There is something to be said for reducing that redundant network call, but I don't think it makes much practical difference, and the mental mapping/discoverability of aliases isn't nothing.
Nowadays I just try to be quite selective with my tooling and learn to change with it - "like water", so to speak.
(I say this with no shade to those who like maintaining their dotfiles - it takes all sorts :))
- if you commit them to git, they last your entire career
- improving your setup is basically compound interest
- with a new laptop, my setup script might cost me 15 minutes of fixing a few things
- the more you do it, the less any individual hassle becomes, and the easier it looks to make changes – no more "i don't have time" mindset
as a person who loves their computer, my ~/bin is full. I definitely (not that you said this) do not think "everything I do has to be possible on every computer I am ever shelled into"
being a person on a computer for decades, I have tuned how I want to do things that are incredibly common for me
though perhaps you're referring to work and not hobby/life
When I watch the work of coworkers or friends who have gone these rabbit holes of customization I always learn some interesting new tools to use - lately I've added atuin, fzf, and a few others to my linux install
The amount of shit you'll get for "applying your dotfiles" on a client machine or a production server is going to be legendary.
Same with containers, please don't install random dotfiles inside them. The whole point of a container is to be predictable.
If something is wrong with a server, we terminate it and spin up a new one. No need for anyone to log in.
In very rare cases it might be relevant to log in to a running server, but I haven’t done that in years.
You said you were already using someone else's environment.
You can't later say that you don't.
Whether or not shell access makes sense depends on what you are doing, but a well written application server running in a cloud environment doesn't need any remote shell account.
It's just that approximately zero typical monolithic web applications meet that level of quality and given that 90% of "developers" are clueless, often they can convince management that being stupid is OK.
Accounts are basically free. Not having accounts: that's expensive.
Let's assume they need access to the full service account environment for the work, which means they need to login or run commands as the service account.
This is a bit outside my domain, so this is a genuine question. I've worked on single user and embedded systems where this isn't possible, so I find the "unprofessional" statement very naive.
Aren't you therefore optimizing for 1% of the cases, but sabotaging the 99%?
I'd rather take the pain of writing scripts to automate this for multiple environments than suffer the death by a thousand cuts which are the defaults.
https://github.com/atuinsh/atuin
Discussed 4 months ago:
Atuin – Magical Shell History https://news.ycombinator.com/item?id=44364186 - June 2025, 71 comments
The right way to do this would be via a systemd service, and then it would be instant.
Admittedly, I've toned down the configs of some programs, as my usage of them has evolved or diminished, but many are still highly tailored to my preferences. For example, you can't really use Emacs without a considerable amount of tweaking. I mean, you technically could, but such programs are a blank slate made to be configured (and Emacs is awful OOB...). Similarly for zsh, which is my main shell, although I keep bash more vanilla. Practically the entire command-line environment and the choices you make about which programs to use can be considered configuration. If you use NixOS or Guix, then that extends to the entire system.
If you're willing to allow someone else to tell you how you should use your computer, then you might as well use macOS or Windows. :)
That, plus knowing how to parse a man file to actually understand how to use a command (a skill that takes years to master) pretty much removes the need for most aliases and scripts.
I use ctrl-R with a fuzzy matching program, and let my terminal remember it for me.
And before it's asked: yes that means I'd have more trouble working in a different/someone else's environment. But as it barely ever happens for me, it's hardly an important enough scenario to optimize for.
Does this mean that you learned to code to earn a paycheck? I'm asking because I had written hundreds of scripts and Emacs Lisp functions to optimize my PC before I got my first job.
To this day, I still get tripped up when using a shell for the first time without those, as they're muscle memory now.
Rob Pike: Those days are dead and gone and the eulogy was delivered by Perl.
sed, awk, grep, and xargs along with standard utilities get you a long long way.
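For instance, the classic frequency-count pipeline needs nothing else (the log file and its leading-IP format are just an assumed example):

  grep -o '^[0-9.]*' access.log | sort | uniq -c | sort -rn | head   # top client IPs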
I value out of the box stuff that works most everywhere. I have a fairly lightweight zsh config I use locally but it’s mostly just stuff like a status line that suits me, better history settings, etc. Stuff I won’t miss if it’s not there.
Otherwise, I am happy to be pulled into your discussion, Marshall McLuhan style[3] to adjudicate, for a very reasonable fee.
[1] https://craphound.com/lifehacksetcon04.txt
[2] https://archive.org/details/Notcon2004DannyOBrienLifehacks
[3] https://www.openculture.com/2017/05/woody-allen-gets-marshal...
The way you’re doing it trashes files sequentially, meaning you hear the trashing sound once per file and ⌘Z in the Finder will only restore the last one. You can improve that (I did it for years) but consider just using the `trash` command, which ships with macOS. Doesn’t use the Finder, so no sound and no ⌘Z, but it’s fast, official, and still allows “Put Back”.
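e.g., assuming the bundled trash(1):

  trash draft1.txt draft2.txt   # both land in the Trash, still restorable via "Put Back"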
> jsonformat takes JSON at stdin and pretty-prints it to stdout.
Why prioritise node instead of jq? The latter is considerably less code and even comes preinstalled with macOS, now.
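e.g.:

  $ echo '{ "hello": "world" }' | jq .
  {
    "hello": "world"
  }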
> uuid prints a v4 UUID. I use this about once a month.
Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
$ echo '{ "hello": "world" }' | python3 -m json.tool
{
"hello": "world"
}The best part about sharing your config or knowledge is that someone will always light up your blind spots.
- either critique is solid and I learn something
- or commenter is clueless which makes it entertaining
there is very seldom a “middle”
I've found that pedantic conversations here seem to actually have a greater potential for me to learn something from them than other forums/social platforms. On other platforms, I see someone providing a pedantic response and I'll just keep moving on, but on HN, I get curious to not only see who wins the nerd fight, but also that I might learn at least one thing along the way. I like that it's had an effect on how I engage with comment sections.
OTOH I don’t like flagging stories because good ones get buried regularly. But then HN is not a great place for peaceful, nuanced discussion and these threads often descend into mindless flame wars, which would bury the stories even without flagging.
So, meh. I think flagging is a moderately good thing overall but it really lacks in subtlety.
On stories however, I think the flag system is pretty broken. I've seen so many stories that get flagged because people find them annoying (especially AI-related things) or people assume it will turn into a flame war, but it ends up burying important tech news. Even if the flags are reversed, the damage is usually done because the story fell off the front page (or further) and gets very little traction after that.
In short, I've never seen somebody flagged simply for having the wrong opinion. Even controversial opinions tend to stay unflagged, unless they're incredibly dangerous or unhinged.
I agree that most dead posts would have been a distraction and were good to keep out.
Can you back this up with data? ;-)
I see citations and links to sources about as little as on reddit around here.
The difference I see is in the top 1% of comments, which exist here in the first place and are better on average (though that depends on what other forums or subreddits you compare it to; /r/AskHistorians is pretty good for serious history answers, for example), but not in the rest of the comments. Also, there are fewer distractions and more staying on topic, and joke replies are punished more often and are less frequent.
People who agree with an article will most likely just upvote. Hardly anyone ever bothers to comment to offer praise, so most comments that you end up seeing are criticisms.
https://meta.wikimedia.org/wiki/Cunningham%27s_Law
...aaand less directly (though referenced in the wikipedia article)...
Yes! I will take this as a chance to thank all the people who have shared their knowledge on the Internet. You guys are so freaking awesome! You are always appreciated.
A big chunk of my whole life's learning came from all the forums that I used to scour through, hour after hour! Because these awesome people were always sharing their knowledge, and someone was always adding more. That's what made the Internet, the Internet. And all of it is now almost on the brink of being lost, because of greedy corporations.
This habit also helped me with doom-scrolling. I sometimes do doomscroll, but I can catch it quickly and snap out of it. Because, my whole life, I always jumped into the rabbit holes and actually read those big blog posts, the ones where you had those `A-ha` moments: "Oohh, I can use that", "Ahh, that's clever!"
When browsing doesn't give me that, my brain actually triggers: "What are you doing?"
Later, I got lazy, which I am still paying for. But I am going to get out of it.
Never stop jumping into those rabbit holes!! Well, obviously, not every rabbit hole is a good one, but you'll probably come out wiser.
In PowerShell I just do
> echo '{"foo": "bar"}' | ConvertFrom-Json | ConvertTo-Json
{
"foo": "bar"
}
But as a function

Does all the right things and works great.
There’s a similar tool that works well on Linux/BSDs that I’ve used for years, but I don’t have my FreeBSD desktop handy to check.
> vim [...] I select a region and then run :'<,'>!markdownquote
Just select the first column with ctrl-v, then "i> " then escape. That's 4 keys after the selection, instead of 20.
> u+ 2025 returns ñ, LATIN SMALL LETTER N WITH TILDE
`unicode` is widely available, has a good default search, and many options. BTW, I wonder why "2025" matched "ñ".
unicode ñ
U+00F1 LATIN SMALL LETTER N WITH TILDE
UTF-8: c3 b1 UTF-16BE: 00f1 Decimal: ñ Octal: \0361
> catbin foo is basically cat "$(which foo)"

Since the author is using zsh, `cat =foo` is shorter and more powerful. It's also much less error-prone with long commands, since zsh can smartly complete after =.
I use it often, e.g. `file =firefox` or `vim =myscript.sh`.
It's not installed by default on macOS or Ubuntu, for me.
$ unicode
Command 'unicode' not found, but can be installed with:
sudo apt install unicode
and it did. So it really was available. That's Debian 11.

That was my thought. I use jq to pretty print json.
What I have found useful is j2p and p2j to convert to/from python dict format to json format (and pretty print the output). I also have j2p_clip and p2j_clip, which read from and then write to the system clipboard so I don't have to manually pipe in and out.
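A sketch of how a p2j could work (my assumption of its behavior; Python's ast.literal_eval safely parses dict/list literals, and json.dumps turns True/None into true/null):

  #!/usr/bin/env bash
  # p2j -- read a Python dict/list literal on stdin, pretty-print it as JSON
  python3 -c 'import ast, json, sys; print(json.dumps(ast.literal_eval(sys.stdin.read()), indent=2))'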
> Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
I also made a uuid, which just runs uuidgen, but then trims the \n. (And maybe copied to clipboard? It was at my old job, and I don't seem to have saved it to my personal computer.)
% cat /proc/sys/kernel/random/uuid
464a4e91-5ce4-47b6-bb09-8a60fde572fb

Like, it's okay -- even good -- for the tools to bend to the user and not the other way around.
I have almost the same, but differently named with scratch(day), copy(xc), markdown quote(blockquote), murder, waitfor, tryna, etc.
I used to use telegram-send with a custom notification sound a lot, for notifications from long-running scripts if I walked away from the laptop.
I used to have one called timespeak that would speak the time to me every hour or half hour.
I have go_clone that clones a repo into GOPATH which I use for organising even non-go projects long after putting go projects in GOPATH stopped being needed.
I liked writing one-offs, and I don't think it's premature optimization because I kept getting faster at it.
mkdir /some/dir
cd !$
(or cd <alt+.>)

Edit: looks like it’s a zsh thing
Mine is called "md" and it has "-p" on the mkdir. "mkdir -p $1 && cd $1"
function mkcd {
  newdir=$1
  mkdir -p $newdir
  cd $newdir
}

mkcd() {
  mkdir -p -- "$1" &&
    cd -- "$1"
}

You can configure your shell to notify the terminal of directory changes, and then use your terminal’s “open new window” function (eg: ctrl+shift+n) to open a new window retaining the current directory.
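For instance, many terminals understand the OSC 7 escape for this (support varies; a bash sketch, and special characters in $PWD would need percent-encoding):

  # report the working directory to the terminal after every command
  PROMPT_COMMAND='printf "\e]7;file://%s%s\e\\" "$HOSTNAME" "$PWD"'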
^ (jump to the beginning)
ctrl+v (block selection)
j (move cursor down)
shift+i (bulk insert?)
type ><space>
ESC
:'<,'>s/^/> /

I use this as a bookmarklet to grab the front page of the new york times (print edition). (You can also go back to any date up to like 2011)
I think they go out at like 4 am. So, day-of, note that it will fail if you're in that window before publishing.
javascript:(()=>{let d=new Date(new Date().toLocaleString('en-US',{timeZone:'America/New_York'})),y=d.getFullYear(),m=('0'+(d.getMonth()+1)).slice(-2),g=('0'+d.getDate()).slice(-2);location.href=`https://static01.nyt.com/images/${y}/${m}/${g}/nytfrontpage/scan.pdf`})()

Anyways, my favourite alias that I use all the time is this:
alias a='nvim ~/.zshrc && . ~/.zshrc'
It solves the ,,not loaded automatically'' part, at least for the current terminal.

#!/bin/sh
if test "$#" != 2
then
echo 'Error: unmv must have exactly 2 arguments'
exit 1
fi
exec mv "$2" "$1"I genuinely wonder, why would anyone want to use this, often?
abcdefghijklmnopqrstuvwxyz
ABCDEFGHIJKLMNOPQRSTUVWXYZ

var alpha_lu = "abcdefghijklmnopqrstuvwxyz";

Typing it out by hand is error prone, as it's not easy to see if you've swapped the order or missed a character.

I've needed the alphabet string or lookup rarely, but I have needed it before. Some applications could include making your own UUID function, making a small random naming scheme, associating small categorical numbers to letters, etc.
The author of the article mentioned they do web development, so it's not hard to imagine they've had to create a URL shortener, maybe more than once. So, for example, creating a small name could look like:
function small_name(len) {
let a = "abcdefghijklmnopqrstuvwxyz",
v = [];
for (let i=0; i<len; i++) {
v.push( a[ Math.floor( Math.random()*a.length ) ] );
}
return v.join("");
}
//...
small_name(5); // e.g. "pfsor"
Dealing with strings, dealing with hashes, random names, etc., one could imagine needing to do functions like this, or functions adjacent to these types of tasks, at least once a month.

Just a guess on my part though.
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'
alias .....='cd ../../../..'
alias ......='cd ../../../../..'
alias .......='cd ../../../../../..'
up 2, up 3 etc.
zsh: permission denied: ..
zsh: command not found: ...

I also aliased - to run cd -
Other single-key bindings I use often are:
KP* executes 'ls'
KP- executes 'cd -'
KP+ executes 'make -j `nproc`'
c $> pwd
/a/b/c
c $> dir1
dir1 $> ..
c $> ../..
/ $>

# Modified from
# https://github.com/fish-shell/fish-shell/issues/1891#issuecomment-451961517
# https://github.com/fish-shell/fish-shell/issues/1891#issuecomment-451961517
function append-slash-to-double-dot -d 'expand .. to ../'
# Get commandline up to cursor
set -l cmd (commandline --cut-at-cursor)
# Match last line
switch $cmd[-1]
case '*.'
commandline --insert './'
case '*'
commandline --insert '.'
end
end

..() { # Usage: .. [N=1] -> cd up N levels
local d="" i
for ((i = 0; i < ${1:-"1"}; i++))
d="$d/.." # Build up a string & do 1 cd to preserve dirstack
[[ -z $d ]] || cd ./$d
}
Of course, what I actually have been doing since the early 90s is realize that a single "." with no args is normally illegal and people "cd" soooo much more often than sourcing script definitions. So, I hijack that to save one "." in the first 3 cases and then take a number for the general case.

# dash allows non-AlphaNumeric alias but not function names; POSIX is silent.
cd1 () { if [ $# -eq 0 ]; then cd ..; else command . "$@"; fi; } # nice "cd .."
alias .=cd1
cdu() { # Usage: cdu [N=2] -> cd up N levels
local i=0 d="" # "." already does 1 level
while [ $i -lt ${1:-"2"} ]; do d=$d/..; i=$((i+1)); done
[ -z "$d" ] || cd ./$d; }
alias ..=cdu
alias ...='cd ../../..' # so, "."=1up, ".."=2up, "..."=3up, ".. N"=Nup
and as per the comment this even works in lowly dash, but needs a slight workaround. bash can just do a .() and ..() shell function as with the zsh.

And you can type `rn -rf *` to see all timezones recursively. :)
clippy image.png # then paste into Slack, etc. as upload
clippy -r # copy most recent download
pasty # copy file in Finder, then paste actual file here
https://github.com/neilberkman/clippy / brew install clippy

Here are some super simple ones I didn't see that I use almost every day:
cl="clear"
g="git"
h="history"
ll="ls -al"
path='echo -e ${PATH//:/\\n}'
lv="live-server"
And for common navigation:
dl="cd ~/Downloads"
dt="cd ~/Desktop"
That and exit (CTRL-d). A guy I used to work with just mentioned it casually and somehow it just seared itself into my brain.
https://evanhahn.com/why-alias-is-my-last-resort-for-aliases...
$ cat /usr/local/bin/awkmail
#!/bin/gawk -f
BEGIN { smtp="/inet/tcp/0/smtp.yourco.com/25";
ORS="\r\n"; r=ARGV[1]; s=ARGV[2]; sbj=ARGV[3]; # /bin/awkmail to from subj < in
print "helo " ENVIRON["HOSTNAME"] |& smtp; smtp |& getline j; print j
print "mail from:" s |& smtp; smtp |& getline j; print j
if(match(r, ","))
{
split(r, z, ",")
for(y in z) { print "rcpt to:" z[y] |& smtp; smtp |& getline j; print j }
}
else { print "rcpt to:" r |& smtp; smtp |& getline j; print j }
print "data" |& smtp; smtp |& getline j; print j
print "From:" s |& smtp; ARGV[2] = "" # not a file
print "To:" r |& smtp; ARGV[1] = "" # not a file
if(length(sbj)) { print "Subject: " sbj |& smtp; ARGV[3] = "" } # not a file
print "" |& smtp
while(getline > 0) print |& smtp
print "." |& smtp; smtp |& getline j; print j
print "quit" |& smtp; smtp |& getline j; print j
close(smtp) } # /inet/protocol/local-port/remote-host/remote-port

# ex - archive extractor
# usage: ex <file>
function ex() {
  if [ -f "$1" ] ; then
    case "$1" in
      *.tar.bz2) tar xjf "$1" ;;
      *.tar.gz) tar xzf "$1" ;;
      *.tar.xz) tar xf "$1" ;;
      *.bz2) bunzip2 "$1" ;;
      *.rar) unrar x "$1" ;;
      *.gz) gunzip "$1" ;;
      *.tar) tar xf "$1" ;;
      *.tbz2) tar xjf "$1" ;;
      *.tgz) tar xzf "$1" ;;
      *.zip) unzip "$1" ;;
      *.Z) uncompress "$1" ;;
      *.7z) 7z x "$1" ;;
      *) echo "'$1' cannot be extracted via ex()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}

So, you created a square wheel, instead of a NASA wheel.
un () {
unsetopt extendedglob
local old_dirs current_dirs lower do_cd
if [ -z "$1" ]
then
print "Must supply an archive argument."
return 1
fi
if [ -d "$1" ]
then
print "Can't do much with directory arguments."
return 1
fi
if [ ! -e "$1" -a ! -h "$1" ]
then
print "$1 does not exist."
return 1
fi
if [ ! -r "$1" ]
then
print "$1 is not readable."
return 1
fi
do_cd=1
lower="${(L)1}"
old_dirs=(*(N/))
undone=false
if which unar > /dev/null 2>&1 && unar "$1"
then
undone=true
fi
if ! $undone
then
INFO="$(file "$1")"
INFO="${INFO##*: }"
if command grep -a --line-buffered --color=auto -E "Zstandard compressed data" > /dev/null <<< "$INFO"
then
zstd -T0 -d "$1"
elif command grep -a --line-buffered --color=auto -E "bzip2 compressed" > /dev/null <<< "$INFO"
then
bunzip2 -kv "$1"
elif command grep -a --line-buffered --color=auto -E "Zip archive" > /dev/null <<< "$INFO"
then
unzip "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "RAR archive" > /dev/null
then
unrar e "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E 'xar archive' > /dev/null
then
xar -xvf "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "tar archive" > /dev/null
then
if which gtar > /dev/null 2>&1
then
gtar xvf "$1"
else
tar xvf "$1"
fi
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "LHa" > /dev/null
then
lha e "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "LHa" > /dev/null
then
lha e "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "compress'd" > /dev/null
then
uncompress -c "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "xz compressed" > /dev/null
then
unxz -k "$1"
do_cd=0
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "7-zip" > /dev/null
then
7z x "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "RPM " > /dev/null
then
if [ "$osname" = "Darwin" ]
then
rpm2cpio "$1" | cpio -i -d --quiet
else
rpm2cpio "$1" | cpio -i --no-absolute-filenames -d --quiet
fi
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "cpio archive" > /dev/null
then
cpio -i --no-absolute-filenames -d --quiet < "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E "Debian .* package" > /dev/null
then
dpkg-deb -x "$1" .
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i " ar archive" > /dev/null
then
ar x "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "ACE archive" > /dev/null
then
unace e "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "ARJ archive" > /dev/null
then
arj e "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "xar archive" > /dev/null
then
xar -xvf "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "ZOO archive" > /dev/null
then
zoo x "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -Ei "(tnef|Transport Neutral Encapsulation Format)" > /dev/null
then
tnef "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "InstallShield CAB" > /dev/null
then
unshield x "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -Ei "(mail|news)" > /dev/null
then
formail -s munpack < "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "uuencode" > /dev/null
then
uudecode "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "cab" > /dev/null
then
cabextract "$1"
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E -i "PPMD archive" > /dev/null
then
ln -s "$1" . && ppmd d "$1" && rm `basename "$1"`
elif [[ $lower == *.zst ]]
then
zstd -T0 -d "$1"
elif [[ $lower == *.bz2 ]]
then
bunzip2 -kv "$1"
elif [[ $lower == *.zip ]]
then
unzip "$1"
elif [[ $lower == *.jar ]]
then
unzip "$1"
elif [[ $lower == *.xpi ]]
then
unzip "$1"
elif [[ $lower == *.rar ]]
then
unrar e "$1"
elif [[ $lower == *.xar ]]
then
xar -xvf "$1"
elif [[ $lower == *.pkg ]]
then
xar -xvf "$1"
elif [[ $lower == *.tar ]]
then
if which gtar > /dev/null 2>&1
then
gtar xvf "$1"
else
tar xvf "$1"
fi
elif [[ $lower == *.tar.zst || $lower == *.tzst ]]
then
which gtar > /dev/null 2>&1
if [[ $? == 0 ]]
then
gtar -xv -I 'zstd -T0 -v' -f "$1"
elif [[ ${OSTYPE:l} == linux* ]]
then
tar -xv -I 'zstd -T0 -v' -f "$1"
else
zstd -d -v -T0 -c "$1" | tar xvf -
fi
elif [[ $lower == *.tar.gz || $lower == *.tgz ]]
then
which gtar > /dev/null 2>&1
if [[ $? == 0 ]]
then
gtar zxfv "$1"
elif [[ ${OSTYPE:l} == linux* ]]
then
tar zxfv "$1"
else
gunzip -c "$1" | tar xvf -
fi
elif [[ $lower == *.tar.z ]]
then
uncompress -c "$1" | tar xvf -
elif [[ $lower == *.tar.xz || $lower == *.txz ]]
then
which gtar > /dev/null 2>&1
if [[ $? == 0 ]]
then
xzcat "$1" | gtar xvf -
else
xzcat "$1" | tar xvf -
fi
elif echo "$INFO" | command grep -a --line-buffered --color=auto -E 'gzip compressed' > /dev/null || [[ $lower == *.gz ]]
then
if [[ $lower == *.gz ]]
then
gzcat -d "$1" > "${1%.gz}"
else
cat "$1" | gunzip -
fi
do_cd=0
elif [[ $lower == *.tar.bz2 || $lower == *.tbz ]]
then
bunzip2 -kc "$1" | tar xfv -
elif [[ $lower == *.tar.lz4 ]]
then
local mytar
if [[ -n "$(command -v gtar)" ]]
then
mytar=gtar
else
mytar=tar
fi
if [[ -n "$(command -v lz4)" ]]
then
$mytar -xv -I lz4 -f "$1"
elif [[ -n "$(command -v lz4cat)" ]]
then
lz4cat -kd "$1" | $mytar xfv -
else
print "Unknown archive type: $1"
return 1
fi
elif [[ $lower == *.lz4 ]]
then
lz4 -d "$1"
elif [[ $lower == *.epub ]]
then
unzip "$1"
elif [[ $lower == *.lha ]]
then
lha e "$1"
elif which aunpack > /dev/null 2>&1
then
aunpack "$@"
return $?
else
print "Unknown archive type: $1"
return 1
fi
fi
if [[ $do_cd == 1 ]]
then
current_dirs=(*(N/))
for i in {1..${#current_dirs}}
do
if [[ $current_dirs[$i] != "$old_dirs[$i]" ]]
then
cd "$current_dirs[$i]"
ls
break
fi
done
fi
}

The best solution for automatically cd'ing into the repo is to wrap git clone in a shell function or alias. Unfortunately I don't think there's any way to make git clone print the path a repository was cloned to, so I had to do some hacky string processing that tries to handle the most common usage (ignore the "gh:" in the URL regex, my git config just expands it to "git@github.com:"):
https://github.com/Andriamanitra/dotfiles/blob/d1aecb8c37f09...
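A minimal version of the idea (my sketch; `gclone` is a made-up name, and the linked dotfiles handle more URL shapes):

  gclone() {
    git clone "$1" || return
    local dir=${1##*/}     # last path component of the URL
    cd "${dir%.git}"       # git drops a trailing .git when naming the directory
  }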
I'm curious to hear some examples (feel like I'm missing out)
Like, I'd have to remember both `prettypath` and `sed`, and given that there's hardly any chance I'll not need `sed` in other situations, I now need to remember two commands instead of one.
On top of that `prettypath` only does s/:/\\n/ on my path, not on other strings, making its use extremely narrow. But generally doing search and replace in a string is incredibly useful, so I'd personally rather just use `sed` directly and become more comfortable with it. (Or `perl`, but the point is the same.)
As I said, that's obviously just my opinion, if loads of custom scripts/commands works for you, all the more power to you!
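i.e. the generic alternative being argued for:

  $ echo "$PATH" | tr ':' '\n'    # or, with GNU sed: sed 's/:/\n/g'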
this fella doesn't know what "toggle" means. in this context, it means "turn off if it's currently on, or turn on if it's currently off."
this should be named `wifi cycle` instead. "cycle" is a good word for turning something off then on again.
naming things is hard, but it's not so hard that you can't use the right word. :)
alias vz="vim ~/.zshrc && . ~/.zshrc"
I alias mdfind to grep my .docx files on my Mac:

docgrep() {
mdfind "\"$@\"" -onlyin /Users/xxxx/Notes 2> >(grep --invert-match ' \[UserQueryParser\] ' >&2) | grep -v -e '/Inactive/' | sort
}
I use an `anon` function to anonymize my Mac clipboard when I want to paste something to the public ChatGPT, company Slack, private notes, etc. I ran it through itself before pasting it here, for example.

anonymizeclipboard() {
my_user_id=xxxx
account_ids="1234567890|1234567890" #regex
corp_words="xxxx|xxxx|xxxx|xxxx|xxxx" #regex
project_names="xxxx|xxxx|xxxx|xxxx|xxxx" # regex
pii="xxxx|xxxx|xxxx|xxxx|xxxx|xxxx" # regex
hostnames="xxxx|xxxx|xxxx|xxxx|xxxx|xxxx|xxxx|xxxx|xxxx" # regex
# anonymize IPs
pbpaste | sed -E -e 's/([0-9]{1,3})\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/\1.x.x.x/g' \
-e "s/(${corp_words}|${project_names}|${my_user_id}|${pii}|${hostnames})/xxxx/g" -e "s/(${account_ids})/1234567890/g" | pbcopy
pbpaste
}
alias anon=anonymizeclipboard
It prints the new clipboard to stdout so you can inspect what you'll be pasting for anything it missed.

Folks interested in scripting like this might like this tool I'm working on: https://github.com/amterp/rad
Rad is built specifically for writing CLI scripts and is perfect for these sorts of small to medium scripts, takes a declarative approach to script arguments, and has first-class shell command integration. I basically don't write scripts in anything else anymore.
alias mpa='mpv --no-video'
mpa [youtube_url]
I use this to listen to music / lectures in the terminal.

I think it needs yt-dlp installed — and reasonably up to date, since YouTube keeps breaking yt-dlp... but the updates keep fixing it :)
ytsub() {
yt-dlp \
--write-sub \
--write-auto-sub \
--sub-lang "en.*" \
--skip-download \
"$1" && vtt2txt
}
ytsub [youtube_url]
Where vtt2txt is a python script — slightly too long to paste here — which strips out the subtitle formatting, leaving a (mostly) human readable transcript.

The Mac Shortcut at https://github.com/e-kotov/macos-shortcuts lets you select a particular area of the screen (as with Cmd-Shift-4) and copies the text out of that, allowing you to copy exactly the text you need from anywhere on your screen with one keyboard shortcut. Great for popups with unselectable text, and copying error messages from coworkers' screenshares.
The flags are for maximum compatibility (e.g. without them, some MP4s don't play in WhatsApp, or Discord on mobile, or whatever.)
ffmp4() {
input_file="$1"
output_file="${input_file%.*}_sd.mp4"
ffmpeg -i "$input_file" -c:v libx264 -crf 33 -profile:v baseline -level 3.0 -pix_fmt yuv420p -movflags faststart "$output_file"
echo "Compressed video saved as: $output_file"
}
ffmp4 foo.webm -> foo_sd.mp4
fftime() {
input_file="$1"
output_file="${input_file%.*}_cut.mp4"
ffmpeg -i "$input_file" -c copy -ss "$2" -to "$3" "$output_file"
echo "Cut video saved as: $output_file"
}
fftime foo.mp4 01:30 01:45 -> foo_cut.mp4
Note, fftime copies the audio and video data without re-encoding, which can be a little janky, but often works fine, and can be much (100x) faster on large files. To re-encode just remove "-c copy"
As an aside, I find most of these commands very long. I tend to use very short aliases, ideally 2 characters. I'm assuming the author uses tab most of the time, if the prefixes don't overlap beyond 3 characters it's not that bad, and maybe the history is more readable.
Even more useful is just learning the ICAO Spelling Alphabet (aka NATO Phonetic Alphabet, of which it is neither). It takes like an afternoon and is useful in many situations, even if the receiver does not know it.
https://gist.github.com/jgbrwn/7dd4b262c544f750cb0291161b2ec...
(actually avoids having to do a one-liner like: for h in {1..5}; do dig +short A mail"${h}".domain.com @1.1.1.1; done)
Hmm speaking of which I need to add in support for using a specific DNS server
I set this stuff up so long ago I sort of forgot that I did it at all; it's like a standard feature. I have to remember I did it.
I tend to try to not get too used to custom "helper" scripts because I become incapacitated when working in other systems. Nevertheless, I really appreciate all these scripts if nothing else than to see what patterns other programmers pick up.
My only addition is a small `tplate` script that creates HTML, C, C++, Makefile, etc. "template" files to start a project. Kind of like a "wizard setup". e.g.
$ tplate c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
}
And of course, three scripts `:q`, `:w` and `:wq` that get used surprisingly often:

$ cat :q
#!/bin/bash
echo "you're not in vim" #!/usr/bin/env bash
# ~/bin/,dehex
echo "$1" | xxd -r -p
and

#!/usr/bin/env bash
# ~/bin/,ht
highlight() {
# Foreground:
# 30:black, 31:red, 32:green, 33:yellow, 34:blue, 35:magenta, 36:cyan
# Background:
# 40:black, 41:red, 42:green, 43:yellow, 44:blue, 45:magenta, 46:cyan
escape=$(printf '\033')
sed "s,$2,${escape}[$1m&${escape}[0m,g"
}
if [[ $# == 1 ]]; then
  highlight 31 "$1"
elif [[ $# == 2 ]]; then
  highlight 31 "$1" | highlight 32 "$2"
elif [[ $# == 3 ]]; then
  highlight 31 "$1" | highlight 32 "$2" | highlight 35 "$3"
elif [[ $# == 4 ]]; then
  highlight 31 "$1" | highlight 32 "$2" | highlight 35 "$3" | highlight 36 "$4"
fi
I also use the comma-command pattern, where I prefix my personal scripts with a `,`, which lets me tab-cycle through them quickly, etc.

One thing I have found that's worth it is periodically running an aggregation on one's history and purging old ones that I don't use.
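For the aggregation part, something like this works (the field number assumes bash's default `history` output without timestamps):

  $ history | awk '{print $2}' | sort | uniq -c | sort -rn | head -20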
alias v='nvim'
alias vv='f=$(fzf --preview-window "right:50%" --preview "bat --color=always {1}"); test -n "$f" && v "$f"'
alias vvv='f=$(rg --line-number --no-heading . | fzf -d: -n 2.. --preview-window "right:50%:+{2}" --preview "bat --color=always --highlight-line {2} {1}"); test -n "$(echo "$f" | cut -d: -f1)" && v "+$(echo "$f" | cut -d: -f2)" "$(echo "$f" | cut -d: -f1)"'

python3 -m http.server 1337
Then I turned it into an alias, called it "serveit" and tweeted about it. And now I see it as a bash script, made a little bit more robust in case python is not installed :)
jsonformat -> jq
running -> pgrep
# If this is an xterm set the title to the directory stack
case "$TERM" in
xterm*|rxvt*)
if [ -x ~/bin/shorten-ds.pl ]; then
PS1="\[\e]0;\$(dirs -v | ~/bin/shorten-ds.pl)\a\]$PS1"
else
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
fi
;;
*)
;;
esac
The script shorten_ds.pl takes e.g.
0 /var/log/apt
1 ~/Downloads
2 ~
and shortens it to: 0:apt 1:Downloads 2:~
#!/usr/bin/perl -w
use strict;
my @lines;
while (<>) {
chomp;
s%^ (\d+) %$1:%;
s%:.*/([^/]+)$%:$1%;
push @lines, $_
}
print join ' ', @lines;
That coupled with functions that take 'u 2' as shorthand for 'pushd +2' and 'o 2' for 'popd +2' makes for easy manipulation of the directory stack:

u() {
if [[ $1 =~ ^[0-9]+$ ]]; then
pushd "+$1"
else
pushd "$@"
fi
}
o() {
if [[ $1 =~ ^[0-9]+$ ]]; then
popd "+$1"
else
popd "$@" # lazy way to cause an error
fi
}

E.g. cat --copy
echo 1 2 3 | each "rm {}"
is the same as
rm 1
rm 2
rm 3
while
echo 1 2 3 | xargs rm
is the same as
rm 1 2 3
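A minimal sketch of such an `each` (my assumption of how it could be implemented; eval-based, so the usual quoting caveats apply):

  #!/usr/bin/env bash
  # each -- run the command template once per whitespace-separated input word,
  # substituting {} for the word
  tmpl="$*"
  tr -s '[:space:]' '\n' | while read -r w; do
    [ -n "$w" ] && eval "${tmpl//\{\}/$w}"
  done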
I would rather say that 'each' replaces (certain uses of) 'for':
for i in 1 2 3; do rm $i; done

From the xargs man page:
-I replace-str
    Replace occurrences of replace-str in the initial-arguments
    with names read from standard input. Also, unquoted blanks
    do not terminate input items; instead the separator is the
    newline character. Implies -x and -L 1.

function unix() {
if [ $# -gt 0 ]; then
echo "Arg: $(date -r "$1")"
fi
echo "Now: $(date) - $(date +%s)"
}
Prints the current date as UNIX timestamp. If you provide a UNIX timestamp as arg, it prints the arg as a human readable date.

epoch () {
if [[ -z "${1:-}" ]]
then
date +'%s'
else
date --date="@${1}"
fi
}
% epoch
1761245789
% epoch 1761245789
Thu Oct 23 11:56:29 PDT 2025

$ posh /home/ramrachum/Dropbox/notes.txt
$DX/notes.txt
Of course, it only becomes useful when you define a bunch of environment variables for the paths that you use often.

I use this a lot in all of my scripts. Basically whenever any of my scripts prints a path, it passes it through `posh`.
# High level examples
run_some_command | clip
clip > file_from_my_clipboard.txt
# Copy a file's contents
clip < file.txt
# indent for markdown:
$ clip | sed 's/^/    /' | clip

This sounds pretty useful!
Coincidentally, I have recently learned that Daniel Stenberg et al (of cURL fame) wrote trurl[1], a libcurl-based CLI tool for URL parsing. Its `--json` option seems to yield similar results as TFA's url, if slightly less concise because of the JSON encoding. The advantage is that recent releases of common Linux distros seem to include trurl in their repos[2].
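e.g. (example URL mine; `--json` per the text above):

  $ trurl "https://example.com:8080/path?q=1" --json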
First X bytes: dd bs=X count=1
1. stripping first X bytes: dd bs=1 skip=X
2. stripping last X bytes: truncate -s -X
[that, plus LinkHint plugin for Firefox, and i3 for WM is my way to go for a better life]
It occurred to me that it would be more useful to me in Emacs, and that might make a fun little exercise.
And that's how I discovered `M-x nato-region` was already a thing.
`alias clip="base64 | xargs -0 printf '\e]52;c;%s\007'"`
It just sends it to the client’s terminal clipboard.
`cat thing.txt | clip`
It memoizes the command passed to it.
$ memo curl https://some-expensive.com/api/call | jq . | awk '...'
Manually clearing it (for example if I know the underlying data has changed):
$ memo -c curl https://some-expensive.com/api/call
In-pipeline memoization (includes the input in the hash of the lookup):
$ cat input.txt | memo -s expensive-processor | awk '...'
This allows me to rapidly iterate on shell pipelines. The main goal is to minimize my development latency, but it also has positive effects on dependencies (avoiding redundant RPC calls). The classic way of doing this is storing something in temporary files:
$ curl https://some-expensive.com/api/call > tmpfile
$ cat tmpfile | jq . | awk '...'
But I find this awkward, and it makes it harder than necessary to experiment with the expensive command itself.
$ memo curl https://some-expensive.com/api/call | jq . | awk '...'
$ memo curl --data "param1=value1" https://some-expensive.com/api/call | jq . | awk '...'
Both of those will run curl once.

NOTE: Currently environment variables are not taken into account when hashing.
I wonder if we have gotten to the point where we can feed an LLM our bash history and it could suggest improvements to our workflow.
If you do it, I'd love to hear your results.
In general, I wonder if we're at the point where an LLM watching you interact with your computer for twenty minutes can improve your workflow, suggest tools, etc. I imagine so, because when I think to ask how to do something, I often get an answer that is very useful, so I've automated/fixed far more things than in the past.
I currently only have two memoized commands:
$ for f in /tmp/memo/aktau/* ; do
ls -lh "$f" =(zstd -d < $f)
done
-rw-r----- 1 aktau aktau 33K /tmp/memo/aktau/0742a9d8a34c37c0b5659f7a876833b6dad9ec689f8f5c6065d05f8a27d993c7bbcbfdc3a7337c3dba17886d6f6002e95a434e4629.zst
-rw------- 1 aktau aktau 335K /tmp/zshSQRwR9
-rw-r----- 1 aktau aktau 827 /tmp/memo/aktau/8373b3af893222f928447acd410779182882087c6f4e7a19605f5308174f523f8b3feecbc14e1295447f45b49d3f06da5da7e8d7a6.zst
-rw------- 1 aktau aktau 7.4K /tmp/zshlpMMdo
That's roughly a 10x compression ratio.

Separately from that:
- The invocation contains *memo* right in there, so you (the user) knows that it might memoize.
- One uses memo(1) for commands that are generally slow. Rerunning your command that has a slow part and having it return in a millisecond while you weren't expecting it should make the spider-sense tingle.
In practice, this has never been a problem for me, and I've used this hacked-together command for years.

If you pipe curl's output to it, you'll get a live playground where you can finesse the rest of your pipeline.
$ curl https://some-expensive.com/api/call | up

It looks like up(1) and memo(1) have similar use cases (or goals). I'll give it a try to see if I can appreciate its ergonomics. I suspect memo(1) will remain my mainstay:
1. After executing a pipeline, I like to press the up arrow (heh) and edit. Surprisingly often I need to edit something that's *not* the last part, but somewhere in the middle. I find this cumbersome in default line editing mode, so I will often drop into my editor (^X^E) to edit the command.
2. Up seems to create a shell command after completion. Avoiding the creation of extra files was one of my goals for memo(1). I'm sure some smart zsh/bash integration could be made that just returns the completed command after completing.

Uhm, jq _is_ as powerful as awk (more, even). You can use jq directly and skip awk.
(I know, old habits die hard, and learning functional programming languages is not easy.)
$ awk '...' | grep | ...
Because I'm too lazy to go back to the start of the awk invocation and add a match condition there. If I'm going to save it to a script, I'll clean it up. (And for jq, I gotta be honest that my starting point these days would probably be to show my contraption to an LLM and use its answer as a starting point; I don't use jq nearly enough to know its language from memory.)

Also, this seems a lot like an automated way to write shell scripts that you can pipe to and from. So why not use a shell script that won't surprise anyone, instead of this, which might?
$ memo my-complex-command --some-flag my-positional-arg-1
In this invocation, a hash (sha512) is taken of "my-complex-command --some-flag my-positional-arg-1", which is then stored in /tmp/memo/${USER}/{sha512hash}.zst (if you've got zstd installed, other compression extensions otherwise).

#!/usr/bin/env bash
#
# memo(1), memoizes the output of your command-line, so you can do:
#
# $ memo <some long running command> | ...
#
# Instead of
#
# $ <some long running command> > tmpfile
# $ cat tmpfile | ...
# $ rm tmpfile
To save output, sed can be used in the pipeline instead of tee.
For example,
x=$(mktemp -u);
test -p $x||mkfifo $x;
zstd -19 < $x > tmpfile.zst &
<long running command>|sed w$x|<rest of pipeline>;
# You can even use it in the middle of a pipe if you know that the input is not
# extremely long. Just supply the -s switch:
#
# $ cat sitelist | memo -s parallel curl | grep "server:"
grep can be replaced with sed and search results sent to stderr
< sitelist curl ...|sed '/server:/w/dev/stderr'|zstd -19 >tmpfile.zst;
or send search results to stderr and to some other file
sed can save output to multiple files at a time
< sitelist curl ...|sed -e '/server:/w/dev/stderr' -e "/server:/wresults.txt"|zstd -19 >tmpfile.zst;

Also re: alphabet
$ echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z

If you want the exact alphabet behaviour as the OP:
$ echo {a..z} $'\n' {A..Z} | tr -d ' '

lt () { ls --color=always -lt "${1}" | head; }

They're not all necessarily the most efficient/proper way to accomplish a task, but they're nice to have on hand and be able to quickly share.
Admittedly, their usefulness has been diminished a bit since the rise of LLMs, but they still come in handy from time to time.
Over time that's grown to an @foo script for every project I work on, every place I frequent that has some kind of specific setup. They are prefixed with an @ because that only rarely conflicts with anything, and tab-complete helps me remember the less frequently used ones.
The @project scripts setup the whole environment, alias the appropriate build tools and versions of those tools, prepare the correct IDE config if needed, drop me in the project's directory, etc. Some start a VPN connection because some of my clients only have git access over VPN etc.
Because I've worked on many things over many years, most of these scripts also output some "help" output so I can remember how shit works for a given project.
Here's an example:
# @foo
PROJECT FOO
-----------
VPN Connection: active, split tunnel
Commands:
tests: mvn clean verify -P local_tests
build all components: buildall
Tools:
java version: 17.0.16-tem
maven version: 3.9.11
Edit: a word on aliases: I frequently alias tools like maven or ansible to include config files that are specific to that project. That way I can have a .m2 folder for every project that doesn't get polluted by other projects, I don't have to remember to tell ansible which inventory file to use, etc. I'm lazy and my memory is for shit.

MISE_ENV=testing bun run test
(“testing” in this example can be whatever you like)
- direnv: https://direnv.net/ simple tool and integrates with nix
- devenv: https://devenv.sh/ built on nix and is pretty slick
I've always wanted a linux directory hook that runs some action. Say I have a scripts dir filled with 10 different shell scripts. I could easily have a readme or something to remember what they all do.
What I want is some hook in a dir such that every time I cd into that dir it runs the hook. Most of the time it would be a simple 'cat usage.txt', but sometimes it may be 'source .venv/bin/activate'.
I know I can alias the cd and the hook together, but I don't want that.
Direnv is awesome! Note, though, that it does not depend on Nix, just a Unix-like OS and a supported shell: https://direnv.net/#prerequisites
Its intended use case is loading environment variables (you could use this to load your virtualenv), but it works by sourcing a script — and that script can be ‘cat usage.txt.’
Great tool.
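For example, a `.envrc` along those lines (my sketch; direnv shows the script's output and keeps any exported variables):

  # .envrc -- run by direnv on entering this directory (after `direnv allow`)
  cat usage.txt
  source .venv/bin/activate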
If you use Emacs (and you should!), there’s a direnv mode. Emacs also has its own way to set configuration items within a directory (directory-local variables), and is smart enough to support two files, so that there can be one file checked into source control for all members of a project and another ignored for one’s personal config.
So when I'm in my projects folder and want to keep working on my latest project, I just type "cdn" to go there.
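If "cdn" means "cd into the newest subdirectory" (my guess at the behavior), a minimal version is:

  cdn() {
    # most recently modified subdirectory of the current directory
    cd "$(ls -td -- */ | head -n 1)"
  }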
I have used it to build an "escmd" tool for interacting with Elasticsearch. It makes the available commands much more discoverable, it formats the output in tables, and it gets rid of sending JSON to a curl command.
A variety of small tools that interact with Jira (list my tickets, show tickets that are tagged as needing ops interaction in the current release).
A tool to interact with our docker registry to list available tags and to modify tags, including colorizing them based on the sha hash of the image so it's obvious which ones are the same. We manage docker container deploys based on tags so if we "cptag stg prod" on a project, that releases the staging artifact to production, but we also tag them by build date and git commit hash, so we're often working with 5-7 tags.
Script to send a "Software has successfully been released" message via gmail from the command-line.
A program to "waituntil" a certain time to run a command: "waituntil 20:00 && run_release", with nice display of a countdown.
I have a problem with working on too many things at once and then committing unrelated things tagged with a particular Jira case. So I had it write me a commit program that lists my tickets, shows the changed files, and lets me select which ones go with that ticket.
All these are things I could have built before, but would have taken me hours each. With the GenAI, they take 5-15 minutes of my attention to build something like this. And Gen AI seems really, really great at building these small, independent tools.
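A rough sketch of the waituntil idea (my sketch, assuming GNU date; the real one has a nicer countdown display):

  #!/usr/bin/env bash
  # waituntil HH:MM -- block until the given wall-clock time, so
  # `waituntil 20:00 && run_release` works as described above
  [ -n "$1" ] || { echo "usage: waituntil HH:MM" >&2; exit 1; }
  target=$(date -d "$1" +%s) || exit 1   # today at HH:MM (GNU date)
  now=$(date +%s)
  [ "$target" -le "$now" ] && target=$((target + 86400))  # already past: tomorrow
  while now=$(date +%s); [ "$now" -lt "$target" ]; do
    printf '\r%6d seconds until %s ' "$((target - now))" "$1"
    sleep 1
  done
  printf '\n'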
and alias debian='docker run -it --rm -v "$(pwd)":/mnt/host -w /mnt/host --name debug-debian debian' (single quotes, so $(pwd) expands when the alias is used rather than when it's defined)
Here's my script if anyone is interested, as I find it to be incredibly useful.
find . -type f \( -name "*.tf" -o -name "*.tfvars" -o -name "*.json" -o -name "*.hcl" -o -name "*.sh" -o -name "*.tpl" -o -name "*.yml" -o -name "*.yaml" -o -name "*.py" -o -name "*.md" \) -exec sh -c 'for f; do echo "### FILE: $f ###"; cat "$f"; echo; done' sh {} +
Ex:
> echo $PWD
/foo/bar/batz/abc/123
> dc bar && echo $PWD
/foo/bar
Useful for times when I don't want to type a long train of dot-slashes (ex. cd ../../..).

Also useful when using Zoxide, and I tab-complete into a directory tree path where parent directories are not in Zoxide history.
Added tab complete for speed.
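A sketch of what such a `dc` might look like (my assumption of its behavior: cd up to the nearest ancestor directory with the given name):

  dc() {
    local p=$PWD
    while [ "$p" != / ]; do
      p=$(dirname "$p")
      [ "$(basename "$p")" = "$1" ] && { cd "$p"; return; }
    done
    echo "dc: no ancestor named '$1'" >&2
    return 1
  }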
So 'p ISSUE-123' :
* creates a folder issues/ISSUE-123 for work files, containing links to a backed up folder and the project repository . The shell is cd'd to it
* The repo might get a new branch with the issue name.
* An IDE might be started with the project loaded.
* The browser's home button brings you to a page with all kinds of relevant links: the issue tracker, the CI, all kinds of test pages, etc...
* The open/save dialogs for every program get a shortcut named 'issue'
* A note is made in a log that allows me to do time tracking at the end of the week.
* A commit message template with the issue is created.
kp () {
if [ -z "$1" ] then
echo "Usage: kp <port>"
return 1
fi
lsof -nP -iTCP:"$1" -sTCP:LISTEN | awk 'NR>1 {print $2}' | xargs kill -9
}

#!/usr/bin/perl
# http://stackoverflow.com/a/9790056
use List::Util qw(max min sum);
@a=();
while(<>){
$sqsum+=$_*$_;
push(@a,$_)
};
$n=@a;
$s=sum(@a);
$a=$s/@a;
$m=max(@a);
$mm=min(@a);
$std=sqrt($sqsum/$n-($s/$n)*($s/$n));
$mid=int @a/2;
@srtd=sort @a;
if(@a%2){
$med=$srtd[$mid];
}else{
$med=($srtd[$mid-1]+$srtd[$mid])/2;
};
print "records:$n\nsum:$s\navg:$a\nstd:$std\nmed:$med\max:$m\nmin:$mm";This can be replaced with
sed -n $1p\;$1q
Test it versus
head -$1|tail -1
On every machine of mine I tend to accumulate a bunch of random little scripts along these lines in my ~/.local/bin, but I never seem to get around to actually putting them anywhere. Trying to knock that habit by putting any new such scripts in a “snippets” repo (https://fsl.yellowapple.us/snippets); ain't a whole lot in there yet, but hopefully that starts to change over time.