
Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•16h ago•145 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•176 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
91•antves•1d ago•66 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•2h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•2h ago•1 comment

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
17•denuoweb•1d ago•2 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
25•dchu17•19h ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
47•nwparker•1d ago•11 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
151•bsgeraci•1d ago•63 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•4h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
17•NathanFlurry•22h ago•8 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
13•keepamovin•5h ago•5 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•19h ago•7 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•7h ago•0 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
172•vkazanov•2d ago•49 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•8h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
2•rs545837•9h ago•1 comment

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
4•rahuljaguste•14h ago•1 comment

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
5•AGDNoob•10h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
12•KevinChasse•19h ago•16 comments

Show HN: Gohpts tproxy with arp spoofing and sniffing got a new update

https://github.com/shadowy-pycoder/go-http-proxy-to-socks
2•shadowy-pycoder•11h ago•0 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
9•sawyerjhood•20h ago•0 comments

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
4•osmansiddique•11h ago•0 comments

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps

https://github.com/tosin2013/jupyter-notebook-validator-operator
2•takinosh•12h ago•0 comments

Show HN: 33rpm – A vinyl screensaver for macOS that syncs to your music

https://33rpm.noonpacific.com/
3•kaniksu•13h ago•0 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
568•deofoo•5d ago•166 comments

Show HN: qqqa – A fast, stateless LLM-powered assistant for your shell

https://github.com/matisojka/qqqa
165•iagooar•3mo ago
I built qqqa as an open-source project because I was tired of bouncing between the shell and ChatGPT in the browser for rather simple commands. It comes with two binaries: qq and qa.

qq means "quick question" - it is read-only, perfect for the commands I always forget.

qa means "quick agent" - it is qq's sibling that can run things, but only after showing its plan and getting approval from the user.

It is built entirely around the Unix philosophy of focused tools, stateless by default - pretty much the opposite of what most coding agents focus on.

Personally I've had the best experience using Groq + gpt-oss-20b, as it feels almost instant (up to 1k tokens/s according to Groq) - but any OpenAI-compatible API will do.

Curious if the HN crowd finds it useful - and of course, AMA.
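
A quick illustration of the intended flow (the prompts here are made up for illustration; see the README for exact usage):

    # qq: read-only quick question
    qq "list all files larger than 100MB in this directory"

    # qa: proposes a command, runs it only after you approve the plan
    qa "rename every .jpeg in this folder to .jpg"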

Comments

iagooar•3mo ago
And of course, if you find any bugs or feature requests, report them via issues on GitHub.
kissgyorgy•3mo ago
There is also the llm tool written by simonwillison: https://github.com/simonw/llm

I personally use "claude -p" for this
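
Both one-shot flows look roughly like this (both CLIs take the prompt as an argument):

    # simonw's llm
    llm "find all files modified in the last 24 hours"

    # Claude Code in print (non-interactive) mode
    claude -p "find all files modified in the last 24 hours"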

iagooar•3mo ago
Compared to the llm tool, qqqa is as lightweight as it gets. In the Ruby world it would be Sinatra, not Rails.

I have no interest in adding too many complex features. It is supposed to be fast and get out of your way.

Different philosophies.

NSPG911•3mo ago
Very cool, can be useful for simple commands, but I find the GitHub CLI's Copilot extension useful for this. I just do `ghcs <question>` and it gives me a command; I can ask it how it works, make it better, copy it, or run it.
silentsanctuary•3mo ago
I like using ghcs for this as well! Or at least, I liked to - it's deprecated now, in favor of the new CLI which doesn't provide the same functionality.

https://github.com/github/gh-copilot/commit/c69ed6bf954986a0...

https://github.com/github/copilot-cli/issues/53

RamtinJ95•3mo ago
This looks really cool and I love the idea, but I will stick with opencode run "query". For specific agents with specific models, I can configure that in an agent.md, then add opencode run "query" -agent quick.
iagooar•3mo ago
I think it is more about what it doesn’t do. It is not a coding agent. It is a lightweight assistant, Unix style “Do One Thing and Do It Well”.

https://en.wikipedia.org/wiki/Unix_philosophy

CGamesPlay•3mo ago
Looks interesting! Does it support multiple tool calls in a chain, or only terminating with a single tool use?

Why is there a flag to not upload my terminal history and why is that the default?

iagooar•3mo ago
Thanks!

It does not support chaining multiple tool calls - if it did, it would not be a lightweight assistant anymore, I guess.

The history is there to allow referencing previous commands - but now that I think about it, it should clearly not be on by default.

Going to roll out a new version soon. Thanks for the feedback!

CGamesPlay•3mo ago
Given that it doesn't support multiple tool calls, one thing I noticed that is not ideal is that it seems to buffer stdout and stderr. This means that I don't see any output if the command takes 10 minutes, and I also can't see stdout mixed with stderr. It would be ideal to actually "exec" the target process instead, honestly. https://doc.rust-lang.org/std/os/unix/process/trait.CommandE...
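
(For illustration, the shell analogue of the two behaviors - command substitution buffers everything until the command exits, while exec replaces the wrapper process so output streams through in real time; long_running_command is a placeholder:)

    # buffered: output is captured; nothing appears until the command exits
    out=$(long_running_command 2>&1)
    printf '%s\n' "$out"

    # exec'd: the wrapper process is replaced; output streams immediately
    exec long_running_command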
iagooar•3mo ago
This one is a bit tricky. The tool needs the output to process things after the AI returns results. And since the focus is on rather short interactions, this is an OK-ish tradeoff, I believe. But I will give it some more thought - not saying no to it, but I need to think through the possible ramifications.
armcat•3mo ago
One mistake in your README - groq throughput is actually 1000 tokens per "second" (not "minute"), for gpt-oss-20b.
iagooar•3mo ago
Nice catch - fixed!
flashu•3mo ago
Good one, but I do not see a release for macOS :(
iagooar•3mo ago
Darwin is the macOS release - I should make that clear - will update the readme. Thanks.
shellfishgene•3mo ago
I don't see any binaries on GitHub?
flashu•3mo ago
That was my point, nothing in releases on GH
iagooar•3mo ago
The readme clearly links to releases. I am not using GH releases, but that does not mean they are not there.
imcritic•3mo ago
Pushing releases right into the repository? That's kinda nuts.
iagooar•3mo ago
Just learned something new! Will soon change how releases are delivered, fixing a few other issues I got reported.
imcritic•3mo ago
If you were unaware that such an approach is frowned upon, then you might also not know that even if you delete the binary files from your Git history, they will stay there and bloat your repository forever. To truly cut them out of the repository you will need special tools that rewrite Git history to remove the bloat. The downside is that commit checksums will change, so you will essentially have to force-push the existing commits with new checksums.
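
(For example, with git-filter-repo - the releases/ path is hypothetical here:)

    # best run on a fresh clone: rewrite history to drop the directory,
    # then force-push the rewritten commits
    git filter-repo --path releases/ --invert-paths
    git push --force origin main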
iagooar•3mo ago
Point taken. The files were really small, no need to exaggerate.
krzkaczor•3mo ago
This is nice. Reminds me of how in the Warp terminal you can (could?) just type `# question` and it would call some LLM under the hood. Good UX.
iagooar•3mo ago
Thank you - appreciate it. I really tried to create something simple that solves one problem really well.
baalimago•3mo ago
For inspiration (and, ofc, PR, since I'm salty that this gets attention while my pet project doesn't), you can check out clai[0], which works very similarly but has a year or so's worth of development behind it.

So feature suggestions:

* Pipe data into qq ("cat /tmp/stacktrace | qq What is wrong with this: "),

* Profiles (qq -profile legal-analysis Please checkout document X and give feedback)

* Conversations (this is simply appending a new message to a previous query)

[0]: https://github.com/baalimago/clai/blob/main/EXAMPLES.md

iagooar•3mo ago
The net is vast and more often than not we miss the good things out there.

A little anecdote: a few years ago I published an open-source library that went completely underappreciated for a long time. For a few years I did not even check on it - and then one day I realized it had over 500 stars on GH (700+ today). Good things take time.

Appreciate the ideas!

jakewins•3mo ago
Very similar experience with several libraries. Wrote up a particularly pleasant one a few years ago: https://tech.davis-hansson.com/p/clickbait/
4m1rk•3mo ago
https://llm.datasette.io/ is great too.
ErikBjare•2mo ago
Same with mine that has been around for over 2 years now: https://github.com/gptme/gptme
jcmontx•3mo ago
why use this and not claude code?
iagooar•3mo ago
"Do One Thing and Do It Well" - https://en.wikipedia.org/wiki/Unix_philosophy

Also, groq + gpt-oss is so much faster than Claude.

d4rkp4ttern•3mo ago
I built a similar tool called "lmsh" (LM shell) that uses Claude Code's non-interactive mode (hence no API keys needed, since it uses your CC subscription): it presents the shell command on a REPL-like line that you can edit first and hit enter to run. Used Rust to make it a bit snappier:

https://github.com/pchalasani/claude-code-tools?tab=readme-o...

It's pretty basic and could be improved a lot, e.g. making it use Haiku or codex CLI with low thinking, etc. Another thing is to have it bypass reading CLAUDE.md or AGENTS.md. (PRs anyone? ;)

iagooar•3mo ago
This is a pretty neat approach, indeed. Having to use the API might be an inconvenience for some people. I guess having a Claude or ChatGPT subscription and using it with the CLI tools is what makes developers stick with those tools, instead of using what else is out there.
d4rkp4ttern•3mo ago
Right, when we're already paying $100 or $200 per month, leveraging that "almost-all-you-can-eat buffet" is always going to be more attractive than spending more on per-token API billing.
fouc•2mo ago
>it presents the shell command on a REPL like line that you can edit first and hit enter to run it.

Oh genius, that's the best UX idea for the situation of asking an LLM to flesh out the CLI command without relying entirely on blind faith.

Even better if we could have that kind of behavior in the shell itself. For example, if we started typing "cat list | grep foo | " and then suddenly realized we want help with an awk command that drops the first column.

buster•3mo ago
I'm using https://github.com/kagisearch/ask

It's a simple shell script of 204 lines.

pmarreck•3mo ago
Just about everyone has already written one of these. Mine are called "ask" and "please". My "ask" has a memory though, since I often needed to ask follow-up questions:

https://github.com/pmarreck/dotfiles/blob/master/bin/ask

I have a local version of ask that works with ollama: https://github.com/pmarreck/dotfiles/blob/master/bin/ask_loc...

And here is "please" as in "please rename blahblahblah in this directory to blahblah": https://github.com/pmarreck/dotfiles/blob/master/bin/please
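
(The core of such a wrapper is tiny - a minimal sketch against an OpenAI-compatible chat completions endpoint, assuming curl and jq are installed; the model name is just an example:)

    #!/bin/sh
    # ask: one-shot question, prints only the assistant's reply
    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d "$(jq -n --arg q "$*" '{model: "gpt-4o-mini", messages: [{role: "user", content: $q}]}')" \
      | jq -r '.choices[0].message.content'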

iagooar•3mo ago
I can type qq faster than you can type ask. Even more so with qa vs please ;)
baalimago•3mo ago
Length of the binary's name doesn't really matter though, as one can easily `alias please=p`
pmarreck•3mo ago
yeah I already aliased ask to "a" and please to "p" lol
amarble•3mo ago
Since we're sharing, I have a "claude" command that lets me get quick answers but also saves the conversation and outputs an identifier so in the rare case I want a follow-up, I can ask a question with the ID to continue the conversation.

https://gist.github.com/rbitr/bfbc43b806ac62a5230555582d63d4...

pmarreck•3mo ago
Neat idea! Although as an identifier, instead of a hash, I'd probably ask it to summarize the conversation into 3 to 7 underscore-separated words and use that as the identifier (plus maybe a timestamp), since a list of them will more easily tell you which is relevant
stevedsimkins•3mo ago
Feels like this might have already been done, and then some, by aichat (which I give the alias `ai` on my machines)

https://github.com/sigoden/aichat

Nevertheless it’s good to see more tools with the Unix philosophy!

swah•3mo ago
I usually do this in Raycast but the Groq tip is good...
psychoslave•3mo ago
Can it run a local LLM with quick parameters?
iagooar•3mo ago
I would like to add support, but I do not have a computer powerful enough to run an LLM fast enough, so I am not able to test.

Is it possible to use an OpenAI-compatible API locally, or how does that work?

psychoslave•3mo ago
https://github.com/simonw/llm proposes some hints to run in local
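
(Yes - Ollama, for example, serves an OpenAI-compatible API on localhost, so the wiring is roughly:)

    # start the local server, then hit the OpenAI-compatible endpoint
    # (the model must be pulled first, e.g. ollama pull llama3.2)
    ollama serve &
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "hello"}]}'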
foobarqux•3mo ago
llm-cmd-comp is better:

* it puts the command in the shell editor line, so you can edit it (for example to specify filenames after the fact and make use of shell tools like glob expansion)

* it goes into the history

* it can use a key binding, so you can start writing something without remembering to prefix it with a command, and invoke the completion at any point in the line editor

* it also allows you to refine the command interactively

I haven't seen any of the myriad other tools do these very obvious things.

https://github.com/CGamesPlay/llm-cmd-comp
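
(The pre-fill trick itself is only a few lines of zsh - a rough sketch built on the `llm` CLI; `print -z` pushes text onto zsh's editing buffer, so the generated command lands on your prompt, editable and recorded in history:)

    # ask for a command; it appears on the line editor instead of running
    qcmd() {
        print -z "$(llm "reply with a single shell command, no prose: $*")"
    }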

iagooar•3mo ago
Thanks. I guess it all depends on the perspective. I do not see how editing the command is a good tradeoff here in terms of complexity+UI. Once you get the command suggested by the LLM, you can quickly copy and modify it, before running it.

qqqa uses history - although in a very limited fashion for privacy reasons.

I am taking note of these ideas though, never say never!

foobarqux•3mo ago
> Once you get the command suggested by the LLM, you can quickly copy and modify it, before running it.

Copying and pasting tends to be a very tedious operation in the shell, which usually requires moving your hands away from the keyboard to the mouse (there are terminals which allow you to quick-select and insert lines but they are still more tedious than simply pressing enter to have the command on the line editor). Maybe try using llm-cmd-comp for a while.

> I do not see how editing the command is a good tradeoff here in terms of complexity+UI.

I don't find it a tradeoff, I think it's strictly superior in every way including complexity. llm-cmd-comp is probably the way I most often interface with llms (maybe second to basic search-engine-replacement) and I almost always either 1. don't have the file glob or the file names themselves ready (they may not exist yet!) at the time when I want to start writing the command or they are easier to enter using a fuzzy selector like fzf 2. don't want the llm to do weird things with globs when I pass them directly and having the shell expand them is usually difficult because the prompt is not a command (so the completion system won't do the right thing).

But even in your own demo it is faster to use llm-cmd-comp and you also get the benefit that the command goes into the history and you can optionally edit it if you want or further revise the prompt! It does require pressing enter twice instead of "y" but I don't find that a huge inconvenience especially since I almost always edit the command anyway.

Again, try installing llm-cmd-comp and try out your demo case.

sheepscreek•3mo ago
On the stateless part - I increasingly believe that state keeping is an absolute necessity. Not necessarily across requests but on the local storage. Handoffs are proving invaluable in overcoming context limitations and I would like more tools to support a higher level of coordination and orchestration across sessions and with sub-agents.

I believe the best "worker" agents of the future are going to be great at following instructions and have fantastic intuition, but not so much knowledge. They'll be very fast but will need to retain their learnings so they can build on them, rather than relearning everything on every request - which is slow and a complete waste of resources. Much like what Claude is trying to achieve with skills.

I'm not suggesting that every tool reinvent this paradigm in its own unique way. Perhaps we need a single system that can do all the necessary state keeping, so each tool can focus on doing its job really well.

Unfortunately, this is more art than science - for example, getting each model to carry out a handoff in the expected way will be a challenge, especially on current-gen small models. But many people are using frontier models, which are slowly converging in their intuition and ability to comprehend instructions. So it might still be worth the effort.

hmokiguess•3mo ago
Nice, the `qq` part reminds me of this project: https://github.com/tldr-pages/tldr

That said, I'd rather use Claude in headless mode: https://code.claude.com/docs/en/headless

ripped_britches•3mo ago
I’ve used sgpt and really liked that as prior art
etaioinshrdlu•3mo ago
I can suggest our service (previously here: https://news.ycombinator.com/item?id=44849129), which might be helpful. If you want a zero-setup backend to try qqqa, ch.at might be a useful option. We built ch.at - a single-binary, OpenAI-compatible chat service with no accounts, no logs, and no tracking. You can point qqqa at our API endpoint and it should "just work":

OpenAI-compatible endpoint: https://ch.at/v1/chat/completions (supports streamed responses)

Also accessible via HTTP/SSH/DNS for quick tests: curl ch.at/?q=… or ssh ch.at. Privacy note: we don't log anything, but upstream LLM providers might...
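
(A concrete smoke test against the endpoint, mirroring the OpenAI chat format - whether a model field is required depends on the server:)

    curl https://ch.at/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "hello"}]}'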

iagooar•3mo ago
That would be pretty cool for testing the waters, will give it a thought!

How do you guys pay for this? I guess the potential for abuse is huge.

etaioinshrdlu•3mo ago
Cool! Right now it's just IP address rate limiting and the costs have not mattered too much, but yes long term I am not sure what we'll do...
Zetaphor•3mo ago
I personally prefer aichat, as it gives me the option to copy the command it's proposing to the clipboard, to iterate further on the prompt, or to have it describe its choice

https://github.com/sigoden/aichat

insane_dreamer•3mo ago
Nice! Do you have plans to make it work with a CC subscription? Great idea but not really interested in paying for another API key
iagooar•3mo ago
What a phenomenal launch it has been! Thanks a lot to everyone, for the many ideas and feedback. It has really made me push harder to make qqqa even cooler.

Since I launched it yesterday, I have added a few new features - check out the latest version on GitHub!

Here is what we have now:

* added support for OpenRouter

* added support for local LLMs (Ollama)

* qqqa can be installed via Homebrew, to avoid signing issues on macOS

* qq/qa can ingest piped input from stdin

* qa now preserves ANSI colors and TTY behavior

* hardened the agent sandbox - execute_command can't escape the working directory anymore

* history is disabled by default - can be enabled at --init, via config or flag

* qq --init refuses to override an existing .qq/config.json
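
(The stdin support pairs naturally with ordinary pipelines - for example, with an illustrative prompt:)

    # pipe context in, then ask about it
    cat build.log | qq "why did this build fail?"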

Jotalea•3mo ago
Apparently everyone has made their own, some better, others worse. But here's my implementation (not as full-featured as this one, but it does the job): https://github.com/Jotalea/FRIDAY

It's inspired by F.R.I.D.A.Y. from the Marvel Cinematic Universe, a digital assistant with access to all of the (fictional) hardware.