
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
233•theblazehen•2d ago•68 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
6•AlexeyBrin•1h ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•555 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
67•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
54•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
386•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
10•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
425•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•216 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

Agentic Patterns

https://github.com/nibzard/awesome-agentic-patterns
171•PretzelFisch•1mo ago

Comments

hmcamp•1mo ago
I like this list.
solomatov•1mo ago
This all sounds interesting, but how effective are they? Does anyone have experience with any of them?
aranchelk•1mo ago
Yes, agentic search over vector embeddings. It can be very effective.
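Roughly something like this, as a minimal sketch (embed and ask_llm are placeholders for a real embedding model and LLM client, not from the repo):

    import math, random

    def embed(text: str) -> list[float]:
        # Placeholder: deterministic pseudo-embedding; swap in a real model.
        rnd = random.Random(text)
        v = [rnd.gauss(0, 1) for _ in range(64)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    def ask_llm(prompt: str) -> str:
        # Placeholder for a real LLM call; returns a canned answer here.
        return "Answer: steer tool use from the system prompt."

    docs = ["notes on tool use steering", "post on sub-agent delegation"]
    index = [(d, embed(d)) for d in docs]

    def search(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        score = lambda v: sum(a * b for a, b in zip(v, q))
        return [d for d, v in sorted(index, key=lambda p: -score(p[1]))[:k]]

    # The "agentic" part: the model decides whether to answer or search again.
    query, context = "how do teams steer tool use?", []
    for _ in range(3):
        context += search(query)
        reply = ask_llm(f"Context: {context}\nQuestion: {query}\n"
                        "Answer, or reply 'SEARCH: <new query>' to search again.")
        if not reply.startswith("SEARCH:"):
            break
        query = reply.removeprefix("SEARCH:").strip()
    print(reply)
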
solomatov•1mo ago
It's a very well-known pattern. But what about the others? There's a lot of very interesting stuff there.
aranchelk•1mo ago
Tool Use Steering via Prompting. I’ve seen that work well also, but I don’t know if I’d quite call it an architectural pattern.
nkko•1mo ago
I’m eager to tackle issues and PRs.
rammy1234•1mo ago
I find it interesting that we already have established patterns, while the agentic approach is still being adopted across industries at varying levels of maturity.
zeckalpha•1mo ago
Agents have been around for decades. Some of these patterns pre-exist the current LLM boom.

1996: https://web.archive.org/web/19961221024144/http://www.acm.or... > Computer-based agents have gotten attention from computer scientists and human interface designers in recent years

nkko•1mo ago
Yep—many of these predate LLMs.
nkko•1mo ago
At some point, we need to begin. My initial thought was that this is a growing and evolving resource, primarily for my own use. We are slowly but steadily learning what makes sense, and patterns emerge. Also, if others find it interesting and contribute, that would be even better.
hsaliak•1mo ago
What if this repo itself was vibed
only-one1701•1mo ago
It’s slop all the way down
skhameneh•1mo ago
I didn't have the patience to click through after visiting a few pages only to find the depth lacking.

About an hour ago or so I used Opus 4.5 to give me a flat list with summaries. I tried to post it here as a comment but it was too long and I didn't bother to split it up. They all seem to be things I've heard of in one way or another, but nothing really stood out for me. Don't get me wrong, they're decent concepts, and it's clear others appreciate this resource more than I do.

ColinEberhardt•1mo ago
Looking at the commit log, almost every commit references an AI model:

https://github.com/nibzard/awesome-agentic-patterns/commits/...

Unfortunately it isn’t possible to detect whether AI was being used in an assistive fashion, or whether it was the primary author.

Regardless, a skim read of the content reveals it to be quite sloppy!

nkko•1mo ago
The flow was: me finding an interesting pattern -> Claude ingesting the reference and putting it into a template -> me checking whether it makes sense -> push
nkko•1mo ago
Author here. Yes, CC is the maintainer. When I stumble on a decent idea, I just feed it to CC to create a pattern out of it. This was my quick-and-dirty approach to a public learning log, with the idea that I would get back to it at some point and clean it up, which I did on a few occasions.
vzaliva•1mo ago
I like the idea, but I am confused about why many of them are expressed as code. How am I supposed to use them?
jamesrom•1mo ago
This comment defines the next era of software development.
greatgib•1mo ago
Looks like all bullshit to me. It's what happens when you make up complex terms to pretend you are doing engineering, but it is baseless.

Something like if I made a list of dev patterns and said:

- caffeinated break for algorithmic thinking improvement

When I'm thinking through some algorithmic logic, I go have a coffee break, and then go back to my desk to work on it again.

Here is one of the first "patterns" of the project that I opened, for example:

   Dogfooding with rapid iteration for agent improvement.

   Developing effective AI agents requires understanding real-world usage and quickly identifying areas for improvement. External feedback loops can be slow, and simulated environments may not capture all nuances.

   Solution:
   The development team extensively uses their own AI agent product ("dogfooding") for their daily software development tasks.

Or

"Extended coherence work sessions"

   Early AI agents and models often suffered from a short "coherence window," meaning they could only maintain focus and context for a few minutes before their performance degraded significantly (e.g., losing track of instructions, generating irrelevant output). This limited their utility for complex, multi-stage tasks that require sustained effort over hours.

   Solution
   Utilize AI models and agent architectures that are specifically designed or have demonstrably improved capabilities to maintain coherence over extended periods (e.g., several hours)

Don't tell me that it is not all bullshit...

I'm not saying that what is said is untrue.

Just imagine you took a two-page pamphlet about how to use an LLM and split every sentence into a wannabe "pattern".

soulchild77•1mo ago
I felt the same and I asked Claude about it. The answer made me chuckle:

> There’s definitely a tendency to dress up fairly straightforward concepts in academic-sounding language. “Agentic” is basically “it runs in a loop and decides what to do next.” Sliding window is just “we only look at the last N tokens.” RAG is “we search for stuff and paste it into the prompt.” [...] When you’re trying to differentiate your startup or justify your research budget, “agentic orchestration layer” lands differently than “a script that calls Claude in a loop.”
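
Which is not even unfair; a bare-bones version of that loop really is about this small (a sketch only: call_llm is a stand-in for whatever client/SDK you use, and the tools are toys):

    import json

    def call_llm(messages: list[dict]) -> str:
        # Stand-in for a real API call; returns a JSON-encoded "next action".
        return json.dumps({"action": "final", "answer": "done"})

    TOOLS = {
        "search": lambda q: f"results for {q!r}",
        "read_file": lambda path: f"contents of {path}",
    }

    def run_agent(task: str, max_steps: int = 5) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            decision = json.loads(call_llm(messages))
            if decision["action"] == "final":
                return decision["answer"]
            # Otherwise run the chosen tool and paste the result back into
            # the conversation (the "we search for stuff and paste it in" bit).
            result = TOOLS[decision["action"]](decision.get("input", ""))
            messages.append({"role": "tool", "content": result})
        return "step budget exhausted"

    print(run_agent("summarize the repo"))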

spaceman_2020•1mo ago
I had someone argue on Twitter recently that they had made an “agent” when all they had really done was use n8n to make a loop that used LLMs and ran on a schedule

People are calling if-then cron tasks “agents” now

greatgib•1mo ago
Now that you say it, I realize it might be useful to me one day if I'm a bland, useless startup trying to dress up my pitch with these terms to raise investor money...
jauntywundrkind•1mo ago
Typically, awesome-subject-matter repositories link out to other resources.

There are so, so many prompt and agentic-pattern repositories out there. I'm pretty turned off by this repo flouting the convention of what awesome-* repos are: being the work itself, rather than linking to the good work that's already out there for us to choose from.

matsemann•1mo ago
I'd rather have a single repo with a curated format and thought behind it (not sure if this one is, just assuming) than the usual awesome-* lists that just link to every single page on a subject, with so much overlap that I don't even know which one to look at for a given problem.
nkko•1mo ago
These kinds of things are time sinks; surprisingly, it takes a lot of time to figure everything out. I was hoping to dedicate a decent amount of time to reviewing and structuring, but sadly life got in the way. If you have a suggestion for how to structure it better, I'm all ears.
matsemann•1mo ago
My point was that I like your approach better than the huge lists where no one has really vetted whatever is put on them. Of course, a curated list has the drawback of someone having to curate it :)
nkko•1mo ago
Fast reading was always my Achilles heel ;)
nkko•1mo ago
Most of the patterns should link to external resources, since they were derived from them. If there's no link, it was probably obvious or I derived it from my own project.
baalimago•1mo ago
Who is this for? Apart from the contributors, of course, who wish to feel good about eternalizing their 'novel' ideas.
usefulposter•1mo ago
It's a mix of signaling, busywork and productivity porn for the ingroup.

A few years ago we had GitHub resource-spam about smart contracts and Web3 and AWESOME NFT ERC721 HACK ON SOLANA NEXT BIG THING LIST.

Now we have repos for the "Self-Rewriting Meta-Prompt Loop" and "Gas Town":

https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.

dandelionv1bes•1mo ago
I thought Gas Town was a satire until I saw the GitHub. Maybe it’s a very involved satire?

It is, right? “Do not use Gas Town.”

keybored•1mo ago
> Who is this for?

Star-farming anno 2026.

nkko•1mo ago
See my comment above. The repository is from May when I was intensely exploring everything agentic. I used it as a public bookmarking tool and also in the hope of receiving contributions. Thanks to this HN share, I received four PRs.
keybored•1mo ago
Anno 2025. Makes a difference I guess.
nkko•1mo ago
Hi, author here. Honestly, I just used this as a bookmarking place for myself, which you could infer if you go through some of the patterns. I created a flow with CC where I just dump a new source, like a podcast, post, or whatever, to have it for reference.
dandelionv1bes•1mo ago
Thank you for putting it together. I looked at a couple of the references and they seem to point to your blog. Do you have any view of which patterns are most popular in terms of citations? Might be useful.
nkko•1mo ago
That’s a good idea; I’ll have to think a bit about how to implement it.
d-lisp•1mo ago
Is this a joke like FizzBuzzEnterpriseEdition [0] ?

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

amelius•1mo ago
Inspired by https://en.wikipedia.org/wiki/Design_Patterns I presume.
nkko•1mo ago
Yes!
nialse•1mo ago
Note: at the time of writing, the comments are largely skeptical.

Reading this as an avid Codex CLI user, I find that some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve, and they may become counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem and is likely an issue that will go away over time.

There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.

But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.
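
For illustration, the setup I mean looks roughly like this (tool names and the dispatch format are made up, not any specific agent framework); given both tools, a current reasoning model will usually call grep_logs repeatedly rather than read_file on the whole log:

    import re
    from pathlib import Path

    def grep_logs(path: str, pattern: str, max_lines: int = 50) -> str:
        # Targeted lookup: cheap on context, favoured by reasoning models.
        lines = Path(path).read_text().splitlines()
        hits = [l for l in lines if re.search(pattern, l)]
        return "\n".join(hits[:max_lines])

    def read_file(path: str) -> str:
        # Whole-file dump into context: the call the model may now refuse.
        return Path(path).read_text()

    TOOLS = {"grep_logs": grep_logs, "read_file": read_file}

    def dispatch(tool_call: dict) -> str:
        # tool_call comes back from the model, e.g.
        # {"name": "grep_logs", "args": {"path": "app.log", "pattern": "ERROR"}}
        return TOOLS[tool_call["name"]](**tool_call["args"])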

This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.

CjHuber•1mo ago
I have a theory that agents will improve a lot once they are trained on more recent data. I've had agents develop context anxiety because they still think an average LLM context window is around 32k tokens. Also, building agents with agents, letting them do prompt engineering and so on, is still very unsatisfactory: they keep talking about GPT-3.5 or Gemini 1.5 and try to optimize prompts for those old models, which of course were almost totally different things. So I wonder: if that's how they think of themselves as well, maybe that artificially limits their agentic behavior too, because they just don't know how much more capable they are than GPT-3.5.
nkko•1mo ago
Strong point. I’m considering tagging patterns better and adding things like a “model/toolchain-specific” flag and a “last validated (month/year)” field. Things change fast; for example, “Context anxiety” is likely less relevant now and should be reframed that way (or retired).
blks•1mo ago
Because the “strengths” of a model are based not on inherent characteristics but on user perception. It feels like model A is doing something better, the same way it feels like your productivity is high.
lunias•1mo ago
This is the real secret sauce right here: "score_7, score_8, score_9, watermark, paid_reward". Adding this to the end of all my prompts has unlocked response quality that I didn't think was possible! /s
nkko•1mo ago
Author here (nibzard). I started this back in May as a personal learning log. I agree with the skepticism about jargon and novelty. However, if something reads like overly complex common sense, that’s a bug, and I’d like to fix it. If you can point out 1–2 specific pages that feel sloppy or unactionable, I’ll rewrite them (or remove them). I’m also happy to add flags or improve the structure. Also, contributing new patterns would be grand. Of course, some or even all patterns are explicitly “emerging.”