1996: https://web.archive.org/web/19961221024144/http://www.acm.or... > Computer-based agents have gotten attention from computer scientists and human interface designers in recent years
About an hour ago I used Opus 4.5 to give me a flat list with summaries. I tried to post it here as a comment, but it was too long and I didn't bother to split it up. They all seem to be things I've heard of in one way or another, but nothing really stood out for me. Don't get me wrong, they're decent concepts, and it's clear others appreciate this resource more than I do.
https://github.com/nibzard/awesome-agentic-patterns/commits/...
Unfortunately it isn’t possible to detect whether AI was being used in an assistive fashion, or whether it was the primary author.
Regardless, a skim read of the content reveals it to be quite sloppy!
It's as if I made a list of dev patterns and wrote:
- caffeinated break for algorithmic thinking improvement
When I'm stuck on a piece of algorithmic logic, take a coffee break, then go back to my desk and work on it again.
Here is one of the first "patterns" in the project, which I opened as an example:
Dogfooding with rapid iteration for agent improvement.
Developing effective AI agents requires understanding real-world usage and quickly identifying areas for improvement. External feedback loops can be slow, and simulated environments may not capture all nuances.
Solution:
The development team extensively uses their own AI agent product ("dogfooding") for their daily software development tasks.
Or "Extended coherence work sessions":
Early AI agents and models often suffered from a short "coherence window," meaning they could only maintain focus and context for a few minutes before their performance degraded significantly (e.g., losing track of instructions, generating irrelevant output). This limited their utility for complex, multi-stage tasks that require sustained effort over hours.
Solution:
Utilize AI models and agent architectures that are specifically designed or have demonstrably improved capabilities to maintain coherence over extended periods (e.g., several hours).
Don't tell me that isn't all bullshit... I'm not saying that what it says is untrue.
Just imagine you took a two-page pamphlet on how to use an LLM and split every sentence into a wannabe "pattern".
> There’s definitely a tendency to dress up fairly straightforward concepts in academic-sounding language. “Agentic” is basically “it runs in a loop and decides what to do next.” Sliding window is just “we only look at the last N tokens.” RAG is “we search for stuff and paste it into the prompt.” [...] When you’re trying to differentiate your startup or justify your research budget, “agentic orchestration layer” lands differently than “a script that calls Claude in a loop.“
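To make the deflation concrete, here's a minimal runnable sketch of all three ideas from that quote. Every function below is a hypothetical stand-in (no real SDK or provider API is being quoted):

```python
def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return "DONE: example answer"

def search_index(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical stand-in for a vector or keyword search over documents."""
    return [f"snippet {i} matching {query!r}" for i in range(top_k)]

def rag_answer(question: str) -> str:
    # RAG: "we search for stuff and paste it into the prompt".
    context = "\n".join(search_index(question))
    return call_llm([{"role": "user", "content": context + "\n\n" + question}])

def agent(task: str, max_steps: int = 10) -> str:
    # "Agentic": it runs in a loop and decides what to do next.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # Sliding window: we only look at the last N messages.
        reply = call_llm(history[-20:])
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        history.append({"role": "assistant", "content": reply})
    return "step budget exhausted"
```

That loop, plus a tool dispatcher, is structurally the whole "agentic orchestration layer".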
People are calling if-then cron tasks “agents” now
There are so, so, so many prompt and agentic-pattern repositories out there. I'm pretty turned off by this repo flouting the convention of what awesome-* repos are: it is the work itself, rather than a list of links to the good work that's already out there for us to judge.
A few years ago we had GitHub resource-spam about smart contracts and Web3 and AWESOME NFT ERC721 HACK ON SOLANA NEXT BIG THING LIST.
Now we have repos for the "Self-Rewriting Meta-Prompt Loop" and "Gas Town":
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.
It says so right there: "Do not use Gas Town."
Star-farming anno 2026.
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
As an avid Codex CLI user reading this, I find some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve, and may become counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem and is likely an issue that will go away over time.
There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.
But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.
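For illustration, the two tool shapes in that log example might look like the following. The names and signatures are hypothetical, not the actual Codex CLI tools; the point is that a reasoning model tends to reach for the targeted grep-style tool (system 2) and may decline the read-everything tool (system 1) even when instructed:

```python
import re
from pathlib import Path

def read_log(path: str) -> str:
    """Load the entire log into context: cheap to write, expensive in tokens."""
    return Path(path).read_text()

def grep_log(path: str, pattern: str, context: int = 2) -> str:
    """Return only matching lines, with a few lines of surrounding context."""
    lines = Path(path).read_text().splitlines()
    chunks = []
    for i, line in enumerate(lines):
        if re.search(pattern, line):
            lo, hi = max(0, i - context), i + context + 1
            chunks.append("\n".join(lines[lo:hi]))
    return "\n---\n".join(chunks)
```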
This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.