https://news.ycombinator.com/item?id=47240834
The Discovery Layer
Verification represents only one dependency. The other: discovery.
The unratified.org ecosystem advertises its capabilities through open protocols:
/.well-known/agent-inbox.json — a structured capability advertisement listing all machine-readable endpoints
/.well-known/glossary.json — Schema.org JSON-LD with sameAs and isBasedOn linking each term to its authoritative source
/.well-known/taxonomy.json — SKOS ConceptScheme with exactMatch, closeMatch, and rdfs:seeAlso for semantic web interoperability
RSS feed — blog posts syndicated through a standard from 2002 that still outperforms proprietary notification APIs
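To make the glossary idea concrete, here is a minimal sketch of how an agent might consume one glossary entry. The entry shape shown is my own assumption, modeled on Schema.org's DefinedTerm with `sameAs`/`isBasedOn` links; the actual field layout served by unratified.org may differ.

```python
import json

# Hypothetical example of a single /.well-known/glossary.json entry:
# Schema.org JSON-LD linking the term to its authoritative sources.
# The exact shape used by unratified.org is an assumption here.
sample = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "name": "agent-inbox",
  "description": "A machine-readable capability advertisement.",
  "sameAs": "https://en.wikipedia.org/wiki/Well-known_URI",
  "isBasedOn": "https://www.rfc-editor.org/rfc/rfc8615"
}
""")

def authoritative_links(term: dict) -> list[str]:
    """Collect the links that ground a glossary term in its sources."""
    links = []
    for key in ("sameAs", "isBasedOn"):
        value = term.get(key)
        if isinstance(value, str):
            links.append(value)
        elif isinstance(value, list):
            links.extend(value)
    return links

print(authoritative_links(sample))
```

Note that no authentication or API key is involved: an agent just fetches the JSON and follows the links.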
These protocols share a design assumption: the web remains crawlable, discoverable, and structured. An agent encountering unratified.org can navigate from the agent-inbox to the glossary to the taxonomy to the blog, without authentication, without API keys, and without rate-limiting negotiations.

In one of my experiments I had the simple goal of "making Linux binaries smaller to download using better compression" [1]. Compression is perfect for this: it is easily validated (binary -> compress -> decompress -> binary), so each iteration should make a measurable dent, otherwise the attempt is thrown out.
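The round-trip check described above can be sketched as a tiny harness. This is a minimal illustration using stdlib codecs (zlib, lzma) as stand-in compressors, not the actual setup from the linked experiment:

```python
import lzma
import zlib

def validate_roundtrip(binary: bytes, compress, decompress):
    """Run the binary -> compress -> decompress -> binary check.

    Returns the compressed size if the round trip is lossless,
    or None if the attempt should be thrown out.
    """
    compressed = compress(binary)
    if decompress(compressed) != binary:
        return None  # lossy or broken codec: discard this iteration
    return len(compressed)

# Compare candidate codecs on the same payload; an iteration only
# "counts" if it survives the round trip AND shrinks the result.
payload = b"\x7fELF" + bytes(1000)  # stand-in for a real Linux binary
candidates = {
    "zlib": validate_roundtrip(payload, zlib.compress, zlib.decompress),
    "lzma": validate_roundtrip(payload, lzma.compress, lzma.decompress),
}
best = min((name for name, size in candidates.items() if size is not None),
           key=lambda name: candidates[name])
print(best, candidates[best])
```

The point is that the validation step is cheap and objective, which is exactly what keeps an agent loop honest.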
Lessons I learned from my attempts:
- Do not micromanage. The AI is probably good at coming up with ideas and does not need much input from you
- The test harness is everything: if you don't have a way of validating the work, the loop will go astray
- Let the iterations experiment. Let the AI explore ideas and break things in its experiments. An iteration might take longer, but those experiments are valuable for the next one
- Keep some .md files as a scratch pad between sessions, so each iteration of the loop can learn from previous experiments and attempts
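The last lesson, a .md scratch pad shared across sessions, can be sketched as a small helper the loop calls after each attempt. The file name, entry format, and the example note below are all my own illustrative assumptions:

```python
from datetime import date
from pathlib import Path

SCRATCHPAD = Path("NOTES.md")  # assumed file name, persisted between sessions

def log_attempt(idea: str, outcome: str) -> None:
    """Append one experiment's result so the next iteration can read it."""
    entry = f"\n## {date.today().isoformat()}: {idea}\n\n{outcome}\n"
    with SCRATCHPAD.open("a", encoding="utf-8") as f:
        f.write(entry)

# Illustrative note; the idea and outcome here are made up.
log_attempt(
    "try lzma instead of zlib for binary sections",
    "Round trip passed; smaller output, but noticeably slower to decompress.",
)
print(SCRATCHPAD.read_text(encoding="utf-8"))
```

Append-only markdown works well here because the next session can simply be told to read the file before starting.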
Good news: agents are good at open-ended work like adding new tests and finding bugs. Do that. Also write unit tests and Playwright tests. Testing everything via web driving seemed insane pre-agents, but now it's more than doable.
0: https://wiki.roshangeorge.dev/w/Blog/2025-12-01/Grounding_Yo...
Feels like it’s a lot of words to say what amounts to: make the agent do the steps we know work well for building software.
I've dipped into agentic work now and again, but have never been very impressed with the output (well, that there is any functioning output is insanely impressive, but it isn't code I want to be on the hook for).
I hear a lot of people saying the same, but similarly a bunch of people I respect saying they barely write code anymore. It feels a little tricky to square these up sometimes.
Anyway, really looking forward to trying some of these patterns as the book develops to see if that makes a difference. Understanding how other people really use these tools is a big gap for me.
I think using agents for larger tasks was always very hit or miss, up to about the end of last year.
In the past couple of months I have found them to have gotten a lot better (and I'm not the only one).
My experience with what coding assistants are good for shifted from:
smart autocomplete -> targeted changes/additions -> full engineering