Example: https://yazzy.carter.works/https://paulgraham.com/submarine....
https://github.com/carterworks/yazzy
It uses Steph Ango's (kepano's) defuddle under the hood.
The live version is hosted on a single free-tier fly.io node, but it is easily self-hosted.
Also, I understand writing READMEs is boring, but please at least edit what the LLM produces. You do not need this many content-free emoji bullet point sections.
Edit: Looking at the prompt made me realize that the output of this would obviously be completely untrustworthy: https://github.com/subranag/declutter/blob/main/src/llm.ts#L...
It does seem like a lot of computational effort to achieve what F9 / Reader View already does in Firefox.
Sorry, let me ask ChatGPT to put it in terms people seem to prefer now (I don't think this stuff is actually quite right, but who cares anymore):
## 1. They Optimize for Politeness, Not Usefulness
ChatGPT READMEs tend to:
- Over-explain obvious things
- Avoid strong claims
- Hedge unnecessarily
The result is text that feels safe but not informative. A good README should reduce uncertainty quickly, not pad it with disclaimers and filler.
## 2. They Follow Templates Instead of Intent
Most generated READMEs look structurally correct but contextually shallow:
- Generic section headings (“Installation”, “Usage”, “Contributing”) regardless of relevance
- Boilerplate language that could apply to almost any project
- No clear prioritization of what actually matters
This signals that the README was assembled, not written with purpose.
## Summary
ChatGPT READMEs are usually:
- Correct but unhelpful
- Polished but shallow
- Complete but low-signal

They claim the protocol is resilient to enshittification.
Or perhaps the LLM should have known to do that.