I have been on both the AI crawler side and the website owner side, and I keep seeing the same mess.
Bots scrape, publishers block, and the open web suffers.
Now, AI reads more webpages than people do. Can we design an equitable partnership instead of fighting an arms race?
PEAC Protocol is an open, collaborative attempt at exactly that.
Why should you care?
If PEAC works, we get transparency instead of walls:
- Sites stay economically sustainable.
- AIs get cleaner, higher-quality data.
- The web remains open and beneficial for everyone.
But it only works if both sides join in:
- As a publisher, drop a `pricing.txt` or `.well-known/peac.json` on your website with your terms, choosing among free, attribution, or pay-per-query (a sketch of what that file could look like follows this list).
- As an AI company or agent builder, you signal compliance with those terms and in return unlock higher-quality, trusted access.
- For both, it is a trackable and enforceable layer for consent, compliance, and commerce.
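To make the publisher side concrete, here is a purely hypothetical sketch of what a `.well-known/peac.json` could declare. The field names (`version`, `terms`, `pricing`, `attribution`, `contact`) and values are my own illustration of the idea, not the actual PEAC schema:

```json
{
  "version": "0.1",
  "terms": {
    "ai_training": "pay-per-query",
    "ai_inference": "attribution",
    "search_indexing": "free"
  },
  "pricing": {
    "currency": "USD",
    "per_query": 0.002
  },
  "attribution": {
    "required": true,
    "format": "link-back"
  },
  "contact": "licensing@example.com"
}
```

The idea is that an agent fetches that one path before crawling, parses the terms, and either complies (attributes, pays, or skips) or falls back to the site's existing `robots.txt`; that single, auditable round trip is what makes the consent layer trackable.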
Open questions for HN:
AI/agent builders, if you found a `pricing.txt` or `.well-known/peac.json` on a site, would you feel compelled to honor it as part of your normal workflow?
Publishers, does a clear, simple file for setting access terms (like `pricing.txt`) feel like something you’d adopt, or does “block everything” still seem the path of least resistance?
Developers, what would you do differently to make programmable access and consent both reliable and frictionless?
Best, JR