I've been watching the rise of AI agents with a mix of excitement and dread. We're building incredible tools that can browse the web, but we're forcing them to navigate a world built for human eyes. They scrape screens and parse fragile DOMs.
We're trying to tame them into acting like humans. I believe this is fundamentally wrong. The goal isn't to make AI operate at a human level, but to unlock its superhuman potential.
The current path is dangerous. When agents from OpenAI, Google, and others start browsing at scale and speed, concepts like UI/UX will lose meaning for them. The entire model of the web is threatened. Website owners are losing control over how their sites are used, and no one is offering a real solution. The W3C is thinking about it. I decided to build it.
That's why I created AURA (Agent-Usable Resource Assertion).
It's an open protocol with a simple, powerful idea: let website owners declare what an AI can and cannot do. Instead of letting an agent guess, the site provides a simple aura.json manifest.
This gives control back to the site owner. It's a shift from letting AIs scrape data to granting them explicit capabilities. We get to define the rules of engagement. This expands what AIs can do, not by letting them run wild, but by giving them clear, structured paths to follow.
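To make this concrete, here's a simplified sketch of what such a manifest could look like. The field names below are illustrative only, not the canonical schema; the real schema lives in the repo:

```json
{
  "protocol": "aura",
  "version": "1.0",
  "site": "https://example.com",
  "capabilities": [
    {
      "id": "list_posts",
      "description": "List all published blog posts",
      "method": "GET",
      "endpoint": "/api/posts",
      "auth": "none"
    },
    {
      "id": "create_post",
      "description": "Create a new blog post",
      "method": "POST",
      "endpoint": "/api/posts",
      "auth": "session_cookie"
    }
  ]
}
```

A cooperative agent reads this once and knows exactly which endpoints exist, how to call them, and which ones require authentication.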
A confession: I'm not a hardcore programmer; I consider myself more of a systems thinker. I actually used AI extensively to help me write the reference implementation for AURA. It felt fitting to use the tool to build its own guardrails.
The core of the protocol, a reference server, and a client are all open source on GitHub. You can see it work in 5 minutes:
1. Clone and install: git clone https://github.com/osmandkitay/aura.git && cd aura && pnpm install
2. Run the server: pnpm --filter aura-reference-server dev
3. Run the agent (in a new terminal): pnpm --filter aura-reference-client agent -- http://localhost:3000 "list all the blog posts"
You'll see the agent execute the task directly, with no scraping or DOM parsing involved.
The GitHub repo is here: https://github.com/osmandkitay/aura
I don't know if AURA will become the standard, but I believe it's my duty to raise this issue and start the conversation. This is a foundational problem for the future of the web. It needs to be a community effort.
The project is MIT licensed. I'm here all day to answer questions and listen to your feedback—especially the critical kind. Let's discuss it.
JohnFen•1d ago
OsmanDKitay•1d ago
For example, the aura.json manifest says the create_post capability requires auth. If an agent ignores that and POSTs to /api/posts without a valid cookie, our server's API will reject it with a 401. The manifest doesn't do the blocking; the backend does. The manifest just tells cooperative agents the rules of the road ahead of time.
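Roughly, the backend side of that looks like this. It's a simplified Express-style sketch, not the actual reference-server code; the cookie and helper names are placeholders:

```typescript
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(express.json());
app.use(cookieParser());

// aura.json advertises that create_post requires auth,
// but enforcement lives here in the backend, not in the manifest.
app.post("/api/posts", (req, res) => {
  const session = req.cookies["session"]; // placeholder cookie name
  if (!session || !isValidSession(session)) {
    // Agents that ignore the manifest hit this wall anyway.
    return res.status(401).json({ error: "authentication required" });
  }
  // Cooperative agents that followed the manifest land here.
  res.status(201).json(createPost(req.body));
});

// Stand-ins for real session and persistence logic.
function isValidSession(token: string): boolean {
  return token.length > 0;
}
function createPost(body: unknown) {
  return { id: 1, ...(body as object) };
}

app.listen(3000);
```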
So the real incentive for an agent to use AURA isn't about avoiding punishment; it's about the huge upside in efficiency and reliability. Why scrape a page and guess at DOM elements when you can make a single, clean API call that you know will work? It saves the agent developer time, compute resources, and the headache of maintaining brittle scrapers.
So:
robots.txt tells good bots what they shouldn't do.
aura.json tells them what they can do, and gives them the most efficient way to do it, all backed by the server's actual security logic.
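And from the agent's side, that "single, clean API call" could look something like this. It's a sketch with illustrative paths and field names, not the reference client's actual code:

```typescript
// Hypothetical agent flow: discover capabilities, then call one directly.
async function listPosts(siteUrl: string) {
  // 1. Fetch the manifest instead of scraping HTML (path is illustrative).
  const manifest = await (await fetch(`${siteUrl}/aura.json`)).json();

  // 2. Look up the advertised capability.
  const cap = manifest.capabilities.find((c: any) => c.id === "list_posts");
  if (!cap) throw new Error("site does not expose list_posts");

  // 3. One clean, predictable API call: no DOM parsing, no guessing.
  const res = await fetch(`${siteUrl}${cap.endpoint}`, { method: cap.method });
  return res.json();
}

listPosts("http://localhost:3000").then(console.log);
```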
JohnFen•1d ago
In that light, I guess your proposal makes a certain amount of sense. I don't think it addresses what a lot of web sites want, but that's not necessarily a bad thing. Own your niche.