Hey everyone,
Yathartha here, I'm the solo founder of Kodec AI, and I'd love your feedback on a POC I've been heads-down building.
The idea came from a simple observation: AI agents are getting smarter, but the web wasn't built for them. It's like dropping a robot into a kitchen with no labels: it fumbles, hallucinates, or fails entirely. That's why we see stories like the "one-hour cupcake order" from ChatGPT agents.
I believe the fix is a smarter, machine-readable web.
So I built a prototype for a new open protocol: 'kodec.txt'.
It's like robots.txt, but instead of saying what bots can't do, it says what they can do. A site's /kodec.txt file defines actionable intents (like 'OrderAction', 'ScheduleAction', 'EmailAction') and specifies the inputs required to perform them. It's a structured, declarative instruction manual for agents, no need for scraping or guessing.
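To make that concrete, here's a rough sketch of what a /kodec.txt entry could look like. The field names ('intents', 'endpoint', 'inputs', etc.) are illustrative, not a finalized spec:

```json
{
  "version": "0.1",
  "intents": [
    {
      "type": "OrderAction",
      "description": "Order a product from the catalog",
      "endpoint": "https://example.com/api/order",
      "method": "POST",
      "inputs": {
        "product_id": { "type": "string",  "required": true },
        "quantity":   { "type": "integer", "required": true },
        "email":      { "type": "string",  "required": true }
      }
    }
  ]
}
```

The point is that an agent can read this once and know exactly which action exists, where to call it, and what it needs to collect from the user first.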
You can check out the demo (kodec.net/blog/showhn) of a single agent completing 3 tasks using just this protocol:
* Ordering a product
* Sending an email
* Booking a consultation (with Kodec itself)
Below the video, I've included the full unedited log of the agent's step-by-step reasoning.
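Conceptually, the agent's loop is simple: fetch /kodec.txt, match the user's request to a declared intent, collect any missing required inputs, then call the endpoint. A stripped-down sketch (the parsed structure and helper names are illustrative, not the actual POC code):

```python
# Illustrative sketch; the parsed /kodec.txt structure is hypothetical.
intents = [{
    "type": "OrderAction",
    "endpoint": "https://example.com/api/order",
    "inputs": {
        "product_id": {"type": "string", "required": True},
        "quantity": {"type": "integer", "required": True},
    },
}]

def match_intent(intents, wanted_type):
    """Pick the first declared intent matching the requested action type."""
    for intent in intents:
        if intent["type"] == wanted_type:
            return intent
    return None

def missing_inputs(intent, provided):
    """List required inputs the user hasn't supplied yet, so the agent
    knows what to ask for before calling the endpoint."""
    return [name for name, spec in intent["inputs"].items()
            if spec.get("required") and name not in provided]
```

Once `missing_inputs` returns an empty list, the agent has everything it needs to make the call deterministically instead of guessing at a UI.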
It's very early, and I know there are edge cases everywhere — but I'd love your thoughts on:
1. Decentralized vs centralized: Does this make sense as a decentralized standard (like robots.txt), or would developer adoption require a centralized registry (like OpenAI plugins)?
2. Security & abuse vectors: What attack surfaces or pitfalls do you foresee if this became widely adopted?
3. Great-fit use cases: What other kinds of actions would benefit from this protocol?
Thanks in advance for taking a look. I'll be around all day and would love to discuss more.
ifeanyi_sa•20h ago
The way I’m understanding it, it’s basically a way to turn your website into an MCP server by defining the list of tools your site has, is that correct?
Really cool idea! It’s true that the web isn’t yet agent-ready, and this could be an easy way to start that process. I think the main challenge that comes to mind is auth on the tool-calling side, unless I’m not understanding correctly
yregmi•19h ago
Yes! It's exactly about creating a standardized way for a website to expose its tools to the agentic web.
Auth is level 2 of this design; it's a huge piece of the puzzle. My vision is that the kodec.txt protocol would evolve to include an optional authentication block, likely declaring a standard like OAuth 2.0.
With that in place, a trusted agent (one the user has already connected to their account, similar to a "Login with Google" flow) would see that the action requires authorization. The agent would then be responsible for managing the OAuth token on behalf of the user and making the authenticated API call securely.
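As a rough sketch of what I mean (the "auth" block and field names are hypothetical, nothing here is spec'd yet): an authenticated intent would declare its requirements, and the agent would attach the user's token when calling it:

```python
# Sketch only: the "auth" block and field names are hypothetical,
# not part of any finalized kodec.txt spec.
intent = {
    "type": "ScheduleAction",
    "endpoint": "https://example.com/api/book",
    "auth": {"scheme": "oauth2", "scopes": ["calendar.write"]},
}

def build_headers(intent, token=None):
    """Build request headers for an intent call, attaching a Bearer
    token when the intent declares an auth block."""
    headers = {"Content-Type": "application/json"}
    if intent.get("auth"):
        if token is None:
            raise PermissionError("this intent requires authorization")
        headers["Authorization"] = f"Bearer {token}"
    return headers
```

Public actions would work exactly as today, while agents hitting an authenticated intent without a token would get a clear, structured failure instead of a silent one.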
For this POC, all the actions are public and unauthenticated to keep it simple, but your question is exactly where the protocol needs to go next to handle private, user-specific actions. It's a massive and exciting challenge. Thanks for bringing it up!