Protocols and standards like HTML that were built around "be liberal in what you accept" have turned out to be a real nightmare. Best-guessing the intent of your caller is a path to subtle bugs and behavior that's difficult to reason about.
If the LLM isn't doing a good job of calling your API, then make the LLM smarter or rebuild the API; don't make the API looser.
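To make that concrete, here's a minimal sketch of what "keep the API strict" can look like, assuming a Zod-style schema; the endpoint and field names are made up for illustration:

```typescript
import { z } from "zod";

// Hypothetical request schema. `.strict()` rejects unknown keys instead
// of silently dropping them, and nothing is coerced -- a malformed
// request fails loudly rather than being "best-guessed" into something
// the caller may not have meant.
const CreateReservationRequest = z.object({
  roomId: z.string().uuid(),
  date: z.string().regex(/^\d{4}-\d{2}-\d{2}$/), // ISO date, no fuzzy parsing
  partySize: z.number().int().positive(),
}).strict();

export function handleCreateReservation(body: unknown) {
  const parsed = CreateReservationRequest.safeParse(body);
  if (!parsed.success) {
    // Return the validation errors verbatim: a precise 400 teaches the
    // caller (human or LLM) what to fix; a lenient guess teaches nothing.
    return { status: 400, errors: parsed.error.issues };
  }
  return { status: 201, reservation: parsed.data };
}
```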
This feels a bit like the setup to the “But you have heard of me” joke in Pirates of the Caribbean (2003).
Back when XHTML was somewhat hyped and there were sites which actually used it, I recall being met with a big fat "XML parse error" page on occasion. If XHTML had really taken off (as in, a significant majority of web pages being XHTML), those XML parse error pages would have become way more common, simply because developers sometimes write bugs and many websites are server-generated with dynamic content. I'm 100% convinced that some browser would have decided to implement special rules in its XML parser to try to recover from errors. And then that browser would have had a significant advantage in the market; users would start to notice, "sites which give me an XML Parse Error in Firefox work well in Chrome, so I'll switch to Chrome". And there you have the exact same problem as HTML, even though the standard itself is strict.
The magical thing about HTML is that they managed to make a standard, HTML5, which incorporates most of the special-case rules as implemented by browsers. As such, all browsers can be lenient, but they're all lenient in the same way. A strict standard which mandates, e.g., "the document MUST be valid XML" results in implementations which are lenient, but lenient in different ways.
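You can actually watch this play out with the standard DOMParser API by feeding it the same broken markup as HTML and as XML. A quick browser-console sketch (the exact error-document shape varies by browser, which is rather the point):

```typescript
// Runs in any modern browser console.
const malformed = "<p>unclosed paragraph <b>bold<p>second paragraph";

// HTML path: error recovery is part of the HTML5 spec, so every
// conforming browser builds the same DOM tree from this broken input.
const htmlDoc = new DOMParser().parseFromString(malformed, "text/html");
console.log(htmlDoc.body.innerHTML);
// Both paragraphs get closed and the <b> is reconstructed -- the same
// tree in every conforming browser.

// XML path: the strict XHTML-era behavior. No usable tree, just an
// error document -- the "big fat XML parse error" page.
const xmlDoc = new DOMParser().parseFromString(malformed, "application/xml");
console.log(xmlDoc.getElementsByTagName("parsererror").length > 0);
// true in practice, though the error document's structure differs
// slightly between browsers.
```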
HTML should arguably have been specified as lenient from the start. Making a lenient standard from scratch is probably easier than standardizing the commonalities between many differently-lenient implementations of a strict standard, which is what HTML5 had to do.
The main argument about XHTML not being "lenient" always centred around the client UX of error display. Chrome even went on to actually implement user-friendly partial-parse/partial-render handling of XHTML files that solved everyone's complaints purely through UI design, without any spec changes, but by that stage it was already too late.
The whole story of why we went with HTML is somewhat hilarious: one guy wrote an ill-informed blog post bitching about XHTML, generated a lot of hype, made zero concrete proposals to solve its problems, and then somehow convinced major browser makers (his current and former employers) to form an undemocratic rival group to the W3C, in which he was appointed dictator. An absolutely bizarre story for the ages. I do wish it were documented better, but alas, most of the resources around it were random dev blogs that have since link-rotted.
The only difference between that and not being lenient in the first place is a whole lot more complex logic in the specification.
The answer of course depends on the context and the circumstances, admitting no general answer for every case, though the cognitively self-impoverishing will, as ever, seek to show otherwise. What is undeniable is that if you don't specify your reservations API to reject impermissible or blackout dates, sooner or later, whether via AI or otherwise, you will certainly come to regret it. (Date pickers, after all, being famously among the least bug-prone of UI components...)
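For what it's worth, the server-side check is small. A sketch with invented names throughout (BLACKOUT_DATES, validateReservation) — this isn't any real reservations API:

```typescript
// Hypothetical blackout list -- in a real system this would come from
// a database or config, not a hard-coded set.
const BLACKOUT_DATES = new Set(["2025-12-25", "2026-01-01"]);

interface ReservationRequest {
  date: string; // "YYYY-MM-DD"
}

// Enforce the rule in the API itself, not only in the date picker.
// Clients (and their bugs, and their LLMs) come and go; the server is
// the one place the invariant reliably holds.
function validateReservation(req: ReservationRequest): string | null {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(req.date)) {
    return "date must be YYYY-MM-DD"; // a real check would also validate the calendar date
  }
  // Lexicographic comparison is safe for ISO dates.
  const todayUtc = new Date().toISOString().slice(0, 10);
  if (req.date < todayUtc) {
    return "date is in the past";
  }
  if (BLACKOUT_DATES.has(req.date)) {
    return "date falls on a blackout date";
  }
  return null; // accepted
}
```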
Internally at Stytch, three sets of folks had been working along similar paths, e.g. device auth for agents, serving a different documentation experience to agents vs. human developers, etc. We realized it all comes down to a brand new class of users on your properties: agents.
IsAgent was born because we wanted a quick and easy way to identify whether a user agent on your website was an agent (a user-permissioned agent, not a "bot" or crawler) or a human, and then give you super clean <IsAgent /> and <IsHuman /> components to use (rough usage sketched below).
Super early days on it, happy to hear others are thinking about the same problem/opportunity.
[1] GitHub here: http://github.com/stytchauth/is-agent
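The component names above are real, but the children-based usage here is a rough sketch rather than the repo's documented API, and the placeholder child components are invented; check the GitHub link for the actual interface:

```tsx
import * as React from "react";
import { IsAgent, IsHuman } from "is-agent"; // package name assumed from the repo URL

// Hypothetical placeholder components, just for illustration.
const HumanFriendlyGuide = () => <p>Step-by-step tutorial for people.</p>;
const AgentFacingReference = () => <pre>{'{ "openapi": "3.1.0", ... }'}</pre>;

export function Docs() {
  return (
    <>
      {/* Rendered only for user-permissioned agents. */}
      <IsAgent>
        <AgentFacingReference />
      </IsAgent>
      {/* Rendered only for human visitors. */}
      <IsHuman>
        <HumanFriendlyGuide />
      </IsHuman>
    </>
  );
}
```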
bubblyworld•2h ago
In theory the AI can talk to you too, but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).
freedomben•2h ago
The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but as Spock would say, it does seem logical.
zahlman•49m ago
Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.
righthand•2h ago
And all those tenets of building good APIs, documentation, and code run opposite to the incentives behind building enshittified APIs, documentation, and code.