Docs will always have things missing regardless of whether a human or an AI writes them. A fuzzer might overshoot and document a ton of "unintended features" (bugs). Bugs are inevitable for similar reasons. And lastly, is this how the rest of the world finally realizes how hard this stuff really is? Can we please get rid of pointy-haired bosses and iron-fisted management that refuse to cut some slack for lower-level problems like this?
I'm all for living in this century, AI included, but that also means new ways of running a business and of treating the people we hire.
Have you seen the average user in action? I'm fairly sure that's true, at least on average. Even putting up huge red warnings like "This action is irreversible" for some things will lead to users reaching out to say they didn't see it.
No. You should assume your user is dumber than a brick - and not one of the smart or clever bricks, one of the really dumb ones.
The actual contents of the article are more about using an AI agent to playtest your docs. The premise is actually the opposite of the title: if an AI agent can figure out your API then your users probably can too.
I'll grant you your silly hamburger icon, I can memorize that one, if you grant me text icons for everything else, and low latency, and leave your ego and whatever you think you know about customers wanting to memorize 8 different icons for each app they use at the door.
Agreed.
> Do you really think it makes sense for your basic assumption to be that your user is dumber than a token generator?
Absolutely. Users will fuck up basic things. I wrote a simple inotify wrapper and half of the issues I got were that it didn't work on Windows and MacOS.
In fact, I suspect that endpoints that create users and upgrade permissions will probably have to have special attention to protect against AI agent attacks.
"Claude -- sign me up for a new account so I can get free shipping on my first purchase!"
If AI can't use X, then there is something wrong with X.
X in { website, codebase, function, language, library, mcp, ...}
The next AI winter is going to be brutal and highly profitable for actual skilled devs.
The challenge with making things idiot-proof is the ingenuity of idiots. Remember, 50% of people are below the median.
That's why programming salaries are so low and why nobody stays in the field after a couple years - it's too hard to make a living when you have to compete with people fresh out of training bootcamps.
Lol, where do you live/work? Almost everywhere, you get paid more as a software developer than as a nurse (just one example), and the difference in impact on your health/sanity between the two roles is huge.
I think people who have never worked in anything other than software don't know how easy they have it.
Now I'm waiting for the chance to use LLMs to create an API for a package of mine. It will be averaged from all the other APIs and won't have any unexpected calls.
Something I realized when refactoring is it's easier to vibe refactor a codebase that was itself vibe coded because all the code is "in distribution". If I try to vibe refactor a codebase I wrote, it doesn't cohere with what it expects to see and hallucinates more.
Did it though? It didn't create an API, it created the appearance of an API. The reality of the API, something the library author had to wrestle with and the LLM didn't, is probably much more complex and nuanced than the LLM is hallucinating.
Maybe the API it's hinting at would be better if made real. But it pains me that you're telling us the LLM, a tool known for making things up and being wrong, can do it better than a person who actually spent their time making and putting a real thing out into the world, despite not even having done it. Maybe one day, but just making up BS is not creating a better API.
That's the kind of thinking that leads to janky APIs. When you say "just" you're doing the same thing the LLM does - you're removing all the nuance and complexity from the activity.
For example, your concept of an API as just a set of functions does not consider how the API changes over time. Library authors who take this into account will have a better time evolving the library API. Library authors who don't might write themselves into a corner, which might force some sort of API version schism which causes half the API to be nice while the other half has questionable decisions, causing perpetual confusion and frustration with users for decades.
The LLM hallucinating some nice looking function calls doesn't really take any of that into account.
* Simplicity of input knobs - way too many APIs are unapproachable with the number and complexity of inputs
* Complete documentation - if you don't document a parameter or endpoint, expect that an AI agent will never use it (or at least not the way you want it to), especially in multi-agentic systems where your tool needs to be chosen by the LLM
* Clear, descriptive API outputs, so that an agentic system knows if and how to include them in its final output
* No overloading of endpoint functionality - one endpoint, one purpose
* Handle errors and retries gracefully
These guidelines are not rocket science, but API design has shifted to be too user "unfriendly" and lacks empathy for users. It's ironic that user empathy needs to increase again now that the users are agents :)
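A minimal sketch of two of these guidelines (complete documentation and descriptive errors), assuming a hypothetical endpoint spec; the endpoint name and fields here are illustrative, not the real Tako API:

```python
# Hypothetical endpoint spec: every parameter carries a description, and the
# endpoint has exactly one purpose. Names and fields are made up for illustration.
SEARCH_ENDPOINT = {
    "name": "search_cards",
    "description": "Search knowledge cards by natural-language query.",
    "parameters": {
        "query": {"type": "string", "description": "Natural-language search text."},
        "limit": {"type": "integer", "description": "Max results to return (1-20)."},
    },
}

def call_endpoint(spec, args):
    """Reject undocumented parameters instead of silently ignoring them,
    and return a descriptive error an agent can actually act on."""
    unknown = set(args) - set(spec["parameters"])
    if unknown:
        return {
            "ok": False,
            "error": f"Unknown parameters: {sorted(unknown)}. "
                     f"Valid parameters: {sorted(spec['parameters'])}",
        }
    return {"ok": True, "result": f"{spec['name']} called with {args}"}
```

The point of the error message enumerating the valid parameters is that an agent (or a human) can self-correct on the next call instead of guessing.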
P.S. this topic is near and dear to my heart, since I just designed and implemented an agent-friendly API at Tako. Check out the docs (docs.trytako.com) and the API playground (trytako.com/playground) [requires login for free custom queries] we built to showcase how easy the API is to use. Feedback and discussion welcome!
I mean — maybe in some glorious alternate timeline, but not ours, I guess?
Not exactly where I'd like to see us go, but at least we'll never get outdated information.
For example, if you're deploying a Postgres proxy, it will have a TCP timeout setting that you can tweak. Neither the docs nor the code will tell you what the value should be set to though.
Your engineers might know, because they have seen your internal network fail dozens of times and have a good intuition about it.
Software complexity has a wide range. If you're thinking of simple things like Sendgrid, Twilio or Stripe APIs, sure, an agent can easily write some boilerplate. But I think in certain sectors, we would need to attach some more inputs to the model that we currently don't have to get it to a good spot.
We do have:
* WADLs: https://en.wikipedia.org/wiki/Web_Application_Description_La...
* JSON Schema: https://json-schema.org/learn/miscellaneous-examples
And when they are available they're incredible, but nobody uses them.
Yes, having an API type declaration is really important. And yes, somehow a lot of people just don't use those things. But WSDL was one of the worst standards for that of all time. (It also inherited all of the shittiness of XML, even allowing non-deterministic processing and side effects while reading the file.)
Anyway, I'm not really disagreeing with your main point. Read the JSON Schema docs, people, and use them.
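To make the plug concrete: a schema like the one below is what JSON Schema lets you declare about a request body. The validator here is a hand-rolled sketch covering only the `type`, `required`, and `properties` keywords; in practice you'd use a real validator such as the `jsonschema` package rather than this toy:

```python
# Example schema for a hypothetical request body.
schema = {
    "type": "object",
    "required": ["email"],
    "properties": {
        "email": {"type": "string"},
        "age": {"type": "integer"},
    },
}

def check(instance, schema):
    """Toy check of three JSON Schema keywords: type, required, properties."""
    type_map = {"object": dict, "string": str, "integer": int}
    if not isinstance(instance, type_map[schema["type"]]):
        return False
    for key in schema.get("required", []):
        if key not in instance:
            return False
    for key, sub in schema.get("properties", {}).items():
        if key in instance and not isinstance(instance[key], type_map[sub["type"]]):
            return False
    return True
```

The payoff for agents is exactly the thread's point: a machine-readable contract means a caller can tell *before* sending a request whether it is malformed, instead of reverse-engineering prose docs.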
...
"My theory" is that the ease at which one can turn a function into an exposed, documented API is inversely proportional to the likelihood of it being a quality API. I think automagic annotations which turn functions into JSON APIs obey the same principle, for what its worth.
To be fair, it's a very simple format, but it made me feel good about the quality of the documentation.
EDIT: Coincidentally, it just dawned on me that I'm very likely replying to an AI that knows how to use the HN API.