Eventually the quality drops to such a level that some poor bastard spends their time bringing it all back up to standard - and the cycle repeats.
Gaining that trust is really hard. The documentation needs to be safe to read, in that it won't mislead you or feed you stale information - the moment that happens, people lose trust in it.
Because the standard of internal docs at most companies is so low, employees will default to not trusting it. They have to be won over! That takes a lot of dedicated work, both in getting the documentation to a useful state and promoting it so people give it a chance.
Would you care to make an actual argument about why these things aren’t good for humans too?
> Step one, write the documentation yourself.
> Step two, bots hit your website hundreds of times per minute.
> Step three, users never come to your site, they use OpenAI's site.
> Step four, ??? openAI profits
1. Write the plan.
2. Ask Claude to review it for understandability.
3. Update as needed until it's clear.
4. Execute the task(s) in the plan.
I'm finding Claude gets much further on the first pass. And I can version the plans.
Here's a bookmarklet I found on HN years and years ago. I have it bound to a hot key so whenever a web site does something stupid like that, I can dismiss it with a keystroke.
Works about 90% of the time, and doesn't require any installation of anything.
javascript:(function()%7B(function%20()%20%7Bvar%20i%2C%20elements%20%3D%20document.querySelectorAll('body%20*')%3Bfor%20(i%20%3D%200%3B%20i%20%3C%20elements.length%3B%20i%2B%2B)%20%7Bif%20(getComputedStyle(elements%5Bi%5D).position%20%3D%3D%3D%20'fixed')%20%7Belements%5Bi%5D.parentNode.removeChild(elements%5Bi%5D)%3B%7D%7D%7D)()%7D)()
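Decoded for readability, the bookmarklet above is equivalent to the following (URL-decoding only, same behavior; I've added one guard so it's a harmless no-op outside a browser):

```javascript
// Decoded form of the URL-encoded bookmarklet above: scan every element
// under <body> and remove any whose computed position is "fixed"
// (the usual trick behind undismissable overlays and cookie walls).
(function () {
  if (typeof document === 'undefined') return; // no-op outside a browser
  var i, elements = document.querySelectorAll('body *');
  for (i = 0; i < elements.length; i++) {
    if (getComputedStyle(elements[i]).position === 'fixed') {
      elements[i].parentNode.removeChild(elements[i]);
    }
  }
})();
```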
javascript:void([].forEach.call(document.querySelectorAll('body *'),e=>/fixed|sticky/.test(getComputedStyle(e).position)&&e.parentNode.removeChild(e)))

What I'd be interested in seeing is best practices for creating documentation intended only for consumption by RAG systems, with the assumption that it's much easier and cheaper to do (and corresponding best practices for prompting systems to generate optimal output for different scenarios).
I can imagine a near future where crud endpoints are just entirely tested by an AI service that tries to read the docs and navigate the endpoints and picks up any inconsistencies and faults.
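A minimal sketch of that idea — every name here (`docs`, `expectStatus`, the injected `fetchFn`) is a hypothetical stand-in for a real spec and HTTP client, not any particular tool:

```javascript
// Toy doc-vs-API consistency checker: for each documented endpoint,
// call it and flag mismatches between the documented and actual status.
// `docs` is a list like { method, path, expectStatus }; `fetchFn` is
// injected so the checker can be pointed at a real API or a mock.
async function checkEndpoints(docs, fetchFn) {
  const faults = [];
  for (const { method, path, expectStatus } of docs) {
    const res = await fetchFn(path, { method });
    if (res.status !== expectStatus) {
      faults.push(`${method} ${path}: documented ${expectStatus}, got ${res.status}`);
    }
  }
  return faults;
}
```

An AI layer would go further (reading prose docs, generating request bodies), but the core loop is this: enumerate what the docs promise, probe the endpoints, report the diffs.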
emil_sorensen•7mo ago
esafak•7mo ago
mooreds•7mo ago
We see a surprising number of folks who discover our product from GenAI solutions (self-reported). I'm not aware of any great tools that help you dissect this, but I'm sure someone is working on them.
0: Generative Engine Optimization
nlawalker•7mo ago
esafak•7mo ago
It's a fortunate turn of events for people who like documentation.
jilles•7mo ago
corysama•7mo ago
appreciatorBus•7mo ago
arscan•7mo ago
bobbiechen•7mo ago
https://stytch.com/blog/if-an-ai-agent-cant-figure-out-how-y...
thom•7mo ago
truculent•7mo ago
thom•7mo ago
truculent•7mo ago
Cthulhu_•7mo ago
Which is another issue: indifference. It's hard to find people who actually care about things like API design, let alone multiple people who check each other's work. In my experience, a lot of the time people just get lazy and short-circuit the reviews to "oh, he knows what he's doing, I'm sure he thought long and hard about this".
QRY•7mo ago
I'm in the process of learning how to work with AI, and I've been homebrewing something similar with local semantic search for technical content (embedding models via Ollama, ChromaDB for indexing). I'm currently stuck at the step of making unstructured knowledge queryable, so these docs will come in handy for sure. Thanks again!
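The "make unstructured knowledge queryable" step boils down to embed-index-rank. A toy stand-in for that pipeline, with a bag-of-words "embedding" and cosine similarity replacing the real Ollama embedding model and ChromaDB index (assumptions, not their APIs):

```javascript
// Toy semantic-search pipeline: embed documents, embed the query,
// rank by cosine similarity. The embedding here is a word-count map,
// a crude stand-in for a real embedding model.
function embed(text) {
  const vec = {};
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec[word] = (vec[word] || 0) + 1;
  }
  return vec;
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (const k in a) { na += a[k] * a[k]; if (k in b) dot += a[k] * b[k]; }
  for (const k in b) nb += b[k] * b[k];
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank documents against a query, best match first.
function queryIndex(docs, query) {
  const q = embed(query);
  return docs
    .map((d) => ({ doc: d, score: cosine(embed(d), q) }))
    .sort((x, y) => y.score - x.score);
}
```

Swapping `embed` for real embedding calls and the array scan for a vector store is what turns this sketch into the Ollama/ChromaDB setup described above.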
shafyy•7mo ago
klysm•7mo ago
drusepth•7mo ago
It's just effective linguistics and speech; what people have called "soft skills" forever is now, apparently, being turned into a science for some reason.
ketzo•7mo ago
Otherwise known as empathy
Cthulhu_•7mo ago
(assumption / personal theory)
starkparker•7mo ago
taneq•7mo ago
alganet•7mo ago
1. Stuff that W3C already researched and defined 20 years ago to make the web better. Accessibility, semantic simple HTML that works with no JS, standard formats. All the stuff most companies just plain ignored or sidelined.
2. Suggestions to work around obvious limits on current LLM tech (context size, ambiguity, etc).
There's really nothing to say about category 1, except that a lot of people already said it and were practically mocked for it.
Regarding category 2, it's the first stage of AI failure acceptance. "Ok, it can't reliably reason on human content. But what if we make humans write more dumb instead?"
troupo•7mo ago