https://literateprogramming.com/
so that specific problems are documented.
One could argue that no literate programming system has had more than one user. Knuth's WEB and CWEB never really caught on.
Well, I worked up:
https://github.com/WillAdams/gcodepreview/blob/main/literati...
for my current project (and will use it going forward for any new ones) and:
https://github.com/topics/literate-programming
has 443 projects...
In my experience documentation generation has a lower error rate than code generation, and the costs of errors are lower too.
I'm not really a big fan of AI agents writing features end-to-end, but I can definitely see them updating documentation alongside pull requests.
Because AI by default only sees the code, it generally describes the functionality, not the intent behind the code.
Of course, that's what your tests are for: To document your intent, while providing a mechanism by which to warn future developers if your intent is ever violated as the codebase changes. So the information is there. It's just a question of which language you want to read it in.
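A minimal sketch of that idea in Python with a plain `assert`-style test (the function and rule are illustrative, not from the thread): the test's name and message record the intent, so a future change that breaks the policy fails with an explanation rather than a bare assertion error.

```python
def apply_discount(total: float) -> float:
    # Business rule under test: orders over 100 get 10% off.
    return total * 0.9 if total > 100 else total

def test_small_orders_are_never_discounted():
    # Intent, not just behavior: discounts exist to reward large orders.
    # Silently discounting small orders would be a policy change, not a fix.
    assert apply_discount(50) == 50, "small orders must not be discounted"

def test_large_orders_get_ten_percent_off():
    assert apply_discount(200) == 180.0, "orders over 100 should be 10% off"
```

Read in that "language", the suite states the same contract prose documentation would, with the added property that it cannot silently go stale.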
"Updating docs" seems pointless, though. LLMs can translate in realtime, and presumably LLMs will get better at it with time, so caching the results of older models is not particularly desirable.
For myself, I tend to keep inline documentation to a minimum, maybe only adding a note as to why a certain line is there (as opposed to what it does).
I do make sure to always provide entrypoint and property descriptions, headerdoc-style.
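A hedged sketch of that split in Python terms (headerdoc is an Apple/C-family convention; the function, header name, and fallback rule here are illustrative): a full description on the entry point, and inline comments reserved for the one non-obvious "why".

```python
def parse_retry_delay(header: str) -> int:
    """Return the retry delay in seconds from a Retry-After-style header value.

    Falls back to 1 second on unparseable input rather than raising, so a
    malformed header from a third party never stalls the request pipeline.
    """
    try:
        # Clamp to at least 1: a 0-second delay can cause retry storms.
        return max(1, int(header))
    except ValueError:
        return 1
```

The docstring carries the contract; the single inline comment explains only the decision a reader could not infer from the code itself.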
Here's my own take on the topic: https://littlegreenviper.com/leaving-a-legacy/
1. You have to maintain both documentation and code. If you change code and forget to update documentation it can be very confusing and cost a lot of time.
2. Proper code should explain itself (to some extent).
3. Taking a lot of time to write proper documentation is rarely appreciated in a world where long term strategic thinking has no place anymore.
4. It's harder to fire you when you're the only one who knows all the stuff.
And before someone links Yet Another Docs Framework, I recommend taking a different approach: https://passo.uno/beyond-content-types-presentation/
It makes tests better. Instead of a shady snippet of code that just passes an assertion, it should generate human readable examples with additional prose included by the developer for special cases.
It makes docs easier to maintain. You probably already need to find the test for the code you changed; if the docs live right next to it, they're easier to keep current.
There are many ways of achieving this. I particularly like literate programming, just for the test suite. You can code whatever way you like, but the tests must be in a literate form.
I also like the idea of having a documentation that can fail a build. If you commit a bad example snippet on a markdown somewhere, the CI should fail. This can already be done with clitest, for example (scaling it culturally is a bit hard though).
Somehow, xUnit-like tools and spec frameworks already point in that direction (DSLs that embrace human language, messages in assertions, etc). They're already documentation, and developers already use test suites for "knowing how something works" very often. We just need to stop writing it twice (once on the tests, once on prose) and find a common ground that can serve both purposes.
I mean this for API docs mainly, but for other stuff as well.
Devs aren't the only problem here. In the few large companies I've been in, the assigned doc writers haven't been a net positive. It takes me so much effort to walk them through what to write about, and how it should be written to match how users actually read and understand content, that I end up writing it myself during those meetings. It's a bit of a living rubber-duck exercise at times. I've grown to be a highly paid doc writer who writes code too.
Edit: Oh, and now the submission is flagged. Fairly IMO. There's an interesting post to be had here, but this wasn't it.