I also see it in mostly spaghetti code bases, not in great code bases where no one uses "AI".
Edit: I guess some commenters misunderstood my message. I'm saying that by serving also the needs of LLMs we might get more resources to improve docs overall.
Easy now. You might be skilled in documentation, but most developers write docs like they write code. For the most part, all you are going to get is the program written twice: once in natural language and once in a programming language. In which case, you could have simply read the code in the first place (or had an LLM explain the code in natural language, if you can't read code for some reason).
How is this a bad thing? Personally, I'm not superhuman and more readily understand natural language.
If I have the choice between documentation explaining what a function does and reading the code, I'm going to just read the docs every time. Unless I have reason to think something is screwy or having an intricate understanding is critical to my job.
If you get paid by the hour then go for it, but I don't have time to go line by line through library code.
It is not good or bad. I don't understand your question.
> If I have the choice between documentation explaining what a function does and reading the code, I'm going to just read the docs every time.
The apparent "choice", given the context under which the discussion is taking place, is writing your program in natural language and having an LLM transcribe it to a programming language, or writing your program in a programming language and having an LLM transcribe it to natural language.
It turns out that to describe a system in enough detail to implement it, you need to use a programming language. For the system to be performant and secure, you need well-educated, highly skilled engineers. LLMs aren't going to change that.
Anyway, this is tacitly declaring LLM bankruptcy: the LLM can't understand what to do by reading the most precise specification, the code, so we're going to provide less precise instructions and it will do better?
Go back to design patterns. Not the Gang of Four, but the book the name and concept were lifted from: Christopher Alexander's A Pattern Language.
What you will find is that implementations are impacted by factors that are not always intuitive without ancillary information.
It's clear when there is a cowpath worn through a campus that a sidewalk is needed. It's not so clear when that happens in code, because the path often isn't linear. That's why documentation is essential.
"Agile" has made this worse, because the why is often lost: meetings or offline chats lead to tickets that capture the what but not the why. It's great when those breadcrumbs are linked through commits, but the devil is in the details. Even when all the connections exist, you often have to chase them through layers of systems and pry them out of people: emails, old Slack messages, paper notes, a photo of a whiteboard.
I find this hard to believe. I'm not sure I've ever seen llms.txt in the wild, and in general I don't think most tech writing shops are that much on the cutting edge.
I have seen more companies add options to search their docs via some sort of AI, but I doubt that’s a majority yet.
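For anyone who hasn't run into it: llms.txt is a proposed convention (llmstxt.org) for a markdown file at a site's root that gives LLMs a curated index of the docs. A minimal sketch of the format, with made-up names and URLs:

```markdown
# Example Project

> One-paragraph summary of what the project is and does.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): get running in five minutes
- [API reference](https://example.com/docs/api.md): every endpoint and parameter

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The `## Optional` section marks links an LLM can skip when context is tight.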
Shame, because it's a bunch of nice-looking words - but that doesn't matter if they're completely false.
Maybe a secret positive outcome of using automation to write code is that library maintainers have a new pressure to stop releasing totally incompatible versions every few years (looking at Angular, React...)
Sometimes it ignores you but it works more often than not.
Deprecated code is quickly flagged by VSCode (like Text.textScaleFactor), but not the new way of separating items in a Column/Row with the spacing parameter (instead of manually adding a SizedBox between items).
Coding with an LLM is like coding with a Senior Dev who doesn't follow the latest trends. It works, has insights and experience that you don't always have, but sometimes it might code a full quicksort instead of just calling list.sort().
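That quicksort failure mode is easy to picture. A toy sketch in Python (illustrative, not actual model output):

```python
# What an LLM sometimes produces unprompted: a hand-rolled quicksort...
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

# ...when the idiomatic answer is a single call to the built-in sort:
data = [5, 2, 9, 1]
assert quicksort(data) == sorted(data)  # same result, more code to maintain
```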
Quite badly. Can't tell you how many times an LLM has suggested WORKSPACE solutions to my Bazel problems, even when I explicitly tell them that I'm using Bzlmod.
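For readers who haven't hit this: the two styles look like the sketch below. The ruleset name is illustrative and the URL, hash, and version are placeholders, not a working dependency declaration:

```starlark
# Legacy WORKSPACE style -- what the LLM keeps suggesting:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
    name = "rules_python",
    urls = ["https://example.com/rules_python.tar.gz"],  # placeholder
    sha256 = "...",  # placeholder
)

# Bzlmod style -- the same dependency as a single line in MODULE.bazel:
# bazel_dep(name = "rules_python", version = "0.31.0")  # version illustrative
```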
From recent experience, 95% of changes are good and are done in 15 minutes.
5% of changes get made but break things, because while the API might have documentation, your code probably documents "what I do here" in bits rather than "why I use this here".
In hindsight it was an overall positive experience, but if you'd asked me at the end of the first day, I'd have been very annoyed.
If I'd been asked to estimate, I'd have said Monday through Friday, but it took me until Wednesday afternoon.
But half a day in I thought I was 95% done, and then it took 2+ more days to close that 5% of hidden issues.
And that's only because the test suite was catching enough classes of issue to let me go find them everywhere.
That's the point of these text documents, and that's why it doesn't actually produce an efficiency gain the majority of the time.
A programmer who expects the LLM to solve an engineering problem is rolling the dice and hoping. A programmer who has solved an engineering problem and expects the implementation from the LLM will usually get something close to what they want. Will it be faster than doing it yourself? Maybe. Is it worth the cost of the LLM? Probably not.
The wild estimates and hype about AI-assisted programming paradigms come from people winning the dice roll on the former case and thinking that result is not only consistent, but also the same for the latter case.
Politely need to disagree with this.
Quick example. I'm wrapping up a project where I built an options back-tester from scratch.
The thing is, before starting this, I had zero experience or knowledge with:
1. Python (knew it was a language, but that's it)
2. Financial microstructure (couldn't have told you what an option was - let alone puts/calls/greeks/etc)
3. Docker, PostgreSQL, git, etc.
4. Cursor/IDE/CLIs
5. SWE principles/practices
This project used or touched every single one of these.
There were countless situations (the majority?) where I didn't know how to define the problem or how to articulate the solution.
It came down to interrogating AI at multiple levels (using multiple models at times).
I think they have much more use for someone with little or no experience who's just trying to get proofs of concept or quick projects done, because accuracy and adherence to standards don't really matter there.
(That being said, if Google were still as useful of a tool as it was in its prime, I think you'd have just as much success by searching for your questions and finding the answers on forums, stackexchange, etc.)
Say, “Give me the stock status of an iPhone 16e 256GB White in San Francisco.”
I still have to provide the API details somewhere — whether it’s via an agent framework (e.g. LangChain) or a custom function making REST calls.
The LLM’s real job in this flow is mostly translating your natural language request into structured parameters and summarizing the API’s response back into something human-readable.
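A minimal sketch of that division of labor. The endpoint, parameter names, and response shape here are all invented for illustration; only the two LLM steps matter:

```python
from urllib.parse import urlencode

# Step 1: the structured parameters an LLM might extract from
# "Give me the stock status of an iPhone 16e 256GB White in San Francisco."
params = {
    "model": "iPhone 16e",
    "storage": "256GB",
    "color": "White",
    "location": "San Francisco",
}

def build_request(params: dict) -> str:
    """Deterministic plumbing, not the LLM: turn parameters into a REST URL."""
    return "https://api.example.com/stock?" + urlencode(params)

def summarize(response: dict) -> str:
    """Step 2: the LLM turns the API's JSON back into a human-readable sentence."""
    status = "in stock" if response["available"] else "out of stock"
    return (f"{response['model']} ({response['storage']}, {response['color']}) "
            f"is {status} in {response['location']}.")

print(build_request(params))
print(summarize({**params, "available": True}))
```

Everything between those two steps is ordinary code you still have to write and describe to the model.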
righthand•2h ago
> I’ve been noticing a trend among developers that use AI: they are increasingly writing and structuring docs in context folders so that the AI powered tools they use can build solutions autonomously and with greater accuracy
To me this means a lot of engineers are spending time maintaining files that help them automate a few things for their job. But sinking all that time into context for an LLM is most likely going to net you efficiency gains only for the projects that the context was originally written for. Other projects might benefit from smaller parts of these files, but if engineers are really doing this then there probably is some efficiency lost in the creation and management of it all.
If I had to guess, contrary to your post, devs aren't RTFM-ing but are instead asking an LLM or a web search what a good rule/context/limitation would be and pasting it into a file. In which case the use of LLMs is a complexity shift.
righthand•2h ago
It's hard for me to believe that people are writing more technical documentation, or understanding more, when they want to use the LLM to bypass that. Maybe a handful of disciplined engineers per capita, but when the trend is largely the opposite, the academic approach tends to lose out.
MoreQARespect•1h ago
This is an artefact of the language which the creators are in total denial about.
There are better languages for writing executable user stories, but none of them are very popular.
fmbb•1h ago
Why do these "agents" need so much hand holding?