This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).
It's extra hilarious to hear someone you _thought_ treated their code work as a craft refer to "producing 3 weeks worth of work in the last week" because (a) I don't believe it, not for one second, unless you are the slowest typist on earth and (b) it clearly positions them as a code _consumer_, not a code _creator_, and they're happy about it. I would not be.
Code is my tool for solving problems. I'd rather write code than _debug_ code - which is what code-gen-bound people are destined to do, all day long. I'd rather not waste the time on a spec sheet to convince the llm to lean a little towards what I want.
Where I've found LLMs useful is in documentation queries, BUT (and it's quite a big BUT) they're only any good at this when the documentation is unchanging. Try asking one about the nuances of the new extension syntax in C# between dotnet 8 and dotnet 10: I just had to correct it twice in the same session, on the same topic, because it confidently told me stuff that would not compile. Same story with the Elasticsearch client documentation: the REST side has remained fairly constant, but if you want help with the latest C# library, you have to remind it of that fact constantly. Not because it has no information on the latest stuff, but because it consistently conflates old docs with new libraries. An attempt to upgrade a project from webpack 4 to webpack 5 hit the same problems, with the LLM confidently telling me to do "X" when "X" does not work in webpack 5. And the real kicker: if you can prove the LLM wrong (e.g. respond with "you're wrong, that does not compile"), it will try again and get closer, but, as with the C# extension methods, I had to push on this twice to get to the truth.
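For anyone who hasn't looked at what actually changed there: the classic extension method is a static method with a `this` receiver parameter, while C# 14 (the compiler that ships with dotnet 10) adds an `extension` block that declares the receiver once and can host members the old form never could, like properties. The two forms look just similar enough that a model trained mostly on the old docs keeps blending them. Rough sketch from my reading of the docs, not compiler-verified, so treat the details as approximate:

```csharp
public static class StringExtensions
{
    // Old style (C# 3 onwards): a static method whose first parameter
    // is marked 'this' to make it callable as s.WordCount().
    public static int WordCount(this string s) =>
        s.Split(' ', System.StringSplitOptions.RemoveEmptyEntries).Length;

    // New style (C# 14 / dotnet 10): an 'extension' block that names the
    // receiver once; members inside can include properties, which the
    // old 'this'-parameter form could never express.
    extension(string s)
    {
        public bool IsBlank => string.IsNullOrWhiteSpace(s);
    }
}
```

Ask an LLM for the new property-on-a-string trick and, in my experience, it hands you the old static-method shape with a property bolted on, which is exactly the kind of thing that "would not compile".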
Now, if they can't reliably get the correct context when querying documentation, why would I think they could get it right when writing code? At the very best, I'll get a copy-pasta of someone else's trash, and learn nothing. At the worst, I'll spin for days, unless I skill up past the level of the LLM and correct it. Not to mention that the bug rate in suggested code that I've seen is well over 80% (I've had a few positive results, but a lot of the time, if it builds at all, it has subtle (or flagrant!) bugs). And, as I say, I'd rather _write_ code than _debug_ someone else's shitty code. By far.
davydm•1h ago