And that's kinda what one would expect, given that LLMs are basically a blurry JPEG of the web/GitHub/etc.
Like, I think reasoning can help here, but I rarely see good results when prompting an LLM with something complicated (technical statistical problems and the right approaches to them), while they're fantastic at less edge-case stuff (working with Docker and well-known frameworks).
So yeah, definite productivity gains but I'm not convinced that they're as transformational as they are being pitched.
A friend currently has an AI workflow that pits two AIs against each other. They start with an issues database. AI 1 pulls issues, fixes them, and commits the fixes. AI 2 reviews the work and files new issues. And the cycle repeats.
After a bunch of work is complete, the issues are flagged as done, and the tests run green, he grabs all of the commits, walks through them, cleans them up if necessary, and squashes them into one big commit to the main branch.
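That squash step is plain git; a minimal sketch, assuming the AI's work lands on a branch (the name ai-work here is made up):

  git checkout main
  git merge --squash ai-work   # stage the branch's changes as one change set, without committing
  git commit -m "Batch of AI-completed issues, reviewed and cleaned up"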
He loves waking up in the morning to a screen filled with completed things.
He has, essentially, self-promoted to management over some potentially untrustworthy junior developers with occasional flashes of brilliance.
And I was pondering that, and it just reminded me of something I've been feeling for some time.
A lot of modern development is wiring together large, specialized libraries. Our job is to take these things (along with their cascade of dependencies) and put our own little bits of glue and logic around them.
And, heck, if I wanted to glue badly documented black boxes together, I would have gone into Electrical Engineering.
But here's the thing.
While the layers are getting thicker and the abstractions more opaque, in the end, much like a CEO is responsible for that ONE PERSON down in the factory behaving badly, we, as users of this stuff, are responsible for ALL OF IT. Down to bugs in the microprocessor.
When push comes to shove, it's all on us.
We can whine and complain and delegate. "I didn't write the compiler, not my fault." "Not my library..." "The Manager assured the VP who assured me that..."
But it doesn't really matter when you have a smoking crater of a system, does it?
Because we're the ones delivering services and such, putting our names on it.
So, yeah, no, you don't have to be "as skilled", perhaps, when using something.
But you're still responsible for it.
Now, to your ponderoo about libraries: something I've found really fascinating is that I've largely stopped using open source libraries unless there's a network/ecosystem effect, like Tailwind, for example. For everything else, it's much easier to code-generate it. And if there's something wrong with the implementation, I can take ownership and accountability for it, and I can fix it with a couple more prompts. No more open source bullshit: maintainers who may have abandoned the project, waiting to get a pull request merged, supply chain attack vectors from project takeovers, all that noise. It just doesn't exist anymore. It's really changed how I do software development.
I started Brokk to give humans better tools with which to do this: https://github.com/BrokkAi/brokk
Do you mean writing manual tests? Because having the LLM write tests is key in iteration speed w/o backtracking.
Edit: I've just updated the post.
FWIW, since I just realized my only main comment was a criticism, I found your article very insightful. It baffles me how many people will disagree with the general premise or nit-pick one tiny detail. The only thing more surprising to me than the rate at which AI is developing is the number of developers jamming their heads into the sand over it.
But yeah, the debugger is your friend.
So, when you write tests, your main job is to think (define what is good and what is bad).
As such, using AI to write tests is writing useless tests.
I conducted a job interview this Monday; I asked the guy: "Do you use AI?" He mumbled something like "yes". Then I baited him: "It's quite handy for writing tests!" He responded: yes, but no (for the above reason).
He got the job.
Chess has seen no innovation in hundreds of years, or something like that.
I get your point: with AI in charge, the world will stagnate.
What I do not share is your belief that this is a good outcome
I don't believe AI will cause the world to stagnate at all. I think it will unleash humanity's creativity in a way orders of magnitude greater than history has ever seen.
With that arrogance, my only question is: where is your own code, and what makes you more qualified than the Linux kernel or GCC developers?
Also, there are far more generic web developers than there are Linux kernel developers, and they represent the vast majority of the market share / profit generation in software development, so your metric isn't really relevant either.
The DOM API is old. All the mainstream backend languages are old. Unix administration has barely changed (only the way we use those tools has). Even Elasticsearch is 15 years old. Amazon S3 is past drinking age in many countries around the world. And that's just pertaining to web projects.
You just need to open a university textbook to realize how old many of the fundamentals are. Most shiny new things are old stuff repackaged.
It's akin to people who refused to learn C because they knew assembly.
The same thing is happening with LLMs. If anything, the gap is far smaller than between assembly and C, which only serves to prove my point. People who don't understand it or like it could easily experience massive productivity gains with a minimum of effort. But they probably never will, because their mindset is the limiting factor, not technical ability.
It really comes down to neural plasticity and willingness to adapt. Some people have it, some people don't. It's pretty polarizing, because for the people that don't want change it becomes an emotional conversation rather than a logical one.
What's the opportunity cost of properly exploring LLMs and learning what everybody else is talking about? Near zero. But there are plenty of people who haven't yet.
Let's say I'm writing an Eloquent query (Laravel's ORM) and I forget the signature for the where method. It's like 5 seconds to find the page and have the answer (less if I'm using Dash.app). It would take me longer to write a prompt for that. And I'd have to hope the model got it right.
For bigger use cases, a lot of the time I already know the code; the reason I haven't written it yet is that I'm thinking about how it would impact the whole project architecture. Once I have a good feel, writing the code is a nice break from all of those thinking sessions. Like driving on a scenic route. Yeah, you could have an AI drive you there, but not when you're worrying about it taking the wrong turn at every intersection.
I've yet to see a single occurrence at work (a single one!) of something done better/quicker/easier with AI (as a dev). I've read lots of bullshit on the internet, sure, but in my day-to-day real-world experience, it was always a disaster disguised as a glorious success story.
But you can be arrogant without referencing yourself directly.
After all, anything you say is implicitly prefixed with “I declare that”.
E.g. one of Feynman’s “most arrogant” statements is said to be: “God was always invented to explain the mystery. God is always invented to explain those things that you do not understand.”[1] - and there’s no direct self reference there.
[1]: https://piggsboson.medium.com/feynmans-most-arrogant-stateme...
Writing code has never been a bottleneck for me. Planning out a change across multiple components, adhering to both my own individual vision and the project direction and style, fixing things along the way (but only when it makes sense), comparing approaches and understanding tradeoffs, knowing how to interpret loose specs... that's where my time goes. I could use LLM assistance, but given all of the above, it's faster for me to write it myself than to try to distill all of this into a prompt.
Those AI accelerationists are not thinking and, as such, are indeed boosted by a non-thinking machine
In the end, code is nothing but a way to map your intelligence into the physical world (using an interface called "computer")
I spent loads of time learning to use special syntax that helped GPT-3.5, or ComfyUI for Stable Diffusion. Now the latest models can do exactly what I want without any of those “high skill” prompts. The context windows are so big that we can be quite lazy about dumping files into prompts without optimisation.
The only general advice I’d give is to take more risk and continually ask more of the models.
The LLM tells me that it prefers the "older way" because it's more broadly compatible, which is OK if that's what you're aiming for. But if the programmer doesn't know about that, they'll be stuck with the LLM calling the shots for them.
How many professional programmers don't have assemblers/compilers/interpreters "calling the shots" on arbitrary implementation details outside the problem domain?
https://vivekhaldar.com/articles/when-compilers-were-the--ai...
We're not yet at a point where LLM coders will learn all your idiosyncrasies automatically, but those feedback loops are well within our technical ability. LLMs are roughly a knowledgeable but naïve junior dev; you must train them!
Hint: add that requirement to your system/app prompt and be done with it.
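For example, something along these lines in the system prompt or project instructions file (the wording is just illustrative):

  When asked to rewrite or replace code, drop the old approach entirely.
  Do not keep backward-compatibility fallbacks unless explicitly requested.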
> Please write this thing
> Here it is
> That's asinine why would you write it that way, please do this
> I rewrote it and kept backward compatibility with the old approach!
:facepalm:
It's mostly useful when you work a lot with "legacy code" and can't just remove things willy nilly. Maybe that sort of coding is over-represented in the datasets, as it tends to be pretty common in (typically conservative) larger companies.
The less cruft and red herrings in the context, the better. And likewise with including key info, technical preferences, and guidelines. The model can’t read our minds, although sometimes we wish it could :)
There are lots of simple tricks to make it easier for the model to provide a higher quality result.
Using these things effectively is definitely a complex skill set.
Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It’s literally the perfect kind of question for AI to answer. Also this is such a moving target that I suspect most hiring processes change at a slower pace.
I know plenty of good communicators who aren't using AI effectively. At the very least, if you don't know what an LLM is capable of, you'll never ask it for the things it's capable of and you'll continue to believe it's incapable when the reality is that you just lack knowledge. You don't know what you don't know.
It's still a bad interview question unless you're hiring someone to build AI agents, imho.
But by the same logic, should we be asking for the same knowledge of the Language Server Protocol and tools like tree-sitter? They're integral right now in the same way these new tools are expected to become (and have become for many).
As I see it, knowing the internals of these tools might be the thing that makes the hire, but not something you'd screen every candidate with who comes through the door. It's worth asking, but not "critical." Usage of these tools? sure. But knowing how they're implemented is simply a single indicator to tell if the developer is curious and willing to learn about their tools - an indicator which you need many of to get an accurate assessment.
I crack myself up.
And it never is. There's just about nothing that fits this criterion.
Explain how it works and what you use it for.
If you don’t know this, you’re not a programmer in any language, platform, framework, front end or back end.
It’s my go-to interview question.
Tell me what’s wrong with that.
It's such a foundational thing in all of modern programming, that I just can't imagine someone being "skilled" and not knowing even this much. Even scripting languages use hash tables, as do all interpreted languages such as Python and JavaScript.
Keep in mind that I'm not asking anyone to implement a hash table from scratch on a white board or some nonsense like that!
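For concreteness, this is roughly the level I mean, nothing more exotic than everyday dictionary use (Python purely as an illustration; the data is made up):

  # A dict is Python's built-in hash table: keys are hashed to find
  # their slot, so lookups and inserts are O(1) on average.
  headers = {"Content-Type": "application/json", "Accept": "text/html"}

  headers["X-Request-Id"] = "abc123"   # insert/update by key
  if "Accept" in headers:              # O(1) average membership test
      print(headers["Accept"])         # O(1) average lookup

  # Keys must be hashable (effectively immutable): strings, numbers,
  # and tuples work; lists don't, since their contents can change.

Being able to say why lookups are constant time on average, and roughly what happens on a collision, is the kind of floor I'm talking about.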
There ought to be a floor on foundational knowledge expected from professional developers. Some people insist on this being zero. I don't understand why it shouldn't be at least this?
You can't get a comp-sci or comp-eng degree from any reputable university (or even disreputable ones) without being taught at least this much!
What next? Mechanical engineers who can't be expected to hold a pencil or draw a picture using a CAD product?
Surgeons that have never even seen a scalpel?
Surveyors that don't know what "trigonometry" even means?
Where's your threshold?
I think with the rise of LLMs, my coding time has been cut down by almost half. And I definitely need to bring in help less often. In that sense it has raised my floor, while making the people above me (not necessarily super coders, but still more advanced) less needed.
For example, managing the context window has become less of a problem with the increased context windows in newer models, and tools like the auto-resummarization / context window refresh in Claude Code mean you might be just fine without doing anything yourself.
All this to say that the idea that you're left significantly behind if you aren't training yourself on this feels bogus (I say this as a person who /does/ use these tools daily). It should take any programmer not more than a few hours to learn these skills from scratch, with the help of a doc, meaning any employee you hire should be able to pick these up no problem. I'm not sure it makes sense as a hiring filter. Perhaps in the future this will change. But right now these tools are built more like user friendly appliances - more like a cellphone or a toaster than a technology to wrap your head around, like a compiler or a database.
That said, I think there’s value in a catch-all fallback: running a prompt without all the usual rules or assumptions.
Sometimes, a simple prompt on the latest model just works, and often surprisingly more effectively than a complex one.
MCP is a tool and may only be minor in terms of relevance to a position. If I use tools that use MCP but our actual business is about something else, the interview should be about what the company actually does.
Your arrogance and presumptions about the future don't make you look smart when you are so likely to be wrong. Enumerating enough predictions until one of them is right is not prescient, it's bullshit.
If we take "operator skill" to mean "they know how to make prompts", there is some truth to it, and we can see that by whether or not the operator is deliberately designing the context window.
But for the more important question: whether or not LLMs are useful has an inverse relationship with how skilled the person already is in the domain they're using them for. This is why the best engineers mostly shrug at LLMs while those that aren't the best feel a big lift.
So, LLMs are not mirrors of operator skill. This post is instead an argument that everyone should become prompt engineers.
But they move quickly around that circle, which makes them feel much more productive. And if you don't need something outside of the circle, it's good enough.
On the other hand, something like wrestling with the matplotlib API? I don't have too much experience there and an LLM was a great help in piecing things together.
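For instance, the sort of thing I'd be piecing together (a rough sketch; the data and labels are invented):

  import matplotlib.pyplot as plt

  # Two series on one figure with a secondary y-axis: exactly the kind
  # of API-juggling I'd otherwise be assembling from docs and examples.
  fig, ax1 = plt.subplots(figsize=(8, 4))
  ax1.plot([1, 2, 3, 4], [10, 12, 9, 14], color="tab:blue", label="requests")
  ax1.set_xlabel("hour")
  ax1.set_ylabel("requests", color="tab:blue")

  ax2 = ax1.twinx()  # second y-axis sharing the same x-axis
  ax2.plot([1, 2, 3, 4], [0.2, 0.35, 0.3, 0.5], color="tab:red", label="error rate")
  ax2.set_ylabel("error rate", color="tab:red")

  fig.tight_layout()
  plt.show()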
If you think you know “the best LLM to use for summarizing” then you must have done a whole lot of expensive testing, but you didn’t, did you? At best you saw a comment on HN and you believed it.
And if you did do such testing, I hope it wasn’t more than a month ago, because it’s out of date, now.
The nature of my job affords me the luxury of playing with AI tech to do a lot of things, including helping me write code. But I’m not able to answer detailed technical questions about which LLM is best for what. There is no reliable and durable data. The situation itself is changing too quickly to track unless you have the resources to have full subscriptions to everything and you don’t have any real work to do.
My 2-year-old graphics card sure could be bigger... kicks can... it was plenty for Starfield...
Holy crap, VRAM is expensive!
>If they waste time by not using the debugger, not adding debug log statements, or failing to write tests, then they're not a good fit.
Why would you deprive yourself of a tool that will make you better?
Realistically, though, I think AI will come to a point where it can take over my job. But if not, this is my hope.
In one way, yes, this massively shifts power into the hands of the less skilled. On the other hand, if you need some proper, and I mean proper, marketing materials, who are you going to hire? A professional artist using AI, or some dipshit with AI?
There will be slop, of course, but after a while everyone has slop, and the only differentiating factor will be quality, or at least some gate-kept, arbitrary level of complexity. Like how rich people want fancy handmade stuff.
Edit: my point is mainly that the bar will rise to the point where you'd need to be a scientist to create a (by then) fancy app again. You see this with the web. It was easy; we made it ridiculously, and I mean ridiculously, complicated, to the point where you need to study computer science to debug React rendering for your marketing pamphlet.