I feel this is a bit like the "don't be poor" advice (I'm being a little mean here maybe, but not too much). Sure, focus on improving understanding and judgement; nobody really disagrees that good judgement is a valuable skill. But how do you improve it? That's a lot trickier to answer, and it's the part where most people struggle. We all intuitively understand that good judgement is valuable, but that doesn't make it any easier to actually exercise it.
I think the current state of AI is absolutely abysmal, borderline harmful for junior inexperienced devs who will get led down a rabbit hole they cannot recognize. But for someone who really knows what they are doing it has been transformative.
Maybe a few of them will pursue it further, but most won't. People don't like hard labor or higher-level planning.
Long term, software engineering will have to be more tightly regulated like the rest of engineering.
I'm also thinking about a world where more programmers try to enter the workforce self-taught, using AI. The current trajectory is a continued lowering of education standards and a political climate hostile to universities.
The answer to all of the above, from the perspective of those who don't know or really care about the details, may be to cut the knot and impose regulation.
Delegate the details to auditors with AI. We're kinda already doing this on the cybersecurity front. Think about all the ads you see nowadays for earning your "cybersecurity certification" from an online-only university. Those jobs are real and people are hiring, but the expertise is still lacking because there aren't clear guidelines yet.
With the current technology and the generations of people we have, how else but with AI can you translate NIST requirements, vulnerability reports, and other docs that don't even exist yet (but soon will) into pointing someone who doesn't really know how to code at a line of code they can investigate? The tools we have right now, like SAST and DAST, are full of false positives, and non-devs are stumped when asked to assess them.
Programming will still exist, it will be just different. Programming has changed a lot of times before as well. I don't think this time is different.
If programming suddenly became easy to iterate on, people would be building new competitors to SAP, Salesforce, Shopify, and other solutions overnight; yet you rarely see a good competitor come along.
The work involved in understanding your customers' needs and iterating between product and tech is not to be underestimated. AI doesn't help with that at all; at most it's a marginal improvement in iteration speed.
Knowing what to build has been for a long time the real challenge.
Not saying you should disregard today's AI advancements. I think some level of preparedness is a necessity, but to go all in on the idea that deep learning will power us to true AGI is a gamble. We've dumped billions of dollars and countless hours of research into cancer for decades, but we still don't have a cure.
I suspect the reality around programming will be the same - a chasm between perception and reality around the cost.
Something similar might be happening in software. LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb. In the short term this looks amazing: there’s always some backlog of features and technical debt to address, so everyone’s happy.
But a year or two from now, we may reach saturation. Businesses won’t be able to use or even need all the software we’re capable of producing. At that point, wages may fall, unemployment among engineers may grow, and some companies could collapse.
In other words, the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis. Only this time, not for physical goods, but for code.
> Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.
> Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary.
If only it were the same, and so simple.
Additional code is additional complexity; "cheap" code is cheap complexity. The decreasing cost of code is comparable to the decreasing cost of chainsaws, table saws, or high-powered lasers. If you are a power user of these things, then having them cheaply available is great. If you don't know what you're doing, then easier access may expose you to more risk than reward. You could accidentally create an important piece of infrastructure for your business that gives the wrong answers, or that requires expensive software engineers to come in and fix. You could end up losing more time to the complexity you created than the automation ever brought in benefit.
I believe the reason for this is that we still need judgement to do those tasks. AIs are not perfect at it, and at times they spit out a lot of extra code and complexity. So now you need to reduce that complexity; but to reduce it, you need to understand the code in the first place. You cut here and there, you find a bug, and suddenly you are diving into code you don't fully understand yet.
So human cognition has to keep pace with what the AI is doing.
What ends up happening to me (not all the time; for one-off scripts, or for authoring a well-known algorithm that is short enough to be bug-free, this is irrelevant) is that I get a sense of speed that turns out not to be real once I have to complete the task as a whole.
On top of that, as a human you tend to lose context if you generate a lot of code with AI, and the judgement must be yours anyway. At least until AIs get really brilliant at it.
They are good at other things. For example, I think they do decently well at reviewing code and finding potential improvements. Because if they say bullsh*t, as any of us could in a review, you just move on to the next comment, and you can usually find something valuable in there.
Same for "combinatoric thinking". But for tasks that need more "surgery" and precision, I don't think they are particularly good; they just make you feel like they are, and when you have to deal with the whole task, you notice this is not the case.
How would one even market oneself in a world where this is what is most valued?
Question 2: Do you think this will ever become valuable?
That's basically the job description of any senior software development role, at least at any place I've worked. As a senior, pumping out straightforward features takes a backseat to problem analysis and architectural decisions, including being able to describe tradeoffs and how they impact the business.
That was an odd experience. I'll use any tool that fits the job, and I don't mind using agentic AI for that, but I can't really own the feature and the timeline if I'm handed this directive.
If it's a PoC, then they still can't really count on having it around in Q4.
I tried my best to make them understand this. I hope they do.
Not if you believe most other articles related to AI posted here including the one from today (from Singularity is Nearer).
> Economics gives us two contradictory answers simultaneously.
> Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.
> Jevons’. Jevons’ paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.
The answer is a little more nuanced. Assuming the above, the economy will demand fewer programmers for the previous set of demanded programs.
However. The set of demanded programs will likely evolve. So to over-simplify it absurdly: if before we needed 10 programmers to write different fibonacci generators, now we'll need 1 to write those and 9 to write more complicated stuff.
Additionally, the total number of people doing "programming" may go up or down.
My intuition is that the total number will increase but that the programs we write will be substantially different.
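Whether substitution or Jevons wins comes down to how elastic the demand for software turns out to be. A toy sketch (my own illustration, not from the quoted article; the constant-elasticity demand curve and the 80% price drop are assumptions for the sake of the example):

```python
# Toy model: substitution effect vs. Jevons' paradox for programming labor.
# Assumption: demand for software follows a constant-elasticity curve, and
# "programmer jobs" scale with total spending on software.

def demand(price, elasticity, base_demand=100.0, base_price=1.0):
    """Quantity demanded = base_demand * (price / base_price) ** (-elasticity)."""
    return base_demand * (price / base_price) ** (-elasticity)

def total_spend(price, elasticity):
    """Total spending on software at a given price (price * quantity)."""
    return price * demand(price, elasticity)

# Suppose AI cuts the effective price of a unit of software by 80%.
new_price = 0.2

# Elastic demand (elasticity > 1): the price drop grows total spending,
# so Jevons wins and the market supports more programming work overall.
print(total_spend(1.0, 1.5), "->", total_spend(new_price, 1.5))

# Inelastic demand (elasticity < 1): total spending shrinks,
# so substitution wins and fewer programmers are needed.
print(total_spend(1.0, 0.5), "->", total_spend(new_price, 0.5))
```

The point of the sketch is just that both of the quoted answers are consistent with the same price drop; which one you get depends on a parameter nobody knows yet.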