Edit: What I mean by this is there may be some circumstantial evidence (less hiring for juniors, more AI companies getting VC funding). We currently have no _hard_ evidence yet that LLMs have substantially sped up or deskilled programming. Any actual __science__ on this has yet to show it. But please, if you have _hard_ evidence on this topic I would love to see it.
I definitely think a lot of junior tasks are being replaced with AI, and companies are deciding it's not worth filling junior roles at least temporarily as a result.
I think team expansion is being reduced as well. If you took a dev team of 5, armed them all with Claude Code + training on where to use it and where not to, I think you could get the same productivity as hiring 2 additional FTE software devs. I'm assuming your existing 5 devs fully adopt the tool and don't reject it like a bad organ transplant. Maybe an analogy could be the invention of email reducing the need for corporate typing pools, so fewer jr. secretaries (typists) are hired.
I'm just guessing that being a secretary is on the career progression path of someone in the typing pool, but you get the idea.
Edit: one thing I missed in my email analogy is that when email was invented it was free and available to anyone who could set up sendmail (or any other MTA).
If anything, the expectations for an individual developer have never been higher, and now you’re not getting any 22-26 year olds with enough software experience to be anything but a drain on resources when the demand for profitability is yesterday.
Maybe we need to go back to ZIRP if only to get some juniors back on to the training schedule, across all industries.
For other insanely toxic and maladaptive training situations, also see: medicine in the US.
One last thing to point out, then my lunch is over. I think AI coding agents are going to hit services/marketplaces like Fiverr especially hard. I think the AI agents are the new gig economy with respect to code. I spent about $50 on Claude Code pay-as-you-go over the past 3 days to put together a website I've had in the back of my mind for months. Claude Code got it to a point where I can easily pick up and run with it to finish it out over a few more nights/weekends. UI/UX is especially tedious for me and Claude Code was able to take my vague descriptions and make the interface nicely organized and contemporary. The architecture is perfectly reasonable for what I want to do (Auth0 + React + Python (Flask) + Postgres + an OAuth2 integration to a third party). It got all of that about 95% right on the first try... for $50! Services/marketplaces like Fiverr have to be thinking really hard right now.
I am not sure how it is for others, but for me it's a lot harder to read a chunk of code to understand and verify it than to take the problem head on with code myself and then maybe consult an LLM.
If you only care about a single metric you can convince yourself to make all kinds of bad decisions.
> Nobody is lying.
Nobody is being honest either. That happens all the time.
The newbie prototype was never all that hard. You could, in my day, have a lot of fun that first week with Dreamweaver, Visual Basic, or cargo-culting HTML.
There’s nothing wrong with this.
But to get much further than that ceiling you probably needed to crack a book.
“Build me a recipe app”, sure.
Building anything substantial has consistently failed for me unless I take Claude or Codex by the hand and guide them through it step by step.
> Every week there seems to be a new tool that promises to let anyone build applications 10x faster. The promise is always the same and so is the outcome.
Is the second sentence true? Regardless of AI, I think that programming (game development, web development, maybe app development) is easier than ever? Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution. Compare modern C# to early C#, or modern Java to early Java, even.
I'd like to think that our tools have made things easier, even if our software has gotten commensurately more complicated. If they haven't, what's missing? How can we build better tools for ourselves?
Also, I'm not sure anyone was making 10x claims about the tools you cite.
Think of the game hits from the '90s. A room full of people made games which shaped a generation. Maybe it was orders of magnitude harder then, but today it takes multiple orders of magnitude more people to make them.
Same is true for websites. Sure, the websites were dingy with poor UX and oodles of bugs... but the size of the team required to make them was absolutely tiny compared to today.
Things are simultaneously the best they've ever been, and the worst they've ever been, it's a weird situation to be in for sure.
But truthfully; orders of magnitude more powerful hardware was the real unlock.
Why is slack and discord popular? Because it's possible to use multiple gigabytes of ram for a chat client.
25 years ago? Multiple gigabytes of ram put your machine firmly in the "I have unlimited money and am probably a server doing millions of things" class.
I think this is more about rising consumer expectations than rising implementation difficulty.
You needed much more marketing budget in 2000 than today; I think you have that reversed. There is a reason indie basically wasn't a thing until Steam could do marketing for you.
The market demands not just better, more complicated games, but mostly much higher art budgets. Go look at, say, Super Metroid, and compare it to Team Cherry's games in the same genre, made mostly by three people. Compare Harvest Moon from the 90s with Stardew Valley, made by one person. Compare old school Japanese RPGs with Undertale, again with a tiny team, whose lead developer is also the lead music composer. And it's not like those games didn't sell: every game I mentioned destroyed the old games in revenue, even though the per-unit price was tiny. Silksong managed to overload Steam on release!
And it's not just games. I was a professional programmer in the 90s. My team's job involved mostly work that today nobody would ever write, because libraries just do it for you. We just have higher demands than we ever did.
That's something that seems to eat up AAA games, each person they add adds less of a person due to communication effects and inefficiencies. That and massive amounts of created artwork/images/stories.
There are a lot of indie game studios that make games much more complicated than what existed in the 90s, with far fewer people than AAA teams.
And ya, tons of memory has unlocked tons of capability.
Causes: Bubble economics, perverse incentives, lack of objectivity, and more.
The good news is that huge competitive advantages are available to those who refuse to accept norms without careful evaluation.
Crapping out code that does the thing was never the hard part, the hard part is reading the crap someone did and changing it. There are tradeoffs here, perhaps you might invest in modeling up front and use more or less formal methods, or you're just great at executing code over and over very fast with small adjustments and interpreting the result. Either way you'll eventually produce something robust that someone else can change reasonably fast when needed.
The additions to Java and C# are a lot about functional programming concepts, and we've had those since forever way back in the sixties. Map/reduce/filter are old concepts, and every loop is just recursion with some degree of veiling, it's not a big thing whether you piece it together in assembly or Scheme, typing it out isn't where you'll spend most of your time. That'll be reading it once it's no longer yesterday that you wrote it.
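To make that concrete, here's the same trivial computation three ways (Python just for brevity; the Java streams or C# LINQ versions read identically). None of these is where the time goes; the time goes into reading it later.

```python
# The same computation three ways: sum of the squares of the even numbers.
numbers = [1, 2, 3, 4, 5, 6]

# Plain loop.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# map/filter, concepts that date back to the sixties.
total_fp = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

# Explicit recursion, which is all the loop ever was.
def sum_even_squares(xs):
    if not xs:
        return 0
    head, *tail = xs
    return (head * head if head % 2 == 0 else 0) + sum_even_squares(tail)

assert total == total_fp == sum_even_squares(numbers) == 56
```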
If I were to invent a 10x-meganinja-dev-superpower-tool it would be focused on static and execution analysis, with strong extensibility in a simple DSL or programming language, and decent visualisation APIs. It would not be 'type here to spin the wheels and see what code drops out'; that part is solved many times over already, in Wordpress, JAXB oriented CRM and so on. The ability to confidently implement change in a large, complex system is enabled by deterministic immediate analysis and visualisation.
Then there are the soft skills. While you're doing it you need to keep bosses and "stakeholders" happy and make sure they do not start worrying about the things you do. So you need to communicate reliably and clearly, in a language they understand, which is commonly pictures with simple words they use a lot every day and little arrows that bring the message together. Whether you use this or that mainstream programming language will not matter at all in this.
I don't really know if AI makes programming easier or harder. On one side, you can explore any topic with AI, which is a super powerful ability when it comes to learning. On the other side, the temptation to offload your work to AI is big, and if you do that, you'll learn nothing. So it comes down to personality type, I guess. Some people will use AI to learn and some people will use AI to avoid learning; both behaviours are empowered.
I have a simple and useless answer for how to solve that. Throw it all out. Start from scratch. Start with a simple CPU. Start with a simple OS. Start with simple protocols. Do not write frameworks. Make the number of layers between your code and hardware as small as possible, so it's actually possible to understand it all. Right now the number of abstraction layers is too big. Of course nobody's going to do that; people will put in more abstraction layers and it'll work, it always works. But that sucks. The software stack was much simpler 20-30 years ago. We didn't even have source control, I was the young developer who introduced Subversion into our company, but we still delivered useful software.
Or does it just seem that way because you've had a whole lifetime to digest it one little bit at a time so that it all seems intuitive now? If "easy to understand and get started with" were the bar for programming capability, we'd have stopped with COBOL.
I'm not saying that they can actually do that per se; switching costs are so low that if you are doing worse than an existing competitor, you'd lose that volume. Nor am I saying they are deliberately bilking folks -- I think it would be hard to do that without folks cottoning on.
But, I did see an interesting thread on Twitter that had me pondering [1]. Basically, Claude Code experimented with RAG approaches over the simple iterative grep that they now use. The RAG approach was brittle and hard to get right in their words, and just brute forcing it with grep was easier to use effectively. But Cursor took the other approach to make semantic searching work for them, which made me wonder about the intrinsic token economics for both firms. Cursor is incentivized to minimize token usage to increase spread from their fixed seat pricing. But for Claude, iterative grep bloating token usage doesn't harm them and in fact increases gross tokens purchased, so there is no incentive to find a better approach.
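For anyone who hasn't poked at the two approaches, here's a purely illustrative sketch (not anyone's actual implementation) of why the token profiles differ: the grep-style loop re-reads raw matches every round, while the embedding-based approach pays an up-front indexing cost and then pulls back only a handful of chunks per query.

```python
import re
from pathlib import Path

def grep_style_search(repo: Path, pattern: str) -> list[str]:
    """Brute force: scan every file for a regex and return matching lines.
    An agent would run something like this repeatedly, feeding the raw
    matches back into its context each round."""
    hits = []
    for path in repo.rglob("*.py"):
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if re.search(pattern, line):
                hits.append(f"{path}:{i}: {line.strip()}")
    return hits

def rag_style_search(index: dict[str, list[float]], query_vec: list[float], k: int = 5) -> list[str]:
    """Semantic retrieval: assumes code chunks were embedded ahead of time.
    Returns only the k nearest chunks, so per-call context stays small."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sorted(index, key=lambda chunk: dot(index[chunk], query_vec), reverse=True)[:k]
```

The first is dead simple and robust but token-hungry per invocation; the second keeps per-query context small but is only as good as the index, which is roughly the brittleness-versus-token-spend tradeoff described above.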
I am sure there are many instances of this out there, but it does make me inclined to wonder if it will be economic incentives rather than technical limitations that eventually put an upper limit on closed weight LLM vendors like OpenAI and Claude. Too early to tell for now, IMO.
[1] https://x.com/antoine_chaffin/status/2018069651532787936
As someone with 0 (zero) Swift skills who has built a very well functioning iOS app purely with AI, I disagree.
AI made me infinitely faster, because without it I wouldn't even have tried to build it.
And yes, I know the limits and security concerns and understand enough to be effective with AI.
You can build functioning applications just fine.
It's complexity and novel problems where AI _might_ struggle, but not every piece of software is complex or novel.
This is so frustratingly common.
That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.
Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)
Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.
I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears like it is thinking about the problem, to then implement a solution I'm going to have to check carefully.
Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.
Your walk-through is the reason why. You've not got magic for free, you've got something kinda cool that needs operational management and constant verification.
This is how hand-building software goes too, as I see it.
There's such a wide divergence of experience with these tools. Often times people will say that anyone finding incredible value in them must not be very good. Or that they fall down when you get deep enough into a project.
I think the reality is that to really understand these tools, you need to open your mind to a different way of working than we've all become accustomed to. I say this as someone who's made a lot of software, for a long time now. (Quite successfully too!)
In some ways, while the ladder may be getting pulled up on junior developers, I think they're also poised to be able to really utilize these tools in a way that those of us with older, more rigid ways of thinking about software development might miss.
When tools prove their worth, they get taken into the normal way software is produced. Older people start using them, because they see the benefit.
The key thing about software production is that it is a discussion among humans. The computer is there to help. During a review, nobody is going to look at what assembly a compiler produces (with some exceptions of course).
When new tools arrive, we have to be able to blindly trust them to be correct. They have to produce reproducible output. And when they do, the input to those tools can become part of the conversation among humans.
(I'm ignoring editors and IDEs here for the moment, because they don't have much effect on design, they just make coding a bit easier).
In the past, some tools have been introduced, got hyped, and faded into obscurity again. Not all tools are successful, time will tell.
That said, I don't think this negates what TFA is trying to say. The difficulty with software has always been around focusing on the details while still keeping the overall system in mind, and that's just a hard thing to do. AI may certainly make some steps go faster, but it doesn't change that much about what makes software hard in the first place. For example, even before AI, I would get really frustrated with product managers a lot. Some rare gems were absolutely awesome and worth their weight in gold, but many of them just never were willing to go to the details and minutiae that's really necessary to get the product right. With software engineers, if you don't focus on the details the software often just flat out doesn't work, so it forces you to go to that level (and I find that non-detail-oriented programmers tend to leave the profession pretty quickly). But I've seen more than a few situations where product managers manage to skate by without getting to the depth necessary.
Particularly when the human acts as the router/architect.
However, I've found Claude Code and Co only really work well for bootstrapping projects.
If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.
It will probably change once the approach to large scale design gets more formalized and structured.
We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
Yes, AI will one shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.
Containment of state also happens to benefit human developers too, and keep complexity from exploding.
I've found the same principles that apply to humans apply to LLMs as well.
Just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.
At the highest level, most applications can be written in simple, plain English (expressed via function names). Both humans and LLMs will understand programs much better when represented this way.
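A toy sketch of what I mean (hypothetical names, nothing from a real codebase): the top-level function reads as plain English, and each step is a small stateless function that a human or an LLM could implement in isolation with almost no surrounding context.

```python
# Hypothetical top level: the program reads as plain English, and each
# step is a small, stateless function that can be implemented in isolation.
def process_signup(raw_request: dict) -> dict:
    form = parse_signup_form(raw_request)
    validate_email_address(form["email"])
    account = create_account(form)
    send_welcome_email(account)
    return account

def parse_signup_form(raw_request: dict) -> dict:
    return {"email": raw_request.get("email", "").strip().lower(),
            "name": raw_request.get("name", "").strip()}

def validate_email_address(email: str) -> None:
    if "@" not in email:
        raise ValueError(f"invalid email: {email}")

def create_account(form: dict) -> dict:
    return {"id": hash(form["email"]) & 0xFFFF, "email": form["email"], "name": form["name"]}

def send_welcome_email(account: dict) -> None:
    print(f"welcome, {account['name']}!")  # stand-in for a real mailer

print(process_signup({"email": " Ann@Example.com ", "name": "Ann"}))
```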
Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".
Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192
Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.
So given that the term technical debt has historically been used, it seems the most appropriate descriptor.
If you write a large amount of terrible code and end up with a money producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive)
Most "technical debt" is not buying the code author anything and is materialized through negligence rather than intelligently accepting a tradeoff
The primary difference between a programmer and an engineer.
> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
AI, the silver bullet. We just never learn, do we?
The essence: query all the users within a certain area and do it as fast as possible
The accident: spending an hour to survey spatial tree library, another hour debating whether to make our own, one more hour reading the algorithm, a few hours to code it, a few days to test and debug it
Many people seem to believe implementing the algorithm is "the essence" of software development so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
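A toy illustration of that split (nothing to do with any particular library): the essence fits in one line; the accident is everything you do once that one line is too slow.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    x: float
    y: float

# The essence: which users fall inside this bounding box?
def users_in_area(users, xmin, ymin, xmax, ymax):
    return [u for u in users if xmin <= u.x <= xmax and ymin <= u.y <= ymax]

# The accident starts the moment this linear scan is too slow: surveying or
# building a quadtree / R-tree, debugging its edge cases, benchmarking it,
# and so on. None of that changes what the function above means.

users = [User("ann", 1, 1), User("bob", 5, 9), User("cho", 2, 3)]
print(users_in_area(users, 0, 0, 3, 4))  # ann and cho, not bob
```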
The essence: I need to make this software meet all the current requirements while making it easy to modify in the future.
The accident: ?
Said another way: everyone agrees that LLMs make it very easy to build throw away code and prototypes. I could build these kind of things when I was 15, when I still was on a 56k internet connection and I only knew a bit of C and html. But that's not what software engineers (even junior software engineers) need to do.
What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?
Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.
I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.
And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.
No-code is the same trend that has abstracted out all the generic stuff into infrastructure layers, letting the developers focus on Lambda functions while everything in the lower levels is config-driven. This was happening all the time, pushing the developer to easier, higher layers and absorbing all the complexity and algorithmic work into config-driven layers.
Runtime cost of a Lambda function might far exceed that of a fully hand-coded application hosted on your local server. But there could be other factors to consider.
Same with AI. You get a jump-start with full speed, and then you can take the wheel.
Building a plane is easier than building software. That's why they don't have bootcamps for building planes or becoming a rocket engineer. Building rockets or planes as an engineer is a breeze so there's no point in making a bootcamp.
That's the awesome thing about being a swe, it's so hard that it's beyond getting a university degree, beyond requiring higher math to learn. Basically the only way to digest the concept of software is to look at these "tutorials" on the internet or have AI vibe code the whole thing (which shows how incredibly hard it is, just ask chatGPT).
My friend became a rocket engineer and he had to learn calculus, physics and all that easy stuff which university just transferred into his brain in a snap. He didn't have to go through an internet tutorial or bootcamp.
As most people here probably know, it's now called Xojo and in my opinion is both somewhat outdated and expensive. So I'm not recommending it, but credit where it's due, and it certainly was due for early versions of REALbasic when it was still affordable shareware.
The problem with all RAD tools seems to be that they eventually morph into expensive corporate tools no matter what their origins were. I don't know any cross-platform exception (I don't count Purebasic as RAD and it's also not structured).
As for AI, it seems to be just the same. The right AI tool accelerates the easy parts so you have more time for the hard parts. Another thing that bothers me a lot is when alleged "professionals" argue against everyday computing for everyone. They're accelerating the death of general computing platforms, and in the end no one will benefit from that.
It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There are two things: communication and architecture. The combination of these two makes you a better problem solver. Talking to other people helps you figure out your blind spots and forces you to reduce complex ideas down to their most essential parts. The loop of solving a problem and then seeing how well the solution worked gives you an instinct for what works and what doesn't work for any given problem. So how do agents make you better at these two things?
If you are better at explaining what you want, you can get the agents to do what you want a lot better. So you'd end up being more productive. I've seen junior developers that were pretty good problem solvers improve their ability to communicate technical ideas after using agents.
Senior engineers develop instincts for issues down the road. So when they begin any project, they'll take this into account and work by thinking it through. They can get the agents to build towards a clean architecture from the get-go, such that issues are easily traceable and debuggable. Junior developers get better at architecture by using agents because they can quickly churn through candidate solutions. This helps them more rapidly learn the strengths and weaknesses of different architectures.
On the learning front, I spent the weekend asking Claude questions about Rust, and then getting it to write code that achieved the result I wanted. I also now have a much better understanding of the different options because I've gotten three different working examples and gotten to tinker with them. It's a lot faster to learn how an engine works when you have a working engine on a dyno than when you have no engine. Claude built me a diesel, a gasoline and an electric engine and then I took them apart.
Pretty nice description.
In my own career I switched roles to get more time on an area where I felt I needed more growth and practice. Turns out I never got really very good at it, and basically was just in a role I wasn't great at for 6 years. It was miserable. My lesson is "if you know you are bad at something, don't make it load-bearing in your life or career".
In most professions barely anyone is doing the continual education or paying attention to the "scene" for that profession, if you do that alone you're probably already in the top 10%.
"A generalist knows less and less about more and more until he knows absolutely nothing about everything."
Getting paid well doing something you actually enjoy doing is key =3
https://stevelegler.com/2019/02/16/ikigai-a-four-circle-mode...