Also, I get dozens of calls/emails a month from my undergrad/grad alma mater. The bottom seems to have fallen out of the labor market when even Ivy League and top-5 CS/tech schools have students desperately seeking entry-level jobs.
To be fair, as I mentioned on another comment, there are other factors:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
Would be great to see some industry-wide stats here. There are three OTHER factors at play here:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
4. AI (???)
Not sure how much can be attributed to AI. That said, I'd confidently say our team is at least 2x more productive than 3 years ago. Huge amounts of small, fiddly work get thrown at the LLM to solve, instead of us writing clever algos, etc.
Things are getting back to normal, actually, and the companies who are embarrassed to be making cutbacks are saying it's because they're using AI, not because they over-hired.
I'm personally only more productive with the help of AI if one of the following conditions is met:
1. It's something I was going to type anyway, but I can just press Tab and/or make a minor edit
2. The code produced doesn't require many changes or much time to understand, because the times where it has required many changes or deeper understanding, it probably would have been faster to just write it myself
Where it has been helpful, though, is debugging errors or replacing search engines for help with docs or syntax. But sometimes it produces bullsh*t that doesn't exist, and this can lead you down a rabbit hole to nowhere.
More than once it's suggested something to me that solved all of the things I needed, only to realise none of it existed.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
Duolingo needs better content, not a faster way of producing the same stuff.
We've found that most, if not all, models are extremely bad at writing frontend code. I really hope you all know what you're doing here; you could end up with unmaintainable, incomprehensible AI slop...
For basic components, I’ve found that asking for more complexity (e.g. asking it to wrap your nav component in a React context or a custom hook) yields better overall code.
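To make that concrete, here's a minimal sketch of the kind of structure that prompt tends to produce (all names are hypothetical, not from any real codebase): the nav state lives in a context, and consumers reach it through a custom hook instead of prop-drilling.

```tsx
import { createContext, useContext, useState, type ReactNode } from "react";

// Hypothetical nav state shared across the app via context.
interface NavState {
  activeItem: string;
  setActiveItem: (item: string) => void;
}

const NavContext = createContext<NavState | undefined>(undefined);

export function NavProvider({ children }: { children: ReactNode }) {
  const [activeItem, setActiveItem] = useState("home");
  return (
    <NavContext.Provider value={{ activeItem, setActiveItem }}>
      {children}
    </NavContext.Provider>
  );
}

// Custom hook so consumers never touch the context object directly.
export function useNav(): NavState {
  const ctx = useContext(NavContext);
  if (!ctx) throw new Error("useNav must be used inside <NavProvider>");
  return ctx;
}
```

Asking for the extra indirection seems to pull the model toward patterns common in well-maintained codebases rather than a one-off inline component.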
Does this include UI design? We're finding tools like v0 decent, but nowhere near production design quality. Same for just using Claude or Gemini directly.
I don't see any AI yet anywhere near good enough to literally do a person's job.
But I can easily see it making someone, say 20%-50% more effective, based on my own experience using it for coding, data processing, lots of other things.
So now you need 8 people instead of 10 people to do a job.
That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
That's exactly what people mean by "replacing".
I would agree about needing fewer heads for certain types of roles, and I could even buy that tech staff would hardly ever need to handwrite code directly.
For serious projects where critical data, physical safety, etc. are at stake for end users, I still don’t see the path toward simply having no in-house engineer to certify changes generated by an LLM.
To argue against myself though - it might just mean more/better code is written by the same number of engineers.
If code gets cheaper, people will use more of it
Maybe I’m just stating the obvious…
This is an excellent question for society to answer, and hopefully for policy-makers to think about. A challenge with capitalism as I see it practiced is that most for-profit orgs think quarter to quarter about earnings, costs, etc. They are not focused on second-order issues arising 5-10 years later.
We've all seen this play out in our own lives -- with the gutting of American manufacturing...and the resulting discord a generation later.
So yes, given this, broadly speaking junior devs are not needed. If someone is a junior dev and can't find a job, they'll need to prove they can function at a senior level if they want to be employable going forward. But this is basically the market today anyway.
But these are my predictions. You can disagree, and I'm sure I'll be off to some degree, but I'd put money on being mostly correct in these claims.
The role will change and individuals will become more productive. These tools are impressive and moving in the direction of your prediction. But, personally, I think it is naive to think that the need for junior roles will be entirely eliminated in 5 years.
In my opinion there would be no point in getting a junior developer to do anything right now, in the same way I'm not going to pay a rookie artist or web designer to do anything for me anymore, because I'd get better results from AI. Obviously companies which are not productivity- and cost-optimised might not care/realise they can do this right away (there will always be the odd inefficient hire here and there), but my guess is that 99.9% of these hires make no economic sense and will be so few and far between that the role will effectively be eliminated in place of something else. And this happens often in tech. I used to know "webmasters" who just did HTML/CSS. The web still runs on HTML/CSS, but those jobs no longer exist, and the people who used to do that work are now doing other things. Again, why the hell would I pay someone to write HTML/CSS when there are plenty of WYSIWYGs and AI tools which could do a better job, cheaper and quicker?
Once this starts happening and senior developers in these companies are doing nothing but code reviewing PRs written by AI and fixing bugs in that code, they will leave and the company will have no developers.
As part of this AI-first shift, all engineers now have access to Cursor, and we’re still figuring out how to integrate it. We just started defining .cursorrules files for projects.
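For anyone unfamiliar, a .cursorrules file is just plain-text instructions that Cursor prepends to its requests for that project. A minimal sketch of the sort of thing we're experimenting with (contents hypothetical, not our actual rules):

```
# .cursorrules (hypothetical example)
- Use TypeScript strict mode; no `any` without a justifying comment.
- Prefer existing utilities in src/lib before adding new dependencies.
- Every new function needs a unit test in the adjacent __tests__ folder.
- Never propose code that hasn't been checked against the linter and test suite.
```

The hope is that rules like these catch some of the "first pass" problems described below before a human reviewer has to.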
What’s been most noticeable is how quickly some people rely too much on AI outputs, especially the first pass. I’ve seen PRs where it’s obvious that the generated code wasn’t even run or reviewed. I know this is part of the messy adjustment period, but right now, it feels like I’m spending more time reviewing and cleaning up code than I did before.
If the answer is extensive testing, who verifies the model-generated tests?
Given that a nontechnical PM would neither be able to inspect the system code nor its tests, this is the part that does not add up for me. It seems at least one person still has to really understand the “hard part” of computing as it relates to their domain.
We are a team of 5, down from 8 a few months ago, and we are working on more stuff. I would not be able to survive without AI writing some queries and scripts for me. It really saves a ton of time.
Before LLMs got good enough, there were projects I would scope with the expectation of having one junior consultant do the coding grunt work - simple Lambdas, Python utility scripts, bash scripts, infrastructure as code, translating some preexisting code to the target language of the customer.
This is the perfect use case for ChatGPT. It’s simple, well-contained work that can fit in its context window; the AWS SDKs in various languages are well documented, there is plenty of sample code, and it’s easy enough to test.
I can tell it to “verify all AWS SDK functions on the web” or give it the links to newer SDK functionality.
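For a sense of scale, here's a minimal sketch of the kind of Lambda I mean, using the AWS SDK v3 for JavaScript (the bucket and event shape are hypothetical): small, self-contained, well covered by docs and sample code, and trivial to test.

```ts
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const s3 = new S3Client({}); // region comes from the Lambda environment

// Hypothetical utility Lambda: list up to 100 keys under a prefix.
export const handler = async (event: { bucket: string; prefix?: string }) => {
  const result = await s3.send(
    new ListObjectsV2Command({
      Bucket: event.bucket,
      Prefix: event.prefix ?? "",
      MaxKeys: 100,
    })
  );
  return {
    statusCode: 200,
    body: JSON.stringify((result.Contents ?? []).map((obj) => obj.Key)),
  };
};
```

An LLM usually gets something like this right on the first or second pass, which is exactly why it has displaced the junior-consultant version of the task for me.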
I don’t really ever need a junior developer for anything. If I have to be explicit about the requirements anyway, I can use an LLM.
And before the gatekeeping starts: I’ve been coding as a hobby since 1986, started in assembly language back then, and have been coding professionally since 1996.
The leading edge models surpass humans in some ways, but still make weird oversights routinely. I think the models will continue to get bigger and have more comprehensive world models and the remaining brittleness will go away over the next few years.
We are early on in a process that will go from only a few jobs to almost all (existing) jobs very quickly as the models and tools continue to rapidly improve.
As complexity grows, the usefulness of AI agents decreases: a lot, and quite fast.
In particular, integration of microservices is a really hard case to crack for any AI agent, as it often mixes training data with context data.
It is more useful in centralised apps, and especially for frontend dev, as long as you don't use finite state machines. I don't understand why, but even Claude/Cursor trips on otherwise really easy code (and btw, if you don't use state machines for your complex frontend code, you're doing it wrong).
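To be clear about what I mean by a state machine here, it's nothing exotic; a minimal hand-rolled sketch (states and events hypothetical, no library assumed):

```ts
// Hypothetical states/events for a data-fetching UI widget.
type State = "idle" | "loading" | "success" | "error";
type Event = "FETCH" | "RESOLVE" | "REJECT" | "RESET";

// Explicit transition table: any pair not listed is a no-op.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle: { FETCH: "loading" },
  loading: { RESOLVE: "success", REJECT: "error" },
  success: { RESET: "idle" },
  error: { FETCH: "loading", RESET: "idle" },
};

function next(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}

// e.g. next("idle", "FETCH") === "loading"; next("idle", "RESOLVE") === "idle"
```

In my experience the agents handle ad-hoc conditional rendering fine but trip over exactly this kind of explicit transition table, even though it's the easier code to reason about.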
As long as you know what your agent is shitty at, however, using AI is a net benefit: you don't lose time trying to communicate your needs and just do the work yourself, so it is only gains and no losses.
But I think the central question is not how much of software development can be automated. It's rather how many engineers companies _believe_ they need.
Having spent some time in mid sized companies adjacent to large companies, the sheer size of teams working on relatively simple stuff can be stunning. I think companies with a lot of money have overstaffed on engineers for at least a decade now. And the thing is: It kinda works. An individual or a small team can only go so far, a good team can only grow at a certain rate. If you throw hundreds of engineers at something, they _will_ figure it out, even if you could theoretically do it with far less, by optimising for quality hires and effective ways of working. That's difficult and takes time, so if you have the money for it, you can throw more bodies at it instead. You won't get it done cheaper, probably also not better, but most likely faster.
The mere _idea_ that LLMs can replace human engineers kinda resets this. The base expectation is now that you can do stuff with a fraction of the work force. And the thing is: You can, you always could, before LLMs. I've been preaching this for probably 20 years now. It's just that few companies dared to attempt it, investors would scoff at it, think you're being too timid. Now they celebrate it.
So like many, I think any claims of replacing developers with AI are likely cost savings in disguise, presented in a way the stock market might accept more than "it's not going so well, we're reducing investments".
All that aside, I also find it difficult as a layperson to separate the advent of coding LLMs from other, probably more consequential effects, like economic uncertainty. When the economy is stable, companies invest. When it's unstable, they wait.
In a sprint-planning scenario, I think tasks that were 1, 2, 3, 5, 8, 13, etc. get put down a notch, nothing more, with the arrival of AI. AI has not turned an 8-point task into a 3-point one at all. There is a 50/50 chance that an old pre-AI 8-point task remains 8 points, sometimes dropping to 5.
... accidentally hit reply before the post was ready?
I don't think it is possible NOW.
But for specific areas, the productivity gain you get from a single developer with an LLM is much higher than before. Some areas where I see it shining (toy sketch after the list):
* building independent React/UI components
* boilerplate code
* reusing already solved solutions (e.g. try algorithm X,Y,Z. plot the chart in 2D/3D,...)
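As a toy example of the first two bullets, the kind of self-contained component where I find the output usable almost as-is (names hypothetical):

```tsx
import { useState, type ReactNode } from "react";

// Hypothetical independent UI component: a collapsible section, zero external deps.
export function Collapsible({
  title,
  children,
}: {
  title: string;
  children: ReactNode;
}) {
  const [open, setOpen] = useState(false);
  return (
    <section>
      <button onClick={() => setOpen((o) => !o)} aria-expanded={open}>
        {title} {open ? "▾" : "▸"}
      </button>
      {open && <div>{children}</div>}
    </section>
  );
}
```

No app-specific context, no shared state, everything fits in one screen: that's the sweet spot.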
> What changed significantly in your workflow?
Hiring freeze, because leaders are not sure yet about the gains from AI. What if we hire a bunch of people and can't come up with projects for them (not because we are out of ideas, but because getting investment is hard if you are not an AI company) while the LLM is generating so much code?
> Are you 10x more efficient?
Not always, but I am filtering things out faster, which gives me the opportunity to get into the code concepts sooner (because AI summarizes the 10-page blog post before I read it).
So instead of seeing a mass drop in job openings, you will see companies that are not bottlenecked by org issues start to move very fast. In general that will create new markets and have a positive effect on jobs.
The people writing boring crud apps should be scared (but I think it's a failure in our industry that this is still a thing).
The technical debt that will be amassed by AI coding is worrying, however. Coworkers here routinely try to merge in stuff that is just absolute slop, and now I even have to argue with them on the basis that they think it's right because the AI wrote it...
In many ways LLMs feel like the next iteration of search engines: they’re easier to use, you can ask follow up questions or for examples and get an immediate response tailored to your scenario, you can provide the code and get a response for what the issue is and how to fix it, you can let it read internal documentation and get specialized support that wouldn’t be on the internet, you can let it read whole code bases and get reasonable answers to queries about said code, etc.
I don’t really see LLMs automating engineers end-to-end any time soon. They really are incapable of deductive reasoning; the extent to which they appear capable of it is emergent from inductive phenomena, and it breaks down massively when the input is outside the training distribution (see all the examples of LLMs failing basic deductive puzzles that are very similar to a well-known one, but slightly tweaked).
Reading, understanding, and checking someone else’s code is harder than writing it correctly in the first place, and letting LLMs write entire code bases has produced immense garbage in all the examples I’ve seen. It’s not even junior level output, it’s something like _panicked CS major who started programming a year ago_ level output.
Eventually I think AI will automate software engineering, but by the time it’s capable of doing so _all_ intellectual pursuits will be automated because it requires human level cognition and adaptability. Until then it’s a moderate efficiency improvement.
What's changed in the workflow is a lot, really. We do a lot of documentation, so most of that boilerplate is now done via AI-based workflows. In the past, that would have been one of us copy-pasting from older documents for about a month. Now it takes seconds. Most of the content is still us and the other stakeholders. But the editing passes are mostly AI too now. Still, we very much need humans in the loop.
We don't use copilot as we're doing documentation, not code. We mostly use internal AIs that the company is building and then a vendor that supports workflow-style AI. So, like, iterative passes under the token limits for writing. These workflows do get pretty long, like 100+ steps, just to get to boilerplate.
We're easily 100x more efficient. Four of us can get a document done in a week that took the whole team years to do before.
The effort is more concentrated now. I can shepherd a document to near-final review with a meeting or two with the specialist engineers; that used to take many meetings with much of both teams. We were actually able to keep up and not fall behind for about 3 months. But management sees us as a big pointless cost center of silly legal compliance, so we're permanently doomed to never get caught up. Whatever, still have a job for now.
I guess my questions back are:
- How do you think AI is going to change the other parts of your company than coding/engineering?
- Have you seen other non engineering roles be changed due to AI?
- What do your SOs/family think of AI in their lives and work?
- How fast do you think we're getting to the 'scary' phase of AI? 2 years? 20 years? 200 years?
[0] I try to keep this account as anonymous as possible, so no, I'm not sharing the company.
aristofun•5h ago
Looks like the higher the management and the farther away from real engineering work, the more excitement there is and the less common sense and real understanding of how developers and LLMs work.
> Are you 10x more efficient?
90% of my time is spent thinking and talking about the problem and solutions. 10% is spent coding (sometimes 1%, with 9% spent integrating it into existing infrastructure and processes). Even with an ideal AGI coding agent I'd be only ~10% more efficient.
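That's essentially Amdahl's law. A quick back-of-envelope, assuming the 90/10 split above and a coding step that becomes free:

$$\text{speedup} = \frac{1}{(1-p) + p/s} \;\xrightarrow{\;s \to \infty\;}\; \frac{1}{1-p} = \frac{1}{0.9} \approx 1.11$$

where \(p = 0.1\) is the fraction of time spent coding and \(s\) is how much faster the agent makes that fraction. About 10% overall, no matter how good the agent gets.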
Imagine a very bright junior developer: you're still heavily time-taxed mentoring them and communicating.
Not many non technical people (to my surprise) get it.
Based on posts and comments here there are plenty “technical enough” people who don’t understand the essence of engineering work (software engineering in particular).
Spitting out barely (yet) working, throwaway-grade code is an impressive accomplishment for TikTok demos, but it has very little to do with the complex, business-critical software most real engineers deal with every day.
phyalow•4h ago
I would consider myself today 2-3x more effective than where I was 12 months ago.
I can grok a new code base much faster by having an AI explain things to me that previously only a greybeard could. I can ask Gemini 2.5 (1M context length) crazy things like “please create a sprint program for new feature xyz” and get really good, high-quality answers. Even crazier, I can feed those sprints to Claude Code (CI/CD tests all running) and it will do a very good job of implementing. My other option is to farm those sprints out to the human dev resources I have at hand and then spend 90% of my time “thinking, hand-holding and talking about code and solutions” and working with other devs to get code into prod.
Imo this is a false victory; the emphasis should be on shipping. Although each domain / pipeline / field needs and prioritises different things, and rightfully so. AI lets me ship so much faster, and for me that means $$$.
I think I am a realist, and your last point about “engineering” is a contradiction. Maybe try better tools? Lastly:
“While the problem of AI can be viewed as, ‘Which of all the things humans do can machines also do?’, I would prefer to ask the question in another form: ‘Of all of life’s burdens, which are those machines can relieve, or significantly ease, for us?’”
Richard Hamming, The Art of Doing Science and Engineering: Learning to Learn, p. 43
aristofun•3h ago
How often do you grok a new code base per year? If that's the core of your work, then yes, you benefit from AI much more than some other engineers.
Every situation is unique for sure.
> I would class myself a mid to high skill dev
It's not about your skill level, but rather the nature of your job (working on a single product, an outsourcing company with time-boxed projects, R&D, etc.).
markus_zhang•4h ago
I think those stakeholders are the true engine of promoting AI.
ookblah•4h ago
or maybe at these companies the product is pretty stable or you're in an area where it's more optimizations vs. feature building?
aristofun•3h ago
Because if the other 90% is spent well enough, you do the right thing in the remaining 10%.
Just try working in a company with 100+ engineers and a profitable, several-years-old product with real customers, and you'll get it.