I continue to believe the first 12-18 months after graduation are vital. I still can't believe some of the things I thought about software before I entered the workforce, and I can't believe some of the things I got away with in the years since.
Data Structures, Databases, Operating Systems, Discrete Math, etc. are usually done without any programming at all, besides one or two assignments.
The rest of the program is mostly math. The only other time you do programming is if you choose an elective towards your graduation.
I have an MS in CS from a small Eastern-European state-run university, but the curriculum looked very different: a ton of applied math, yes, but also assembly programming, graphics programming, networking, databases, programming intro I-II, advanced programming I-II, project management, artificial intelligence, all kinds of specializations, algorithms and data structures I-II, etc.
Is this not how it is in the US?
Edit: and most of those courses had practical lab sections with hands-on programming
Absolutely. The value in a CS degree is not in learning this or that language or tool. It is in being able to understand how to solve difficult problems effectively.
1. Juniors grow. Most of them grow fast, becoming solid mid-levels in 1-1.5 years (in the right environment).
2. The industry depends heavily on a wide pool of mid-levels. These are the folks who can produce decent quality solutions, and don't need hand-holding anymore. They are the “velocity” of the team. Engineers tend to spend a few years there, before they might grow into seniors.
3. Seniors will age out.
4. AI doesn't grow (as it is today); it's mostly stuck at the low-junior level. This might change, but currently there are no signs of it.
5. Seniors would need to spend _a lot_ of time fixing AI's output, and course-correcting.
Now, all of this combined: the junior --> senior transition takes, say, 5+ years on average (I know, it depends). If we move on with the "1/2 senior + AI agents" model, how does a company form a new team? When those seniors move away / retire, who's taking their place? What happens to the velocity of the team without mid-levels now?
If we let this go on for a couple of years before a reckoning of "oh crap" happens, it'll be very hard to come back from this --> certain "muscles" of the aforementioned seniors will have atrophied (e.g. mentoring, growing others), a lot of juniors (and mediors!) will have left the industry.
I hope companies will recognize this risk in time...
That's at least my experience working with multiple digital agencies and seeing it all unfold. Most juniors don't last long these days precisely because they skip the part that actually makes them valuable - storing information in their head. And that's concerning: if making actually good use of AI requires being an experienced engineer, but becoming an experienced engineer requires getting there without AI doing all your work for you, then how are we going to get new experienced engineers?
The nerdy tinkerers stay the same and AI empowers them even more. Are they rare? Yes. But this shifts the topic from science/engineering to economics/sociology. Granted, that was the topic of the original submission but for me that’s the less interesting part.
If Seniors are too busy babysitting the AI, then no.
I disagree. LLM assisted coding is yet another level of abstraction. It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.
Today’s juniors still learn the same core skill, the abstract thinking and formalization of a problem is still there. It’s just done on a higher level of abstraction and in an explicitly formulated natural language instead of a formal one for the first time ever. They’re not even leaving out the formal one completely because they need to integrate and fine tune the code for it to work as needed.
Does it introduce new problems? Sure. Does it mean that today’s juniors will be less capable compared to today’s seniors once they have the same amount of experience? I really doubt it.
One can be confident that they wrote correct Java code without knowing what bytecode or machine code the JVM ends up executing. But you can't know whether the code output by an LLM is correct without understanding the code itself.
I'm sure there are some pretty major bugs in production code because someone used some Java functionality intuitively without understanding it fully, and in some edge case it behaves differently than anticipated.
Of course this issue is much more prominent in LLM-assisted coding, but we're back to square one. The higher the level of abstraction provided by the tool, the more room for mistakes it leaves, but the higher the productivity. It's easier to avoid bugs of this type when using assembly than when using Java.
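To make this concrete, a classic example of exactly that kind of Java edge case (my own illustration, not a specific incident from anywhere): == on boxed Integers compares references, and the JVM only caches boxed values from -128 to 127, so code that looks right and passes every test with small numbers quietly breaks later.

    public class BoxingGotcha {
        public static void main(String[] args) {
            Integer a = 127, b = 127;
            System.out.println(a == b);      // true: both refer to the cached object
            Integer c = 128, d = 128;
            System.out.println(c == d);      // false: two distinct boxed objects
            System.out.println(c.equals(d)); // true: value comparison, what was intended
        }
    }

Someone writing this intuitively would swear it's correct, and it even behaves correctly for every value they happened to try. The same failure mode applies to LLM output, just at a much coarser grain.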
> Today’s juniors still learn the same core skill, the abstract thinking and formalization of a problem is still there. It’s just done on a higher level of abstraction and in an explicitly formulated natural language instead of a formal one for the first time ever. They’re not even leaving out the formal one completely, because they need to integrate and fine tune the code for it to work as needed.
> Does it introduce new problems? Sure. Does it mean that today’s juniors will be less capable compared to today’s seniors once they have the same amount of experience? I really doubt it.
Not the same. Whether you are writing Assembly or Java, you have to determine and express your intent with high accuracy - that is a learned skill. Providing problem details to an LLM until it produces something that looks correct is not the same type of skill.
If that is the skill that the business world demands in 5 years, so be it, but arguing that it's simply the next step in the Assembly, C, Java progression makes no sense.
If you're using LLMs to code like this, you're using them incorrectly. You need to specify your intent precisely from the start, which is a skill you learn. You can use other tools like OOP languages incorrectly with "working" results as well; that's why code quality, clean code, and best practices are a thing.
I'm sure many junior developers use LLMs the way you described but maybe that's exactly what the universities and seniors need to teach them?
Or maybe I'm wrong and we're all headed for a future of being prompt engineers.
I was a sceptic up until recently, when I failed to create a solution myself.
Since I am mostly a hobbyist programmer (for 25 years and counting), I often find I don't have the time to sit down for prolonged periods to reason about my code.
ChatGPT helps my brain, tired from work, develop my code 10x quicker by easily removing roadblocks for me.
I find it an excellent tool.
This is a senior-level view. Juniors don't have this skill yet, and it's one of the things we're concerned they won't pick up.
Wildly debatable
> Juniors with LLMs can do more than juniors without them could
Except that, being juniors, they lack the critical skills to understand when code produced by AI is trash.
Who would pay for that?
Not a lot of companies, which acts as a filter.
As it turns out, a few companies do, because they are super strapped for cash. That's why a lot of juniors' first experiences are a trial by fire in environments that are on another level of dysfunction, working with either no seniors or bottom-of-the-barrel seniors.
This acts as another filter. Some juniors give up at this point.
These filters prevent junior engineers from becoming senior. This is actually pretty good for seniors - being a rare, in-demand commodity usually is.
I don't think AI changes this calculus much, except insofar as AI amplifies the capacity for juniors to build ever bigger code Jenga towers.
This is largely a result of the compensation behaviour of the industry. A junior who gets hired and grows does not get their salary raised to the market rate; the only way for them to get compensation commensurate with their new skills is to leave and get hired somewhere else. Companies can avoid this problem by not doing this.
It's kind of a tragedy-of-the-commons effect, except the "tragedy" is for tech employers - who are stroppy because other companies don't have to provide their workers with a free sushi bar, so why should they?
Thankfully, that's not my experience with most juniors. Again, my experience is limited (as all of ours is), but if you filter juniors well during hiring, you can get a wonderful set of high-potential learning machines, who, with the right mentors, grow like crazy.
> If we let this go on for a couple of years before a reckoning of "oh crap" happens, it'll be very hard to come back from this --> certain "muscles" of the aforementioned seniors will have atrophied (e.g. mentoring, growing others), a lot of juniors (and mediors!) will have left the industry.
> I hope companies will recognize this risk in time...
As someone in an org who has some exposure to legacy support, don't underestimate management's ability to stick its head in the sand until it's too late in order to focus on the new shiny.
I think we're down to two mainframe developers, still have a crap-ton of dependencies on it and systems that have been "planned to be decommissioned" for decades, and at least one major ticking time bomb that has a big mainframe footprint.
This is such a bizarre take. The entire history of AI is one of growth, to say nothing of the last few decades, or even the past few years. To say that there are no signs of AI growing is, if nothing else, proof that humans don't grow from generation to generation: we make the same logical fallacies that we did millennia ago.
Compilers aren't AI, static analysis isn't AI, TDD isn't AI.
Your forecast is based on which data points?
Just having a brain and thinking for yourself will be a superpower.
Half the US economy already relies on the fact that younger people don't know how to use computers.
That sounds interesting, could you expand on what you mean by that?
> whitebeards: programming was assembly on big iron. these guys understand computers at the lowest level and can code down to the punchcard
> graybeards: programming was C/C++/Pascal/vanilla JavaScript/HTML etc. these guys sometimes know assembly but do generally understand memory, stacks, heaps, and can write optimized code
> brownbeards: programming is Python, React.js or similar. grew up in times of cycles-a-plenty, so never learned as much about optimization (nor really needed to)
What we see throughout is a Russian doll of abstractions. And 99.99% of the time these abstractions work great: everything is faster and easier, and what's underneath doesn't really matter. The debatable point is the 0.01% of the time where knowing what's "under the hood" can be useful, sometimes tremendously useful.
If we're talking about engineering effort to create performant, deployable apps and systems, it may be different.
There are a significant number of difficult software systems that require complete knowledge of what's going on 'under the hood'. Think: runtimes for all those wonderful abstractions, or libraries to port huge systems to different hardware and operating systems.
The large datacenter infrastructure that supports all the magic.
Those things are an entirely different universe than 'put up a webpage' or 'auto-generate a schema' or whatever.
I suspect this will be the future; the vast industry we're in now will shrink (as we're seeing now), and will rely on self-taught programmers more as AI removes the Junior role.
There will always be people who enjoy writing code and will do that for fun, I think. It'll be interesting to see what happens to all the other folks who never wanted to do this in the first place and only got into it because it's a good career.
1. Cliffs. There are big swathes of theoretical CS where we know things can't be done, and the self-taught people would smash their faces into such problems forever because they won't even realise it's insoluble in principle rather than merely difficult, whereas with a more principled background you needn't expend this wasted effort.
2. Local Maxima. Without a principled understanding it's easier to mistake a local maximum you've stumbled into for a global maximum. After all, all the small tweaks you tried make it worse, so, how are you expected to guess that a violently different solution would be better? Theory could help but you don't have any.
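To illustrate point 2 with a toy sketch (the function, constants, and class name are all hypothetical, invented just for this example): greedy hill climbing on a two-peak function settles on the lower peak, because from there every small tweak genuinely makes things worse.

    public class LocalMaxDemo {
        // Two peaks: a lower one near x = -2 and the true maximum near x = 3.
        static double f(double x) {
            return Math.exp(-(x + 2) * (x + 2)) + 2 * Math.exp(-(x - 3) * (x - 3));
        }

        public static void main(String[] args) {
            double x = -2.5;          // start near the lower peak
            final double step = 0.01; // the "small tweaks" we try
            while (f(x + step) > f(x) || f(x - step) > f(x)) {
                x = f(x + step) > f(x) ? x + step : x - step;
            }
            // Prints roughly x=-2.00, f=1.000 -- stuck, while f(3) is ~2.0.
            System.out.printf("stuck at x=%.2f, f=%.3f%n", x, f(x));
        }
    }

Without the theory, or at least the vocabulary to search for "local optimum", "random restarts", or "simulated annealing", it's easy to conclude the lower peak is the best there is. Point 1 has the same flavour: the canonical cliff is the halting problem, where no amount of tinkering will ever produce a general termination checker.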
I hit another wall later in my career and that drove me to go to grad school. I met some folks there who clearly leveraged their CS knowledge (e.g., people who were solving niche compiler problems "for fun") and I realized that there was a lot more to CS than the stuff I had learned on my own. So I stuck with CS and never went back.
With AI it's infinitely easier, but even before, there have been plenty of self-taught developers with good CS grounding.
Because that's what self-taught means. I would not consider myself a "self-taught coder", for example, because I have a CS degree.
Point 2:
One of the most important techniques as a self-taught programmer is to develop an instinct for the "foundational solution that wants to get out", so to speak. So you search for the name of the type of problem you're looking at, or for the approach you halfway discovered. You don't have a tutor or professor who will tell you where you are, so you need to figure that out yourself, often by assuming that someone else already put a label on the map.
On the other hand, self-taught does not mean "you don't have theory". In my experience, getting the raw knowledge is the easy part. There are a lot of excellent resources out there, from book recommendations/compilations to freely available lectures and so on.
The things that I always missed, or that were generally harder to come by, are far more precious than that: guidance and feedback from teachers/mentors, interaction with peers who are in the same boat, and, most importantly, time.
There is hardly any real logic or coding work involved. You would believe me if you worked in a non-tech company. Most of your time goes into navigating the company's processes, tools, and people. Yes, it is called People-Process-Technology for a reason. Even the word "Technology" here refers to engaging the tech vendors and consultants to get some work done, or just figuring out how to use a legacy software, or constantly chasing others to get dependencies resolved. Weeks and months pass by, waiting for a dependency to get resolved so that your last code commit can actually finish its build. Tons of JIRA tickets and INCs will have flown around meanwhile, which creates an imaginary realm of huge work and productivity - all resulting in a single line of code change getting tested. It could even be celebrated via a huge email thanking everyone (100s of people), making it look like a big achievement.
The point is, AI doesn't replace junior engineer roles. Junior engineers are preferred because all the dirty drudgery can be assigned to them, they can be blamed if things go wrong, they give away their credit to bosses when things work well, and they are kind of flexible, with a good attitude. That's very attractive!
Basically, we hire people to own some risk and accountability. Distribution of work is primarily to distribute the risk, blame, and accountability, not really to get the work done. Actual work is done by non-employee consultants in India, Eastern Europe, Vietnam, etc.
For example, we don't use open source because we can't hold someone accountable. The fact that open source simply works doesn't count. However, if some vendor offers the same open-source tech with enterprise support or as a managed SaaS, we would buy that.
Yes, I know how to code and passed the whiteboard interview. I don't write lots of code every day because that's not what the business wants from me. The business wants me to understand a large blob of legacy services and act like a normal human when someone in product asks for a change that I don't understand yet.
The main thing I believe AI is going to help me with is making decisions under uncertainty, by processing large amounts of data and producing a context/environment that I am faster in.
Other people are going to drown in that context and be unable to act. My value is my decision making ability and my ability to talk to humans and find out what they want.
My value is not the production of text.
No offense, but it kind of shows with every product by that company I've touched over the years. But then again they are largely not a technology company. More like a lobbying firm that happens to be very good at politicking to get their wares on every enterprise PC. The quality doesn't even matter, since most of their end users are a captive audience.
You still have a code review process, which will be done by the owner of that space and someone with fluency in the language.
Sam Altman has already outlined that this year, AI agents will reach the level of a baseline senior software engineer on most tasks, including web development with HTML, CSS, and JavaScript, and that those web apps can be built in minutes.
One could even say that, by their own definition of "AGI", it is actually "AGI" for web developers today.
All I have to do is show them the CEO’s quote to demonstrate what they think of their engineers. How can they see a future at that employer?
From my perspective, most UIs suck, in the sense that they can almost always be better in some way or another.
Anything that helps speed up feedback loops in this space will be extremely welcome. It will enable developers to push for higher quality and make room for innovation.
The expectation that LLMs will multiply the output of more experienced engineers is simply ridiculous.
I work at a FAANG on one of the main teams using GenAI. The impact from GenAI we've calculated, and that senior leadership is basing decisions on, is an overall gain of about 5% velocity on CODING tasks, so about 2.5% on overall tasks.
If you feel like GenAI has overall doubled or tripled your productivity I'm sorry but it's a sign that you suck and are working on incredibly simple problems.
Beware, your comment is likely to age incredibly poorly.
Also, I wouldn't want to work in the same team as you, if that's how you communicate.
I use ChatGPT occasionally, as a teacher.
I feed it a query and typically it responds with the correct result.
Wouldn’t this be empowering junior developers and make them more productive?
I use ChatGPT in the same way to learn German.
Presumably this occurred because the Systems Analyst was virtually architecture-agnostic, but a computer person could consider both the problem breakdown and the optimal architecture to fit, juggling each a little to optimise the interface between requirements and implementation. This personage then claimed the title of "Architect".
However, I am seeing, in certain areas, a deep lack of specification skills, and I wonder whether the Systems Analyst might not partially have a resurgence as a profession.
What I suspect is going to happen is something like the regional pilot situation, where one toils for pennies at lower levels and then gets to comfortable compensation numbers 10-15 years later.
I have yet to hear of a productive software development shop successfully reducing staff count due to LLM usage.
Millionaires, at best. :p
Someone will surely poach the juniors you train today, just as you'll poach someone else's, and it levels out.
Fast forward half a decade, and there are no juniors or mediors to onboard, no one with generic expertise that can be fine-tuned to your product.
It's not a legal agreement but a gentleman's agreement. They may still freely leave for whatever reason, especially if they were not treated well or treated like a slave (i.e., the employer broke the agreement), but if all goes well, it's some assurance, or at least a communication, so everyone is on the same page. And usually being on the same page and starting out with such trust sets the stage nicely for a good, trust-based, safe working relationship.