Except that there are still a lot of assembly programmers.
And even more C/C++ programmers.
It's also likely that C/C++ programmers didn't become Python programmers but that people who wouldn't have otherwise been programmers became Python programmers.
> At the well-specified end, you have tasks where the inputs are clean, the desired output is clear, and success criteria are obvious: processing a standard form, writing code to a precise spec, translating a document, summarising a report. LLMs are excellent at this
Yeah
> At the ambiguous end, you have tasks where the context is messy, the right approach isn’t obvious, and success depends on knowledge that isn’t written down anywhere
Sounds like most programming
Almost all of the programming I've ever done.
> I’m arguing that the most likely outcome is something like “computers” or “the internet” rather than “the end of employment as we know it.”
Yeah
The number of programmers has changed so much: from ~zero in the 1940s to tens of thousands in the sixties, to, what, maybe 30 million today? While most programmers worked at least a little in ASM from its invention until the 1980s, it's a very specialized role today.
I do not believe that 'roughly similar numbers of people were employed writing' ASM, C, and Python, except perhaps for the brief instants when C overtook ASM in the seventies and when Python overtook it somewhere around the millennium.
Probably at no time were ASM, C, and Python programmers even close to similarly numerous.
Sure, you barely meet any 40-year veterans... but that's partly because 40 years ago there were barely any 0-year newbies to start with.
It’s basically the revealed preference of hiring managers. The seldom-spoken reality is this: managers like them young and hungry with no external obligations; thus they’re maximally extractable.
> [it's] observable
Nobody said it wasn't, I was quite deliberate in my wording.
At face value your post implies ageism is not a problem. Change the target demographic to a different “minority” - racial, ethnic, diverse - and observe the heavy connotation carried by downplaying severity.
There was exponential growth in newcomers as different languages and hardware became available. Each wave dwarfed the entire previous field combined. Like a colloquial "Moore's law".
This isn't a new take. The problem is, "boring" doesn't warrant the massive bet the market has made on it, so this argument is essentially "AI is worthless" to someone significantly invested.
It's not so much that people aren't making this argument; it's that it's being tossed immediately into the "stochastic parrot" bucket.
That's simply not true.
It has become difficult to grade students using anything other than in-person pen-and-paper assessments, because AI cheating is rampant and hard to detect. It is particularly bad for introductory-level courses; the simpler the material, the harder it is to make it AI-proof.
What is your guess as to how this will be different from pocket calculators?
But calculators only do math, and they have primitive inputs that aren't going to match exactly what is on the sheet for anything other than the simplest of equations. You can't talk to a calculator in natural language; you have to learn how to use one (kind of like, ahem, a programming language). I never found calculators helped me "cheat" at math. It was still hard.
No, AI is not like calculators, looms, engines, or any other advancement.
If AI continues to improve, we will need a complete reset of how human society works. That will not happen without mountains of bodies. There are two main ways civilization re-balances when the work/worker ratio becomes untenable: war or famine. Hope you and your loved ones are on the lucky side.
When I look at this January's results, it's all near top marks or near bottom or failed. Almost nothing in between, and my grades match what has been reported by other examiners so far.
Unlike with calculators, making an assessment slop-proof often demands more resources to grade it, whether because the assignment needs to be more complicated, or because it needs more teaching assistants, or more time allotted for oral presentations. I also shudder at the suggestions to just come up with assignments that assume the students will use AI assistance anyway. That's how you end up with Programming 102 students who can't code their way around a for loop.
This shouldn't be a big deal. It was the norm for decades. I finished my CS undergrad only ~10 years ago, and every test was proctored and on pen and paper. Very, very rarely would there be a remote submission. It did not seem possible to easily cheat in that environment, unless the test allowed notes you yourself did not write, or you procured a copy of the test beforehand and were able to study off it. But the material was sufficiently rigorous that you had to know it well to pass the class, which seems to me the whole aim of a college course.
We need to hire more professors, then, as the ratio of FTE profs to FTE students is significantly lower than it was even a decade ago.
Edit: But I agree. I've mentioned to my professor wife that there needs to be a movement back to oral exams. Oral exams are graded, nothing else is. It works for law school; it's one of the only things that works for law school. One exam at the end of the semester, and nothing else matters, because the only thing a class needs to measure is mastery of the material, not whether you are diligent at completing basic work with the help of textbooks and friends and the Internet.
Arguably, with the increase in literacy, Jevons paradox would say we need to hire more writers. Indeed, a lot more people DO write for their job.
But it's not like we went from a handful of professional, full-time scribes to 10x the professional full-time scribes. Instead, the value of being just a scribe has gone down (unless you're phenomenal at it). It stops being a specialized skill unto itself. It's just a part of a lot of people's jobs, alongside other skills, like knowing how to do basic arithmetic.
Coding, like writing, becomes a part of everyone's job, not a specialization unto itself. We will have more coders, but since everyone is a coder, very few are a capital-C "Coder".
Writing unikernels will probably not be part of an accountant's or plumber's job.
Stupid automation and writing CSS probably won't be either, for different reasons: it's so stupid that a CSS expert was replaced just yesterday.
(Also me of two months ago would be shocked at how bullish I've become on LLMs. AI is literally the printing press... get a grip, me!)
Precisely because of this, some people who couldn't code for whatever reason crossed that border and can now somewhat produce something substantially more complex than what they can comprehend. This causes problems. You probably should not shit out code you don't understand, or rely on code you don't understand.
> If agentic systems start successfully navigating the illegible organisational context that currently requires human judgement, things like understanding unstated preferences, political sensitivities, and implicit standards, that would be significant.
How much of this is actually required for the actual work though and how much is legacy office politics "monkeys love to monkey around" nonsense?
Yes, this is still programming.
(Though I think syntax was most definitely a binding constraint.)
How much of the world currently runs on horribly broken, outdated, and inaccurate spreadsheets? Disposable AI slop apps are just a continuation of this paradigm. I do think AI slop apps will become competitive with broken spreadsheets and 10,000 scattered Python notebook files, causing a massive drop in demand for various SWE, Analyst, and Data Scientist type jobs. I've seen report systems with millions of individual reports that were run only once (and many not at all); a huge percentage of digital work is one-off disposable items.
SWE is first and foremost a power structure for the people in charge. The first thing you do when you are in power is minimize the number of people with power you depend on. I think AI is already reducing the number of people needed to maintain and develop what are basically state-of-the-company applications for the C-suite. Sure, tech-first companies will be hit less hard, but the Fortune 500, for example, will be making continuous cuts for the next decade.
This is nicely expressed, and could serve as a TL;DR for the article, though it's buried in the middle.
We have the most automation we've ever had, AND historically low unemployment. We have umpteen labor saving devices, AND people labor long, long hours.
> Labour Market Reallocation Actually Works
It really does, given a little time.
pedalpete•2h ago
The thing that most people ignore when thinking about AI and health is that two-thirds of Americans suffer from chronic illness and there is a shortage of doctors. Could AI really do much worse than the status quo? Doctors won't be replaced, but what if we could move them up the stack of health care to actually doing the work of saving lives, rather than just looking at rising cholesterol numbers and writing scripts?
JohnFen•2h ago
I don't have a primary care physician because in the area I live in, there are no doctors that I can find that are taking new patients.
Regardless, I wouldn't want any of my medical data exposed to an AI system even if that was the only way to get health care. I simply don't trust them enough for that (and HIPAA isn't strong enough to make me more comfortable).
nephihaha•2h ago
My friend died last weekend from cancer. Human support/contact was very important to her. AI can't do that.
ceejayoz•1h ago
(Doctors will, for example, still tend to type plenty during an appointment in, say, the English NHS.)
nhinck2•2h ago
AI could allow the whole system to kick the can down the road.
ceejayoz•2h ago
Yes? https://en.wikipedia.org/wiki/Politician%27s_syllogism
Quarrelsome•1h ago
> you're undiagnosed? I thought it was obvious.
guess I was the last to clock it.
It was people who made me think of it first: a hookup who was adamant I had it, and then a therapist who mentioned it in our first session. I started the diagnosis process over a year ago and completely forgot about it. It's only been asking gipity about some symptoms I have, and seeing it throw up ADHD a lot as a possibility, that encouraged me to go back to sorting out the diagnosis.
galleywest200•1h ago
> Doctors won't be replaced, but if we could move them up the stack of health to actually doing the work of saving lives rather than just looking at rising cholesterol numbers and writing scripts
I presume your AI assistant did not prescribe medication to you.
ceejayoz•1h ago
We don’t have enough info to determine whether such anecdotes translate to widespread benefit or surprising consequences.
toomuchtodo•1h ago
We should fix the shortage of healthcare practitioners, not hand folks a fancy search engine and say "problem solved." Would you put forth "Google your symptoms" as a solution to this same problem? The token output is fancy, the confidence in accuracy is similar.