And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403
It appears to have really caught the zeitgeist.
Did the 80 million people believe what they were reading?
Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?
Those numbers are likely greatly exaggerated. Twitter is nowhere near where it was at its peak. You could almost call it a ghost town: LinkedIn, but for unhinged crypto- and AI bros.
I'm sure the metrics report 80 million views, but that's not 80 million actual individuals that cared about it. The narrative just needs these numbers to get people to buy into the hype.
I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slopish, and, as I'll say, breathless, because there are so many practical challenges standing between what is being said there and where we are now.
Capability is not evenly distributed, and that's getting people into loopy ideas about just how close we are to certain milestones. It's not wrong to think about those potential milestones, but I'm wary of the timelines.
I just don't understand people working on improving AI. It just isn't worth the risk.
I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator right?”), median prosperity (the only truly functional distribution system we have figured out is labor), and loss of agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.
“It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.
I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.
A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.
https://arxiv.org/abs/2510.15061
I thought normies would have caught on to the em dash, overuse of semicolons, overuse of fancy quotes, lack of exclamation marks, "It's not X, it's Y", etc. Clearly I was wrong.
That's a weird way of saying 80 million times.
Take even the most unskilled labor people can think of, such as flipping burgers at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one, and those roles are constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with workable economics.
Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced, it needs to be performed by a robot that can be acquired for a reasonable capital expenditure, say $200,000, and that requires no maintenance, upkeep, or subscription fees.
This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them a significant amount up front plus ongoing maintenance, even the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.
Being the hype-man that he is, I assume he meant humanoid robots. I think he's being silly here, and the sentence made me roll my eyes.
It will happen to you.
It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that SK doesn't, and they have far higher trust in their society.
Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:
It takes jobs away faster than it creates new ones, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.
Forget becoming a manager at McDonald's, or even being good at flipping burgers at 40: you're competing with 20-year-olds who play sports and have amazing coordination.
Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.
Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.
So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if the LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
E.g. once I was tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be framed as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
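For anyone curious what I mean by framing it as a MIP, here's a toy sketch in Python with PuLP. Everything here is invented for illustration (the order data, the compatibility rule, the objective); the real spec was far messier.

    # Toy order-matching as a mixed-integer program (PuLP). All data invented.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    buys  = {"B1": 101.0, "B2": 100.5}   # order id -> limit price
    sells = {"S1": 100.0, "S2": 101.5}

    prob = LpProblem("toy_matching", LpMaximize)

    # x[b, s] = 1 if buy order b is matched against sell order s
    x = {(b, s): LpVariable(f"x_{b}_{s}", cat=LpBinary) for b in buys for s in sells}

    # Objective: maximize the number of matches (a real engine would weight
    # by size, price improvement, time priority, etc.)
    prob += lpSum(x.values())

    # Only price-compatible pairs may match (buy limit >= sell limit)
    for (b, s), var in x.items():
        if buys[b] < sells[s]:
            prob += var == 0

    # Each order participates in at most one match
    for b in buys:
        prob += lpSum(x[b, s] for s in sells) <= 1
    for s in sells:
        prob += lpSum(x[b, s] for b in buys) <= 1

    prob.solve()

The point isn't this particular formulation; it's that once you see the problem that way, an off-the-shelf solver does the heavy lifting.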
1. You are not affected somehow (you have savings, connections, aren't living paycheck to paycheck, and have food on the table).
2. You prefer to pursue no trouble in matters of complexity.
Time will tell; it's showing it already.
Those who downplay it are either business owners themselves or have been continuously employed for 2+ years.
I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.
As an American I found a new job last year (Staff SW), and it was falling-off-a-log easy, for a 26% pay bump.
I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.
I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.
I look at my ticket tracker and I see that basically 100% of it can be done by AI. Some of it with assistance, because the business logic is more complex or less well factored than it should be, but most of the work AI is perfectly capable of doing with a well-defined prompt.
That's a sign that you have spurious problems under those tickets or you have a PM problem.
Also, a job is not a task. If your company has jobs that consist of a single task, then yes, those jobs would definitely be gone.
> Live stream validation results as they come in
The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:
- What is the validation system and how does it work today?
- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?
- What prior art exists on the backend and frontend, and how much of that can/should be reused?
- Are there any scaling or load considerations that need to be accounted for?
I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.
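If it helps make the ticket concrete, the streaming piece by itself might end up looking something like this minimal sketch. It assumes a hypothetical FastAPI backend and an invented run_validations helper; none of these names come from the actual codebase.

    # Minimal server-sent-events endpoint that streams validation results.
    # run_validations is a stand-in for whatever the real validation system does.
    import json
    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()

    async def run_validations(job_id: str):
        # Placeholder: yield each check's result as it finishes.
        for check in ("schema", "references", "limits"):
            yield {"job": job_id, "check": check, "status": "ok"}

    @app.get("/validations/{job_id}/stream")
    async def stream_results(job_id: str):
        async def event_stream():
            async for result in run_validations(job_id):
                # SSE framing: "data: <json>\n\n"
                yield f"data: {json.dumps(result)}\n\n"
        return StreamingResponse(event_stream(), media_type="text/event-stream")

The real work, as above, is answering the context questions, not typing this out.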
Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.
This cyborg process is exactly how we're using AI in our organisation as well. The human in the loop understands the full context of what the feature is and what we're trying to achieve.
For the UX, have it explore your existing repos and copy prior art from there and industry standards to come up with something workable.
Web scale issues can be inferred from the rest of the codebase. If your Terraform repo has one RDS server vs. a fleet of them, multi-region, then the AI, just as well as a human, can figure out whether it needs Google Spanner-level engineering or not. (Probably not.)
Bigger picture though, what's the process when a human logs an underspecified ticket and someone else picks it up and has no clue what to do with it? They're going to go ask the person who logged the bug for their thoughts and some details beyond "hurr durr something something validation". If we're at the point where AI is able to make a public blog post shaming the open source developer for not accepting a patch, throwing questions back to you in JIRA about the details of the streaming validation system is well within its capabilities, given the right set of tools.
The self-setup here is too obvious.
This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines improving in ability and efficiency month over month and year over year cannot do well, or cannot do without.
It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.
I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.
Here's an article:
https://history.wustl.edu/news/how-black-death-made-life-bet...
But yes, if lots of people get deathed by AI, the remaining humans might have more job security! Could that be called a "soft landing"?
Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.
On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).
Not sure if there's an analogy to make somewhere though
I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change; that is to say, uninteresting editing of lines of code; as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.
You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them. Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason and the dev spent a week trying to resolve it. Another was tasked with doing a simple database maintenance script, and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer just thought they would need another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.
I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.
As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly doing commits in the middle of business logic code is a recipe for disaster. The issue, of course, was not having any consistent session management patterns. But a non-developer isn't going to recognize that that's an issue in the first place.
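For the curious, the kind of "consistent session management pattern" I mean looks roughly like this, in SQLAlchemy terms. The model, engine, and helper names are invented for illustration, not from the actual project.

    # One transaction per unit of work; commit/rollback lives in one place,
    # so business logic never sprinkles session.commit() around.
    from contextlib import contextmanager
    from sqlalchemy import Integer, String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

    class Base(DeclarativeBase):
        pass

    class Widget(Base):  # stand-in model
        __tablename__ = "widgets"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String)

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
    SessionLocal = sessionmaker(bind=engine)

    @contextmanager
    def session_scope():
        session = SessionLocal()
        try:
            yield session
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()

    def rename_widget(widget_id: int, new_name: str) -> None:
        # The business logic itself never calls commit().
        with session_scope() as session:
            widget = session.get(Widget, widget_id)
            if widget is not None:
                widget.name = new_name

With something like this in place, a stray commit() dropped into the middle of an RPC handler stands out immediately as a violation of the pattern.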
Or a more silly example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
This type of AI fear comes from things like this in the original article:
> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.
Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.
There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.
At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.
After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.
Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command and control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4
That's just an American thing, I've never owned a car and most people of my age I know haven't either.
If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.
It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.
There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.
Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.
> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
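A toy numeric illustration of that point, with entirely invented numbers: the AI is better than the human at both tasks, yet adding the human still raises total output because the human is relatively less bad at one of them.

    # Comparative vs. absolute advantage, toy numbers (all invented).
    # One finished unit of output needs 1 unit of task A plus 1 unit of task B.
    ai    = {"A": 10.0, "B": 4.0}   # units per hour
    human = {"A": 1.0,  "B": 3.0}   # worse at both, but relatively less bad at B
    day = 8.0                        # hours available to each worker

    # AI alone: split its day so A and B output match (10*tA == 4*(day - tA))
    t_a_alone = day * ai["B"] / (ai["A"] + ai["B"])
    ai_alone = ai["A"] * t_a_alone               # ~22.9 finished units

    # AI + human: human does only B; AI covers A and tops up B
    # Balance: 10*tA == human_b + 4*(day - tA)
    human_b = human["B"] * day                   # 24 units of B
    t_a_pair = (human_b + ai["B"] * day) / (ai["A"] + ai["B"])
    together = ai["A"] * t_a_pair                # 40 finished units

    print(f"AI alone: {ai_alone:.1f} units; AI + human: {together:.1f} units")

Whether any given role ends up on the right side of that arithmetic is the open question.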
An additional "surplus" 20% of roles will also be retained, devoted to developing features and implementing fixes, using AI, that would previously not have been high enough priority to implement.
When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features, so that's why 40% rather than 20% of the developer roles would be retained.
The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.
(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)
I’m definitely worried about job loss as a result of the AI bubble bursting, though.
There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.
Therefore, the best way to increase profit is to lower cost.
Software engineers work on Jira tickets, created by product managers and several layers of middle managers.
But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.
When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.
A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.
Latest models got really good at working on the entire puzzle - big picture and pieces.
This makes the human hierarchy a bottleneck, and ultimately obsolete.
The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.
Of course, it's not just about the software, but streams of information - customer support, bug tickets, testing, changing customer requirements.. but all of these can be handled by AI even today. And it will only get better.
This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.
for me the 2 main factors are:
1. whether your company's priority is growing or saving
- growing companies especially in steep competition fight for talent and ai productivity results in more hiring to outcompete
- saving companies are happy to cut jobs to save on margin due to their monopoly or pressure from investors
2. how 'sequence of tasks-like' your job is
- SOTA models can easily automate long running sequences of tasks with minimal oversight
- the more your job resembles this the more in-danger you are (customer service diffusion is just starting, but i predict this will be one of the first to be heavily disrupted)
- i'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture on what tasks to do in the first place
Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.
AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.
I ain't paying for shit.
“Some of you may die, but that’s a risk I’m willing to make” -also Elon Musk, probably