It wasn’t just Elon. The hype train for self-driving cars was extreme only a few years ago, pre-LLM. Self-driving cars exist, sort of, in a few cities. Quibble all you want, but it appears to me that “Uber driver” is still a popular, widespread job, to say nothing of truck driver, bus driver, and “car owner” itself.
I really wish the AI CEOs would actually make my life easier. For example, why am I still doing the dishes and the laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white-collar work, I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem?
Rule 0 is that you never put your angel investors out of work if you want to keep riding the gravy train.
TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.
Like in Europe, where you're forced to pay a notary to start a business - it's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.
Whereas LLMs and generative models in art and coding, for example, help you avoid loads of bureaucracy: sorting out contracts, or even hiring someone full-time with payroll, etc.
Sure you'll have destroyed the company, but at least you'll have avoided bureaucracy.
Like in the US, you have a choice of which jurisdiction you want to start your company in. Not all require a notary.
Do you have a specific country in mind? The statement is not true for quite a lot of EU member states... and likely untrue for most European countries.
Same as a washing machine / dryer. Chuck the clothes in, press a button, done.
There are Roomba-style lawnmowers for your grass cutting.
I'll grant you painting a house and plumbing a toilet aren't there yet!
It’s less work than it used to be, but remove the human who does all that and the dirty dishes and clothes will still pile up. It’s not like we have Rosie, from The Jetsons, handling all those things (yet). How long before the average person has robot servants at home? Until that day, we are effectively project managers for all the machines in our homes.
The really modern stuff is pretty much as simple as “load, start, unload” - you can buy combo washing machines that wash and dry your clothes, auto dispense detergent, etc. It’s not folding or putting away your clothes, and you still need to maintain it (clean the filter, add detergent occasionally, etc)… but you’re chipping away at what is left for a human to do. Who cares when it’s done? You unload it when you feel like it, just like every dishwasher.
Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).
I grew up watching everyone in my family do this, sometimes multiple times for the same load. That’s why I set timers and remove stuff promptly.
On the dishwasher, I agree, and it’s usually best to leave the dishes in there at least for a little while once it’s done. However, not unloading it means dirty dishes start to stack up on the counter or in the sink, so it still creates a problem.
As far as “load, start, unload” goes: we covered unload, but load is also a step where some people have issues. They load the dishwasher wrong and things don’t get clean, or they start it wrong and are left with spots all over everything. Washing machines can be overloaded or unbalanced. Washing machines and dryers can also be started wrong; the settings need to match the garments being washed. Some clothes are forgiving, others are not. There is still human error in the mix.
The mildew issue isn’t a problem for the two-in-one washer/dryers, and for the wrinkles, most dryers have a cycle that keeps tumbling the clothes intermittently for hours after the cycle finishes to mitigate most of the wrinkling. You’ve got a much, much longer window before wrinkles are an issue with that setup.
If you want to waste my time with automated nonsense, we should at least level the playing field.
This is feasible with today’s technology.
I still fail to see why people think we're going to innovate ourselves into global poverty, it makes no sense.
Sure there can be rich people who are radical enough to push for another phase of capitalism.
That’s a kind of capitalism that’s worse for workers and consumers, with even more power in the hands of capitalists.
I'm sure we are, but it doesn't look like an improvement for most people.
It seems like we'll need to generate a lot more power to support these efficiency gains at scale, and unless that is coming from renewables (and even if it is) that cost may outweigh the gains for a long time.
I also respect the operational analysis, but the strategic, long-term view is that this will come, and it will only speed up everything else.
All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
It's easy to imagine a world in which there are way fewer white-collar workers and everything else is pretty much the same.
It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has fewer widgets.
It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.
There are tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not fewer.
They probably believe some combination of these ideas.
It's not guaranteed that if there are 5% fewer white-collar workers per year for a few decades, we're all going to starve to death.
In the future, if trends continue, there are going to be way fewer workers anyway - since a huge portion of the population will be old and retired.
You can lose x% of the work force every year and keep unemployment stable (a toy sketch below)...
A large portion of the population wants a lot more people to be able to not work and get entitlements...
It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.
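A minimal sketch of that arithmetic, with entirely made-up rates: if jobs are automated away at 1% per year while retirements shrink the labor force by the same 1%, the measured unemployment rate never moves.

    # Toy model, assumed numbers only: jobs automated away at 1%/yr,
    # labor force shrinking 1%/yr through retirements.
    labor_force = 100_000_000
    jobs = 96_000_000  # ~4% unemployment to start

    for year in range(1, 21):
        jobs *= 0.99         # 1% of jobs automated away
        labor_force *= 0.99  # 1% net shrink from retirements
        if year % 5 == 0:
            rate = 1 - jobs / labor_force
            print(f"year {year:2d}: labor force {labor_force / 1e6:5.1f}M, "
                  f"unemployment {rate:.1%}")

Both quantities shrink by roughly a fifth over 20 years, yet unemployment sits at 4.0% the whole way; the trouble only starts if automation outpaces the demographic decline.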
Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.
It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?
History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.
Maybe it will be different this time though.
Yes, the lives of "people selling stuff" will likely get better and better in the future through technology, but the wellbeing of normal people seems to have peaked around the year 2000.
But it is a myth. It has always been in the interest of the rulers and the old to try to impress on the serfs and on the young how much better they have it.
Many of us, maybe even most of us, would be able to have fulfilling lives in a different age. Of course, it depends on what you value in life. But the proof is in the pudding: humanity is rapidly being extinguished in industrial society right now, all over the world.
If people don’t have jobs, the government doesn’t have tax revenue to employ other people. And if CEOs are salivating at the thought of replacing white-collar workers, there is no reason to think the next step of AI augmented with robotics won’t replace blue-collar workers as well.
Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.
These are three totally different jobs requiring different kinds of skills, but they will all be replaced with automation.
1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.
2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
I'm not a plumber, but my background knowledge was that pipes can be really diverse and it could take different tools and strategies to fix the same problem for different pipes, right? My thought was that "robotic plumber" would be impossible for the same reasons it's hard to make a robot that can make a sandwich in any type of house. But even with a human worker that uses advanced robotic tools, I would think some amount of baseline knowledge of pipes would always be necessary for the reasons I outlined.
> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
That's true. I forgot about fast-food kiosks. And the other person showed me a link to some robotic waiters, which I didn't know about. Seems kind of depressing, but you're right.
> 3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
The way I imagine it, to automate it, you'd have to have some sort of 3D design software to choose where all the items would go, and customize it in the case of those special display stands for certain products, and then choose where in the backroom or something for it to move the products to, and all that doesn't seem to save much labor over just doing it yourself, except the physical labor component. Maybe I just lack imagination.
But if you have to be trained in the use of a variety of 'smart' tools, that sounds like engineering: knowing what tool to deploy, and how.
It's also incredibly optimistic about future tools - what smart tool fixes leaky faucets, hauls and installs water heaters, unclogs or replaces sewer mains, runs new pipes, does all this work and more to code, etc? There are cool tools and power tools and cool power tools out there, but vibe plumbing by the unskilled just fills someone's house with water or worse...
> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
Takeout culture is popular among GenZ, and we're more likely to see walk-up orders with online order ahead than a facsimile of table service.
Why would cheap restaurants buy robots and allow a dining room to go unmanned and risk walkoffs instead of just skipping the whole make-believe service aspect and run it like a pay-at-counter cafeteria? You're probably right that waiters will disappear outside of high-margin fine dining as labor costs squeeze margins until restaurants crack and reorganize.
> 3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
Do-anything-like-a-human robots might crack that, but today it's still sci-fi. Humans are going to haul things from A to B for a bit longer, I think. I bet we see drive-up and delivery groceries win via lights-out warehouses well before "I, Robot" shelf stockers.
I have already eaten at three restaurants that have replaced the vast majority of their service staff with robots, and they're fine at that. Do I think they're better than a human? No, personally, but they're "good enough".
Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html
> or someone who restocks shelves at grocery stores
For physical retail, or home delivery?
People are working on this for traditional stores, but I can't tell which news stories are real and which are hype — after around a decade of Musk promising FSD within a year or so, I know not to simply trust press releases even when they have a video of the thing apparently working.
For home delivery, this is mostly kinda solved: https://www.youtube.com/watch?v=ssZ_8cqfBlE
> Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.
Sure… if they have the money.
But can we make an economy where all the stuff is free, and we're "working" n-hours a day smiling at bad jokes and manners of people we don't like, so we can earn money to spend to convince someone else who doesn't like us to spend m-hours a day smiling at our bad jokes and manners?
Wow. I genuinely didn't think robotic waiters would exist anytime soon.
> For physical retail, or home delivery?
I was thinking for physical retail. Thanks for the video link.
Tech-wise this could have existed 30 years ago (maybe going around the restaurant would have been more challenging than today but it’s a fixed path and the robots don’t leave the restaurant).
They've already replaced part of that job at one of the grocery stores that I go to, there's a robot that checks the level of stock on the shelves, https://www.simberobotics.com/store-intelligence/tally.
I've seen this already at a pizza place. Order from a QR code menu and a robot shows up 20-25 minutes later at your table with your pizza. Wait staff still watched the thing go around.
Wouldn't you have struggled to imagine most of what LLMs can now do 5 years ago?
Hey, is there a good board game in there somewhere? Serfs and Nobles™
End of conversation.
Surely the modern history of decision making has been to move as much of it as possible away from humans and to algorithms, even "dumb" ones?
You forgot the born-wealthy.
I feel increasingly like a rube for not having focused my little entrepreneurial side-gigs strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps, but completed, for $10K. (There is no upper bound for audio equipment though — I guess we all know.)
I briefly did a startup that was kind of a side-project of a guy whose main business was building yachts. Why was he OK with a market that just consisted of rich people? "Because rich people have the money!"
My prediction is that the poor will reinvent the guillotine
The rich would only be able to insulate themselves in space, which is much harder to get to than some place on Earth. If the rich want to turtle up on some island because that's the only place they're safe, that's probably a better outcome for us all. They lose a lot of ability to influence because they simply can't be somewhere in person.
It also relies heavily on a security force (or military) being complicit, but they have to give those people a better life than average to make it worth it. Even those dumb MAGA idiots won't settle for moldy bread and leaky roofs. That requires more and more resources, capital, and land to sustain and grow it, which then takes more security to secure it. "Some rich dude controlling everything" has an exponential curve of security requirements and resources. This even comes down to how much land they need to be able to farm and feed their security guys.
All this assumes your personal security detail and larger security force actually like you enough, because if society has broken down to this point, they can just kill the boss and take over.
If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.
Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.
So, flooding those industries with more warm bodies probably won't help anything. I imagine it would make the already fucked labor relations even more fucked.
I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...
That is exactly what the blue-collar economy used to be, though: people making and fixing stuff for each other. White-collar jobs are the new thing.
So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.
Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.
It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
Had to look that up: https://en.wikipedia.org/wiki/Turkey_illusion
Ironically a friend of mine noticed that the team in India they work with is now largely pushing AI-generated code... At that point you just need management to cut out the middleman.
Management will cut down your team’s headcount and outsource even more to India, Vietnam, and the Philippines.
A CFO looks at the balance sheet, not the operational context. Even if your idea is better, the opposite of what you think is likely going to happen very soon.
Management did all that at companies I've worked for, for years before 'AI'. The big change is that the teams in India won't be 200 developers, but 20 developers handholding an AI.
Caveat that this is anecdotal, not sure if there are numbers on this.
That said, the first thing that jumps to my mind is cars. Back when they were first introduced, you had to be a mechanically inclined person to own one and deal with it. Today, people just buy them and hire the very small number of experts (relative to the population of drivers) to deal with any issues. Same with smartphones. The majority of users have no idea how they really work. If it stops working, they seek out an expert.
ATM, AI just seems like another level of that. JS/Python programmers don't need to know bits and bytes and memory allocation. Vibe coders won't need to know what JS/Python programmers need to know.
Maybe there won't be enough experts to keep it all going though.
When you consider how this interacts with the population collapse (which is now inevitable everywhere outside of some African countries), this seems even worse. In 20 years, we will have far fewer people under age 60 than we have now, and among that smaller cohort, the percentage of people at any given age who have useful levels of experience will be lower, because they may not be able to even begin meaningful careers.
Best case scenario, people who have gotten 5 or more years of experience by now (college grads of 2020) may scrape by indefinitely. They'll be about 47 then and have no one to hire that's more qualified than AI. Not necessarily because AI is so great; rather, how will there be someone with 20 years of experience when we simply don't hire any junior people this year?
Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.
This is my bet. Similar to Moores law. Where it plateaus is anybody’s guess…
We've already eliminated certain junior-level domains, essentially by design. There aren't any 'barber-surgeons' with only two years of training, for good reason. Instead we have integrated surgery into a lengthier and more complicated educational path to become what we would now consider a 'proper' surgeon.
I think the answer is that if the 'junior' is uneconomical or otherwise unacceptable be prepared to pay more for the alternative, one way or another.
It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.
I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?
This feels a lot like the dot boom/dot bust era where a lot of new companies are going to sprout up from the ashes of all this disruption.
AI certainly will increase competition in some areas, but there are countless examples where being the best at something doesn't make you the leader.
And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.
50% of a group of workers losing their jobs to this tech is not a worrisome future for him. It's a pitch!
Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!
In 2010, I put together a list of alternatives here to address the rise of AI and Robotics and its effect on jobs: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
Exactly. These people are growth-seekers first, domain experts second.
Yet I saw progressive[1] outlets reacting to this as if it were neutral reporting. So it apparently takes a “legacy media” outlet to wake people out of their AI stupor.
[1] American news outlets that lean social-democratic
Most criticisms I see of management consulting seem to come from the perspective, which I get the sense you subscribe to, that management strategy is broadly fake so there's no underlying thing for the consultants to do better or worse on. I don't think that's right, but I'm never sure how to bridge the gap. It'd be like someone telling me that software architecture is fake and only code is real.
That said, how would we measure whether our KPMG engagement worked or not? There's no control-group company, so any comparison will have to be statistical or vibes-based. If there is a large enough sample size this can work: I'm sure there is somebody out there who can prove management consulting works for dentist practices in mid-size US cities or whatever, though any well-connected group that discovers this information can probably make more money by just doing a rollup of them. This actually seems to be happening in many industries of this kind. Why consult on how to be a more profitable auto repair business when you can do a leveraged buyout of 30 of them, make them all more profitable, and pocket that insight yourself? I can understand if you're a poorly-connected individual who is short on capital, but the big consulting firms are made up entirely of well-connected people who rub elbows with rich people all day.
Fundamentally, there will never be enough data to prove that IBM engaging McKinsey on AI in 2025 will have made any difference in IBM's bottom line. There's only one IBM and only one 2025!
I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.
That covers the majority of sales, advertising, and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.
The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.
AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At best, and probably what happens on average, your choices are about as good as random choice, so you end up with the same proportion of shitty workers as you had before (a quick sketch of the random case below). At worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
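A quick simulation of the random-selection case, with an assumed 20% rate of low performers in a hypothetical 10,000-person firm; the survivors' mix comes out essentially unchanged:

    import random

    # Assumed workforce: 10,000 people, 20% low performers.
    # Cutting half the staff at random leaves that ratio intact.
    random.seed(0)
    staff = ["low"] * 2_000 + ["good"] * 8_000
    survivors = random.sample(staff, k=len(staff) // 2)

    print(f"low performers before: {staff.count('low') / len(staff):.0%}")
    print(f"low performers after:  {survivors.count('low') / len(survivors):.0%}")

Only a metric that actually tracks performance changes that outcome, and that metric is exactly what's missing.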
So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.
Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.
Management will be thrilled.
But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.
It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”
People like this have banked their futures on AI not working out.
It's the AI hype squad that are banking their future on AI magically turning into AGI; because, you know, it surprised us once.
Or these guys pivot and go back to building CRUD apps. They’re either at the front of something revolutionary… or not… and they’ll go back to other lucrative big tech jobs.
All I can tell you is that for what I use AI for now in both my personal and professional life, I would pay a lot of money (way more than I already am) to keep just the current capabilities I already have access to today.
Because I wouldn't miss it at all if it disappeared tomorrow, and I'm pretty sure society would be better off without it.
I’m a software engineer so for work I use it daily. It doesn’t “do my job” but it makes my job vastly more enjoyable. Need unit tests? Done. Want a prototype of an idea that you can refine? Here. Shell script? Boom. Somewhat complicated SQL query? Here ya go. Working with some framework you haven’t used before? Just having a conversation with AI about what I’m trying to do is so much better than sorting through often poorly written documentation. It’s like talking to another engineer who just recently worked on that same kind of problem… except for almost any problem you encounter. My productivity is higher. More than that, I find myself much more willing to take on bigger, harder problems because I know there’s powerful resources to answer just about any question I could have. It just makes me enjoy the job more.
In my personal life, I use it to cut through the noise that in recent years has begun to overwhelm the signal on the internet. Give me a salmon recipe. This used to be the sort of thing you’d put into Google and get great results. Now the first result is some ad-stuffed website that is 90% fluff piece with the recipe hidden at the bottom. Just give me the fricken recipe! AI does that.
The other day I was trying to figure out whether a designer-made piece of furniture was authentic despite missing tags. Had a back and forth with ChatGPT, sharing photos, describing the build quality, telling it what the store owner had told me. Incredible depth of knowledge about an obscure piece of furniture.
I also use the image generation all the time. For instance, for the piece of furniture I talked about, I took a picture of my apartment, and the furniture, and asked it to put the furniture into my space, allowing me to visualize it before purchase.
It’s a frickin super power! I cannot even begin to understand how people are still skeptical about the transformative power of this stuff. It kind of feels like people are standing outside the library of Alexandria, debating whether it’s providing any value, when they haven’t even properly gone inside.
Yes, there are flaws. I’m sure there’s people reading this about to tell me it made them put glue on their salad or whatever. But what we have is already so deeply useful to me. Could I have done all of this through old fashioned search? Mastered Photoshop and put the furniture into my apartment on my own? Of course! But the immediacy here is the game changer.
But if the business model collapsed and they had to raise prices, or work cheaped out and stopped paying for our access, then yeah, I’d step up and spend the money to keep it.
It was never used in the sense of denigrating potential competitors in order to stay employed.
> People like this have banked their futures on AI not working out.
If "AI" succeeds, which is unlikely, what is your recommendation to journalists? Should they learn how to code? Should they become prostitutes for the 1%?
Perhaps the only option would be to make arrangements with the Mafia like dock workers to protect their jobs. At least it works: Dock workers have self confidence and do not constantly talk about replacing themselves. /s
As to my recommendation to what they do — I dunno man. I’m a software engineer. I don’t know what I am going to do yet. But I’m sure as shit not burying my head in the sand.
The gross injustices in the original quote were already a fact, which makes the quote so powerful.
We don’t need AGI for there to be large displacement of human labor. What’s here is already good enough to replace many of us.
Sometimes my boss has asked me to do something that in the long run will cost the company dearly. Luckily for him, I am happy to push back, because I can understand what we're trying to achieve and help figure the best option for the company based on my experience, intuition and the data I have available.
There's so much more to working with a team than: "Here is a very specific task, please execute it exactly as the spec says". We want ideas, we want opinions, we want bursts of creative inspiration, we want pushback, we want people to share their experiences, their intuition, the vibe they get, etc.
We don't want AI agents that do exactly what we say; we want teams of people with different skill sets who understand the problem and can interpret a task through the lens of their skill set and experience, because a single person doesn't have all the answers.
I think your ex-boss Mike will very soon find himself trapped in a local minimum of innovation, with only his own understanding of the world and a sycophantic yes-man AI employee that will always do exactly as he says. The fact that AI mostly doesn't work is only part of the problem.
I truly believe these types of papers don't deserve to be valued so much.
Some managers read Dilbert and think it's intended as advice.
"The reality is that women are treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It’s just easier this way for everyone. You don’t argue with a four-year old about why he shouldn’t eat candy for dinner. You don’t punch a mentally handicapped guy even if he punches you first. And you don’t argue when a women tells you she’s only making 80 cents to your dollar. It’s the path of least resistance. You save your energy for more important battles." -Scott Adams
"Women define themselves by their relationships and men define themselves by whom they are helping. Women believe value is created by sacrifice. If you are willing to give up your favorite activities to be with her, she will trust you. If being with her is too easy for you, she will not trust you." -Scott Adams
"Nearly half of all Blacks are not OK with White people. That’s a hate group." -Scott Adams
"Based on the current way things are going, the best advice I would give to White people is to get the hell away from Black people. Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed." -Scott Adams
"I’m going to back off from being helpful to Black Americas because it doesn’t seem like it pays off. ... The only outcome is that I get called a racist." -Scott Adams
Should have been 'better still'.
And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?
At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!
https://www.youtube.com/watch?v=-azFNwF6fa0
Afterlife (video game)
The data does not support this. The businesses with the highest market caps are the ones with the highest earnings.
https://companiesmarketcap.com/
Sort by # of employees and you get a list of companies with lower market caps.
Either way, there is no data I have seen to suggest market cap correlates with number of employees. The strongest correlation I see is to net income (aka profit), and after that would be growing revenues and/or market share.
Which is the sole reason automation will not make most people obsolete until the VPs themselves are automated.
Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."
I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.
The parent comment was complaining about certain employees' contributions to "real value" or lack thereof. My question is: how do you ascertain the value of work in this context, where the software isn't what's valuable but the IP is? And further, how do you justify working on a product that's already a solved problem and still refer to it as "creating 'real' value"?
You said you were at large companies, so this is a hard call to make. A lot of large companies work on lots of small products knowing they probably won't work, but one of them might, so it's still worth it to try. It's essentially the VC model.
Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.
Honestly it was 10 years too late. The big innovations of the 2010 era were maturing. I’ve spent my career maintaining and tweaking those, which does next to zero for your career development. It’s boring and bloated. On the bright side I’ve made a lot of money and have no issues getting jobs so far.
For example, think of SpaceX, Waymo, parts of US national defense, and the sciences (cancer research, climate science - analyzing satellite images, etc). They are doing novel work that’s certainly not boring!
I think you’re probably referring to excitement and cutting edge in consumer products? I agree that has been stale for a while.
Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.
The issue is salary expectations in the US are much higher than those in much of Western Europe despite having similar CoL.
And $120k for a new grad is only a tech specific thing. Even new grad management consultants earn $80-100k base, and lower for other non-software roles and industries.
But in the UK and Ireland they get free healthcare, paid vacation, sick leave, and labor protections, no?
There's a reason you don't see new grad hiring in France (where they actually try to enforce work hours), and they have a subsequently high youth unemployment rate.
Though even these new-grad roles are at risk of moving to CEE, where governments are giving massive tax holidays to the tune of $10-20k per employee if you invest enough.
And the skills gap I mentioned about CS in the US exists in Western Europe as well. CEE, Israel, and India are the only large tech hubs that still treat CS as an engineering discipline instead of as only a form of applied math.
I happen to have a sibling in consulting who was seconded from London to New York for a year, doing the same work for the same company, and she found the work hours in NY to be ludicrously long (and not for a significant productivity gain: more required time-at-desk). So there are varying levels of "expected to work off the clock hours".
I pay over 40% effective tax rate. Healthcare is far from free.
But that's my point - salaries are factored based on labor market demands and comparative performance of your macroeconomy (UK high finance and law salaries are comparable with the US), not CoL.
I’ve never been to Boston. Why are the prices high there?
Think they're too high? You're free to start a company and pay less.
More like it means ending up with government-provided bare minimum handouts to not have you starve (assuming you somehow manage to stay on minimum wage all your life).
The "min wage" of HN seems to be "living better than 98% of everyone else"
I mean a real wage associated with the standards of living that I took for granted as "normal" when I was young.
If I took a job for ~100k in Washington, I'd live worse than I did as a PhD student in Sweden. It would basically suck. I'm not sure ~120k would make things that different.
I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).
Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.
Sorry, dude, it's like, all I know.
One theory is that the benefit they might be providing over domestic "grads" is lack of prerequisites for promotion above certain levels (language, cultural fit, and so on). For managers, this means the prestige of increased headcount without the various "burdens" of managing "careerists". For example, less plausible competition for career-ladder jobs which can then be reserved for favoured individuals. Just a theory.
Obviously, the only real solution to an artificial labor shortage is looking outside the existing labor force. Simply randomly hiring underserved groups didn't really make sense because they weren't participants.
Where I work, we have two main goals when I'm involved in the technical hiring process: hire the cheapest labor and try to increase diversity. I'm not necessarily against either, but those are our goals.
> nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling
it's almost as if we need a 'workers party' or something... though i'd imagine first-past-the-post in the u.s. makes that difficult.

The problem is that the left, which was historically pro-labor, abdicated this position for racial reasons, and the right was always about maximizing the economic zone.
I already know that the right-wing supports h1bs, Trump himself said so.
Even literal Nazis were exempted from immigration controls on the basis of extreme merit.
People in tech are so quick to shoot themselves in the foot.
Tech has its barriers too. Most people I've met in tech come from relatively rich families. (Families where spending $70k+/yr on college is not a major concern for multiple kids - that's not normal middle class at all even for the US)
TACO Trump himself said he'd reveal his health care plan in two weeks, many many years ago, many many times. But then he chickened out again and again and again and again and again. So what the buk buk buk are you talking about?
Don’t understand why other countries make it harder.
The EU would flourish economically, and there would be no room for the ultra-conservative right to gain any real foothold (which is 95% just the failed-immigration topic, just like Brexit was).
Alas, we are where we are; they slowly backpedal, but it's too little too late, as usual. I blame Merkel for half of the EU's woes. She really was a horrible leader of an otherwise very powerful nation, made much weaker and less resilient by her flawed policies and her lack of grokking where the world is heading.
Btw, she still acknowledges nothing and keeps thinking how great she was. Also a nuclear physicist who turned off all existing nuclear plants too early, so Germany has to import massive amounts of electricity from coal-burning plants. You can't make it up.
Basically, progressives in Denmark have argued for very strict immigration rules, the essential argument being that Denmark has an expensive social welfare state, and to get the populace to support the high taxes needed to pay for this, you can't just let anyone in who shows up on your doorstep.
The American left could learn a ton of lessons from this. I may loath Greg Abbott for lots of reasons, but I largely support what he did bussing migrants to NYC and other liberal cities. Many people in these cities wanted to bask in the feelings of moral superiority by being "sanctuary cities", but public sentiment changed drastically when they actually had to start bearing a large portion of the cost of a flood of migrants.
I think the real problem is that the median voter is either unable to, has no time to or no interest to understand basic economics and second-order consequences. We see this on both sides of the aisle. Policies like caps on credit card interest rates, rent control or no tax on tips are very, very popular while also being obviously bad after thinking about it for just 1 minute.
This is compounded by there being relatively little discussion of policies like that. They get reported on but not discussed and analyzed. This takes us back to your point about the perception of the Democratic party. The media (probably because the median voter prefers it) will instead discuss issues that are more emotionally relatable, like the border being "overwhelmed", trans athletes, etc. which makes it less likely to get people to think about economic policy.
This causes a preference for simple policies that seem to aim straight for the goal. Rent too high? Prohibit higher rent! Credit card fees too high? Prohibit high fees! Immigrants lower wages? Have fewer immigrants!
Telling the median voter that H1-B visa holders are lowering wages due to the high friction of changing sponsors, and that the solution is to loosen the visa restrictions, is hardly gonna go over well with much of the electorate. Likely only part of that problem statement will reach most voters, in the form of "H1-B visas lower wages". Someone who simply takes that simplified issue and runs with cutting down further on immigration will be much more likely to succeed, given how public opinion is currently formed.
All this stuff is why I love learning about policy and absolutely loath politics.
What do you think of that?
The real reason is that they are totally beholden to powerful business interests that benefit from mass immigration, and the ensuing suppression of American labor movements. The racial equity bit is just the line that they feed to their voters.
I felt enormous sympathy for my coworkers here with that visa. Their lives sucked because there was little downside for sociopathic managers to make them suck.
Most frustrating was when they were doing the same kind of work I was doing, like writing Python web services and whatnot. We absolutely could hire local employees to do those things. They weren't building quantum computers or something. Crappy employers gamed the system to get below-market-rate-salary employees and work them like rented mules. It was infuriating.
While working at Google I worked with many many amazing H1B (and other kinds) visa holders. I did 3 interviews a week, sat on hiring committees (reading 10-15 packets a week) and had a pretty good gauge of what we could find.
There was just no way I could see that we could replace these people with Americans. And they got paid top dollar and had the same wlb as everyone else (you could not generally tell what someone’s status was).
But wanna use it as a way to undercut American jobs with 80-hour-a-week laborers, as I've personally witnessed? Nah.
My criticisms against the H1B program are completely against the companies who abuse it. By all means, please do use it to bring in world-class scientists, researchers, and engineers!
But, for existing teams they wanted (reasonably) to avoid splitting between locations. So you need someone local.
https://www.linkedin.com/posts/jamesfobrien_tech-jobs-have-d...
In big dollar markets, the program is used more for special skills. But when a big bank or government contractor needs marginally skilled people onshore, they open an office in Nowhere, Arizona, and have a hard time finding J2EE developers. So some company from New Jersey will appear and provide a steady stream of workers making $25/hr.
The calculus is that more H1B = less offshore.
The smart move would be to just let in skilled workers from India, China, etc. with a visa that doesn’t tie them to an employer. That would end the abusive labor practices, and probably reduce the number of lower-end workers or the incentive to deny entry-level employment to US nationals.
Other than a few international visitors, I’d expect the makeup to look like the domestic tech worker demographics rather than like the global population demographics.
Nadella ascending to the leadership of Micro"I Can't Believe It's Not Considered A State-Sponsored Defense Corp"soft is what got my mildly xenophobic (sorry) gears turning.
Actually disregard, this isn’t worth it, but I don’t grant any freebies.
I hear this argument where I live for various reasons, but surely it only ever comes down to wages and/or conditions?
If the company paid a competitive rate (ie higher), locals would apply. Surely blaming a lack of local interest is rarely going to be due to anything other than pay or conditions?
I enjoy meeting the very smart people from all sorts of backgrounds - they share the values of education and hard work that my parents emphasized, and they have an appreciation for what we enjoy as software engineers; US born folks tend to have a bit of entitlement, and want success without hard work.
I interview a fair number of people, and truly first rate minds are a limited resource - there's just so many in each city (and not everyone will want to or be able to move for a career). Even with "off-shoring" one finds after hiring in a given city for a while, it gets harder, and the efficient thing to do is to open a branch in a new city.
I don't know, perhaps the realtors from my class get more money than many scientists or engineers, and certainly more than my peers in India (whose salaries have gone from 10% of mine to about 40% of mine in the past decade or two), but the point is the real love of solving novel problems - in an industry where success leads to many novel problems.
Hard work, interesting problems, and building things that actual people use - these are the core value prop for software engineering as a career; the money is pretty new and not the core; finding people who share that perspective is priceless. Enough money to provide a good start to your children and help your family is good, but never the heart of the matter.
According to these people, the politicians like having you here and labour doesn't. If that's true, do you want to empower labour to kick you out?
The whole reason H1Bs were invented is to disempower the existing workforce. Not reaching for a (long overdue) tool of power for tech workers is playing right into their hand.
You can call it what you want to legitimize it but these people want immigrants out and empowering them means immigrants get kicked out.
If you want to get kicked out as an immigrant definitely support them.
Knowing one’s enemy is key to fighting them.
It's a hard truth for many Americans to swallow, but it is the truth nonetheless.
Not to say there isn't an incredible amount of merit... but the historical impact of rampant nepotism in the US is widely acknowledged, and this newer manifestation should be acknowledged just the same.
I have never once worked with a product manager who I could describe as “worth their weight in gold”.
Not saying they don’t exist, but they’re probably even rarer than you think.
And there's multiple confounding factors at play.
Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.
But the dynamic that causes the existence of bullshit jobs hasn't gone away either. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.
Everywhere I've ever worked, we had 3-4X more work to do than staff to do it. It was always a brutal prioritization problem, and a lot of good projects just didn't get done because they ended up below the cut line, and we just didn't have enough people to do them.
I don't know where all these companies are that have half their staff "not doing anything productive" but I've never worked at one.
What's more likely? 1. Companies are (for reasons unknown) hiring all these people and not having them do anything useful, or 2. These people actually do useful things, but HN commenters don't understand those jobs and simply conclude they're doing nothing?
Managers always want more headcount. Bigger teams. Bigger scope. Promotions. Executives have similar incentives or don’t care. That’s the reason why they’re bloated.
I’ve seen those guys; it is painful to watch.
I’m worried about the shrinking number of opportunities for juniors.
I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.
Because they don't have to do that. They could just operate at max efficiency all the time.
Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.
I do.
It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
I'm a person that would be hypothetically supportive of something like DOGE cuts, but I'd rather have people earning a living even with Soviet-style make work jobs than unemployed. I don't desire to live in a cutthroat "competitive" society where only "talent" can live a dignified life. I don't know if that's "wealth distribution" or socialism or whatever; I don't really care, nor make claim it's some airtight political philosophy.
> It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
it's just my intuition, but talking to many people around me, i get the feeling that this is why people on both the "left" and "right" are in a lot of ways (for lack of a better word) irate at the system as a whole... if that's true, i doubt ai will improve the situation for either...

First, is AI really a better scapegoat? "Reducing headcount due to end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).
Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.
whereas "AI" is intuitively an external force; it's much harder to assign blame to company leadership.
This had me thinking, how are they going to get "clout", by comparing AI spending?
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But they are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just something like Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" that you can get from a model that can actually run on your laptop is also getting much better very quickly.
The way I see it, either the AI hype ends up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype lives up to its promises, but likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model for government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
As a user, I haven’t seen a huge impact yet on the tech I use. I’m curious what the coming years will bring, though.
LLMs are good productivity tools. I've been using them for coding, and they are massively helpful; they really speed things up. There are a few asterisks there, though:
1) They do generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers on top, but it will always be there.
2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I'm working with are. The more obscure they are, the less useful the LLM gets. This may have a chilling effect on technological advancement: new, improved things get used less because LLMs are bad at them due to a lack of available material, and the new things shrivel and die on the vine without a chance at organic growth.
3) The economics of it are super unclear. With the massive hype there's a lot of money slushing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will be when the appetite of losing money at this wanes.
All that said, AI is multiple breakthroughs away of replacing humans, which does not mean they are not useful assistants. And increase in productivity can lead to lower demand for labor, which leads ro higher unemployment. Even modest unemployment rates can have grim societal effects.
The world is always ending anyway.
Enough to cause the next financial crash; at worst, a steady climb to 10% global unemployment over the next decade.
That is the true definition of AGI.
Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)
1a. Most seed/A-stage investing is acyclical because it is not really about timing exits; people just always need dry powder.
1b. Tech advancement is definitely acyclical: AlexNet, transformers, and GPT were all done by very small teams without a lot of funding. GPT-2 to GPT-3 was funded by Microsoft, not VC.
2a. (I have advance knowledge of this because I've previewed the keynote slides for ai.engineer.) Free VC money slowed in 2022-2023 but has not at all dried up; in fact it has reaccelerated in a very dramatic way, up 70% this year.
2b. "VC" is a tenuous term when all the big labs are at >>$10B valuations and raising from SoftBank or sovereign wealth funds. It's no longer VC; it's about reallocating capital from publics to privates, because the only good AI companies are private.
The point is that there's a correlation between macroeconomic dynamics (i.e., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.
So the question is: why do we even need to mention AI in our explanation of recent economic shifts?
What phenomena, exactly, require positing AI disruption?
AI company CEOs trying to juice their stock valuations?
Spinning that to say you're a "visionary" for replacing expensive employees with AI (even when it's clear we're not there yet) is risky, but a good enough smoke screen to distract the average bear from poking holes in your financials.
> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.
The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?
The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.
You're describing it as if that was the "normal?"
[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and offering an alternative, already-circulated theory that AI company CEOs are manufacturing hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute, and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could even have written about the giant biotech industry and its struggle to reconcile novel, exciting drug discovery tools with slow-moving FDA approvals. None of this was done here.
It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.
Humans are. We have tools to measure exponential growth empirically. It was done for COVID (epidemiologists do that routinely) and it's done for the economy and other aspects of our lives. If there's to be exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.
"A person is smart. People are dumb, panicky dangerous animals and you know it."
What does this mean? What are you applying to the populace at large? Do you mean that a populace doesn't model exponential change right?
We can have a constructive discussion instead. My problem wasn't actually parsing what you said. I'm questioning the assumption that a populace collectively modeling exponential change is really meaningful. You could, for example, describe what it looks like when a populace models change exponentially. Is there any relevant literature on this subject that I can look into? Does this phenomenon have a name?
Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?
I like Anthropic. They make a great product. Shame about their CEO: just another techbro pumping his scheme.
In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.
2 years ago it was cool but unreliable.
Today I just did an entire “photo shoot” in Midjourney.
Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.
Not just this topic.
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
It's not CNN-exclusive. News media that did not evolve toward clicks, riling people up, hate-watching, and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.
Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.
We will wake up in 5 years to find we traded people for a dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence, and justification for a position. Arguing "by analogy" in this way, picking and choosing the analogy, just restates your position; it doesn't give anyone reasons to believe it.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.
In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.
Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:
"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."
Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)
Well, in order to meet the standard of the quote "wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years" we need more than just getting better. We need considerably better technology with a better cost structure to wipe out that many jobs. Saying we're starting on that task when the odds are no better than me becoming a billionaire within two years is what we used to call BS.
It flickers for a moment, then it either says
"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"
or
"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"
How would you use that information to invest in the stock market?
So it's index funds (as always) with me anyway.
I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, those happen to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.
I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-COVID healthcare bubble, fintech, and NFTs, can they take another blow when the music stops?
Why is that the counter-narrative? Doesn't it seem more likely that it will continue to gradually improve, perhaps asymptotically, maybe be more specifically trained in the niches where it works well, and it will just become another tool that humans use?
Maybe that's a flop compared to the hype?
LLM bulls will say that they are going to generate synthetic data that is better than the real data.
For any bet that involves purchasing bits of profits, you could be right and still lose money, because the government generally won't allow the entire economy to implode.
By the time a bubble pops literally everyone knows they're in a bubble, knowing something is a bubble doesn't make it irrational to jump on the bandwagon.
The answer (as always) lies somewhere in the middle. Expert software developers who embrace the tech wholeheartedly while understanding its limitations are now in an absolute golden era, able to do things they never could have dreamed of before. I have no doubt we will see the first unicorns made of "single pizza"-size teams shortly.
Yet when tech CEOs do the same thing, people tend to perk up.
Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.
For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.
If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.
No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.
I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”
“Final Thought (as a CEO):
I wouldn’t force a full return unless data showed a clear business case. Culture, performance, and employee sentiment would all guide the decision. I’d rather lead with transparency, flexibility, and trust than mandates that could backfire.
Would you like a sample policy memo I’d send to employees in this scenario?”
A better, more reasonable CEO than the one I have. So I’m looking forward to AI taking that white collar job especially.
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a godsend, but new grads and entry-level roles are kind of screwed.
There have never been that many businesses able to hire novices for this reason.
Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.
I spend a lot of time encouraging people to not fight the tide and spend that time intentionally experimenting and seeing what you can do. LLMs are already useful and it's interesting to me that anybody is arguing it's just good for toy applications. This is a poisonous mindset and results in a potentially far worse outcome than over-hyping AI for an individual.
I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.
I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.
The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)
The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.
In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.
Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.
My message to the entry-level folks in this weird time period: it's tough, and we can all acknowledge that, but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism that years in industry bring. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.
The number of people trying to convince you that it's just a sham and hype means you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people who have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every 6 to 12 months. Anybody who has decided concretely that there is nothing to see here is misleading you.
Even with LLMs in their current state, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree, but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.
Just as the internet was a democratization of information, LLMs are a democratization of output.
That may be in terms of production or art. There is clearly a lower barrier to achieving both now compared to pre-LLM. If you can't see this, then you don't just have your head stuck in the sand; you have it severed and blasted into another reality.
The reason why you reacted in such a way is again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you because you're not paying that much time/money to ask an LLM to do it for you.
Pre-llm, most for-hire work was only accessible to companies with a much bigger bank account than yours.
There is an ungodly number of white-collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output; it's just that the number of bodies it takes to produce that output is unreasonable.
We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.
For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.
The junior engineer can also do this too, albeit in most cases less effectively.
That's democratization of work.
In your "5% unemployment" world you have many more gatekeepers and financial barriers.
> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.
this sounds like the death of employment and the start of plutocracy
not what I would call "democratisation"
Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much
so, employee or destitute? tough choice
I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great when I type print_debug(, at which point it often guesses right as to which local variables or parameters I want to debug - but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.
I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.
Thanks for sharing your perspective with ACTUAL details unlike most people that have gotten bad results.
Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.
> which is now based on o4
"Based on o4" or "is o4" are two different things. Augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...
Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is... a cop-out, a terrible take, and just... slimy. I would not trust a company like this with my money. For all you know, they are running your prompts against a shitty open-source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:
- your prompts are not specific enough
- your context is poisoned. How strategically are you providing context in the prompt? A good trick is to give the LLM an existing file as an example of how you want the output to look and tell it "Do X in the style of Y.file" (a rough sketch of this trick follows the note below). Don't forget that with the latest models and huge context windows you could very well put entire subdirectories into context (although I would still recommend being pretty targeted)
- the model/tool you're using sucks
- you work in a problem domain that LLMs are genuinely bad at
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. They have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results. See here for the Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...
I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:
- what is your problem domain
- show us your favorite prompt
- what model and tools are you using?
- are you using it as a chat or an agent?
- are you bringing your own keys or using a service?
- what did you supply in context when you got the bad result?
- how did you supply context? copy paste? file locations? attachments?
- what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment. Most of the cynics would not provide even half of this, because it'd be embarrassing and reveal that they have no idea what they are talking about.
> But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?
I mean... I'm doing it and getting paid for it, so...
In other words, did the AI actually replace you in this case? Do you expect it to? People clearly expect it to, which is why we're having discussions like this.
good luck with that
AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.
At $70-80k the math for a new grad works out, but not at almost double that.
Also, going remote first during COVID for extended periods proved that operations can work in a remote first manner, so at that point the argument was made that you can hire top talent at American new grad salaries abroad, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country or get fired and try to find a job in 60 days around early-mid 2020.
The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe, where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.
The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).
That doesn’t fit my experience at all. The applied math vs. engineering continuum mostly depends on whether a CS program at a given school came out of the engineering department or the math department. I haven’t noticed any shift on that spectrum coming from CS departments, except that people are more likely to start out programming in higher-level languages where they are more insulated from the hardware.
That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.
Absolutely, but that's if they are exposed to these concepts, and that's become less the case beyond maybe a single OS class.
> except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware
I feel that's part of the issue, but also, CS programs in the US are increasingly making computer architecture an optional class. And network specific classes have always been optional.
---------
Mind you, I am biased towards Cybersecurity, DevOps, DBs, and HPC because that is the industry I've worked in for over a decade now, and it legitimately has become difficult to hire new grads in the US with a "NAND-to-Tetris" mindset, because curricula have moved away from that aside from a couple of top programs.
This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.
An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago or some important water-cooler conversation and incorporate that into their work. AI cannot do that.
Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.
Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.
The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.
But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.
So... why exactly is the entry level screwed?
You don’t need managers, or CEOs. You don’t even need VCs.
Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.
But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no AI.
But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just like we don’t measure today’s interns by words per minute typed, I don’t think we’ll measure tomorrow’s interns by lines of code hand-written.
So much of the doom here comes from a thought process that goes “we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk.“
Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.
I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.
But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.
What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.
The whole idea of interns is as training positions. They are supposed to be a net negative.
The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.
But nowadays, with corporate HR actively doing everything they can to screw over their employees, and employees being so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.
At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers, upon returning to Japan. It was well worth it.
Damn, I wish that had been me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing whether you're on the right track, is a massive force multiplier that pays dividends over your whole career. It's like entering the stock market with $1 million in capital vs. $100. You're also less likely to build bad habits if somebody with experience teaches you early on.
They are a marquee company, and get the best of the best, direct from top universities.
Also, no one has less than a Master's, over there.
We got damn good engineers as interns.
I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem". But it also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title in the top German engineering companies.
I think mostly the US has the concept of the cowboy self taught engineer who dropped out of college to build a trillion dollar empire in his parents garage.
Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.
I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.
There’s no such thing as loyalty in employer-employee relationships. There’s money, there’s work, and there’s [collective] leverage. We need to learn a thing or two from blue-collar workers.
A majority of my friends are blue-collar.
You might be surprised.
Unions are adversarial, but the relationships can still be quite warm.
I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.
It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.
It can be done. It has been done.
It's just kind of lost, in tech.
Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.
You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.
Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.
There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving all the jobs that can be moved abroad.
Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.
IMO, real actual people don’t want to live in the world you described. Hell, they don’t wanna live in this one! The “elites” have failed us. Their vision of the future is a dystopian nightmare. If the only reason to exist is to make 25 people at the top richer than gods? What is the fucking point of living?
Startups are less enlightened than that about "interns".
Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to the startup, for... $20/hour, while getting called an intern.
It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.
Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things they can't do themselves, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those "interns" are really technical cofounders.
Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.
A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?
I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.
This feels like the ultimate pulling up the ladder after you type of move.
Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.
You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.
Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.
Many reasons why AI can't fill your shoes:
1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.
2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.
Junior and Interns can also use AI tools.
Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.
It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.
It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.
1. Because, generally, they don't.
2. Because an LLM is not a person, it's a chatbot.
3. "Hire an intern" is that US thing when people work without getting real wages, right?
Grrr :-(
If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.
AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...
Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.
If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?
> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.
Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.
I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.
I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.
At the same time, there are some amazing recent grads and even interns who can't get jobs.
We've been hiring the younger group, and contracting for a few days a week with the more experienced people.
Combine that with AI, and you've got a powerful combination. That's our theory anyway.
It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.
The same thing will happen to Gen Z because of AI.
In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level job but every job. The tech sector is going to spend the next decade clawing back the high costs of tech people from the last 15-20 years.
The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession, what I'd call "children of summer" (to borrow a George R.R. Martin-ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those living paycheck to paycheck (or outright unhoused) are somehow at fault, when spiraling housing costs, limited opportunities, and stagnant real wages are pretty much responsible for everything.
All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.
You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.
Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.
A company I know of has an L3 hiring freeze as well, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.
This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost-cutting. Suppose a tech bubble pops and there are mass layoffs (like the dotcom bubble): obviously people will lose their jobs, and AI hype merchants will almost certainly push the narrative that these losses come from AI advancements in an effort to retain funding.
"Move fast and break things" - Zuckerberg
"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton
You're not going to sell me your SaaS when I can rent AIs to make faster cheaper IP that I actually own to my exact specifications.
If you can’t extrapolate on your own thesis you can’t be knowledgeable in the field.
A good example was a guy on here who was convinced every company would be run by one person because of AI. You’d wake up in the morning and decide which of the products your AI came up with while you slept would be profitable. The obvious next question is “then why are you even involved?”
All that needs to be understood is that the narcissistic grandeur delusion that you will singularly be positioned to benefit from sweeping restructuring of how we understand labor must be forcibly divested from some people's brains.
Only a very select few are positioned to benefit from this and even their benefit is only just mostly guaranteed rather than perfectly guaranteed.
Robot run iron mine that sells iron ore to a robot run steel mill that sells steel plate to a robot run heavy truck manufacturer that sells heavy trucks to robot run iron mines, etc etc.
The material handling of heavy industry is already heavily automated, almost by definition. You just need to take out the last few people.
Think of it as an IQ test of how new technology is used
Let me give you an easier example of such a test
Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year
Do you think the world will become more peaceful or much more war?
If you think peaceful, you fail, of course more war, it's all about oppression
It's always about the few controlling the many
The "freedom" you think you feel on a daily basis is an illusion quickly faded
If you don’t snatch up the smartest engineers before your competition does: you lose.
Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.
Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.
AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.
Money is just rationing. If you devalue the economy implicitly you accept that, and the consequences for society at large.
Lenin's dictum comes to mind: "A capitalist will sell you the rope you hang him with."
People charging on their credit cards. Consumers are adding $2 billion in new debt every day.
"Total household debt increased by $167 billion to reach $18.20 trillion in the first quarter"
Rich people buying even fancier goods and services. You already see this in the auto industry. Why build a great $20,000 car for the masses when you can make the same revenue selling $80,000 cars to rich people (and at higher margins)? This doesn't work of course when you have a reasonably egalitarian society with reasonable wealth inequality. But the capitalists have figured out how to make 75% of us into willing slaves for the rest. A bonus of this is that a good portion of that 75% can be convinced to go into lifelong debt to "afford" those things they wish they could actually buy, further entrenching the servitude.
I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.
I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.
Anyone paying attention should understand this fact by now.
There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.
But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
On the first point, unemployment during the Great Depression peaked at “only” about 25%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.
The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.
And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.
It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.
Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.
Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.
> It'll be a slow burn, though.
Have you been watching the current developer market?
It's really, really rough out here for unemployed software developers.
And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.
A good analogy would be web development's transition from C to Java to PHP to WordPress. I feel it did make website creation more accessible for small businesses. OTOH, a parallel trend was the mass-scale production of industry-specific platforms, such as Yahoo Shopping.
It’s not clear to me which trend won in the end.
One of which was the occupation of being a computer!
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he more trying to sell to the C-suite?
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as shorthand to create the impression of massive capability. It’s a form of marketing, as the CNN piece says.
You know what's hard? Moving from a poor "shithole" to a wealthy country, with expensive accommodation, where a month of rent is something you'd save up months for.
Knowing and displaying (faking really) 'correct' cultural status signifiers to secure a good job. And all the associated stress, etc.
Moving the other direction to a low-cost-of-living or poor shithole country is extremely easy in comparison with a fat stack of resources.
You literally don't have to worry about anything in the least.
So basically once you are rich, you have to choose to leave most of it on the table to go to a poor country.
> make $1mm in a rich country and move to a poorer country and chill if you so desire
I wonder if such trends are good for said poorer country (e.g., real estate costs) in the long run?

On an aggregate level this is true, and contrary to the prevailing sentiment of doomer skepticism, the developed world is usually still the best place to do it. On an individual level, a lot of things can go wrong between here and a million dollars.
Fun fact most people ignore: around 7,000 people have summited Mount Everest, while the US alone has around 300,000-350,000 people earning more than 1 million USD a year.
So it's clear: it's easier to become an "income millionaire" than to climb Mount Everest! :-)
You have to always keep on moving just to stay in the same place.
Even if you think all the naysayers are “Luddites”, do you really think it’s a great idea to have no backup plan beyond “whoops, we all die or just go back to the Stone Age”?
What makes you think people haven’t made back up plans?
Or are you saying government needs to do it for us?
History has shown us quite clearly what happens if governments, and not individuals, are responsible for finding employment.
They should all just find a way be set for life within the next 3 years, is this your proposal ?
I don’t think this 3 year timeline is realistic and pondering what we’re going to do in 20 years is unpredictable.
What’s a better alternative?
People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.
If these mechanisms you mention are in place and functioning, why is there, for example, such large growth of the economic inequality gap?
For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.
Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.
Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).
Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.
Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.
At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite
Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.
I think this situation is very similar in terms of the underestimation of scope of application, however differs in the availability of new job categories - but then that may be me underestimating new categories which are as yet as unforeseen as stokers and train conductors once were.
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% of human work will be consolidated and become the full-time job of 20% of the original headcount, while the remaining 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today) to still result in large-scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
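To make that arithmetic concrete, here is a toy sketch; the numbers are illustrative assumptions, not data:

```python
# Toy sketch of the 80/20 consolidation arithmetic (illustrative numbers only).
headcount = 100          # original team size
ai_share = 0.80          # assumed fraction of each job AI absorbs

remaining_work = headcount * (1 - ai_share)   # 20 "person-jobs" of human work left
new_headcount = round(remaining_work)         # consolidated into full-time roles

print(f"{new_headcount} people keep a (redefined) job; "
      f"{headcount - new_headcount} roles disappear")
```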
As an employee, any efficiency gains you get from AI belong to the company, not you.
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
LLMs are also not very useful for long-term strategy or for coming up with novel features or combinations of features. They are also not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boilerplate code, but not really for high-level features.
From my experience, this stuff is rarely introduced to save developers from typing in the code for their logic. Actual reasons I observe:
1. SaaS sales/marketing pushing their offerings on decision makers - software being a pop culture, this works pretty well. It can be hard for internal staff to push back on What Everyone Is Using (TM). Even if it makes little to no sense.
2. Outsourcing liability, maintenance, and general "having to think about it". Can be entirely valid, but often it indeed comes from an "I don't want to think of it" kind of place.
I don't see this stuff slowing down GenAI one way or the other, mainly because adoption usually has little to do with saving time or money.
How do you know this? What are the bottlenecks?
What makes you so sure of the productivity boost when we aren't seeing a change in output?
Shipping features faster != innovation or improvements to existing products
I'm not as bullish as some are on the impact of AI, but it does feel nice when you can deliver something in a fraction of the time it used to take. For me, it's more useful as a research and idea-exploration tool, less so for writing code. Part of that is that I'm in Scala land, so it just tends not to work as well as it would for a more mainstream language.
We haven’t used it to help the product management and solution exploration side, which seems to be a big constraint on our execution.
It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.
Maybe overall complexity creeping up rolls over any small gains, or devs are becoming lazier and just copy-paste LLM output without a serious look at it?
My company didn't even adopt or allow the use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which seem questionable anyway when looking around, and serious data breaches can easily end up with fines in the hundreds-of-millions ballpark).
Having worked on software infrastructure, it's a thankless job. Your most heroic work has little visibility, and the result is that nothing catastrophic happened.
So maybe products will have better reliability and fewer bugs? And we all know there’s crappy software that makes tons of money, so there isn’t necessarily a strong correlation.
Luckily software companies are not ball-bearing factories.
Why wouldn't you just 10x the productive output instead?
Firstly, the capex is currently too high for all but the few.
This is a rather obvious statement, sure. But the result is that a lot of companies "have tried language models and they didn't work" when the capex they put in was laughable.
Secondly, there's a corporate paralysis over AI.
I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form. It was written both out of panic about intellectual property leaking and out of panic about how to manage and control staff going forward.
I think a lot of corporates still clutch at the view that AI will push workforce costs down, and are secretly wasting a lot of money failing at this.
The waste is extraordinary, but it's other people's money (actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.
Meanwhile, at a grassroots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.
The tools are often cringe because the capex was laughable. E.g. for one solution, the trial was done using public LLMs, and then they switched over to an internally built LLM, which is terrible.
Or, secondly, the process is often cringe because the corporate aims are laughable.
I've had an argument with a manager making a multi-million dollar investment in a no-code solution that we ended up throwing in the bin years later.
They argued that they are going with this bad product because "they don't want to have to manage a team of developers".
When I pushed back, they responded: "this product costs millions of dollars, how dare you?"
How dare me indeed...
They promptly left the company but it took 5 years before it was finally canned, and plenty of people wasted 5 years of their career on a dead-end product.
The Google web-based office productivity suite is similar. I heard a rumor that at some point Google senior mgmt said that nearly all employees (excluding accounting) must use Google Docs. I am sure that they fixed a huge number of bugs plus added missing/blocking features, which made the product much more competitive vs MSFT Office. Fifteen years ago, Google Docs was a curiosity -- an experiment in just how complex web apps could become. Today, Google Docs is the premier choice for new small businesses. It is cheaper than MSFT Office, and "good enough".
> This is a rather obvious statement,
Nobody is saying companies have to make LLMs themselves.
SaaS is a thing.
In regards to private LLMs, the situation has become disappointing in the last 6 months.
I can only think of Mistral as being a genuine vendor.
But given the limitations in context window size, fine tuning is still necessary, and even that requires capex that I rarely see.
But my comment comes from the fact that I have heard, from several sources, smart people say "we tried language models at work and it failed".
However in my discussion with them, they have no concept of the size of the datacentres used by the webscalers.
I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.
And you DO see companies laying off people in large numbers fairly regularly.
Sure, but so far the layoffs are too regular to be AI-gains-driven (at least in software). We have some data on software job postings, and the job apocalypse, and corresponding layoffs, coincided with the end of ultra-low interest rates. If AI had an effect this year or last, it's quite tiny in comparison.
https://fred.stlouisfed.org/graph/?g=1JmOr
So one can argue more is to come, but it's hard to see how it has had a real effect on jobs/layoffs thus far.
Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.
So let me keep it real: I am shorting Atlassian over the next 5 years. Asana is another; there are plenty of startup IPOs that basically need to be shorted to the ground.
I think that this sentiment, along with all of the hype around AI in general, is failing to grasp a lot of the complexity around software creation. I'm not just talking about writing the code for a new application - I'm talking about maintaining that application, ensuring that it executes reliably and correctly, thinking about the features and UX required to make it as frictionless as possible (and voice input isn't the solution there, I'm very confident of that).
I'll be here in a year, we can have this exact discussion again.
"AI" is not going to wholesale replace software development anytime soon, and certainly not within a year's time because of the reasons I mentioned. The way you worded your post made it sound like you believed that capability was already here - nevertheless, whether you think it's here now or will be here in a year, both estimates are way off IMO.
Me too. Mostly so I can laugh though.
In smaller businesses some roles won’t need to be hired anymore.
Meanwhile in big corps, some roles may transition from being the source of presumed expertise to being one neck to choke.
I’d love it not to be true, but the truth is Jira is to projects what Slack/Teams are to messaging. When everybody is a project manager Jira gets paid more, not less.
Realistically though, they might incorporate that high schooler's software into Jira, to make it even more bloated, and they will sell it to your employer soon enough! Then team lead Chris will enter your birthday and your vacation days in it too, to enable it to also do vacation planning, without asking you. Next thing you know, Atlassian sells you out and you receive unsolicited AI calls for your holiday planning.
When I used a not-so-simple LLM to make it act as a text adventure game, it could barely keep track of the items in my inventory, so TBH I am a little bit skeptical that an LLM can handle entire project management - even without voice.
Perhaps it might be able to use tools/MCP/RPC to call out to real project management software and pretend to be your accountant/manager/whoever, but I wouldn't call that the LLM itself doing the project management task - and someone would need to write that project management software.
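A hedged sketch of that tool-based approach: the state lives in ordinary code and the model only narrates. The tool names and schema here are hypothetical, not any particular framework's API:

```python
# Sketch: keep game state in code and expose it to the model as tools,
# instead of hoping the LLM tracks inventory in its context window.
inventory: set[str] = set()

def add_item(item: str) -> str:
    inventory.add(item)
    return f"Added {item}. Inventory: {sorted(inventory)}"

def drop_item(item: str) -> str:
    if item not in inventory:
        return f"You don't have {item}."
    inventory.discard(item)
    return f"Dropped {item}. Inventory: {sorted(inventory)}"

# These would be registered with whatever tool-calling mechanism the model
# supports (MCP, function calling, ...); the model narrates the adventure,
# but the authoritative inventory never leaves ordinary code.
TOOLS = {"add_item": add_item, "drop_item": drop_item}
```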
We just have to wait for the cards to flip, and that’s happening on a quadratic curve (some say exponential).
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
In the scenario being discussed - if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (but probably not by a huge amount, as tech isn't the only industry in the world). That still first requires companies to be hiring more of these types of people for that effect to happen, so we should still see some of the increased output even if there is a limiting factor. We would also notice the salaries of those professions going up, which so far hasn't happened.
The tech is going to have to be absolutely flawless, otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it..
For most of them I'm not seeing any of those issues.
A couple years ago, we thought the trend was without limits - a five second video would turn into a five minute video, and keep going from there. But now I wonder if perhaps there are built in limits to how far things can go without having a data center with a billion Nvidia cards and a dozen nuclear reactors serving them power.
Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."
The quick cuts thing is a huge turnoff so if they have a 15 second clip later on, I missed it.
When I say "1second" I mean that's what I was doing with automatic1111 a couple years ago. And every video I've seen is the same 30-60 generated frames...
Can you give an example, say in medicine, where AI made a significant advancement? That is, we are talking about neural networks and up (i.e. LLMs), not some local optimization.
"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example: (From Wiki): > In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK, they are talking about DeepMind AlphaFold.

Related (also from Wiki):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
Yes, it's an example of ML used in science (other examples include NN-based force fields for molecular dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little how the software package they are using works (excluding knowledge of the different limitations of numerical vs statistical models).
The whole "but look, AI in science" thing seems to me like a motte-and-bailey argument, implying the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).
Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.
"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.
They have trouble with debugging obvious bugs though.
Let's take operating systems as an example. If there are great productivity gains from LLMs, why aren't companies like Apple, Google, and MS shipping operating systems with vastly fewer bugs and cleaning up backlogged user feature requests?
> shipping features and fixes faster than ever before
Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find the duplicates despite matching name, nickname, phone number, birthday, and the fact that both contacts were associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar... Seriously, this shit can be solved with a regex...
The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best
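For what it's worth, the duplicate-contact complaint above really is a small-program problem. A minimal sketch (field names are illustrative, not Apple's actual contact schema):

```python
# Minimal duplicate-contact detector: normalize a few fields and treat
# matching (name, phone, birthday) keys as duplicates.
import re

def normalize_phone(raw: str) -> str:
    digits = re.sub(r"\D", "", raw)   # strip formatting, country prefix, etc.
    return digits[-10:]

def contact_key(c: dict) -> tuple:
    return (c["name"].strip().casefold(),
            normalize_phone(c["phone"]),
            c["birthday"])

def find_duplicates(contacts: list[dict]) -> list[list[dict]]:
    buckets: dict[tuple, list[dict]] = {}
    for c in contacts:
        buckets.setdefault(contact_key(c), []).append(c)
    return [group for group in buckets.values() if len(group) > 1]
```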
I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??
That would mean you have no ideas. It says nothing about the potential.
Doesn't really matter if AI actually works or not.
It also matters a bit where the reputation cost hits. Layoffs can spook investors because it makes it look like the company is doing poorly. If the reputation hit for ai is to non-investors, then it probably matters less.
Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on a easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.
https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).

Note: I'm talking about your run-of-the-mill SE wagie work, not startups where your food is based on your output.
E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".
We should remark he said this long before the internet, web and mobile, so probably the remark needs an update.
However, I think it cuts through the salesmen hype. Anytime we see these kinds of claims we should reply "show me the numbers". I'll wait until economists make these big claims, will not trust CEOs and salesmen.
Only if you want to add "internet, web, and mobile" before "age". Otherwise it doesn't need any change.
But that phrase is about the productivity statistics, not about computers or actual productivity.
Ok, so by 2027 we should be having fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI of course). I'm so excited for a future where IT projects stop going overtime and overbudget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would. Shipping new public end-user features isn't even a good signal; it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".
It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".
The more likely scenario is that if those tools made developers so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were previously deemed too hard for them to do.
But it's still possible that we didn't give people enough time yet.
AI also helps immensely in creating those other inefficiencies.
just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude, the effect of this is just enormous and still being felt, and postings never recovered to pre-2020 levels. They may never. (Pre-pandemic job postings indexed to 100; it's at 61 for software.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, where pre-pandemic was indexed to 100. Banking jobs still hover around 100.)
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
p.s.: I'm a big fan of yours on Twitter.
> the tune is "be leaner".
Seems like they're happy to start cutting limbs to lose weight. It's hard to keep cutting fat if you've been aggressively cutting fat for so long. If the last CEO did their job, there shouldn't be much fat left.

It's amazing and cringy the level of parroting performed by executives. Independent thought is very rare amongst business "leaders".
At this point I'm not sure it's lack of independent thought so much as lack of thought. I'm even beginning to question whether people even use the products they work on. Shouldn't there be more pressure from engineers at this point? Is it yes men from top to bottom? Even CEOs seem to be yes men in response to shareholders, but that's like being a yes man to the wind.
When I bring this stuff up I'm called negative, a perfectionist, or told I'm out of touch with customers and/or don't understand "value". Idk, maybe they're right. But I'm an engineer. My job is to find problems and fix them. I'm not negative, I'm trying to make the product better. And they're right, I don't understand value. I'm an engineer; it's not my job to make up a number about how valuable some bug fix is or isn't. What is this, "Whose Line Is It Anyway?" If you want made-up dollar values, go ask the business monkeys. I'm a code monkey.
So you think all bugs are equally important to fix?
Do you think every bug's monetary value is perfectly aligned with user impact? Certainly that isn't true. If it were, we'd be much better at security and more concerned with data privacy. There's no perfect metric for anything, and it would be similarly naïve to think you could place a dollar value on everything, let alone accurately. That's what I'm talking about.
My main concern as an engineer is making the best product I can.
The main concern of the manager is to make the best business.
Don't get confused and think those are the same things. Hopefully they align, but they don't always.
Funny how that fat analogy works... because the head (brain) has a lot more fat content than muscles/limbs.
If we were to unionize, we could force this machine to a halt and shift the balance of power back in our favor.
But we don't, because many of us have been brainwashed to believe we're on the same side as the ones trying to squeeze us.
Last time it was tried the union coerced everyone to root for their exploiters. People that unionize aren't magically different.
Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).
We've been building actuators for hundreds of years and we still haven't got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless-motor-driven linear actuator, you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.
I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way the best hydraulic ram has completely different qualities to a human arm. In some dimensions it’s many orders of magnitudes better, but in others it’s much much worse.
It's not just that 'we don't know how to build them'; it's that the actuators aren't a standalone part - and we don't know how to build (or maintain/run in industrial environments!) the 'other stuff' economically either.
For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute. But we know that the growth in data is over. The growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (you have to build exponentially more factories if you want exponentially more chips).
I'm sure we will have big improvements in efficiency. I'm sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need on-device. But that doesn't make the models significantly smarter.
The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.
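A quick numerical illustration of that sigmoid point, using the standard logistic function: equal steps of input buy less and less output past the inflection.

```python
# Equal increments of input yield ever-smaller increments of output on the
# back half of a logistic curve -- progress continues, but returns diminish.
import math

def logistic(x: float) -> float:
    return 1 / (1 + math.exp(-x))

for x in range(6):
    print(f"step {x} -> {x + 1}: marginal gain {logistic(x + 1) - logistic(x):.3f}")
# 0.231, 0.150, 0.072, 0.029, 0.011, 0.004 ...
```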
There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.
If you took a Tesla or a Waymo and dropped it into a tier-2 city in India, it would stop moving.
Driving data is cultural data, not data about pure physics.
You will never get to full self driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing, will not achieve the stated goal of full self driving.
You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.
Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.
Absolutely driving is cultural (all things people do are cultural), but given the tens of millions of miles driven by Waymo, clearly it has managed the cultural factor in the places it has been deployed. Modern autonomous driving is about how people drive far more than about the rules of the road, even on the highly regulated streets of Western countries. Absolutely the constraints of driving in Chennai are different, but what is fundamentally different? What leads to an impossible leap in processing power to operate there?
I definitely recall reading some thinkpieces along the lines of "In the year 203X, there will be no more human drivers in America!" which was and still is clearly absurd. Just about any stupidly high goalpost you can think of has been uttered by someone in the world early on.
Anyway, I'd be interested in a breakdown on reliability figures in urban vs. suburban vs. rural environments, if there is such a thing, and not just the shallow take of "everything outside cities is trivial!" I sometimes see. Waymo is very heavily skewed toward (a short list of) cities, so I'd question whether that's just a matter of policy, or whether there are distinct challenges outside of them. Self-driving cars that only work in cities would be useful to people living there, but they wouldn't displace the majority of human driving-miles like some want them to.
As others will attest, when adherence to driving rules is spotty, behavior is highly variable and unpredictable. You need a degree of straight-up aggression if you want to be able to handle an auto driver who is cheating the laws of physics.
Another example of something that's obvious based on crimes in India: people can and will come up to your car during a traffic jam, tap your chassis to make it sound like there was an impact, and then snatch your phone from the dashboard when you roll your window down to find out what happened.
This is simply to illustrate and contrast how pared down technical intuitions of "driving" are, when it comes to self driving discussions.
This is why I think Level 5 is simply not happening, unless we redefine what self-driving is, or redefine the approach to achieving it. I feel there's more to be had from a centralized traffic orchestration network that supplements autonomous traffic, rather than trying to solve it onboard the vehicle.
Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.
And the point that I am making is that this view was never baked into the original vision of self-driving, resulting in predictions of a velocity that was simply impossible.
Physical reality does not have vibes, and is more amenable to prediction than human behavior. Or cow behavior, or wildlife if I were to include some other places.
This is a semantic discussion, because it is about what people mean when they talk about self driving.
Just ditching the meaning is unfair, because goddamit, the self driving dream was awesome. I am hoping to be proved wrong, but not because we moved our definition.
Carve out a separate category which articulates the updated assumptions. Redefining it is a cop-out and, dare I say it, unbecoming of the original ambition.
Networked Autonomous vehicles?
Or actual intelligence. That observes its surroundings and learns what's going on. That can solve generic problems. Which is the definition of intelligence. One of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.
Lol. If you dropped the average westerner into Chennai, they would either: a) stop moving b) kill someone
Decades of machine learning research would like to have a word.
Basically, what if GenAI is the Minitel and what we want is the internet.
I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.
It really hasn't.
The problem is that a GenAI system needs to not only understand the large codebase but also the latest stable version of every transitive dependency it depends on. Which is typically in the order of hundreds or thousands.
Having it build a component with 10 year old, deprecated, CVE-riddled libraries is of limited use especially when libraries tend to be upgraded in interconnected waves. And so that component will likely not even work anyway.
I was assured that MCP was going to solve all of this but nope.
MCP would allow it to instead get this information at run-time from language servers, dependency repositories etc. But it hasn't proven to be effective.
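One plausible shape for that run-time lookup, sketched against PyPI's public JSON API (other ecosystems such as npm or crates.io have equivalent endpoints). This is an assumption about how one could wire it, not a description of any existing MCP server:

```python
# Sketch: look up the latest published version of a dependency at run time
# and feed it into the model's context, rather than trusting whatever
# versions were current at training time.
import json
import urllib.request

def latest_version(package: str) -> str:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

# e.g. prompt_prefix = f"Use requests=={latest_version('requests')} ..."
# An MCP server could expose the same lookup as a tool the model calls itself.
```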
I can't. GPT-4 was useless for me for software development. Claude 4 is not.
Why don't you bring it up then?
> There will be a turning point but it’s not happened yet.
Do you know something that the rest of us don't?
3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.
The same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up being constantly being promised the moon and getting a cheap plastic replica instead.
The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.
Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.
So don't confuse the difference. Make sure you understand what you're arguing against. Because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware
And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.
So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.
The explosion of funding, awareness, etc. only happened after the GPT-3 launch.
Nonetheless, it took OpenAI until Nov 2022 to reach 1 million users.
The overall awareness and breakthrough probably did not come in 2020.
10 years into "we'll have self driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
They try it, but it’s not reliable
You're going to have to specify which 2 you think happened
Why do I think this?
1) They smelled slightly funny. 2) They got the diagnosis wrong.
OK maybe #2 is a red herring. But I stand by the other reason.
So there's some room for interpretation, the weaker interpretation is less radical (that AI could beat humans in radiology tasks in 5 years).
> Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)
[0] https://observer.com/2025/01/sam-altman-nuclear-fusion-start...
Realistically, we're 2.5 years into it at most.
They have failed in SF, Phoenix, and other cities that rolled out the red carpet for them.
It's pretty damning that it failed there.
And more specifically, I'm referencing Elon, where the context is that it's going to be a software push into Teslas that people already own.
There's a big gap between seeing something work in the lab and it being ready for real-world use. I know we do this in software, but that's a very abnormal thing (and honestly, maybe not for the best).
When someone talks about "having" self-driving cars next year, they're not talking about what are essentially pilot programs.
Not to mention that HN gets really tetchy about achieving specifically SAE Level 5, when in practice some pretty basic driver-assist tools are probably closer to what people meant. It reminds me of a gentleman I ran into who was convinced that the OpenAI Dota bot with a >99% win rate couldn't really be said to be playing the game. If someone can take their hands off the wheel for 10 minutes, we're there in a common-language sense; the human in the car isn't actively in control.
And it took what, like 2 decades, to get there. So no, we don't have self-driving even close. Those examples look more like hard-coded solutions for custom test cases.
I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.
You read the words but missed their meaning
I admit they don't operate everywhere - only certain routes. Still they are undoubtedly cars that drive themselves.
I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.
Did the cotton gin therefore not compete with human labor?
The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated simulator conditions in which they were trained (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment in reliable fashion), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.
So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, teenager, hothead, distracted (using phone while driving), etc - not part of the class that are dragging the human safety stats down. I'd also not trust a Tesla where penny pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.
That's the main difference with a human driver. If I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.
> I'd not personally be comfortable riding in one since I'm not senile, drunk, teenager, hothead, distracted (using phone while driving), etc - not part of the class that are dragging the human safety stats down.
The challenge is that most people think they're better-than-average drivers.

My point was that if you are part of one of these accident-prone groups, you are certainly worse than average, and are probably safer (both for yourself and everyone around you) in a Waymo. However, if you are an intelligent, non-impaired, experienced driver, then maybe not, and almost certainly not if we're talking about emergency and dangerous situations, which is where it really matters.
A recent example - a few weeks ago I was following another car in making a turn down a side road, when suddenly that car stops dead (for no externally apparent reason), and starts backing up fast about to hit me. I immediately hit my horn and prepare to back up myself to get out of the way, since it was obvious to me - as a human - that they didn't realize I was there, and without intervention would hit me.
Driving away, I watch the car in my rear-view mirror and see it pull a U-turn to get back out of the side road, making it apparent why they had stopped before. I learned something, but of course the driverless car is incapable of learning, certainly has no theory of mind, and would behave the same as last time - good or bad - if something similar happened again.
I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
> it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
What's annoying is that plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

I don't know about the rest, but I spoke up because I didn't want to hit a brick wall; I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the conmen come in, along with people who get so excited by success that they become blind to pitfalls.
And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...
You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.
It's all a big hype bubble, and not only is no one in the industry willing to pop it, they actively defend against popping a bubble that is clearly rupturing on its own. It's emblematic of how modern businesses no longer care about a proper 10-year portfolio and care more about how to make the next quarter look good.
There's just no skin in the game, and everyone's ransacking before the inevitable fire instead of figuring out how to prevent the fire to begin with.
> mostly don't have to do with what actually is going to happen
Yet I'm talking about what did happen.

I'm saying we should have memory. Look at the predictions people make. Reward accurate ones; don't reward failures. Right now we reward whoever makes the craziest predictions. It hasn't always been this way, so we should go back to less crazy.
But if you had been wrong and we would now have had superintelligence, the upside for its owners would presumably be great.
... Or at least that's the hypothesis. As a matter of fact intelligence is only somewhat useful in the real world :-)
Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.
I suspect they're the same people, basically get rich quick schemers.
A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.
That said, my view of the future is probably now wrong, I am just saying what I expected.
But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.
Not necessarily looking forward to that, but that's where the growth will come.
Regarding class struggle, I think class division has always existed, but we the masses have all the tools to improve our situation.
We have agency. Whether we are brainwashed or not. If we cared about ourselves, then we don’t need another class, or race, or whatever other grouping to do this for us.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
https://www.aspeninstitute.org/wp-content/uploads/files/cont...
(Quotes because I personally have a significantly harder time doing bloody housework...)
I don't know if it's induced demand, revealed preference, or Jevons' paradox; maybe all 3.
OK, but I doubt we're washing 10 times as many clothes, unless people are wearing them for one hour between washes...
We live in a time that the working class is unbelievably brainwashed and manipulated.
All the free money dried up and the happy clapping Barney the Dinosaur Internet was no more!
No need for AI. Troll farms are well documented and were in action before transformers could string two sentences together.
> We live in a time that the working class is unbelievably brainwashed and manipulated.
I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.
Does one have savings? Can they afford to spend time with their children outside of working day to day? Do they have the ability to take reasonable risks without chancing financial ruin in pursuit of better opportunities?
These are things we typically attribute to someone in the middle class. I worry that boiling down these discussions to “you work and they don’t” misses a lot of opportunity for tangible improvement to quality of life for large number of people.
If you have an actual job and an income constrained by your work output, you could be middle class, but you could also recognize that you are getting absolutely ruined by the billionaire class (no matter what your level of working wealth)
The words 'have to' are doing a lot of work in that statement. Some people 'have to' work to literally put food on the table, other people 'have to' work to able to making payments on their new yacht. The world is full of people who could probably live out the rest of their lives without working any more, but doing so would require drastic lifestyle changes they're not willing to make.
I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.
Now what?
What income? Income from a job, or from capital? A huge difference. It's also a lot harder to lose the latter (it takes gross incompetence or a revolution), while the former is much easier to lose.
I'm willing to bet you haven't lived long enough to know that's more or less a proxy for old age. :) That aside, even homeless people acquire possessions over time. If you have a lot of homeless in your neighborhood, try to observe that. In my area, many homeless have semi-functional motor homes. Are they legit homeless, or are they "homeless oligarchs"? I can watch any of the hundreds of YouTube channels devoted to "van life." Is a 20-year-old who skipped college that their family could have afforded, and is instead living in an $80k van and getting money from streaming, "legit homeless"? The world is not so black and white, it will turn out in the long run.
https://sanjosespotlight.com/san-jose-to-crack-down-on-rv-re...
I think progress (in the sense of economic growth) was roughly in line with what Keynes expected. What he didn't expect is that people, instead of getting 10x the living standard with 1/3 the working hours, rather wanted to have 30x the living standard with the same working hours.
I will not go into specifics, because the authoritarians still disagree, think everything is fine with degenerative debauchery, and try to abuse anyone even just pointing to failing systems. But it all does seem like civilization-ending developments, regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.
Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.
When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages, the true differences that make them true diversity that makes humanity so wonderful. In America you have none of that. You have a decimated culture that is jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost only chasing dollars and shiny objects to further enrich the ever smaller group of con artist psychopathic narcissists at the top, a kind of worst form of aristocracy that humanity has yet ever produced, lacking any kind of sense of noblesse oblige, which does not even extend to simply not betraying your own people.
That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?
"Decimated," "muddled," and "polluted" imply you have an objective analysis framework for culture. Typically people who study culture avoid moralizing like this because one very quickly ends up looking very foolish. What do you know that the anthropologists and sociologists don't, to where you use these terms so freely?
If I seem aggressive, it's because I'm quite tired of vague handwaving around "degeneracy" and identity politics. Too often these conversations are completely presumptive.
What's the sense in asking for examples? If one person sees ubiquitous cultural decay and the other says "this is fine," I think the difference is down to worldview. And for a pessimist and an optimist to cite examples at one another is unlikely to change the other's worldview.
If a pessimist said, "the opioid crisis is deadlier than the crack epidemic and nobody cares," would that change the optimist's mind?
If a pessimist said, "the rate of suicide has increased by 30% since the year 2000," would that change the optimist's mind?
If a pessimist said, "corporate profits, wealth inequality, household debt, and homelessness are all at record highs," ...?
And coming from the other side, all these things can be Steven Pinker'd if you want to feel like "yes there are real problems but actually things are better than ever."
There was a book that said something about "you will recognize them by their fruit." If these problems are the fruit born of our culture, it's worth asking how we got here instead of dismissing it with "What do you know that the anthropologists and sociologists don't?"
Capitalism arrives for everyone, Asia is just late for the party. Once it eventually financializes everything, the same will happen to it. Capitalism eventually eats itself, doesn't matter the language or how many centuries your people might have.
Keynes lived in a time when the working class could not buy cheap from China... and complain that everybody else was doing the same!
AI isn't going to generate those jobs, it's going to automate them.
ALL our bullshit jobs are going away, and those people will be unemployed.
When kids stop learning to code for real, who writes GCC v38?
This whole LLM thing is just the next Bitcoin/NFT. People had a lot of video cards and wanted to find a new use for them. In my small brain it's so obvious.
To compare that to NFTs is pretty disingenuous. I don't know anyone who has ever accomplished anything with an NFT. (I'm happy to be wrong about that, but I have yet to find a single example.)
Maybe consider it's not all on the AI tools if they work for others but not for you.
Human-written code also needs reviews, and is also frequently broken until subjected to testing, iteration, and reviews, and so our processes are built around proper qa, and proper reviews, and then the original source does not matter much.
It's however a lot easier to force an LLM into a straitjacket of enforced linters, enforced test-suite runs, enforced sanity checks, and enforced processes, at a level that human developers would quit over. So as we build out the harness around the AI code generation, we're seeing the quality of that code increase a lot faster than the quality delivered by human developers. It still doesn't beat a good senior developer, but it does often deliver code that handles tasks I could never hand to my juniors.
(In fact, the harness I'm forcing my AI generated code through was written about 95%+ by an LLM, iteratively, with its own code being forced through the verification steps with every new iteration after the first 100 lines of code or so)
You can feel free not to believe it, as I have no plans to open up my tooling anytime soon - though partly because I'm considering turning it into a service. In the meantime these tools are significantly improving the margins for my consulting, and the velocity increases steadily as every time we run into a problem we make the tooling revise its own system prompt or add additional checks to the harness it runs to avoid it next time.
A lot of it is very simple. E.g. a lot of these tools can produce broken edits. They'll usually realise and fix them, but adding an edit tool that forces the code through syntax checks/linters, for example, saved a lot of pain. As does forcing regular test and coverage runs, not just on builds.
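A minimal sketch of that kind of gate, assuming a Python codebase with ruff and pytest available; the commenter's actual harness is not public, so this is only an illustration:

```python
# Sketch of a gate for LLM-proposed edits: reject anything that fails a
# syntax check, the linter, or the test suite. "ruff" and "pytest" are
# stand-ins for whatever checks a real harness would enforce.
import ast
import subprocess

def accept_edit(path: str, new_source: str) -> bool:
    try:
        ast.parse(new_source)          # cheapest check first: must parse at all
    except SyntaxError:
        return False
    with open(path, "w") as f:
        f.write(new_source)
    for cmd in (["ruff", "check", path],   # lint gate
                ["pytest", "-q"]):         # full test-suite gate
        if subprocess.run(cmd).returncode != 0:
            return False                   # caller reverts the file (e.g. via git)
    return True
```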
For one of my projects I now let this tooling edit without asking permission, and just answer yes/no to whether it can commit once it's ready. If no, I'll tell it why and review again when it thinks it's fixed things, but a majority of commit requests are now accepted on the first try.
For the same project I'm now also experimenting with asking the assistant to come up with a todo list of enhancements for it based on a high level goal, then work through it, with me just giving minor comments on the proposed list.
I'm vaguely tempted to let this assistant reload its own modified code when tests pass and leave it to work on itself for a while and see what comes of it. But I'd need to sandbox it first. It's already tried (and was stopped by a permissions check) to figure out how to restart itself to enable new functionality it had written, so it "understands" when it is working on itself.
But, by all means, you can choose to just treat this as fiction if it makes you feel better.
It's also the jobs that involve keeping people happy somehow, which may not be "productive" in the most direct sense.
One class of people that needs to be kept happy are managers. What makes managers happy is not always what is actually most productive. What makes managers happy is their perception of what's most productive, or having their ideas about how to solve some problem addressed.
This does, in fact, result in companies paying people to do nothing useful. People get paid to do things that satisfy a need that managers have perceived.
NONE of the bullshit jobs are going away, there will simply be bigger, more numerous bullshit.
https://www.theguardian.com/commentisfree/2024/nov/21/icelan...
Policy matters
There can be a certain snobbishness with academics, where they think: of course I enjoy working away on my theories of employment, but the unwashed masses do crap jobs where they'd rather sit on their arses watching reality TV. But it isn't really like that. Usually.
I don't know that I've ever heard this rationally articulated. I think it's a "gut feel" that at least some people have.
If taxes take 10% of what you make, you aren't happy about it, but most of us are OK with it. If taxes take 90% of what you make, that feels different. It feels like the government thinks it all belongs to them, whereas at 10%, it feels like "the principle is that it all belongs to you, but we have to take some tax to keep everything running".
So I think the way this plays out in practice is, the amount of taxes needed to supply everyones' basic needs is across the threshold in many peoples' minds. (The threshold of "fairness" or "reasonable" or some such, though it's more of a gut feel than a rational position.)
I'll take capitalism with all its warts over that workers paradise any day.
Even I, working a job that I enjoy, building things that I'm good at, a job that is almost stress-free, find after 10-15 years that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming, and the meaninglessness is stronger than ever.
This creates supply-demand pressure for goods and services. Anything with limited supply such as living in the nice part of town will price out anyone working 15 hours/week.
And so society finds an equilibrium…
Most people with a modest retirement account could retire in their forties to working 15-hour workweeks somewhere in rural America.
And then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”, possibly before your kids head off to college.
Most cannot meet all those conditions and end up on the hedonic treadmill.
Instead, corporations chose to consume us.
That said, I’m not what you’d call a high-earning person (I earn < $100k); I simply live within my means and do my best to curb lifestyle creep. In this way, Keynes’ vision is a reality, but it’s a mindset, and we also have to know when enough wealth is enough.
The arrangement was arrived at because the irregular income schedule makes an hourly wage or a salary a poor option for everyone involved. I’m grateful to work for a company where the owners not only value my time and worth, but also value a similar work routine themselves.
Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it.
The point is he’s powerless not to. The alternative is allowing a bond rout to trigger a bank collapse, probably in rural America. He didn’t do the prep that produces actual leverage. (Xi did.)
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has very good explanatory power.
Doubling the interest rate from 0.1% to 0.2% already does a lot to your DCF models, and in this case we went from zero (or in some cases negative) rates to several percentage points. Of course stock prices tanked. That's what any textbook will tell you, and what any investor will expect.
Companies thus have to start turning dials and adjusting parameters to make the number go up again.
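A back-of-the-envelope illustration of why rates matter so much: treat a company as a simple perpetuity of cashflows, valued at CF / r, and watch what the discount rate does (numbers purely illustrative):

    def perpetuity_value(cashflow: float, rate: float) -> float:
        """Terminal value of a flat perpetual cashflow stream: CF / r."""
        return cashflow / rate

    for rate in (0.001, 0.002, 0.05):
        print(f"rate {rate:.1%}: value = {perpetuity_value(100, rate):,.0f}")

    # rate 0.1%: value = 100,000
    # rate 0.2%: value = 50,000   <- doubling the rate halves the valuation
    # rate 5.0%: value = 2,000    <- several percentage points divides it by 50

Long-duration growth stocks, whose value sits mostly in far-future cashflows, get hit hardest, which is exactly what we saw.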
At least in my professional circles the number of late 2020-mid 2022 job switchers was immense. Like 10 years of switches condensed into 18-24 months.
Further, lots of experiences and anecdotes from talking to people who saw their company/org/team double or triple in size compared to 2019.
Despite some waves of Mag7 layoffs, I think we are still digesting what was essentially an overhiring bubble.
That said, the vibe has definitely shifted. I started working in software at uni around 2009, and for every job I've had, I applied to fewer than 10 positions and got a couple of offers. Now I barely get responses, despite having 10x the skills and experience I had back then.
Though I don't think AI has much to do with it; it's probably more the explosion of cheap software labor on the global market, meaning you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is up.
That part is so overblown. Twitter was still trying to hit moonshots; X is basically in "keep the lights on" mode, as Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within 10 years at most.
In 2000 I moved cities with a job lined up at a company run by my friends - I had about 15 good friends working there, including the CEO, and I was guaranteed a job in software development. The interview was supposed to be just a formality. So I moved, went in to see the CEO, and he told me he could not hire me: the funding was cut and there was a hiring freeze. I was devastated. Now what? I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month if I was lucky. Fortunately the place I moved into was a big house shared with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job, after four very difficult years.
So many tech companies over-hired during covid, there was a gigantic bubble happening with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.
I feel bad for people who got left out in the cold this time, I know what they are going through.
AI is somewhat creating a similar bubble now, because investors still have money and the current AI efforts are way over-hyped. The $6.5 billion paid to acquihire Jony Ive is a symptom of that.
It's not like companies laid off whole functions. These jobs will continue to be performed by humans - ZIRP just changes the number of humans and how much they get paid.
> These workers need to retrain and move on.
They only need to "retrain" insofar as they keep up with the current standards and practices. Software engineers are not going anywhere.
The first part of this statement is clearly false. People on the phone at a tech support company are very much necessary to generate revenue, people tending fields were very much necessary to extract the value of the fields, draftsmen before CAD were absolutely necessary, etc.
Yet technology replaced them, or is in the process of doing so.
So then your statement simplifies to "if you want to be safe from replacement, have a job that's hard to replace", which isn't very useful anymore.
Demand for these products probably wasn't where it was expected to be at the time. Perhaps the answer to its biggest effect lies in how it will free up human potential and time.
If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.
Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology, meaning there will be no place to run... It may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.
I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are just missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf. tree search) and agentic applications have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too: superhuman math and coding ability (on smaller-scope tasks) does not translate into GENERAL intelligence, since it is not based on a general mechanism - it is based on vertical pre-training in these areas (atypical in terms of general use cases) where there is a clean reward signal for RL to work well.
It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel exists perhaps because these CEOs realize there is a limited window of opportunity to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising.
(FTR, I’m not even taking a side on whether AI is going to take all the jobs. Regardless of what happens, the fact remains that the mainstream reporting on this subject has been absolute sh*t. I guess “the singularity is here” gets more clicks than “salesperson makes sales pitch”.)
A lot of the BS jobs are being killed off. Do some non-BS jobs get burned up in the fire along the way? Yes. But it's only the beginning.
History repeats in strikingly similar ways: the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with it as soon as possible.
One can argue about the timeline and technology (maybe not LLM-based), but it does seem that human-level AGI will be here relatively soon - in the next 10 or 20 years perhaps, if not 2. When this does happen, history is unlikely to be a good predictor of what to expect... AGI may create new jobs as well as destroy old ones, but what's different is that AGI will also be doing those new jobs! AGI isn't automating one industry, or creating a technology like computers that can help automate any industry - AGI is a technology that will replace the need for human workers in any capacity, starting with all jobs that can be done without a physical presence.
It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.
I work as a marketing engineer at a ~$1B company, and the gains I have been able to provide as an individual are absolutely multiplied by genAI.
One theory I have is that maybe it is a failing of prompting ability that is causing the doubt. Prompting, fundamentally, is querying a vector space for a result - and there is a skill to it. There is a gross lack of tooling to assist with this, which I attribute to a lack of awareness of that fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of, beyond a set of usual habits that work well for them.
Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M-token context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced it as a leap, then you are using LLMs incorrectly.
You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.
AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
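A sketch of what "keep those stored in your project" can look like in practice. The file names here are hypothetical; any similar convention works:

    from pathlib import Path

    # Hypothetical project files: a running log of past model mistakes,
    # plus current library syntax/conventions gathered via deep research.
    CONTEXT_FILES = ["MISTAKES.md", "CONVENTIONS.md", "API_NOTES.md"]

    def build_prompt(task: str, project_dir: str = ".") -> str:
        """Prepend stored project context to every query, so the model
        doesn't repeat old mistakes or hallucinate stale APIs."""
        parts = []
        for name in CONTEXT_FILES:
            f = Path(project_dir) / name
            if f.exists():
                parts.append(f"## {name}\n{f.read_text()}")
        parts.append(f"## Task\n{task}")
        return "\n\n".join(parts)

Every mistake the model makes gets appended to MISTAKES.md once, and then never has to be corrected by hand again.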
I consistently make 200k+ token queries of code and context and receive highly accurate results.
I build 10-20k LOC tools in hours, for fun. Are they production-ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.
The empowerment of a single developer who is both skilled at directing AI AND an experienced dev/engineer is absolutely incredible.
Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing: it's right under your nose, constantly.
The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.
I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.
However, there seems to be a big disconnect on this site and others.
If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.
However, many people don’t believe AGI is possible, and thus will never consider those possibilities.
I fear many will deny the probability that AGI could be achieved in the near future, leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided if only more smart people recognized the possibility of AGI and ASI, and rationally devoted their cognitive abilities to ensuring that the potential emergence of smarter-than-human intelligences goes well.
We are absolutely in a hype and market bubble around AI right now - and like the dot com bubble, the growth came not in 2000, but years later. It turns out it takes time for a new technology to percolate through society, and I use the “mom metric” as a bellwether - if your/my mother is using the tech, you’d better believe it has achieved market penetration.
Until 2011 my mum was absolutely not interested in the web. Now she does most of her shopping on it, and spends her days boomerposting.
She recently decided to start paying for ChatGPT.
Sure, it’s a fuzzy thing, but I think the adoption cycle this time around will be faster, as the access to the tech is already in peoples’ hands, and there are plenty of folks who are already finding useful applications for genai.
Robotaxis, whether they end up dominated by Tesla or waymo or someone else entirely, are inarguably here, and the adoption rates (the USA is not the only market in the world) are ramping significantly this year.
I’m not sure I get your point about smartphones? They’re in practically every pocket on the planet, now, they’re not some niche thing.
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
For the moment, AI is enabling a bunch of stuff that was too expensive or time-consuming to do before (flooding the commons with shiny garbage and pedantic text to drive “engagement”).
Despite the hype, it’s going to be 2-3 years before AI applications really hit their stride, and 3-7 before general-purpose robotics really gets up to speed.
The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.
Even older people prefer to hire younger people.
I remember the pre-Web days of Usenet and BBS and no one thought those were trendy.
AI is far more akin to crypto.
Pretty much everyone I know uses AI for something.