1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
I got myself into a loop where no matter what I did, there was no human in the loop.
Even the "threaten to cancel" trick didn't work, still just chatbots / automated services.
Thankfully more and more of the UK is getting FTTH (fibre to the home). Sadly for me, I misread the coverage checker when I last moved house.
You're acting as if it isn't the monopolies that implement these systems first.
I would say the use cases are only coming into view.
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get deep insight into people's and other corporations' inner lives.
I'm not saying that I think LLMs are useless; far from it. I use them when I think they're a good fit for the research I'm doing, the code I need to generate, etc. But the way they're being pushed from a marketing perspective tells me that the companies making these tools need people to use them to create a data moat.
It's extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn. It grates on me so much that I've actively started using these tools less, and I try to disable every new "Intelligence™" feature that shows up in a tool I use.
The boards in turn instruct the CEOs to "adopt AI", so all the normal processes for deciding what/if/when to do things get short-circuited, and you end up with AI features no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters, for whom any use of AI is a win no matter how inconsequential the results, and then you get things like the original article, which treats the absence of job losses so far as a sign that AI hasn't changed anything. And while that might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" hype), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever stopped its rollout if there's cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But it's on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly of no value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far away from the vision that others have that it's not clear how realistic that vision is.
Fundamentally, we (the recipients of LLM output) are generating the meaning from the words given. I.e., LLMs are great when the recipient of their output is a human.
But when the recipient is a machine, the model breaks down, because machine-to-machine communication requires deterministic interactions. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
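To make that concrete, here is a minimal sketch (in Python; `call_llm` is a hypothetical stub for whatever LLM API you use) of the usual workaround: let the nondeterministic model produce text, but only let schema-validated output cross the machine-to-machine boundary.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API you use."""
    raise NotImplementedError

def extract_order_status(raw_email: str, max_retries: int = 3) -> dict:
    """Gate free-form LLM output behind a strict, deterministic validator."""
    prompt = (
        'Extract JSON with exactly the keys "order_id" (string) and '
        '"status" ("shipped", "delayed" or "cancelled") from this email:\n'
        + raw_email
    )
    for _ in range(max_retries):
        try:
            data = json.loads(call_llm(prompt))
        except json.JSONDecodeError:
            continue  # model produced non-JSON; try again
        if (isinstance(data, dict)
                and isinstance(data.get("order_id"), str)
                and data.get("status") in {"shipped", "delayed", "cancelled"}):
            return data  # only schema-conforming output reaches the next machine
    raise ValueError("no schema-conforming output after retries")
```

The validator, not the model, is what the downstream machine trusts; the retry loop papers over the nondeterminism rather than eliminating it.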
LLMs lack a fundamental human capability of deterministic symbolization: creating NEW symbols with associated rules that can deterministically model the worlds we interact with. They have a long way to go on this.
It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". 'Training' is at best ill-defined.
It sounds like they didn't ask those who got laid off.
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
In this case, the total cost would've gone up, and thus the stakeholder (i.e., the person who pays) will eventually refuse to pay once the "old" way turns out to have been cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
Not really, as long as the precondition I mentioned above (the total cost dropping) holds.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely see world-order-disrupting changes in the coming decades as computers get another 1000 times faster and more powerful over the next 10 years.
The jobs described might be lost (made obsolete or replaced) in the longer term as well, if AI gets better than the people doing them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace", which would make teachers obsolete.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
When a sector collapses and becomes irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back into training and find a different activity.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
Many, many industries and jobs transformed or were relegated to much smaller niches.
Overall it was great.
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
That means either:
1. The capitalists failed to redeploy capital after the collapse.
2. We entered into some kind of post-capitalism future.
To explore further, which one are you imagining?
Not a sustainable strategy in the long term though.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
[0] https://www.ft.com/content/168e9ba3-e2ff-4c63-97a3-8d7c78802...
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."
Where is this slop you speak of?
Not because the LLM is better, but because the search is close to unusable.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspapers and so on, and transforms those middle-class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who have the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen whether this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
The individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness among all humans of our common conditions under disparate nation states.
Nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes; what varies is what you receive for that taxation. Your nation state should have to compete with other nation states to retain you.
The nativist movement is wrongful in the USA because none of the folks crying about foreigners is actually Native American,
but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities. Attracting immigration is a good thing, but hey, feel free to recycle tired Murdoch-media talking points that have brought us nothing but trouble for 40 years.
> There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.' [1]
The problem with anti-border extremism is that it ignores the huge success national borders have had since pre-recorded history in building social cohesion, community, and more generally high-trust societies. All those things are precious, they are worth making sacrifices for, they are things small town America has only recently lost, and still remembers, and wants back. Maybe you haven't experienced those things, not like these people you so casually dismiss have.
https://www.dhl.com/global-en/microsites/core/global-connect...
Source for counter argument?
We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and, to an extent, Brexit, depending on which Brexit version you choose to listen to) will be a blip in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.
>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It'll break down into blocs, not 200 individual countries.
Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.
If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Milton Friedman(tm) golden arches theory) then I'm afraid I've got bad news.
Agree, but I never said it was.
>If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Milton Friedman(tm) golden arches theory) then I'm afraid I've got bad news.
Why are you saying that? Again, I didn't suggest that.
BRICS have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from things like a reserve currency.
(Racist memes and furry pornography don't count.)
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians etc. from now on, when something certainly shittier than what a good artist does, but also much better than what a poor one can achieve, can be had in five minutes for free?
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
What you are truly seeking is high-level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made at a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling, of having any layered specifications or actual levels involved at all. You prompt a machine trained to ascertain the important data points for a given model by itself, when the correct model is built up with human specifications and intention at every level.
Wrongful roads lead to erratic destinations, when it turns out that you actually have some intentions you wish to implement IRL.
If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.
But that doesn't mean the article they wrote in each of those scenarios is not useful and economically valuable enough for them to keep a job.
A similar thing goes for delivery: moving a single pallet to a store, or replacing carpets, or whatever. There's a lot of complexity if you don't offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store might, to my mind, be simpler than the full range of environments vehicles need to operate in.
And I think we know who's first to go: average or below-average "creative" professionals. Copywriters, artists and so on.
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find via YouTube search that come up as the first results on Google Search. Same for Maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-Google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
Resume filtering by AI can work well as a first line (if implemented well). However, once we get to the real interview rounds and I see a CV full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.
I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more so by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less performative copy-paste of stackoverflow/code-docs/examples/etc. and to engage the mind in what those lines should be. Even if this engagement of the mind is a prompt.
Where input' is a distorted version of input. This is the new reality.
We should start to be less impressed by volume of text and instead focus on density of information.
Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".
So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.
Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.
Always was.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
Note also that ad blockers are much less prevalent on mobile.
And even if we solve this problem of hallucination, the AI agents still need a platform to do search.
If I was Google I’d simply cut off public api access to the search engine.
Google Search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.
>If I was Google I’d simply cut off public api access to the search engine.
The convicted monopolist Google? Yea, that will go very well for them.
OpenAI o3
Gemini 2.5 Pro
Grok 3
Anything below that is obsolete or dumbed down to reduce cost
I doubt this feature is actually broken and returning hallucinated links
What people call "AI slop" existed before AI, and AI output where I control the prompt is getting to be better than what you will find on those sorts of websites.
Well, their Search revenue actually went up last quarter, as it does every quarter. Overall traffic might be a bit down (they don't release that data, so we can't be sure) but not revenue. While I now take tons of queries to LLMs, the kinds of queries Google actually makes a lot of money on (searching flights, restaurants, etc.) I don't go to an LLM for, either out of habit or out of fear that these things are still hallucinating. If Search were starting to die, I'd expect to see it in the latest quarterly earnings, but it isn't happening.
I've seen a whole lot of gen AI deflecting customer questions that would previously have been tickets. That's reduced ticket volume that would have been handled by a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
And this trend isn't new; a lot of investments into e.g. customer support is to need less support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or to reduce expenses by outsourcing call center work to low-wage countries. AI is another iteration, but gut feeling says they will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
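Roughly this shape, as a toy sketch (the outage areas, questions, and phone number below are all invented, not the real project):

```python
KNOWN_OUTAGES = {"SW1A", "EH3"}  # areas with a reported outage (made up)

def triage(postcode_area: str) -> str:
    """Walk the customer through self-service checks before offering the phone."""
    if postcode_area in KNOWN_OUTAGES:
        return "There's a known outage in your area; engineers are working on it."
    if input("Is your main fuse switch ON? (y/n) ").strip().lower() != "y":
        return "Flip the main fuse switch back on, then check your power again."
    if input("Do your neighbours have power? (y/n) ").strip().lower() == "y":
        return "The fault is likely inside your home; consider an electrician."
    return "Please call customer support on 0800 000 0000."  # placeholder number
```

Each early return is a call that never reaches the support queue.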
In the future, we will do a lot more.
In other terms: There will be a lot more work. So even if robots do 80% of it, if we do 10x more - the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet Earth, it is basically empty, while rent in big cities is high. But nobody needs to sleep in a big city. We just do so because getting in and out of it is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car) that will change. I have done a few calculations, how much more mobility we could achieve with the existing road infrastructure if we use electric autonomous buses, and it is staggering.
Another way to look at it: Currently, most matter of planet earth has not been transformed to infrastructure used by humans. As work becomes cheaper, more and more of it will. There is almost infinitely much to do.
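For flavor, here is a back-of-the-envelope version of that bus calculation. Every number below is an assumption I'm making for illustration, not a measured figure:

```python
# People moved per lane per hour: cars vs. autonomous electric buses.
car_flow_per_lane = 2000          # vehicles/hour/lane (common rule of thumb)
car_occupancy = 1.5               # average persons per car (assumed)
bus_headway_s = 4                 # one autonomous bus every 4 seconds (assumed)
bus_occupancy = 40                # persons per bus (assumed)

people_by_car = car_flow_per_lane * car_occupancy      # 3,000 people/hour
buses_per_hour = 3600 / bus_headway_s                  # 900 buses/hour
people_by_bus = buses_per_hour * bus_occupancy         # 36,000 people/hour

print(people_by_bus / people_by_car)                   # ~12x more throughput
```

Under those (generous) assumptions, the same lane moves an order of magnitude more people.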
Which of the few remaining wild creatures will be displaced?
https://www.worldwildlife.org/press-releases/catastrophic-73...
Costs of buses are mostly the driver. Which will go away. The rest is mostly building and maintaining them. Which will be done by robots. The rest is energy. The sun sends more energy to earth in an hour than humans use in a year.
And use of solar energy is entirely unrelated to doubling the living area. That can, and should, be done anyway.
That said, the fact that I can't find an open-source LLM front-end that will accept a folder full of images, run a prompt on each sequentially, and then return the results in aggregate is incredibly frustrating.
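It's a short enough script that I've resorted to writing it by hand. Something like the following, assuming the official openai Python SDK, an OPENAI_API_KEY in the environment, and a vision-capable model (the model name here is just an example):

```python
import base64
import pathlib

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Describe this image in one sentence."

results = {}
for path in sorted(pathlib.Path("images").glob("*.jpg")):
    b64 = base64.b64encode(path.read_bytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    results[path.name] = resp.choices[0].message.content

# Aggregate report at the end, one line per image.
for name, answer in results.items():
    print(f"{name}: {answer}")
```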
I think we are at a crossroads as to what this will result in, however. In one case, the benefits accrue at the top, with corporations earning greater profits while employing fewer people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits, and confer them not just on the corporations but also the public good. People could work less, leaving more time for community enhancing activities. There are also many areas where society is currently underserved which could benefit from freed up workforce, such as schooling, elderly care, house building and maintenance etc etc.
I hope we can work toward the latter rather than the former.
It will for sure! Even today the impact is colossal.
As an example, people used to read technical documentation; now they ask LLMs, which replaces serving a simple static file with 50k matrix multiplications.
For sure, we are doing our best to eradicate the conditions that make Earth habitable. However, I suggest that the first needed change is for computer-screen humans to realize that other life forms exist. This requires stepping outside and questioning human hubris, so it might be a big leap, but I am fairly confident you will discover that absolutely none of our planet is empty.
Demand for software has high elasticity
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this thanks to their ecosystem - OpenAI et al. can't compete with this sort of thing.
And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny valley territory (not at the bottom of it but not being able to climb out either), then eventually in a few years' time we should see a return to many fragmented almost indie-like platforms offering bespoke human-made content. The only way to hope to achieve the acceptable quality will be to favor it instead of scale as the content will have to be somehow verified by actual human beings.
Question on two fronts:
1. Why do you think, considering the current rate of progress, that it is very unlikely LLM output becomes indistinguishable from that of expert creatives? Especially considering that a lot of the tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
1. The progress is there, but it's been slowing down, yet the downsides have largely remained.
1.1. With the LLMs, while the larger context window (mostly achieved via hardware, not software) lets the models keep track of longer conversations better, the hallucinations are as bad as ever. I use them eagerly, yet I haven't felt any significant improvement in the outputs in a long time. Anecdotally, a couple of days ago I decided to try my luck and vibe-code a primitive messaging library, and it led me down the wrong path even though I was challenging it along the way; it was so convincing that I wouldn't have noticed had my colleague not told me there was a better way. Granted, the colleague is extremely smart, but the LLM should have told me the right approach, because I was specifically questioning it.
1.2. Image generation has also barely improved. The biggest improvement during the past year has been 4o, which can largely be attributed to the move from diffusion to autoregression, but it's far from perfect and still suffers from hallucinations even more than LLMs do.
1.3. I don't think video models are even worth discussing because you just can't get a decent video if you can't get a decent still in the first place.
2. That's speculation, of course. Let me explain my thought process. A truly expert level AI should be able to avoid mistakes and create novel writings or research just by the human asking it to do it. In order to validate the research, it can also invent the experiments that need to be done by humans. But if it can do all this, then it could/should find the way to build a better AI, which after an iteration or two should lead to AGI. So, it's basically a genius that, upon human request, can break itself out of the confines.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything it would benefit the whole industry to keep some stuff looking "AI" so people build a false model of what "AI" looks like.
This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny plasticy look). Then everyone had an "eye for AI" after seeing all those. But in the meantime, purpose made image generators without the injected prompts were creating indistinguishable images.
It is almost certain that every single person here has laid eyes on an image already, probably in an ad, that didn't set off any triggers.
Most of it wasn't bespoke assets created by humans but stock art, picked, if you were lucky, by a professional photo editor, but more often by the author themselves.
That said I don’t think entry level illustration jobs can be around if software can do their job better than they do. Just like we don’t have a lot of calculators anymore, technological replacement is bound to occur in society, AI or not.
Well at least that's the potential.
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.
I'd still hire an entry-level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.
"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
Second, in theory, future generations of AI tools will be able to review previous generations' work and improve upon the code. If they need to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people do - but please share if you've got sources showing the opposite.
(Disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines.)
Is that what's going to happen? These are still LLMs. There's nothing in future generations that guarantees those changes would be better rather than flat-out regressions. Humans can't even agree on what good code looks like, as it's very subjective and heavily dependent on context and the skills of the team.
Likely, you ask gpt-6 to improve your code and it just makes up piddly architecture changes that don't fundamentally improve anything.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
We're coming up in 3 years of ChatGPT and well over a year since I started seeing the proliferation of these 10X claims, and yet LLM users seem to be bearing none of the fruit one might expect from a 10X increase in productivity.
I'm beginning to think that this 10X thing is overstated.
And any important jobs won’t be replaced because managers are too lazy and risk averse to try AI.
We may never see job displacement from AI. Did you know bank teller jobs actually increased in the decades following the rollout of ATMs?
But even then, I'm not saying all are equally vital, I'm just saying that the statement, "most jobs are performative" doesn't even come close to being supported by "I've worked 10 performative jobs".
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But generative AI is not just AI chatbots. There are models that generate sounds/music, models that generate images, etc.
Another thing is, the research only looked at Denmark, a nation with a fairly healthy attitude towards work-life balance, not a nation that takes pride in people working their asses off.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute, based on a prompt typed in by a 5-year-old, then your expected value for "art work" will decrease, and you'll not pay the same price when buying from a human artist.
Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
It's something that can be empirically measured instead of visually guessed at by a human or magic eight-ball. Using a tool that costs only a few dollars, no less, like the pressure gauge you should already keep in your glovebox.
These will likely be cell-phone-plan level expensive, but the value prop would still be excellent.
You can use a penny and your eyeballs to assess this, and all it costs is $0.01
It blows my mind the degree to which people are offloading critical thinking to AI.
There is no moat. Most of these AI APIs and products are interchangeable.
Me: "Looks like your tire is a little low."
Youth: "How can you tell, where's your phone?"
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
In a more general sense: sometimes, but not always, it is easier to verify something than to come up with it in the first place.
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e., some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases for which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
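As a sketch of that split (all names here are hypothetical, including the `extract_patient_facts` function that would wrap the LLM): the messy intake passes through the model exactly once, into a rigid schema, and the plan itself comes from plain, auditable rules.

```python
from dataclasses import dataclass

@dataclass
class PatientFacts:
    """The rigid schema the LLM-backed intake step must fill in."""
    allergies: set[str]
    diabetic: bool
    daily_kcal_target: int

def build_meal_plan(facts: PatientFacts) -> list[str]:
    """Deterministic, auditable rules; no LLM involved past this point."""
    plan = []
    if facts.diabetic:
        plan.append("prefer low glycemic-index carbohydrates")
    for allergen in sorted(facts.allergies):
        plan.append(f"exclude all {allergen}-containing foods")
    plan.append(f"target {facts.daily_kcal_target} kcal/day")
    return plan

# facts = extract_patient_facts(records, intake_notes)  # hypothetical LLM side
# print(build_meal_plan(facts))                         # deterministic side
```

Hallucinations are then confined to the extraction step, where a human (or a validator) can check a handful of structured fields instead of a whole essay.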
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. and I would know, I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
[0] https://link.springer.com/article/10.1007/s10676-024-09775-5
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
I think it’s weird to reject AI based on its current form.
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
As an example, if you want diet advice, it can lie to you very convincingly, so there is no point in getting advice from it.
The main value you get from a programmer is that they understand what they are doing and can take responsibility for what they are developing. Very junior developers are hired mostly as an investment, so that they become productive and stay with the company. AI might help with some of this but doesn't really replace anyone in the process.
For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong", that is a fabrication. It is a lie.
If you ask me to remove whitespace from a string in Python and I mistakenly tell you use ".trim()" (the Java method, a mistake I've made annoyingly too much) instead of ".strip()", am I lying to you?
It's not a lie. It's just wrong.
The bullshitter doesn't care about if what they say is true or false or right or wrong. They just put out more bullshit.
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave, you are asking for a function that does not exist. If it invents a function, because it is designed never to say "you are wrong, that doesn't exist" or "I don't know the answer", that seems to me to qualify as "intent to deceive": it is designed to invent something rather than give you a negative-sounding answer.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
E.g., the next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation: good luck holding them accountable through any regulatory body, or seeing any consequences.
Because, as Brad Pilon of intermittent fasting fashion repeatedly stresses, "All diets work."*
* Once there is an energy deficit.
From what I know, dieticians don't design exercise plans. If that's true, the LLM has better odds of figuring one out.
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just having the ability to read scientific studies and spot any flaws requires knowledge that only maybe 1 in 10000 people could do, if not worse than that.
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
People talk a lot of about false info and hallucinations, which the models do in fact do, but the examples of this have become more and more far flung for SOTA models. It seems that now in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically is only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
LLMs create real value. I save a bunch of time coding with an LLM vs. without one. Is it perfect? No, but it doesn't have to be to still create a lot of value.
Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet in the 2000s, when everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.
Can't disagree more (on LLMs. NFTs are of course rubbish). I'm using them with all kinds of coding tasks with good success, and it's getting better every week. Also created a lot of documents using them, describing APIs, architecture, processes and many more.
Lately working on creating an MCP for an internal mid-sized API of a task management suite that manages a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project, now I'm pretty sure it will handle most of the personal computing tasks in the future.
It doesn't have to. It can replace having no support at all.
It would be possible to run a helpdesk for a free product. It might suck but it could be great if you are stuck.
Support call centers usually work in layers. Someone to pick up the phone who started 2 days ago and knows nothing. They forward the call to someone who managed to survive for 3 weeks. Eventually you get to talk to someone who knows something but can't make decisions.
It might take 45 minutes before you get to talk to only the first helper. Before you penetrate deep enough to get real support you might lose an hour or two. The LLM can answer instantly and do better than tortured minimum wage employees who know nothing.
There may be large waves of similar questions if someone or something screwed up. The LLM can handle that.
The really exciting stuff will come where the LLM can instantly read your account history and has a good idea what you want to ask before you do. It can answer questions you didn't think to ask.
This is especially great if you've had countless email exchanges with miles of text repeating the same thing over and over. The employee can't read 50 pages just to get up to speed on the issue, and if they had the time, you don't; so you explain for the 5th time that delivery should be to address B, not A, and on these days between these times, unless they are type FOO orders.
Stuff that would be obvious and easy if they made actual money.
Have you somehow managed to avoid the last several decades of human-sourced dieting advice?
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
Google is pretty much useless now that it has changed into an ad platform, and I suspect AI will go the same way soon enough.
It has always been easy to imagine how advertising could destroy the integrity of LLM's. I can guarantee that there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputation in no time.
https://www.washingtonpost.com/technology/2025/04/17/llm-poi...
One major problem is the payment mechanism. The nature of LLMs means you can't really know or force them to spit out ad garbage in a predictable manner. That will make it really tricky for an advertiser to want to invest in your LLM advertising (beyond your being able to sell the fact that you are an AI ad service).
Another is going to be regulations. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
You certainly can with middleware on inference.
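For example, here is a toy sketch of what such middleware could look like: the ad never comes out of the model at all; it is spliced in and labeled by a deterministic post-processing layer, which addresses both the predictability and the disclosure problems above. All names below are invented.

```python
# Deterministic ad insertion *around* the model output, not inside it.
SPONSORS = {
    "running shoes": "AcmeRun: try our new trail model. (Sponsored)",
}

def with_sponsored_block(user_query: str, model_answer: str) -> str:
    """Append a clearly labeled ad after the answer; never rewrite the answer."""
    for keyword, ad in SPONSORS.items():
        if keyword in user_query.lower():
            return f"{model_answer}\n\n--- Sponsored ---\n{ad}"
    return model_answer
```

Because placement is deterministic, the advertiser knows exactly when and how the ad appears, and the "Sponsored" label can't be hallucinated away.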
That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.
I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance because that non-taxed income leaves me owing at the end of the year more than some acceptable threshold that could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
If I can do this, most people can do a simple 2-page 1040EZ.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.
Personally I still think you should cross check with a professional.
What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ...".
The first can be based on fact, e.g., what has happened so far. The second is based on pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, and hype.
In contrast, this article, i.e., the paper it discusses, is based on what has happened so far. There is no "call" being made, only an examination of what has happened so far. Facts, not opinions.
What happened in 2023 and 2024 actually
Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.
It's using deprecated/replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?
And the hype was insane in 2023 already - it's useful to compare actual outcomes vs historic hype to gauge how credible the hype sellers are.
Maybe progress over the last 2-3 months is hard to see, but progress over the last 6 is very clear.
Could be the data is lagging, as a sibling comment said, but it seems wildly difficult to report on a number like this.
It also doesn't take into account the benefits to colleagues of active users of LLMs (second order savings).
My use of LLMs often means I'm saving other people time because I can work through issues without communications loops and task switching. I can ask about much more important, novel items of discussion.
This is an important omission that lowers the paper's overall value and sets it up for headlines like this.
This is because the economy is not a static thing. If one variable changes (productivity), it’s not a given that GDP will remain constant and jobs/wages will consequently be reduced. More likely is that all of the variables are always in flux, reacting and responding to changes in the market.
However, the parent comment is about an examination of what has happened so far and facts that feed into the paper and its conclusions.
I was focused on what I see as important gaps in measuring impact of AI, and its actual (if difficult to measure) impact right now.
Mostly people aren't worried about productivity itself, which would be weird. "Oh no, AI is making us way more productive, and now we're getting too much stuff done and the economy is growing too much." The major concern is that the productivity is going to impact jobs and wages, and at least so far (according to this particular paper) that seems to not be happening.
Unless twice the work is suddenly required, which I doubt.
I would also be surprised if twice the work was "suddenly" required, but would you be surprised if people buy more of something when it costs less? In the 1800s ordinary Americans typically owned only a few outfits. Coats were often passed down several generations. Today, ordinary Americans usually own dozens of outfits. Did Americans in the 1800s simply not like owning lots of clothing? Of course not. They would have liked to own more clothing, but demand was constrained by cost. As the price of clothing has gone down, demand for clothing has increased.
With software, won't it be the same? If engineers are twice as productive as before, competitive pressure will push the price of software down. Custom software for businesses (for example) is very expensive now. If it were less expensive, maybe more businesses will purchase custom software. If my Fastmail subscription becomes cheaper, maybe I will have more money to spend on other software subscriptions. In this way, across the whole economy, it is very ordinary for productivity gains to not reduce employment or wages.
Of course demand is not infinitely elastic (i.e. there is a limit on how many outfits a person will buy, no matter how cheap), but the effects of technological disruption on the economy are complex. Even if demand for one kind of labor is reduced, demand for other kinds of labor can increase. Even if we need fewer weavers, maybe we need more fashion designers, more cotton farmers, more truckers, more cardboard box factory workers, more logistics workers, and so on. Even if we need fewer programmers, maybe we need more data center administrators?
No one knows what the future economy will look like, but so far the long term trends in economic history don't link technological innovations with decreased wages or unemployment.
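To put rough numbers on the elasticity point (mine, purely illustrative, not from the paper): under a constant-elasticity demand curve, whether a productivity-driven price cut shrinks or grows total spending, and hence total demand for the labor behind it, hinges on whether elasticity is below or above 1. A minimal Python sketch:

    # Minimal sketch, hypothetical numbers: constant-elasticity demand,
    # Q = Q0 * (P / P0) ** (-elasticity).
    def quantity_demanded(base_qty, base_price, new_price, elasticity):
        return base_qty * (new_price / base_price) ** (-elasticity)

    base_price, base_qty = 100.0, 1000.0  # baseline: $100 per unit, 1000 sold
    new_price = 50.0                      # 2x productivity -> price halves
    for elasticity in (0.5, 1.0, 1.5):
        q = quantity_demanded(base_qty, base_price, new_price, elasticity)
        print(f"elasticity={elasticity}: quantity={q:.0f}, spend={q * new_price:,.0f}")
    # elasticity=0.5: total spend falls (~70,711 vs 100,000 baseline);
    # elasticity=1.5: it grows (~141,421), the Jevons-style outcome above.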
So you could work on more things with the same number of employees, make more money as a result, and either further increase the number of things you do, or if not, increase your revenue and hopefully profits per-employee.
I like this sentence because it is grammatically and syntactically valid but has the same relationship to reality as say, the muttering of an incantation or spell has, in that it seeks to make the words come true by speaking them.
Aside from simply hoping that, if somebody says it, it could be true, "If everyone's hours got cut in half, employers would simply keep everyone and double wages" is up there with "It is very possible that if my car broke down I'd just fly a Pegasus to work".
But more generally, my comment is not absurd; it's a pattern that has played itself out in economic history dozens of times.
Despite the fact that modern textile and clothing machinery is easily 1000x more efficient than weaving cloth and sewing shirts by hand, the modern garment industry employs more people today than that of medieval Europe.
Will AI be the same? I don't know, but it wouldn't be unusual if it was.
More people are also available since the fields are, comparatively, producing by themselves. Not to mention fewer of us die to epidemics, famines, and swords.
This makes sense. If everyone’s current workloads were suddenly cut in half tomorrow, there would simply be enough demand to double their workloads. This makes sense across the board because much like clothing and textiles, demand for every product and service scales linearly with population.
I was mistaken, you did not suggest that employers would gift workers money commensurate with productivity, you simply posit that demand is conceptually infinite and Jevons paradox means that no jobs ever get eliminated.
In the past 200 years we've seen colossal productivity gains from technology across every area of the economy. Over the same period, wages have increased and unemployment has remained stable. That's where my priors come from. I'll update them if we get data to the contrary, but the data we have so far (like this paper) mostly confirm them.
My company just redid our landing page. It would probably have taken a decent developer two weeks to build it out. Using AI to create the initial drafts, it took two days.
I would (similarly insultingly) suggest that if you think this is true, you're spending time doing things more slowly that you could be doing more productively by using contemporary tools.
It's not miraculous but I feel like it saves me a couple hours a week from not going on wild goose chases. So maybe 5% of my time.
I don't think any engineering org is going to notice 5% more output and lay off 1/20th of their engineers. I think for now most of the time saved is going back to the engineers.
But here's the thing - there is already plenty of documented proof of individuals losing their job to ChatGPT. This is an article from 2 years ago: https://www.washingtonpost.com/technology/2023/06/02/ai-taki...
Early on in a paradigm shift, when you have small moves, or people are still trying to figure out the tech, it's likely that individual moves are hard to distinguish from noise. So I'd argue that a broad-based, "just look at the averages" approach is simply the wrong approach to use at this point in the tech lifecycle.
FWIW, I'd have to search for it, but there were economic analyses done that said it took decades for the PC to have a positive impact on productivity. IMO, this is just another article about "economists using tools they don't really understand". For decades they told us globalization would be good for all countries, they just kinda forgot about the massive political instability it could cause.
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
Not true. The article specifically calls into question whether the massive spending on AI is worth it. AI is obviously an investment, so to determine whether it's "worth it", you need to consider future outcomes.
I honestly think computers have a net negative productivity impact in many organizations. Maybe even "most".
https://usafacts.org/articles/what-is-labor-productivity-and...
Even more surprising to me is that productivity growth declined during the ZIRP era. How did we take all that free money and produce less?
Could you say a few more words on this please? Are you referring to the rise of China?
Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.
The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or pay for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design the first time; it even immediately updated it with my Instagram link when I asked it to.
I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.
The point is that I would have paid for another human being's time. Why? Because I am not a young man anymore, and have little desire to do everything myself at this point. But now, I don't have to pay for someone's time, and that surplus time doesn't necessarily transfer to something equivalent like magic.
I am not talking about whether I have to pay more or less for anything. My problem is not paying. I want to pay so that I don't have to make something myself or waste time fiddling with a free template.
What I am proposing is that, in the current day, a human being is less likely to be at the other end of the transaction when I want to spend money to avoid sacrificing my time.
Sure, one can say that whoever is working for one of these AI companies benefits, but they would be outliers, and AI is effectively homogenizing labor units in that case. Someone with creative talent isn't going to feasibly spin up a competitive AI business the way they could have started their own business selling their services directly.
That's both pompous and bizarre. The "real" economy doesn't end at the walls of corporate offices. Far from it.
For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.
That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.
These examples aren't wrong but you might be overstating their impact on the economy as a whole.
E.g. the overwhelming majority of people do not pay solely for tax advice, or have a dietician, etc. Corporations already crippled their customer support so there's no remaining damage to be dealt.
Your tax example won't move the needle on people who pay to have their taxes done in their entirety.
Even if every job that exists today were currently automated _people would find other stuff to do_. There is always going to be more work to do that isn't economical for AIs to do for a variety of reasons.
But are those really the same? You're not paying the tax agent to give you the advice per se: even before Gemini, you could do your own research for free. You're really paying the tax agent to provide advice that you can trust without having to take the extra step of doing deep research yourself.
One of the most important bits of information I get from my tax agent is, "is this likely to get me audited if we do it?" It's going to be quite some time before I trust AI to answer that correctly.
Jevons paradox in action: some pieces of work get lost, but the lower cost of doing work generates more demand overall...
I doubt it.
Search already "obsoletes" these fields in the same way AI does. AI isn't really competing against experts here, but against search.
It's also really not clear that AI has an overall advantage over dumb search in this area. AI can provide more focused/tailored results, but it costs more. Keep in mind that AI hasn't been enshittified yet like search. The enshittification is inevitable and will come fast and hard considering the cost of AI. That is, AI responses will be focused and tailored to better monetize you, not better serve you.
If that’s true, probably for the best that those jobs get replaced. Then again, the value may have been in the personal touch (pay to feel good about your decisions) rather than quality of directions.
So…all you needed was a decent search engine, which in the past would have been Google before it was completely enshittified.
Yes.
"...all you need" A good search engine is a big ask. Google at its height was quite good. LLMs are shaping up to be very good search engines
That would be enough, for me to be very pleased with them
Ever since the explosion in popularity of the internet in the 2000s, anything journalism-related has been in terminal decline. The arrival of smartphones accelerated this process.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
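To make the gap between those two questions concrete, here is a toy staffing calculation (all numbers hypothetical):

    # Hypothetical staffing arithmetic for the two questions above.
    import math

    inquiries_per_day = 1000
    per_agent_capacity = 40  # inquiries one agent can handle per day

    # Q1: agents still answer everything, but an assistant makes each 25% faster.
    assisted_capacity = per_agent_capacity * 1.25
    # Q2: a customer-facing bot deflects 30% of inquiries before they reach agents.
    remaining_volume = inquiries_per_day * (1 - 0.30)

    print(math.ceil(inquiries_per_day / per_agent_capacity))  # 25 agents, baseline
    print(math.ceil(inquiries_per_day / assisted_capacity))   # 20 agents, Q1
    print(math.ceil(remaining_volume / per_agent_capacity))   # 18 agents, Q2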
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much reduced need for junior/associate-level attorneys at firms.
I have a 185-year-old treatise on wood engraving. At the time, reproducing any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white, to preserve the impression of continuous tones. (And some, of course, were also original artists in their own right.) The wood engraving profession was destroyed by the invention of photo-etching (there was a weird interval before the invention of photo-etching, in which cameras existed but photos had to be engraved manually anyway for printing).
Maybe all the wood engravers found employment; although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain during employment and will either have to use up their savings while doing so, or have to take lower paid jobs.
This is how engraving went too. It wasn't overnight. The tools were not distributed evenly and it was a good while before amateurs could produce anything like what the earlier professionals did.
Being able to buy a microwave and pizza rolls doesn't make you a chef. Maybe in 100 years the tooling will make you as good as the chefs of our time, but by then they'll all be doing even better work, and there are people who will pay for higher quality no matter how high the bar is raised for baseline quality, so eliminating all work in a profession is rare.
I'm a little confused by your point here:
>This is how engraving went too. It wasn't overnight. The tools were not distributed evenly and it was a good while before amateurs could produce anything like what the earlier professionals did.
In the case of engraving, most engravers weren't the original artist. The artist would draw their illustration on a wood blank, and the engraver would convert it to a print block. So artists were not completely replaced by photographers, except for journalistic sketchers, but the entire process changed and eliminated the job of engraving. Sure, high end artist-engravers kept going, but jobbing engravers were out of luck.
There are still a few artists who specialise in engraving. But the point here isn't whether a few of the most accomplished professionals will still be in demand, but what happens to the vast bulk of average people.
The wise will displace economists and consultants with LLMs, but the trend followers will hire them to prognosticate about the future impact, such that the net effect could be zero.
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, BD person. I’m watching this happen across management in US startups.
It’s displacing job growth in entry-level positions, primarily across copywriting, admin tasks, and research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the 1971 labor-productivity separation that everyone scratched their head about (answer: labor was outsourced and capital kept all value gains), we will see another asymptote to that labor productivity graph based on displacement not replacement.
> Duolingo will replace contract workers with AI. The company is going to be ‘AI-first,’ says its CEO.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
-
And within that article:
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
As with all other technologies, the jobs it removes are not normally in the country that introduces it; rather, they simply never come into existence elsewhere.
For example, the automated looms that the Luddites were protesting didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them, and similar innovations since, which have led to cheap mass-produced clothes making it uneconomic to produce there?
As suggested by this report, Denmark and the West will probably have any losses made good elsewhere and be largely unaffected.
However, places like India, Vietnam with large industries based on call centres and outsourced development servicing the West are likely to be more vulnerable.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
Case in point: Klarna.
2024: "Klarna is All in on AI, Plans to Slash Workforce in Half" https://www.cxtoday.com/crm/klarna-is-all-in-on-ai-plans-to-...
2025: "Klarna CEO “Tremendously Embarrassed” by Salesforce Fallout and Doubts AI Can Replace It" https://www.salesforceben.com/klarna-ceo-tremendously-embarr...
For example, the mass layoffs of federal employees.
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
Even customer service bots are just nicer front ends for knowledge bases.
Imagine if a tool made content writers 10x as productive. You might hire more, not less, because they are now better value! You might eventually realise you spent too much, but this will come later.
AFAIK no company starts a shiny new initiative by firing; they start by hiring, then cut back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat then makes them lean.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
Be wary of people trying to deflect the blame away from the managerial class for these issues.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation helps humans do their jobs better, but this isn't one of those cases. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
Software development jobs there face a bigger threat: outsourcing to cheaper locations.
The same goes for teachers: it is hard to replace a person supervising kids with a chatbot.
Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.
Both your experience and what the article (research) says can be valid at the same time. That’s how statistics works.
>Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days
I'm someone who tries to avoid AI tools. But this paper is literally basing its whole assessment off of two things: wages and hours. That makes its assertion disingenuous.
Let's assume that I work 8 hours per day. If I am able to automate 1 hour of my day with AI, does that mean I get to go home 1 hour early? No. Does that mean I get an extra hour of pay? No.
So the assertion that there has been no economic impact assumes that the AI is a separate agent that would normally be paid in wages for time. That is not the case.
The AI is an augmentation for an existing human agent. It has the potential to increase the efficiency of a human agent by n%. So we need to be measuring the impact that is has on effectiveness and efficiency. It will never offset wages or hours. It will just increase the productivity for a given wage or number of hours.
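A back-of-the-envelope sketch of that measurement gap (my numbers, purely illustrative): output rises while wages and hours stay flat, so a study that looks only at wages and hours records nothing.

    # Hypothetical numbers: automating 1 hour of an 8-hour day.
    hours_paid = 8.0                             # unchanged by AI
    hours_automated = 1.0                        # work the AI now does

    output_before = hours_paid                   # 8 task-hours per day
    output_after = hours_paid + hours_automated  # 9 task-hours per day
    gain = output_after / output_before - 1

    print(f"hours paid: {hours_paid} -> {hours_paid}")   # no change
    print(f"output: {output_before} -> {output_after}")  # +1 task-hour
    print(f"implied productivity gain: {gain:.1%}")      # 12.5%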
The overall rate of participation in the labor work force is falling. I expect this trend to continue as AI makes the economy more and more dynamic and sets a higher and higher bar for participation.
Overall GDP is rising while the labor participation rate is falling. This clearly points to more productivity with fewer people participating. At this point one of the main factors is clearly technological advancement, and within that, I believe that if you were to survey CEOs and ask what technological change has allowed them to get more done with fewer people, the resounding consensus would be AI.
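For what it's worth, the arithmetic behind that reading is just the standard labor-productivity ratio, output divided by labor input; a minimal sketch with made-up figures:

    # Hypothetical figures, illustration only: output per worker rises
    # whenever output grows while participation falls. The arithmetic alone
    # does not identify AI (or anything else) as the cause.
    gdp_before, workers_before = 25.0e12, 160e6
    gdp_after, workers_after = 27.0e12, 155e6

    print(f"before: ${gdp_before / workers_before:,.0f} per worker")  # ~$156,250
    print(f"after:  ${gdp_after / workers_after:,.0f} per worker")    # ~$174,194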
Truth is, companies that don’t need layoffs are pushing employees to use AI to supercharge their output.
You don’t grow a business by just cutting costs, you need to increase revenue. And increasing revenue means more work, which means it’s better for existing employees to put out more with AI.
Here's my own take:
- It is far too early to tell.
- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and it encourages conceiving and pursuing new use cases based on what people have seen.
- I would not recommend anyone train to become a translator, for sure; even before LLMs, people were paid penny amounts per word or line translated, and rates plummeted further due to tools that cache translations from previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.
- Graphic designers who live from logo designs and similar work may see fewer requests.
- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
- LLMs are a basic technology that will now be embedded into various products, from email clients and word processors to workflow tools and chat clients. This will take 2-3 years, and it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background after that.
- Industry is already working on the next generation of smarter tools for medics and lawyers. This is more of a 3-5 year development, but then again some early adopters already started 2-3 years ago. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.
Do you mean Philip Tetlock? He wrote Superforecasting, which might be what you're referring to?
But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.
The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.
For now I agree. 2-4 years from now it could be 20 ultra-strong models, each trained somewhat differently, that converse on the X-ray and reach a conclusion. I don't think technicians will have much to add to the accuracy.
Why aren’t you using the AI x-ray? Because it too often misdiagnoses things and I have to spend even more time double checking. And I still have to get a radiologist consult.
Why are you frustrated that we swapped out the blood testing thingamabob with an AI machine? Because it takes 10 minutes to do what took me 30 seconds with a microscope and is STILL not doing the full job, despite bringing this up multiple times.
Why aren’t you relying more on the AI text to speech for medical notes? Because the AVMA said that a doctor has to review all notes. I do, and it makes shit up in literally every instance. So I write my own and label the transcription as AI instead of having to spend even more time correcting it.
The best part is that the majority of vets (at least in this city) didn’t do medical notes for pets. Best you’d often get when asking is a list of costs slapped together in the 48 hours they had to respond. Now, they just use the AI notes without correcting them. We’ve gone from zero notes, so at least the next doctor knows to redo everything they need, to medical notes with very frequent significant technical flaws but potentially zero indication that it’s different from a competent doctor’s notes.
This is the wrong direction, and it’s not just new doctors. It’s doctors who are short on time doing what they can with tools that promised what isn’t being delivered. Or doctors being strong armed into using tools by the PE owners who paid for something without checking to see if it’s a good idea. I honestly do believe that AI will get there, but this is a horrible way to do it. It causes harm.
This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.
It has been a very, very long time since editors have been proof-reading prose for typos and grammar mistakes, and you don't need LLMs for that. Good editors do a lot more creative work than that, and LLMs are terrible at it.
watch out for headcount lacking in segments of the market
If each of my developers is 30% more productive, that means we can ship 30% more functionality, which means more budget to hire more developers. If you think you’ll just pocket that surplus you have another thing coming.
The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.
It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.
It doesn't work: even for the tiny slice of human work that is so well defined and easily assessed that it is sent out to freelancers on sites like Fiverr, AI mostly can't do it. We've had years to try this now, the lack of any compelling AI work is proof that it can't be done with current technology.
You can't build on top of it: unlike foundational technologies like the internet, AI can only be used to build one product, a chatbot. The output of an AI is natural language and it's not reliable. How are you going to meaningfully process that output? The only computer system that can process natural language is an AI, so all you can do is feed one AI into another. And how do you assess accuracy? Again, your only tool is an AI, so your only option is to ask AI 2 if AI 1 is hallucinating, and AI 2 will happily hallucinate its own answer. It's like The Cat in the Hat Comes Back, Cat E trying to clean up the mess Cat D made trying to clean up the mess Cat C made and so on.
And it won't get any better. LLMs can't meaningfully assess their training data, they are statistical constructions. We've already squeezed about all we can from the training corpora we have, more GPUs and parameters won't make a meaningful difference. We've succeeded at creating a near-perfect statistical model of wikipedia and reddit and so on, it's just not very useful even if it is endlessly amusing for some people.
Can you pinpoint the date which LLMs stagnated?
More broadly, it appears to me that LLMs have improved up to and including this year.
If you consider LLMs to not have improved in the last year, I can see your point. However, then one must consider ChatGPT 4.5, Claude 3.5, Deepseek, and Gemini 2.5 to not be improvements.
Whatever the case, there are open platforms that give users a chance to compare two anonymous LLMs and rank the models as a result [1].
What I observe when I look for these rankings is that none of the top ranked models come from before your stagnation cut off date of September 2024 [2].
I'm worried the shock will not be abrupt enough to encourage a proper rethink.
For all those 250 years most people have predicted that the next new technology will make the replaced workforce permanently unemployed, despite the track record of that prediction. We constantly predict poverty and get prosperity.
I kinda get why: The job loss is concrete reality while the newly created jobs are speculation.
Still, I'm confident AI will continue the extremely strong trend.
The rate of improvement increased a lot at the Industrial Revolution, but the process has always been with us, to varying degree.
We will have to get to 100% test coverage and document everything and add more bells and whistles to UI etc. The day to day activity may change but there will always be developers.
Sometimes that decrease in quality is matched by an increase in reach/access, and so the benefits can outweigh the costs. Think about language translation in web browsers and even smart spectacles, for example. Language translation has been around forever but was generally limited to popular books or small-scale proprietary content, because it was expensive to use multilingual humans to do that work.
Now even my near-zero readership blog can be translated from English to Portuguese (or most other widely used languages) for a reader in Brazil with near-zero cost/effort for that user. The quality isn't as good as human translation, often losing nuance and style and sometimes even with blatant inaccuracies, but the increased access offered by language translation software makes the lower standard acceptable for lots of use cases.
I wouldn't depend on machine translation for critical financial, healthcare, or legal use cases, though I might start there to get the gist, but for my day-to-day reading on the web, it's pretty amazing.
Software at scale is different than individuals engaging in leisure activities. A loss of nuance and occasional catastrophic failures in a piece of software with hundreds of millions or billions of users could have devastating impacts.
I was able to pre-process the agreement, clearly understand most of the major issues, and come up with a proposed set of redlines all relatively easily. I then waited for his redlines and then responded asking questions about a handful of things he had missed.
I value a lawyer being willing to take responsibility for their edits, and he also has a lot of domain specific transactional knowledge that no LLM will have, but I easily saved 10 hours of time so far on this document.
the rest is fugazi
At no point did that company choose to pivot to GenAI to cut costs and reduce headcount. It's more reactive than that.
Either mathematics sucks or economists suck. Real hard choice.
1) AI/automation will replace jobs. This is 100% certain in some cases. Look at the industrial revolution.
2) AI/automation will increase unemployment. This has never happened and it's doubtful it will ever happen.
The reason is that humans always adapt and find ways to be helpful that automation can't do. That is why, 250 years after the industrial revolution started, we still have single-digit unemployment.
> The reason is that humans always adapt and find ways to be helpful that automation can't do. That is why, 250 years after the industrial revolution started, we still have single-digit unemployment.
Horses, for thousands of years, were very useful to humans. Even with the various technological advances through that time, their "unemployment" was very low. Until the invention and perfection of internal combustion engines.
To say that it is doubtful that it will ever happen to us is basically saying that human cognitive and/or physical capabilities are without bounds and that there is some reason that with our unbounded cognitive capabilities we will never be able to create a machine that could replicate those capabilities. That is a ridiculous claim.
This reminds me of some early stage startup pitches. During a pitch, I might ask: "what do you think about competitor XYZ?" And sometimes the answer is "we don't think highly of them, we have never even seen them in a single deal we've competed for!" But that's almost a statistical tautology: if you both have .001% market share and you're doubling or tripling annually, the chance that you're going to compete for the same customers is tiny. That doesn't mean you can just dismiss that competitor. Same thing with the article above dismissing AI as a threat to jobs so quickly.
To give a concrete example of a job disappearing: I run a small deep tech VC fund. When I raised the fund in early '24, my plan was to hire one investor and one researcher. I hired a great investor, but given all of the AI progress I'm now 80% sure I won't hire a researcher. ChatGPT is good enough for research. I might end up adding a different role in the near future, but this is a research job that likely disappeared because of AI.
https://economics.mit.edu/news/daron-acemoglu-what-do-we-kno...