1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
I got myself into a loop where no matter what I did, there was no human in the loop.
Even the "threaten to cancel" trick didn't work, still just chatbots / automated services.
Thankfully more and more of the UK is getting FTTH. Sadly, I misread the coverage checker when I last moved house.
You're acting like it's not the companies that are monopolies that implement these systems first.
I would say the use cases are only coming into view.
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get deep insight into the inner lives of people and other corporations.
I'm not saying LLMs are useless; far from it. I use them when I think they're a good fit for the research I'm doing, the code I need to generate, etc. But the way they're being pushed from a marketing perspective tells me that the companies making these tools need people to use them in order to build a data moat.
Extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn, it's grating on me so much that I've actively started to use them less, and try to disable every new "Intelligence™" feature that shows up in a tool I use.
The boards in turn instruct the CEOs to "adopt AI", so all the normal processes for deciding what/if/when to do things get short-circuited, and you end up with AI features no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters, for whom any use of AI is a win no matter how inconsequential the results, and then you get things like the original article, which take the absence of job losses so far as a sign that it hasn't changed anything. And while that might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" hype), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever stopped its rollout if there were cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But they're on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly of no value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far away from the vision others have that it's not clear how realistic that vision is.
Fundamentally, we (the recipients of LLM output) are generating the meaning from the words given; i.e., LLMs are great when the recipient of their output is a human.
But when the recipient is a machine, the model breaks down, because machine-to-machine communication requires deterministic interactions. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
LLMs lack a fundamental human capability: deterministic symbolization, the ability to create NEW symbols with associated rules that can deterministically model the worlds we interact with. They have a long way to go on this.
It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". 'Training' is at best ill-defined.
It sounds like they didn't ask those who got laid off.
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
In this case, the total cost would've gone up, and thus the stakeholder (i.e., the person who pays) will eventually refuse to pay when the "old" way was cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
Not really, as long as the precondition I mentioned above (the total cost dropping) is true.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely see world-order-disrupting changes in the coming decades as computers get another 1000 times faster and more powerful over the next 10 years.
The jobs described might be lost as well (made obsolete or replaced) in the longer term if AI gets better at them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace", which would make teachers obsolete.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
When a sector collapses and becomes irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back into training and find a different line of work.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
Many, many industries and jobs transformed or were relegated to much smaller niches.
Overall it was great.
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
[0] https://www.ft.com/content/168e9ba3-e2ff-4c63-97a3-8d7c78802...
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
Not because the LLM is better, but because the search is close to unusable.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; the same goes for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle-class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who hold the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen whether this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally, the only scenario in which I see mass unemployment happening is a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off well before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
The individual phenomena you describe are indeed detritus of this failed reaction to humans' increasing awareness of our common condition under disparate nation states.
Nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes; what varies is what you receive for that taxation. Your nation state should have to compete with other nation states to retain you.
The nativist movement is wrong in the USA because none of the folks crying about foreigners are actually Native American, but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities. Attracting immigration is a good thing. But hey, feel free to recycle tired Murdoch media talking points that have brought us nothing but trouble for 40 years.
https://www.dhl.com/global-en/microsites/core/global-connect...
Source for counter argument?
We have had thousands of years of globalisation. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and, to an extent, Brexit, depending on which Brexit version you chose to listen to) will be a blip in that continued trend, because it doesn't make sense for there to be 200 countries all expert in both microchip manufacturing and banana growing.
BRICS have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from things like a reserve currency.
(Racist memes and furry pornography doesn't count.)
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians, etc. from now on, when something certainly shittier than what a good artist produces, but much better than what a poor one can achieve, can be had in five minutes for free?
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
What you are truly seeking is high-level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made at a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling, of layered specifications, of having any actual levels involved at all: you prompt a machine trained to ascertain the important data points of a given model itself, when the correct model is built up from human specifications and intention at every level.
Wrongful roads lead to erratic destinations, when it turns out that you actually have some intentions you wish to implement IRL.
If you want to reach the actual destination when conditions change (there is a wreck in front of you), you need a system that can identify changes in a chaotic world and pick from an undefined/unbounded list of actions.
A similar thing goes for delivery: moving a single pallet to a store, or replacing carpets, or whatever. There's a lot of complexity if you don't offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store, to my mind, might be simpler than all the environments vehicles need to operate in.
And I think we know who goes first: average or below-average "creative" professionals. Copywriters, artists, and so on.
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on YouTube search that come up as the first results on Google Search. Same for Maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-Google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see a CV full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more by the user's knowledge. I have found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less performative copy-paste of stackoverflow/code-docs/examples/etc. and to engage the mind in what those lines should be. Even if that engagement of the mind takes the form of a prompt.
Where input' is a distorted version of input. This is the new reality.
We should start to be less impressed by volume of text and instead focus on density of information.
Always was.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
Note also that ad blockers are much less prevalent on mobile.
And even if we solve the problem of hallucination, AI agents still need a platform on which to do search.
If I was Google I’d simply cut off public api access to the search engine.
Google Search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.
>If I was Google I’d simply cut off public api access to the search engine.
The convicted monopolist Google? Yea, that will go very well for them.
I've seen a whole lot of gen AI deflecting customer questions that would previously have been tickets. That's reduced ticket volume that would have been handled by a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
And this trend isn't new; a lot of investment in e.g. customer support goes toward needing less support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or reducing expenses by outsourcing call-center work to low-wage countries. AI is another iteration, but gut feeling says these systems will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
In the future, we will do a lot more.
In other terms: There will be a lot more work. So even if robots do 80% of it, if we do 10x more - the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet Earth, it is basically empty, while rent in big cities is high. But nobody needs to sleep in a big city; we just do so because getting in and out of one is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car), that will change. I have done a few calculations on how much more mobility we could achieve with the existing road infrastructure if we used electric autonomous buses, and it is staggering.
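A rough back-of-envelope version of that calculation (every number below is my own illustrative assumption, not a figure from the commenter):

    # Back-of-envelope: people moved per lane-hour, cars vs. autonomous buses.
    # All inputs are illustrative assumptions.
    car_headway_s = 2.0   # assumed following gap between cars (seconds)
    car_occupancy = 1.5   # assumed average people per car
    bus_headway_s = 4.0   # assumed conservative gap between buses (seconds)
    bus_occupancy = 40    # assumed passengers per electric bus

    cars_per_hour = 3600 / car_headway_s    # 1800 cars per lane-hour
    buses_per_hour = 3600 / bus_headway_s   # 900 buses per lane-hour

    people_by_car = cars_per_hour * car_occupancy    # ~2,700 people/lane-hour
    people_by_bus = buses_per_hour * bus_occupancy   # ~36,000 people/lane-hour

    print(f"cars:  {people_by_car:,.0f} people per lane-hour")
    print(f"buses: {people_by_bus:,.0f} people per lane-hour")
    print(f"ratio: {people_by_bus / people_by_car:.1f}x")   # ~13x

Even with conservative headways and occupancy, buses come out an order of magnitude ahead per lane-hour.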
Another way to look at it: currently, most of the matter of planet Earth has not been transformed into infrastructure used by humans. As work becomes cheaper, more and more of it will be. There is almost infinitely much to do.
Which of the few remaining wild creatures will be displaced?
https://www.worldwildlife.org/press-releases/catastrophic-73...
The cost of buses is mostly the driver, which will go away. The rest is mostly building and maintaining them, which will be done by robots. What remains is energy, and the sun sends more energy to Earth in an hour than humans use in a year.
That said, the fact that I can't find an open-source LLM front-end that will accept a folder full of images, run a prompt on each sequentially, and then return the results in aggregate is incredibly frustrating.
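It's simple enough to script directly against an API, though. A minimal sketch, assuming the `openai` Python package and an OpenAI-compatible endpoint; the model name and folder path are placeholders:

    # Run one prompt over every image in a folder, collect replies in aggregate.
    import base64
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    PROMPT = "Describe this image in one sentence."

    results = {}
    for path in sorted(Path("./images").glob("*.jpg")):
        b64 = base64.b64encode(path.read_bytes()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        results[path.name] = resp.choices[0].message.content

    # The aggregate: everything in one place at the end.
    for name, answer in results.items():
        print(f"{name}: {answer}")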
I think we are at a crossroads as to what this will result in, however. In one case, the benefits accrue at the top, with corporations earning greater profits while employing fewer people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits and confer them not just on the corporations but also on the public good. People could work less, leaving more time for community-enhancing activities. There are also many areas where society is currently underserved that could benefit from a freed-up workforce, such as schooling, elderly care, house building and maintenance, etc.
I hope we can work toward the latter rather than the former.
It will for sure! Even today the impact is colossal.
As an example, people used to read technical documentation; now they ask LLMs, which replaces a simple static file with 50k matrix multiplications.
For sure, we are doing our best to eradicate the conditions that make Earth habitable. However, I suggest that the first needed change is for computer-screen humans to realize that other life forms exist. This requires stepping outside and questioning human hubris, so it might be a big leap, but I am fairly confident you will discover that absolutely none of our planet is empty.
Demand for software has high elasticity
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this with their ecosystem - OpenAI etc al can't compete with this sort of thing.
And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny-valley territory (not at the bottom of it, but not able to climb out either), then eventually, in a few years' time, we should see a return to many fragmented, almost indie-like platforms offering bespoke human-made content. The only way to hope for acceptable quality will be to favor it over scale, as the content will have to be somehow verified by actual human beings.
Question on two fronts:
1. Why, considering the current rate of progress, do you think it is very unlikely that LLM output will become indistinguishable from that of expert creatives? Especially considering that a lot of the tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
Most of it wasn't bespoke assets created by humans, but stock art picked by a professional photo editor if you were lucky, and more often by the author themselves.
That said, I don't think entry-level illustration jobs can stick around if software can do the job better. Just as we no longer employ many human calculators, technological replacement is bound to occur in society, AI or not.
Well at least that's the potential.
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.
I'd still hire an entry-level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.
"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people; please share if you've got sources showing the opposite.
(Disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit-test outlines.)
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I'll admit: my predictions are a bit self-serving. But I believe it anyway: the future of work is entrepreneurial. It's creative.
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
And any important jobs won’t be replaced because managers are too lazy and risk averse to try AI.
We may never see job displacement from AI. Did you know bank teller jobs actually increased in the decades following the rollout of ATMs?
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But generative AI is not just AI chatbots. There are models that generate sounds/music, models that generate images, etc.
Another thing is, the research only looked at Denmark, a nation with a fairly healthy attitude towards work-life balance, not a nation that takes pride in people working their asses off.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute from a prompt typed by a five-year-old, then the expected value of "art work" decreases, and you won't pay the same price when buying from a human artist.
Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
These will likely be cell-phone-plan level expensive, but the value prop would still be excellent.
You can use a penny and your eyeballs to assess this, and all it costs is $0.01
It blows my mind the degree that people are offloading any critical thinking to AI
There is no moat. Most of these AI APIs and products are interchangeable.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
In a more general sense: sometimes, but not always, it is easier to verify something than to come up with it in the first place.
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e., some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases to which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
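A minimal sketch of that split, with `llm_extract` as a hypothetical stand-in for any LLM call (all names and rules here are illustrative, not a real clinical design): the LLM only turns messy notes into structured data, and the plan itself comes from a deterministic, human-authored rule base, which limits the surface area for hallucinations.

    import json

    def llm_extract(free_text: str) -> str:
        """Hypothetical stand-in for an LLM prompted to return strict JSON.
        A real version would call your provider; canned output keeps the
        sketch runnable."""
        return '{"conditions": ["diabetes"], "allergies": ["peanuts"]}'

    # Deterministic, human-authored rules: the plan comes from here, not the LLM.
    RULES = {
        "diabetes": {"avoid": ["sugary drinks"], "prefer": ["whole grains"]},
        "hypertension": {"avoid": ["high-sodium foods"], "prefer": ["vegetables"]},
    }

    def build_plan(patient_notes: str) -> dict:
        # The LLM handles only the messy, unstructured step: extraction.
        extracted = json.loads(llm_extract(patient_notes))
        plan = {"avoid": list(extracted.get("allergies", [])), "prefer": []}
        for condition in extracted.get("conditions", []):
            rule = RULES.get(condition.lower())
            if rule:
                plan["avoid"] += rule["avoid"]
                plan["prefer"] += rule["prefer"]
        return plan

    print(build_plan("Type 2 diabetes, allergic to peanuts."))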
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know: I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
[0] https://link.springer.com/article/10.1007/s10676-024-09775-5
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up diets as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
As an example, if you want diet advice, it can lie to you very convincingly, so there is no point in getting advice from it.
The main value you get from a programmer is that they understand what they are doing and can take responsibility for what they are developing. Very junior developers are hired mostly as an investment, so that they become productive and stay with the company. AI might help with some of this but doesn't really replace anyone in the process.
For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I could have 40 different dieticians give me 72 different diet/meal plans, each saying with 100% certainty that theirs is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong"; that is a fabrication. It is a lie.
If you ask me to remove whitespace from a string in Python and I mistakenly tell you to use ".trim()" (the Java method, a mistake I've made annoyingly often) instead of ".strip()", am I lying to you?
It's not a lie. It's just wrong.
The bullshitter doesn't care about if what they say is true or false or right or wrong. They just put out more bullshit.
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave, you are asking for a function that does not exist. If it invents a function because it is designed to never say "you are wrong, that doesn't exist" or "I don't know the answer", that seems to me to qualify as "intent to deceive": it is designed to invent something rather than give you a negative-sounding answer.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
Because, as Brad Pilon of intermittent fasting fashion repeatedly stresses, "All diets work."*
* Once there is an energy deficit.
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just having the ability to read scientific studies and spot any flaws requires knowledge that only maybe 1 in 10000 people could do, if not worse than that.
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
People talk a lot of about false info and hallucinations, which the models do in fact do, but the examples of this have become more and more far flung for SOTA models. It seems that now in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically is only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them. Social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing something high quality. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
It doesn’t wholly replace the need for human support agents, but if it can adequately handle a substantial share of tickets, that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
Google is pretty much useless now that it has changed into an ad platform, and I suspect AI will go the same way soon enough.
It has always been easy to imagine how advertising could destroy the integrity of LLMs. I can guarantee there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputations in no time.
https://www.washingtonpost.com/technology/2025/04/17/llm-poi...
One major problem is the payment mechanism. The nature of LLMs means you can't really know or force them to spit out ad garbage in a predictable manner. That'll make it really hard for an advertiser to want to invest in your LLM advertising (beyond your being able to sell the fact that you are an AI ad service).
Another is going to be regulation. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.
I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance because that non-taxed income leaves me owing at the end of the year more than some acceptable threshold that could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
If I can do this, most people can do a simple 2-page 1040EZ.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
Your accountant also is probably saving hundreds of dollars in other areas using AI assist.
What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ...".
The first can be based on fact, e.g., what has happened so far. The second is pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, and hype.
In contrast, this article is based on what has happened so far. There is no "call" being made, only an examination of what has happened so far.
Ever since the explosion in popularity of the internet in the 2000s, anything journalism-related has been in terminal decline. The arrival of smartphones accelerated this process.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much reduced need for junior/associate-level attorneys at firms.
I have a 185-year-old treatise on wood engraving. At the time, reproducing any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white to keep the impression of continuous tones. (And some, of course, were also original artists in their own right.) The wood-engraving profession was destroyed by the invention of photo-etching (there was a weird interval before the invention of photo-etching in which cameras existed but photos had to be engraved manually anyway for printing).
Maybe all the wood engravers found employment, although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain while employed and will either have to use up their savings while doing so or take lower-paid jobs.
The wise will displace economists and consultants with LLMs, but the trend-followers will hire them to prognosticate about the future impact, such that the net effect could be zero.
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, or BD person. I'm watching this happen across management in US startups.
It's displacing job growth in entry-level positions, primarily in writing copy, admin tasks, and research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the 1971 labor-productivity separation that everyone scratched their heads about (answer: labor was outsourced and capital kept all the value gains), we will see another asymptote in that labor-productivity graph, driven by displacement rather than replacement.
> Duolingo will replace contract workers with AI. The company is going to be ‘AI-first,’ says its CEO.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
-
And within that article:
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
As with other technologies, the jobs it removes are not normally in the country that introduces it; rather, they never come into existence elsewhere.
For example, the automated looms the Luddites were protesting didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them, and because of similar innovations since, which have led to cheap mass-produced clothes making it uneconomic to produce there?
As suggested by this report, Denmark and the West will probably be largely unaffected.
However, places like India, Vietnam with large industries based on call centres and outsourced development servicing the West are likely to be more vulnerable.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI; actual enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
Case in point: Klarna.
2024: "Klarna is All in on AI, Plans to Slash Workforce in Half" https://www.cxtoday.com/crm/klarna-is-all-in-on-ai-plans-to-...
2025: "Klarna CEO “Tremendously Embarrassed” by Salesforce Fallout and Doubts AI Can Replace It" https://www.salesforceben.com/klarna-ceo-tremendously-embarr...
For example, the mass layoffs of federal employees.
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
Even customer service bots are just nicer front-ends for knowledge bases.
Imagine if a tool made content writers 10x as productive. You might hire more, not less, because they are now better value! You might eventually realise you spent too much, but this will come later.
AFAIK no company I know of starts a shiny new initiative by firing; they start by hiring and then cut back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat and then makes them lean, AFAIK.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
Be wary of people trying to deflect blame away from the managerial class for these issues.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation helps humans do their jobs better, but this isn't one of those cases. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI that responds with random unrelated articles or "sorry, I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
Software development jobs there have bigger threat: outsourcing to cheaper locations.
The same goes for teachers: it is hard to replace a person supervising kids with a chatbot.
https://economics.mit.edu/news/daron-acemoglu-what-do-we-kno...