AI becomes a stand-in for a bigger problem. We keep arguing about models and chatbots, but the real issue is that the economic safety net has not been updated in decades. Until that changes, people will keep treating AI as the thing to be angry at instead of the system that leaves them vulnerable.
Triumphant posts on LinkedIn from former SEO/crypto-scam people telling everyone they'll be left behind if they don't adopt the latest flavor of text/image generator.
And all these resources being spent on huge data centres for text generators when things like protein folding would be far more useful; billion-dollar salaries for "AI gurus" who are just throwing sh*t at the wall and hoping their particular mix of models and training works, all while laying people off.
AI would be much more pleasant if it only showed up when summoned for a specific task.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans WANT to do: things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.
Presently some may say: AI is amazing, I am much more productive; AI is just a tool; AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so those it empowers should stop and think a little about the long-term implications. It may be that you are in a comfortable position financially or socially right now, but your future self, in just a few short months, may be dramatically impacted.
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
Where are the new luddites, really? I just don't see them. I see people talking about them, but they never actually show up.
My theory is that they don't actually exist. Their existence would legitimize AI, not bring it down, so AI people fantasize about this imaginary nemesis.
This tech cycle does not even pretend to be "likable guys". They frame themselves as sociopaths, being interested only in millionaires' money.
It makes for bad optics.
This is what it is for me. I can see the value in AI tech, but big tech has inserted themselves as unneeded middlemen in way too much of our lives. The cynic in me is convinced this is just another attempt at owning us.
That leaked memo from Zuckerberg about VR is a good example. He's looking at Google and Apple having near absolute control over their mobile users and wants to get an ecosystem like that for Facebook. There's nothing about building a good product or setting things up so users are in control. It's all about wanting to own an ecosystem with trapped users.
If they can, big tech will gate every interaction or transaction and I think they see AI as a way to do that at scale. Don't ask your neighbour how to change a tire on your car. Ask AI. And pay them for the "knowledge".
The big-tech AI mega-corporations need to pay us, aka mankind, for the damage they cause here. The AI bubble is already subsiding; we can see that, despite Trump trying to protect the mafiosi. They owe us billions in damages now. Microsoft also recently announced it will milk everyone by increasing prices due to "new AI features in MS Office". Granted, I don't use Microsoft products as such (I do have a computer running Win10, so my statement is not 100% correct; I just don't use a Microsoft paid-for office suite or any other milk-for-money service), but I think it is time to turn the tables.
These corporations should pay us for the damage they are causing here in general. I no longer accept the AI mafia method, even less so now that hardware prices have gone up because of this. This mafia owes us money.
It looks like the "car problem" in yet another form. Many people will agree that our cities have become too car-centric and that cars take way too much public space, but few will give up their own personal car.
When you design the built environment for humans, people drive less and own fewer personal vehicles.
We may end up building a world where AI is similarly necessary. The AI companies would certainly like that. But at the moment we still have a choice. The more people exercise their agency now, the more likely we are to retain that agency in the future.
It seems to me that, regardless of the city, many people will drive up to the point where traffic jams and parking become a nightmare, and only then consider the alternatives. This pain point is much lower in old European cities that weren't built to be car-centric and much higher in the US, but the pattern seems to repeat itself.
Me. I never use AI to write content that I put my name to. I use AI in the same way that I use a search engine. In fact, that is pretty much what AI is -- a search engine on steroids.
I am also a bit afraid of a future where the workload is adjusted to heavy AI use, to the degree that a human working with their own head won't be able to satisfy the demands.
This happened around the 'car problem' too: how many jobs are within walkable/bikeable distance now vs. in 1925?
Probably the same amount. The only difference is that people are willing to commute farther for a job than someone would've in 1925.
In Ostrava, where I live, workers' colonies were located right next to the factories or mines, within walking distance, precisely to facilitate easy access. That came with a lot of other problems (pollution), but "commute" wasn't really a thing. Even streetcars were fairly expensive, and most people would think twice before paying the fare twice a day.
Nowadays, there are still industrial zones around, but they tend to be located 5-10 km from the residential areas, far too far to walk.
Even leaving industry aside, how many kids do you know who walk to school because it is within walking distance?
But I understood quite early that I am a fluke of nature and many other people, including smart ones, really struggle when putting their words on paper or into Word/LibreWriter. A cardiologist who saved my wife's life is one of them. He is a great surgeon, but calling his writing mediocre would be charitable.
Such people will resort to AI if only to save time and energy.
Bad writing starts in the "wtf was that meant to say" territory, which can cause unnecessary conflicts or prolong an otherwise routine communication.
I don't like people using AI to communicate with other people either, but I understand where they come from.
Further, and to the point, spelling/grammar errors might become a boutique sign of authenticity, much like fake "hand-made" goods with intentional errors or aging added in the factory.
OTOH, I’d never use it to write emails to friends and family, but then I don’t need to sound professional.
We know from Paris that systemic change is required - it isn't simply individual choice.
As Newsweek points out*, the people most unhappy about AI are the ones who CAN'T use ChatGPT to write their work e-mails and assignments because they NO LONGER have access to those jobs. There are many of us who believe that the backlash against AI would never have gotten so strong if it hadn't come at the expense of the creators, the engineers, and the unskilled laborers first.
AI agents are the new scabs, and the people haven't been fooled into believing that AI will be an improvement in their lives.
---
*and goes deeper in this article: https://www.newsweek.com/clanker-ai-slur-customer-service-jo...
Unless they are being ironic, using an AI accent for a statement like that, in an article about the backlash against lazy AI use, is an interesting choice.
It could have been human-written (I have noticed that people who use these tools all the time start to talk like them), but the "it's not just X, it's Y" format is the hallmark of mediocre articles written or edited by AI.
Not sure who you talk to, but the 'It's Not Just X, It's Y' format doesn't show up in everyday speech (caveat, in my experience).
This AI speech pattern is not just an em dash—it's a trite and tonally awkward pairing of statements following the phrase "not just".
I also have a tougher time judging the reliability of others, because you can now get grammatically perfect, well-organized emails from people who are incompetent. AI has significantly degraded the signal-to-noise ratio for me.
Ten Ways To Tell AI Listicles From Human Ones—You Won't Believe Number Seven
What does this statement even mean?
https://en.wikipedia.org/wiki/Antithesis
"AI" surely overuses it but this article didn't seem suspect to me. I agree that "AI" speak rubs off on heavy users though.
This is fine for topics that don’t need to be exciting, like back-office automation, data analysis, programming, etc., but it leads me to believe most content made for human consumption will still need to be human-generated.
I’ve stopped using AI for writing assistance beyond spell check, accuracy checks, and automated review. The core prose has to be human-written to not sound like slop.
There is no such thing as deep value. The stock market rises on speculation about other people's buying patterns, not on company fundamentals.
Where are the deep values? Politics? Media? Academia? Human relations? Business? What do you mean by deep values? We can't even look more than one year ahead.
Modern human behavior is highly optimized to bother only about immediate goals. The other day, I was reviewing a software architecture and asked the architect who the audience/consumer for the document was. She said it was the reviewers. I asked again, hoping to identify the downstream process that uses the document, and got the same answer, a bit sternly this time.
This is AI's "dialup era" (pre-56k, maybe even the 2400 baud era).
We've got a bunch of models, but they don't fit into many products.
Companies and leadership were told to "adopt AI" and given crude tools with no instructions. Of course it failed.
Chat is an interesting UX, but it's primitive. We need better ways to connect domains, especially multi-dimensional ones.
Most products are "bolting on" AI. There are few products that really "get it". Adobe is one of the only companies I've seen with actually compelling AI + interface results, and even their experiments are just early demos [1-4]. (I've built open source versions of most of these.)
We're in for another 5 years of figuring this out. And we don't need monolithic AI models behind APIs. We need access to the AI building blocks and sub-networks so we can adapt and fine-tune models to the actual control surfaces. That's when the real take-off will happen.
[1] Relighting scenes: https://youtu.be/YqAAFX1XXY8?si=DG6ODYZXInb0Ckvc&t=211
[2] Image -> 3D editing: https://youtu.be/BLxFn_BFB5c?si=GJg12gU5gFU9ZpVc&t=185 (payoff is at 3:54)
[3] Image -> Gaussian -> Gaussian editing: https://youtu.be/z3lHAahgpRk?si=XwSouqEJUFhC44TP&t=285
[4] 3D -> image with semantic tags: https://youtu.be/z275i_6jDPc?si=2HaatjXOEk3lHeW-&t=443
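The "building blocks" point above can be sketched concretely. Here is a minimal toy illustration, assuming nothing about any real model's API: the "backbone" is just a frozen random feature map standing in for a pretrained sub-network, and adapting it to a new "control surface" means training only a tiny head on top of its activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "backbone": a frozen random feature map playing the role of a
# pretrained sub-network whose activations we can read but never update.
W_frozen = rng.normal(size=(4, 8))

def features(x):
    # The accessible building block: exposes internal representations only.
    return np.tanh(x @ W_frozen)

# Toy downstream task: predict the sum of the raw inputs.
X = rng.normal(size=(64, 4))
y = X.sum(axis=1)

# The only trainable part: a small task-specific linear head.
w_head = np.zeros(8)
baseline = float(np.mean(y ** 2))  # MSE of the untrained (zero) head

lr = 0.1
for _ in range(500):
    H = features(X)
    residual = H @ w_head - y
    grad = H.T @ residual / len(X)  # gradient of the mean squared error
    w_head -= lr * grad

mse = float(np.mean((features(X) @ w_head - y) ** 2))
```

The point of the sketch is the shape of the workflow, not the numbers: when a model's internals are accessible, a few lines of task-specific adaptation suffice, whereas a monolithic API behind a chat box makes this kind of cheap fine-tuning impossible.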
Then they extracted our privacy and sold it to advertisers.
Now with AI they're extracting our souls. Who do they expect to sell them to?
Notably, this story is pitched as a "News Story", but it's not really that at all; it's an opinion piece with a couple of quotes from AI opponents. Frustratingly, not many people understand what "Newsweek" is today, so they're always going to be able to collect some quotes for whatever story they're running.
Go and read all of the anti-AI articles and they will eventually boil down to something to the effect of:
“the problems we have are more foundational and fundamental and AI looks like a distraction”
However, this is a directionless complaint that falls under the “complaining about technology“ trope.
As a result there is no real coherent conversation about what AI is, how we define it, what people are actually complaining about, or what we are rallying against, because people are overwhelmingly using it in every part of life.
cmiles8•49m ago
The tech isn’t going away, but a hard reset is overdue to bring things back down for a cold hard reality check. Yesterday’s article about MSFT slashing quotas on AI sales because customers aren’t buying is in line with this broader theme.
Morgan Stanley is also quietly trying to offload its exposure to data-center financing, in a move that smells very summer-of-2008-ish. CNBC now talks about the AI bubble multiple times a day. OpenAI looks incredibly vulnerable and financially over-extended.
I don’t want a hard bubble pop such that it nukes the tech ecosystem, but we’re reaching a breaking point.
bluefirebrand•43m ago
Keep your eyes on the skies; I forecast executives in golden parachutes in the near future.
cmiles8•35m ago
I don’t see any big AI company having a successful IPO anytime soon which is going to leave some folks stuck holding the financial equivalent of nuclear waste.
donmcronald•10m ago
Some days I wonder if we'd be better off or worse off if we had a complete collapse of technology. I think it'd be painful with a massive drop in standard of living, but we could still recover. I wonder if the same will be true in a couple more generations.
I think it's dangerous to treat younger generations like replaceable cogs. What happens when there's no one around that knows how the cogs are supposed to fit together?
mrtksn•6m ago
I think your wording is the correct wording, not "AI fatigue", because I don't want to go back to the pre-AI era, yet at the same time I can't stand another "OMG it's over" tweet.