The anti-AI stance just makes them even cooler.
The Luddites didn't destroy automatic looms because they hated technology; they did it because losing their jobs and seeing their whole occupation disappear ruined their lives and the lives of their families.
The problem to fix isn't automation itself, but automation destroying people's lives at scale.
That is what happened with the 19th-century factories.
It’s really intriguing how an increasingly popular view of what’s “ethical” amounts to anything that doesn’t stand in the way of the ‘proletariat’ getting their bag, plus anything that protects content creators’ intellectual property rights, with no real interest in the greater good.
Such a dramatic shift from the music piracy generation a mere decade or two ago.
It’s especially intriguing as a non-American.
Again, as you say, there are many sensible arguments against AI, but for some people they really take a backseat to “they took our jerbs!”
Capitalism is neither prepared nor willing to retrain people, drastically shorten the workweek, or bring about a UBI sourced from the value of the commons. So indeed, if the promises of AI hold true, a catastrophe is incoming. Fortunately for us, the promises of AI CEOs are unlikely to be true.
If we manage to replace all the workers with AI - that's awesome! We will obviously have to work out a system for everyone to get shelter, and food, and so on. But that post-scarcity utopia of everyone being able to do whatever they want with their time and not have to work, that's the goal, right? That's where we want to be.
Jerbs are an interim nightmare we've had to endure to get from subsistence agriculture to post-scarcity abundance; they're not some intrinsic part of human existence.
Look at the current administration. Do you think they would even consider providing anything like UBI?
They actively want to take us down the cyberpunk dystopia route (or even the Christofascist regressive dystopia route...). They want us to become serfs to technofeudal overlords. Or just die, and decrease the surplus population.
Furthermore, they really, really want to be absolute rulers being treated like (the popular conception of) medieval lords by all of us, the peasants. They deeply believe that we are beneath them; that we do not deserve to have the means to thrive or even survive if they do not explicitly grant it to us; that our natural state is that of supplication, and theirs is that of power and control.
UBI would take that away. It would give us the unconditional means to live, regardless of their approval. And that they cannot abide.
An ordinary person, working diligently at a decent-paying job, can save up a million dollars if they're not unlucky.
Even a million-dollar-a-year salary is only 10x a fairly modest tech salary of 100k.
But a billion dollars a year is 1000 times that, which is 10,000 times the modest salary.
So...no, I don't know any billionaires personally either. I'm extrapolating from the things they say and do. But frankly, with the way the media is today, do you really think that more than a tiny fraction of it is trying to portray billionaires as worse than they are? Given how much of it is actually controlled by them, and bears the clear marks of their editorial hand?
Somehow there is always this huge leap between "Strong AI" -> stuff happens -> "about 10k people live in cloud cities and everyone else lives in the dirt".
I find it completely implausible.
Money is a tool that has no value by itself. Billionaires are billionaires because they get a much bigger share of the value their group produces (the group can be one company, a region, a country or the whole world, depending on how you see things). If AI does the work instead of people, it will change nothing for them.
You can be optimistic (it will self-regulate and everyone will benefit from AI) or pessimistic (only the billionaire class will benefit from AI). But in either case, there will be no need to sell products or share the wealth if there is a class of artificial slaves that can replace workers.
Forty years ago I would've had a personal secretary for my engineering job, and most likely a private office. Now I get to manage more things myself in addition to being expected to be online 24x7 - so I'm not even convinced that eliminating those jobs improves things for the people who now get to self-serve instead of being more directly assisted.
One question perhaps is, even if AI can do everything I can do (i.e., has the skills for it), will it do everything I do? I'm sure there are many people in the world with the superset of my skills, yet I bet there are some things only I'm doing, and I don't think a really smart AI will change that.
The Industrial Revolution caused a great deal of damage. It was a net positive in the long term because new jobs were created to replace those that were lost, but it took decades and enormous violence. Now, the promise of AI is that it will be more efficient than any human being. If this becomes a reality, there will be, by definition, no new jobs created for the people replaced by AI.
Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).
By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.
Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big tech C-levels who don't understand why AI can't replace engineers. But this post, as nicely as it's written, doesn't frame the problem correctly in my mind.
> You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.
> Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.
It's because generative AI has become part of the "culture wars" and is therefore black and white to lots of people.
I think it's self-defeating, but virtue signallers gonna virtue signal.
Personally I'd rather a future where everyone used local models.
> Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game?
No amount of framing (unless written into law) would stop small indie devs from doing this. AI is just too efficient, making too much sense economically. People who are willing to starve for their ideology are always the minority.
Even artisans who build hand-made wooden furniture use power tools today. The tools that make economical sense will prevail one way or another.
What about learning the tools you use every day by yourself?
So if the tricky C# problem isn't already in their data set, the output of the LLM is, at best, random crap. Even the worst human effort would exceed the output of the LLM, and that is the average case for any "tricky" problem. LLMs are fundamentally only useful on the most common types of problems, which can better be addressed by using frameworks, plugins, or APIs.
(And on that note: every programmer I've met who says that LLM coding agents 10x'd their output is the type of programmer that would have been PIP'd or fired 10 years ago for incompetence. We used to call them "code monkeys" for obvious reasons. Junior programmers think that LLM coding agents are awesome because they don't have the experience or skill to understand just how bad the output of LLM coding agents is, and the few that survive in the industry long enough to become senior programmers will laugh at their younger selves at how much of an unmaintainable mess they made vibe coding.)
While not as bad as firing 500 people, using AI to generate slop (and it is inherently slop due to being generated quickly by AI) is still bad.
Left behind where? We all live in the same world, anyone can pick up AI at any moment, it’s not hard, an idiot can do it (and they do).
If you’re not willing to risk being “left behind”, you won’t be able to spot the next rising trend quickly enough and jump on it, you’ll be too distracted by the current shiny thing.
If you take some percent longer to finish some code because you want that code to maintain some level of "purity", you'll finish slower than others. If this is a creative context, you'll spend more time on boilerplate than interesting stuff. If this is a profit-driven context, you'll make less money, and have less money for staff. Etc.
> If you’re not willing to risk being “left behind”...
I think this is orthogonal. Some tools increase productivity. Using a tool doesn't blind a competent person...they just have another tool under their belt to use if they personally find it valuable.
I have decided I will only use AI that has some benefit to society - say, lower-energy-use apps for e-ink devices.
As a result, I think we'll eventually see a mean shift from rewarding those that are "technically competent" more towards those that are "practically creative" (I assume the high end technical competence will always be safe).
Should Mozart have constructed the instruments himself? Or plucked the strings himself? No, he had someone else take care of all that so he could compose music. AI can be used the same way: take care of the boring stuff so I can compose a solution to a real-world problem. No, that doesn't mean AI has to do everything for you, which advocates of outright bans don't seem able to comprehend.
What other people and companies do because I happen to use something correctly (as an assistive technology) is not my responsibility. If someone happens to misuse it or enforce its use in a dysfunctional work environment, that is their doing and not mine.
If a workplace is this dysfunctional, there are likely many other issues that already exist that are making people miserable. AI isn't the root cause of the issue, it is the workplace culture that existed before the presence of AI.
Lots of folks are mad that the power of these tools comes from training on things they put out in the open, which they never intended to be used to enrich or exclude others in the way this technology enables.
Interesting times ahead... it's so powerful that people who ignore it are going to get left behind to some degree. (I say this as someone who actively avoids Kubernetes, and it does give off the vibe that I've been left behind compared to my peers who do resume-driven development.)
The Yarn Spinner team explains they don’t use AI in their game development tool despite having academic and professional backgrounds in machine learning—they’ve written books on it and gave talks about ML in games. Their position shifted around 2020 when they observed AI companies pivoting from interesting technical applications toward generative tools explicitly designed to replace workers or extract more output without additional hiring. They argue that firing people has become AI’s primary value proposition, with any other benefits being incidental. Rather than adopt technology for its own sake (“tool-driven development”), they focus on whether features genuinely help developers make better games. While they acknowledge numerous other AI problems exist and may revisit ML techniques if the industry changes, they currently refuse to use, integrate, or normalize AI tools because doing so would financially and socially support companies whose business model centers on eliminating jobs during a period when unemployment can be life-threatening.
In essence we have an ownership problem. If I own the AI, I can do my work, and then some, in a couple of hours and have the rest of the day off to enjoy the things I like. If the company owns the AI, I'm out of work. The difference between a world of plenty and beauty and a world of misery for many of us is who owns the AI.
But that's not what companies expect from you, even if you own the AI. They expect you to output more, and when you do, someone else is probably out of work.
I’m not sure what the authors are looking for, a pat on the back? Good boy points? Reddit updoots? To feel like a real 1337 h4xx0r dev?
Nobody cares about this stance and I feel like I see it daily now. People do care about the quality and usefulness of your product and what you’re doing to continue to improve it.
It has the same energy as when a dude orders “black coffee” despite hating the taste to look more badass.
Many developers never bothered with IDEs. We were happy using Vim, Emacs and many of us continue to do so today.
It's not surprising the first "innovation" was agentic programming with a modified IDE.
I'm sure many people will enjoy their new IDEs. I don't enjoy it. I enjoy doing things a different way.
I'm sorry friends, I think imma quit to go farming :$
The bar for human approval and testing should be even higher for critical fixes.
I know folks tend to frown on security compliance regimes, but if you honestly implement and maintain most of the controls in there, not just to get a certificate, it really makes a lot of sense and improves security/clarity/risk management.
But to just copy, paste and move on… terrible.
People with this kind of attitude existed long before AI and will continue to exist.
It’s always been this way in toxic workplaces - LLMs amplify this.