The anti-AI stance just makes them even cooler.
The Luddites didn't destroy automatic looms because they hated technology; they did it because losing their jobs and watching their whole occupation disappear ruined their lives and the lives of their families.
The problem to fix isn't automation itself; it's automation destroying people's lives at scale.
It’s really intriguing how an increasingly popular view of what’s “ethical” amounts to whatever doesn’t stand in the way of the ‘proletariat’ getting their bag, plus whatever protects content creators’ intellectual property rights, with no real interest in the greater good.
Such a dramatic shift from the music piracy generation a mere decade or two ago.
It’s especially intriguing as a non-American.
Again, as you say, there are many sensible arguments against AI, but for some people those really take a back seat to “they took our jerbs!”
Capitalism is neither prepared nor willing to retrain people, drastically shorten the workweek, or bring about a UBI funded from the value of the commons. So indeed, if the promises of AI hold true, a catastrophe is incoming. Fortunately for us, the promises of AI CEOs are unlikely to be true.
One question perhaps is, even if AI can do everything I can do (i.e., has the skills for it), will it do everything I do? I'm sure there are many people in the world with the superset of my skills, yet I bet there are some things only I'm doing, and I don't think a really smart AI will change that.
Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).
By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.
Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big tech C-levels who don't understand why AI can't replace engineers. But this post, nicely toned as it is, doesn't frame the problem correctly in my mind.
Left behind where? We all live in the same world, anyone can pick up AI at any moment, it’s not hard, an idiot can do it (and they do).
If you’re not willing to risk being “left behind”, you won’t be able to spot the next rising trend quickly enough and jump on it, you’ll be too distracted by the current shiny thing.
If you take some percent longer to finish some code because you want that code to maintain some level of "purity", you'll finish slower than others. If this is a creative context, you'll spend more time on boilerplate than interesting stuff. If this is a profit-driven context, you'll make less money and have less money for staff. Etc.
> If you’re not willing to risk being “left behind”...
I think this is orthogonal. Some tools increase productivity. Using a tool doesn't blind a competent person... they just have another tool under their belt to use if they personally find it valuable.
I have decided I will only use AI that has some benefit to society. Say, lower-energy apps for e-ink devices.
As a result, I think we'll eventually see a mean shift from rewarding those who are "technically competent" toward those who are "practically creative" (I assume high-end technical competence will always be safe).
What other people and companies do because I happen to use something correctly (as an assistive technology) is not my responsibility. If someone happens to misuse it or enforce its use in a dysfunctional work environment, that is their doing and not mine.
If a workplace is this dysfunctional, there are likely many other issues that already exist that are making people miserable. AI isn't the root cause of the issue, it is the workplace culture that existed before the presence of AI.
localhoster•1h ago
I'm sorry friends, I think imma quit and take up farming :$
rhplus•38m ago
The bar for human approval and testing should be even higher for critical fixes.
deepsun•46m ago
I know folks tend to frown on security compliance frameworks, but if you honestly implement and maintain most of the controls in there, not just to get a certificate, it really makes a lot of sense and improves security, clarity, and risk management.
sheeh•16m ago
But to just copy, paste and move on… terrible.
cmcaleer•11m ago
People with this kind of attitude existed long before AI and will continue to exist.