Many of my coworkers have embraced AI coding and the quality of our product has suffered for it. They deliver bad, hard-to-support software that technically checks some boxes and then rush on to produce more slop. It feels like a regression to the days of measuring LOC as a proxy for productivity.
I just don't really wanna hear about your pro-AI peddling anymore.
In your highly objective opinion, of course.
Everyone else is just busy using it to get work done.
I use AI; what I'm tired of is shills and post-apocalyptic prophets.
You're confusing fear with disgust. Nobody is afraid of your slop, we're disgusted by it. You're making a huge sloppy mess everywhere you go and then leaving it for the rest of us to clean up, all while acting like we should be thankful for your contribution.
Copyright issues (related to training data and inference), openness (OSS, model parameters, training data), sovereignty (geopolitically, individually), privacy, deskilling, manipulation (with or without human intent), AGI doom. I have a list but not in front of me right now.
Did you read Mr. Bushell's policy [0], which is linked to by TFA? Here's a very relevant pair of sentences from the document:
Whilst I abstain from AI usage, I will continue to work with clients and colleagues who choose to use AI themselves. Where necessary I will integrate AI output given by others on the agreement that I am not held accountable for the combined work.
And from the "Ensloppification" article [1], also linked by TFA: I’d say [Declan] Chidlow verges towards AI apologism in places but overall writes a rational piece. [2] My key takeaway is to avoid hostility towards individuals†. I don’t believe I’ve ever crossed that line, except the time I attacked you [3] for ruining the web.
† I reserve the right to “punch up” and call individuals like Sam Altman a grifter in clown’s garb.
Based on this information, it doesn't seem that Mr. Bushell will hate anyone for using "AI" tools... unless they're CEO pushers. Or are you talking in generalities? If you are, then I find the unending stream of hype articles from folks using this quarter's hottest tool to be extremely uninteresting. It's important for folks who object to the LLM hype train to publish and publicize articles as a counterpoint to the prevailing discussion.
As an aside, the LLM hype reminds me of the hype for Kubernetes (which I was personally enmeshed in for a great many years), as well as the Metaverse and various varieties of Blockchain hype (which I was merely a bystander for).
[0] <https://dbushell.com/ai/>
[1] <https://dbushell.com/2025/05/30/ensloppification/>
[2] link in the pull quote being discussed to: <https://vale.rocks/posts/ai-criticism>
[3] inline link to: <https://dbushell.com/2025/05/15/slopaganda/>
People would do much better if they just stopped listening so much and started thinking and doing a bit more. But as a lazy person, I definitely understand why it's hard: it requires effort.
I am still concerned with how it's going to impact society going forward. The idea of what this is being used for by those with a monopoly on the use of violence is terrifying: https://www.palantir.com/platforms/aip/
Am I a shill or a post-apocalyptic prophet?
(For me it's been as transformational a change as discovering I could do my high school homework on a word processor in the 90s when what I suspect was undiagnosed dyspraxia made writing large volumes of text by hand very painful).
I'm also interested in understanding if the envisaged transformation of developers into orchestrators, supervisors, tastemakers and curators is realistic, desirable or possible. And if that is even the correct mental model.
Is it so hard to understand why people are reacting against this argument?
Also when I hear another human suggest using AI for ____, my perception of them is that they are an unserious person.
So in my opinion AI has had a net negative effect on the world so far. Reading through this persons AI policy resonates with me. It tells me they are a thoughtful individual who cares about their work and the broader implications of using AI.
It's fine to be tired of this. What is not fine is pretending your beliefs/feelings represent everybody else's.
No one is forcing you to read the article. He is as free to write what he wants as you are to complain about it. Balanced. Like all things should be.
And I don't want to hear about how the world of software engineering has been revolutionized because you always hated programming with a passion, but can now instead pay $200 to have Claude bring your groundbreaking B2B SaaS Todo app idea to life, yet that's basically all I hear about in any tech discussion space.
You should ask your AI assistant to explain to you why people would go out of their way to take a stand against this.
Low quality (low precision) news, code, marketing, diagnosis, articles, books, food, entertainment (shorts, TikTok), engineering is, in my opinion, the biggest problem of the 21st century so far.
Low quality AI usage decisions, low quality AI marketing, retraining, placement, investments are accelerating the worst trends even more. It's like Soviet nuclear trains: just because nuclear is powerful and real doesn't mean most of its applications made any sense.
So as a pro-AI person and AI-builder in general, I want more anti-AI-slop content, more pro-discipline opinions.
The same person who ignores a crooked door frame or a CSS overflow now has a "mostly right" button to bring mediocrity to scale. We unfortunately aren't invested in teaching craftsmanship as a society.
Let's do a quick analysis of the amount of money put forth to push AI:
> OpenAI has raised a total of $57.9B over 9 funding rounds
> Groq has raised a total of $1.75 billion as of September, 2025
Well, we could go on, but I think that's probably a good enough start.
I looked into it, but I wasn't able to find information on any funding rounds David Bushell has raised for his anti-AI agenda. I assume he didn't get paid for it, so I guess it's about $0.
Meanwhile:
- My mobile phone keyboard has "AI"
- Gmail has "AI". Google docs has "AI". At one point every app was becoming a chat app, then a TikTok clone. Now every app is a ChatGPT or Gemini frontend.
- I'm using a fork of Firefox that removes most of the junk, and there's still some "AI" in the form of Link Preview summaries.
- Windows has "AI". Notepad has "AI". MS Paint has "AI".
- GitHub stuck an AI button in place of where the notifications button was, then, presumably after being called every single slur imaginable about 50000 times per day, moved it thirty or so pixels over and added about six more AI buttons to the UI. They have a mildly useful AI code review feature, but it's surprisingly half-baked considering how heavily it is marketed. And I'm not even talking about the actual models being limited; the integration itself is lame. I still consider it mildly useful for catching typos, but that is not worth several billion dollars of investment.
- Sometimes when I log into Hacker News, more than half of the posts are about AI. Sometimes you get bored of it, so you start trying to look at entries that are not overtly about AI, but find that most of those are also about AI, and those that aren't specifically about AI go on a 20-minute tangent about it at some point.
- Every day every chat every TV program every person online has been talking about AI this AI that for literally the past couple of years. Literally.
- I find a new open source project. Looks good at first. Start to get excited. Dig deeper, and things start to look "off". It's not as mature or finished as it looks. The README has a "Directory Structure" listing for some odd reason. There's a diagram of the architecture in a fixed-width font, but the whitespace is misaligned on some lines. There are comments in the code that reference things like "but the user requested...", as if the code wasn't written by the user. Because it wasn't, and worse, it wasn't read by them either. They posted it as if they wrote it, making no mention at all that it was prompt output they never read, wasting everyone's time with half-baked crapware.
And you're tired of anti-AI sentiment? Well God damn, allow me to Stable Diffusion generate the world's smallest violin and synthesize a song to play on it using OpenAI Jukebox.
I'm not really strictly against AI entirely, but it is the most overhyped technology in human history.
And I don't ever see it under a fifth, anymore. There is a Hell of a marketing push going on, and it's genuinely hard to tell the difference between the AI true believers and the marketing bots.
No, because the banner is cut off on my phone.
I don't really understand the policy either. I assumed this was a contractor's website. I've never met a contractor who took tool recommendations, nor a company that cared. Use Solaris and emacs for all I care.
I'm sick to death of people trying to grandstand, flag wave and chest pound about "the evils of AI" and "the failings of AI." You hate billionaires and you're afraid of losing your job, I get it, stop trying to propagandize and just do the thing you love to do as if AI didn't exist.
If I meet someone who hand carves stuff, if it's good I'm into it. If they start to rave about the evils of machines I nope tf out and never return.
I'm interested to hear more about the rationale behind the "remain employable" part of this line.
All things equal, we would normally expect that someone deliberately declaring they won't use a certain tool to perform a certain job is limiting their employment opportunities, not expanding them. The classic example is people who refuse to drive for work; there are good non-employment reasons for this (driving is the most dangerous thing many people do on a daily basis), but it's hard to argue that it doesn't restrict where one can work.
I think the most likely rationale is that the author thinks that posting a no-AI policy for professional work is itself seen as a signal of certain things about them, their skill level, etc., and that wins out for the kinds of clients they wish to take on. This doesn't have to be a long- or even medium-run bet to make, given that it's cheap to backtrack on such a policy down the line. Either way it's clear from reading the measured prose that there's an iceberg of thinking behind what's visible here and they are probably smarter than I am.
Thus, they won't use it directly themselves, but are willing to work with people who do.
It’s an absolute joy to be able to achieve essentially anything (within reason), things that previously I’d have known how to design but not build in any reasonable timeframe.
Who are these anti-AI programmers? Computing and programming have just been unlocked and they're not interested.
I’ve always had far more ideas than I’d ever be able to build, and now I can get at least some of them built very quickly. I just don’t understand why this wouldn’t be exciting to a developer.
Twenty years from now it will be completely taken for granted that computers can program themselves, and we will look back on that painful era when every line of code had to be hand-written by wizards as ancient and quaint.
Join the party, join the revolution. It’s incredible fun to be able to create beyond your hand-coding skills.
I love creation and creating computer software. A vision appears in my head for a software idea and I have to build it, I am utterly compelled.
So I had to learn to program. I quite like programming it’s good to feel clever.
But my deepest joy is creating and it’s like a gift from heaven to get LLMs that can help me realize even my most ambitious creative visions.
It’s the outcome I want, not the experience of getting there.
I absolutely love what LLMs have brought to programming - accelerated creation.
For one, I think there's a sense of unfairness that people are expressing as well. A skill that took considerable time to learn and build up can be reproduced with a machine, and that just feels unfair. Another is, obviously, companies mandating employees use AI in their work. And then there's the environmental cost of training. Then there are the cases where it's being used just for slop, or for submitting PRs that haven't even been reviewed by their creator.
In my opinion, all of these factors make people refuse to see that some of us actually do find use for these tools and that we're not vibe-coding everything in some mad rush to ship trash.
Perhaps something will change, but right now, Claude code does not change anything for me. If what I do is ancient and quaint, so be it. I’m not competing for who can churn out the most code, never have, never will, because that’s not what software development is about.
I did not say that.
>> But there are many people who have spent years building the skills necessary to be able to realize all of their ideas, and that their ideas are inextricably linked to the process.
I have been designing and building software for 35 years and have many open source projects.
You are implying that I don’t know how to program and I need AI to build stuff. Evidence to the contrary is on my GitHub.
It’s typical anti AI to suggest that you must love AI because you have no real skill.
There is no shame whatsoever in using AI. You've edited your comment since I replied. I am not anti-AI. If you can build great things with or without AI, whether it takes 1 day or 1 week, or 1 year, it doesn't matter: good software is good software. Many very talented developers are using AI. There is also no shame in not having certain skills.
I am responding to "can’t get my head around why a developer wouldn’t want to use AI assisted programming". I explained that there are many developers who have a process that doesn't benefit from being able to generate lots of code very quickly. You said AI enables you to create "things that previously I’d have known how to design but not build in any reasonable timeframe". I'm happy for you, I'm glad AI has given you that, but there are many types of developer, many for whom that isn't a benefit of AI.
Reddit is filled with vibecoders sharing how vibecoding is a panacea because it enabled them to build an idea they've always wanted to build but never had the time. When pressed, they reveal an idea that could be achieved very simply but their vision for how it should be built is very complicated and unsophisticated. They needed AI to achieve it because their design needed millions of lines of code. I assume you're one of those people. And that's okay.
I am bad at math. Asking AI to do math for me will always be faster than doing it myself. However, unlike you, I can get my head around the reality that mathematicians are more efficient at doing math themselves.
Because it doesn't work in a useful way.
Yesterday in 3 hours I built a fully working golang program that captures all browser network traffic from chrome and Firefox and filters and logs it with a TUI and includes a wide range of help and checks and fine tuning and environment config and command line options. Multiple cycles of debugging and problem solving and refinement until it was finished.
That is simply not possible to do by hand.
And if you don’t believe this, you think I’m exaggerating, then yeah you’re being left behind while the industry moves forward.
I wanted to get it to help write a very simple Django website, basically a phone directory thing. After dicking about trying to get Copilot to actually help I had about 5000 lines of code that didn't work.
I was able to write something that did work in about an hour and about 50 lines of code.
Do you actually understand how the program your AI created works?
Of course yes I designed and architected it.
>> I've tried using it. I can't get it to do anything useful
Look, I really don’t want this to come across as mean or snarky, but you can’t be trying very hard if you haven’t explored the unbelievable power of Claude, ChatGPT, and Gemini. Or worse, if you have, but couldn’t work out how to get them to do anything useful. I’d encourage you to give them all a real go and invest some time learning how to get the best from them.
Sure, there are some boring rote parts of coding, just like sawing boards might not be the most enjoyable part of woodworking. I guess you could use AI as the analog of power tools (would that be using AI to generate awk and jq command lines?). But I wouldn't want to use a CNC router and take all the fun and skill out of the craft, nor do I find agentic AI enjoyable.
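To make the "power tool" analogy concrete, here are hypothetical examples of the kind of throwaway one-liners an assistant can draft faster than most of us can recall the syntax (illustrative sketches, not taken from the original comment):

```shell
# jq: pull the "name" field out of every object in a JSON array
echo '[{"name":"alice","id":1},{"name":"bob","id":2}]' | jq -r '.[].name'
# prints: alice, then bob, one per line

# awk: sum the third column of whitespace-separated records
printf '1 a 10\n2 b 20\n' | awk '{sum += $3} END {print sum}'
# prints: 30
```

That's the "table saw" tier of assistance: mechanical, verifiable at a glance, and it doesn't touch the parts of the craft anyone enjoys.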
And AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do) or with company internal frameworks.
I fully respect this. Lots of craftspeople love to work with wood by hand whilst factories build furniture on an industrial scale.
>> AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do)
But you are very wrong about embedded systems development and AI. I do a huge amount of microcontroller programming and AI is a massive productivity multiplier and the LLMs are extremely good at it.
Probably true if you use an SDK there are lots of examples for. I have worked with Embassy in Rust, and the AIs were not good; nor were they with a company-internal SDK in C++ at work. They will frequently hallucinate non-existent methods or get the parameters wrong. And for larger systems (e.g. targeting embedded Linux) they can't keep enough context in their head to work with large (millions of lines) existing proprietary code bases. They make mistakes and you end up with unmaintainable soup. Humans can learn on the job; AIs can't yet do that.
“Artificial Intelligence (AI) is trained using human-created content. If humans stop producing new content and rely solely on AI, online content across the world may become repetitive and stagnant.
If your content is not AI-generated, add the badge to your work.”
[0]: https://notbyai.fyi/help/what-is-the-not-by-ai-90-rule.php
However, nothing indicates that this will happen soon; we're talking about a timeline of a decade or longer. Maybe pricing, as well as hardware and energy shortages, will further slow down the transition. Right now, AI doesn't seem to be profitable for the companies offering it.
Feel free to downvote this comment, but make sure you revisit this post 10 years from now.
It's much more fun to write code than to review code.
The upsides:

1. It's a great tool to reduce boilerplate.
2. It's great for experimenting with ideas without the overhead that comes with starting a new non-trivial project.
3. It's great for one-offs, demos, or anything like that.
4. It helps me work on some personal side projects that would never have seen the light of day otherwise.
The downsides:
1. As with dynamic languages, it's a great tool for EXPERT engineers (not that I'm calling myself one), but it's often used by junior/entry-level engineers who don't understand the problem, can't tell it exactly what to do, and can't judge the result. That leads to codebases riddled with hard-to-find issues, and since they produce a lot of code, those codebases are a huge liability.
"But look what i made" .... no... no you didn't you don't even understand why its doing something.
As a software engineer, you don’t get paid for simply writing code; people pay you for problem-solving.
sgt•1h ago
LLMs tend to regurgitate known design patterns and familiar UX. Like those typical "keep scrolling down to learn about our app as we show you animations" pages. It gets a bit old.
epolanski•1h ago
But there's way more LLMs can do: assist you in connecting dots in complex codebases, find patterns to refactor given some rules, provide ideas, let you dig into dependencies to find undocumented APIs or subtle edge cases, surface information you might otherwise have had to dig up through endless Google queries and hard-to-find GitHub issues, and provide support in technologies you sometimes have to use (sed, awk, regexes, Google Sheets APIs, etc.) but don't care enough to learn because you only touch them a few times a year, letting you focus on what matters.
I'm frankly tired of these pointless debates conflating LLMs in the context of code with the same boring arguments of people hating on vibecoding, or thinking every developer is delegating everything to AI and pushing slop. If that's your colleagues, fire them. They are indeed useless and can be replaced by a prompt.
If one cannot see the infinite uses of LLMs, even just for support, without ever authoring a single line of code or ever touching a file, it's only been that person limiting their own growth and productivity.
Seriously, this is beyond tiring.
sodapopcan•48m ago
> it's only been that person limiting their own growth and productivity.
Maybe limiting raw productivity, but I sure don't buy that it limits growth. Maybe if all you ever did was copy and paste off of SO, but taking the time to study and deeply understand something is going to be much better for your overall growth. Also collaborating with humans instead of robots is always nice.
redox99•1h ago
Just like a chess engine beats any human.
People think LLMs are still at the point of programming based on what they learned from the data they scraped. We're well past that. We're at the point of heavy reinforcement learning. LLMs will train on billions of LoC of synthetic code they generate, like chess engines train on self-play games.