* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
It tends to get downvoted and flagged.
If the accusation is that I am an inference engine pumping out words based on a trailing context window, then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of γ/C/N2 -> Si + e- (sunlight, coal, and wind turned into silicon and electrons).
This sort of tells me that you are pro-LLM, and most pro-LLM people simply paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
> Timing just worked out this way. New month, ideal timing for testing a new rule.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
But "there can't be any interesting discussion about AI programming" is completely false.
If you use LLMs frequently, is it possible you'll forget to think critically?
Nowadays, you can have a sub-agent to think critically for you. ;)
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
Like saying there's no interesting discussion about programming, just whether OOP is overhyped, Python is slow, or how well you can convert a C codebase to Rust.
Just like discussions about traditional programming never were only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap actually! "How do you approach this problem?" Is a question that is valid in both discussions, for example.
Claude does that for me. :)
I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think there is still a lot of low-hanging fruit in improving LLM harnesses that no one is working on, because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.
But I'm afraid that if I finally invest the time and implement some of my ideas on making LLM-assisted coding better (more reliable, safer, with generated code that is easier for humans to interpret and understand), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.
What is the way out of this conundrum?
If only, just this once, it were true. Sigh.
There are some true gems however but usually in smaller focused subreddits.
I never thought I’d miss vBulletin so much.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
A good test for any community is: try posting something that is factually incorrect but that supports the community's agenda. Does the community call it out? On Reddit, it does happen.
The reality is that the masses, the real world, the average person, are assholes.
It doesn't show in the real world, because people learn to hide their assholeness at a very early age (or they learn what it's like to get punched in the face).
On an anonymous forum, you don't have to hide your assholeness.
Frankly, it's amazing the site never devolved into 4chan. I attribute that to all the people doing free labor: the mods.
/r/assembly bans all discussion of 4GL
LLM programming isn't going away by not talking about it. It's time to move on, and eventually consider farming.
Makes sense. If I'm looking to read discussions about stable selection, feed prices, etc., why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits you can go to to discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands - why conflate programming with viber-coding?
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Still salty about it.
The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_; the trend line is never going to reverse. I don't understand the people digging in as zero-LLM absolutists.
Relevant read: https://en.wikipedia.org/wiki/Luddite
I feel like it’s easy to understand what’s motivating these individuals to take that stance.
The anti-AI crowd, on the other hand, just doesn’t like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
I divide anti-AI people into two groups. Those who don’t like AI because of what it is, and those who don’t like it because of its impact on society. Naturally there is an overlap.
Luddites were not opposed to the technology. So the comparison to them is only correct for the latter group.
Not talking about LLMs on a forum is not going to change anything in the grand scheme of things. It could be a protest, but I see it more (judging by the feeling I get from the announcement) as a means to protect the forum from being overrun, regardless of whether AI is ultimately good or bad.
Also note that nowhere in my comment have I stated my position in this argument.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vibe-coding is a fundamentally different kind of activity from actual programming. It's a pure delusional dopamine rush, compared to the deliberate engineering required to build quality software.
If you give a regular person a race car, they will crash it about as fast as their vibe-coded app crashes. Give the same race car to a pro and it’s a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can’t program a language should be using an LLM to learn it unless they know about 2-3 other languages already, IMO.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
They weren't useless; they proved whether the direction the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLMs. Of course they weren't magically production-ready; that's not the point of a prototype.
How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]
Naturally, none of the big tech companies will just release a pure vibe-coded project due to structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs as well these days. Maybe in your limited experience, it isn't true, but that doesn't generalize to what's actually happening.
1. https://www.reddit.com/r/Anthropic/comments/1pzi9hm/claude_c...
2. https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai...
3. https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
Some subreddits forbid memes, because otherwise they get flooded and the good content drowns in them.
Some subreddits only allow certain content on certain days to counter this.
What do you want the mods to do?
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.
When a topic gets too popular, it drowns out all the other topics. At that point, isn't the subreddit just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted into the WNBA, r/wnba went from a subreddit of 9,000 subs to eventually 200k.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, at any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, the latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "Show HN™: What I Spent My Claude Tokens On :)", AI complaining, AI agent strategies, news about AI's impact on the industry, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because they respect their time as a writer greatly and my time as a reader not at all. It's ClankerNews; the Hackers are in short supply.
Now it’s people sharing AI apps that look exactly like other AI apps that they have never heard of. [1]
Projects rise then implode hilariously within a month. [2]
An ebook management project grew over a year with a pretty conservative feature set, then in 3 months implemented every ebook feature under the sun, broke everything, and imploded. The funniest thing is when the “AI slop” callout is itself AI-written and nobody notices. [3]
Like… amazing comedy. Then after the owner deletes the repo, 10 people role-play the hero who “has the code”, because clicking Fork on GitHub is the sign of a true hacker.
[1] https://old.reddit.com/r/selfhosted/comments/1r9s2rn/musicgr...
[2] https://old.reddit.com/r/selfhosted/comments/1rckopd/huntarr...
[3] https://old.reddit.com/r/selfhosted/comments/1rs275q/psa_thi...
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one focusing solely on programming, then more power to them. Discussing both in combination also makes sense, but the value of Reddit is having a subreddit for anything, and “just programming” should be on the list.
It's really not. Maybe vibe coding, in its original definition (not looking at generated code), is fundamentally different. But most people are not vibe coding outside of pet projects, at least not yet.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
How can that be true? Reddit is vote-based. So if people weren't interested, they wouldn't vote it up and it wouldn't appear on the front page. Hacker News has no rule banning posts about Barbie and yet, amazingly, Barbie rarely makes it to the front page, because that's how upvotes work.
People clearly are interested enough to vote LLM related posts up, but a bunch of mods who don't like AI are upset enough to want to dictate what others can find interesting. Which is not unusual for Reddit.
From the user responses to the linked ban, said ban was a positive decision for that community.
And discussion of LLMs themselves can get fairly tiring in the long run; follow r/LocalLLaMA for a while and you'll see what I mean. But if you are really into LLMs, that sub is great.
It is simply not fun to go on a subreddit and see 90% of it being projects and blogs that are obviously created using AI, with authentic content pushed to the side by the sheer volume of artificial work. r/Python was horrible at one point, but the mods have been stepping up their game.
Last time I checked, only political posts (like those about offshore programmers) got any kind of attention. Most technical posts barely get 10 comments. Some of the smaller subreddits (like /r/ProgrammingLanguages) are much better.