What you have to understand is the avalanche of low-quality questions that came in, pushed by "academics" who should have known better, telling students things like "just ask on SO - they will write it for you", which is bad on so many levels.
But when it got really bad was when the new owners, after Jeff & co sold out, took over. Woke nonsense sprouted everywhere, people with no technical knowledge did moderation. And, whoa, if you criticised this, you were banned. That's when I gave up.
Wokeness is a control mechanism. If you tread one inch off the "correct" path (as any normal person is going to do occasionally), you will get stomped. For example:
[jane] Using "delete this" is perfectly safe in C++
[neil] Jane, not at all. For example .....
[moderator] neil, i think you are disrespecting jane - 6 hour ban.
And I am not joking about this. But it was only the company employee moderators that did this, and they did it excessively. They also got rid of the few mods who knew how to mod and wouldn't toe the line.
But I'm sure the website still receives a lot of views, as it still is highly ranked in Google.
So much of life these days is purely extractive, trying to squeeze more money out of less productive activity. It's no wonder young people feel disillusioned and are increasingly focused on gambling and "investing" in meme stocks.
Why do people think this is necessary? When you learn new things, like bicycling, you don't start with relearning how to walk.
I don't expect an LLM to have deep inbuilt knowledge of libraries. I expect it to be able to use a language server to find the right definitions and load them into context as needed. I expect it to have very deep inbuilt knowledge of computer science and architecture to make sense of everything it sees.
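As a sketch of what "use a language server" could mean in practice, here's a minimal Python example (the file URI and position are made up for illustration) of building the standard LSP `textDocument/definition` request an agent could send to resolve a symbol before loading its definition into context:

```python
import json

def definition_request(request_id, file_uri, line, character):
    """Build a textDocument/definition request per the LSP spec (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "textDocument/definition",
        "params": {
            "textDocument": {"uri": file_uri},
            "position": {"line": line, "character": character},  # both zero-based
        },
    }

# Hypothetical file and position, purely for illustration.
msg = definition_request(1, "file:///src/app.py", 41, 8)

# LSP messages travel over the wire with Content-Length framing:
body = json.dumps(msg)
frame = f"Content-Length: {len(body)}\r\n\r\n{body}"
```

The server's response is a location (file URI plus range) that the agent can read and paste into the model's context window on demand, rather than relying on memorized library internals.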
Meaning as technology evolves and does things in novel ways, without explainers annotating it the LLM won't have anything to draw on - reducing the quality of answers. Which brings us full circle, what will companies use as training data without answers in places like SO?
e.g. try asking it to swap the meanings of the words red and green, then ask it to describe the colors in a painting and analyse it with color theory - notice how quickly the results degrade, often attributing "green" qualities to "red" since it's now calling it "green".
What this shows us is that training data (where the associations are made) plays a significant role in the level of answer an LLM can give, no matter how good your context is (at overriding the associations / training data). This demonstrates that training data is more important (for "novel" work) than context is.
Another one: ask a person to say 'silk' 5 times, then ask them what cows drink.
Exploiting such quirks only tells you that you can trick people, not what their capabilities are.
This poses a problem for new frameworks/languages/whatever that do things in a wholly different way since we'll be forced to rely on context that will contradict the training data that's available.
If you had someone familiar with every computer science concept, every textbook, every paper, etc. up to say 2010 (or even 2000 or earlier), along with deep experience using dozens of programming languages, and you sat them down to look at a codebase, what could you put in front of them that they couldn't describe to you with words they already know?
You started with 'they can't understand anything new' and then followed it up with 'because I can trick it with logic problems' which doesn't prove that.
Have you even tried doing what you say won't work?
But it’s almost trivial for an LLM to generate every question and answer combo you could ever come up with based on new documentation and new source code for a new framework. It doesn’t need StackOverflow anymore. It’s already miles ahead.
It not only explained the math but created a React app to demonstrate it. I'm not sure that can be explained by regurgitating part of it with noise.
I encourage you to try it with something of your own.
Abstract:
Discriminantal arrangements are hyperplane arrangements that are generalizations of braid arrangements. They are constructed from given hyperplane arrangements, but their combinatorics are not invariant under combinatorial equivalence. However, it is known that the combinatorics of the discriminantal arrangements are constant on a Zariski open set of the space of hyperplane arrangements. In the present paper, we introduce (T, r)-singularity varieties in the space of hyperplane arrangements to classify discriminantal arrangements and show that the Zariski open set is the complement of (T, r)-singularity varieties. We study their basic properties and operations and provide examples, including infinite families of (T, r)-singularity varieties. In particular, the operation that we call degeneration is a powerful tool for constructing (T, r)-singularity varieties. As an application, we provide a list of (T, r)-singularity varieties for spaces of small line arrangements.
So adding a new framework already doesn’t need human input. It’s artificial intelligence now, not a glorified search engine or autocomplete engine.
How will ChatGPT/CoPilot/whatever learn about the next great front-end framework? The LLMs know about existing frameworks by learning on existing content (from StackOverflow and elsewhere). If StackOverflow (and elsewhere) go away, there's nothing to provide training material.
The results of your claude code session, for example, make fine training data.
Did the user commit the final answer? What changes were made before they did?
Does Claude code copy your repository onto its server?
Yes, the context from working sessions moves over the wire - claude "the model" doesn't work inside the CLI on your machine - it's an API service that the cli wraps.
Edit: I also mean to imply that maybe this could be more observable in the future. Opt-in, of course.
The moderation could be very aggressive, with "duplicate" posts getting closed fast. The problem is sometimes the "solution" in the duplicate was either irrelevant or dated. Things like telling someone to use jQuery in 2020.
Never once saw anyone discussing how to implement CRUD or claiming one framework was better than another. That was the point - concrete answers not opinions.
Are you saying that because you don't like web apps or the frameworks that people use to make them, there shouldn't be a way for people to publicly ask questions about programming?
As the underlying software evolves the log messages will change and the APIs will change and the answers won't make sense anymore.
Hopefully reasoning is now good enough that training on original code and API docs is sufficient
Isn't that a graph of questions per timeframe, not total questions? If it was total questions, that would imply a massive cull of existing questions, not a decline in usage
Also, this is a new set of "rounds" it's making. Graphs like this have been getting shopped around off and on for a few years now.
Previous rounds of discussion famously include https://meta.stackoverflow.com/questions/433864 . The decline was even noted before ChatGPT, e.g. https://meta.stackoverflow.com/questions/413657.
That graph is the number of questions being posted: very often the question already exists (although obviously with technology and frameworks changing over time, things aren't constant, and answers can be out of date at some point), so you don't need to post the question.
Also: Would LLMs be as good for answers if they hadn't been trained on scraping StackOverflow in the first place?
But on your 2nd question,
>Would LLMs be as good for answers [...]
Yes.
StackOverflow has been in a clear, strong decline for quite a while. LLMs just hammered in the final nail. It's very clear that they caused a substantial drop, but not as impactful as the years of stagnation and rise of competition.
There are no obvious flaws in the original design, and there were no endemic wrongdoings in the governance either. It just rotted slowly like any other community does. And nobody in the world knows how to keep communities from becoming toxic. There is simply no recipe. And that's why StackOverflow doesn't serve as a lesson either.
It's honestly really frustrating to keep reading these takes.
In that case, dang would be part of the "vetted set of mods" I mention with the power to overturn my decisions, because HN is paying him to make sure the site runs in line with its vision, even if that means disagreeing with the "ground-level" moderators.
I'm pretty sure you and I see eye to eye on this, given your other comments on the topic here.
I think a well-moderated community can be non-toxic. Lobste.rs is a good if not extreme example: it's kind of a vouch system for the people you refer and there's pretty good moderation to prevent overly mean discussion.
I find that there's still a subset of users that make it worse than it should be, by making too much noise about "tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage", but that is already against the rules/guidelines.
But even in those spaces, few things end up actually being flagged even when the flames are burning hot.
I think it’s fine they are hidden by default. But until we can see all removed comments, we can’t understand the debate.
Do you have showdead on?
There's a recipe, it's just expensive. Intense moderation, exclude disruptive users, no grand plan for massive growth/inclusion.
StackOverflow did have intense moderation, but it also seemed to want to be a place where anyone could ask a question and that desire for mass inclusion competes with the desire to be moderated/curated and then there's issues.
I think the thing that ultimately made wikipedia work is that moderation actions are done pretty publicly with a heavy focus on citing sources. The problem a lot of these OG Web 2.0 sites have is many of them have focused on keeping moderation discussion and decisions as private as possible.
That, I think, is where SO failed. Someone can mark something as a dup and that's basically the end of discussion. I think that's also where sites like Reddit really struggle. A power tripping mod can destroy a community pretty quickly and they can hide their tracks by deleting comments of criticism.
It's one thing I think HN actually gets pretty right. It's a much smaller userbase, which helps, but also the mods here are both well known and they post the reasons for their actions. There's still some opacity, but the fact that mod decisions are mostly public seems to have worked well here.
Once it became a product there was constant tension between community and management. A 24-year-old PM who had never worked in software would come and declare a sweeping change, then accuse the community of being toxic, uninclusive trolls.
Also Joel violated all rules and norms and used it to promote his personal political platform.
Mostly true, but there are exceptions... HN is about as good as I've seen for a publicly accessible forum, but has very active moderation (both from Dang and team and a pretty good vote & flag mechanism).
The other good forums I've seen are all private and/or extremely niche and really only findable via word of mouth. And have very active moderation.
But, yeah, I think you're probably right for any sufficiently large forum. It'll trend to enshittification without very active management.
I think a large part of that is due to moderation actions being very public and open for discussion on the "talk" page.
I wonder how much of it is because of good moderation versus having a site that deliberately doesn't appeal to the masses. The layout is purely text (aside from the small "Y" logo at the top). No embedded images or videos. Comment scores (aside from your own) are hidden, usernames aren't emphasized with bold, and there are no profile pictures, so karma farming is even more pointless (no pun intended) than on reddit. There's no visible score to give the dopamine from the knowledge of others seeing your high score.
In other words, a smaller community is easier to keep clean, and HN's design naturally keeps the community small.
Shirky's "A Group is its Own Worst Enemy" is highly relevant here.
The point of StackOverflow was explicitly not to help the question-askers, but to prioritize the people who would reach the question via Google. That's why so many people have bad stories about times they went to ask questions on StackOverflow: it was supposed to be very high-friction and there was supposed to be a high standard for the questions asked there.
Now with LLMs users get the best of both worlds. They don't need to use Google to find a high-quality StackOverflow question/answer AND they can ask any question even if it's been asked 1,000 times before or is low-quality or would lead to discussion rather than a singular answer.
When dealing with those personalities, it seems the only way to get them to completely reconsider their approach is a hard "F off". Which is why I understand the old Linus Torvalds emails. They were clearly in response to someone acting like "I just need to convince them".
A good question isn't just "how do I do x in y language?" But something more like "I'm trying to do x in y language. Here's what I've tried: <code> and here is the issue I have <output or description of issue>. <More details as relevant>"
This does two things: 1. It demonstrates that the question ask-er actually cares about whatever it is they are doing and isn't just trying to get free homework answers. 2. Ideally it forces the ask-er to provide enough information that an answer-er can do so without asking follow-ups.
Biggest thing as someone who has been in Discords that are geared towards support, you can either gear towards new people or professionals but walking the line between both is almost impossible.
There are no stupid questions, but there are stupid choices about whom to ask.
Often the right choice is yourself.
Please read "How long should we wait for a poster to clarify a question before closing?" (https://meta.stackoverflow.com/questions/260263), especially my answer (https://meta.stackoverflow.com/a/425738/523612), and "Why should I help close "bad" questions that I think are valid, instead of helping the OP with an answer?" (https://meta.stackoverflow.com/questions/429808).
Personally I find a lot of "welcoming" language to be excessively saccharine and ultimately insincere - it lands somewhere between being talked down to like a child and corpo-slop. Ultimately I don't think there's necessarily a one-size-fits-all solution here, and it's weird that some people expect that such a thing can or should exist.
I agree completely (this was part of my findings in https://meta.stackexchange.com/a/394952/173477).
This is an entirely different problem than toxicity is it not? Like, if the moderators are bad at their job that seems uniquely different than the moderators were mean to me while doing their job.
Sure, there was a whole appeals process you could go through if you had infinite time and patience to beg the same cohort for permission, pretty please, to ask the question on the ask-the-question website, but the graph of people willing to do so over time looks a lot like their traffic graph.
The gamification is mostly via reputation, and only asking, answering (and very limited editing) grant reputation.
And that stuff is important, but when it becomes a metric to optimize and brag about…
This discussion needs a grounded definition of "toxic" then.
Elsewhere in this thread I see:
> I disagree with this. You can tell someone that a question is not appropriate for a community without being a jerk. Or you can tell them that there needs to be more information in a question without being a jerk. I do not think being mean is a prerequisite for successful curation of information.
So we're all speaking about different things it appears.
When I wrote about the issue on MSE (https://meta.stackexchange.com/a/394952/173477) a couple years ago I explicitly called out that the terminology is not productive. It generally seems to describe dissatisfaction with the user experience that results from a failure to meet the user's expectations; but the entire reason for the conflict is that the user's expectations are not aligned with what the existing community seeks to provide.
And yes, the ambiguity you note has been spammed all over the Internet (everywhere Stack Overflow is discussed) the entire time. Some people are upset about how things are done; others consider what is done to be inherently problematic. And you can't even clearly communicate about this. For example, someone who writes "You can tell someone that a question is not appropriate for a community without being a jerk." might have in mind "don't point people at the policy document as if they should have known better, and don't give specific interpretation as if their reading comprehension is lacking"; but might also have in mind "point people at the policy document, and give specific interpretation, because otherwise there's no way for them to know". Or it might be "say something nice, but don't close the question because that sends the wrong message inherently" (this interpretation is fundamentally misguided and fundamentally misunderstands both the purpose and consequences of question closure).
And yes, every now and then, the person making the complaint actually encountered someone who said something unambiguously nasty. For those cases, there is a flagging system and a Code of Conduct. (But most Code of Conduct violations come from new users complaining when they find out that they aren't entitled to an open, answered question. And that's bad enough that many people don't comment to explain closures specifically to avoid de-anonymizing themselves.)
> Earn at least 1000 total score for at least 200 non-community wiki answers in the $TAG tag. These users can single-handedly mark $TAG questions as duplicates and reopen them as needed.
So these are definitely not people averse to the idea of answering questions.
2. I can guarantee you that the overwhelming majority of these cases are not people trying to be "mean". Users are actively incentivized against closing duplicates, which has historically led to nowhere near enough duplicate questions being recognized and closed (although there have been many proposals to fix this). Dupe-hammering questions "to be mean" is considered abusive, and suspicion of it is grounds to go to the meta site and discuss the matter.
No, people close these questions because they genuinely believe the question is a duplicate, and genuinely believe they improve the site with this closure. It's important to understand that: a) people who ask a question are not entitled to a personalized answer; b) leaving duplicate questions open actively harms the site by allowing answers to get spread around, making it harder for the next person to find all the good ones; c) the Stack Overflow conception of duplication is not based on just what the OP understands or finds useful, but on what everyone else afterward will find useful.
For example, there are over a thousand duplicate links to https://stackoverflow.com/questions/45621722 , most of which is from my own effort — spending many consecutive days closing dozens of questions a day (and/or redirecting duplicate closures so that everything could point at a "canonical"). Yes, that's a question about how to indent Python code properly. I identified candidates for this from a search query and carefully reviewed each one, verifying the issue and sending other duplicates to more specific canonicals in many cases (such as https://stackoverflow.com/questions/10239668). And put considerable effort into improvements to questions and existing answers, writing my own answer, and adding links and guidelines for other curators so that they can choose more appropriate duplicate targets in some cases. I also looked at a wider search that probably had a fairly high false positive rate, but implies that there could be thousands more that I missed.
3. When your question is closed as a duplicate, you immediately get a link to an answer. You don't even need to wait for someone to write it! It's someone saying "here, I was able to find it for you, thanks perhaps to my familiarity with other people asking it".
4. Stack Overflow users really do "try to find what's different about" the question. It just... doesn't actually matter in a large majority of cases. "I need to do X with a tuple, not a list" — well, you do it the same way. "I need to Y the Xs" — well, it seems like you understand how to Y an X and the real problem is with finding the Xs; here's the existing Q&A about finding Xs; you shouldn't need someone else to explain how to feed that into your Y-the-things loop, or if you do, we can probably find a separate duplicate for that. Things like that happen constantly.
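The "tuple, not a list" case is easy to make concrete in Python (values are made up for illustration):

```python
data_list = [3, 1, 2]
data_tuple = (3, 1, 2)

# sorted() accepts any iterable and always returns a new list, so the
# "but mine is a tuple, not a list" distinction changes nothing about the answer.
result_from_list = sorted(data_list)
result_from_tuple = sorted(data_tuple)
```

Both calls produce the same result, which is exactly why such questions get closed as duplicates of the list version.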
Sometimes a question shows up with multiple duplicates. This almost always falls into two patterns: the user is really asking multiple separate things (due to failing to try to break up a problem into logical steps) and each one is a duplicate; or the question is constantly asked but nobody knows a good version of the question, and gives multiple links to previous attempts out of frustration with constantly seeing it. (The latter is bad; someone is supposed to write the good version and send everything else there. But that typically requires behind-the-scenes coordination. Better would be if the first bad attempt got fixed, but you know.)
5. Closing a question is emphatically not about telling people to go away. The intended message (unless the question is off topic or the OP just made a typo or had a brainfart) is "please stay and fix this". However, it's perfectly reasonable that an explicit attempt to catalog and organize useful information treats redundant indices by pointing them at the same target rather than copies of the target. And questions are indices in the Q&A model.
* The originally asked question was very low quality; for example, it might have basically been a code dump and a "what's wrong?" where many things were wrong, one of which is what you were both asking about. Someone else may have decided that something else was the more proximate issue.
* The OP was confused, and didn't really have your question. Or the question title was misleading or clickbaity. These should get deleted, but they tend to get forgotten about for a variety of reasons.
* Sometimes two very different problems are described with all the same keywords, and it takes special effort to disentangle them. Even when the questions are properly separated, and even if every dupe is sent to the correct one of the two options, search engines can get confused. On the flip side, sometimes there are very different valid ways to phrase fundamentally the same question.
My favourite example of the latter: "How can I sort a list, according to where its elements appear in another list?" (https://stackoverflow.com/questions/18016827) is a very different question from "Given parallel lists, how can I sort one while permuting (rearranging) the other in the same way?" (https://stackoverflow.com/questions/9764298). But the latter is fundamentally the same problem as in "Sorting list according to corresponding values from a parallel list" (https://stackoverflow.com/questions/6618515). It's very easy to imagine how someone with one of these problems could find the wrong Q&A with a search engine. And there were a lot of other duplicate questions I found that were directed to the wrong one, and if the site were as active as it was in 2020, I'm sure it would still be happening.
And that's after the effort I (and others) put in to improve the prose and especially the titles, and add reference sections. The original titles for these questions were, respectively: "python sort list based on key sorted list"; "Is it possible to sort two lists(which reference each other) in the exact same way?"; "Sorting list based on values from another list?". No wonder people didn't get what they wanted.
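For the curious, the two distinct problems behind those questions can be sketched in Python (the example data is made up for illustration):

```python
# Problem A: sort a list according to where its elements appear in another list.
order = ["b", "c", "a"]
items = ["a", "b", "c", "b"]
rank = {value: index for index, value in enumerate(order)}  # O(1) lookups vs list.index
by_order = sorted(items, key=rank.__getitem__)

# Problem B: sort one list while permuting a parallel list in the same way.
names = ["carol", "alice", "bob"]
scores = [72, 95, 81]
pairs = sorted(zip(names, scores))             # sorts by name, dragging scores along
names_sorted = [name for name, _ in pairs]
scores_sorted = [score for _, score in pairs]
```

A searcher with Problem A who lands on a Problem B answer (or vice versa) gets a solution that doesn't fit, even though every keyword matched.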
They drove people away, on purpose, who were creating their content. Which was a successful strategy until it wasn't.
I think they've accelerated it by making it easy to make all comments private and by hamstringing moderators.
Part of that directly ties back to AI as well. The API limiting has a lot to do with making it hard to scrape reddit for data.
Stack Overflow's situation, of course, is totally different. It only changed ownership once.
In reality, the Venn diagram of people wishing to moderate online spaces for virtual points and petty bureaucrats that get off on making arbitrary rules is pretty much a circle.
You are supposed to go to the existing question and post a new answer on it.
Answer approval means almost nothing and should never have been implemented. In the early days it helped experts spread out their attention, as there was an immediate signal that a question had at least one answer good enough for the OP. But there is really no reason to prioritize the OP's opinion like this. (The reputation system has been misaligned with the site's goals in many ways.)
Best of both worlds? I disagree vehemently. At least my job is secure; experts with decades of experience are in high demand, and I can be even more selective who I decide to work for. I'm done contributing for free to the corpo Internet though.
That _might_ have been true in 2023.
And then, the other day someone showed an example of a "how to configure WireGuard" article, padded to hell, in LLM house style, aimlessly wandering... being hosted on the webpage of an industrial company selling products made out of wire meshes.
You _can_ write well with AI. You _can_ also create good products with AI. It's a tool. You need to learn how to use it.
The incentives to do so are seriously lacking, however. A big part of why SO had to ban LLM content so firmly is that otherwise hordes of people will literally copy someone else's question into ChatGPT, and copy its answer back into the answer submission form in the hopes of getting some reputation points. It was much worse for bounties, of course, which had largely become ignored by anyone not doing that.
This is a very charitable read of the situation. Much more likely is, as another commenter posted, a set of people experiencing a small amount of power for the first time immediately used it for status and took their "first opportunity to be the bully". Question quality and curation was always secondary to this.
> > The point of StackOverflow was explicitly not to help the question-askers, but to prioritize the people who would reach the question via Google.
It obviously was only tolerated because of that, as evidenced by the exodus the moment a viable alternative became available.
It always looks like this from the outside. Especially for those who don't understand what the quality standards are, or what the motivations are for having those standards.
There is a Code of Conduct and a flagging system for a reason.
> It obviously was only tolerated because of that, as evidenced by the exodus the moment a viable alternative became available.
This is not a contradiction or rebuttal. Every Internet community is allowed to decide its own objectives. Stack Overflow's was explicitly not "help the question-askers". It was brought into existence specifically because of the social problems, and lack of utility for later searchers, observed in "help the question-asker" environments (i.e., traditional discussion forums). Of course there was an exodus when it was no longer required to bother a human to make a natural-language query find the right information (more or less, most of the time). From Stack Overflow's perspective, that's just an improvement on conventional search, and no more of a problem than the fact that Google used to be good at indexing the site.
(I still don't understand why Firefox spell-check doesn't think "asker" and "answerer" are words. They're not in my /usr/share/dict/words, either. I've been speaking English for over four decades and I still hate it.)
Instead of a rich interaction forum, it became a gamified version of Appease The Asshole. I stopped playing when I realized I’d rather be doing almost anything else with my free time.
For me, SO is a proof that communities need a BDFL with a vision for how they should run, who is empowered to say “I appreciate your efforts but this isn’t how we want to do things here” and veto the ruiners. Otherwise you inevitably seem to end up with a self-elected bureaucracy that exists to keep itself in place, all else be damned.
(Bringing it back to a local example, I can’t imagine HN without dang and the new mods. Actually, I guess I can: it would look a lot like Reddit, which is fine if that’s what you’re into, but I vastly prefer the quality of conversation here, thanks.)
That became more and more clear as the site and content aged, and afaict they have done absolutely nothing to address it. So after a few years the site had good information... but often only if you had accidentally time traveled.
I had FAR too many cases where the correct answer now was much further down the page, and the highest rated (and correct at the time) answer was causing damage, and editing it to fix that would often be undone unless it was super obvious (if I even could). It shifted the site from "the most useful" to "everything still needs to be double checked off-site" and at that point any web search is roughly equivalent. And when it's not a destination for answers, it's not a destination for questions (or moderation) either.
That’s the weird feedback loop from practically forcing new askers back to the old answers, which was bad for everyone involved.
2026: We can get you a discount on the CASA audit you have to complete before you’re allowed to ask.
That was Jeff Atwood. Who said a lot of very interesting things about how the site was explicitly intended to differ from traditional forums where "perfectly good" questions are constantly asked.
> Answers were shot down because they weren’t phrased in the form of an MLA essay.
This is absurd and I can't think of anything remotely like this happening in practice. The opposite is true: popular questions attract dozens of redundant, low-quality answers that re-state things that were said many years ago, list two different options from two other different previous answers, list a subset of options from some previous answers, describe some personal experience that ultimately led to applying someone else's answer with a trivial modification for personal circumstances unrelated to the actual subject of the question, etc. etc.
Mindless stackoverflow copy-pasting was a scourge of the programming world before. I can't imagine the same low quality stackoverflow answers mashed into slop being the best of any world...
If you were in the New queue and found a question you could answer, by the time you posted your answer the question itself may have been nuked by mods, meaning your answer/effort was never seen by many.
It's considered part of your responsibility, as someone answering questions, to understand the standards for closing questions (https://meta.stackoverflow.com/questions/417476) and the motivations behind those standards, and to skip over (better yet, flag or vote to close) those not meeting those standards (https://meta.stackoverflow.com/questions/429808).
You complain, but actually the deck is heavily stacked in your favour: there is a 5-minute grace period on answers; plus you can submit the answer by yourself, regardless of your reputation score, while typical closures (not duplicates and not questions flagged and then seen by someone on the very small moderation team) require three high-rep users (and it used to be five) to agree.
However, the question was not "nuked": the OP gets at least 9 days to fix it and submit for reconsideration before the system deletes it automatically (unless it's so bad that multiple even higher rep users take even further consensus action, on the belief that it fundamentally can't be fixed: see https://meta.stackoverflow.com/questions/426214/when-is-it-a...).
And this overwhelmingly was not done "by mods". It's done by people who acquired significant reputation (of course, this also generally describes the mods), typically by answering many questions.
I mostly chalk it up to UI affordances. The most obvious one: the site constantly presents an "Ask Question" button; it gives you a form to type in a question; people come to the site because they have a question, and it goes live to a general audience[1] as soon as it's posted. No amount of emphasis on search is ever going to override that.
Less obvious but much more important is that the community can't actually put information about community norms in front of new users, except by scolding them for mistakes. No matter how polite you are about giving people links to the tour[2] or to policies[3], it still reads to them as a rebuke.
Then of course, they wanted the site to actually grow at the start, so we got that terribly conceived reputation system best described as Goodhart's law incarnate[4]. And it was far too successful at that early growth, such that if anyone actually understood the idea properly at the start, they were overwhelmed by new users (including the experts answering questions) and had no chance to instill a site culture. It took until 2012 or so for a significant chunk of the experts to get frustrated with... all the same things they were historically frustrated with on actual forums; then we got the "What Stack Overflow is Not" incident[5]. A lot of the frustration was misdirected except for a general annoyance at certain stereotypes of typical users. It took until at least 2014, from my assessment of the old meta posts, for a real consensus to start emerging about what makes a good question, and even then there was a lot of confusion[6].
Newer sites like Codidact[7] have a chance to learn from this mess, establishing ideas about what good questions look like, and about site scope, from the start[8].
1. Notwithstanding more recent efforts, like the Staging Ground and now a new "question type" feature (https://meta.stackoverflow.com/questions/435293) which seems to have been recently rolled back in preparation for something bigger (https://meta.stackoverflow.com/questions/437856), and the various attempts to force AI into the process, etc.
2. https://stackoverflow.com/tour
3. Especially things like 'Under what circumstances may I add "urgent" or other similar phrases to my question, in order to obtain faster answers?' (https://meta.stackoverflow.com/questions/326569) and 'Why is "Can someone help me?" not a useful question?' (https://meta.stackoverflow.com/questions/284236). See also https://news.ycombinator.com/item?id=46485817 .
4. https://meta.stackexchange.com/questions/387356/the-stack-ex... ; the anchor is for my own answer but please scroll around and read other points of view.
5. See https://meta.stackexchange.com/questions/137795. Back then I was actively using the site but not active on meta; I pretty well gave up in 2015 for largely unrelated (personal) reasons, then came back in mid 2019, coincidentally shortly before the Monica situation[9].
6. In particular, see 'How much research effort is expected of Stack Overflow users?' (https://meta.stackoverflow.com/questions/261592), originally authored 2013, and especially compare the original answers to newer ones. Notably there were also quite a few deleted answers on this one, for those of you with the reputation to view them. Also see 'How do I ask and answer homework questions?' (https://meta.stackoverflow.com/questions/334822) which largely misses the point: it's not so much about the ethics of someone cheating on homework, but about the question fitting the site model.
7. https://codidact.com , with subdomains for various topics. Notably, "programming" as a topic is not privileged; unlike how the Stack Exchange network started with Stack Overflow which still dominates everything else put together, software.codidact.com is just another section of the site. Full disclosure: I am a moderator for that section.
8. See for example https://software.codidact.com/posts/285035/289176#answer-289... ; https://software.codidact.com/posts/291064 ; https://software.codidact.com/posts/284979 ; https://software.codidact.com/posts/292960 ; https://software.codidact.com/posts/294610 ; https://meta.codidact.com/posts/289910 ; https://meta.codidact.com/posts/290028 ; https://meta.codidact.com/posts/291121/291156#answer-291156 ; https://meta.codidact.com/posts/289687 ; https://meta.codidact.com/posts/289951 ; https://meta.codidact.com/posts/284169. Yes, this is a carefully hand-picked list. I have a fairly clear mental image of one more but was somehow unable to search for it.
9. See https://meta.stackexchange.com/questions/333965 and many others. It's a deep rabbit hole. It's also the triggering incident leading to the creation of Codidact[7].
You either found your answer coming in from a search engine, or you pretend the site does not even exist.
I don't think it was supposed to be sustainable, but oh well.
But I'm not sure that Atwood and Spolsky (especially Atwood) realized that nature.
I struggle not to see the people describing it as full of exclusionary rants as telling on themselves.
I was an active contributor on SO in the early days - it was fun to help folks out, and it was often the only way to get help when I needed it myself.
For me, it stopped being a place I wanted to visit when they made the decision to close any question that they didn't deem a perfect fit for their vision of SO. There was certainly some value to that around the edges, but the policy ended up being enforced so strictly that many interesting topics their audience would have found valuable were declared out-of-bounds. Questions I wanted answers to - and that were getting good answers! - would get closed, and so would interesting questions that I wanted to try to answer.
I tried a couple times to push back gently, and got piled on each time.
It stopped being fun, so I stopped going there. Shrug.
https://stackoverflow.com/questions/77855606/should-we-enabl...
That's an interesting question! It's a question that real programmers might want to have answers to! And I had a very specific answer, based on real-world experience and data. But the question got closed as being off-topic.
Shrug. They get to make the site they want. It doesn't mean it's the site folks want to visit.
The question as written is still opinion-based (try instead something like "what negative consequences could occur from..."; the point is that the site should not be trying to weigh pros and cons for you, but just stating what they are, so that people can make their own decisions).
This sort of thing gets marked off topic because it pertains to the matter of hosting the program, rather than creating it. See also https://meta.stackoverflow.com/questions/425628, https://meta.stackoverflow.com/questions/276579, https://meta.stackoverflow.com/questions/271279 etc.
> "Because I can get an answer from an LLM (which does need to be verified) in less than a minute versus the hours or days I would have to wait to get a toxic and potentially useless reply on stackoverflow. They should really downsize or just kill the company it’s a relic of the past and most developers won’t miss it."
To me, the value of StackOverflow is not in the ability to ask new questions, it's as a huge archive of questions that have already been answered. Sure, new questions might be falling off and it might be decreasing in relevance, but that in no way means that a massively resource-intensive LLM regurgitating paragraphs of semi-duplicated text is better enough to justify canning it. (There's also the matter of all the other StackExchange sites; I have no idea what the state of the world is on those, but I imagine they also have value in themselves.)
To this day, I find almost all of my low-level questions are still readily answered by StackOverflow, and it holds lots of discussion on higher-level questions that I find useful.
Does StackOverflow have an attitude problem? Absolutely. Is it fair to say that most developers won't miss it? No.
What made SO no longer useful to me, though, is that far too many of the existing answers are either obsolete or incorrect.
SO is infamous for overzealous mods closing questions left and right, calling you an idiot, and being generally unhelpful.
I think these mods were/are burned out by dealing with mostly idiotic questions all day. They default to suspicion and hostility.
If they'd had AI weeding out low quality questions before they ever got to a human (and not the rudimentary text classifiers that were the state of the art at the time) I think the mods would be a more helpful bunch.
They were on it in 2012! And no, it didn't help. The site's value was in providing users with answers to their questions, but that was never on the radar of the folks running SO. I guess they just assumed the free labor of the people that built the place would continue forever.
Edit: I even told them at the time that this was not the right way to proceed. They had no reason to close questions or even review them. All they needed was a holding pen where first-time askers posted questions, which would then be promoted by users who viewed them as worthy of an answer.
Guess what. They deleted my comment.
> and not the rudimentary text classifiers that were the state of the art at the time
This actually could be a good argument for LLMs, you won't get told your question is dumb.
It is.
The consensus in the meta community is basically: we don't want AI on the site (although clearly the owners do); people who would benefit from generative AI can get basically all its benefit by using it off-site, as it's heavily trained on Stack Overflow (and agents can presumably search it) anyway. It's fine if people use that off-site; it keeps out unsuitable questions and is basically filling the role that conventional search used to before Google got so enshittified.
For a question asker, it could be really toxic. I've had toxic responses as well. The problem is, there are _a lot_ of bad questions that would pollute that site otherwise.
For a case study into what it would look like if it invited all questions, look at many subreddits.
I'll occasionally go on /r/ObsidianMD to see if there are interesting product updates, but instead I just see questions that get re-asked _constantly_. Many subreddits have a very bad culture around searching previous threads for info. People also ask questions unrelated to the sub at all; I've seen people asking about some Android-specific problem unrelated to the app so many times. While ideally I'd like to help these people, it pollutes real questions, discussion, and, most valuably to future users, the ability to properly search for previous discourse.
SO best acts as a library of information. I will say, people on there could benefit from dropping the rude tone (this is a trait I see in software engineers frequently, unfortunately). But closing activity when it's inappropriate or duplicate (which, despite many testimonials, I have seen is more common than not) is a good habit IMO.
Where SO started failing in my opinion is when the "no duplicate questions" rule started to be interpreted as "it's a duplicate if the same or very similar question has ever been answered on the site". That caused too many questions to have outdated answers as the tech changes, best practices change and so on. C# questions have answers that were current for .NET Core 1.0 and should be modified. I have little webdev experience but I know JS has changed rapidly and significantly, so 2012 answers to JS questions are likely not good now.
What else could it mean? The entire point is that if you search for the question, you should always find the best version of that question. That only works by identifying it and routing all the others there.
> That caused too many questions to have outdated answers as the tech changes
You are, generally, supposed to put the new answer on the old question. (And make sure the question isn't written in a way that excludes new approaches. Limitations to use a specific library are generally not useful in the long term.)
Of course, working with some libraries and frameworks is practically like working in a different language; those get their own tags, and a question about doing it without that framework is considered distinct as long as everyone is doing their jobs properly. The meta site exists so that that kind of thing can be hashed out and agreed upon.
> C# questions have answers that were current for .NET Core 1.0 and should be modified.
No; they should be supplemented. The old answers didn't become wrong as long as the system is backwards-compatible.
The problem is mainly technical: Stack Overflow lacked a system to deprecate old answers, and for far too long the preferred sort was purely based on score. But this also roped in a social problem: high scores attract more upvotes naturally, and most users are heavily biased against downvoting anything. In short, Reddit effects.
If you're asking the question, you don't know the new answer.
If you're not asking the question, you don't know the answer needs updating as it is 15 years old and has an accepted answer, and you didn't see the new question as it was marked as a dupe.
Even if you add the updated answer, it will have no votes and so has a difficult battle to be noticed with the accepted answer, and all the other answers that have gathered votes over the years.
There were tons and tons of them anyway. If only we had the Staging Ground in 2012.
My favorite is when searching for something I land on a thread where the only reply is "just search for the answer!"
At Codidact we're trying to build systems that hand out privileges based on actions taken that are relevant to the privilege. There's still a reputation system but the per-user numbers are relatively de-emphasized. And posts are by default sorted by Wilson score, with separate up/down counts visible to everyone by default, so that downvotes on an upvoted post (and vice-versa) have meaningful effect and "bad" voting can be more easily corrected. There's also a system of "reactions" for posts so that people willing to put their username behind it can explicitly mark an answer as outdated or dangerous.
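The Wilson-score sort mentioned above can be sketched in a few lines. This is a minimal illustration of the general technique (the lower bound of the Wilson score interval for the upvote fraction), not Codidact's actual implementation; the confidence level and the exact handling of zero-vote posts are assumptions:

```python
import math

def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the upvote fraction.

    Unlike raw score (ups - downs), this penalizes posts with few total
    votes, so downvotes on a heavily upvoted post still move it down the
    sort meaningfully. z = 1.96 corresponds to ~95% confidence.
    """
    n = ups + downs
    if n == 0:
        return 0.0  # assumed convention: unvoted posts sort last
    p = ups / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# A post at 90 up / 10 down outranks one at 9 up / 0 down (~0.83 vs ~0.70):
# the perfect ratio on few votes is treated as less certain.
ranked = sorted([(90, 10), (9, 0), (1, 1)],
                key=lambda v: wilson_lower_bound(*v), reverse=True)
```

The practical effect is the one described above: a handful of early upvotes can't lock in a top position, because the bound tightens only as vote counts grow.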
The next time I asked I made sure to put the work in beforehand to make sure it was well written and included any relevant info that someone would need, as well as linking potential duplicates and explaining the differences I had to them. That got a much better response and is more useful for future readers.
It's not because people disliked the rudeness, though that didn't help and was never necessary. It's because the toxicity became the only goal. And the toxicity replaced both accuracy and usefulness.
I wanted to learn web development during the lockdown of 2020. First thing I learned was that SO was completely useless. Everything on there was wrong. Everything was wrong due to incompetent moderation that prevented bad information from being corrected. Everything new marked as a dupe even when the other post was wrong or irrelevant.
That killed SO. It was a decision of the founders and the mods. They took great pride in doing it, too.
They did a quick patch by letting you sort answers by some attribute, but darn, that's low effort product/ux dev. What did those teams do in 10 years??
The remaining community understands perfectly well why people have left. It's just deemed irrelevant.
Way too many people joined the site, way too quickly, based on a false understanding of the intended premise. Gradually they filtered themselves out as they realized the mistake. Those few who saw value in the actual idea remained.
It was always going to be relatively few people who truly want to participate in something like that. And that's fine. Ideas don't have to be popular to be worthwhile or valid.
But also, a note: conversations are never finished. People talked about this broad topic yesterday, but I didn’t see it yesterday to be able to weigh in. I’m here today, saw this topic, and started talking about it with the other people who stumbled across it just now. I would be highly annoyed with a friend if I brought up an interesting subject and they replied that they’d already discussed it with someone else over dinner last night so there’s no need to talk about it again. I wasn’t there last night. Even if there was a recording of it, that would be a stale artifact I couldn’t interact with, other than to contact last night’s debaters and try to continue with a subject they’d already finished.
But no one's forcing anyone to see this version of it. It's just a link they can skip past, although if enough people upvote this post to keep it on the front page, then apparently a significant chunk of the readership hasn't gotten tired of seeing it yet.
The top answer will almost always be an explanation of why the asker is wrong to want to do the thing they want to do. But that's not most answers, or most answerers, just the top ones.
The bottom answer will almost always be an honest attempt to write code that answers the question. Sometimes the code doesn't work, or needs explanation: these problems will be solved a couple answers up from the bottom.
My technique for years has been to click the SO link in search results, then hit End on my keyboard to jump to the bottom of the page. This is a little slower than reading a cached LLM answer, but faster than waiting for the LLMs to generate something.
Since this question has received so much attention, even the bottom answer has a positive score. Otherwise the pattern looks typical.
[0]https://stackoverflow.com/questions/1732348/regex-match-open...
Source: https://meta.stackoverflow.com/questions/426250/unanswered-q...
SO solved a problem, that problem is now gone
foobarian•1d ago
Actually wonder if replacing all the moderators with LLM would be an improvement.
NoMoreNicksLeft•1d ago
There are people at the periphery of my life that I suspect are little more than biological LLMs. Their drivel has much in common with AI slop. I'm sure some of you have had similar experiences.
>Actually wonder if replacing all the moderators with LLM would be an improvement.
This might be like putting Skynet in charge of traffic court penalties.
ourmandave•1d ago
The other thing is it will say, "New Contributor, be nice." and their question will have -5 or more downvotes. I think that's because the auto closer needed -5 to trigger or something. I could be wrong. Either way, to a newbie it just looks like they're piling on hate for no reason.
zahlman•23h ago
Okay, presumably you're familiar with the experience of seeing these links. When you saw them, how much effort did you put into opening them in new tabs and checking whether they answer your question? I'm guessing, not a lot. The UI affordance isn't great. The semantic search is just not very good (and it has to filter through massive piles of dreck) and it constantly updates; it shows you answer counts and doesn't try to sort or filter what it shows you for quality; but most importantly someone writing a question isn't expecting it, and checking it breaks the flow of drafting the question.
> Yet them seem quick to play the Closed for Duplicate card.
My experience has been that the very large majority of these closures are correct, and the large majority of closures complained about on the meta site are very obviously correct and the objections often boil down to trivialities and/or a general opposition to the idea of closing duplicates at all (without attempting to understand the reasons for doing so).
> The other thing is it will say, "New Contributor, be nice." and their question will have -5 or more downvotes. I think that's because the auto closer needed -5 to trigger or something. I could be wrong. Either way, to a newbie it just looks like they're piling on hate for no reason.
No. It's because the downvotes are for quality rating and are explicitly not intended as a rebuke to the OP, regardless of account age or reputation. Also because they affect sort order; because non-duplicates can't be immediately closed, people who want the question closed (note: the explicit, sole purpose of closing a question is to prevent it from receiving new answers) have a vested interest in hiding it from the sort of users who try to answer everything without heed to quality or suitability.
ourmandave•16h ago
The last time I started to ask a question I was able to rubber duck the answer so didn't need to post it.
And if I saw -3 votes as a newbie and then watched it go to -4 and -5, that sure feels like a rebuke. You see that every time stackoverflow comes up. People always comment about how mean they are, intended or not.