They are psychological, manipulative, influencing tools. They're like annoying wasps that appear out of nowhere and follow you around.
Nobody asked for this.
When it comes to StackExchange, use a Pi-hole and protect yourself from this barrage of irrelevant ads.
https://cloud.google.com/blog/topics/threat-intelligence/int...
Note: Ctrl+F for "malicious advertisements" in that article.
It's only mentioned in passing, as the purpose of the post was not to highlight that fact. But given that Google is an advertising company and they still mention it ...
Well... the advertisers did.
> They are psychological, manipulative, influencing tools.
The second paragraph is, in my opinion, also an accurate description of a huge number of people. By your argument that ads should not exist at all, shouldn't these people also not exist?
I mean, is there any genuine use case you can cover with SO that you cannot with your favorite LLM?
Because where an LLM falls short is the same place SO fell short: highly technical questions about a particular technology or tool, where your best chance of getting the answer you're looking for is asking in the project's GitHub repo or contacting the maintainers.
As I see it, the next step is a synthesis of the two, whereby StackOverflow (or a competitor) reverses its ban on GenAI [0] and explicitly accepts AI users. I'm thinking that, for moderation purposes, these would have to be explicitly marked as AIs and would have to be "sponsored" by a proven-human StackOverflow user in good standing. Other than that, the AI users would act exactly like human users, being able to add questions, answers, and comments, as well as to upvote and downvote other entries, based on the existing (or modified) reputation system.
I imagine, for example, that for any non-sensitive open-source project I'm using Claude Code on, I would give it explicit permission to interact on SO: for any difficult issue it encounters, it would try to find an existing question that might be relevant and, if so, try the answers there and upvote/comment on them; otherwise it would create a new question and either get good answers from others or self-answer it if it later found its own solution.
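To make that workflow a bit more concrete, here is a rough Python sketch of the agent-side loop, using the public Stack Exchange API (the v2.3 search/advanced and answers endpoints). The function names are mine, not anything SO or Claude Code actually provides, and the posting/upvoting steps would need an OAuth write token plus the hypothetical AI-flagged account, so they are only stubbed as comments.

    # Rough sketch of the agent loop described above: given an error message,
    # look for an existing Stack Overflow question via the public Stack Exchange
    # API, pull its top-voted answers, and otherwise fall back to drafting a new
    # question. Posting and voting require OAuth write access (not shown).
    import requests

    API = "https://api.stackexchange.com/2.3"
    SITE = "stackoverflow"


    def find_existing_question(query: str) -> dict | None:
        """Return the most relevant existing question for the query, if any."""
        resp = requests.get(
            f"{API}/search/advanced",
            params={"q": query, "site": SITE, "order": "desc", "sort": "relevance"},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        return items[0] if items else None


    def top_answers(question_id: int, limit: int = 3) -> list[dict]:
        """Fetch the highest-voted answers (with bodies) for a question."""
        resp = requests.get(
            f"{API}/questions/{question_id}/answers",
            params={"site": SITE, "order": "desc", "sort": "votes", "filter": "withbody"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("items", [])[:limit]


    def handle_difficult_issue(error_text: str) -> None:
        question = find_existing_question(error_text)
        if question:
            print("Trying existing question:", question["link"])
            for answer in top_answers(question["question_id"]):
                # The agent would try each answer, then upvote/comment on the one
                # that worked; both need an authenticated write token (not shown).
                print("Candidate answer score:", answer["score"])
        else:
            # No match: draft a new question for the sponsoring human to review
            # before it is posted under the AI-flagged account (flow not shown).
            print("No existing question found; drafting a new one for review.")


    if __name__ == "__main__":
        handle_difficult_issue("TypeError: cannot pickle '_thread.lock' object")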
[0] https://meta.stackoverflow.com/questions/421831/policy-gener...
Perhaps better than current models at detecting and pushing back when it sounds like the individual asking the question is thinking of doing something silly/dubious/debatable.
Volunteer admins with nothing better to do get their dopamine hit by closing questions for StackOverflow points, regardless of whether the supposed duplicate from 8 years ago actually still has the best answer and covers the nuances of the question now being asked.
There probably is still a space for an SO-style site to exist, but they'd need a drastic change of approach. LLMs (+ Reddit, I suppose?) have taken over most of the engineer-support role.
This rang so true to me, given that my answer from 4y ago was closed as a duplicate of one made 3m ago :D (no, the nuances were not considered, and the questions were ultimately too different)
https://github.com/gorhill/uBlock?tab=readme-ov-file#ublock-...
Given how ruthlessly this site treated everyone when it was relevant, not a single tear will be shed when the front page is a letter from the founder.
More and more, I think we need volunteer projects running the things we depend on the most: community-driven email, forums, social networks, and Q&A sites like Stack Overflow. A community-driven Stack Overflow could still run a job board, have the C# section be "Sponsored by Microsoft", or run a JetBrains ad. If you only have to pay for hosting, you need less ad revenue.
I can't believe we keep making progress. You know. Things get better and better as time goes on. Right?
Right?
...... Right?