Sounds more like censorship than moderation to me...
You can moderate comments on your own blog however you'd like.
Thank you so much for that permission.
(This is an example of sarcasm: it's used as a form of criticism, expressing disagreement by saying the exact opposite of what is actually meant. For now, this could serve as a test for human-written text, because the LLM slop that I read is typically asskissingly obsequious, whereas humans are often not that friendly to each other.)
[0] https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-n...
LLMs are going to reduce the value of bullshit. Look at how they're already decimating the marketing industry!
I just bullshitted those last couple of sentences, though.
But yeah. The vast majority of user generated content on the big platforms was already very loosely moderated, and was already mostly trash.
The platforms are just going to keep on doing what they always do, which is optimize for engagement. It's not the crappy AI comments I'm worried about, it's the good ones. These things will become much better than humans at generating clickbait, outrage, and generally chewing up people's time and sending their amygdalas into overdrive.
I think we're going to keep getting more of what we have now, only more optimized, and therefore worse for us. As the AIs get good we will evolve an even more useless, ubiquitous, addictive, divisive, ad-revenue-driven attention economy. Unplugging your brain will be harder to do but even more worth doing. Probably most people still will not do it. Getting serious dystopia vibes over all this.
One of the answers to “how do we solve this mess” was “climate change”. (Dealing with depressing things does funny things to humans).
One report on cyber security (which had Bruce Schneier as an author) showed that LLMs make hitherto unprofitable phishing targets profitable.
There’s even a case where an employee didn’t follow their phishing training, clicked on a link, and ended up in a Zoom call with their team members, transferring a few million USD to another account. Except everyone else on the call was faked.
This is the stuff on the fraud and cyber crime axis; forget the mundane social media stuff. We’re at the stage where kids are still posting basic GenAI output after prompting “I think vaccines are bad and need to warn people”. They are going to learn FAST how to mask this content. Hoo boy.
Dystopia vibes? It’s like looking into the abyss and seeing the abyss reach out to give you a hug.
https://www.youtube.com/watch?v=-gGLvg0n-uY
1) Even telephone calls will become totally untrustworthy -->
2) Mandatory digital identity verification for all humans, at all times -->
3) Total control and censorship, the end of what we think of as the Internet today.
Once the algorithms predominantly feed on their own shit the bazillion dollar clown party is over.
Maybe one day we will have organic LLMs guaranteed to be fed only human generated content.
Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.
I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.
I mean, I care if meetup.com has real people, and I care if my kids’ school’s Facebook group has real people, and other forums where there is an expectation of online/offline coordination, but Hacker News? Probably not.
And use cases like bringing up an issue on HN to get companies to reach out to you and fix it would probably become much harder with LLMs taking up the bandwidth.
On the internet, maybe you have people using character.io, or other complex prompts to make the comments sound more diverse and personal. Who knows.
I wonder how many different characters you would need on a forum like hacker news to pass a sort of collective Turing test.
My expectation would be that anyone going through the effort to put a LLM generated comment bot online is doing it for some ulterior motive, typically profit or propaganda.
Given this, I would equate not caring about the provenance of the comment to not caring whether you're being intentionally misinformed for some deceptive purpose.
This was not always the case. I used to be a Grade A asshole, and have a lot to atone for.
I also like to make as much of my work open, as I can.
>In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.
Yep, that is the way.
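To make the web-of-trust half of that concrete, here is a toy sketch, purely illustrative: the vouch graph, hop limit, and function names are made up, and real systems like PGP's web of trust and proper reputation systems do far more. The idea is just that "trusted" is derived from who vouches for whom, rather than from any central authority:

    # Toy sketch of web-of-trust style vouching (illustrative only).
    from collections import deque

    # Hypothetical vouch graph: each account lists accounts it has personally verified.
    VOUCHES = {
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": {"dave", "eve"},
        "dave": set(),
        "eve": set(),
    }

    def is_trusted(root: str, target: str, max_hops: int = 2) -> bool:
        """Trust `target` if it is reachable from `root` within `max_hops` vouches."""
        frontier, seen = deque([(root, 0)]), {root}
        while frontier:
            node, depth = frontier.popleft()
            if node == target:
                return True
            if depth == max_hops:
                continue
            for nxt in VOUCHES.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return False

    print(is_trusted("alice", "dave"))     # True: alice -> bob -> dave
    print(is_trusted("alice", "mallory"))  # False: no vouch path

A reputation system would presumably layer on top of something like this, weighting how much each vouch counts instead of treating them all equally.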
Also, LLMs will help us create new languages or dialects from existing languages, with the purpose of distinguishing the inner group of people from the outer group of people, and from the outer group of LLMs as well. We have been in a language arms race for that particular purpose for thousands of years. Now LLMs are one more reason for the arms race to continue.
If we focus, for example, on making new languages or dialects which sound better to the ear, LLMs have no ears; it is always humans who will be one step ahead of the machine, provided that the language evolves non-stop. If it doesn't evolve all the time, LLMs will have time to catch up. Ears are some of the most advanced machinery in our bodies.
BTW, I am right now making a program which takes a book written in Ancient Greek and automatically creates an audiobook or videobook using Google's text-to-speech (the same speech you get on the Google Translate website).
I think people in the future will be addicted to how new languages sound or can be sung.
Relevant meme video (which watching is in my opinion worth your time):
Raiden Warned About AI Censorship - MGS2 Codec Call (2023 Version)
https://www.youtube.com/watch?v=-gGLvg0n-uY
Worse, it won’t work. We are already able to create fake human accounts, and it’s not even a contest.
And with LLMs, I can do some truly nefarious shit. I could create articles about the discovery of an unknown tribe in the Amazon, populate some sparsely staffed national-language Wikipedia edition with news articles, substantiate the credentials of a fake anthropologist, and use that identity to have a bot interact with people.
Heck I am bad at this, so someone is already doing something worse than what I can imagine.
Essentially, we can now cheaply create enough high-quality supporting evidence for proof of existence. We can spoof even proof-of-life photos to the point that account-takeover resolution tickets can't be sure whether the selfies are faked. <Holy shit, I just realized this. Will people have to physically go to Meta offices now to recover their accounts???>
Returning to moderation, communities online, and anonymity:
The reason moderation and misinformation have been the target of American Republican Senators is that the janitorial task of reducing the spread of conspiracy theories touched the conduits carrying political power.
That threat to their narrative production and distribution capability has unleashed a global campaign targeting moderation efforts and regulation.
Dumping anonymity requires us to basically jettison ye olde internet.
You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.
Those systems also use astroturfing. It was not invented with LLMs.
See my other comment https://news.ycombinator.com/item?id=44130743#44150878 for how this is "bleak" mostly if you were comfortable with your Overton window and censorship.
No one is trying to take away your right to host or participate in anonymous discussions.
> Those systems also use astroturfing. It was not invented with LLMs.
No one is claiming that LLMs invented astroturfing, only that they have made it considerably more economical.
> You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.
Sure, man. Funny that I mentioned "web of trust" as a potential solution, a fully decentralised system designed by people unhappy with the centralised nature of PKI. I guess I must be working in deep cover for my corporate overlords, cunningly trying to throw you off the scent like that. But you got me!
If you want to continue drinking from a stream that's been becoming increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff and social systems used by those people are likely to adjust accordingly.
I'm sorry man, I can't trust anything you say unless you post your full name and address. I can also throw some useless strawman quip to distract the conversation.
No one is forcing you to stay up at night or worry about this, so don't.
> If you want to continue drinking from a stream that's been becoming increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff and social systems used by those people are likely to adjust accordingly.
Lol. The naivety of people like you and throwing these cute dates to start having a semblance of critical reading is hilarious. Not that it helps you since you immediately want authoritarian solutions and any challenge is met with a strawman.
But hey, give us more "sarcasm".
I'll post your comment because it's worth reading in full and going back to your "are you crazy? No one is saying X" fallback.
> I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity. If people can't cheaply and reliably distinguish human output from LLM output, and people care about only talking to humans, we will need to establish authenticity via other mechanisms. In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.
> Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.
> I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.
Lol.
I wouldn't let that bother you, though. Life must be exciting when you know that everyone else is secretly hellbent on authoritarianism.
Strawman. Your web-of-trust comment doesn't exonerate you from proposing to ban anonymity, doing the bidding not of big tech, but of opportunistic lobbying tech and governments.
Web of trust is not "good" if you try to impose it on others; it was a comment of "create your own thing with your own friends" instead of pushing your bullshit onto us.
> I wouldn't let that bother you, though. Life must be exciting when you know that everyone else is secretly hellbent on authoritarianism.
I mean. If you were capable of actual arguments instead of just strawmen... Your life would be exciting too. As it is, you just parrot narratives.
For myself, I usually link to my own stuff; not because I am interested in promoting it, but as relevant backup/enhancement of what I am writing about. I think that a link to an article that goes into depth, or to a GitHub repo, is better than a rough (and lengthy) summary in a comment. It also gives others the opportunity to verify what I say. I like to stand behind my words.
I suspect that more than a few HN members have written karmabots, and also attackbots.
https://www.youtube.com/watch?v=4VrLQXR7mKU
Previously (6 months ago but didn't trend, perhaps due for a repost?):
Thanks!
Thanks for having the courage to post under your real name, also, as you mentioned in another thread of yours I was reading. It's been a growth experience for me as well.
For myself, I have no issue with subtitled films, but a lot of my countrymen are not comfortable with it.
The main issue that I have with foreign (to me) films is that the cultural frame can be odd. That also happens with British and Australian stuff.
For me, that is the magic of film, but I wonder if reality recedes as we approach, via biases, oversights, and the key design 'flaw as feature' of the camera, that it only captures what has already been framed.
I'm not sure LLMs deviate from a long-term trend of increasing volume of information production. They certainly do break the little bubble we had from the early 1990s until 2022/3, where you could figure out you were talking to a real human based on the sophistication of the conversation. That was nice, as was Usenet before spammers.
There is a bigger question of identity here. I believe the mistake is to go down the path of photo ID, voice verification, video verification (all trivially bypassable now). Taking another step further with Altman's eyeball thing is another mistake, since a human can always be commandeered by a third party. In the long term, do we really care whether the person we are talking to is real or an AI model? Most of the conversations generated in the future will be AI. They may not care.
I think what actually matters more is some sort of larger history of identity and ownership, matched to whatever one wishes (I see no problem with multiple IDs, nicks, avatars). What does this identity represent? In a way, proof of work.
Now, when someone makes a comment somewhere, if it is just a throwaway spam account there is no value. Sure, the spammers can and will do all of the extra work to build a fake identity just to promote some bullshit product, but that already happens with real humans.
Not so sure I'd call it "nice."
I am ashamed to say that I was one of the reasons that it wasn't so "nice."
For web spam this was HTTPS. For account spam this is phone-number 2FA. I think requiring a form of ID or a payment card is the next step.
Because if there's one place where Google didn't solve spam, it's on YT's comments
I do believe that this problem is very self-inflicted (and perhaps even desired) by YouTube:
- The way the comments on YouTube are structured and ordered makes it very hard to have deep discussions on YouTube
- I think there is also a limit on the comment length on YouTube, which again makes it hard to write longer, sophisticated arguments.
- Videos for which a lot of comments are generated tend to get promoted by YouTube's algorithm. Thus YouTubers encourage viewers to write lots of comments (and thus also a lot of low-quality comments), i.e. YouTube incentivizes videos being "spammed" with comments. The correct solution would be to incentivize few but high-quality comments (i.e. de-incentivize comments that contribute nothing valuable, i.e. nothing worth your time to read). That would make it much easier to detect and remove the (real) spam among them.
At some point, the cost you impose to dissuade spammers becomes a huge risk for humans who make mistakes of any sort.
At this point users mutiny.
LLMs increase the burden of effort on users to successfully share information with other humans.
LLMs are already close to indistinguishable from humans in chat; bots are already better at persuading humans[1]. This suggests that users who feel ineffective at conveying their ideas online are better served by having a bot do it for them.
All of this effectively puts a fitness function on online interactions, increasing the cognitive effort required for humans to interact or be heard. I don't see this playing out in a healthy manner. The only steady state I can envision is one where we assume that we ONLY talk to bots online.
Free speech and the market place of ideas, sees us bouncing ideas off of each other. Our way of refining our thoughts and forcing ourselves to test our ideas. This is the conversation that is meant to be the bedrock of democratic societies.
It does not envisage an environment where ideas are exchanged with a bot.
Yes yes, this is a sky is falling view - not everyone is going to fall off the deep end, and not everyone is going to use a bot.
In a funny way, LLMs will outcompete average forum critters and trolls for their ecological niches.
We are at the stage where it’s still mostly online but the first ways this will leak into the real world in big ways are easy to guess. Job applications, college applications, loan applications, litigation. The small percentage of people who are savvy and naturally inclined towards being spammy and can afford any relevant fees will soon be responsible for over 80 percent of all traffic, not only drowning out others but also overwhelming services completely.
Fees will increase, then the institutions involved will use more AI to combat AI submissions, etc. Schools/banks/employers will also attempt to respond by networking, so that no one looks at applicants directly any more, they just reject if some other rejected. Other publishing from calls for scientific papers to poetry submissions kind of progresses the same way under the same pressures, but the problem of “flooded with junk” isn’t such a new problem there and the stakes are also a bit lower.
What's the point in even having comments sections? The CBC here in Canada shut theirs off years ago and frankly the world is better for it. News articles are a swamp of garbage comments, generally.
The future of social engagement online is to go back to smaller, registration-required, moderated forums.
I suspect even the 'well I never trust online comments!' crowd here is not as immune to propaganda as they'd like to believe themselves to be
I am more concerned about voice alignment efforts, like someone creating 10k real-ish accounts over time that attempt to contribute, but do so just to abuse upvote features to change perception. Ultimately, figuring out what is a real measure of popularity, and what is just a campaign to, say, send people to your play, is going to get even harder than it is now.
There is also a dependence on the culture. For example, what in the USA would be considered a "recommendation" (such as on Reddit) would often be considered "insanely pushy advertising" in Germany.
With this in mind, wouldn't a partial solution also be to become less tolerant of such pushy advertisement in such places (say, on Reddit), even when it is done by honest users?
[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
^~~ tirreno guy is here
Thank you for mentioning tirreno. Spam is one of the use cases for tirreno. I'm not sure why you'd call this spamming, as the tool is relevant to the problem.
I’m not sure exactly why we are still waiting for the obviously possible ad-hominem and sunk-cost-fallacy detectors, etc. For the first time we now have the ability to actually build a threaded comment system that (tries to) insist on rational, on-topic discussion. Maybe part of that is that we haven’t actually made the leap yet to wanting to censor non-contributing but still-human “contributors” in addition to spam. I guess shitposting is still part of the “real” attention economy and important for engagement.
The apparently on-topic but subtly wrong stuff is certainly annoying, and in the case of vaguely relevant and not obviously commercial misinformation or misapprehension, I’m not sure how to tell humans from bots. But OTOH you wouldn’t actually need that level of sophistication to clean up the cesspool of most YouTube or Twitter threads.
It would also presume that an LLM knows the truth, which it does not. Even in technical and mathematical matters it fails.
I do not think an LLM can even accurately detect ad-hominem arguments. Is "you launched a scam coin scheme in the first days of your presidency and therefore I don't trust you on other issues" an ad-hominem or an application of probability theory?
We might struggle to differentiate information vs disinformation, sure, but the above mentioned new super powers are still kind of remarkable, and easily accessible. And yet that “information only please” button is still missing and we are smashing simple up/down votes like cavemen
Actually when you think about even classic sentiment analysis capabilities it really shows how monstrous and insidious algorithmic feeds are.. most platforms just don’t want to surrender any control to users at all, even when we have the technology.
The moderators will need to pay for LLM service to solve a problem created by malicious actors who are paying for LLM service also? No wonder the LLM providers have sky-high valuations.
People are already working on LLMs to assist with content moderation (COPE). Their model can apply a given policy (e.g. a harassment policy) to a piece of content and judge whether it matches the criteria. So the tooling will be made, one way or another.
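For a feel of what that policy-as-prompt plumbing looks like, here is a minimal sketch against an OpenAI-style chat API. To be clear, this is not COPE's actual implementation; the policy text, model name, and prompt wording below are placeholders I made up:

    # Minimal policy-as-prompt moderation sketch (illustrative assumptions throughout).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder policy text, not taken from any real tool's policy set.
    HARASSMENT_POLICY = (
        "Content violates this policy if it attacks or demeans a person or group, "
        "contains threats, or encourages others to pile on a target."
    )

    def violates_policy(comment: str) -> bool:
        """Ask the model whether `comment` matches the policy criteria."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "You are a content moderator. Answer only VIOLATES or OK."},
                {"role": "user",
                 "content": f"Policy:\n{HARASSMENT_POLICY}\n\nComment:\n{comment}"},
            ],
        )
        verdict = response.choices[0].message.content.strip().upper()
        return verdict.startswith("VIOLATES")

    print(violates_policy("You people are all idiots and deserve what's coming."))

How reliable those verdicts are is a separate question, but the plumbing itself is trivial now, which is the point: the tooling will exist whether or not we want it.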
My support for the thesis is also driven by how dark the prognostications are.
We won’t be able to distinguish between humans and bots, or even facts soon. The only things which will remain relatively stable are human wants / values and rules / norms.
Bots that encourage pro-social behavior, norms, and more are definitely needed, simply as the natural survival tools we will need.
It will not stop misinformation either.
Verification is expensive and hard, and currently completely spoof-able. How will a Reddit community verify an ID? In person?
If Reddit itself verifies IDs, then nations across the world will start asking for those IDs and Reddit will have to furnish them.
As for misinformation, as long as all actors are known and are real people, they should be allowed to speak. What's not good is a flood of fakes.
And even then, it doesn't stop harassment and bullying. We already know this from facebook, where people's IDs are known. Going for legal redress requires court time and resources to fight the case.
The core of the misinformation doom loop is when a popular misinformation narrative is picked up and amplified by well-known personalities. This is crucial in making it a harmful force in our politics.
So having known actors makes very little difference to misinformation gumming up our information markets.
It depends on your privacy settings. If people you don't know can comment on your posts, then they're not really verified (ie, you never accepted a friend request from them). In FB communities limited only to friends, I suspect there is much less bullying or harassment. But that kind of community is hard to create on FB, by design.
>So having known actors makes very little difference to misinformations gumming up our information markets.
If a verified actor can be permanently banned from a platform, then of course that will reduce misinformation on that platform by systematically excluding the originators. That includes people who routinely spread misinformation they receive off-platform.
>If a verified actor can be permanently banned from a platform, then of course that will reduce misinformation on that platform by systematically excluding the originators.
Eh. Yes, in a simplified producer/consumer model of this. I'm personally all for removing people who are not engaging in good faith.
Thing is that misinformation is now firmly linked to political power.
Compared to facts? Misinfo is faster and cheaper to produce, yet perfectly suited to maximize engagement among the target audience. A key moment in that process is when a fringe narrative is picked up by a key player in the media ecosystem.
Removing such a node, is to take up arms against the political forces using misinformation to fuel their narratives.
Not saying it shouldn't be done if it can. Just that we need a better set of understandings and tools before we can make this case.
>It will not stop misinformation either.
I'm open to any evidence that either statement is true. The rational argument that verification will reduce harassment, bullying, and misinformation is that the verified perpetrator can be permanently banished from the community for anti-social behavior, whereas an anonymous perpetrator can simply create a new account.
Do you have a rational counter-argument?
>If Reddit itself verifies IDs, then nations across the world will start asking for those IDs and Reddit will have to furnish them.
Every community will have to decide whether the benefits of anonymity outweigh the risks. On the whole, I think anonymity has been a net negative for online community, but I understand that others may disagree. They'll still be free to join anonymous communities. But I suspect that large-scale, verified communities will ultimately be the norm, because for everyday use people will prefer them. Obviously, they work better in countries with healthy, functional liberal democracies.
I can say this from experience moderating, as well as research. I'll take the easy case of real world bullying first - people know their bullies here. It does not stop bullying. Attackers tend to target groups/individuals that cannot fight back.
Now, you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things. This 2013 paper studied incivility in anonymous and non-anonymous forums [1]. Incivility was lower in the case where identities were exposed; however, this did not stop incivility.
The Australian eSafety Commissioner has this to say as well: > However, it is important to note that preventing or limiting anonymity and identity shielding online would not put a stop to all online abuse, and that online abuse and hate speech are not always committed by anonymous or fake account holders. [2]
Now to bring GenAI into the mix: the cost of spoofing a selfie has gone down quite a bit, if not become outright cheap. Verification of ID will require being able to manually inspect an individual. This means the costs of verification are VERY labor intensive. India has a biometric ID program, and we are talking about efforts on that scale. And even then, it doesn't stop false IDs from being created.
Combining these various points, ditching anonymity would necessitate a large effort in verifying all users, killing off the ability for people to connect on anonymous forums (LGBTQ communities for example) for some reduction in harassment.
This also assumes that people rigorously check your ID when it's being used, because if there is any gap or loophole, it will be used to create fake IDs to spam, harass, or target people.
[1] https://www.researchgate.net/publication/263729295_Virtuous_...
[2] https://www.esafety.gov.au/industry/tech-trends-and-challeng...
> On the whole, I think anonymity has been a net negative for online community, but I understand that others may disagree.
I would like to agree with you, but having moderated content myself - people do not give a shit and will say whatever they want, because they damned well want you to know it.
Take misinformation; I used to think the volume of misinformation was the issue. It turns out that misinformation amplification is driven more by partisan or momentary political needs than by our improved ability to churn out quantities of it.
That said, anonymity is not a necessary condition for a safe environment. Pseudonymity with sufficient protections against disclosure will work just fine. If a platform only knows that there's a real person behind a nickname and can reliably hold that person accountable, that is enough. They don't need a name, just some identifier from an identity provider.
As for misinformation, it is not a moderation issue and should not be solved by platforms. You cannot and should not suppress political messages; they will find their way. It's a matter of education, political systems, and counter-propaganda. The less effective the former are, the more effective propaganda is in general.
But in an online forum where the bully is known and can be banned/blocked permanently, everyone can fight back.
>Now you asked for evidence that either statement was true, but then spoke about reducing harassment. These are not the same things.
Of course there will continue to be harassment on the margins, where people could reasonably disagree about whether it's harassment. But even in those cases, the victims can easily and permanently block any interaction with the harasser. Which removes the gratification that such bad actors seek.
>Incivility was lower in the case where identities were exposed, however this did not stop incivility.
I think we're getting hung up on what 'stop' means in this context. If I have 100 incidences per day of incivility before verification, and only 20/day after, then I've stopped 80 cases/day. Have I stopped all incivility? No, but that was not the intent of my statement. I think it will drastically reduce bullying and misinformation, but there will always be people who come into the new forum and push the envelope. But they won't be able to accumulate, as they are rapidly blocked and eventually banned. The vast majority of misinformation and bullying comes from a small number of repeat offenders. Verification prevents the repetition.
Have you moderated in a verified context, where a banned actor cannot simply create a new account? I feel like there are very few such platforms currently, because as you point out, it's expensive and so for-profit social media prefers anonymity. But if we're all spending a significant part of our lives online, and using these platforms as a source of critical information, it's worth it.
One context where everyone is verified is in a typical business---your fellow employees, once fired, cannot simply create another company email account and start over. And bad apples who behave anti-socially are weeded out. People generally behave civilly to each other. So clearly such a system can and does work, most of us see it on a daily basis.
Firstly, please acknowledge that knowing the identity of the attacker doesn't stop bullying. Ignoring or papering over this deprives the argument of the support it needs to be useful in the real world.
There is a reason I pointed out that it doesn’t stop harassment, because it disproves the contention that anonymity is the causal force for harassment.
The argument that removing anonymity reduces harassment is supported, but it results in other issues. In a fully de-anonymized national social media platform, people will target minorities, immigrants, and people from other countries, i.e. whatever is the acceptable jingoism and majority viewpoint. Banning such conversation will put the mods in the crosshairs.
And yes, if it reduced harassment by 80%, that would be something. However the gains are lower (from that paper, it seemed like a 12% difference).
——-
I am taking great pains to separate out misinfo from bullying / harassment.
For misinformation, the first section, about minute 3 to minute 4, where Rob Faris speaks, does a better job of articulating the modern mechanics : https://www.youtube.com/watch?v=VGTmuHeFdAo
The larger issue for misinformation, is that it has utility for certain political groups and users today. It allows them the ability to create narratives and political speech faster and more effectively.
Making a more nuanced point will end up with me elaborating on my personal views on market capture for ideas. The shortest point i can make about misinformation, journalism and moderation is this:
Factual accuracy is expensive, and an uncompetitive product, when you are competing in a market that is about engagement. Right now, I don’t see journalism, science, policy - slow, fact and process limited systems, competing with misinformation.
Solving the misinformation problem will require figuring out how to create a fair fight / fair competition, between facts and misinformation.
Since misinformation purveyors can argue they have freedom of speech, and since they are growing increasingly enmeshed with political power structures, simple moves like banning are shifting risk to moderators and platforms - all of whom have a desire to keep living their lives without having to be harassed.
For what it’s worth, I would have argued the same thing as you until a few scant months ago. The most interesting article I read showed that the amount of misinformation that is consumed is a stable % of total content consumed. Indicating that while supply and production capacity of misinformation may increase, the demand is limited. This coupled with the variety of ways misinformation can be presented, and the ineffectiveness of fact checkers at stopping uptake, forced a rethinking of how to effectively address all that is going on.
——-
I don’t have information on how behavior is in a verified context. I have some inklings of seeing this at some point, and eventually being convinced this was not a solution. I’ll have to see if I end up finding something.
I think one of the issues is that verification is onerous, and results in a case where you can lose your ID and then have all the real world challenges that come with it, while losing the benefits that come from being online. There’s a chilling effect on speech in both directions. Anonymity was pretty critical to me being able to even learn enough to make the arguments I am making, or to converse with people here.
If there's a TL;DR to my position, it's that the ills we are talking about are symptoms of dysfunction in how our ecosystem behaves, so these solutions will only shift the method by which they are expressed. I would agree that it's a question of tradeoffs. To which my question is: what are we getting for the ground we are conceding?
A recent anecdote: an acquaintance of mine automated parts of his LinkedIn activity. After I liked one of his posts, I received an automatic message asking how I was doing. I recognized that the message didn't match a personal tone, but I replied anyway to catch up. He never responded, highlighting how people are automating the engagement process but can't really keep up with the manual follow-through.
I do recognize that the capability of bots in many spaces to hurt and impose costs is a real thing to contend with, but the paradigm shift is fascinating. Suddenly people need to question authority (LLM output). Awesome. You should've been doing that all along.
It will be a story or question with just enough hints at personal drama and non specifics to engage the community. The stories always seem like a mishmash of past popular posts.
They’re usually posted by brand new accounts that rarely if ever post a comment.
Some subs seem relatively free of them, others inundated with them.
Recently revisited on Peter's blog: https://www.rifters.com/crawl/?p=11220
I ended up going to my local Walmart to try one, and boy was it delicious! Sometimes things work out in life.
Also they get downvoted and hidden, alongside controversial but correct opinions.
The person who replied to this saying they have multiple accounts is shadowbanned - so clearly, they don't really have multiple accounts.