Although if they were banned at the start of covid, during the Trump administration, then we're talking about 5 years.
No, it was not. It’s particularly silly to suggest this when we have a live example of such orders right now.
The companies were nudged. (And they were wrong to respond to public pressure.) The President, after all, has a “bully pulpit.” But there were no orders, no credible threats, and plenty of companies didn’t deplatform these folks.
It was not. No threats were made, and Twitter didn’t blindly follow the FBI’s guidance.
The simple truth is the leftist elements that wanted to control the debate were there in the White House and in Twitter’s San Francisco offices. Nobody had to be coerced, they were coördinating.
What was the implication? Twitter had no business in front of the federal government. They were wilfully complying.
That doesn't make it okay. But it's a total retconning of actual history to suggest this was government censorship in any form.
So... what sort of threat was this, that suddenly disappeared when Musk bought it? How credible was the threat if Musk was able to release the Twitter Files without repercussions from the Biden admin?
And the implied repercussions were for the employees who were in charge of removing content, not for the head honchos.
I’m not disputing that they coördinated. I’m challenging that they were coerced.
We wouldn’t describe Fox News altering a script on account of a friendly call from Miller and friends the “government ordering private companies” around. (Or, say, Florida opening their criminal justice records to ICE the federal government ordering states around.) Twitter’s leadership and the Biden administration saw eye to eye. This is a story about a media monoculture and private censorship, not government censorship.
https://apnews.com/article/meta-platforms-mark-zuckerberg-bi...
https://open.spotify.com/episode/3kDr0LcmqOHOz3mBHMdDuV?si=j...
https://x.com/cidrap/status/1420482621696618496 ("Our Osterholm Update podcast episode (Jul 22) was removed for “medical misinformation.”" (2021))
Most ironic thing I've ever seen. I still recall it perfectly, though it's been four years. Never, ever trust censorship algorithms or the people who control them: they are just dumb parrots that suppress all discussion of an unwanted topic, without thought or reason.
Which is rather different than scanning actual private files.
https://blog.youtube/news-and-events/managing-harmful-vaccin...
From the two links in the post, Google fleshes it out in great detail, with many examples of forbidden thought.
I know that some services do this in addition to account ban.
Even if one does argue that CSAM should result in hardware and IP bans, there's no reason that can't be a sole exception to a wider prohibition on such bans.
We don’t have the resources for this, even when the FBI isn’t being purged and sent to Home Depots. Unrestricting IPs means a boom for CSAM production and distribution.
This adds to their risks and costs. That tips the economic balance at the margin. Actually going after all creators would require an international law-enforcement effort for which, frankly, there isn't political capital.
Charging would-be bank robbers a fee to do practice runs of breaking into a vault adds to their costs; somehow that doesn't seem like an effective security measure.
> Actually going after all creators would require an international law-enforcement effort for which, frankly, there isn't political capital.
I'm not talking about going after all creators, just the ones you have identifying information for, who are so continuously pumping out such quantities of CSAM that it's impossible to stop the firehose by removing the content.
If you don't have the political capital to go after them, again you have bigger issues to deal with.
…this is literally how we police bank theft. Most bank thieves are never caught because they can do it online from an unresponsive jurisdiction.
> just the ones you have the identifying information for
Sure. You’re still going to have a firehose of CSAM, and worse, newly-incentivised producers, if you turn off moderation.
It's been a long time since I had anything remotely to do with this (thankfully) but... I'm pretty sure there are lots of resources devoted to this, including the major (and even small) platforms working with various authorities to catch these people? Certainly to say they're "free to operate" requires some convincing.
We don’t have the resources and we don’t want to divert them.
> banning IPs is a counterproductive strategy to combat CSAM and it is a terrible justification for permitting IP bans
The simple reason for banning Russian and Chinese IPs is the same as the reason I block texts from Vietnam. I don’t have any legitimate business there and they keep spamming me.
This actually surprised me because I thought (and maybe still think) that it was Google employees that led the charge on this one.
I wouldn't trust any public statement from these companies once that kind of threat has been thrown around. People don't exactly want to go to prison forever.
For Google now to pretend Biden twisted their arm is pretty rich. They'd better have a verifiable paper trail to prove that, if they expect anyone with a memory five years long to believe it.
The Twitter files showed direct communications from the administration asking them to ban specific users like Alex Berenson, Dr. Martin Kulldorff, and Dr. Andrew Bostom: https://cbsaustin.com/news/nation-world/twitter-files-10th-i...
Meta submitted direct communications from the administration pressuring them to ban people as part of a congressional investigation: https://www.aljazeera.com/news/2024/8/27/did-bidens-white-ho...
It would be more surprising if they left Google alone.
US money wasn't supposed to be used to fund that kind of research. So people violated policy and evaded detection until the leak happened. How? Who? Would different audit controls have helped?
There was a cover-up after the fact. Again, how did it work and who was involved? What could have made it less effective?
The lab accident itself is the least interesting part, it's all the bureaucratic stuff that really matters. For boring generic bureaucratic-effectiveness reasons, not any "someone tried to do a bioweapon" silliness.
What the Biden admin did was not acceptable, and even at the time I got plenty of heat from HN for thinking that it was a sketchy loophole for the government to use, that it was against the spirit of the law.
I'm trying to emphasize the distinction because the companies' self-serving language is going to be abused to claim that the current admin - which has just threatened to sue a TV channel for bringing back a show they tried to threaten the channel into getting rid of - is actually a defender of free speech.
The Executive Branch may only take actions that the Constitution allows (same as the Legislative). These are called the Enumerated Powers. Some of them are simple, like requiring the President to give a State of the Union Address. Others are more complex, like allowing the Executive Branch to negotiate treaties. But it then says “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” So anything not in the Executive Branch’s enumerated powers is illegal for them to do.
Of course, one of their enumerated powers is to execute any law passed by the Legislature. That is quite open–ended, because Congress can pass any kind of law that they want, right? Well, no. The First Amendment specifically says “Congress shall make no law… abridging the freedom of speech, or of the press.” So we know that there is no law authorizing the Executive Branch to censor speech or the press. Thus, the actions of the Biden Administration were illegal. They were explicitly forbidden from taking those actions by the Constitution, and they did it anyway.
We also tried letting the propaganda machine full-blast those lies on the telly for the past 5 years.
For some reason, that didn't work either.
What is going to work? And what is your plan for getting us to that point?
People can post all sorts of crazy stuff, but the algorithms do not need to promote it.
Countries can require Algorithmic Impact Assessments and set standards of compliance with recommended guidelines.
If 'silencing people' doesn't work- so online platforms aren't allowed to remove anything? Is there any limit to this philosophy? So you think platforms can't remove:
* Holocaust denial?
* Clothed underage content? Reddit banned r/jailbait, but you think that's impermissible? How about clothed pictures of toddlers but presented in a sexual context? It would be 'silencing' if a platform wanted to remove that from their private property?
* Bomb or weapons-making tutorials?
* Dangerous fads that idiotic kids pass around on TikTok, like the blackout game? You're saying it's not permissible for a platform to remove dangerous instructionals specifically targeted at children?
* How about spam? Commercial advertising is legally speech in the US. Platforms can't remove the gigantic quantities of spam they suffer from every day?
Where's the limiting principle here? Why don't we just allow companies to set their own rules on their own private property, wouldn't that be a lot simpler?
In the open, it becomes normalized and draws in more people. Would you rather have some crazies in the corner, or 50% of a population believing something false because it became normalized?
The only people benefiting from those dark concepts are those with a financial stake. They make money from it, pushing the negatives to sell their products and cures. Those who fight against it gain nothing, and it costs them time and money. That is why it is a losing battle.
Few countries have more restrictions on Nazi speech than Germany. And yet not only is AfD a thing, but it keeps growing.
How about "If the content isn't illegal, then the government shouldn't pressure private companies to censor/filter/ban ideas/speech"?
And yes, this should apply to everything from criticizing vaccines, denying election results, being woke, being not woke, or making fun of the President on a talk show.
Not saying every platform needs to become like 4chan, but if one wants to be, the feds shouldn't interfere.
In this case it wasn't a purely private decision.
1) They are public corporations, legal creations of the state, and they benefit from certain protections of the state. They also have privileged access to some public infrastructure that other private companies do not have.
2) By acting at the behest of the government, they were agents of the government for free-speech and censorship purposes.
3) Being monopolies in their respective markets, they must respect certain obligations, the same way public utilities do.
2. This has already been adjudicated and this argument lost https://en.wikipedia.org/wiki/Murthy_v._Missouri
3. What market is Youtube a monopoly in?
The 6–3 majority determined that neither the states nor other respondents had standing under Article III, reversing the Fifth Circuit decision.
In law, standing or locus standi is a condition that a party seeking a legal remedy must show they have, by demonstrating to the court, sufficient connection to and harm from the law or action challenged to support that party's participation in the case.
Justice Amy Coney Barrett wrote the opinion, stating: "To establish standing, the plaintiffs must demonstrate a substantial risk that, in the near future, they will suffer an injury that is traceable to a government defendant and redressable by the injunction they seek. Because no plaintiff has carried that burden, none has standing to seek a preliminary injunction."
The Supreme Court did not say that what was done was legal, they only said that the people who were asking for the injunction and bringing the lawsuit could not show how they were being or going to be hurt.
That "and/or" is doing a lot of work here. There's a huge difference between government censorship and forcing private companies to host content they don't want to host on servers they own.
Then again, Alphabet is now claiming they did want to host it and mean old Biden pressured them into pulling it so if we buy that, maybe it doesn't matter.
> What if they started banning tylenol-autism sceptical accounts?
What if it's pro-cannibalism or pedophilia content? Everyone has a line, we're all just arguing about where exactly we think that line should be.
I've already described above that even in this thread there's a sentiment that, "as long as somebody has gained coercive power legitimately, then it is within their right to coerce." I see terms thrown around like "if somebody owns" or "if somebody is the CEO of...", which speaks to the growing air of illiberality and liberal authoritarianism that is a direct result of the neoliberal assault of founding and funding thousands of Cato Institutes, Adam Smith Societies, and Heritage Foundations since the neoliberal turn in the late 1960s. We've legitimized domination ethics as an extension of the hungry rights of pseudotyrants at the expense of people in general.
I wonder what people in general might one day do about this? I wonder if there's a historical precedent for what happens when people face oppression and the degradation of common cultural projects?
https://en.wikipedia.org/wiki/Russian_Revolution#October_Rev...
If I choose to put a Kamala sign in my yard and not a Trump sign, that’s an expression of free speech.
If the marketing company I own decides to not work for causes I don’t personally support, that’s free speech.
If the video hosting platform I’m CEO of doesn’t host unfounded anti-vax content because I think it’s a bad business move, is that not also free speech?
> we want to be sure that you have a handle on vaccine hesitancy generally and are working toward making the problem better. This is a concern that is shared at the highest (and I mean highest) levels of the White House
Saying you want to make sure they will censor these videos is a threat, and then they said that Biden was behind this to add legitimacy to the threat.
If it was just a friendly greeting, why would they threaten YouTube with Biden's name? If YouTube did this willingly, there would be no need to write such a threatening message saying they want to make sure YouTube censors these videos.
You can read the whole report here if you wanna see more: https://judiciary.house.gov/sites/evo-subsites/republicans-j...
And if you don't see that as a threat, imagine someone in the Trump administration had sent it; would you still think it's not a threat? Of course it's a threat. It makes no sense to write that way otherwise; you would just say you wanted to hear how things were going, not say you want to make sure they do this specific thing while invoking the President's powers.
We don't need to imagine anything. The chair of the FCC publicly threatened ABC over Kimmel. This morning Trump posted a direct threat of government reprisals if they didn't fire a comedian over a joke he doesn't like.
Nothing vague or implied about it. Just the government of the United States directly threatening free speech
I won't link to Truth Social. You can Google it.
Fortunately, the Trump administration has given us an example of what a threat and coercion actually looks like. They declared exactly the action they would take if they did not get their preferred outcome and it’s clearly politically motivated.
That’s quite a bit different than we’re concerned about this misinformation and would like you to do something about it.
I think a reasonable and nuanced debate can be had on whether or not that was appropriate, but there is a difference.
I very much agree. It's reasonable to ask if the Biden admin overstepped their boundaries by politely asking if Youtube would help them stop people from murdering each other with disinformation and trying to overthrow the government.
I think the current situation is much less debatable. The government is now issuing ultimatums and very publicly threatening corporations to stifle free speech.
It’s a problem especially if there is a direct or implied threat to use the powers of the government to impact a business if the government is acting counter to the first amendment. This is essentially the government causing the outcome, not a business using its free speech after an independent business decision.
One could argue a business might come to a decision to pull content the government doesn’t like independently without coercion if they had an antitrust case pending with the DOJ. There’s probably a line here where the government needs to act in a specific way to threaten to make it coercion. Maybe the line was crossed in YT’s case?
On all of these cases I come to the conclusion that there needs to be separation of powers on some of these executive branch actions. I'm not sure how to do it, but something is needed to protect individual rights from executive overreach (regardless of which party is in power).
I want to recapitulate this sentiment as often and as widely as possible-- Rand and her cronies know as much about virtue, freedom, and Aristotle as they do about fornicating; not much.
Even if I disagreed with you I would upvote for this gem. I'll be chuckling at this one randomly for weeks.
In short-- no. Your right is to positively assert "Trump sign," not "excludes all other signs as a comparative right," even though this is a practical consequence of supporting one candidate and not others. "Owning a marketing company" means that you must hold to industrial and business ethics in order to do business in a common economic space. Being the CEO of any company that serves the democratic public means that one's ethical obligations must reflect the democratic sentiment of the public. It used to be that "capitalism" or "economic liberalism" meant that the dollars and eyeballs would go elsewhere as a basic bottom line for the realization of the ethical sentiment of the nation-state. This becomes less likely under conditions of monopoly and autocracy. The truth is that Section 230 created a nightmare. If internet platforms are now ubiquitous and well-developed, aren't the protections realized under S230 now obsolete?
It would be neat if somebody did, "you can put any sign in my yard to promote any political cause unless it is specifically X/Trump/whatever." That would constitute a unique form of exclusionary free speech.
How does one determine the democratic sentiment of the public, especially a public that is pretty evenly ideologically split? Seems fraught with personal interpretation (which is arguably another form of free speech.)
I'm reminded of that old line by Tolstoy-- something like, "happy families are all happy for precisely the same reasons; every unhappy family is unhappy in its own way." The point from an Adam Smith perspective is that healthy societies might all end up tending toward the same end by widely different means: Chinese communists might achieve superior cooperation and the realization of their values as, "the good life" by means dissimilar to the Quaker or the African tribesperson. The trick is seeing that the plurality of living forms and their competing values is not a hinderance to cooperation and mutual well-being but an opportunity for extended and renewed discourses about, "what we would like to be as creatures."
Worth mentioning:
https://sites.pitt.edu/~rbrandom/Courses/Antirepresentationa...
If those two private companies would host all legal content, this could be a thriving market.
Somehow big tech and payment processors get to censor most software.
Modern democracies aren't founded on realist ethics or absolute commitments to economic liberalism as totalizing-- they're founded on an ethical balance between the real needs of people, the real potential for capital expansion, and superior sentiments about the possibilities of the human condition. As a kid who supported Ron Paul's bid for the Republican nomination as a 16-year-old, I can't help but feel that libertarian politics has ruined generations of people by getting them to accept autocracy as "one ethical outcome to a free society." It isn't.
The irony in me posting this will be lost on most: https://www.uschamber.com/
I simply don't believe people who say they want to support a culture of free speech on a media or social media site. They haven't really thought about what that means.
While I'm with my dudes in computer space-- it all starts with the passing of the Mansfield Amendment. You want to know why tech sucks and we haven't made any foundational breakthroughs for decades? The privatization of technology innovation.
https://en.wikipedia.org/wiki/Pirates_of_Silicon_Valley
https://www.nsf.gov/about/history/narrative#chapter-iv-tumul...
The American civilization has deep flaws but has historically worked toward, "doing what was right."
https://www.adamsmithworks.org/documents/book-v-of-the-reven...
This is the private-public tyranny that's going on right now. The FCC can't directly tell Kimmel "you can't say that," but they can say "you may have violated this or that technical rule which..." This is how Project 2025 will play out in terms of people's real experience: you occupy all posts with ideologically sympathetic players, and the liberality people are used to becomes ruinous as "the watchers" are now "watching for you." The irony is that most conservatives believe this is just "what the left was doing in the 2010s, in reverse," and I don't have a counterargument for this other than "it doesn't matter; it's always bad and unethical." Real differences between Colbert and Tate are taken for granted.
We must be able to have those debates, but we must also guard against grifters selling 5G-proof underwear to the functionally illiterate or serving up vaccine skepticism based more on a desire to score political points than on science.
The devil will always be in the details, and it's a tough balancing act. I personally suspect that the way we do this is by checking credentials.
Doctors and scientists should be allowed to offer alternative theories. Meth-addicts posting from labs hidden in the Ozarks, software developers, and unqualified politicians, maybe not.
The legal process already did all the hard work of reaching consensus/compromise on where that line is, so just use that. At least with the legal system, there's some degree of visibility and influence possible by everyone. It's not some ethics department silently banning users they don't agree with.
It really depends. I remember after the Christchurch mosque shootings, there was a scramble to block the distribution of the shooter's manifesto. In some countries, the government could declare the content illegal directly, but in others, such as Australia, they didn't have pre-existing laws sufficiently wide to cover that, and so what happened in practice is that ISPs "proactively" formed a voluntary censorship cartel, acting in lockstep to block access to all copies of the manifesto, while the government was working on the new laws. If the practical end result is the same - a complete country block on some content - does it really matter whether it's dressed up as public or private censorship?
And with large tech companies like Alphabet and Meta, it is a particularly pointed question given how much the market is monopolized.
OTOH if the goal is to prevent copycats then I don't see the point of a 90-day embargo. People who are likely to take that kind of content seriously enough to emulate are still going to do so. Tarrant, for example, specifically referenced Anders Breivik.
Power is power. Wealth is power. Political power is power. The powerful should not control the lives or destinies of the less powerful. This is the most basic description of contemporary democracy but becomes controversial when the Randroids and Commies alike start to split hairs about how the Lenins and John Galts of the world have a right to use power to further their respective political objectives.
https://www.gutenberg.org/files/3207/3207-h/3207-h.htm (Leviathan by Hobbes)
https://www.gutenberg.org/ebooks/50922 (Perpetual Peace by Kant)
https://www.heritage-history.com/site/hclass/secret_societie...
This throws out spam and fraud filters, both of which are content-based moderation.
“Nobody moderates anything” unfortunately isn’t a functional option, particularly if the company has to sell ads.
The reason we ban government censorship is so that a private actor can always create their own conspiracy theory + snuff film site if they want, and other platforms are not obligated to carry content they find objectionable. Get really into Rumble or Truth Social or X if you would like a very different perspective from Youtube's.
At the crux of things the libertarians and the non-psychos are just having a debate on when it's fair game to be unethical or cruel to others in the name of extending human freedom and human dignity. We've fallen so far from the tree.
What you are arguing for is a dissolution of HN and sites like it.
As a book publisher, should I be required to publish your furry smut short stories? Of course not. Is that infringing on your freedom of speech? Of course not.
Compelled speech is not free speech. You have no right to an audience. The existence of a wide distribution platform does not grant you a right to it.
These arguments fall completely flat because it’s always about the right to distribute misinformation. It’s never about posting porn or war crimes or spam. That kind of curation isn’t contentious.
Google didn’t suddenly see the light and become free speech absolutists. They caved to political pressure and are selectively allowing the preferred misinformation of the current administration.
And I don't think it erodes any fundamental rights to put restrictions on huge monopolies.
If you force Google alone to amplify certain speech then what competitive advantage does a less censorious service provide?
> If it's purely by making them suck less, I'm okay with that risk.
Define “suck less”. Now ask yourself if you are comfortable with someone you completely disagree with defining what sucks less.
> And I don't think it erodes any fundamental rights to put restrictions on huge monopolies.
You’re talking about antitrust, not free expression.
Compelled speech is an erosion of the first amendment. You may think that erosion is acceptable but you can’t deny it exists.
If that's the only "advantage" another service has, I don't care if it has no competitive advantage. If it offers anything else then that's the advantage.
Seriously this idea is super weird to me. There are plenty of reasons to avoid too much regulation. But "don't force company X to make their users happier because happy users won't leave" is a terrible reason.
>Define “suck less”. Now ask yourself if you are comfortable with someone you completely disagree with defining what sucks less.
A big part of the "if" is that people are making their own evaluations.
> You’re talking about antitrust
I am not talking about antitrust. I'm saying that the bigger and more powerful a corporation gets the further it is from a human and human rights.
> Compelled speech is an erosion of the first amendment. You may think that erosion is acceptable but you can’t deny it exists.
In this case, barely at all, and it's the same one we already have for common carriers.
The value proposition of a less censorious YouTube alternative is exactly that it is less censorious. You’re seemingly arguing against free markets.
> Seriously this idea is super weird to me. There are plenty of reasons to avoid too much regulation. But "don't force company X to make their users happier because happy users won't leave" is a terrible reason.
The problem with compelled speech is that the government should not be in the business of deciding what kind of speech makes people happy.
> A big part of the "if" is that people are making their own evaluations.
People should have the freedom to choose the media they consume. Compelled speech takes that choice away from them by putting the government in the position of making that decision for the people. This distorts the marketplace of ideas.
I don’t have time to read every comment or email or watch every video. Private content moderation is a value add and a form of expression. We need competition in that space, not government restriction.
> I am not talking about antitrust. I'm saying that the bigger and more powerful a corporation gets the further it is from a human and human rights.
If your problem with Google is how much influence they have then yes, you are talking about antitrust. That’s the regulatory mechanism by which excessive corporate influence can be restricted.
> In this case, barely at all, and it's the same one we already have for common carriers.
“A little” is still more than nothing which was your previous assertion. You may be comfortable with the rising temperature of our shared pot of water but I say it is a cause for concern.
You're only talking about the people that like a feature. Why do you need a free market for that if every company can do it?
Not everything has to be a free market. There are reasons to use competition but not this reason.
> the government should not be in the business of deciding what kind of speech makes people happy
I did not say or intentionally imply they should.
> People should have the freedom to choose the media they consume. Compelled speech takes that choice away
Not if the compelling is just that they can't ban content. That only adds choice.
> If your problem with Google is how much influence they have then yes, you are talking about antitrust. That’s the regulatory mechanism by which excessive corporate influence can be restricted.
There can be other mechanisms, and more importantly my argument there isn't about mechanisms. They are barely barely humanlike, so human rights are barely barely relevant.
> “A little” is still more than nothing which was your previous assertion. You may be comfortable with the rising temperature of our shared pot of water but I say it is a cause for concern.
It's barely any increase because we already have common carrier rules.
And I stand by the statement that it doesn't erode fundamental rights. The right of giant corporations to have free speech is at the edge, not fundamental. And a rule like that increases the free speech of so many actual humans.
It is literally the very first thing you said in this comment thread. It either frames your entire argument or you have no idea what you are talking about.
> How is compelling google to censor less going to entrench their dominance? If it's purely by making them suck less, I'm okay with that risk.
And even if it would objectively make them suck less in some scenario, there's still nothing in that post that says I want the government to force them to do it. That post was only about whether it entrenches their dominance or not.
Then in my next comment I:
* put the word 'advantage' in scare quotes
* clarified that "if they suck less" is supposed to be evaluated by individual people and not me
* stated that there are reasons to not want regulation, but that I was skeptical of this specific entrenchment reason
The first two should make it clear that I'm not even saying it's an advantage, and the third should make it clear that I'm focusing on this specific argument and not making an overall case for government intervention. So that's three reasons I'm not saying the government should do it.
How do I make my non-endorsement clearer?
Also you were the one calling it a "competitive advantage" and "value proposition". That's not endorsement but it's definitely closer to endorsement than what I was saying.
Edit: Wait, I made this whole post interpreting the "should" as about government intervention. But I think technically that "should" was actually about the government deciding what makes people happy? If that's what you meant to ask then I have no idea how you got there. The sentences you quoted don't support that interpretation at all. The "suck less -> entrenchment" theory only works if users are actually happy, completely separate from what the government thinks.
The outsized effect of YouTube’s content moderation on speech is a symptom of weak antitrust policy, not a failure of free expression. So sure, mention the effect on speech if you want but don’t ignore the solution.
Hosting content is not giving someone an audience.
If I take my stool into the main square, stand on it, and give a speech about the evils of canned spinach, and people pass by but no one stops to listen (or not for long), I did not have an audience.
If I record the same thing and put it up on Youtube and the same reaction happens, and I only get 5~10 views, Youtube is not giving me an audience. They are hosting the video, just like they do for the many other videos that are uploaded every day.
If Youtube suddenly starts pushing my video onto everyone's "Home", "Recommended " or whatever; then that would be them giving me an audience.
If the Big Spinach Canners find my video and ask Youtube to take it down, that is censorship.
Yes, it is.
> If I take my stool into the main square, stand on it, and give a speech about the evils of canned spinach, and people pass by but no one stops to listen (or not for long), I did not have an audience.
Well, yes, you did. They are free to cheer, boo, or leave. YouTube is more like an open mic night. I reject the idea that it is a public space like a main square.
> If I record the same thing and put it up on Youtube and the same reaction happens, and I only get 5~10 views, Youtube is not giving me an audience. They are hosting the video, just like they do for the many other videos that are uploaded every day.
I am lucky to have never worked in content moderation but I’m certain YouTube refuses or removes submissions every day. So while your spinach speech may survive there are many other videos that don’t.
> If Youtube suddenly starts pushing my video onto everyone's "Home", "Recommended " or whatever; then that would be them giving me an audience.
Being on YouTube at all is YouTube giving you an audience. Their recommendation algorithm is the value proposition of their product to consumers whose attention is the product sold to advertisers.
> If the Big Spinach Canners find my video and ask Youtube to take it down, that is censorship.
Perhaps in the strictest dictionary sense it is censorship but it is not censorship in a first amendment sense. This is a private business decision. You’re free to submit your video as an ad and pay Google directly for eyeballs. And they can still say no.
The only problem here is the size of YouTube relative to competitors. The fix there is antitrust, not erosion of civil liberties.
Consider the landscape that evolves in a post-YouTube environment with an eroded first amendment and without section 230 protections. Those protections are critical for innovation and free expression.
I want to see how steep this hill you're willing to die on is. What's that old saying-- that thing about the shoe being on the other foot?
We have the right to do a potentially limitless amount of unbecoming, cruel, and oppressive things to our fellow man. We also have the potential for forming and proliferating societies. We invented religion and agriculture out of dirt and need. Let us choose Nazarenes, Jeffersons, and Socrateses over Neros, Alexanders, and Napoleons. This didn't use to be politically controversial!
Are you the government? If not then it is not oppression. It is free speech. This is the point of my rhetorical device.
My fear is that this is incredibly uncontroversial until it's not-- when push comes to shove we start having debates about what are, "legitimate" concentrations of power (wealth) and how that legitimacy in itself lets us, "tolerate what we would generally condemn as intolerable." I feel we need to take a cue from the Chomskys of the world and decree:
"all unjustified concentrations of power and wealth are necessarily interested in control and as such we should aggressively and purposefully refuse to tolerate them at all as a basic condition of democratic living..."
This used to be, "social democracy" whereas these days the Democratic Party in the United States' motto is more, "let us make deals with the devil because reasons and things." People have the power. We are the people. Hare fucking Krsna.
I think government censorship should be strictly prohibited. I think "company" censorship is just the application of the first amendment.
Where I think the problem lies with things like YouTube is the fact that we have _monopolies_, so there is no "free market" of platforms.
I think we should be addressing "big tech" censorship not by requiring tech companies to behave like a government, but rather by preventing any companies from having so much individual power that we _need_ them to behave like a government.
We should have aggressive anti-trust laws, and interoperability requirements for large platforms, such that it doesn't matter if YouTube decides to be censorious, because there are 15 other platforms that people can viably use instead.
I've seen stupidity on the internet you wouldn't believe.
Time Cube rants — four simultaneous days in one rotation — burning across static-filled CRTs.
Ponzi pyramids stretching into forever, needing ten billion souls to stand beneath one.
And a man proclaiming he brought peace in wars that never were, while swearing the President was born on foreign soil.
All those moments… lost… in a rain of tweets.
But even that dumb stuff aside: there are two ways for a government to silence the truth: censorship, and propaganda. We've got LLMs now, letting interested parties (government or not) overwhelm everyone with an endless barrage of the worst, cheapest, lowest-quality AI slop, the kind that makes even AI proponents like me go "ah, I see what you mean about it being autocomplete". Even the worst of that slop can bury a bad news story just as effectively as any censorship. Too much noise and not enough signal is already why I'm consuming far less YouTube these days, why I gave up on Twitter when it was still called that, etc.
And we have AI that's a lot better at holding a conversation than just the worst, cheapest, lowest quality AI slop. We've already seen LLMs are able to induce psychosis in some people just by talking to them, and that was, so far as we can tell, accidental. How long will it be before a developer chooses to do this on purpose, and towards a goal of their choice? Even if it's just those who are susceptible, there's a lot of people.
What's important is the freedom to share truth, no matter how uncomfortable, and especially when it's uncomfortable for those with power. Unfortunately, what we humans actually share the most is gossip, which is already a poor proxy for truth and is basically how all the witch hunts, genocides, and other moral-panic-induced horrors of history happened.
It is all a mess; it is all hard; don't mistake the proxy (free speech in general) for the territory (speak truth to power, I think?); censorship is simultaneously bad and the only word I know for any act which may block propaganda which is also bad.
On their platform, that’s exactly what they are entitled to do. When you type into the box in the Facebook app, that’s your speech. But unless the platform wants to add your contribution to their coherent speech product, they have every right to reject it.
Otherwise, the government is deciding what people can say, and you’d be against that, right?
Further, if I wanted to start a social media platform called thinkingtylenolcausesautismisstupid.com, wouldn’t restricting my right to craft my product defeat the whole point of my business?
Giving platforms the ability to moderate their output to craft a coherent speech product is the only reason we have multiple social networks with different rules, instead of one first-mover social network with no rules where everyone is locked in by network effects.
I think if public health bodies just laid out the data they had honestly (good and bad) and said that they think most people should probably take it, but left it to people to decide, the vast, vast majority of people would still have gotten the vaccine but we wouldn't have allowed anti-vaccine sentiment to fester.
Nah, the same grifters who stand to make a political profit of turning everything into a wedge issue would have still hammered right into it. They've completely taken over public discourse on a wide range of subjects, that go well beyond COVID vaccines.
As long as you can make a dollar by telling people that their (and your) ignorance is worth just as much - or more - than someone else's knowledge, you'll find no shortage of listeners for your sermon. And that popularity will build its own social proof. (Millions of fools can't all be wrong, after all.)
There's always going to be people for all kinds of reasons pushing out bad ideas. That's part of the trade-off of living in a free society where there is no universal "right" opinion the public must hold.
> They've completely taken over public discourse on a wide range of subjects
Most people are not anti-vax. If "they've" "taken over public discourse" in other subjects to the point you are now holding a minority opinion you should consider whether "they" are right or wrong and why so many people believe what they do.
If you can't understand their position and disagree, you should reach out to people in a non-confrontational way, understand their position, then explain why you disagree (if you still do at that point). If we all do a better job at this we'll converge towards truth. If you think talking and debate aren't the solution to disagreements, I'd argue you don't really believe in our democratic system (which isn't a judgement).
Some of these public school districts in Texas have >10% of students objecting to vaccines. My kids are effectively surrounded by unvaccinated kids whenever they go out in public. There's a 1 in 10 chance that kid on the playground has never had a vaccine, and that rate is increasing.
A lot of the families I know actively having kids are pretty crunchy and are at least vaccine hesitant if not outright anti-vax.
https://www.dshs.texas.gov/sites/default/files/LIDS-Immuniza...
"A total of 913 participants were included in the final analysis. The adjusted ORs for COVID-19 infection among vaccinated individuals compared to unvaccinated individuals were 1.85 (95% CI: 1.33-2.57, p < 0.001). The odds of contracting COVID-19 increased with the number of vaccine doses: one to two doses (OR: 1.63, 95% CI: 1.08-2.46, p = 0.020), three to four doses (OR: 2.04, 95% CI: 1.35-3.08, p = 0.001), and five to seven doses (OR: 2.21, 95% CI: 1.07-4.56, p = 0.033)." - ["Behavioral and Health Outcomes of mRNA COVID-19 Vaccination: A Case-Control Study in Japanese Small and Medium-Sized Enterprises" (2024)](https://www.cureus.com/articles/313843-behavioral-and-health...)
"the bivalent-vaccinated group had a slightly but statistically significantly higher infection rate than the unvaccinated group in the statewide category and the age ≥50 years category" - ["COVID-19 Infection Rates in Vaccinated and Unvaccinated Inmates: A Retrospective Cohort Study" (2023)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10482361/)
"The risk of COVID-19 also varied by the number of COVID-19 vaccine doses previously received. The higher the number of vaccines previously received, the higher the risk of contracting COVID-19" - ["Effectiveness of the Coronavirus Disease 2019 (COVID-19) Bivalent Vaccine" (2022)](https://www.medrxiv.org/content/10.1101/2022.12.17.22283625v...)
"Confirmed infection rates increased according to time elapsed since the last immunity-conferring event in all cohorts. For unvaccinated previously infected individuals they increased from 10.5 per 100,000 risk-days for those previously infected 4-6 months ago to 30.2 for those previously infected over a year ago. For individuals receiving a single dose following prior infection they increased from 3.7 per 100,000 person days among those vaccinated in the past two months to 11.6 for those vaccinated over 6 months ago. For vaccinated previously uninfected individuals the rate per 100,000 person days increased from 21.1 for persons vaccinated within the first two months to 88.9 for those vaccinated more than 6 months ago." - ["Protection and waning of natural and hybrid COVID-19 immunity" (2021)](https://www.medrxiv.org/content/10.1101/2021.12.04.21267114v...)
It's often a lot better to just let kooks speak freely.
There is nobody more confident in themselves than the middle-class.
Here's an overview study that reviewed other studies: https://jphe.amegroups.org/article/view/9493/html
"Pre-COVID-19 interviews with a high-income vaccine hesitant sample in Perth, Australia found that vaccine hesitancy was based on an inflated sense of agency in making medical decisions without doctors or public health officials, and a preference for “natural” methods of healthcare (30)."
"A similar study in the United States reported on interviews from 25 White mothers in a wealthy community who refused vaccination for their children (31). These participants reported high levels of perceived personal efficacy in making health decisions for their children and higher confidence in preventing illness through individual “natural” measures such as eating organic food and exercising. Additionally, these participants report lower perceived risk of infection or disease, which is contrasted with their high perceived risk of vaccination."
"Vaccine hesitancy among those with privilege may be more than just a product of resource access. There is evidence that individuals with high socioeconomic status perceive themselves to be more capable, hardworking, important, and deserving of resources and privileges than others (32,33)"
They have always been able to speak freely. I still see vaccine conspiracies on HN to this day. It was rampant during COVID as well.
The only reason you believe that is because all information to the contrary was systematically censored and removed from the media you consume. The actual data doesn't support that, there are even cases where it increased mortality, like https://pmc.ncbi.nlm.nih.gov/articles/PMC11278956/ and increased the chance of future covid infections, like https://pubmed.ncbi.nlm.nih.gov/39803093/ .
People have become more anti-vax because the COVID vaccines were at best ineffective and, as you said, anything contra-narrative is buried or ignored.
If you push a shitty product and force people to take it to keep their jobs it’s going to turn them into skeptics of all vaccines, even the very effective ones.
More harm than good was done there. The government should have approved them for voluntary use so the fallout would not have been so bad.
The first paper seems to claim a very standard cohort study is subject to "immortal time bias", an effect whereby measuring outcomes can seem to change them. The typical example of sampling-time bias is that slow-growing cancers are more survivable than fast-growing ones, but also more likely to be caught by a screening, giving a spurious correlation between screening and survivability. So you get a time effect where the fastest-acting cancers never end up in the measurement, biasing the data.
But when neither outcome changes the odds of that outcome being sampled, there can be no measurement-time effect, which is why it's not corrected for in studies like this. The authors do not explain why measurement-time effects would have anything to do with detecting or not detecting death rates, in the abstract or anywhere else in the paper, because they are quacks who apply arbitrary math to get the outcome they want.
As another commenter pointed out, randomized controlled trials -- which cannot possibly have this made-up time effect -- often clearly show a strongly positive effect for vaccination.
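The length-bias mechanism itself is easy to demonstrate. Here's a toy simulation (my own illustration, not taken from either paper): detection probability scales with the length of the pre-clinical window, so screen-detected cases skew slow-growing and screening *looks* protective even though it does nothing.

```python
import random

random.seed(0)

# Toy model of length-biased sampling: tumours with a longer pre-clinical
# window are both slower-growing (more survivable) and more likely to be
# caught by a screening that happens at a random point in time.
N = 100_000
population = [random.uniform(1.0, 10.0) for _ in range(N)]  # pre-clinical years

# Chance of being screen-detected is proportional to the window length,
# even though screening itself changes nothing about the tumour.
detected = [w for w in population if random.random() < w / 10.0]

pop_mean = sum(population) / len(population)
det_mean = sum(detected) / len(detected)

# Screen-detected cases skew toward long (slow, survivable) windows.
print(f"population mean window: {pop_mean:.2f}")
print(f"screen-detected mean window: {det_mean:.2f}")
```

The detected group's mean is pulled well above the population mean purely by the sampling rule, which is the bias the example describes. Nothing like that operates when the outcome (death) doesn't change a subject's odds of being in the cohort.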
I did not read the second paper.
That said, I think it's important to separate personal experiences from what the larger body of evidence shows. Many vaccinated people still got COVID, especially once Omicron came along. The vaccines were never perfect at preventing infection. But the strongest data we have from randomized trials and real-world results show that vaccinated people were far less likely to end up in the ICU or die from COVID. That's what the vaccines were designed to do and that's where they consistently worked.
As for cancer, I understand why you'd connect your wife's diagnosis to the vaccine -- it's natural to search for causes -- our brains are wired to look for patterns, especially when big events happen close together. But cancer registries and monitoring systems around the world haven't found an increase in cancer rates linked to COVID vaccines. The vaccines give a short-lived immune stimulus; they don't reprogram the immune system or permanently shut down T-cells. My family has a long history of cancer going back generations. Literally every other member of my family has had cancer, long before COVID. Two people in the same family getting cancer in the same year is unfortunately not as unlikely as it feels. That is perhaps cold comfort, but doctors and scientists aren't seeing the pattern you're worried about.
That isn't to say there aren't side effects to the vaccine. Myocarditis and clotting problems are well-documented but rare side effects. In fact, someone I know indirectly had a heart attack immediately after the COVID vaccine -- his family is genetically predisposed to this kind of heart attack, but it was directly triggered by the shot (he survived). It's good to acknowledge those risks. But when you look at the big picture, health agencies estimate that the vaccines prevented millions of deaths. I sadly know of a few people who died from COVID prior to vaccine availability and have family members with permanent lung issues; they're currently struggling to get another COVID shot because they don't think they can survive catching it unprotected again.
A sibling comment read your first link and noted the problems with it. I read just the abstract of the second link, and it's clear their methodology and description of what they're measuring can't actually support their conclusions.
These are just things that some of the population will be more attracted to, I don't think it has anything to do with censorship, lockdowns, or mandates. At most the blame can be at institutions for lacking in their ability to do effective scientific communication.
*And this skews more toward the less educated and less intelligent.
Trump thought so too.
Now vaccines are getting banned and the GOP is gerrymandering the hell out of the country to ensure the end of the democratic process. Sure, let's do nothing and see where that brings us. Maybe people will magically come to their senses.
The same thing since time eternal will continue to occur: the educated and able will physically move themselves from risk and others will suffer either by their own volition, or by association, or by lot.
Is it? How does that work at scale?
Speech hasn’t been restricted broadly. The same concepts and ideas removed from YouTube were still available in many places (including here).
Yet we still have so many people believing falsehoods and outright lies. Even on this very topic of COVID, both sides present their “evidence” and truly believe they are right, no matter what the other person says.
People don’t believe the scientific consensus on vaccines because there were no WMDs in Iraq, to give one of many huge examples.
“But those were different experts!”
No they weren’t. Not to the average person. They were “the authorities,” and “the authorities” lied us into a trillion dollar war. Why should anyone trust “the authorities” now?
Tangentially… as bad as I think Trump is, he’s still not as bad as George W Bush in terms of lasting damage done. Bush II was easily the worst president of the last 100 years, or maybe longer. He is why we have a president Trump.
Have you actually tried to shine sunlight on online misinformation? If you do you will quickly find it doesn't really work.
The problem is simple. It is slower to produce factually correct content. A lot slower. And when you do produce something the people producing the misinformation can quickly change their arguments.
Also, by the time you get your argument out many of the people who saw the piece you are refuting and believed it won't even see your argument. They've moved on to other topics and aren't going to revisit that old one unless it is a topic they are particularly interested in. A large number will have noted the original misinformation, such as some totally unsafe quack cure for some illness that they don't currently have, accepted it as true, and then if they ever find themselves with that illness apply the quack cure without any further thought.
The debunkers used to have a chance. The scammers and bullshitters always had the speed advantage when it came to producing content, but widespread distribution used to be slow and expensive. If, say, a quack medical cure was spreading, the mainstream press could ask the CDC or FDA about it, talk to researchers, and talk to doctors dealing with people showing up in emergency rooms after trying the quack cure, and they had the distribution networks to spread this information much faster than the scammers and bullshitters.
Now everyone has fast and cheap distribution through social media, and a large number of people only get their information from social media and so the bullshitters and scammers now have all the advantages.
It massively amplified the nuts. It brought it to the mainstream.
I'm a bit amazed seeing people still justifying it after all we've learned.
COVID was handled terribly after the first month or so, and hopefully we've learned from that. We're going to endure the negative consequences for years.
And to state my position like the root guy, I'm a progressive, pro-vaccine, medical science believer. I listen to my doctor and am skeptical if not dismissive of the YouTube "wellness" grifters selling scam supplements. I believe in science and research. I thought the worm pill people were sad if not pathetic. Anyone who gets triggered by someone wearing a mask needs to reassess their entire life.
But lockdowns went on way too long. Limits on behaviour went on way too long. Vaccine compliance measures were destructive from the moment we knew vaccination had a negligible effect on spread. And when well-intentioned platforms started silencing the imbeciles, it handed them a megaphone and made the problem much worse.
And now we're living in the consequences, where a worm-addled halfwit directs medicine for his child-rapist pal.
>COVID was handled terribly after the first month or so, and hopefully we've learned from that. We're going to endure the negative consequences for years.
In theory, I agree, kind of.
But also - we were 10+ months into COVID raging in the US before Biden’s administration, the administration that enacted the policies the article is about, came to be. Vaccine production and approval were well under way, brought to fruition in part due to the first Trump administration. The “nuts” had long been mainstream and amplified before this “silencing” began. Misinformation was rampant and people were spreading it at a quick speed. Most people I know who ultimately refused the vaccines made up their minds before Biden took office.
We already had a pretty strong undercurrent of contrarianism regarding public health -- it's absolutely endemic on here, for instance, and was long before COVID -- but the pandemic mainstreamed it. Before COVID I had a neighbour who would always tell me in hushed tones that he knows what's really going on because he's been learning about it on YouTube, etc. It was sad, but he was incredibly rare. Now that's like every other dude.
And over 80% of the US public got the vaccine! If we were to do COVID again, I doubt you'd hit even 40% in the US now. The problem is dramatically worse.
[1] That infamous Zuck interview with Rogan, where Zuck licked Trump's anus to ingratiate himself with the new admin, was amazing in that he kept blaming Biden for things Meta did long before Biden's admin took office or even took shape. Things he did at the urging of the Trump admin pt 1. I still marvel that he could be so astonishingly deceptive and people don't spit in his lying face for it.
Google makes it very clear that these were choices they made, and were independent of whatever the government was asking. Suggesting these policies are anything other than Google's is lying.
I agree, but how do you combat propaganda from Putin? Do you match him dollar for dollar? I am sure YouTube would like that, but who has deep enough pockets to counter the disinformation campaigns?
Similar issue with Covid... when you are in the middle of a pandemic, and dead bodies are piling up, and hospitals are running out of room, how do you handle misinformation spreading on social media?
This is the conundrum social media has created. In the past only the press, who were at least semi-responsible, had the ability to spread information on a massive scale. Social media changed that. Now anyone can spread information instantly on a massive scale, and often it is the conspiracy theories and incorrect information that people seek out.
"We were a bit naive: we thought the internet, with the availability of information, would make us all a lot more factual. The fact that people would seek out—kind of a niche of misinformation—we were a bit naive." -- Bill Gates to Oprah, on "AI and the Future of us".
Yes. An unfortunate conclusion I’m approaching (but have not reached, and frankly don’t want to reach) is the First Amendment doesn’t work in a country that’s increasingly illiterate and addicted to ad-powered algorithmic social media.
On the internet everything can appear equally legitimate. Breitbart looks as legit as the BBC. Sacha Baron Cohen https://www.youtube.com/watch?v=ymaWq5yZIYM
Excerpts:
Voltaire was right when he said "Those who can make you believe absurdities can make you commit atrocities." And social media lets authoritarians push absurdities to millions of people.
Freedom of speech is not freedom of reach. Sadly, there will always be racists, misogynists, anti-Semites, and child abusers. We should not be giving bigots and pedophiles a free platform to amplify their views and target their victims.
Zuckerberg says people should decide what's credible, not tech companies. When 2/3rds of millennials have not heard of Auschwitz how are they supposed to know what's true? There is such a thing as objective truth. Facts do exist.
What are your suggestions on accomplishing this while also being compatible with the idea that government and big tech should not control ideas and speech?
Time delay. No content based restrictions. Just, like, a 2- to 24-hour delay between when a post or comment is submitted and when it becomes visible, with the user free to delete or change (in this case, the timer resets) their content.
I’d also argue for demonetising political content, but idk if that would fly.
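To make the delay idea concrete, a minimal sketch (the two-hour floor and all the names here are hypothetical, not any real platform's API):

```python
from dataclasses import dataclass

DELAY_SECONDS = 2 * 60 * 60  # hypothetical 2-hour floor; could be up to 24h


@dataclass
class Post:
    body: str
    submitted_at: float  # unix timestamp of the last submit or edit

    def edit(self, new_body: str, now: float) -> None:
        # Editing resets the clock: revised text waits out the full delay again.
        self.body = new_body
        self.submitted_at = now

    def is_visible(self, now: float) -> bool:
        # No content inspection at all; the only gate is elapsed time.
        return now - self.submitted_at >= DELAY_SECONDS
```

The point is that nothing here looks at the content, so there's no censor to argue with -- just a cooling-off window before anything can go viral.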
Who decides what falls in this bucket? The government? That seems to go against the idea of restricting speech and ideas.
Congress for the first. Either the FCC or, my preference, private litigants for the second. (Treble damages for stupid suits, though.)
> For all content or just “political”?
The courts can already distinguish political speech from non-political speech. But I don’t trust a regulator to.
I’d borrow from the French. All content within N weeks of an election in the jurisdiction. (I was going to also say any content that mentions an elected by name, but then we’ll just get meme names and nobody needs that.)
Bonus: electeds get constituent pressure to consolidate elections.
Alternative: these platforms already track trending topics. So an easy fix is to slow down trending topics. It doesn’t even need to be by that much, what we want is for people to stop and think and have a chance to reflect on what they do, maybe take a step away from their device while at it.
Allow citizens to sue social media companies for the harm caused to them by misinformation and disinformation. The government can stay out of this.
May I suggest only repealing it for companies that generate more than a certain amount of revenue from advertising, or who have more than N users and have algorithmic content elevation?
It's taking a sword to the surgery room where no scalpel has been invented yet.
We need better tools to combat dis/mis-information.
I wish I knew what that tool was.
Maybe 'inoculating information' that's specifically stickier than the dis/mis-info?
Social media platforms in the United States rely heavily on Section 230 of the Communications Decency Act, which provides them immunity from liability for most user-generated content.
Neo-nazis and white nationalists went from their 3-4 replies per thread forums, 4chan posts, and Telegram channels, to now regularly reaching millions of people and getting tens of thousands of likes.
As a Danish person I remember how American media in the 2010s and early 2020s used to shame Denmark for being very right-wing on immigration. The average US immigration politics thread on X is worse than anything I have ever seen in Danish political discussions.
In general, you can't argue or 'fact' people out of beliefs they were not argued into. The best you can do is give them a safe place to land when disconfirmation begins. Don't be too judgy, no one is immune to propaganda.
The reality is, I have personally seen what this type of uncontrolled anti-vax stuff does. The issue is that it's harder to disprove a negative with a positive than people realize.
The moment you are in the YouTube, TikTok or whatever platform algorithm, you are fed a steady diet of this misinformation. When you then try to argue with actual factual studies, you get the typical response of "they already said that those studies are made up"... How do you fight that? Propaganda works by flooding the news; over time, people believe it.
That is the result of uncensored access, because most people do not have the time to really look up a scientific study. Negative channels massively outweigh positive, fact-based channels because the latter are "boring". It's the same reason your evening news is 80% deaths, corruption, theft, politicians and taxes or other negative world news: it has been proven that people take in negative news much more. Clickbait titles that are negative draw people in.
There is a reason holocaust denial is illegal in some countries: the longer some people can spew it, the more people actually start to believe it.
Yes, I am going to get roasted for this, but people are easily influenced and not as smart as they think they are. We have platforms that cater to people's short attention spans with barely 1~3 minute clips. YouTube videos longer than 30 minutes are horrible for a YouTuber's income, simply because people do not have the attention span for them.
Why do we have seatbelt laws, speed limits, and other "control" over people? Because people left to their own devices can be extremely uncaring about their own family, others, even themselves.
Do I like the idea of censorship for the greater good? No. But there are so many who spew nonsense just to sell their powders and their homemade vitamin C solutions (made in China)... telling people things that may hurt or kill themselves, their family or others.
Where is the line of that unbridled free speech? Silencing people works in the sense that you're delaying the flow of shit running down a creek. Will it stop completely? No, but the delay helps people downstream. Letting it run uninterrupted, hoping that a few people downstream with a mop will do all the work, yea ...
We only need to look at platforms like X when "censorship" (moderation) got removed. Full free speech, no limits, and it turned into a cesspit extremely fast (well, a bigger cesspit).
Not sure why I am writing this, because this is a heated topic, but all I can say is: I have seen the damage that anti-vax did to my family. Even to this day, that damage is still present. A person who never had an issue with vaccinations, never had a bad reaction beyond a sore arm for a day, turned skeptical of everything vaccine-related. All because those anti-vax channels got to her.
The anti-vax movement killed people. There is scientific study upon study showing how red states in the US ended up with higher death rates over the same time periods. And yet not a single person was ever charged for it. Everyone simply accepted it and never looked back, as if it were a natural thing that people's grandparents and family members died who did not need to die.
People have given up, and now accept letting those with financial interests spew nonsense as much as they like. Well, it's "normal".
I weep for the human race because we are not going to make it.
My biggest problem with YouTube isn't what it does/doesn't allow on its platform, it's that it will happily feed you a psychotic worldview if it keeps you on the site. I've had several family members go full conspiracy nut-job after engaging heavily with YouTube content.
I don't know what the answer is. I think many people would rightly argue that removing misinformation from the recommendation engine is synonymous with banning it. FWIW I'd be happy if recommendation engines generally were banned for being a societal cancer, but I'm probably in the minority here.
It actually does work. You need to remove ways for misinformation to spread, and suppressing a couple of big agents works very well.
- https://www.nature.com/articles/s41586-024-07524-8
- https://www.tandfonline.com/doi/full/10.1080/1369118X.2021.1...
- https://dl.acm.org/doi/abs/10.1145/3479525
- https://arxiv.org/pdf/2212.11864
Obviously, the best solution would be prevention, by having good education systems and arming common people with the weapons to assess and criticize information, but we are kinda weak on that front.
The first amendment was written in the 1700s...
People should be free to say what they want online. But going down "YouTube conspiracy theory" rabbit holes is a real thing, and YouTube doesn't need to make that any easier, or recommend extreme (or demonstrably false) content because it leads to more "engagement".
But if 100 tweets each get 1000 likes, they're never singularly important enough to community note.
In person, things are different. The dominant side can, and often enough does, drown out dissent. In that case the intent is to silence. So that would be "censorship" in a cultural sense, not a legal one, and it would be hostile to freedom of speech in the cultural sense rather than the legal sense.
In this sense platforms like X need to be regulated more like gambling. In some ways X is a big roulette wheel that's being spun which will help stochastically determine where the next major school shooting will take place.
The words of world-renowned epidemiologists who were, to be frank, boring and unentertaining could never possibly compete with crunchymom44628 yelling about how Chinese food causes covid.
Bad takes have the advantage of the engagement of both the people who vehemently agree and the people who vehemently disagree. Everyone is incentivized to be a shock jock. And the shock jocks are then molded by the algorithm to be ever more shock jockish.
Especially at a time when we were all thrown out of the streets and into our homes and online.
And here I'll end this by suggesting everyone watch Eddington.
One of the sentiments I've flirted with in posts below/above is the idea that while bad takes and their amplification are indeed a kind of societal evil, in a society that was more effectively mediated, bad takes might serve a vital purpose in the discourse. Societies committed to their own felicity might treat disagreements as an opportunity to extend the public discourse. This seems to be the crux of the thing: we can talk all day about checks and balances, but unless a society is truly, at some level, committed to its own preservation and expansion, those checks and balances will end up becoming tools for domination and exploitation, as we see in the United States.
I don't care how well you can bake, you can't make apple pie with rotten apples. No amount of sugar will correct the rot. The trick is growing healthy apples.
The US military also promoted anti-vax propaganda in the Philippines [0].
A lot of the comments here raise good points about silencing well meaning people expressing their opinion.
But information warfare is a fundamental part of modern warfare. And it's effective.
An American company or individual committing fraud can be dealt with in the court system. But we don't yet have a good remedy for how to deal with a military power flooding social media with information intentionally designed to mislead and cause harm for people who take it seriously.
So
> I think we have to realize silencing people doesn't work
it seems to have been reasonably effective at combating disinformation networks
> It just causes the ideas to metastasize
I don't think this is generally true. If you look at old disinformation campaigns like the idea that the US faked the moon landings, it's mostly confined to a small group of people who are prone to conspiracy thinking. The idea of a disinformation campaign is you make it appear like a crazy idea has broad support. Making it appear that way requires fake accounts or at least boosters who are in on the scheme. Taking that away means the ideas compete on their own merit, and the ideas are typically real stinkers.
[0] https://www.btimesonline.com/articles/167919/20240727/u-s-ad...
Have we all lost the ability to reason? Seriously, this isn't hard. No one owes you distribution unless you have a contract saying otherwise.
But these days, when you can count the forums on one hand even if you're missing a few fingers, and they all have extremely similar (American-style) censorship policies? To me it's less clear than it once was.
Might be a boon for federated services—smaller servers, finer-grained units of responsibility…
It makes sites not count as the publisher or speaker of third party content posted to their site, even if they remove or moderate that third party content.
Even today, I was listening to NPR talk about the potential TikTok deal and the commenter was wringing their hands about having a "rich guy" like Larry Ellison control the content.
I don't know exactly what the right answer is. But given their reach -- and the fact that a lot of these companies are near monopolies -- I think we should at least do more than just shrug and say, "they can do what they want."
Common carrier law says you have to carry some things, so it makes sense to institute such a law for the parts of social media that are fundamental enough. It is insane that we give that much censorship power to private corporations. They shouldn't have the power to decide elections on a whim, etc.
https://en.wikipedia.org/wiki/Common_carrier
Is there perhaps another name for what you're describing? It piques my interest.
So the equivalent of Google banning anyone talking about Covid is the same as a phone service provider ending service for anyone mentioning covid on their phones. Nobody but the most extreme authoritarians thinks phone providers should be allowed to do that, so why not apply this to Google as well?
If they did that, people would leave the service in droves for a competitor with reasonable moderation. Nobody wants to use a site that is overrun with spam and porn.
Did people leave Google in droves in favor of a competitor that censors all porn out of search results? No, people had no issue with the fact that you can find porn on Google; they still used it. YouTube providing porn to those who want it would not cause problems for anyone, just as it doesn't for Google search, and Google even runs both, so they could easily apply the same feature to YouTube.
> Nobody wants to use a site that is overrun with spam and porn.
The internet is overrun by spam and porn yet people still use it, so you are clearly wrong. Google already manages a search engine over the internet that is capable of not showing you porn when you don't search for it, while letting you find it when you do. Google has already solved that problem and could just do the same on YouTube.
You might ask yourself why you are here, instead of another website with less or no moderation.
I would prefer if discord / reddit and similar became common carriers of forums, not messages. So discord and reddit can't control what a subreddit does and what its moderators do, but the moderators can control what the people posting there can do.
By having a common carrier forum provider anyone could easily make their own forum with their own rules and compete on an open market without needing any technical skills, and without the forum provider being able to veto everything they say and do on that forum. That is where we want to be, in such an environment HN wouldn't need to depend on ycombinator, you could have many independently moderated forums and you pick the best one.
Discord and reddit today aren't that; both ban things they don't like, and it would be much better if we removed that power from them. Both reddit and discord admins allow porn and spam, so their censorship adds zero value to the platform. The only thing it does is kick some political factions off the platform, which doesn't add any value either, as I wouldn't visit those discords / subreddits anyway, so they don't hurt me.
So it isn't hard to imagine how to draft such laws so that all our favorite use cases are still allowed while also adding much more freedom for users and making life easier for these content platforms, since they would no longer be targeted by takedown-request spam. It is a win-win for everyone except those who want to censor.
Unless you make a law preventing all moderation, the users and advertisers are going to migrate to the moderated forums.
Thanks for answering why the law is needed; as you explained, a private solution cannot solve this. Advertisers wouldn't be able to push reddit to ban things if reddit weren't allowed to ban them, so you would still be able to run ads under such a law. It just reduces the power those ad companies have over you.
And no, the ad companies don't really care if you show porn or terrorist propaganda on your site: you can both watch porn and read terrorist propaganda on Google without leaving the site, yet every advertiser I know is happily spending a massive amount of money on Google ads. If they actually cared they would leave Google; instead they just bully those who will comply. If they know the target won't budge because of a law, they will just continue to advertise, like they do with Google.
These kind of regulations are needed when the free market results in oppressive results, there are many such cases where regulations do a good job and I don't see why these internet companies should be an exception.
If you take proposals from whoever and then only approve ones you specifically like, for whatever reason, then I don’t think anyone would feel silenced by that.
If you take anything from anyone, and a huge volume of it, on any topic and you don’t care what, except for a few politically controversial areas, that feels more like silencing. Especially when there is no alternative service available due to network effects and subsidies from arguably monopolistic practices.
Of course they would never check things before allowing them to be posted because there isn’t any profit in that.
Sure, Google doesn't need to host anything they don't want to; make it all Nazi apologia if they think it serves their shareholders. But doing so while silencing all other viewpoints in that particular medium is surely not a net benefit for society, independent of how it affects Google.
I’d even search for “coronavirus” and primarily get “official” sites about Covid-19 even tho that’s just one of many coronaviruses. At least Wikipedia makes the front page again, with the Covid-19 page outranking the coronavirus page…
SCOTUS, in Bantam Books, Inc. v. Sullivan, held that governments cannot coerce private entities into censoring speech they disfavor, even if they do not issue direct legal orders.
This was a publicly announced motivation for Elon Musk buying Twitter, and because of it we know the extent of this illegal behavior.
Mark Zuckerberg has also publicly stated Meta was asked to remove content by the US government.
>In April 2021, White House advisers met with Twitter content moderators. The moderators believed the meeting had gone well, but noted in a private Slack discussion that they had fielded "one really tough question about why Alex Berenson hasn't been kicked off from the platform."
Is there a difference between the White House stating they are looking at Section 230 and asking why this one guy has not been banned?
Also, spreading disinformation about covid has real-world implications.
Orange man getting his feelings hurt because comedian said something isn't even in the same ballpark
Your logic can be used to censor anything that goes against the narratives of the arbiters of disinformation.
> Orange man getting his feelings hurt because comedian said something isn't even in the same ballpark
Pejorative. Lack of evidence. Ignoring contradictory evidence. Sounds like you are locked in.
"Shouldn't they(Facebook and Twitter) be liable for publishing that information and then open to lawsuits?" - MSNBC "Certainly, they should be held accountable, You've heard the president speak very aggressively about this. He understands this is an important piece of the ecosystem." - White House Communications Director Kate Bedingfield
Source: https://reason.com/2023/01/19/how-the-cdc-became-the-speech-...
So yes, MSNBC brought up Section 230 and the White House Communications Director says "Yes, we are looking to hold social media accountable."
>Also from the same source: The Twitter moderators believed the meeting had gone well, but noted in a private Slack discussion that they had fielded "one really tough question about why Alex Berenson hasn't been kicked off from the platform."
>Throughout 2020 and 2021, Berenson had remained in contact with Twitter executives and received assurances from them that the platform respected public debate. These conversations gave Berenson no reason to think his account was at risk. But four hours after Biden accused social media companies of killing people, Twitter suspended Berenson's account.
I don't care about Trump's feelings but if we want to be able to speak truth to power, we have to be willing to let people talk shit as well. Yes, COVID has real world implications. Almost everything does.
People on the left say "Think about the children and implications with regard to this." People on the right say "Think about the children and implications with regard to that."
Notice how none of them seem to be saying "Let's lay out the facts and let you think about it."
Preventing people from having a platform for content-free asshattery doesn't have that problem.
(A fun implication of this line of reasoning is that the claim that Kimmel's comments were "lies" makes the jawboning against him more morally bad rather than less bad.)
More here: https://sbgi.net/sinclair-says-kimmel-suspension-is-not-enou...
https://www.politico.com/story/2017/08/06/trump-fcc-sinclair...
https://upriseri.com/sinclair-nexstar-duopoly-right-wing-con...
That can’t be your point, but I also can’t think of a more charitable interpretation.
You seem to think there's a bright line of "silenced" vs "not silenced". In reality there's many ways of limiting and restricting people's expressions. Some are generally considered acceptable and some are not. When huge swaths of communication are controlled by a handful of companies, their decisions have a huge impact on what speech gets suppressed. We should interrogate whether that serves the public interest.
What I'm trying to get at is it's possible to stifle people's freedom of expression without literally blocking them from every platform. Threatening their livelihood. Threatening their home. Kicking them off these core social media networks. All of these things are "silencing". And we should be wary of doing that for things we simply disagree about.
The other distinction you seem to be ignoring is that the Biden administration was doing it because of public health concerns. Trump and the FCC were doing it because a comedian said mean things about him and a devout racist.
What favoritism did the Biden administration show? They still went after Google for being a monopoly.
Unlike Trump who only had his administration approve the Paramount deal after accepting a $15 million bribe in public
That distinction is a relic of a world of truly public spaces used for communication— a literal town square. Then it became the malls and shopping centers, then the Internet— which runs on private pipes— and now it’s technological walled gardens. Being excluded from a walled garden now is effectively being “silenced” the same way being excluded from the town square was when whatever case law you’re thinking was decided.
To me that sounds like only a fair trade. You editorialize content, you are liable for all content. In every possible way.
Except many people don't roll their eyes at it; that's exactly the problem. QAnon went from a meme on 4chan to a dominant political movement across the US and Europe. Anti-vax went from fringe to the official policy position of the American government. Every single conspiracy theory that I'm aware of has only become more mainstream, while trust in any "mainstream" source of truth has gone down. All of this in an environment of aggressive skepticism, arguing, debating, and debunking. All of the sunlight is not disinfecting anything.
We're literally seeing the result of the firehose of misinformation and right-wing speech eating people's brains and you're saying we just have to "let it ride?"
Silencing people alone doesn't work, but limiting the damage misinformation and hate speech can do while pushing back against it does work. We absolutely do need to preserve the right of platforms to choose what speech they spread and what they don't.
Doesn't it though? I've seen this repeated like it's fact but I don't think that's true. If you disallowed all of some random chosen conspiracy off of YouTube and other mainstream platforms I think it would stop being part of the larger public consciousness pretty quickly.
Many of these things arrived out of nothing and can disappear just as easily.
It's basic human nature that simply hearing things repeated over and over embeds it into your consciousness. If you're not careful and aware of what you're consuming then that becomes a part of your world view. The most effective way to bring people back from conspiratorial thinking (like QAnon) is to unplug them from that source of information.
Silencing absolutely works! How do you think disinformation metastasized!?
The simple solution is to repeal Section 230. When information can be transmitted instantly at massive scale, somebody needs to be responsible for it. The government should not police information, but citizens should be allowed to sue social media companies for the harm caused to them.
If you allow crazy people to "let it ride" then they don't stop until... until... hell we're still in the middle of it and I don't even know when or if they will stop.
Very analogous to people who don't like how inefficient governments function and somehow conclude that the solution is to put people in power with zero experience managing government.
I don't believe Trump's or Kennedy's ideas about COVID and medicine are the ones that deserve to win out, but I do think that top-down suppression of ideas can be very harmful to truth seeking and was harmful during the pandemic. In North America I believe this led to a delayed (and ultimately minimal) social adoption of masks, a late acceptance of the aerosol-spread vector, an over-emphasis on hand washing, and a far-too-late restriction on international travel and mass public events, well past the point when it could have contributed to containing the disease (vs Taiwan's much more effective management, for example).
Of course there's no guarantee that those ideas would have been accepted in time to matter had there been a freer market for views, and of course it would have opened the door to more incorrect ideas as well, but I'm of the view that it would have helped.
More importantly I think those heavy restrictions on pre-consensus ideas (as many of them would later become consensus) helped lead to a broader undermining of trust in institutions, the fallout of which we are observing today.
Experts can study and learn from their prior mistakes. Continually doing bottom-up when we have experts is inefficient and short-sighted, no? Surely you would streamline part of the process and end up in the pre-Trump framework yet again?
Also, I'm curious why you have such a rosy picture of the bottom-up alternatives. Are you forgetting about the ivermectin overdoses? 17,000 deaths related to hydroxychloroquine? The US president suggesting people drink bleach? It is easy to cherry-pick the mistakes that science makes while overlooking the noise and misinformation that worms its way into less-informed/less-educated thinkers when non-experts are given the reins.
I'm criticizing them for suppressing the dissemination of ideas that did later turn out to be correct. I hope the distinction is clear.
If you're going to impose a ban on the dissemination of ideas, you'd better be ten thousand percent sure that nothing covered by that ban later turns out to be the truth. Not a single one, not even if every other idea that got banned was correctly identified as a falsehood. Otherwise, the whole apparatus falls apart and institutions lose trust.
I'm not forgetting ivermectin overdoses. I don't believe my picture is rosy. I'm aware of all the garbage ideas out there, which is why the measles is back and all the other madness. But I'm firmly of the opinion that trying to suppress these bad ideas has only redoubled their strength in the backlash, and caused a rejection of expert knowledge altogether.
If you were on the early Internet, you were self-policing with the help of admins all the time. The difference was that you had niche populations with a stake in keeping the peace and culture of a given board.
We broke those boundaries down though and now pit strangers versus strangers for clicks and views, resulting in daily stochastic terrorism.
Police the damn speech.
But along with fringe Covid ideas, we limited actual speech on legitimate areas of public discourse around Covid. Like school reopening or questioning masks and social distancing.
We needed those debates. Because the unchecked “trust the experts” makes the experts dumber. The experts need to respond to challenges.
(And I believe those experts actually did about as best they could given the circumstances)
I would put my trust in the people I knew were trained for this and adjust from there.
I suspect many of these opinions are born from hindsight.
Experts have a worse track record than open debate and the COVID censorship was directed at even experts who didn’t adhere to political choices — so to my eyes, you’re saying that you’d give in to authoritarian impulses and do worse.
At some point in any emergency, organized action has to be prioritized over debate.
Maybe that is still authoritarian, but they do say to have moderation in all things!
Ah… so… ”we must do something! Even if it’s the wrong thing”
Hot take.
From health emergencies to shootings to computer system crashes to pandemics — doing things without a reason to believe they’ll improve the situation is dangerous. You can and many have made things worse. And ignoring experts shouting “wait, no!” is a recipe for disaster.
When we were responding to COVID, we had plenty of time to have that debate in a candid way. We just went down an authoritarian path instead.
I don't see how that turns into you needing to mandate what I read and who's opinions I hear.
I have a three-month-old son. At the time he was being born, in my city, there was an outbreak of one of those diseases that killed more than one kid. Don't tell me this stuff doesn't have a direct impact on people.
Society as a whole has a responsibility to not do that kind of shit. We shouldn't be encouraging the spread of lies.
There's a big difference, and in any healthy public discourse there are severe reputational penalties for lies.
If school reopening couldn't be discussed, could you point to that?
It's very odd how as time goes on my recollection differs so much from others, and I'm not sure if it's because of actual different experiences or because of the fog of memory.
> As super low hanging fruit:
> June 8, 2020: WHO: Data suggests it's "very rare" for coronavirus to spread through asymptomatics [0]
> June 9, 2020: WHO expert backtracks after saying asymptomatic transmission 'very rare' [1]
> 0: https://www.axios.com/2020/06/08/who-coronavirus-asymptomati...
> 1: https://www.theguardian.com/world/2020/jun/09/who-expert-bac...
> Of course, if we just take the most recent thing they said as "revised guidance", I guess it's impossible for them to contradict themselves. Just rapidly re-re-re-revised guidance.
My hypotheses for our discrepant viewpoints were 1) my aging memory, or 2) different experiences, but it's actually 3) not using words to have the same meaning!
Citing this as "blatant truth suppression" weakens my view of any other evidence or argument you put forward, because I no longer trust that we can use words in ways that are compatible with each other.
More seriously, it's just not this simple man. I know people really want it to be, but it's not.
I watched my dad get sucked down a rabbit hole of qanon, Alex Jones, anti-vax nonsense and God knows what other conspiracy theories. I showed him point blank evidence that qanon was bullshit, and he just flat out refuses to believe it. He's representative of a not insignificant part of the population. And you can say it doesn't do any damage, but those people vote, and I think we can see clearly it's done serious damage.
When bonkers ass fringe nonsense with no basis in reality gets platformed, and people end up in that echo chamber, it does significant damage to the public discourse. And a lot of it is geared specifically to funnel people in.
In more mainstream media, climate change is a perfect example. The overwhelming majority of the scientific community has known for a long time that it's an issue. There was disagreement over cause or severity, but not over whether it was a problem. The media elevated dissenting opinions and gave the impression that it was somehow an even split; that the people who disagreed about climate change were as numerous and as well informed, which they most certainly weren't, not by a long shot. And that's done irreparable damage to society.
Obviously these are very fine lines to be walked, but even throughout US history, a country where free speech is probably more valued than anywhere else on the planet, we have accepted certain limitations for the public good.
We've had these debates for decades. The end result is stuff like Florida removing all vaccine mandates. You can't debate a conspiracy or illogical thinking into going away; you can only debate it into validity.
It’s not if Google can decide what content they want on YouTube.
The issue here is that the Biden White House was pressuring private companies to remove speech that they would otherwise host.
That's a clear violation of the first amendment. And we now know that the previous White House got people banned from all the major platforms: Twitter, YouTube, Facebook, etc.
The current administration has been openly threatening companies over anything and everything they don't like, it isn't surprising all of the tech companies are claiming they actually support the first amendment and were forced by one of the current administration's favorite scapegoats to censor things.
It's easy to see how at minimum there could be a conflict of interest.
You had direct statements like this from scientific experts, and those experts turned out to be the middleman group that was funding Wuhan via NIH grants.
Peter Daszak, a zoologist and president of the EcoHealth Alliance, who has been among the most vocal critics of the idea of a lab leak, wrote, “I just wanted to say a personal thank you on behalf of our staff and collaborators, for publicly standing up and stating that the scientific evidence supports a natural origin for COVID-19 from a bat-to-human spillover, not a lab release from the Wuhan Institute of Virology.”
Given an expert statement like that, YouTube could and did take down lab-leak videos as misinformation (contrary to the information provided by the experts).
What happens when the “police” disagrees with and silences what you believe is true? Or when they allow the propagation of what you believe to be lies?
Who gets to decide what’s the truth vs. lies? The “police”?
This keeps coming up on this site. It seems like a basic premise for a nuanced and compassionate worldview. Humility is required. Even if we assume the best intentions, the fallible nature of man places limits on what we can do.
Yet we keep seeing posters appealing to Scientism and "objective truth". I'm not sure it is possible to have a reasonable discussion where basic premises diverge. It is clear how these themes have been used in history to support some of the worst atrocities.
Both have always been massively shady. I'm old enough to remember the big stink around the Al Gore election loss, or the robust questioning of the 2016 election for that matter. So ridiculous for self-proclaimed defenders of democracy to want to ban the discussion and disagreement about the facts around elections. Democratic processes and institutions should be open to doubt, questioning, and discussion.
The response to covid vaccines was actually extremely rational. They were highly taken up by the elderly who were shown to have the greatest risk, despite that demographic skewing more conservative (and arguably could be most at risk of "misinformation" from social media). And they were not able to stop transmission or provide much benefit to children and younger people, so they didn't get taken up so much among those groups. So there was really no need for this massive sustained psychological campaign of fearmongering, divisiveness, censorship, and mandates. They could have just presented the data and the facts as they came to hand, and be done with it.
With questioning elections, I think Jan 6 would be a pretty good indication of why it wasn't appropriate? This wasn't how questioning the results of elections goes in democracies. Instead, even after courts had investigated, the outgoing president refused to accept the result without any substantiated evidence.
Pushback on what? There's always been new age hippy garbage, Chinese medicine, curing cancer with berries, and that kind of thing around. I don't see that causing much damage and certainly not enough to warrant censorship. People can easily see through it and in the end they believe what they want to believe.
Far, far more dangerous, and the cause of real damage I have actually seen, is what comes from the pharmaceutical industry and its captured regulators. Bribing medical professionals, unconscionable public advertising practices, conspiring to push opioids on the population, lying about the cost to produce medications, and on and on. There's a massive list of disasters these greedy corporations and their spineless co-conspirators among government regulators have caused.
Good thing we can question them, their motives, their products.
> With questioning elections, I think Jan 6 would be a pretty good indication of why it wasn't appropriate?
I don't understand your question. Can you explain why you think Jan 6 would be a pretty good indication that discussion and disagreement about elections should be censored?
> This wasn't how questioning the results of elections goes in democracies. Instead, even after courts had investigated, the outgoing president refused to accept the result without any substantiated evidence.
I never quite followed exactly what the legal issues around that election were. Trump was alleged to have tried to illegally influence some election process and/or obstructed the legal transfer of power. Additionally, there was a riot of people who thought Trump won, and some broke into Congress and tried to intimidate lawmakers.
I mean taking the worst possible scenario, Trump knew he lost and was scheming a plan to seize power and was secretly transmitting instructions to this mob to enter the building and take lawmakers hostage or something like that. Or any other scenario you like, let your imagination go wild.
I still fail to see how that could possibly justify censorship of the people and prohibiting them from questioning the government or its democratic processes. In fact the opposite, a government official went rogue and committed a bunch of crimes so therefore... the people should not be permitted to question or discuss the government and its actions?
There are presumably laws against those actions of rioting, insurrection, etc. If the guilty could be prosecuted for those crimes, why should the innocent pay with the destruction of their human rights, in a way that wouldn't even solve the problem and could easily enable worse atrocities to be committed by the government in the future?
Should people who question the 2024 election be censored? Should people who have concerns with the messages from the government's foremost immigration and deportation "experts" be prohibited from discussing their views or protesting the government's actions?
New age medicine has been around forever, yes. But the effects are only known to be negligible outside of pandemics. We know from history that people did many irrational things during past pandemics due to fear and social contagion.
It's a tough problem, everyone believes themselves an expert on everything, plus trolls and disinformation campaigns. There's also a significant information asymmetry.
It's funny you mention opioids as I just recently came across a tweet claiming that Indians were responsible for getting Americans addicted to them via prescription. In one of the buried reply chains, the poster admits they have no evidence and are just repeating a claim someone made to them sometime. But how many people will read that initial post and reinforce their racist beliefs vs see that the claim was unsubstantiated? And when that leads to drastic action by a madman, who's going to be the target of the blame? The responsibility is too diffused to target any specific person, the government obviously won't, madmen don't act in a vacuum and so the blame falls on the platform.
Yes, no one should have the power to determine which ideas are and are not allowed to propagate. On the other hand, you can still go to other platforms and are not entitled to the reach of the major ones; then again, these platforms are extremely influential. At the same time, people partly view the platforms as responsible when they spread bad ideas, the platform operators also feel some level of social responsibility, and the platform owners don't want legal responsibility.
I don't understand how your question relates to the discussion. Perhaps you could answer my earlier questions first and it might clear things up for me.
Letting the robber censor you from discussing the burglary is no more a measure to protect your belongings than giving the government the power to prevent free speech about how it carries out democratic processes is a measure to protect democracy from abuse by government officials.
Should Trump have had the power to censor news and discussion of the 2016 election when there were a lot of election deniers and conspiracy theorists concerned about the legitimacy of the election and conspiracies with Russia? Absolutely not.
> New age medicine has been around forever, yes. But the effects are only known to be negligible outside of pandemics. We know from history that people did many irrational things during past pandemics due to fear and social contagion.
There are unfounded claims about how much damage was caused by people exercising their right to speak about covid, and they all come from authoritarians who sound like they have a ravenous thirst for the power to silence their critics and the population at large. So I consider them totally unreliable handwaving at best, and more likely fraudulent fabrications. I actually don't think there's anything wrong with letting them use social media platforms like anybody else. It's fine if those companies decide whose messages to amplify or create their own terms of use, but having governments pressure corporations to carry out this censorship is a crazy overreach and a violation of human rights by the government.
The response, policies, messaging, and communication of governments and bureaucrats have caused far more damage to society, to public health, to trust in institutions, and to trust in vaccines and medical science than common people talking about it ever did.
It would be a mistake to think such operations care too much about specific talking points; the goal is to drown out moderate discussion and replace it with flamewars. It's a numbers game, so they'll push in hundreds of different directions until they find something that sticks, and they'll push both sides of the same conflict.
Content that makes people angry (extreme views) brings views.
Algorithms optimise for views -> people get recommended extreme views.
You can test this with a fresh account, it doesn't take many swipes on Youtube Shorts to get some pretty heinous shit if you pretend to be a young male to the algorithm.
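The feedback loop described above can be sketched as a toy simulation. Everything here is hypothetical for illustration (a made-up three-item catalog with made-up click probabilities, and a purely greedy recommender); it is not a model of any real platform's system, only of the incentive:

```python
def run_feedback_loop(rounds=1000):
    # Hypothetical catalog: more provocative content gets a higher
    # chance that a viewer clicks (the "outrage premium").
    click_prob = {
        "calm explainer": 0.05,
        "heated debate": 0.15,
        "extreme rant": 0.30,
    }
    views = {name: 1 for name in click_prob}    # optimistic starting stats
    clicks = {name: 1.0 for name in click_prob}
    for _ in range(rounds):
        # Greedy recommender: always surface the item with the best
        # click-through rate observed so far.
        pick = max(click_prob, key=lambda n: clicks[n] / views[n])
        views[pick] += 1
        # Deterministic toy: accumulate expected clicks instead of
        # sampling, so the drift is easy to reproduce.
        clicks[pick] += click_prob[pick]
    # Return the item the algorithm ended up recommending the most.
    return max(views, key=views.get)
```

After a brief exploration phase the milder items' click-through estimates decay below 0.30, while the most provocative item's stays above it, so the greedy loop locks onto it and recommends almost nothing else, exactly the drift described above.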
The problem is that the recommendation algorithms push their viewers into these echo chambers that are divorced from reality where all they see are these videos promoting misinformation. Google's approach to combating that problem was to remove the channels, but the right solution was, and still is today, to fix the algorithms to prevent people from falling into echo chambers.
I like the term disinformation better, since it can expand to the unfortunately more relevant dissenting information.
For whatever reason I guess I'm in that very rare group that genuinely watches everything from far-right racists, to communists, to mainstream media content, to science educational content, to conspiracy content, etc.
My YT feed is all over the place. The algorithms will serve you a very wide range of content if you want that, the issue is that most people don't. They want to hear what they already think.
So while I 100% support changing algorithms to encourage more diversity of views, I also think that as a society we need to ask why people don't naturally want to listen to more perspectives. Personally I get so bored when people basically echo what I think. I want to listen to people who say stuff I don't expect or haven't thought about before. But I'm in a very small minority.
First, there's YouTube's decision of whether or not to allow potentially dangerous misinformation to remain on their site, and whether the government can or did require them to remove it.
Second, though, there's YouTube's much stronger editorial power: whether or not to recommend, advertise, or otherwise help people discover that content. Here I think YouTube most fairly deserves criticism or accolades, and it's also where YouTube pretends that the algorithm is magic and neutral and they cannot be blamed for actively pushing videos full of dangerous medical lies.
These companies owe you nothing that's not in a contract or a requirement of law. That you think they owe you hosting, distribution, and effort on their algorithm, is a sign of how far off course this entire discourse has moved.
This is why permabans are bad. I'd rather have a strike system before a temp ban, to give people some breathing room to navigate the inevitable incorrect automation. Even then, if the copyright system is anything to go by, this is going to hurt more than help.
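A strike system like the one suggested could look something like the sketch below. The thresholds, ban durations, and class name are all invented for illustration, not any platform's actual policy:

```python
from dataclasses import dataclass, field

@dataclass
class StrikePolicy:
    """Escalating responses: warnings first, then temp bans of
    growing length, and only then a permanent ban."""
    warnings_before_ban: int = 2
    temp_ban_days: tuple = (1, 7, 30)  # escalating temp-ban lengths
    strikes: dict = field(default_factory=dict)

    def record_strike(self, user: str) -> str:
        count = self.strikes.get(user, 0) + 1
        self.strikes[user] = count
        if count <= self.warnings_before_ban:
            # Early strikes only warn, leaving room to appeal
            # mistaken automated flags.
            return f"warning {count}"
        idx = count - self.warnings_before_ban - 1
        if idx < len(self.temp_ban_days):
            return f"temp ban for {self.temp_ban_days[idx]} day(s)"
        return "permanent ban"
```

With these example numbers, the first two strikes are warnings, the next three are 1-, 7-, and 30-day temp bans, and only the sixth strike becomes permanent, so a single false positive never costs someone their account.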
The next Drain-o chug challenge "accident" is inevitable, at this rate.
For example, in 2020, the WHO(!) Twitter account literally tweeted that masks don't work. That same statement would have been considered medical misinformation by a different authority.
Another example: the theory that Covid leaked from a lab in Wuhan, which was known to do gain-of-function experiments on coronaviruses, was painted as a wacky conspiracy theory by most of the mainstream media. Yet many respectable sources (e.g. the CIA) later concluded that it has a significant amount of plausibility versus the alternative Wuhan wet-market hypothesis, which required that the virus somehow arrived there from a bat cave more than a thousand kilometres away.
Future tense?
Not even much to do with Reddit, it's something I picked up from playing video games: https://speculosity.wordpress.com/2014/07/28/the-lyte-smite/
It's not even always politics, although that's certainly a major driving force. But then you have really stupid fights like two subs about the same topic banning each others' members.
Tried clarifying this in another comment, my point was more that people who say "I was banned from X for doing something innocuous" are often not telling the whole truth.
... Except when the X in question is Reddit.
My favourite: I'm trans/autistic. I was posting on r/autism being helpful. OP never mentioned their pronouns, just that they have an OB-GYN and feminine problems. I replied being helpful, but I misgendered them and they flipped out. They permabanned me from r/asktransgender, even though I never posted on it, then left me a pretty hateful reply on r/autism. Reddit admins gave me a warning for hate toward trans people, despite me never doing any such thing and being trans myself.
Right about the same time, r/askreddit had a thread about it being hard not to misgender trans people. So I linked this thread, along with an imgur of the Reddit admin warning. It went to like 30,000 upvotes. The r/autism mods had to reply saying they don't see any hate in my post and that people should stop reporting it.
Reddit bans aren't an indicator of anything.
On this site, we're trying for discussion in which people don't just bash each other with pre-existing talking points (and unprocessed rage). Such comments quickly flood the thread on a divisive topic like this one, so flagging them is essential to having HN operate as intended. To the extent possible at least.
(oh and btw, though it ought to go without saying, this has to do with the type of comment, not the view it's expressing. People should be able to make their substantive points thoughtfully, without getting flagged.)
I know what I'm talking about
We're happy to take the rate limit off once the account has built up a track record of using HN as intended.
Placing absolute trust in these organizations and restricting freedom of speech based on that is a very bootlicking, anti-freedom stance
If we think tylenol might cause autism why doesn't he run/fund a nice clean and large randomized controlled trial? Instead he spreads conjecture based on papers with extremely weak evidence.
It turns out that masks ARE largely ineffective at preventing COVID infection. It's amazing how many studies have come up with vastly different results.
https://egc.yale.edu/research/largest-study-masks-and-covid-...
(Before you tell me that the story I cited above says the opposite, look at the effectiveness percentages they claim for each case.)
There's also this: https://x.com/RandPaul/status/1970565993169588579
There was a scientific basis for N95 and similar masks. If you are talking about cloth and paper masks, I mostly agree. Even then, there were tests done using surgical masks with 3D-printed frames. I remember this as one example of people following this line of thinking.
https://www.concordia.ca/news/stories/2021/07/26/surgical-ma...
As for dehumanization, I used to live in Tokyo and spent years riding the train. I think blaming masks for dehumanization, when we have entire systems ragebaiting us on a daily basis, is like blaming the LED light for your electric bill.
Social Distancing having "no scientific backing" is very difficult to respond to. Do you mean in terms of long term reduction of spread, or as a temporary measure to prevent overwhelming the hospitals (which is what the concern was at the time)?
I do agree that it was fundamentally dishonest to block people from going to church and then telling other people it was OK to protest (because somehow these protests were "socially distanced" and outdoors). They could have applied the same logic to Church groups and helped them find places to congregate, but it was clearly a case of having sympathy for the in-group vs the out-group.
This is false. Even a quick search shows multiple papers from pre-Covid times showing masks being effective [0][1]. There are many more post-Covid studies showing that N95/FFP2/FFP3 masks actually work if you wear them correctly (most people don't know how to do this). Educate yourself before sharing lies.
Especially at the time when many countries were having their healthcare systems overloaded by cases.
> Colored masks of various construction were handed out free of charge, accompanied by a range of mask-wearing promotional activities inspired by marketing research
"of various construction" is... not very specific.
If you just try to cover your face with a piece of cloth, it won't work well. But if you use a good mask (N95/FFP2/FFP3) with proper fit [0], then you can decrease the chance of being infected (see e.g. [1]).
[0] https://www.mpg.de/17916867/coronavirus-masks-risk-protectio...
[1] https://www.cam.ac.uk/research/news/upgrading-ppe-for-staff-...
But who watches the watchmen?
Did they? I remember them revising their guidance, which seems like something one would expect during an emerging crisis, but I don't remember them directly contradicting themselves.
June 8, 2020: WHO: Data suggests it's "very rare" for coronavirus to spread through asymptomatics [0]
June 9, 2020: WHO expert backtracks after saying asymptomatic transmission 'very rare' [1]
0: https://www.axios.com/2020/06/08/who-coronavirus-asymptomati... 1: https://www.theguardian.com/world/2020/jun/09/who-expert-bac...
Of course, if we just take the most recent thing they said as "revised guidance", I guess it's impossible for them to contradict themselves. Just rapidly re-re-re-revised guidance.
Yeah, that was banished to the dark corners of Reddit until Jon Stewart said the obvious, and he was considered too big to censor.
I'm not aware that the WHO ever claimed simultaneously contradictory things.
Obviously, rapid revisions during a period of emerging data makes YouTube's policy hard to enforce fairly. Do you remove things that were in line with the WHO when they were published? When they were made? Etc
A broken clock says the right time twice a day, that doesn’t mean much.
ps: I don’t support the bans but you argument seems flawed to me.
Yes.
> If people on June 8th criticized that official position before they reversed the next day, do you think it was right or a good idea for them to be censored?
Obviously not. Like I pointed out to the other commenter, if you were to read the comment of mine you replied to, I have a whole paragraph discussing that. Not sure why you're asking again.
Like, we're getting pretty nuanced here pretty fast, it would be nice to discuss this against an actual example of how this was enforced rather than being upset about a hypothetical situation where we have no idea how it was enforced.
1. They really did have some insight or insider knowledge which the WHO missed and they spoke out in contradiction of officialdom in a nuanced and coherent way that we can all judge for ourselves.
2. They in fact had no idea what they were talking about at the time, still don't, and lucked into some of it being correct later on.
I refer to Harry Frankfurt's famous essay "On Bullshit". His thesis is that bullshit is neither a lie nor the truth but something different: an indifference to the factuality of one's statements altogether. A bullshit statement is designed to "sound right" for the context it is used in; it is "the right thing to say" to convince people and/or win something, irrespective of whether it is true or false.
A bullshit statement is more dangerous than a lie, because the truth coming to light doesn't always expose a bullshitter the way it always exposes a lie. A lie is always false in some way, but bullshit is uncorrelated with truth and can often turn out right. Indeed a bullshitter can get a lucky streak and persist a very long time before anyone notices they are just acting confident about things they don't actually know.
So in response.
It is still a good idea to censor the people in category two. Even if the hypothetical person in your example turned out to get something right that the WHO initially got wrong, they were still spreading false information in the sense that they didn't actually know the WHO was wrong at the time when they said it. They were bullshitting. Having a bunch of people spreading a message of "the opposite of what public health officials tell you" is still dangerous and bad, even if sometimes in retrospect that advice turns out good.
People in category one were few and far between and rarely if ever censored.
I disagree on numerous levels with this position, not just on ethical grounds, but also on empirical grounds. People are simply not as gullible as you think they are, but I don't have time to delve into this, so I'll just leave it at that.
> People in category one were few and far between and rarely if ever censored.
According to whom? The stated policy makes no such distinction, it says anyone who contradicts WHO positions ought to be censored. There is no nuance, and how exactly is YouTube going to judge who belongs in each category? If they could reliably judge who was bullshitting, they wouldn't need the WHO policy to begin with. The policy is a "cover my ass" blanket so they don't have to deal with the nuance.
I mean how can you censor the WHO?
> WHO initially got wrong
But they didn't get something wrong; they, as you put it, were "bullshitting", and it was obvious to any person with a three-digit IQ.
Eh, kind of, but it seems more like the distinction between parallel and concurrent in this case. She doesn't appear to have been wrong in that instance, while at the same time the models might have indicated otherwise: an apparent contradiction, yet both apparently true within the real scope of what could be said about it at that time.
FYI Taiwan is East Asia, not Southeast Asia. Perhaps you were thinking of Thailand.
Whether they did or not is almost irrelevant: information doesn't reach humans instantaneously, it takes time to propagate through channels with varying latency, it gets amplified/muted depending on media bias, people generally have things going on in life other than staying glued to new sources, etc.
If you take a cross sample you're guaranteed to observe contradictory "parallel" information even if the source is serially consistent.
It's 2020, and suddenly we need research about how well masks work, if at all, and what their exact benefit is.
If you posted to YouTube that it is very rare for asymptomatics to spread the disease, would you be banned? What if you posted it on the 9th in the hours between checking their latest guidance and their guidance changing? What if you posted it on the 8th but failed to remove it by the 10th?
What if you disagreed with their guidance they gave on the 8th and posted something explaining your stance? Would you still get banned if your heresy went unnoticed by YouTube's censors until the 10th at which time it now aligns with WHO's new position? Banned not for spreading misinformation, but for daring to question the secular high priests?
It was a novel time and things were changing daily. Care needs to be taken yes, but it’s also weighed against clear and open communication. People were very scared. Thinking they would die. I don’t mind having up-to-date information even if it were changing daily.
Quite likely the WHO directly or by proxy with members who are also part of bureaucracy and governments in member states.
There is no question the WHO loves censorship and take an authoritarian approach to their "authority".
https://healthpolicy-watch.news/the-world-health-organizatio...
https://www.theguardian.com/world/2020/nov/13/who-drops-cens...
If corporations start adopting policies that censor anything contradicting WHO, there would be a larger onus on a claim that they were not involved in that censorship action, in my opinion.
If it wasn't them, and it was all Google's idea to censor this without any influence from governments or these organizations -- which is quite laughable to think, but let's entertain the idea -- the WHO still should not have responded as it did with these knee-jerk reactions, and it should have been up to Google to ensure they did not use an organization that behaved in that way as their "source of truth".
> It was a novel time
It wasn't really that novel, since there have been centuries to study pandemics and transmissible diseases of all kinds, and there have even been many other outbreaks of slightly smaller scale.
> and things were changing daily.
Things always change daily. Covid was not particularly "fast moving" at the time. It's not like new data was coming in that suddenly changed things day to day. It just progressed over the course of months and years. It appeared to be wild and fast moving and ever changing mainly because of the headless-chicken response from organizations like this.
> Care needs to be taken yes, but it’s also weighed against clear and open communication. People were very scared. Thinking they would die.
People were very scared because of the fear campaign, and the imbecilic and contradictory responses from these organizations.
Not that it was nothing to be afraid of, but people should have calmly been given data and advice and that's it. Automobiles, heart attacks, and cancer kill lots of people too, and should be taken very seriously and measures taken to reduce risk but even so it would be stupid to start screaming about them and cause panic.
> I don’t mind having up-to-date information even if it were changing daily.
It's not having data that is the problem, it is jumping the gun with analysis and findings and recommendations based on that data, then having to retract it immediately and say the opposite.
We actually have the emails the Biden administration sent to YouTube. Here is a quote from one:
"we want to be sure that you have a handle on vaccine hesitancy generally and are working toward making the problem better. This is a concern that is shared at the highest (and I mean highest) levels of the White House"
That is a very clear threat: "We want to make sure you ...", and then saying this is done with the highest authority of the USA, so you'd better get working on what we want. There are hundreds of such emails detailed in this report if you want to read what they sent to the different tech companies to make them so scared that they banned anything related to Covid: https://judiciary.house.gov/media/press-releases/weaponizati...
They said it was a fact that COVID is NOT airborne. (It is.)
Not they believed it wasn't airborne.
Not that data was early but indicated it wasn't airborne.
That it was fact.
In fact, they published fact checks on social media asserting that position. Here is one example on the official WHO Facebook page:
https://www.facebook.com/WHO/posts/3019704278074935/?locale=...
Argue that they were incompetent in their handling of it, sure, whatever. That's not the comment you're replying to.
misinformation is a real and worsening problem, but censorship makes conspiracies flourish, and establishes platforms as arbiters of truth. that "truth" will shift with the political tides.
IMO we need to teach kids how to identify misinformation in school. maybe by creating fake articles, mixing them with real articles and having students track down sources and identify flaws. critical thinking lessons.
I spent several hours on one of these only to discover the author of the post had found a subtle way to misrepresent the findings and had done things to the graph to skew it further. You cannot expect a kid (let alone most adults) to come to the same conclusion through lessons on critical thinking.
I think some accountability for platforms is an important part of this. Platforms right now have the wrong incentives, we need to fix this. It's not just about "truth" but it's also about stealing our attention and time. It's a drug and we should regulate it like the drug it is.
This is extremely difficult. Many of the people who thrive on disinformation are drawn to it because they are contrarian. They distrust anything from the establishment and automatically trust anything that appears anti-establishment. If you tell them not to trust certain sources that’s actually a cue to them to explore those sources more and assume they’re holding some valuable information that “they” don’t want you to know.
The dynamics of this are very strange. A cluster of younger guys I know can list a dozen different times medical guidance was wrong in history from memory (Thalidomide, etc), but when you fact check Joe Rogan they laugh at you because he’s a comedian so you can’t expect him to be right about everything. “Do your own research” is the key phrase, which is a dog whistle to mean find some info to discount the professionals but then take sources like Joe Rogan and his guests at face value because they’re not the establishment.
You just described a perfectly normal "Civics & Current Events" class in early grade-school back when / where I grew up. We were also taught how to "follow the facts back to the actual sources" and other such proper research skills. This was way back when you had to go to an actual library and look up archived newspapers on microfiche, and encyclopedias were large collections of paper books. Y'know... When dinosaurs still roamed the streets... ;)
To the people zealously downvoting all of these replies: defend yourselves. What about this is not worthy of conversation?
I'm not saying that I support lab leak. The observation is that anyone that discussed the lab leak hypothesis on social media had content removed and potentially were banned. I am fundamentally against that.
If the observation more generally is that sentiments should be censored that can risk peoples lives by influencing the decisions they make, then let me ask you this:
Should Charlie Kirk have been censored? If he were, he wouldn't have been assassinated.
On the other hand, if he were, then whoever censored him might have just as easily become the target of some other crazy, because that appears to be the world we live in now. Something's gotta change. This whole "us vs them" situation is just agitating the most extreme folks right over the edge of sanity into "Crazy Town". Wish we could get back to bein' that whole "One Nation Under God" "Great Melting Pot" "United States" they used to blather on about in grade-school back in the day, but that ship appears to have done sailed and then promptly sunk to the bottom... :(
It was not a bold claim at the time. Not only was there no evidence that it was the wet market at the time, the joint probability of a bat coronavirus outbreak where there were few bat caves but where they were doing research on bat coronaviruses is pretty damning. Suppressing discussion of this very reasonable observation was beyond dumb.
I thought it wasn't so much an error as a conflict of interest.
A novel coronavirus outbreak happens at the exact location as a lab performing gain of function research on coronaviruses... but yeah, suggesting a lab leak is outlandish, offensive even, and you should be censored for even mentioning that as a possibility. Got it.
This line of thinking didn't make sense then and still doesn't make sense now.
Plenty of people were able to talk about lab leak conspiracies. That is why we are still debating it today.
I doubt it.
The same country with 1 billion people and 6-8 covid cases per day. Sure.
To be honest I don't even understand why this is a topic anymore. Conspiracy or not, it's plausible that they screwed up. Why are people nitpicking, I don't get it.
Because the WHO worked with the CPC to bury evidence and give a clean chit to the Wuhan lab. There was some pressure building at the time for international teams to visit the Wuhan lab and examine the data transparently. But with the thorough ban on the lab-leak theory, the WHO visited China and gave a clean chit without even visiting the Wuhan lab or having access to lab records. The only place that could prove this definitively buried all records.
https://www.nytimes.com/interactive/2024/06/03/opinion/covid...
Even Dr Fauci said in 2021 he was "not convinced" the virus originated naturally. That was a shift from a year earlier, when he thought it most likely Covid had spread from animals to humans.
https://www.deseret.com/coronavirus/2021/5/24/22451233/coron...
(..February 2023..) The Department of Energy, which oversees a network of 17 U.S. laboratories, concluded with “low confidence” that SARS-CoV-2 most likely arose from a laboratory incident. The Federal Bureau of Investigation said it favored the laboratory theory with “moderate” confidence. Four other agencies, along with a national intelligence panel, still judge that SARS-CoV-2 emerged from natural zoonotic spillover, while two remain undecided.
https://www.nejm.org/doi/full/10.1056/NEJMp2305081
WHO says that "While most available and accessible published scientific evidence supports hypothesis #1, zoonotic transmission from animals, possibly from bats or an intermediate host to humans, SAGO is not currently able to conclude exactly when, where and how SARS-CoV-2 first entered the human population."
However "Without information to fully assess the nature of the work on coronaviruses in Wuhan laboratories, nor information about the conditions under which this work was done, it is not possible for SAGO to assess whether the first human infection(s) may have resulted due to a research related event or breach in laboratory biosafety."
https://www.who.int/news/item/27-06-2025-who-scientific-advi...
WHO paraphrased: We have no data at all about the Wuhan Laboratory so we can not make a conclusion on that hypothesis. Since we have data relating to natural transmission from animals we can say that situation was possible.
Eventually I started seeing some serious discussion about how it might have been accidentally created through gain of function research.
If it was a lab leak, by far the most likely explanation is that someone pricked themselves or caught a whiff of something.
A friend of mine who lived in China for a while and is familiar with the hustle culture there had his own hypothesis. Some low level techs who were being given these bats and other lab animals to euthanize and incinerate were like “wait… we could get some money for these over at the wet market!”
No, that was just the straw man circulated in your echo chamber to dismiss discussion. To be clear, there were absolutely people who believed that, but the decision to elevate the nonsense over the serious discussion is how partisan echo chambers work.
excuse me I'm sorry what?
..but i'm not a yter.
First of all, you can't separate a thing's content from the platform it's hosted on? Really?
Second of all, this is why
I'll just go do this again and if you flag me it's on you, you have no standing to do it (the internet is supposed to be democratic, remember?)
https://rumble.com/v28x6zk-sasha-latypova-msc.-nsa-team-enig...
https://rumble.com/v3zh3fh-staggering-17m-deaths-after-covid...
https://rumble.com/vt62y6-covid-19-a-second-opinion.html
https://rumble.com/v2nxfvq-international-covid-summit-iii-pa...
I could go on. Feel free if you want to see more. :)
(Was it misinformation when Fauci said you shouldn't rush a vaccine or all hell breaks loose years later? Or when he intimated that masks wouldn't work for covid?)
The issue is that there is no mechanism for punishing people who spread dangerous misinformation. It's strange that it doesn't exist though, because you're allowed to sue for libel and slander. We know that it's harmful, because people will believe lies about a person, damaging their reputation. It's not clear why it can't be generalized to things that we have a high confidence of truth in and where lying is actively harmful.
I wouldn't be surprised if some lawsuit is incoming from the company that manufactures it.
We have mechanisms for combatting the government through lawsuits. If the government came out with lies that actively harm people, I hope lawsuits come through or you know... people organize and vote for people who represent their interests.
- Tylenol causes autism
- Vaccines cause autism
- Vaccines explode kids' hearts
- Climate change is a hoax by Big Green
- "Windmill Farms" are more dangerous for the environment than coal
- I could go on but I won't
Maybe it's not banning, but it doesn't feel right? Google shouldn't have been forced to do that and really what should've happened is that the people that spread genuine harmful disinformation, like injecting bleach, the ivermectin stuff or the anti-vax stuff, should've faced legal punishment.
Silver lining in this is the conversation continued and will continue. I can see governments needing to try to get accurate and helpful information out in crisis - and needing to pressure or ask more of private companies to do that. But also like that we can reflect back and go - maybe that didn’t work like what we wanted or maybe it was heavy-handed.
In many governments, the government can do no wrong. There are no checks and balances.
The question is - should we still trust YouTube/Google? Is YouTube really some kind of champion of free speech? No. Is our current White House administration a champion of free speech? Hardly.
But hopefully we will still have a system that can have room for critique in the years to come.
So after January 22, 2026, the US leaves the WHO and YouTube users will be able to contradict WHO recommendations
Second, some ideas just aren't worth distributing or debating. There's a refrain "there's no point debating a Nazi". What that means is there is a lot of lore involved with being a modern Nazi, a labyrinth of conspiracy theories. To effectively debate a Nazi means learning all that lore so you can dismantle it. There's no point. In reality, all you end up doing is platforming those ideas.
I'm actually shocked at how ostensibly educated people fall into the anti-vax conspiracy trap. Covid definitely made this worse but it existed well before then. Certain schools in San Francisco had some of the lowest child vaccination rates in the country.
As a reminder, the whole vaccine-autism "theory" originated from one person: Andrew Wakefield. He was a doctor in the UK who was trying to sell a vaccine. The MMR vaccine was a direct competitor, so he simply made up the MMR-autism link. He lost his medical license because of it. But of course he found a receptive audience in the US. He is and always was a complete charlatan.
Likewise, the Covid anti-vax movement was based on believing random YouTube videos from laymen and, in many cases, an intentional ignorance in the most esteemed traditions of American anti-intellectualism: people who were confidently wrong about provably wrong things and had no interest in educating themselves. Some were grifters. Some were stupid. Many were both.
We had people who didn't understand what VAERS was (and is). We had more than 10 million people die of Covid, yet people considered the vaccine "dangerous" without any evidence of side effects, let alone death. As one example, you had people yelling "J'accuse!" at hints of myocardial inflammation from the vaccine. But you know what else causes myocardial inflammation? Getting Covid.
If you're excited by this move, it just further highlights that you have no idea what's going on and zero interest in the truth. What's happening here is big tech companies capitulating to the fringe political views of the administration, a clear First Amendment violation, to curry favor, get their mergers approved, get government contracts, and so on.
Regardless of your views on this or any other issue, you should care about capitulation by social media sites in this way.
The comments on this post are just a graveyard of sadness.
In my country of origin, you get called a Nazi simply for being opposed to the war of aggression that it is currently engaged in. In US, we have a long history of "terrorist" and "extremist" being similarly abused.
I just don't think that "there's no point debating a Nazi" is, in general, a good argument in favor of censorship, whether public or private. It's one of those things that have a good ring to it and make some superficial sense, like "fire in the crowded theater", and then you look at how it works in the real world...
"You and I, if we say a lie we are held responsible for it, so people can trust us. Well, Facebook made a system where the lies repeated so often that people can't tell."
"Both United Nations and Meta came to the same conclusion, which is that this platform Facebook actually enabled genocide that happened in Myanmar. Think about it as, when you say it a million times... it is not just the lie but also it is laced with fear, anger and hate. This is what was prioritized in the design and the distribution on Facebook. It keeps us scrolling, but in countries like Myanmar, in countries like Philippines, in countries where institutions are weak, you saw that online violence became real world violence."
"Fear, anger, hate, lies, salaciousness, this is the worst of human nature... and I think that's what Big Tech has been able to do through social media... the incentive structure is for the worst of who we are because you keep scrolling, and the longer you keep scrolling the more money the platform makes."
"Without a shared reality, without facts, how can you have a democracy that works?"
https://www.cnn.com/2025/01/12/us/video/gps0112-meta-scraps-...
Who would hold Meta accountable for the lies it helps spread and capitalizes upon, if not the government?
So by crippling democratic institutions and dwarfing the government to the point of virtual non-existence, all in the name of preserving freedom of speech and liberalism -- and in the process subverting both concepts -- elected leaders have managed to neutralize the only check in the way of big corps to ramp up this misinformation machine that the social networks have become.
Even here on HN, I was almost banned when I wrote about the abduction of children by Russia https://news.ycombinator.com/item?id=33005062 - the crime for which, half a year later, the ICC issued its warrant against Putin.
They simply need to choose which negative stories they print and which opinions they run. How do you frame misrepresentation versus a differing point of view? How do you call out mere emphasis on which true stories are run? Truths are still truths, right?
It's not infrequent today to see political opinions washed through language to provide reasonable deniability by those using it.
Hell, it's not infrequent to see racism, bigotry and hate wrapped up to avoid the key phrases of yesteryear, instead smuggling their foulness through carefully considered phrases, used specifically to shield those repeating them from being called out.
'No no no. Of course it doesn't mean _that_, you're imagining things and making false accusations.'
the newspaper company is the bottleneck that censors can easily tighten, as was the case in, say, the USSR - or even with the FCC today and the media companies, as in the Kimmel case.
Social media is our best tool so far against censorship. Even with all the censorship we do have in social media, information still finds a way due to the sheer scale of the Internet. That wasn't the case in the old days, when, for example, each typewriter could be identified by unique micro-details in the shape of its characters.
>Social media destroyed this process, now anyone can spread allegations at lightning speed on a massive scale without any evidence to back it up.
Why believe anything not accompanied by evidence? The problem here is with the news consumer. We teach children not to stick their fingers into an electrical wall socket. If a child still sticks their fingers in, are you going to hold the electric utility company responsible?
>This has to stop. We should return to the old way, it wasn't perfect, but it worked for 100s of years.
The same can be said about the modern high density of human population, transport connections, and the spread of infectious disease. What you suggest is to decrease the population and confine the rest, preventing any travel, like in the "old days" (interesting that it took the Black Death years to spread instead of the days it would take today, yet it still spread across the whole known world). We just saw how that works in our times (and if you say it worked then, why aren't we still doing it today?). You can't put the genie back in the bottle and stop progress.
>Repealing Section 230 will accomplish this.
Yes, good thing people didn't decide back then to charge the actual printing houses with the lies present in the newspapers they printed.
At this stage you are clearly just trolling. Are you even aware of the last 100s of years? From Luther to Marx? You are not acting in good faith. I want nothing to do with your ahistorical worldview.
I remember a story that was investigated and then published...for some reason it was blocked everywhere and we were not allowed to discuss the story or even link to the news article. It "has the hallmarks of a Russian intelligence operation."(Hunter Biden Laptop) Only to come out that it was true: https://www.msn.com/en-us/news/politics/fbi-spent-a-year-pre...
I would rather not outsource my thinking or my ability to get information to approved sources. I have had enough experience with gell-mann amnesia to realize they have little to no understanding of the situation as well. I may not be an expert in all domains but while I am still free at least I can do my best to learn.
And on that point, the FBI investigations didn't even start on the basis of the Steele Dossier; they started on the basis of an Australian diplomat, Alexander Downer, who, during a meeting with top Trump campaign foreign policy advisor George Papadopoulos, became alarmed when Papadopoulos mentioned that the Russian government had "dirt" on Hillary Clinton and might release it to assist the Trump campaign. Downer alerted the Australian government, who informed the FBI. The Steele Dossier was immaterial to the investigation's genesis.
So any claim that the dossier as a whole has been "debunked" is not remarkable. Of course parts of it have been debunked, because it wasn't even purported to be 100% true by the author himself. It's not surprising things in it were proven false.
Moreover, that also doesn't mean everything in it was untrue. The central claim of the dossier -- that Donald Trump and his campaign had extensive ties to Russia, and that Russia sought to influence the 2016 U.S. election in Trump's favor -- was proven true by the Mueller Report, Vols. I and II, and the Senate Select Intelligence Committee Report on Russian Active Measures Campaigns and Interference in the 2016 Election, Vols. I-VI.
> The current president of the US stole the election
Not a claim made in the dossier.
> and our biggest adversary has videos of him in compromising positions.
This hasn't been debunked. The claim in the dossier was that Russia has videos of Trump with prostitutes peeing on a bed Obama slept in, not peeing on Trump himself. The idea that it was golden showers is a figment of the internet. Whether or not the scenario where people peed on a bed Obama slept in happened as laid out in the dossier is still unverified, but not "debunked".
It was never “debunked”, that is far too strong a word. Is it true? Who knows! Should we operate as if it was true without it being proven? Definitely not.
> Hunter’s laptop
In what way was that story buried or hidden? It was a major news story on every news and social network for over half a year. There was only consternation about how the laptop was acquired and who or what helped with that endeavor. The “quieting” of the story is BS and only came about a long time after the fact. Biden’s people sought (unsuccessfully) to have images removed from platforms but there was never an effort to make it seem like the allegations that stemmed from the laptop were misinformation.
https://www.bbc.com/news/world-us-canada-62688532
Twitter's Vijaya Gadde also admitted that they blocked users from sharing the story. That's not BS either.
I am not going into a semantic argument with you over whether my exact wordings match whatever you think I said.
I will however say that both theses put forth by the comment I replied to are false. If you read either article you linked they actually support my point, the Hunter Biden news was extremely widely shared on Facebook and only throttled due to suspicions on Facebook’s part that it may have been inorganic. A particular article (but not the news) was blocked on Twitter based on an existing policy, discussion was still allowed and it was definitely widely discussed and shared.
(They don't necessarily exclude each other. You need both positive preemptive and negative repressive actions to keep things working. Liberty is cheap talk when you've got a war on your hands.)
How am I supposed to learn what’s going on outside my home town without trusting the media?
It's so much easier to stop one source than it is to (checks notes) educate the entire populace?!? Gosh, did you really say that with a straight face? As if education isn't also under attack?
If we could just educate people and make sure they don't fall for scams, we'd do it. Same for disinformation.
But you just can't give that sort of broad education. If you aren't educated in medicine and can't personally verify qualifications of someone, you are going to be at a disadvantage when you are trying to tell if that health information is sound. And if you are a doctor, it doesn't mean you know about infrastructure or have contacts to know what is actually happening in the next state or country over.
It's the same with products, actually. I can't tell if an extension cord is up to code. The best that I can realistically do is hope the one I buy isn't a fake and meets all of the necessary safety requirements. A lot of things are like this.
Education isn't enough. You can't escape misinformation and none of us have the mental energy to always know these things. We really do have to work the other way as well.
And once you know you need pre-screening, the question becomes why not just provide it instead of making people hunt it down?
Applying that to information and propaganda, users should have some automated defenses (like ad blockers), but also manual control of what should or should not be blocked, and also education and tools to be better informed when taking manual control.
In neither system should we remove manual control, education or automated help. They all act in union to make people safer.
Some people die (often children) by opening doors while a vehicle is moving or before it is safe to do so.
However, this also impedes the ability of rescuers to extract people from crashed vehicles (especially with fail-dangerous electric car locks).
Is it safer to protect citizens from themselves or empower them to protect themselves?
In my perfect US, both would be done:
"Dealing with disinformation" as a semester-long required high-school course, and federally mandating the types of tools that citizens could use to better inform themselves (read: requiring all the transparency data X and Meta stopped reporting, from any democracy-critical large platform).
While also mandating measures to limit disinformation where technically possible and ethically appropriate (read: not making hamfisted government regulations, when mandating data + allowing journalists / citizens to act is a better solution).
Treating children as children is fine and expected. Treating adults as children is not. Protecting children from disinformation, under the assumption that they lack the experience, education, impulse control, and expectation to handle information security themselves is fine. The government can also be an acceptable party to define this for children, even if some parents will object to not carry that role. An alternative could also be to make the parent liable if they fail in their role to protect their children from information harm.
Going back to auto-lock-on-drive doors, giving the government remote control of the car doors with no override, including the driver's door, is unlikely to be acceptable to the adult driver who owns the car.
Sarcasm aside, we tend to focus too much on the means and too little on the outcomes.
Take Reddit, for example. It's filled with blatant propaganda, from corporations and politicians. It's a disgustingly astroturfed platform run by people of questionable moral character. What's more, it also has porn. All you need is an account to access 18+ "communities". Not exactly "enlightening material" that frees the mind from tyranny.
If it was bad for Biden admin, it's much worse for Trump admin - he campaigned against it.
It’s shameful that this happens. Is it bot voting? Partisan cheering over productive conversation? It’s troubling.
I don't know how it works in The Philippines, but in the USA the suggestion that media outlets are held responsible for the lies that they tell is one of the most absurd statements one could possibly make.
I'm not an Alex Jones fan, but I don't understand how a conspiracy theory about the mass shooting could be construed as defamation against the parents of the victims. And the $1.3B judgement does seem excessive to me.
Despite his resources, Alex Jones completely failed to get competent legal representation and screwed himself. He then portrayed himself as the victim of an unjust legal system.
[1] https://www.npr.org/2021/11/15/1055864452/alex-jones-found-l...
> Connecticut Superior Court Judge Barbara Bellis cited the defendants' "willful noncompliance" with the discovery process as the reasoning behind the ruling. Bellis noted that defendants failed to turned over financial and analytics data that were requested multiple times by the Sandy Hook family plaintiffs.
[2] https://lawandcrime.com/high-profile/judge-rips-alex-jones-c...
> Bellis reportedly said Jones' attorneys "failure to produce critical material information that the plaintiffs needed to prove their claims" was a "callous disregard of their obligation," the Hartford Courant reported.
Yeah. Refusing to cooperate with the court has to always be at least as bad as losing your case would have been.
When corporate media figures tell lies that are useful to the establishment, they are promoted, not called to account.
In 2018, Luke Harding at the Guardian lied and published a story claiming that "Manafort held secret talks with Assange in Ecuadorian embassy" (a headline later amended with "sources say" after the fake story was debunked) in order to bolster the Russiagate narrative. It was proven beyond a shadow of a doubt that Manafort never went to the embassy or had any contact at all with Assange (who was under blanket surveillance) at any time. However, to this day this provably fake story remains on The Guardian website, without any sort of editor's note that it is false or that it was all a pack of lies!(1) No retraction was ever issued. Luke Harding remains an esteemed foreign correspondent for The Guardian.
In 2002, Jeffrey Goldberg told numerous lies in a completely false New Yorker article, "The Great Terror," which sought to establish a connection between the 9/11 attacks and Saddam Hussein.(2) This article was cited repeatedly during the run-up to the war as justification for the subsequent invasion and greatly helped create an environment in which a majority of Americans thought Iraq was linked to Bin Laden and the 9/11 attackers. More than a million people were killed, in no small part because of his lies. And Goldberg? He was promoted to editor-in-chief of The Atlantic, perhaps the most prestigious and influential journal in the country. He remains in that position today.
There are hundreds, if not thousands, of similar examples. The idea suggested in the original OP that corporate/established media is somehow more credible or held to a higher standard than independent media is simply not true. Unfortunately there are a ton of lies, falsehoods and propaganda out there, and it is up to all of us to be necessarily skeptical no matter where we get our information and do our due diligence.
1. https://www.theguardian.com/us-news/2018/nov/27/manafort-hel...
2. https://www.newyorker.com/magazine/2002/03/25/the-great-terr...
Passive voice. Who exactly is supposed to do the "checking" and why should we trust them?
Section 230 is largely irrelevant to this process so I don't know why you'd bring it up. Have you ever even read the Communications Decency Act of 1996?
But no, yet again the blame is all piled on to the little people. Yes, it's us plebs lying on the internet who are the cause of all these problems and therefore we must be censored. For the greater good.
I have an alternative idea: let's first imprison or execute (with due process) the politicians, CEOs, generals, heads of intelligence and other agencies, and regulators found to have engaged in corrupt behavior, lied to the public, committed fraud or insider trading, fabricated evidence to support invading other countries, waged undeclared wars, ordered extrajudicial executions, colluded with foreign governments to hack elections, evaded taxes, etc. Then, after we try that out for a while, if things have not improved, we could try ratcheting up the censorship of plebs. One might argue that taking such measures would violate those people's rights, but that is a sacrifice I'm willing to make. Since We Are All In This Together™, they would be willing to make that sacrifice too. And really, if they have nothing to hide then they have nothing to fear.
When you get people like Zuckerberg lying to congress, it's pretty difficult to swallow the propaganda claiming that it's Joe Smith the unemployed plumber from West Virginia sharing "dangerous memes" with his 12 friends on Facebook that is one of the most pressing concerns.
(Alpha Centauri, 1999, https://civilization.fandom.com/wiki/The_Planetary_Datalinks... )
~Anonymous, Datalinks.
You can't call the phrase "the sky is mint chocolate chip pink with pulsating alien clouds" information.
The "disinformation" bucket was overly large.
There was no nuance. No critical analysis of actual statements made. If it smelled even slightly off-script, it was branded and filed.
Of course not.
But as we know, MAGA are snowflakes and look for anything so they can pull out their Victim Card and yell around...
The doublethink is real.
https://www.cdc.gov/vaccines/covid-19/clinical-consideration...
BOTH of them were targeted by the misinformation squad, as if equivalent.
Yes, I block it routinely. No, the algo doesn't let up.
I don't need "faith" when I can see that a decent chunk of people disbelieve modern history, and aggressively disbelieve science.
More data doesn't help.
I think you could make a reasonable argument that the algorithms that distort social media feeds actually impede the free flow of information.
Employees choose what you see every day you use most social media.
Anyone who has the power to deny you information absolutely has more power than those who can swamp out good information with bad. It's a subtle difference yes, but it's real.
I don't quite understand how the Ressa quote in the beginning of this thread justifies banning dissent for being too extreme. The algorithms are surely on YouTube and Facebook (and Ressa's!) side here, I'm sure they tried to downrank distrust-promoting content as much as they dared and had capabilities to, limited by e.g. local language capabilities and their users' active attempts to avoid automatic suppression - something everyone does these days.
It's not an argument for banning doctors from YouTube for having the wrong opinions on public health policy.
There is no free flow of information. There never was. YouTube and FB and Google saying "oh, it's the algorithm" is complete BS. It was always manipulated, boosting whoever they see fit.
Why? No one actually lives like that when you watch their behavior in the real world.
It's not even post modernism, it's straight up nihilism masquerading as whatever is trendy to say online.
These people accuse everyone of bias while ignoring that their own position comes from a place of such extreme bias that it irrationally, presuppositionally rejects the possibility of true facts in their chosen, arbitrary cut-outs. It's special pleading as a lifestyle.
It's very easy to observe, model, and simulate node-based computer networks that allow for coherent, well-formed data with high correspondence, and just as easy to see networks destroyed by noise and data drift.
We have this empirically observed in real networks; it's pragmatic, and it's why the internet and other complex systems run. People rely on real network systems and the observed facts of how they succeed or fail, then try to undercut those hard-won truths from a place of utter ignorance. While relying on them! It's absurd ideological parasitism: they deny the value of the things they demonstrably value just by posting. Just the silliest form of performative contradiction.
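The "networks destroyed by noise" point can be made concrete with a toy simulation (all names here are hypothetical illustration, not any real networking library): pass a bit-string message through a chain of relays, each flipping bits with some probability, and watch fidelity decay toward pure noise.

```python
import random

def relay(message_bits, hops, flip_prob, rng):
    """Pass a message through `hops` noisy relays; each bit flips
    independently with probability `flip_prob` at every hop."""
    bits = list(message_bits)
    for _ in range(hops):
        bits = [b ^ 1 if rng.random() < flip_prob else b for b in bits]
    return bits

def fidelity(original, received):
    """Fraction of bits that survived unchanged."""
    matches = sum(a == b for a, b in zip(original, received))
    return matches / len(original)

rng = random.Random(0)          # seeded for reproducibility
msg = [1, 0, 1, 1, 0, 0, 1, 0] * 32   # a 256-bit message

clean = relay(msg, hops=10, flip_prob=0.0, rng=rng)
noisy = relay(msg, hops=10, flip_prob=0.05, rng=rng)
# With no noise, fidelity stays at 1.0; with per-hop noise it decays
# toward 0.5, i.e. indistinguishable from a coin flip per bit.
```

This is of course a cartoon of the argument, but it shows the empirical shape being described: without error correction (validation, grounding), repeated retransmission through noisy nodes converges on content-free noise.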
I don't get it. Fact are facts. A thing can be objectively true in what for us is a linear global frame. The log is the log.
Wikipedia and federated text content (logs, timelines, data, etc.) should never be banned, but memes and other primarily emotive media are case by case; I don't see their value. I don't see the value in allowing people to present unprovable or demonstrably false data using a dogmatic, confidently true narrative.
I mean present whatever you want but mark it as interpretation or low confidence interval vs multiple verified sources with a paper trail.
Data quality, grounding and correspondence can be measured. It takes time though for validation to occur, it's far easier to ignore those traits and just generate infinite untruth and ungrounded data.
Why do people prop up infinite noise generation as if it was a virtue? As if noise and signal epistemically can't be distinguished ever? I always see these arguments online by people who don't live that way at all in any pragmatic sense. Whether it's flat earthers or any other group who rejects the possibility of grounded facts.
Interpretation is different, but so is the intentional destruction of a shared meaning space by turning every little word into a shibboleth.
People are intentionally destroying the ability to even negotiate connections to establish communication channels.
Infinite noise leads to runaway network failure and, in human systems, the inevitability of violence. I for one don't like to see people die because the system has destroyed message passing via attentional DDoS.
Also, accurate information (say, "here are 10 videos of blacks killing whites") paired with distorted statistics (when white-on-black murder is twice as common) is still propaganda. But these are difficult to identify, since they clearly affect almost the whole population. Not many people have even tried to fight it, especially because the propaganda's message is created by you. // The example is fiction - but the direction exists; just look at Kirk's Twitter, for example - I have no idea about the exact numbers off the top of my head
Like those low-quality AI videos about Trump or Biden saying things that didn't happen. Anyone with critical thinking knows that those are either propaganda or engagement farming.
Sometimes it's clearly one and not the other, but it isn't always clear.
https://bsky.app/profile/atrupar.com/post/3lzm3z3byos2d
You 'it's just comedy' guys are so full of it. The FCC Head attacking free media in the United States isn't 'just telling jokes'.
The fundamental problem here is exactly that.
We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.
Which means there are no ads, because nobody really wants ads, and so their user agent doesn't show them any. And that's the source of the existing incentive for the monopolist in control of the feed to fill it with rage bait, which means that goes away.
The cost is that you either need a P2P system that actually works or people who want to post a normal amount of stuff to social media need to pay $5 for hosting (compare this to what people currently pay for phone service). But maybe that's worth it.
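The "sorted on the client" idea above can be sketched in a few lines of Python (a toy model with hypothetical `Post` records, not any real protocol): each subscription is just a feed of posts, and the user agent merges them chronologically itself, so no server controls the ranking and there is nowhere to inject ads or rage bait.

```python
import heapq
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    feed: str       # which subscription this came from
    timestamp: int  # seconds since epoch
    text: str

def merged_timeline(feeds):
    """Merge many per-account feeds into one timeline, newest first.

    Each feed is assumed to arrive newest-first already (as RSS readers
    typically see them); heapq.merge combines them lazily on the client,
    with no server-side ranking involved.
    """
    newest_first = lambda p: -p.timestamp
    return list(heapq.merge(*feeds, key=newest_first))

# Two subscribed "accounts", each already newest-first:
alice = [Post("alice", 300, "hello"), Post("alice", 100, "first post")]
bob = [Post("bob", 200, "hi")]

timeline = merged_timeline([alice, bob])
# Timeline order: alice@300, bob@200, alice@100
```

The point of the sketch is where the sort happens: swap the key function and you get a different timeline, and that choice lives entirely in the user agent rather than with a platform operator.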
The Fediverse[1] with ActivityPub[0]?
How is that not the case now?
>You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead.
I suppose. There are ActivityPub "clients" which act as interfaces that allow the former and act as agents for a single user interacting with other ActivityPub instances, which I'd expect can take us most of the way you say we should go.
I haven't seen the latter, as there's really no incentive to do so. Meta tried doing so by federating (one-way) with threads, but that failed miserably as the incentives are exactly the opposite in the Fediverse.
I suppose that incentives can change, although money is usually the driver for that and monetization isn't prioritized there.
>The protocol needs to make that impossible or the long-term consequences are predictable.
Impossible? Are you suggesting that since ActivityPub isn't perfect, it should be discarded?
ActivityPub is easily 75% of where you say we should go. Much farther along that line than anything else. But since it's not 100% it should be abandoned/ignored?
I'm not so sure about your "long-term consequences" being predictable. Threads tried to do so and failed miserably. In fact, the distributed model made sure that it would, even though the largest instances did acquiesce.
ActivityPub is the best you're going to get right now, and the best current option for distributed social media.
Don't let the perfect be the enemy of the good.
Edit: I want to clarify that I'm not trying to dunk on anyone here. Rather, I'm not understanding (whether that's my own obtuseness or something else) the argument being made against ActivityPub in the comment to which I'm replying. Is there some overarching principle or actual data which supports the idea that all social media is doomed to create dystopian landscapes? Or am I missing something else here?
The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.
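If servers really were only dumb pipes, moderation would look something like the sketch below: block decisions live entirely in the user's client, applied to whatever the pipes deliver. The account and server names are made up for illustration.

```python
# Client-side moderation sketch: the user, not the hosting server,
# decides which accounts and which servers to filter out.
posts = [
    {"author": "alice@good.example", "text": "hello"},
    {"author": "spammer@bad.example", "text": "buy stuff"},
]

def moderate(posts, blocked_accounts=(), blocked_servers=()):
    keep = []
    for p in posts:
        user, _, server = p["author"].partition("@")
        if p["author"] in blocked_accounts or server in blocked_servers:
            continue  # filtered locally; no server-level defederation needed
        keep.append(p)
    return keep

visible = moderate(posts, blocked_servers={"bad.example"})
print([p["author"] for p in visible])
```

The contrast with today's Fediverse is that here a big instance has no lever to cut small instances off from everyone at once; each user's blocklist only affects that user.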
> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?
I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.
> Threads tried to do so and failed miserably.
Threads tried to do that all at once.
The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.
The locking out of smaller competitors needs to be something that the protocol itself is designed to effectively resist.
>The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.
A fair point. A good fix for this is to have individual clients that can federate/post/receive/moderate/store content. IIUC, there is at least one client/server hybrid that does this. It's problematic for those who don't have the computing power and/or network bandwidth to run such a platform. But it's certainly something to work towards.
>> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?
>I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.
I'm still not seeing the "problems" with server usage you're referencing. Federation obviates the need for users to be on the same server and there's little, if any, monetary value in trying to create mega servers. Discoverability is definitely an issue, but (as you correctly point out) should be addressed. It is, however, a hard problem if we want to maintain decentralization.
>The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.
Given the landscape of the Fediverse, that seems incredibly unlikely. Perhaps I'm just pie in the sky on this, but those moving to ActivityPub platforms do so to get away from such folks.
Adding to that the ability to manage one's own content on one's own hardware with one's own tools, it seems to be a really unlikely issue.
Then again, I could absolutely be wrong. I hope not. That said, I'm sure that suggestions for changes to the ActivityPub protocol[0][1][2] along the lines you describe, as a hedge against it falling into a series of corporate hell holes (making that, as you put it, "impossible"), would be appreciated.
[0] https://github.com/w3c/activitypub
[1] https://activitypub.rocks/
[2] https://w3c.github.io/activitypub/
Edit: Clarified my thoughts WRT updates to the ActivityPub protocol.
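For anyone skimming the spec links above: the unit of federation in ActivityPub is the actor document, and a client that treats servers as dumb pipes really only needs two URLs from it. This is a simplified, illustrative actor (real ones carry more fields, e.g. `publicKey` and `preferredUsername`), with a hypothetical instance domain.

```python
import json

# Minimal ActivityPub-style actor document, abridged from the W3C spec's shape.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://social.example/users/alice",      # hypothetical instance
    "inbox": "https://social.example/users/alice/inbox",
    "outbox": "https://social.example/users/alice/outbox",
    "followers": "https://social.example/users/alice/followers",
}

# Round-trip through JSON as a fetched document would arrive over the wire.
doc = json.loads(json.dumps(actor))

# A pipe-agnostic client pulls posts from the outbox and delivers to the inbox.
print(doc["outbox"])
print(doc["inbox"])
```

Everything beyond serving these collections (ranking, filtering, defederation) is policy layered on top by implementations like Mastodon, not something the protocol itself requires of servers.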
Because that's the argument you need to be making here.
In fact, this is the reality we have always had, even under Biden. This stuff went to court. They found no evidence of threats against the platforms, the platforms didn't claim they were threatened, and no platform said anything other than they maintained independent discretion for their decisions. Even Twitter's lawyers testified under oath that the government never coerced action from them.
Even in the actual letter from YouTube, they affirm again that they made their decisions independently: "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the company to remove non-violative user-generated content."
So where does "to press" land on the spectrum between requesting action and coercion? Well, one key variable would be the presence of some type of threat. Not a single platform has argued they were threatened either implicitly or explicitly. Courts haven't found evidence of threats. Many requests were declined and none produced any sort of retaliation.
Here's a threat the government might use to coerce a platform's behavior: a constant stream of subpoenas! Well, wouldn't you know it, that's exactly what produced the memo FTA.[1]
Why hasn't Jim Jordan just released the evidence of Google being coerced into these decisions? He has dozens if not hundreds of hours of filmed testimony from decision-makers at these companies he refuses to release. Presumably because, like in every other case that has actually gone to court, the evidence doesn't exist!
[1] https://www.politico.com/live-updates/2025/03/06/congress/ji...
It's unreasonable to expect some portion of the executive branch to reliably act counter to the President's stated goals, even if they would otherwise have.
And that opportunity for perversion of good governance (read: making decisions objectively) is exactly why the government shouldn't request companies censor or speak in certain ways, ever.
If there are extenuating circumstances (e.g. a public health crisis), then there need to be EXTREMELY high firewalls built between the part of the government "requesting" and everyone else (and the President should stay out of it).
For example, the government has immense resources to detect fraud, CSAM, foreign intelligence attacks, and so on.
It is good, actually, that the government can notify employers that one of their employees is a suspected foreign asset and request they do not work on sensitive technologies.
It is good, actually, that the government can notify a social media platform that there are terrorist cells spreading graphic beheading videos and request they get taken down.
It's also good that in the vast majority of cases, the platforms are literally allowed to reply with "go fuck yourself!"
The high firewall is already present, it's called the First Amendment and the platforms' unquestioned right to say "nope," as they do literally hundreds of times per day.
In the Biden admin, multiple lawsuits (interestingly none launched by the allegedly coerced parties) revealed no evidence of such mechanics at play.
In the Trump admin, the FCC Commissioner and POTUS have pretty much explicitly tied content moderation decisions to unrelated enforcement decisions.
There's definitely a possibility for an admin to land in the middle (actually coercive, but not stupid enough to do it on Truth Social), and in those scenarios we rely on the companies to defend themselves.
The idea that government should be categorically disallowed from communicating and expressing preferences is functionally absurd.
What do you do?
It's easy to rely on absolutes and pithy quotes that don't solve any actual problems. What, specifically, with all your wisdom, would you do?
By done I don’t mean it won’t continue to be the world’s biggest and most important country, but I don’t expect any other country to trust America more than it has to for 100 years or so.
Some consequence. Not all-consuming, but significant.
> Our traditional allies will continue to cooperate regardless of
whether they continue to include the US within that circle to the same degree, or indeed at all.
Trump's tariffs have been a boon for China's global trade connections: they continue to buy soybeans, but from new partners, whereas before they sourced mainly from the US.
... when the Presidency, House, and Senate are also controlled by one unified party, and the Supreme Court chooses not to push back aggressively.
That rarely happens.
It's called flooding the zone, and it is a current Republican strategy: misinform, sow defeatism in their political opposition, and break all of the existing systems for handling politics, with the final goal of manipulating the next election. They publicized this strategy, yet people like you claim it's a non-issue.
Another important error is the implicit assumption that public health risks are constant, and do not vary with changing time and conditions, so that the public health risk profile today is essentially the same as in the first century of the US’s existence.
One could say the problem with freedom of speech was that there weren't enough "consequences" for antisocial behavior. The malicious actors stirred the pot with lies, the gullible and angry encouraged the hyperbole, and the whole US became polarized and divided.
And yes, this system chills speech, as one would be reluctant to voice extreme opinions. You would still have the freedom to say it, but the additional controls exert a pull back toward the average.
Imagine an interface that reveals the engagement mechanism by, say, having an additional iframe. In this iframe an LLM clicks through its own set of recommendations picked to minimize negative emotions at the expense of engagement.
After a few days you're clearly going to notice the LLM spending less time than you clicking on and consuming content. At the same time, you'll also notice its choices are part of what seems to you a more pleasurable experience than you're having in your own iframe.
Social media companies deny you the ability to inspect, understand, and remix how their recommendation algos work. They deny you the ability to remix an interface that does what I describe.
In short, your quote surely applies to social media companies, but I don't know if this is what you originally meant.
They are usually even more brazen in their ambitions than the censors, but somehow get a free pass because, hey, he's just fighting for the oppressed.
The trick is there's a fine line between honest free-speech absolutism and 'pro free speech I believe in, and silence about the freedom of speech I don't.' The line usually gets crossed when ego and power get involved (see: Trump, Musk).
To which, props to folks like Ted Cruz for vocally calling out that dissonance and opposing FCC speech policing.
Without truth there is no information.
If you've never played Alpha Centauri (like me) you are guaranteed to believe this to be a real quote by a UN diplomat. It also doesn't help that searching for "U.N. Declaration of Rights" takes me (wrongly) to the (real) Universal Declaration of Human Rights. I only noticed after reading ethbr1's comment [1], and I bet I'm not the only one.
Also, you missed a great game.
We are not controlling people by reducing information.
We are controlling people by overwhelming them in it.
And when we think of a solution, our natural inclination to “do the opposite” smacks straight into our instinct against controlling or reducing access to information.
The closest I have come to any form of light at the end of the tunnel is Taiwan’s efforts to create digital consultations for policy, and the idea that facts may not compete on short time horizon, but they surely win on longer time horizons.
People should be able to say whatever the hell they want, wherever the hell they want, whenever the hell they want. (Subject only to the imminent danger test)
But! We should also be funding robust journalism to exist in parallel with that.
Can you imagine how different today would look if the US had levied a 5% tax on social media platforms above a certain size, with the proceeds used to fund journalism?
That was a thing we could have done. We didn't. And now we're here.
The former moderated content and was thus held liable for posted content. The latter did not moderate content and was determined not to be liable for user generated content they hosted.
Part of the motivation of section 230 was to encourage sites to engage in more moderation. If section 230 were to be removed, web platforms would probably choose to go the route of not moderating content in order to avoid liability. Removing section 230 is a great move if one wants misinformation and hateful speech to run unchecked.
The idea that we need to protect people from “bad information” is a dark path to go down.
You eliminate the good and the bad ideas alike. You eliminate the good ideas that are "bad" simply because they upset people with power. You eliminate the good ideas that are "bad" simply because they are deemed too far outside the Overton window.
And worst of all, it requires some benevolent force to make the call between good and bad, which attracts all sorts of psychopaths hungry for power.
Cue the quote that says it takes 30 minutes to debunk 30 seconds of lying.
Why?
But what I just realized is that you don't explicitly say it, and certainly make no real argument for it. Ressa laments algorithmic promotion of inflammatory material, but didn't say "keep out anti-government subversives who spread dangerous misinformation" - which is good, because
1. We can all see how well the deplatforming worked - Trump is president again, and Kennedy is health secretary.
2. In the eyes of her government, she was very much such a person herself, so it would have been a pretty bizarre thing for her to say.
Ironically, your post is very much an online "go my team!" call, and a good one too (top of the thread!). We all understand what you want and most of us, it seems, agree. But you're not actually arguing for the deplatforming you want, just holding up Ressa as a symbol for it.
Not a compelling argument...
Jan 2021 - Twitter bans Trump (for clear policy violations)
Oct 2022 - Musk completes his purchase of Twitter
Nov 2022 - Twitter reinstates Trump's account
Nov 2024 - Trump re-elected, gives Musk cabinet position
But at least the Covid dissenter deplatforming worked, right? Or was the problem Musk there again?
One of my mantras is that powerful people believe all the crazy things regular people believe in, they just act differently on them. I think both Musk and Kennedy are great examples that you'd appreciate, as are Xi and Putin with their open mic life extension fantasies.
It wasn't long ago that Musk, and even Trump himself, were aligned with your competent technocrats wielding the "suppression of irresponsible speech" powers.
It wasn’t just banning Trump either, tbh one of the biggest ones was the banning of the Babylon Bee for a pretty tame joke. There’s a long list of other right-leaning accounts which were banned during that time as well.
The approach of allowing everything that is _legal_ to say is much better. If it is allowed by a court of law then companies should not be trying to apply their own additional filters. It can be downranked in the algorithm but at least allowing legal speech is important.
Even just looking at your statement, lumping Andrew Tate in with Tommy Robinson is a completely subjective thing, they are two wildly different people. Everything Tommy Robinson has said is true, he regularly states that he doesn’t care about race, he rejects white supremacists, and has a movement filled with peaceful normal Brits. Nothing he says or does is violent or illegal, his claims about Pakistani rape gangs are supported by evidence and first hand testimony. And more generally: not wanting to become a hated minority in your own country is not an extremist position. It doesn’t mean you hate others for their skin color or whatever type of “phobic” label you care to apply. People vote repeatedly for a government to stop the boats and every government that gets elected decides not to try for some mysterious reason, people are justifiably angry that their elected officials are doing the opposite of what they voted for.
Andrew Tate is yes of course a controversial dumb guy who does say things which are pretty out there, but the principle of allowing him to say everything which is legal in a court of law is important. Most normal people recognize that he’s outside the Overton window on many topics and it’s generally easy to counter his speech with better speech. But lumping crazies like Tate in with legitimate people like Robinson is a common tactic to delegitimize the people you disagree with.
Yet if we do it your way we get violent nationwide riots fuelled by misinformation on social media.
> Everything Tommy Robinson has said is true ... Nothing he says or does is violent or illegal
Tommy Robinson who was tried and jailed for repeatedly spreading libellous allegations about a Syrian refugee? Are you being serious?
He is completely correct about Pakistani rape gangs, the growth of Sharia courts and laws in the UK, and growing violence against the native British.
Side note: the UK government fines and jails more people for speech than most authoritarian dictatorships, including Russia.
A 15 year old refugee boy had been assaulted, had water forced into his mouth, had had his arm broken, his sister had been assaulted. He's now terrified of going back to the school because of the hate that Robinson has filled other kids' heads with. Robinson's behaviour was utterly shameful, and it's shameful that you defend him in this instance.
> the UK government fines and jails more people for speech than most authoritarian dictatorships
This is another false claim that Robinson peddles about. I've addressed it previously here: https://news.ycombinator.com/item?id=41488099
“A photo he stole” - didn’t he just repost something he found on TikTok? How is that stealing?
Again all these things you say he did amount to him saying things you don’t like. He didn’t commit any violence or hurt anyone, just said words. You are trying to justify locking people up for saying things and that is what real authoritarian government looks like.
>Approximately 12,000 people are arrested annually in the UK for offensive online messages
>In 2023, specifically 12,183 people were arrested for sending or posting offensive messages on social media [3]
>Police are making around 30 arrests per day for offensive online messages [1]
>The trend shows significant increases: arrests have risen by 121% since 2017 [1] and by almost 58% since before the pandemic
https://factually.co/fact-checks/justice/uk-social-media-arr...
> Everything Tommy Robinson has said is true ... Nothing he says or does is violent or illegal
Given that he committed libel - which involves making false statements and is illegal - will you at least admit that you were wrong?
Perhaps you take the niche position that libel should be legal. But at least have the decency to concede that it is currently illegal.
As bad as Facebook and its opaque algorithms that favor rage bait are, the kind of stuff you get by keeping score is worse.
As for free speech online, do you think there should be no limit to what can be said or shared online? What about pedophilia or cannibalism? Or, more relevantly, what about election denialism, insurrectionism, or dangerous health disinformation that is bound to make people act dangerously for themselves and for society as a whole? Point is, free speech is never absolute, and where the line is drawn is an important conversation that must be had. There is no easy, objective solution to it.
I also cringed at your list.
"what about election-denialism"
I dont think I can help you.
What do you disagree with? I'll note you still haven't advanced anything.
"DOJ aims to break up Google’s ad business as antitrust case resumes"
https://arstechnica.com/gadgets/2025/09/google-back-in-court...
Without their claim to victimization, they can't justify their hatred.
So yes. This is a problem rooted in the USA. But it is still a problem, and it's a problem for everyone, everywhere, all the time.
Same thing with the UFO "alien" video that was "shot down" by a Hellfire missile (most likely a balloon): people just automatically assume that because it was said in Congress it has to be true. Zero analysis of the footage whatsoever, no desire to seek analysis by an expert; nope, it must be an alien.
There is so much misinformation, so little understanding, and so many people from every side with a complete and utter lack of understanding of how seemingly basic things work, that I am afraid for the future.
But yeah! let's unban unscientific sources, oh and people who are okay with a literal coup on a democracy.
But when it comes to this thread, the sentiment is mostly that banning is good and that we should trust Google to have made the right choice.
More than that... different situations usually require different conclusions.
I'm sure Zuckerberg will say the same thing in 2029 too if the ruling party changes again. Until then, removing fact-checking and letting conspiracy theorists have their freedom of speech while suppressing voices critical of the current administration will make that change less likely...
My argument: free speech is a limit on the government. Impose as many consequences as you please, just not with government power.
That's the problem here: Democrats were using government power to censor their political opponents, and they wouldn't have been able to do it without government power.
pessimizer•4mo ago
Actual letter: https://judiciary.house.gov/sites/evo-subsites/republicans-j...
Good editorial: https://www.businessinsider.com/google-meta-congress-letter-...
murphyslab•4mo ago
- https://www.engadget.com/big-tech/youtube-may-reinstate-chan...
- https://arstechnica.com/gadgets/2025/09/youtube-will-restore...
topspin•4mo ago
Yes, I know about the Charlie Kirk firings etc.
dang•4mo ago