Can you elaborate? (At the risk of spoiling the joke)
Leaving social media can be thought of as emerging from the cave: you interact with people near you who actually have shared experiences with you (if only geographically) and you get a feel for what real-world conversation is like: full of nuance and tailored to the individual you’re talking to. Not blasted out to everyone to pick apart simultaneously. You start to realize it was just a website and the people on it are just like the shadows on the wall: they certainly look real and can be mesmerizing, but they have no effect on anything outside of the cave.
My impression of the joke is that intelligent and knowledgeable people willingly engage with social media and fall into treating what they see as truth, and then are shocked when they learn it's not truth.
If the allegory of the cave is describing a journey from ignorant and incorrect beliefs to enlightened realizations, the parent is making a joke about people going in reverse. Perhaps they have seen firsthand someone who is educated, knowledgeable and reasonable become deceived by social media, casting away their own values and knowledge for misconceptions incepted into them by persistent deception.
I'm not saying I agree entirely with the point the joke is making but it does sort of make sense to me (assuming I even understand it correctly).
I also see this with AI answers relying on crap internet content.
AI trained on most content will be filled with misconceptions and contradictions.
Recent research has been showing that culling bad training data has a huge positive impact on model outputs. Something like 90% of desirable outputs come from 10% of the training data (I forget the specifics and don't have time to track down the paper right now).
I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole. If AIs compete with each other based on which best represent truth, then overall things could get a lot better.
The alternative seems dreadful.
Edit: I am curious why this is getting downvoted.
https://www.anthropic.com/research/small-samples-poison
It was discussed a month or so back.
I mean it's also just the classic garbage in garbage out heuristic, right?
The more training data is filtered and refined, the closer the model will get to approximating truth (at least functional truths)
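To make "filtered and refined" concrete, here's a minimal sketch of quality-based curation, in Python. Everything in it is hypothetical: quality_score stands in for whatever classifier the actual research used, and the 10% default just echoes the rough 90/10 claim above.

    # Illustrative sketch only. quality_score is a hypothetical stand-in
    # for a trained data-quality classifier.
    def curate(corpus, quality_score, keep_fraction=0.1):
        """Keep the top fraction of documents, ranked by quality."""
        ranked = sorted(corpus, key=quality_score, reverse=True)
        cutoff = max(1, int(len(ranked) * keep_fraction))
        return ranked[:cutoff]

    # Toy usage with document length as a (deliberately dumb) quality proxy:
    docs = ["a careful analysis of the tradeoffs involved", "engagement bait!!!", "spam"]
    kept = curate(docs, quality_score=len, keep_fraction=0.34)

The hard part is obviously the scorer, not the filter; the point is just that once you can rank data, the culling step itself is mechanically trivial.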
It seems we are agreeing and adding to each other's points... Were you one of the people who downvoted my comment?
I'm just curious what I'm missing.
Do hope. But hoping for a unicorn is magical thinking.
For other people, they can either count this as a reason to despair, or figure out a way to get to the next best option.
The world sucks, so what? In the end all problems get solved if you can figure them out.
The reason I say this is that blind hope and informed hope are two different things.
Media has always relied on novel fear to attract attention. It's always "dramatized"; sacrificing truth for what sells. However AI is like electricity or computation. People make it to get things done. Some of those things may be media, but it will also be applied to everything else people want to get done. The thing about tools is that if they don't work people won't keep using them. And the thing about lies is that they don't work.
For all of human history people have become more informed and capable. More conveniences, more capabilities, more ideas, more access to knowledge, tools, etc.
What makes you think that AI is somehow different than all other human invention that came before it?
It's just more automation. Bad people will automate bad things, good people will automate good things.
I don't have a problem with people pointing out risks and wanting to mitigate them, but I do have a problem with invalid presuppositions that the future will be worse than the past.
So no, I don't think I'm hoping for a unicorn. I think I'm hoping that my intuition for how the universe works is close enough, and the persistent pessimism that seems to emanate from social media is wrong.
> The thing about tools is that if they don't work people won't use them.
People will and do use tools that don't work. Over time fewer people use bad tools as word spreads. Often "new" bad tools have a halo uptake of popularity.
> And the thing about lies is that they don't work.
History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
My bad. I meant won't keep using them.
> History tells us that lies work in the short term, and that is sufficient to force bad decisions that have long shadows.
What do you mean by "work"?
It sounds like you are implying that a lie "works" by convincing people to believe it?
I meant a lie doesn't work in that if you follow the lie you will make incorrect predictions about the future.
If someone acts on a lie which results in a bad decision with a "long shadow" then wouldn't that mean acting out the lie didn't work?
They are used by bad actors to, say, win elections and then destroy systemic safeguards and monitoring mechanisms that work to spotlight bad actions and limit damage.
There are also lies, such as a common belief in Wagyl, that draw people together to act in unison as a community to help the less fortunate, preserve the environment and common resources, and do other things not generally perceived as destructive.
I don't disagree with this. It's reasonable to assume I was talking about that type of "work", but I wasn't.
> There are also lies, such as a common belief in Wagyl, that draw people together to act in unison as a community to help the less fortunate, preserve the environment and common resources, and do other things not generally perceived as destructive.
I am not familiar with this specific culture but I totally get your point. Most religion works like this. I would just consider that the virtues and principles embedded within the stories and traditions are the actual truths that work, and that Wagyl and the specifics of the stories are just along for the ride. The reason I believe this is because other religions with similar virtues and values will have similar outcomes even though the lie they believe in is completely different.
I said that lies destroy, and that wasn't right. Sometimes they do, but as you have pointed out, often they don't.
> I really hope that AI business models don't fall into relying on getting and keeping attention. I also hope the creators of them exist in a win-win relationship with society as a whole.
The ratio of total hours of human attention available to total hours of content is essentially 0. We have infinite content, which creates unique pressures on our information gathering and consumption ability.
Information markets tend to consolidate, regulating speech is beyond fraught, and competition is on engagement, not factuality.
Competing on accuracy requires either Bloomberg Terminal levels of payment, or you being subsidized by a billionaire. Content competes with content, factual or otherwise.
My neck of the woods is content moderation, misinformation, and related sundry horrors against thought, speech and human minds.
Based on my experience, I find this hope naive.
I do think it is in the right direction, and agree that measured interventions for the problems we face are the correct solution.
The answer to that, for me, is simply data and research on what actually works for online speech and information health.
> I really hope that AI business models don't fall into relying on getting and keeping attention.
What I really meant is that I hope that the economic pressures on media don't naturally also apply to AI. I do think it's naive to hope that AI won't be used in media to compete for attention, I just don't think it's naive to hope that's not the only economic incentive for its development.
I also hope that it becomes a commodity, like electricity, and spills far and wide outside of the control of any monopoly or oligopoly (beyond the "tech giants"), so that hoping tech giants do anything against their incentive structures is moot. I hope that the pressures that motivate AI's development are overwhelmingly demand for truth, so that it evolves overwhelmingly towards providing it.
If this hope is naive, that would imply the universe favors deception over truth, death over life, and ultimately doesn't want us to understand it. To me, that implication seems naive.
The Bloomberg terminal is an interesting example and I see your point. I guess the question is which information there is a stronger incentive to keep scarce. The thing about Bloomberg terminals is that people are paying for immediate access to brand-new information to compete in a near-zero-sum game. Most truth is everlasting insight into how to get work done. A counterexample is textbooks.
The commodification is towards the production of content, not information.
Mostly, producers of Information are producing expensive “luxury goods”, but selling them in a market for disposable, commodified goods. This is why you need to subsidize fact checkers and newspapers.
I believe this is a legacy of our history, where content production was hard and the ratio of information to content was higher.
Consumers of content are solving not just for informational and cognitive needs; they are also solving for emotional needs, with emotional needs being the more fundamental.
Consumers will struggle with so many sources of content, and will eventually look towards bundling or focus only on certain nodes.
Do note - the universe does not need to favor anything for this situation to occur. Deception is a fundamental part of our universe, because it’s part of the predator prey dynamic. This in turn arises out of the inability of any system to perfectly process all signals available to them.
There is always a place for predators or prey to hide.
I thought of the predator prey frame shortly after posting my last comment.
Maybe it boils down to game theory and cooperation vs competition, and the free energy principle. Competition (favoring deception) puts pressure on cooperation (favoring truth). Simultaneously life gets better at deceiving and at communicating the truth. They are not mutually exclusive.
When entities are locked into long term cooperation, they have a strong bias to communicate truth with each other. When entities are locked into long term competition, they have a strong bias to deceive each other.
Evolution seems to be this dance of cooperation and competition.
When a person is born, overwhelmingly what's going on between cells inside their body is cooperation. When they die, overwhelmingly what happens between cells is competition.
So one way that AI could increase access to truth, is if most relationships between people and AI are locked into long term cooperation. Not like today where it's lots of people using one model from a tech co, but something more like most people running their own on their own hardware.
I've heard people say we are in the "post truth era" and something in my gut just won't accept that. I think what's going on is the power structures we exist in are dying, which is biasing people and institutions to compete more than cooperate, and therefore deceive more than tell the truth. This is temporary, and eventually the system (and power structures) will reconfigure and bias back to cooperation, because this oscillation back and forth is just what happens over history, with a long term trend of favoring cooperation.
So to summarize... Complexity arises from oscillations between competition and cooperation, competition favors deception and cooperation favors telling the truth. Over the long-term cooperation increases. Therefore, over the long-term truth communication increases more than deception.
I’ve been there too, is what I am saying. But, reality is reality, and feeling bad or good about it is pointless beyond a point.
AI cannot increase access to truth. This is also part of the hangover of our older views on content, truth and information.
In your mental model, I think you should recognize that we had an “information commons” previously, even to an extent during the cable news era.
Now we have a content commons.
The production of Information is expensive. People are used to getting it for free.
People are also now offered choices of more emotionally salient content than boring information.
People will choose the more emotionally salient content.
People producing information will always incur higher costs of production than people producing content. Content producers do not have to take the step of verifying their product.
So content producers will enjoy better margins, and eventually either crowd out information producers, or buy out information producers.
Information producers must raise prices, which will reduce the market available to them. Further - once information is made, it can always just be copied and shared, so their product does not come with some inherent moat. Not to mention that raising prices results in fewer customers, and goes against the now anachronistic techie ethos of “Information should be free”.
I am sure someone will find some way to build a more durable firm in this environment, but it’s not going to work in the way you hoped initially. It will either need to be subsidized, or perhaps via reputation effects, or some other form of protection.
Cooperation is favored if cooperation can be achieved. People will find ways to work together, however the equilibrium point may well be less efficient than alternatives we have seen, imagined or hoped for.
More dark forest, cyberpunk dystopia, than Star Trek utopia.
There’s an assumption of positive drift in your thinking. As I said, this is my neck of the woods, and things are grim.
But - so what? If things are grim, only through figuring it out can it actually be made better.
This is the way the pieces on the board are set up as I see it. If you wish to have agency in shaping the future, and not be a piece that is moved, then hopefully this explanation will help build new insights and potential moves.
There's one thing that I just realized hasn't come up in our discussion yet which has a big impact on my perspective.
Everything in the universe seems built on increasing entropy. Life net decreases entropy locally so that it can net increase it globally. There also seems to be this pattern of increasing complexity (particles, atoms, molecules, cells, multicellular organisms, collectives) that unlocks more and more entropy. One extremely important mechanism driving this seems to be the Free Energy Principle, and the emergent ability to predict consequences of actions. Something about it enables evolution, and evolution enables it.
This perspective is what gives me more confidence that within collectives the future will include more shared truth than the past, because at every level of abstraction and for all known history that has been the long term trend.
Cells get better at modelling their external environment, and better at communication internally.
The reason why I am so confident we are not "post truth" is because lies don't work, not in the sense that people can't be deceived by lies (obviously they can), but dysfunctional lies won't produce accurate predictions. Dysfunctional lies don't help work get done, and the universe seems to be designed for work to get done. There is some force of nature that seems to favor increasingly accurate predictive ability.
Your perspective seems to be very well informed from what feels like the root of the issue, but I think you're missing the big picture. You aren't seeing the forest, just the trees around you. I know you assume the same of me, that I don't see these trees that you see. I believe you, that what you see looks grim. I also agree we need to understand the problems to solve them. I'm not advocating for any lack of action.
Just suggesting that you consider:
- for all history life has gotten better at prediction
- truth makes better predictions than lies
What's more likely: that we are hitting a bump in the road that is an echo of many that have come before it, or that something fundamental has materially changed the trajectory of all scientific history up until this point?
Your points about the cost of information and the cost of content are valid. In a sense, content is pollution. It's a byproduct of competition for attention.
I can think of a few ways that the costs and addictive nature of content could become moot.
- AI lowers the cost of truth
- Human psychology evolves to devalue content
- Economic systems evolve to rebalance the cost/value of each
- Legal systems evolve to better protect people from deception
These are just what come to mind quickly. The main point is that these quirks of our current culture, psychology, economic system, technological stage and value system are temporary, not fundamental, and not permanent. Life has a remarkable ability to adapt, and I think it will adapt to this too.
I really appreciate you engaging with me on this so I could spend time reflecting on your perspective. If I ever came across as dismissive I apologize. You've helped me empathize with you and others with the same concerns and I value that. You haven't fundamentally changed my mind, but you gave me a chance to hone my thinking and more deeply reflect on your main points.
It feels like we agree on a lot, we are just incorporating different contexts into our perspectives.
Nah. I see it more as there was an information asymmetry, on this specific topic, due to our different lived experiences.
I can feasibly provide more nuanced examples of the mechanics at play as I see them. Their distribution results in a specific map / current state of play.
> - Economic systems evolve
> - Legal systems evolve
These types of evolutions take time, and we are far from even articulating a societal position on the need to evolve.
Sometimes that evolution is only after events of immense suffering. A brand seared on humanity’s collective memory.
We are not promised a happy ending. We can easily reach equilibrium points that are less than humanly optimal.
For example - if our technology reaches a point where we can efficiently distract the voting population, and a smaller coterie of experts can steer the economy, we can reach 1984 levels of societal ordering.
This can last a very long time, before the system collapses or has to self correct.
Something fundamental has changed and humanity will adapt. However, that adaptation will need someone to actually look at the problem and treat it on its merits.
One way to think of this is cigarettes, junk food and salads. People shifted their diets when the cost of harm was made clear, AND the benefits of a healthy diet were made clear AND we had things like the FDA AND scientists doing sampling to identify the degree of adulteration in food.
——
> My move is to focus on making it easier for college students to develop critical thinking and communication skills. Smoothing out the learning curves and making education more personalized, accessible, and interactive. I'm just getting started, but so far already helping thousands of students at multiple universities.
How are you doing this?
I never said that though?
> Hoping that audiences will reject this is viable.
I have no clue what you mean. What is "this" referring to?
It would be VERY refreshing to see more than one company try to build an LLM that is primarily truth-seeking, avoiding the "waluigi problem". Benevolent or not, progress here should not be led just by one man ...
I think that's by design though. Tolerate bots to get high-value users to participate more after they think real people are actually listening to them.
It was just a way for him to convey his "theory of forms" in which perfect versions of all things exist somewhere, and everything we see are mere shadows of these true forms. The men in the cave are his fellow Athenians who refuse his "obvious" truth, he who has peeked out of the cave and seen the true forms. All in all, it's very literal.
> Walk willingly into platos cave, pay for platos cave verification, sit down, enjoy all the discourse on the wall.
Homer pays to get the crayon put back up his nose
> Spit your drink out when you figure out that the shadows on the wall are all fake.
Homer gets annoyed/surprised if someone calls him stupid.
It’s kind of funny how everyone projects their own dialectic framing on statements, and assumes that a person opposing side A automatically supports whatever is side B in their own mind.
I would imagine a large majority of readers read your original post and immediately in their head thought, “are they one of those school voucher people” or something along those lines.
If we are all going around assuming 99% of the positions of people we are engaging with, what is the point of discussing anything?
The shadows on the wall aren't fake, they are just... shadows of real things. Plato's cave is about having an incomplete view of reality, not a false view of reality.
How open are you to a US-citizen-verified town square online? You'd have to scan your passport or driver's license to post memes and stuff.
I wonder how much more expensive per post it would be for the bad guys if social networks required the most draconian verification technology, like a hardware-based biometric system you have to rent, and touch or sit near when posting on social media. And maybe you have to read comments you want to post to a camera.
Even at such a ludicrous extreme, state actors would still find ways to pay people to astroturf. But how effective would extraordinary countermeasures like that be, I wonder.
(Also I think high global incomes would greatly mitigate the issue by reducing the number of people willing to pretend they genuinely hold views of foreign adversaries and risk treasony kinda charges.)
I'm thinking Nikita is falling out with Elon as they both seem to have diverging goals with the platform. Advertisement revenues on X isn't that great and neither are conversions on X so you can't really get consistent payouts that match Youtube. Premium subscriptions don't bring in as much dough as advertising did during Twitter days.
Hmm, interesting insight, what did they each say when you talked to them?
One side has largely left X.
We're on a thread about widespread fake/inauthentic users on Twitter right now. I see very little reason to trust those numbers.
https://www.forbes.com/sites/conormurray/2025/11/03/threads-...
The problem with not using a cloak was that you'd stand a very real chance of getting DDoS'd or, worse, outright hacked (made easier by the fact that in ye olde modem days, your computer was directly exposed to the Internet with no firewall/NAT to protect you), and even with using a cloak and a NAT router you'd still have trolls sending "DCC SEND" [1] into channels, immediately yeeting a bunch of people with old shoddy middleboxes.
> Accounts registered after March 2024 that have a verified email address are automatically assigned a generic user cloak. If your account does not currently have a cloak, you may contact staff to receive one.
And I don’t think he’s been trying all that hard either.
Do it enough times, and you end up with yes men that also force other people into the meat grinder well enough you don’t have to care, directly.
It’s a type of genius. It works best when you embrace that everyone wants to suck up to you anyway, and there are always more flunkies where they came from, so you’re really helping the world out by filtering down to the somewhat effective ones ASAP.
Instead they built better sycophants
And let’s be honest, you know what you’d do too if it was you.
I took a look at some X profiles where I know where they're based, and a couple of other random ones, and I can see "Account based in" and "Connected via" for all of them, just logged in as a free user.
Is it possible they enabled it back again?
I had this same idea before and it’s not terrible. If it guaranteed user privacy by using an external identification service (ID.me?), it might get some attention. You would likely have to reverify accounts every 6 months or so to limit sales of accounts, and you would need to prevent sock puppets somehow.
If you allow pseudonymity you would get some interesting dynamic conversations, while if you enforced a real name policy I think it would end up like a ghost town version of LinkedIn. (Many people don’t want to be honest on a “face” account.) The biggest problem with current pseudonymous networks like X/Twitter is you have no idea if the other person really has a stake in the discussion.
Also, if ID were verified and you could somehow determine that a person has previously registered for the service, bans would have teeth and true bad actors would eventually be expelled. It would be better to have a forgiving suspension/ban policy because of this, with gradually increasing penalties and reasonable appeals in case of moderation mistakes.
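A rough sketch of how the "has this person registered before" check could work without the service ever storing raw IDs (purely illustrative, in Python; it assumes the external verifier hands back a stable government-ID number, and SERVER_SALT and the set-based storage are stand-ins for real infrastructure):

    import hashlib
    import hmac

    SERVER_SALT = b"replace-with-a-real-secret"  # hypothetical server-side secret
    registered = set()  # digests of IDs that already have an account
    banned = set()      # digests of IDs that have been expelled

    def id_digest(gov_id_number: str) -> str:
        # Keyed one-way digest, so the raw ID number is never stored.
        return hmac.new(SERVER_SALT, gov_id_number.encode(), hashlib.sha256).hexdigest()

    def can_register(gov_id_number: str) -> bool:
        d = id_digest(gov_id_number)
        return d not in registered and d not in banned

Bans then attach to the digest, so re-registering with the same ID fails even though the service never kept the ID itself, and periodic reverification just recomputes the same digest.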
the linkedin effect seems more due to the nature of corporate culture where everyone's profile is an extension of their persona optimized for monetary/career outcomes so you get this vapid superficial fakeness to it that turns people off.
this X feature does make things interesting, like for example with engaging in US politics: while it shouldn't stop commentary from foreigners, it definitely should contain the limits of perception meddling
> the linkedin effect seems more due to the nature of corporate culture where everyone's profile is an extension of their persona optimized for monetary/career outcomes so you get this vapid superficial fakeness to it that turns people off.
The same would happen if people knew your IRL identity on a social site, see all the attempted “cancellations” on both sides of the aisle these last few years.
My small neighborhood has a non-anonymous chat group, which is 2-3 streets (~50 houses) inside a village which is inside a city. It is basically just a mini nextdoor but without ads or conspiracies.
A town square in Cologne where 90% of participants don't hail from Cologne but from London, Mumbai and San Francisco isn't going to solve the problems of Cologne or have any stake in doing so.
Which also reveals of course what Twitter actually is, an entropy machine designed to generate profit that in fact benefits from disorder, not a means of real world problem solving, the ostensible point of meaningful communication.
Upholding at least some utterly basic foundational values of humanity doesn't require holding any stake.
Except humans across the planet don't even agree on those "foundational values". What seems obvious and fundamental to us often isn't to others.
Verified residency is better than nothing for putting real money on the table. Although if you've been to a local town meeting, you'll know it's still not perfect.
it's got the followers because the followers want to read and reshare it.
i'd maybe like to see the location of origin as a pie chart on the followers list, as well as on what they're following. but if the idea is good (for whatever definition of good),
is being american even particularly relevant? i don't think the random guy in indiana's opinions on Mamdani are any more relevant than a random guy in nigeria's.
See exhibit 8 and such: https://www.justice.gov/opa/media/1366201/dl
Or 10 which specifically talks about Twitter https://www.justice.gov/archives/opa/media/1366191/dl
Are you people ever going to let this idea go? Almost all of this activity is coming out of India, Israel, and Nigeria. Russia isn’t mentioned once in the article.
https://www.theguardian.com/technology/2020/mar/13/facebook-...
This is the pattern with all Russian influence operations; they’re always implied to be ominously large and end up being laughably small.
American political polarization had nothing to do with the Russians; this is just the refrain of frustrated Democrats who refuse to acknowledge the consequences of ill-conceived policy. Israel has always had far more sway over American politics.
I know of a few defectors who ended up there; one was an American that went by the name of “Texas,” while another one was a Canadian who moved there to be a farmer in hopes of protecting his family from what he saw as degenerate values being propagated by the Canadian education system. Texas was supposedly murdered by Russian soldiers while operating with Kremlin-aligned militias in the Donbas region. The Canadian is still living in Russia and has a YouTube channel.
I suspected a regular rotation of Kremlin agents were on /pol/ during the Syrian Civil War. Russian sentiment was generally far more positive prior to the invasion. It’s possible this was all organic and just collapsed as people saw what they did to Ukraine; I really have no idea.
Frog Twitter for their part pivoted on Russia quite quickly in the early 2020s, around the time Thiel was buying out podcasts.
On the other hand there are hundreds of thousands of diaspora Russians, and they're very pro-Russian. Richard Spencer's ex-wife is a good example of this. Overall this is a much bigger impact than the dozen converts or a few thousand half-hearted Harper's.
Obviously before the war Russia was less publicly objectionable. In Syria everyone just hated ISIS.
The /pol/ effect is nostalgia for worlds that no longer exist and were never personally experienced. It's politically flavored nostalgia instead of Pokémon collecting.
In terms of American twitter, Russiagate and making Russia a red/blue partisan issue have been the most disastrous. It's simple contrarianism.
The problem in particular is not only the scale but that this propaganda is not solely directed at altering US policy towards Russia, it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country. If the US is fighting itself then it isn't fighting Russia after all.
Can you provide any citation for this and the approximate date when this was revealed? I’ve been hearing about this since 2015 and the last report I looked at was entirely unconvincing.
> it's also about stoking ethnic and religious tension to try to weaken the US and destroy its ability to be a unified cohesive country.
That is likely one of Russia’s goals; it is not likely that the Russians were the origin of these political cleavages. This was the problem with the entire Russian influence narrative; it was a post-hoc rationalization for why exceptionally bad ideas like diversity and multiculturalism were rejected by a subset of the population. In essence: “If they hadn’t been exposed to these Facebook posts, they never would have had these illiberal ideas put into their heads.”
It was also impossible to take seriously because most of the elected officials promoting it were receiving campaign contributions from AIPAC.
> A 2018 BuzzFeed News investigation found that at least one member of the Russian IRA, indicted for alleged interference in the 2016 US election, had also visited Macedonia around the emergence of its first troll farms, though it didn’t find concrete evidence of a connection. (Facebook said its investigations hadn’t turned up a connection between the IRA and Macedonian troll farms either.)
Further, the article supports the point I was making:
> For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content. But because misinformation, clickbait, and politically divisive content is more likely to receive high engagement (as Facebook’s own internal analyses acknowledge), troll farms gravitate to posting more of it over time, the report says.
This isn’t evidence of a concerted influence campaign. It’s not even clear what the article means when it refers to these outfits as troll farms. What I imagine when I hear the phrase is a professionalized state-backed outfit with a specific mandate to influence public opinion in a target country; this isn’t what is being described in the article.
There’s evidence that Russia engaged in these kinds of influence campaigns during the 2016 election, but I’ve never seen evidence that they were particularly effective at it.
"For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content"
BuzzFeed News investigation "didn't find concrete evidence of a connection" and "Facebook said its investigations hadn't turned up a connection between the IRA and Macedonian troll farms either"
I've been in touch with tech people in Eastern Europe. Grey zone warfare is very real in their countries.
Maybe it wasn't your intent, but your comment makes it sound like this was an issue with only a single side of the political spectrum. However...
https://www.businessinsider.com/russians-organized-pro-anti-...
> The Russians weaponized social media to organize political rallies, both in support of and against certain candidates, according to the indictment. Although the Russians organized some rallies in opposition to Trump's candidacy, most were supportive.
Not to mention the recent exposure of the funding source of the fine folks over at Tenet Media.
That's what the Russians do. It's too difficult to improve their own country, their own lives, and their own prospects, so they focus on the next-best strategy for the acquisition of power, which is dragging everybody else down to their level.
It’s possible the Russians have contracted influence campaigns out to Indian and Israeli firms, but the simpler explanation is just that India is continuing its long and storied history of using telecomm networks to scam unwitting boomers while Israel is continuing its long and storied history of being the worst greatest ally of all time.
What political interest does a Nigerian have in swaying US opinion?
They’re grifters; their interest in American politics is commercial. Indians were targeting Trump supporters with fake news for ad revenue as early as 2015; this is a continuation of that model.
That kind of false engagement is also a problem (for advertisers, genuine fans, etc.) but doesn't shape elections and thus doesn't come with policy consequences.
And to be fair, a lot of these accounts now being exposed as grifters had been called out as such for a while. And most of them were so obviously grifty that the only ones who followed them were those already deeply entrenched in their echo chamber.
It's funny that they're explicitly being exposed now though!
Or hasn't covered yet. It's interesting to watch the cycle of "shows up on social media" then "shows up in industry-specific press" then "shows up in mainstream press", with lag in each step.
These days, Fediverse is providing the same thing for some industries. You see stuff show up there first, then show up on X and industry press a little later, then mainstream press a little later.
IRC
Usenet
Facebook (live)
It's really fucked how the online content providers have moved from letting you seek out whatever you might fancy towards deciding what you're going to see. "Search" doesn't even seem like an important feature anymore many places.
But the thing that was surprising to me, as someone who remembers the world before the internet, is that anger is the thing that makes people stay on a site.
Before the internet came along, one would have thought that Truth would be the thing. Or funniness, or gossip, or even titillation and smut. Anger would have been quite far down on the list of 'addicting' things. But the proof is obvious: anger drives dollars.
There's no putting this knowledge away now that we know it.
The only question is: what are we going to do about it?
I don't do this with every topic, only when I'm interested in discussing something, just so I'm more informed and to reduce bias.
Scientists/Researchers
Journalists
Activists
Politicians
Subject Matter Experts (for the fields I am interested in)
There were (when I was using it) a large number of "troll" accounts, and bots, but it was normally easy to distinguish the wheat from the chaff
You could also engage in meaningful conversations with complete strangers - because, like Usenet, the rules for debate were widely adopted, and transgression resulted in shunning (something that I rarely see beyond twitter, to be honest)
I often hear that one community, or another, is "really good, not toxic at all", which is true when it starts (for tech, whilst it's "new" and everyone is still interested in figuring out how it works, sharing their learnings, and actively working to encourage people to also take interest)
Then idealism works its way in - this community is the greatest that ever existed, ever - and whatever it is centred on is the best at whatever
Then - all other things are bad, you're <something bad> if you think otherwise
And, boom, toxicity starts to abound
For me, I've seen it so many times, whether in motorised transport (Motorcycles vs cars, then Japanese bikes vs British/European/American then individual brands (eg Triumph vs Norton), or even /style/ of bike (Oh you ride a sport bike, when clearly a cruiser is better...))
In the tech scene it's been Unix vs Microsoft, then Microsoft vs Linux or Apple, and then... well no doubt you've seen it too
Uhm, I would rather say it is when the idealists are pushed out by grifters that things get bad for a community.
If you followed a variety of people it was quite addictive - so many celebrities or other notable people meant you got actual "first hand news", and it was fun seeing everyone join in on silly jokes and games and whatever, that doesn't hit quite as hard when it's just random usernames not "people".
But it suffered for that success, individual voices got drowned out in favour of the big names, the main way to get noticed becoming more controversial statements, and the wildly different views becoming less free flowing discussion and more constant arguments.
It was fun for a while if you followed fun people, but I think the incentives of such systems means it was always going to collapse as people worked out how to manipulate it.
X and Reddit are no different.
But the problem with over credulity goes far beyond social media. I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
Yeah, but basically nobody is capable of evaluating those sources themselves, outside of very narrow topics.
Reading a wikipedia page about Cicero? Better make sure you can read Latin and Greek, and also have a PhD in Roman history and preferably another one in Classical philosophy, or else you will always be stuck with translations and interpretations of other people. And no, reading a Loeb translation from the 1930s doesn't mean you will fully understand what he wrote, because so much of it all hinges on specific words and what those words meant in the context they were written, and how you should interpret whole passages and how those passages relate to other authors and things that happened when he was alive and all that fun stuff.
And that's just one small subject in one discipline. Now move on to an article about Florence during the Renaissance and oh hey suddenly there are yet another couple of languages you should learn and another PhD to get.
I really don't, as far as social media goes. If I see a link here, the account posting it likely doesn't play any part; trust comes from the source of the content more than the random user.
The reason I ask is because there are a few people I follow that use VPNs, but their location is accurate on X.
Also, X shows where you downloaded the app from, e.g. [Country] App Store, so that one might be a bit more difficult to get around.
They would most likely use residential proxies/VPNs that show your traffic coming out of a regular household ISP. They can be purchased for cheap.
Ironically many of the people in favor of banning VPNs are themselves using a VPN.
Remember that China blocks Western social media, yet posts a lot of Chinese government propaganda on Western social media. Making VPNs illegal for the general public does not entail making VPNs inaccessible to government agents.
It’s ironic but also completely typical.
Same way so many people publicly freaking out about homosexuality turn out to be gay. There’s something in human nature that makes people shout about the dangers of the things they themselves do, some kind of camouflage instinct I guess.
And with that statement you ironically insinuate that I'm a pedo
You're not the first person who has made that argument (that the people talking about a problem actually are the real perps!), but from my perspective it feels more like an easy way to make it socially unacceptable to talk about categories of issues. Which is likely intended by the person making this argument, likely because... You see where this is going?
How do you know this as a fact?
Or maybe they are able to link carrier-sourced cellphone location datasets to particular twitter accounts. Those aren't going to be real-time though, so something like that could explain the lag.
Going forward this is going to be a bit of a cat-and-mouse game. There are plenty of other tricks X can do to determine country of origin. Long term I agree the sock puppets have the upper hand here, though forcing them to go through the effort is probably a good thing.
I'd make the assumption that posters located in Russia, China, NK, etc. are likely to be in some way tied to the state, where posters in India, random African nations, etc. are more likely to be private actors of which some will be US-based outsourcing to low-cost labor.
Almost all of these accounts are operating out of India or Israel. The Indians are usually in it for the money (though there’s probably some Israeli outsourcing going on there, too), whereas the Israelis were riding off 2010s Islamophobia to prime American Evangelicals for their activities in Gaza.
That is exactly what is happening and what is being reported on. The thing you attribute to "weird personal bias" is being widely exposed.
We should probably examine your weird personal bias. Weird, because you could just read the article!
The Department of Homeland Security, for one.
Edit: Link removed as I was disinformed by a /pol/ PsyOp.
https://www.reuters.com/technology/tencents-wechat-reveal-us...
However, if you comment on those articles, your provincial location will be attached. The Cyber Admin of the CCP mandates every app to reveal the provincial location of authors and commenters.
Relevant: “Containment Control for a Social Network with State-Dependent Connectivity” (2014), Air Force Research Laboratory, Eglin AFB: https://arxiv.org/pdf/1402.5644.pdf
https://web.archive.org/web/20160410083943/http://www.reddit...
Funny nonetheless though.
Eglin has something like 50,000 people, but its actual population as a census-designated area is more like 5,000.
Oak Brook, IL was also "most addicted" but people didn't run with the idea that McDonalds HQ was running psyops.
It was generally being called astroturfing when it got more apparent on Reddit in the early 2010s, and it definitely didn't lessen afterwards.
You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.
Not sure what the "most addicted" means except for "over 100k visits total", but it doesn't seem to be pulled out of op's ass.
This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.
These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.
Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and direct access to our community-centric neurons. It is a dream come true for these slavers which evoke a host of expletives in my mind.
Human beings are addicted to community social interaction. It is normally a healthy addiction. It is not any longer in service of us.
The short term solution: reduce reliance on and consumption of corporate captured social media
The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech
When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.
Also don’t reply to this.
If you’re looking to make some money on X you want engagement. If you want engagement you want to say controversial things people will argue about. What better than right wing US politics, especially when the X algorithm seems to amplify it?
Which for many enterprising trolls/grifters has seen them become SEO (TEO?) experts who push their preferred narratives for clout/profit while drowning entire timelines in a flood of noise.
for canada though, i'd like to see the CBC dedicatedly paying canadians to post canadian perspectives on social media
Yay politics. Hooray for the engagement-driven internet.
While the location now shows US, X notes that the account location might not be accurate due to use of VPN
Just 'now'... not when signing up for their account? It's cheap and easy to use social media to propagandize, so certainly there are scores of fake American accounts, but it's irritating that this article doesn't address VPN usage during account creation.
But there may be ways to link those records to a platform's users
X begins rolling out 'About this account' location feature to users' profiles
https://news.ycombinator.com/item?id=46024417
Top MAGA Influencers on X/Twitter Accidentally Unmasked as Foreign Trolls
Contrast that with legit pro-rightwing accounts: @tuckercarlson (17M), @benshapiro (8M), @RealCandaceO (7.5M), @jordanbpeterson (6M), @catturd2 (4M), @libsoftiktok (4.5M), @seanhannity (7M).
Now if we could have other platforms do the same, and not just accidentally like with the Reddit case lol
If I made an X account while vacationing in a foreign country, would that then be my country-of-origin for that account, even upon continuing to use X after returning home?
Or is it based on the IP address of last interaction?
It's absolutely nuts.
https://youtu.be/rE3j_RHkqJc
Anger works wonders online.
With the development capability remaining at Twitter, anything is possible.
They use professional paid services from these low labour cost countries all the time for publicity or to control the narrative.
By some estimates 20-60% of everything you see on social media is generated by a bot farm, depending on the forum in question. An analysis of Reddit showed some subreddits are 80% AI generated.
The "control the narrative" stuff is mostly a PR campaign by social media intelligence companies trying to make their services seem more valuable than they are.
Speculation: they're resolving historical IP addresses against a current IP geolocation database. An IP which belonged to a US company in 2010 may have since been sold to a Nigerian ISP, but that doesn't mean that the user behind that IP in 2010 was actually in Nigeria.
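If that speculation is right, the failure is easy to reproduce. A minimal sketch in Python, assuming the maxminddb library and a locally downloaded GeoLite2 country database (both assumptions here; nobody outside X knows what their pipeline actually does):

    # Looking up a historical IP in a *current* geolocation database
    # attributes it to whoever holds that address block today, not to
    # whoever used it back in 2010.
    import maxminddb

    with maxminddb.open_database("GeoLite2-Country.mmdb") as reader:
        record = reader.get("203.0.113.7")  # stand-in for a 2010-era signup IP
        print(record["country"]["iso_code"] if record else "unallocated")

Without point-in-time geolocation snapshots, every address block reassigned since signup silently shifts the inferred country.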
These are paid astroturfers probably more like call centers, paid for presumably by all sorts of interests from foreign intelligence services, to businesses (or select executives), to internal political groups or politicians trying to manipulate public opinion.
Both political extremes are suffering from this kind of manipulation, where real concerns are twisted and amplified for, let's say, the more gullible half of the population (gullibility knows no exclusive political alignment). The excluded middle is afraid of the people who have been manipulated this way (death threats also know no political boundaries).