I meant, I no longer know who to trust. It feels like the only solution is to go live in a forest, and disconnect from everything.
Also feel you wrt living in a forest and leaving this all behind.
On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.
And yet one person kills a CEO, and they're a terrorist.
"Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”
Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals. "
https://www.beckerspayer.com/payer/one-year-after-ceo-killin...
When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.
But yeah, shooting people is a bad decision in almost all cases.
In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.
We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, while when 100% of users face an outage once a year everyone hears about it, even though the latter scenario is more reliable than the former.
But that doesn't seem to be true at all. He just had a whole lot of righteous anger, I guess. Gotta be careful with that stuff.
I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.
I never understood why this doesn't alarm more people on a deep level.
Heck you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?
Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.
Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.
It's all completely arbitrary. There's no reason why social media companies can't be legally compelled to divest from all user PII and be forced to go to regulated third-party companies for such information. Or force social media companies to allow export of data, or force them to follow consistent standards so competitors can easily enter the field and users can easily follow.
You can go for the throat and say that social media companies can't own an advertising platform either.
Before you go all "oh no, the government should help the business magnates more, not the users," I suggest you study how monopolies operated in the 19th century. They looked no different from the corporate structure of any big tech company today, and see how the government finally regulated those bloodsuckers back then.
I must be really good at asking questions if they have that kind of power. So here's another. How would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you make it a bureaucratic pain in the ass to do it.
And, no, I would much rather the companies that I explicitly create an account with and interact with be the ones holding my data, rather than some shady third parties.
These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.
If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.
If we don’t ask these questions, we can never move past hand wringing.
One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.
0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...
These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who scrutinize research way past the point of diminishing returns, because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.
Because that's where people with that expertise work.
Whole industries have been paid off for decades; the hope lies with independent journalists who have no ties to anybody but the public they want to reach.
Find one independent journalist on YT with lots of information and sources to back it up, and you will notice how we have been living in a lie.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
More like the exposure of institutions. It’s not like they were more noble previously, their failings were just less widely understood. How much of America knew about Tuskegee before the internet?
The above view is also wildly myopic. Did you think modern society had overcome populist ideas, extreme ideas, and social revolution, all of which have been very popular historically? Human nature does not change.
Another thing that doesn’t change? There are always, as evidenced by your own comment, people saying the system wasn’t responsible, that it’s external forces harming the system. The system is immaculate, the proletariat are stupid. In any other context, that’s called black and white thinking.
bikenaga•4h ago
Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research."