I read “Hillbilly Elegy” and wondered why it wasn’t in there. Snopes cleared it up in a matter of minutes. Whether he sues people into oblivion is his prerogative, but it’s a fascinating case study showing that we are, indeed, living in a post-truth environment.
And then, one day, the politicians started saying it...
https://youtu.be/NtRPLCso0Sw?t=14m09s
Makes me believe that you're really not commenting in good faith here.
Unless he's repeating Trump's lies, in which case 77M people apparently believe it.
I don't think the right answer to widespread disinformation campaigns is retaliatory disinformation campaigns (even if they're couched – pun not intended – in a just-barely-thin-enough veil of "wink wink we know this is a joke").
The right answer is to create systems and measures that actually limit disinformation.
They never did!
You'd assume an outgoing link from a CNN website carries more credibility than one from an anonymous blog. That is, I reckon, still true, although the credibility either link conveys is degrading. It has been degrading since we started playing the game of SEO, but AI-generated content in this context is basically a weapon of mass destruction: the deterioration has sped up dramatically.
Which is to say: pretty good so far, in their case. As for the future? Who knows.
What's amazing is that people think Snopes or other fact-checkers are automatically wrong. I assume this comes from people who make a habit of believing bullshit and can't handle being corrected.
https://fair.org/home/the-digital-media-oligarchy-who-owns-o...
I haven't tested this again on the latest models though, so not sure if there's been an improvement.
This is a common, infuriating practice: it lends a veneer of authority and credibility to newspaper articles, and who is ever going to click on the links that support those very cogent claims? Nobody, of course, so they just link to another article with more vague claims, and with each level deeper your willingness to verify the information evaporates along with the information itself.
But hey, in the meantime the author has managed to sneak in a "scientists have found", with the implication that if you don't believe it you must be anti-science.
Incidentally, flagging this kind of abuse (along with a bunch of other quality and fact-checking tasks) would be a great use of AI for online news publications.
I have found the single best way to avoid being pissed off by this shit is to just avoid Facebook. It dramatically cuts down on the amount I am exposed to.
I also run with adblockers, and consume news via brutalist.report, which also helps. (I avoid the Fox News section at the bottom)
I would say save your time and energy, and invest that into something else - forget all this social media.
As an aside, jumping off this sentence from the article: I am far less tolerant of the practice of naming countries of origin or general locales in headlines and stories rather than specific organizations.
Name the organization in the headline, and if you want, name in the body where it's from/located/operating, as it pertains to the organization. For that matter, if you can offer information on the specific locale (Sweden is a big place, after all), you should do that too, unless the story really is something more national/international.
Just this week I read a "study" because someone on social media claimed it was conducted by (public, famous) Unis A, B and C and reported, as an effect, a 30% increase in revenue for the companies that participated in the experiment.
The "study" was commissioned by an interest group (bad sign). It was conducted by people associated with said unis (I didn't check their credentials), and it did report in its headline the 30% revenue increase.
Said study was about an experiment that ran for a few months. Within these months, the revenue was flat (which could be considered good enough for the cause). The 30% was the revenue of this period against the same period the previous year. So somehow the experiment affected the companies retroactively! Not to mention that the researchers were able to find a group of companies that were, on average, growing 30% YoY. Surprising indeed.
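To make the sleight of hand concrete, here is a minimal sketch with invented numbers (not from the actual study): if revenue was flat during the experiment months, the whole 30% year-over-year jump must have happened before the experiment started.

```python
# Invented numbers to illustrate the parent's point about the "study".
last_year_period = 100.0   # revenue in the same months, previous year
experiment_start = 130.0   # revenue when the experiment began
experiment_end = 130.0     # flat during the experiment itself

yoy_growth = (experiment_end - last_year_period) / last_year_period
within_experiment_growth = (experiment_end - experiment_start) / experiment_start

print(f"YoY growth the study reported:  {yoy_growth:.0%}")
print(f"Growth during the experiment:   {within_experiment_growth:.0%}")
```

The headline number measures pre-existing growth, not the experiment's effect.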
So even if you check your sources, it may still be bullshit science or bullshit reporting from well-credentialed sources.
wing-_-nuts•6m ago
AI is quite good when grounded in a source.
flail•39m ago
"Find me research on code reviews, their size, and quality" would give you more than enough reading. Yet if you start with a claim, like "Longer PRs mean worse defect detection," the pool of relevant data points shrinks to the point where the AI starts hallucinating.
You get "something, something, PR length, defect detection, IDK, I don't read research papers." Such output is fine as long as the author cares to validate it.
Skip the validation step, and you might still be fine if you ask about something generic, like "What's the Slack story?" or "How did Blockbuster go bust?" Ask about specific details, though, and you're bound to end up with made-up stuff that sounds just about right while actually being wrong.
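The validation step described above can be partly automated. A minimal sketch (hypothetical helper, not any real tool): check whether quotes a model attributes to a source actually appear in that source, normalized for whitespace and case.

```python
import re

def is_grounded(claim_quotes, source_text):
    """Return True only if every quote the model attributes to the
    source actually appears in it (whitespace/case-insensitive)."""
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    src = normalize(source_text)
    return all(normalize(q) in src for q in claim_quotes)

source = "Larger pull requests are associated with lower defect detection rates."
print(is_grounded(["larger pull requests"], source))            # present in source
print(is_grounded(["PRs under 200 lines are reviewed faster"], source))  # invented
```

This only catches fabricated verbatim quotes, not paraphrased misreadings, but it is a cheap first filter before a human reads the cited papers.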
kakugawa•24m ago
So, LLMs are inherently bad at citing sources. A lot of effort has been put into improving this behavior, but it's compensating for an inherent flaw.