
Check Your Fucking Sources, People

https://brodzinski.com/2026/05/check-fcking-sources.html
37•flail•1h ago

Comments

wing-_-nuts•56m ago
Ironically, 'source checking' is something AI is quite good at.
6stringmerc•56m ago
Citation needed, please
wing-_-nuts•6m ago
Personal experience? You ask it for the name of the paper referenced. You google that paper (for some reason it's not great at going out and acquiring the paper itself). If the claim isn't quickly findable via ^F, you upload the PDF and ask whether the paper supports the assertion. Then you go read, asking it clarifying questions about hazard ratios, what they controlled for, etc.

AI is quite good when grounded in a source.

flail•39m ago
There's nuance to that. An LLM is quite capable of suggesting relevant reading, given the context. Especially when the context is broad enough that there's enough training data.

"Find me research on code reviews, their size, and quality" would give you more than enough reading. Yet if you start with a claim, like "Longer PRs mean worse defect detection," the relevant data points become sparse enough for the AI to start hallucinating.

You get "something, something, PR length, defect detection, IDK, I don't read research papers." Such output is fine as long as the author cares to validate it.

Skip the second step, and you might be good if you ask about something generic, like "What's the Slack story?" or "How did Blockbuster go bust?" Ask about some specific details, though, and you're bound to end up with made-up stuff that sounds just about right, while it's actually wrong.

throw310822•21m ago
Checking is different from finding, though. Source checking means just "verify that this information is actually present in that document". Much harder to hallucinate in this case.
kakugawa•24m ago
Have we forgotten how bad LLMs were at citing sources when they first came out? We had to build a lot of structure (harness engineering), and frontier labs had to do specific training, to try to compensate for this.

So, LLMs are inherently bad at citing sources. A lot of effort has been put in to improve this behavior, but it's compensating for an inherent flaw.

oulipo2•11m ago
Judging by the number of scientific papers that have been outed as AI-generated precisely because they hallucinated sources, it's not.
6stringmerc•56m ago
Also relevant: the derision and mockery directed at JD Vance as a “couch fucker” even used by John Oliver.

I read “Hillbilly Elegy” and wondered why it wasn’t in there. Snopes cleared it up in a matter of minutes. Whether he sues people into oblivion is his prerogative, but it’s a fascinating case study showing that we are, indeed, living in a Post-Truth environment.

red_hare•52m ago
There was a time, in the early to mid 2010s, when the phrase "Fake News" was almost exclusively used by people in publishing to talk about a very real rise in editorial disruption as news readers shifted from being desktop and homepage-driven to mobile and facebook-driven.

And then, one day, the politicians started saying it...

righthand•47m ago
Purveyors of post-truth lies don’t turn around and sue people. They just peddle more lies; this is the kind of environment scum like the Vances live for.
justin66•33m ago
Did anyone actually believe that was anything more than a joke? It was a disgusting and weird thing to suggest about a disgusting and weird guy, and highly immature, but it's only libel if it's presented as being true.
kittoes•29m ago
Interesting that you focus on John Oliver's bit, considering that it came up in the context of JD Vance doubling down on the whole "they're eating the cats and dogs" thing.

https://youtu.be/NtRPLCso0Sw?t=14m09s

Makes me believe that you're really not commenting in good faith here.

ourmandave•24m ago
Tucker Carlson set the precedent when he was sued for libel by Karen McDougal and won because Fox News lawyers successfully argued that he wasn't a reporter and that no reasonable person would believe he was stating facts.

Unless he's repeating Trump's lies, then 77M people apparently believe it.

titanomachy•22m ago
Oliver in that clip literally calls the couch-fucking thing "the fun kind of misinformation". He's not suggesting it's true.
ubertaco•20m ago
You're getting downvotes because the target of this particular lie was a known liar, so people probably feel like it's some sort of poetic justice (or they know it's just in-kind retaliation and are cathartically satisfied by it).

I don't think the right answer to widespread disinformation campaigns is retaliatory disinformation campaigns (even if they're couched – pun not intended – in a just-barely-thin-enough veil of "wink wink we know this is a joke").

The right answer is to create systems and measures that actually limit disinformation.

ekianjo•43m ago
> Links No Longer Mean Credibility

They never did!

flail•34m ago
Ultimate credibility? Sure, they never did. Yet the whole thing Google was built upon was using links as tokens of credibility.

You'd assume an outgoing link from a CNN website has more credibility than one from an anonymous blog. That is, I reckon, still true. Although the credibility either link conveys is degrading. Again, it has been so since we started playing the game of SEO, yet AI-generated content in this context is basically a weapon of mass destruction. The deterioration has sped up dramatically.

tinfoilhatter•43m ago
It's amazing that people think Snopes or other "fact-checkers" are reliable sources of information and represent ultimate truth, as if they're immune to bias and don't receive funding from people / organizations with their own agendas.
ubertaco•24m ago
Snopes (like anywhere) is only as reliable as its track record of collecting firsthand sources and accurately reporting on their contents.

Which is to say: pretty good so far, in their case. For the future? Who knows. But they've done well up to now, at least.

tinfoilhatter•18m ago
Actually no, their track record is not great: https://en.wikipedia.org/wiki/Snopes#2010s
wat10000•20m ago
They are generally quite good, and they provide ample background info for you to replicate (or repudiate) their findings on your own if you're so inclined.

What's amazing is that people think Snopes or other fact-checkers are automatically wrong. I assume this comes from people who make a habit of believing bullshit and can't handle being corrected.

tinfoilhatter•16m ago
When there is no independent media, it's not difficult to find sources that back up the lies that Snopes and other fact-checkers peddle.

https://fair.org/home/the-digital-media-oligarchy-who-owns-o...

https://swprs.org/the-american-empire-and-its-media/

louiereederson•31m ago
Late last year I tried asking ChatGPT to summarize a collection of 10 researchers' views/findings on a topic and provide representative quotes. It initially looked plausible, but when I checked the links, the quotes were from clearly AI-generated summaries of actual interviews. The paraphrasing was also plausible but subtly and profoundly incorrect.

I haven't tested this again on the latest models though, so not sure if there's been an improvement.

throw310822•28m ago
> Ops, the link doesn’t lead to the study, but to another article. But that article, in turn, has a link of its own. Which leads to yet another article that doesn’t even mention the study anymore.

This is a common, infuriating practice: it provides a veneer of authoritativeness and credibility to newspaper articles, and who is ever going to click on the links that support those very cogent claims? Nobody, of course. So they just link to another article with vaguer claims, and with each level deeper, your willingness to verify the information evaporates at the same rate as the information itself.

But hey, in the meanwhile the author has managed to sneak in that "scientists have found" and that if you don't believe it you must be anti-science.

Incidentally, highlighting this abuse (together with a bunch of other quality and fact-checking work) would be a great use of AI for online news publications.

RajT88•27m ago
Facebook, ever the wasteland of bullshit and scams, has gotten even more bullshit and scammy in the AI era.

I have found the single best way to avoid being pissed off by this shit is to just avoid Facebook. It dramatically cuts down on the amount I am exposed to.

I also run with adblockers, and consume news via brutalist.report, which also helps. (I avoid the Fox News section at the bottom)

princevegeta89•9m ago
Not just Facebook: also make sure to avoid TikTok, Instagram, and YouTube, along with YouTube Shorts. Many of them are nothing but fake AI content, and these days people are using AI to create fake profiles of good-looking, cute girls doing impossible things or showing off their bodies, and so on. At least 50% of what you see in your feed should be considered AI-generated content.

I would say save your time and energy, and invest that into something else - forget all this social media.

SllX•17m ago
> Not Sweden, but one Swedish startup.

Just as an aside jumping off this sentence from the article, I am far less tolerant of the practice of naming countries of origin or general locales rather than specific organizations in headlines and stories.

Name the organization, and if you want to in the body, name where they’re from/located/operating as it pertains to the organization. For that matter, if you can offer information on the specific locale (Sweden is a big place after all), you should also do that unless it really is something more national/international.

vslira•11m ago
People like to blame social media for this kind of bullshit, but social media is just the vector.

Just this week I read a "study" because someone on social media claimed it was conducted by (public, famous) Unis A, B, and C and reported, as its effect, a 30% increase in revenue for the companies that participated in the experiment.

The "study" was commissioned by an interest group (bad sign). It was conducted by people associated with said unis (I didn't check their credentials), and it did report in its headline the 30% revenue increase.

Said study was about an experiment that ran for a few months. Within these months, the revenue was flat (which could be considered good enough for the cause). The 30% was the revenue of this period against the same period the previous year. So somehow the experiment affected the companies retroactively! Not to mention that the researchers were able to find a group of companies that were, on average, growing 30% YoY. Surprising indeed.

So even if you check your sources, it may still be bullshit science or bullshit reporting from well-credentialed sources.

The Wonders of AI: We Are Retiring Our Bug Bounty Program

https://turso.tech/blog/the-wonders-of-ai
211•tjek•2h ago•134 comments

A 0-click exploit chain for the Pixel 10

https://projectzero.google/2026/05/pixel-10-exploit.html
87•happyhardcore•1h ago•34 comments

O(x)Caml in Space

https://gazagnaire.org/blog/2026-05-14-borealis.html
166•yminsky•4h ago•27 comments

ASCII by Jason Scott

https://ascii.textfiles.com/
37•bookofjoe•1h ago•5 comments

Explore Wikipedia Like a Windows XP Desktop

https://explorer.samismith.com/
313•smusamashah•6h ago•83 comments

High dimensional geometry is transforming the MRI industry (2017) [pdf]

https://www.ams.org/government/DonohoPresentation06-28-17Final.pdf
37•nill0•2h ago•6 comments

Trade Dollars with other startups. Book it as revenue

https://www.revswap.ai/
89•tormeh•2h ago•53 comments

Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
973•arkadiyt•22h ago•508 comments

Radicle: Sovereign code forge built on Git

https://radicle.dev/
97•KolmogorovComp•3h ago•24 comments

Show HN: Find the best local LLM for your hardware, ranked by benchmarks

https://github.com/Andyyyy64/whichllm
257•andyyyy64•6h ago•55 comments

UK government replaces Palantir software with internally-built refugee system

https://www.bbc.com/news/articles/c2l2j1lxdk5o
428•cdrnsf•16h ago•162 comments

Too dangerous or just too expensive? The real reason Anthropic is hiding Mythos

https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-h...
124•chbint•2h ago•134 comments

SigNoz (YC W21, open source Datadog) Is hiring for growth and engineering roles

https://signoz.io/careers
1•pranay01•3h ago

A few words on DS4

https://antirez.com/news/165
379•caust1c•17h ago•156 comments

Details of the Daring Airdrop at Tristan Da Cunha

https://www.tristandc.com/government/news-2026-05-11-airdrop.php
214•kspacewalk2•11h ago•79 comments

NanoTDB – Golang Append-Only Time Series DB

https://github.com/aymanhs/nanotdb
19•aymanhs72•5h ago•3 comments

Building ML framework with Rust and Category Theory

https://hghalebi.github.io/category_theory_transformer_rs/
69•adamnemecek•23h ago•16 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
652•allenleee•23h ago•152 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
405•quadrige•21h ago•107 comments

Codex is now in the ChatGPT mobile app

https://openai.com/index/work-with-codex-from-anywhere/
403•mikeevans•19h ago•208 comments

Gyroflow: Video stabilization using gyroscope data

https://github.com/gyroflow/gyroflow
134•nateb2022•3d ago•21 comments

Amazon workers under pressure to up their AI usage–so they're making up tasks

https://www.fastcompany.com/91541586/amazon-workers-pressured-to-up-ai-use-extraneous-tasks
79•hackernj•2h ago•58 comments

Welcome to the Strip Mining Era of OSS Security

https://www.metabase.com/blog/strip-mining-era-of-open-source-security
70•salsakran•4h ago•53 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
409•hetsaraiya•22h ago•96 comments

Check Your Fucking Sources, People

https://brodzinski.com/2026/05/check-fcking-sources.html
39•flail•1h ago•29 comments

Steve Jobs' NeXT Computer: His Forgotten Exile Years

https://spectrum.ieee.org/steve-jobs-next-computer
80•rbanffy•5h ago•79 comments

Mullvad exit IPs are surprisingly identifying

https://tmctmt.com/posts/mullvad-exit-ips-as-a-fingerprinting-vector/
501•RGBCube•13h ago•306 comments

Claude for Legal

https://github.com/anthropics/claude-for-legal
153•Einenlum•18h ago•133 comments

Tesla Wall Connector bootloader bypasses the firmware downgrade ratchet

https://www.synacktiv.com/en/publications/exploiting-the-tesla-wall-connector-from-its-charge-por...
113•p_stuart82•18h ago•64 comments

HDD Firmware Hacking

https://icode4.coffee/?p=1465
215•jsploit•23h ago•34 comments