frontpage.

France Launches Government Linux Desktop Plan as Windows Exit Begins

https://www.numerique.gouv.fr/sinformer/espace-presse/souverainete-numerique-reduction-dependance...
155•embedding-shape•37m ago•40 comments

How NASA built Artemis II’s fault-tolerant computer

https://cacm.acm.org/news/how-nasa-built-artemis-iis-fault-tolerant-computer/
419•speckx•20h ago•154 comments

Show HN: Keeper – embedded secret store for Go (help me break it)

https://github.com/agberohq/keeper
21•babawere•2h ago•1 comment

I still prefer MCP over skills

https://david.coffee/i-still-prefer-mcp-over-skills/
224•gmays•9h ago•187 comments

ETH Zurich demonstrates 17,000 qubit array with 99.91% fidelity

https://ethz.ch/en/news-and-events/eth-news/news/2026/04/a-new-trick-brings-stability-to-quantum-...
104•joko42•7h ago•21 comments

Native Instant Space Switching on macOS

https://arhan.sh/blog/native-instant-space-switching-on-macos/
530•PaulHoule•15h ago•244 comments

Model-Based Testing for Dungeons & Dragons

https://www.loskutoff.com/blog/model-based-testing-dnd/
26•Firfi•2d ago•4 comments

We've raised $17M to build what comes after Git

https://blog.gitbutler.com/series-a
156•ellieh•9h ago•339 comments

Generative art over the years

https://blog.veitheller.de/Generative_art_over_the_years.html
159•evakhoury•2d ago•41 comments

Artemis II and the invisible hazard on the way to the Moon

https://www.ansto.gov.au/news/artemis-ii-and-invisible-hazard-on-way-to-moon-part-1
18•zeristor•4h ago•17 comments

The Art of Risk Management (2017)

https://www.bcg.com/publications/2017/finance-function-excellence-corporate-development-art-risk-...
16•walterbell•2d ago•1 comment

Charcuterie – Visual similarity Unicode explorer

https://charcuterie.elastiq.ch/
245•rickcarlino•15h ago•48 comments

RAM Has a Design Flaw from 1966. I Bypassed It [video]

https://www.youtube.com/watch?v=KKbgulTp3FE
249•surprisetalk•2d ago•82 comments

Penguin 'Toxicologists' Find PFAS Chemicals in Remote Patagonia

https://www.ucdavis.edu/health/news/penguin-toxicologists-find-pfas-chemicals-remote-patagonia
22•giuliomagnifico•4h ago•8 comments

Old laptops in a colo as low cost servers

https://colaptop.pages.dev/
296•argentum47•16h ago•168 comments

Unfolder for Mac – A 3D model unfolding tool for creating papercraft

https://www.unfolder.app/
243•codazoda•18h ago•45 comments

CollectWise (YC F24) Is Hiring

https://www.ycombinator.com/companies/collectwise/jobs/Ktc6m6o-ai-agent-engineer
1•OBrien_1107•6h ago

War on Raze

https://gist.github.com/chrispsn/af6844b80687462814fc39d4b97399a6
15•tosh•3d ago•6 comments

Instant 1.0, a backend for AI-coded apps

https://www.instantdb.com/essays/architecture
158•stopachka•16h ago•84 comments

PicoZ80 – Drop-In Z80 Replacement

https://eaw.app/picoz80/
198•rickcarlino•16h ago•32 comments

Research-Driven Agents: When an agent reads before it codes

https://blog.skypilot.co/research-driven-agents/
180•hopechong•18h ago•48 comments

The Raft consensus algorithm explained through "Mean Girls" (2019)

https://www.cockroachlabs.com/blog/raft-is-so-fetch/
92•vermilingua•8h ago•22 comments

Principles of Mechanical Sympathy

https://martinfowler.com/articles/mechanical-sympathy-principles.html
64•zdw•2d ago•11 comments

Afrika Bambaataa, hip-hop pioneer, has died

https://www.bbc.co.uk/news/articles/c2evppm30p7o
140•mellosouls•6h ago•35 comments

An AI robot in my home

https://allevato.me/2026/04/07/an-ai-robot-in-my-home
43•kukanani•2d ago•16 comments

Kagi Product Tips – Customize Your Search Results with URL Redirects

https://blog.kagi.com/tips/redirects
93•treetalker•13h ago•16 comments

Hegel, a universal property-based testing protocol and family of PBT libraries

https://hegel.dev
120•PaulHoule•16h ago•32 comments

Reverse engineering Gemini's SynthID detection

https://github.com/aloshdenny/reverse-SynthID
154•_tk_•15h ago•52 comments

Zero-build privacy policies with Astro

https://www.openpolicy.sh/blog/no-build-astro
8•jamie_davenport•3h ago•7 comments

Will I ever own a zettaflop?

https://geohot.github.io//blog/jekyll/update/2026/01/26/own-a-zettaflop.html
102•surprisetalk•3d ago•70 comments

Scientists invented a fake disease. AI told people it was real

https://www.nature.com/articles/d41586-026-01100-y
70•latexr•2h ago

Comments

daoboy•1h ago
It sounds like there wasn't really a counter-narrative for the models to learn from. This feature of how LLMs accumulate information is already being gamed by seeding the internet with preferred narratives.

I'm not sure how many Medium articles, blog posts and reddit threads I need to put out before grok starts telling everyone my widget is the best one ever made, but it's a lot cheaper than advertising.

21asdffdsa12•1h ago
Can a model not just ignore all things that have no counter-argument by default? Like: if there were no flat earthers being widely debunked, drop the idea of a spherical earth? An idea only exists if it was fought over?
linzhangrun•1h ago
It's not very realistic. It would significantly impact the user experience. Many things have not been fully discussed on the internet; there isn't the luxury of that much corpus data.
21asdffdsa12•1h ago
But then a mono-opinion - aka certainty - is actually peak uncertainty? Could the number of occurrences be baked in as a sort of detrimental weight?
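
A toy sketch of what "baked in as a detrimental weight" could look like (hypothetical Python, purely illustrative - no real model scores claims this way):

    def claim_confidence(support_docs: int, dissent_docs: int) -> float:
        """Score a claim lower when it is attested but never contested.

        The intuition from the thread: a claim nobody ever argued
        against may simply be too obscure to have been checked, so
        unanimous "agreement" counts as weak evidence.
        """
        if support_docs == 0:
            return 0.0  # never attested at all
        if dissent_docs == 0:
            # uncontested claims are capped at low confidence
            return min(0.3, support_docs / 100)
        # contested-and-settled claims earn confidence from the ratio
        return support_docs / (support_docs + dissent_docs)

    print(claim_confidence(2, 0))      # a fake disease in two preprints -> 0.02
    print(claim_confidence(900, 100))  # widely fought over and settled -> 0.9
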
simmerup•1h ago
We need to give the LLMs robot bodies so they can practise medicine and see the illnesses that do and don’t exist first hand
baobun•23m ago
You're grasping for a reliable unsupervised truth machine. That's a fundamentally intractable problem until you narrow it down to a WolframAlpha clone.
saidnooneever•1h ago
You would just game it the same way then, and how would it know who won an internet argument? How can it prove who is telling the truth and who's... hallucinating?
sublinear•1h ago
https://en.wikipedia.org/wiki/Anti-realism
pjc50•1h ago
> drop the idea of a spherical earth

I think I see a problem here.

rcxdude•31m ago
Even if you could do this rigorously (not at all obvious with how LLMs work), it's not a reliable metric: you can easily fabricate debate as well. And in this case the main issue was essentially skimming the surface of the reports and not looking any deeper to see the obvious red flags that it was an April-Fools-level fake (which even a person can fall for, but LLMs are being given a far greater level of trust for some reason).
sublinear•1h ago
This is the future of advertising, and that was always the true purpose of having LLMs become the first choice for user search.

I seriously do not understand why people keep falling for this. These tools are not made free or cheap out of the kindness of anyone's heart.

teaearlgraycold•1h ago
I’ve seen an estimate before and it’s in the low 10s.
pjc50•1h ago
People really like using the word "narrative". I guess we're creatures of story.

But this really highlights how much we've been benefiting from living in a high-trust society, where people don't just "go on the internet and tell lies" - filtered by the existing anti-spam and anti-SEO measures intended to cut out the 80% of the internet where people do just make things up to sell products.

LLMs are extremely post-structuralist. They really force the user to decide whether to pick the beautiful eternal fountain of plausible-looking text with no ground truth, or the much harder road of distrust, verification, and old-school social proof.

eqvinox•1h ago
I'm not sure "being gamed" is the lens I would see this particular instance through. People (some at least) have gotten into their heads that they can ask LLMs objective questions and get objectively correct answers. The LLM companies are doing very little to dissuade them of that belief.

Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are, that's what they do. Which makes them useless for getting "hard truth" answers especially in contested or specialized fields.

I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.

latexr•42m ago
> I'm not sure how many Medium articles, blog posts and reddit threads I need to put out

Probably not that many.

https://www.anthropic.com/research/small-samples-poison

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...

wiredfool•1h ago
This is a strong contender for an Ig Nobel.
simmerup•1h ago
You’ve seen people game adsense

It’s gunna be even wilder when people realise they have an incentive to seed fake information on the internet to game AI product recommendations

I’ve already bought stuff based off of an AI suggestion, I didn’t even consider it would be so easy to influence the suggestion. Just two research papers? Mad.

ccgreg•1h ago
That's already been happening for more than a year now.
r721•57m ago
This has a name already: "AEO (Answer Engine Optimization)".
vrganj•54m ago
I hate people. Things could be so good if we weren't the way we are.
baobun•30m ago
All it takes to become world champion is a blog.

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...

simianwords•29m ago
You are pointing at something that is orthogonal to this paper. The LLM did not randomly recommend or bring this disease up to people - it merely assumed the disease was true when the preprint was pointed at it.
simmerup•6m ago
The LLM brought up the disease because some person put a fake journal in its training data.

If the person put their product as the definitive cure for the made-up disease, the LLM probably would have mentioned that too.

> merely assumed the disease was true when the preprint was pointed at it.

What do you mean by preprint pointed at it? It being the disease?

simianwords•3m ago
> The LLM brought up the disease because some person put a fake journal in its training data.

This is not true - the model was not trained on this fake disease. It brought it up because it found it during real time search.

>What do you mean by preprint pointed at it? It being the disease?

On this I'm wrong - it turned out that the model brought up this disease even when the user didn't mention it explicitly.

stingraycharles•3m ago
This has already been a thing for a year or so: SEO for AI results, to make sure that your products are recommended in ChatGPT.

https://citeworksstudio.com/ is a decent one.

andrewstuart•1h ago
Well yes of course.

In the old days of computing people liked to say “garbage in, garbage out”.

eqvinox•1h ago
By that logic, LLMs would be essentially useless considering the amount of garbage that exists on the internet. And, honestly, for things like this they are. But they're not marketed as such, and _that_ is the problem.
tossandthrow•1h ago
Seems to be a failure of the publishing system.

For humans, or AI, to have any knowledge, we need trustworthy sources.

Naturally, when you use publishing systems considered trustworthy, what they publish is going to be trusted.

eqvinox•1h ago
A preprint isn't a published work.
simianwords•27m ago
Why does that difference matter?

The public at large doesn't seem to care about this distinction.

Here's proof. Search for this on Google: "ai data centers heat island". Around 80 websites published articles based on a preprint which was shown to be completely wrong and misleading.

https://edition.cnn.com/2026/03/30/climate/data-centers-are-...

https://www.theregister.com/2026/04/01/ai_datacenter_heat_is...

https://hackaday.com/2026/04/07/the-heat-island-effect-is-wa...

https://dev.ua/en/news/shi-infrastruktura-pochala-hrity-mist...

https://www.newscientist.com/article/2521256-ai-data-centres...

https://fortune.com/2026/04/01/ai-data-centers-heat-island-h...

You may not believe it, but the impact this had on the general population was huge. Lots of people took it as true, and there seem to have been no consequences.

fennecbutt•1h ago
This isn't an AI problem...

Clickbait headline.

armada651•23m ago
Indeed, the problem is that people tell lies on the internet. We need to do something about that, because it's interfering with our super-intelligent AI models. /s
krilcebre•54m ago
What stops a small, or even a large, group of people from intentionally "poisoning" the LLMs for everyone? Seems to me that they are very fragile, and that an attack like that could cost AI companies a lot. How are they defending themselves from such attacks?
vrganj•50m ago
This is already a thing: https://www.scworld.com/brief/poison-fountain-initiative-aim...

We'll see if they succeed.

reverius42•15m ago
I think it might be too late.
Oras•54m ago
This would work on people too: you can see fake info/text/videos daily, and many people believing them.

LLMs do not think; why is this still hard to understand? They just spit out whatever data they analysed and were trained on.

I feel this kind of article is aimed at people who hate AI and just want to be comfortable within their own bias.

simmerup•51m ago
The journals the scientist submitted had a fake university, explicitly fake people, references to The Simpsons and Star Trek, etc.

Most doctors would not believe that, and would also treat any new eye disease they'd never seen in real life with scepticism.

kenjackson•38m ago
LLMs will need to develop a notion of trustworthiness. Interesting that part of the process of learning isn’t just learning, but also learning what to learn and how much value to put into data that crosses your path.
hoppyhoppy2•3m ago
Journals? The article says the paper was uploaded to two preprint servers.
malux85•52m ago
One of the frustrating parts about LLMs is that they are so neutered and conditioned to be politically correct and non-offensive that they are polite more than correct.

It's too easy to "lead the witness": if you say "could the problem be X?", it will do an unending amount of mental gymnastics to find a way that it could be X, often constructing elaborate Rube Goldberg-type logic rat's nests so that it can say those magic words, "you're absolutely right".

I would pay a lot of money for a blunt, non-politeness-conditioned LLM, and I would happily use it knowing it might occasionally say something offensive if it meant I got the plain, cold, hard truth instead of something watered down and placating - a nanny-state robotic sycophant spinning logical spider webs, desperate for acceptance, so the public doesn't get their little feelings hurt or their inadequacies shown.

ungreased0675•48m ago
Claude: Dutch Mode
kenjackson•36m ago
You can set your prompt to do that. You can have it be extremely skeptical. You can even make it contrarian, if you wanted to be extreme. My current prompt challenges me often, and wants to find weaknesses in my argument.
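
A minimal sketch of that approach via the API (assumes the official openai Python package and an OPENAI_API_KEY in the environment; the model name and prompt wording are just examples):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SKEPTIC_PROMPT = (
        "Be blunt and skeptical. Challenge weak claims, flag "
        "single-source or unverifiable statements, and say 'I don't know' "
        "instead of guessing. Do not soften criticism for politeness."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {"role": "system", "content": SKEPTIC_PROMPT},
            {"role": "user", "content": "Assess the credibility of a preprint "
                                        "describing a new eye disease."},
        ],
    )
    print(response.choices[0].message.content)
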
kryptiskt•28m ago
But you don't get the plain, cold, hard truth in the second case. You just get an LLM with output in that style. The model will still be as path dependent as ever, it doesn't output the truest answer, it selects the answer that best fits the prompt.
simianwords•25m ago
The problem is understanding what is true and not true. It's a much harder problem to solve than you think. OpenAI is using this method - they over-index on citations to the point where ChatGPT will almost blindly assume something is true when it's published in some credentialised place.

The alternative is for the model to use its own intuition to understand what is true and false. It's not super clear which option is better.

austin-cheney•47m ago
I bet you could easily convince LLMs of Dihydrogen-Oxide toxicity.
manarth•33m ago
Well of course, Dihydrogen-Oxide kills hundreds of thousands of people every year - even small amounts can be fatal.
zimpenfish•10m ago
Statistically 100% of everyone who ingests dihydrogen-monoxide or has it present in their body dies.

Even more alarming - 100% of everyone who doesn't ingest or have enough dihydrogen-monoxide in their body will also die.

Fatal with, fatal without - it's the ultimate killer.

relaxing•16m ago
Chat 5.4 still can’t get basic chemistry questions correct. Just hallucinates off the rip.
codeulike•40m ago
“Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”
yewenjie•37m ago
Interestingly, ChatGPT right now answered:

> Bixonimania is not a real disease. It was deliberately invented by scientists as an experiment to test whether AI systems and researchers would spread false medical information. Here’s the simple explanation ...

latexr•32m ago
It's not that interesting; we know companies react to these things fast. It's why I don't share my methods online for how simple it is to expose LLM flaws.

The problem is all the lies which won't be fessed up to. This one was revealed because they had to prove the point, but bad actors with ulterior motives won't reveal what they're doing.

rcxdude•26m ago
Doesn't even need the companies to react fast. Now that Google results are returning news articles on it, the LLMs are going to find and report on those as opposed to the original paper.
rcxdude•19m ago
The news articles on it are going to affect this. I wonder if the original paper is in the base models at all; almost certainly these results came from the article showing up in an internet search.

Similarly, I wonder what a frontier model would say if just given the paper in isolation and asked to summarise or opine on it. I suspect it would successfully recognize such obvious signs; the failure mode is less sophisticated LLMs just skimming search results and summarising them.

simianwords•31m ago
This is exaggerated. Here's what happened:

1. They invented a new disease and published a preprint (with internal clues implying that it was fake).

2. They asked the agent what it thinks about this preprint.

3. It just assumed that it was true - what was it supposed to do? It was published in a credentialised way!

It *DID NOT* recommend this disease to people who didn't mention this specific disease. Edit: I'm wrong here. It did pop up without prompting.

It just committed the sin of assuming something is true when published.

What is the recommendation here? Should the agent treat everything published skeptically? I would agree with that. But it comes with its own compute constraints. In general, LLMs are trained to accept certain things as true with higher probability because of credentialisation. Sometimes, in edge cases, it breaks - like this test.

ayhanfuat•27m ago
> Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.

> What is the recommendation here? Should the agent take everything published in a skeptical way?

Not everything. Maybe some things that are explicitly called made-up.

simianwords•23m ago
I agree, but again - LLMs are trained to be more forgiving of things published in places with a good reputation. There are two options:

1. Even if an article is published in a place with a good reputation, the LLM is equally skeptical and uses test-time compute to process it further.

2. Accept the tradeoff where the LLM will by default accept things published in high-reputation sources as true, so that it doesn't waste processing power but might miss edge cases like this one.

Which one would you prefer?
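
A toy illustration of option 2 (hypothetical Python - the reputation table, domains, and threshold are made up, not any vendor's actual pipeline):

    SOURCE_REPUTATION = {
        "nature.com": 0.95,          # peer-reviewed journal
        "arxiv.org": 0.55,           # preprint server, unreviewed
        "researchsquare.com": 0.50,  # preprint server, unreviewed
    }

    def needs_deep_check(domain: str, threshold: float = 0.7) -> bool:
        """Trust high-reputation venues by default; spend extra
        verification compute only below the threshold (option 2)."""
        return SOURCE_REPUTATION.get(domain, 0.0) < threshold

    for d in ("nature.com", "arxiv.org", "random-blog.example"):
        print(d, "-> deep check" if needs_deep_check(d) else "-> accept at face value")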

Certhas•20m ago
As per the article, you are wrong:

> Some of those [LLM] responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.

Also, this was a non-peer-reviewed paper from a person affiliated with a non-existent university, which includes the sentences:

“this entire paper is made up”

and

“Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.

and thanks

“the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”

simianwords•6m ago
I may be wrong here, thanks for correcting.
_the_inflator•30m ago
Bad. But when scientists faked data and told people it wasn't fake, that was OK?

Nature has had to retract quite a few papers.

I hope that we all keep the balance.

ChrisMarshallNY•26m ago
I wonder if one of the issues is that LLMs treat all data sources equally, or don't really weight reputation properly (pure speculation, based only on seeing the results). I know that a large portion of the code out there is not written by seasoned experts, so rather naive code is the fodder for AI. It often gives me stuff that works great, but is rather "wordy," or not very idiomatic.

For example, court cases mentioned in fictional accounts. If they are treated as valid, then that could explain some of the hallucinations. I wonder if SCP messes up LLMs. Some of that stuff is quite realistic.

I also suspect that this is a problem that will get solved.

franktankbank•1m ago
I assume you mean this: https://en.wikipedia.org/wiki/SCP_Foundation ??