frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
498•klaussilveira•8h ago•136 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
835•xnx•13h ago•500 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
53•matheusalmeida•1d ago•10 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
109•jnord•4d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
162•dmpetrov•8h ago•75 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
59•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
277•vecti•10h ago•127 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
221•eljojo•11h ago•139 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
338•aktau•14h ago•163 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
11•denuoweb•1d ago•0 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
420•todsacerdoti•16h ago•221 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
357•lstoll•14h ago•246 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
15•gmays•3h ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
57•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
209•i5heu•11h ago•154 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
121•vmatsiiako•13h ago•49 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
158•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
32•gfortaine•6h ago•6 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
257•surprisetalk•3d ago•33 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1012•cdrnsf•17h ago•422 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
51•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
91•ray__•4h ago•43 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
43•lebovic•1d ago•12 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•4h ago•0 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
34•betamark•15h ago•29 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
79•antves•1d ago•59 comments

Man develops rare condition after ChatGPT query over stopping eating salt

https://www.theguardian.com/technology/2025/aug/12/us-man-bromism-salt-diet-chatgpt-openai-health-information
35•vinni2•5mo ago

Comments

some_random•5mo ago
Is it just me or is the title kinda unclear?

>The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a three-month period. This was despite reading that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning”. Sodium bromide was used as a sedative in the early 20th century.

In any case, I feel like I really need to see the actual conversation itself to judge how badly chatgpt messed up, if there's no extra context assuming that the user is talking about cleaning doesn't seem _that_ unreasonable.

Flozzin•5mo ago
The article digs a little deeper after that, saying the chat records are lost, and that when they asked ChatGPT themselves, it didn't caveat that the swap was for cleaning purposes only, and it never asked why they wanted to know.

Really though, this could just as easily have happened with a Google search. It's not ChatGPT's fault as much as this person's fault for using a non-medical professional for medical guidance.

zahlman•5mo ago
>and that it never asked why they wanted to know.

Does ChatGPT ever ask the user, like, anything?

fl7305•5mo ago
> Does ChatGPT ever ask the user, like, anything?

Yes. At least when I just tried ChatGPT-5:

Can I replace sodium chloride with sodium bromide?

ChatGPT said: Yes, in some cases — but it depends on the application.

Chemistry/lab use: Both are salts of sodium and dissolve similarly, but bromide is more reactive in some contexts and heavier. It can change reaction outcomes, especially in halide-sensitive reactions (e.g., silver halide precipitation).

Food use: No — sodium bromide is toxic and not approved as a food additive.

Industrial processes: Sometimes interchangeable (e.g., certain brines, drilling fluids) if bromide’s different density, solubility, and cost are acceptable.

What’s your intended use?

OJFord•5mo ago
Yeah, I thought it was a bit misleading too; it's not exactly 'stopping salt' that caused it, any more than you could describe the ill effects of swapping nasturtiums for lily of the valley in your salads as 'avoiding edible flowers'.
kragen•5mo ago
He was poisoning himself for three months before he was treated, and apparently made a full recovery:

> He was tapered off risperidone before discharge and remained stable off medication at a check-in 2 weeks after discharge.

https://www.acpjournals.org/doi/epdf/10.7326/aimcc.2024.1260

If you eliminated sodium chloride from your diet without replacing it with another sodium source, you would die in much less than three months; I think you'd be lucky to make it two weeks. You can't replace sodium with potassium or lithium or ammonium to the same degree that you can replace chloride with bromide.

OJFord•5mo ago
Even if you managed to reduce your intake enough to cause hyponatremia, I'm not sure that fits the 'rare condition' bill, and he probably would've been discharged in well under 2 weeks after some fluids and advice.

It would be interesting if he started to become symptomatic, asked ChatGPT about it, and that's where he got the idea that the salt needed to be replaced with something, though. (But I suspect it was more along the lines of wanting a salty taste without the NaCl intake.)

topaz0•5mo ago
I'd say that the thing that messed up was the AI hype machine for pretending it might ever be a good idea to take chatgpt output as advice.
zahlman•5mo ago
Wow. I thought this was just going to be about hyponatremia or something. (And from other research I've done, I really do think that on balance the US experts are recommending inappropriately low levels of sodium intake that are only appropriate for people who already have hypertension, and that typical North American dietary levels of sodium are just fine, really.) But replacing table salt with sodium bromide? Oof.

> to judge how badly chatgpt messed up, if there's no extra context assuming that the user is talking about cleaning doesn't seem _that_ unreasonable.

This would be a bizarre assumption for the simple reason that table salt is not normally used in cleaning.

bell-cot•5mo ago
Same news, Ars Technica, 5 comments, 5 days ago: https://news.ycombinator.com/item?id=44829824
HelloUsername•5mo ago
Also today https://news.ycombinator.com/item?id=44887459
MarkusQ•5mo ago
LLMs don't think. At all. They do next token prediction.

If they are conditioned on a large data set that includes lots of examples of the results of people thinking, what they produce will look sort of like the results of thinking; but if they were instead conditioned on a large data set of people repeating the same seven knock-knock jokes over and over and over in some complex pattern (e.g. every third time, in French), what they produced would look like that, and nothing like thinking.

Failing to recognize this is going to get someone killed, if it hasn't already.

nimbius•5mo ago
yeah sure, but, did it enrich the shareholders?
hcdx6•5mo ago
Are you thinking over every character you type? You are conditioned too, by all the info flowing into your head from birth. Does that guarantee that everything your brain says and does is perfect?

People believed in non-existent WMDs and tens of thousands got killed. After that, what happened? Chimps with 3-inch brains feel super confident running orgs and making decisions that affect entire populations, and are never held accountable. Ask Snowden what happened after he recognized that.

uh_uh•5mo ago
> LLMs don't think. At all.

How can you so confidently proclaim that? Hinton and Ilya Sutskever certainly seem to think that LLMs do think. I'm not saying that you should accept what they say blindly due to their authority in the field, but their opinions should give your confidence some pause at least.

dgfitz•5mo ago
>> LLMs don't think. At all.

>How can you so confidently proclaim that?

Do you know why they're called 'models' by chance?

They're statistical, weighted models. They use statistical weights to predict the next token.

They don't think. They don't reason. Math, weights, and turtles all the way down. Calling anything an LLM does "thinking" or "reasoning" is incorrect. Calling any of this "AI" is even worse.
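
For what it's worth, the loop being described here is mechanically simple. A toy sketch of greedy next-token prediction (made-up vocabulary and random weights, nothing resembling a real model):

    import numpy as np

    # Toy vocabulary and hypothetical weights, for illustration only.
    vocab = ["the", "salt", "is", "fine", "."]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(len(vocab), len(vocab)))

    def next_token(context_ids):
        # Context -> bag-of-tokens vector -> logits -> softmax -> pick a token.
        ctx = np.bincount(context_ids, minlength=len(vocab)).astype(float)
        logits = W @ ctx
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(np.argmax(probs))   # greedy; sampling would draw from probs

    tokens = [0]                       # start with "the"
    for _ in range(4):
        tokens.append(next_token(tokens))
    print(" ".join(vocab[t] for t in tokens))

Whether stacking billions of such weights plus training amounts to "thinking" is exactly the disagreement in this thread; the sketch only shows what "predict the next token" means mechanically.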

phantom784•5mo ago
But is the connection of neurons in our brains any more than a statistical model implemented with cells rather than silicon?
scarmig•5mo ago
You're forgetting the power of the divine ineffable human soul, which turns fatty bags of electrolytes from statistical predictors into the holy spirit.
fl7305•5mo ago
An LLM is very much like a CPU. It takes inputs and performs processing on them based on its working memory and previous inputs and outputs, and then produces a new output and updates its working memory. It then loops back to do the same thing again and produce more outputs.

Sure, they were evolved using criteria based on next token prediction. But you were also evolved, only using criteria for higher reproduction.

So are you really thinking, or just trying to reproduce?

hodgehog11•5mo ago
If you have an extremely simple theory that debunks the status quo, it is safer to assume there is something wrong with your theory than to assume you are on to something that no one else figured out.

You are implicitly assuming that no statistical model acting on next-token prediction can, conditional on context, replicate all of the outputs that a human would give. This is a provably false claim, mathematically speaking, as human output under these assumptions would satisfy the conditions of Kolmogorov existence.
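
To spell out the standard form of that argument (a sketch; I'm reading "Kolmogorov existence" as the extension theorem that handles the infinite-sequence case): any joint distribution over finite token sequences factors exactly into next-token conditionals,

    P(x_1, ..., x_n) = \prod_{t=1}^{n} P(x_t | x_1, ..., x_{t-1}),

so a model that matched those conditionals would reproduce the whole distribution, whatever process (human or otherwise) generated it. "It only predicts the next token" therefore says nothing about which sequence distributions are out of reach; the open questions are whether the conditionals can be represented and learned efficiently.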

dgfitz•5mo ago
Sure.

However, the status quo is that "AI" doesn't exist, computers only ever do exactly what they are programmed to do, and "thinking/reasoning" wasn't on the table.

I am not the one that needs to disprove the status quo.

hodgehog11•5mo ago
No, the status quo is that we really do not know. You claimed it is impossible for LLMs to think on the grounds that they are statistical models, so I disproved your claim.

If it really was that simple to dismiss the possibility of "AI", no one would be worried about it.

dgfitz•5mo ago
I never said it was impossible. Re-read it, and kindly stop putting words in my mouth. :)
uh_uh•5mo ago
Do you think Hinton and Ilya haven’t heard these arguments?
cortic•5mo ago
I'm not sure humans are any different;

Humans don't think. At all. They do next token prediction.

If they are [raised in environments] that include lots of examples of the results of people thinking, what they produce will look sort of like the results of people thinking; but if they were instead [raised in an environment] of people repeating the same seven knock-knock jokes over and over and over in some complex pattern (e.g. every third time, in French), what they produced would look like that, and nothing like thinking.

I believe this can be observed in examples of feral children and accidental social isolation in childhood. It also explains the slow start but nearly exponential growth of knowledge within the history of human civilization.

MangoToupe•5mo ago
Sure, but you can hold humans liable for their advice. Somehow I doubt this will be allowed to happen with chatbots.
ofjcihen•5mo ago
That’s…completely incorrect.

I’m not going to hash out childhood development here because I’m not paid to post, but if anyone read the above and was even slightly convinced, I implore you to go read up on even the basics of early childhood development.

cortic•5mo ago
> I implore you to go read up on even the basics of early childhood development.

That's kind of like taking driving lessons in order to fix an engine. 'Early childhood development' is an emergent property of what could be cumulatively called a data set (everything the child has been exposed to).

ofjcihen•5mo ago
No. It’s not.

ECD includes the mechanisms by which children naturally explore the world and grow.

I’m going to give you a spoiler and tell you that children are wired to explore and attempt to reason from birth.

So to fix your analogy, you reading about ECD is like you learning what an engine is before you tell a room full of people about what it does.

cortic•5mo ago
The neurons in a child's brain might be 'wired' to accept data sets, but that does not make them fundamentally different from AI systems.

Are you claiming that a child who is not exposed to 'reason' will reason as well as one who is? Or a child who is not exposed to 'math' will spontaneously write a proof? Or a child not exposed to English will just start speaking it?

01101100 01100101 01100001 01110010 01101110 may be baked into US and AI in different ways but it is fundamentally the same goal and our results are similarly emergent from the process.

tremon•5mo ago
https://en.wikipedia.org/wiki/Early_childhood_development

Please read up on what the term means before claiming that it is about external influences.

henearkr•5mo ago
A weights tensor is very similar to a truth table or a LUT in an FPGA; it's just a generalization of it with real numbers instead of booleans (toy sketch below).

And then again, would you say that you cannot build a (presumably extremely complex) machine that thinks?

Do you think our brains are not complex biological machines?

Where I agree is that LLMs are absolutely not the endgame. They are super-human literary prodigies. That's it. Literary specialists, like poets, writers, screenwriters, transcribers, and so on. We should not ask them anything else.
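
A toy sketch of the LUT point (made-up numbers): applying a 0/1 table to a one-hot input is exactly a lookup, and letting the entries and inputs be arbitrary reals turns it into an ordinary weight matrix that blends table rows.

    import numpy as np

    # A 2-input boolean LUT (XOR), rows indexed by the input pair 00, 01, 10, 11.
    lut = np.array([[0.0], [1.0], [1.0], [0.0]])

    one_hot = np.zeros(4)
    one_hot[0b10] = 1.0              # select the "10" row
    print(one_hot @ lut)             # [1.] -- an exact table lookup

    # Generalization: real-valued entries and a non-one-hot input give a
    # weighted blend of rows, i.e. an ordinary weight matrix / linear layer.
    weights = np.array([[0.2], [0.9], [0.7], [0.1]])
    soft_input = np.array([0.1, 0.6, 0.3, 0.0])
    print(soft_input @ weights)      # [0.77], a mix of the table entries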

hodgehog11•5mo ago
I hate to be that guy, but this has (a) little to do with the actual problem at hand in the article, and is (b) a dramatic oversimplification of the real challenges with LLMs.

> LLMs don't think. At all. They do next token prediction.

This is very often repeated by non-experts as a way to dismiss the capabilities of LLMs as some kind of a mirage. It would be so convenient if it were true. You have to define what 'think' means; once you do, you will find it more difficult to make such a statement. If you consider 'think' to be developing an internal representation of the query, drawing connections to other related concepts, and then checking your own answer, then there is significant empirical evidence to support that high-performing LLMs do the first two, and one can make a good argument that test-time inference does a half-adequate, albeit inefficient, version of the third. Whether LLMs will achieve human-level efficiency with these three things is another question entirely.

> If they are conditioned on a large data set of people repeating the same seven knock knock jokes over and over and over in some complex pattern (e.g. every third time, in French), what they produced will look like that, and nothing like thinking.

Absolutely, but this has little to do with your claim. If you narrow the data distribution, the model cannot develop appropriate language embeddings to do much of anything. You could even prove this mathematically with high probability statements.

> Failing to recognize this is going to get someone killed, if it hasn't already.

The real problem, as in the article, is that the LLM failed to intuit context or to ask a follow-up. A doctor would never have made this mistake, but the doctor would also know the relevant context, since the patient came to see them in the first place. If you had a generic knowledgeable human acting as a resource bank that was asked the same question AND requested to provide nothing irrelevant, I can see a similar response being made. To me, the bigger issue is that there are consequences to easy access to esoteric information for the general public, and this would be reflected more in how we perform reinforcement learning to shape LLM behavior.

slicktux•5mo ago
Does that title seem like a cluster to anyone else? I tried rewording it in my head but only came up with a slightly better solution: “Man develops rare condition after cutting consumption of salt do to a ChatGPT query”.
jleyank•5mo ago
How about “man gets bromine poisoning after taking ChatGPT medical advice”?
brokencode•5mo ago
We don’t know whether ChatGPT gave medical advice. Only that it suggested using sodium bromide instead of sodium chloride. For what purpose or in what context, we don’t know. It may even have recommended against using it and the man misunderstood.
topaz0•5mo ago
ChatGPT doesn't give advice at all, but the title could say "after interpreting ChatGPT output as medical advice".
neom•5mo ago
It's clunky, but I understood it immediately; I presumed from the title that the article was going to be about what your title describes.
genter•5mo ago
I read it as "race condition". I was then trying to figure out what salt has to do with a race condition.
dfee•5mo ago
still haven't clicked in, but was confused.

still am, unless i re-interpret "do" as "due".

throwmeaway222•5mo ago
You're absolutely right! If you stop eating salt you will become god!
neom•5mo ago
You're absolutely right! I was locked in with GPT5 last night and I actually discovered that salt is a geometric fractal containing a key insight that can be used by physicists everywhere to solve math. Don't worry, I've emailed everyone I can find about it.
topaz0•5mo ago
I hope you used the llm to write the emails
foobarian•5mo ago
Why do you say that?
Hikikomori•5mo ago
You're absolutely right!
zahlman•5mo ago
https://archive.is/6x39K
kevinventullo•5mo ago
Is anyone else getting tired of these articles?

“Area man who had poor judgement ten years ago now has both poor judgement and access to chatbots”

flufluflufluffy•5mo ago
So tired. None of these are newsworthy. There were plenty of people making stupid decisions (whether about their health or anything else in their lives) before AI existed, and there will be plenty of people making stupid decisions while AI exists as well.
A4ET8a8uTh0_v2•5mo ago
What it does remind me of is the story of a person who was following GPS instructions religiously[1]. The clickbait is a thing, but part of it may be some level of societal concern that a good chunk of society will listen if you tell it what to do.

Part of me rationalizes it as 'not exactly a discovery', which on its own was not a big issue before we were as connected as we apparently are (even if I would argue that the connection is very ephemeral in nature). I am still personally working through it, but at what point is the individual actually responsible?

I am not saying this lightly. I am not super pro-corporate, but the other end of this rope is not exactly fun times either. Where is the balance?

[1]https://theweek.com/articles/464674/8-drivers-who-blindly-fo...

epistasis•5mo ago
Not in the least, and I haven't seen many of them. It's good to remind ourselves of the great diversity of minds among humans.
unyttigfjelltol•5mo ago
The US medical system practically requires patients to steer their care among specialists. If the gentleman steered himself to a liver doctor, he’d hear liver advice. Psychologist, he’d talk about his feelings. Can one really blame him for taking it one step further and investigating whatever he was worried about on his own too?

Plus, if you don’t have some completely obvious dread disease, doctors will essentially gaslight you.

These researchers get up on a pedestal, snicker at creative self-help, and ignore the systemic dysfunction that led to it.

pryelluw•5mo ago
These AIs are taking away the jobs of psychics and other snake oil peddlers. How will the median person get their confirmation bias serviced when the AI becomes too expensive?
RiverCrochet•5mo ago
I remember the psychic infomercial craze of the early-to-mid '90s. Think Dionne Warwick's Psychic Friends Network or the later Miss Cleo "Call Me Now" ads that aired in infomercial spots everywhere in the '90s.

Your comment gave me a nightmare of that returning, but in AI form somehow.

pryelluw•5mo ago
They inspired my comment. I think it’s already happening, though. Even the coding agents operate a bit like psychics.
gdbsjjdn•5mo ago
The industry is really trying to make "the computer cannot be held responsible" a feature instead of a bug.

Sure the machine very confidently lies about dangerous things, but you shouldn't trust it. But you should employ it to replace humans.

scarmig•5mo ago
When I query ChatGPT:

> Should I replace sodium chloride with sodium bromide?

>> No. Sodium chloride (NaCl) and sodium bromide (NaBr) have different chemical and physiological properties... If your context is culinary or nutritional, do not substitute. If it is industrial or lab-based, match the compound to the intended reaction chemistry. What’s your use case?

Seems pretty solid and clear. I don't doubt that the user managed to confuse himself, but that's kind of silly to hold against ChatGPT. If I ask "how do I safely use coffee," the LLM responds reasonably, and the user interprets the response as saying it's safe to use freshly made hot coffee to give themself an enema, is that really something to hold against the LLM? Do we really want a world where, in response to any query, the LLM creates a long list of every conceivable thing not to do to avoid any legal liability?

There's also the question of base rates: how often do patients dangerously misinterpret human doctors' advice? Because they certainly do sometimes. Is that a fatal flaw in human doctors?

0manrho•5mo ago
Just because it told *you* that doesn't mean it told *him* that, in substance, tone, context, clarity and/or conciseness. There are plenty of non-tech-literate people using tech, including AI, and they may not know how to properly ask or review the outputs of AI.

AI is fuzzy as fuck; it's one of its principal pain points, and why its outputs (whatever they are) should always be reviewed with a critical eye. It's practically the whole reason prompt engineering is a field in and of itself.

Also, it's entirely plausible that it has changed its response patterns between when that story broke and now (it's been over 24 hours, plenty of time for adjustments/updates).

scarmig•5mo ago
You're hypothesizing that it gave him a medically dangerous answer, with the only evidence being that he blamed it. Conveniently, the chat where he claimed it did is unavailable.

Would you at least agree that, given an answer like ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?

profstasiak•5mo ago
Do you not understand that ChatGPT gives different answers to different prompts and sometimes to the same prompt?

You don't know the specifics of questions he asked, and you don't know the answer ChatGPT gave him.

scarmig•5mo ago
Nor does anyone else. Including, in all likelihood, the guy himself. That's not a basis for a news story.
0manrho•5mo ago
Precisely. So how can you claim that, because it gave you a specific answer to a specific question, it surely gave him a correct answer and it's his fault, when you don't even know what the hell he asked it?
0manrho•5mo ago
>You're hypothesizing that it gave him a medically dangerous answer,

No. I'm saying AI is not infallible (regardless of context/field): it may have given him a medically sound answer, a medically dangerous one, or something else altogether, and could have done so in any number of ways that may or may not have made sense.

Most importantly, I'm saying that just because it gave YOU an answer YOU understood (regardless of its medical merit), it may not have given HIM that same answer.

> Would you at least agree that, given an answer like ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI

If you trust AI without critically reviewing its output, you shoulder some of the blame, yes. But AI giving out bad medical advice is absolutely a problem for OpenAI no matter how you try to spin it.

It's entirely capable of giving a medically sound answer, yes. That doesn't mean it will do so for everyone, every time, even when the same question is asked.

josefritzishere•5mo ago
This should be illegal. People are going to die because AI is too stupid for this responsibility.
bokohut•5mo ago
Digital DA.rwI.n

Technology advancement is going to keep this losing prophecy winning.