frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
216•theblazehen•2d ago•64 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
688•klaussilveira•15h ago•204 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
960•xnx•20h ago•553 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
127•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
65•videotopia•4d ago•5 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
50•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
32•kaonwarb•3d ago•27 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
230•dmpetrov•15h ago•121 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
9•matt_d•3d ago•2 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
500•todsacerdoti•23h ago•244 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
28•speckx•3d ago•17 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
384•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
298•eljojo•18h ago•187 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•183 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
421•lstoll•21h ago•281 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
67•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
95•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
263•i5heu•18h ago•215 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
38•gmays•10h ago•13 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
295•surprisetalk•3d ago•46 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
61•gfortaine•13h ago•27 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
153•vmatsiiako•20h ago•72 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
161•SerCe•11h ago•149 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
14•1vuio0pswjnm7•1h ago•3 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
74•phreda4•14h ago•14 comments

Tell me again about neurons now

https://www.science.org/content/blog-post/tell-me-again-about-neurons-now
57•strangattractor•6mo ago

Comments

strangattractor•6mo ago
Derek has a little thought experiment at the end.
barisozmen•6mo ago
Answer to his thought experiment: Yes, I believe a sufficiently advanced AI could have told us that. Scientists who have been fed wrong information can come up with completely new ideas, making what we know less wrong.

That being said, I don't think current token-predictors can do that.

tptacek•6mo ago
My read of this was that AI is fundamentally limited by the lack of access to the new empirical data that drove this discovery; that it couldn't have been inferred from the existing corpus of knowledge.
DougBTX•6mo ago
Recent LLMs have larger context windows to process more data and tool use to get new data, so it would be surprising if there’s a fundamental limitation here.
readthenotes1•6mo ago
Maybe an AI will be smart enough to realize that there's more than one explanation for a low level of triglycerides in neurons.

The RICE myth and the lactic acid myth will surely be part of the training material, so the AI will realize that there are a fair number of unjustified conclusions in the bioworld

ethan_smith•6mo ago
The RICE protocol (Rest, Ice, Compression, Elevation) for injuries has been largely debunked - inflammation is now understood as a necessary healing process. Similarly, lactic acid was wrongly blamed for muscle soreness when it's actually a fuel source during exercise, paralleling how we misunderstood neuronal fatty acid metabolism.
zahlman•6mo ago
Is inflammation not still considered to be harmful in the long term? (Is that not why we're still expected to care about omega-6 vs omega-3 dietary fatty acids?) What is the new explanation for muscle soreness?
greensoap•6mo ago
There is a difference between localized inflammation, which brings the agents of healing to an injury, and systemic inflammation
Sniffnoy•6mo ago
I think this could use a more informative title? The title this was posted with is actually less informative than the original title.
ants_everywhere•6mo ago
> So let’s ask ourselves: would AI have told us this?

Why ask ourselves, when we can ask the AI? Here's the start of my conversation with Gemini:

> Me: What is known about fatty acid combustion in the brain?

> Gemini: The Brain's Surprising Use of Fat for Fuel. For a long time, the brain was thought to rely almost exclusively on glucose for its immense energy needs. However, a growing body of research reveals that fatty acid combustion, or beta-oxidation, plays a more significant and complex role in brain energy metabolism and overall neurological health than previously understood. While not the brain's primary fuel source under normal conditions, the breakdown of fatty acids is crucial for various functions, particularly within specialized brain cells and under specific physiological states....

It cites a variety of articles going back at least to the 1990s.

So

> would AI have told us this?

Yes, and it did.

1970-01-01•6mo ago
What facts did it hallucinate and which are true?
TeMPOraL•6mo ago
That cuts both ways: the model can be telling the truth, but because its claims look unusual, they'll get dismissed as hallucinations, and possibly even used as an anti-example in training the next model generation.
zdragnar•6mo ago
If you simply ask Gemini what the brain uses for fuel, it gives an entirely different answer that leaves fatty acids out completely and reinforces the glucose story.

LLMs tell you what you want to hear, sourced from a random sample of data, not what you need to hear, based on professional or expert opinion.

ants_everywhere•6mo ago
When I ask the same question it says primarily glucose and also mentions ketone bodies. It mentions that the brain is flexible and while it normally metabolizes glucose it may sometimes need to metabolize other things. This is both at gemini.google.com and using google.com in "AI mode" in private browsing.

gemini.google.com mentions lactate and fat. But it also knows I care about science. I'm not sure how much history is used currently.

But this is kind of silly because if you're a member of the public and ask a scientist what the brain uses as fuel they'll also say glucose. If you've ever been in a conversation with someone who felt the need to tell you *every detail* of everything they know, then you'll understand that that's not how human communication typically works. So if you want something more specific you have to start the conversation in a way that elicits it.

ben_w•6mo ago
> If you've ever been in a conversation with someone who felt the need to tell you every detail of everything they know, then you'll understand that that's not how human communication typically works.

Indeed.

I'm currently finding myself forced to do so with some customer support agents who keep forgetting critical issues at each step. I do not know if the agents are humans or AI, but either way it's not fun to keep repeating all the same details each time.

And in normal cases, I do sometimes notice I've given a wrong impression by skipping some background that I didn't realise the other party didn't already have, precisely because it's not natural to share everything.

justlikereddit•6mo ago
If you ask a neuroscience teacher the same question you're also told it's all glucose and maybe occasionally ketone bodies.
ClaraForm•6mo ago
If you ask most neuroscientists they’d say the same. Only a small subset of us would cite the literature showing that ~10-15% of the brain’s neuronal energy expenditure is unaccounted for by the glucose neurons have access to. It’s a niche within a niche. And debated by the majority.
plemer•6mo ago
Yup. An accomplished scientist friend of mine looked up a topic in which he’s an expert and was deeply unimpressed - outdated, inaccurate, incomplete, misleading info (perhaps because much relevant research is paywalled). LLMs are amazing but not all-knowing.
ben_w•6mo ago
Sounds about right.

I used to think software developer would be the final thing that gets automated, because we'd be the ones making specific new AIs for each task. But these days I think it's more likely to be spycraft that's the "final profession", because there's nothing to learn from except trial and error: all the info (that isn't leaked) is siloed even within any given intelligence organisation, so AIs in that field not only don't get (much) public info about the real skills, they don't even get classified info. What they will get instead is all the fiction written about spies, which will be about as helpful to real spies as Mr. Scott's Guide to the Enterprise is to the design and construction of the Boeing Starliner.

skybrian•6mo ago
I tried it using Gemini 2.5 Pro and it cited this Hacker News thread for its first paragraph. I can't judge the other citations, other than to say they're not made up. (I see links to PubMed Central.)
ants_everywhere•6mo ago
Well, obviously this thread wasn't available when I asked since I was one of the first commenters.

But:

(1) You're right that it's a hard experiment to do now because LLMs can search the web, on the other hand...

(2) LLMs can search the web, which cuts against the author's implicit premise that LLMs can only know what is in their training data. LLMs have access to tools and oracles. But also...

(3) I checked gemma3:12b running in Ollama. It cannot search the internet. But it also knew about fatty acid combustion in the brain and recapped the research history on it.
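
For anyone who wants to repeat this offline check, here is a minimal sketch using the Ollama Python client. It assumes the ollama package is installed and the gemma3:12b weights have already been pulled locally; the model then answers purely from its weights, with no web access.

    # Minimal sketch: query a local gemma3:12b through Ollama.
    # Assumes `pip install ollama` and `ollama pull gemma3:12b`
    # have been run beforehand. The model cannot search the web,
    # so everything it says comes from its training data.
    import ollama

    response = ollama.chat(
        model="gemma3:12b",
        messages=[{
            "role": "user",
            "content": "What is known about fatty acid combustion in the brain?",
        }],
    )
    print(response["message"]["content"])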

Now obviously I expect gemma3 to be more prone to hallucination since it's a weaker model and doesn't have any safeguards a production model would have. Also it's working entirely from memory.

But I feel comfortable concluding that the author overestimates the novelty of the study and underestimates LLMs. The study rewrites the Bayesian weights on various interpretations of fatty acid combustion results in the brain. It doesn't propose and prove a completely unheard-of hypothesis. Even the offline Gemma told me that glucose-only metabolism used to be the dogma etc etc.
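
To put the Bayesian-weights metaphor in symbols (my gloss, not the author's): a result like this shifts the posterior on a hypothesis that was already in play rather than conjuring a new one. In Bayes' rule,

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

with H the pre-existing hypothesis that neurons burn fatty acids and E the new study's evidence, the work raises P(H | E); H itself was already in the hypothesis space, and in the training data.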

But that's also true in general about science. Breakthroughs are always building on things that came before. And the LLM knows everything that came before.

TeMPOraL•6mo ago
>> So let’s ask ourselves: would AI have told us this?

My first thought: if it did, would you believe it?

> Yes and it did

And before today and this thread, if I asked something like it honestly, without already knowing the answer, and an LLM answered like this...

... I'd figure it's just making shit up.

Before AI can "pretty much solve all our outstanding scientific problems Real Soon Now", it needs to be improved some more, but there's a second, underappreciated obstacle: we will need to learn to gradually start taking it more seriously. In particular, novel hypotheses and conclusions drawn from synthesizing existing research will, by their very nature, look like hallucinations to almost everyone, including domain experts.

virtualritz•6mo ago
I think you missed the point the article makes.

The point was that LLMs are not well set up to find new insights unless those insights are already somehow contained in the knowledge they have been trained on. This can mean "contained indirectly", which still makes them useful for scientific purposes.

The fact that the author maybe underestimated the knowledge about the topic of the article already contained within an LLM does not invalidate this point.

dr_dshiv•6mo ago
Except it inadvertently showed that LLMs might be more flexible thinkers than most people. Everyone knows…
sam_goody•6mo ago
Yes, but only after this post in Science and on HN. As has been mentioned above, one of the links it offers is this very post.

So, AI will look online and synthesize the latest relevant blog posts. In Gemini's case, it will use trends to figure out what you are probably asking. And since this post caught traction, suddenly the long tail of related links is gaining traction as well.

But had Derek asked the question before writing the article, his links would not have matched. And his point, that it isn't the AI that figured out that something has changed, remains relevant.

OT, I really enjoy his posts. As AI takes over, will we even read blog posts [enough for authors like him to keep writing], or just get the AI cliff notes - until there is no one writing novel stuff?

ants_everywhere•6mo ago
> The point was that LLMs are not well set up to find new insights unless they are already somehow contained in the knowledge they have been trained on.

The author is, to use his phrase, "deeply uninformed" on this point.

LLMs generalize very well and they have successfully pushed the frontier of at least one open problem in every scientific field I've bothered to look up.

xg15•6mo ago
Well yeah, today we know the dogma was wrong and so that information is probably already in the training data.

I think what Lowe meant was that an LLM could not have come up with this "on its own", if it was only trained on papers supporting the dogma.

So it cannot produce novel insights, which would be a requirement if LLMs are to "solve science".

johnisgood•6mo ago
> So it cannot produce novel insights, which would be a requirement if LLMs are to "solve science".

How sure are we about this statement, and why? I have been hearing this a lot, and it might be true, but I would like to read some research into this area.

ants_everywhere•6mo ago
> So it cannot produce novel insights

You write as if this is your conclusion but it's really your premise.

> if it was only trained on papers supporting the dogma.

It's not; it's also trained on all the papers the authors of the current study read, the ones that made them think they should spend money researching fatty acid combustion in the brain.

> so that information is probably already in the training data.

Run an offline copy of Gemma with a training cutoff before this study came out and it will also tell you about fatty acid combustion in the brain, with studies going back to the 60s and taking off around the 2000s or 2010s.

xg15•6mo ago
I was just paraphrasing the argument from the OP, because I think just asking Gemini is not a valid refutation of it.

> So let’s ask ourselves: would AI have told us this? Remember, when people say AI they are about 95% saying “machine learning”, so would it really have told us about this after having been trained on years and years of the medical literature telling it that neurons are obligate glucose users and don’t really have a role for triglycerides? Of course not. And this is why I keep saying (and I’m sure not the only one) that we simply don’t know enough to teach the machine learning algorithms yet. Not to mention that some of what we’d be teaching them is just wrong to start with.

But yeah, if Gemini cites papers from the 90s, then either the shift in thinking on this topic happened further back than Lowe makes it seem here (and the "new findings" have been established for decades), or the model interpreted the old papers differently than the scientists did back then.

zahlman•6mo ago
I get that this is intended to be parsed "Discovering (what we think we know) is (wrong)", but it took me a while to discard the alternative "discovering (what we think (we know is wrong))".
mrbluecoat•6mo ago
> the constant possibility that something that Everybody Knows will turn out to be wrong

Reminds me of astronomy and also quantum mechanics

K0balt•6mo ago
It’s best to remember that AI is an extractive process, not a creative one. That’s why it seems to give you what you “want to hear”. The prompt directs the drill; the well spills out what you drilled into.