OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
487•klaussilveira•7h ago•130 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
828•xnx•13h ago•496 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
48•matheusalmeida•1d ago•5 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
163•isitcontent•8h ago•18 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
104•jnord•4d ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
159•dmpetrov•8h ago•74 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
58•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
269•vecti•10h ago•127 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
334•aktau•14h ago•161 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
216•eljojo•10h ago•136 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•87 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
31•kmm•4d ago•1 comment

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
418•todsacerdoti•15h ago•220 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
9•denuoweb•1d ago•0 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
8•romes•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
349•lstoll•14h ago•246 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
55•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
205•i5heu•10h ago•150 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
117•vmatsiiako•12h ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
155•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
30•gfortaine•5h ago•4 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
12•gmays•3h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
254•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1008•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
50•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
83•ray__•4h ago•40 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
41•lebovic•1d ago•12 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•15h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

AI focused on brain regions recreates what you're looking at (2024)

https://www.newscientist.com/article/2438107-mind-reading-ai-recreates-what-youre-looking-at-with-amazing-accuracy/
79•openquery•9mo ago

Comments

neonate•9mo ago
https://archive.ph/650Az
averageRoyalty•9mo ago
Maybe I missed this, but isn't the underlying concept here big news?

Am I understanding this right? It seems that by reading areas of the brain, a machine can effectively act as a rendering engine, with per-pixel knowledge of colour, brightness, etc., based on an image the person is seeing? And AI is being used to help because this method is lossy?

This seems huge. Is there other terminology around this I can kagi to understand more?

walterbell•9mo ago
This requires intrusive electrodes; see "fMRI visual recognition": https://scholar.google.com/scholar?q=fmri+visual+recognition

There are startups working on less intrusive (e.g. headset) brain-computer interfaces (BCI).

Legend2440•9mo ago
fMRI isn't the one with the electrodes, it's the one with the giant scanner and no metal objects in the room.
Legend2440•9mo ago
>And AI is being used to help because this method is lossy?

AI is the method. They put somebody in a brain scanner and flash images on a screen in front of them. Then they train a neural network on the correlations between their brain activity and the known images.

To test it, you display unknown images on the screen and have the neural network predict the image from the brain activity.
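
Roughly, the shape of that setup in code as I understand it; the array sizes and the plain ridge decoder below are placeholder assumptions, not what the actual studies use:

    # Hypothetical sketch: learn a mapping from brain activity to the shown
    # images, then reconstruct held-out images from activity alone.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    n_trials, n_voxels, n_pixels = 1000, 5000, 32 * 32 * 3
    activity = np.random.randn(n_trials, n_voxels)   # stand-in scanner data
    images = np.random.rand(n_trials, n_pixels)      # the known, flashed images

    act_tr, act_te, img_tr, img_te = train_test_split(activity, images, test_size=0.2)
    decoder = Ridge(alpha=10.0).fit(act_tr, img_tr)  # train on known (activity, image) pairs

    img_hat = decoder.predict(act_te)                # predict images the decoder never saw
    print("mean squared error:", np.mean((img_hat - img_te) ** 2))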

hwillis•9mo ago
> Then they train a neural network on the correlations between their brain activity and the known images.

Not onto known images, but onto latent spaces of existing image networks. The recognition network gets a very approximate representation, which it then maps onto latent spaces (which may or may not be equivalent), and the image network fills in the blanks.

When you're using single-subject, well-framed images like this they're obviously very predictable. If you showed something unexpected, like a teddy bear with blue skin, the network probably would just show you a normal-ish teddy bear. It's also screwy if it doesn't have a well-correlated input, which is how you get those weird distortions. It will also be very off for things that require precision like seeing the actual outlines of an object, because the network is creating all that detail from nothing.

At least the stuff using a Utah array (a square implanted electrode array) is not transferable between subjects, and the fMRI stuff also might not be transferable. These models are not able to see enough detail to know what is happening: they only see glimpses of a small section of the process (Utah array) or very vague indirect processes (fMRI). They're all very overfitted.
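
For what it's worth, a toy version of that two-stage idea might look like the following; the generator is a stand-in for whatever pretrained image model is actually used, and all the shapes are invented:

    # Hypothetical sketch: regress activity onto the latent space of a
    # pretrained image network, then let that network fill in the blanks.
    import numpy as np
    from sklearn.linear_model import Ridge

    n_trials, n_voxels, latent_dim = 1000, 5000, 512
    activity = np.random.randn(n_trials, n_voxels)
    target_latents = np.random.randn(n_trials, latent_dim)  # latents of the shown images

    to_latent = Ridge(alpha=1.0).fit(activity, target_latents)

    def pretrained_generator(z):
        # Placeholder: a real generator invents a plausible image from z,
        # which is why unexpected inputs come out looking "normal-ish".
        return np.clip(z[:64].reshape(8, 8), -1.0, 1.0)

    reconstruction = pretrained_generator(to_latent.predict(activity[:1])[0])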

ianarchy•9mo ago
A big blocker, I believe, besides the giant expensive fMRI machine, is that each person is different, so a model trained on Bob won't work on Jane.
ivape•9mo ago
Yeah, it's pretty crazy. This seems like it's inputting an image to the monkey's eyes and then figuring out how that particular input maps to brain activity. Someone would have to fight me here, but with enough input, we should be able to mostly figure out how things map. As in, we can perfect this ...
cheschire•9mo ago
I hope one day we can turn this on for coma patients and see if they're dreaming or otherwise processing the world.
hwillis•9mo ago
Using these techniques, never. The electrode methods can only see a tiny section of processing and are missing all the information elsewhere. fMRI is very low resolution. Because of this they are all very overfitted: they cue off very particular subject-specific quirks that will not generalize well.

More importantly, these techniques operate on the V1, V4 and inferior temporal cortex areas of the brain. These areas will fire in response to retina stimulation regardless of what's happening in the rest of your brain. V1 in particular is connected directly to your retinas. While deeper areas may be sympathetically activated by hallucinations etc, they aren't really related to your conception of things. In general if you want to read someone's thoughts you would look elsewhere in the brain.

khazhoux•9mo ago
He did say “one day”
hwillis•9mo ago
One day using different techniques. Not these techniques.
aitchnyu•9mo ago
I want to see a cat's POV when it's startled by a cucumber (YouTube has lots of examples). A theory is that part of the brain mistook it for a snake. Also research on "constant bearing, decreasing range (CBDR)", where drivers may not notice another car/cycle at a perfectly clear crossroads till it's too late.
explodes•9mo ago
For something like these kinds of reflexes, my understanding is that the response comes from the central nervous system, even before the brain has had the chance to fully process the input. This shortcut makes one avoid, say, burns or snakes, quicker than if it required the brain. Still, I agree with you that seeing what a cat sees (here or anywhere) would be awesome.
abeppu•9mo ago
I think the distinction you're drawing between "the central nervous system" and "the brain" is mistaken here -- the brain is part of the CNS. This kind of reflex basically has to involve the brain b/c it involves both the visual system and the motor system i.e. there's not a fast path from the retina to moving your appendages etc that doesn't include the brain.

The "fully process" part is part of the story though -- e.g. perhaps some reactions use the dorsal stream based on peripheral vision while ventral stream is still waiting on a saccade and focus to get higher resolution foveal signals. But though these different pathways in the brain operate at different speeds, they're both still very much in the brain.

ljsprague•9mo ago
Some touch-based reflexes might avoid the higher parts of the brain though no?
abeppu•9mo ago
Yeah I think there are multiple documented cases of this, where especially well-practiced motor-plans seem to be 'pushed down', and if they're interrupted, correction can start faster than a round trip to the brain.
trhway•9mo ago
In this article you can see a typical and a "broken" "visual to amygdala fear shortcut" (the "broken" one is an MRI of the famous climber Honnold)

https://assets.nautil.us/10086_6412121cbb2dc2cb9e460cfee7046...

https://nautil.us/the-strange-brain-of-the-worlds-greatest-s...

(the path is from the back of the head (V5?) where the visual nerve comes into the brain)

heavyset_go•9mo ago
Reflexes do not necessarily have to exist in the brain, but they do exist in the central nervous system. The peripheral nervous system doesn't handle reflexes as far as I'm aware.
johnisgood•9mo ago
The PNS does not process reflexes, but it is essential for transmitting the sensory and motor components of reflex arcs.

But yeah, reflexes are processed in the central nervous system (CNS), typically the spinal cord or brainstem, not necessarily the brain.

smusamashah•9mo ago
It reminds me of this research where the faces monkeys were seeing were recreated almost identically.

https://www.bbc.co.uk/news/science-environment-40131242

https://www.cell.com/cell/fulltext/S0092-8674(17)30538-X

abeppu•9mo ago
I think it would be interesting to know if the viewer's familiarity with the object informs how accurate the reconstruction is. This shows presumably lab-raised macaques looking at boats and tarantulas and goldfish -- and that's cool. But a macaque whose life has been spent indoors in confinement presumably has no mental concepts for these things, so it's basically seeing still images of unfamiliar objects. If the animal has e.g. some favorite toys, or has eaten a range of foods, does it perceive those things with higher detail and fidelity?
Animats•9mo ago
The paper, at least as shown here [1], is vague about which results came from implanted electrodes and which came from functional MRI data. Functional MRI data shows blood flow. It's like looking at an IC with a thermal imager and trying to figure out what it is doing.

[1] https://archive.is/650Az

buildbot•9mo ago
That could be an interesting project in itself: take a simple 8-bit microcontroller, a thermal camera, and some code that does different kinds of operations, and see if you can train a classification model at least, or even recover the code that's running via an image-to-text LLM.
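
A rough sketch of the classification half, with the sensor resolution, operation classes, and classifier all made up for illustration:

    # Hypothetical sketch: label thermal frames by the operation the MCU was
    # running and check whether a classifier can tell them apart.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    ops = ["idle", "adc_read", "pwm", "flash_write"]
    n_frames_per_op, h, w = 200, 24, 32              # e.g. a low-res thermal sensor

    frames = np.random.rand(len(ops) * n_frames_per_op, h * w)  # flattened thermal frames
    labels = np.repeat(ops, n_frames_per_op)                     # which op was running

    clf = RandomForestClassifier(n_estimators=200)
    print(cross_val_score(clf, frames, labels, cv=5).mean())     # chance level here is ~0.25
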
moffkalast•9mo ago
Ah yes, yet another attack vector chip manufacturers will have to protect against now.
tharant•9mo ago
Please stop giving me project ideas. :)
vo2maxer•9mo ago
Just to clarify, the paper [0] does use both implanted electrodes and fMRI data, but it is actually quite transparent about which data came from which source. The authors worked with two datasets: the B2G dataset, which includes multi-unit activity from implanted Utah arrays in macaques, and the Shen-19 dataset, which uses noninvasive fMRI from human participants.

You’re right that fMRI measures blood flow rather than direct neural activity, and the authors acknowledge that limitation. But the study doesn’t treat it as a direct window into brain function. Instead, it proposes a predictive attention mechanism (PAM) that learns to selectively weigh signals from different brain areas, depending on the task of reconstructing perceived images from those signals.

The “thermal imager” analogy might make sense in a different context, but in this case, the model is explicitly designed to deal with those signal differences and works across both modalities. If you’re curious, the paper is available here:

[0] https://www.biorxiv.org/content/10.1101/2024.06.04.596589v2....
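
To make the region-weighting idea concrete, here is an illustrative attention-over-regions module; it is not the paper's actual PAM architecture, just the general shape of learning which areas to weigh:

    # Hypothetical sketch: softmax attention weights over per-region features,
    # trained end-to-end with whatever reconstruction objective sits on top.
    import torch
    import torch.nn as nn

    class RegionAttention(nn.Module):
        def __init__(self, feat_dim, latent_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)        # relevance score per region
            self.to_latent = nn.Linear(feat_dim, latent_dim)

        def forward(self, x):                          # x: (batch, n_regions, feat_dim)
            w = torch.softmax(self.score(x), dim=1)    # attention over regions
            pooled = (w * x).sum(dim=1)                # weighted sum of region features
            return self.to_latent(pooled), w.squeeze(-1)

    signals = torch.randn(8, 5, 128)                   # e.g. features for V1, V4, IT, ...
    latent, weights = RegionAttention(128, 512)(signals)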

dogma1138•9mo ago
If you can extract private keys by measuring how much power a chip consumes I don’t really see a problem with extracting images from fMRI data….
vo2maxer•9mo ago
Fair point. Side-channel attacks show how much signal you can pull from noise. But fMRI is a different kind of beast. It’s slow, indirect, and coarse. You’re not measuring neural activity directly, just blood flow changes that lag by a few seconds.

The paper [0] doesn’t pretend otherwise. It trains a model (PAM) to learn which brain regions carry useful info for reconstructing images, and applies this to both fMRI data from humans and intracranial recordings from macaques. The two signal types are handled separately.

If you want an analogy, it’s less like tapping power lines and more like trying to figure out which YouTube video someone is watching by measuring heat on the back of their laptop every few seconds. There’s a pattern in there, but pulling it out takes work.

[0] https://www.biorxiv.org/content/10.1101/2024.06.04.596589v2....

EasyMarion•9mo ago
Big jump when we go from decoding what you’re seeing to what you’re thinking.
Hoasi•9mo ago
In that case, you'll need the equivalent of ad-blockers for the brain, to prevent eavesdropping and intrusions by commercial and state actors.
w_for_wumbo•9mo ago
This is a big jump ethically, but technically it feels like it's a hop away. If we can do this for visual images, we could use the same strategy on patterns of thought, especially if the person is skilled at visualisation.
ivape•9mo ago
There are no ethics in China.
Gigachad•9mo ago
There are no ethics in America
pona-a•9mo ago
"Feels" and "is" are quite different in these domains. Self-driving feels like it's 5 years away for 10 years straight, and biology is infinitely more complex than automation. See this comment from this thread [0].

> These techniques operate on the V1, V4 and inferior temporal cortex areas of the brain. These areas will fire in response to retina stimulation regardless of what's happening in the rest of your brain. V1 in particular is connected directly to your retinas. While deeper areas may be sympathetically activated by hallucinations etc, they aren't really related to your conception of things. In general if you want to read someone's thoughts you would look elsewhere in the brain.

[0] https://news.ycombinator.com/item?id=43910953

gitroom•9mo ago
big leap for tech but honestly i get kinda uneasy thinking how far we're gonna take this brain decoding stuff
StefanBatory•9mo ago
Ideally, every person working on that should be tried for crimes against humanity and locked away forever.