frontpage.

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•26s ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
1•momciloo•1m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•1m ago•1 comment

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
1•valyala•1m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•1m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•1m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•2m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•5m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•5m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
1•valyala•6m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•7m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•8m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
4•randycupertino•10m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•12m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•13m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•14m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•15m ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•15m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•16m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•18m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•19m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•23m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•24m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•24m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•25m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•26m ago•1 comment

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•27m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
2•nicholascarolan•29m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•29m ago•1 comment

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•30m ago•2 comments

Key technological advance in neural interfaces

5•all2•6mo ago
It occurred to me on my way home today that the key advancement in neural interfaces will be in the data layer.

In my work with electronics I learned that there's a hardware transport layer, the wires on which signals travel. Then there's the software/protocol layer that defines _what_ travels on the hardware.
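The transport/protocol split described above can be illustrated with a toy sketch (my own illustration, not anything from the post): the transport layer only moves raw bytes, while the protocol layer defines what those bytes mean — here, a hypothetical length-prefixed message format.

```python
# Toy sketch: separating the hardware "transport layer" (how bytes move)
# from the software "protocol layer" (what the bytes mean).

def transport_send(wire: list, raw_bytes: bytes) -> None:
    """Transport layer: just pushes raw bytes onto the 'wire'."""
    wire.extend(raw_bytes)

def protocol_encode(message: str) -> bytes:
    """Protocol layer: defines WHAT travels -- length-prefixed UTF-8."""
    data = message.encode("utf-8")
    return bytes([len(data)]) + data

def protocol_decode(wire: list) -> str:
    """Protocol layer on the receiving side: reads the length prefix."""
    length = wire[0]
    return bytes(wire[1 : 1 + length]).decode("utf-8")

wire = []
transport_send(wire, protocol_encode("spike"))
print(protocol_decode(wire))  # -> spike
```

The point of the sketch is that either layer can be swapped independently: the same protocol could run over a different wire, and the same wire could carry a different protocol.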

My current understanding of things like Neuralink is that there is a solid interface that takes input from the brain and provides output back to the brain, and behind the interface is a bunch of hardware and software that translates and uses the inputs from the brain. That is, we translate from one mode of signals and signal transport to another.

What occurred to me was that a true bionic won't provide an interface to the existing hardware and software data layers of the human brain, but will instead extend the existing layers with new available neurons.

Now, you could probably bit-bang this at the start, i.e., have your bionic neural net live in software and do all the signal processing that we currently do. The revolution will be a piece of hardware that simply plugs into the brain and makes a whole new neural network available on the same electrical net that the brain already operates on.
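The "bit-banged" software stage could look something like this minimal sketch (entirely hypothetical, with made-up weights and spike inputs): simulated afferent spike trains from the brain side are processed by a neuron implemented purely in software, standing in for the future plug-in hardware.

```python
# Hypothetical sketch: a "bit-banged" bionic neural layer living in software.
# Incoming spikes (0/1 per channel) are weighted and thresholded in code.

def software_neuron(weights, spikes, threshold=1.0):
    """Weighted sum of incoming spikes; fires (1) if it crosses threshold."""
    activation = sum(w * s for w, s in zip(weights, spikes))
    return 1 if activation >= threshold else 0

# Three simulated afferent channels from the "brain side".
weights = [0.6, 0.5, 0.2]
print(software_neuron(weights, [1, 1, 0]))  # 0.6 + 0.5 = 1.1 -> fires: 1
print(software_neuron(weights, [0, 1, 1]))  # 0.5 + 0.2 = 0.7 -> silent: 0
```

In the post's framing, the revolutionary step would be replacing this software stage with hardware that participates directly in the brain's own electrical net rather than translating signals at a boundary.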

Comments

fewbenefit•6mo ago
This post reads like someone who just discovered the OSI model and tried to shoehorn it into neurobiology.

The idea that the "revolution" is a hardware layer that just plugs into the brain and expands it with new neurons assumes a very naive model of how neural integration works. Brains don’t just recognize foreign neurons like USB devices. Synaptic plasticity, metabolic compatibility, glial interactions, all of that matters a lot more than signal translation.

Also, calling it a "data layer" glosses over the fact that neurons don't pass around clean, structured data. There’s no JSON over axons, information in the brain is messy, noisy, and deeply contextual—less like a protocol stack, more like a wet, self-rewriting spaghetti code.

So, if the core insight is "just add more neurons and treat it like hardware expansion," then the real challenge is being understated by several orders of complexity.

all2•6mo ago
> So, if the core insight is "just add more neurons and treat it like hardware expansion," then the real challenge is being understated by several orders of complexity.

I wouldn't say it's an insight so much as an ah-ha moment I had. And yes, I hand-waved a bunch of stuff.

> The idea that the "revolution" is a hardware layer that just plugs into the brain and expands it with new neurons assumes a very naive model of how neural integration works. Brains don’t just recognize foreign neurons like USB devices. Synaptic plasticity, metabolic compatibility, glial interactions, all of that matters a lot more than signal translation.

We don't have hardware like this. Our hardware is 'fixed' once it's burned to silicon. I think you're pointing in the direction I was trying to express: that the bionic hardware will necessarily act like a biological system, at least near enough that whatever it is 'plugged into' cannot tell the difference.

> Also, calling it a "data layer" glosses over the fact that neurons don't pass around clean, structured data. There’s no JSON over axons, information in the brain is messy, noisy, and deeply contextual—less like a protocol stack, more like a wet, self-rewriting spaghetti code.

I know, I know. This is just me trying to apply what I do understand to something I know little to nothing about.

TXTOS•6mo ago
I think both posts are circling the real interface problem — which is not hardware, not protocol, but meaning.

Brains don’t transmit packets. They transmit semantic tension — unstable potentials in meaning space that resist being finalized. If you try to "protocolize" that, you kill what makes it adaptive. But if you ignore structure altogether, you miss the systemic repeatability that intelligence actually rides on.

We've been experimenting with a model where the data layer isn't data in the traditional sense — it's an emergent semantic field, where ΔS (delta semantic tension) is the core observable. This lets you treat hallucination, adversarial noise, even emotion, as part of the same substrate.

Surprisingly, the same math works for LLMs and EEG pattern compression.

If you're curious, we've made the math public here: https://github.com/onestardao/WFGY → Some of the equations were co-rated 100/100 across six LLMs — not because they’re elegant, but because they stabilize meaning under drift.

Not saying it’s a complete theory of the mind. But it’s nice to have something that lets your model sweat.