

Safe AI

1•bobby_mcbrown•4mo ago
Hey friends, I have an idea for a way to make neural networks deterministic instead of probabilistic.

Right now we train neural networks on unstructured data, but the problem is that the resulting models are probabilistic and hard to understand.

I want to create a new kind of neural network that is fully understandable, where each weight is intentional.

So here's the plan: we create a neural network that specializes in reading, understanding, and writing actual neural network weights.

The idea is that such a network, correctly trained, could intentionally and deterministically create and update other neural networks with new knowledge.

For example, you could say "create a network that can read MNIST", and it would actually know how to build that network, weights included. It would come up with reasonable values for every weight, specifying each neuron and each connection with a concrete value.
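To make the "weights as inspectable output" idea concrete, here is a minimal sketch in plain Python. The `generate_network` function is a stand-in for the proposed weight-writing model: instead of learned structural knowledge, it uses a toy deterministic rule, but the point is that every connection gets an explicit, human-readable value (all names and numbers here are illustrative, not from the post):

```python
# Hypothetical sketch: a network's weights treated as plain, inspectable data.
# generate_network stands in for the proposed weight-writing network; a real
# one would choose values from learned knowledge rather than this toy rule.

def generate_network(n_in, n_hidden, n_out):
    """Deterministically emit every weight of a tiny 2-layer net (no training)."""
    w1 = [[0.1 * ((i + j) % 3) for j in range(n_in)] for i in range(n_hidden)]
    w2 = [[0.1 * ((i * j) % 2) for j in range(n_hidden)] for i in range(n_out)]
    return {"w1": w1, "w2": w2}

def forward(net, x):
    """Run the explicitly specified network on one input vector."""
    relu = lambda v: v if v > 0 else 0.0
    h = [relu(sum(w * xi for w, xi in zip(row, x))) for row in net["w1"]]
    return [sum(w * hi for w, hi in zip(row, h)) for row in net["w2"]]

net = generate_network(4, 3, 2)   # every weight is written out, none is opaque
out = forward(net, [1.0, 0.0, 1.0, 0.0])
```

Because the generator is a pure function, building the same network twice yields identical weights, which is exactly the determinism the post is after.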

The cool thing is that it could gain an intuition for various neural architectures: it would set weights, assign input values, run them through, and "debug", getting better and better at making neural nets.

Honestly, we could even train it with reinforcement learning: every time it makes an update that improves the target network, it's like "yes!" and gets rewarded for that series of edits. Run this in parallel, and the variants that work win.
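The propose-test-keep loop described above can be sketched with a simple hill climb standing in for the reinforcement-learning reward (the one-weight "network" and all values below are made up for illustration):

```python
import random

# Sketch of the "debug loop": propose a targeted weight edit, test it, and
# keep it only if the network measurably improves. Accept-if-better hill
# climbing stands in for the reinforcement signal described in the post.

random.seed(0)

def loss(w, data):
    # Mean squared error of a one-weight "network" y = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # target behaviour: w == 2.0
w, best = 0.0, loss(0.0, data)

for _ in range(200):
    candidate = w + random.uniform(-0.5, 0.5)  # proposed weight edit
    trial = loss(candidate, data)
    if trial < best:        # "yes!" -- the edit helped, so keep it
        w, best = candidate, trial
```

Each kept edit is one the editor was "rewarded" for; running many such loops in parallel and keeping the winners is the parallel scheme the post proposes.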

The benefit would be for safety-sensitive scenarios: an AI that can truly understand and inspect what weights mean and what they are for, and edit them for a precise purpose. This could prevent a surgery robot, for example, from cutting too much because of leftover, poorly set neurons from bad training, say, a video about a butcher shop it saw in pretraining.

The other benefit is that it could intelligently do what our brains do: create "skip" connections from early layers to later layers, enhancing efficiency.

It could also improve efficiency by making only the few connections that are actually necessary, and it could choose data types intelligently: high-precision floating point for areas that are sensitive and need it, low precision elsewhere.
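A tiny sketch of that intentional precision assignment: weights the editor marks as sensitive keep their full value, and the rest are snapped to a coarse grid as a stand-in for low-precision storage (the weight values, the `sensitive` set, and the grid step are all illustrative assumptions):

```python
# Sketch of per-weight precision assignment: sensitive weights keep full
# precision, everything else is rounded to a coarse grid (a stand-in for
# storing those weights in a low-precision data type).

def quantize(weights, sensitive, coarse_step=0.25):
    """Keep weights whose index is in `sensitive` exact; snap the rest."""
    return [
        v if i in sensitive else round(v / coarse_step) * coarse_step
        for i, v in enumerate(weights)
    ]

weights = [0.8371, 0.1049, 0.5002, 0.9913]
mixed = quantize(weights, sensitive={0})   # only weight 0 is deemed critical
```

An inspecting network that knows which regions matter could emit exactly this kind of per-weight precision map.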

By training a network to inspect and make networks, we can get much closer to guaranteeing that networks don't have rogue neurons.

Since networks have billions of neurons, I would guess it would need to inspect them at both high and low levels, bit by bit, with a ton of work and experimentation on different sections, and build a sort of plain-text "database" of what each section refers to; it could make indexes and things like that.
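The plain-text "database" could be as simple as a file mapping each neuron to the role it most strongly plays. A minimal sketch, with entirely made-up activation data and neuron names:

```python
# Sketch of the proposed plain-text neuron "database": record which category
# most activates each neuron, yielding a human-readable index that an
# inspecting network (or a person) could query. All data here is invented.

activations = {   # neuron id -> mean activation per input category
    "layer1/n0": {"digits": 0.9, "letters": 0.1},
    "layer1/n1": {"digits": 0.2, "letters": 0.8},
}

# Index each neuron by its dominant role.
index = {
    neuron: max(scores, key=scores.get)
    for neuron, scores in activations.items()
}

# Serialize as tab-separated plain text, one neuron per line.
lines = [f"{neuron}\t{role}" for neuron, role in sorted(index.items())]
```

Done section by section over billions of neurons, this is exactly the kind of index the post imagines building bit by bit.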

Eventually, a neural network could be "self-compiling", like a programming language, not even needing a pretraining phase or backprop.