
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
629•klaussilveira•12h ago•186 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
929•xnx•18h ago•547 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
11•theblazehen•2d ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
34•helloplanets•4d ago•26 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
110•matheusalmeida•1d ago•28 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
43•videotopia•4d ago•1 comment

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
10•kaonwarb•3d ago•8 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
221•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
212•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
323•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
372•ostacke•19h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•20h ago•233 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
275•eljojo•15h ago•163 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
403•lstoll•19h ago•272 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
16•jesperordrup•3h ago•9 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
13•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•189 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•18h ago•64 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
281•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•435 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
133•SerCe•9h ago•118 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
177•limoce•3d ago•96 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Stochastic computing

https://scottlocklin.wordpress.com/2025/10/31/stochastic-computing/
66•emmelaich•3mo ago

Comments

emil-lp•2mo ago
How is the use of randomness in stochastic computing connected to how algorithms (e.g., those in the complexity class BPP) use randomness to solve problems?
numbol•2mo ago
It seems that those two (actually three or four) ideas are parallel and not always compatible.

[please forgive my grammar]

1. There are noisy computers, which can work despite, or because of, some unreliable parts. Neural networks are quite OK with this, for example, so some people speculate that it will be possible to build specialized noisy circuits for specific networks.

2. There is stochastic computing, in which complicated numerical functions are represented as probability density distributions (?). (A rough sketch of this one follows below.)

3. Then there is probabilistic computing, in which the state is randomly updated in accordance with some "temperature".

4. And finally there are randomized algorithms, which are closer to classical computer science but consume some stream of random input. However, people like Avi Wigderson have successfully removed the "random" parts of those algorithms.
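To make idea 2 concrete, here is a rough Python sketch (my own illustration, not from the article), assuming the classic unipolar encoding: a value p in [0, 1] is carried as a random bit stream with P(bit = 1) = p, and a single AND gate then multiplies two such values.

    import random

    def encode(p, n):
        # Unipolar stochastic encoding: n random bits with P(1) = p.
        return [1 if random.random() < p else 0 for _ in range(n)]

    def decode(bits):
        # The carried value is just the fraction of 1s in the stream.
        return sum(bits) / len(bits)

    n = 100_000
    a, b = encode(0.3, n), encode(0.8, n)
    product = [x & y for x, y in zip(a, b)]  # one AND gate = multiplication
    print(decode(product))  # ~0.24 = 0.3 * 0.8, up to sampling noise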

Plus, there are funny things with the non-associativity of floating-point numbers, which can lead to non-determinism when the order of execution (of a summation, for example) is arbitrary, and that can produce funny results. But because neural networks are robust to noise to some degree, they will still work.
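A two-line demonstration of that non-associativity (ordinary IEEE doubles, nothing specific to neural networks):

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0, because -1e16 + 1.0 rounds back to -1e16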

And the derandomization work of Avi Wigderson requires that computers behave deterministically (apart from that random input stream), so it will not be very compatible with unreliable noisy computation. However, it seems that stochastic, probabilistic, and noisy computation could be combined.

mikewarot•2mo ago
The key thing I would watch out for with real stochastic computing hardware is crosstalk[1], the inevitable coupling between channels that is bound to happen at some level. Getting hundreds or thousands (or millions?) of independent noise sources to avoid correlation is going to be one of the largest challenges in the process. For a small number of channels it should be manageable, but at LLM-sized problems I think it's a deal killer.

[1] https://en.wikipedia.org/wiki/Crosstalk
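To see why the coupling matters, here is a software sketch of the failure mode (my illustration, not a claim about any particular hardware): the AND-gate multiplier from stochastic computing assumes independent streams, and correlation silently biases the result.

    import random

    n = 100_000
    s1 = [1 if random.random() < 0.5 else 0 for _ in range(n)]
    s2 = [1 if random.random() < 0.5 else 0 for _ in range(n)]

    independent = sum(x & y for x, y in zip(s1, s2)) / n  # ~0.25 = 0.5 * 0.5
    correlated = sum(x & x for x in s1) / n               # 0.5: fully coupled
    print(independent, correlated)  # full correlation turns 0.25 into 0.5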

kragen•2mo ago
If your random bit streams are generated by deterministic processes such as LFSRs, and you're combining them with things like NAND gates, you should easily be able to get the bit error rate down below 10⁻²⁰, I'd think? (And crosstalk would be a bit error.) How often do the gates in your CPU produce the wrong answer?
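A sketch of that setup in Python (my own, using the textbook maximal 16-bit Fibonacci LFSR with taps 16/14/13/11; in the unipolar convention a NAND gate computes 1 - a*b):

    from itertools import islice

    def lfsr16(seed):
        # Fibonacci LFSR, taps 16/14/13/11, maximal period 2^16 - 1.
        state = seed
        while True:
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = ((state >> 1) | (bit << 15)) & 0xFFFF
            yield state

    def stream(p, seed):
        # Threshold the 16-bit state to get a bit stream with density ~p.
        return (1 if s < p * 65536 else 0 for s in lfsr16(seed))

    n = 65_535  # one full period
    pairs = islice(zip(stream(0.3, 0xACE1), stream(0.8, 0x1234)), n)
    nand = sum(1 - (x & y) for x, y in pairs) / n
    print(nand)  # ~0.76 = 1 - 0.3 * 0.8, and fully repeatable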
observationist•2mo ago
https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem

You can precisely engineer arbitrary numbers of channels, design sampling methods to raise your data integrity to whatever your desired error rate is, and so on. This gives you an accuracy/efficiency tradeoff dial, which can be useful - you can choose to spend more time or energy for higher fidelity where the cost justifies it.
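Concretely, the dial is just averaging: estimating a value p from an n-bit stream has standard error sqrt(p(1-p)/n), so each extra decimal digit of accuracy costs roughly 100x more bits. A quick illustration (my own, not from the article):

    import random

    def estimate(p, n):
        # Fraction of 1s in an n-bit stream encoding p.
        return sum(random.random() < p for _ in range(n)) / n

    for n in (100, 10_000, 1_000_000):
        print(n, abs(estimate(0.3, n) - 0.3))  # error shrinks like 1/sqrt(n)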

Feedback and crosstalk, which can create chaotic relationships, unintended synchronization, and other effects, are non-trivial, however.

The neurons in a neural network layer form an unordered set: you can arbitrarily reorder the neurons in a layer so long as you maintain their links to the connected layers. If you permute the structure of a network to reorder the neurons in a layer by some feature, the function of the network remains identical to the original, but you can highlight a particular function or feature of the layer, with the resulting constellation of coordinates representing one particular configuration of synapse ordering and weight vectors. You can cycle through all possible orderings, and each represents a possible state of the trained network. When you try to apply stochastic optimizations to neural networks, you are playing around in this same space - it's effectively a combinatorial minefield.
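That invariance is easy to check numerically. A small sketch (my own, with a toy two-layer MLP): permuting the hidden units, together with the matching rows of the first layer and columns of the second, leaves the function unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

    def mlp(x, W1, b1, W2, b2):
        return W2 @ np.tanh(W1 @ x + b1) + b2

    perm = rng.permutation(8)  # any reordering of the hidden layer
    x = rng.normal(size=4)
    y_orig = mlp(x, W1, b1, W2, b2)
    y_perm = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
    print(np.allclose(y_orig, y_perm))  # True: identical function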

If you design a processing regime to sample a particular subset of possible configurations, it might be possible to exploit a traversal of random orderings associated with amplitude of signals where they correlate and coincide with useful computation - selecting and ordering a set of addresses whose function approximates the desired value.

I see some possibilities and interesting spaces to explore with these systems, but they're going to need some heavy-duty number theorists just to eke out a set of useful primitives, and it's unclear to me that the approach can ever be generalized. You might be able to carefully handcraft an implementation of something like ChatGPT 5, for example, but I don't see how you could simply update it, fine-tune it, or otherwise modify it. You'd have to put in just as much effort to implement any other model, and any sort of dynamic online learning or training seems to hit a combinatorial explosion right out of the gate.

RA_Fisher•2mo ago
How is this not rediscovering statistics in unprincipled ways?
RA_Fisher•2mo ago
Crickets, crickets, crickets ... :)