
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
534•klaussilveira•9h ago•149 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
863•xnx•15h ago•524 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
73•matheusalmeida•1d ago•14 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
183•isitcontent•10h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
184•dmpetrov•10h ago•82 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
296•vecti•12h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
72•quibono•4d ago•13 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
344•aktau•16h ago•168 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
341•ostacke•15h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
437•todsacerdoti•17h ago•226 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
8•videotopia•3d ago•0 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
240•eljojo•12h ago•147 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
14•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
376•lstoll•16h ago•252 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
42•kmm•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
222•i5heu•12h ago•163 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
92•SerCe•5h ago•76 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
3•helloplanets•4d ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
62•phreda4•9h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•82 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•11 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
127•vmatsiiako•14h ago•55 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
18•gmays•4h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1030•cdrnsf•19h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
55•rescrv•17h ago•19 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
84•antves•1d ago•60 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
19•denysonique•6h ago•2 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
5•neogoose•2h ago•2 comments

Deep researcher with test-time diffusion

https://research.google/blog/deep-researcher-with-test-time-diffusion/
93•simonpure•4mo ago

Comments

mentalgear•4mo ago
Interesting research, but I wish people would stick to the clearer term “inference-time computation” instead of the more ambiguous and confusing “test-time computation.”
adastra22•4mo ago
Literally everything you do during inference is inference-time, no?
falcor84•4mo ago
Well, if all you're doing is accessing stuff that was pre-learned earlier, then it's not quite inference-time.
bonoboTP•4mo ago
Test/evaluation/inference are treated as almost synonymous because in academic research you almost exclusively run inference on a trained model in order to evaluate its performance on a test set. Of course in the real world, you will want to run inference in production to do useful work. But the language comes from research.
vessenes•4mo ago
OK, I like this. It’s an agent-based add-on to (for now) Gemini that aims to improve the quality of output through a more ‘human’ style of research: digging deeper, considering counterexamples, and fleshing out thin areas with more research.

I’d like to try it, but I just learned I need an Enterprise Agentic subscription of some sort from Google; no idea how much that costs.

That said, this seems like a real abuse of the term diffusion, as far as I can tell. I don’t think this thing is reversing any entropy on any latent space.

CuriouslyC•4mo ago
They published a paper, and this isn't something complex that would take a lot of work to implement. You could probably give codex an example open-source deep research project, then sic it on the paper and tell it to make a fork that uses this algorithm; I wouldn't be surprised if it could basically one-shot the implementation.
vessenes•4mo ago
Yeah, good idea. A virtual Lucid Rains could reimplement it.
badbart14•4mo ago
Huh, never thought of the process of drafting while writing as similar to how diffusion models start from a noisy sample. Super cool for sure, though I'm curious whether this (and other similar research on making models think more at inference time) is showing that the best way for models to "think" is the exact same way humans do.
esafak•4mo ago
This is the first time I'm hearing about their https://cloud.google.com/products/agentspace
blixt•4mo ago
They reference a paper using initial noisy data as a key, mapping to a "jump-ahead" value of a previous example. I think this is very cool and clever, and does use a diffusion model.

But I don't see how this Deep Researcher actually uses diffusion at all. So it seems wrong to say "test-time diffusion" just because you liken an early text draft to noise in a diffusion model, then use RAG to retrieve a potentially polished version of said text draft?

daxfohl•4mo ago
Seems like a useful approach for coding assistants as well: write some draft functionality, notice patterns or redundancy with the existing code or in the change itself, search for libraries or alternative design patterns that could help (or create something targeted to the use case), and reimplement in terms of those new components.
xnx•4mo ago
Does this share techniques with Gemini Diffusion? https://blog.google/technology/google-deepmind/gemini-diffus...
Fripplebubby•4mo ago
The way I read the paper, "diffusion" was more of a metaphor - you start with the output of the LLM as the overview (very much _not_ random noise), and then refine it over many steps. However, seeing this, I wonder whether in-house they mean it more literally, or have actually tried using it that way.
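
To make the analogy the thread is debating concrete, here is a minimal sketch of a draft-then-refine loop in Python. It only illustrates the idea of treating an early draft as the "noisy" state and retrieval-guided revision as the "denoising" step; `generate` and `retrieve` are hypothetical placeholders for an LLM call and a search/RAG step, not the actual components from the paper or blog post.

    # Rough sketch of the "draft as noise, refine with retrieval" loop discussed
    # above. `generate` and `retrieve` are hypothetical placeholders for an LLM
    # call and a search/RAG step; this is not Google's actual implementation.
    from typing import Callable, List

    def deep_research(
        question: str,
        generate: Callable[[str], str],        # LLM call: prompt -> text
        retrieve: Callable[[str], List[str]],  # search step: query -> documents
        steps: int = 3,
    ) -> str:
        # Step 0: a rough first draft plays the role of the "noisy" starting point.
        draft = generate(f"Write a rough first-pass answer to: {question}")
        for _ in range(steps):
            # Use the current draft to decide what to look up next.
            query = generate(f"List the weakest or thinnest claims in:\n{draft}")
            evidence = retrieve(query)
            # "Denoising" step: revise the draft against the retrieved evidence.
            draft = generate(
                "Revise the draft so it is consistent with the evidence.\n"
                f"Draft:\n{draft}\n\nEvidence:\n" + "\n".join(evidence)
            )
        return draft

    # Trivial stand-ins so the sketch runs; swap in a real model and search API.
    if __name__ == "__main__":
        echo_llm = lambda prompt: prompt.splitlines()[-1]
        no_search = lambda query: ["(no evidence found)"]
        print(deep_research("What is test-time diffusion?", echo_llm, no_search))

The system described in the blog post is presumably more elaborate than this; the sketch only captures the draft/denoise analogy the comments are arguing about.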