
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
503•klaussilveira•8h ago•139 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
842•xnx•14h ago•506 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
57•matheusalmeida•1d ago•11 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
166•dmpetrov•9h ago•76 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
281•vecti•11h ago•127 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
60•quibono•4d ago•10 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
340•aktau•15h ago•164 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
226•eljojo•11h ago•141 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
422•todsacerdoti•16h ago•221 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
364•lstoll•15h ago•251 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
12•denuoweb•1d ago•0 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
79•SerCe•4h ago•60 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
59•phreda4•8h ago•9 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
16•gmays•3h ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
211•i5heu•11h ago•158 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
123•vmatsiiako•13h ago•51 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
33•gfortaine•6h ago•9 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
160•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
258•surprisetalk•3d ago•34 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1020•cdrnsf•18h ago•425 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
52•rescrv•16h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•13 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
96•ray__•5h ago•46 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
36•betamark•15h ago•29 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•5h ago•1 comments

GPU memory snapshots: sub-second startup (2025)

https://modal.com/blog/gpu-mem-snapshots
27•jxmorris12•4w ago

Comments

erwaen98•3w ago
Looks great
erichocean•3w ago
Tried it out, first curl after deploy gave me a 303, but second attempt worked.
Imustaskforhelp•3w ago
Is Modal running every single service inside gVisor?

I've heard that gVisor isn't recommended for every production workload, but rather only for front-facing or similarly exposed services, because it carries a serious performance penalty; that's why most end up using Firecracker.

This is really cool though. Does this mean we could have AI models that are snapshotted?

Are the checkpoint/restore states encrypted by default, or how would that even work? What are the privacy implications? I don't think even something like Modal would be the private LLM that many people on subreddits like r/LocalLLaMA want when they don't have a GPU. Of course nothing beats privacy if you have your own GPUs, but I'd be curious to hear what people think.

markasoftware•3w ago
The thing is, Modal is running untrusted containers, so there's not really a concept of "some front-facing" containers. Any container running an untrusted workload is at high risk / is "front-facing".

If Modal's customers' workloads are mainly GPU-bound, then the performance hit of gVisor isn't as big as it might be for other workloads. GPU activity does have to go through the fairly heavyweight nvproxy to be executed on the host, but most GPU activity consists of longer-lived async calls like running kernels, so a bit of overhead in starting those calls and retrieving their results can be tolerated.

Imustaskforhelp•3w ago
Well, if someone is going to use Modal strictly for GPU purposes then I guess it's okay, but anything compute-heavy feels like it would take a performance hit.

So I can agree that Modal might make sense for LLMs, but they position themselves as a sandbox for things like running arbitrary Python code too, and some of those workloads are more CPU-intensive than others, so I just wanted to point that out.

Fly.io uses Firecracker, so I kinda like Firecracker-based applications (I tried to run Firecracker myself; it's way too hard to build your own Firecracker-based provider), and they recently released https://sprites.dev/

E2B is another well-known solution out there. I talked to their developers once and they mentioned that they run it on top of GCP.

I'm really interested in Kata Containers as well, because I think Kata can run on top of Firecracker and hooks into Docker rather easily.

amitprasad•3w ago
If you're not looking for GPU snapshotting, the ecosystem is relatively mature: CPU-only, VM-based snapshotting techniques are pretty well understood. However, if you need GPUs, this is a notoriously hard problem. IIRC Fly was also planning on using gVisor (EDIT: cloud-hypervisor) for their GPU cloud, but abandoned the effort [1].

Kata runs atop many things, but is a little awkward because it creates a "pod" (VM) inside which it creates 1+ containers (runc/gVisor). Firecracker is also awkward because GPU support is pretty hard / impossible.

[1] https://fly.io/blog/wrong-about-gpu/

Imustaskforhelp•3w ago
Ohh, this makes sense now. Firecracker is good for compute-focused workflows, but gVisor is better suited for GPU workflows, gotcha.

For my use cases it's usually Firecracker, but I can now see why a company like Modal would use gVisor, since they focus a lot (and I mean a lot) on providing GPU access. I think that's one of their biggest selling points; for them compute is secondary, and gVisor's compute performance hit is a trade-off well worth making.

Thanks for trying to explain the situation!

zackangelo•3w ago
This uses NVIDIA's CUDA snapshot API under the hood, but you have to pair it with a host-side snapshot as well. Modal uses gVisor for this, which has notoriously high overhead.

Does anyone know of a more efficient alternative if you’re running a trusted container?

luiscape•3w ago
Post author here: there are other projects that create a proxy for CUDA calls and use the log of CUDA operations to checkpoint/restore or live-migrate tasks. We haven't used them; I don't believe they are very popular or used outside specific orgs.

This is the only API available for snapshotting NVIDIA GPU memory, afaik.

As for needing to combine it with a host memory snapshot step, this is required because CUDA sessions need to be mapped to a host process, so you need to snapshot both things in order for the program to be restored correctly.

CRIU is another project that uses the same technique (CUDA snapshot + host memory snapshot). Unlike CRIU, our snapshots work at the function level, so we're able to take snapshots after functions have been initialized (including GPU memory), making Modal cold boots fast. With CRIU, one would have to implement this entire process themselves.
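For readers who want to see the shape of that technique outside Modal, here is a minimal sketch pairing NVIDIA's cuda-checkpoint utility with CRIU, i.e. a CUDA snapshot followed by a host memory snapshot. This is not Modal's implementation; the PID, image directory, and exact flags are illustrative and may vary across tool versions.

```python
# Hedged sketch: CUDA snapshot + host memory snapshot via cuda-checkpoint + CRIU.
# Assumes cuda-checkpoint and criu are installed and run with sufficient privileges;
# the PID and paths are illustrative.
import subprocess

PID = 12345                      # PID of the running CUDA process (hypothetical)
IMAGES_DIR = "/tmp/criu-images"  # where CRIU writes the process dump

def checkpoint(pid: int, images_dir: str) -> None:
    # 1) Toggle the process's CUDA state: device memory is copied into host
    #    memory and the GPU is released by the process.
    subprocess.run(["cuda-checkpoint", "--toggle", "--pid", str(pid)], check=True)
    # 2) Dump the whole process (which now holds GPU state as ordinary host
    #    memory) to disk with CRIU.
    subprocess.run(["criu", "dump", "-t", str(pid),
                    "--images-dir", images_dir, "--shell-job"], check=True)

def restore(pid: int, images_dir: str) -> None:
    # 3) Restore the process image from disk (CRIU keeps the original PID)...
    subprocess.run(["criu", "restore", "--images-dir", images_dir,
                    "--shell-job"], check=True)
    # 4) ...then toggle CUDA back: the saved buffers are copied onto the GPU
    #    and the process resumes where it left off.
    subprocess.run(["cuda-checkpoint", "--toggle", "--pid", str(pid)], check=True)
```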

vivzkestrel•3w ago
- As a guy who's not familiar with or in the loop on all these sandbox products, I have a few quick questions for anyone reading this:

- What is the difference between Docker and Modal?

- What does Modal do that Docker doesn't?

- How do their cold start times compare?

- How do both of these differ from something called a "Firecracker VM"?

BobbyTables2•3w ago
I can describe Firecracker.

With Intel VMX virtualization, instruction execution is handled by the CPU, but a lot of software still has to deal with HW peripheral emulation.

QEMU uses KVM (Intel VMX, etc.) but implements HW peripherals (display, network, disk, etc.) that faithfully match real HW, and it provides a full BIOS (SeaBIOS) or UEFI firmware (EDK2) to handle the boot process.

Over time, Linux (and Windows) were extended to support novel "peripherals" (virtio devices) designed for high emulation performance rather than modeled on any real HW product.

Firecracker skips all the "real" peripheral emulation and the full BIOS/UEFI firmware. Instead, it implements just enough to boot modern Linux directly. It's also written in Rust instead of C. It will never support DOS, Windows 95, or probably anything else.

The "microVM" design lets it start booting Linux very quickly (sub-second); a traditional QEMU VM might take 2-5 seconds. This has emboldened some people to effectively move back from containers to running applications in a VM…

Instead of the VM being long-lived, it is really just for running a single app.

I think Kata Containers has had this idea for much longer, but Firecracker provides a more efficient implementation of it.
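To make the "just enough to boot Linux" point concrete, here is a minimal sketch of driving Firecracker's REST API over its unix socket. It assumes a firecracker process is already running with --api-sock /tmp/firecracker.socket and that an uncompressed kernel and an ext4 rootfs are available; the paths are illustrative.

```python
# Hedged sketch: configure and start a Firecracker microVM via its API socket.
# There is no BIOS/UEFI stage -- you hand it a kernel and a root drive and start it.
import json
import subprocess

SOCK = "/tmp/firecracker.socket"

def api_put(path: str, body: dict) -> None:
    # Firecracker is configured with simple PUTs against a unix-socket REST API.
    subprocess.run(
        ["curl", "--unix-socket", SOCK, "-X", "PUT",
         f"http://localhost{path}",
         "-H", "Content-Type: application/json",
         "-d", json.dumps(body)],
        check=True,
    )

# Boot a guest kernel directly (no SeaBIOS/EDK2).
api_put("/boot-source", {
    "kernel_image_path": "/images/vmlinux",          # illustrative path
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
})

# One virtio block device as the root filesystem.
api_put("/drives/rootfs", {
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",           # illustrative path
    "is_root_device": True,
    "is_read_only": False,
})

# Start the microVM; with a minimal kernel this typically reaches init in well under a second.
api_put("/actions", {"action_type": "InstanceStart"})
```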

vivzkestrel•3w ago
Thank you very much for the detail there. I assume you would also know how a Docker container compares to Firecracker in terms of boot time. I understand that a container and a VM are not the same thing, but I'm just curious.
BobbyTables2•3w ago
The overhead of starting a Docker container is practically zero. A new namespace and a few overlayfs mounts are virtually instantaneous.

Roughly speaking, once the kernel has booted inside the VM, it launches the first process, which plays the role of the "container" in a "Firecracker container".

It's certainly possible to get kernel boot times below 1 second.
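If you want to see the Docker-side number for yourself, here's a rough measurement (assuming Docker is installed and the alpine image is already pulled, so no network pull time is included):

```python
# Hedged sketch: time a full `docker run` of a no-op command.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["docker", "run", "--rm", "alpine", "true"], check=True)
elapsed = time.perf_counter() - start

# Most of this (small) number is Docker engine bookkeeping; the namespace and
# overlayfs setup itself is essentially instantaneous.
print(f"container ran and exited in {elapsed:.3f}s")
```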