frontpage.

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
1•Tehnix•31s ago•0 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
1•haizzz•2m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
1•Nive11•2m ago•1 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
1•hunglee2•6m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
1•chartscout•8m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
2•AlexeyBrin•11m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•12m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•17m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•19m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•22m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•22m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•23m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•28m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•34m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•35m ago•1 comments

Slop News - HN front page right now as AI slop

https://slop-news.pages.dev/slop-news
1•keepamovin•39m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•42m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•47m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•51m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•52m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
3•goranmoomin•55m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•56m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•58m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•1h ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
5•1vuio0pswjnm7•1h ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
4•1vuio0pswjnm7•1h ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•1h ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

What is gVisor?

https://blog.yelinaung.com/posts/gvisor/
125•yla92•6mo ago

Comments

ericpauley•6mo ago
One of the coolest things about gVisor to me is that it's the ultimate implication of core computer engineering concepts like "the OS is just software" and "network traffic is just bytes". It's one thing to learn these ideas in theory, but it's another altogether to be able to play with an entire network stack in userspace and inject arbitrary behavior in the OSI stack. It's also been cool to see what companies like Fly.io and Tailscale can do with complete flexibility in the network, enabled by tools like gVisor.
sidewndr46•6mo ago
I'm trying to understand the point you're making here but don't really get it. The OS is just software, in most circumstances. Most modern OSes require at least one binary blob that has to be sent to some hardware device. That's mostly because the device manufacturer didn't want to include NVRAM, and at the end of the day the blob is usually just software as well.
bananapub•6mo ago
Their point is that lots of things everyone thinks of as "OS" things, like TCP and doing file I/O, can just be done in user space by some new program, without the processes that make use of these facilities knowing or caring.
surajrmal•6mo ago
The majority of any OS lives in user space though. Intercepting syscalls is also not that weird of an idea; that's how tools like strace work. Building out sufficient kernel functionality without needing to forward calls to the kernel is definitely impressive though.
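
(For illustration, a minimal strace-style tracer in Go showing that interception idea in its barest form. This is not gVisor's actual mechanism; its ptrace and systrap platforms do far more than log syscall numbers, and the traced binary below is an arbitrary placeholder.)

    // Minimal strace-style syscall interception on Linux/amd64 using ptrace.
    // Each syscall is reported twice (entry and exit); a real sandbox would
    // decide at the entry stop whether to emulate, rewrite, or forward it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "runtime"
        "syscall"
    )

    func main() {
        runtime.LockOSThread() // all ptrace calls must come from the same OS thread

        cmd := exec.Command("/bin/true") // placeholder target binary
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true} // child stops at its first exec
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        pid := cmd.Process.Pid

        var status syscall.WaitStatus
        syscall.Wait4(pid, &status, 0, nil) // wait for the initial exec stop

        var regs syscall.PtraceRegs
        for {
            // Resume the child until it enters (or exits) the next syscall.
            if err := syscall.PtraceSyscall(pid, 0); err != nil {
                break
            }
            if _, err := syscall.Wait4(pid, &status, 0, nil); err != nil || status.Exited() {
                break
            }
            if err := syscall.PtraceGetRegs(pid, &regs); err != nil {
                break
            }
            fmt.Printf("syscall %d\n", regs.Orig_rax) // syscall number on amd64
        }
    }
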
tptacek•6mo ago
What do you mean by that? There's a notion of an "operating system" that encompasses both the kernel and all the userland tools (in this sense, each Linux distribution is an "OS"), and there's a narrower notion of an OS that is just the kernel and any userland services required for the kernel to function; the latter is the more common definition.
surajrmal•6mo ago
This definition really only works for monolithic kernels. Just because you move a filesystem or network stack into user space doesn't make it less a part of the operating system. Those components are certainly not necessary for the kernel to function. Linux already chooses to place many things like display and rendering stacks in user space, despite them fulfilling a role for the hardware they interact with similar to that of a networking stack. I personally think an OS is everything that helps abstract and multiplex hardware for the applications that sit above it, providing a consistent API to build on top of. Different OSes may choose to abstract different layers and defer ownership of the hardware to things built on top of them, but that doesn't ultimately challenge the definition.
leetrout•6mo ago
How does Fly use gVisor?
abound•6mo ago
I don't believe they do, they use Firecracker microVMs for isolation: https://fly.io/docs/reference/architecture/
ericpauley•6mo ago
https://fly.io/blog/ssh-and-user-mode-ip-wireguard/
PhilippGille•6mo ago
Quote:

> And, long story short, we now have an implementation of certificate-based SSH, running over gVisor user-mode TCP/IP, running over userland wireguard-go, built into flyctl.
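
(For a sense of what that sandwich looks like in code: a sketch against recent wireguard-go's public tun/netstack package, which embeds gVisor's user-mode TCP/IP stack. This is not Fly's actual code; the addresses, keys and endpoint are placeholders, and the tunnel will only come up once real keys are substituted.)

    // Sketch: an HTTP request dialed through gVisor's user-mode TCP/IP stack,
    // carried over userland wireguard-go. Not flyctl's real code.
    package main

    import (
        "io"
        "log"
        "net/http"
        "net/netip"

        "golang.zx2c4.com/wireguard/conn"
        "golang.zx2c4.com/wireguard/device"
        "golang.zx2c4.com/wireguard/tun/netstack"
    )

    // Placeholder hex keys; substitute real hex-encoded WireGuard keys.
    const (
        ourPrivateKeyHex = "0000000000000000000000000000000000000000000000000000000000000000"
        peerPublicKeyHex = "0000000000000000000000000000000000000000000000000000000000000000"
    )

    func main() {
        // CreateNetTUN returns a virtual TUN device backed by gVisor's netstack,
        // plus a handle that can dial connections through that stack.
        tun, tnet, err := netstack.CreateNetTUN(
            []netip.Addr{netip.MustParseAddr("10.64.0.2")}, // our address inside the tunnel
            []netip.Addr{netip.MustParseAddr("10.64.0.1")}, // DNS (placeholder)
            1420)
        if err != nil {
            log.Fatal(err)
        }

        // Userland WireGuard reads and writes that TUN device; no kernel
        // interface, no root, everything stays inside this process.
        dev := device.NewDevice(tun, conn.NewDefaultBind(), device.NewLogger(device.LogLevelError, ""))
        wgConf := "private_key=" + ourPrivateKeyHex + "\n" +
            "public_key=" + peerPublicKeyHex + "\n" +
            "endpoint=203.0.113.10:51820\n" +
            "allowed_ip=0.0.0.0/0\n"
        if err := dev.IpcSet(wgConf); err != nil {
            log.Fatal(err)
        }
        if err := dev.Up(); err != nil {
            log.Fatal(err)
        }

        // tnet.DialContext has the usual dialer shape, so net/http can ride
        // the user-mode stack directly.
        client := http.Client{Transport: &http.Transport{DialContext: tnet.DialContext}}
        resp, err := client.Get("http://10.64.0.1/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        log.Printf("%s", body)
    }
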

tptacek•6mo ago
Also:

https://fly.io/blog/our-user-mode-wireguard-year/

https://fly.io/blog/jit-wireguard-peers/

This is another one of those things where the graph of our happiness about a technical decision is sinusoidal. :)

tptacek•6mo ago
We don't.
jchw•6mo ago
I think they mean to say that a part of gVisor is used by Fly, because if I recall correctly flyctl did use the gVisor user mode TCP stack for Wireguard tunneling.
tptacek•6mo ago
Ahh, that makes sense. Ok, revised answer: yes, we do. :)
quotemstr•6mo ago
Just wait until you read about Wine or captive NDIS. You'll probably enjoy User Mode Linux most of all.

The concept of an OS still makes sense on a system with no privilege level transitions and a single address space (e.g. DOS, FreeRTOS): therefore, mystical low level register goo isn't essential to the concept.

The boundary around the OS is a lot more porous and a lot less arcane than people imagine. In the end, it's just software.

jchw•6mo ago
I believe early on Linode used UML for their VPS hosting offering. At that point in history, I recall solutions like OpenVZ being pretty popular in the low end space, too.

gVisor's modular design seems to have been its strongest point. It's not that nobody understood the OS is just software or whatever, but actually ripping the Linux TCP stack out and using it in userland isn't really that trivial. Meanwhile though a lot of projects have made use of the gVisor networking components, since they're pretty self-contained.

I think gVisor is one of the coolest things written in Go, although it's not really that easy to convey why.

Seriously, just check out the list of packages in the pkg directory:

https://pkg.go.dev/gvisor.dev/gvisor

(I should acknowledge, though, that I don't know of that many unique use cases for all of these packages; and while the TCP stack is very useful, it's mainly used for Wireguard tunneling and user mode TCP stacks are not particularly new. Still, the gVisor network stack is nicer than hacked together stuff using SLiRP-derived code imo.)

udev4096•6mo ago
Moving to unikernels [0] is the best way to get strong isolation and high performance.

[0] - https://unikraft.org

sidewndr46•6mo ago
The last solution I looked at to do something like this was using tap / tun devices for networking. How does unikraft handle network isolation and virtualization?
udev4096•6mo ago
From my limited understanding, it has the same isolation advantages as a VM, and therefore it's as strong as the hypervisor you use.
sidewndr46•6mo ago
so does unikraft contain a "driver" for virtio networking?
johncolanduoni•6mo ago
It relies on your hypervisor and/or network hardware to provide that. In an ideal circumstance (e.g. running on a multiqueue NIC with VFIO or virtio acceleration), your VM can talk directly to the network hardware. Major clouds will provide something morally equivalent via their newer network interfaces (gVNIC etc.).
mikepurvis•6mo ago
Absolutely, that reduces your surface area more than anything else, but at an enormous cost to ergonomics.

Some of us are still fighting for docker images to not include a vim install ("but it's so handy!") and here we've got madlads building their app as its own bootable machine image.

johncolanduoni•6mo ago
It’s not the best way to get low per-privilege domain overhead and fungible resource allocation. You’re ultimately limited by your hypervisor on those fronts. gVisor containers are ultimately a few Linux processes and mostly behave like one from a CPU and memory allocation perspective.
eyberg•6mo ago
These people definitely do not understand security at all:

https://github.com/unikraft/unikraft/issues/414

Also, one needs to be careful because many of the workloads they advertise on their site do not actually run under their kernel; they run under Linux, which breaks a completely different type of trust barrier.

As for trust/full disclosure - I'm with nanovms.com

tkz1312•6mo ago
they acknowledged the issue and the fix was merged in 2022, what exactly is the criticism here?
eyberg•6mo ago
No it wasn't; you can still easily replicate it. I just did.

My point is that you shouldn't go around talking about how "secure" you are when you have large gaping holes like this. This, btw, is not the only major security issue they have.

udev4096•6mo ago
Big fan of nanovms! I should have linked that instead, sorry
kang1•6mo ago
not really, it's just attack surface reduction
mikepurvis•6mo ago
I love the concept of gVisor; it's surprising to me that it seemingly hasn't gotten more real-world traction. Even GHA boots you a fresh machine for every build, when probably 80%+ of them could run just fine in a gVisor sandbox.

I'd be curious to hear from someone at Google whether gVisor gets a ton of internal use there, or whether it really was built mainly for GCP/GKE.

seabrookmx•6mo ago
Google Cloud Functions and Cloud Run both started as gVisor sandboxes and now have "gen2" runtimes that boot a full VM.

Poor I/O performance and a couple of missing syscalls made it hard to predict how your app was going to behave before you deployed it.

Another example of a switch like this is WSL 1 to WSL 2 on Windows.

It seems like unless you have a niche use case, it's hard to truly replicate a full Linux kernel.

kang1•6mo ago
gVisor is difficult to implement in practice. It's a syscall proxy rather than a virtualization mechanism (even though it does have KVM calls).

This causes a few issues:

- the proxying can be slightly slower
- it's not a VM, so you cannot use things such as confidential compute (memory encryption)
- you can't actually instrument all syscalls (most work, but there are a few edge cases where it won't and a VM will work just fine)

On the flip side, some potential kernel vulnerabilities will be blocked by gVisor but won't be in a VM (where it wouldn't be a hypervisor escape, but you'd be able to run code as the guest kernel).

This is to say: there are some good use cases for gVisor, but there are fewer of them than for (micro) VMs in general.

Google developed both gVisor and crosvm (firecracker and others are based on it) and uses both in different products.

AFAIK, there isn't a ton of gVisor use internally if it's not already in the product, though some use it on Borg (they have a "sandbox multiplexer" called vanadium where you can pick and choose your isolation mechanism).

coppsilgold•6mo ago
It's not an actual [filtering] proxy. It re-implements an increasing chunk of Linux syscalls with its own logic. It has to invoke some Linux syscalls to do so but it doesn't just pass them through.
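
(A toy sketch of that split, purely hypothetical code with none of the structure of gVisor's real Sentry: one call is answered entirely from state the userspace kernel owns, another bottoms out in a host syscall made on the guest's behalf.)

    // Toy illustration of "re-implement some syscalls, forward others".
    // Hypothetical code; gVisor's Sentry is vastly more involved than this.
    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    type sysResult struct {
        ret uintptr
        err error
    }

    // A tiny "sentry": it owns the guest's view of its own PID, and it owns
    // the host file descriptors that back the guest's open files.
    type sentry struct {
        guestPID uintptr
        hostFDs  map[uintptr]int // guest fd -> host fd
    }

    func (s *sentry) handle(num uintptr, args ...uintptr) sysResult {
        switch num {
        case unix.SYS_GETPID:
            // Answered entirely in userspace; the host kernel never sees this call.
            return sysResult{ret: s.guestPID}
        case unix.SYS_READ:
            // The fd table is emulated, but the actual I/O bottoms out in a host
            // read(2) on a sentry-owned descriptor, not a blind pass-through of args.
            hostFD, ok := s.hostFDs[args[0]]
            if !ok {
                return sysResult{err: unix.EBADF}
            }
            buf := make([]byte, args[2])
            n, err := unix.Read(hostFD, buf)
            if err != nil {
                return sysResult{err: err}
            }
            // A real implementation would now copy buf into guest memory at args[1].
            return sysResult{ret: uintptr(n)}
        default:
            return sysResult{err: unix.ENOSYS}
        }
    }

    func main() {
        hostFD, err := unix.Open("/dev/zero", unix.O_RDONLY, 0)
        if err != nil {
            panic(err)
        }
        s := &sentry{guestPID: 1, hostFDs: map[uintptr]int{3: hostFD}} // guest fd 3 backed by /dev/zero
        fmt.Println(s.handle(unix.SYS_GETPID))         // never touches the host
        fmt.Println(s.handle(unix.SYS_READ, 3, 0, 16)) // translated into a host read
    }
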
tptacek•6mo ago
I don't think this is really the case, if I'm reading it right. Can you think of a vulnerability hypo where a KVM host is vulnerable, but a gVisor host isn't? gVisor uses KVM.
dmoy•6mo ago
We used gvisor in Kythe (semantic indexer for the monorepo). Like for the guts of running it on borg, not the open source indexers part.

For indexing most languages, we didn't need it, because they were pretty well supported on the Borg stack with all the Google internals. But Kythe indexes 45 different languages, so inevitably we ran into problems with some of them. I think it was the newer Python indexer?

> really was mainly for GCP/GKE

I mean... I don't know. That could also be true. There's a whole giant pile of internal software at Google that starts out as "built for <XYZ>", but then it gets traction and starts being used in a ton of other unrelated places. It's part of the glory of the monorepo: visibility into tooling is good, and reusability is pretty easy (and performant), because everyone is on the same build system, etc.

mikepurvis•6mo ago
Dang, 45? I mean, I assume that's C++, Go, Python, Java, and JavaScript/TypeScript. And languages for build scripts, plus stuff like md and rst. And some shells. Probably embedded languages like lua, sql, graphql, and maybe some shading languages. Fortran and some assembly languages, a forth or two for low level bringup or firmware. Dart of course.

But all of those are still fewer than 30. What am I missing?

dmoy•6mo ago
Three general categories missing:

1. The core stack of internal languages (or internally created but also external ones): protobuf, gcl, etc.

2. Some more well-known languages that aren't as big in Google, but are still used and people wrote indexers for: C#, lisp, Haskell, etc.

3. All the random domain-specific langs that people built and then wrote indexers for.

There's a bunch more that don't have indexers too.

gowld•6mo ago
What in this article is different from the gVisor intro docs (where the gVisor pictures are plagiarized from)? https://gvisor.dev/docs/
setheron•6mo ago
Is gVisor a libc LD_PRELOAD?
kang1•6mo ago
no ;) (though you could start it there if you wanted, but.. why)

LD_PRELOAD simply loads a library of your choice that executes code in the process context; that's all. Folks usually do this when they cannot recompile or change the running binary, which means they also hook and/or overwrite functions of said program.

Generally folks will have gVisor calls integrated into their sandbox code before the target process starts, so there's no need to preload anything in most cases.

lanigone•6mo ago
ask chatgpt to run dmesg via python and you’ll find another use of gvisor in prod…
sneak•6mo ago
I have wondered for a long time why we don’t see more networking in userspace for high security applications that don’t require high performance. I guess the answer is just that Linux has enough features now to hook into the kernel with userspace code that it usually isn’t necessary to move the whole IP and TCP stacks out.
illamint•6mo ago
gVisor also has a complete userspace networking stack that you can pull in, which makes it a lot easier to do some neat things like run an HTTP server responding to packets intercepted via eBPF and sent to an AF_XDP socket, which would otherwise be a pain.
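
(Roughly what that looks like with the netstack packages: a sketch against gvisor.dev/gvisor's pkg/tcpip as of recent releases. The eBPF/AF_XDP plumbing that would feed packets into the link endpoint is omitted, and promiscuous/spoofing mode is used here to sidestep address assignment, whose helper names have shifted between releases.)

    // Sketch: serve HTTP straight off gVisor's userspace TCP/IP stack.
    // The code that actually injects packets (e.g. read from an AF_XDP socket)
    // is omitted; the channel endpoint is the hand-off point.
    package main

    import (
        "log"
        "net/http"

        "gvisor.dev/gvisor/pkg/tcpip"
        "gvisor.dev/gvisor/pkg/tcpip/adapters/gonet"
        "gvisor.dev/gvisor/pkg/tcpip/link/channel"
        "gvisor.dev/gvisor/pkg/tcpip/network/ipv4"
        "gvisor.dev/gvisor/pkg/tcpip/stack"
        "gvisor.dev/gvisor/pkg/tcpip/transport/tcp"
    )

    func main() {
        // A stack with just IPv4 + TCP registered.
        s := stack.New(stack.Options{
            NetworkProtocols:   []stack.NetworkProtocolFactory{ipv4.NewProtocol},
            TransportProtocols: []stack.TransportProtocolFactory{tcp.NewProtocol},
        })

        // channel.Endpoint is an in-memory "NIC": whatever feeds it packets
        // (AF_XDP, a TUN device, a WireGuard tunnel...) is up to the caller.
        ep := channel.New(256, 1500, "")
        const nicID = 1
        if err := s.CreateNIC(nicID, ep); err != nil {
            log.Fatal(err)
        }

        // Accept traffic for any destination address so the sketch can skip
        // explicit address assignment.
        s.SetPromiscuousMode(nicID, true)
        s.SetSpoofing(nicID, true)

        // gonet adapts netstack endpoints to net.Listener / net.Conn.
        l, err := gonet.ListenTCP(s, tcpip.FullAddress{NIC: nicID, Port: 8080}, ipv4.ProtocolNumber)
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello from userspace tcp\n"))
        })))
    }
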
tptacek•6mo ago
There's a separately-maintained fork of this (originally by the Tailscale folks) at https://pkg.go.dev/inet.af/netstack.
spr-alex•6mo ago
We're adding support to gVisor for container plugins; it's a reasonable approach for limiting the rich attack surface on Linux.
remram•6mo ago
Who is "we"? What are "container plugins"?
thundergolfer•6mo ago
We've run gVisor for over 2 years at Modal, and it's been a huge unlock for us. We get a secure sandbox with GPU support that can run on VMs. Just recently it allowed us to checkpoint/restore containers AND their GPUs [1].

gVisor's Achilles heel is its missing or inaccurate syscalls, but the gVisor team is first class in responding to GitHub issues, so it's really quite manageable in practice if you know how to debug and hack on a userspace kernel.

1. https://news.ycombinator.com/item?id=44747116

ignoramous•6mo ago
> userspace kernel

Is gVisor a kernel, or a proxy for syscalls plus select subsystems (like network/GPU)? In my head, a monolithic kernel (like Linux) does more than just handle syscalls (memory management, device management, filesystems, etc.).

peterldowns•6mo ago
In the past I'd heard people recommend against gVisor, and recommend looking at firecracker instead, because of I/O overhead. Is that something you've noticed at Modal? Obviously you're happy with gVisor, not suggesting you switch, just curious about your experience.
tptacek•6mo ago
How are you handling the GPU isolation? (This was a big challenge for us doing AMD-Vi KVM isolation).
Nican•6mo ago
Microsoft's blog post on Hyperlight got my attention a while ago: https://opensource.microsoft.com/blog/2025/02/11/hyperlight-...

I am way out of my depth here, but can anyone make a comparison with the "micro virtual machines" concept?

eyberg•6mo ago
MicroVMs, as espoused by things like Firecracker, offer full machines but have tradeoffs like no GPU (which makes them boot faster).

Hyperlight shaves way more off (e.g. no access to the various devices you'd find via QEMU or Firecracker). It does make use of virtualization, but it doesn't try to present a full-blown machine, so it's better for things like embedding simple functions. I actually think it's an interesting concept, but it is very different from what Firecracker is doing.

laurencerowe•6mo ago
TinyKVM [1] has similarities to the gVisor approach but runs at the KVM level instead, proxying a limited set of system calls through to the host.

EDIT: It seems that gVisor has a KVM mode too. https://gvisor.dev/docs/architecture_guide/platforms/#kvm

I've been working on KVMServer [2] recently, which uses TinyKVM to run existing Linux server applications by intercepting epoll calls. While there is a small overhead to crossing the KVM boundary to handle syscalls, we get the ability to quickly reset the state of the guest. This means we can provide per-request isolation with an order of magnitude less overhead than alternative approaches like forking a process or even spinning up a V8 isolate.

[1] Previous discussion: https://news.ycombinator.com/item?id=43358980

[2] https://github.com/libriscv/kvmserver

vlovich123•6mo ago
How do you deal with the lack of performance optimizations for JIT code because there’s no warm up and the optimizer never runs?
laurencerowe•6mo ago
We have support for running warmup requests and forking the VM after that. Eventually I'd like to add the ability to export the state of the VM so the warmup can be run on a different machine.
vlovich123•6mo ago
I think there are a number of challenges with that approach, mainly getting a representative set of sample queries that will accurately optimize the reference VM. I wonder if harvesting the VM state at scale, based on pages that are duplicates across machines, might work; of course, then you have problems with ASLR and with how to reconstruct a VM to actually use that data.
laurencerowe•6mo ago
The more representative the warmup set, the better the result, but even a quite simplistic approach is helpful, since much of what you want to optimise is not page-dependent: the React rendering infrastructure, router, server, and, in the case of Deno, the runtime-level code written in JS.

I suspect harvesting VM state from a production workload would be counterproductive to the goal of isolation.

vlovich123•6mo ago
Harvesting the pages for the JIT and somehow reusing them to prewarm the JIT state, not the heap state overall. The heap state itself is definitely solved by the simple prewarming you describe because of the various state within various code paths that might take time to initialize/prewarm.

I'm not saying it's not helpful. I'm just flagging that JIT research is pretty clear that the performance improvements from a JIT are hugely dependent on actually running the realistic code paths and data types that you see over and over again. If there's divergence you get suboptimal or even negative gains, because the JIT will start generating code for paths you actually don't care about. If you have control of the JIT you can mitigate some of these problems, but it sounds like you don't, in which case it's something to keep in mind as a problem at scale; i.e. it could end up being 5-10% of global compute, I think, if all your traffic is JITed, and it would certainly negatively impact latencies of this code running on your service. Of course I'm sure you've got bigger technical problems to solve. It's a very interesting approach for sure. Great idea!

laurencerowe•6mo ago
Thanks! I can see how that would be useful but it sounds like it would require deep integration with the JIT. With the TinyKVM/KVMServer approach we have a well defined boundary in the Linux system call interface to work with. It's been quite surprising to me how much is possible with such a small amount of code.
vlovich123•6mo ago
For sure. I think, though, you might want more non-JIT customers, because a) Cloudflare and AWS have a better story there, and thus customer acquisition is more expensive, and b) you have a much stronger story for the things they have to break down to WASM, as WASM has significant penalties. E.g. if I had an easy Cloudflare-like way to deploy Rust, that would be insanely productive.
laurencerowe•6mo ago
I guess my question here is: if you are already writing Rust, do you care about per-request isolation so much? If you don't, then deploying a container to AWS Lambda or GCP Cloud Run is already pretty easy. It might be possible to offer better cold-start performance with the TinyKVM approach, but that is still an unknown.

For the Varnish TinyKVM vmod they brought up examples of running image transcoding which is definitely something that benefits from per request isolation given the history of exploits for those kinds of C/C++ libraries.

It's worth noting that Cloudflare/AWS Lambda don't have per-request isolation and that's pretty important for server side rendering use cases where code was initially written with client side assumptions.

Not sure this will ever turn into a business for me personally - my motivation is in trying to regain some of the simplicity of the CGI days without giving up the performance gains of modern software stacks. Though it would be helpful to have a production workload to improve at some point.

vlovich123•6mo ago
> do you care about per-request isolation so much

> It's worth noting that Cloudflare/AWS Lambda don't have per-request isolation and that's pretty important for server side rendering use cases where code was initially written with client side assumptions.

It wasn't just because of SSR. There are numerous opportunities for security vulnerabilities because of request confusion in global state. Per-request isolation is definitely something Cloudflare would enable if they had a viable solution from that perspective. As such, the language you write it in is irrelevant; Rust is just as vulnerable to this problem as JS or anything else.

> If you don't then deploying a container to AWS Lambda or GCP Cloud Run is already pretty easy

Yea, but cloud functions like you're talking about are best for running at the edge, as close to the user as possible, not for traditional centralized servers. It also promotes a very different programming paradigm that, when you fit into it, is significantly cheaper to run and maintain, because you can decompose your service.

> It might be possible to offer better cold start performance with the TinyKVM approach, but that is still an unknown.

https://blog.cloudflare.com/eliminating-cold-starts-with-clo...

You’d want to start prewarming an instance to be ready to handle the request when a TLS connection for a function comes in.