
Ask HN: Why are so many rolling out their own AI/LLM agent sandboxing solution?

32•ATechGuy•2w ago
Seeing a lot of people running coding agents (Claude Code, etc.) in custom sandboxes: Docker/VMs, firejail/bubblewrap, scripts that gate file or network access.

Curious to know what's missing that makes people DIY this? And what would a "good enough" standard look like?
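For context, many of the DIY setups mentioned above amount to a thin wrapper that assembles a bubblewrap command line. A minimal sketch of such a wrapper; the mount and namespace choices here are illustrative assumptions, not a vetted profile:

```python
import subprocess

def bwrap_argv(repo: str, cmd: list[str]) -> list[str]:
    """Build a bwrap argv giving an agent a read-only view of the repo,
    throwaway scratch space, and no network access."""
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",     # system binaries, read-only
        "--ro-bind", repo, "/work",      # the repo, read-only
        "--tmpfs", "/tmp",               # scratch space, discarded on exit
        "--proc", "/proc",
        "--dev", "/dev",
        "--unshare-net",                 # drop network access entirely
        "--die-with-parent",             # kill the sandbox if the wrapper dies
        "--chdir", "/work",
    ] + cmd

# Usage (hypothetical agent command):
# subprocess.run(bwrap_argv("/home/me/project", ["my-agent", "fix tests"]))
```

The point of keeping it a pure argv builder is that the policy (what is bound read-only, whether the network namespace is shared) stays reviewable in one place.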

Comments

rvz•2w ago
This is no different from people rolling their own custom cryptography, which is absolutely not recommended.

The question is how easy is it to bypass these DIY 'sandboxes'?

As long as there is a full OS running, you are one libc function away from a sandbox escape.

ATechGuy•2w ago
> As long as there is a full OS running, you are one libc function away from a sandbox escape.

Does this mean all software in the world is just one function away from escape?

sargstuff•2w ago
Yup. Technically, it takes just one external reference from inside the sandbox to the outside environment ("a software stargate portal to an alternate environment", e.g. one evaluated part of an s-expression that calls system()).

Running software is insecure the moment the power switch is on / the moment you start checking Schrödinger's box. Although reverse Schrödinger's cat might be more accurate: the cat can escape the box if someone peeks in from outside.
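The "one function away" point is easy to demonstrate even without touching libc: a naive in-process sandbox that strips builtins from Python's eval still exposes every class loaded in the interpreter. A classic sketch:

```python
# A naive "sandbox": evaluate untrusted code with all builtins removed.
untrusted = "().__class__.__base__.__subclasses__()"

result = eval(untrusted, {"__builtins__": {}})

# The expression walks from an empty tuple up to `object` and back down
# to every class in the process. From there, well-known gadget chains
# reach os.system -- stripping names is not isolation.
print(type(result), len(result))
```

This is why "deny a list of dangerous names" sandboxes keep getting bypassed: the escape only needs one reachable object, not a blessed entry point.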

verdverm•2w ago
I started building my own agent when I became frustrated with copilot not reading my instruction files reliably. Looked at the code, and wouldn't you know they let the LLM decide...

Once I started down this path, I knew I was going to need something for isolated exec envs. I ended up building something I think is quite awesome on Dagger. It lets me run in containers without running containers, can get a diff or rewind history, and can persist and share via any OCI registry.

So on one hand, I needed something and chose a technology that would offer me interesting possibilities, and on the other I wanted to have features I don't expect the likes of Microsoft to deliver with Copilot, only one of which is my sandbox setup.

I'm not sure I would call it rolling my own completely; I'm building on established technology (OCI, Dagger).

I don't expect a standard to arise; OCI is already widely adopted and makes sense, but there are other popular techs and there will be a ton of reimplementations under another name/claim. The other half of this is that AI providers are likely to want to run and charge money for this; I personally expect more attempts at vendor lock-in in this space. For example, Anthropic bought Bun, and I anticipate some product to come of this, isolation and/or canvas related.

ATechGuy•2w ago
What was the first concrete thing you needed that existing sandboxing tools (Docker/VMs/bwrap) just didn't provide?
verdverm•2w ago
This question reads like HN market research and not genuine curiosity

Go look at what dagger provides over those technologies as a basis for advanced agent env capabilities. I use it for more than just sandboxing with my agent

I would also point out that sandboxing is just one feature, one that is approaching required status for an agentic framework, and is unlikely to be an independent product or solution.

kaffekaka•2w ago
Speaking for myself, a bash script and a Dockerfile (coupled with a dedicated user on a Linux system) seemed simpler than discovering and understanding some other, overcomplicated tool built by someone else. Example: a coworker vibe-coded a bloated tool, but it was not adapted to OSes other than his own, it was obviously LLM-generated so neither of us actually knew the code, etc. My own solution has shortcomings too, but at least I can be aware of them.

It simply feels as if there is no de facto standard yet (there surely will be).

verdverm•2w ago
I expect OCI will be the standard, largely because of the ubiquity and experience we already have.

I'm building on OCI (via Dagger), so you are in good company, if I may say so

varshith17•2w ago
Same reason everyone rolled their own auth in 2010: the problem is simple enough to DIY badly, complex enough that no standard fits everyone. My Claude Code needs SSH access but not rm. Your agent needs filesystem writes but not network. There's no "OAuth for syscalls" yet.
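That per-agent variance is why people end up writing small bespoke policy layers. A toy sketch of what such a check tends to look like; the agent names and rules are hypothetical, invented for illustration:

```python
# Toy per-agent policy: each agent gets its own command allowlist and
# network flag. Agents and rules below are made-up examples.
POLICIES = {
    "code-agent": {"allow_cmds": {"git", "ssh", "cargo"}, "network": True},
    "doc-agent":  {"allow_cmds": {"git", "pandoc"},       "network": False},
}

def allowed(agent: str, cmd: str, needs_network: bool = False) -> bool:
    policy = POLICIES.get(agent)
    if policy is None:
        return False                 # default-deny unknown agents
    if cmd not in policy["allow_cmds"]:
        return False                 # e.g. "rm" is never listed
    if needs_network and not policy["network"]:
        return False
    return True
```

The interesting part is not the dict lookup but the shape: policy is per-agent and per-capability, which is exactly what a one-size-fits-all standard struggles to express.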
verdverm•2w ago
this is the most insightful comment I've heard on this in a while

To me, OCI seems the best foundation to build on. It has the features, is widely disseminated, and we have a lot of practice and tooling already

ATechGuy•2w ago
> There's no "OAuth for syscalls" yet.

This exists today in OSes in the form of discretionary/mandatory access control (e.g., SELinux, AppArmor, Landlock).

verdverm•2w ago
Yeah, but that's not click-a-button, import-OAuth-from-Clerk easy
aristofun•2w ago
Can you explain it like I'm 5: how does that even work?

If you cut network and files for Claude, for example, how is it even going to do the useful work?

hahahahhaah•2w ago
You don't cut all network access, you just decide what you allow to pierce through.

For files, it has an isolated file system. That can hold a git clone.
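In other words, the common pattern is default-deny egress with a small allowlist "pierced" through, e.g. only the model API and the package index. A minimal sketch of such a filter; the hostnames are illustrative choices, not a recommendation:

```python
from fnmatch import fnmatch

# Default-deny egress: only hosts matching a pattern below get through.
# The allowlist entries are illustrative examples.
ALLOWED_HOSTS = ["api.anthropic.com", "pypi.org", "*.pythonhosted.org"]

def egress_allowed(host: str) -> bool:
    """Return True only if the host matches an allowlisted pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_HOSTS)
```

In real setups this check usually lives in a local proxy or in iptables/nftables rules rather than in application code, but the decision logic is the same.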

wassel•1w ago
I think a lot of teams realize “agent sandboxing” isn’t just isolation, it’s about making long-running agent work actually converge.

In practice, agents don’t fail only because the model is wrong. They fail because the environment is flaky: missing deps, slow setup, weird state, unclear feedback loops. If you give an agent an isolated, secure environment that’s already set up for the repo, you remove a ton of friction and iterations become much more reliable.

The other piece is “authority” / standards. You can write guidelines, but what keeps agents (and humans) aligned is the feedback: tests, linters, CI rules, repo checks. Centralizing those standards and giving the agent a clean place to run them makes compliance much more deterministic.

We built this internally for our own agent workflows and we’re debating whether it’s worth offering the sandbox part as a standalone service (https://envs.umans.ai), because it feels like the part everyone ends up rebuilding.

ATechGuy•1w ago
> They fail because the environment is flaky: missing deps, slow setup, weird state, unclear feedback loops.

Why can't agents install missing deps based on the error message?

wassel•1w ago
They often try, but two things bite in practice:

- Permissions and sandbox limits. Many agents don't run on a dev's laptop with admin access. They run in the cloud or in locked-down sandboxes: no sudo, restricted filesystem, restricted network egress. So "just install it" is sometimes not allowed or not even possible.

- It is a token and time sink and easy to go down the wrong path. Dependency errors are noisy: missing system libs, wrong versions, build toolchain issues, platform quirks. Agents can spend a lot of iterations trying fixes that don’t apply, or that create new mismatches.

Repo-ready environments don't replace agents installing deps. They just reduce how often they have to guess.

jacobgadek•1w ago
The "token and time sink" point is huge. I've found that even when agents can install deps, they often get stuck in reasoning loops trying to fix a "build toolchain issue" that is actually just a hallucinated package name.

I built a local runtime supervisor (Vallignus) specifically to catch these non-converging loops. It wraps the agent process to enforce egress filtering (blocking those random pip installs) and hard execution limits so they don't burn $10 retrying a fail state.

It's effectively a "process firewall" for the agentic workflow. Open source if you want to see the implementation: https://github.com/jacobgadek/vallignus
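The non-converging-loop detection described above can be approximated with something as small as a sliding window over the agent's recent actions. A hedged sketch of the idea, not Vallignus's actual implementation; window size and repeat threshold are arbitrary choices:

```python
from collections import deque

class LoopGuard:
    """Stop an agent that keeps retrying the same action, e.g. the same
    failing pip install, instead of letting it burn tokens on a fail state."""

    def __init__(self, window: int = 10, max_repeats: int = 3):
        self.recent = deque(maxlen=window)   # sliding window of recent actions
        self.max_repeats = max_repeats

    def record(self, action: str) -> bool:
        """Record an action; return True if the agent should be cut off."""
        self.recent.append(action)
        return self.recent.count(action) >= self.max_repeats
```

A supervisor wrapping the agent process would call `record()` on each tool invocation and kill the process when it returns True, alongside a hard wall-clock or spend limit.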