frontpage.

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•1m ago•1 comments

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•3m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•3m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
1•Anon84•7m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•8m ago•0 comments

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•10m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•17m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•18m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•23m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
4•mooreds•24m ago•2 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•25m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

2•pinkmuffinere•26m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•31m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•33m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•33m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•33m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
4•archb•35m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•35m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•36m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•37m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•42m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
4•dragandj•43m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•44m ago•1 comments

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•45m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•46m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•47m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•49m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•49m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•49m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•50m ago•1 comments

Sandboxing AI agents at the kernel level

https://www.greptile.com/blog/sandboxing-agents-at-the-kernel-level
89•dakshgupta•4mo ago

Comments

CuriouslyC•4mo ago
Just gonna toss this out there, using an agent for code review is a little weird. You can calculate a covering set for the PR deterministically and feed that into a long context model along with the diff and any relevant metadata and get a good review in one shot without the hassle.
dakshgupta•4mo ago
That used to be how we did it, but this method performed better on super large codebases. One of the reasons is that grepping is a highly effective way to trace function calls to understand the full impact of a change. It's also great for finding other examples of similar code (for example the same library being used) to ensure consistency of standards.
arjvik•4mo ago
If that's the case, isn't a grep tool a lot more tractable than a Linux agent that will end up mostly calling `grep`?
lomase•4mo ago
But then you can't say it's powered by AI and get that VC money.
kjok•4mo ago
Ah ha.
CuriouslyC•4mo ago
You shouldn't need the entire codebase, just a covering set for the modified files (you can derive this by parsing the files). If your PR is atomic, covering set + diff + business context is probably going to be less than 300k tokens, which Gemini can handle easily. Gemini is quite good even at 500k, and you can run it multiple times with KV cache for cheap to get a distribution (tell it to analyze the PR from different perspectives).
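
A minimal sketch of what "derive the covering set by parsing the files" could look like for a Python repo (the one-level import walk, module layout, and function names here are illustrative assumptions, not any particular reviewer's implementation):

```python
import ast
from pathlib import Path

def covering_set(repo_root: str, changed_files: list[str]) -> set[Path]:
    """Changed files plus the local modules they import: a rough
    'covering set' to hand to a long-context model along with the diff."""
    root = Path(repo_root)
    covered: set[Path] = set()

    def local_module(name: str) -> Path | None:
        # Map a dotted import like "pkg.mod" to a file inside the repo, if any.
        candidate = root.joinpath(*name.split("."))
        for p in (candidate.with_suffix(".py"), candidate / "__init__.py"):
            if p.exists():
                return p
        return None

    for rel in changed_files:
        path = root / rel
        if path.suffix != ".py" or not path.exists():
            continue
        covered.add(path)
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                dep = local_module(name)
                if dep is not None:
                    covered.add(dep)
    return covered
```

Everything it returns, plus the diff and whatever business context exists, goes into one prompt; no shell and no container are needed for the review itself.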
kketch•4mo ago
The main concern here isn't really whether the agent needs access to the whole codebase. Personally I feel an agent might need access to all or most of the codebase to make better decisions, see how things have been done before, etc.

The real issue is that containers are being used as a security boundary while it’s well known they are not. Containers aren't a sufficient isolation mechanism for multi-tenant / untrusted workloads.

Using them to run your code review agent again puts your customers' source code at risk of theft, unless you are using an actual secure sandbox mechanism to protect your customers' data, which, from reading the article, does not seem to be the case.

jt2190•4mo ago
OT: I wonder if WASM is ready to fulfill the sandboxing needs expressed in this article, i.e., can we put the AI agent into a WebAssembly sandbox and have it function as required?
Yoric•4mo ago
You'll probably need some kind of WebGPU bindings, but I think it sounds feasible.
seanw265•4mo ago
If the agent only needs the filesystem then probably. If it needs to execute code then things get flaky. The WASM/WASI/WASIX ecosystem still has gaps (notably no nodejs).
technocrat8080•4mo ago
A bit confused, all this to say you folks use standard containerization?
whinvik•4mo ago
Same. I didn't really understand what the difference is compared to containerization
rvz•4mo ago
Fundamentally, there is no difference. Blocking syscalls in a Docker container is nothing new; it is one of the ways to achieve "sandboxing" and can already be done right now.

The only thing that caught people's attention was that it was applied to "AI Agents".
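
For context on blocking syscalls in a Docker container: Docker ships a default seccomp profile and accepts a custom one via `--security-opt seccomp=profile.json`, and the same class of filter can be installed from inside the process. A rough sketch using the libseccomp Python bindings (assuming the `seccomp` module from libseccomp is available; the blocked syscalls are arbitrary examples):

```python
import errno
import seccomp  # Python bindings shipped with libseccomp

def install_filter() -> None:
    # Allow everything by default, then deny a few syscalls a review
    # agent has no business making.
    f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
    f.add_rule(seccomp.KILL, "ptrace")               # no poking at other processes
    f.add_rule(seccomp.KILL, "process_vm_readv")
    f.add_rule(seccomp.ERRNO(errno.EPERM), "mount")  # fail with EPERM instead of killing
    f.load()  # the filter now applies to this process and is inherited by its children
```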

kjok•4mo ago
What is so fundamentally different for AI agents?
rvz•4mo ago
Other than being the current popular thing, "AI agents" are programs like any other; fundamentally it changes absolutely nothing.
Yoric•4mo ago
The fact that the first thing people are going to do is punch holes in the sandbox with MCP servers?
thundergolfer•4mo ago
This is a good explanation of how standard filesystem sandboxing works, but it's hopefully not trying to be convincing to security engineers.

> At Greptile, we run our agent process in a locked-down rootless podman container so that we have kernel guarantees that it sees only things it’s supposed to.

This sounds like a runc container because they've not said otherwise. runc has a long history of filesystem exploits based on leaked file descriptors and `openat` without `O_NOFOLLOW`.[1]

The agent ecosystem seems to have already settled on VMs or gVisor[2] being table-stakes. We use the latter.

1. https://github.com/opencontainers/runc/security/advisories/G...

2. https://gvisor.dev/docs/architecture_guide/security/
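
On the `openat`/`O_NOFOLLOW` point: the failure class is a confined process resolving a path that a symlink (or a leaked directory fd) redirects outside its root. A hedged sketch of the defensive pattern, here via Python's `os.open` with `dir_fd`; note that `O_NOFOLLOW` only guards the final path component, and `openat2` with `RESOLVE_BENEATH` is the stricter option but isn't exposed by the `os` module:

```python
import os

def open_in_workspace(workspace_fd: int, relpath: str) -> int:
    """Open a file relative to an already-opened workspace directory,
    refusing '..' components and a symlink at the final component."""
    parts = [p for p in relpath.split("/") if p not in ("", ".")]
    if ".." in parts:
        raise PermissionError(f"path escapes workspace: {relpath!r}")
    # Equivalent to openat(workspace_fd, ..., O_NOFOLLOW): resolution starts
    # at the workspace fd, not at whatever '/' means to the process.
    return os.open("/".join(parts),
                   os.O_RDONLY | os.O_NOFOLLOW | os.O_CLOEXEC,
                   dir_fd=workspace_fd)

# workspace_fd = os.open("/workspace/repo", os.O_RDONLY | os.O_DIRECTORY)
# fd = open_in_workspace(workspace_fd, "src/main.py")
```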

ujrvjhtifcvlvvi•4mo ago
If you don't mind me asking: how do you deal with syscalls that gVisor has not implemented?
thundergolfer•4mo ago
gVisor has implemented a lot of them, but every few months we have an application that hits an unimplemented syscall. We tend to reach for application workarounds, and haven't yet landed a PR to add a syscall. But I'd expect we could land such a PR.
zobzu•4mo ago
chroot'ing isn't sandboxing or "containers". And I don't think it's a very good explanation, actually - not that it's necessarily easy to explain.

It looks like the author just discovered the kernel and syscalls and is sharing it - but it's not exactly new or rocket science.

The author probably should use the existing sandbox libraries to sandbox their code - and that has nothing to do with AI agents actually; any process will benefit from sandboxing, whether it runs on LLM replies or not.

IshKebab•4mo ago
If you only care about filesystem sandboxing isn't Landlock the easiest solution?
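
Landlock's filesystem rules can indeed be self-applied by an unprivileged process. A minimal, heavily hedged sketch via raw syscalls (Landlock ABI v1; the syscall numbers below are for x86_64 and would need adjusting on other architectures):

```python
import ctypes, os

libc = ctypes.CDLL(None, use_errno=True)

# x86_64 syscall numbers; other architectures differ.
SYS_CREATE_RULESET, SYS_ADD_RULE, SYS_RESTRICT_SELF = 444, 445, 446
LANDLOCK_RULE_PATH_BENEATH = 1
# Landlock ABI v1 filesystem access rights (linux/landlock.h).
FS_EXECUTE, FS_WRITE_FILE, FS_READ_FILE, FS_READ_DIR = 1 << 0, 1 << 1, 1 << 2, 1 << 3

class RulesetAttr(ctypes.Structure):
    _fields_ = [("handled_access_fs", ctypes.c_uint64)]

class PathBeneathAttr(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("allowed_access", ctypes.c_uint64), ("parent_fd", ctypes.c_int32)]

def restrict_to_read_only(repo_path: str) -> None:
    """Deny all handled filesystem access except read access beneath repo_path."""
    attr = RulesetAttr(handled_access_fs=FS_EXECUTE | FS_WRITE_FILE | FS_READ_FILE | FS_READ_DIR)
    ruleset_fd = libc.syscall(SYS_CREATE_RULESET, ctypes.byref(attr), ctypes.sizeof(attr), 0)
    if ruleset_fd < 0:
        raise OSError(ctypes.get_errno(), "landlock_create_ruleset")

    repo_fd = os.open(repo_path, os.O_PATH | os.O_CLOEXEC)
    rule = PathBeneathAttr(allowed_access=FS_READ_FILE | FS_READ_DIR, parent_fd=repo_fd)
    if libc.syscall(SYS_ADD_RULE, ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
                    ctypes.byref(rule), 0) < 0:
        raise OSError(ctypes.get_errno(), "landlock_add_rule")

    libc.prctl(38, 1, 0, 0, 0)  # PR_SET_NO_NEW_PRIVS, required before self-restriction
    if libc.syscall(SYS_RESTRICT_SELF, ruleset_fd, 0) < 0:
        raise OSError(ctypes.get_errno(), "landlock_restrict_self")
```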
wmf•4mo ago
"How can I sandbox a coding agent?"

"Early civilizations had no concept of zero..."

kketch•4mo ago
They seem to be looking to let the agent access the source code for review. But in that case, the agent should only see the codebase and nothing else. For a code review agent, all it really needs is:

- Access to files in the repository (or repositories)

- Access to the patch/diff being reviewed

- Ability to perform text/semantic search across the codebase

That doesn't require running the agent inside a container on a system with sensitive data. Exposing an API to the agent that gives it access to specifically the above data avoids the risk altogether.

If it's really important that the agent is able to use a shell, why not use something like codespaces and run it in there?
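
A sketch of the whitelisted-API approach described above, where the agent gets purpose-built read-only tools instead of a shell (the class and method names here are invented for illustration):

```python
import subprocess
from pathlib import Path

class ReviewTools:
    """The read-only operations a code-review agent needs, exposed as
    tools rather than shell access on a machine holding other data."""

    def __init__(self, repo_root: str, base_ref: str = "origin/main"):
        self.root = Path(repo_root).resolve()
        self.base_ref = base_ref

    def _safe(self, relpath: str) -> Path:
        # Reject anything that resolves outside the repository checkout.
        p = (self.root / relpath).resolve()
        if not p.is_relative_to(self.root):
            raise PermissionError(f"{relpath!r} is outside the repository")
        return p

    def read_file(self, relpath: str) -> str:
        return self._safe(relpath).read_text()

    def diff(self) -> str:
        # The patch under review, against the base branch.
        return subprocess.run(
            ["git", "-C", str(self.root), "diff", self.base_ref],
            capture_output=True, text=True, check=True).stdout

    def search(self, pattern: str) -> str:
        # Text search across the codebase; git grep exits nonzero when there
        # are no matches, so that is not treated as an error.
        return subprocess.run(
            ["git", "-C", str(self.root), "grep", "-n", "--", pattern],
            capture_output=True, text=True).stdout
```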

warkdarrior•4mo ago
It would also need:

- Access to repo history

- Access to CI/CD logs

- Access to bug/issue tracking

kketch•4mo ago
I guess maybe even more things? The approach presented in the article doesn't seem like a good way of giving access to these, by the way. None of these live on a dev machine. Things like GitHub Codespaces are better suited for this job and are in fact already used to implement code reviews by LLMs.

My point is whitelisting is better than blacklisting.

When a front end needs access to a bunch of things in a database, we usually provide exactly what's needed through an API; we don't let it run SQL queries on the database and attempt to filter / sandbox the SQL queries.

seanw265•4mo ago
Containers might be fine if you’re only sandboxing filesystem access, but once an agent is executing code, kernel-level escapes are a concern. You need at least a VM boundary (or something equivalent) in that case.