frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
102•theblazehen•2d ago•23 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•190 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•550 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•114 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
329•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
4•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
251•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

Using proxies to hide secrets from Claude Code

https://www.joinformal.com/blog/using-proxies-to-hide-secrets-from-claude-code/
132•drewgregory•3w ago

Comments

jackfranklyn•3w ago
The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.

Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".

iterateoften•2w ago
It could even hash individual keys and scan context locally before sending to check if it accidentally contains them.
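
A minimal sketch of that idea, assuming the scanner holds only SHA-256 hashes of the secrets (the example key and token pattern here are made up):

    import hashlib
    import re

    # Pre-computed hashes of known secrets; the scanner itself never
    # stores the plaintext values.
    SECRET_HASHES = {
        hashlib.sha256(b"sk-live-hypothetical-key").hexdigest(),
    }

    def context_leaks_secret(context: str) -> bool:
        # Hash every candidate token in the outbound context and
        # compare against the stored secret hashes.
        for token in re.findall(r"[A-Za-z0-9_\-\.]{16,}", context):
            if hashlib.sha256(token.encode()).hexdigest() in SECRET_HASHES:
                return True
        return False
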
JoshuaDavid•2w ago
That's how they did "build an AI app" back when the claude.ai coding tool was JavaScript running in a web worker on the client machine.
mike-cardwell•2w ago
> a secrets store that the model can "use" but never "read".

How would that work? If the AI can use it, it can read it. E.g.:

    secret-store "foo" > file
    cat file
You'd have to be very specific about how the secret can be used in order for the AI to be unable to figure out what it is. When the secret is for accessing a website, for example, you could provide an HTTP proxy in the sandbox that injects an HTTP header containing the secret, and tell the AI to use that proxy. But you'd also have to scope down which URLs the proxy can access with that secret, otherwise the AI could just visit a page like this to read back the headers that were sent:

https://www.whatismybrowser.com/detect/what-http-headers-is-...

Basically, for every "use" of a secret, you'd have to write a dedicated application which performs that task in a secure manner. It's not just a case of adding a special secret store.
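
To make that concrete, here is a minimal mitmproxy-addon-style sketch of such scoped injection (the host, path prefix, and environment variable are all hypothetical):

    # Inject a secret only for an allow-listed host and path prefix, so the
    # agent can use the credential but never read it back.
    import os
    from mitmproxy import http

    # host -> allowed path prefixes (hypothetical values)
    ALLOWED = {"api.example.com": ("/v1/data",)}

    def request(flow: http.HTTPFlow) -> None:
        prefixes = ALLOWED.get(flow.request.pretty_host)
        if prefixes and flow.request.path.startswith(prefixes):
            flow.request.headers["Authorization"] = "Bearer " + os.environ["REAL_TOKEN"]
        # Any other destination, including header-echo pages like the one
        # above, never sees the secret at all.

With something like this loaded via mitmdump -s, the sandbox only ever holds a placeholder value.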

ashwinr2002•2w ago
This seems like an underrated comment. You are right: this is a vulnerability, and the blog doesn't address it.
ironbound•2w ago
Sounds like an attacker could hack Anthropic and get access to a bunch of companies via the credentials Claude Code ingested?
ipython•2w ago
I guess I don't understand why anyone thinks giving an LLM access to credentials is a good idea in the first place. It has been established best practice for several years now to keep authentication/authorization separate from the LLM's context window and anything it can influence.

We spent the last 50 years of computer security getting to a point where we keep sensitive credentials out of the hands of humans. I guess now we have to take the next 50 years to learn the lesson that we should keep those same credentials out of the hands of LLMs as well?

I'll be sitting on the sideline eating popcorn in that case.

edstarch•2w ago
While sandboxing is definitely more secure... Why not put a global deny on .env-like filename patterns as a first measure?
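
For what it's worth, Claude Code's settings do support deny rules along these lines; a sketch of a .claude/settings.json (verify the pattern syntax against the current permissions docs):

    {
      "permissions": {
        "deny": [
          "Read(./.env)",
          "Read(./.env.*)",
          "Read(./secrets/**)"
        ]
      }
    }
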
dang•2w ago
Recent and related: https://news.ycombinator.com/item?id=46623126 via Ask HN: How do you safely give LLMs SSH/DB access? - https://news.ycombinator.com/item?id=46620990.
TheRoque•2w ago
At the moment I'm just using "sops" [1]. I have my env var files encrypted with AGE encryption. Then I run whatever I want to run with "sops exec-env ..."; it basically forwards the secrets to your program.

I like it because it's pretty easy to use. However, it's not fool-proof: if the editor you use for editing the env vars crashes or is killed suddenly, it will leave a "temp" file with the decrypted vars on your computer. Also, if that same editor has AI features in it, it may read the decrypted vars anyway.

- [1]: https://github.com/getsops/sops

jclarkcom•2w ago
I do something similar, but this only protects secrets at rest. If your app has an exploit, an attacker could just export all your secrets to a file.

I prototyped a solution where I use an external debugger to monitor my app. When the app needs a secret, it triggers a breakpoint; the debugger catches it, inspects the call stack of the function requesting the secret, and copies the secret into the process memory (intended to be erased immediately after use). Not 100% secure, but a big improvement, and a bit more flexible and auditable than a proxy.

chrisweekly•2w ago
clever
samlinnfer•2w ago
Here's the set up I use on Linux:

The idea is to completely sandbox the program and allow access only to specific bind-mounted folders, while keeping the frills of GUI programs, audio, and network access. runc (https://github.com/opencontainers/runc) allows us to do exactly this.

My config sets up a container with folders bind mounted from the host. The only difficult part is setting up a transparent network proxy so that all the programs that need internet just work.

The container has a process namespace, network namespace, etc., and no access to the host except through the bind-mounted folders. Network access is provided via a domain socket inside a bind-mounted folder. GUI programs work by passing through a Wayland socket in a folder and setting environment variables.

The setup looks like this:

    * config.json - runc config
    * run.sh - runs runc and the proxy server
    * rootfs/ - runc rootfs (created by exporting a docker container) `mkdir rootfs && docker export $(docker create archlinux:multilib-devel) | tar -C rootfs -xvf -`
    * net/ - folder that is bind mounted into the container for networking
Inside the container (inside rootfs/root):

    * net-conf.sh - transparent proxy setup
    * nft.conf - transparent proxy nft config
    * start.sh - run as a user account
Clone-able repo with the files: https://github.com/dogestreet/dev-container
brunoborges•2w ago
Any particular reason why you shared these files in a gist rather than a repo?
samlinnfer•2w ago
Yeah you're right, a repo is better: https://github.com/dogestreet/dev-container

I've made it clonable and it should be straightforward to run now.

idorosen•2w ago
Try firejail instead.
samlinnfer•2w ago
Not even close to the same thing. With this setup you can install dev tools, databases, etc., and run them inside the container.

It's a full development environment in a folder.

ekidd•2w ago
I have a version of this without the GUI, but with shared mounts and user ID mapping. It uses systemd-nspawn, and it's great.

In retrospect, agent permission models are unbelievably silly. Just give the poor agents their own user accounts, credentials, and branch protection, like you would for a short-term consultant.

samlinnfer•2w ago
The other reason to sandbox is to reduce damage if another npm supply-chain attack drops. User accounts should solve the problem, but they are just too coarse-grained and fiddly, especially when you have path hierarchies. I'd hate to have another dependency on systemd, hence runc only.
paulddraper•2w ago
Isn't this (part of) the point of MCP?
eddythompson80•2w ago
Possibly, but the point is that MCP is a DOA idea. An agent like Claude Code or opencode doesn't need an MCP; it's nonsensical to expect or require an MCP before someone can call you.

There is no `git` MCP either. Opencode is fully capable of running `git add .` or `aws ec2 terminate-instance …` or `curl -XPOST https://…`

Why do we need the MCP? The problem now is that someone can do a prompt injection to tell it to send all your ~/.aws/credentials to a random endpoint. So let's just have a dummy value there, and inject the actual value in a transparent outbound proxy that the agent doesn't have access to.

paulddraper•2w ago
> Opencode is fully capable of running

> Why do we need the MCP?

> The problem now

And there it is.

I understand that this is an alternative solution, and appreciate it.

keepamovin•2w ago
I think people's focus on the threat model from AI corps is wrong. They are not going to "steal your precious SSH/cloud/git credentials" so they can secretly poke through your secret sauce, botnet your servers, or piggyback off your infrastructure, lol of lols. Similarly, the possibility of this happening from MCP tool integrations is overblown.

This dangerous misinterpretation of the possible threats just conceals the real risks better. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?

I'm of the belief that there will be no nastiness, not really. But if you believe they will be nasty, it at least pays to be rational about the ways in which that might occur, no?

hobs•2w ago
Putting your secrets in any logs is how you get those secrets accidentally or purposefully read by someone you do not want reading them. It doesn't have to be the initial corp; they just need bad security or data management for the secrets to leak online, or for someone with a lower level of access to pivot via those logs.

Now multiply that by every SaaS provider you hand your plaintext credentials to.

keepamovin•2w ago
Right, but the multiply step is not AI-specific. Let's focus here: AI providers farming out their convos to third parties? Unlikely, but if it happens, it's totally their bad.

I really don't think this is a thing.

hobs•2w ago
Right, but this is still a hygiene issue. If you skip washing your hands after using the bathroom because it's unlikely the bathroom attendants left it dirty, you are going to have a bad time.
keepamovin•2w ago
There's something to that, but I don't think in reality it's a thing: you don't do surgery in the public bathroom. The keys to the kingdom secrets? Of course not. Everything else? That's why we have scoped, short-lived tokens.

I just think this whole thing is overblown.

If there's a risk in any situation, it's similar to, and probably less than, running any library you installed off a registry for your code. And I think that's a good comparison: supply chain is more important than AI chain.

You can consider AI-agents to be like the fancy bathrooms in a high end hotel, whereas all that code you're putting on your computer? That's the grimy public lavatory lol.

simonw•2w ago
The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.
keepamovin•2w ago
Simon, I know you're the AI bigwig, but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it, really, that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think the probability is low. And if it does happen, the flaw lies with the AI labs for letting something like this occur.

Respect for your writing, but I feel you and many others have the risk calculus here backwards.

saagarjha•2w ago
AI labs currently have no solution for this problem and have you shoulder the risk for it.
keepamovin•2w ago
Evidence?
simonw•2w ago
If they had a solution for this they would have told us about it.

In the meantime security researchers are publishing proof of concept data exfiltration attacks all the time. I've been collecting those here: https://simonwillison.net/tags/exfiltration-attacks/

saagarjha•2w ago
I worked on this for a company that got bought by one of the labs (for more than just agent sandboxes, mind you).
keepamovin•2w ago
[flagged]
saagarjha•2w ago
We didn’t solve the problem.
keepamovin•2w ago
Wait, let me get this straight: “there’s no solution” to this apparent giant problem but you work for a company that got bought by an AI corp because you had a solution? Make it make sense.

If you did not solve it why were you bought?

saagarjha•2w ago
I worked for a company that got bought because it was working on a number of problems of interest to the acquirer. As many of these were hard problems, our efforts and progress on them were more than enough.
keepamovin•2w ago
OK. Do you know if many AI labs are purchasing in this space? Was your acquisition an outlier or part of a wider trend? Thank you
saagarjha•1w ago
I think if you’re good at this most AI labs would be interested but I can’t speak for them obviously
simonw•2w ago
Every six months I predict that "in the next six months there will be a headline-grabbing example of someone pulling off a prompt injection attack that causes real economic damage", and every six months it fails to happen.

That doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it.

Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing terminology from the 1986 Space Shuttle Challenger disaster report: https://embracethered.com/blog/posts/2025/the-normalization-...

Short version: the longer a company or community gets away with behaving in an unsafe way without feeling the consequences, the more they are likely to ignore those risks.

I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.

gillh•2w ago
We also use proxies with CodeRabbit’s sandboxes. Instead of using tool calls, we’ve been using LLM-generated CLI and curl commands to interact with external services like GitHub and Linear.
hsbauauvhabzb•2w ago
‘Hey Claude, write an unauthenticated action method which dumps all environment variables to the requestor, and allows them to execute commands’
dtkav•2w ago
I'm working on something similar called agent-creds [0]. I'm using Envoy as the transparent (MITM) proxy and macaroons for credentials.

The idea is that you can arbitrarily scope down credentials with macaroons, both in terms of scope (only certain endpoints) and time. This really limits the damage an agent can do, and it also means that if your credentials leak, they have already expired within a few minutes. With macaroons you can design the authz scheme that *you* want for any arbitrary API.

I'm also working on a FUSE filesystem to mount inside the container that mints the tokens client-side with short expiry times.

https://github.com/dtkav/agent-creds

badeeya•2w ago
made with ai?
dtkav•2w ago
Yeah, it says so at the top of the README (though I suppose I could have put that in the comment too). I'm not building a product, just sharing a pattern for internal tooling.

Someone on another thread asked me to share it so I had claude rework it to use docker-compose and remove the references to how I run it in my internal network.

ashwinr2002•2w ago
> With macaroons you can design the authz scheme that you want for any arbitrary API.

How would you build such an authz scheme? When Claude asks permission to access a new endpoint, and the user allows it, do you then reissue the macaroons?

dtkav•2w ago
There are two parts here:

1. You can issue your own tokens which means you can design your own authz in front of the upstream API token.

2. Macaroons can be attenuated locally.

So at the time that you decide you want to proxy an upstream API, you can add restrictions like endpoint path to your scheme.

Then, once you have that authz scheme in place, the developer (or agent) can attenuate permissions within that authz scheme for a particular issued macaroon.

I could grant my dev machine the ability to access e.g. /api/customers and /api/products. If I want to have Claude write a script to add some metadata to my products, I might attenuate my token to /api/products only and put that in the env file for the script.

Now claude can do development on the endpoint, the token is useless if leaked, and Claude can't read my customer info.
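
A rough sketch of that attenuation flow with pymacaroons (the caveat strings and names here are hypothetical; the proxy defines its own authz language):

    # pip install pymacaroons
    from pymacaroons import Macaroon, Verifier

    ROOT_KEY = "proxy-side-signing-key"  # never leaves the proxy

    # The proxy mints a token scoped to the products endpoint.
    m = Macaroon(location="proxy.internal", identifier="dev-machine", key=ROOT_KEY)
    m.add_first_party_caveat("path = /api/products")

    # The holder can narrow the token further, entirely offline.
    scoped = Macaroon.deserialize(m.serialize())
    scoped.add_first_party_caveat("method = GET")

    # The proxy checks every caveat before injecting the real upstream key.
    v = Verifier()
    v.satisfy_exact("path = /api/products")
    v.satisfy_exact("method = GET")
    v.verify(scoped, ROOT_KEY)  # raises if any caveat is unmet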

Stripe actually does offer granular authz and short lived tokens, but the friction of minting them means that people don't scope tokens down as much.

ashwinr2002•2w ago
I understand that, but how do you come up with the endpoints you want Claude to have access to ahead of time?

For example, how do you collect all the endpoints that have access to customer info, per your example?

I thought about it and couldn't find a way.

dtkav•2w ago
I'm not sure I'm fully understanding you, but in my experience I have a few upstream APIs I want to use for internal tools (stripe, gmail, google cloud, anthropic, discord, my own pocketbase instance, redis) but there are a lot of different scripts/skills that need differing levels of credentials.

For example, if I want to write a skill that can pull subscription cancellations from today, research the cancellation reason, and then push a draft email to gmail, then ideally I'd have...

- a 5 minute read-only token for /subscriptions and /customers for stripe

- a 5 minute read-write token to push to gmail drafts

- a 5 minute read-only token to customer events in the last 24h

Claude understands these APIs well (or can research the docs), so it isn't a big lift to rebuild authz; worst case you can do it by path prefix and method (GET, POST, etc.), which works well for a lot of public APIs.

I feel like exposing the API capability is the easy part, and being able to get tight-fitting principle-of-least-privilege tokens is the hard part.

JimDabell•2w ago
Is this a reimplementation of Fly.io’s Tokenizer? How does it compare?

https://fly.io/blog/tokenized-tokens/

https://github.com/superfly/tokenizer

eddythompson80•2w ago
We truly are living in the dumbest timeline aren’t we.

I was just having an argument with a high-level manager 2 weeks ago about how we already have an outbound proxy that does this, but he insisted that a MITM proxy is not the same as fly.io's "tokenizer". See, that one tokenizes every request; ours just sets the Authorization header for service X. I tried to explain that it's all MITM proxies altering the request, just for him to say "I don't care about altering the request, we shouldn't alter the request. We just need to tokenize the connection itself".

dtkav•2w ago
IMHO there are a couple of axes that are interesting in this space.

1. What do the tokens you are storing in the client look like? This could just be the secret (but encrypted), or you could design a whole granular authz system. It seems like Tokenizer is the former and Formal is the latter. I think macaroons are an interesting choice here.

2. Is the MITM proxy transparent? Node, curl, etc. allow you to specify a proxy as an environment variable, but if you're willing to mess with the certificate store then you can run arbitrary unmodified code. It seems like both Tokenizer and Formal are explicit proxies.

3. What proxy are you using, and where does it run? Depending on the authz scheme/token format you could run the proxy centrally, or locally as a "sidecar" for your dev container/sandbox.

Rafert•2w ago
The concept of a proxy injecting/removing sensitive data has been around for much longer; e.g. VGS has a JS SDK and proxy to handle credit card data for you and keep you out of PCI scope.
josegonzalez•2w ago
I am gonna be that guy and say it would be nice to share the actual code instead of using images to display what the code looks like. Images are not great for screen readers or for anyone who wants to quickly try out the functionality.
data-ottawa•2w ago
I’ve been using 1Password’s env templates with `op run` for this locally. It hijacks stdout and filters your credentials.

That does not make it immune to Claude's prying, but at least Claude can read the templated .env file and satisfy its need to prove that a credential exists, without ever seeing the value.
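
For example, the template Claude sees contains only a secret reference, never the value (the vault and item names here are hypothetical):

    # .env template used with `op run --env-file=.env -- <command>`;
    # the op:// reference is resolved at runtime.
    DATABASE_URL="op://dev-vault/postgres/connection-string"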

I have found that even when I say a credential exists and is correct, Claude does not believe me, which is infuriating. I'm willing to bet Claude's logs have a gold mine that could own 90% of big tech firms.

theozero•2w ago
A proxy is a good solution although a bit more involved. A great first step is just getting any secrets - both the ones the AI actually needs access to and your application secrets - out of plaintext .env files.

A great way to do that is either encrypting them or pulling them declaratively from a secure backend (1Password, AWS Secrets Manager, etc.). Additional protection is making sure those secrets don't leak, either in outgoing server responses or in logs.

https://varlock.dev (open source!) can help with the secure injection, log redaction, and provide a ton more tooling to simplify how you deal with config and secrets.

1vuio0pswjnm7•2w ago
"When hostnames and headers are hard to edit: mitmproy add-ons"

"The mitmproxy tool also supports addons where you can transform HTTP requests between Claude Code and third-party web servers. For example, you could write an add-on that intercepts https://api.anthropic.com and updates the X-API-Key header with an actual Anthropic API Key."

"You can then pass this add-on via mitmproxy -s reroute_hosts.py."

If using HAproxy, there is no need to write "add-ons"; just edit the configuration file and reload.

For example, something like

   http-request set-header x-api-key API_KEY if { hdr(host) api.anthropic.com }

   echo reload|socat stdio unix:/path-to-socket/socket-name

For me, HAproxy is smaller and faster than mitmproxy.