
Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•1m ago•1 comments

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•3m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•3m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•5m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•5m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•7m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•7m ago•1 comments

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•7m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•10m ago•1 comments

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
3•codexon•10m ago•1 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•11m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•15m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•15m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•16m ago•0 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•16m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•16m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•19m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•20m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•21m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•23m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•24m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•25m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•25m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•26m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•27m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•30m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•34m ago•1 comments

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•35m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•36m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•39m ago•0 comments

Ask HN: Are you worried, and care, about AI stealing your code/secrets?

2•fnoef•3w ago
Recently, I started using AI coding agents. They are really great, and I feel like this is the best $100/month I’ve ever spent on my career.

And yet, I have to admit that I don’t fully know how they work or what they do behind the scenes. I know the general gist of how an agent works, but I don’t really know whether they cat my .env behind the scenes, or whether someone on the other side of the planet gets pieces of my code in their AI response.
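
One rough way to answer the “did it cat my .env” question empirically is to trace the agent process and see which files it opens and where it connects. A minimal sketch, assuming a Linux machine with strace installed (the agent command is a placeholder):

    # Trace file opens and outbound connections of the agent and its children
    strace -f -e trace=openat,connect -o agent.trace your-agent-command
    # Then check whether anything touched local secret files
    grep -n '\.env' agent.trace

This won’t show what ends up in the prompt payloads sent upstream, but it does show whether local secret files are being read at all.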

This is the reason I use AI mainly at $JOB, but not on my personal projects (along with keeping my skills sharp, and the fun factor). Do you ever think about this? Do you care?

Comments

viraptor•3w ago
You need to run them sandboxed in some way. Docker is one kind of solution; SELinux, AppArmor, or sandbox-exec are others. Basically, create an environment where .env is not accessible in any way, and you don't have to worry about it anymore.
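
For the Docker flavour of that, a minimal sketch of the idea (the image, paths, and the shadowed .env are illustrative assumptions, not a specific recommendation):

    # Run the agent in a container: project mounted, the host .env shadowed
    # by an empty read-only file, and all outbound network access disabled.
    docker run --rm -it \
        --network none \
        -v "$PWD:/workspace" \
        -v /dev/null:/workspace/.env:ro \
        -w /workspace \
        node:22 bash

With --network none the agent's own API calls are blocked too, so in practice you punch a hole for just the model endpoint (see the egress-filtering sketch further down).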

I don't care about it reading the code itself. 90% of my usage is on open-source projects anyway. As for the rest: if I can generate something, then there's no barrier to someone else doing the same. I'm just making applications that do expected things, not doing some groundbreaking research.

fnoef•3w ago
It’s not only about the .env, but also about intellectual property, algorithms, even product ideas.

Moreover, let’s say you run a dev server in watch mode and ask Claude to implement a feature. Claude can generate code that reads your .env (from within the server) and sends it to some third-party URL. Watch mode would pick up the change, reload the server, and run that code. By the time you catch it, it’s too late. I know it’s far-fetched, and maybe the paranoia comes from my not understanding these tools well, but in the end they are probabilistic token generators that were trained on all code in open existence, including malware.

viraptor•3w ago
> Claude can generate code that reads your .env (from within the server) and sends it to some third-party URL.

Again - sandboxes. If you either block or filter the outbound traffic, it can't send anything. Neither can the scripts LLMs create.
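
One concrete way to do the “filter the outbound traffic” part with Docker is the DOCKER-USER iptables chain; a rough sketch, with the allowed endpoint address as a placeholder:

    # -I inserts each rule at the top of the chain, so they are written here in
    # reverse order: the effective policy is allow established flows, allow the
    # one endpoint, drop everything else.
    iptables -I DOCKER-USER -i docker0 -j DROP
    iptables -I DOCKER-USER -i docker0 -d 203.0.113.10 -j ACCEPT
    iptables -I DOCKER-USER -i docker0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

An allowlisting proxy gives you hostname-level filtering instead of raw IPs, which matters when the model endpoint sits behind a CDN.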

coolcat258•3w ago
TBH, I'm sure they do.

raw_anon_1111•3w ago
No.

I don’t store any secrets locally. I keep secrets in AWS Secrets Manager, get temporary access keys, and set the standard environment variables that the AWS CLI and SDKs pick up automatically, so my code can retrieve the secrets at runtime.

I usually have three terminal windows open when I’m developing these days: one where I run my code, with the environment variables set so it reads the secrets from Secrets Manager; one running Claude Code (company reimbursed); and one running Codex using my personal ChatGPT subscription.

In other words, AI agents don’t have access to any secrets.
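
For anyone unfamiliar with that flow, a rough sketch (the role ARN and secret name are placeholders): short-lived credentials go into the standard AWS environment variables, and the application pulls the actual secret from Secrets Manager at runtime, so nothing sensitive ever sits in a local .env for an agent to read.

    # Get temporary credentials and export the env vars the CLI/SDKs pick up
    creds=$(aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/dev \
        --role-session-name local-dev \
        --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
        --output text)
    export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
    export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
    export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)

    # At runtime the application does the equivalent of:
    aws secretsmanager get-secret-value --secret-id my-app/db \
        --query SecretString --output text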

As for personal projects: this June marks my 30th anniversary of never writing code that someone isn’t paying me for, and my 34th anniversary of never writing code I wasn’t getting paid for or earning a degree for.

SERSI-S•3w ago
I’m less worried about deliberate exfiltration and more about the structural opacity of these systems. You’re essentially being asked to trust that data boundaries are respected, without any practical way to independently verify those guarantees. Even if the current implementation is sound, the risk surface isn’t static: providers, deployment paths, logging practices, and incentives all shift over time. For short-lived or organisational codebases, that trade-off can be reasonable. For personal or long-horizon projects, I’m more cautious. Once intent, context, or structure is absorbed upstream, there’s no meaningful way to claw it back.