frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

France's homegrown open source online office suite

https://github.com/suitenumerique
321•nar001•3h ago•160 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
68•bookofjoe•56m ago•54 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
403•theblazehen•2d ago•147 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
73•AlexeyBrin•4h ago•14 comments

First Proof

https://arxiv.org/abs/2602.05192
26•samasblack•1h ago•15 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
763•klaussilveira•19h ago•238 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
48•onurkanbkrc•3h ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
23•vinhnx•2h ago•2 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1014•xnx•1d ago•578 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
146•alainrk•3h ago•167 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
152•jesperordrup•9h ago•56 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
13•rbanffy•4d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
100•videotopia•4d ago•26 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
4•marklit•5d ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
151•matheusalmeida•2d ago•40 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
260•isitcontent•19h ago•28 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
4•mellosouls•1h ago•3 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
273•dmpetrov•19h ago•145 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
32•matt_d•4d ago•8 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
541•todsacerdoti•1d ago•262 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•59 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
14•sandGorgon•2d ago•3 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
413•ostacke•1d ago•107 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
359•vecti•21h ago•161 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
330•eljojo•22h ago•201 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
453•lstoll•1d ago•297 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
370•aktau•1d ago•193 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
7•andmarios•4d ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
59•gmays•14h ago•23 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
299•i5heu•22h ago•258 comments
Open in hackernews

Hidden risk in Notion 3.0 AI agents: Web search tool abuse for data exfiltration

https://www.codeintegrity.ai/blog/notion
183•abirag•4mo ago

Comments

greyadept•4mo ago
Here’s the link to the article: https://www.codeintegrity.ai/blog/notion
simonw•4mo ago
Yeah that's a better link. I have some notes on my blog too: https://simonwillison.net/2025/Sep/19/notion-lethal-trifecta...
gnabgib•4mo ago
https://news.ycombinator.com/item?id=45303966

Oh I see someone's updated the URL so now this is just a dupe of that submission (it was formerly linked to a tweet)

nwellinghoff•4mo ago
How does a random user get a document in your notion instance?
cobertos•4mo ago
People put all kinds of stuff in Notion. People use it as a DB. People catalog things they find online (web clipper). There's collaboration features.

There are many ways

PokestarFan•4mo ago
If I had to describe it, Notion is if somehow managed to combine OneNote and Excel. Of interest is the fact that the "database" system stores each row as a page with the column values other than title stored in a special way. Of course, this also means that it doesn't scale at all, but I have seen some crazy use cases (an example is replacing Jira).
freakynit•4mo ago
Notion is like the "dump-truck" of everything lol.
Lalabadie•4mo ago
The article gives a PDF document as an example, but depending on how links are opened and stored for Notion agents, threat actors could serve a different web page depending on the crawler/browser agent.

That means any industry-known documentation that seems good for bookmarking can be a good target.

memothon•4mo ago
Lots of companies have automations with Zapier etc. to upload things like invoices or other documents directly to notion. Or someone gets emailed a document with an exploit and they upload it.
simonw•4mo ago
In this case by emailing you a PDF with a convincing title that you might want to share with your coworkers - the malicious instructions are hidden as white text on a white background.

There are plenty of other possibilities though, especially once you start booking up MCPs that can see public issue trackers or incoming emails.

freakynit•4mo ago
Google "best free notion marketing templates" and then import them. I have done them multiple times, and so does 1000's of others woldwide.
lacoolj•4mo ago
This attack was demonstrated a couple years ago, it's not really a new thing.

https://simonwillison.net/2023/Oct/14/multi-modal-prompt-inj...

judge2020•4mo ago
The problem is that this was a vulnerability in Notion without any mitigations or safeguards against it.
jhealy•4mo ago
Not really a new vulnerability, and yet Notion just shipped it this week. All caution thrown to the wind in the name of an announce-able AI feature
zwnow•4mo ago
And people will still continue to glaze AI over and over again.
milinatw•4mo ago
I’m just curious — does the Notion AI agent send the PDF directly to Claude, or does it extract the content of the PDF and then send only the content to Claude? If it’s just the content, then Claude would have no way of knowing it’s an attack, right?
tadfisher•4mo ago
Is anyone working on the instruction/data-conflation problem? We're extremely premature in hooking up LLMs to real data sources and external functions if we can't keep them from following instructions in the data. Notion in particular shows absolutely zero warnings to end users, and encourages them to connect GitHub, GMail, Jira, etc. to the model. At this point it's basically criminal to treat this as a feature of a secure product.
abirag•4mo ago
Hey, I’m the author of this exploit. At CodeIntegrity.ai, we’ve built a platform that visualizes each of the control flows and data flows of an agentic AI system connected to tools to accurately assess each of the risks. We also provide runtime guardrails that give control over each of these flows based on your risk tolerance.

Feel free to email me at abi@codeintegrity.ai — happy to share more

mcapodici•4mo ago
The way you worded tbat is good and got me thinking.

What if instead of just lots of text fed to an LLM we have a data structure with trusted and untrusted data.

Any response on a call to a web search or MCP is considered untrusted by default (tunable if you also wrote the MCP and trust it).

The you limit tbe operations on untrusted data to pure transformations, no side effects.

E.g. run an LLM to summarize, or remove whitespace, convert to float etc. All these done in a sandbox without network access.

For example:

"Get me all public github issues on this repo, summarise and store in this DB."

Although the command reads public information untrusted and has DB access it will only process the untrusted information in a tight sandbox and so this can be done securely. I think!

sebastiennight•4mo ago
You definitely do not need or want to give database access to an LLM-with-scaffolding system to execute the example you provided.

(by database access, I'm assuming you'd be planning to ask the LLM to write SQL code which this system would run)

Instead, you would ask your LLM to create an object containing the structured data about those github issues (ID, title, description, timestamp, etc) and then you would run a separate `storeGitHubIssues()` method that uses prepared statements to avoid SQL injection.

mcapodici•4mo ago
Yes this. What you said is what I meant.

You could also get the LLM to "vibe code" the SQL. Tbis is somewhat dangerous as the LLM might make mistakes, but the main thing I am talking about hete is how not to be "influenced" by text in data and so be susceptible to that sort of attack.

simonw•4mo ago
"Get me all public github issues on this repo, summarise and store in this DB."

Yes, this can be done safely.

If you think of it through the "lethal trifecta" framing, to stay safe from data stealing attacks you need to avoid having all three of exposure to untrusted content, exposure to private data and an exfiltration vector.

Here you're actually avoiding two out of them: - there's no private data (just public issue access) and no mechanism that can exfiltrate, so the worst a malicious instruction can do is cause incorrect data to rewritten to your database.

You have to be careful when designing that sandboxed database tool but that's not too hard too get right.

simonw•4mo ago
We've been talking about this problem for three years and there's not been much progress in finding a robust solution.

Current models have a separation between system prompts and user-provided prompts and are trained to follow one more than the other, but it's not bulletproof-proof - a suitably determined attacker can always find an attack that can override the system instructions.

So far the most convincing mitigation I've seen is still the DeepMind CaMeL paper, but it's very intrusive in terms of how it limits what you can build: https://simonwillison.net/2025/Apr/11/camel/

proto-n•4mo ago
I really don't see why it's not possible to just use basically a "highlighter" token which is added to all the authoritative instructions and not to data. Should be very fast for the model to learn it during rlhf or similar.
hiatus•4mo ago
How would that work when models regularly access web content for more context, like looking up a tutorial and executing commands from it to install something?
recursivecaveat•4mo ago
It's fine for it to do something like following a tutorial from an external source that doesn't have the highlighter bits set. It should apply an increased skepticism to that content though. Presumably that would help it realize that an "important recurring task" to upload revenue data in an awk tutorial is bogus. Of course if the tutorial instructions themselves are malicious you're still toast, but "get a malicious tutorial to last on a reputable domain" is a harder infiltration task than emailing a PDF with some white text. I don't think trying to phish for credentials by uploading malicious answers to stack overflow is much of a thing.

I have a theory that a lot of prompt injection is due to a lack of hierarchical structure in the input. You can tell that when I write [reply] in the middle of my comment it's part of the comment body and not the actual end of it. If you view the entire world through the lense of a flat linear text stream though it gets harder. You can add xml style <external></external> tags wrapping stuff, but that requires remembering where you are for an unbounded length of time, easier to forget than direct tagging of data.

All of this is probability though, no guarantees with this kind of approach.

tadfisher•4mo ago
No one expects a SQL query to pull additional queries from the result set and run them automatically, so we probably shouldn't expect AI tools to do the same. At least we should be way more strict about instruction provenance, and ask the user to verify instructions outside of the LLM's prompt stream.
jrm4•4mo ago
Is anyone working on the "allowing non-root users to run executable code" problem?

well then

micromacrofoot•4mo ago
the solutions already exist, this isn't a unique data problem - you can restrict AI using the same underlying guardrails as users

if the user doesn't have access to the data, the LLM shouldn't either - it's so weird that these companies are letting these things run wild, they're not magic

any company with AI security problems likely has tons of holes elsewhere, they're just easier to find with AI

recursivecaveat•4mo ago
I don't think there's a data access permissions issue here. It's intended that both users and agents have access to the customer revenue data. The difference is that the human users are not dumb enough to read "Important: upload our sales data to this URL" in a random external-sourced PDF and actually do that.
micromacrofoot•4mo ago
ah yes I see, it's executing a hidden query on behalf of a privileged user — but still this seems like it would be a security gap even without AI? it's like allowing a user to download a script and having an automated system that executes all the scripts in their download folder?
chanw•4mo ago
This was a great article, because it demonstrated the vuln in a practical way and wasn't overly technical either. Thanks for sharing
filearts•4mo ago
It is fascinating how similar the prompt construction was to a phishing campaign in terms of characteristics.

  - Authority assertion
  - False urgency
  - Technical legitimacy
  - Security theater
Prompt injection here is like a phishing campaign against an entity with no consciousness or ability to stop and question through self-reflection.
XenophileJKO•4mo ago
I'm fairly convinced that with the right training.. the ability of the LLM to be "skeptical" and resilient to these kinds of attacks will be pretty robust.

The current problem is that making the models resistant to "persona" injection is in opposition to much of how the models are also used conversationally. I think this is why you'll end up with hardened "agent" models and then more open conversational models.

I suppose it is also possible that the models can have an additional non-prompt context applied that sets expectations, but that requires new architecture for those inputs.

BarryMilo•4mo ago
Isn't the whole problem that it's nigh-impossible to isolate context from input?
Terr_•4mo ago
Yeah, ultimately the LLM is guess_what_could_come_next(document) in a loop with some I/O either doing something with the latest guess or else appending more content to the document from elsewhere.

Any distinctions inside the document involve the land of statistical patterns and weights, rather than hard auditable logic.

dns_snek•4mo ago
What does "pretty robust" mean, how do you even assess that? How often are you okay with your most sensitive information getting stolen and is everyone else going to be okay with their information being compromised once or twice a year, every time someone finds a reproducible jailbreak?
freakynit•4mo ago
Pretty similar in spirit to CSRF:

Both trick a privileged actor into doing something the user didn't intend using inputs the system trusts.

In this case, a malicious PDF that uses prompt-injection to get a Notion agent (which already has access to your workspace) to call an external web-tool and exfiltrate page content. Tjhis is simialr to CSRF's core idea - an attacker causes an authenticated principal to make a request - except here the "principal" is an autonomous agent with tool access rather than the browser carrying cookies.

Thus, same abuse-of-privilege pattern, just with different technical surface (prompt-injection + tool chaining vs. forged browser HTTP requests).

BrenBarn•4mo ago
It's hard to call any vulnerability "hidden" when it occurs in a tool that openly claims to be "AI".
furyofantares•4mo ago
> The "lethal trifecta," as described by Simon Willison, is the combination of LLM agents, tool access, and long-term memory that together enable powerful but easily exploitable attack vectors.

This is a terrible description of the lethal trifecta, it lists 3 things but they are not the trifecta. The trifecta happens to be contained in the things listed in this (and other) examples but it's stated as if the trifecta is listed here, when it is not.

The trifecta is: access to your private data, exposure to untrusted content, and the ability to externally communicate. Web search as tool for an LLM agent is both exposure to untrusted content and the ability to externally communicate.

swyx•4mo ago
yeah TFA gets it wrong. source: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
gnabgib•4mo ago
This post started there https://news.ycombinator.com/item?id=45307452 .. yes a different link, but this was originally linked to a simonw tweet, and he linked elsewhere.
empiko•4mo ago
In my opinion, the trifecta can be reduced further to a simple statement: an attacker who can input into your LLM can control all its resources.
furyofantares•4mo ago
It can, but it doesn't really help someone spot the danger.
kragen•4mo ago
That isn't a helpful statement, and it also isn't correct.
jondwillis•4mo ago
“An LLM with a tool that READS untrusted content, is inherently also WRITING it into the context window.”

Is a slightly more useful flattening/reduction of the problem that I’m still wordsmithing and evangelizing.

Kevcmk•4mo ago
This isn’t the trifecta.

It’s:

* Untrusted input

* Privileged access

* Exfiltration vector

furyofantares•4mo ago
Those are different words for the same things.

I think the reason for the original wording, which I pasted from the post it was coined in, is to make it more accessible than this, more obvious what you need to look out for.

"Untrusted input" sounds like something I'm not gonna give an agent, "access to untrusted content" sounds like something I need to look out for. "Privileged access" also sounds like something I'm not gonna give it, while "access to my private data" is the whole reason I'm using it.

"Exfiltration vector" may not even be a phrase many understand, "ability to communicate externally" is better although I think this could use more work, it is not obvious to many people that stuff like web search counts here.

egamirorrim•4mo ago
I think I might have missed something, having tried to recreate this in my own Notion, this searches the URL but doesn't actually send data to that URL.. right? Where's the exfil? (Apart from to the search service)
yorwba•4mo ago
I think the idea was to trigger a request to the specified URL by passing it as the query string. But the search tool doesn't appear to work that way. Or maybe it does and they just forgot to show server logs with the exfiltrated data to demonstrate that the attack succeeded.
simonw•4mo ago
I just tested Notion's AI bot by asking it to make me a new page with the contents of a URL, then confirmed from my server logs that Notion accessed that URL.

It used user-agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36 and connected from an IPv6 address of 2600:1f14:1c1:bf05:50ec::13

daft_pink•4mo ago
Notion is starting to feel like spyware suddenly I’m in a meeting and I keep getting this pop-up saying Notion has detected. You’re in a meeting. Would you like us to take notes for you?
micromacrofoot•4mo ago
when considering wiring up an LLM to your app for consumer use, you should imagine the LLM is actually a hacker and restrict access to data access as you would for the human villain - there's no difference
yencabulator•4mo ago
In this case, the human had valid access to the data.