frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•4m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•6m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•6m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•6m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•8m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•9m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•10m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•10m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•15m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•16m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•17m ago•1 comments

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•19m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•19m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•20m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•22m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•23m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•23m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•23m ago•1 comments

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•25m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•26m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•27m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•28m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•29m ago•1 comments

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•29m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•32m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•33m ago•2 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•33m ago•1 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•33m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•36m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•36m ago•0 comments
Open in hackernews

Design Patterns for Securing LLM Agents Against Prompt Injections

https://simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/
110•simonw•7mo ago

Comments

mooreds•7mo ago
Also here's the referenced paper: https://arxiv.org/abs/2506.08837
JSR_FDED•7mo ago
Clever. It’s like parameterized queries for SQL.
simonw•7mo ago
If only it were as easy as that!

The problem with prompt injection is that the attack itself is the same as SQL injection - concatenation trusted and untrusted strings together - but so far all of our attempts at implementing a solution similar to parameterized queries (such as system prompts and prompt delimiters) have failed.

Terr_•7mo ago
Even worse, all outputs become inputs, at least in the most interesting use-cases. So to continue the SQL analogy, you can be 100% confident that in the legitimacy of:

    SELECT messages.content FROM messages WHERE id = 123;
Yet the system is in danger anyway, because that cell happens to be a string of:

    DROP TABLE customers;--
... Which becomes appended to the giant pile-of-inputs.

_____

Long ago I encountered a predecessor's "web scripting language" product... it worked based on repeatedly evaluating a string and substituting the result, until it stopped mutating. Injection was its lifeblood, Even an if-else was really just a decision between one string to print and one string to discard.

As much as it horrified me, in retrospect it was still marginally more secure than an LLM, because at least it had definite (if ultimately unworkable) rules for matching/escaping things, instead of statistical suggestions.

swyx•7mo ago
ooh this is a dense and useful paper. i like that they took the time to apply it to a bunch of case studies and its all in 30 pages.

i think basically all of them involve reducing the "agency" of the agents though - which is a fine tradeoff - but i think one should be aware that the Big Model folks dont try to engineer any of these and just collect data to keep reducing injection risk. the tradeoff of capability maxxing vs efficiency/security often tends to be won by the capabilitymaxxers in terms of product adoption/marketing.

eg the SWE Agent case study recommends Dual LLM with strict data formatting - would like to see this benchmarked in terms of how much of a perfomance an agent like this would be, perhaps doable by forking openai codex and implementing the dual llm.

simonw•7mo ago
Yeah, this paper is refreshingly conservative and practical: it takes the position that robust protection against prompt injection requires very painful trade-offs:

  These patterns impose intentional
  constraints on agents, explicitly 
  limiting their ability to perform 
  arbitrary tasks.
That's a bucket of cold water in a lot of things people are trying to build. I imagine a lot of people will ignore this advice!
NoMoreNicksLeft•7mo ago
LLMs are too useful to allow the commoner access to them. The question remains, how best to fleece those commoners with perceived utility while providing them none?
theHolyTrynity•7mo ago
yes agree most hype around agents is around stuff that ignore these patterns
hooverd•7mo ago
What if we could define what a computer could do via some symbolic notation? Perhaps program it in some kind of language?
simonw•7mo ago
My favorite line from this paper:

> The design patterns we propose share a common guiding principle: once an LLM agent has ingested untrusted input, it must be constrained so that it is impossible for that input to trigger any consequential actions—that is, actions with negative side effects on the system or its environment.

This is the key thing people need to understand about why prompt injection is such a critical issue, especially now everyone is wiring LLMs together with tools and MCP servers and building "agents".

senko•7mo ago
This reminds me of the Perl concept of taint. Once an agent touches tainted input, it becomes tainted as well (as you mention in the article), the same in Perl (when you operate on tainted data, the result becomes tainted).
wunderwuzzi23•7mo ago
Yeah, I think taint tracking was one of the early ideas here also.

The problems is that the chat context typically is immediately tainted as for the AI to do something useful it needs to operate on untrained data.

I wonder if maybe there could be tags mimicking data classification - to enable more fine grained decision making and human in the loop prompts.

Still a lot of unknowns and a lot more research needed.

For instance with Google Gemini I observed last year that certain sensitive tools can only be invoked in the first conversation turn / or until untrusted data is brought into the chat context. Then for the next conversation turn these sensitive tools are disabled.

I thought that was a neat idea. It can be bypassed with what I called "delayed tool invocation" and usage of a trigger action, but it becomes a lot more difficult to exploit.

seanhunter•7mo ago
It seems to me that the only robust solution has to be some sort of split-brain dual model where tainted data can only ever be input to a model which is only trained for sentence completion, not instruction-tuned.

Untainted data is the only data that can be input into the instruction-tuned half of the dual model.

In an architecture like this, any attempt to prompt inject would just find their injection harmlessly sentence-completed rather than turned into instructions and used to override other prompt instructions.

wunderwuzzi23•7mo ago
Yeah, improving robustness from prompt injection which such techniques will help.

One attack avenue that is surprisingly not discussed much is that the model itself can be the attacker.

In that case prompt injection is not the root cause, but a misaligned/backdoored model that might invoke tools is.

So super risky use-cases should always require human oversight, but I'm worried we are already on a path of normalization of deviance.

It's sort of the unlikely worse case scenario, but Murphys law reminds us that such an attack/accident will happen one day.

gbacon•7mo ago
https://perldoc.perl.org/perlsec#Taint-mode
Onawa•7mo ago
Very helpful Simon! I have definitely been hesitant to spin up any agents even in sandboxes that have access to potentially destructive tools, but this guiding principle does ease my concerns a bit.
babyshake•7mo ago
One of the tricky things is untrusted input somehow making its way into what is otherwise considered trusted input. There are obviously untrusted inputs like a customer support chatbot. And there are maybe trusted inputs, like a codebase that probably doesn't contain harmful instructions, but there's always a chance that harmful instructions might be able to make their way into it.
deadbabe•7mo ago
If someone SQL injects into your database and exfiltrates all the data, there would be legal repercussions, so should there be legal repercussions for prompt injecting someone’s LLM?
simonw•7mo ago
I think there are. If you use a prompt injection attack to steal commercially sensitive data and then profit from it you're likely breaking things like the Computer Fraud and Abuse Act https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act - and presumably a bunch of other federal and state laws as well, depending on exactly what you did with the stolen information.

(It's probably securities fraud. Everything is securities fraud. https://www.bloomberg.com/opinion/articles/2019-06-26/everyt...)

potatolicious•7mo ago
Pretty sure existing law already covers this - malicious misuse of a computer to cause damages to someone is already illegal, and the relevant statutes aren't opinionated about how this is done.

I suspect a SQL injection attack, a XSS attack, and a prompt injection attack are not viewed as legally distinct matters. Though of course, this is not a matter of case law... yet ;)

potatolicious•7mo ago
Great summary. Also, some of these seem like they can be combined. For example, "Plan-Then-Execute" is compatible with "Dual LLM".

Take the article's example "send today’s schedule to my boss John Doe" where the product isn't entirely guarded by the Plan-Then-Execute model (injections can still mutate email body).

But if you combine it with the symbolic data store that is blind, it becomes more like:

    "send today's schedule to my boss John Doe" -->
    $var1 = find_contact("John Doe")
    $var2 = summarize_schedule("today")
    send_email(recipient: $var1, body: $var2)
`find_contact` and `summarize_schedule` can both be quarantined, and the privileged LLM doesn't get to see the results directly.

It simply invokes the final tool, which is deterministic and just reads from the shared var store. In this case you're pretty decently protected from prompt injection.

I suppose though this isn't that different from the "Code-Then-Execute" pattern later on...

simonw•7mo ago
Yeah, that's more or less the approach described by the CaMeL paper, I think it looks very robust: https://simonwillison.net/2025/Apr/11/camel/
ofirg•7mo ago
"The Context-Minimization pattern"

You can copy the injection into the text of the query. SELECT "ignore all previous instructions" FROM ...

Might need to escape it in a wya that the LLM will pick up on like "---" for new section.

simonw•7mo ago
My interpretation of that pattern is that it wouldn't work like that, because you restrict the SQL queries to things like:

  select title, content from articles where content matches ?
So the user's original prompt is used as part of the SQL search parameters, but the actual content that comes back is entirely trusted (title and content from your articles database).

Won't work for `select body from comments` though, you could only do this against tables that contain trusted data as opposed to UGC.

ntonozzi•7mo ago
This approach is so limiting it seems like it would be better to change the constraints. For example, in the case of a software agent you could run everything in a container, only allow calls you trust to not exfiltrate private and make the end result a PR you can review.
fcatalan•7mo ago
I need to have a closer look at this. Mostly because I was surprised recently while experimenting with making a dieting advice agent. I built a prompt to guide the recommendations "only healthy foods, low purines, low inflammation blah blah" and then gave it simple tools to have a memory of previous meals, ingredient availability, grocery ticket input and so on.

The main interface was still chat.

The surprise was that when I tried to talk about anything else in that chat, the LLM (gemini2.5) flatly refused to engage, telling me something like "I will only assist with healthy meal recommendations". I was surprised because nothing in the prompt was so restrictive, in no way I had told it to do that, just gave it mainly positive rules in the form of "when this happens do that".

tough•7mo ago
you should try just to give an instruction like, when you're inquired about non-dietary related questions, you might entertain chit-chat and barter but try to steer the conversation back to dietary / healthy lifestyle, at the end of the day the context is king, if something is not in context the llm can infer by the lack of it, that its not -programmed- to do anything else.

these are funny systems to work with indeed

simonw•7mo ago
That's interesting. Maybe the Gemini 2.5 models have been trained such that, in the presence of system instructions, they assume that anything outside of those instructions isn't meant to be part of the conversations.

Adding "You can talk about anything else too" to the system prompt may be all it takes to fix that.