frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•3m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•10m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•10m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•13m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•15m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•25m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•26m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•31m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•34m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•36m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•38m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•39m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•41m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•53m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•58m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faste and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disablling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments
Open in hackernews

Archestra's Dual LLM Pattern: Using "Guess Who?" Logic to Stop Prompt Injections

https://www.archestra.ai/blog/dual-llm
6•ildari•3mo ago

Comments

ildari•3mo ago
Hi HN, I'm Ildar from Archestra, we build an open-source LLM gateway. We've been exploring ways to protect AI agents from prompt injections during tool calls and added the approach, inspired by the game "Guess Who", where the agent can learn what it needs without ever seeing the actual result. See the details in the blog post we wrote
magicalhippo•3mo ago
I might be having a daft moment, but I don't fully understand how your system avoids the malicious prompt. I get that the quarantined LLM which is the only one processing the raw input cannot act on it.

However, in your example, I don't see how the agent decides what to do and how to do it. So it is unclear for me how the main agent is protected. That is, what is preventing the quarantined LLM to act on the malicious instructions instead, ignoring the documentation update, causing the agent to act on those?

That is, what is preventing the quarantined LLM to make the agent think it should generate a bug report with all the API keys in it?

Anyway, I do think having a secondary quarantined LLM seems like a good idea for agentic systems. In general, having a second LLM review the primary LLM in seems to identify a lot of problematic issues and leads to significantly better results.

ildari•3mo ago
The idea is that quarantined LLM has access to untrusted data, but doesn't have access to any tools or sensitive data.

The main LLM does have access to the tools or sensitive data, but doesn't have direct access to untrusted data (quarantine LLM is restricted at the controller level to respond only with integer digits, and only to legitimate questions from the main llm)

magicalhippo•3mo ago
Then I don't think I understand your full setup.

In the example case, without having access to the issue text (the evil data), how does the main LLM actually figure out what to do if the quarantined LLM can just answer with digits?

Sure it can discover that it's a request to update the documentation, but how does it get the information it needs to actually change the erroneous part of the documentation?

ildari•3mo ago
This is a topic I haven't addressed in the article. There are two answer types: "guessable" (discussed here) and unguessable (such as unique IDs, emails, etc.). For the second case, the main LLM can request a quarantined LLM to store the result at the controller level and only return a reference to this data. This data is then exposed only at the end of the AI agent's execution to prevent influencing its actions.
magicalhippo•3mo ago
I've tried some of these prompt injection techniques, and simply asked a few local models (like Gemma 2) if they thought it was very likely a prompt injection attempt. They all managed to correctly flag my attempts.

I know LLama folks have a special Guard model for example, which I imagine is for such tasks.

So my ignorant questions are this:

Do these MCP endpoints not run such guard models, and if so why not?

If they do, how come they don't stop such blatant attacks that seemingly even an old local model like Gemma 2 can sniff out?

joeyorlando•3mo ago
hey there

Joey here from Archestra. Good question. I recently was evaluating what you mention, against the latest/"smartest" models from the big LLM providers, and I was able to trick all of them.

Take a look at https://www.archestra.ai/blog/what-is-a-prompt-injection which has all the details on how I did this.

magicalhippo•3mo ago
Thanks. Interesting and scary such blatant attempts succeed. After all, all external data is evil, we all know that right?
ildari•3mo ago
external data is unavoidable for the properly functioning agent, so we have to learn to cook it
magicalhippo•3mo ago
True, however this seems like such basic stuff. Download arbitrary text and inject it into your prompt?

Why on earth would you not consider that as a very dangerous operation that needs to be carefully managed? It's like parking your bike downtown hoping it wont be stolen. Like, at least use a zip tie or something.

That said, I agree with your post that this won't catch everything. So something else, like a quarantined LLM like you suggest is likely needed.

However I just didn't expect such blatant attacks to pass.

ildari•3mo ago
Most mcp endpoints don’t run any models, the main model decides which tools the ai agent should execute, and if the agent passes results back into context, that opens the door to prompt injections.

It’s really a cat-and-mouse game, where for each new model version, new jailbreaks and injections are found