frontpage.

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•5m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•5m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•8m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•10m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•21m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•21m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•26m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•30m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•31m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•33m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•34m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•37m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•48m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•54m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
2•cwwc•58m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold That Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Ask HN: How are you handling non-probabilistic security for LLM agents?

2•amjadfatmi1•2w ago
I've been experimenting with autonomous agents that have shell and database access. The standard approach seems to be "put safety guardrails in the system prompt", but honestly that feels like a house of cards: if a model is stochastic, its adherence to security instructions is also stochastic.

I'm looking into building a hard "Action Authorization Boundary" (AAB) that sits outside the agent's context window entirely. The idea is to intercept the tool call, normalize it into a canonical intent, and check it against a deterministic YAML policy before execution.
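
Roughly the shape I have in mind, as a toy sketch (the tool names and policy schema here are made up for illustration, not a spec):

    # Sketch of an Action Authorization Boundary: a deterministic gate that
    # sits outside the model loop and checks every tool call against a
    # static YAML policy before execution.
    import fnmatch
    import yaml  # pip install pyyaml

    POLICY = yaml.safe_load("""
    rules:
      - tool: shell
        allow_commands: ["ls", "cat"]          # anything else is denied
      - tool: http
        allow_urls: ["https://internal.example/*"]
    default: deny
    """)

    def authorize(tool_call: dict) -> bool:
        """Deterministic allow/deny for an already-normalized tool call."""
        for rule in POLICY["rules"]:
            if rule["tool"] != tool_call["tool"]:
                continue
            if tool_call["tool"] == "shell":
                return tool_call["argv"][0] in rule["allow_commands"]
            if tool_call["tool"] == "http":
                return any(fnmatch.fnmatch(tool_call["url"], pat)
                           for pat in rule["allow_urls"])
        return POLICY["default"] == "allow"

    # The agent never sees this code path; it runs between the LLM's
    # tool call and the actual executor.
    assert authorize({"tool": "shell", "argv": ["ls", "-la"]})
    assert not authorize({"tool": "shell", "argv": ["rm", "-rf", "/"]})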

A few questions for those building in this space:

Canonicalization: How do you handle the messiness of LLM tool outputs? If the representation isn't perfectly canonical, bypassing the policy seems trivial.

Stateful Intent: How do you handle sequences that are individually safe but collectively risky? For example, an agent reading a sensitive DB (safe) and then making a POST request to an external API (dangerous exfiltration). (Rough sketch of what I mean after these questions.)

Latency: Does moving the "gate" outside the model-loop add too much overhead for real-time agentic workflows?
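
For the stateful-intent question, the crudest mechanism I can picture is taint tracking at the gate: once sensitive data has entered the session, the policy for outbound actions changes. A toy sketch with made-up action labels:

    # Individually-safe actions get re-evaluated against what the session
    # has already touched. "sensitive" / "external" labels are illustrative.
    class SessionGate:
        def __init__(self):
            self.tainted = False  # flips once sensitive data enters context

        def check(self, action: dict) -> bool:
            if action["kind"] == "db_read" and action.get("sensitive"):
                self.tainted = True
                return True                  # the read itself is fine
            if action["kind"] == "http_post" and action.get("external"):
                return not self.tainted     # block read-then-exfiltrate
            return True

    gate = SessionGate()
    assert gate.check({"kind": "db_read", "sensitive": True})
    assert not gate.check({"kind": "http_post", "external": True})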

I've been working on a CAR (Canonical Action Representation) spec to solve this, but I'm curious whether I'm overthinking it, or whether there's an existing "firewall for agents" standard I'm missing.

Comments

yaront111•2w ago
I just built Cordum.io .. it should give you 100% deterministic security, open sourced and free :)
amjadfatmi1•2w ago
Hey @yaront111, Cordum looks like a solid piece of infrastructure, especially the Safety Kernel and the NATS-based dispatch.

My focus with Faramesh.dev is slightly upstream from the scheduler. I’m obsessed with the Canonicalization problem. Most schedulers take a JSON payload and check a policy, but LLMs often produce semantic tool calls that are messy or obfuscated.

I’m building CAR (Canonical Action Representation) to ensure that no matter how the LLM phrases the intent, the hash is identical. Are you guys handling the normalization of LLM outputs inside the Safety Kernel, or do you expect the agent to send perfectly formatted JSON every time?
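
To make the property concrete: what CAR has to guarantee is that two differently-phrased calls with the same meaning hash identically. A toy illustration of the shape of the idea (this is not the actual CAR encoding):

    # Sorted keys + fixed separators give a deterministic byte string,
    # so the policy sees one identity instead of two phrasings.
    import hashlib
    import json

    def canonical_hash(tool: str, args: dict) -> str:
        blob = json.dumps({"tool": tool, "args": args},
                          sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(blob.encode()).hexdigest()

    # Same intent, different key order from the model:
    a = canonical_hash("fs.delete", {"path": "/tmp/x", "recursive": True})
    b = canonical_hash("fs.delete", {"recursive": True, "path": "/tmp/x"})
    assert a == b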

yaront111•2w ago
That's a sharp observation. You're partially right: CAP (our protocol) handles the structural canonicalization. We use strict Protobuf/Schematic definitions, so if an agent sends a messy JSON payload that doesn't fit the schema, it's rejected at the gateway. We don't deal with 'raw text' tool calls in the backend.

But you are touching on the semantic aliasing problem (e.g. rm -rf vs rm -r -f), which is a layer deeper. Right now we rely on the specific Worker to normalize those arguments before they hit the policy check, but having a universal 'Canonical Action Representation' upstream would be cleaner.

If you can turn 'messy intent' into a 'deterministic hash' before it hits the Cordum Scheduler, that would be a killer combo. Do you have a repo/docs for CAR yet?
amjadfatmi1•2w ago
Spot on, Yaron. Schematic validation (Protobuf) catches structural errors, but semantic aliasing (the 'rm -rf' vs 'rm -r -f' problem) is exactly why I developed the CAR (Canonical Action Representation) spec.
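
Concretely, the normalizer has to agree that both spellings are one action. A toy version that handles short shell flags only (real commands need per-tool grammars; this is just the idea):

    # Expand combined short flags and sort them, so 'rm -rf x' and
    # 'rm -r -f x' canonicalize identically.
    def normalize_argv(argv: list[str]) -> list[str]:
        cmd, flags, args = argv[0], [], []
        for tok in argv[1:]:
            if tok.startswith("-") and not tok.startswith("--") and len(tok) > 2:
                flags += [f"-{c}" for c in tok[1:]]   # -rf -> -r -f
            elif tok.startswith("-"):
                flags.append(tok)
            else:
                args.append(tok)
        return [cmd, *sorted(flags), *args]

    assert normalize_argv(["rm", "-rf", "/tmp/x"]) == \
           normalize_argv(["rm", "-r", "-f", "/tmp/x"])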

I actually published a 40-page paper (DOI: 10.5281/zenodo.18296731) that defines this exact 'Action Authorization Boundary.' It treats the LLM as an untrusted actor and enforces determinism at the execution gate.

Faramesh Core is the reference implementation of that paper. I'd love for you to check out the 'Execution Gate Flow' section; it would be a massive win to see a Faramesh-Cordum bridge that brings this level of semantic security to your orchestrator.

Code: https://github.com/faramesh/faramesh-core

kxbnb•1w ago
Your framing of the problem resonates - treating the LLM as untrusted is the right starting point. The CAR spec sounds similar to what we're building at keypost.ai.

On canonicalization: we found that intercepting at the tool/API boundary (rather than parsing free-form output) sidesteps most aliasing issues. The MCP protocol helps here - structured tool calls are easier to normalize than arbitrary text.

On stateful intent: this is harder. We're experimenting with session-scoped budgets (max N reads before requiring elevated approval) rather than trying to detect "bad sequences" semantically. Explicit resource limits beat heuristics.
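
Roughly like this (numbers and names made up):

    # Explicit resource limits instead of semantic "bad sequence" detection.
    class SessionBudget:
        def __init__(self, max_sensitive_reads: int = 50):
            self.reads = 0
            self.max = max_sensitive_reads

        def allow_sensitive_read(self) -> bool:
            self.reads += 1
            # Past the budget, the caller must route to human approval.
            return self.reads <= self.max

    budget = SessionBudget(max_sensitive_reads=2)
    assert budget.allow_sensitive_read()
    assert budget.allow_sensitive_read()
    assert not budget.allow_sensitive_read()  # third read needs elevation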

On latency: sub-10ms is achievable for policy checks if you keep rules declarative and avoid LLM-in-the-loop validation. YAML policies with pattern matching scale well.

Curious about your CAR spec - are you treating it as a normalization layer before policy evaluation, or as the policy language itself?

niyikiza•1w ago
Working on this problem: https://github.com/tenuo-ai/tenuo

Different angle than policy-as-YAML. We use cryptographic capability tokens (warrants) that travel with the request. The human signs a scoped, time-bound authorization. The tool validates the warrant at execution, not a central policy engine.

On your questions:

Canonicalization: The warrant specifies allowed capabilities and constraints (e.g., path: /data/reports/*). The tool checks if the action fits the constraint. No need to normalize LLM output into a canonical representation.

Stateful intent: Warrants attenuate. Authority only shrinks through delegation. You can't escalate from "read DB" to "POST external" unless the original warrant allowed both. A sub-agent can only receive a subset of what its parent had, cryptographically enforced (sketch after these answers).

Latency: Stateless verification, ~27μs. No control plane calls. The warrant is self-contained: scope, constraints, expiry, holder binding, signature chain. Verification is local.
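
A stripped-down sketch of attenuate-and-verify (HMAC with a shared key stands in for the real asymmetric signature chain, and constraints are reduced to path globs, purely to keep it self-contained):

    import fnmatch
    import hashlib
    import hmac
    import json
    import time

    KEY = b"issuer-secret"  # stand-in for the issuer's signing key

    def sign(warrant: dict) -> dict:
        body = json.dumps({k: v for k, v in warrant.items() if k != "sig"},
                          sort_keys=True).encode()
        return {**warrant,
                "sig": hmac.new(KEY, body, hashlib.sha256).hexdigest()}

    def attenuate(parent: dict, paths: list[str], ttl: float) -> dict:
        # Scope can only shrink; expiry can only shorten. (Glob-vs-glob
        # subset checking is approximate here.)
        assert all(any(fnmatch.fnmatch(p, q) for q in parent["paths"])
                   for p in paths)
        return sign({"paths": paths,
                     "exp": min(parent["exp"], time.time() + ttl)})

    def verify(warrant: dict, path: str) -> bool:
        # Local and stateless: recompute the MAC, check scope and expiry.
        body = json.dumps({k: v for k, v in warrant.items() if k != "sig"},
                          sort_keys=True).encode()
        ok = hmac.compare_digest(
            warrant["sig"], hmac.new(KEY, body, hashlib.sha256).hexdigest())
        in_scope = any(fnmatch.fnmatch(path, q) for q in warrant["paths"])
        return ok and in_scope and time.time() < warrant["exp"]

    root = sign({"paths": ["/data/reports/*"], "exp": time.time() + 3600})
    child = attenuate(root, ["/data/reports/q3*"], ttl=60)
    assert verify(child, "/data/reports/q3.csv")
    assert not verify(child, "/etc/passwd")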

The deeper issue with policy engines: they check rules against actions, but they can't verify derivation. When Agent B acts, did its authority actually come from Agent A? Was it attenuated correctly?

Wrote about why capabilities are the only model that survives dynamic delegation: https://niyikiza.com/posts/capability-delegation/