
A good first word for Wordle

https://explainextended.com/2022/01/27/a-good-first-word-for-wordle/
1•7777777phil•21s ago•0 comments

Show HN: Sonar CiteScout – Find the links AI relies on to answer a prompt

https://trysonar.ai/tools/citescout
1•shukantpal•59s ago•0 comments

Show HN: Former Cloudflare SRE building a live map of what's running

1•kennethops•1m ago•0 comments

Flux.2 Klein 4B (Apache 2.0)

https://huggingface.co/black-forest-labs/FLUX.2-klein-4B
8•anjneymidha•2m ago•1 comments

What if the idea of the autism spectrum is completely wrong?

https://www.newscientist.com/article/2509117-what-if-the-idea-of-the-autism-spectrum-is-completel...
1•kristianp•2m ago•0 comments

Regressions on benchmark scores suggest frontier LLMs ~3-5T params

https://aimlbling-about.ninerealmlabs.com/blog/benchmarks-predict-model-size/
1•namnnumbr•3m ago•1 comments

Origami Programming [pdf]

https://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/origami.pdf
1•andsoitis•4m ago•0 comments

Do AI models reason or regurgitate? Why AI is not merely a "stochastic parrot"

https://bigthink.com/the-present/do-ai-models-reason-or-regurgitate/
1•ryan_j_naughton•4m ago•0 comments

Package Manager Glossary

https://nesbitt.io/2026/01/13/package-manager-glossary.html
1•7777777phil•6m ago•0 comments

The industrial waste site that glitters like a glacier

https://www.nationalgeographic.com/environment/article/kishangarh-dumping-yard-marble-india
1•noleary•7m ago•0 comments

Nanolang: A tiny experimental language designed to be targeted by coding LLMs

https://github.com/jordanhubbard/nanolang
2•Scramblejams•8m ago•0 comments

Minnesota wants to win a war of attrition

https://www.theverge.com/policy/863632/minnesota-walz-trump-sousveillance-ice
3•andrewstetsenko•11m ago•0 comments

Tesla, BYD, and Xiaomi Are Playing Different Games

https://gilpignol.substack.com/p/tesla-byd-and-xiaomi-are-playing
1•light_triad•12m ago•0 comments

Nebra Sky Disc: the oldest depiction of astronomical phenomena

https://www.livescience.com/archaeology/nebra-sky-disc-the-worlds-oldest-depiction-of-astronomica...
1•janandonly•13m ago•0 comments

I Improved Claude's MCP-CLI Experimental MCP Fix – 18x speedup on 50 calls

1•AIntelligentTec•14m ago•0 comments

Of donkeys, mules, and horses: data structures for network prefixes in Rust

https://blog.nlnetlabs.nl/donkeys-mules-horses/
1•fanf2•14m ago•0 comments

Gravity from Information Geometry: A Lean 4 Formalization of Emergent Spacetime

https://www.academia.edu/146192044/Gravity_from_Information_Geometry_A_Lean_4_Formalization_of_Em...
1•kristintynski•15m ago•1 comments

Software Is Fine

https://shomik.substack.com/p/software-is-fine
1•mooreds•16m ago•0 comments

Do not give up your brain

https://cassidoo.co/post/good-brain/
1•mooreds•16m ago•0 comments

Use Social Media Mindfully

https://danielleheberling.xyz/blog/mindful-social-media/
2•mooreds•16m ago•0 comments

Tell HN: Deskflow is getting spammed with AI-slop PRs

1•acheong08•16m ago•0 comments

Google Mandiant Delivers the Coup de Grâce to Microsoft's NTLM

https://www.heise.de/en/news/Windows-Networks-Google-Mandiant-Delivers-the-Coup-de-Grace-to-Micro...
2•croes•17m ago•0 comments

Apple's "Protect Mail Activity" Doesn't Work

https://www.grepular.com/Apples_Protect_Mail_Activity_Doesnt_Work
3•mike-cardwell•18m ago•0 comments

Targeted Bets: An alternative approach to the job hunt

https://www.seanmuirhead.com/blog/targeted-bets
1•seany62•21m ago•1 comments

Jazz – The Database That Syncs

https://jazz.tools/
1•aleksjess•21m ago•1 comments

Nanotimestamps: Time-Stamped Data on Nano Block Lattice

https://github.com/SerJaimeLannister/nanotimestamp/wiki
2•Imustaskforhelp•23m ago•0 comments

Starlink users must opt out of all browsing data being used to train xAI models

https://twitter.com/cryps1s/status/2013345999826153943
6•pizza•23m ago•1 comments

Nobody knows how large software products work

https://www.seangoedecke.com/nobody-knows-how-software-products-work/
2•7777777phil•27m ago•0 comments

Moving from Java to Python (2014)

https://yusufaytas.com/moving-from-java-to-python/
3•jatwork•27m ago•0 comments

Show HN: I built a firewall for agents because prompt engineering isn't security

https://github.com/cordum-io/cordum
7•yaront111•2h ago
Hi HN, I’m the creator of Cordum.

I’ve been working in DevOps and infrastructure for years (currently in the fintech/security space), and as I started playing with AI agents, I noticed a scary pattern. Most "safety" mechanisms rely on system prompts ("Please don't do X") or flimsy Python logic inside the agent itself.

If we treat agents as autonomous employees, giving them root access and hoping they listen to instructions felt insane to me. I wanted a way to enforce hard constraints that the LLM cannot override, no matter how "jailbroken" it gets.

So I built Cordum. It’s an open-source "Safety Kernel" that sits between the LLM's intent and the actual execution.

The architecture is designed to be language-agnostic:

1. *Control Plane (Go/NATS/Redis):* Manages the state and policy.

2. *The Protocol (CAP v2):* A wire format that defines jobs, steps, and results.

3. *Workers:* You can write your agent in Python (using Pydantic), Node, or Go, and they all connect to the same safety mesh.
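To make the protocol layer concrete, here is a minimal sketch of what a job envelope traveling between a worker and the control plane might look like. The field names and `JobEnvelope` class are illustrative guesses, not the actual CAP v2 wire format:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict

# Hypothetical sketch of a CAP-style job envelope. Field names are
# illustrative, not the real CAP v2 schema.
@dataclass
class JobEnvelope:
    agent_id: str
    intent: str                      # what the agent wants to do
    params: dict = field(default_factory=dict)
    job_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_wire(self) -> str:
        """Serialize to the JSON that would travel over the message bus."""
        return json.dumps(asdict(self))

job = JobEnvelope(agent_id="billing-bot", intent="transfer_funds",
                  params={"amount_usd": 120})
print(json.loads(job.to_wire())["intent"])  # transfer_funds
```

Because the envelope is plain JSON, a Go or Node worker can produce the same shape without sharing any code with the Python side.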

Key features I focused on:

- *The "Kill Switch":* Ability to revoke an agent's permissions instantly via the message bus, without killing the host server.

- *Audit Logs:* Every intent and action is recorded (critical for when things go wrong).

- *Policy Enforcement:* Blocking actions based on metadata (e.g., "Review required for any transfer > $50") before they reach the worker.
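A metadata-based policy check like the "$50 transfer" rule could be sketched as a pure function over the intent and its parameters. This is a toy illustration under assumed names (`evaluate`, `amount_usd`), not Cordum's actual policy API:

```python
# Toy policy evaluator: "review required for any transfer over $50".
# The function and field names are hypothetical, for illustration only.
def evaluate(intent: str, params: dict) -> dict:
    if intent == "transfer_funds" and params.get("amount_usd", 0) > 50:
        return {"status": "blocked", "reason": "review_required",
                "message": "Transfers over $50 need human approval"}
    return {"status": "allowed"}

assert evaluate("transfer_funds", {"amount_usd": 120})["status"] == "blocked"
assert evaluate("transfer_funds", {"amount_usd": 20})["status"] == "allowed"
```

The point of running this in the kernel rather than in the agent is that the check executes before the worker ever sees the job, so no amount of prompt injection can skip it.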

It’s still early days (v0.x), but I’d love to hear your thoughts on the architecture. Is a separate control plane overkill, or is this where agentic infrastructure is heading?

Repo: https://github.com/cordum-io/cordum

Thanks!

Comments

hackerunewz•1h ago
Nice job, but isn't it a bit overkill?
yaront111•1h ago
It is overkill for a demo. But for my production environment, I need an external safety layer. I can't rely on 'prompt engineering' when real data is at stake.
amadeuswoo•1h ago
Interesting architecture. I'm curious about the workflow when an agent hits a denied action: does it get a structured rejection it can reason about and try an alternative, or does it just fail? Wondering how the feedback loop works between the safety kernel and the LLM's planning.
yaront111•15m ago
Great question. This is actually a core design principle of the Cordum Agent Protocol (CAP).

It’s definitely a *structured rejection*, not a silent fail. Since the LLM needs to "know" it was blocked to adjust its plan, the kernel returns a standard error payload (e.g., `PolicyViolationError`) with context.

The flow looks like this:

1. *Agent:* Sends intent "Delete production DB".

2. *Kernel:* Checks policy -> DENY.

3. *Kernel:* Returns a structured result: `{ "status": "blocked", "reason": "destructive_action_limit", "message": "Deletion requires human approval" }`.

4. *Agent (LLM):* Receives this as an observation.

5. *Agent (Re-planning):* "Oh, I can't delete it. I will generate a Slack message to the admin asking for approval instead."
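On the agent side, consuming that rejection can be as simple as turning the payload into a textual observation for the next planning step. A minimal sketch, using the payload shape from the flow above (the `handle_result` helper is hypothetical):

```python
# Sketch: convert the kernel's structured rejection into an observation
# string that gets fed back into the LLM's planning loop.
def handle_result(result: dict) -> str:
    if result.get("status") == "blocked":
        # Surface the reason so the model can reason about an alternative.
        return (f"Action blocked ({result['reason']}): "
                f"{result['message']}. Re-planning.")
    return "Action succeeded."

obs = handle_result({"status": "blocked",
                     "reason": "destructive_action_limit",
                     "message": "Deletion requires human approval"})
print(obs)
```

The key design choice is that the rejection is machine-readable first and human-readable second, so the same payload can drive both re-planning and the audit log.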

This feedback loop turns safety from a "blocker" into a constraint that the agent can reason around, which is critical for autonomous recovery.