frontpage.


The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•29s ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•3m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•6m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•9m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•10m ago•1 comment

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•18m ago•1 comment

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•18m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•19m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•19m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•21m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•23m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•24m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•25m ago•0 comments

Apple Tried to Tamper-Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•26m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•28m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•28m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•30m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
8•geox•34m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•35m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•38m ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•46m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•49m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•49m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
6•doener•52m ago•1 comment

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•56m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•1h ago•1 comment

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•1h ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•1h ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•1h ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•1h ago•0 comments

Show HN: I built a firewall for agents because prompt engineering isn't security

https://github.com/cordum-io/cordum
7•yaront111•2w ago
Hi HN, I’m the creator of Cordum.

I’ve been working in DevOps and infrastructure for years (currently in the fintech/security space), and as I started playing with AI agents, I noticed a scary pattern. Most "safety" mechanisms rely on system prompts ("Please don't do X") or flimsy Python logic inside the agent itself.

If we treat agents as autonomous employees, giving them root access and hoping they listen to instructions felt insane to me. I wanted a way to enforce hard constraints that the LLM cannot override, no matter how "jailbroken" it gets.

So I built Cordum. It’s an open-source "Safety Kernel" that sits between the LLM's intent and the actual execution.

The architecture is designed to be language-agnostic:

1. *Control Plane (Go/NATS/Redis):* Manages the state and policy.
2. *The Protocol (CAP v2):* A wire format that defines jobs, steps, and results.
3. *Workers:* You can write your agent in Python (using Pydantic), Node, or Go, and they all connect to the same safety mesh (rough sketch below).
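To make that concrete, here's a simplified sketch of what a Python worker's side of CAP looks like. The class and field names are illustrative, not the exact SDK surface:

    # Simplified sketch of a CAP-style worker in Python. These Pydantic
    # models are illustrative, not the actual wire format.
    from pydantic import BaseModel

    class JobStep(BaseModel):
        tool: str    # e.g. "db.query" or "payments.transfer"
        args: dict   # tool arguments, validated before execution

    class Job(BaseModel):
        job_id: str
        agent_id: str
        steps: list[JobStep]

    class StepResult(BaseModel):
        job_id: str
        status: str       # "ok" | "blocked" | "error"
        output: dict = {}

    def handle_job(job: Job) -> StepResult:
        # A worker only ever sees jobs the control plane has already
        # approved; policy decisions happen outside this process.
        for step in job.steps:
            print(f"executing {step.tool} with {step.args}")
        return StepResult(job_id=job.job_id, status="ok")

The point is that the worker stays dumb: it validates and executes, but never evaluates policy itself.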

Key features I focused on:

- *The "Kill Switch":* Ability to revoke an agent's permissions instantly via the message bus, without killing the host server.
- *Audit Logs:* Every intent and action is recorded (critical for when things go wrong).
- *Policy Enforcement:* Blocking actions based on metadata (e.g., "Review required for any transfer > $50") before they reach the worker (toy example below).
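That transfer rule, as a toy sketch (in practice rules live in the control plane's policy config, not hardcoded like this):

    # Toy sketch of a policy check. In Cordum this runs in the control
    # plane, before the intent ever reaches a worker.
    def check_policy(intent: dict) -> dict:
        if intent.get("action") == "payments.transfer" and intent.get("amount", 0) > 50:
            return {
                "status": "blocked",
                "reason": "review_required",
                "message": "Transfers over $50 require human approval",
            }
        return {"status": "allowed"}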

It’s still early days (v0.x), but I’d love to hear your thoughts on the architecture. Is a separate control plane overkill, or is this where agentic infrastructure is heading?

Repo: https://github.com/cordum-io/cordum

Thanks!

Comments

hackerunewz•2w ago
Nice job, but isn't it a bit overkill?
yaront111•2w ago
It is overkill for a demo. But for my production environment, I need an external safety layer. I can't rely on 'prompt engineering' when real data is at stake.
amadeuswoo•2w ago
Interesting architecture. I'm curious about the workflow when an agent hits a denied action: does it get a structured rejection it can reason about and try an alternative, or does it just fail? Wondering how the feedback loop works between the safety kernel and the LLM's planning.
yaront111•2w ago
Great question. This is actually a core design principle of the Cordum Agent Protocol (CAP).

It’s definitely a *structured rejection*, not a silent fail. Since the LLM needs to "know" it was blocked to adjust its plan, the kernel returns a standard error payload (e.g., `PolicyViolationError`) with context.

The flow looks like this:

1. *Agent:* Sends intent "Delete production DB".
2. *Kernel:* Checks policy -> DENY.
3. *Kernel:* Returns a structured result: `{ "status": "blocked", "reason": "destructive_action_limit", "message": "Deletion requires human approval" }`.
4. *Agent (LLM):* Receives this as an observation.
5. *Agent (Re-planning):* "Oh, I can't delete it. I will generate a Slack message to the admin asking for approval instead."
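In pseudo-Python (`kernel` and `llm` are stand-ins for whatever client and LLM wrapper you're using, not actual Cordum SDK objects), the agent side of that loop is roughly:

    # Sketch of the agent-side loop. `kernel` and `llm` are stand-in
    # collaborators, not the real SDK surface.
    def execute_with_replan(kernel, llm, intent: dict) -> dict:
        result = kernel.submit(intent)  # returns the structured payload above
        if result["status"] == "blocked":
            # Feed the rejection back as an observation instead of raising,
            # so the LLM can re-plan around the constraint.
            observation = (
                f"Action blocked ({result['reason']}): {result['message']}. "
                "Propose an alternative."
            )
            return llm.replan(intent, observation)
        return result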

This feedback loop turns safety from a "blocker" into a constraint that the agent can reason around, which is critical for autonomous recovery.

exordex•2w ago
I built formal testing for AI agents. It runs on the CLI, has a free version launching soon, and includes MCP security tests and chaos engineering features: https://exordex.com/waitlist
yaront111•2w ago
Exordex is a great tool for the CI/CD pipeline to test agents. Cordum is the Runtime Kernel that enforces those policies in production. Ideally? You use Exordex to test that your agent works, and Cordum to guarantee it stays safe.
TeamCommet1•2w ago
Regarding the separate control plane: I don't think it's overkill if you're aiming for multi-agent orchestration. A safety mesh needs to be centralized to maintain a global state of permissions. If you bake the safety logic into each worker, you end up with the same "flimsy logic" problem you're trying to solve.

Curious: how are you handling latency in the CAP v2 protocol when the control plane has to intercept every intent before execution?