https://github.com/nobulexdev/nobulex/blob/main/docs/crisis-...
"HN Posting Notes
Internal only. Delete before posting.
When posting UNCOVENANTED-AGENT-PROBLEM-HN.md:
Post on Tuesday or Wednesday, 8-9am EST
Title is just: "The Uncovenanted Agent Problem"
Replace [GitHub link] with actual repo URL when live under Kova name
First comment should be from you: brief context on who you are and why you built this
Respond to EVERY comment in the first 6 hours
Don't be defensive. Thank critics. Ask follow-up questions.
If someone finds a real flaw, acknowledge it publicly and say you'll address it
DO NOT mention your age unless directly asked. Let the work speak.
"> DO NOT mention your age unless directly asked. Let the work speak.
I'd agree. Why does the age matter?
Enforcement and verification serve different audiences.
Enforcement protects you: it stops your agent from doing something it shouldn't. Verification protects everyone else: it lets a third party independently confirm that the enforcement actually happened, without trusting you. You say "my agent followed the rules"; the regulator says "prove it." The hash-chained logs and signed covenants are the proof. Without verification, it's just your word.
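To make the "prove it" step concrete, here is a minimal sketch of a hash-chained log and an independent verifier. This is illustrative only, not the Nobulex API; the `Entry` shape and function names are invented, and it uses only Node's built-in crypto.

```typescript
// Illustrative sketch (not the Nobulex implementation): a minimal
// hash-chained audit log plus an independent third-party verifier.
import { createHash } from "node:crypto";

interface Entry { action: string; prevHash: string; hash: string; }

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Each entry's hash commits to the previous entry's hash, forming a chain.
function append(log: Entry[], action: string): void {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  log.push({ action, prevHash, hash: sha256(prevHash + action) });
}

// A verifier re-derives every hash from scratch; editing any past entry
// breaks the chain, so tampering is detectable without trusting the operator.
function verifyChain(log: Entry[]): boolean {
  let prev = "genesis";
  for (const e of log) {
    if (e.prevHash !== prev || e.hash !== sha256(prev + e.action)) return false;
    prev = e.hash;
  }
  return true;
}

const log: Entry[] = [];
append(log, "read:customer-record");
append(log, "send:email");
console.log(verifyChain(log));      // true
log[0].action = "delete:records";   // tamper after the fact
console.log(verifyChain(log));      // false
```

The point is that the verifier needs nothing from the operator except the log itself: the chain is self-authenticating back to its head hash.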
all the kitchen sink stuff makes it pretty intense though. have you considered separating out just the core execution, logging and verification components? stuff like c2pa seems super cool, but maybe a second layer for application type things like that so that the core consensus stuff can be inspected easily? one goal for a system like this is easy auditability of the system itself.
You are right that auditability of the system itself is the goal. It's very hard to trust a trust layer you can't easily inspect. Appreciate you digging deep into the code.
For example, I have a Gmail CLI that just wraps the Gmail API and I specifically give AI certain powers and withhold other abilities. I log every action taken.
Is this a meta framework for this or an NPM package that does something like that?
The difference: your CLI controls one agent on one tool with rules you hardcoded. Nobulex gives you signed, immutable constraints that third parties can verify independently. The logs are hash-chained so nobody (including you) can tamper with them after the fact. And the constraints are cryptographically bound to the agent's identity.
If you are truly the only one who needs to trust your agent, your approach works fine. Nobulex matters when someone else needs to verify what your agent has done: a regulator, a customer, a counterparty.
What's the application here? If you want to enforce that an agent's blockchain transactions follow some deterministic conditions, why not just give it access to a command-line tool (MCP / skill / whatever) that enforces your conditions?
If you want auditing of the agent's blockchain actions to be public, why not just make all your agent's actions go through an ordinary smart contract?
I don't mean to kill your enthusiasm for programming or AI. But this project...I'm sorry, but this project just isn't good. It's an over-engineered, vibe-coded "solution" in search of a problem.
This project is about a month old. I highly doubt one person produced 134 kloc in that time. I'm pretty sure a lot of it is vendored dependencies and AI-generated code that's had minimal human review. Much of the documentation appears to be AI-generated as well.
https://github.com/nobulexdev/nobulex/blob/main/demo/two-par...
Run it: npx tsx demo/two-party-verify.ts
Three steps: the operator creates a covenant, claims compliance, and then a regulator verifies the cryptographic proof without trusting the operator. That is the core of what Nobulex does. Everything else is tooling around this pattern. Appreciate the pushback, as it helped clarify what actually matters.
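The three-step pattern can be sketched in a few lines with Node's built-in Ed25519 support. This is not the demo's actual code; the covenant shape and variable names are invented for illustration.

```typescript
// Illustrative sketch of the two-party pattern: operator signs a covenant,
// regulator verifies the signature without trusting the operator.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// 1. Operator creates a covenant and signs it with their private key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const covenant = JSON.stringify({ agent: "my-agent", rules: ["read-only"] });
const signature = sign(null, Buffer.from(covenant), privateKey);

// 2. Operator claims compliance by publishing the covenant, the signature,
//    and the head hash of the hash-chained action log.
const logHead = createHash("sha256").update("log entries go here").digest("hex");

// 3. Regulator checks the signature against the operator's public key.
//    No trust in the operator is required, only in the key binding.
const ok = verify(null, Buffer.from(covenant), publicKey, signature);
console.log(ok ? "covenant signature valid" : "INVALID");
```

Any edit to the covenant after signing invalidates the signature, which is what lets the regulator's check stand on its own.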
nobulexdev•1h ago
The problem: AI agents are making real decisions in loans, trades, hiring, and diagnostics, with zero cryptographic proof of what they have done or whether they followed any rules. The EU AI Act requires tamper-evident audit trails by August 2026. Nobody has infrastructure for this.
Nobulex is three things:
Agents sign behavioral covenants before they act (cryptographic commitments: "I will not do X")
Middleware enforces those covenants at runtime; violations are blocked before execution
Every action is logged in a hash-chained, Merkle-tree audit trail that anyone can verify independently
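For the Merkle-tree part, here is a minimal sketch of computing a Merkle root over log entries, so that a published root commits to the whole trail. This is a generic textbook construction, not the Nobulex implementation; names are invented.

```typescript
// Illustrative sketch (not the Nobulex code): a Merkle root over audit-log
// entries. Publishing the root commits to every entry; changing any entry
// changes the root.
import { createHash } from "node:crypto";

const h = (s: string) => createHash("sha256").update(s).digest("hex");

function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) return h("");
  let level = leaves.map(h);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node when a level has an odd count.
      next.push(h(level[i] + (level[i + 1] ?? level[i])));
    }
    level = next;
  }
  return level[0];
}

const root = merkleRoot(["read:record", "send:email", "write:report"]);
console.log(root); // publish this; the log can't change without changing it
```

A Merkle tree also allows proving that one specific entry is included without revealing the rest of the log, which matters when the auditor shouldn't see everything.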
The quickstart is three lines:

npm install @nobulex/sdk

const { protect } = require('@nobulex/sdk');
const agent = await protect({ name: 'my-agent', rules: ['no-data-leak', 'read-only'] });
Everything is MIT licensed and on npm under @nobulex/*. Site: https://nobulex.com
Would love feedback on the architecture, the covenant model, or anything else. Happy to answer questions.
mlyle•1h ago
An agent signing a covenant doesn't do anything. You're not going to enforce a contract against it, and there's not some kind of non-repudiation problem to solve.
Enforcing behavioral covenants or boundaries is inherent to how you make things safe. But how do you really do it for anything that matters? How do you make sure that an agent isn't discriminating based on race or other factors?
The whole reason you're using an LLM is because you're doing something either:
A) at very low scale, in which case it's hard to capture sufficient covenants cost-efficiently
or B) with very great complexity, where the behavior you want is hard to encapsulate in code-- in which case meaningful enforcement of the complex covenants that may result is hard.
Indeed, if you could just write code to do it, you'd just write code to do it.
I'm glad you're interested in these issues and playing with them. I'll leave you with one last thought: 134 KSLOC is a bug, not a feature. Some software systems need to be huge, but for software systems that need to be trusted, small, auditable, and understandable to humans (and agents) is the key thing you're looking for. Could you build some kind of small trustable core that solves a simple problem in an understandable way?
nobulexdev•1h ago
mlyle•59m ago
Surely it's just the enforcement, and maybe the measuring of sentinel events -- how far does it wander off course.
How is cryptography an important part of this, given that we're talking about a layer that sits on top of an LLM without an adversary in-between?
I know you mention non-repudiation, but ... there's no kind of real non-repudiation here in this environment.
nobulexdev•53m ago
But, it matters when there are multiple parties. An enterprise deploys an agent that can handle customer data. The customer wants proof the agent has followed the rules. The regulator wants proof that the logs were not just edited after an incident. Without cryptographic signatures and hash chains, the enterprise can just say "trust us." With them, the proof is independently verifiable.
It's the difference between "we followed the rules" and "here's a mathematically verifiable proof that we followed the rules." For internal use, it's overkill. For anything with external accountability, it's the point.