The problem: Most agent frameworks treat memory as flat storage. Store a key, get a value. That's not how useful memory works.
What Supe provides:
1. Neural Memory - Hebbian learning ("fire together, wire together"). Cards connected by synaptic links that strengthen with co-activation and decay with disuse. Spreading activation for recall. Hubs emerge naturally.
2. Validation Gates - Python functions that run before/after tool executions. Block `rm -rf`, enforce read-only mode, whitelist commands. Code, not configuration.
3. Proof-of-Work - SHA-256 hashes chain every execution. Tamper with the logs and the proofs won't verify (sketch after the gate example).
4. Cognitive Hierarchy - Moments (sessions) → Cards (knowledge units) → Buffers (raw data). Not flat.
5. Semantic Relations - 7 typed connections: CAUSES, IMPLIES, CONTRADICTS, SUPPORTS, DEPENDS_ON, EQUALS, TRANSFORMS (sketch below).
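A typed connection in use looks something like this (illustrative; the exact call signature may differ):

neural.connect(1, 2, relation="DEPENDS_ON")  # card 2 depends on card 1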
Example gate:
@agent.register_gate("safe")
def safe(record, phase) -> GateResult:
    # Runs before/after every tool call; block destructive shell commands.
    if "rm -rf" in record.tool_input.get("command", ""):
        return GateResult("safe", False, "BLOCKED")
    return GateResult("safe", True, "OK")
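The hash-chain idea behind the proofs, in miniature (an illustrative sketch, not the actual implementation; record shapes and names are simplified):

import hashlib, json

def chain_hash(prev_hash, record):
    # Each proof commits to the previous proof plus this execution record.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(log, proofs):
    # Recompute from genesis; editing any record breaks every later proof.
    prev = "0" * 64
    for record, proof in zip(log, proofs):
        prev = chain_hash(prev, record)
        if prev != proof:
            return False
    return True

log = [{"tool": "bash", "command": "ls"}, {"tool": "bash", "command": "cat notes.md"}]
proofs, prev = [], "0" * 64
for record in log:
    prev = chain_hash(prev, record)
    proofs.append(prev)

assert verify(log, proofs)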
Example neural recall:

neural.add_card(1, {"title": "OAuth"})
neural.add_card(2, {"title": "Login"})
neural.connect(1, 2) # Hebbian learning
results = neural.recall("authentication") # Spreading activation
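And the Hebbian mechanics in miniature (a toy sketch, not the real internals; the learning rate and decay constants are arbitrary):

weights = {}  # (card_a, card_b) -> synaptic strength in [0, 1)

def co_activate(a, b, lr=0.1):
    # Fire together, wire together: co-activation strengthens the link,
    # with diminishing returns as strength approaches 1.
    key = (min(a, b), max(a, b))
    w = weights.get(key, 0.0)
    weights[key] = w + lr * (1.0 - w)

def decay(rate=0.01):
    # Disuse weakens every link a little on each tick.
    for key in weights:
        weights[key] *= 1.0 - rate

def spread(seed, hops=2, threshold=0.05):
    # Spreading activation: energy flows outward from the seed card across
    # strong links; weak links fall below the threshold and stop spreading.
    active, frontier = {seed: 1.0}, {seed}
    for _ in range(hops):
        nxt = set()
        for (a, b), w in weights.items():
            for src, dst in ((a, b), (b, a)):
                if src in frontier:
                    signal = active[src] * w
                    if signal > threshold and signal > active.get(dst, 0.0):
                        active[dst] = signal
                        nxt.add(dst)
        frontier = nxt
    return active  # card id -> activation level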
343 tests. MIT license. Works with the Claude SDK.

pip install supe
ldc0618•1h ago
How do you handle cases where the "brain" makes decisions that need to be auditable or reversible?