The patents break into three domains:
Hardware enforcement (4 PPAs, 33 claims): A dedicated safety processor on its own power rail controls whether the AI compute receives electricity. The AI boots only after the safety processor completes its self-test. During operation, the safety processor monitors AI-specific indicators and can physically cut power, with no software involvement. The AI cannot prevent its own shutdown. It's the same Safe Torque Off principle industrial motor controllers have used for decades, applied to AI compute.
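To make the sequencing concrete, here's a minimal sketch of that gate logic in Python. This is my own illustration, not code from the patents: the class name, states, and `indicator_limit` threshold are all hypothetical, and in real hardware this logic would run on the independent safety processor driving a physical power gate rather than flipping a state variable.

```python
# Hypothetical sketch of the hardware-enforcement sequencing:
# self-test gates power-up, and the monitor loop can cut power
# with no call path for the monitored AI workload to veto it.
from enum import Enum, auto

class GateState(Enum):
    POWERED_OFF = auto()   # AI compute rail is unpowered
    SELF_TEST = auto()     # safety processor checks itself first
    POWERED_ON = auto()    # AI compute receives power

class SafetyGate:
    """Models the decision logic of a dedicated safety processor."""

    def __init__(self, indicator_limit: float):
        self.state = GateState.POWERED_OFF
        self.indicator_limit = indicator_limit  # threshold (illustrative)

    def boot(self, self_test_passed: bool) -> GateState:
        # The AI boots only after the safety self-test succeeds.
        self.state = GateState.SELF_TEST
        self.state = (GateState.POWERED_ON if self_test_passed
                      else GateState.POWERED_OFF)
        return self.state

    def monitor(self, indicator: float) -> GateState:
        # During operation: if an AI-specific indicator exceeds its
        # limit, cut power. Nothing the AI runs can reach this path.
        if self.state is GateState.POWERED_ON and indicator > self.indicator_limit:
            self.state = GateState.POWERED_OFF
        return self.state

gate = SafetyGate(indicator_limit=0.9)
gate.boot(self_test_passed=True)   # rail comes up after self-test
gate.monitor(indicator=0.95)       # limit exceeded: power is cut
```

The point of the sketch is the one-way dependency: the AI's state never appears as an input that can keep the gate closed.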
Software governance (9 PPAs, 72 claims): Multi-vendor consensus engine where up to 9 AI models must agree before physical action. Transparent reasoning verification. Authority enforcement and drift monitoring. Human-readable audit trails in plain-text Markdown — every decision readable by a human without special tools. Persistent AI memory that survives reboots. Real-time safety micro-agents that monitor the AI's own cognitive state.
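The consensus idea can be sketched in a few lines. Again, this is my illustration, not the patented implementation: the function name, the `quorum` parameter, and the `REFUSE` fallback are assumptions about how a quorum vote over independently sourced models might be wired up.

```python
# Hypothetical sketch of multi-vendor consensus: each model votes on a
# proposed physical action, and the action proceeds only if a quorum
# agrees; anything short of quorum falls back to refusal.
from collections import Counter

def consensus_decision(votes: list[str], quorum: int) -> str:
    """Return the majority action only if at least `quorum` voters
    agree on it; otherwise refuse (the safe default)."""
    if not votes:
        return "REFUSE"
    action, count = Counter(votes).most_common(1)[0]
    return action if count >= quorum else "REFUSE"

# e.g. 9 models vote, and 6-of-9 agreement is required before acting
votes = ["MOVE", "MOVE", "MOVE", "MOVE", "MOVE",
         "MOVE", "HOLD", "MOVE", "HOLD"]
decision = consensus_decision(votes, quorum=6)
```

Using models from different vendors matters here: a quorum only adds safety if the voters are unlikely to share the same failure mode.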
Financial architecture (2 PPAs, 29 claims): Tokenized equity and value distribution for the business model side.
The hardware spec defines three standardized form factors (drone-sized to humanoid-sized) with a universal connector — any compliant brain plugs into any compliant robot.
The EU AI Act goes fully into effect August 2, 2026. Articles 12-14 require auditable decision records, transparent operation, and human oversight for high-risk AI systems. Fines up to 7% of global annual revenue. Most robot AI today stores decisions in vector embeddings no human can read. This architecture addresses that directly.
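For a sense of what "every decision readable by a human without special tools" could mean in practice, here is a hypothetical plain-text Markdown audit record. The field names and layout are illustrative only, not taken from the spec:

```markdown
## Decision 2026-08-02T14:31:07Z (warehouse-bot-12)
- Proposed action: move pallet from bay A3 to bay C1
- Consensus: 7 of 9 models approved; 2 dissented (obstacle uncertainty)
- Authority check: operator-issued task, within granted scope
- Outcome: APPROVED, executed at 14:31:09Z
```

A record like this can be grepped, diffed, and read by an auditor, which is the contrast being drawn with vector embeddings.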
I built all of this working alongside AI, using an open-source context management system I created that gives AI assistants persistent memory across sessions. The tool that solved AI's memory problem became the tool that let me design a full-stack safety architecture in 13 days.
Open-source memory tool: https://github.com/RobSB2/CxMS
Website: https://opencxms.org
Happy to answer questions about the hardware spec, the software governance architecture, or what it's like filing 15 patents in 13 days as a solo inventor using AI.
opencxms•1h ago
I'm an enterprise IT consultant... 25+ years of infrastructure, not a robotics engineer. Last fall I started using Claude for a client project and hit the same wall everyone hits... the AI forgets everything between sessions. No memory. So I built a tool to fix that. Open source, plain-text Markdown files, persistent across sessions. That's CxMS.
While I was building it I kept thinking... what happens when these models move from chatbots to physical robots? The memory problem goes from annoying to dangerous. A warehouse robot that forgets the floor layout after a reboot? That's not a bug, that's a safety incident.
Then I started looking at how AI safety actually works right now. It's all software watching software. The AI generates something, another piece of software checks it, and if they disagree... it's software all the way down. There's no layer the AI can't reach.
I spent over 25 years watching companies build governance frameworks that only work when everyone follows the rules. Firewalls, compliance checklists, access controls... all bypassable by the thing they're supposed to control. The AI safety field is repeating the same pattern.
So I designed a hardware layer using the same Safe Torque Off principle that industrial motor controllers have used for decades... except applied to AI compute instead of motors. The AI can't prevent its own shutdown because there's no software pathway to the power gate.
But hardware alone isn't enough either. You need software that decides WHEN to act... consensus engines, authority validation, drift monitoring, audit trails. That's where the 9 software patents came from. The hardware enforces what the software decides. Neither one works without the other.
I filed everything as provisionals, working alongside AI, in 13 days. The memory tool I built to solve AI's context problem is what made it possible to keep a coherent design across 120+ sessions.
The open-source memory tool: https://github.com/RobSB2/CxMS