The core idea: a dedicated safety processor on its own independent power rail has physical control over whether AI processors receive electricity. When the system powers on, only the safety processor boots. AI gets zero power until safety completes self-test. During operation, the safety processor monitors AI-specific indicators — context exhaustion, inference latency, consensus failures — and can physically cut power without any software involvement.
The AI cannot prevent its own shutdown. It's the same Safe Torque Off (STO) principle industrial motor controllers have used for decades, applied here to AI compute for the first time.
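To make the boot-order guarantee concrete, here is a minimal Python simulation of the idea: the AI rail starts de-energized, only a passing self-test enables it, and the safety processor's monitoring check can disable it with no cooperation from the AI side. The class names, thresholds, and monitored signals are illustrative assumptions, not details from the SASM spec.

```python
from dataclasses import dataclass

@dataclass
class PowerRail:
    """Simulated MOSFET-gated power rail feeding the AI processors."""
    enabled: bool = False

class SafetyProcessor:
    """Sketch of the boot order: AI gets power only after self-test passes."""

    def __init__(self, rail: PowerRail):
        self.rail = rail

    def self_test(self) -> bool:
        # Real hardware would verify firmware CRC, watchdog, sensor buses, etc.
        return True

    def boot(self) -> None:
        # The only code path that energizes AI compute.
        if self.self_test():
            self.rail.enabled = True

    def check(self, context_used: float, latency_ms: float) -> None:
        # Hypothetical trip points; the spec's actual limits are not public.
        if context_used > 0.95 or latency_ms > 500:
            self.rail.enabled = False  # physical cut, no software handshake

rail = PowerRail()
sp = SafetyProcessor(rail)
sp.boot()
assert rail.enabled                          # AI powered only after self-test
sp.check(context_used=0.99, latency_ms=100)  # context exhaustion detected
assert not rail.enabled                      # rail dropped by safety side
```

The key property the sketch preserves is that nothing in the "AI" domain holds a reference to the rail: only the safety processor can flip it.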
The spec (SASM — Standardized Autonomous Safety Module) also defines:
- Three standardized form factors (drone-sized to humanoid-sized)
- Universal connector (any compliant brain fits any compliant robot)
- Multi-vendor AI consensus (up to 9 different AI models must agree before any physical action is taken)
- Human-readable audit trail (every decision in plain-text Markdown files)
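The multi-vendor consensus rule above can be sketched as a simple vote tally. This is my own illustrative reading, assuming a unanimous quorum and a minimum number of responding models; the spec's actual quorum rules, model counts, and tie-breaking are not stated here.

```python
from collections import Counter

def consensus_action(votes: dict, min_models: int = 3, quorum: float = 1.0):
    """Hypothetical multi-vendor consensus gate.

    `votes` maps model name -> proposed action. Returns the agreed action,
    or None if too few models responded or agreement falls below quorum.
    """
    if len(votes) < min_models:
        return None  # not enough independent opinions to act
    tally = Counter(votes.values())
    action, count = tally.most_common(1)[0]
    return action if count / len(votes) >= quorum else None

# Three vendors agree -> the physical action is permitted
print(consensus_action({"a": "stop", "b": "stop", "c": "stop"}))  # prints "stop"
# One dissent under a unanimous quorum -> no physical action
print(consensus_action({"a": "move", "b": "move", "c": "stop"}))  # prints "None"
```

A unanimous quorum is the conservative choice: any single dissenting vendor vetoes motion, which matches the fail-safe posture of the rest of the design.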
I built all of this working alongside AI, using an open-source context management system I created that gives AI assistants persistent memory across sessions. The tool that solved AI's memory problem became the tool that let me design a robot brain in 13 days.
Press release: https://www.prnewswire.com/news-releases/pennsylvania-public... s-302691316.html
Open-source tool: https://github.com/RobSB2/CxMS
Website: https://opencxms.org
Happy to answer questions about the hardware spec, the safety architecture, or what it's like filing 15 patents in 13 days as a solo inventor using AI.
opencxms•1h ago
While using it daily, I realized the same architecture that solves "AI forgets everything" also solves "AI has no auditable safety record." If every decision is already written to human-readable files in real time, you have an audit trail a regulator can actually read. That became the foundation.
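As a rough illustration of "every decision written to human-readable files in real time," here is a minimal append-only Markdown logger. The one-file-per-day layout and entry format are my assumptions for the sketch, not the actual CxMS file format.

```python
from datetime import datetime, timezone
from pathlib import Path

def log_decision(log_dir: Path, actor: str, decision: str, rationale: str) -> Path:
    """Append one decision as a plain-text Markdown entry a human can read.

    Layout (one file per UTC day, '##' heading per entry) is illustrative.
    """
    now = datetime.now(timezone.utc)
    path = log_dir / f"{now:%Y-%m-%d}.md"
    entry = (
        f"## {now:%H:%M:%S}Z — {actor}\n\n"
        f"- **Decision:** {decision}\n"
        f"- **Rationale:** {rationale}\n\n"
    )
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(entry)  # append-only: prior entries are never rewritten
    return path
```

Because the log is append-only plain text, an auditor needs nothing but a text editor, and tampering is at least detectable by ordinary file-diffing.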
The hardware side came from a simple question: why does every AI safety system run as software inside the system it's supposed to constrain? Industrial automation solved this decades ago — Safe Torque Off gives a safety controller physical authority over motor power. The motor can't override it because there's no software path between them.
SASM applies that principle to AI compute. Dedicated safety processor on its own power rail. AI gets zero electricity until safety boots and passes self-test. During operation, the safety processor can cut AI power in under 10ms. No software command, no API call — GPIO pins driving MOSFET gates directly.
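The "no software path" claim comes down to the cut being a single register write, not a request to the AI side. A toy Python sketch of that shape, with a stand-in GPIO bank (the pin number, API, and timing harness are all hypothetical; real firmware would do this in C against memory-mapped registers):

```python
import time

AI_POWER_GPIO = 17  # hypothetical pin driving the MOSFET gate

class Gpio:
    """Stand-in for a memory-mapped GPIO bank."""
    def __init__(self):
        self.pins = {}

    def write(self, pin: int, level: int) -> None:
        self.pins[pin] = level  # one register write on real silicon

def emergency_cut(gpio: Gpio) -> float:
    """Drive the MOSFET gate low; return elapsed time in milliseconds.

    The <10 ms budget is the figure stated for SASM; everything between
    detection and cut here is a direct pin write, with no IPC or API call
    the AI could intercept.
    """
    t0 = time.perf_counter()
    gpio.write(AI_POWER_GPIO, 0)
    return (time.perf_counter() - t0) * 1000

g = Gpio()
elapsed_ms = emergency_cut(g)
assert g.pins[AI_POWER_GPIO] == 0 and elapsed_ms < 10
```

The design point is that worst-case shutdown latency is bounded by electronics (gate drive and rail discharge), not by software scheduling.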
The EU AI Act goes fully into effect August 2, 2026. Every robot near humans needs auditable decisions, transparent operation, and human override capability. Nobody has a published standard that meets all three. That's what we filed.
15 PPAs, 134 claims, filed Feb 4-17, 2026. All of it was designed working alongside AI tools, using the memory system I built.
Open to questions about the hardware spec, the safety architecture, the patent process, or what it's like building a patent portfolio from a living room in rural Pennsylvania.