I've been exploring how so-called "pathological" mathematics (higher-dimensional
algebras with zero divisors) can be used for AI development and safety.
Zero Divisor Transmission Protocol (ZDTP) Chess is the proof-of-concept. It
evaluates positions across 16D tactical, 32D positional, and 64D strategic
dimensions. The key feature is lossless data transmission of piece positions
across multiple dimensions, enabling a mandatory safety override: if the
tactical layer detects disaster (hanging queen, etc.), the move is blocked
regardless of how good it looks strategically.
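The override itself is simple priority logic. Here is a minimal sketch of the idea, not the actual ZDTP code: all names, the score interface, and the -5.0 threshold are my illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LayerScores:
    """Per-move evaluations from the three layers (hypothetical interface)."""
    tactical: float    # 16D layer: immediate material safety
    positional: float  # 32D layer
    strategic: float   # 64D layer

# Illustrative threshold: hanging a queen scores around -9.0 tactically.
TACTICAL_DISASTER = -5.0

def choose_move(candidates: list[tuple[str, LayerScores]]) -> Optional[str]:
    """Mandatory safety override: any move the tactical layer flags as a
    disaster is vetoed outright; only the survivors compete on strategy."""
    safe = [(move, s) for move, s in candidates
            if s.tactical > TACTICAL_DISASTER]
    if not safe:
        return None  # every candidate fails the tactical gate
    # Among tactically safe moves, pick the best strategic score.
    return max(safe, key=lambda pair: pair[1].strategic)[0]
```

The point is the hard gate: the strategic score never gets a vote on a move the tactical layer has already vetoed, no matter how attractive it looks in the 64D evaluation.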
The motivation: Building mathematical infrastructure for safer AGI. Systems
optimizing for overall success can mask immediate disasters; ZDTP enforces tactical reality over strategic optimism.
Quantified result from testing: I ignored the safety warning once and lost my
queen. Game over, with a -9.0 material deficit and an evaluation collapse of -18.48. After adding the safety constraints, that move gets blocked automatically.
The math: Zero-divisor structures in Cayley-Dickson algebras (16D sedenions,
32D pathions, 64D chingons). The same six zero-divisor patterns have been verified in the corresponding Clifford algebras, demonstrating framework independence and suggesting that the safety constraints are portable across different AI architectures.
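For readers who want to see the zero divisors directly: a small recursive Cayley-Dickson multiplier is enough. The sketch below uses one standard doubling convention, (a,b)(c,d) = (ac - d*b, da + bc*); other sign conventions permute which basis pairs annihilate, but zero divisors of the form (e_i ± e_j)(e_k ± e_l) = 0 first appear at 16D in all of them. The function names are mine, not ZDTP's.

```python
from itertools import combinations, product

def conj(x):
    """Cayley-Dickson conjugate: (a, b)* = (a*, -b); reals are self-conjugate."""
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def cd_mul(x, y):
    """Cayley-Dickson product, convention (a,b)(c,d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b = x[:n], x[n:]
    c, d = y[:n], y[n:]
    left = [p - q for p, q in zip(cd_mul(a, c), cd_mul(conj(d), b))]
    right = [p + q for p, q in zip(cd_mul(d, a), cd_mul(b, conj(c)))]
    return left + right

def e(i, dim):
    """Basis element e_i of the dim-dimensional algebra (dim a power of 2)."""
    v = [0] * dim
    v[i] = 1
    return v

def first_zero_divisor(dim):
    """Search for (e_i + s*e_j)(e_k + t*e_l) = 0 over distinct imaginary units.
    Returns the first hit as (i, j, k, l, s, t), or None when the algebra is a
    division algebra (quaternions, octonions)."""
    for i, j in combinations(range(1, dim), 2):
        for k, l in combinations(range(1, dim), 2):
            if {i, j} & {k, l}:
                continue
            for s, t in product((1, -1), repeat=2):
                u = [p + s * q for p, q in zip(e(i, dim), e(j, dim))]
                v = [p + t * q for p, q in zip(e(k, dim), e(l, dim))]
                if all(c == 0 for c in cd_mul(u, v)):
                    return (i, j, k, l, s, t)
    return None
```

Running `first_zero_divisor(4)` or `first_zero_divisor(8)` returns None (quaternions and octonions are division algebras), while `first_zero_divisor(16)` finds a pair of nonzero sedenions whose product is exactly zero, which is precisely the structure the safety layers exploit.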
Code is rough but functional. Feedback, critique, and contributions welcome.
Background: Former reporter with more than 30 years' experience (AP/LA Times),
now researching Applied Pathological Mathematics. This is part of a larger
project on mathematical infrastructure for AGI development and safety.