For the last few years, the AI world has been dominated by a single idea: bigger is better. But what if the future of AI isn't just about scale, but about precision, efficiency, and accessibility?
This is the story of the Atomic Language Model (ALM), a project that challenges the "bigger is better" paradigm. It’s a language model that is not just millions of times smaller than the giants, but is also formally verified, opening up new frontiers for AI.
The result of our work is a capable, recursive language model that comes in at under 50KB.
This project is led by David Kypuros of Enterprise Neurosystem, in a vibrant collaboration with a team of Ugandan engineers and researchers: myself (Kato Steven Mubiru), Bronson Bakunga, Sibomana Glorry, and Gimei Alex. Our ambitious, shared goal is to use this technology to develop the first-ever language architecture for a major Ugandan language.
From "Trust Me" to "Prove It": Formal Verification
Modern LLMs are black boxes, validated only empirically. The ALM is different: its core is formally verified using the Coq proof assistant, and we have mathematically proven the correctness of its recursive engine. This shift from empirical validation to mathematical certainty is a game-changer for reliability.
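The ALM's proofs live in Coq; to give a flavor of what "mathematically proven" means here, below is a small Lean 4 sketch (our illustration, with invented names, not the project's actual development) of a toy recursive grammar, the language aⁿbⁿ, with one machine-checked invariant:

```lean
-- Illustrative only: a toy recursive grammar (aⁿbⁿ) with a
-- machine-checked invariant, in the spirit of the ALM's Coq proofs.
inductive AnBn : Nat → List Char → Prop
  | base : AnBn 0 []
  | step : ∀ {n s}, AnBn n s → AnBn (n + 1) ('a' :: (s ++ ['b']))

-- Every string the grammar derives has length exactly 2n.
theorem anbn_length {n : Nat} {s : List Char} (h : AnBn n s) :
    s.length = 2 * n := by
  induction h with
  | base => rfl
  | step _ ih =>
      simp only [List.length_cons, List.length_append, List.length_nil, ih]
      omega
```

The proof assistant checks every case of the induction, so the invariant holds for all n, not just for the inputs we happened to test.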
The Team and the Mission: Building Accessible AI
This isn't just a technical exercise. The ALM was born from a vision to make cutting-edge AI accessible to everyone, everywhere. By combining the architectural vision from Enterprise Neurosystem with the local linguistic and engineering talent in Uganda, we are not just building a model; we are building capacity and pioneering a new approach to AI development—one that serves local needs from the ground up.
Unlocking New Frontiers with a Lightweight Architecture
A sub-50KB footprint opens up domains previously out of reach for advanced AI:
Climate & Environmental Monitoring: The ALM is small enough to run on low-power, offline sensors, enabling sophisticated, real-time analysis in remote locations.
2G Solutions: In areas where internet connectivity is limited to 2G networks, a tiny, efficient model can provide powerful language capabilities that would otherwise be impossible.
Space Exploration: For missions where power, weight, and computational resources are severely constrained, a formally verified, featherweight model offers unparalleled potential.
Embedded Systems & Edge Devices: True on-device AI without needing a network connection, from microcontrollers to battery-powered sensors.
A Pragmatic Hybrid Architecture
The ALM merges the best of both worlds:
A formally verified Rust core handles the grammar and parsing, ensuring correctness and speed.
A flexible Python layer manages probabilistic modeling and user interaction.
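As a rough sketch of this division of labor (purely illustrative: the real core is Rust, and the grammar and weights below are invented, not the ALM's): the symbolic side enumerates which next tokens are grammatical, and the probabilistic side puts a distribution over only those.

```python
# Illustrative sketch of the hybrid split (not the ALM's actual API):
# a symbolic core enumerates grammatical continuations, and a
# probabilistic layer weights them. Grammar and weights are invented.
from itertools import product

GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"]],
    "VP": [["runs"], ["sees", "NP"]],
}

def expand(symbol):
    """Enumerate every terminal string a symbol derives (finite grammar)."""
    if symbol not in GRAMMAR:
        return [[symbol]]  # terminal: derives only itself
    out = []
    for rule in GRAMMAR[symbol]:
        for combo in product(*(expand(s) for s in rule)):
            out.append([tok for part in combo for tok in part])
    return out

def grammatical_next(prefix):
    """Symbolic-core stand-in: terminals that may grammatically follow prefix."""
    n = len(prefix)
    return sorted({s[n] for s in expand("S") if s[:n] == prefix and len(s) > n})

WEIGHTS = {"runs": 3.0, "sees": 1.0}  # invented probabilistic layer

def next_token_dist(prefix):
    """Probabilistic layer: normalized weights over the legal tokens only."""
    legal = grammatical_next(prefix)
    total = sum(WEIGHTS.get(t, 1.0) for t in legal)
    return {t: WEIGHTS.get(t, 1.0) / total for t in legal}

print(next_token_dist(["the", "dog"]))  # → {'runs': 0.75, 'sees': 0.25}
```

The key property of this split is that ungrammatical tokens get zero probability by construction, which is what makes the symbolic guarantees survive the probabilistic layer.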
What's Next?
This project is a testament to what small, focused, international teams can achieve. We believe the future of AI is diverse, and we are excited to build a part of that future—one that is more efficient, reliable, and equitable.
We've launched with a few key assets:
The Research Paper: A deep dive into the theory; the write-up is in progress.
The GitHub Repository: The code is open-source. We welcome contributions!
A Live Web Demo: Play with the model directly in your browser (WebAssembly).
We'd love to hear your thoughts and have you join the conversation.
NitpickLawyer•6h ago
Could you add a link for the web demo? Couldn't find it in the repo.
dkypuros•4h ago
We’re working on it. Great feedback
icodar•5h ago
Next-token prediction appears to be based on fixed grammatical rules. However, modern LLMs learn the rules themselves. Did I misunderstand?
dkypuros•4h ago
We use a deliberately small, hand‑written grammar so that we can prove properties like grammaticality, aⁿbⁿ generation, and bounded memory. The price we pay is that the next‑token distribution is limited to the explicit rules we supplied. Large neural LMs reverse the trade‑off: they learn the rules from data and therefore cover much richer phenomena, but they can’t offer the same formal guarantees. The fibration architecture is designed so we can eventually blend the two—keeping symbolic guarantees while letting certain fibres (e.g. embeddings or rule weights) be learned from data.
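As a concrete illustration of the bounded-memory point (our sketch here, not the ALM's code): recognizing aⁿbⁿ needs only a single counter, and generation is grammatical by construction.

```python
# One-counter recognizer and generator for a^n b^n -- the kind of
# property the ALM proves formally. Standalone sketch, not project code.

def is_anbn(s: str) -> bool:
    """Accept exactly the strings a^n b^n (n >= 0), using one counter."""
    n = 0
    i = 0
    while i < len(s) and s[i] == "a":
        n += 1
        i += 1
    # The remainder must be exactly n b's and nothing else.
    return s[i:] == "b" * n

def gen_anbn(n: int) -> str:
    """Generate a^n b^n; grammatical by construction."""
    return "a" * n + "b" * n
```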
dkypuros•4h ago
We’re eventually headed toward completely externalized data that feeds into the system
katosteven•6h ago
https://github.com/dkypuros/atomic-lang-model/tree/main