frontpage.

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
1•logicprog•2m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•3m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
1•todsacerdoti•3m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•4m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•6m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•8m ago•1 comment

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
1•tzury•10m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•12m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•14m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•18m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•21m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•22m ago•1 comment

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•30m ago•1 comment

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•30m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
2•mooreds•30m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•31m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•33m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•35m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•36m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•37m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•38m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•40m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•40m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•42m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
9•geox•46m ago•1 comment

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
2•yi_wang•47m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•50m ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•58m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
3•bediger4000•1h ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•1h ago•0 comments

We Built a Language Model 14,000,000x Smaller Than GPT3 and Formally Verified It

https://github.com/dkypuros/atomic-lang-model
9•katosteven•6mo ago

Comments

katosteven•6mo ago
For the last few years, the AI world has been dominated by a single idea: bigger is better. But what if the future of AI isn't just about scale, but about precision, efficiency, and accessibility?

This is the story of the Atomic Language Model (ALM), a project that challenges the "bigger is better" paradigm. It’s a language model that is not just millions of times smaller than the giants, but is also formally verified, opening up new frontiers for AI.

The result of our work is a capable, recursive language model that comes in at under 50KB.

This project is led by David Kypuros of Enterprise Neurosystem, in a vibrant collaboration with a team of Ugandan engineers and researchers: myself (Kato Steven Mubiru), Bronson Bakunga, Sibomana Glorry, and Gimei Alex. Our ambitious, shared goal is to use this technology to develop the first-ever language architecture for a major Ugandan language.

https://github.com/dkypuros/atomic-lang-model/tree/main

From "Trust Me" to "Prove It": Formal Verification Modern LLMs are opaque black boxes validated empirically. The ALM is different. Its core is formally verified using the Coq proof assistant. We have mathematically proven the correctness of its recursive engine. This shift from experimental science to mathematical certainty is a game-changer for reliability.

The Team and the Mission: Building Accessible AI

This isn't just a technical exercise. The ALM was born from a vision to make cutting-edge AI accessible to everyone, everywhere. By combining the architectural vision from Enterprise Neurosystem with the local linguistic and engineering talent in Uganda, we are not just building a model; we are building capacity and pioneering a new approach to AI development: one that serves local needs from the ground up.

Unlocking New Frontiers with a Lightweight Architecture

A sub-50KB footprint is a gateway to domains previously unimaginable for advanced AI:

Climate & Environmental Monitoring: The ALM is small enough to run on low-power, offline sensors, enabling sophisticated, real-time analysis in remote locations. 2G Solutions: In areas where internet connectivity is limited to 2G networks, a tiny, efficient model can provide powerful language capabilities that would otherwise be impossible. Space Exploration: For missions where power, weight, and computational resources are severely constrained, a formally verified, featherweight model offers unparalleled potential. Embedded Systems & Edge Devices: True on-device AI without needing a network connection, from microcontrollers to battery-powered sensors. A Pragmatic Hybrid Architecture The ALM merges the best of both worlds:

A formally verified Rust core handles the grammar and parsing, ensuring correctness and speed.
A flexible Python layer manages probabilistic modeling and user interaction.

What's Next?

This project is a testament to what small, focused, international teams can achieve. We believe the future of AI is diverse, and we are excited to build a part of that future: one that is more efficient, reliable, and equitable.

We've launched with a few key assets:

The Research Paper: a deep dive into the theory; we are still working on this one.
The GitHub Repository: the code is open source. We welcome contributions!
A Live Web Demo: play with the model directly in your browser (WebAssembly).

We'd love to hear your thoughts and have you join the conversation.

NitpickLawyer•6mo ago
Could you add a link for the web demo? Couldn't find it in the repo.
dkypuros•6mo ago
We’re working on it. Great feedback!
icodar•6mo ago
The next token appears to be predicted from fixed grammatical rules. However, modern LLMs learn the rules themselves. Did I misunderstand?
dkypuros•6mo ago
We use a deliberately small, hand‑written grammar so that we can prove properties like grammaticality, aⁿbⁿ generation, and bounded memory. The price we pay is that the next‑token distribution is limited to the explicit rules we supplied. Large neural LMs reverse the trade‑off: they learn the rules from data and therefore cover much richer phenomena, but they can’t offer the same formal guarantees. The fibration architecture is designed so we can eventually blend the two—keeping symbolic guarantees while letting certain fibres (e.g. embeddings or rule weights) be learned from data.
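To make the trade-off concrete, here is a toy sketch in the spirit of that split (hypothetical code, far simpler than the real Rust core): the grammar side enumerates every token that keeps a prefix inside aⁿbⁿ, and the probabilistic side may only choose among those candidates, so no learned weighting can ever produce an ungrammatical string.

    // Toy sketch, not the real engine: next-token prediction for aⁿbⁿ.
    // The entire parser state is one counter and one flag, which is why
    // properties like bounded memory are provable for this grammar.

    #[derive(Clone, Copy, PartialEq, Debug)]
    enum Tok { A, B, End }

    /// Tokens the grammar permits after `prefix`; None if the prefix has
    /// already left the language.
    fn valid_next(prefix: &[Tok]) -> Option<Vec<Tok>> {
        let mut open = 0u32;      // 'a's not yet matched by a 'b'
        let mut closing = false;  // true once the first 'b' appears
        for &t in prefix {
            match t {
                Tok::A if !closing => open += 1,
                Tok::B if open > 0 => { closing = true; open -= 1; }
                _ => return None, // 'a' after 'b', stray 'b', or early End
            }
        }
        let mut next = Vec::new();
        if !closing { next.push(Tok::A); }     // may open another 'a'
        if open > 0 { next.push(Tok::B); }     // may close one 'a'
        if open == 0 { next.push(Tok::End); }  // balanced: may stop
        Some(next)
    }

    /// The probabilistic layer only reorders the core's candidates, so it
    /// can be learned, heuristic, or simply wrong without ever producing
    /// an ungrammatical token.
    fn pick(candidates: &[Tok], weight: impl Fn(Tok) -> f64) -> Tok {
        *candidates
            .iter()
            .max_by(|a, b| weight(**a).total_cmp(&weight(**b)))
            .expect("the grammar always offers at least one continuation")
    }

    fn main() {
        use Tok::*;
        assert_eq!(valid_next(&[A, A, B]), Some(vec![B]));  // must keep closing
        assert_eq!(valid_next(&[A, B]), Some(vec![End]));   // balanced: may stop
        assert_eq!(valid_next(&[B]), None);                 // ungrammatical
        let prefer_a = |t| if t == A { 1.0 } else { 0.0 };
        assert_eq!(pick(&valid_next(&[A]).unwrap(), prefer_a), A);
        println!("all checks passed");
    }

The real engine proves such properties in Coq rather than asserting them in tests, but the tiny state (one counter, one flag) is exactly what makes those proofs tractable.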
dkypuros•6mo ago
We’re eventually headed toward completely externalized data that feeds into the system
oxavier•6mo ago
I will peruse your learning path when I am done writing my master's thesis. Thanks for putting it together!

Lots of bullet points and keywords about the "What": provable recursion, next-token prediction, and formal verification... and all items in "What makes it special". Can you provide a practical motivation, even a speculative one, for people like me who have little time? Not necessarily "What use does it have right now", but "The qualitative difference with other models might enable use case XYZ in the future".

I have noticed it is low-power, and this is great in itself. What does the more rigorous formalism bring to the table? No snark at all: I am fascinated by formal methods, but still looking at them from afar.

Cheers

dkypuros•6mo ago
Thanks for the thoughtful question, and good luck wrapping up the thesis!

Here’s the shortest road-map I can give for why the heavier formalism matters once you already have low-power execution nailed down.

First, the grammar + proof layer lets you guarantee properties that today’s neural LLMs can only hope to satisfy. Because every production rule carries a machine-checkable proof obligation, you can show that responses will always terminate, stay within a memory budget, or never emit strings outside a whitelisted alphabet. In practice that means the model can be certified for safety-critical or compliance-heavy settings where a probabilistic network is a non-starter.
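As a small illustration of just the alphabet guarantee (ours comes from the Coq proofs; in Rust the same property can also be had by construction, and the names below are hypothetical): if the output type can only hold whitelisted symbols, out-of-alphabet output is unrepresentable rather than merely improbable.

    #[derive(Clone, Copy)]
    enum Allowed { A, B, Space, Stop } // the whitelisted alphabet

    impl Allowed {
        fn as_char(self) -> char {
            match self {
                Allowed::A => 'a',
                Allowed::B => 'b',
                Allowed::Space => ' ',
                Allowed::Stop => '.',
            }
        }
    }

    // Any generator returning Vec<Allowed> satisfies the alphabet property
    // for free, whatever its internals do; termination and memory bounds
    // are the harder properties that actually need the proof assistant.
    fn render(tokens: &[Allowed]) -> String {
        tokens.iter().map(|t| t.as_char()).collect()
    }

    fn main() {
        let out = render(&[Allowed::A, Allowed::B, Allowed::Stop]);
        assert!(out.chars().all(|c| "ab .".contains(c)));
        println!("{out}");
    }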

Second, the same proofs make the system auditable and patchable by domain experts instead of ML engineers. An agronomist can inspect the maize-disease module, see the proof that "all advice paths end with a referenced citation," and swap in an updated pest table without breaking that guarantee. The edit-compile-prove cycle takes minutes, not GPU-months.

Third, formal hooks open the door to hybrid workflows. You can embed the micro-LM inside a larger pipeline—say, a standard transformer model proposes a draft, and our verified core acts as a “lint pass” that repairs grammar, checks facts against a local SQLite cache, and signs the result with a proof artifact. That could be huge for regulated industries that want the creativity of big models and the certainty of formal methods.
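A sketch of that lint-pass shape (everything here is hypothetical: the function names, the in-memory fact table standing in for the SQLite cache, and the string standing in for a real proof artifact):

    use std::collections::HashMap;

    struct Certified {
        text: String,
        proof_artifact: String, // really a checkable proof object, not a string
    }

    // Stand-in for the big transformer: creative, unverified.
    fn propose_draft(prompt: &str) -> String {
        format!("Re {prompt}: the boiling point of water is 100 C.")
    }

    // Stand-in for the verified core; grammar repair would run here too.
    fn verified_lint(draft: String, facts: &HashMap<&str, &str>) -> Result<Certified, String> {
        for (&claim, &value) in facts {
            if draft.contains(claim) && !draft.contains(value) {
                return Err(format!("claim about '{claim}' contradicts the fact store"));
            }
        }
        Ok(Certified {
            text: draft,
            proof_artifact: "checked: grammar, facts".to_string(),
        })
    }

    fn main() {
        let facts = HashMap::from([("boiling point of water", "100 C")]);
        match verified_lint(propose_draft("water question"), &facts) {
            Ok(c) => println!("{} [{}]", c.text, c.proof_artifact),
            Err(e) => println!("rejected: {e}"),
        }
    }

The point is the type boundary: nothing reaches the user except through the lint pass, so the creative component can be swapped without re-certifying the pipeline.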

Finally, on the speculative side, once responses are proof-carrying you can imagine device-to-device marketplaces of small, composable skills: my weather module proves bounds on forecast error, your SMS gateway proves it redacts PII, we link them and the combined proof still holds. That’s hard to do with opaque neural weights.

So the low-power story gets us in the door; the rigorous formalism is what keeps the door open when reliability, certification, or composability become the bottleneck. Hope that gives you a clearer picture—and when the thesis dust settles I’d love to hear your perspective on how formal methods could push this even further.