SparseLoCo: Communication-Efficient LLM Training

https://arxiv.org/abs/2508.15706
19•synapz_org•5mo ago

Comments

synapz_org•5mo ago
Paper: https://arxiv.org/abs/2508.15706
Code: https://github.com/tplr-ai/SparseLoCo

Templar AI has developed SparseLoCo, a distributed training algorithm that achieves extreme compression (1-3% density combined with 2-bit quantization) while outperforming existing methods like DiLoCo and DeMo on both loss and communication efficiency.

The Core Problem

Training LLMs across data centers or over the internet is bottlenecked by communication: as model scale grows, each synchronization can require transferring hundreds of gigabytes of pseudo-gradients. DiLoCo reduces the frequency of synchronizations, but the communication remains dense and large. This makes distributed training impractical for many scenarios, especially internet-scale collaboration.
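
To put rough numbers on that bottleneck, here's a back-of-envelope calculation (our own illustrative figures, assuming fp32 pseudo-gradients and borrowing the 8.9-bits-per-index number from the implementation notes below; actual payloads depend on configuration):

```python
# Back-of-envelope sync volume for a 70B-parameter model (illustrative).
params = 70e9

# Dense DiLoCo-style sync: fp32 pseudo-gradient, 4 bytes per parameter.
dense_gb = params * 4 / 1e9
print(f"dense sync:  {dense_gb:.0f} GB")    # 280 GB

# SparseLoCo: 3% density, 2-bit values + ~8.9-bit compressed indices.
density, bits_per_entry = 0.03, 2 + 8.9
sparse_gb = params * density * bits_per_entry / 8 / 1e9
print(f"sparse sync: {sparse_gb:.1f} GB")   # ~2.9 GB, roughly 98x less
```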

Technical Approach

Our key insight: The infrequent communication of DiLoCo can be aggressively compressed via TOP-k sparsification while improving performance.

Algorithm highlights:

* Replace global momentum with per-replica error feedback
* Apply TOP-k magnitude compression (1-3% density) + 2-bit quantization to pseudo-gradients
* Maintain infrequent communication (H=15-250 steps) like DiLoCo
* Use chunked TOP-k for better parallelism and reduced index overhead
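
To make the first three points concrete, here's a minimal single-replica sketch of the compression step (illustrative PyTorch only, not our production code; the function name and the generic 4-level uniform quantizer are ours, for exposition):

```python
import torch

def compress_pseudo_grad(pseudo_grad, error_buf, density=0.01):
    """Error feedback + TOP-k + 2-bit quantization for one replica."""
    # 1. Fold the residual left over from the previous round back in.
    corrected = (pseudo_grad + error_buf).flatten()

    # 2. TOP-k magnitude sparsification (1-3% density).
    k = max(1, int(density * corrected.numel()))
    _, idx = torch.topk(corrected.abs(), k)
    vals = corrected[idx]

    # 3. 2-bit quantization: four uniform levels spanning [-scale, scale].
    scale = vals.abs().max().clamp_min(1e-12)
    codes = torch.clamp(torch.round((vals / scale + 1) * 1.5), 0, 3)
    dequant = (codes / 1.5 - 1) * scale

    # 4. Everything dropped or rounded away becomes next round's residual.
    transmitted = torch.zeros_like(corrected)
    transmitted[idx] = dequant
    error_buf.copy_((corrected - transmitted).view_as(error_buf))

    # Only idx, codes (2 bits each), and scale cross the network; replicas
    # aggregate the sparse payloads and apply the outer update from them.
    return idx, codes.to(torch.uint8), scale
```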

Results

Communication reduction: With >97× compression, SparseLoCo outperforms DiLoCo across all benchmarks. Sparse aggregation appears to provide regularization benefits beyond just compression.

Communication infrequency: Consistently outperforms DiLoCo across communication periods H ∈ {15, 30, 50, 100, 250} on 512M-parameter models.

Real deployment: Currently running on Bittensor with a 70B model and 20 participants in the gather operation (out of many more total participants): 70 seconds of communication per round at <500 Mbps of bandwidth. Our previous deployment, a medium-sized run (200B tokens) of an 8B-parameter model with 20 gather participants, averaged 12 seconds of communication against 4.5 minutes of compute time.

Key Technical Contributions

1. Local momentum approximation: We show that DiLoCo's global outer momentum can be closely approximated by local accumulators (>90% cosine similarity)

2. Error feedback as momentum: We demonstrate that TOP-k + error feedback naturally provides benefits similar to outer momentum

3. Sparse aggregation benefits: We find that sparse aggregation actually improves performance over dense methods, likely because it emphasizes high-saliency components

4. Extreme quantization: Error feedback enables 2-bit quantization with no additional accumulators and no performance drop (toy illustration below)
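
Here's a toy demo of points 2 and 4 (our own illustration, not from the paper): with error feedback, even a crude 2-bit quantizer transmits the correct signal on average, because whatever rounding loses in one round gets re-sent in later rounds.

```python
import torch

torch.manual_seed(0)
g = torch.randn(1_000)        # a fixed "pseudo-gradient" direction
err = torch.zeros_like(g)
sent = torch.zeros_like(g)

for t in range(200):
    corrected = g + err
    scale = corrected.abs().max()
    codes = torch.clamp(torch.round((corrected / scale + 1) * 1.5), 0, 3)
    dq = (codes / 1.5 - 1) * scale   # 2-bit dequantized payload
    err = corrected - dq             # residual carried to the next round
    sent += dq

# The average transmitted signal converges toward the true gradient:
cos = torch.nn.functional.cosine_similarity(sent / 200, g, dim=0)
print(f"cosine(sent_avg, g) = {cos:.4f}")   # close to 1.0
```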

Implementation Details

* Chunked TOP-k (4096 elements/chunk) reduces index transmission overhead
* Custom index compression: 8.9, 6.6, 5.6 bits per value for different sparsity levels
* Drop-in replacement for DiLoCo all-reduce operations
* Compatible with existing distributed training frameworks
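
To see where the index savings come from (sketch only; the real implementation is in the repo linked above): a flat index into a 70B-element tensor costs ~37 raw bits, while a chunk-local index costs only 12 before entropy coding. Fixing k per chunk also keeps the per-chunk work uniform, which helps the parallelism side.

```python
import math
import torch

def chunked_topk(flat, chunk=4096, density=0.03):
    """Per-chunk TOP-k: indices are chunk-local, so 12 bits suffice."""
    k = max(1, int(density * chunk))
    indices, values = [], []
    for start in range(0, flat.numel(), chunk):
        block = flat[start:start + chunk]
        _, i = torch.topk(block.abs(), min(k, block.numel()))
        indices.append(i.to(torch.int16))   # local index, <= 4095
        values.append(block[i])
    return indices, values

idx, vals = chunked_topk(torch.randn(3 * 4096))
print(math.ceil(math.log2(70e9)), math.ceil(math.log2(4096)))  # 37 vs 12
```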

Limitations & Future Work

* Tested on 512M parameter models (though deployed at 8B-70B)
* Chunk size optimization could be further explored
* Random-k performs significantly worse than TOP-k

This work makes distributed training viable over commodity internet connections and opens possibilities for global AI training collaborations that bandwidth constraints previously ruled out.