
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
479•klaussilveira•7h ago•120 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
818•xnx•12h ago•491 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
40•matheusalmeida•1d ago•3 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
161•isitcontent•7h ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
158•dmpetrov•8h ago•69 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
97•jnord•3d ago•14 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
53•quibono•4d ago•7 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
211•eljojo•10h ago•135 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
264•vecti•9h ago•125 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
332•aktau•14h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
415•todsacerdoti•15h ago•220 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
27•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
344•lstoll•13h ago•245 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
5•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
53•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
202•i5heu•10h ago•148 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
116•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
153•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
248•surprisetalk•3d ago•32 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
28•gfortaine•5h ago•4 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1004•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
49•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
74•ray__•4h ago•36 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
2275•HellsMaddy•1d ago•981 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
8•gmays•2h ago•2 comments

Nvidia DGX Spark and Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0

https://blog.exolabs.net/nvidia-dgx-spark/
61•edelsohn•3mo ago

Comments

pram•3mo ago
Very cool, using the DGX like an “AI eGPU.” I wonder if this could also benefit stuff like Stable Diffusion/WAN etc?
alexandercheema•3mo ago
Yes, these models are mostly compute-bound so benefit even more from the compute on the DGX Spark.
dekhn•3mo ago
Are you using USB-C for networking between the Spark and the Mac?
pdpi•3mo ago
IP over Thunderbolt is definitely a thing; I don't know whether IP over USB is also a thing. USB4 v2 or TB5 can do 80 Gbit/s symmetrical or 120+40 Gbit/s asymmetrical (and boy is this a poster child for the asymmetrical setup). The Mac definitely supports that fine, so as long as the Spark plays nice, USB is actually a legitimately decent choice.
esseph•3mo ago
USB4 was based on Thunderbolt 3.

Yes, it's a thing that works.
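Whether the link matters mostly depends on how much data has to cross it; a back-of-envelope sketch of shipping a prefilled KV cache over the USB4/Thunderbolt link discussed above (model dimensions are illustrative, roughly 70B-class — they are not taken from the blog post):

```python
# Rough estimate: size of a fp16 KV cache and its transfer time over the link.
# All model dimensions below are illustrative assumptions.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """K and V tensors for every layer, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

def transfer_seconds(n_bytes, link_gbit_per_s):
    """Time to move n_bytes over a link of the given nominal bit rate."""
    return n_bytes * 8 / (link_gbit_per_s * 1e9)

cache = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=8192)
print(f"KV cache: {cache / 1e9:.2f} GB")                     # ~2.68 GB
print(f"at 80 Gbit/s: {transfer_seconds(cache, 80):.2f} s")  # ~0.27 s
print(f"at 40 Gbit/s: {transfer_seconds(cache, 40):.2f} s")  # ~0.54 s
```

Even the slower asymmetric return path moves a multi-gigabyte cache in well under a second, which is why the link choice is plausible here.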

mehdibl•3mo ago
The gain is only in prefill, and if the task/output is complex the gain will be minor. The numbers are quite exaggerated here, based on a prompt that takes less than 2 s to decode. So I guess we are not doing complex tasks here with hundreds or thousands of output tokens. For the cost of an M3 Ultra + DGX the gain seems minimal, and most of all, exo didn't clarify the model used here; it's for sure not a dense model, and likely an MoE with 1B or 2B experts, otherwise the Mac Ultra too would suffer a lot and the layers would be bigger!
solarkraft•3mo ago
Anecdotally, even medium-sized prompts (a few thousand tokens) on pretty small models (2-8B) have resulted in extremely noticeable slowdowns (the vast majority of total processing time) on my M1 Mac, leading me to appreciate the significance of the prefill step (and the difficulty of processing large contexts locally).
adam_arthur•3mo ago
I'm confused by all the takes implying decode is more important than prefill.

There are an enormous number of use cases where the prompt is large and the expected output is small.

E.g. providing data for the LLM to analyze, after which it gives a simple yes/no Boolean response. Or selecting a single enum value from a set.

This pattern seems far more valuable in practice than the common and lazy open-ended chat-style implementations (lazy from a product perspective).

Obviously decode will be important for code generation or search, but that's such a small set of possible applications, and you'll probably always do better being on the latest models in the cloud.
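The prefill/decode asymmetry described in the comments above can be sketched with a simple roofline-style model: prefill is roughly compute-bound (about 2 FLOPs per parameter per prompt token), while decode is roughly bandwidth-bound (all weights read once per generated token). The hardware numbers below are illustrative ballpark figures, not measurements of any machine in the thread:

```python
# Why large-prompt / small-output workloads are dominated by prefill.
# 70B dense model, ~100 TFLOPS of compute, ~800 GB/s of memory bandwidth:
# all assumed round numbers for illustration.

def prefill_seconds(params, prompt_tokens, flops):
    """Compute-bound estimate: ~2 * params FLOPs per prompt token."""
    return 2 * params * prompt_tokens / flops

def decode_seconds(params, out_tokens, mem_bw_bytes, bytes_per_param=2):
    """Bandwidth-bound estimate: every fp16 weight is read per output token."""
    return params * bytes_per_param * out_tokens / mem_bw_bytes

P = 70e9  # parameter count
prefill = prefill_seconds(P, prompt_tokens=8000, flops=100e12)
decode = decode_seconds(P, out_tokens=10, mem_bw_bytes=800e9)

print(f"prefill: {prefill:.1f} s, decode: {decode:.1f} s")   # 11.2 s vs 1.8 s
print(f"prefill share: {prefill / (prefill + decode):.0%}")  # ~86%
```

Under these assumptions an 8k-token prompt with a 10-token answer spends the vast majority of its wall-clock time in prefill, which is the regime where offloading prefill to the Spark pays off.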

drodgers•3mo ago
This is really cool!

Now I'm trying to stop myself from finding an excuse to spend upwards of $30k on compute hardware...

tuananh•3mo ago
if you have $30k to spare, I'm sure there are better options
_ea1k•3mo ago
Yeah, a couple of RTX Pro 6000 cards would blow this away and still leave him with money to spare.
solarkraft•3mo ago
This is a wonderful explanation of the two phases! I appreciate the hardware concerns for both now.

Reading the article I wished for a device that just does both things well and on that topic it might be noteworthy that Apple's just-released M5 has approximately 3.5x-ed TTFT performance compared to M4, according to their claims!
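A 3.5x TTFT improvement only accelerates the prefill share of end-to-end latency, so the overall effect is an Amdahl's-law question. The 70/30 prefill/decode split below is purely illustrative:

```python
# Amdahl-style view of a prefill (TTFT) speedup: only the prefill fraction
# of end-to-end latency improves; the decode fraction is unchanged.

def overall_speedup(prefill_fraction, prefill_speedup):
    return 1 / ((1 - prefill_fraction) + prefill_fraction / prefill_speedup)

print(f"{overall_speedup(0.7, 3.5):.2f}x")  # 2.00x
```

So even a workload that is 70% prefill only sees about a 2x end-to-end win from a 3.5x TTFT improvement; prompt-heavy workloads benefit more.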

daft_pink•3mo ago
It’s really sad that exo went private.
ethanpil•3mo ago
How do you know this happened? I thought it was an abandoned project until I saw this post. I've been diligently checking weekly for new releases but nothing for almost a year...
alexandercheema•3mo ago
Appreciate you checking back so often. We have some exciting plans. Keep checking and it won't be long before something pops up :)
storus•3mo ago
Wouldn't this restrict memory to 128GB, wasting M3 Ultra potential?
alexandercheema•3mo ago
Blog author here. Actually, no. The model can be streamed into the DGX Spark, so we can run prefill of models much larger than 128GB (e.g. DeepSeek R1) on the DGX Spark. This feature is coming to EXO 1.0, which will be open-sourced soon™.
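The streaming idea described here — keeping only one layer's weights resident at a time so a model larger than device memory can still run prefill — can be illustrated with a toy sketch. The shapes are tiny and the "layers" are plain matmuls; nothing below reflects EXO's actual implementation:

```python
# Toy streamed prefill: weights for one layer at a time are "streamed in",
# applied to the activations, and freed before the next layer loads.
import numpy as np

def load_layer(i, d_model, rng):
    """Stand-in for streaming layer i's weights from host storage."""
    return rng.standard_normal((d_model, d_model)).astype(np.float32)

def streamed_prefill(n_layers, d_model, tokens, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((tokens, d_model)).astype(np.float32)
    for i in range(n_layers):
        w = load_layer(i, d_model, rng)  # stream weights in
        x = np.tanh(x @ w)               # run the layer on the activations
        del w                            # freed before the next layer arrives
    return x

acts = streamed_prefill(n_layers=4, d_model=8, tokens=3)
print(acts.shape)  # (3, 8)
```

Peak weight residency is one layer rather than the whole model, which is the property that lets a 128GB device prefill a much larger model at the cost of re-streaming weights.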
storus•3mo ago
Excellent! Good luck!
musicale•3mo ago
But you could also just get two DGX Sparks and get 2 * 1.9x = 3.8x total throughput for two query streams.
rcarmo•3mo ago
This is very nicely done. I wonder what the values will look like a year from now with M5 Macs, though.