frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
502•klaussilveira•8h ago•139 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
842•xnx•14h ago•506 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
57•matheusalmeida•1d ago•11 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
166•dmpetrov•9h ago•76 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
280•vecti•10h ago•127 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
60•quibono•4d ago•10 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
340•aktau•15h ago•164 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
225•eljojo•11h ago•141 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
422•todsacerdoti•16h ago•221 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
363•lstoll•15h ago•251 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
12•denuoweb•1d ago•0 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
79•SerCe•4h ago•60 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
59•phreda4•8h ago•9 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
16•gmays•3h ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
210•i5heu•11h ago•157 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comment

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
123•vmatsiiako•13h ago•51 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
33•gfortaine•6h ago•8 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
160•limoce•3d ago•80 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
258•surprisetalk•3d ago•34 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1018•cdrnsf•18h ago•425 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
52•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
93•ray__•5h ago•46 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•13 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
81•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
36•betamark•15h ago•29 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
10•denysonique•5h ago•1 comment

Tiny-LLM – a course of serving LLM on Apple Silicon for systems engineers

https://github.com/skyzh/tiny-llm
297•sarkory•9mo ago

Comments

pj_mukh•9mo ago
Super cool, and will definitely check it out.

But as a measure for what you can achieve with a course like this: does anyone know what the max tok/s vs. iPhone model plot looks like, and how does MLX change that plot?

simonw•9mo ago
MLX is worth paying attention to. It's still pretty young (just over a year old) but the amount of activity in that ecosystem is really impressive, and it's quickly becoming the best way to run LLMs (and vision LLMs and increasingly audio models) on a Mac.

Here's a fun way to start interacting with it (this loads and runs Llama 3.2 3B in a terminal chat UI):

  uv run --isolated --with mlx-lm python -m mlx_lm.chat
esafak•9mo ago
https://ml-explore.github.io/mlx/
marci•9mo ago
For those who have never heard of these:

mlx is similar to numpy/pytorch, but only for Apple Silicon.

mlx-lm is a llama.cpp equivalent, but built on top of mlx.

https://github.com/ml-explore/mlx-lm
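If you've used NumPy, the core API will feel familiar. A minimal sketch of what that looks like (array sizes here are just illustrative):

  import mlx.core as mx
  # Arrays live in unified memory, so there's no explicit CPU<->GPU copying
  a = mx.random.normal((1024, 1024))
  b = mx.random.normal((1024, 1024))
  c = a @ b      # operations are recorded lazily; nothing has been computed yet
  mx.eval(c)     # evaluation is deferred until you force it (or print/convert)
  print(c.shape, c.dtype)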

masto•9mo ago
Ran it and it crapped out with a huge backtrace. I spotted `./build_bundled.sh: line 21: cmake: command not found` in it, so I guessed I needed cmake installed. `brew install cmake` and try again. Then it crapped out with `Compatibility with CMake < 3.5 has been removed from CMake.`. Then I give up.

This is typical of what happens any time I try to run something written in Python. It may be easier than setting up an NVIDIA GPU, but that's a low bar.

simonw•9mo ago
Which Python version was that? Could be that MLX has binary wheels for some versions but not others.
masto•9mo ago
Adding `-p 3.12` made it work. Leaving that here in case it helps someone.
porridgeraisin•9mo ago
Aha, knew you wouldn't give up. Not what our kind do
hnfong•9mo ago
Never gonna give you up…

(Sorry, I’ll excuse myself now…)

H3X_K1TT3N•9mo ago
This is absolutely every experience I have with python.
jack_pp•9mo ago
for the record these problems don't really exist on linux in my experience
mobiuscog•9mo ago
Python problems exist on all platforms. It's just that most people using Python have figured out their 'happy path' workarounds in the past and keep using them.

Python is awesome in many ways, one of my favourite languages, but unless you are happy with venv manipulation (or live in Conda), it's often a nightmare that ends up worse than DLL-hell.

masto•9mo ago
Python is in a category of things you can't just use without being an expert in the minutiae. This is unfortunate because there are a lot of people who are not Python developers who would like to run programs which happen to be written in Python.

Python is by no means alone in this or particularly egregious. Having been a heavy Perl developer in the 2000s, I was part of the problem. I didn't understand why other people had so much trouble doing things that seemed simple to me, because I was eating, breathing, and sleeping Perl. I knew how to prolong the intervals between breaking my installation, and how to troubleshoot and repair it, but there was no reason why anyone who wanted to deploy, or even develop on, my code base should have needed that encyclopedic knowledge.

This is why, for all their faults, I count containers as the biggest revolution in the software industry, at least for us "backend" folks.

_bin_•9mo ago
I wish Apple would spend some more time paying attention to jax-metal :) It still crashes after a few lines, and it seems like an obvious need if Apple wants to be serious about enabling ML work on their new MBPs.

MLX looks really nice from the demo-level playing around I've done with it, but I usually stick to JAX so, you know, I can actually deploy it on a server without trying to find someone who racks Macs.

dkga•9mo ago
So, on an M4 I sometimes get faster training on plain vanilla jax compared to the same model in pytorch or tensorflow. And jax-metal often breaks :/
_bin_•9mo ago
No kidding? Might switch to CPU then. And yeah, jax-metal is so utterly unreliable. I ran across an issue that turns out to reduce to a ~2-line repro example, and it has been open on GitHub for the better part of a year without updates.
mathfailure•9mo ago
How much disk & RAM does it need?

What's your tokens/sec rate (and on which device)?

simonw•9mo ago
I've been running it on a 64GB M2. My favorite models to run tend to be about 20GB to download (eg Mistral Small 3.1) and use about 20GB of RAM while they are running.

I don't have a token/second figure to hand but it's fast enough that I'm not frustrated by it.
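If you do want a number, mlx-lm's verbose mode prints prompt and generation tokens-per-sec for each run. A minimal sketch (the model repo and prompt are just examples; exact arguments may vary by mlx-lm version):

  from mlx_lm import load, generate
  model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
  # verbose=True prints the generated text plus prompt/generation tokens-per-sec
  response = generate(model, tokenizer, prompt="Explain KV caches in one paragraph.", verbose=True)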

fsiefken•9mo ago
That's great. Like the AMD Ryzen AI Max 395, Apple Silicon chips are also more energy efficient for LLMs (or gaming) than Nvidia.

For 4-bit deepseek-r1-distill-llama-70b on a MacBook Pro M4 Max with the MLX version on LM Studio: 10.2 tok/sec on power and 4.2 tok/sec on battery / low power.

For 4-bit gemma-3-27b-it-qat I get: 26.37 tok/sec on power and 9.7 tok/sec on battery / low power.

It'd be nice to know all the possible power tweaks to push those numbers higher, and to get additional insight into how LLMs work and interact with the CPU and memory.

nico•9mo ago
Thank you for the numbers

What have you used those models for, and how would you rate them in those tasks?

realo•9mo ago
RPG prompts work very well with many of the models, but not the reasoning ones, because they end up thinking endlessly about how to be the absolute best game master possible...
nico•9mo ago
Great use case. And very funny situation with the reasoning models! :)
bigyabai•9mo ago
> Apple Silicon chips are also more energy efficient for LLMs (or gaming) than Nvidia.

Which benchmarks are you working off of, exactly? Unless your memory is bottlenecked, neither raster nor compute workloads on M4 are more energy efficient than Nvidia's 50-series silicon: https://browser.geekbench.com/opencl-benchmarks

fouc•9mo ago
NVIDIA GeForce RTX 5090 - 376224 - 400-550W for the GPU + 250-500W for CPU/RAM/cooling/etc.

Apple M3 Ultra - 131247 - 200W [1]

Looks like it might be 2.8x faster in the benchmark, but ends up using 3.25x more power at a minimum.

[1] https://www.tweaktown.com/news/103891/apples-new-m3-ultra-ru...
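Back-of-the-envelope in Python, using the numbers above (the wattages are the rough ranges quoted, not measurements, so treat this as very approximate):

  score_5090, watts_5090 = 376224, 400 + 250   # GPU + rest-of-system, low end
  score_m3u, watts_m3u = 131247, 200
  print(score_5090 / score_m3u)                 # ~2.87x the OpenCL score
  print(watts_5090 / watts_m3u)                 # ~3.25x the power, at minimum
  print(score_5090 / watts_5090, score_m3u / watts_m3u)  # points per watt: ~579 vs ~656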

vlovich123•9mo ago
How does mlx compare with the llama.cpp backend for LM Studio?
robbru•9mo ago
Interesting benchmarks, thanks for sharing!

If you're optimizing for lower power draw + higher throughput on Mac (especially in MLX), definitely keep an eye on the Desloth LLMs that are starting to appear.

Desloth models are basically aggressively distilled and QAT-optimized versions of larger instruction models (think: 7B → 1.3B or 2B) designed specifically for high tokens/sec at minimal VRAM. They're tiny but surprisingly capable for structured outputs, fast completions, and lightweight agent pipelines.

I'm seeing Desloth-tier models consistently hit >50 tok/sec on M1/M2 hardware without needing active cooling ramps, especially when combined with low-bit quant like Q4_K_M or Q5_0.

If you care about runtime efficiency per watt + low-latency inference (vs. maximum capability), these newer Desloth styled architectures are going to be a serious unlock.

robbru•9mo ago
TinyLLM is very cool to see! I will def tinker with it. I've been using MLX format for local LLMs as of late. Kinda amazing to see these models become cheaper and faster. Check out the MLX community on HuggingFace. https://huggingface.co/mlx-community
nico•9mo ago
Great recommendation about the community

Any other resources like that you could share?

Also, what kind of models do you run with mlx and what do you use them for?

Lately I’ve been pretty happy with gemma3:12b for a wide range of things (generating stories, some light coding, image recognition). Sometimes I’ve been surprised by qwen2.5-coder:32b. And I’m really impressed by the speed and versatility, at such tiny size, of qwen2.5:0.5b (playing with fine tuning it to see if I can get it to generate some decent conversations roleplaying as a character)

simonw•9mo ago
I've shared a bunch of notes on MLX over the past year, many of them with snippets of code I've used to try out models: https://simonwillison.net/tags/mlx/

I mainly use MLX for LLMs (with https://github.com/ml-explore/mlx-lm and my own https://github.com/simonw/llm-mlx which wraps that), vision LLMs (via https://github.com/Blaizzy/mlx-vlm) and running Whisper (https://github.com/ml-explore/mlx-examples/tree/main/whisper)

I haven't tried mlx-audio yet (which can synthesize speech) but it looks interesting too: https://github.com/Blaizzy/mlx-audio

The two best people to follow for MLX stuff are Apple's Awni Hannun - https://twitter.com/awnihannun and https://github.com/awni - and community member Prince Canuma who's responsible for both mlx-vlm and mlx-audio: https://twitter.com/Prince_Canuma and https://github.com/Blaizzy
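As an example of how little code those wrappers need, here's a minimal transcription sketch with the Whisper port mentioned above (pip install mlx-whisper; the audio path and model repo are placeholders):

  import mlx_whisper
  # Downloads the converted model from Hugging Face on first run
  result = mlx_whisper.transcribe(
      "audio.mp3",
      path_or_hf_repo="mlx-community/whisper-large-v3-turbo",
  )
  print(result["text"])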

nico•9mo ago
Amazing. Thank you for the great resources!
robbru•9mo ago
Very cool insight, Simonw! I will check out the audio mlx stuff soon. I think that is kinda new still. Prince Canuma is the GOAT.
robbru•9mo ago
Hey Nico,

Very cool to hear your perspective on how you are using the small LLMs! I’ve been experimenting extensively with local LLM stacks on:

• M1 Max (MLX native)

• LM Studio (GLM, MLX, GGUFs)

• llama.cpp (GGUFs)

• n8n for orchestration + automation (multi-stage LLM workflows)

My emerging use cases:

-Rapid narration scripting

-Roleplay agents with embedded prompt personas

-Reviewing image/video attachments + structuring copy for clarity

-Local RAG and eval pipelines

My current lineup of small LLMs (this changes every month depending on what is updated):

MLX-native models (mlx-community):

-Qwen2.5-VL-7B-Instruct-bf16 → excellent VQA and instruction following

-InternVL3-8B-3bit → fast, memory-light, solid for doc summarization

-GLM-Z1-9B-bf16 → reliable multilingual output + inference density

GGUF via LM Studio / llama.cpp:

-Gemma-3-12B-it-qat → well-aligned, solid for RP dialogue

-Qwen2.5-0.5B-MLX-4bit → blazing fast; chaining 2+ agents at once

-GLM-4-32B-0414-8bit (Cobra4687) → great for iterative copy drafts

Emerging / niche models tested:

MedFound-7B-GGUF → early tests for narrative medicine tasks

X-Ray_Alpha-mlx-8Bit → experimental story/dialogue hybrid

llama-3.2-3B-storyteller-Q4_K_M → small, quick, capable of structured hooks

PersonalityParty_saiga_fp32-i1 → RP grounding experiments (still rough)

I test most new LLMs on release. QAT models in particular are showing promise, balancing speed + fidelity for chained inference. The meta-trend: models are getting better, smaller, faster, especially for edge workflows.

Happy to swap notes if others are mixing MLX, GGUF, and RAG in low-latency pipelines.

nico•9mo ago
Impressive! Thank you for the amazing notes, I have a lot to learn and test
gitroom•9mo ago
dang, i've been messing with mlx too and it's blowing my mind how quick this stuff is getting on macs. feels like something's changing every time i blink
robbru•9mo ago
I think people are sleeping on MLX and directing their attention to the "Apple Intelligence" marketing atm.