frontpage.

France's homegrown open source online office suite

https://github.com/suitenumerique
424•nar001•4h ago•199 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
131•bookofjoe•1h ago•104 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
436•theblazehen•2d ago•156 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
86•AlexeyBrin•5h ago•16 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
25•thelok•1h ago•2 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
778•klaussilveira•19h ago•241 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
34•vinhnx•2h ago•4 comments

First Proof

https://arxiv.org/abs/2602.05192
38•samasblack•2h ago•23 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
54•onurkanbkrc•4h ago•3 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
16•mellosouls•2h ago•18 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1026•xnx•1d ago•582 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
168•alainrk•4h ago•223 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
167•jesperordrup•10h ago•61 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
24•rbanffy•4d ago•5 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
16•simonw•1h ago•13 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
103•videotopia•4d ago•26 comments

Vinklu Turns Forgotten Plot in Bucharest into Tiny Coffee Shop

https://design-milk.com/vinklu-turns-forgotten-plot-in-bucharest-into-tiny-coffee-shop/
5•surprisetalk•5d ago•0 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
12•marklit•5d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
265•isitcontent•20h ago•33 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•42 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
35•matt_d•4d ago•10 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
277•dmpetrov•20h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
546•todsacerdoti•1d ago•263 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
418•ostacke•1d ago•110 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
64•helloplanets•4d ago•68 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
364•vecti•22h ago•163 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
16•sandGorgon•2d ago•4 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
338•eljojo•22h ago•206 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
457•lstoll•1d ago•301 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
372•aktau•1d ago•195 comments

Qwen3 30B A3B Hits 13 token/s on 4xRaspberry Pi 5

https://github.com/b4rtaz/distributed-llama/discussions/255
347•b4rtazz•5mo ago

Comments

geerlingguy•5mo ago
distributed-llama is great, I just wish it would work with more models. I've been happy with its ease of setup and ongoing maintenance compared to Exo, and with its performance versus llama.cpp's RPC mode.
alchemist1e9•5mo ago
Any pointers to what's SOTA for a cluster of hosts with CUDA GPUs that individually don't have enough VRAM for the full weights, but are connected by 10Gbit low-latency interconnects?

If that problem gets solved, even if only for a batch approach that enables parallel batch inference (high total token/s but low per-session speed), and for bigger models, then it would be a serious game changer for large-scale, low-cost AI automation without billions in capex. My intuition says it should be possible, so perhaps someone has done it or started on it already.

echelon•5mo ago
This is really impressive.

If we can get this down to a single Raspberry Pi, then we have crazy embedded toys and tools. Locally, at the edge, with no internet connection.

Kids will be growing up with toys that talk to them and remember their stories.

We're living in the sci-fi future. This was unthinkable ten years ago.

striking•5mo ago
I think it's worth remembering that there's room for thoughtful design in the way kids play. Are LLMs a useful tool for encouraging children to develop their imaginations or their visual or spatial reasoning skills? Or would these tools shape their thinking patterns to exactly mirror those encoded into the LLM?

I think there's something beautiful and important about the fact that parents shape their kids, leaving with them some of the best (and worst) aspects of themselves. Likewise with their interactions with other people.

The tech is cool. But I think we should aim to be thoughtful about how we use it.

bigyabai•5mo ago
> Kids will be growing up with toys that talk to them and remember their stories.

What a radical departure from the social norms of childhood. Next you'll tell me that they've got an AI toy that can change their diaper and cook Chef Boyardee.

manmal•5mo ago
An LLM in my kids' toys only over my cold, dead body. This can and will go very, very wrong.
cdelsolar•5mo ago
Why
manmal•5mo ago
For the same reason I don’t leave them unattended with strangers.
fragmede•5mo ago
If a raspberry pi can do all that, imagine the toys Bill Gates' grandkids have access to!

We're at the precipice of having a real "A Young Lady's Illustrated Primer" from The Diamond Age.

9991•5mo ago
Bill Gates' grandkids will be playing with wooden blocks.
1gn15•5mo ago
This is indeed incredibly sci fi. I still remember my ChatGPT moment, when I realized I could actually talk to a computer. And now it can run fully on an RPi, just as if the RPi itself has become intelligent and articulate! Very cool.
dingdingdang•5mo ago
Very impressive numbers. I wonder how this would scale on 4 relatively modern desktop PCs, say something akin to an 8th-gen i5 Lenovo ThinkCentre; these can be had for very cheap. But as @geerlingguy indicates, we need model compatibility to go up, up, up! As an example, it would be amazing to see something like fastsdcpu run distributed, to democratize the accessibility and practicality of image-gen models for people with limited budgets but large PC fleets ;)
rthnbgrredf•5mo ago
I think it is all well and good, but the most affordable option is probably still to buy a used MacBook with 16, 32, or 64 GB of unified memory (depending on the budget) and install Asahi Linux for tinkering.

Graphics cards with decent amount of memory are still massively overpriced (even used), big, noisy and draw a lot of energy.

ivape•5mo ago
It just came to my attention that the 2021 M1 Max with 64GB is less than $1500 used. That's 64GB of unified memory at regular laptop prices, so I think people will be well equipped with AI laptops rather soon.

Apple really is #2 and probably could be #1 in AI consumer hardware.

jeroenhd•5mo ago
Apple is leagues ahead of Microsoft with the whole AI PC thing and so far it has yet to mean anything. I don't think consumers care at all about running AI, let alone running AI locally.

I'd try the whole AI thing on my work Macbook but Apple's built-in AI stuff isn't available in my language, so perhaps that's also why I haven't heard anybody mention it.

ivape•5mo ago
People don’t know what they want yet, you have to show it to them. Getting the hardware out is part of it, but you are right, we’re missing the killer apps at the moment. The very need for privacy with AI will make personal hardware important no matter what.
mycall•5mo ago
Two main factors are holding back the "killer app" for AI. Fix hallucinations and make agents more deterministic. Once these are in place, people will love AI when it can make them money somehow.
croes•5mo ago
You can’t fix the hallucinations
herval•5mo ago
How does one “fix hallucinations” on an LLM? Isn’t hallucinating pretty much all it does?
kasey_junk•5mo ago
Coding agents have shown how. You filter the output against something that can tell the llm when it’s hallucinating.

The hard part is identifying those filter functions outside of the code domain.
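
In rough pseudocode the loop looks something like this (a sketch; generate() and verify() are hypothetical stand-ins for an LLM call and a compiler or test run, not real library functions):

    def generate(prompt: str, feedback: str = "") -> str:
        """Hypothetical stand-in for an LLM call; returns candidate code."""
        raise NotImplementedError

    def verify(candidate: str) -> tuple[bool, str]:
        """Hypothetical ground truth: e.g. compile, type-check, or run tests."""
        raise NotImplementedError

    def generate_with_ground_truth(prompt: str, max_attempts: int = 3):
        feedback = ""
        for _ in range(max_attempts):
            candidate = generate(prompt, feedback)
            ok, feedback = verify(candidate)  # compiler/test errors become feedback
            if ok:
                return candidate              # passed the ground-truth filter
        return None                           # caller decides how to handle failure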

dotancohen•5mo ago
It's called a RAG, and it's getting very well developed for some niche use cases such as legal, medical, etc. I've been personally working on one for mental health, and please don't let anybody tell you that they're using an LLM as a mental health counselor. I've been working on it for a year and a half, and if we get it to production ready in the next year and a half I will be surprised. In keeping up with the field, I don't think anybody else is any closer than we are.
tptacek•5mo ago
Wait, can you say more about how RAG solves this problem? What Kasey is referring to is things like compiling statically-typed code: there's a ground truth the agent is connected to there --- it can at least confidently assert "this code actually compiles" (and thus can't be using an entirely hallucinated API). I don't see how RAG accomplishes something similar, but I don't think much about RAG.
dingdingdang•5mo ago
No no, not at all, see: https://openai.com/index/why-language-models-hallucinate/ which was recently featured on the frontpage - an excellent, clean take on how to fix the issue (they already got a long way with gpt-5-thinking-mini). I liked this bit for its clear outline of the issue:

> Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say "I don't know."

> As another example, suppose a language model is asked for someone's birthday but doesn't know. If it guesses "September 10," it has a 1-in-365 chance of being right. Saying "I don't know" guarantees zero points. Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty.
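
A toy expected-score comparison makes that incentive concrete (a sketch of the arithmetic, not code from the linked post; the -0.25 penalty is an assumed grading rule):

    # Expected score of guessing vs. abstaining under two grading schemes.
    p_correct = 1 / 365                      # wild guess at someone's birthday

    # Accuracy-only grading: any guess has a small positive expectation,
    # while "I don't know" scores zero, so guessing wins on the leaderboard.
    accuracy_only_guess = p_correct * 1.0            # ~0.0027
    accuracy_only_abstain = 0.0

    # Grading that penalizes wrong answers (here -0.25, an assumed value):
    # guessing now has negative expectation and abstaining becomes rational.
    penalized_guess = p_correct * 1.0 + (1 - p_correct) * -0.25   # ~ -0.247
    penalized_abstain = 0.0

    print(accuracy_only_guess, penalized_guess)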

MengerSponge•5mo ago
Other than that, Mrs. Lincoln, how was the Agentic AI?
dotancohen•5mo ago

  > People don’t know what they want yet, you have to show it to them
Henry Ford famously quipped that had he asked his customers what they wanted, they would have wanted a faster horse.
estimator7292•5mo ago
We've shown people so many times and so forcefully that they're now actively complaining about it. It's a meme.

The problem isn't getting your Killer AI App in front of eyeballs. The problem is showing something useful or necessary or wanted. AI has not yet offered the common person anything they want or need! The people have seen what you want to show them; they've been forced to try it, over and over. There is nobody who interacts with the internet who has not been forced to use AI tools.

And yet still nobody wants it. Do you think that they'll love AI more if we force them to use it more?

ivape•5mo ago
> And yet still nobody wants it.

Nobody wants the one-millionth meeting transcription app and the one-millionth coding agent constantly, sure.

It's a developer creativity issue. I personally believe the lack of creativity is so egregious that if anyone were to release a killer app, the entirety of the lackluster dev community would copy it into eternity, to the point where you'd think that's all AI can do.

This is not a great way to start off the morning, but gosh darn it, I really hate that this profession attracted so many people that just want to make a buck.

——-

You know what was the killer app for the Wii?

Wii Sports. It sold a lot of Wiis.

You have to be creative with this AI stuff, it’s a requirement.

wkat4242•5mo ago
M1 doesn't exactly have stellar memory bandwidth for this day and age though
Aurornis•5mo ago
M1 Max with 64GB has 400GB/s memory bandwidth.

You have to get into the highest 16-core M4 Max configurations to begin pulling away from that number.

wkat4242•5mo ago
Oh sorry I thought it was only about 100. I'd read that before but I must have remembered incorrectly. 400 is indeed very serviceable.
benreesman•5mo ago
The Ryzen AI Max+ 395 with 64GB of LPDDR5 is $1500 new in a ton of form factors, and $2k with 128GB. If I have $1500 for a unified-memory inference machine I'm probably not getting a Mac. It's not a bad choice per se, llama.cpp supports that hardware extremely well, but a modern Ryzen APU at the same price is more of what I want for that use case; with the M1 Mac you're paying for a Retina display and a bunch of stuff unrelated to inference.
anonym29•5mo ago
Not just LPDDR5, but LPDDR5X-8000 on a 256-bit bus. The 40 CU of RDNA 3.5 is nice, but it's less raw compute than e.g. a desktop 4060 Ti dGPU. The memory is fast, 200+ GB/s real-world read and write (the AIDA64 thread about limited read speeds is misleading, this is what the CPU is able to see, the way the memory controller is configured, but GPU tooling reveals full 200+ GB/s read and write). Though you can only allocate 96 GB to the iGPU on Windows or 110 GB on Linux.

The ROCm and Vulkan stacks are okay, but they're definitely not fully optimized yet.

Strix Halo's biggest weakness compared to Mac setups is memory bandwidth. M4 Max gets something like 500+ GB/s, and M3 Ultra gets something like 800 GB/s, if memory serves correctly.

I just ordered a 128 GB Strix Halo system, and while I'm thrilled about it, in fairness, for people who don't have an adamant insistence against proprietary kernels, refurbished Apple silicon does offer a compelling alternative with superior performance options. AFAIK there's nothing like AppleCare for any of the Strix Halo systems either.
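
For reference, the quoted bandwidth follows from the usual back-of-the-envelope formula (a sketch; the transfer rate and bus width are the ones mentioned above):

    # Theoretical peak bandwidth = transfer rate x bus width.
    transfers_per_sec = 8000e6            # LPDDR5X-8000: 8000 MT/s
    bus_width_bytes = 256 / 8             # 256-bit bus = 32 bytes per transfer

    peak_gb_s = transfers_per_sec * bus_width_bytes / 1e9
    print(peak_gb_s)                      # 256 GB/s theoretical; ~200+ GB/s measured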

jtbaker•5mo ago
The 128 GB Strix Halo system was tempting me, but I think I'm going to hold out for the Medusa Point memory bandwidth gains to expand my cluster setup.

I have a Mac Mini M4 Pro 64GB that does quite well with inference on the Qwen3 models, but is hell on networking with my home K3s cluster, which going deeper on is half the fun of this stuff for me.

ivape•5mo ago
It’s not better than the Macs yet. There’s no half assing this AI stuff, AMD is behind even the 4 year old MacBooks.

NVIDIA is so greedy that doling out $500 will only get you 16GB of VRAM at half the speed of an M1 Max. You can get a lot more speed with more expensive NVIDIA GPUs, but you won't get anything close to a decent amount of VRAM for less than $700-1500 (truly, you will not get close to 32GB).

Makes me wonder just how much secret effort is being put in by MAG7 to strip NVIDIA of this pricing power, because they are absolutely price gouging.

anonym29•5mo ago
>The 128 GB Strix Halo system was tempting me, but I think I'm going to hold out for the Medusa Point

I was initially thinking this way too, but I realized a 128GB Strix Halo system would make an excellent addition to my homelab / LAN even once it's no longer the star of the stable for LLM inference - i.e. I will probably get a Medusa Halo system as well once they're available. My other devices are Zen 2 (3600x) / Zen 3 (5950x) / Zen 4 (8840u), an Alder Lake N100 NUC, a Twin Lake N150 NUC, along with a few Pi's and Rockchip SBC's, so a Zen 5 system makes a nice addition to the high end of my lineup anyway. Not to mention, everything else I have maxed out at 2.5GbE. I've been looking for an excuse to upgrade my switch from 2.5GbE to 5 or 10 GbE, and the Strix Halo system I ordered was the BeeLink GTR9 Pro with dual 10GbE. Regardless of whether it's doing LLM, other gen AI inference, some extremely light ML training / light fine tuning, media transcoding, or just being yet another UPS-protected server on my LAN, there's just so much capability offered for this price and TDP point compared to everything else I have.

Apple Silicon would've been a serious competitor for me on the price/performance front, but I'm right up there with RMS in terms of ideological hostility towards proprietary kernels. I'm not totally perfect (privacy and security are a journey, not a destination), but I am at the point where I refuse to use anything running an NT or Darwin kernel.

jtbaker•5mo ago
That is sweet! The extent of my cluster is a few Pis that talk to the Mac Mini over the LAN for inference stuff, that I could definitely use some headroom on. I tried to integrate it into the cluster directly by running k3s in colima - but to join an existing cluster via IP, I had to run colima in host networking mode - so any pods on the mini that were trying to do CoreDNS networking were hitting collisions with mDNSResponder when dialing port 53 for DNS. Finally decided that the macs are nice machines but not a good fit for a member of a cluster.

Love that AMD seems to be closing the gap on the performance _and_ power efficiency of Apple Silicon with the latest Ryzen advancements. Seems like one of these new miniPCs would be a dream setup to run a bunch of data and AI centric hobby projects on - particularly workloads like geospatial imagery processing in addition to the LLM stuff. Its a fun time to be a tinkerer!

Bombthecat•5mo ago
Ryzen 9 doesn't exist in Europe
seanmcdirmid•5mo ago
I recently got an M3 Max with 64GB (the higher-spec Max) and it's been a lot of fun playing with local models. It cost around $3k though, even refurbished.
jibbers•5mo ago
Get an Apple Silicon MacBook with a broken screen and it’s an even better deal.
giancarlostoro•5mo ago
You don't even need Asahi; you can run ComfyUI on it, but I recommend the Draw Things app, it just works and holds your hand a LOT. I am able to run a few models locally, and the underlying app is open source.
mrbonner•5mo ago
I used Draw Things after fighting with ComfyUI.
croes•5mo ago
What about AMD Ryzen AI Max+ 395 mini PCs with up to 128GB of unified memory?
evilduck•5mo ago
Their memory bandwidth is the problem. 256 GB/s is really, really slow for LLMs.

Seems like at the consumer hardware level you just have to pick your poison, or whichever one factor you care about most. Macs with a Max or Ultra chip can have good memory bandwidth but low compute, along with ultra-low power consumption. Discrete GPUs have great compute and bandwidth but low-to-middling VRAM, and high cost and power consumption. Unified-memory PCs like the Ryzen AI Max and the Nvidia DGX deliver middling compute, higher VRAM, and terrible memory bandwidth.
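
A rough way to see why bandwidth dominates token generation: every generated token has to stream the active weights through memory at least once, so bandwidth sets a hard ceiling (a sketch with assumed numbers for a ~Q4 quant of a 3B-active-parameter MoE; real-world rates land well below these ceilings):

    # Bandwidth-bound upper limit on single-stream decode speed.
    active_params = 3e9                       # ~3B active parameters per token (MoE)
    bytes_per_param = 4.5 / 8                 # ~4.5 bits/param at a Q4-ish quant
    bytes_per_token = active_params * bytes_per_param   # ~1.7 GB streamed per token

    for name, bw_gb_s in [("Strix Halo", 256), ("M1 Max", 400), ("M3 Ultra", 800)]:
        ceiling = bw_gb_s * 1e9 / bytes_per_token
        print(f"{name}: <= {ceiling:.0f} tok/s (bandwidth ceiling)")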

codedokode•5mo ago
But for matrix multiplication, isn't compute more important, as there are N³ multiplications but just N² numbers in a matrix?

Also I don't think power consumption is important for AI. Typically you do AI at home or in the office where there is lot of electricity.

evilduck•5mo ago
>But for matrix multiplication, isn't compute more important, as there are N³ multiplications but just N² numbers in a matrix?

Being able to quickly calculate a dumb or unreliable result because you're VRAM starved is not very useful for most scenarios. To run capable models you need VRAM, so high VRAM and lower compute is usually more useful than the inverse (a lot of both is even better, but you need a lot of money and power for that).

Even in this post with four RPis, the Qwen3 30B A3B is still an MoE model and not a dense model. It runs fast with only 3B active parameters and can be parallelized across computers, but it's much less capable than a dense 30B model running on a single GPU.

> Also I don't think power consumption is important for AI. Typically you do AI at home or in the office where there is lot of electricity.

Depends on what scale you're discussing. If you want to get similar VRAM to a 512GB Mac Studio Ultra with a bunch of Nvidia GPUs like RTX 3090 cards, you're not going to be able to run that on a typical American 15 amp circuit; you'll trip a breaker halfway there.
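
The circuit-budget point checks out with rough numbers (a sketch; the per-card and overhead wattages are assumptions, not measurements):

    # Rough power budget on a single US 15 A / 120 V circuit,
    # keeping continuous load to ~80% of the breaker rating.
    circuit_watts = 15 * 120 * 0.8        # ~1440 W usable
    gpu_watts = 350                       # assumed draw per RTX 3090-class card
    system_overhead = 300                 # assumed CPU, board, fans, PSU losses

    max_gpus = int((circuit_watts - system_overhead) // gpu_watts)
    print(max_gpus)                       # ~3 cards, i.e. ~72 GB of VRAM per circuit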

MalikTerm•5mo ago
It's an underwhelming product in an annoying market segment, but 256GB/s really isn't that bad when you look at the competition. 150GB/s from hex channel DDR4, 200GB/s from quad channel DDR5, or around 256GB/s from Nvidia Digits or M Pro (that you can't get in the 128GB range). For context it's about what low-mid range GPUs provide, and 2.5-5x the bandwidth of the 50/100 GB/s memory that most people currently have.

If you're going with a Mac Studio Max you're going to be paying twice the price for twice the memory bandwidth, but the kicker is you'll be getting the same amount of compute as the AMD AI chips have which is going to be comparable to a low-mid range GPU. Even midrange GPUs like the RX 6800 or RTX 3060 are going to have 2x the compute. When the M1 chips first came out people were getting seriously bad prompt processing performance to the point that it was a legitimate consideration to make before purchase, and this was back when local models could barely manage 16k of context. If money wasn't a consideration and you decided to get the best possible Mac Studio Ultra, 800GB/s won't feel like a significant upgrade when it still takes 1 minute to process every 80k of uncached context that you'll absolutely be using on 1m context models.

ekianjo•5mo ago
Works very well and very fast with this Qwen3 30B A3B model.
Aurornis•5mo ago
> and install Asahi Linux for tinkering.

I would recommend sticking to macOS if compatibility and performance are the goal.

Asahi is an amazing accomplishment, but running native optimized macOS software including MLX acceleration is the way to go unless you’re dead-set on using Linux and willing to deal with the tradeoffs.

benreesman•5mo ago
If the Moore's Law Is Dead leaks are to be believed, there are going to be 24GB GDDR7 5080 Super and maybe even 5070 Ti Super variants in the $1k (MSRP) range, and one assumes fast Blackwell NVFP4 tensor cores.

Depends on what you're doing, but at FP4 that goes pretty far.

nullsmack•5mo ago
The mini pcs based on AMD Ryzen AI Max+ 395 (Strix Halo) are probably pretty competitive with those. Depending on which one you buy it's $1700-2000 for one with 128GB RAM that is shared with the integrated Radeon 8060S graphics. There's videos on youtube talking about using this with the bigger LLM models.
j45•5mo ago
Connect a GPU to it with an eGPU chassis and you're running one way or the other.
trebligdivad•5mo ago
On my (single) AMD 3950X running entirely on CPU (llama -t32 -dev none), I was getting 14 tokens/s running Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf last night. Which is the best I've had out of a model that doesn't feel stupid.
codedokode•5mo ago
How much RAM is it using, by the way? I see 30B, but without knowing the precision it's unclear how much memory one needs.
MalikTerm•5mo ago
Q4 is usually around 4.5 bits per parameter but can be more, as some layers are quantised to a higher precision, which would suggest 30 billion * 4.5 bits ≈ 15.7 GiB; the quant the GP is using is 17.3GB, and 19.7GB for the article. Add around 20-50% overhead for various things, plus some percentage for each 1k of tokens in the context, and you're probably looking at no more than 32GB. If you're using something like llama.cpp, which can offload some of the model to the GPU, you'll still get decent performance even on a 16GB VRAM GPU.
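
Putting that estimate into numbers (a sketch of the arithmetic; the 30% overhead figure is an assumption in the middle of the range above):

    # Rough weight-memory estimate for a 30B-parameter model at ~4.5 bits/param.
    params = 30e9
    bits_per_param = 4.5

    weights_bytes = params * bits_per_param / 8
    print(weights_bytes / 1e9)            # ~16.9 GB
    print(weights_bytes / 2**30)          # ~15.7 GiB
    print(weights_bytes * 1.3 / 2**30)    # ~20 GiB with ~30% overhead (KV cache, buffers)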
trebligdivad•5mo ago
Sounds close! top says my llama is using 17.7G virt, 16.6G resident with: ./build/bin/llama-cli -m /discs/fast/ai/Qwen3-Coder-30B-A3B-Instruct-IQ4_NL.gguf --jinja -ngl 99 --temp 0.7 --min-p 0.0 --top-p 0.80 --top-k 20 --presence-penalty 1.0 -t 32 -dev none
vient•5mo ago
For reference, I get 29 tokens/s with the same model using 12 threads on AMD 9950X3D. Guess it is 2x faster because AVX-512 is 2x faster on Zen 5, roughly speaking. Somewhat unexpectedly, increasing number of threads decreases performance, 16 threads already perform slightly worse and with 32 threads I only get 26.5 tokens/s.

On 5090 same model produces ~170 tokens/s.

kosolam•5mo ago
How is this technically done? How does it split the query and aggregate the results?
magicalhippo•5mo ago
From the readme:

> More devices mean faster performance, leveraging tensor parallelism and high-speed synchronization over Ethernet.

> The maximum number of nodes is equal to the number of KV heads in the model #70.

I found this[1] article nice for an overview of the parallelism modes.

[1]: https://medium.com/@chenhao511132/parallelism-in-llm-inferen...
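
A toy illustration of why KV heads cap the node count: each node owns an equal slice of the heads, so you can't have more nodes than heads (the head counts below are illustrative, not taken from any particular model):

    def shard_kv_heads(num_kv_heads: int, nodes: int) -> list[list[int]]:
        # Each node gets an equal, contiguous slice of the KV heads.
        assert nodes <= num_kv_heads and num_kv_heads % nodes == 0
        per_node = num_kv_heads // nodes
        return [list(range(i * per_node, (i + 1) * per_node)) for i in range(nodes)]

    print(shard_kv_heads(num_kv_heads=4, nodes=4))   # [[0], [1], [2], [3]]
    print(shard_kv_heads(num_kv_heads=8, nodes=4))   # [[0, 1], [2, 3], [4, 5], [6, 7]]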

varispeed•5mo ago
So would 40x RPi 5 get 130 token/s?
SillyUsername•5mo ago
I imagine it might be limited by the number of layers, and you'll also hit diminishing returns at some point from network latency.
VHRanger•5mo ago
Most likely not because of NUMA bottlenecks
reilly3000•5mo ago
It has to be 2^n nodes and limited to one per attention head that the model has.
behnamoh•5mo ago
Everything runs on a π if you quantize it enough!

I'm curious about the applications though. Do people randomly buy 4xRPi5s that they can now dedicate to running LLMs?

ryukoposting•5mo ago
I'd love to hook my development tools into a fully-local LLM. The question is context window and cost. If the context window isn't big enough, it won't be helpful for me. I'm not gonna drop $500 on RPis unless I know it'll be worth the money. I could try getting my employer to pay for it, but I'll probably have a much easier time convincing them to pay for Claude or whatever.
exitb•5mo ago
I think the problem is that getting multiple Raspberry Pi’s is never the cost effective way to run heavy loads.
halJordan•5mo ago
This is some sort of joke right?
numpad0•5mo ago
MI50 is cheaper
rs186•5mo ago
$500 gives you about 6 RPi 5 8GB or 4 16GB, excluding accessories or other necessary equipment to get this working.

You'll be much better off spending that money on something else more useful.

behnamoh•5mo ago
> $500

Yeah, like a Mac Mini or something with better bandwidth.

ekianjo•5mo ago
Raspberry Pis going up in price makes them very unattractive, since there is a wealth of cheap, better second-hand hardware out there, such as NUCs with Celerons.
fastball•5mo ago
Capability of the model itself is presumably the more important question than those other two, no?
amelius•5mo ago
> I'd love to hook my development tools into a fully-local LLM.

Karpathy said in his recent talk, on the topic of AI developer-assistants: don't bother with less capable models.

So ... using an rpi is probably not what you want.

fexelein•5mo ago
I'm having a lot of fun using less capable versions of models on my local PC, integrated as a code assistant. There is still real value there, but also plenty of room for improvement. I envision us all running specialized lightweight LLMs locally/on-device at some point.
dotancohen•5mo ago
I'd love to hear more about what you're running, and on what hardware. Also, what is your use case? Thanks!
fexelein•5mo ago
So I am running Ollama on Windows using a 10700K and a 3080 Ti. I'm using models like Qwen3-Coder (4/8B), Qwen2.5-Coder 15B, Llama 3 Instruct, etc. These models are very fast on my machine (~25-100 tokens per second depending on the model).

My use case is custom software that I build and host that leverages LLMs, for example for home automation, where I use my Apple Watch shortcuts to issue commands. I also created a VS2022 extension called Bropilot to replace Copilot with my locally hosted LLMs. I'm currently looking at fine-tuning these types of models for work; I work in finance as a senior dev.

dotancohen•5mo ago
Thank you. I'll take a look at Bropilot when I get set up locally.

Have a great week.

refulgentis•5mo ago
It's a tough thing, I'm a solo dev supporting ~all at high quality. I cannot imagine using anything other than $X[1] at the leading edge. Why not have the very best?

Karpathy elides that he is an individual. Across a distribution of individuals, we expect a nontrivial number of them to be fine with 5-10% off leading-edge performance. Why? At the least, for free as in beer. At most, concerns about connectivity, IP rights, and so on.

[1] gpt-5 finally dethroned sonnet after 7 months

wkat4242•5mo ago
Today's qwen3 30b is about as good as last year's state of the art. For me that's more than good enough. Many tasks don't require the best of the best either.
littlestymaar•5mo ago
So much this: people act as if local models were useless, when they were in awe of last year's proprietary models that were not any better…
dpe82•5mo ago
Mind linking to "his recent talk"? There's a lot of videos of him so it's a bit difficult to find what's most recent.
amelius•5mo ago
https://www.youtube.com/watch?v=LCEmiRjPEtQ
dpe82•5mo ago
Ah that one. Thanks!
littlestymaar•5mo ago
> Karpathy said in his recent talk, on the topic of AI developer-assistants: don't bother with less capable models.

Interesting because he also said the future is small "cognitive core" models:

> a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.

https://xcancel.com/karpathy/status/1938626382248149433#m

In which case, a raspberry Pi sounds like what you need.

ACCount37•5mo ago
It's not at all trivial to build a "small but highly capable" model. Sacrificing world knowledge is something that can be done, but only to an extent, and that isn't a silver bullet.

For an LLM, size is a virtue - the larger a model is, the more intelligent it is, all other things equal - and even aggressive distillation only gets you this far.

Maybe with significantly better post-training, a lot of distillation from a very large and very capable model, and extremely high quality synthetic data, you could fit GPT-5 Pro tier of reasoning and tool use, with severe cuts to world knowledge, into a 40B model. But not into a 4B one. And it would need some very specific training to know when to fall back to web search or knowledge databases, or delegate to a larger cloud-hosted model.

And if we had the kind of training mastery required to pull that off? I'm a bit afraid of what kind of AI we would be able to train as a frontier run.

littlestymaar•5mo ago
Nobody said it's trivial.
MangoToupe•5mo ago
I'm kind of shocked so many people are willing to ship their code up to companies that built their products on violating copyright.
pdntspa•5mo ago
Model intelligence should be part of your equation as well, unless you love loads and loads of hidden technical debt and context-eating, unnecessarily complex abstractions
th0ma5•5mo ago
How do you evaluate this except for anecdote and how do we know your experience isn't due to how you use them?
pdntspa•5mo ago
You can evaluate it as anecdote. How do I know you have the level of experience necessary to spot these kinds of problems as they arise? How do I know you're not just another AI booster with financial stake poisoning the discussion?

We could go back and forth on this all day.

exe34•5mo ago
you got very defensive. it was a useful question - they were asking in terms of using a local LLM, so at best they might be in the business of selling raspberry pis, not proprietary LLMs.
th0ma5•5mo ago
Yeah, to me it's more poisonous that people reflexively believe any pushback must be wrong, because people feel empowered regardless of any measurement that might point out that people only get (maybe) out of LLMs what they put into them, and even then we can't be sure. That this situation exists, and that people have been primed with a complete triangulation of all the arguments, simply isn't healthy. We should demand independent measurements instead of the fumbling in the dark of the current model measurements... or admit that measuring them isn't helpful and, as a parent comment maybe alluded to, the results can only be described as anecdote, with no discernible difference between many models.
giancarlostoro•5mo ago
GPT OSS 20B is smart enough, but the context window is tiny with enough files. I wonder if you can make a dumber model with a massive context window that's a middleman to GPT.
pdntspa•5mo ago
Matches my experience.
giancarlostoro•5mo ago
Just have it open a new context window. The other thing I wanted to try is to make a LoRA, but I'm not sure how that works properly; it suggested a whole other model, but it wasn't a pleasant experience since it's not as obvious as it is with diffusion models for images.
throaway920181•5mo ago
It's sad that Pis are now so overpriced. They used to be fun little tinker boards that were semi-cheap.
pseudosavant•5mo ago
The Raspberry Pi Zero 2 W is as fast as a Pi 3, way smaller, and only costs $13, I think.

The high end Pis aren’t $25 though.

geerlingguy•5mo ago
The Pi 4 is still fine for a lot of low end use cases and starts at $35. The Pi 5 is in a harder position. I think the CM5 and Pi 500 are better showcases for it than the base model.
pseudosavant•5mo ago
Between the microcontrollers, the Zero models, the Pi 4, and the Pi 5, they have quite a full range, from very inexpensive and low power to moderate price/performance SBCs.

One of the bigger problems with the Pi 5 is that many of the classic Pi use cases don't benefit from more CPU than the Pi 4 had. PCIe is nice, but you might as well go CM5 if you want something like that. The 16GB model would be more interesting if it had the GPU/bandwidth to do AI/tokens at a decent rate, but it doesn't.

I still think using any other brand of SBC is an exercise in futility though. Raspberry Pi products have the community, support, ecosystem behind them that no other SBC can match.

hhh•5mo ago
I have clusters of over a thousand Raspberry Pis that generally have 75% of their compute and 80% of their memory completely unused.
Moto7451•5mo ago
That’s an interesting setup. What are you doing with that sort of cluster?
estimator7292•5mo ago
99.9% of enthusiast/hobbyist clusters like this are exclusively used for blinkenlights
wkat4242•5mo ago
Blinkenlights are an admirable pursuit
estimator7292•5mo ago
That wasn't a judgement! I filled my homelab rack server with mechanical drives so I can get clicky noises along with the blinky lights
larodi•5mo ago
Is it solar powered?
CamperBob2•5mo ago
Good ol' Amdahl in action.
fragmede•5mo ago
That sounds awesome, do you have any pictures?
6r17•5mo ago
I mean, at this point it's more of a "proof-of-work" with a shared blueprint; I could definitely see some home-automation hacker get this running. Hell, maybe I'll do this too if I have some spare time and want to make something like Alexa with customized stuff; it would still need text-to-speech and speech-to-text, but that's not really the topic of his setup. Even for pro use, if it's really usable, why not just spawn Qwen on ARM if that's cheaper? There are a lot of ways to read and leverage such a bench.
ugh123•5mo ago
I think it serves as a good test bed for methods and models. We'll see if someday they can reduce it to 3... 2... 1 Pi 5s that can match this performance.
giancarlostoro•5mo ago
Sometimes you buy a Pi for one project, start on it, then buy another for a different project; before you know it, none are complete and you have ten Raspberry Pis lying around across various generations. ;)
dotancohen•5mo ago
Arduino hobbyist, same issue.

Though I must admit to first noticing the trend decades before discovering Arduino when I looked at the stack of 289, 302, and 351W intake manifolds on my shelf and realised that I need the width of the 351W manifold but the fuel injection of the 302. Some things just never change.

giancarlostoro•5mo ago
I have different model Raspberry Pi's and I'm having a hard time justifying buying a 5... but if I can run LLMs off one or two... I just might. I guess what the next Raspberry Pi needs is a genuinely impressive GPU that COULD run small AI models, so people will start cracking at it.
Zenst•5mo ago
Depends on the model: if you have a sparse MoE model, then you can divide it up across smaller nodes; dense 30B models I do not see flying anytime soon.

An Intel Arc Pro B50 in a dumpster PC would do you much better on this model (not enough RAM for a dense 30B, alas) and get close to 20 tokens a second, and for so much cheaper.

piecerough•5mo ago
"quantize enough"

though at what quality?

dotancohen•5mo ago
Quantity has a quality all its own.
blululu•5mo ago
For $500 you may as well spend an extra $100 and get a Mac mini with an M4 chip and 256GB of RAM and avoid the headaches of coordinating 4 machines.
MangoToupe•5mo ago
I don't think you can get 256 gigs of ram in a mac mini for $600. I do endorse the mac as an AI workbench tho
mmastrac•5mo ago
Is the network the bottleneck here at all? That's impressive for a gigabit switch.
kristianp•5mo ago
Does the switch use more power than the 4 pis?
mmastrac•5mo ago
Modern gigabit switches are pretty efficient (<10W for sure); I think a Pi might be 4-5W.
tarruda•5mo ago
I suspect you'd get similar numbers with a modern x86 mini PC that has 32GB of RAM.
misternintendo•5mo ago
At this speed this is only suitable for time-insensitive applications.
daveed•5mo ago
I mean it's a raspberry pi...
layer8•5mo ago
I’d argue that chat is a time-sensitive application, and 13 tokens/s is significantly faster than I can read.
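
Rough arithmetic backs that up (assuming ~0.75 English words per token and a ~250 wpm reading speed, both ballpark figures):

    # 13 tokens/s versus a typical reading pace.
    tokens_per_sec = 13
    words_per_token = 0.75              # assumed average for English text
    reading_speed_wpm = 250             # assumed typical adult reading speed

    generation_wpm = tokens_per_sec * words_per_token * 60
    print(generation_wpm, generation_wpm / reading_speed_wpm)   # ~585 wpm, ~2.3x reading pace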
poly2it•5mo ago
Neat, but at this price scaling it's probably better to buy GPUs.
rao-v•5mo ago
Nice! Cheap RK3588 boards come with 15GB of LPDDR5 RAM these days and have significantly better performance than the Pi 5 (and often are cheaper).

I get 8.2 tokens per second on a random orange pi board with Qwen3-Coder-30B-A3B at Q3_K_XL (~12.9GB). I need to try two of them in parallel ... should be significantly faster than this even at Q6.

jerrysievert•5mo ago
> a random orange pi board with Qwen3-Coder-30B-A3B at Q3_K_XL (~12.9GB)

fantastic! what are you using to run it, llama.cpp? I have a few extra opi5's sitting around that would love some extra usage

rao-v•5mo ago
Yup! Build it and ignore KleinAI, Vulkan, etc. I've found that a clean CPU-only build is optimal.
ThatPlayer•5mo ago
Is that using the NPU on that board? I know it's possible to use those too.
rao-v•5mo ago
It is possible (there's a superb subreddit) but painful to convert a modern model, and it takes ages for them to be supported. The NPU is energy efficient but no faster than the CPU for generation (and has lousy software support).

I'm mostly interested in the NPU for running a vision head in parallel with an LLM to speed up time to first token with VLMs (I kinda want to turn them into privacy-safe vision devices for consumer use cases).

ThatPlayer•5mo ago
Since my comment, I remembered I had an RK3588 board, a Rock 5B, and tried llama.cpp on CPU on that, and performance was not amazing. But I also realized this one is LPDDR4X, so don't get the cheapest RK3588 boards. My Orange Pi 5 is actually worse; that one has LPDDR4. Looking at the rest of Orange Pi's line-up, they don't actually have a board with both LPDDR5 and 32GB, only 16GB or LPDDR4(X).

Using llama-bench and Llama 2 7B Q4_0 like https://github.com/ggml-org/llama.cpp/discussions/10879, how does yours compare? Cuz I'm also comparing it with a few Ryzen 5 3000 series mini PCs for less than $150, which get 8 t/s on that list, matching what I've gotten myself.

With my Rock 5B and this bench, I get 3.65 t/s. On my Orange Pi 5 (not B) 8GB LPDDR4 (not X), I get 2.44 t/s.

ineedasername•5mo ago
This is highly usable in an enterprise setting when the task benefits from near-human-level decision making, and when $acceptable_latency < 1s can be met by decisions expressible in <= 13 tokens of natural language.

Meaning that if you can structure a range of situations and tasks clearly in natural language with a pseudo-code type of structure and fit it in model context then you can have an LLM perform a huge amount of work with Human-in-the-loop oversight & quality control for edge cases.

Think of office jobs, white-collar work, where business process documentation, employee guides, and job aids already fully describe 40% to 80% of the work. These are the tasks most easily structured with scaffolding prompts and more specialized RLHF-enriched data, and an LLM can then perform those tasks more consistently.

This is what I describe when I'm asked "But how will they do $X when they can't answer $Y without hallucinating?"

I explain the above capability, then I ask the person to do a brief thought experiment: How often have you heard, or yourself thought, something like "That is mind-numbingly tedious" and/or "a trained monkey could do it"?

In the end, I don't know anyone who is aware of the core capabilities in the structured natural-language sense above who doesn't see at a glance just how many jobs can easily go away.

I'm not smart enough to see where all the new jobs will be, or certain there will be as many of them; if I were, I'd start or invest in such businesses. But maybe not many new jobs get created, and then so what?

If the net productivity and output (essentially the wealth) of the global workforce remains the same or better with AI assistance and therefore fewer work hours, that means... what? Less work on average, per capita. More wealth per work hour worked, per capita, than before.

Work hours used to be longer; they can shorten again. The problem is getting there: overcoming not just the "sure, but only the CEOs will get wealthy" side of things but also the "full time means 40 hours a week minimum" attitude held by more than just managers and CEOs.

It will also mean that our concept of the "proper wage" for unskilled labor that can't be automated will have to change too. Wait staff at restaurants, retail workers, countless low-end service workers in food and hospitality? They'll now be providing (and giving up) something much more valuable than outdated white-collar skills. They'll be giving their time to what I've heard described as "embodied work"; the term is jarring to my ears, but it is what it is, and I guess it fits. And anyway, I've long considered my time to be something I'll trade with a great deal more reluctance than my money, so I demand a lot of money for it when it's required, so that I can use that money to buy more time (by not having to work) somewhere in the near future, even if it's just by covering the cost of getting groceries delivered instead of spending the time to go shopping myself.

Wow, this comment got away from me. But seeing Qwen3 30B level quality with 13tk/s on dirt cheap HW struck a deep chord of "heck, the global workforce could be rocked to the core for cheap+quality 13tk/s." And that alone isn't the sort of comment you can leave as a standalone drive-by on HN and have it be worth the seconds to write it. And I'm probably wrong on a little or a lot of this and seeing some ideas on how I'm wrong will be fun and interesting.

drclegg•5mo ago
Distributed compute is cool, but $320 for 13 tokens/s on a tiny input prompt, 4-bit quantization, and a 3B-active-parameter model is very underwhelming.
ab_testing•5mo ago
Would it work better on a used GPU?
bjt12345•5mo ago
Does Distributed Llama use RDMA over Converged Ethernet or is this roadmapped? I've always wondered if RoCE and Ultra-Ethernet will trickle down into the consumer market.
rldjbpin•5mo ago
how would llm-d [1] work compared to distributed-llama? is the overhead or configuration too much to work with for simple setups?

[1] https://github.com/llm-d/llm-d/