frontpage.

Is it a pint?

https://isitapint.com/
58•cainxinth•46m ago•47 comments

iPhone 17 Pro Demonstrated Running a 400B LLM

https://twitter.com/anemll/status/2035901335984611412
125•anemll•2h ago•79 comments

Show HN: Threadprocs – executables sharing one address space (0-copy pointers)

https://github.com/jer-irl/threadprocs
19•jer-irl•52m ago•11 comments

Bombadil: Property-based testing for web UIs

https://github.com/antithesishq/bombadil
157•Klaster_1•4d ago•64 comments

Cyber.mil serving file downloads using TLS certificate which expired 3 days ago

https://www.cyber.mil/stigs/downloads
55•Eduard•1h ago•57 comments

If DSPy is so great, why isn't anyone using it?

https://skylarbpayne.com/posts/dspy-engineering-patterns/
119•sbpayne•2h ago•74 comments

An unsolicited guide to being a researcher [pdf]

https://emerge-lab.github.io/papers/an-unsolicited-guide-to-good-research.pdf
85•sebg•4d ago•11 comments

Migrating to the EU

https://rz01.org/eu-migration/
625•exitnode•6h ago•512 comments

Study: 'Security Fatigue' May Weaken Digital Defenses

https://www.albany.edu/news-center/news/2026-study-security-fatigue-may-weaken-digital-defenses
60•giuliomagnifico•2h ago•35 comments

POSSE – Publish on your Own Site, Syndicate Elsewhere

https://indieweb.org/POSSE
343•tosh•8h ago•74 comments

PC Gamer recommends RSS readers in a 37mb article that just keeps downloading

https://stuartbreckenridge.net/2026-03-19-pc-gamer-recommends-rss-readers-in-a-37mb-article/
754•JumpCrisscross•22h ago•348 comments

GitHub appears to be struggling with measly three nines availability

https://www.theregister.com/2026/02/10/github_outages/
314•richtr•6h ago•169 comments

America tells private firms to "hack back"

https://www.economist.com/united-states/2026/03/22/america-tells-private-firms-to-hack-back
29•andsoitis•3h ago•21 comments

I built an AI receptionist for a mechanic shop

https://www.itsthatlady.dev/blog/building-an-ai-receptionist-for-my-brother/
48•mooreds•6h ago•68 comments

Two pilots dead after plane and ground vehicle collide at LaGuardia

https://www.bbc.com/news/articles/cy01g522ww4o
69•mememememememo•9h ago•117 comments

Side-Effectful Expressions in C (2023)

https://blog.xoria.org/expr-stmt-c/
5•surprisetalk•5d ago•0 comments

General Motors is assisting with the restoration of a rare EV1

https://evinfo.net/2026/03/general-motors-is-assisting-with-the-restoration-of-an-1996-ev1/
58•betacollector64•2d ago•59 comments

The gold standard of optimization: A look under the hood of RollerCoaster Tycoon

https://larstofus.com/2026/03/22/the-gold-standard-of-optimization-a-look-under-the-hood-of-rolle...
506•mariuz•21h ago•138 comments

Tin Can, a 'landline' for kids

https://www.businessinsider.com/tin-can-landline-kids-cellphone-cell-alternative-how-2025-9
262•tejohnso•3d ago•212 comments

Walmart: ChatGPT checkout converted 3x worse than website

https://searchengineland.com/walmart-chatgpt-checkout-converted-worse-472071
269•speckx•3d ago•193 comments

Cyberattack on vehicle breathalyzer company leaves drivers stranded in the US

https://techcrunch.com/2026/03/20/cyberattack-on-vehicle-breathalyzer-company-leaves-drivers-stra...
79•speckx•3h ago•91 comments

Reports of code's death are greatly exaggerated

https://stevekrouse.com/precision
519•stevekrouse•1d ago•382 comments

The future of version control

https://bramcohen.com/p/manyana
609•c17r•1d ago•344 comments

Can you get root with only a cigarette lighter? (2024)

https://www.da.vidbuchanan.co.uk/blog/dram-emfi.html
143•HeliumHydride•3d ago•29 comments

“Collaboration” is bullshit

https://www.joanwestenberg.com/collaboration-is-bullshit/
135•mitchbob•15h ago•62 comments

Orbán's top opponent says Hungary's alleged Russian backchannel 'treason'

https://www.thetelegraph.com/news/world/article/orb-n-s-top-opponent-says-hungary-s-alleged-22091...
11•vrganj•43m ago•1 comment

Nanopositioning Metrology, Gödel, and Bootstraps

https://www.pi-usa.us/en/tech-blog/nanopositioning-metrology-goedel-and-bootstraps
12•nill0•4d ago•2 comments

Why I love NixOS

https://www.birkey.co/2026-03-22-why-i-love-nixos.html
400•birkey•23h ago•271 comments

GoGoGrandparent (YC S16) is hiring Back end Engineers

https://www.ycombinator.com/companies/gogograndparent/jobs/2vbzAw8-backend-engineer
1•davidchl•13h ago

Project Nomad – Knowledge That Never Goes Offline

https://www.projectnomad.us
538•jensgk•1d ago•201 comments

iPhone 17 Pro Demonstrated Running a 400B LLM

https://twitter.com/anemll/status/2035901335984611412
122•anemll•2h ago
https://xcancel.com/anemll/status/2035901335984611412

Comments

ashwinnair99•2h ago
A year ago this would have been considered impossible. The hardware is moving faster than anyone's software assumptions.
cogman10•1h ago
This isn't a hardware feat; this is a software triumph.

They didn't make special-purpose hardware to run a model. They crafted a large model so that it could run on consumer hardware (a phone).

pdpi•1h ago
It's both.

We haven't had phones running laptop-grade CPUs/GPUs for that long, and that is a very real hardware feat. Likewise, nobody would've said running a 400b LLM on a low-end laptop was feasible, and that is very much a software triumph.

bigyabai•33m ago
> We haven't had phones running laptop-grade CPUs/GPUs for that long

Agree to disagree, we've had laptop-grade smartphone hardware for longer than we've had LLMs.

smallerize•1h ago
The iPhone 17 Pro launched 8 months ago with 50% more RAM and about double the inference performance of the previous iPhone Pro (also 10x prompt processing speed).
mannyv•1h ago
The software has real software engineers working on it instead of researchers.

Remember when people were arguing about whether to use mmap? What a ridiculous argument.

At some point someone will figure out how to tile the weights and the memory requirements will drop again.
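(Aside: "tiling the weights" here just means computing a matvec one row-block at a time, so only a tile-sized buffer is ever resident. A toy numpy sketch; load_tile is a hypothetical stand-in for an SSD read:)

    import numpy as np

    # Compute y = W @ x while holding only one row-tile of W at a time,
    # as if each tile were streamed from disk rather than kept in RAM.
    def tiled_matvec(load_tile, n_tiles, tile_rows, x):
        y = np.empty(n_tiles * tile_rows, dtype=x.dtype)
        for i in range(n_tiles):
            tile = load_tile(i)  # stand-in for reading one tile off SSD
            y[i * tile_rows:(i + 1) * tile_rows] = tile @ x
        return y

    rng = np.random.default_rng(0)
    W = rng.standard_normal((1024, 512), dtype=np.float32)
    x = rng.standard_normal(512, dtype=np.float32)
    y = tiled_matvec(lambda i: W[i * 128:(i + 1) * 128], 8, 128, x)
    assert np.allclose(y, W @ x, rtol=1e-4)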

snovv_crash•1h ago
The real improvement will be when the software engineers get into the training loop. Then we can have MoE models that use cache-friendly expert utilisation, and maybe even learned prefetching of which experts come next.
zozbot234•30m ago
> maybe even learned prefetching for what the next experts will be

Experts are predicted by layer and the individual layer reads are quite small, so this is not really feasible. There's just not enough information to guide a prefetch.

snovv_crash•25m ago
Manually, no. It would have to be learned, and making the expert selection predictable would need to be a training metric to minimize.
zozbot234•20m ago
Making the expert selection more predictable also means making it less effective. There's no real free lunch.
Aurornis•50m ago
It wasn't considered impossible. There are examples of large MoE LLMs running on small hardware all over the internet, like giant models on Raspberry Pi 5.

It's just so slow that nobody pursued it seriously. It's fun to see these tricks implemented, but even on this 2025 top spec iPhone Pro the output is 100X slower than output from hosted services.

zozbot234•32m ago
If the bottleneck is storage bandwidth that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware.
ottah•17m ago
I mean, by any reasonable standard it still is. Almost any computer can run an LLM; it's just a matter of how fast, and 0.4 tk/s (peak, before first token) is not really considered running. It's a demo, but practically speaking entirely useless.
simopa•2h ago
It's crazy to see a 400B model running on an iPhone. But moving forward, as the information density and architectural efficiency of smaller models continue to increase, getting high-quality, real-time inference on mobile is going to become trivial.
volemo•37m ago
> moving forward, as the information density and architectural efficiency of smaller models continue to increase

If they continue to increase.

firstbabylonian•1h ago
> SSD streaming to GPU

Is this solution based on what Apple describes in their 2023 paper 'LLM in a flash' [1]?

1: https://arxiv.org/abs/2312.11514

simonw•1h ago
Yes. I collected some details here: https://simonwillison.net/2026/Mar/18/llm-in-a-flash/
zozbot234•1h ago
A similar approach was recently featured here: https://news.ycombinator.com/item?id=47476422 Though the iPhone Pro has very limited RAM (12GB total), which you still need for the active part of the model. (Unless you want to use Intel Optane wearout-resistant storage, but that was power hungry and thus unsuitable for a mobile device.)
simonw•1h ago
Yeah, this new post is a continuation of that work.
Aurornis•54m ago
> Though iPhone Pro has very limited RAM (12GB total) which you still need for the active part of the model.

This is why mixture of experts (MoE) models are favored for these demos: Only a portion of the weights are active for each token.

zozbot234•7m ago
Yes but most people are still running MoE models with all experts loaded in RAM! This experiment shows quite clearly that some experts are only rarely needed, so you do benefit from not caching every single expert-layer in RAM at all times.
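(Aside: a minimal sketch of that caching idea; hot experts stay resident, cold ones get reloaded on demand. LRU is just one plausible policy, and load_expert is a hypothetical stand-in for a flash read:)

    from collections import OrderedDict

    # Keep only recently-used experts in RAM; reload cold ones on demand.
    class ExpertCache:
        def __init__(self, capacity, load_expert):
            self.cache = OrderedDict()
            self.capacity = capacity
            self.load_expert = load_expert  # e.g. a read from flash

        def get(self, expert_id):
            if expert_id in self.cache:
                self.cache.move_to_end(expert_id)  # mark as hot
            else:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)  # evict the coldest expert
                self.cache[expert_id] = self.load_expert(expert_id)
            return self.cache[expert_id]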
foobiekr•48m ago
This is not entirely dissimilar to what Cerebras does with their weight streaming.
manmal•35m ago
And IIRC the Unreal Engine Matrix demo for PS5 was streaming textures directly from SSD to the engine as well?
cj00•1h ago
It's 400B, but it's mixture of experts, so how many are active at any time?
simonw•1h ago
Looks like it's Qwen3.5-397B-A17B so 17B active. https://github.com/Anemll/flash-moe/tree/iOS-App
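(Back-of-envelope for those numbers, assuming a 4-bit quant at 0.5 bytes/weight; the quant level is an assumption, not stated in the demo:)

    total_params, active_params = 397e9, 17e9
    bytes_per_weight = 0.5  # assumed 4-bit quantization

    print(f"full model on SSD: ~{total_params * bytes_per_weight / 1e9:.0f} GB")
    print(f"active per token:  ~{active_params * bytes_per_weight / 1e9:.1f} GB")
    # ~199 GB on disk, but only ~8.5 GB of weights touched per token,
    # which is what makes SSD streaming plausible on a 12 GB phone.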
anshumankmr•35m ago
Aren't most companies doing MoE at this point?
rwaksmunski•1h ago
Apple might just win the AI race without even running in it. It's all about the distribution.
raw_anon_1111•1h ago
Apple is already one of the winners of the AI race. It’s making much more profit (ie it ain’t losing money) on AI off of ChatGPT, Claude, Grok (you would be surprised at how many incels pay to make AI generated porn videos) subscriptions through the App Store.

It’s only paying Google $1 billion a year for access to Gemini for Siri

detourdog•1h ago
Apple's entire yearly capex is a fraction of the AI spend of the presumed AI winners.
devmor•1h ago
Which is mostly insane amounts of debt, leveraged entirely on the moonshot that they will find a way to turn a profit within the next couple of years.

Apple's bet is intelligent; the "presumed winners" are staking our economic stability on a miracle, like a shaking gambling addict at a horse race who just withdrew his rent money.

foobiekr•50m ago
Fantasy buildouts of hundreds of billions of dollars for gear that has a 3 year lifetime may be premature.

Put another way, there is no demonstrated first mover advantage in LLM-based AI so far and all of the companies involved are money furnaces.

qingcharles•1h ago
Plus all those pricey 512GB Mac Studios they are selling to YouTubers.
icedchai•44m ago
They don't offer the 512 gig RAM variant anymore. Outside of social media influencers and the occasional AI researcher, the market for $10K desktops is vanishingly small.
Multiplayer•32m ago
My understanding is that the 512GB offering will likely return with the new M5 Ultra coming around WWDC in June. Fingers crossed anyway!
criddell•14m ago
The best desktop you could get has been around $10k going all the way back to the PDP-8e (it could fit on most desks!).
giobox•5m ago
Most of the influencer content I saw demonstrating LLMs on multiple 512GB Mac Studios over Thunderbolt networking used Macs borrowed from Apple PR; NetworkChuck, Jeff Geerling, et al. didn't actually buy the four or five 512GB Mac Studios used in their corresponding local LLM videos.
dzikimarian•1h ago
Because someone managed to run an LLM on an iPhone at unusable speed, Apple won the AI race? Yeah, sure.
naikrovek•1h ago
whoa, save some disbelief for later, don't show it all at once.
causal•1h ago
Run an incredible 400B parameters on a handheld device.

0.6 t/s, wait 30 seconds to see what these billions of calculations get us:

"That is a profound observation, and you are absolutely right ..."

WarmWash•1h ago
I don't think we are ever going to win this. The general population loves being glazed way too much.
baal80spam•1h ago
> The general population loves being glazed way too much.

This is 100% correct!

WarmWash•51m ago
Thanks for the short warm blast of dopamine, no one else ever seems to grasp how smart I truly am!
timcobb•43m ago
That is an excellent observation.
tombert•42m ago
That's an astute point, and you're right to point it out.
actusual•40m ago
You are thinking about this exactly the right way.
9dev•30m ago
You’re absolutely right!
otikik•14m ago
The other day, I got:

"You are absolutely right to be confused"

That was the closest AI has been to calling me "dumb meatbag".

Terretta•2m ago
"Carrot: The Musical" in the Carrot weather app, all about the AI and her developer meatbag, is on point.
intrasight•1h ago
Better than waiting 7.5 million years to have it tell you the answer is 42.
thinkingtoilet•46m ago
Maybe you should have asked a better question. :P
patapong•32m ago
What do you get if you multiply six by nine?
xeyownt•24m ago
54?
RuslanL•7m ago
67?
ctxc•3m ago
Tea
whyenot•15m ago
Should have used a better platform. So long and thanks for all the fish.
Aurornis•53m ago
I thought you were being sarcastic until I watched the video and saw those words slowly appear.

Emphasis on slowly.

amelius•36m ago
I mean, size says nothing; you could do it on a Pi Zero with sufficient storage attached.

So this post is like saying that, yes, an iPhone is Turing complete. Or at least not locked down so far that you're unable to do it.

zozbot234•22m ago
You need fast storage to make it worthwhile. PCIe 5.0 x4 is a reasonable minimum, or multiple PCIe 4.0 x4 drives accessed in parallel, but that is challenging since the individual expert-layers are usually small. Intel Optane drives are worth experimenting with for the latter (they are stuck on PCIe 4.0) purely for their good random-read properties (quite aside from their wearout resistance, which opens up use for KV-cache and even activations).
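(With no RAM cache, the ceiling is easy to estimate: token rate <= storage bandwidth / active bytes per token. A sketch; link rates are nominal and the phone figure is a guess:)

    active_bytes = 17e9 * 0.5  # 17B active params at an assumed 4-bit quant

    for name, gb_per_s in [("phone NVMe, ~3 GB/s (guess)", 3),
                           ("PCIe 4.0 x4, ~8 GB/s", 8),
                           ("PCIe 5.0 x4, ~16 GB/s", 16)]:
        print(f"{name}: <= {gb_per_s * 1e9 / active_bytes:.1f} tok/s")
    # ~0.35 tok/s at phone-class speeds lines up with the demo's 0.4 tk/s.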
r_lee•19m ago
I too thought you were joking

laughed when it slowly began to type that out

pier25•1h ago
https://xcancel.com/anemll/status/2035901335984611412
dang•40m ago
Added to toptext. Thanks!
_air•1h ago
This is awesome! How far away are we from a model of this capability level running at 100 t/s? It's unclear to me whether we'll see it from miniaturization first or from hardware gains.
Tade0•53m ago
The only way to have hardware reach this sort of efficiency is to embed the model in hardware.

This exists[0], but the chip in question is physically large and won't fit on a phone.

[0] https://www.anuragk.com/blog/posts/Taalas.html

intrasight•33m ago
I think for many reasons this will become the dominant paradigm for end user devices.

Moore's law will shrink it to 8mm soon. I think it'll be like a microSD card you plug in.

Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.

bigyabai•29m ago
One big bottleneck is SRAM cost. Even an 8b model would probably end up being hundreds of dollars to run locally on that kind of hardware. Especially unpalatable if the model quality keeps advancing year-by-year.

> Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.

It's amazing to me that people consider this to be more realistic than FAANG collaborating on a CUDA-killer. I guess Nvidia really does deserve their valuation.

intrasight•12m ago
> bottleneck is SRAM cost

Not for this approach

tclancy•24m ago
I think you're ignoring the inevitable march of progress. Phones will get big enough to hold it soon.
ottah•21m ago
That's actually pretty cool, but I'd hate to freeze a model's weights into silicon without an incredibly specific and broad use case.
originalvichy•50m ago
On smartphones? It's not worth it to run a model this size on a device like this. A smaller model fine-tuned for specific use cases is not only faster but possibly more accurate. All those gigs of unnecessary knowledge are useless for the tasks usually done on smartphones.
ottah•24m ago
Probably 15 to 20 years, if ever. This phone is only running this model in the technical sense of running, not in a practical sense. Ignore the 0.4 tk/s; that's nothing. What really makes this example bullshit is that there is no way the phone has enough RAM to hold any reasonable amount of context for that model. Context requirements are not insignificant, and as the context grows, the output gets even slower.

Realistically you need 300+ GB/s of fast-access memory on the accelerator, with enough capacity to fully hold the model at better than a 4-bit quant. That's at least 380GB of memory. You can gimmick a demo like this with an SSD, but the SSD is just not fast enough to meet the minimum specs for anything more than showing off a neat trick on Twitter.

The only hope for handheld execution of a practical and capable AI model is both an algorithmic breakthrough that does way more with less and custom silicon designed for running that type of model. The transformer architecture is neat, but it's just not up to that task, and I doubt anyone really wants to build silicon for it.
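(For the context point, the KV cache grows linearly with context: roughly 2 (K and V) x layers x kv_heads x head_dim x bytes per element, per token. A sketch with placeholder config values, not Qwen3.5's published ones:)

    layers, kv_heads, head_dim, bytes_per_elem = 94, 8, 128, 2  # fp16, placeholder config

    def kv_cache_gb(context_len):
        return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

    for ctx in (4_096, 32_768, 131_072):
        print(f"{ctx:>7} tokens: {kv_cache_gb(ctx):6.1f} GB")
    # Even ~32k tokens of context would need ~25 GB under these assumptions,
    # far beyond a 12 GB phone.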

russellbeattie•30m ago
I have some macro opinions about Apple - not sure if I'm correct, but tell me what you think.

Apple has always seen RAM as an economic advantage for their platform: make the development effort to ensure that the OS and apps work well with minimal memory, and save billions every year in hardware costs. In 2026, iPhones still come with 8GB of RAM; the Pro/Max come with 12GB.

The problem is that AI (ML/LLM training and inference) is an area where you can't get around the need for copious amounts of fast working memory. (Thus the critical shortage of RAM at the moment, as AI data centers consume as many memory chips as possible.)

Unless there's something I don't know (which is more than possible) Apple can't code their way around this problem, nor create specialized SoCs with ML cores that obviate the need for lots and lots of RAM.

So it's going to be interesting to see whether they accept this reality and we start seeing future iPhones with 16GB, 32GB, or more as standard in order to make AI performant. And whether they give up on adding AI to the billions of iPhones with minimal RAM already out there.

As a side note, 8GB of RAM hasn't been enough for a decade. It prevents basic tasks like keeping web tabs live in the background. My pet peeve is having just a few websites open and having a page refresh when swapping between them because of aggressive memory management.

To me, Apple's obvious strength is pushing AI to the edge as much as possible. While other companies are investing in massive data centers which will have millions of chips that will be outdated within the next couple years, Apple will be able to incrementally improve their ML/AI features by running on the latest and greatest chips every year. Apple has a huge advantage in that they can design their chips with a mega high speed bus, which is just as important as the quantity of RAM.

But all that depends on Apple's willingness to accept that RAM isn't an area they can skimp on any more, and I'm not sure they will.

Sorry for the brain dump. I'd love to be educated on this in case I'm totally off base.

ottah•14m ago
Possibly this just isn't the generation of hardware to solve this problem in? We're, what, three or four years in at most, and barely two into AI-assisted development being practical. I wouldn't want to be the first mover here, and I don't know if it's a good point in history to try to solve the problem. Everything we're doing right now with AI, we will likely not be doing in five years. If I were running a company like Apple, I'd just sit on the problem until the technology stabilizes and matures.
bigyabai•10m ago
If I were running a company like Apple, I'd have been working with Khronos to kill CUDA since yesterday. There are multiple trillions of dollars that could be Apple's if they sign CUDA drivers on macOS, or create a CUDA-compatible layer. Instead, Apple is spinning their wheels and promoting nothingburger technology like the NPU and MPS.

It's not like Apple's GPU designs are world-class anyways, they're basically neck-and-neck with AMD for raster efficiency. Except unlike AMD, Apple has all the resources in the world to compete with Nvidia and simply chooses to sit on their ass.

zozbot234•3m ago
CUDA is not the real issue: AMD's HIP offers source-level compatibility with CUDA code, and ZLUDA even provides raw binary compatibility. Nvidia GPUs really are quite good, and the projected advantages of going multi-vendor just aren't worth the hassle given the amount of architecture-specificity GPUs are going to have.
zozbot234•10m ago
RAM is just too expensive. We need to bring back non-DRAM persistent memory that doesn't have the wearout issues of NAND.
dv_dt•12m ago
CPU, memory, storage, and time tradeoffs, rediscovered by AI model developers. There is something new here, though: add the GPU to the trade space.