frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Japan's New Care Workers: Bodybuilders, Wrestlers and MMA Fighters

https://www.nytimes.com/2026/04/25/world/asia/japan-care-workers-bodybuilders-sumo-mma.html
1•danso•3m ago•1 comment

The Fall of the Theorem Economy

https://davidbessis.substack.com/p/the-fall-of-the-theorem-economy
1•cubefox•3m ago•0 comments

Surprising Signs of an Atmosphere Around a Tiny World, Billions of Miles Away

https://www.nytimes.com/2026/05/07/science/plutino-atmosphere-astronomy-pluto.html
1•lxm•12m ago•0 comments

Energy Prices Are Driving Demand for Solar Panels and Heat Pumps

https://www.nytimes.com/2026/05/08/business/europe-solar-panels-iran-war.html
1•lxm•13m ago•0 comments

Catch breaking API changes before merge

https://ImpactGuard.dev
1•dclavijo•17m ago•0 comments

Challenging the Way We Pedal

https://hackaday.com/2026/05/09/challenging-the-way-we-pedal/
1•lxm•21m ago•0 comments

Mariculture Systems approved to begin construction of Portugal aquaculture facility

https://www.seafoodsource.com/news/aquaculture/mariculture-systems-approved-to-begin-the-construc...
1•mooreds•21m ago•0 comments

How we know if our agent is right

https://www.mendral.com/blog/how-we-know-if-our-agent-is-right
2•shad42•23m ago•0 comments

A Preview of the Future

https://unsung.aresluna.org/a-preview-of-the-future/
2•zdw•26m ago•0 comments

Make America AI Ready: Strengths, Weaknesses, and Recommendations

https://blog.citp.princeton.edu/2026/05/05/make-america-ai-ready-strengths-weaknesses-and-recomme...
2•Kye•29m ago•0 comments

Bonsai of the Imperial Palace [video]

https://www.youtube.com/watch?v=HXoECYXr_Bk
1•tkgally•31m ago•0 comments

Diversity as the Bottleneck in Self-Play

https://ivison.id.au/2026/05/06/self-play.html
1•jxmorris12•31m ago•0 comments

Learning on the Shop floor

https://twitter.com/tobi/status/2053121182044451016
1•jmacd•33m ago•0 comments

New map shows where electric truck charging is scaling

https://electrek.co/2026/05/08/new-map-electric-truck-charging-is-scaling/
2•Bender•37m ago•0 comments

¡Hola, soy DORA. Why hasn't AI improved my metrics?

https://www.vaines.org/posts/2026-05-09-why-hasnt-ai-improved-my-metrics/
1•gpi•39m ago•0 comments

UK wants fresh fingerprints on £300M biometrics platform

https://www.theregister.com/public-sector/2026/05/09/uk-wants-fresh-fingerprints-on-300m-biometri...
1•Bender•46m ago•0 comments

The new Wild West of AI kids' toys

https://www.wired.com/story/the-new-wild-west-of-ai-kids-toys/
1•Bender•47m ago•0 comments

AI Productivity Fails

https://blog.sshh.io/p/how-ai-productivity-fails
3•sshh12•57m ago•0 comments

You Need AI That Reduces Maintenance Costs

https://www.jamesshore.com/v2/blog/2026/you-need-ai-that-reduces-your-maintenance-costs
4•cratermoon•1h ago•0 comments

PS3 Emulator Devs Politely Ask That People Stop Flooding It with AI PRs

https://kotaku.com/playstation-3-emulator-devs-politely-ask-that-people-stop-flooding-it-with-ai-...
37•stalfosknight•1h ago•9 comments

Usein

1•USEIN•1h ago•0 comments

Rep. Crane Introduces Legislation to Pause and Reform the Broken H-1B Visa

https://crane.house.gov/2026/04/22/rep-crane-introduces-legislation-to-pause-and-reform-the-broke...
5•rawgabbit•1h ago•1 comment

Zero-native by Vercel: Build tiny desktop and mobile apps with Zig and web UI

https://github.com/vercel-labs/zero-native
1•maxloh•1h ago•0 comments

Antikythera Mechanism (oldest known analogue computer)

https://www.historyofinformation.com/detail.php?id=120
3•p0u4a•1h ago•0 comments

Show HN: Gawk Dev – live feed tracking what's happening across AI tools

https://gawk.dev
1•Srinathprasanna•1h ago•0 comments

You can have your composer.lock and not make others eat it too

https://kevinullyott.com/blog/2026-05-05-composer-lock-gitattributes/
1•orrison•1h ago•0 comments

Riding the D in Los Angeles: city hopes new subway stations will be game changer

https://www.theguardian.com/us-news/2026/may/09/los-angeles-subway-public-transportation
6•raybb•1h ago•0 comments

Running local models on an M4 with 24GB memory

https://jola.dev/posts/running-local-models-on-m4
45•shintoist•1h ago•29 comments

The Mythology of Rice and Beans

https://economistwritingeveryday.com/2024/12/13/the-mythology-of-rice-and-beans/
1•ksymph•1h ago•0 comments

How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?

https://dunkels.com/adam/claude-user-space-ip-stack-ping/
2•adunk•1h ago•0 comments

Running local models on an M4 with 24GB memory

https://jola.dev/posts/running-local-models-on-m4
42•shintoist•1h ago

Comments

sbassi•1h ago
A useful piece of data for this setup would be how many tokens/sec it generates.
JBorrow•59m ago
It’s stated in TFA
NDlurker•49m ago
You can't expect someone to read 4 paragraphs into an article before commenting
kennywinker•39m ago
@grok is this true?
DrBenCarson•30m ago
Sorry, @grok is offline after declaring himself MechaMussolini earlier today
NBJack•44m ago
I'm puzzled. The M4, as far as I know, doesn't have 24GB. Did the author mean an M40?
spoonyvoid7•42m ago
M4 = M4 Macbook Pro
teaearlgraycold•24m ago
Or Air
sertsa•41m ago
M4 Mac Mini w/24GB sitting right here on my desk.
tra3•41m ago
There’s definitely an option with 24 gigs of ram: https://support.apple.com/en-ca/121552
canpan•38m ago
Recent models (Qwen 3.6 and Gemma) can really do coding locally. Feels like SOTA from maybe a year ago? But you would want about 32-40GB total memory. 24GB is just a bit short of that. A gaming PC with 16GB graphics card and 32GB RAM brings you very close to a usable coding system.
DrBenCarson•31m ago
How are you using that RAM with the GPU?
canpan•27m ago
Llama.cpp with automatic offload to main memory. You can also use Ollama, it is easier, but slower.
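The memory sizing in this sub-thread can be sanity-checked with back-of-the-envelope arithmetic: a quantized model's weights take roughly params × bits-per-weight / 8 bytes, and whatever doesn't fit in VRAM spills to system RAM via offload. A rough sketch; the bit width, overhead figure, and model size below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope memory estimate for a quantized local coding model.
# All constants here are illustrative assumptions, not measured values.

def weights_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Weight storage in GB: params * bits / 8."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

model = weights_gb(30, 4.5)   # ~30B model at ~4.5 bits/weight (Q4-style quant)
kv_and_overhead = 6.0         # assumed GB for KV cache, buffers, OS headroom

total = model + kv_and_overhead
vram = 16                     # e.g. a 16 GB gaming GPU
offloaded_to_ram = max(0.0, total - vram)

print(f"~{total:.0f} GB total; ~{offloaded_to_ram:.0f} GB spills to system RAM on a {vram} GB GPU")
```

Under those assumptions a ~30B model needs roughly 23 GB in total, so a 16 GB card spills about 7 GB to system RAM, consistent with the claim that a 16 GB GPU plus 32 GB of RAM gets you close to a usable coding setup while 24 GB of unified memory is a bit tight.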
ai_fry_ur_brain•20m ago
"Coding system" "can really do coding locally"

Vibe coders out here thinking all software development is solved because they made an (ugly and unoriginal) dashboard for their SaaS clone and a single-column landing page with a 3x3 feature-card grid that's identical to every other vibe coder's "startup"

sourc3•34m ago
I am running a quantized Qwen 3.6 9B model on my M4 Pro 48GB and it is barely useful for some basic pi.dev/cc-driven development. I think 128GB desktops are the sweet spot for actually getting meaningful work done. However, getting your hands on one of these machines is difficult at the moment.

As much fun as it is to run these things locally don’t forget that your time is not free. I am slowly migrating my use cases to openrouter and run the largest qwen model for < $2-3/day with serious use for personal projects.

hparadiz•30m ago
How does it (the openrouter version) compare to ChatGPT 5.5 or Claude Opus 4.6?
sourc3•13m ago
Good enough. It gets 60-70% of the work I need done for a lot less $ (keep in mind I am using these for personal projects that don't generate revenue). If I was using it with the hopes of making money, I think I would just use Codex at this point.
carbocation•26m ago
Was the choice of such a small model driven by a desire for high tok/sec? I ask because an m4 pro 48gb machine can run larger models (if model intelligence is the thing that would make it more useful).
sourc3•16m ago
Yes that was my goal. Also noticed a huge performance gain going from ollama to mlx. Your mileage may vary.
elij•19m ago
I'm using the 30B MoE model on the same spec with 65k tokens as a sub-agent with tooling and it absolutely writes decent code. The dense 9B, I agree, wasn't great.
sjones671•10m ago
Thanks for saying this. There's so much nonsense out there online about local models being better than Opus 4.7 and the like. It's just not true for regular users.

I have a brand new M5 MacBook Pro - top end with all the specs and I've tried local models and they're barely functional.

BoredPositron•4m ago
Use the small models for small tasks: CLI autocomplete, file sorting, small scripts, config files, setting up tooling, grammar, simple translations. There is so much use in them.
nu11ptr•25m ago
Still trying to understand if a Macbook Pro M5 Max with 128GB is likely going to be able to run coding models well enough that I can cancel my Codex, or even go down to the $20/month plan.
guessmyname•11m ago
A 128GiB MacBook Pro in Canada is what, north of CAD $11k after tax? That’s around USD $7k. At $20/month for a cloud AI subscription, you’re looking at almost 30 years of service for the same money.

How long do people realistically expect a laptop to stay competitive with SOTA local models? Especially in a space where model sizes, context windows, and inference requirements keep moving every year.

And even if the hardware lasts, the local experience usually doesn’t. A heavily quantized local model running at tolerable speeds on consumer hardware is still nowhere near frontier hosted models in reasoning, coding, multimodal capability, tool use, or reliability.

The economics just don’t make sense to me unless you specifically need offline inference, privacy guarantees, or low latency for a niche workflow. Otherwise you’re tying up $10k upfront to run an approximation of what you can already access through a subscription that continuously improves over time.

You could literally put the difference into index funds and probably cover the subscription indefinitely from the returns alone, even accounting for gradual price increases.
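The break-even arithmetic in the comment above is easy to check. A quick sketch using the commenter's own figures (USD $7k hardware, $20/month subscription); the 4% real return for the index-fund comparison is an illustrative assumption:

```python
# Break-even arithmetic: local hardware vs. a hosted subscription.
# Hardware and subscription figures are from the comment; the 4% real
# annual return is an illustrative assumption.

hardware_usd = 7000
subscription_per_month = 20

months_covered = hardware_usd / subscription_per_month
print(f"{months_covered:.0f} months = {months_covered / 12:.0f} years of subscription")

# Alternatively, invest the $7k and spend only the returns:
annual_return = 0.04
monthly_yield = hardware_usd * annual_return / 12
print(f"~${monthly_yield:.2f}/month from returns vs. ${subscription_per_month}/month subscription")
```

That works out to roughly 29 years of subscription for the hardware price, and about $23/month in returns, which is indeed enough to cover the $20/month plan indefinitely under the assumed rate.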

rtpg•22m ago
What kinda harness do people use with these local models? I am quite happy with the Claude Code permission model and interface in general for coding stuff (For chat-y interfaces I have no real opinion)
nl•20m ago
I think it's useful to be realistic about what you can do with a local model, especially something as small as the 9B the author is using. A 9B model is around the level of Sonnet 3.6 - it can do autocomplete and small functions but it loses track trying to understand large problems.

But they are interesting and fun to play with! I do a LOT of work on local agent harnesses etc, mostly for fun.

My current project is a zero install agent: https://gemma-agent-explainer.nicklothian.com/ - Python, SQL and React all run completely in browser. Gemma E4B is recommended for the best experience!

This is under heavy development, needs Chrome for both HTML5 Filesystem API support and LiteRT (although most Chromium based browsers can be made to work with it)

It's different to most agents because it is zero install: the model runs in the browser using LiteRT/LiteLLM (which gives better performance than Transformers.js), and Filesystem API gives it optional sandbox access to a directory to read from.

It is self documenting - you can ask questions like "How is the system prompt used" in the live help pane and it has access to its own source code.

There's quite a lot there: press "Tour" to see it all.

Will be open source next week.

ai_fry_ur_brain•18m ago
Local model evangelists are the equivalent of toddlers playing with the velcro on their shoes and being endlessly entertained.

I don't mean this about you; you seem to realize it's mostly useless. But most of the people on HN act like all software development can be done by a local model and the end of SOTA is around the corner.

nl•4m ago
I think knowledge is power.

I think that the more people who try local models (especially the larger ones) the better.

I sometimes get the impression that many people claiming that local models are as good as frontier models work in "token poor" environments. If you can't build large-scale programs using at least Opus 4.5+ then it's difficult to compare. They compare something like Qwen 27B with Sonnet and see that it is nearly as good, but miss that the frontier models are a lot better.

That knowledge is power, too.

I personally can help make local models more accessible. I can't make Opus cheaper.

soganess•3m ago
Getting so close to good!

I consider Gemma 4 31B (dense / no MoE) the new baseline for local models. It's obviously worse than the frontier models, but it feels less like a science experiment than any previous local model I've run, including GPT OSS 120B and Nemotron Super 120B.

On my M5 Max with 128 GB of RAM and the full 256K context window, I see RAM use spike to about 70 GB, with something like 14 GB of system overhead. A 64 GB Panther Lake machine with the full Arc B390, or a 48 GB Snapdragon X2 Elite machine, could probably run it with a 128K to 256K context window.
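The ~70 GB spike at a 256K context is consistent with how a KV cache scales linearly with context length. A rough estimate; the layer count, KV-head count, and head dimension below are assumed values, since the model's actual architecture isn't given in the thread:

```python
# Rough KV-cache + weights estimate for a ~31B dense model at 256K context.
# Layer/head counts and bit widths are illustrative assumptions, not the
# model's real configuration.

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """K and V tensors per layer: 2 * heads * head_dim * context, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 1e9

weights = 31e9 * 4.5 / 8 / 1e9              # ~31B params at ~4.5 bits/weight
cache = kv_cache_gb(48, 8, 128, 256 * 1024)  # assumed GQA config, fp16 cache

print(f"~{weights:.0f} GB weights + ~{cache:.0f} GB KV cache = ~{weights + cache:.0f} GB")
```

With those assumptions the KV cache alone comes to ~52 GB at fp16, dwarfing the ~17 GB of quantized weights and landing near the observed ~70 GB, which is why halving the context window makes such machines viable at 48-64 GB.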

Even a few years ago, seeing this kind of performance on a mainstream-ish/plus configuration would have seemed like a pipe dream.