frontpage.

ETH Zurich and EPFL to release an LLM developed on public infrastructure

https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html
218•andy99•3h ago•29 comments

jank is C++

https://jank-lang.org/blog/2025-07-11-jank-is-cpp/
141•Jeaye•4h ago•45 comments

OpenAI's Windsurf deal is off – and its CEO is going to Google

https://www.theverge.com/openai/705999/google-windsurf-ceo-openai
56•rcchen•23m ago•19 comments

Upgrading an M4 Pro Mac mini's storage for half the price

https://www.jeffgeerling.com/blog/2025/upgrading-m4-pro-mac-minis-storage-half-price
256•speckx•7h ago•161 comments

Andrew Ng: Building Faster with AI [video]

https://www.youtube.com/watch?v=RNJCfif1dPY
115•sandslash•1d ago•32 comments

Bill Atkinson's psychedelic user interface

https://patternproject.substack.com/p/from-the-mac-to-the-mystical-bill
336•cainxinth•10h ago•181 comments

Astronomers race to study interstellar interloper

https://www.science.org/content/article/astronomers-race-study-interstellar-interloper
82•bikenaga•6h ago•43 comments

Activeloop (YC S18) Is Hiring AI Search and Python Back End Engineers (Onsite, MV)

https://careers.activeloop.ai/
1•davidbuniat•58m ago

Show HN: RULER – Easily apply RL to any agent

https://openpipe.ai/blog/ruler
32•kcorbitt•4h ago•4 comments

Lead pigment in turmeric is the culprit in a global poisoning mystery (2024)

https://www.npr.org/sections/goats-and-soda/2024/09/23/nx-s1-5011028/detectives-mystery-lead-poisoning-new-york-bangladesh
253•perihelions•6h ago•127 comments

Repaste Your MacBook

https://christianselig.com/2025/07/repaste-macbook/
137•speckx•9h ago•85 comments

Pa. House passes 'click-to-cancel' subscription bills

https://www.pennlive.com/news/2025/07/pa-house-passes-click-to-cancel-subscription-bills-as-court-throws-out-federal-rule.html
171•bikenaga•5h ago•60 comments

I'm more proud of these 128 kilobytes than anything I've built since

https://medium.com/@mikehall314/im-more-proud-of-these-128-kilobytes-than-anything-i-ve-built-since-53706cfbdc18
66•mikehall314•2h ago•18 comments

At Least 13 People Died by Suicide Amid U.K. Post Office Scandal, Report Says

https://www.nytimes.com/2025/07/10/world/europe/uk-post-office-scandal-report.html
499•xbryanx•10h ago•428 comments

In a First, Solar Was Europe's Biggest Source of Power Last Month

https://e360.yale.edu/digest/solar-biggest-power-source-europe-june-2025
157•Brajeshwar•5h ago•93 comments

Monorail – Turn CSS animations into interactive SVG graphs

https://muffinman.io/monorail/
16•stanko•3d ago•2 comments

Air India Flight 171 Accident Preliminary Report [pdf]

https://aaib.gov.in/What%27s%20New%20Assets/Preliminary%20Report%20VT-ANB.pdf
29•ummonk•1h ago•23 comments

Show HN: Pangolin – Open source alternative to Cloudflare Tunnels

https://github.com/fosrl/pangolin
434•miloschwartz•1d ago•97 comments

LLM Inference Handbook

https://bentoml.com/llm/
278•djhu9•19h ago•14 comments

OpenFront: Realtime Risk-like multiplayer game in the browser

https://openfront.io/
175•thombles•15h ago•44 comments

The ChompSaw: A benchtop power tool that's safe for kids to use

https://www.core77.com/posts/137602/The-ChompSaw-A-Benchtop-Power-Tool-Thats-Safe-for-Kids-to-Use
271•surprisetalk•4d ago•187 comments

Show HN: Vibe Kanban – Kanban board to manage your AI coding agents

https://github.com/BloopAI/vibe-kanban
137•louiskw•6h ago•90 comments

Google nerfs Pixel 6a batteries following fire hazard

https://arstechnica.com/gadgets/2025/07/a-mess-of-its-own-making-google-nerfs-second-pixel-phone-battery-this-year/
27•fffrantz•3h ago•28 comments

Overtourism in Japan, and how it hurts small businesses

https://craigmod.com/ridgeline/210/
172•speckx•8h ago•329 comments

Introduction to Digital Filters

https://ccrma.stanford.edu/~jos/filters/
3•ofalkaed•2h ago•0 comments

The day someone created 184 billion Bitcoin (2020)

https://decrypt.co/39750/184-billion-bitcoin-anonymous-creator
76•lawrenceyan•17h ago•82 comments

Postgres LISTEN/NOTIFY does not scale

https://www.recall.ai/blog/postgres-listen-notify-does-not-scale
545•davidgu•4d ago•277 comments

Recovering from AI addiction

https://internetaddictsanonymous.org/internet-and-technology-addiction/signs-of-an-addiction-to-ai/
232•pera•10h ago•252 comments

AI agent benchmarks are broken

https://ddkang.substack.com/p/ai-agent-benchmarks-are-broken
167•neehao•8h ago•78 comments

Batch Mode in the Gemini API: Process More for Less

https://developers.googleblog.com/en/scale-your-ai-workloads-batch-mode-gemini-api/
157•xnx•4d ago•52 comments

Kimi K2

https://twitter.com/Kimi_Moonshot/status/1943687594560332025
113•c4pt0r•6h ago

Comments

gs17•5h ago
> 1T total / 32B active MoE model

Is this the largest open-weight model?

bigeagle•5h ago
I believe so.

Grok-1 is 314B, DeepSeek-V3 is 671B, and recent open-weights models are around 70B-300B.

simonw•5h ago
Big release - the model weights at https://huggingface.co/moonshotai/Kimi-K2-Instruct are 958.52 GB

c4pt0r•5h ago
Paired with programming tools like Claude Code, it could be a low-cost, open-source replacement for Sonnet.

kkzz99•5h ago
According to the benchmarks it's closer to Opus, but I'd venture that's primarily for English and Chinese.

martin_•5h ago
How do you run a 1T-param model at low cost?

maven29•5h ago
32B active parameters with a single shared expert.

JustFinishedBSG•5h ago
This doesn't change the VRAM usage, only the compute requirements.

maven29•5h ago
You can probably run this on CPU if you have a 4090D for prompt processing, since 1TB of DDR4 only comes out to around $600.

For GPU inference at scale, I think token-level batching is used.
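
A back-of-envelope sketch of that setup (a minimal sketch in Python; the 8-bit weights and ~50 GB/s of effective DDR4 bandwidth are assumptions, not measurements):

    # Rough memory and speed estimate for CPU offload of a 1T-param MoE.
    # Generation is assumed memory-bandwidth bound: each token must read
    # the active parameters once.
    TOTAL_PARAMS = 1.0e12        # Kimi K2: ~1T total parameters
    ACTIVE_PARAMS = 32e9         # ~32B active per token (MoE)
    BYTES_PER_PARAM = 1.0        # assumed 8-bit quantization

    ram_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
    print(f"RAM to hold all weights: ~{ram_gb:.0f} GB")   # ~1000 GB

    DDR4_GBS = 50.0              # assumed effective bandwidth
    tokens_per_sec = DDR4_GBS / (ACTIVE_PARAMS * BYTES_PER_PARAM / 1e9)
    print(f"Rough generation rate: ~{tokens_per_sec:.1f} tokens/s")  # ~1.6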

t1amat•4h ago
With 32B active parameters it would be ridiculously slow at generation.
selfhoster11•2h ago
DDR3 workstation here - R1 generates at 1 token per second. In practice, this means that for complex queries, the speed of replying is closer to an email response than a chat message, but this is acceptable to me for confidential queries or queries where I need the model to be steerable. I can always hit the R1 API from a provider instead, if I want to.

Given that R1 uses 37B active parameters (compared to 32B for K2), K2 should be slightly faster than that - around 1.15 tokens/second.
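
Spelled out, that estimate is just inverse proportionality to the active parameter count (a naive sketch that ignores bandwidth and architecture differences):

    # Naive scaling: tokens/s taken as inversely proportional to active params.
    r1_active, k2_active = 37e9, 32e9
    r1_tokens_per_sec = 1.0                        # observed on the DDR3 rig
    k2_estimate = r1_tokens_per_sec * r1_active / k2_active
    print(f"~{k2_estimate:.2f} tokens/s for K2")   # ~1.16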

zackangelo•4h ago
Typically a combination of expert level parallelism and tensor level parallelism is used.

For the big MLP tensors they would be split across GPUs in a cluster. Then for the MoE parts you would spread the experts across the GPUs and route to them based on which experts are active (there would likely be more than one if the batch size is > 1).
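
A minimal sketch of the routing idea (illustrative PyTorch; TinyMoE and its sizes are invented for the example, and everything runs on one device rather than being sharded across GPUs as described above):

    import torch
    import torch.nn as nn

    class TinyMoE(nn.Module):
        def __init__(self, d_model=64, n_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)   # gating network
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model),
                              nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts))
            self.top_k = top_k

        def forward(self, x):                     # x: (n_tokens, d_model)
            weights, idx = torch.topk(self.router(x), self.top_k, dim=-1)
            weights = torch.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            # With expert parallelism, each expert would sit on its own
            # GPU and tokens would be routed to it over the network.
            for e, expert in enumerate(self.experts):
                hit = (idx == e)                  # (n_tokens, top_k) bool
                rows = hit.any(dim=-1).nonzero(as_tuple=True)[0]
                if rows.numel() == 0:
                    continue
                w = (weights * hit)[rows].sum(dim=-1, keepdim=True)
                out[rows] += w * expert(x[rows])  # weighted expert output
            return out

    print(TinyMoE()(torch.randn(10, 64)).shape)   # torch.Size([10, 64])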

selfhoster11•2h ago
It does not have to be VRAM, it could be system RAM, or weights streamed from SSD storage. Reportedly, the latter method achieves around 1 token per second on computers with 64 GB of system RAM.

R1 (and K2) is MoE, whereas Llama 3 is a dense model family. MoE actually makes these models practical to run on cheaper hardware. DeepSeek R1 is more comfortable for me than Llama 3 70B for exactly that reason - if it spills out of the GPU, you take a large performance hit.

If you need to spill into CPU inference, you'd much rather be multiplying a different 32B subset of the weights for every token than the same 70B (or more) every time, simply because the computation takes so long.

refulgentis•2h ago
The number of people who will be using it at 1 token/sec because there's no better option, and who have 64 GB of RAM, is vanishingly small.

IMHO it sets the local LLM community back when we lean on extreme quantization & streaming weights from disk to say something is possible*, because when people try it out, it turns out it's an awful experience.

* the implication being, anything is possible in that scenario

homarp•3m ago
An agentic loop can run all night long. It's just a different way of working: prepare your prompt queue, set it up, check the results in the morning, adjust. A "local vibe" run that takes 10 hours instead of 10 minutes still beats 10 days of manual coding on the side.
cyanf•5h ago
This is both the largest open-source model release thus far and the largest Muon training run.
wiradikusuma•3h ago
I've only started using Claude, Gemini, etc. in the last few months (I guess it comes with age; I'm no longer interested in trying the latest "tech"). I assume those are "non-agentic" models.

From reading articles online, "agentic" means like you have a "virtual" Virtual Assistant with "hands" that can google, open apps, etc, on their own.

Why not use existing "non-agentic" models and "orchestrate" them using LangChain, MCP, etc.? Why create a new breed of model?

I'm sorry if my questions sound silly. Following AI world is like following JavaScript world.

ozten•3h ago
It is not a silly question. The various flavors of LLM have issues with reliability. In software we expect five 9s; LLMs aren't even at one 9. Early on it was the reliability of writing JSON output. Then instruction following. Then tool use. Now it's "computer use" and orchestration.

Creating models for this specific problem domain will have a better chance at reliability, which is not a solved problem.

Jules is the Gemini coder that links to GitHub. Half the time it doesn't create a pull request; it forgets, and assumes I'll do some testing or something. It's wild.

simonw•3h ago
"Agentic" and "agent" can mean pretty much anything, there are a ton of different definitions out there.

When an LLM says it's "agentic" it usually means that it's been optimized for tool use. Pretty much all the big models (and most of the small ones) are designed for tool use these days, it's an incredibly valuable feature for a model to offer.

I don't think this new model is any more "agentic" than o3, o4-mini, Gemini 2.5 or Claude 4. All of those models are trained for tools, all of them are very competent at running tool calls in a loop to try to achieve a goal they have been given.

dcre•3h ago
Reasonable question, simple answer: "New breed of model" is overstating it — all these models for years have been fine-tuned using reinforcement learning on a variety of tasks, it's just that the set of tasks (and maybe the amount of RL) has changed over time to include more tool use tasks, and this has made them much, much better at the latter. The explosion of tools like Claude Code this year is driven by the models just being more effective at it. The orchestration external to the model you mention is what people did before this year and it did not work as well.
selfhoster11•2h ago
> I'm sorry if my questions sound silly. Following AI world is like following JavaScript world.

You are more right than you could possibly imagine.

TL;DR: "agentic" just means "can call tools it's been given access to, autonomously, and then access the output" combined with an infinite loop in which the model runs over and over (compared to a one-off interaction like you'd see in ChatGPT). MCP is essentially one of the methods to expose the tools to the model.

Is this something the models could do for a long while with a wrapper? Yup. "Agentic" is the current term for it, that's all. There's some hype around "agentic AI" that's unwarranted, but part of the reason for the hype is that models have become better at tool calling and using data in their context since the early days.
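
The whole pattern fits in a few lines (a minimal sketch; llm_chat is a hypothetical stand-in for a provider SDK call, and the message format is invented for the example):

    import json

    def get_weather(city: str) -> str:             # an example tool
        return json.dumps({"city": city, "temp_c": 21})

    TOOLS = {"get_weather": get_weather}

    def run_agent(user_prompt, llm_chat, max_steps=10):
        messages = [{"role": "user", "content": user_prompt}]
        for _ in range(max_steps):                  # the "agentic" loop
            reply = llm_chat(messages)              # one model call
            call = reply.get("tool_call")
            if call is None:                        # no tool requested: done
                return reply["content"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "assistant", "tool_call": call})
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})    # model sees the output
        return "step limit reached"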

simonw•3h ago
Pelican on a bicycle result: https://simonwillison.net/2025/Jul/11/kimi-k2/
_alex_•3h ago
wow!
ebiester•51m ago
At this point, they have to be training it. At what point will you start using something else?
MaxPock•3h ago
It would be hilarious if Zuck, with his billion-dollar poaching spree, failed to beat budget Chinese models.
physix•1h ago
That reminds me of a thought I had about the poachings.

The poaching was probably aimed more at hamstringing Meta's competition.

The disruption caused by those researchers leaving in droves is probably more severe than the benefit of having them on board. Unless they are gods, of course.

rfoo•59m ago
Wikipedia lists a FAIR alumnus as a cofounder of this "Moonshot AI". That probably makes it funnier.
aliljet•3h ago
If the SWE-bench results are to be believed... this looks best in class right now for a local LLM. To be fair, show me the guy who is running this locally...
selfhoster11•2h ago
It's challenging, but not impossible. With 2-bit quantisation, only around 250 gigabytes of RAM is required. It doesn't have to be VRAM either, and you can mix and match GPU and CPU inference.

In addition, some people on /r/localLlama are having success with streaming the weights off SSD storage at 1 token/second, which is about the rate I get for DeepSeek R1.
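
The 2-bit figure checks out on the back of an envelope (ignoring quantisation overhead such as scales, and the layers usually kept at higher precision):

    params = 1.0e12                  # ~1T total parameters
    bits_per_param = 2               # 2-bit quantisation
    print(params * bits_per_param / 8 / 1e9, "GB")   # 250.0 GB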

helloericsf•2h ago
How does it stack up against the new Grok 4 model?