frontpage.

DeepSeek v4

https://api-docs.deepseek.com/
478•impact_sy•3h ago•199 comments

Why I Write (1946)

https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/why-i-write/
107•RyanShook•3h ago•24 comments

GPT-5.5

https://openai.com/index/introducing-gpt-5-5/
1245•rd•12h ago•840 comments

An update on recent Claude Code quality reports

https://www.anthropic.com/engineering/april-23-postmortem
643•mfiguiere•12h ago•496 comments

Bitwarden CLI compromised in ongoing Checkmarx supply chain campaign

https://socket.dev/blog/bitwarden-cli-compromised
713•tosh•15h ago•349 comments

US special forces soldier arrested after allegedly winning $400k on Maduro raid

https://www.cnn.com/2026/04/23/politics/us-special-forces-soldier-arrested-maduro-raid-trade
125•nkrisc•8h ago•182 comments

Habitual coffee intake shapes the microbiome, modifies physiology and cognition

https://www.nature.com/articles/s41467-026-71264-8
64•scubakid•2h ago•26 comments

Show HN: Tolaria – Open-source macOS app to manage Markdown knowledge bases

https://github.com/refactoringhq/tolaria
165•lucaronin•8h ago•53 comments

Meta tells staff it will cut 10% of jobs

https://www.bloomberg.com/news/articles/2026-04-23/meta-tells-staff-it-will-cut-10-of-jobs-in-pus...
528•Vaslo•11h ago•500 comments

MeshCore development team splits over trademark dispute and AI-generated code

https://blog.meshcore.io/2026/04/23/the-split
194•wielebny•13h ago•104 comments

Familiarity is the enemy: On why Enterprise systems have failed for 60 years

https://felixbarbalet.com/familiarity-is-the-enemy/
5•adityaathalye•1h ago•1 comment

A quick look at Mythos run on Firefox: too much hype?

https://xark.es/b/mythos-firefox-150
50•leonidasv•2h ago•16 comments

Using the internet like it's 1999

https://joshblais.com/blog/using-the-internet-like-its-1999/
133•joshuablais•9h ago•87 comments

Ubuntu 26.04

https://lwn.net/Articles/1069399/
83•lxst•1h ago•29 comments

TorchTPU: Running PyTorch Natively on TPUs at Google Scale

https://developers.googleblog.com/torchtpu-running-pytorch-natively-on-tpus-at-google-scale/
106•mji•9h ago•4 comments

Your hex editor should color-code bytes

https://simonomi.dev/blog/color-code-your-bytes/
549•tobr•2d ago•150 comments

UK Biobank health data keeps ending up on GitHub

https://biobank.rocher.lc
100•Cynddl•16h ago•26 comments

Show HN: Agent Vault – Open-source credential proxy and vault for agents

https://github.com/Infisical/agent-vault
99•dangtony98•1d ago•32 comments

My phone replaced a brass plug

https://drobinin.com/posts/my-phone-replaced-a-brass-plug/
108•valzevul•13h ago•18 comments

Show HN: Honker – Postgres NOTIFY/LISTEN Semantics for SQLite

https://github.com/russellromney/honker
245•russellthehippo•18h ago•61 comments

A programmable watch you can actually wear

https://www.hackster.io/news/a-diy-watch-you-can-actually-wear-8f91c2dac682
165•sarusso•2d ago•80 comments

Incident with multiple GitHub services

https://www.githubstatus.com/incidents/myrbk7jvvs6p
232•bwannasek•13h ago•115 comments

Used La Marzocco machines are coveted by cafe owners and collectors

https://www.nytimes.com/2026/04/20/dining/la-marzocco-espresso-machine.html
58•mitchbob•3d ago•107 comments

Astronomers find the edge of the Milky Way

https://skyandtelescope.org/astronomy-news/astronomers-find-the-edge-of-the-milky-way/
107•bookofjoe•11h ago•23 comments

Alberta startup sells no-tech tractors for half price

https://wheelfront.com/this-alberta-startup-sells-no-tech-tractors-for-half-price/
2173•Kaibeezy•1d ago•741 comments

Writing a C Compiler, in Zig (2025)

https://ar-ms.me/thoughts/c-compiler-1-zig/
150•tosh•20h ago•42 comments

French government agency confirms breach as hacker offers to sell data

https://www.bleepingcomputer.com/news/security/french-govt-agency-confirms-breach-as-hacker-offer...
373•robtherobber•14h ago•123 comments

I am building a cloud

https://crawshaw.io/blog/building-a-cloud
1042•bumbledraven•1d ago•520 comments

Advanced Packaging Limits Come into Focus

https://semiengineering.com/advanced-packaging-limits-come-into-focus/
36•PaulHoule•2d ago•5 comments

I spent years trying to make CSS states predictable

https://tenphi.me/blog/why-i-spent-years-trying-to-make-css-states-predictable/
61•tenphi•17h ago•27 comments

DeepSeek v4

https://api-docs.deepseek.com/
468•impact_sy•3h ago
https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

Comments

luyu_wu•2h ago
For those who didn't check the page yet, it just links to the API docs being updated with the upcoming models, not the actual model release.
talim•2h ago
Weights are on Huggingface FWIW. https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/tree/main
cmrdporcupine•2h ago
My submission here https://news.ycombinator.com/item?id=47885014, made at the same time, was to the weights.

dang, the two should probably be merged, with that as the link

culi•1h ago
there's no pinging. Someone's gotta email dang
seanobannon•2h ago
Weights available here: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro
BoorishBears•59m ago
https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash-Base https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro-Base

And we got new base models, wonderful, truly wonderful

nthypes•2h ago
https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

Model was released and it's amazing. Frontier level (better than Opus 4.6) at a fraction of the cost.

sergiotapia•2h ago
The dragon awakes yet again!
kindkang2024•1h ago
There appears a flight of dragons without heads. Good fortune.

That's literally what the I Ching calls "good fortune."

Competition, when no single dragon monopolizes the sky, brings fortune for all.

rapind•2h ago
Pop?
onchainintel•2h ago
How does it compare to Opus 4.7? I've been immersed in 4.7 all week participating in the Anthropic Opus 4.7 hackathon and it's pretty impressive even if it's ravenous from a token perspective compared to 4.6
greenknight•2h ago
The thing is, it doesn't need to beat 4.7. It just needs to do somewhat well against it.

This is free... as in you can download it, run it on your systems and finetune it to be the way you want it to be.

johnmaguire•1h ago
... if you have 800 GB of VRAM free.
inventor7777•1h ago
I remember reading about some new frameworks that have been coming out to allow Macs to stream weights of huge models live from fast SSDs and produce quality output, albeit slowly. Apart from that... good luck finding that much available VRAM haha
p1esk•1h ago
Do you think a lot of people have “systems” to run a 1.6T model?
applfanboysbgon•1h ago
No, but businesses do. Being able to run quality LLMs without your business, or your business's private information, being held at the mercy of another corp has a lot of value.
choldstare•1h ago
Not really - on-prem LLM hosting is extremely labor- and capital-intensive
applfanboysbgon•1h ago
But can be, and is, done. I work for a bootstrapped startup that hosts a DeepSeek v3 retrain on our own GPUs. We are highly profitable. We're certainly not the only ones in the space, as I'm personally aware of several other startups hosting their own GLM or DeepSeek models.
forrestthewoods•1h ago
What type of system is needed to self host this? How much would it cost?
p1esk•1h ago
Depends on how fast you want it to be. I'm guessing a couple of $10k Mac Studio boxes could run it, but probably not fast enough to enjoy using it.
disiplus•1h ago
Depends how many users you have and what "production grade" means for you, but ~$500k gets you an 8x B200 machine.
fragmede•49m ago
One GB200 NVL72 from Nvidia would do it. $2-3 million, or so. If you're a corporation, say Walmart or PayPal, that's not out of the question.

If you want to go budget corporate, 7 x H200 is just barely going to run it, but all in, $300k ought to do it.

gloflo•44m ago
How many users can you serve with that?
CJefferson•57m ago
To me, the important thing isn't that I can run it; it's that I can pay someone else to run it. I'm finding Opus 4.7 seems weirdly broken compared to 4.6: it just doesn't understand my code, and it breaks it whenever I ask it to do anything.

Now, at the moment, I can still use 4.6, but eventually Anthropic are going to remove it, and when it's gone it will be gone forever. I'm planning on trying DeepSeek v4, because even if it's not quite as good, I know it will be available forever - I'll always be able to find someone to run it.

kelseyfrog•1h ago
What's the hardware cost of running it?
slashdave•1h ago
"if you have to ask..."
redox99•1h ago
Probably like 100 USD/hour
bbor•1h ago
I was curious, and some [intrepid soul](https://wavespeed.ai/blog/posts/deepseek-v4-gpu-vram-require...) did an analysis. Assuming you do everything perfectly and take full advantage of the model's MoE sparsity, it would take:

- To run at full precision: "16–24 H100s", giving us ~$400-600k upfront, or $8-12/h from [us-east-1](https://intuitionlabs.ai/articles/h100-rental-prices-cloud-c...).

- To run with "heavy quantization" (16 bits -> 8): "8xH100", giving us $200K upfront and $4/h.

- To run truly "locally"--i.e. in a house instead of a data center--you'd need four 4090s, one of the most powerful consumer GPUs available. Even that would clock in around $15k for the cards alone and ~$0.22/h for the electricity (in the US).

Truly an insane industry. This is a good reminder of why datacenter capex since 2023 has eclipsed the Manhattan Project, the Apollo program, and the US interstate system combined...

zargon•1h ago
That article is a total hallucination.

"671B total / 37B active"

"Full precision (BF16)"

And they claim they ran this non-existent model on vLLM and SGLang over a month and a half ago.

It's clickbait keyword slop filled in with V3 specs. Most of the web is slop like this now. Sigh.

onchainintel•1h ago
Completely agree - not suggesting it needs to, just genuinely curious. Love that it can be run locally though. Open source LLMs have been punching back pretty hard against proprietary cloud ones lately in terms of performance.
libraryofbabel•16m ago
> you can download it, run it on your systems

In theory, sure, but as others have pointed out, you need to spend half a million on GPUs just to get enough VRAM to fit a single instance of the model. And you'd better make sure your use case makes full 24/7 use of all that rapidly-depreciating hardware you just spent all your money on, otherwise your actual cost per token will be much higher than you think.

In practice you will get better value from just buying tokens from a third party whose business is hosting open weight models as efficiently as possible and who make full use of their hardware. Even with the small margin they charge on top you will still come out ahead.

rvz•1h ago
It is more than good enough and has effectively caught up with Opus 4.6 and GPT 5.4 according to the benchmarks.

It's about 2 months behind GPT 5.5 and Opus 4.7.

As long as it is cheap for hosting providers to run and it is frontier level, it is a very competitive model and impressive against the others. I give it 2 years maximum before consumer hardware runs quantized 500B-800B models locally.

It should be obvious now why Anthropic really doesn't want you to run local models on your machine.

colordrops•1h ago
What's going to change in 2 years that would allow users to run 500B-800B parameter models on consumer hardware?
DiscourseFan•1h ago
I think it's just an estimate
indigodaddy•38m ago
But the question remains
snovv_crash•1h ago
Given the capability of Qwen3.6 27B, I think in 2 years consumers will be running models of this capability on current hardware.
deaux•1h ago
Vibes > benchmarks. And it's all so task-specific. Gemini 3 has scored very well on benchmarks for a long time but is poor at agentic use cases. A lot of people prefer Opus 4.6 to 4.7 for coding despite the benchmarks - much more than I've seen before (4.5->4.6, 4->4.5).

Doesn't mean DeepSeek v4 isn't great; just that benchmarks alone aren't enough to tell.

doctoboggan•2h ago
Is it honestly better than Opus 4.6 or just benchmaxxed? Have you done any coding with an agent harness using it?

If its coding abilities are better than Claude Code with Opus 4.6 then I will definitely be switching to this model.

madagang•1h ago
Their Chinese announcement says that, based on internal employee testing, it is not as good as Opus 4.6 Thinking, but is slightly better than Opus 4.6 without Thinking enabled.
mchusma•1h ago
I appreciate this, makes me trust it more than benchmarks.
deaux•1h ago
That's super interesting. Isn't DeepSeek, in China, banned from using Anthropic models? Yet here they are comparing against them in internal employee testing.
renticulous•3m ago
They use VPNs to access it. Even Google DeepMind uses Anthropic. There was a fight within Google over why only DeepMind is allowed to use Claude while the rest of Google can't.
ibic•31m ago
In case people wonder where the announcement is (you can easily translate it via browser if you don't read Chinese): https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg

It's still a "preview" version atm.

bokkies•23m ago
Apparently GLM 5.1 and the latest Qwen Coder are as good as Opus 4.6 on benchmarks. So I tried both seriously for a week (GLM Pro using CC, and Qwen using Qwen companion). Thought I could save $80 a month. Unfortunately, after 2 days I had switched back to Max. The speed (slower on both, though Qwen is the faster of the two), the errors (stupid layout mistakes, inserting 2 footers then refusing to remove one, not seeing obvious problems in screenshots, major f-ups of functionality), not being able to view URLs properly, etc. I'll give DeepSeek a go but I suspect it will be similar. The model is only half the story. Also been testing GPT 5.4 with Codex and it is very nearly as good as CC... better on long-running tasks in the background. Not keen on the ChatGPT Codex "personality" so I'll stick with CC for the most part.
NitpickLawyer•1h ago
> (better than Opus 4.6)

There we go again :) It seems we have a release each day claiming that. What's weird is that even DeepSeek doesn't claim it's better than Opus w/ thinking. No idea why you'd say that, but anyway.

DSv3 was a good model. Not benchmaxxed at all; it was pretty stable where it was. It did well on tasks that were OOD for benchmarks, even if it was behind SotA.

This seems to be similar. Behind SotA, but not by much, and at a much lower price. The big one is being served (by DS themselves for now; more providers will come and we'll see the median price) at $1.74 in / $3.48 out / $0.14 cache per million tokens. Really cheap for what it offers.

The small one is at $0.14 in / $0.28 out / $0.028 cache, which is pretty much "too cheap to meter". This will be what people can realistically run "at home", and it should be a contender for things like Haiku/Gemini Flash, if it can deliver at those levels.

slopinthebag•50m ago
Anthropic fans would claim God itself is behind Opus by 3-6 months and then willingly be abused by Boris and one of his gaslighting tweets.

LMAO

NitpickLawyer•40m ago
> Anthropic fans ...

I have no idea why you'd think that, but this is straight from their announcement here (https://mp.weixin.qq.com/s/8bxXqS2R8Fx5-1TLDBiEDg):

> According to evaluation feedback, its user experience is better than Sonnet 4.5, and its delivery quality is close to Opus 4.6's non-thinking mode, but there is still a certain gap compared to Opus 4.6's thinking mode.

This is the model creators saying it, not me.

0xbadcafebee•1h ago
I don't think we need to compare models to Opus anymore. Opus users don't care about other models, as they're convinced Opus will be better forever. And non-Opus users don't want the expense, lock-in or limits.

As a non-Opus user, I'll continue to use the cheapest, fastest models that get my job done, which (for me anyway) is still MiniMax M2.5. I occasionally try a newer, more expensive model, and I get the same results. I have a feeling we might all be getting swindled by the whole AI industry, with benchmarks that just make it look like everything's improving.

kmarc•59m ago
This resonates with me a lot.

I do some stuff with Gemini Flash and Aider, but mostly because I want to avoid locking myself into a walled garden of models, UIs, and companies

post-it•57m ago
What do you run these on? I've gotten comfortable with Claude but if folks are getting Opus performance for cheaper I'll switch.
slopinthebag•51m ago
Try Charm Crush first, it's a native binary. If it's unbearable, try opencode, just with the knowledge your system will probably be pwned soon since it's JS + NPM + vibe coding + some of the most insufferable devs in the industry behind that product.

If you're feeling frisky, Zed has a decent agent harness and a very good editor.

ind-igo•55m ago
Agree with your assessment. I think after models reached around Opus 4.5 level, they've been almost indistinguishable for most tasks. Intelligence has been commoditized; what's important now is the workflows, prompting, and context management. And that is unique to each model.
versteegen•50m ago
Which model's best depends on how you use it. There's a huge difference in behaviour between Claude and GPT and other models which makes some poor substitutes for others in certain use cases. I think the GPT models are a bad substitute for Claude ones for tasks such as pair-programming (where you want to see the CoT and have immediate responses) and writing code that you actually want to read and edit yourself, as opposed to just letting GPT run in the background to produce working code that you won't inspect. Yes, GPT 5.4 is cheap and brilliant but very black-box and often very slow IME. GPT-5.4 still seems to behave the same as 5.1, which includes problems like: doesn't show useful thoughts, can think for half an hour, says "Preparing the patch now" then thinks for another 20 min, gives no impression of what it's doing, reads microscopic parts of source files and misses context, will do anything to pass the tests including patching libraries...
sandGorgon•15m ago
Actually this is not the reason; the harness is significantly better. There is no harness comparable to Claude Code, with skills, etc.

Opencode was getting there, but it seems the founders lost interest. Pi could be it, but it's very focused on OpenClaw. Even Codex CLI doesn't have all of it.

Which harness works well with DeepSeek v4?

avereveard•5m ago
eh idk. until yesterday opus was the one that got spatial reasoning right (had to do some head pose stuff, neither glm 5.1 nor codex 5.3 could "get" it) and codex 5.3 was my champion at making UX work.

So while I agree mixed model is the way to go, opus is still my workhorse.

sandos•3m ago
Is Opus nerfed somehow in Copilot? I've tried it numerous times and it has never really wowed me. They seem to have awfully small context windows, but still. It's mostly the reasoning that has been off.

Codex is just so much better, or the general GPT models.

bbor•1h ago
For the curious, I did some napkin math on their posted benchmarks, and it racks up a net 20.1 percentage point difference across the 20 metrics where both were scored, for an average improvement of about 2% (non-pp). I really can't decide if that's mind-blowing or boring.

Claude 4.6 was almost 10pp better at answering questions from long contexts ("corpuses" in CorpusQA and "multiround conversations" in MRCR), while DSv4 was a staggering 14pp better at one math challenge (IMOAnswerBench) and 12pp better at basic Q&A (SimpleQA-Verified).
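
A minimal sketch of that napkin math in Python, with placeholder scores standing in for the published tables (these are not the real numbers):

    # Average the per-benchmark percentage-point deltas, as described above.
    # Scores below are placeholders -- swap in the published tables to reproduce.
    dsv4   = {"IMOAnswerBench": 81.0, "SimpleQA-Verified": 62.0, "MRCR": 55.0}
    opus46 = {"IMOAnswerBench": 67.0, "SimpleQA-Verified": 50.0, "MRCR": 65.0}

    deltas = [dsv4[k] - opus46[k] for k in dsv4]  # positive pp favors DSv4
    print(f"net {sum(deltas):+.1f} pp over {len(deltas)} benchmarks, "
          f"mean {sum(deltas) / len(deltas):+.1f} pp")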

Quasimarion•1h ago
FWIW it's also like 10x cheaper.
taosx•2h ago
Merge? https://news.ycombinator.com/item?id=47885014
gbnwl•2h ago
I’m deeply interested and invested in the field but I could really use a support group for people burnt out from trying to keep up with everything. I feel like we’ve already long since passed the point where we need AI to help us keep up with advancements in AI.
wordpad•2h ago
The players barely ever change. People don't have problems following sports; you shouldn't struggle so much with this once you accept that the top spot changes.
ehnto•1h ago
It is funny seeing people ping pong between Anthropic and ChatGPT, with similar rhetoric in both directions.

At this point I would just pick the one whose "ethics" and user experience you prefer. The difference in performance between these releases has had no impact on the meaningful work one can do with them, unless perhaps you are on the fringes of some domain.

Personally, I am trying out the open models cloud-hosted, since I am not interested in being rug-pulled by the big two providers. They have come a long way, and for all the work I actually trust to an LLM, they seem to be sufficient.

DiscourseFan•1h ago
I find ChatGPT annoying mostly
awakeasleep•1h ago
Open settings > personalization. Set it to efficient base style. Turn off enthusiasm and warmth. You’re welcome
gbnwl•1h ago
I didn't express this well, but my interest isn't "who is in the top spot"; it's more *why* and *how* various labs get the results they do. This is also magnified by the fact that I'm not only interested in hosted inference providers but in local models as well. What's your take on the best model to run for coding on 24GB of VRAM locally after the last few weeks of releases? Which harness do you prefer? What quants do you think are best? To use your sports metaphor, it's more than following the national leagues; it's also following college and even high school leagues. And the real interest isn't even who's doing well but WHY, at each level.
trueno•51m ago
holy shit im right there with you
satvikpendem•48m ago
Don't keep up. Much like with news, you'll know when you need to know, because someone else will tell you first.
vrganj•34m ago
It honestly has all kinda felt like more of the same ever since maybe GPT4?

New model comes out, has some nice benchmarks, but the subjective experience of actually using it stays the same. Nothing's really blown my mind since.

Feels like the field has stagnated to a point where only the enthusiasts care.

jdeng•2h ago
Excited that the long-awaited v4 is finally out. But I'm sad that it's not natively multimodal.
fblp•2h ago
There's something heartwarming about the developer docs being released before the flashy press release.
onchainintel•2h ago
Insert obligatory "this is the way" Mando scene. Indeed!
necovek•1h ago
Where are the training data and training scripts, since you're calling this open source?

Edit: it seems "open source" was edited out of the parent comment.

b65e8bee43c2ed0•59m ago
Doesn't it get tiring after a while? Using the same (perceived) gotcha, over and over again, for three years now?

No one is ever going to release their training data, because it contains every copyrighted work in existence. Everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. There you go.

fragmede•36m ago
It's not a gotcha, but people using words in ways others don't like.
necovek•23m ago
There is an easy fix already in widespread use: "open weights".

It is very much a valuable thing already; no need to taint it with a false promise.

Though I disagree that it wouldn't be used if it were indeed open source: I might not do it inside my home lab today, but at least Qwen and DeepSeek would use and build on what e.g. Facebook was doing with Llama, and they might be pushing the open-weights model frontier forward faster.

bl4ckneon•23m ago
Aww yes, let me push a couple petabytes to my git repo for everyone to download...
necovek•22m ago
An easier thing would be to say "open weights", yes.
0-_-0•5m ago
Weights are the source, training data is the compiler.
Aliabid94•2h ago
MMLU-Pro:

Gemini-3.1-Pro at 91.0

Opus-4.6 at 89.1

GPT-5.4, Kimi2.6, and DS-V4-Pro tied at 87.5

Pretty impressive

ant6n•1h ago
Funny how Gemini is theoretically the best -- but in practice all the bugs in the interface mean I don't want to use it anymore. The worst is that it forgets context (and lies about it), and it's very unreliable at reading PDFs (and lies about that too). There's also no branching, so once the context is lost/polluted, you have to start projects over and build up the context from scratch again.
KaoruAoiShiho•2h ago
SOTA on MRCR (or it would've been a few hours earlier... beaten by 5.5). I've long thought of this as the most important non-agentic benchmark, so this is especially impressive. It beats Opus 4.7 here.
shafiemoji•2h ago
I hope the update is an improvement. Losing 3.2 would be a real loss, it's excellent.
rvz•2h ago
The paper is here: [0]

I was expecting the release this month [1], since everyone forgot about it and wasn't reading the papers they were releasing - and 7 days later, here we have it.

One of the key things to look at in this model is the optimization DeepSeek made to the residual design of the LLM's neural network architecture: manifold-constrained hyper-connections (mHC), from this paper [2], which make it possible to train the model efficiently, especially with the hybrid attention mechanism designed for it.

There was not much discussion around it here [3] some months ago, but again, the paper is a recommended read.

I wouldn't trust the benchmarks directly, but would wait for others to try it for themselves to see if it matches the performance of frontier models.

Either way, this is why Anthropic wants to ban open weight models, and I cannot wait for the quantized versions to be released shortly.

[0] https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main...

[1] https://news.ycombinator.com/item?id=47793880

[2] https://arxiv.org/abs/2512.24880

[3] https://news.ycombinator.com/item?id=46452172

jeswin•1h ago
> this is why Anthropic wants to ban open weight models

Do you have a source?

louiereederson•1h ago
More like he wants to ban accelerator chip sales to China, which may be about "national security" or self-preservation against a different model of AI development that also happens to be an existential threat to Anthropic. Maybe those alternatives are actually one and the same to him.
jessepcc•2h ago
At this point "frontier model release" is a monthly cadence (Kimi 2.6, Claude 4.6, GPT 5.5); the interesting question is which evals will still be meaningful in 6 months.
swrrt•2h ago
Any visualised benchmark/scoreboard for comparison between the latest models? DeepSeek v4 and GPT-5.5 seem to be groundbreaking.
raincole•2h ago
History doesn't always repeat itself.

But if it does, then in the following week we'll see DeepSeek v4 flood every AI-related online space. Thousands of posts swearing it's better than the latest models from OpenAI/Anthropic/Google while only costing pennies.

Then a few weeks later it'll be forgotten by most.

sbysb•1h ago
It's difficult because even if the underlying model is very good, not having a pre-built harness like Claude Code makes it very un-sticky for most devs. Even at equal quality, the friction (or at least the perceived friction) is higher than with the mainstream models.
raincole•1h ago
OpenCode? Pi?

If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'.

The only real friction (if the model is actually as good as SOTA) is to convince your employer to pay for it. But again if it really provides the same value at a fraction of the cost, it'll eventually cease to be an issue.

throwa356262•19m ago

    "If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'."

I feel the same way. But look at the llama vs llama.cpp post on HN from a few days back and you will see most of the enthusiasts in this space are very non-technical people.
cmrdporcupine•1h ago
They have instructions right on their page for how to use Claude Code with it.
ls612•2h ago
How long does it usually take for folks to make smaller distills of these models? I really want to see how this does when brought down to a size that will run on a MacBook.
inventor7777•1h ago
Weren't there some frameworks recently released to allow Macs to stream weights from fast SSDs and thus fit way more parameters than what would normally fit in RAM?

I have never tried one yet but I am considering trying that for a medium sized model.

the_sleaze_•1h ago
Do you have the links for those? Very interested
inventor7777•1h ago
Sure!

Note: these were just two that I starred when I saw them posted here. I have not looked seriously at them yet.

https://github.com/danveloper/flash-moe

https://github.com/t8/hypura

simonw•1h ago
I've been calling that the "streaming experts" trick. The key idea is to take advantage of Mixture of Experts models, where only a subset of the weights is used for each round of calculations, and load those weights from SSD into RAM for each round.

As I understand it, if DeepSeek v4 Pro is 1.6T total with 49B active, you'd need just the 49B in memory, so ~100GB at 16-bit or ~50GB quantized to 8-bit.

v4 Flash is 284B, 13B active so might even fit in <32GB.
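
A back-of-the-envelope version of that arithmetic in Python; the 49B/13B active counts are the figures quoted in this thread, not confirmed specs:

    # RAM needed for just the active experts during one decode step.
    # Active parameter counts are the thread's numbers, not official specs.
    def active_gb(active_billions: float, bytes_per_param: float) -> float:
        return active_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

    for name, active in [("V4 Pro (49B active)", 49), ("V4 Flash (13B active)", 13)]:
        for fmt, bpp in [("BF16", 2), ("FP8", 1), ("FP4", 0.5)]:
            print(f"{name} @ {fmt}: ~{active_gb(active, bpp):.0f} GB")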

inventor7777•1h ago
Ahh, that actually makes more sense now. (As you can tell, I just skimmed through the READMEs and starred "for later".)

My Mac can fit almost 70B (Q3_K_M) in memory at once, so I really need to try this out soon at maybe Q5-ish.

zargon•44m ago
> ~100GB at 16 bit or ~50GB at 8bit quantized.

V4 is natively mixed FP4 and FP8, so significantly less than that. 50 GB max unquantized.

zozbot234•19m ago
The "active" count is not very meaningful except as a broad measure of sparsity, since the experts in MoE models are chosen per layer. Once you're streaming experts from disk, there's nothing that inherently requires having 49B parameters in memory at once. Of course, the less caching memory does, the higher the performance overhead of fetching from disk.
EnPissant•14m ago
Streaming weights from RAM to GPU for prefill makes sense due to batching and pcie5 x16 is fast enough to make it worthwhile.

Streaming weights from RAM to GPU for decode makes no sense at all because batching requires multiple parallel streams.

Streaming weights from SSD _never_ makes sense because the delta between SSD and RAM is too large. There is no situation where you would not be able to fit a model in RAM and also have useful speeds from SSD.
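
A rough way to put numbers on that delta, as a sketch: assume a 13B-active model at FP8 (~13 GB of weights touched per token); the bandwidth figures are ballpark assumptions:

    # Decode is bandwidth-bound: each token needs one pass over the active
    # weights, so tokens/sec <= bandwidth / active-weight bytes.
    active_bytes = 13e9  # 13B active params at 1 byte each (FP8), an assumption

    for src, gbps in [("PCIe 5 NVMe SSD", 14), ("8-ch DDR5 RAM", 300), ("HBM3", 3000)]:
        print(f"{src:>15}: ~{gbps * 1e9 / active_bytes:6.1f} tok/s upper bound")

Expert caching would soften the SSD bound somewhat, but the orders of magnitude are the point.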

zozbot234•23m ago
These are more like experiments than a polished release as of yet. And the reduction in throughput is high compared to having the weights in RAM at all times, since you're bottlenecked by the SSD which even at its fastest is much slower than RAM.
simonw•1h ago
Unsloth often turns them around within a few hours - they might have gone to bed already, though!

Keep an eye on https://huggingface.co/unsloth/models

Update ten minutes later: https://huggingface.co/unsloth/DeepSeek-V4-Pro just appeared but doesn't have files in yet, so they are clearly awake and pushing updates.

EnPissant•18m ago
Those are quants, not distills.
mohsen1•15m ago
"2 minutes ago" https://huggingface.co/unsloth/DeepSeek-V4-Pro
zargon•1h ago
The Flash version is 284B A13B in mixed FP8 / FP4 and the full native precision weights total approximately 154 GB. KV cache is said to take 10% as much space as V3. This looks very accessible for people running "large" local models. It's a nice follow up to the Gemma 4 and Qwen3.5 small local models.
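
Quick arithmetic on that figure (a sketch; the per-parameter average is inferred from the numbers above, not official):

    # 284B params totaling ~154 GB implies a mixed-precision average:
    params, total_gb = 284e9, 154
    avg = total_gb * 1e9 / params
    print(f"~{avg:.2f} bytes/param")  # ~0.54, i.e. between FP4 (0.5) and FP8 (1.0)
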
sbinnee•1h ago
Price is appealing to me. I have been using Gemini 3 Flash mainly for chat. I may give this a try.

Input/output: $0.14/$0.28 (whereas Gemini is $0.5/$3)

Does anyone know why output prices have such a big gap?

girvo•12m ago
Output is what the compute is used for above all else; it costs more hardware time than prompt processing (input), which is a lot faster.
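
A toy model of that asymmetry, using assumed H100-class figures (prefill is compute-bound and batched over the whole prompt; decode is bandwidth-bound, one token at a time):

    # Toy prefill-vs-decode timing for a 13B-active model at FP8.
    # Hardware figures are rough assumptions, not measurements.
    active = 13e9                   # active params touched per token
    gpu_flops, gpu_bw = 1e15, 3e12  # ~1 PFLOP/s FP8, ~3 TB/s HBM

    prompt_toks, out_toks = 10_000, 1_000
    prefill_s = prompt_toks * 2 * active / gpu_flops  # ~2 FLOPs/param/token, batched
    decode_s = out_toks * active / gpu_bw             # one weight pass per token
    print(f"prefill {prefill_s:.2f}s for 10k input vs decode {decode_s:.2f}s for 1k output")
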
frozenseven•1h ago
Better link:

https://news.ycombinator.com/item?id=47885014

https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro

reenorap•1h ago
Which version fits in a Mac Studio M3 Ultra 512 GB?
simonw•1h ago
The Flash one should - it's 160GB on Hugging Face: https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash/tree/ma...
ycui1986•1h ago
So, dual RTX PRO 6000
sidcool•1h ago
Truly open source coming from China. This is heartwarming. I know of the potential ulterior motives.
I_am_tiberius•1h ago
Open weight!
alecco•56m ago
Please don't slander the most open AI company in the world - more open even than some non-profit labs at universities. DeepSeek is famous for publishing everything. They might take a while to publish source code, but it's almost always there. And their papers are extremely pro-social, helping the broader open AI community. This is why they struggle to get funded: investors hate openness. And in China they struggle against the political and hiring power of the big tech companies.

Just this week they published a serious foundational library for LLMs https://github.com/deepseek-ai/TileKernels

Others worth mentioning:

https://github.com/deepseek-ai/DeepGEMM a competitive foundational library

https://github.com/deepseek-ai/Engram

https://github.com/deepseek-ai/DeepSeek-V3

https://github.com/deepseek-ai/DeepSeek-R1

https://github.com/deepseek-ai/DeepSeek-OCR-2

They have 33 repos and counting: https://github.com/orgs/deepseek-ai/repositories?type=all

And DeepSeek often has very cool new approaches to AI that get copied by the rest. Many others have copied their tech - and some of those have 10x or 100x the GPU training budget, which is their moat to stay competitive.

The models from Chinese Big Tech and some of the smaller players are open weights only (and allegedly benchmaxxed; see https://xcancel.com/N8Programs/status/2044408755790508113). Not the same.

patshead•7m ago
DeepSeek's models are indeed open weight. Why do you feel that pointing this out would be considered slander?
try-working•1h ago
If you want to understand why labs open-source their models: http://try.works/why-chinese-ai-labs-went-open-and-will-rema...
b65e8bee43c2ed0•8m ago
American companies want a scan of your asshole for the privilege of paying to access their models, and unapologetically admit to storing, analyzing, training on, and freely giving your data to any authorities if requested. Chinese ulteriority is hypothetical, American is blatant.
namegulf•1h ago
Is there a quantized version of this?
yanis_t•1h ago
Already on OpenRouter. The Pro version is $1.74/M input, $3.48/M output, while Flash is $0.14/M input, $0.28/M output.
esafak•1h ago
https://openrouter.ai/deepseek/deepseek-v4-pro

https://openrouter.ai/deepseek/deepseek-v4-flash
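
For reference, a minimal sketch of calling it through OpenRouter's OpenAI-compatible endpoint; the model slug is taken from the listing URLs above, so verify it before relying on it:

    # Assumes OPENROUTER_API_KEY is set and the `openai` package is installed.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
        api_key=os.environ["OPENROUTER_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="deepseek/deepseek-v4-pro",  # slug assumed from the URLs above
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(resp.choices[0].message.content)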

77ko•1h ago
It's on OR - but currently not available on their Anthropic endpoint. OR, if you read this, please enable it there! I am using Kimi 2.6 with Claude Code and it works well, but DeepSeek V4 gives an error:

When Claude Code hits https://openrouter.ai/api/messages with model=deepseek/deepseek-v4-pro, OR returns an error because their Anthropic-compat translator doesn't cover V4 yet. The Claude CLI dutifully surfaces that error as "model...does not exist".

astrod•1h ago
Getting 'Api Error' here :( Every other model is working fine.
poglet•41m ago
Try interacting with it through the website, it will give an error and some explanation on the issue. I had to relax my guardrail settings.
aliljet•1h ago
How can you reasonably try to get near-frontier (at any tps at all) on hardware you own? Maybe under $5k in cost?
awakeasleep•1h ago
The same way you fit a bucket wheel excavator in your garage
floam•45m ago
Very carefully
jdoe1337halo•1h ago
More like 500k
542458•1h ago
The low end could be something like an eBay-sourced server with a truckload of DDR3 RAM doing all-CPU inference - secondhand server models with a terabyte of RAM can be had for about $1.5K. The TPS will be absolute garbage and it will sound like a jet engine, but it will nominally run.

The Flash version here is 284B A13B, so it might perform OK with a fairly small amount of VRAM for the active params and regular RAM for everything else, but I'd have to see benchmarks. If that turns out to work alright, an eBay server plus a 3090 might be the bang-for-buck champ at about $2.5K (assuming you're starting from zero).

revolvingthrow•56m ago
For Flash? A 4-bit quant with 2x 96GB GPUs (fast and expensive), or 1x 96GB GPU + 128GB RAM (still expensive but probably usable, if you're patient).

A Mac with 256GB of memory would run it but be very slow, and so would a 256GB RAM + cheapo GPU desktop, unless you leave it running overnight.

The big model? Forget it, not this decade. You can theoretically load from SSD but waiting for the reply will be a religious experience.

Realistically the biggest models you can run on local-as-in-worth-buying-as-a-person hardware are between 120B and 200B, depending on how far you’re willing to go on quantization. Even this is fairly expensive, and that’s before RAM went to the moon.

zargon•37m ago
Flash is less than 160 GB. No need to quantize to fit in 2x 96 GB. Not sure how much context fits in 30 GB, but it should be a good amount.
redrove•19m ago
It seems to be 160GB at mixed FP4+FP8 precision, FYI. Full FP8 is 250GB+. (B)F16 at around double I would assume.
zargon•17m ago
There is no BF16. The instruct model at full precision is 160 GB (mixed FP4 and FP8). The base model at full precision is 284 GB (FP8). Almost everyone is going to use instruct. But I do love to see base models released.
datadrivenangel•55m ago
A loaded MacBook Pro can get you to the frontier from 24 months ago at ~10-40 tok/s, which is plenty fast enough for regular chatting.
zozbot234•12m ago
Run on an old HEDT platform with a lot of parallel attached storage (probably PCIe 4) and fetch weights from SSD. You'd ultimately be limited by the latency of these per-layer fetches, since MoE weights are small. You could reduce the latencies further by buying cheap Optane memory on the second-hand market.
hongbo_zhang•1h ago
congrats
simonw•1h ago
I like the pelican I got out of deepseek-v4-flash more than the one I got from deepseek-v4-pro.

Flash: https://gist.github.com/simonw/4a7a9e75b666a58a0cf81495acddf...

Pro: https://gist.github.com/simonw/9e8dfed68933ab752c9cf27a03250...

Both generated using OpenRouter.

For comparison, here's what I got from DeepSeek 3.2 back in December: https://simonwillison.net/2025/Dec/1/deepseek-v32/

And DeepSeek 3.1 in August: https://simonwillison.net/2025/Aug/22/deepseek-31/

And DeepSeek v3-0324 in March last year: https://simonwillison.net/2025/Mar/24/deepseek/

JSR_FDED•1h ago
No way. The Pro pelican is fatter, has a customized front fork, and the sun is shining! He’s definitely living the best life.
w4yai•1h ago
yeah. look at these 4 feathers (?) on his bum too.
oliver236•1h ago
a lot of dumplings
chronogram•14m ago
The Pro pelican is a work of art! It goes to dimensions no other LLM has gone before.
nickvec•1h ago
The Flash one is pretty impressive. Might be my favorite so far in the pelican-riding-a-bicycle series
ycui1986•1h ago
I really like the pro version. The pelican is so cute.
mikae1•50m ago
Being a bicycle geometry nerd I always look at the bicycle first.

Let me tell you how much the Pro one sucks... It looks like a failed Pedersen[1]. The rear wheel intersects with the bottom bracket, so it wouldn't even roll. Or rather, this bike couldn't exist.

The Flash one looks surprisingly correct, with some wild fork offset and the slackest of seat tubes. It's got some lowrider[2] aspirations with the small wheels, but with longer, Rivendellish[3], chainstays. The seat post has a different angle than the seat tube, so good luck lowering that.

[1] https://en.wikipedia.org/wiki/Pedersen_bicycle

[2] https://en.wikipedia.org/wiki/Lowrider_bicycle

[3] https://www.rivbike.com/

simonw•46m ago
This is an excellent comment, thanks. I've only ever thought about whether the frame is the right shape; I never thought about how different illustrations might map to different bicycle categories.
mikae1•33m ago
Some other reactions:

I wonder which model will try some more common spoke lacing patterns. Right now there seems to be a preference for radial lacing, which is not super common (but simple to draw). The Flash and Pro ones use 16-spoke rims, which actually exist[1] but are not super common.

The Pro model fails badly at the spokes. Heck, the spokes sit on the outside of the drive side of the rim and tire. Have a nice ride riding on the spokes (instead of the tire) welded to the side of your rim.

Both bikes have the drive side on the left, which is very very uncommon. That can't exist in the training data.

[1] https://cicli-berlinetta.com/product/campagnolo-shamal-16-sp...

jojobas•30m ago
The Pedersen looks like someone failed the "draw a bicycle" test and decided to adjust the universe.
lobochrome•38m ago
Why they so angry?
theanonymousone•38m ago
Where is the GPT 5.5 Pelican?
culopatin•14m ago
In the 5.5 topic
murkt•36m ago
DeepSeek pelicans are the angriest pelicans I’ve seen so far.
kristopolous•35m ago
they're just late for work.
catelm•35m ago
I think the pelican on a bike is widely enough known that it ceases to be useful as a benchmark. There is even a pelican briefly appearing in the promo video of GPT-5, if I'm not mistaken: https://openai.com/gpt-5/. So the companies are apparently aware of it.
brutal_chaos_•32m ago
What was your prompt for the image? Apologies if this should be obvious.
shawn_w•29m ago
>Generate an SVG of a pelican riding a bicycle

at the top of the linked pages.

EnPissant•21m ago
This should not be the top comment on every model release post. It's getting tiring.
blitzar•14m ago
This should be the bottom comment on the pelican comment on every model release post.
nsoonhui•18m ago
To me this is the perfect proof that

1) LLMs are not AGI, because surely AGI would imply that Pro does better than Flash?

2) And because of the above, the pelican example is most likely already being benchmaxxed.

chvid•16m ago
Is it then Deepseek hosted by Deepseek?

How much does the drawing change if you ask it again?

mchusma•1h ago
For comparison: on OpenRouter, DeepSeek v4 Flash is slightly cheaper than Gemma 4 31B and more expensive than Gemma 4 26B, but it does support prompt caching, which means for some applications it will be the cheapest. Excited to see how it compares with Gemma 4.
mariopt•1h ago
Does DeepSeek have any coding plan?
jeffzys8•1h ago
no
dhruv3006•1h ago
Ah now !
storus•58m ago
Oh well, I should have bought 2x 512GB RAM Mac Studios, not just one :(
slopinthebag•57m ago
OMG

OMG ITS HAPPENING

CJefferson•56m ago
What's the current best framework for a "Claude Code"-like experience with DeepSeek (or an open-source model in general), if I wanted to play?
whoopdeepoo•52m ago
You can use DeepSeek with Claude Code
0x142857•49m ago
claude-code-cli/opencode/codex
TranquilMarmot•33m ago
https://opencode.ai/
deaux•23m ago
https://pi.dev/
clark1013•54m ago
Looking forward to DeepSeek Coding Plan
m_abdelfattah•6m ago
I came here to say the same :) !
tariky•40m ago
Anyone tried making web UIs with it? How good is it? For me, Opus is only worth it because of that.
sibellavia•36m ago
A few hours after GPT5.5 is wild. Can’t wait to try it.
luew•33m ago
We will be hosting it soon at getlilac.com!
rohanm93•31m ago
This is shockingly cheap for a near frontier model. This is insane.

For context, for an agent we're working on, we're using 5-mini, which is $2/1M tokens. This is $0.30/1M tokens. And it's Opus 4.6 level - this can't be real.

I am uncomfortable sending user data which may contain PII to their servers in China, so I won't be using this, as appealing as it sounds. I need it to come to a US-hosted environment at an equivalent price.

Hosting this on my own + renting GPUs is much more expensive than DeepSeek's quoted price, so not an option.

fractalf•21m ago
Right now I'm much more worried about sending data to the US and A... At least there's less chance it will be misused against -me-
gardnr•28m ago
865 GB: I am going to need a bigger GPU.
sergiotapia•27m ago
Using it with opencode, it sometimes generates commands like:

    bash({"command":"gh pr create --title "Improve Calendar module docs and clean up idiomatic Elixir" --body "$(cat <<'EOF'
    Problem
    The Calendar modu...
like it's generating output but not actually running the bash command, so ultimately not creating the PR. I wonder if it's a model thing or an opencode thing.
revolvingthrow•26m ago
> pricing "Pro" $3.48 / 1M output tokens vs $4.40

I'd like somebody to explain to me how the endless comments of "bleeding-edge labs are subsidizing inference at an insane rate" make sense in light of a humongous model like V4 Pro costing $4 per 1M. I'd bet even the subscriptions are profitable, much less the API prices.

edit: $1.74/M input $3.48/M output on OpenRouter
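
One way to sanity-check that, as a sketch; the throughput and rental figures below are assumptions for illustration, not DeepSeek's actual economics:

    # Revenue vs. hardware cost at the quoted OpenRouter output price.
    price_per_m_out = 3.48  # USD per 1M output tokens, from this thread
    node_cost_per_h = 25.0  # assumed 8x H100-class node rental, USD/hour
    batched_tok_s = 5_000   # assumed aggregate decode throughput per node

    revenue_per_h = batched_tok_s * 3600 / 1e6 * price_per_m_out
    print(f"~${revenue_per_h:.0f}/h revenue vs ~${node_cost_per_h:.0f}/h node cost")

If those assumptions hold even loosely, the list price covers the hardware at high utilization; the "subsidized inference" story would require much worse throughput or utilization.
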

punkpeye•19m ago
Incredible model quality to price ratio
xnx•18m ago
Such a different time now than early 2025, when people thought DeepSeek was going to kill the market for Nvidia.
bandrami•13m ago
I don't mind that High Flyer completely ripped off Anthropic to do this so much as I mind that they very obviously waited long enough for the GAB to add several dozen xz-level easter eggs to it.
zkmon•11m ago
They released the 1.6T Pro base model on Hugging Face. First time I'm seeing a "T" model here.
Imanari•5m ago
Just tested it via OpenRouter in the Pi coding agent, and it regularly fails to use the read and write tools correctly; very disappointing. Anyone know a fix besides prompting "always use the provided tools instead of writing your own call"?
tcbrah•5m ago
Giving Meta a run for its money, esp. since it was supposed to be the poster child for OSS models. DeepSeek is really overshadowing them rn.