frontpage.

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
1•goranmoomin•2m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

1•throwaw12•3m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•5m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•7m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•10m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•11m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•13m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•14m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•16m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•19m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•24m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•26m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•29m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•41m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•43m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•44m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•57m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•3 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

DeepSeek-v3.2-Exp

https://github.com/deepseek-ai/DeepSeek-V3.2-Exp
309•meetpateltech•4mo ago

Comments

terespuwash•4mo ago
Looks like Deep Sparse Attention can help with code (structured and long-file reasoning)
matrix2596•4mo ago
Awesome that sparse attention is being used in a real-world setting.
mythz•4mo ago
Happy to see Chinese OSS models keep getting better and cheaper. It also comes with a 50% API price drop for an already cheap model, now at:

$0.28/M input ($0.028/M cache hit), $0.42/M output
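As a back-of-the-envelope illustration of those rates, a sketch of per-request cost (the token counts below are invented, not from the thread):

```python
# Rough cost estimate at the quoted DeepSeek rates (USD per million tokens).
RATE_IN_MISS = 0.28    # $/M input tokens, cache miss
RATE_IN_HIT = 0.028    # $/M input tokens, cache hit
RATE_OUT = 0.42        # $/M output tokens

def request_cost(input_tokens, cached_tokens, output_tokens):
    """Cost of one API call, splitting input into cached and uncached parts."""
    uncached = input_tokens - cached_tokens
    return (uncached * RATE_IN_MISS
            + cached_tokens * RATE_IN_HIT
            + output_tokens * RATE_OUT) / 1_000_000

# Example: a 50k-token prompt, 40k of it served from cache, 2k tokens out.
print(f"${request_cost(50_000, 40_000, 2_000):.4f}")  # $0.0048
```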

manishsharan•4mo ago
This price drop is nice, but I wonder how long it will last. Their prices used to be very low, then they almost doubled, and now they have dropped again.
nacs•4mo ago
I don't know if it will stay this low but the whole point of v3.2 is to be cheaper to run than <= v3.1.

(The inference costs are cheaper for them now as context grows because of the Sparse attention mechanism)

guluarte•4mo ago
I was using it daily, but after the price jump, using Codex and Claude was much cheaper than using DeepSeek.
dizhn•4mo ago
What was the price before? I thought they had just increased their prices.
espadrine•4mo ago
Input: $0.07 (cached), $0.56 (cache miss)

Output: $1.68 per million tokens.

https://api-docs.deepseek.com/news/news250929

Havoc•4mo ago
wow...gigantic reduction in cost while holding the benchmarks mostly steady. Impressive.
awongh•4mo ago
The second-order effect that not a lot of people talk about is price: the fact that model scaling at this pace also correlates with falling price is amazing.

I think this is just as important to distribution of AI as model intelligence is.

AFAIK there are no fundamental "laws" that prevent price from continuing to fall, at least correlated with Moore's law (or whatever the current AI/Nvidia chip development cycle is called right now). Each new generation of hardware is significantly faster and cheaper than the last, so will we see a ChatGPT-5 model at half the price in a year? (Yes, I know that thinking models cost more, but just on a per-token basis.)

samuelknight•4mo ago
You are vastly underestimating the price decline. To cherry-pick one article: in the first two years since GPT-3.5, inference price for the same level of intelligence has decreased 10x per year, according to a study by Andreessen Horowitz https://a16z.com/llmflation-llm-inference-cost/. So even in a stark slowdown scenario, we could still see a 1000x decrease in the next 5 years.

Price deflation is not tied to Moore's law right now because much of the performance gain comes from model optimization, high-bandwidth memory supply chains, and electrical capacity build-out, not FLOP density.
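The compounding arithmetic behind that claim can be sketched directly (the 4x/year "slowdown" factor is an illustrative assumption, not a figure from the study):

```python
# ~10x/year cheaper at equal capability is the cited a16z trend.
# Even a slowdown to ~4x/year still compounds to ~1000x over 5 years.
for yearly_factor in (10, 4):
    five_year = yearly_factor ** 5
    print(f"{yearly_factor}x/year -> {five_year:,}x cheaper after 5 years")
# 10x/year -> 100,000x cheaper after 5 years
# 4x/year  -> 1,024x cheaper after 5 years
```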

awongh•4mo ago
True! I just know that model optimization gains are much less guaranteed than, say, FLOP density, even though model optimization has so far provided far more gains than hardware advancements.

Part of me is optimistic that when the AI bubble bursts the excess data center capacity is going to be another force driving the cost of inference down.

NemoNobody•4mo ago
Haha, I love how delusional everyone is about AI.

Yeppers, when that bubble burst - that's hilarious. This is the kinda stuff grandkids won't believe someday.

naasking•4mo ago
> I just know that model optimization gains are much less guaranteed than say, FLOP density, even though model optimization has so far provided way more gains than hardware advancements.

Performance gained from model improvements has outpaced performance gained from hardware improvements for decades.

throwaway314155•4mo ago
> has decreased 10x per year according to a study by Andreessen Horowitz

I believe you but that's not exactly an unbiased source of information.

Alex_1729•4mo ago
We are heading into the future of very low-cost AI inference. It's a good thing, and expected.
wwizo•4mo ago
You guys rock! I'm very curious how this will perform against real-world data, where small nuances matter. Also, have you tested it beyond the 128K context window?
esafak•4mo ago
https://openrouter.ai/deepseek/deepseek-v3.2-exp
nacs•4mo ago
Strange - the model is marked as "Trains on data" ("To our knowledge, this provider may use your prompts and completions to train new models. This provider is disabled, but it can be re-enabled by changing your data policy.").

This is usually not the case for paid models -- is OpenRouter just marking this model incorrectly, or does DeepSeek actually train on submitted data?

esafak•4mo ago
https://cdn.deepseek.com/policies/en-US/deepseek-privacy-pol...

https://openrouter.ai/docs/features/privacy-and-logging#data...

It seems so.

seunosewa•4mo ago
It is no longer the case that paid providers don't train on your data on Openrouter. You can exclude such sources in the settings.
nacs•4mo ago
Yep I have that setting disabled so the number of providers for that model on Openrouter currently is 0 for me.

I guess I'll wait for a 3rd party provider on Openrouter that doesn't log DS 3.2.

echelon•4mo ago
Is OpenRouter really open? I see their "main" repo is archived, plus various smaller projects.

Is it just the API client bindings that are open, while the core routing service is closed?

esafak•4mo ago
I don't know why they need to claim to be open. Their job is to connect you to providers on the basis of price and various metrics they track. Open or closed makes no difference to me.
echelon•4mo ago
It's in the name. Why not name themselves ModelRouter or something similar?

If they lead the market, they'll extract value in lots of ways that an open company could at least be compelled not to. Plus there won't be competition.

They're probably selling your data to LLM companies and you don't even see what they're doing.

Without competition, they'll raise their rates.

If they were open, you could potentially run the offering on-prem. You could bolt on new providers or use it internally for your own routing.

Lots of reasons.

esafak•4mo ago
They can't raise their prices much because providers have the upper hand, so users will always be able to go directly to the source. I use OpenRouter as well as OpenAI, Anthropic, Google, etc.
burkaman•4mo ago
Here's an open source alternative you can self-host: https://llmgateway.io/

I think it's just called OpenRouter because the founder previously started OpenSea (an NFT marketplace), and also probably to sound a bit similar to OpenAI. It's like companies calling their products "natural" or "organic" or "artisan" when they can get away with it, just a marketing strategy of using words that conjure up vaguely positive connotations in your mind.

smakosh•4mo ago
Fun fact: we own closedrouter.ai, and it redirects to llmgateway.io
wongarsu•4mo ago
I always interpreted it as "open" as in "open market".

It's a frictionless marketplace connecting inference providers and customers, creating a more competitive market. Or a more open market if you play a bit fast and loose with terminology

mmastrac•4mo ago
Interesting that models still evolve fast enough that dedicated model-specific hardware isn't a big contender right now. We're still seeing major scaling gains on mostly generic platforms.
gunalx•4mo ago
Google TPUs, Groq, and Cerebras need to be mentioned, even if they are more general, architecture-optimized designs.
ramshanker•4mo ago
What happened to Meta's open-weights models? Lately I keep hearing more about DeepSeek than Llama.
Alifatisk•4mo ago
Weren't Llama 4 Maverick and Scout a flop?
grim_io•4mo ago
One huge problem with these "cheap" models is that they end up more expensive in the typical agent workflow if the provider does not support caching.

Input and output costs are peanuts compared to the order of magnitude (or more) more tokens that hit the cache.

At that point you might as well use GPT-5. It will be the same price or cheaper, and more capable.
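A toy calculation of why caching dominates agent-loop cost: each turn re-sends the whole transcript, so input tokens grow quadratically over a session. The rates are the DeepSeek numbers quoted in this thread; the turn count and turn sizes are invented for illustration.

```python
RATE_MISS, RATE_HIT, RATE_OUT = 0.28, 0.028, 0.42  # $ per million tokens

def session_cost(turns, tokens_per_turn, cached):
    """Total cost of an agent session; with caching, the prior transcript is a cache hit."""
    cost = 0.0
    context = 0  # accumulated transcript tokens re-sent each turn
    for _ in range(turns):
        prefix_rate = RATE_HIT if cached else RATE_MISS
        cost += (context * prefix_rate          # the re-sent transcript
                 + tokens_per_turn * RATE_MISS  # the new prompt tokens
                 + tokens_per_turn * RATE_OUT   # the model's output
                 ) / 1e6
        context += 2 * tokens_per_turn          # prompt + response join the transcript

    return cost

# 50 turns of 2k tokens each: caching cuts the bill roughly 7x.
print(f"uncached: ${session_cost(50, 2_000, False):.2f}")  # uncached: $1.44
print(f"cached:   ${session_cost(50, 2_000, True):.2f}")   # cached:   $0.21
```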

NotMichaelBay•4mo ago
I was under the impression that this model does support caching. The pricing page says the cost of input tokens (cache hit) is $0.028.
segmondy•4mo ago
You declared a huge problem and followed up with an "if".

The DeepSeek API supports caching; stop manufacturing problems where there are none.

https://api-docs.deepseek.com/guides/kv_cache

grim_io•4mo ago
Sure. But there is no way I'm going to use the deepseek endpoint.

Openrouter says they might use your data for training.

cheema33•4mo ago
First you complained about a lack of caching. When you were informed that the model supports caching, instead of admitting your error you switched to an unrelated complaint. I hope that you do not use similar strategies for discussion in your personal and work life.
grim_io•4mo ago
Your broad attack on me as a person is unnecessary.

If you read my post carefully, you will realize that I did not make any contradictory statements.

llllm•4mo ago
Not a broad attack, it is specifically targeted at your proud xenophobia.
grim_io•4mo ago
Absolutely ridiculous.

My wife is Chinese.

segmondy•4mo ago
Caching is not a function of the model but of the provider; any model can be cached. The provider serving the model decides whether to cache it. OpenRouter is not a provider but a middleman between providers, so some of its providers for DeepSeek might offer caching and some might not. If you just use any of them, you might run into the issue. Likewise, some of their providers might use your data for training and some might not. You have to look at the list and cherry-pick ones that won't train on your data and that also provide caching.
JimDabell•4mo ago
> One huge problem with these "cheap" models is that they happen to be more expensive in the typical agent workflow if the provider does not support caching.

DeepSeek supports caching and cache hits are a tenth of the cost.

$0.028/M for cache hit

$0.28/M for cache miss

$0.42/M for output

— https://api-docs.deepseek.com/news/news250929

grim_io•4mo ago
I auto-disqualify the Chinese first-party endpoints.

If they are okay for you, then sure go ahead. Enjoy the caching.

What other provider is going to support it?

JimDabell•4mo ago
> I auto-disqualify the Chinese first-party endpoints.

Why?

curseofcasandra•4mo ago
I’m guessing it’s something along the lines of this: https://youtu.be/kYiUY07TzS4
guluarte•4mo ago
by your logic then you have to disqualify openai and anthropic first party endpoints for testing gpt and claude...
grim_io•4mo ago
There is no bug in my logic. Anthropic and OpenAI are not Chinese first-party providers.
eric15342335•4mo ago
Not sure if I get it correctly:

They trained a component to mimic the full attention distribution while keeping only the top-k (k=2048) most important attention tokens. So as the context window grows, the compute for the attention (query, key) step stays roughly constant instead of growing with context length. Total cost still grows linearly in the graph, O(L), because a lightweight "indexer" must still roughly scan the entire context window, but that scan is cheap, which is what speeds things up.
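A minimal NumPy sketch of that top-k idea: a cheap indexer scores every position in O(L), then exact softmax attention runs only over the k best keys. The dimensions, k, and the indexer's scoring function here are invented toy stand-ins, not DeepSeek's actual DSA design.

```python
import numpy as np

def topk_sparse_attention(q, K, V, idx_w, k=4):
    """One query step of toy top-k sparse attention.

    A lightweight 'indexer' (here, a single learned projection) scores every
    key in O(L); full attention is then computed over only the top-k keys.
    """
    scores = K @ (idx_w @ q)            # cheap O(L) relevance scores
    keep = np.argsort(scores)[-k:]      # indices of the k highest-scoring keys
    # Exact scaled-dot-product attention restricted to the selected keys.
    logits = K[keep] @ q / np.sqrt(q.shape[0])
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w /= w.sum()
    return w @ V[keep]

rng = np.random.default_rng(0)
L, d = 16, 8                            # toy context length and head dim
q = rng.normal(size=d)
K, V = rng.normal(size=(L, d)), rng.normal(size=(L, d))
idx_w = rng.normal(size=(d, d))         # stand-in for the learned indexer weights
out = topk_sparse_attention(q, K, V, idx_w, k=4)
print(out.shape)  # (8,)
```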

impact_sy•4mo ago
Prices fall, benchmarks remain stable. Maybe in the future, LLMs will spend most of their money on electricity.