Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•2m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•3m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•8m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•10m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•16m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•19m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•20m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•24m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•25m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•26m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•29m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•31m ago•4 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•32m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•34m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•36m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•38m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•41m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•46m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•47m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•51m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

A new, faster DeepSeek R1-0528 variant appears from German lab

https://venturebeat.com/ai/holy-smokes-a-new-200-faster-deepseek-r1-0528-variant-appears-from-german-lab-tng-technology-consulting-gmbh/
77•saubeidl•7mo ago

Comments

UrineSqueegee•7mo ago
They have reduced the token output by 20%, and the benchmark scores have decreased by 10% from the original model.
yorwba•7mo ago
The 20% output reduction is relative to R1, the 10% benchmark score reduction is relative to R1-0528.

It produces 60% fewer output tokens than R1-0528 and scores about 10% higher on their benchmark than R1.

So it's a way to turn R1-0528, which is better than R1 but slower, into a model that's worse than R1-0528 but better and faster than R1.
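The relative figures quoted above can be sketched numerically. This is only an illustration of the comment's arithmetic (R1 as a baseline of 1.0), not official benchmark data:

```python
# Relative figures as stated in the comment above (R1 = baseline 1.0).
# Illustrative only, not official benchmark results.
r1t2_tokens_vs_r1 = 0.8       # "20% output reduction ... relative to R1"
r1t2_tokens_vs_r1_0528 = 0.4  # "60% fewer output tokens than R1-0528"
r1t2_score_vs_r1 = 1.1        # "about 10% higher ... than R1"
r1t2_score_vs_r1_0528 = 0.9   # "10% benchmark score reduction ... relative to R1-0528"

# Implied position of R1-0528 relative to R1, derived from the two pairs:
r1_0528_tokens_vs_r1 = r1t2_tokens_vs_r1 / r1t2_tokens_vs_r1_0528  # = 2.0
r1_0528_score_vs_r1 = r1t2_score_vs_r1 / r1t2_score_vs_r1_0528     # ~ 1.22

print(f"R1-0528 vs R1: {r1_0528_tokens_vs_r1:.1f}x the output tokens, "
      f"{r1_0528_score_vs_r1:.2f}x the score")
print(f"R1T2 vs R1:    {r1t2_tokens_vs_r1:.1f}x the output tokens, "
      f"{r1t2_score_vs_r1:.2f}x the score")
```

The two stated comparisons are mutually consistent: they imply R1-0528 is both smarter and roughly twice as verbose as R1, with R1T2 sitting between them on score and below both on token count.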

saubeidl•7mo ago
Yup, you can see it well on the graph here: https://venturebeat.com/wp-content/uploads/2025/07/Gu4d8kzWo...
ipsum2•7mo ago
tl;dr: faster but worse; i.e., on the Pareto frontier.
konsalexee•7mo ago
It is always about the trade-off between those two parameters.

Of course an increase in both would be optimal, but a small sacrifice in performance/accuracy for being 200% faster is worth noting. Around a 10% drop in accuracy for a 200% speed-up; some would take it!

d1sxeyes•7mo ago
Also that “speed up” is actually hiding “less compute used” which is a proxy for cost. Assuming this is 200% faster purely because it needs less compute, that should mean it costs roughly 1/3 as much to run for a 10% decrease in quality of output.
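The back-of-envelope cost argument above works out as follows (a sketch under the comment's own assumption that cost scales with compute, with no real pricing data):

```python
# "200% faster" means 3x the speed; if the speedup comes purely from
# needing less compute, per-request cost drops to roughly a third.
speedup_pct = 200
speed_multiplier = 1 + speedup_pct / 100  # 200% faster == 3x as fast
relative_cost = 1 / speed_multiplier      # ~1/3 of the original cost
quality_drop = 0.10                       # ~10% lower benchmark scores

print(f"{speed_multiplier:.0f}x speed, ~{relative_cost:.0%} of the cost, "
      f"for a ~{quality_drop:.0%} quality drop")
```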
konsalexee•7mo ago
↑
randomNumber7•7mo ago
From the Hugging Face model card:

"Due to the strict new guidelines of the EU AI Act that take effect on August 2nd 2025, we recommend that each R1T/R1T2 user in the EU either familiarizes themselves with these requirements and assess their compliance, or ceases using the model in the EU after August 1st, 2025."

Doesn't the deepseek licence completely forbid any use in the EU already? How can a german company legally build this in the first place (which they presumably did)?

qwertox•7mo ago
> Doesn't the deepseek licence completely forbid any use in the EU already?

Care to explain?

https://deepseeklicense.github.io/

https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICE...

akreal•7mo ago
Probably a mix-up with the recently released Huawei model:

https://news.ycombinator.com/item?id=44441447

peer2pay•7mo ago
Calling TNG a lab is a bit funny to me. It’s a consulting company that lets people hack on stuff between placements.
the_third_wave•7mo ago
Sounds like a good use of "spare" time to me, and not that different from many a lab I've been part of: someone gets a hunch, sets up an experiment to follow it, proves or disproves whatever they were after, pulls down the experiment, rinse, repeat.
loherj•7mo ago
Yes and no.

Calling us a lab is not quite right; we are a consulting company.

But hacking is not limited to time between placements; everybody has (at least) two days per month for it, regardless of any work for customers.

Also, since AI is such a strategically important topic, we have a team that just works on AI stuff internally. That’s where R1T and R1T2 come from.

prinzmaus•7mo ago
OT: I love that German has a word for “yes and no”: jein.
saubeidl•7mo ago
Petition to make "nes" a word in English ("yo" doesn't really work...)
perpetualpatzer•7mo ago
So does English. Well, sorta.
_ache_•7mo ago
Is 200% a way of saying 3x quicker? The small 10% reasoning performance decrease seems worth it.
MangoToupe•7mo ago
> The little 10% reasoning performance decrease seems worth it

We need about three orders of magnitude more tests to make these numbers meaningful.

loherj•7mo ago
Fair point. More benchmarks are definitely good but I’m optimistic that they will show similar results.

Anecdotally, I can say that my personal experience with the model is in line with what the benchmarks claim: it's a bit smarter than R1, a bit faster than R1, much faster than R1-0528, but not quite as smart. (Faster meaning fewer output tokens.) For me, it's at a sweet spot and I use it as a daily driver.

loherj•7mo ago
Yes. If you look at the diagram that plots the performance vs the amount of output tokens, you can see that R1T2 uses about 1/3 of the output tokens that R1-0528 uses.

Keep in mind, the speed improvement doesn't come from the model running any faster (it's the exact same architecture as R1, after all) but from using fewer output tokens while still achieving very good results.
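The point about identical architecture can be made concrete: with the same decode throughput, wall-clock time scales directly with output-token count. The throughput and token counts below are made-up placeholders, not measured values:

```python
# Same architecture => same tokens/sec, so latency tracks output length.
# All numbers here are hypothetical placeholders for illustration.
tokens_per_second = 50.0                 # assumed decode throughput, same for both

r1_0528_output_tokens = 3000             # hypothetical verbose reasoning trace
r1t2_output_tokens = r1_0528_output_tokens / 3  # "about 1/3 of the output tokens"

latency_0528 = r1_0528_output_tokens / tokens_per_second
latency_r1t2 = r1t2_output_tokens / tokens_per_second

print(f"R1-0528: {latency_0528:.0f}s, R1T2: {latency_r1t2:.0f}s "
      f"-> {latency_0528 / latency_r1t2:.0f}x faster at identical tokens/sec")
```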

loherj•7mo ago
If anybody wants to try it out, it’s up on chutes: https://chutes.ai/app/chute/4fa0c7f5-82f7-59d1-8996-661bb778...
xracy•7mo ago
Can I ask why this article is titled as if DeepSeek is a virus? Feels like this could've been "new flu variant".

I don't know if this is intentional or not.

arantius•7mo ago
This is an appropriate usage of the word "variant", and applies to anything that can have several varieties.
xracy•7mo ago
While I agree the word could be appropriate, I'm asking a meta question about how it is typically used, and whether or not we're conveying something unintentional by using it in this context as well. I don't consider "variants" a good thing because I lived through a few years of COVID.