
A new, faster DeepSeek R1-0528 variant appears from German lab

https://venturebeat.com/ai/holy-smokes-a-new-200-faster-deepseek-r1-0528-variant-appears-from-german-lab-tng-technology-consulting-gmbh/
69•saubeidl•5h ago

Comments

UrineSqueegee•4h ago
they have reduced the token output by 20%, and the benchmark scores have decreased by 10% relative to the original model.
yorwba•3h ago
The 20% output reduction is relative to R1, the 10% benchmark score reduction is relative to R1-0528.

It produces 60% fewer output tokens than R1-0528 and scores about 10% higher on their benchmark than R1.

So it's a way to turn R1-0528, which is better than R1 but slower, into a model that's worse than R1-0528 but better and faster than R1.
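
A quick back-of-the-envelope makes the two baselines explicit (illustrative numbers only, normalized so R1 scores 100 and emits 100 tokens; these are not the article's actual benchmark figures):

    # Thread's percentages applied to normalized values (R1 = 100 on both axes).
    r1_score, r1_tokens = 100.0, 100.0
    r1t2_tokens = r1_tokens * (1 - 0.20)  # 20% fewer output tokens than R1
    r1_0528_tokens = r1t2_tokens / 0.40   # R1T2 uses 60% fewer tokens than R1-0528
    r1t2_score = r1_score * 1.10          # ~10% higher than R1 on their benchmark
    r1_0528_score = r1t2_score / 0.90     # R1T2 scores ~10% lower than R1-0528

    print(f"tokens: R1={r1_tokens}, R1T2={r1t2_tokens}, R1-0528={r1_0528_tokens}")
    print(f"scores: R1={r1_score}, R1T2={r1t2_score}, R1-0528={r1_0528_score:.1f}")

Under those assumptions, R1T2 sits between the two on quality (100 < 110 < ~122) while emitting the fewest tokens (80 < 100 < 200).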

saubeidl•3h ago
Yup, you can see it well on the graph here: https://venturebeat.com/wp-content/uploads/2025/07/Gu4d8kzWo...
ipsum2•4h ago
tl;dr: faster but worse, i.e. on the Pareto frontier.
konsalexee•2h ago
It is always about the trade-off between those two parameters.

An increase in both would be optimal, of course, but a small sacrifice in performance/accuracy in exchange for being 200% faster is worth noting. Around a 10% drop in accuracy for a 200% speed-up: some would take it!

d1sxeyes•2h ago
Also, that “speed up” is actually hiding “less compute used”, which is a proxy for cost. Assuming this is 200% faster purely because it needs less compute, that should mean it costs roughly 1/3 as much to run, for a 10% decrease in output quality.
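
A minimal sketch of that cost argument, assuming cost scales linearly with output tokens (an assumption about pricing, not something the article states):

    # "200% faster" read as 3x throughput; cost assumed proportional to compute.
    speedup_pct = 200
    relative_time = 1 / (1 + speedup_pct / 100)  # ~0.33 of the original wall time
    relative_cost = relative_time                # if cost ∝ compute ∝ output tokens
    print(f"relative cost ≈ {relative_cost:.2f}x for a ~10% quality drop")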
randomNumber7•4h ago
From the Hugging Face model card:

"Due to the strict new guidelines of the EU AI Act that take effect on August 2nd 2025, we recommend that each R1T/R1T2 user in the EU either familiarizes themselves with these requirements and assess their compliance, or ceases using the model in the EU after August 1st, 2025."

Doesn't the DeepSeek licence completely forbid any use in the EU already? How can a German company legally build this in the first place (which they presumably did)?

qwertox•4h ago
> Doesn't the DeepSeek licence completely forbid any use in the EU already?

Care to explain?

https://deepseeklicense.github.io/

https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICE...

akreal•4h ago
Probably a mix-up with the recently released Huawei model:

https://news.ycombinator.com/item?id=44441447

peer2pay•4h ago
Calling TNG a lab is a bit funny to me. It’s a consulting company that lets people hack on stuff between placements.
the_third_wave•3h ago
Sounds like a good use of "spare" time to me, and not that different from many a lab I've been part of: someone gets a hunch, sets up an experiment to follow it, proves or disproves whatever they were after, pulls down the experiment, rinse, repeat.
loherj•2h ago
Yes and no.

Calling us a lab is not quite right; we are a consulting company.

But hacking is not limited to just the time between placements; everybody has (at least) 2 days per month for it, regardless of any work for customers.

Also, since AI is such a strategically important topic, we have a team that just works on AI stuff internally. That’s where R1T and R1T2 come from.

_ache_•2h ago
Is "200% faster" a way to say 3× quicker? The small 10% reasoning performance decrease seems worth it.
MangoToupe•2h ago
> The little 10% reasoning performance decrease seems worth it

We need about three orders of magnitude more tests to make these numbers meaningful.

loherj•2h ago
Fair point. More benchmarks are definitely good, but I'm optimistic that they will show similar results.

Anecdotally, I can say that my personal experience with the model is in line with what the benchmarks claim: it's a bit smarter than R1, a bit faster than R1, much faster than R1-0528, but not quite as smart ("faster" here meaning fewer output tokens). For me, it's at a sweet spot and I use it as my daily driver.

loherj•2h ago
Yes. If you look at the diagram that plots performance against the number of output tokens, you can see that R1T2 uses about 1/3 of the output tokens that R1-0528 uses.

Keep in mind, the speed improvement doesn't come from the model running any faster (it's the exact same architecture as R1, after all) but from using fewer output tokens while still achieving very good results.
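
A minimal latency model of that point, with hypothetical numbers (the decode rate and trace lengths below are assumptions for illustration, not measurements):

    # Same architecture => same decode rate; speed comes from emitting fewer tokens.
    tokens_per_second = 30.0          # hypothetical decode throughput
    r1_0528_output = 6000             # hypothetical reasoning-trace length, tokens
    r1t2_output = r1_0528_output / 3  # ~1/3 the output tokens, per the diagram

    print(f"R1-0528: {r1_0528_output / tokens_per_second:.0f} s per answer")
    print(f"R1T2:    {r1t2_output / tokens_per_second:.0f} s per answer")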

The messy reality of SIMD (vector) functions

https://johnnysswlab.com/the-messy-reality-of-simd-vector-functions/
67•mfiguiere•6h ago•34 comments

You're All CTO Now

https://jamie.ideasasylum.com/2025/07/01/you%27re-all-cto-now
32•fside•4d ago•33 comments

Being too ambitious is a clever form of self-sabotage

https://maalvika.substack.com/p/being-too-ambitious-is-a-clever-form
379•alihm•15h ago•118 comments

Learn to love the moat of low status

https://usefulfictions.substack.com/p/learn-to-love-the-moat-of-low-status
172•jger15•2d ago•67 comments

Mini NASes marry NVMe to Intel's efficient chip

https://www.jeffgeerling.com/blog/2025/mini-nases-marry-nvme-intels-efficient-chip
376•ingve•21h ago•182 comments

The History of Electronic Music in 476 Tracks (1937–2001)

https://www.openculture.com/2025/06/the-history-of-electronic-music-in-476-tracks.html
59•bookofjoe•2d ago•16 comments

Go, PET, Let Hen - Curious adventures in (Commodore) BASIC tokenizing

https://www.masswerk.at/nowgobang/2025/go-pet-let-hen
5•masswerk•2h ago•0 comments

N-Back – A Minimal, Adaptive Dual N-Back Game for Brain Training

https://n-back.net
42•gregzeng95•2d ago•10 comments

EverQuest

https://www.filfre.net/2025/07/everquest/
231•dmazin•20h ago•120 comments

Incapacitating Google Tag Manager (2022)

https://backlit.neocities.org/incapacitate-google-tag-manager
180•fsflover•18h ago•117 comments

Why I left my tech job to work on chronic pain

https://sailhealth.substack.com/p/why-i-left-my-tech-job-to-work-on
328•glasscannon•1d ago•197 comments

Gecode is an open source C++ toolkit for developing constraint-based systems

https://www.gecode.org/
9•gjvc•4h ago•2 comments

Scientists capture slow-motion earthquake in action

https://phys.org/news/2025-06-scientists-capture-motion-earthquake-action.html
18•PaulHoule•3d ago•0 comments

Baba Is Eval

https://fi-le.net/baba/
207•fi-le•1d ago•42 comments

Telli (YC F24) Is Hiring Engineers [On-Site Berlin]

https://hi.telli.com/join-us
1•sebselassie•6h ago

Large Language Models Are Improving Exponentially

https://spectrum.ieee.org/large-language-model-performance
33•pseudolus•1h ago•20 comments

Nvidia is full of shit

https://blog.sebin-nyshkim.net/posts/nvidia-is-full-of-shit/
717•todsacerdoti•15h ago•371 comments

X-Clacks-Overhead

https://xclacksoverhead.org/home/about
4•weinzierl•3d ago•0 comments

In a milestone for Manhattan, a pair of coyotes has made Central Park their home

https://www.smithsonianmag.com/science-nature/in-a-milestone-for-manhattan-a-pair-of-coyotes-has-made-central-park-their-home-180986892/
148•sohkamyung•4d ago•137 comments

Impact of PCIe 5.0 Bandwidth on GPU Content Creation and LLM Performance

https://www.pugetsystems.com/labs/articles/impact-of-pcie-5-0-bandwidth-on-gpu-content-creation-performance/
30•zdw•1d ago•13 comments

Show HN: I AI-coded a tower defense game and documented the whole process

https://github.com/maciej-trebacz/tower-of-time-game
269•M4v3R•1d ago•133 comments

The story behind Caesar salad

https://www.nationalgeographic.com/travel/article/story-behind-caesar-salad
118•Bluestein•17h ago•68 comments

ADXL345 (2024)

https://www.tinytransistors.net/2024/08/25/adxl345/
39•picture•10h ago•2 comments

Wind Knitting Factory

https://www.merelkarhof.nl/work/wind-knitting-factory
245•bschne•1d ago•60 comments

Writing a Game Boy Emulator in OCaml (2022)

https://linoscope.github.io/writing-a-game-boy-emulator-in-ocaml/
247•ibobev•1d ago•56 comments

Robots move Shanghai city block [video]

https://www.youtube.com/watch?v=7ZccC9BnT8k
115•surprisetalk•1d ago•37 comments

The ITTAGE indirect branch predictor

https://blog.nelhage.com/post/ittage-branch-predictor/
46•Bogdanp•13h ago•12 comments

OBBB signed: Reinstates immediate expensing for U.S.-based R&D

https://www.kbkg.com/feature/house-passes-tax-bill-sending-to-president-for-signature
342•tareqak•12h ago•240 comments

Bcachefs may be headed out of the kernel

https://lwn.net/Articles/1027289/
131•ksec•23h ago•217 comments

Kepler.gl

https://kepler.gl/
153•9woc•23h ago•19 comments