I can't imagine demand would be greater for R2 than for R1 unless it was a major leap ahead. Maybe R2 is going to be a larger/less performant/more expensive model?
Deepseek could deploy in a US or EU datacenter ... but that would be admitting defeat.
But will they keep releasing the weights or do an OpenAI and come up with a reason they can't release them anymore?
At the end of the day, even if they release the weights, they probably want to make money and leverage the brand by hosting the model API and the consumer mobile app.
Now they are firmly on the map, which presumably helps with hiring, doing deals, influence. If they stop publishing something, they run the risk of being labelled a one-hit wonder who got lucky.
If they have a reason to believe they can do even better in the near future, releasing current tech might make sense.
What is DeepSeek aiming for if not that, which is currently the only thing they offer that costs money? They claim their own inference endpoints have a cost profit margin of 545%, which may or may not be true, but the very fact that they mentioned it at all suggests it is of some importance to them and others.
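For what it's worth, here's the arithmetic behind that figure, as a minimal sketch; the dollar amounts below are placeholders I made up, only the 545% ratio is theirs:

```python
# Back-of-envelope check of what a 545% cost-profit margin implies.
# The daily cost is an illustrative placeholder, not DeepSeek's real number;
# only the 545% ratio comes from their own claim.
daily_cost = 87_000          # hypothetical serving cost in USD/day
margin = 5.45                # 545% margin over cost, as DeepSeek stated it

daily_revenue = daily_cost * (1 + margin)   # revenue = cost + profit
daily_profit = daily_revenue - daily_cost

print(f"revenue: ${daily_revenue:,.0f}/day, profit: ${daily_profit:,.0f}/day")
# With margin defined as profit / cost, 545% means ~6.45x cost in revenue.
```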
>June 26 (Reuters) - Chinese AI startup DeepSeek has not yet determined the timing of the release of its R2 model as CEO Liang Wenfeng is not satisfied with its performance,
>Over the past several months, DeepSeek's engineers have been working to refine R2 until Liang gives the green light for release, according to The Information.
But yes, it is strange how the majority of the article is about lack of GPUs.
Although I'd like to know the source for the "this is because of chip sanctions" angle. SMIC claims it can manufacture at 5nm, and a large number of 7nm chips can match the compute of anything Nvidia produces. That wouldn't be competitive with the market leaders, but delaying the release by a few months doesn't change that. I don't really see how DeepSeek's release dates and the chip sanctions could be linked on this small a scale, unless they're just including that as an aside.
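To make the aggregate-compute argument concrete, a toy calculation; every chip number here is a made-up placeholder, the point is only the scaling:

```python
# Rough aggregate-compute sketch for the "many 7nm chips" argument.
# All figures are illustrative placeholders, not real chip specs.
nvidia_chip_tflops = 1000    # hypothetical flagship accelerator
domestic_chip_tflops = 300   # hypothetical 7nm-class domestic part
utilization_penalty = 0.7    # interconnect/software overhead when scaling out

chips_needed = nvidia_chip_tflops / (domestic_chip_tflops * utilization_penalty)
print(f"~{chips_needed:.1f} domestic chips per Nvidia chip")  # ~4.8
```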
It is pretty strange that DeepSeek didn't say May anywhere, that was also a Reuters report based on "three people familiar with the company".[1] DeepSeek itself did not respond and did not make any claims about the timeline, ever.
[1]: https://www.reuters.com/technology/artificial-intelligence/d...
First, nobody is training on H20s; it's absurd. Then their logic was: because of high inference demand for DeepSeek models there is high demand for H20 chips, and H20s were banned, so better not release new model weights now, otherwise people would want H20s even more.
Which is... even more absurd. The reasoning itself doesn't make any sense. And the technical part is just wrong, too. Using H20s to serve DeepSeek V3 / R1 is just SUPER inefficient. Like, R1 is the most anti-H20 model ever released.
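A toy roofline sketch of why, assuming (as I understand it) that MLA trades memory reads for extra matmuls and so pushes up arithmetic intensity; the chip numbers are illustrative placeholders, not real H20/H100 specs:

```python
# Toy roofline model: achievable throughput = min(peak compute, intensity * bandwidth).
# All chip numbers are illustrative placeholders, not real specs.
def effective_tflops(peak_tflops: float, bandwidth_tbps: float, flops_per_byte: float) -> float:
    """Throughput is capped by whichever resource the workload exhausts first."""
    return min(peak_tflops, flops_per_byte * bandwidth_tbps)

# MLA-style attention compresses the KV cache and pays for it with extra
# matmuls, so decode has unusually high arithmetic intensity (FLOPs per byte read).
intensity = 300  # placeholder FLOPs/byte for an MLA-heavy decode step

for name, tflops, tbps in [("compute-cut chip", 150, 4.0), ("balanced chip", 1000, 3.3)]:
    print(f"{name}: {effective_tflops(tflops, tbps, intensity):.0f} effective TFLOPS")
# The compute-cut chip leaves most of its (expensive) bandwidth idle.
```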
The entire thing makes no sense at all, and it's a pity that Reuters fell for that bullshit.
Why? Any chance you have some links to read about why it’s the case?
Human progress that benefits everyone being stalled by the few and powerful who want to keep their moats. Sad world we live in.
It's about China being expansionist, actively preparing to invade Taiwan, and generally becoming an increasing military threat that does not respect the national integrity of other states.
The US is fine with other countries having AI if the countries "play nice" with others. Nobody is limiting GPUs in France or Thailand.
This is very specific to China's behavior and stated goals.
With this combo, I have no reason to use Claude/Gemini for anything.
People don't realize how good the new Deepseek model is.
Personally I get it to write the same code I'd produce, which obviously I think is OK code, but it seems others' experiences differ a lot from my own, so I'm curious to understand why. I've iterated a lot on my system prompt, so it could be as simple as that.
The published model has a note strongly recommending that you should not use system prompts at all, and that all instructions should be sent as user messages, so I'm just curious about whether you use system prompts and what your experience with them is.
Maybe the hosted service rewrites them into user ones transparently ...
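Something like this, hypothetically; I have no evidence the hosted service actually does it, this is just what such a rewrite could look like:

```python
# Hypothetical sketch of a host transparently folding a system prompt into
# the first user message (no evidence DeepSeek's service actually does this).
def fold_system_into_user(messages: list[dict]) -> list[dict]:
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0] = {"role": "user",
                   "content": "\n".join(system_parts) + "\n\n" + rest[0]["content"]}
    return rest

print(fold_system_into_user([
    {"role": "system", "content": "Answer tersely."},
    {"role": "user", "content": "What is MoE?"},
]))
```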
Mainly the hosted one.
> The published model has a note strongly recommending that you should not use system prompts at all
I think that's outdated; the new release (deepseek-ai/DeepSeek-R1-0528) has the following in the README:
> Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes: System prompt is supported now.
The previous ones, while they said to put everything in user prompts, still seemed steerable/programmable via the system prompt regardless, but maybe it wasn't as effective as it is for other models.
But yeah outside of that, heavy use of system (and obviously user) prompts.
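For reference, with 0528 that just means the usual pattern over their OpenAI-compatible API; the base URL and model name below are as I remember them from their docs, so double-check before relying on this:

```python
# Minimal sketch using DeepSeek's OpenAI-compatible API; base_url and model
# name are from memory, verify against their current docs.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # R1-0528 behind the hosted endpoint
    messages=[
        {"role": "system", "content": "You are a terse senior code reviewer."},
        {"role": "user", "content": "Review this function for bugs: ..."},
    ],
)
print(resp.choices[0].message.content)
```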
There is something deeper in the model: it can seemingly be steered/programmed with system/user prompts, yet it still produces kind of shitty code for some reason. Or I just haven't found the right way of prompting Google's stuff, which could also be the reason, but the same approach seemingly works for OpenAI, Anthropic and others, so I'm not sure what to make of it.
The large context length is a huge advantage, but it doesn't seem to be able to use it effectively. Would you say that OpenAI models don't suffer from this problem?
Yes, definitely. For every model I've used and/or tested, the more context there is, the worse the output, even within the context limits.
When I use chat UIs (which admittedly is less and less), I never let the chat go beyond one of my messages and one response from the LLM. If something is wrong with the response, I figure out what I need to change in my prompt and start a new chat / edit the first message and retry, until it works. Any time I've tried "No, what I meant was ..." or "Great, now change ...", the quality of the responses drops sharply.
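In pseudo-workflow terms it's something like this sketch, with ask() standing in (as a stub) for whatever single-turn LLM call you use:

```python
def ask(prompt: str) -> str:
    # Stub for a single-turn LLM call: one user message, zero prior history.
    return f"<model answer to: {prompt}>"

def retry_fresh(prompts: list[str], looks_right) -> str | None:
    """Try successively refined first messages, each in a brand-new 'chat'."""
    for prompt in prompts:  # each attempt carries no conversation context
        answer = ask(prompt)
        if looks_right(answer):
            return answer
    return None

# Refine the *prompt* between attempts instead of appending follow-up turns.
print(retry_fresh(["Write a CSV parser", "Write a CSV parser in Go, streaming"],
                  looks_right=lambda a: "Go" in a))
```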
DeepSeek-R1 0528 performs almost as well as o3 in AI quality benchmarks. So, either OpenAI didn't restrict access, DeepSeek wasn't using OpenAI's output, or using OpenAI's output doesn't have a material impact on DeepSeek's performance.
https://artificialanalysis.ai/?models=gpt-4-1%2Co4-mini%2Co3...
I am not at all surprised; the CCP views the AI race as absolutely critical for its own survival...
EQBench, another "slop benchmark" from the same author, is equally dubious, as is most of his work, e.g. the antislop sampler, which tries to solve an NLP task in a programmatic manner.
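For context, my reading of the antislop idea is roughly "penalize a banned-phrase list at sampling time"; here is a simplified token-level sketch of that notion, not the author's actual code (his version works on phrases with backtracking):

```python
# Simplified sketch of banning "slop" at sampling time via a logit penalty.
# This is my reading of the general idea, not the author's implementation.
import math, random

SLOP = {"tapestry", "delve", "testament"}

def sample_next(token_logits: dict[str, float], penalty: float = 5.0) -> str:
    """Softmax sampling with a flat logit penalty on slop-listed tokens."""
    adjusted = {t: (l - penalty if t in SLOP else l) for t, l in token_logits.items()}
    z = sum(math.exp(l) for l in adjusted.values())
    r, acc = random.random(), 0.0
    for tok, logit in adjusted.items():
        acc += math.exp(logit) / z
        if r <= acc:
            return tok
    return tok  # numerical-edge fallback: return the last token

print(sample_next({"tapestry": 3.0, "network": 2.5, "delve": 2.8, "graph": 2.0}))
```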
To me that does seem like a reasonable speculation, though unproven.
Remember that DeepSeek is the offshoot of a hedge fund that was already using machine learning extensively, so they probably have troves of high quality datasets and source code repos to throw at it. Plus, they might have higher quality data for the Chinese side of the internet.
* Of course I won't detail my class of problems else my benchmark would quickly stop being useful. I'll just say that it is a task at the undergraduate level of CS, that requires quite a bit of deductive reasoning.
so what?
Then look up Latin America's history, where the US actively worked to install and support such violent dictatorships.
Some were under the guise of protecting countries from the threat of communism, like in Brazil, Argentina and Chile, and some were explicitly to protect US companies' interests, like in Guatemala.
Yes, fuckups happened. But for the results of Russian intervention, look at the CCP and how many people died at its hands and from its policies.
And on who you would support in such a conflict! ;)
Might as well talk about the probability of a conflict with South Africa. China might not be the best country to live in, nor the country that takes care of its own citizens best, but it seems non-violent towards other sovereign nations (so far), although of course there is a lot of posturing. Of the current "world powers", though, it seems to be the least violent.
China has been peaceful recently, at least since its invasion of Vietnam. But (1) its post-Deng culture is highly militaristic and irredentist; (2) this is the first time in history that it can actually roll back US influence, and its previous inability explains the peace better than any lack of will; (3) from a realist perspective Taiwan makes too much sense, as the first link in the island chain, a wedge between the Philippines and Japan, and because of its role in supplying chips to the US.
The lesson we should learn from Russia's invasion of Ukraine is to believe countries when they say they own another country. Not assume the best and design policy around that assumption.
If you want to read some experts on this question, see this: https://warontherocks.com/?s=taiwan
The general consensus seems to be around a 20-25% chance of an invasion of Taiwan within the next 5 years. The remaining debate isn't about whether they want to do it, it's about whether they'll be able to do it and what their calculation will be around those relative capabilities.
DeepSeek is not a charity; they grew out of the largest hedge fund in China, no different from a typical Wall Street fund. They don't spend billions to give the world something open and free just because it is good.
When the model is capable of generating a decent amount of revenue, or when there is conclusive evidence that being closed would lead to much higher profit, it will be closed.
Maybe then we wouldn't be beholden to Nvidia's whims (a sore spot with regard to buying their cards and their cost, versus what Intel is trying to do with their Pro cards but with inevitably worse software support, plus import costs), or to those of a particular government. I wonder if we'll ever live in such a world.
But we already have models being developed and produced outside of the US, both in Asia and in Europe. Sure, it would be cool to see more from South America and Africa, but the playing field is not just in the US anymore, and particularly when it comes to open weights (which seem more of a "world benefit" than closed APIs), the US is lagging far behind.
Llama (v4 notwithstanding) and Gemma (particularly v3) aren't my idea of lagging far behind...
While neat, and of course Llama kicked off a large part of the ecosystem, so credit where credit is due, both of those suffer from being "open-but-not-quite": they come with long "Acceptable Use" documents outlining what you can and cannot do with the weights, while the Chinese counterparts slap a FOSS-compatible license on the weights and call it a day.
We could argue about whether that's the best approach, or even legal considering the (probable) origin of their training data, but the end result remains the same: Chinese companies are doing FOSS releases while American companies are doing something closer to BSL/hybrid-open releases.
It should tell you something when the legal department of one of these companies calls the model+weights "proprietary" while its marketing department keeps calling the same model+weights "open source". I know which of the two I trust to be more accurate.
I guess that's why I see American companies as being further behind, even though they do release something.
My consumer AMD card (7900 XTX) outperforms the 15x more expensive Nvidia server chip (L40S) that I was using.
Surely it would be cheaper and easier for the CCP to develop their own chipmaking capacity than going to war in the Taiwan strait?
with a reality tv show dude being the commander in chief and a news reporter being the defense secretary.
life is tough in america, man.
If I were China I'd be more worried about the other up-and-coming world power, India.
Building their own capacity means building everything in China, that is, the entire semiconductor ecosystem. Just look at the mobile phones and EVs built by Chinese companies.
The USA doesn't want to lose Taiwan because of the chip making plants, and a little bit because it is beneficial to surround their geopolitical enemies with a giant ring of allies.
That is what the CCP tells you and its own people.
The truth is that Taiwan is just the symbol of US presence in the western Pacific. Getting Taiwan back would mean the permanent withdrawal of US influence from the region and the official end of US global dominance.
The CCP doesn't care about the island of Taiwan itself; it cares about its historical positioning.
In any case it's clear that it is not the fabs that China cares about when it is talking about (re)conquering Taiwan.
Or, who knows, maybe they're just chillin, watching the Western labs burn GPU money and letting the eval metas shift, then dropping R2 when the OAI/Claude trust graph dips a bit.
I miss the old days of journalism, when they might feel inclined to let the reader know that their source for the indirect source is almost entirely funded by the fortune generated by a man who worked slavishly to become a close friend of the boss of one of DeepSeek’s main competitors (Meta).
Feel bad for anyone who gets their news from The Information and doesn’t have this key bit of context.
It's not even in the top ten based on OpenRouter: https://openrouter.ai/rankings?view=month