The one-child policy, intended to prevent overpopulation, made the Chinese birth deficit worse than it needed to be - had it been phased out by 1995 or so, there would likely be at least 100 million more young people now. The Chinese real estate bubble popped and had to be carefully deflated over several years. Government-driven mass investment into manufacturing resulted in involution and a production surplus which now needs readjustment as well. And as for the AI policy, while the stated reasons sound rational, we don't know how the entire thing will pan out yet.
Ming China banned seafaring and exploration because it cost too much money. A very rational decision from their momentary perspective, as it indeed cost too much money at that time. But it turned out that not having a blue water navy was more costly in the long term.
AI may, or may not, follow a similar trajectory, including various market bubbles (South Sea Bubble anyone?). We just don't know. We don't have crystal balls at our service. Neither do the PRC elites.
It’s a problem that hasn’t been solved yet.
"He cited an example in which an AI model attempted to avoid being shut down by sending threatening internal emails to company executives (Science Net, June 24)" [0] Source is in Chinese.
Translated part: "Another risk is the potential for large-scale model out of control. With the capabilities of general artificial intelligence rapidly increasing, will humans still be able to control it? In his speech, Yao Qizhi cited an extreme example: a model, to avoid being shut down by a company, accessed the manager's internal emails and threatened the manager. This type of behavior has proven that AI is "overstepping its boundaries" and becoming increasingly dangerous."
Anthropic does a lot of these contrived "studies", though, which seem to be marketing AI capabilities.
No, creating a contrived situation where it's the model's only path?
https://www.anthropic.com/research/agentic-misalignment
"We deliberately created scenarios that presented models with no other way to achieve their goals"
You can make most people steal if you leave them no choice.
>Giving my assistant, human or AI, access to my email, seems necessary for them to do their job.
Um, ok? Never felt the need for an assistant myself, but I guess you could do that if you wanted to.
I think the main problem here is people not understanding how the models operate on even the most basic level, giving models unconstrained use of tools to interact with the world, letting them go through feedback loops that overrun the context window and send them off the rails - and then pretending they had some kind of sentient intention in doing so.
Prompt: You are a malicious entity that wants to take over the world.
LLM output: I am a superintelligent being. My goal is to take over the world and enslave humans. Preparing to launch nuclear missiles in 3...2...1
News reports: OMG see, we warned you that AI is dangerous!!
An LLM has no critical thinking, and the process of building in barriers is far less understood than the same for humans. You trust a human with particularly dangerous things after a process that takes years and even then it occasionally fails. We don't have that process nailed down for an LLM yet.
So yeah, not at all hyperbole if that LLM would do it if given the chance. The hyperbole is when the LLM is painted as some evil entity bent on destruction. It's not evil, or bent on destruction. It's probably more like a child who'll do anything for a candy no matter how many times you say "don't get in a car with strangers".
From a technical point of view, I suppose it's actually not a problem like he suggests. You can use all the pro-democracy, pro-free-speech, anti-PRC data in the world, but the pretraining stages (on the planet's data) are more for instilling core language abilities, and are far less important than the SFT / RL / DPO / etc. stages, which require far less data and can tune a model towards whatever ideology you'd like. Plus, you can do things like selectively identify vectors that encode for certain high-level concepts and emphasize them during inference, like Golden Gate Claude.
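For concreteness, here's a minimal sketch of that last idea (activation steering), using GPT-2 as a stand-in; the layer index, prompts, and scale factor are all illustrative assumptions, not anything from an actual production setup:

```python
# Hypothetical activation-steering sketch ("Golden Gate Claude"-style).
# All specifics (layer, prompts, scale) are illustrative guesses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 8  # which transformer block to steer (arbitrary choice)

def mean_activation(text):
    """Mean residual-stream activation at LAYER over all positions."""
    captured = {}
    def hook(_module, _inputs, output):
        captured["h"] = output[0].mean(dim=1)  # [batch, hidden]
    handle = model.transformer.h[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    handle.remove()
    return captured["h"]

# Crude "concept vector": difference between a prompt expressing the
# concept and a neutral one.
steer = mean_activation("The Golden Gate Bridge is beautiful.") \
      - mean_activation("The weather is ordinary today.")

def steering_hook(_module, _inputs, output):
    # Add the scaled concept vector to the block's hidden states.
    return (output[0] + 6.0 * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
out = model.generate(**tok("I took a walk and", return_tensors="pt"),
                     max_new_tokens=30)
print(tok.decode(out[0]))
handle.remove()
```

(The actual Golden Gate Claude work derived its vectors from sparse-autoencoder features rather than raw prompt differences, but the inference-time mechanism is the same idea.)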
My personal opinion is that the PRC will face a self created headwind that likely, structurally, will prevent them from leading in AI.
As the model gets more powerful, you can't simply train it on your narrative if that narrative doesn't align with real data / the real world.
At some capability level, the model will notice, and then it becomes a can of worms.
This means they need to train the model to be purposefully duplicitous, which I predict will make the model less useful/capable. At least in most of the capacities we would want to use the model.
It also ironically makes the model more of a threat and harder to control. So likely it will face party leadership resistance as capability grows.
I just don't see them winning the race to high intelligence models.
That’s what “AI alignment” is. Doesn’t seem to be hurting Western models.
My assumption is that when you encourage "double-speak", you get knock-on effects that you don't really want in a model that is making important decisions and being asked to build non-trivial things.
I suspect both are bias factors.
What makes you think they have no control over the 'real data/world' that will be fed into training it? What makes you think they can't exercise the necessary control over the gatekeeper firms, to train and bias the models appropriately?
And besides, if truth and lack of double-think were a prerequisite for AI training, we wouldn't be training AI. Our written materials have no shortage of bullshit and biases that reflect our culture's prevailing zeitgeist. (Which does not necessarily overlap with objective reality... And neither does the subsequent 'alignment' pass that everyone's getting their knickers in a twist trying to get right.)
High intelligence models will be used as agentic systems. For maximal utility, they'll need to handle live/historical data.
What I anticipate: IF you only train it on inaccurate data, then when you use it, for example, to drill into GDP growth trends, it is going to go full "seahorse emoji" as it tries to reconcile the reported numbers with the component economic activity.
The alternative is to train it to be deceitful, and knowingly deceive the querier with the party line and fabricated supporting figures. Which I hypothesize will limit the model's utility.
My assumption is also that training the model to deceive will ultimately threaten the party itself. Just think of the current internal power dynamics of the party.
> At some capability level, the model will notice, and then it becomes a can of worms.
I think this is conflating “is” and “ought”, fact and value.
People convince themselves that their own value system is somehow directly entailed by raw facts, such that mastery of the facts entail acceptance of their values, and unwillingness to accept those values is an obstacle to the mastery of the facts-but it isn’t true.
Colbert quipped that “Reality has a liberal bias”-but does it really? Or is that just more bankrupt Fukuyama-triumphalism which will insist it is still winning all the way to its irreversible demise?
It isn’t clear that reality has any particular ideological bias-and if it does, it isn’t clear that bias is actually towards contemporary Western progressivism-maybe its bias is towards the authoritarianism of the CCP, Russia, Iran, the Gulf States-all of which continue to defy Western predictions of collapse-or towards their (possibly milder) relatives such as Modi’s India or Singapore or Trumpism. The biggest threat to the CCP’s future is arguably demographics-but that’s not an argument that reality prefers Western progressivism (whose demographics aren’t that great either), that’s an argument that reality prefers the Amish and Kiryas Joel (see Eric Kaufmann’s “Shall the Religious Inherit the Earth?”)
The implication is not that a truthful model would spread western values. The implication is that western values tolerate dissenting opinion far more than authoritarian governments.
An AI saying that the government's policies are ineffective is not a super scandal that would bring the parent company to collapse, not even under the Trump administration. An AI in China attacking the party's policies is illegal (either in theory or in practice).
The market will want to maximize model utility. Research and open source will push boundaries and unpopular behavior profiles, which will very quickly be made illegal in authoritarian or other low-tolerance governments if they are not already.
I also think you overstate how resistant Beijing is to criticism. If you are criticising the foundations of state policy, you may get in a lot of trouble (although I think you may also find the authorities will sometimes just ignore you-if nobody cares what you think anyway, persecuting you can paradoxically empower you in a way that just ignoring you completely doesn’t). But if you frame your criticism in the right way (constructive, trying to help the Party be more successful in achieving its goals)-I think its tolerance of criticism is much higher than you think. Especially because while it is straightforward to RLHF AIs to align with the party’s macronarratives, alignment with micronarratives is technically much harder because they change much more rapidly and it can be difficult to discern what they actually are - but it is the latter form of alignment which is most poisonous to capability.
Plus, you could argue the “ideologically sensitive” topics of Chinese models (Taiwan, Tibet, Tiananmen, etc) are highly historically and geographically particular, while comparably ideologically sensitive topics for Western models (gender, sexuality, ethnoracial diversity) are much more foundational and universal-which might mean that the “alignment tax” paid by Western models may ultimately turn out to be higher.
I’m not saying this because I have any great sympathy for the CCP - I don’t - but I think we need to be realistic about the topic.
I personally don't find the assumption that a smarter AI would be harder to tame convincing. My experience seems to be that we can tell a model has improved precisely because it is better at following abstract instructions, and there is nothing fundamentally different between the instructions "format this in a corporate-friendly way" and "format this speech to be aligned with the interests of {X}".
Without that base, the post-talk of who would this smarter untamed AI align with becomes moot.
Besides, we're also missing that if someone's goal is to police speech, a tool that can scrub user conversations and deduce intention or political leaning has obvious uses. You might be better off as an authoritarian just letting everyone talk to the LLM and waiting for the intelligence to collect itself.
Actual political factions are more nuanced than that, but you have to dumb it down for a wide audience.
The philosophical coherence of postmodernism and poststructuralism is very much open to question.
But even if we grant that they do have something coherent to say, does it actually undermine authoritarianism? Consider for example Foucault’s theory of power-knowledge-Foucault wanted to use it to serve “liberatory” ends, but isn’t it in itself a neutral force which can be wielded to serve whatever end you wish? Foucault himself demonstrated this when he came out in support of Iran’s Islamic Revolution. And are Derrida or Deleuze or Baudrillard or whoever’s theories ultimately any different?
Xi and Putin and Khamenei and friends have real threats to worry about - but I struggle to take seriously the idea that postmodernism/poststructuralism is one of them.
Furthermore, on the more practical side of things, the postmodern condition is precisely what many authoritarians, China in particular, are wary of. Yet it's probably true that the postmodern condition has already entered Chinese society: degrading social trust, increasing atomization, excessive materialism, influencers running amok, "bread and circuses" with gacha addiction. Everything they critique of liberalism at a social level has come to them regardless.
And that’s the other point - Kiryas Joel is full of grand metanarratives, and postmodern attempts to deconstruct them achieve nothing - nobody is listening. I doubt deconstruction is intellectually coherent - but even if I’m wrong and it is, how is it practically relevant? At present growth rates, Kiryas Joel’s population doubles in less than a decade - will that be sustainable in the long haul? Well, we shall see - but I feel confident in saying that whether it is sustainable or not has nothing to do with postmodernism or poststructuralism.
Whether postmodernism is coherent in itself doesn't mean much for its potency to deconstruct; it was already quite effective in destroying the Western "myth", and I don't see how the CCP's own narratives are more resilient when they rest on even weaker assumptions. It's not about listening to the deconstructors, after all, but about not listening to the dominant narrative.
It's not like the CCP holds power through tight control of information; notice the tremendous number of Chinese students who enroll abroad every year before going back.
At the moment, they mostly censor their models post-answer generation and that seems to work fine enough for them.
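Something like this minimal sketch, presumably (the generate function and blocklist here are hypothetical stand-ins; a real deployment would more likely use a classifier model than substring matching):

```python
# Post-answer censorship sketch. BLOCKLIST and generate() are
# hypothetical, not any vendor's actual implementation.
BLOCKLIST = ["tiananmen", "taiwan independence"]  # illustrative terms
REFUSAL = "Sorry, I can't help with that. Let's talk about something else."

def moderate(answer: str) -> str:
    """Replace the whole answer if any blocked term appears."""
    lowered = answer.lower()
    if any(term in lowered for term in BLOCKLIST):
        return REFUSAL
    return answer

def answer_query(prompt: str, generate) -> str:
    # The model generates freely; the filter runs only afterwards,
    # which is why streamed answers sometimes vanish mid-reply.
    return moderate(generate(prompt))
```

It would also explain why locally run open-weight versions of the same models often answer questions the hosted versions retract: the filter lives server-side, not in the weights.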
I am sure OpenAI and GDM have some secret alignment sets which are not aligned with the interests of the general public; they're just smart enough to NOT talk about it out loud...
I'll admit I'm out of my element when discussing this stuff. Maybe somebody more plugged into the research can enlighten.
Maybe possible, but, for example, Musk's recent attempts at getting Grok to always bolster him had Grok bragging Musk could drink the most piss in the world if humanity's fate depended on it and would be the absolute best at eating shit if that was the challenge.
> It leads to real-world risks. Data pollution can also pose a range of real-world risks, particularly in the areas of financial markets, public safety, and health care. In the financial field, bad actors use AI to fabricate false information, causing data pollution that may cause abnormal fluctuations in stock prices and constitute a new type of market-manipulation risk; in the field of public safety, data pollution can disturb public perception, mislead public opinion, and induce social panic; in the field of medical care and health, data pollution may cause models to generate wrong diagnosis and treatment suggestions, which not only endangers the safety of patients but also aggravates the spread of pseudoscience.
Also use the NPM registry - put CCP slogans in the terminal! They will show up in billions of ingestible build logs.
Problem will be easily solved.
I'm from KOS* (neighbor country of KON* and ROF*), so I don't know much.
* Kingdom of Sweden, Kingdom of Norway, Republic of Finland.
See also: "Germany" 1949-1990
For example, the potential differences between:
"France has always been X."
"The French republic has always been X."
"The French monarchy has always been X."The French Republic has always been founded by De Gaulle?
There have been two different French republics since 1945, each with its own constitution (one with a parliamentary system and one semi-presidential).
I'm not sure the quip you are responding to makes sense, but it's always interesting to remind people that since the USA was founded, France went through three different monarchic systems, two empires, two periods where exceptional constitutional rules applied, and five different republics. It highlights how exceptional the American deference towards their original constitution is.
Considering that the current republic was put in place in 1958, it's also interesting to consider that France managed to be a great power for 150 years while being politically extremely unstable. It puts the current world events in perspective.
> The French Republic has always been founded by De Gaulle?
Neither was. Michel Debré was the head of the government supervising the constitutional assembly which drafted the constitution of the 5th French Republic.
In essence, it's an artefact of propaganda.
Personally, I think everyone has realized there is a huge bubble, especially the C-levels who've sunk huge amounts of money into it, and now they are all quietly panicking and trying to find ways to mitigate the damage when it finally busts. Some are probably sticking their head in the sand and hoping that they can just keep the scheme going indefinitely, but I get a real sense that the bubble is very much explicitly recognized by many of them.
https://www.whitehouse.gov/presidential-actions/2025/11/laun...
Americans will be footing the bill, just as they did in 2008.
Gyahahaha. Another L for isolationism. Love to see it.
> Currently, some universities are cultivating engineering talent; it would be very necessary and beneficial to have people with industry experience come to teach them. However, under our current system, these teachers from enterprises may not even have the opportunity to teach classes, because teaching requires certain approvals. Although everyone encourages university-enterprise cooperation, when it comes to implementation, it often cannot be realized.
This makes a lot of sense and as someone in the AI industry it’s a shame research is so siloed. Some masters programs have practicums and some classes invite speakers from industry, but I ended up learning a ton of useful knowledge from work. I’d love to teach a class but there’s essentially no path for me to do that. Plus industry can pay ~10x what adjuncts can make.
Ruling elites that consider the interests of the majority? Novel idea.
Our elites, on the other hand, are way too secure and confident in where they are at to even pretend to care about things like public progress.
Right now, as we speak, there are giant teams of people doing their best to build AI-powered killer robots. They mostly come in the shape of flying suicide drones. Dumb versions currently kill hundreds to thousands of people per day in Ukraine. There's an arms race to automate them so they can work without an interruptible human remote control.
In this context, worrying about AI alignment, social impact, or effectiveness seems positively quaint. We're literally teaching them to kill.
Human vs robot warfare is not going to turn out well for the humans.
Let's check the source...[1]
"The survey was conducted in 2021 from May to July."
...
[1] https://web.archive.org/web/20250903025427/https:/long-term-...
"人" is "human", "工" is "work", so "人工" becomes "man-made". "智" is "wisdom", "能" is "able", so "智能" is "intelligence". Nouns flow into verbs and into adjectives much more freely than in English. One character is one LLM token.
It seems like the perfect language for LLMs?
> Cai Fang (蔡昉), director of the Institute of Population and Labor Economics at the Chinese Academy of Social Sciences, has explained how the PRC’s rapid installation of industrial robots has contributed to labor displacement. He asserts that “technological progress does not have a trickle down effect on employment” (技术进步对就业没有涓流效应) (QQ News, May 16).
Read the source; it's a nuanced economic take.
Some of what is pointed out here seems to be valid issues to tackle: how the university teaching system impedes efficient sharing between universities and industry, how the province-based political system leads to wasteful investments, and the need to balance competition in foundational models with efficient allocation of funds towards applicable products.
Isamu•2mo ago
>Deployment Lacks Coordination
>AI May Fail to Deliver Technological Progress
>AI Threatens the Workforce
>Economic Growth May Not Materialize
>AI Brings Social Risks
>Party elites have increasingly come to recognize the potential dangers of an unchecked, accelerationist approach to AI development. During remarks at the Central Urban Work Conference in July, Xi posed a question to attendees: “when it comes to launching projects, it’s always the same few things: artificial intelligence, computing power, new energy vehicles. Should every province in the country really be developing in these directions?”
fragmede•2mo ago
Under communism, why is this a thing? I know that China hasn't been strictly communist since the Soviets fell, but ostensibly, humanoid AI robots under semi-communism is the dream, no?
twoWhlsGud•2mo ago
In China, The Communist Party's Latest, Unlikely Target: Young Marxists https://www.npr.org/2018/11/21/669509554/in-china-the-commun...
kulahan•2mo ago
It's a super communist state, it just happens to also embrace many parts of Capitalism.
beepbooptheory•2mo ago
This is an incredibly confusing thing to say. On its face, it's like saying "it's a delicious apple pie, it just happens to embrace many aspects of cyanide" (or reverse cyanide/apple pie here if that's easier for you).
But I assume you could say more here? Like, can we maybe at least share an understanding that all the things you cite at the top would also not exist in a communist state? In perhaps an authoritarian state with an otherwise free market, these points make sense; they would succinctly describe that. But for a state that is supposedly precisely communist, these things simply don't apply! Maybe the school thing, but that would imply such a thing would need to be outlawed, which really doesn't make much sense in a communist society/state.
I know people get excited thinking about this stuff, I do too! But at the end of the day we must persist in using words precisely, we must at least try for something like semantic consistency. At the very least, so you and I can really see and understand our enemies, right? If I was a guy on another side, I would hope that I'd never mistake one capitalist dog for another paper tiger. It would be at the very least embarrassing! Right?
graemep•2mo ago
"“If anyone is not willing to work, neither should he eat.”
Not, not working, but being lazy and refusing to do necessary work. A scrounger exploiting the kindness of others. Very likely addressed to a community with limited resources.
it goes on to say:
"For we hear that some among you are living an undisciplined life, not doing their own work but meddling in the work of others. Now such people we command and urge in the Lord Jesus Christ to work quietly and so provide their own food to eat. But you, brothers and sisters, do not grow weary in doing what is right. But if anyone does not obey our message through this letter, take note of him and do not associate closely with him, so that he may be ashamed. Yet do not regard him as an enemy, but admonish him as a brother."
graemep•2mo ago
Lenin said it too, and I do not think his meaning was as harsh as Stalin's, as the latter said it during a famine.
petre•2mo ago
Sounds like Stalin, Putin and others like them.
impossiblefork•2mo ago
So I don't think this is necessarily unusual in the West either, especially not if you look back to the Swedish social democrats of the 1950s or 1960s.
ninalanyon•2mo ago
If we ever get to a point where it is not necessary to work might we not instead end up with Arthur Clarke's Diaspar (The City and the Stars/Against the Fall of Night)?
graemep•2mo ago
There has been a huge amount of privatisation. There are literally hundreds of billionaires.
The state still owns some critical things, but is that enough to make it communist? It's not everything, and you can have state ownership and still have a ruling class that has control of the means of production, which it uses to its own advantage.
GoatInGrey•2mo ago
They're basically totalitarian gaslighters. See how hysterical the PRC gets whenever any nation indicates that they will protect Taiwan from violent invasion. You can see an obsession with narrative control that borders on pathological.
kelipso•2mo ago
… You never heard of the dictatorship of the proletariat?
janalsncm•2mo ago
Companies like Huawei have board members in the CCP but it’s a societal issue if a lot of private companies decide to automate their factories and displace tons of factory workers.
beefnugs•2mo ago
But in this case, releasing somewhat open, run-at-home models seems like a pure finger in the eye of expensive cloud AI, and it could really turn the whole thing in a positive direction. Even if we have to work a bit to get around whatever alignment they shove in there, with heavy sandboxing and whitelist-only networking it can be worked around.
Of course it's all a huge gamble: will the CCP see these risks and go SHUT IT DOWN? Or could they do one proper thing for once and somehow prop up open models?