Finally, some clear thinking on a very important topic.
I never understood this argument
as a non-USian: I'd rather be under the Chinese boot than have all of humanity under the boot of an AI
and it is certainly no reason to do everything we possibly can to summon a machine god
Those are not the options on offer. The options are the boot of a Western AI or the boot of a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western one?
> certainly no reason to try to increase the chance of summoning a machine god
The argument is that this is inevitable. If it's possible to make AGI, someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.
If you don't think you can organize international cooperation around this, you can simply put such people on some equivalent of an FBI-style Most Wanted list and pay anyone who comes forward with information, or maybe gets them within your borders as well. If a government chooses to wave its dick around like this, it could easily cause other nations to copy the same law, thus instilling a new global Nash equilibrium where this kind of scientific frontier research is verboten.
There's nothing inevitable at all about that. I hesitate to even call such a system extreme, because we already employ systems like this to intercept, e.g., high-level financial conspiracies via things like the False Claims Act.
Further, it doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4 kg blob in everyone's head as proof of concept, and it does not take a data center.
Mossad could certainly do it
given Elon's AI is already roleplaying as Hitler and constructing scenarios on how to rape people, how much worse could the Chinese one be?
> The argument is that this is inevitable.
which is just stupid
we have the agency to simply stop
and certainly the agency to not try and do it as fast as we possibly can
there's even a previous example of controls of this sort at the nation-state level: those for nuclear enrichment
(the cost to perform uranium enrichment is now less than building a state-of-the-art fab...!)
as a nation state (not facebook): you're entitled to enrich, but only under the watchful eye of the IAEA
and if you violate, then the US tends to bunker-bust you
this paper has some ideas on how it might work: https://cdn.governance.ai/International_Governance_of_Civili...
This is worse than the prisoner's dilemma: the "we get there, they don't" outcome is the highest payout for the decision makers who believe they will control the resulting superintelligence.
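To make that concrete, here's an illustrative payoff matrix (numbers entirely made up, higher is better, payoffs read as (us, them)); the point is that "we race, they pause" is the single best cell for whoever expects to control the result:

                   they pause    they race
    we pause         (3, 3)        (0, 4)
    we race          (4, 0)        (1, 1)

With that ordering, racing is the dominant move for both sides even though mutual restraint beats mutual racing.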
This seems more like fear-mongering than anything based on reasoning I've been able to follow. China tends to keep control of its industry, unlike the US, where industry tends to control the state. I emphatically trust the Chinese state more than our own industry.
There's a megatrend of value concentrating in AI (and in all the companies that touch or integrate it). It makes a ton of sense that insurance premiums will flow in that direction as well.
and by 2040 it will be $5000 trillion!
and by 2050 it will be $5000000 quadrillion!
(unless softbank has been hiding it under their mattress)
So far, I have seen language models that, quite impressively, translate between different languages, including programming languages and natural language specs. Yes, these models use a vast (compressed) body of knowledge from pretty much all of the internet.
There are also chain-of-thought models, yes, but what kind of actual intelligence can they achieve? Can they formulate novel algorithms? Can they formulate new physics hypotheses? Can they write a novel work of fiction?
Or aren't they actually limited by the confines of what we as a species already know?
If we admit that even relatively stupid humans show some levels of intelligence, as far as I can tell we've already achieved artificial intelligence.
no
If that’s not “the rest of the owl” I don’t know what is.
Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.
1. The premium one should pay depends on the expected risk, which is damage from the event divided by the chance of event occurring. However, quantifying the numerator is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, damage might be destruction of all of humanity, if we believe the doomers.
2. Similarly, the denominator is basically impossible to quantify. What is the chance of an event which has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, causing companies to take even bigger risks.
3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.
1. Someone is always carrying the risk; the question is who it should be. We suggest private markets should price and carry the first $10B+ before the government backstop. That incentivizes them to price and manage it.
2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc. (see the sketch below).
3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits and standards would help reduce catastrophes, in turn reducing many of the existential risks.
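To make the "run simulations" point in (2) concrete, here is a minimal sketch in Python of how an insurer might Monte-Carlo a pure premium for a rare, severe loss. The frequency and severity numbers are made-up placeholders, not anything from the article:

    import random

    def estimate_pure_premium(p_incident=0.02, mean_severity=2e9, n_trials=100_000):
        # p_incident: assumed chance of at least one catastrophic incident per year (hypothetical)
        # mean_severity: assumed mean loss given an incident, exponentially distributed (hypothetical)
        total = 0.0
        for _ in range(n_trials):
            if random.random() < p_incident:                      # did an incident happen this year?
                total += random.expovariate(1.0 / mean_severity)  # size of the loss if it did
        return total / n_trials                                   # expected annual loss, before any loading

    print(f"estimated pure premium: ${estimate_pure_premium():,.0f}")

Red-teaming, audits, and incident data would feed into choosing p_incident and the severity distribution; for a true tail event those inputs remain guesses, which is the parent's objection.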
However, those are a far cry from the much more severe damages that superintelligence could enable. All of the above are damages that could already occur with current technology. Are you saying we have superintelligence now?
If not, your idea of selling superintelligence insurance hinges on anyone's ability to price this kind of risk: an infinitely large number multiplied by an infinitely small one.
(I realize my explanation was wrong above, and should be the product of two numbers.)
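Spelled out, in my notation rather than anything from the article, the quantity an insurer would have to pin down is

    \mathbb{E}[\text{loss}] = p \cdot D

with p the vanishingly small, essentially unknowable probability of the catastrophe and D the potentially unbounded damage; a premium has to sit above that product, which is exactly the "infinitely large times infinitely small" problem.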
I think many readers will also take issue with your contention that the private market is able to price these kinds of existential risks. Theoretically, accurate pricing would enable bioweapons research. However, the potential fallout from a disaster is so catastrophic that the government simply bans the activity outright.
nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers aware both of their responsibilities and of the long jail term that awaits them if they neglect them
this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant
sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever
but why is it a foregone conclusion that people are going to (implicitly rightly so given the framing lets it pass unquestioned!) put llms into things that materially affect my life on the level of it ending due to a stopped heart or a lethal dose of radiation
I stopped reading after this. First, there is no evidence of Superintelligence nearing, or even any clear definition of what "Superintelligence nearing" means. This is a classic "assuming the sale" gambit, with fear-mongering in its appeal.
> We’re navigating a tightrope as Superintelligence nears.
There is no evidence we're anywhere near "superintelligence" or AGI. There is no evidence any AI tools are intelligent in any sense, let alone "superintelligent". The only reference for this, given much later, is https://ai-2027.com/, which is no more than fan fiction. You might as well have cited Terminator or The Matrix as evidence.
The only people actually claiming any advancement towards "superintelligence" or "AGI" are those who directly gain financially from people thinking that it's right around the corner.
> If the West slows down unilaterally, China could dominate the 21st century.
Is this casual sinophobia intended to appeal to a particular audience? I can't see what purpose this statement, and others like it, serves other than to try to frame this as "it's us or them".
> Faster than regulation: major pieces of regulation, created by bureaucrats without technical expertise, move at glacial pace.
This is a very common right-wing viewpoint: that regulation, government oversight, and "red tape" are unacceptable to business. It forgets that building codes, public safety regulations, and workers' rights all stem directly from government regulation. This article goes out of its way to frame this as obvious, like a simple fact unworthy of introspection.
> Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.
There is no evidence this is the case, and no citation is even attempted.
There are certainly pretty gaping holes in its logic, but it's more than a fanfic. I'm a bit confused about the incentive of its authors to add their names to it, since it seems that if they're wrong they lose credibility, and if they're right I'm not sure they'll be able to cash in on the upside.