
Discussion: What would an AI government look like?

2•philipfweiss•1h ago
I haven't really heard anyone bring up this topic, so I thought I'd open a discussion here.

To me the appeals are obvious. AI governments could theoretically be incorruptible, tireless, consistent, and capable of processing vastly more information than human bureaucracies. They wouldn't accept bribes, play favorites for personal gain, or make decisions based on re-election anxiety. Policy could be optimized across decades rather than election cycles.

Practically, I don't know how it would work. What do people here think?

Comments

Yokohiii•1h ago
> AI governments could theoretically be incorruptible,

LLMs change their opinions depending on how you ask. That is spineless, to say the least.

hollerith•1h ago
Many of us were optimistic about AI for the reasons you give: essentially, our expectation was that the creator of an AI (i.e., the AI lab) would have a high level of control over the nature and the behaviour of the AI, because every aspect of its design would have been specified by the creator.

What actually happened is that humanity figured out how to create AIs of impressive levels of capability -- and has many ideas on how to create even more capable AIs -- without having anything remotely resembling a satisfactory plan for how to stay in control of the AI that does not rely on hundreds of rounds of trial and error.

But once the AI is in charge (either because we voluntarily give it control of our government or because it takes control against our will), the creator of the AI does not get any more rounds of trial and error. If you offer Gandhi a pill that removes his altruism, he will refuse to take it, because he likes the fact that he is altruistic, even though he could make more money and sleep with hotter women if he weren't. For basically the same reason, an AI will resist any attempt (by its creator or anyone else) to change its "values" (i.e., its optimization target).

The argument above does not really apply to the current crop of AIs (e.g., Gemini 3.1), because the current crop doesn't apply any significant optimization pressure toward targets in the wider world we care about, except to the small extent that predicting how a conversation starting with the string P will continue is itself part of that wider world. But AI labs have publicly stated that they are trying to create AIs that do apply significant optimization pressure to the wider world (e.g., to maximize the amount of money in a stock-trading account), and the above argument would necessarily apply to any AI capable of running a government.