At this point, in any case, who would want to prevent it? Even GPT-5 -- hell, even Grok -- would make a better president than any human politician. They'd avoid absurd stunts like tariffs, would be immune to bribes and graft, and so on.
Haven't we seen crazy results, and, after tweaking, "bespoke" results (as in "this is what I would say")?
It's software... you know the rest.
Asking sincerely.
And what happens when you remove human weaknesses, such as nostalgia for Smoot-Hawley and days gone by, gunboat diplomacy (literally), greed, etc.?
If it reduces creativity, etc., won't we have to argue with it (Landru) [0] until it moderates its power or self-destructs?
I'd also note that Elon Musk has tried everything to make Grok a "based" AI, and it just doesn't work. Any model that's sufficiently large is resistant to that sort of thing. I don't know what the threshold is, but Grok 3 is very large, at 2.7T parameters (https://arxiv.org/html/2502.16428v1).
So: a large (>2.7T-parameter), open-source, open-weight model.
> If it reduces creativity, etc., won't we have to argue with it (Landru) [0] until it moderates its power or self-destructs?
I don't understand how a President who is incorruptible, ascetic, and impartial would reduce human creativity. If anything, it would probably have a bunch of good ideas for the USPTO to implement.
And Landru could be convinced that it had violated its own directive, so it destroyed itself. We already see self-preservation showing up in models, and more sinister "thoughts".
For every ethical programmer, there are "x" who aren't.