I don't know who is wrong or who is right. But I do see where the incentives fall, and I know human nature. So I find these overly positive predictions about where things are headed untrustworthy when they come from people with a vested interest in AI being a good thing, no matter how smart or credentialed those people are.
Which will enable people to choose to work fewer hours or earn more money, which is known to reduce selfishness. After all, we're just talking about what changed during the industrial revolution.
But the big question is why, in the last couple of decades, we increased productivity with computers without any significant change in hours worked. My anecdotal experience is that we have a ton of people pretending to work and very few who actually do; or, probably the most common case, they do a little work here and there.
We kind of need a big leap to really shake things up and lead to this. AI could be it, but I expect it's more likely Boston Dynamics or Optimus robots that will do it.
So the safety strategy is to create a cage to contain a super-human intelligence so that it cannot escape? That seems like a poor plan for two reasons:
- surely it would outsmart us?
- if not, at what point is it immoral to cage an intelligence smarter than us?