Wow. And that's the best-case scenario in the article. Explain to me again how that is the glorious future in which everyone is better off.
I think an important lens to view those developments with, apart from income and income security, would be alienation [1].
The industrial revolution brought alienation with respect to physical labor: Workers were no longer able to identify with (and learn the entire process of creating) a manufactured good like in preindustrial times. Instead, they were made to specialize in one specific production step inside the factory, which changed their role to the proverbial "cog in the machine". Suddenly, they weren't working to produce a good anymore, they were working "for the factory".
If the article's vision plays out (the "Shared Upswing" one, i.e. the good scenario), then the same alienation process will play out for cognitive labor: You won't think (and gain the knowledge/experience) to come up with solutions to problems that other people have; you will think to make some nebulous and hard-to-quantify improvements to an AI, so that it can think about how to solve the actual problems - as directed by its owners. I.e. you will work "for the AI".
Even if (if!) those jobs stay economically viable enough to make a living, they sound extremely unfulfilling and much more psychologically draining than today's jobs.
[1] https://en.m.wikipedia.org/wiki/Marx%27s_theory_of_alienatio...
I had to chuckle.
You can tell because we're all posting here via our VR helmets using NFTs.
/s
When climate-disaster-induced fires or tornadoes or tsunamis hit the data centers, I like to think that we'd want to spend a bigger chunk of our economy on food and housing. But who cares about us plebs if Sam Altman finally gets an (AI) girlfriend.
> Employers are testing replacements for workers, not just systems to augment them
That's a tale as old as time. One has to hope that capitalists would cling to oppressing humans out of habit and nostalgia and still employ us. Because, as we know from all modern economics books, if one is not in gainful employment, they are useless to society. We always have the "sell your blood" option until AGI invents cheap synthetic 3D-printed blood, which, if you trust people like Elon, should happen aaany moment now.
After all, making an AI work overtime to create shareholder value has to be less satisfying to the current Amazon RTO-policy makers than forcing real humans to be stuck in traffic for no productivity gain.
> Altman: Is this intelligence?
> Altman: We gonna achieve AGI!!
> Altman: Gib money pleaz

There is no way in hell, after all, that EU politicians are going to make any effort to do something about this before it happens (not even sponsoring real research into it). So they will be surprised by what happens. And as for the EU capturing the "Long Boom" of value that the internet and computers produced ... well, they failed to do that, except for a relatively small amount going to the existing EU rich and landlords.
So either AI crashes and burns, or the change will be implemented, with all EU citizens enrolled against their will before you can shout "I demand elections".
That's an awful lot of money chucked into the AI boom. It'll either be a big impact or a big bust.
This discussion is barely interesting at this point.
physix•7mo ago
From what I understand, we are far from achieving AGI, and the article would have benefited from defining the term and putting it into perspective.
Because the disruption is significant even without AGI.
datavirtue•7mo ago