Completely irrelevant.
Just because something happened in the past doesn't mean it will happen in the future. Especially when you're talking in vague generalities (like you are) about something where the devil is in the details.
And then there are the personal effects, which comments like yours completely fail to address. If you lose your job and end up working at Walmart for 10% of the salary for the rest of your career, while some other guy someplace else gets a new job that pays 75% of your old one, is that a good outcome? People aren't fungible, especially when thinking about themselves.
> Just because something happened in the past, doesn't mean it will happen in the future
Say no more, let's ignore what always happened and go in the opposite way.
Oh come on man. The tech companies hope to create a technology that will automate the human intellect. I hope they fail, but we have to take the goal seriously. Don't talk to me about steamboats.
>> Just because something happened in the past, doesn't mean it will happen in the future
> Say no more, let's ignore what always happened and go in the opposite way.
How about you try to think about this situation, this technology, instead of essentially cargo-culting and superficially reasoning "this is a technology and that was a technology 100 years ago; both are technologies, so they'll both play out the same."
You're kinda missing the point. I suppose what I propose would be a "basic income scheme," but not UBI ("Universal basic income (UBI)[note 1] is a social welfare proposal in which all citizens of a given population regularly receive a minimum income..."). I consider the essential feature of UBI that most of the profits from AI are reserved for the billionaires and other shareholders who own those systems, and only the minimum amount needed to prevent the extreme desperation that can lead to revolution is socialized.
I'm proposing that all the profits from AI be socialized, and the billionaires get no more than you or I.
catigula•4mo ago
I find this so odd. Is there some imagined threshold at which AI will conveniently stop improving?
If AI keeps improving at the current pace, the disappearance of all jobs is the minimal floor of change.
AnimalMuppet•4mo ago
Even if I accept your premise (AI continues to improve at the existing pace), I don't see how AI is going to replace, say, welders, in a year. Or even bankers. (Yeah, an AI might be able to do the financial part. But bankers - as opposed to tellers - are about relationship with customers, and the AI isn't going to substitute there.)
catigula•4mo ago
1. The exponential curve of AI progress
2. The lack of any mechanism that will immediately wall off AI progress at this specific exponential part of the curve
>I don't see how AI is going to replace, say, welders, in a year.
This article is about white-collar jobs. However, if human-level intelligence (let alone surpassing human-level intelligence) just becomes a matter of compute, robotics will quickly follow. That being said, I'd expect this to be years, not a year.
> But bankers - as opposed to tellers - are about relationship with customers, and the AI isn't going to substitute there
Are you sure about that? Because even GPT-4o is an example of how hungry people are for the relationships these AIs present.
philipallstar•4mo ago
AI seems to need more and more power and expense thrown at it to get from, say, a 70% answer to an 80%, then just as much again to get to 85%, then just as much again to 87.5%. Speed of progress in the lower percentages is not an indicator of speed in the higher percentages.
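The figures in that comment trace a geometric series: if each equal unit of spend buys half the gain of the previous one, accuracy converges to a ceiling well short of 100% no matter how much you spend. A minimal sketch of that arithmetic (the 70%/10-point/halving numbers are the commenter's, not measured data):

```python
# Each equal unit of compute adds half the gain of the previous unit.
# Starting at 70% with a first gain of 10 points, accuracy approaches
# 70 + 10 + 5 + 2.5 + ... = 70 + 10 / (1 - 0.5) = 90, never 100.
def accuracy_after(steps, start=70.0, first_gain=10.0, decay=0.5):
    acc, gain = start, first_gain
    for _ in range(steps):
        acc += gain
        gain *= decay
    return acc

print(accuracy_after(1))   # 80.0
print(accuracy_after(3))   # 87.5
print(accuracy_after(50))  # ~90.0: unbounded spend, bounded accuracy
```

Under these assumptions the curve looks exponential early on only because the cheap gains come first; extrapolating that early speed past the ceiling is exactly the mistake the comment is pointing at.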