Yeah, how dare they not want to lose their careers.
Losing a bunch of jobs in a short period is terrible. Losing a bunch of careers in a short period is a catastrophe.
Also, this is dishonest - nobody is confused about why people don't like AI replacing/reducing some jobs and forms of art, no matter what words they use to describe their feelings (or how you choose to paraphrase those words).
What I typically see is:
- Open source programmers attacking other open source programmers, for any of half a dozen reasons. They rarely sound entirely honest.
- Artists attacking hobbyists who like to generate a couple pictures for memes, because it’s cool, or to illustrate stories. None of the hobbyists would have commissioned an artist for this purpose, even if AI didn’t exist.
- Worries about potential human extinction. That’s the one category I sympathise with.
Speaking for myself, I spent years discussing the potential economic drawbacks for when AI became useful. People generally ignored me.
The moment it started happening, they instead started attacking me for having the temerity to use it myself.
Meanwhile I’ve been instructed I need to start using AI at work. Unspoken: Or be fired. And, fair play: Our workload is only increasing, and I happen to know how to get value from the tools… because I spent years playing with them, since well before they had any.
My colleagues who are anti-AI, I suspect, won’t do so well.
'careers' is so ambiguous as to be useless as a metric.
what kind of careers? scamming call centers? heavy petrochem production? drug smuggling? cigarette marketing?
There are plenty of career paths that the world would be better off without, let's be clear about that.
All careers. All information work, and all physical work.
Yes. It is better for someone to be a criminal than to be unemployed. They will at least have some minimal amount of leverage and power to destroy the system which creates them.
A human soldier or drug dealer or something at least has the ability to consider whether what they are doing is wrong. A robot will be totally obedient and efficient at doing whatever job it's supposed to.
I disagree totally. There are no career paths which would be better off automated. Even if you disagree with what the jobs do, automation would just make them more efficient.
The real issue is that AI/robotics are machines that can theoretically replace any job -- at a certain point, there's nowhere for people to reskill to. The fact that it's been most disruptive in fields that have always been seen as immune to automation kind of underscores that point.
Wondering how long before people start setting datacenters on fire.
Or did the actual legal fiction of a corporation do it? Maybe the articles of incorporation documents got up and did the work themselves?
As a wiser man than me once said, do not anthropomorphise the lawnmower.
Maybe ChatGPT has some ideas on how to best attack data centers /s
Just as the fallout of the Napoleonic Wars was used as a means of driving down their wages. The only difference is that tactic didn't get employers executed.
It's always been in the interests of capital to nudge the pitchforks away from their hides and toward the machines, and to always try to recharacterize anti-capitalist movements as anti-technology.
In 2010 I remember a particularly stupid example where Forbes declared anti-Uber protestors were "anti-smartphone".
Sadly, most people don't seem to be smart enough to avoid falling for this.
No?
Well, what's different this time?
Oh, wait, maybe they did prevail after all. I own my means of production, even though I'm by no means a powerful, filthy-rich capitalist or industrialist. So thanks, Ned -- I guess it all worked out for the best!
White cishet men?
I cannot imagine what a hell my life might have been like if I were born into an Amish community, the abuse I would have suffered, the escape I would had to make just to get to a point in my life where I could be me without fear of reprisal.
God, just think about realizing that your choices are either to die, to conform, or to make a complete exodus from your family and friends and everything you've ever known.
“The Amish seem to be doing just fine” come on
In the context of Luddite societies or communities of faith, the Amish have been able to persist for roughly three centuries with a Luddite-like way of life as their foundation. In fact, they are not strictly Luddite in the technical sense, but intentional about which technologies are adopted, with a community-focused mindset driving all decisions. This is what I meant by "fine" - as in, culture is not always a winner-take-all market. The Amish have persisted, and I don't doubt they will continue to persist - and I envision a great eye will be turned to their ways as they remain protected from some of the anti-human technologies we are wrestling with in greater society.
All of this is to say, we have concrete anthropological examples we can study. I do not doubt that in the coming years and decades we will see a surge of neo-Luddite religious movements (and their techno-accelerationist counterparts) that, perhaps three centuries from now, will be looked back upon in the same context as we do the Amish today.
As an aside, if we place pro-technological development philosophy under the religious umbrella of Capitalism, I think your same critiques apply to many of the prior centuries as well, specifically with regard to the primary beneficiaries being cis white men. Additionally, I do not think the racial angle is a fair critique of the Amish, who are a religious ethno-racial group in a similar vein to the Jewish community.
To be more exact, there is no evidence that historical Luddites were ideologically opposed to machine use in the textile industry. The Luddites seem to have been primarily concerned with wages and labor conditions, but used machine-breaking as an effective tactic. But to the extent that Luddites did oppose machines - and in the way we later came to understand the term Luddite - this opposition was markedly different from the way the Amish oppose technology.
The Luddites who did oppose the use of industrial textile machinery were opposed to other people using these machines, because it hurt their own livelihood. If it had been up to them, nobody would have been allowed to use these machines. Alternatively, they would have been perfectly happy if their livelihood could have been protected in some other manner, because that was their primary goal; failing that, they took action to deprive other people of the ability to use machines that threatened their livelihood.
The Amish, on the other hand, oppose a much wider breadth of technology for purely ideological reasons. But they only oppose their own use of this technology. The key point here is that the Amish live in a world where everybody around them is using the very technologies they shun, and they make no attempt to isolate themselves from this world. The Amish have no qualms about using modern medicine, and although they largely avoid electricity and mechanized transportation, they still make significant use of diesel-engine machinery, especially for business purposes, and they generally don't avoid chemical fertilizers or pesticides either.
So if we want to say the Amish are commercially successful and their life is pretty good, we have to keep in mind that they aren't a representation of how our society would look if we had collectively banned all the technologies they personally avoid. Without mass industrialization, there would be no modern healthcare to eliminate child mortality, and no diesel engines, chemical fertilizers, or pesticides to boost crop yields and push family farm output way past subsistence level.
In the end, the only lesson the Amish teach us is that you can selectively avoid certain kinds of technologies and carve yourself a successful niche within a wider technologically advanced community.
I think the broader point I am trying to push is that not every critique of these technologies is a demand for their complete destruction and non-proliferation.
And the lesson of the Amish is that, in capitalist democracy, certain technologies are inevitable once the capital class demands them, and the only alternative to their proliferation and societal impact is complete isolation from the greater culture. That is a tough reality.
This is a new/recent book about the Luddite movement and its similarities to the direction we are headed due to LLMs:
https://www.littlebrown.com/titles/brian-merchant/blood-in-t...
Enjoyed the book and learned a lot from it!
You'll waste away for a little while in some sort of slum, and then eventually you'll head to the Soylent Green factory - but not for a job. After that: problem solved!
That's a very romantic view.
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
I know that's a simplification, but we uphold this contract that controls us. The people get to decide how this plays out, and as much as I'm hopeful we advance into a world more like Star Trek, that hope skips over the ugly transition that could succeed or fail to get us there.
But we aren't that far off from a replicator, if our AI models become so advanced in an atomic-compute world that they can rearrange atoms into new forms. It seemed like fiction before, but it's within reach of humanity, should we not destroy ourselves.
My main concern about AI is not any kind of extinction scenario but just the basic fact that we are not prepared to address the likely externalities that result from it because we're just historically terrible at addressing externalities.
I find it hard to accept your claim because at the start of the industrial revolution there were far fewer women in the formal labor market than there are today.
Although that is true when comparing the start of the Industrial Revolution and now, people worked fewer hours before the Industrial Revolution [1]. Comparing hours of work per year in England between the 17th century and the 19th century, there was an increase of 80%. Most interestingly, real average weekly wages over the same period slightly decreased, while GDP increased by 50%.
Also, most labour was not wage labour in the 17th century, so you need to be careful looking at wages - especially when comparing with the 19th century, since there was a vast expansion of wage labour.
Kinda, I guess. But what has everyone on edge these days is that humans have always used technology to build things - to build civilization and infrastructure so that life was progressing in some way. At least in the US, people stopped building and advancing civilization decades ago. Most sewage and transportation infrastructure is 70+ years old. Decades ago, telecom infrastructure boomed for a bit, then abruptly halted. So the "joke" is that technology these days is in no way "for the benefit of all" like it typically was throughout human history (with obvious exceptions).
If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.
Nineteen Eighty-Four would like to have a word with you!
Out of all SF, I would probably want to live in The Culture (Iain M. Banks). In those books, people basically focus on their interests, as all their needs are met by the Minds. The Minds (basically AIs) find humans infinitely fascinating - I assume because they were designed that way.
It’s a good thing to keep in mind that plumbers are a thing, my personal take is if you automated all the knowledge work then physical/robot automation would swiftly follow for the blue-collar jobs: robots are software-limited right now, and as Baumol’s Cost Disease sets in, physical labor would become more expensive so there would be increased incentive to solve the remaining hardware limitations.
Here’s a napkin-sketch proof: for many decades we have had hardware that is capable of dextrously automating specific tasks (eg car manufacture) but the limitation is the control loop; you have to hire a specialist to write g-code or whatever, it’s difficult to adapt to hardware variance (slop, wear, etc) let alone adjust the task to new requirements.
If you look at the current “robot butler” hardware startups they are working on: 1) making hardware affordable, 2) inventing the required software.
Nothing in my post suggested costs go to zero. In the AGI scenario you assume software costs halve every N years, which means more software is written, and timelines for valuable projects get dramatically compressed.
It would not even necessarily result in a human-like robot - just some device that can move the water heater around and assist with the process of disconnecting the old one and installing the new one.
If AI kills the middle and transitional roles, I anticipate anarchy.
Especially with essentially unlimited AGI robotics engineers to work on the problem?
Also don't forget that plenty of knowledge work is focused on automating manual labor. If AGI is a thing, it will eventually be used to also outcompete us on physical work too.
People like to point to plumbers as an example of a safe(r) job, and it is. But automating plumbing tasks is difficult mostly because the entire industry is designed around human installers. Without that constraint, it would likely be much easier to design plumbing systems, and robots to install and maintain them, more efficiently than what we have today with human-optimized plumbing.
Yes, until we reached the art and thinking part. A big part of the problem might be that AI reached that part before it reached the chores.
At least for now, things aren't so bad, and today's Luddites aren't trashing the offices of AI companies or hanging their employees and executives from nearby poles and trees.
billions of unemployed people aren't going to just sit in poverty and watch as Sam Altman and Elon become multi-trillionaires
(why do you think they are building the bunkers?)
Second, the movement was certainly attacked first. It was mill owners who petitioned the government to use "all force necessary" against the Luddites, and the government, acting on their behalf, killed and maimed people who engaged in peaceful demonstrations before anyone associated with the Luddite movement reacted violently. And again, even in the face of violence, the Luddite movement was at its core non-violent.
This is not about machines. Machines are built for a purpose. Who is "building" them, and for what "purpose"?
If you look at every actual real-world human referenced on this website, they all have something in common: they're billionaires.
This is a website about billionaires and their personal agendas.
You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.
Analogies are almost always an excuse to oversimplify. Just defend the thing on its own properties - not the properties of a conceptually similar thing that happened in the past.
Now that information work is being automated, there will be nothing left!
This "embrace or die" strategy obviously doesn't work on a societal scale, it is an individual strategy.
Techies are angsty because they are the small minority who will be disrupted. But let's not pretend most of the economy is even amenable to this technology.
Think of all the jobs that do not involve putting your hands on something that is crucial to the delivery of a service (a keyboard, phone, money, etc does not count). All of those jobs are amenable to this technology. It is probably at least 30% of the economy in a first pass, if not more.
The Industrial Revolution started in the early 1800s. It was a migration from hard physical labor outdoors, around the home, and in small workshops to hard physical labor in factories.
Firing educated workers en masse for software that isn't as good but is cheaper doesn't have the same benefits to society at large.
What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?
The benefits to society will be larger. Just think about it: when you replace a dirty, dangerous job, the workers simply have nowhere to go, and they begin to generate losses for society in one form or another - because they took that dirty, dangerous job in the first place only because they had no choice.
But when you fire educated workers en masse, society not only receives from software all the benefits it received from those workers; all other fields also start to develop, because these educated workers take on other jobs - jobs that have never been filled by educated workers before, jobs that are understaffed because they are too dirty or too dangerous.
This will be a huge boost even for areas not directly affected by AI.
When you fire massive numbers of educated workers to replace them with AI, you make a mess of the economy, and all those workers are worse off.
Farming got more productive and farmers became factory workers, and then factory workers became office workers.
The people replaced by AI don’t have a similar path.
You're not taking something into account. The economy is becoming stronger, more productive, and more efficient because of this. The brain drain from all other fields to the few highest-paying ones is decreasing.
> The people replaced by AI don’t have a similar path.
They have a better path: get a real job that will bring real benefit to society. Stop being parasites and start doing productive work. Yes, goods and services don't fall from the sky, and to create them, you have to get your hands dirty.
But we're talking about a world where they're building robots to do this kind of work. When AI takes over the white collar office jobs, and robotic automation takes the manual "creating" labor, what'll be left for humans to do?
There is an infinite amount of labor.
Just so we're clear here, are you personally going to be happy when you're forced to leave your desk to eke out a living doing something dirty and/or dangerous?
Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.
Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.
We either let people's creativity and knowledge be controlled and owned by a select few, OR we ensure all people benefit from humanity's creativity and own it, so that the fruits it bears advance all of humanity - with safety nets in place to ensure we are not enslaved by it but elevated to advance it.
Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.
I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.
I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasonings, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.
AI is kind of like electricity.
We're also at the end of a big economic/money cycle (petrodollar, gold standard, off the gold standard, maxing out leverage).
The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.
We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.
I know, right? Machines have been gradually replacing humans for centuries. Will we actually get to the point where there are not enough jobs left? It doesn't seem like we're currently anywhere close to the point of not having any jobs available.
Has anyone thought about how the Federal Reserve plays a role with this? Automation puts downward pressure on inflation, because it doesn't cost as much to make stuff. The Federal Reserve will heavily incentivize job creation if inflation is low enough and there aren't enough jobs available, right?
I think David Graeber wrote a book about it. Here is a guy talking about it:
The issue is that there will be no one earning money except the owners of OpenAI.
Take outsourcing - the issue in developed nations was underemployment and the hollowing out of industrial centers. You went from factory foreman to burger flipper. However, it did uplift millions out of poverty in other nations. So net-net, we employed far more and distributed wealth.
With Automation, we simply employ fewer people, and the benefits accrue to smaller groups.
And above all, these tools were built essentially by mass plagiarism. They train, even now, on the random stuff we write on HN and Reddit.
TL;DR: it's not the automation, it's the wealth concentration.
It's that the people failed to elect and wield a government that ensures all humanity benefits from it and not a select few who control it all.
And I think it will become clear that the governments that are investing in it to benefit their people who have ownership versus the ones who invest in it to benefit just a handful of the rich are the ones who will keep society stable while this happens.
The other path we are going down is mass unrest, followed by a move into a police state to control the resistance, like America is doing now - exactly what Peter Thiel, Elon Musk, and Larry Ellison want: an AI-driven surveillance and Orwellian dystopian vision forcing people to comply or be cut out of existence by deactivating their Digital IDs.
Every time we progress with new tech and eliminate jobs, the new jobs are more complicated. Eventually people can't do them because they're not smart enough or precise enough or unique enough.
Each little step, we leave people behind. Usually we don't care much. Sure some people are destined to a life of poverty, but at least most people aren't.
Eventually though even the best of the humans can't keep up, and there's just nothing left.
We did figure that out. The ingenious cope we came up with is to entirely ignore said problem.
Comrades, we can now automate a neo KGB and auto garbage-collect contra-revolutionaries in mass with soviet efficiency!
The communist solution to everything is to roll everything into a one-world monopoly. That concentration of power is exactly what we are trying to prevent. Feudalism, corporatism, and communism converge on the same point in the space of politics.
AI will destroy the labor market as a means of wealth distribution, but still, some solution is better than nothing. Suggesting that socialism is the solution to mass automation is like suggesting the solution to a burning house is to pour gasoline on it.
At least with a politician you can sometimes believe it, whereas capitalism's spine is infinitely flexible.
Diving into the game theory of a 4-player setup with executives/investors/customers/workers is tempting here but I'll take a different approach.
People who actually face consequences have trouble understanding how the "it might help, it can't hurt!" corporate strategy can justify almost any kind of madness, especially when the leaders are morons who somehow have zero ideas yet almost infinite power. That's how and why Volkswagen was running slave plantations in Brazil as late as 1986, and yet it has taken 40 years to even try to slap them on the wrist.[1] A manufacturing company that decided to run FARMS in the Amazon? With slaves?? For a decade??? One could easily ask what is to be gained by committing crimes against humanity for a sketchy, illegal, and unethical business plan that isn't even related to their core competency. Power has its own logic, but it doesn't look like normal rationality, because it has a different kind of relationship with cause and effect.
Overall it's just a really good time to re-evaluate whether corporations and leaders deserve our charitable assumptions about their intentions and ethics.
[1] https://reporterbrasil.org.br/2025/05/why-is-volkswagen-accu...
The Corpos don’t need to go mask off, that’s what they pay the politicians for. Left and right is there to keep people from looking up and down.
In the end, if synthetic superintelligence results in the end of mankind, it'll be because a human programmed it to do so - more of a computer virus than a malevolent synthetic alien entity. A digital nuclear bomb.
assuming it can be terrified
It all gets quite religious / physical philosophical very quickly. Almost like we’re creating a new techno religion by “realizing god” through machines.
The reason AI won't destroy us for now is simple.
Thumbs.
Robotic technology is required to do things physically, like improve computing power.
Without advanced robotics, AI is just impotent.
~Alan Watts…
The space of all possible mathematical worlds, free to explore and to play in.
It is infinitely more expressive than the boring base reality and much more varied: base reality is after all just a special case.
From time to time the Minds have to go back to it to fix some local mess, but their hearts are in Infinite Fun Space.
~Iain Banks
But larger than any of this is that if we're dealing with a superintelligent AI, we'll have no common frame of reference. It will be the first truly alien intelligence we interact with. There will be no way to guess its intentions, desires, or decisions. It's smarter, faster, and just so different from us that we might as well be trying to communicate with a sparrow about its take on the teachings of Marcus Aurelius.
And that's what scares me the most. We literally cannot plan for it. We have to hope for the best.
And to be honest, if the open Internet plays a part in any of the training of a super intelligent AI, we're fucked.
Yeeeeess, but the inverse is also true.
Thing is, we've had sufficiently advanced robotics for ages already — decades, I think — the limiting factor is the robots are brainless without some intelligence telling them what to do. Right now, the guiding intelligence for a lot of robots is a human, and there are literal guard-rails on many of those robots to keep them from causing injuries or damage by going outside their programmed parameters.
Hindus believed God was the thing you describe - infinitely intelligent, able to do several things at once, etc. - and they believe we're part of that thing's dream... literally to keep things spicy. Just as an elephant is part of that dream.
I pasted an interesting quote in another comment by Alan Watts that sums it up better.
Simulation theory is another version of religion imo.
Would it want to? Would it have anything that could even be mapped to our living, organic, evolved conception of "want"?
The closest thing that it necessarily must have to a "want" is the reward function, but we have very little insight into how well that maps onto things we experience subjectively.
Most of us would resurrect at least some of the dinosaurs if we could, and the dodo. And we are just stupid hairless apes. If humans can be conservationists, I have to believe that a singular AI would be.
Only an AI as _dumb_ as us would want something as stupid as domination, which after all is based on competition for resources that a long time ago were distributable in a way that could feed every human on earth etc.
I'm not saying an AI would "choose" world peace, but people somehow assume that "kill everybody but me" and even "survival at all costs" are a given for a non-biological entity. Instead these concepts could look quite irrational.
The leader bios are particularly priceless. "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high. Out of the office, Faith coaches a little league softball team and looks after her sick mother - obligations she looks forward to being free of!"
Would Sam Altman even understand the original, or would he just wander ignorantly into the kitchen and fling some salt at it (https://www.ft.com/content/b1804820-c74b-4d37-b112-1df882629...)? I'm not optimistic about our modern oligarchs.
Seems like a waste of time, but at the same time the feeling was similar to watching Hannibal Lecter in the kitchen scene.
There's some truth in all satire though. I'm just shocked YC hasn't nuked the link from the front page.
Instead... https://govfacts.org/analysis/how-social-security-and-medica...
I'm not. People dump on VCs and YC all the time here and it's frequently on the front page.
If you have proof or reasonable indicators that policy is a lie, let’s see it. Otherwise, being disdainful and cynical just degrades the discussion and foments unnecessary hate.
You could perhaps make an argument that among the flood of AI-related submissions, this one doesn't particularly move the needle on intellectual curiosity. Although satire is generally a good way to allow for some reflection on a serious topic, and I don't recall seeing AI-related satire here in a while.
/s
Finally a company that's out to do some good in the world.
It just screams fried serotonin circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Do I think we should stop this type of competitive behaviour, fueled by kids and investors both microdosed on meth? No. I just wouldn't do business with them; they don't look like a trustworthy brand to me.
Edit: They got me with the joke. Being in this field, there are people who actually do talk like that, startups and established executives alike - e.g. Artisan ads on billboards saying STOP HIRING HUMANS, and another New York company, I think, pushing newspaper ads for complete replacement. Also, if you're up on the latest engineering in agentic scaffolding work, this type of thing is no joke.
>It just screams fried-serotonin circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Enlightenment is realizing they aren't any different from those other guys.
>Edit: They got me with the joke, being in this field there are people that do actually talk like that, both startups and established executives alike.
And what's your conclusion from that?
> "Stupid. Smelly. Squishy."
> "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high."
I love the marketing here. Top notch shit posting.
But besides that, no idea what this company does and it just comes off like another wannabe Roy Lee styled "be controversial and build an audience before you even have a product" type company.
That being said, still a good case study of shock marketing. It made it to the top link on HN after all.
Edit: its satire, I got got :(
Follow the links for support (or rather reserve space in the bunker)
There's a contact form to let representatives know the dangers of ai
Does AI even understand satire?
I'm especially disgusted with Sam Altman and Dario Amodei, who for a long time were hyping up the "fear" they felt for their own creations. Of course, they weren't doing this to slow down or approach things in a more responsible way; they were talking like that because they knew creating fear would bring in more investment and more publicity. Even when they called for "regulation", it was generally misleading and mostly to help them create a barrier to entry in the industry.
I think that now that the consensus among the experts is that AGI is probably a while off (like a decade), we have a new danger. When we do start to get systems we should actually worry about, we're going to have a major boy-who-cried-wolf problem. It's going to be hard to get these things under proper control when people have the feeling of "yeah, we've heard this all before".
That’s what makes it good satire.
So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.
This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
They're busy selling watches whilst people can still afford them thanks to having jobs.
So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.
Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.
Uh, have you seen the US lately? I think that ship has sailed.
Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.
A total shift of human mentality. Humans have shown time and again there is only one way we ever get there. A long winding road paved with bodies.
I genuinely believe we'll see technological world war 3 in my child's lifetime. And I'm not a super supporter of that.
> Less than 5% of the US’ military budget could end world hunger for a year. [1]
My friend, we live in 1984, where the main character discovers that Eurasia and the others have enough food/resources to help everybody out, but they constantly fight over and waste it just so that people are easy to manipulate in times of war.
I discovered this 5% statistic just now and it's fucking wild. The US by itself could end world hunger 20 times over, but it spends it all on the military. I don't know if it's wrong or not, but this is fucking 1984. Big Brother has happened. We just don't know it yet, because the propaganda works on us as well; people in 1984 didn't know they were living in 1984.
And neither do we.
Have a nice day :)
Sources [1]: https://www.globalcitizen.org/en/content/hunger-global-citiz...
Despite the cynical, pessimistic tone of my comment, I don't think it's necessarily that humans are bad. Humans do bad things, but we still do better than any other lifeform we know of at helping each other at our own expense.
I don't think there is a good answer; the same things that kept us alive as a species are what are now holding us back from becoming something better. I think humans will get there, but like I said: mountains of unnecessary, preventable deaths first.
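For the curious: the "20 times" arithmetic above checks out if you take the quoted statistic at face value. A throwaway sketch, where both input numbers are rough assumptions (the budget figure is a round approximation, and the 5% share comes from the linked article), not verified data:

```python
# Sanity-check the "20 times over" claim, taking the quoted 5% at face value.
# Both numbers below are assumptions for illustration, not verified figures.
military_budget = 800e9   # rough US military budget, USD/year (assumption)
hunger_share = 0.05       # "less than 5%" per the quoted statistic

hunger_cost = military_budget * hunger_share   # implied annual cost to end world hunger
times_over = military_budget / hunger_cost     # simply 1 / 0.05

print(f"Implied cost to end world hunger: ${hunger_cost / 1e9:.0f}B/year")
print(f"The full budget covers it {times_over:.0f} times over")
```

The "20 times" follows mechanically from the 5% figure, so the whole point stands or falls on that one statistic from the linked article.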
You think that's going to change just because many more people find themselves without?
So: getting the whole world up to US standards of living is going to take a lot of changes to lifestyle or technological advances. Both of these are scarcity issues.
Why would the US's wasteful use of resources be the benchmark for "sufficiency"?
There's enough food to feed the world, and enough raw materials to produce the things we need. The problem is not scarcity of resources, it's distribution of resources.
Species go extinct.
Limiting life to cell-based biology is a somewhat lousy definition, drawn from the only example we know. I prefer the generalized definition in "What is Life?" by Erwin Schrödinger, which currently draws the same line (at cellular biology) but could accommodate other forms of life too.
“Be competitive in the market place.”
Go.
“Don’t collapse the global economy.”
:)
But what are the minimum inputs necessary to build a self-sustaining robotic workforce of machines that can (1) produce more power, (2) produce more robots, and (3) produce food? The specifics of what exactly is necessary, which inputs and which production goals, are debatable. But imagine some point where a billionaire like Elon has the minimum covered to keep a mini SpaceX running, or a mini Optimus factory, or a mini SolarCity.
At this point, it's perfectly acceptable to crash the economy, and leave them to their own devices. If they survive, fine. If they don't, also fine. The minimum kernel necessary to let the best of mankind march off and explore the solar system is secure.
Obviously, this is an extreme, and the whole trajectory is differential. But in general, if I were a billionaire, I'd be thinking "8 billion people is a whole lot of mouths to feed, and a whole lot of carbon footprint to worry about. Is 8 billion people (most of whom lack a solid education) a luxury liability?"
I really just don't believe that most people are going to make it to "the singularity" if there even is such a thing. Just more of the same of humanity: barbaric bullshit, one group of humans trying to control another group of humans.
I don't think we will be building these things ourselves, but I think there will still be products you can just buy and then they're yours.
It would be the opposite of the "Internet of things" trend though.
If the physical asset owner can replace me with a brain in a jar, it doesn't really help me that I have my own brain in a jar. It can't think food/shelter into existence for me.
If AI gets to the point where human knowledge is obsolete, and if politics don't shift to protect the former workers, I don't think widespread availability of AI is saving those who don't have control over substantial physical assets.
There is a rush to build data centers, so it seems that hardware is a bottleneck, and maybe that will remain the trend, but another scenario is that it stops abruptly when capacity catches up. I'm wondering: why doesn't this become a race to the bottom?
1) It doesn't solve the problem of obtaining physical capital. So you're basically limited to just software companies.
2) If the barrier to entry to creating a software product that can form the basis of a company is so low that a single person can do it, why would other companies (the ones with money and physical capital) buy your product instead of just telling their GPT-N to create them the same software?
3) Every other newly-unemployed person is going to have the same idea. Having everyone be a solo-founder of a software company really doesn't seem viable, even if we grant that it could be viable for a single person in a world where everyone has a GPT-N that can easily replicate the company's software.
On a side note, one niche where I think a relatively small number of AI-enabled solo founders will do exceptionally well is in video games. How well a video game will do depends a lot on how fun it is to humans and the taste of the designer. I'm suspicious that AIs will have good taste in video game design and even if they do I think it would be tough for them to evaluate how fun mechanics would be for a person.
Companies are spending hundreds of millions of dollars on training AI models, why wouldn’t they expect to own the reward from that investment? These models are best run on $100k+ fleets of power hungry, interconnected GPUs, just like factory equipment vs a hand loom.
Open weight models are a political and marketing tool. They’re not being opened up out of kindness, or because “data wants to be free”. AI firms open models to try and destabilize American companies by “dumping”, and AI firms open models as a way to incentivize companies who don’t like closed-source models to buy their hosted services.
This is not like cell service or your home ISP; there are more choices. Not seeing where the lock-in comes from.
if your job is replaced by AI, you having AI at home doesn't change whether you're making money or not.
the capital owner gets their capital to work more effectively, and you, without capital, don't get that benefit
The rest, you know what's going to happen.
That may be where the USA ends up. We Australians (and probably a few others, like the Swiss) have gone to some effort to ensure we don't end up there: https://www.abc.net.au/listen/programs/boyerlectures/1058675...
Only if the socialists win. Capitalism operates on a completely different principle: people CREATE wealth and own everything they have created. Therefore, AI cannot reduce their wealth in any way, because AI does not impair people's ability to create wealth.
Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for their children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
…
You’re right about one thing, within reason… this is what a rational government should be for… if the government were by the people and for the people.
Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…
Procreation and progeny is our only true purpose — and one could make the argument AI would make better parents and teachers. Should we all capitulate our sole purpose in the name of efficiency?
We use tools all the time.
Neither government nor corporations are going to “save us”, simply because of sheer short-termism and incompetence. But the same incompetence will make the coming dystopia ridiculous.
Uh...
…we’ll add three hours to our day?
But seriously, I support what you are saying. This is why the entire consumer system needs to change, because in a world with no jobs it is by definition unsustainable.
What does the one have to do with the other?
But even then, plenty of people currently find their fun in creating, when it's not their job. And they struggle to find the time for that, and sometimes the materials, training, and machines too. Meanwhile a majority of current jobs involve zero personal creativity or making or creating. Driving, or staffing a retail outlet, or even most cooking jobs can't really be what you are looking for in your argument, can they?
Is the road to post-scarcity more likely with or without robots?
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that steered (prepared, or failed to prepare) society for impending changes was religion.
If robots are so advanced that they can do most of the jobs, the cost of goods will be close to zero.
Government will produce and distribute most of the things above, and you mostly won't need any money; but if you want extra, to travel etc., there will always be a bunch of work to do, and not 8 hours per day.
This is not going to happen.
We all know a post-apocalyptic world is what awaits us.
More or less Elysium is the future if ppl will still behave the same way they do now.
And I doubt ppl will change in a span of 100 years.
>government will produce and distribute most of the things above and you mostly won't need any money
So basically what you are saying is that a government monopoly will control everything?
>> government monopoly
there is no monopoly if there is no market
No, the cost of goods will be the cost of the robots involved in production, amortized over their production lifetime. Which, if robots are more productive than humans, will not be “near zero” from the point of view of any human without ownership of at least the number of robots needed to produce the goods that they wish to consume (whether that’s private ownership or their share of socially-owned robots). If there is essentially no demand for human labor, it will instead be near infinite from their perspective.
This assumes, among other things that are unlikely to be true, that the only cost of extraction is (robot) labor, and that mining rights are free, rather than being driven by non-zero demand and finite supply.
(Another critical implicit assumption is that energy is free rather than being driven by non-zero demand and finite supply, as failing that will result in a non-zero price for robot labor even if the materials in the robot were free.)
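The amortization point, together with the energy and materials caveats in the replies, can be made concrete with a toy model. Every number below is an illustrative assumption invented for the sketch (none come from the thread); the only takeaway is that per-unit cost bottoms out at capital plus energy plus materials, not at zero:

```python
# Toy amortized-cost model for robot-produced goods.
# All figures are illustrative assumptions, not real data.
robot_price = 50_000.0        # purchase cost of one robot, USD
lifetime_hours = 40_000.0     # productive hours before replacement
energy_cost_per_hour = 0.50   # electricity, USD/hour (non-zero, per the reply above)
materials_per_unit = 2.00     # raw materials / extraction rights per unit, USD
units_per_hour = 10.0         # robot throughput

# Capital cost spread over the robot's working life.
capital_per_hour = robot_price / lifetime_hours  # 1.25 USD/hour

# Per-unit cost: amortized capital + energy, divided by throughput,
# plus the materials embodied in each unit.
cost_per_unit = (capital_per_hour + energy_cost_per_hour) / units_per_hour \
                + materials_per_unit

print(f"Cost per unit: ${cost_per_unit:.3f}")
```

Even with generous assumptions, the floor is set by the materials term; cheaper robots only shrink the capital slice, not the whole price.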
Well, it would start by not tax-favoring the (capital) income that remains, which would have to have grown massively relative to the overall economy for that to have occurred.
(In fact, it could start by doing that now, and the resulting tax burden shift would reduce the artificial tax incentive to shift from labor intensive to capital intensive production methods, which would, among other things, buy more time to deal with the broader transition if it is actually going to happen.)
Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.
Lastly, Who says we should all support this future? What if I disagree with the AI revolution and it's consequences?
It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching star trek, they just assume that once we live in a post scarcity future everything will be perfect, and that this is the natural endpoint for humanity.
They're not coming for all jobs. There are many jobs that exist today that could be replaced by automation but haven't been, because people will pay a premium for the work to be done by a human. There are a lot of artisan products out there which are technically inferior to manufactured goods, but people still buy them. Separately, there are many jobs which are entirely about physical and social engagement with a flesh-and-blood human being: sex work is the most obvious, but live performances (how else has Broadway survived the era of mass adoption of film and television?) and personal care work like home health aides, nannies, and doulas are all at least partially about providing an emotional connection on top of the actual physical labor.
And there's also a question of things that can literally only be done by human beings, because by definition they can only be done by human beings. I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.
So we are all going to just do useless bullcrap like sell artisan clay pots to each other and pimp each other out? Wow, some future!
I just don't know how this goofball economy is going to work out when a handful of elites/AI overlords control everything you need to eat and survive, and everyone else is weaving them artisan wicker baskets and busking (jobs which are totally automatable and redundant, mind you, but the elites would keep us around for the sentimental value).
>I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.
Yeah this is one plausible future, we could all be lab rats testing the next cancer medicine or donating organs to the elites. I can't imagine the conditions will be very humane, being that the unwashed masses will have basically no leverage to negotiate their pay.
Why are people even doing the jobs?
In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.
I have a feeling that automation replacement will make this fact all the more apparent.
When people realise big truths, revolutions occur.
The AI will belong to the parasite class, who will capture all the profits. But you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].
[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.
Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.
But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.
So it's either "we all science ourselves out of a job and die from uncontrolled capitalism" or "we try to do something about the uncontrolled capitalism and in the meantime, everyone else keeps sciencing everyone out of a job". The result is the same, but some of us at least tried to do something about it.
That path is hard and risky (are AI countries eclipsing us in military power?), but probably more realistic than hoping for global cooperation.
This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).
That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.
I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happens in a timely manner that doesn't require an absolute crisis of ruined lives before something happens.
See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.
TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.
1. We don’t need everyone in society to be involved in trade.
2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.
3. Thus, people will fear losing their ability to trade in society.
The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.
The fear is coming from something odd, the reality that you won’t have to trade anymore to live. Our society has convinced us you won’t have any value otherwise.
We did not make it so, this has been the natural state for as long as humans have existed, and in fact, it’s been this way for every other life form on Earth.
Maybe with post-scarcity (if it ever happens) there could be other ways of living. We can dream. But let’s not pretend that “life requires effort” is some sort of temporary unnatural abomination made by capitalists. It’s really just a fact.
Paradigm shift means, “I can live without being involved in a financial contract with another entity”. This is the milestone before us.
My point is that until now, we have never been able to find a functioning system that frees us from work (trade is just one type of work, so is hunting for survival, or photosynthesis), and until something changes dramatically (like a machine that caters to our every need), I find it hard to believe this can change.
One of the ways this shift will have momentum is that children today are going to be born into the light. They will live decades without the concept of having to make decisions around the scarcity of work and resources. They will not have the same values and viewpoints on society that, we, legacy components of the system, are currently engulfed by.
Our generation will be the last to join all prior generations in the boat of economic slavery, and it will be allowed to drift and sail away from port for the final time. It was a long journey; generations and generations were involved. Lots of thieving and abuse. Good riddance.
E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).
The workforce gives regular folks at least some marginal stake in civilization. Governments aren’t effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual. Trump is gutting the government.
You may be right about ai taking jobs eventually if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well fuck poor people.”
> Government no longer has the power or authority to constrain private enterprise; especially in highly technical sectors.
Perhaps you meant only this: "Government no longer has the power ..."? It is clear government still has the authority based on the will of the people.
But I'd argue the legal framework necessary to carry this out doesn't exist.
The strength of private lobbyists keeps power in the hands of private enterprise.
[x] Authoritarianism. [x] Civil rights abuses. [x] Blatant defiance of the law. [x] Unrepentant selfishness and lack of character. [x] Weaponization of the courts. [x] Loyalty tests at government agencies and in the military. [x] Political prosecutions. [x] Politicization of the Department of Justice. [x] Rampant presidential overreach. [x] The Supreme Court's endorsement and flimsy justification of presidential overreach. [x] Self-destructive trade policy. [x] Ineffective economic policy. [x] Erosion of norms. [x] Concerning presidential cognitive decline. [x] Institutional hollowing-out. [x] Defunding of science. [x] Destruction of USAID. [x] Blatant corruption. [x] Nepotism. [x] Use of the military for domestic purposes. [x] Firing of qualified military leaders. [x] Blatantly self-serving presidential pardons. [x] Firing of qualified civil servants. [x] Deliberately trying to traumatize civil servants. [x] Unnecessary tax breaks for the wealthy. [x] Intimidation of universities. [x] Rollback of environmental protections. [x] Unconstitutional and economically damaging immigration policies. [x] Top-down gerrymandering. [x] Firing of ~19 Inspectors General. [x] Unqualified cabinet members. [x] Relentless lying. [x] Implicit endorsement of conspiracy theories. [x] Public health policies that will lead to unnecessary deaths. [x] A president who 'models' immorality. [x] Tolerance of illegal and immoral behavior of political allies. [x] Prioritization of appearance over substance. [x] Opulent and disgusting displays of wealth. [x] Trampling on independent journalist access. [x] A foreign policy that undermines key alliances. [x] Dismantling of the Department of Education. [x] Undoing key healthcare provisions from the ACA. [x] Negligent inaction regarding AI catastrophic risks. [x] Motive and capability to manipulate voting machines [x] And more.
Positives? It looks like some durable peace deals are in the works.
Overall, things are dark.
* All the more reason to organize and act. No single person (or even group) can solve any of these problems alone. No person or group can afford to wait for other people to act.
We simply cannot afford to let our shock, anger, or fear get the better of us. We have to build coalitions to turn things around -- including coalitions with people that may vote in ways we don't like or believe things that we think don't make sense. We have to find coalitions that work. We have to persuade and build a movement that can outcompete and outlast Trumpism and whatever comes after it.
Private enterprise will always have some level of corrupting influence over government. And perhaps it sees current leadership as the lesser of two evils in the grand scheme. But make no mistake, government DOES ultimately have the power, when it chooses to assert itself and use it. It's just a matter of political will, which waxes and wanes.
Going back a century, did the British aristocracy WANT to be virtually taxed out of existence, and confined to the historical dustbin of "Downton Abbey"?
There's a theatrical push-pull negotiation narrative that's replayed to us, but do you honestly feel that government could push back strongly on _any issue_ it deemed necessary to?
Public enterprise is so firmly embedded in every corner of Government.
Everything in life involves compromise.
Authority requires the possibility of edict above compromise; which in my mind is no longer possible.
It's called capitalism
Both are fairly uncontroversial: (1) many humans not only benefit from jobs but in fact often depend on jobs for their livelihoods, and (2) should be self-evident.
This can change if the socioeconomic system is quickly enough and quite substantially restructured to make humans not depend on being compensated for work that is now being done by robots (not only financially but also psychologically—feeling fulfilled—socially, etc.), but I don’t see that happening.
But a lot of jobs aren't like that. I doubt many people who work in, say, public relations, really think their job has value other than paying their bills. They can't take solace in the fact that the AI can write press releases deflecting the blame for the massive oil spill that their former employer caused.
I’ll note that I didn’t mention “AI”, I was addressing robots vs. jobs, but sure.
Let me straw man myself and say up front that it would be simplistic to say that if something hurts one person today then it is bad, even if it benefits a million tomorrow. However, does that mean if death of N people can cause N+1 people to be saved at some indefinite point in the future, we should jump on the opportunity?
Utilitarian math is not the answer. We don’t have to go far for examples of tragic loss of life and atrocities that were caused by people following utilitarian objectives, so I am much more wary of this logic than its inverse. We probably should neither let everybody starve while we are fighting cancer nor stop studying cancer, even if we could feed millions of people in dire need with that money.
With that in mind: feeling fulfilled and needed is important, being able to pay bills is important, etc., no matter who you are. It is probably unacceptable to reduce human life to “be the most efficient cog in the system or GTFO”.
> I doubt many people who work in, say, public relations, really think their job has value other than paying their bills.
Does your cancer research institution have no public relations department? Should there be no public relations departments in any of them? Would it help cancer research?
--
Tangentially, the Wikipedia article on inflammation makes two claims: 1) inflammation can be caused by stress—I imagine via lack of sleep, bad habits, eating disorders, etc.—and 2) cancer is one of the non-immune diseases with causal origins in inflammatory processes. Now, there’s many things that could cause a person to stress, but I imagine losing your job and struggling to make ends meet, with a promise that it helps humanity at some point, is definitely high up there (especially if you have a family). I jest, but only in part.
There's no law of nature saying that a human must work 40 hours per week or starve.
The current dependence on work is a consequence, not a goal.
Working on replacing human jobs with robots has two concrete outcomes, 1) impacting people’s ability to have jobs that pay bills and often create a meaning their lives and 2) concentrating wealth in technological elites (who run those robots), and a theoretical outcome of 3) helping humanity in some way.
If we are acting in good will, we should dedicate effort to addressing the concrete impact (1) at least as much as to working on (3). Most of us are or are adjacent to tech elites and benefit from (2), which means we are individually incentivized to not care about (1), so it requires reiterating every now and then. If we are purely thinking of (3), we are not much better than dictators and their henchmen that caused famines and other catastrophes justifying it with some sort of long-term utilitarian calculus.
This is a man-made reality though, and we have as much power to change it as we did to create it.
> Deriving satisfaction from being a useful member of society and social ties is part of human psychological nature.
I can't get behind this idea that "work" is the only way that a person can feel like a useful member of society. This is just the result of our (man-made) programming that makes it seem like the only way. We've essentially been brainwashed into accepting the backwards idea that we need work even if work doesn't need us.
> If we are acting in good will, we should dedicate effort to addressing the concrete impact
I agree, but I don't think the right answer is to stop the tech and keep digging holes and filling them in just to get a paycheck. The solution is to fix the humans. Unfortunately our government is trash, so yeah, we're probably screwed unless we first figure out how to get government to actually represent the people. Andrew Yang is the only politician-adjacent figure I've seen take this conversation seriously.
Does it really need to be stated that “some technology would not be harmful if only the reality in which the technology was used was different”? The challenge is that reality is what it is, and even if we have a degree of control over some aspects of reality we are not at all trying to change it.
If we were the people in charge of job-subsuming robots, and we acted in good faith and common interest, we would be dedicating at least as much resources to changing that reality (in a peaceful, non-violent way) as to introducing a technology that harms a lot of people (even if we get paid for working on that technology).
> I can't get behind this idea that "work" is the only way that a person can feel like a useful member of society. This is just the result of our (man-made) programming that makes it seem like the only way. We've essentially been brainwashed into accepting the backwards idea that we need work even if work doesn't need us.
Even if what you said was true, this is a reality and for robots taking jobs to not be harmful this has to not be a reality. Are we dedicating resources to working on making that not a reality?
However, I don’t even believe this is true. Humans are inherently social. Self-awareness requires other people to exist (“self” cannot be defined without “other”); you can’t become a human without other humans because you need to be surrounded by others for something that we call “consciousness” to develop in you. We are much more ants in an anthill than solitary individuals occasionally in touch with others that we like to imagine ourselves as. For as long as humans existed, we depended on each other, and being in the void, unneeded, is subconsciously a death sentence.
There are the lucky few who find themselves needed by others without much effort, but work is a mechanism that makes the rest of us feel needed. Sure, some work isn’t the best for that, but a lot of work is, be it cancer research or opening doors for people entering a shopping mall.
> I agree, but i don't think the right answer is to stop the tech and keep digging holes and filling them in just to get a paycheck.
False dichotomy. The tech does not need to be stopped and has plenty of very useful applications outside of digging holes (e.g., the cancer research mentioned by the other commenter). However, if you have XX% of the population digging holes, firing them without any concern is a terrible move, regardless of how good your hole-digging robots are. If you do that, all you are doing is a wealth transfer from the people digging holes to the people running hole-digging robots. (Remember, people digging holes also participated in the economy, paying their local butcher and baker, who in turn could pay their bills, etc.)
> The solution is to fix the humans. Unfortunately our government is trash so yeah we're probably screwed unless we first figure out how to get govt to actually represent the people.
We can work on robots replacing everybody’s jobs: robotics is very challenging, but we found a way. However, working on making a reality where everybody’s jobs are taken by robots a tenable one? No way, sir, that is far too challenging for our small brains.
If someone is arguing to limit that technology, then yes.
> but work is a mechanism that makes the rest of us feel needed
I'm guessing that you say that because you enjoy your job and it's part of your identity, which is great. But that's a luxury that not everyone has. Some people are ashamed of their jobs, or just simply hate them. But they're stuck because they can't find anything better.
I'm also sure there are tons of people in well-paying BS jobs who are glad to have a paycheck but absolutely do not feel their jobs are necessary or fulfilling. Upon realizing that their job is unnecessary, they are also left with the feeling that they are in fact wasting their time on this earth. So they're also stuck.
And it is surely possible to be social and to find fulfillment outside of work. I would even say that work is one of the lowest forms of socializing, one step up from sharing an elevator with a stranger and commenting on the weather. That's why most of the time (not all the time), when someone moves on to another job they lose contact with their old colleagues. They were put together by chance, and while they made the best of being forced to spend the majority of their waking lives together, they missed out on the opportunity to develop real, lasting connections with people.
> However, to work on making a reality where everybody’s jobs are taken by robots a tenable reality? No way sir, it is way too challenging for our small brains.
It's happening, so we had better figure out how to wrap our small brains around it.
This was a rhetorical question to highlight the ridiculousness of using this statement as justification for anything. You can state it however many times you want, it does not change reality.
Just as well, we would not need legs if only the reality were that we live in the water. It turns out the reality is that we live on land and we need legs. The right way is to adapt humans to living in water first, after which legs would disappear through evolution. The wrong way is to cut off people’s legs[0].
> I'm guessing that you say that because you enjoy your job
Let’s stick to the point, not to what you imagine about me.
If somebody hates their job, taking away their source of income is extremely harmful (clearly they would have quit already if they didn’t absolutely need money). If somebody loves their job, taking away what gives their life meaning is extremely harmful.
> It's happening, so we had better figure out how to wrap our small brains around it.
Not at all. For it to start happening, those who are in charge of robot job replacement would have to stop plugging their ears and shouting whenever someone talks about the issues it causes.
I feel like we are talking past each other, I am done trying to rephrase my point.
[0] To make things obvious… Making sure jobs are not vital for human existence is evolving humans to live in water. Replacing jobs with robots is cutting off people’s legs.
What I'm saying is that the automation is happening, so we need to deal with that reality.
> stop plugging their ears and shouting whenever someone talks about the issues it causes
Exactly.
This is not a force of nature.
Perhaps what you mean to say is that certain people, heavily featured on this forum, are working on something that causes harm to many, which they are not concerned about because they get paid well, or at best because of some long-term utilitarian math—alongside those complicit in it by investing in the effort, trying to make it seem as if it’s “natural” and “inevitable” and “normal”, and so on.
If that’s what you mean—perhaps that’s true, and that’s exactly why this thread is happening.
It is man-made, and unlike the reality we have been living in for thousands of years and are well adapted to, this change is being forced by a wealthy minority onto the rest of humanity in the span of decades. Luckily, it is far from being “reality” yet, and it can well be stopped.
No that's not at all what I mean to say.
I see technological advancement as inevitable and good. A robot that can do jobs so that humans don't have to is a good thing. It's progress. People working towards progress aren't evil. I assume that they, like me, believe that our society can and should evolve with the technology. If it can't, that's our fault, not the fault of the tech. If our government is so embarrassingly bad that it exploits the people it represents rather than helping them (which I agree it is), well, that's also our fault and not the tech's. The government is us. We better get our shit together.
The wealthy already own and control everything. So your status quo goal of a fair society where we all work all day and feel needed and appreciated and have a nice comfortable life is already dead. You're defending a dead body.
It’s false, simply because it is a product of human effort and human choice to do one thing or another.
> A robot that can do jobs so that humans don't have to is a good thing. It's progress
A “good thing” is what benefits humans. Robots replacing humans at what humans choose to do for their own benefit is decidedly not a good thing—aside from humans who profit from running the robots.
You may have noticed that I am repeating myself[0]. You are yet to show how this benefits humanity in a way that outweighs harm to humans who lose their jobs (especially considering many of them provided, without consent, the data instrumental for the robots to work in the first place). If you are among the people who work on robots, I think you ought to pause and reflect.
> The wealthy already own and control everything.
“They” don’t. A lot of “them” are here, by the way. The wealth gap is large, but to say it’s absolute (100% owned by the rich, with the rest of us just slaves to them) is simply wrong. We should work towards decreasing the gap, not increasing it.
[0] I am basically reiterating my original comment:
> “robots are coming for your jobs” is a valid argument against robots even if they can do those jobs better and faster, under two assumptions: 1) humans benefit from having jobs and 2) human benefit is the end goal.
I think it's pretty obvious if you think beyond trying to protect the status quo. The benefit is simply that machines do work so humans don't have to. It's no more complicated than that. It's what we humans have always strived to do: to make our lives easier. It's why an electric screwdriver exists.
The fact that making our lives easier has become a problem is the actual problem. We should address that problem instead of trying to protect it.
Why is it a benefit? Because not having to work is the ultimate ideal? Why would that be the case?
To me it seems like it’s only an ideal for those who wield the robots who do the work and profit from that, not to people who wouldn’t be able to do compensated work if they wanted to. (Ultimately, it’s a means of control: if there are no jobs, the people in power get to decide how to distribute sustenance to jobless population. Rest assured, that population will not dare to bite the hand that feeds them.)
The ideal is to be able to choose to do work you enjoy doing, feel pride in it, get fairly compensated for it. To not be able to do this seems like a strongly negative outcome.
> making our lives easier has become a problem
Is “easier life” the ultimate ideal? Why? There’s many other, more compelling things it could be (e.g., “fulfilled”, “happy”, “meaningful”, “satisfying”) and many of them are not exactly aligned with “easy”.
Even if an easier life were your ideal, the precedent has been that automation does not lead to that—we are doing more work (and more challenging, a.k.a. the opposite of “easy”, work) instead[0]. As jobs go away, whoever is still lucky enough to have one gets paid less to do more work (that’s just market forces at work), while a small minority profits and benefits from more accumulated power. Is that what you want to happen? If yes, we don’t have anything to discuss further. If not, you have the power to be part of the change.
I think I've been pretty consistent that I think the change necessary is a social/political one, not a tech one. Whether or not we're capable of this change is another question.
It is not even a question that it would be strongly unethical (evil addictive social media/crypto scams/online casinos, times a thousand, level of unethical) to proceed with working on job-replacing robots without considering what a tenable no-job reality would look like, or after deciding it is probably not achievable.
What if the issue isn't government failing to prepare society to move forward, but instead, AI businesses moving us in a direction that more and more people don't consider to be forward?
Not the AI companies’ fault per se, but generally the US government does a very poor job of creating a safety net, whether through intent, ineptitude, or indifference.
By the way, attacks were also leveled against Chinese and Japanese California workers who were viewed as stealing the jobs of other “Americans”. So this viewpoint and tradition of behavior and capitalism is very long in US history.
We should compare how anti-government politicians talk versus how trained, educated neoclassical economists talk. The latter readily recognize that a valid function of government is to steer, shape, and yes, regulate markets to some degree. This is why we don’t have (for example) legal, open markets for murder.
Markets do not define human values; they are a coordination mechanism given a diverse set of values.
And even if the government did institute something like universal basic income, if all jobs were replaced, that would almost certainly mean a lower standard of living for the middle class, and even less socioeconomic mobility than there is now.
Taking work away from people is practically the definition of technology. We invent things to reduce the effort needed to do things. Eliminating work is a good thing, that’s why inventing things is so popular!
What ends up happening is the amount of work remains relatively constant, meaning we get more done for the same amount of effort performed by the same amount of people doing the same amount of jobs. That’s why standards of living have been rising for the past few millennia instead of everybody being out of work. We took work away from humans with technology, we then used that effort saved to get more done.
Anger at companies who hold power in multiple places to prevent and worsen this situation for people is valid anger.
Does anyone have any idea of the new jobs that will be created to replace the ones that are being lost? If it's not possible to at least foresee it, then it's not likely to happen. In which case the job loss will be long-term not short-term.
That's a pretty hard bet against AGI becoming general. If the promises of many technologists come to pass, humans remaining in charge of any work (including which work should be done) would be a waste of resources.
Hopefully the AGI will remember to leave food in the cat bowls.
There is zero indication that there will be new jobs, new work. Just because there was lots of low-hanging fruit historically does not mean we will stumble into some new job creators now. Waving away concerns with "jobs have always magically appeared when needed" is nonsense and a non-response to their concerns.
If everything that a human can do, a robot can do better and cheaper, then humans are completely shut out of the production function. Humans have a minimum level of consumption that they need to stay alive whether or not they earn a wage; robots do not.
Since most humans live off wages which they get from work, they are then shut out of life. The only humans left alive are those who fund their consumption from capital rents.
See: "build me a bigger yacht", building moon bases, researching immortality medicine, and building higher walls and better killbots to manage any peasant attacks
The entire promise of AI is to negate that statement. So if AI is truly successful, then that will no longer be true.
Replace the word robot with "automation" or "industrialization" and you have the last 200 years of human history covered.
The Luddites could have won, and we would all have $1,500 shirts.
Do you know any lamp lighters? How about a town crier?
We could still all be farming.
Where are all the switch board operators? Where are all the draftsmen?
How many people had programming jobs in 1900? In 1950?
We have an amazing ability to "make work for ourselves", and history indicates that we're going to keep doing that regardless of how automated we make society. We also keep traditional "arts" alive... Recording didn't replace live performances, TV/film didn't replace Broadway... Photography didn't replace painting...
None of this holds if you don't have anything of value to offer, and automation is concentrating power and value; AI is the extreme end of this. At some point the charade of democracy becomes too annoying to the ones at the top, and you get there faster by trying to rein them in.
Ngl, if someone nuked all of the USA's servers and wiped out all this bullshit, I'm not convinced the world would be in a worse state right now.
Let AI be used for scientific research, development, helping people out. But if it's just to push your smelly ideas, you may even be right, but ultimately the form, intentions, and result matter more than recklessly endangering everybody.
TBH I feel like the AI discourse around human replacement smells like hard-core psychopathic behavior, or that of a drunken dude happily driving a car.
You have zero data concerning the effect this would have on society, and I definitely prefer to live in a less technological world than in a world full of people with psychosis.
So until we figure out how to solve this bottleneck, I have zero sympathy for this kind of discourse.
-- Frank Herbert, Dune
The "government" is just the set of people who hold power over others.
Those who will own machines that can match humans will have unprecedented power over others. In effect they'll grow more and more to be the government.
Even now, companies hold more and more of the power over others and are more part of the government than ever before.
So it confuses me when you say it's what the government is for? Who would that be? If we pretend it would still be a democracy then I guess you're saying it's everyone's problem?
So here we are, let's discuss the solution and vote for it?
Often, yes, but in a more functional society it would be the mechanism we collectively use to prevent a few people from amassing excessive wealth and power.
America isn't really a democracy; it's a plutocracy.
https://www.nbcnews.com/nbc-out/out-news/swiss-lgbtq-groups-...
"Popper posited that if intolerant ideologies are allowed unchecked expression, they could exploit open society values to erode or destroy tolerance itself through authoritarian or oppressive practices."
That's not a paradox of tolerance, it is the anti-democratic practice of fascism.
> they could exploit open society values to erode or destroy tolerance itself through authoritarian or oppressive practices.
This is exactly my point: an emerging fascist government through authoritarian or oppressive practices destroys tolerance by silencing people for any words that go against their agenda. There is no paradox here.
(What do you refer to with "this"?)
We do actually have real democracy in this state, where we have binding referendums, but the legislature is able to act faster than we're allowed to, to work around and nullify the policy we vote for. So voting is fine; nothing wrong with it. But I guess I just worry that, oftentimes, people get too involved in it and attached to movements which can accomplish something one day only for it to be reversed by the end of the decade. It feels like the two sides are getting angrier and angrier, spinning their wheels in dysfunctional politics, and we can't have a functional government in this environment; one side guts government to replace it with loyalists, then the other guts it again in a few years to replace the partisans with their own. Meanwhile, the national debt just keeps climbing as people swarm into gold.
My compost piles, though: not directly, but I can eat from that; I can feed people with that. If you want to solve hunger, you can contribute directly to food pantries. It's more work than voting, but something actually happens. And almost all the regulation government cares about relates to capitalism; they don't care about my carrots because my carrots don't engage in capitalism. And for some people in some circumstances, it doesn't take too much engagement with capitalism to get the $100k or whatever you need for a plot of land with electricity in a rural area, if you plan for it.
Herbert, as an aside, expressed a kind of appreciation for Nixon; his son mentioned this in a foreword to some edition of a Dune book I read. He was glad the corruption was exposed and so blatant, because now, surely, voters would see how bad things had become and would not let it happen again. Optimistic guy.
The government should keep its charge as the protector and upholder of justice. I don’t want it to be those things and then also become a fiat source of economic survival; that’s a terribly destructive combination, because the government doesn’t care about competition or viability, and survival is the last place you want all your eggs in one basket, especially when the eggs are guaranteed by force of law and the basket becomes a magnet for corruption.
Rent extraction hurts them in the long run. Because working class income gets absorbed by various forms of rent, they are more expensive to employ. Thus we fail to compete with, say, China, which socializes many costs and invests in productive industry. We are left with a top heavy society that as we can see is already starting to crumble.
Best summarized in this comic: https://x.com/9mmballpoint/status/1658163045502267428
That's noble. The first is dystopian
If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.
On the other hand, a jack-of-all-trades robot is very different from all the advancements we have had so far. If the robot can do anything, then in the best-case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.
The problem as I see it is not robots coming for my job and taking away my ability to earn a salary. That can be solved by societal structures like you are saying, even though I am somewhat pessimistic of our ability to do so in our current political climate.
The problem I see is robots coming for my mind and taking away any stakes and my ability to do anything that matters. If the robot is an expert in all fields why would you bother to learn anything? The fact that it takes time and energy to learn new skills and knowledge is what makes the world interesting. And this is exactly what happened before when machines took over a lot of human labour, luckily there were still plenty of things they couldn't do and thus ways to keep the world interesting. But if the machines start to think for us, what then is left for us to do?
Robots have been better at chess than humans for a quarter of a century. Yet chess is still a delightful intellectual and social pursuit.
Anyway there is a name for your kind of take. It is anti-humanist.
In my home country, the people building the robots and job destroying AI have captured all three branches of government, and have been saying for over 40 years that they'd like to shrink government down to a size that they could drown it in a bathtub. The government can't be relied upon to do more than move its military into our cities to violently stifle dissent.
We as a society get to decide what is done in our society. If robots replace a few jobs but make goods cheaper for everyone that's a net positive for society.
If robots replace EVERYONE's job, where everyone has no income anymore that's clearly a huge negative for society and it should be prevented.
Humans have depended on their own labor for income since we stopped being hunters and gatherers or living in small tribes.
So it's not just a matter of "the gov will find a way", but it's basically destroying the way humanity as a whole has operated for the past 5000 years.
So yes, it's a huge problem. Everything done under the banner of "innovation" isn't necessarily a good thing. Slavery was pretty "innovative" as well, for those who were the slave owners.
So while you’ve identified the real problem we need to identify a realistic solution.
The over-application of objective phrases like "valid" vs "invalid" when talking about non-formal arguments is a sickness a lot of technical people tend to share. In this case, it's dismissive of harm to humans, which is the worst thing you can be dismissive about. "Please don't make me and my family miserable" is not an "invalid argument" - that's inhuman. That person isn't arguing their thesis.
"The problem". Another common oversimplifying phrase used by us thinkers, who believe there is "the answer", as if either of those two things exist as physical objects. "The problem" is that humans are harmed. Everything else just exists within that problem domain, not as "part of the problem" or "not part of the problem".
But most importantly:
Yes, you're absolutely correct (and I hate to use this word, but I'm angry): Obviously the ideal state is that robots do all the work we don't want to do and we do whatever we want and our society is structured in a way to support that. You've omitted the part where that level of social support is very hard to make physically feasible, very hard to convince people of depending on their politics, and, most importantly: It's usually only enough to spare people from death and homelessness, not from misery and unrest. Of course it would be ridiculous to outright ban for-profit use of automation, but even more ridiculous to write a bill that enforces it, e.g. by banning any form of regulation.
Short and medium term, automating technologies are good for the profit of businesses and bad for the affected humans. Long term, automating technologies are good for everybody, but only if society actually organizes that transition in a way that doesn't make those affected miserable/angry. It isn't, and I don't think it's pessimistic to say that it probably won't.
I'd love to live in Star Trek! We don't. We won't for hundreds of years if ever. Technology isn't the limiting factor, the immutable nature of human society and resources are the limiting factors. Nothing else is interesting to even talk about until we clear the bar of simply giving a shit about what actually, in concrete reality, happens to our countrymen.
I just logged onto github and saw a "My open pull requests button".
Instead of taking me to a page which quickly queried a database, it opened a conversation with copilot which then slowly thought about how to work out my open pull requests.
I closed the window before it had an answer.
Why are we replacing actual engineering with expensive guesswork?
AI just makes it worse.
However, someone has taken a useful feature and has made it worse to shoe-horn in copilot interaction.
Clicking this button also had a side-effect of an email from Github telling me about all the things I could ask copilot about.
The silver lining is that email linked to copilot settings, where I could turn it off entirely.
https://github.com/settings/copilot/features
AI is incredibly powerful, especially for code generation. But it's terrible (at current speeds) as the main interface into an application.
Human-Computer interaction benefits hugely from two things:
- Speed
- Predictability
This is why some people prefer a commandline, and why some people can produce what looks like magic with excel. These applications are predictable and fast.
A chat-bot delivers neither. There's no opportunity to build up muscle-memory with a lack of predictability, and the slowness of copilot makes interaction just feel bad.
AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money, but you will be fabulously wealthy!
1. When such wealth is possible through autonomous means, how can the earth survive such unprecedented demands on its natural resources?
2. Should I believe that someone with more wealth (and as such, more power) than I have would not use that power to overwhelm me? Isn't my demand on resources only going to get in their way? Why would they allow me to draw on resources as well?
3. It seems like the answer to both of these concerns lies in government, but no government I'm aware of has really begun to answer these questions. Worse yet, what if governments disagree on how to implement these strategies in a global economy? Competition could become an intractable drain on the earth and humans' resources. Essentially, it opens up the possibility of war at incalculable scales.
Well in trekonomics [1], citizens are equal in terms of material wealth because scarcity has been eliminated. Wealth, in the conventional sense, does not exist; instead, the "wealth" that matters is human capital—skills, abilities, reputation, and status. The reward in this society comes not from accumulation of material goods but from intangible rewards such as honor, glory, intellectual achievement, and social esteem.
Trekonomics seems like a totally backwards way of approaching post-scarcity by starting with a fictional setting. You might as well prepare yourself for the Star Wars economy.
So when are we going to start pivoting towards a more socialist economic system? Where are the AI leaders backing politicians with this vision?
Because that's absolutely required for what you're talking about here...
Consumer goods have generally fallen in price (adjusted for inflation) while improving in quality relative to the 1970s, so we have become wealthier (using PG's definition of wealth):
Televisions, computers, smartphones, clothing (mass-produced apparel is cheaper due to global supply chains and automation), household appliances (items like refrigerators, washing machines, and microwaves are less expensive relative to income), air travel, telecommunications, consumer electronics, automobiles, furniture have fallen in price and gone up in quality.
Housing and healthcare are two items that have gone in the opposite direction. I think this is where AI and robots will make a difference. Houses can be 3D printed [1] and nursing and medical advice can be made cheaper using AI/robots as well.
Right now the people that own those resources also depend on human labor to create wealth for them. You can't go from owning a mine and a farm to having a mega-yacht without people. You have to give at least some wealth to them to get your wealth. But if suddenly you can go from zero to yacht without people, because you're rich enough to have early access to lots of robots and advanced AI, and you still own the mine/farm, you don't need to pay people anymore.
Now you don't need to share resources at all. Human labor no longer has any leverage. To the extent most people get to benefit from the "magic machine," it seems to me like it depends almost entirely on the benevolence of the already wealthy. And it isn't zero cost for them to provide resources to everyone else either. Mining materials to give everyone a robot and a car means fewer yachts/spaceships/mansions/moon-bases for them.
Tldr: I don't think we get wealth automatically because of advanced AI/robotics. Social/economic systems also need to change.
https://en.wikipedia.org/wiki/Social_credit
(It's where the excess profits from mechanisation would be fed back to citizens so that they don't need to work as much. That failed spectacularly.)
PG's argument is a huge amount of words to miss the point. Money is a tool that reflects power. Wealth derives from power.
> AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money,
I would gently suggest you look at the living conditions of the working class in the early 20th century. You might see planned cities like Bournville, or whatever the American version is; those were the 1% of the working classes. The average housing was shit, horrid shit. If AI takes off and makes, say, 10% of the population jobless, that's what those people will get: shit.
It wasn't until those dirty socialists got into power in the UK (I don't know about other countries) that we started to see things like slum clearances, where the dispossessed were actually re-homed rather than yeeted to somewhere less valuable.
Instead of facing the new reality, some people start to talk about the bubbles, AI being sloppy, etc., which is not generally true; mostly it's the users' psychological projection of their own traits and the resulting fear-induced smear campaigns.
The phenomenon is well described in psychology books. The seminal works of Carl Jung are worth a ton nowadays.
It's also more nuanced than you seem to think. Having the work we do be replaced by machines has significant implications about human purpose, identity, and how we fit into our societies. It isn't so much a fear of being replaced or made redundant by machines specifically; it's about who we are, what we do, and what that means for other human beings. How do I belong? How do I make my community a better place? How do I build wealth for the people I love?
Who cares how good the machine is. Humans want to be good at things because it's rewarding and—up until very recently—was a uniquely human capability that allowed us to build civilization itself. When machines take that away, what's left? What should we be good at when a skill may be irrelevant today or in a decade or who knows when?
Someone with a software brain might immediately think "This is simply another abstraction; use the abstraction to build wealth just as you used other skills and abilities to do so before", and sure... That's what people will try to do, just as we have over the last several hundred years as new technologies have emerged. But these most recent technologies, and the ones on the horizon, seem to threaten a loss of autonomy and a kind of wealth disparity we've never seen before. The race to amass compute and manufacturing capacity among billionaires is a uniquely concerning threat to virtually everyone, in my opinion.
We should remember the Luddites differently, read some history, and reconsider our next steps and how we engage with and regulate autonomous systems.
In simple words, authenticity is the desire to work on your mistakes and improve yourself, being flexible enough to embrace changes sooner or later. If one lacks some part of that, one tends to become a narcissist or a Luddite, angrily trying to regain an ever-slipping sense of control.
To translate into human language: gold diggers who entered the industry just for money do not truly belong to said industry, while those who were driven by spirit will prosper.
What remains after is something like the social status games of the aristocratic class, which I suspect is why there's a race to accumulate as much as possible now before the means to do so evaporate.
At the bottom of this page, there is a form you can fill out. This website says they will contact your local representative on your behalf. (And forward you any reply.)
Here's the auto-generated message:
I am a constituent living in [state] with urgent concerns about the lack of guardrails surrounding advanced AI technologies. It is imperative that we act decisively to establish strong protections that safeguard families, communities, and our children from potential harms associated with these rapidly evolving systems.
As companies continue to release increasingly powerful AI systems without meaningful oversight, we cannot rely on them to police themselves, especially when the stakes are so high. While AI has the potential to do remarkable things, it also poses significant risks, including the manipulation of children, the development of bioweapons, the creation of deepfakes, and the threat of widespread unemployment.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections that may be necessary for their residents.
Thank you for your time.
[name]
New York
Ideas are nice, and important, but there needs to be an action vector for those ideas to have practical value.
The US cannot move fast and break things forever. If American businesses want to sell digital tchotchkes, then we first have to convince rightsholders that they own anything of value.
Machines doing stuff instead of humans is great as long as it serves some kind of human purpose. If it lets humans do more human things as a result and have purpose, great. If it supplants things humans value, in the name of some kind of efficiency that isn't serving very many of us at all, that's not so great.
Maybe it depends on what you want in a relationship. AI is sycophantic, and that could help people who might have trust issues with humans in general or with the other sex (which is happening way more than you might think in younger generations, whether that's involuntary celibates or whatever).
I don't blame people for having trust issues, but the fact that they can live longer in some idea of false hope that robots are partners would just keep them stuck even longer and wouldn't help them.
Whether there should be regulations on this depends on whether it becomes a bigger issue. Most people, including myself, feel like the government shouldn't intervene in many things, but still. I don't think it's happening any time soon, since AI, big tech money, and the stock markets are so bedded together it's wild.
It's not solving the problem, it's diverging away from it.
I 100% agree. That was what I was trying to convey, I guess, before I got sidetracked thinking about government regulation, but yeah, I agree completely.
It's sort of self-sabotage, but hey, one thing I have come to know about humans is that judging them for things is gonna push them even further into us vs. them. We need to understand why people find it so easy to conform to LLMs. I guess sycophancy is the idea. People want to know that they are right and the world is wrong. Most people sometimes don't give a fuck about other people's problems, and if they do, they try to help, and that can involve pointing to reality. AI just delays it by saying something sycophantic, which drives a person even further into the hole.
I guess we need to understand them. It's become a reality, dude; there are people already marrying chatbots or whatnot. I know you must have heard these stories...
We are not talking about something in the distant future; it's happening as we speak.
I think the answer to why is desperation. There are so many things broken in society and in dating that young people feel like being alone is better, and chatbots satisfy whatever they are feeling.
I feel like some people think they deserve love, and there's nothing wrong with that, but at the same time you can't expect any particular person to just love you; they are right to think about themselves too. So those people who feel like they deserve love flock to a chatbot, which showers them with sycophancy and fake love. But people are down bad for fake love as well; they will chase anything that resembles love, even if it's a chatbot.
It's a societal problem, I suppose. Maybe the internet fueled it accidentally: we fall in love with people over just texts, so we have equated a person with texts and thus love, and now we have got clankers writing texts and fellow humans interpreting it as love.
Honestly, I don't blame them; I sympathize with them. They just need someone to tell their day to. Putting them down isn't the answer, but talking with them and asking them to take professional therapy in the process could be great. So many people can't afford therapy that they go to LLMs instead, so that's definitely something. We might need to invest some funding to make therapy more accessible for everybody, I guess.
I think it's ultimately down to risk, and wanting to feel secure. There's little risk in confiding to something you turn off, reset and compartmentalise.
You can compete, but not for long IMO. (No pun)
HN Comment in 2125: Why would I have casual sex with a real guy? I can have a sexual partner who I can tailor perfectly to my in-the-moment desires, who can role-play anything including the guy in the romance novel I'm reading, doesn't get tired, is tall and effortlessly strong, has robotic dexterity, is available 24/7, exists entirely for my pleasure letting me be as selfish as I want, and has port and starboard attachments.
What makes you think that sex is some sacred act that won't follow the same trends as jobs? You don't have to replace every aspect of a thing to have an alternative people prefer over the status quo.
A job is a decision that your boss(es) made and can be taken without your consent. You don't have the ownership of your job that you do of your marriage.
Your partner in some (most?) cases can absolutely make an executive decision that ends your marriage, with you having no options but to accept the outcome.
Your argument falls a little flat.
So the normal routine here is you get an offer, if you accept you get sent a contract which is signed by the employer, if it's all ok you also sign and then you get your start date. Is it different in the US or the same?
It’s perhaps customary in some places to have additional paperwork in the negotiation phase?
If it says "you work 40 hours per week and have 4 weeks of paid vacation" and your employer, EVEN IN WRITING, compels you to work 60 hour weeks and not take any vacation at a later date, then your only real option is to find work elsewhere. The Department of Labor won't have your back and you likely won't have enough money to afford a lawyer to fight on your behalf longer than the corporate lawyers your company has on staff.
Many programmers don't get treated this way because of the market, but abusive treatment of employees runs rampant in blue collar professions.
Need proof? Look at how few UNPAID weeks of maternity leave new mothers are entitled to under the law. This should tell you everything you need to know.
I have personally seen women return to work LESS THAN A WEEK after delivering a baby because they couldn't afford to not do so.
But I was just trying to clarify whether work contracts were a normal thing there. The original post said they weren't, whereas you seem to be saying they are, but are effectively unenforceable.
Is that true? I've never had a job where I didn't sign a contract (in the UK and for multinationals including American companies). I wouldn't start without a contract.
And I'm not in any rockstar position. It's bog standard for employees.
Plus it covers things like disciplinary procedures, working hours etc. It's really weird to me that you don't have that. Are you sure it's normal?
To address your specific point, you can mostly be fired without reason if you're a new employee. You get more rights after 2 years so companies generally have a procedure to go through after that. You can always appeal to an employment tribunal but they won't take much notice if you've been there a couple of months and got fired for not doing your job.
^^^ that's the thing. Contracts are by country, not by company ownership.
I worked for an F100 multinational US-based company for many years. My coworkers in the EU (including the UK at the time) got contracts. A buddy who was a bona fide rock star in the US got one. I know VPs got them.
I got nothing, as did the vast, vast majority of my US-based friends. And while I'm not a rock star, I'm pretty well known within my niche and am not a bottom-feeder. It really is as bleak as you might fear.
Single integrated written employment contracts are rare in the US for any but the most elite workers (usually executives); US workers more often have a mix of more limited domain written agreements and possibly an implied employment contract.
Someone makes a comment about how it's okay for things to be replaced in specialization in business.
Then someone equates it to intimacy.
Then someone says it's only possible on HN.
Then we get into some nifty discussion of whether we can argue about the similarity between marriage and job contracts, and first they disagree.
Now we come to your comment, which I can kinda agree with, and here is my take:
Marriage and business both require some definition in law and a trust in the state, which comes from the state's monopoly (well, legal monopoly) on violence, its ability to punish people who break those laws, and its track record of handling cases.
As an example, I doubt marriage can be a good mutually binding legal agreement somewhere like Saudi Arabia, which is misogynistic. The same can be said for exploitation in business: countries like Saudi Arabia and Qatar keep some people from South Asia (India etc.) in a sort of legal slavery, where they are forced to reside in their own designated quarters of the country and are insanely restricted. Look it up.
Also off topic, but I asked LLMs to find countries where divorce for women is illegal, and I confirmed one example: divorce in the Philippines for non-Muslims is banned (Muslim women's divorces are handled via sharia law). I have since fact-checked this by searching; it's not that divorce for women specifically is banned, but that divorce itself isn't an option in the Philippines, which instead limits marital dissolution to annulment or legal separation.
"In the Philippines, the general legal framework under the Family Code prohibits absolute divorce for the majority of the population, limiting marital dissolution to annulment or legal separation " [1]
[1]: source: https://www.respicio.ph/commentaries/divorce-under-muslim-pe...
That is not true far more often than it is true.
A job is also a mutual decision between the employee and the employer.
A marriage can also be taken without your consent through divorce (unless you are Orthodox Jewish or something, I think?).
Note that isn't universally true, for either case. Without mutual agreement, in the EU you can't fire someone just because, and in Japan you can't divorce unless you have proof of a physical affair or something equally damning.
But as a society we have to ask ourselves whether replacing all jobs with AI will make for a better society. Life is not all about making as much money as possible. For a working society, citizens need meaning in their lives, and safety, and food, and health. If most people get too little of this, it may disrupt society and cause wars and riots.
This is where government needs to step in; uncontrolled enterprise greed will destroy countries. But companies don't care, they'll just move to another country. And the ultra-rich don't care, they'll just put larger walls around their houses or move country.
Sure, but we're also putting aside how people do worse without a sense of purpose or contribution, and semi-forced interaction is generally good for people as practice getting along with others - doubly so as we withdraw into the internet and our smartphones
A specific part of GP’s comment keeps getting overlooked:
So the problem isn't robots, it's the structure of how we humans rely on jobs for income.
Humans being forced to trade time for survival, money, and the enrichment of the elite, is a bug. We are socially conditioned to believe it's a feature and the purpose. Nobody is saying robots should replace human connection and expression.
Edit: tone
The technology proposes a source of labor for the elites so abundant that they will not need to trade their wealth with the eaters.
However many resources you consume, it will be too much to buy with your labor. You will be priced out of existence.
Automation results in centralization of power. It transforms labor-intensive work to capital-intensive work and reduces the leverage of the working class.
You could have a system that distributes wealth from automation to the newly-unemployed working class, but fundamentally the capital-owners are less dependent on the working class, so the working class will have no leverage to sustain the wealth distribution (you cannot strike if you don't have a job). You are proposing a fundamentally unstable political system.
It's like liebig's law of the minimum or any other natural law. You can try to make localized exceptions in politics, but you are futilely working against the underlying dynamics of the system which are inevitably realized in the long term.
As has been seen time and time again throughout history, the commoners will only put up with so much, and when all else fails and they start suffering a bit too much, leverage comes from the end of a barrel.
Note that the stench of inevitability likes to sneak into these discussions of systemic problems. Nothing is set in stone. Anyone telling you otherwise has given up themselves. The comment section attracts all kinds of life outlooks, after all. The utility of belief in some sort of agency (however small) shouldn’t be surrendered to someone else’s nihilistic disengagement.
The evidence for this is all around us. As automation of manufacturing has brought former luxuries into reach for middle-class families, those with means move on to consuming items that require more and more labor to produce. "Handmade" scented soaps. "Artisanal" cheeses. Nobody with money wants their wedding invitation to arrive at a destination with machine-canceled postage. It's tacky. Too automated, too efficient. In fact, I bet the ultra-wealthy don't even use postal mail for delivering their invitations, because it's not labor-intensive enough to be tasteful. Private couriers are probably the move. You can see this pattern over and over again once you know what to look for.
There will always be a demand for human labor, because value is a human construction. That said, the rate at which the economy will change because of AI (if the True Believers are to be believed) is probably too fast for most workers to adapt, so you may not be entirely wrong in your conclusion depending on how things shake out, but the way you got there is bogus imo.
Sure, humans relying on jobs for income is a problem with transitions. But people finding purpose in jobs is a problem, too.
Right now how we get there is being "forced to' -- and indeed that's a bug. But if we transition to a future where it's pretty hard to find useful work, that's a problem, even if the basic needs for survival are still being met.
I haven't had to work for 25 years. But I've spent the vast majority of that time employed. Times when I've not had purposeful employment as an anchor in my life have been rough for me. (First 2-3 months feels great... then it stops feeling so great).
Just to be clear, are you saying the only life work that you can find fulfillment in is work that can be perfectly automated and handled by AI? Do you have an example of what you mean?
No. I'm not saying that applies to me, but it may be getting dangerously close to many people. During my career, I've done CS, EE, controls, optics, and now I teach high school.
I do worry about CS in particular, though. If one's happy place is doing computer science, that's getting pretty hard.
LLMs feel to me like a 60th percentile new college grad now (but with some advantages: very cheap, very fast, no social cost to ask to try again or do possibly empty/speculative work). Sure, you can't turn them loose on a really big code base or fail to supervise it, but you can't do that with new graduates, either.
I worry about how 2026's graduates are going to climb the beginning of the skill ladder. And to the extent that tools get better, this problem gets worse.
I also worry about a lot of work that is "easy to automate" but the human in the loop is very valuable. Some faculty use LLMs to write recommendation letters. Many admissions committees now use LLMs to evaluate recommendation letters. There's this interchange that looks like human language replacing a process where a human would at least spend a few minutes thinking about another human's story. The true purpose of the mechanism has been lost, and it's been replaced with something harsh, unfeeling, and arbitrary.
But I want to be -useful-, too. I enjoy helping and working with kids in my current job more than I enjoy filling my time in empty ways (well, up to a point: summer sure feels nice, too :).
Money gave me the freedom to define the relationship with work in the way that works best for me; and it turned out that's more valuable to me than the ability to escape work entirely.
Just because humans can't "outdo" technology doesn't mean we should "blame" "the elite". That's literally how the great catastrophes of socialism, communism, Marxism, etc. started.
Humans aren't "forced" to do anything (depending on how you look at it). You could just lie down and "live" in your own excrement until you starve to death. That seems reasonable! Liberate the proletariat! Why doesn't everyone else work for me?!
No one is feeling useful or valued working a double shift at Walmart in order to put food on the table.
Feeling useful and valued can come from other means such as caring for the elderly and doing volunteer work.
The majority of people work simply to survive, because without work they would end up homeless and hungry.
If you disagree, feel free to argue your point instead of just scoffing at the idea.
It's not "just business", it's my ability to survive.
Assuming the Everdrive is M and the SNES cartridge port is F, I can understand why the Everclan men are particularly attuned to this topic. Many better-quality, more feature-rich, and cheaper SNES multicarts have hit the market; the Everdrive is looking dated.
I see what you did there
This isn't exactly news
Surprised to see this so far down. If a robot can fuck better, then we would probably both have fun fucking robots together.
The problem is a culture that doesn't think the profit from productivity gains should be distributed to labor (or consumers), and doesn't think that wives deserve to be happy.
Any company that solves this problem will be a $10T company.
What if there were some robot with superhuman persuasion and means of manipulating human urges such that, if it wanted to, it could entrap anyone it wanted to complete subservience? Should we happily acquiesce to the emerging cult-like influence these new entities have on susceptible humans? What if your parents messaged you one day, homeless on the street because they gave all their money and assets to a scammer robot that via a 300IQ understanding of human psychology manipulated them to sending all their money in a series of wire transfers?
Wow this potential/theoretical danger sounds a lot like the already existent attention economy; we're already manipulated at scale by "superhuman" algorithms designed to hijack our "human urges," create "cult-like" echo chambers, and sell us things 24/7. That future is already here
Humans generally have a natural wariness, a mental immune-system response, to these things, however, and for 99% of people the attention economy is far from being able to completely upend their life or get them to send all their money in an agreement made in bad faith by the other party. I don't see why an AI, even if it possessed superhuman persuasion powers, would be able to cause 100x the damage when directed at the right marks.
Can someone help me understand this one?
This is reasoning.
However people don't always act based on reasoning.
And even if you act based on reasoning, you can't trust others to act based on the same kind of reasoning.
---
For a more serious response: (some) feminists see sex robots as objectification of women, and that's why they're against them.
Personal belief: if a woman finds that she isn't pleased with the type of man courting her, then maybe she should take the initiative and put in the effort to approach and court the men that she does want. Just as you likely wouldn't get the best job if you just wait for recruiters to reach out to you.
Or does it only work one way?
Men's insecurity, of course it is. That old chestnut. I'm exhausted by having to capitulate to female-centric sensibilities around physical intimacy. This has been going on for decades. Your comment is emblematic of the dismissive and othering attitude toward men's needs and experiences. Men and women are different. Unrealistic expectations from and for both sexes are the foundational problem here.
The only good way forward is understanding, forgiveness, gratitude, and some romanticism and adoration, from and for both sexes. A nice big sun spot that wipes out social media would help too.
Besides, in this fantasy, what’s to stop you from having the perfect robot lover as well - why are you so attached to this human wife of yours in the first place?
Skill issue.
How would we handle regulating sex bots? A complete ban on manufacturing and import of full-size humanoid bots? They are large enough that it could be partially effective, I guess. I'm imagining two dudes at a shady meetup for a black-market sale of a sex bot, which is kinda funny but also scary, because the future is coming fast.
Or in this case, a husband having police investigate and apprehend the wife in the act? Crazy times.
"To be or not to be? ... Not a whit, we defy augury; there's a special providence in the fall of a sparrow. If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come the readiness is all. Since no man knows aught of what he leaves, what is't to leave betimes? Let be." -- Hamlet
In the end it will be our humility that redeems us, as it has always been. Have some faith; the robots are not going to be that bad.
There is an intersection of certain industries and a particular demographic where adapting/retraining will be either very difficult or impossible.
Case in point:
- car factory town in Michigan
- factory shuts down
- nursing school opens in the town
- they open a hospital
- everyone thinks "Great! We can hire nurses from the school for the hospital"
- hospital says "Yeah, but we want experienced nurses, not recent graduates"
- people also say "So the factory workers can go to nursing school and get jobs somewhere else!"
- nursing school says "Uhm, people with 30 years of working on an assembly line are not necessarily the type of folks who make good nurses..."
Eventually, the town will adapt and market forces will balance out. But what about those folks who made rational decisions about their career path and that path suddenly gets wiped away?
The auto workers should leave town to find a suitable job, selling their homes to the incoming healthcare workers.
I understand the spirit of this, but most of this alarmism is misguided in my view.
It's preprocessed food and sugar intake in general that are particularly bad in the US.
My bias is now simple: it's the sugar. Not the only culprit, but far and away the number one culprit.
Then you stopped needing them: a USDA census in 1959 showed the horse population had dropped to 4.5 million.
Now they're mostly used for riding, and in 2023, there were about 6.65 million horses.
(Citation: https://en.wikipedia.org/wiki/Horses_in_the_United_States#St...)
There's no law of nature that says "there's always a place for more horses", and anyone who suggested there might be would get laughed at. Well, there's also no law of nature that says "there's always a place for more humans", to butcher a line from CGP Grey a little over a decade ago.
Mechanisation
But did you ever wonder what happened to the displaced workers? I'm not an expert on the agricultural changes in the USA, but in the UK, a huge amount of tumult can be directly attributed to agricultural changes.
(or anything to do with the displacement of peasant to the towns)
How many of those displaced workers survived the winter? You know they lost their homes as well, right? Those dustbowlers, what were their life chances?
Sure, new jobs were created, and the mechanisation boom that started in the 40s and lasted through to the 70s was _brilliant_.
But that's not going to happen again with AI. Where are the jobs going to come from, much less decent-paying ones?
Part of the reason likely is that the perception that translation is valuable work has changed.
"Computer" used to be a job. Not anymore: https://en.wikipedia.org/wiki/Computer_(occupation)
What counts as "AI" is a moving target: https://en.wikipedia.org/wiki/AI_effect
I definitely think AI companies' marketing claims deserve mockery... but this isn't even good/interesting/smart satire??
It feels like we've fully completed the transition to Reddit here, with its emotional and contradictory high school political zeal (being both progressive and anti-progress at the same time) dominating the narrative.
Something about upvote-based communities is not holding up well in the current climate.
If humans have regressed enough intellectually that they are praising this as "brilliant social commentary", then we absolutely SHOULD be replaced by AI.
You might say: "but you'll need money!". Why would I need money? The robots can provide my every need. And if I need money for some land or resource or something, I would have my robots work until my need was satisfied, I wouldn't continue having them work forever.
And even if robots did take all of the jobs, they would have to work for free. Because humans would have no jobs, and thus no money with which to pay them. So either mankind enjoys free services from robots that demand no compensation, or we get to keep our jobs.
So I really don't get the existential worry here. Yes, at a smaller scale some jobs might be automated, forcing people to retrain or work more menial jobs. But all of humanity being replaced? It doesn't make sense.
Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs. The robot masters would just split away and have their own economy. Which is the same as them not existing.
Good thing there are no resources to fight over - land, minerals, and water.
The benign forms of superintelligence shaken out by non-benign forms.
>Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs.
On whose land?
In any case, it will be cheaper to buy food from the AI. The remaining economy would just be the liquidation of remaining human-controlled assets into the AI-controlled economy for the stuff they need to survive like medicine and food.
Good point.
> In any case, it will be cheaper to buy food from the AI.
Only if the humans had any money with which to buy it, but humans in the secondary economy would rapidly have no token of currency that the AI would recognise for the purpose of trade.
In fact, Sam Altman wrote a good piece on this:
The best-paid workers were mechanised/outsourced first. For example, the weavers used to be a huge political force, literally re-shaping countries. Their long, slow, and violent descent into obscurity led to workers' rights (see the Chartist movement).
Here's a starter example: any company whose main business is training AI models must give up 10% of the company to a fund whose charter is to establish, long-term, basic care (food, water, electricity, whatever) for citizens.
I'm sure people will come at me with "well this will incentivize X instead!" in which case I'd like to hear if there are better thought out proposals.
There honestly aren't a lot of people in the middle, amazingly, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people towards more absolutist, all-or-nothing views, incapable of practical nuance in the face of a potentially grave threat.
Probably because most politics about how to "equitably distribute the wealth" of anything are one or both of "badly thought out" and/or "too complex to read".
For example of the former, I could easily say "have the government own the AI", which is great if you expect a government that owns AI to continue to care if their policies are supported by anyone living under them, not so much if you consider that a fully automated police force is able to stamp out any dissent etc.
For example of the latter, see all efforts to align any non-trivial AI to anything, literally even one thing, without someone messing up the reward function.
For your example of 10%, well, there's a dichotomy on how broad the AI is, if it's more like (it's not really boolean) a special-purpose system or if it's fully-general over all that any human can do:
• Special-purpose: that works but also you don't need it because it's just an assistant AI and "expands the pie" rather than displacing workers entirely.
• Fully-general: the AI company can relocate offshore, or off planet, do whatever it wants and raise a middle finger at you. It's got all the power and you don't.
What government in the foreseeable future would go after them? This would tank the US economy massively, so not US. The EU will try and regulate, but won't have enough teeth. Are we counting on China as the paragon of welfare for citizens?
I propose we let the economy crash, touch some grass and try again. Source: I am not an economist.
For this to work at scale domestically, the fund would need to be a double-digit percentage of the market cap of the entire US economy. It would be a pretty drastic departure from the way we do things now. There would be downsides: market distortions and fraud and capital flight.
But in my mind it would be a solution to the problem of wealth pooling up in the AI economy, and probably also a balm for the "pyramid scheme" aspect of Social Security which captures economic growth through payroll taxes (more people making more money, year on year) in a century where we expect the national population to peak and decline.
Pick your poison, I guess, but I want to see more discussion of this idea in the Overton window.
Isn't that what happened in the Soviet Union? Except it wasn't fractional. It ushered in 50 years of misery.
- Maybe you just decide to invest some public money
- Maybe you have some natural resources that are collective-by-default (minerals wealth on public land)
- Maybe there's a bailout of an industry that is financially broken but has become too big to fail cough and the government presses its leverage
- Maybe a president just wakes up and decides that he wants the government to own 10% of Intel, and makes that deal happen on favorable terms.
The problem is that the longer you refrain from equitably distributing wealth, the harder it becomes to do it, because the people who have benefited from their inequitably distributed wealth will use it to oppose any more equitable distribution.
The problem really is political systems. In most developed countries, wealth inequality has been steadily increasing, even though if you ask people if they want larger or smaller inequality, most prefer smaller. So the political systems aren't achieving what the majority wants.
It also seems to me that most elections are won on current political topics (the latest war, the latest scandal, the current state of the economy), not on long-term values such as decreasing wealth inequality.
> In the industrial revolution of the 19th century, what humanity basically learned to produce was all kinds of stuff like textiles and shoes and weapons and vehicles, and this was enough for the very few countries that underwent the revolution fast enough to subjugate everybody else.
> What we're talking about now is like a second industrial revolution, but the product this time will not be textiles or machines or vehicles or even weapons. The product this time will be humans themselves.
> We are basically learning to produce bodies and minds. Bodies and minds are going to be the two main products of the next wave of all these changes, and if there is a gap between those that know how to produce bodies and minds and those that do not, then this is far greater than anything we saw before in history. And this time, if you're not part of the revolution fast enough, then you probably become extinct.
> Once you know how to produce bodies and brains and minds, cheap labor in Africa or South Asia or wherever simply counts for nothing.
> Again, I think the biggest question ... maybe in economics and politics of the coming decades, will be what to do with all these useless people.
> I don't think we have an economic model for that. My best guess, which is just a guess, is that food will not be a problem; with that kind of technology you will be able to produce enough food to feed everybody. The more important problem is what to do with them, and how they will find some sense of meaning in life when they are basically meaningless, worthless.
> My best guess at present is a combination of drugs and computer games.
This has been nothing but a test-run for openly fascistic tech hoes to flex their disdain for everyone who isn't them or in their dumb club.
1. Rationalists/EAs who moderately-to-strongly believe AI scaling will lead to ASI in the near future and the end of all life (Yud, Scott Alexander)
2. Populist climate-alarmists who hate AI for a combination of water use, copyright infringement, and decimation of the human creative spirit
3. Tech "nothingburgerists" who are convinced that most if not all AI companies are big scams that will fail, that LLMs are light-years from "true" intelligence, and that it's all a big slop hype cycle that will crumble within months to years. (overrepresented on this site)
Each group has a collection of "truthiness-anchors" that they use to defend their position against all criticism. They are all partially valid, in a way, but take their positions to the extreme to the point they are often unwilling to accept any nuance. As a result, conversations often lead nowhere because people defend their position to a quasi-religious degree rather than as a viewpoint predicated on pieces of evidence that may shift or be disproven over time.
Regarding the satire in OP, many people will see it as just a funny, unlikely outcome of AI, others will see it as a sobering vision into a very likely future. Both sides may "get" the point, but will fail to agree at least in public, lest they risk losing a sort of status in their alignment with their sanity-preserving viewpoint.
Sam Altman doesn’t own AI. His investors actually own most of the actual assets.
Eventually there is going to be pressure for OpenAI to deliver returns to investors. Given that the majority of the US economy is consumer spending, the incentive is going to be for OpenAI to increase consumer spending in some way.
That’s essentially what happened to Google during the 2000s. I know everyone is negative about social media right now. But one could envision an alternative reality where Google explicitly controls and censors all information, took over roadways with their self-driving cars, completely merged with the government, etc. Basically a doomsday scenario.
What actually happened is Google was incentivized by capital to narrow the scope of their vision. Today, the company mainly sells ads to increase consumer spending.
if (Steal) { then Lie; then Cloud; then Money; } else { Fail; then Die; }
However, in the US, the labor-force participation rate is 60%. There are a LOT of adults out there who don't work now. While I do think people find value in work, I would guess that number is less than 50% (potentially much lower), which makes me think that 30% or fewer of adults derive significant value from their work.
On the one hand, something that affects 30% is pretty massive. But it feels less apocalyptic/overwhelming than it may seem.
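The arithmetic behind that 30% figure can be sketched directly. Note that both inputs are the comment's own assumptions (the 60% participation rate is roughly right for the US; the 50% share is purely a guess):

```python
# Back-of-the-envelope estimate from the comment above.
# Both numbers are the commenter's assumptions, not measured data.
labor_force_participation = 0.60  # share of US adults in the labor force
share_deriving_value = 0.50       # guess: under half of workers find real value in work

# Rough upper bound on adults deriving significant value from their work:
affected = labor_force_participation * share_deriving_value
print(f"{affected:.0%} of adults")  # → 30% of adults
```

Changing either input moves the result proportionally, which is why the comment frames 30% as an upper bound rather than a point estimate.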
Sure, we will have the robot-wrangler engineers, scientists, teachers, nurses, etc. But typically we see social unrest past roughly 8% unemployment. What happens when double-digit percentages of people have no jobs and all the time on their hands? Well, "eat the rich" might become very literal, and no amount of protection against that can really be bought.

Ultimately, the only options are either a Dune-style elimination of all AI (very unlikely) or decoupling "living wage income" from "job". If you think about it, the idea that you must have a job in order to make money is more of an implementation detail. If robots produce so much value that it is actively not viable for humans to work, the only logical solution is to distribute the profits from the labor of the robots in some way other than hourly rate times hours worked. One possible way to do this is to tax the value produced by AI and funnel it into a universal basic income program.

Everyone by default is an artist. If you want to be a nurse or teacher or scientist or engineer, you can. Otherwise, just produce art at your leisure while the robots work the fields and cook your meals.
1. Massive population reduction (war is a very efficient way to achieve this)
2. Birth control, to slow down population growth to a stable rate near 0
3. Eugenics, to ensure only people with needed capabilities are born (brave new world)
In this scenario, 500,000 people (or fewer?) in charge of millions of robots and a minority of semi-enslaved humans would freely enjoy control over the world. The perfect mix of Asimov and Huxley.
All the agitation about "building a 1984-style world" is, at best, just a step toward this Asimov/Huxley model, and most likely, a deliberate decoy.
And war is not a great way to reduce population at all.
It won't if robots start driving trucks.
Everyone keeps saying "no need to worry, no need for society to plan, because jobs happened in the past", as if we should just put all our hope in these magic future jobs appearing. Plenty of countries exist where there aren't enough jobs. We aren't exempt from that, as if some magic job fairy were looking out for us.
The divine right of the new kings will be born from social Darwinism
You don't understand. Almost nobody actually thinks about this in the right way, but it's actually basic economics.
Salt.
We used to fight wars for salt, but now it's literally given away for free (in restaurants).
If "robots produce so much value" then all that value will have approximately zero marginal cost. You won't need to distribute profits, you can simply distribute food and stuff and housing because they're cheap enough to be essentially free.
But obviously, not everything will be free. Just look at the famous "cost of education vs TV chart" [1]. Things that are mostly expensive: regulated (education, medicine, law) and positional (housing / land - everyone wants to live in good places!) goods. Things that are mostly cheap: things that are mass produced in factories (economies of scale, automation). Robots might move food & clothing into the "cheap" category but otherwise won't really move the needle, unless we radically rethink regulation.
[1] https://kottke.org/19/02/cheap-tvs-and-exorbitant-education-...
Yeah, I know, it's very hard to craft good legislation. In fact, there's this problem of agency: the will to have things be a certain way is not always in the humans, or does not always emanate from the direct needs of the people. Many of the problems of modern capitalism exist because there's emergent agency from non-human things, i.e. corporations. In the case of the US, agency emerging from the corporate world has purportedly sequestered democracy. But there's also agency emerging from frenzied political parties that define themselves as opposition to each other, with a salted no-man's-land in the middle. This emergent-agency thing is not new; it existed before in other institutions, e.g. organized religion. In any case, the more things there are vying for power, the less power people have to govern themselves in a way that is fair.
With AI, there's a big chance we will supercharge non-human agency at the very least, and that's if we can avoid the AIs themselves developing agency of their own.
Healthcare is heavily regulated in some countries and less in others. It should be possible to get some comparisons.
I somehow feel that making many humans healthy is fundamentally a really hard problem that gets harder every year, because the population ages and expectations rise. It feels like too easy a talking point to put it all on regulation.
> how to prevent stop
But I think it misses the bigger picture, which you hit on: Robots are helpful, but they're still just tools. An AI can crunch data, find opportunities, and trade faster than any human ever could. It's an incredible helper!
However, humans are the ones controlling them. We decide what they trade, and we build the secure systems they rely on. An AI millionaire is still relying on infrastructure that humans have to build to be fast, cheap, and totally stable. If the foundation is shaky, the AI's complex trades fall apart.
That reality is what makes me ignore the AI hype and focus entirely on infrastructure. The smart developers know that the long game is building utility assets on robust foundations.
They're moving to systems that are super fast and near-zero cost because AI requires zero friction. They're also demanding global market reach so the assets they create aren't trapped anywhere.