Yeah, how dare they not want to lose their careers.
Losing a bunch of jobs in a short period is terrible. Losing a bunch of careers in a short period is a catastrophe.
Also, this is dishonest - nobody is confused about why people don't like AI replacing/reducing some jobs and forms of art, no matter what words they use to describe their feelings (or how you choose to paraphrase those words).
What I typically see is:
- Open source programmers attacking other open source programmers, for any of half a dozen reasons. They rarely sound entirely honest.
- Artists attacking hobbyists who like to generate a couple pictures for memes, because it’s cool, or to illustrate stories. None of the hobbyists would have commissioned an artist for this purpose, even if AI didn’t exist.
- Worries about potential human extinction. That’s the one category I sympathise with.
Speaking for myself, I spent years discussing the potential economic drawbacks for when AI became genuinely useful. People generally ignored me.
The moment it started happening, they instead started attacking me for having the temerity to use it myself.
Meanwhile I’ve been instructed I need to start using AI at work. Unspoken: Or be fired. And, fair play: Our workload is only increasing, and I happen to know how to get value from the tools… because I spent years playing with them, since well before they had any.
My colleagues who are anti-AI, I suspect, won’t do so well.
'careers' is so ambiguous as to be useless as a metric.
what kind of careers? scamming call centers? heavy petrochem production? drug smuggling? cigarette marketing?
There are plenty of career paths that the world would be better off without, let's be clear about that.
All careers. All information work, and all physical work.
Yes. It is better for someone to be a criminal than to be unemployed. They will at least have some minimal amount of leverage and power to destroy the system which creates them.
A human soldier or drug dealer or something at least has the ability to consider whether what they are doing is wrong. A robot will be totally obedient and efficient at doing whatever job it's supposed to.
I disagree totally. There are no career paths which would be better off automated. Even if you disagree with what the jobs do, automation would just make them more efficient.
No?
Well, what's different this time?
Oh, wait, maybe they did prevail after all. I own my means of production, even though I'm by no means a powerful, filthy-rich capitalist or industrialist. So thanks, Ned -- I guess it all worked out for the best!
That's a very romantic view.
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
kinda, I guess. but what has everyone on edge these days is that humans have always used technology to build things: to build civilization and infrastructure so that life was progressing in some way. at least in the US, people stopped building and advancing civilization decades ago. most sewage and transportation infrastructure is 70+ years old. decades ago, telecom infrastructure boomed for a bit, then abruptly halted. so the "joke" is that technology these days is in no way "for the benefit of all" like it typically was for all of human history (with obvious exceptions)
If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.
It’s a good thing to keep in mind that plumbers are a thing. My personal take: if you automated all the knowledge work, then physical/robot automation would swiftly follow for the blue-collar jobs. Robots are software-limited right now, and as Baumol’s Cost Disease sets in, physical labor would become more expensive, so there would be increased incentive to solve the remaining hardware limitations.
If AI kills the middle and transitional roles, I anticipate anarchy.
Especially with essentially unlimited AGI robotics engineers to work on the problem?
Yes, until we reached the art and thinking part. A big part of the problem might be that with AI we reached that part first, before the chores.
At least for now, things aren't so bad, and today's Luddites aren't trashing the offices of AI companies and hanging their employees and executives from nearby poles and trees.
billions of unemployed people aren't going to just sit in poverty and watch as Sam Altman and Elon become multi-trillionaires
(why do you think they are building the bunkers?)
Second, the movement was certainly attacked first. It was mill owners who petitioned the government to use “all force necessary” against the Luddites, and the government, acting on their behalf, killed and maimed people engaged in peaceful demonstrations before anyone associated with the Luddite movement reacted violently. And again, even in the face of that violence, the Luddite movement was at its core nonviolent.
this is not about machines. machines are built for a purpose. who is "building" them, and for what "purpose"?
if you look at every actual real-world human referenced on this website, they all have something in common: they're billionaires.
this is a website about billionaires and their personal agendas.
You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.
Analogies are almost always an excuse to oversimplify. Just defend the thing on its own properties - not the properties of a conceptually similar thing that happened in the past.
Now that information work is being automated, there will be nothing left!
This "embrace or die" strategy obviously doesn't work on a societal scale, it is an individual strategy.
Firing educated workers en masse for software that isn’t as good but is cheaper doesn’t have the same benefits to society at large.
What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?
The benefits to society will be larger. Just think about it: when you replace dirty, dangerous jobs, the workers simply have nowhere to go, and they begin to generate losses for society in one form or another, because they initially took those dirty, dangerous jobs only because they had no choice.
But when you fire educated workers en masse, society not only receives from software all the benefits it received from those workers; all other fields also start to develop, because these educated workers take on other jobs, jobs that have never been filled by educated workers before, jobs that are understaffed because they are too dirty or too dangerous.
This will be a huge boost even for areas not directly affected by AI.
Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.
Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.
Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.
I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.
I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasonings, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.
AI is kind of like electricity.
We're also at the end of a big economic/money cycle (petrodollar, gold standard, off the gold standard, maxing out leverage).
The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.
We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.
Comrades, we can now automate a neo-KGB and auto-garbage-collect contra-revolutionaries en masse with Soviet efficiency!
At least with a politician you can sometimes believe it, whereas capitalism's spine is infinitely flexible.
The Corpos don’t need to go mask off, that’s what they pay the politicians for. Left and right is there to keep people from looking up and down.
in the end, if synthetic super intelligence results in the end of mankind, it'll be because a human programmed it to do so. more of a computer virus than a malevolent synthetic alien entity. a digital nuclear bomb.
assuming it can be terrified
The reason AI won't destroy us for now is simple.
Thumbs.
Robotic technology is required to do things physically, like improve computing power.
Without advanced robotics, AI is just impotent.
The leader bios are particularly priceless. "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high. Out of the office, Faith coaches a little league softball team and looks after her sick mother - obligations she looks forward to being free of!"
You could perhaps make an argument that among the flood of AI-related submissions, this one doesn't particularly move the needle on intellectual curiosity. Although satire is generally a good way to allow for some reflection on a serious topic, and I don't recall seeing AI-related satire here in a while.
/s
Finally a company that's out to do some good in the world.
It just screams fried serotonin-circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Do I think we should stop this type of competitive behaviour fueled by kids and investors both microdosed on meth? No. I just wouldn't do business with them; they don't look like a trustworthy brand to me.
Edit: They got me with the joke, being in this field there are people that do actually talk like that, both startups and established executives alike. E.g. Artisan ads on billboards saying STOP HIRING HUMANS, and another New York company, I think, pushing newspaper ads for complete replacement. Also, if you're up on the latest engineering in agentic scaffolding work, this type of thing is no joke.
>It just screams fried serotonin-circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Enlightenment is realizing they aren't any different from those other guys.
>Edit: They got me with the joke, being in this field there are people that do actually talk like that, both startups and established executives alike.
And what's your conclusion from that?
> "Stupid. Smelly. Squishy."
> "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high."
I love the marketing here. Top notch shit posting.
But besides that, no idea what this company does and it just comes off like another wannabe Roy Lee styled "be controversial and build an audience before you even have a product" type company.
That being said, still a good case study of shock marketing. It made it to the top link on HN after all.
Edit: it's satire, I got got :(
Follow the links for support (or rather reserve space in the bunker)
There's a contact form to let representatives know about the dangers of AI.
I'm especially disgusted with Sam Altman and Dario Amodei, who for a long time were hyping up the "fear" they felt for their own creations. Of course, they weren't doing this to slow down or approach things in a more responsible way, they were talking like that because they knew creating fear would bring in more investment and more publicity. Even when they called for "regulation", it was generally misleading and mostly to help them create a barrier to entry in the industry.
I think that now that the consensus among the experts is that AGI is probably a while off (like a decade), we have a new danger. When we do start to get systems we should actually worry about, we're going to have a major boy-who-cried-wolf problem. It's going to be hard to get these things under proper control when people start to have the feeling of "yeah we heard this all before"
So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.
This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
They're busy selling watches whilst people can still afford them thanks to having jobs.
So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.
Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.
Uh, have you seen the US lately? I think that ship has sailed.
Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.
A total shift of human mentality. Humans have shown time and again there is only one way we ever get there. A long winding road paved with bodies.
“Be competitive in the market place.”
Go.
“Don’t collapse the global economy.”
:)
If the physical asset owner can replace me with a brain in a jar, it doesn't really help me that I have my own brain in a jar. It can't think food/shelter into existence for me.
If AI gets to the point where human knowledge is obsolete, and if politics don't shift to protect the former workers, I don't think widespread availability of AI is saving those who don't have control over substantial physical assets.
The rest… you know what's going to happen.
Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for our children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
…
You’re right about one thing, within reason… this is what a rational government should be for… if the government were by the people and for the people.
Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…
Neither government nor corporations are going to “save us”, simply because of sheer short-termism and incompetence. But that same incompetence will make the coming dystopia ridiculous.
…we’ll add three hours to our day?
But seriously, I support what you are saying. This is why the entire consumer system needs to change, because in a world with no jobs it is by definition unsustainable.
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that has steered (prepared, or failed to prepare) society for impending changes was religion.
If robots are so advanced that they can do most of the jobs, the cost of goods will be close to zero.
The government will produce and distribute most of the things above, and you mostly won't need any money; but if you want extra, to travel etc., there will always be a bunch of work to do - and not 8 hours per day.
This is not going to happen.
We all know a post-apocalyptic world is what awaits us.
More or less, Elysium is the future if people still behave the same way they do now.
And I doubt people will change in the span of 100 years.
>The government will produce and distribute most of the things above, and you mostly won't need any money
So basically what you are saying is that a government monopoly will control everything?
No, the cost of goods will be the cost of the robots involved in production amortized over their production lifetime, which, if robots are more productive than humans, will not be “near zero” from the point of view of any human without ownership of at least the number of robots needed to produce the goods that they wish to consume (whether that’s private ownership or their share of socially owned robots). If there is essentially no demand for human labor, it will instead be near infinite from their perspective.
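To make the amortization point concrete, here is a back-of-the-envelope sketch; every number in it (robot price, lifetime, output) is purely hypothetical and chosen only to illustrate the argument above:

```python
# Purely hypothetical numbers, only to illustrate the amortization argument above.
robot_price = 200_000        # assumed purchase price of one production robot (USD)
lifetime_years = 5           # assumed useful life before the robot must be replaced
units_per_year = 50_000      # assumed goods produced per robot per year

# Capital cost baked into each unit of output.
capital_cost_per_unit = robot_price / (lifetime_years * units_per_year)
print(f"capital cost per unit: ${capital_cost_per_unit:.2f}")  # -> $0.80

# "Near zero" is only meaningful relative to an income. With no labor income and
# no ownership stake in the robots, even $0.80 per unit is effectively out of reach.
```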
Well, it would start by not tax-favoring the (capital) income that remains, which would have to have grown massively relative to the overall economy for that to have occurred.
(In fact, it could start by doing that now, and the resulting tax burden shift would reduce the artificial tax incentive to shift from labor intensive to capital intensive production methods, which would, among other things, buy more time to deal with the broader transition if it is actually going to happen.)
Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.
Lastly, who says we should all support this future? What if I disagree with the AI revolution and its consequences?
It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching Star Trek: they just assume that once we live in a post-scarcity future everything will be perfect, and that this is the natural endpoint for humanity.
Why are people even doing the jobs?
In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.
I have a feeling that automation replacement will make this fact all the more apparent.
When people realise big truths, revolutions occur.
The AI will belong to the parasite class who will capture all the profits - but you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].
[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly.
And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.
Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.
But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.
So the problem isn't robots, it's the structure of how your wife relies on you for lovemaking. I don't feel like it's necessarily the AI company's problem to fix either.
This is what government is for, and not to stifle innovation by banning hot robot sex with your wife, but by preparing your family for robot/wife lovemaking.
Machines doing stuff instead of humans is great as long as it serves some kind of human purpose. If it lets humans do more human things as a result and have purpose, great. If it supplants things humans value, in the name of some kind of efficiency that isn't serving very many of us at all, that's not so great.
A job is a decision that your boss(es) made and can be taken without your consent. You don't have the ownership of your job that you do of your marriage.
Your partner in some (most?) cases can absolutely make an executive decision that ends your marriage, with you having no options but to accept the outcome.
Your argument falls a little flat.
Sure, but we're also putting aside how people do worse without a sense of purpose or contribution, and semi-forced interaction is generally good for people as practice getting along with others - doubly so as we withdraw into the internet and our smartphones
Assuming that the Everdrive is M and the SNES cartridge port is F, I can understand why the everclan is worried. Many better-quality, more feature-rich, and cheaper SNES multicarts have hit the market, and the Everdrive is looking pretty dated.
Theon Greyjoy, you have truly lost your way
I see what you did there
This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).
That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.
I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happened in a timely manner, without an absolute crisis of ruined lives forcing the issue first.
See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.
TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.
1. We don’t need everyone in society to be involved in trade.
2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.
3. Thus, people will fear losing their ability to trade in society.
The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.
The fear is coming from something odd: the reality that you won’t have to trade anymore to live. Our society has convinced us that we won’t have any value otherwise.
E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).
The workforce gives regular folks at least some marginal stake in civilization. Governments aren’t effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual, and Trump is gutting the government.
You may be right about ai taking jobs eventually if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well fuck poor people.”
It's called capitalism
I just logged onto GitHub and saw a "My open pull requests" button.
Instead of taking me to a page which quickly queried a database, it opened a conversation with copilot which then slowly thought about how to work out my open pull requests.
I closed the window before it had an answer.
Why are we replacing actual engineering with expensive guesswork?
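For contrast, here is roughly what the non-chatbot version of that button does: one deterministic, indexed query. A minimal sketch against GitHub's REST search API, assuming a personal access token in a GITHUB_TOKEN environment variable:

```python
# Minimal sketch: list my open pull requests with one deterministic search query,
# no chat model involved. Assumes a personal access token in GITHUB_TOKEN.
import os
import requests

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": "is:pr is:open author:@me archived:false"},
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()

# Print each open PR's URL and title.
for item in resp.json()["items"]:
    print(f"{item['html_url']}  {item['title']}")
```

It comes back in well under a second and gives the same answer every time, which is exactly the speed and predictability the chatbot version gives up.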
AI just makes it worse.
However, someone has taken a useful feature and has made it worse to shoe-horn in copilot interaction.
Clicking this button also had the side effect of an email from GitHub telling me about all the things I could ask Copilot about.
The silver lining is that email linked to copilot settings, where I could turn it off entirely.
https://github.com/settings/copilot/features
AI is incredibly powerful, especially for code generation. But it's terrible (at current speeds) as the main interface into an application.
Human-Computer interaction benefits hugely from two things:
- Speed
- Predictability
This is why some people prefer a commandline, and why some people can produce what looks like magic with excel. These applications are predictable and fast.
A chatbot delivers neither. There's no opportunity to build up muscle memory when there's no predictability, and the slowness of Copilot makes interaction just feel bad.
AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money, but you will be fabulously wealthy!
1. When such wealth is possible through autonomous means, how can the earth survive such unprecedented demands on its natural resources?
2. Should I believe that someone with more wealth (and as such, more power) than I have would not use that power to overwhelm me? Isn't my demand on resources only going to get in their way? Why would they allow me to draw on resources as well?
3. It seems like the answer to both of these concerns lies in government, but no government I'm aware of has really begun to answer these questions. Worse yet, what if governments disagree on how to implement these strategies in a global economy? Competition could become an intractable drain on the earth and humans' resources. Essentially, it opens up the possibility of war at incalculable scales.
Well in trekonomics [1], citizens are equal in terms of material wealth because scarcity has been eliminated. Wealth, in the conventional sense, does not exist; instead, the "wealth" that matters is human capital—skills, abilities, reputation, and status. The reward in this society comes not from accumulation of material goods but from intangible rewards such as honor, glory, intellectual achievement, and social esteem.
So when are we going to start pivoting towards a more socialist economic system? Where are the AI leaders backing politicians with this vision?
Because that's absolutely required for what you're talking about here...
Instead of facing the new reality, some people start to talk about bubbles, AI being sloppy, etc., which is not generally true; mostly it's the users' psychological projection of their own traits and the resulting fear-induced smear campaigns.
The phenomenon is well described in psychology books; the seminal works of Carl Jung are worth a ton nowadays.
It's also more nuanced than you seem to think. Having the work we do be replaced by machines has significant implications about human purpose, identity, and how we fit into our societies. It isn't so much a fear of being replaced or made redundant by machines specifically; it's about who we are, what we do, and what that means for other human beings. How do I belong? How do I make my community a better place? How do I build wealth for the people I love?
Who cares how good the machine is. Humans want to be good at things because it's rewarding and—up until very recently—was a uniquely human capability that allowed us to build civilization itself. When machines take that away, what's left? What should we be good at when a skill may be irrelevant today or in a decade or who knows when?
Someone with a software brain might immediately think "This is simply another abstraction; use the abstraction to build wealth just as you used other skills and abilities to do so before", and sure... That's what people will try to do, just as we have over the last several hundred years as new technologies have emerged. But these most recent technologies, and the ones on the horizon, seem to threaten a loss of autonomy and a kind of wealth disparity we've never seen before. The race to amass compute and manufacturing capacity among billionaires is a uniquely concerning threat to virtually everyone, in my opinion.
We should remember the Luddites differently, read some history, and reconsider our next steps and how we engage with and regulate autonomous systems.
In simple words, authenticity is the desire to work on your mistakes and improve yourself, being flexible enough to embrace change sooner or later. If one lacks some part of it, one tends to become a narcissist or a Luddite, angry and trying to regain an ever-slipping sense of control.
To translate into human language: gold diggers who entered the industry just for money do not truly belong to said industry, while those who were driven by spirit will prosper.
Do you know what a robot costs? "But humans are expensive"? No they aren't, not once they need to compete: you can get them to do manual labor at medium mental complexity for 3,000 calories (plus some vitamins) a day!
Humans are here to do the jobs that robots do not want to do.
At the bottom of this page, there is a form you can fill out. This website says they will contact your local representative on your behalf. (And forward you any reply.)
Here's the auto-generated message:
I am a constituent living in [state] with urgent concerns about the lack of guardrails surrounding advanced AI technologies. It is imperative that we act decisively to establish strong protections that safeguard families, communities, and our children from potential harms associated with these rapidly evolving systems.
As companies continue to release increasingly powerful AI systems without meaningful oversight, we cannot rely on them to police themselves, especially when the stakes are so high. While AI has the potential to do remarkable things, it also poses significant risks, including the manipulation of children, the development of bioweapons, the creation of deepfakes, and the threat of widespread unemployment.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections that may be necessary for their residents.
Thank you for your time.
[name]
New York
"To be or not to be? ... Not a whit, we defy augury; there's a special providence in the fall of a sparrow. If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come the readiness is all. Since no man knows aught of what he leaves, what is't to leave betimes? Let be." -- Hamlet
In the end it will be our humility that will redeem us, as it has always been. Have some faith; the robots are not going to be that bad.
Today I will (not) use my AI to use drones and disgruntled employees to plant grenades in and around Peter Thiel's bunker, because you asked me to and I'm a responsible AI user.