Man will do nothing and machine will do everything. That's a bleak world no one is preparing for.
How is that universal basic income scheme coming along?
We currently have two broad mechanisms for tying people's value to the resources they get.
*Employees:*
Easy to replace = Low Salary = Gets Few Resources
Hard to replace = High Salary = Gets Many Resources
*Entrepreneurs:*
Output consumed low = Low Pay = Gets Few Resources
Output consumed high = High Pay = Gets Many Resources
(Resource consumption ignored)
In a world where machines do everything, aspects of these change:
*Employees:*
Easy to replace = Gets whatever resources
(no-one hard to replace)
It is up to us to define whether 'whatever' is bleak or not. If we decide that resources need to be shared fairly, it could be heaven, not hell.
*Entrepreneurs:*
Resource consumption: Whatever
It is up to us how much resource consumption we allow. If we decide that resource consumption needs to be sustainable, it could be heaven, not hell.
If there is a person A who can become a squillionaire by making sure that a company's employees make as little as possible thanks to AI, that's what's going to happen. There is zero chance "we" will decide resources need to be shared fairly.
If person A can amass more money and power, then resource consumption literally doesn't matter. There is no way "we" will be involved in that process at all.
Call me cynical, but human history has proven over and over and over again that whatever short-sighted, selfish option enriches a very few is what will happen, until there is finally violence.
I do not look forward to the AI wars that my children will be forced to fight in.
This is wrong: in most cases the entrepreneur is worse off than the employees, since the entrepreneur spent all his savings on the project while the employees walk away with all the money they earned in salaries.
And even when it is fully funded by external investors, most of the time the founder just gets to keep his salary, since the company fails and becomes worthless.
The only time the entrepreneur is better off is when the company succeeds and becomes big, but that is rare; most of the time it is better to be an employee.
Risk seekers should be entrepreneurs.
Risk averse people, probably, should not.
If the Epstein class won't allow for everyone to have a reasonable standard of living when they relied on workers to produce, the chances of them allowing it when they don't is next to nil. They couldn't even bear the thought of people working from home, for no other reason than the workers liked it, and that cost them nothing.
This is where LLMs are currently going. Not really AGI, since they can't think like humans, but they can do a lot of things, and humans can train them on novel things.
Then human work is changed to figuring out new things and the AI solves all old things, that seems much more fun than most white collar work today.
But it's not fun to be figuring out new things all the time. Some amount of routine work is necessary to 1) exercise mastery (feels good), and 2) recover energy. This is why a lot of people find agentic coding exhausting and less fun: you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.
Maybe I'm wired differently, but this is fun to me. "Exercising mastery" by doing routine work is almost never fun; things stop being fun and feeling good once I've "mastered" them. And I can't say I've ever "recovered energy" by doing routine work; it seems to suck energy out of me faster than anything. To recover, I tend to rest and do anything but work. But again, maybe it's just weird wiring.
Yes, everyone starts out creative.
But we all can tell the difference between a worker who is still creative and learning and a worker who gave up creativity and is just doing his job. The first will still be useful in this AI age; the second will be replaced by AI learning what he already knows.
Rather: the workers who are (still) creative are typically a huge annoyance to their bosses.
In a new world where creativity is valued more highly, more people could probably keep their creativity.
This is in my opinion a very dubious assumption. :-(
Are there studies done on this or is this just wishful thinking?
Steve Jobs
Now, what doers are in the age of LLMs is another question.
Jobs' gift was that he was an incredibly talented salesman.
Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.
Everyone around him at that time has commented on this. Are you going to claim they’re all lying?
Sometimes it's a lack of capacity for novel thinking. Sometimes it's fear caused by past trauma. Or it can be age. Or an inability to overcome habits. The list goes on, but the point is that I've had to work with or supervise employees (even in IT!) that didn't have a creative bone in their body. It wasn't a lack of motivation, it was usually something on the list above.
These people absolutely deserved the feeling of being useful, and those are the people I'm most concerned for in this new post-LLM world. The creative types will most likely be fine, but we have words to describe creativity as an acknowledgement that there can be an absence of creativity.
To anyone who is a domain expert in the thing they are supposed to be doing, LLMs are shit at doing stuff. They are trained on a huge corpus of average material. They produce average-to-crappy solutions quickly. The technology industry bubble is trained to accept that as good enough, which is why everyone is excited. Elsewhere it's a complete and utter joke.
And on top of that, a huge chunk of doing either requires humans to physically do something or demands absolute determinism, neither of which an LLM is capable of.
None of it makes sense.
Edit: actually the technology industry moves the goalposts to match the claims. That is the dishonest bit. I've not seen any evidence of novel capability which isn't corrupted by some dishonest measurement approach.
So you just lose your job.
> This is where LLM is currently going.
This is not where LLMs are currently going. They are trained and benchmarked explicitly in all areas that humans produce economically and cognitively valuable work: STEM fields, computer use, robotics, etc.
Systems are already emerging where AI agents autonomously orchestrate subagents which again all work towards a goal autonomously and only from time to time communicate with you to give you status updates.
Thinking that you as a slow human will be needed for much longer to fill some crucial role in this AI system that it cannot solve by itself, and to bring some crucial skill of creativity or thinking to the table that it cannot generate itself is just wishful thinking. And to me personally, telling an AI to "do cool thing X" without having made any contribution to it beyond the initial prompt also feels very depressing and seems like much less fun than actually feeling valued in what I do. I'm sorry for sounding harsh.
I suppose it depends on your definition of “doing” - if it’s “writing code”, then sure. But there’s a whole world of actual, physical “doing” that AI is nowhere close to matching humans at, and it’s much easier for me to envision a world where AI replaces the management / “thinking” layer of society than the physical labor. Which is scary, because it’s the opposite of his (and I would assume most people’s) ideal.
Besides these traits that every CEO/big-time investor seems to share, is there anything uniquely awful with Altman?
His involvement in Worldcoin (now named "World"), i.e. biometric scanning of huge populations.
We still don't have computer programs that are able to "decide" what "they" "want" to do. We have programs that can mimic this behavior, but the implementation is effectively the same as the chess and flight programs we've had for decades: searching a gigantic solution space very quickly. What's changed is the amount of data and compute we can throw at the problem.
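The "searching a gigantic solution space" above can be made concrete with a toy sketch: exhaustive game-tree search over a tiny game of Nim (take 1-3 stones; whoever takes the last stone wins). This is an illustrative example, not anything from the thread; chess and route-finding programs do the same thing at a vastly larger scale, with heavy pruning.

```python
# Toy sketch of "searching a solution space": brute-force game-tree search
# over Nim. A position is just the number of stones remaining.

def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, current_player_can_win) by trying every line of play."""
    for take in (1, 2, 3):
        if take > stones:
            continue
        if take == stones:            # taking the last stone wins outright
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:         # found a move that leaves the opponent losing
            return take, True
    return 1, False                   # every move loses against perfect play

print(best_move(7))   # → (3, True): take 3, leaving a multiple of 4
```

Nothing here "decides" or "wants" anything; it mechanically enumerates outcomes, which is the point being made.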
The emergent behavior we observe from these systems is the result of our human inability to comprehend the relationships and patterns in the vast amount of data we feed them. We assign anthropomorphic qualities like creativity, intelligence, reasoning, thinking, etc., to this behavior in an attempt to make the technology more approachable, and, of course, more marketable, which fuels further investments.
What's very much uncertain is whether continuing to scale up will lead us to machines that can do all of the things Altman talks about. There's disagreement about this even between leading figures in the field, but being negative about it is not as profitable.
The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking
Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge himself is a former Googler and by all accounts was an impressive person at one point, now best known as the person who vibe-birthed the inanity that is GasTown.

At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have.
I see humans getting worse at reading, worse at writing, and worse at programming by themselves. It makes me angry and sad.
We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.
LLMs are a virus of the mind. People think: so what? I get my output and move on.
Yeah.. no. You need that thinking capacity to protect yourself. Once that’s gone en-masse, what’s left of the democratic system (not much) will completely collapse. Congrats to legally creating an environment that yields oligarchy.
Altman and his cronies yearn for a swath of people who cannot think for themselves.
People who are liberal artsy at the core but do computer science? Yes.
Even if AI can't (yet) reach human levels of creativity, it performs well while trying, at least for now. Who knows about the near future? So far, the roadmap is clear.
The AI push is causing major layoffs in the tech and crypto industries nowadays. But we have been receiving the message "adapt or pay the consequences." Right now, even management positions are being replaced by software. It could sound rude, but it's also part of human nature and evolution. We have created these machines, and now we have to deal with them.
On the other hand, it could be rare at this stage, but we (regular human beings) barely know how the brain really works. And AI has demonstrated, at some point, that it can work very well in some roles (mostly operational, of course), and it's also becoming indispensable. Even governments like Abu Dhabi's are pushing to run the emirate fully by AI.
So yeah, even if we don't like it, AI is silently replacing humans. The best you can do is learn how to leverage it and not be left behind.
nik736•1h ago
Isn't that how LLMs are trained right now? Trying to predict the next word within a "gigantic solution space". Interesting.
Lionga•1h ago
The reference to pong makes even less sense.
ben_w•1h ago
But the difference is:
What Deep Blue did was (if the Wikipedia page is correct) Alpha-beta pruning[0], where some humans came up with the function for what "better" and "worse" board states look like.
And what LLMs do (at least the end models) includes at least some steps where there's an AI trying to learn what human preferences are in the first place, in order to maximise the human evaluation scores. Some of those things are good, like "what's the right answer to the trolley problem?" and "which is the better poem?", but some are bad such as "what answer best flatters the ego of the user without any regard for truth?"
The former is exactly like route-finding, in that you could treat travel time as your score of better-worse and the moves as if they're on a map rather than a chess board.
The latter is like being dumped into a new video game with no UI and all NPCs interact with you only in a language you don't know such as North Sentinelese.
[0] https://en.wikipedia.org/wiki/Alpha–beta_pruning
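The alpha-beta idea described above can be sketched in a few lines: minimax search over a hand-built game tree whose leaves hold scores from a human-designed evaluation function (the "better/worse board state" function mentioned for Deep Blue), with branches cut off once one player provably won't allow them. The tree below is an invented textbook-style example, not taken from the thread.

```python
# Minimal sketch of alpha-beta pruning. Internal nodes are lists; leaves are
# integers from a (human-written) static evaluation function.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):                 # leaf: static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # prune: opponent won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                 # prune: we won't choose this line
                break
        return value

tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # → 6
```

The key point: every piece of "judgement" lives in the human-supplied leaf scores; the search itself is mechanical.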
yobbo•1h ago
It's neither how computer chess works nor how LLMs are trained.
Computer chess uses various tricks to prune the search space of board states, where the search is guided by the "value" of each board state. Neural networks can be used (and probably were at the time) to approximate this value, but there can also be hand-coded algorithms with learned statistics, or even lookup tables for games smaller than chess.
There's no search in LLM training.
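To make the contrast concrete, here is a toy sketch of what LLM pretraining actually minimizes: average negative log-likelihood (cross-entropy) of the next token, with a bigram count table standing in for the network. The corpus and numbers are invented for illustration; real training fits billions of weights by gradient descent, but the objective has this shape, and there is no tree search in it.

```python
# Toy next-token-prediction objective: no search, just a loss to minimize.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Train" by counting bigrams (a stand-in for fitting network weights).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_prob(prev, nxt):
    return counts[prev][nxt] / sum(counts[prev].values())

# Training loss = average negative log-likelihood of each next token.
nll = [-math.log(next_token_prob(p, n)) for p, n in zip(corpus, corpus[1:])]
print(round(sum(nll) / len(nll), 3))   # → 0.412
```

Search can appear later, at inference time (e.g. sampling or beam-style decoding), but the training step itself is pure loss minimization.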