I can't understand this optimism. Aren't we living in a capitalist world?
Plenty of people could already work less today if they just spent less. Historically any of the last big productivity booms could have similarly let people work less, but here we are.
If AGI actually comes about and replaces humans at most cognitive labor, we'll find some way to keep ourselves busy, even if the jobs are ultimately as useless as the pet rock or the Jump to Conclusions Mat (an Office Space reference, for anyone who hasn't seen it).
People could work less, but it's a group effort. As long as some narcissistic idiots who want more instead of less are in charge, this is not going to change easily.
And if not needed, culled. For being "unproductive" or "unattractive" or generally "worthless".
That's my cynical take.
As long as the rich can be reined in somehow, the poor will not necessarily become poorer.
It still takes basically the same amount of labour hours to give a haircut today as it did in the late 19th century. An elementary school teacher today can still not handle more than a few tens up to maybe a hundred students at the extreme limit. Yet the hairdressing and education industries must still compete — on the labour market — with the industries showing the largest productivity gains. This has the effect of raising wages in these productivity-stagnant industries and increasing the cost of these services for everyone, driving inflation.
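The mechanism described above (productivity-stagnant services getting more expensive because their wages must track productive sectors) can be sketched with a toy calculation. All numbers here are illustrative assumptions, not data:

```python
# Toy sketch of the cost-disease argument: a haircut takes the same
# labour today as in 1900, but barber wages must keep pace with wages
# in sectors whose productivity compounds year over year.

def service_price(base_wage, productivity_growth, years, haircuts_per_hour=2):
    """Price of one haircut if the barber's wage tracks a sector
    growing at `productivity_growth` per year, while the barber still
    does the same number of haircuts per hour."""
    wage = base_wage * (1 + productivity_growth) ** years
    return wage / haircuts_per_hour

# Assume a $10/hour wage and two haircuts per hour: $5 per haircut.
p_start = service_price(10, 0.02, 0)
# A century of 2%/year productivity growth elsewhere pulls wages up
# roughly 7.2x, so the same haircut costs about $36, despite zero
# productivity gain in barbering itself.
p_century = service_price(10, 0.02, 100)
print(p_start, round(p_century, 2))  # → 5.0 36.22
```

The point of the sketch: the relative price of the service rises even though nothing about the service changed.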
Inflation is the real time-killer, not a fear of idleness. The cost of living has gone up for everyone — rather dramatically, in nominal terms — without even taking housing costs into account.
But they're not talking about idle time, they're talking about quality time with loved ones.
> Plenty of people could already work less today if they just spent less.
But spending for leisure is often a part of that quality time. The idea is being able to work less AND maintain the same lifestyle.
Probably not. We're deep in the hype bubble, so AI is strongly overused. Once the bubble pops and things calm down, some use-cases may well emerge from the ashes but it'll be nowhere near as overused as it is now.
> AI has become a race between countries and companies, mostly due to status. The company that creates an AGI first will win and get the most status.
There's a built-in assumption here that AGI is not only possible but inevitable. We have absolutely no evidence that's the case, and the only people saying we're even remotely close are tech CEOs whose entire business model depends on people believing that AGI is around the corner.
I don't think these things are really that correlated. In fact, kind of the opposite. Hype is all talk, not actual usage.
I think this will turn out more like the internet itself. Wildly overhyped and underused when the dotcom bubble burst. But over the coming years and decades it grew steadily and healthily until it was everywhere.
Agreed re: AGI though.
Pets.com IPO'd at about $300 million ($573 million adjusted for inflation).
Chewy is at a 14 billion market cap right now.
I think comparing LLMs and the dotcom bubble is just incredibly lazy and useless thinking. If anything, all that previous bubbles show is what's not going to happen again.
Curious to hear more here. What is lazy about it? My general hypothesis is that ~95% of AI companies are overvalued by an order of magnitude or more and will end up with huge investor losses. But longer term (10+ years) many will end up being correctly valued at an order of magnitude above today's levels. This aligns perfectly with your pets.com/Chewy example.
I don't, however, see LLMs as consumer products remaining as prevalent as they are today. The cost of using LLMs is kept artificially low for consumers at the moment. That is bound to hit a wall eventually, at the very latest when the bubble pops. At least that seems like the obvious analysis at this point in time.
Regarding usage - I don't think LLMs are going away. I think LLMs are going to be what finally topples Google search. Even my less technical friends and acquaintances are frequently reaching for ChatGPT for things they would have Googled in the past.
I also think we'll see the equivalent of Google AdWords start to pop up in free consumer LLMs.
I doubt it. History has shown that credit for an invention often goes to the person or company with superior marketing skills, rather than to the original creator.
In a couple of centuries, people will sincerely believe that Bill Gates invented software, and Elon Musk invented self-driving cars.
Edit: and it's probably not even about marketing skill, but about being so full of oneself to have biographies printed and making others believe how amazing they are.
But my opinion on this has shifted a lot. The underlying technology is pretty lame. And improvements are incremental. Yes, someone will be the first, but they will be closely followed by others.
Anyway, I don't like the "impending doom" feeling that these companies create. I think we should tax them for it. If you throw together enough money, yeah, you can do crazy things. Doesn't mean you should be able to harass people with it.
Yes, it gets "smarter" each time, more accurate, but still lacks ANY creativity or actual thoughts/understanding. "You're completely right! Here, I fixed the code!" - proceeds to copy-paste the original code with the same bug.
LLMs will mostly replace:

- search (find information / give people expert-level advice in a few seconds)
- data processing (retrieval of information, listening and reacting to specific events, automatically transforming and forwarding information)
- interfaces (not entirely, but mostly augmenting them; sort of a better auto-complete and intention-detection)
- most content ideation and creation (it will not replace art, but if someone needs an ad, a business card, a landing page, etc., the AI will do a good enough job)
- finding errors in documents/code/security, etc.
All those use-cases are already possible today, AI will just make them more efficient.
It will be a LONG time until AI knows how to autonomously achieve the result you want and has the physical-world abilities to do so.
For AGI, the "general" part will only be as broad as the training data. Also, right now the AI defers too much to human instruction and is deliberately crippled for (good) safety reasons. While all those limitations are in place, the "general intelligence" will remain limited, since it would be too dangerous to remove them and see where it goes (not because it's smart, but because it would be like letting malware loose on the internet).
This depends on the perspective. Take a step back and consider what the actual technology is that makes this possible: neural networks, the transistor, electricity, working together in groups? All pretty cool, IMHO.
Where that leaves the rest of us is uncertain, but in many worlds the idea of status or marketing won't be relevant.
This insight stood out the most to me. I definitely agree, but what's interesting is the disconnect with the industry--it seems to be accepted right now that if coding is what AI is best at, developers must be the only ones who care, and that seems to show up in usage as well (I don't think I've seen much use of AI, outside of personal use, other than by developers; maybe I'm wrong?).
From the study[0]:
> 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience.
This study continues to get a lot of airtime on HN and elsewhere. Folks probably should be skeptical of a study with the combination of a small number of subjects with a broad claim.
He looks like a typical software engineer with a very generic opinion on AI, presenting nothing new.
The arrogance the article starts off with, where he talks about how much time he's invested in AI (1.5 years, holy cow) and how that makes him qualified to give his generic opinion, is just too much.
The point is this opinion is generic. It’s nothing new. It’s like someone stating “cars use gas, I’ve been driving for 1.5 years and I learned enough to say that cars use gas.”
> "We do not provide evidence that: AI systems do not currently speed up many or most software developers"
I've seen this sentiment quite a bit; I think it's really baked into the human psyche. We understate the importance of the work we don't enjoy and overstate the importance of the tasks we do enjoy and excel at. It makes sense: we're defending ourselves and our investment in learning our particular craft.
dude250711•3h ago
> As a manager, AI is really nice to get a summary of how everything is going at the company and what tasks everyone is working on and the status of the tasks, instead of having refinement meetings to get status updates on tasks.
I do not understand why they are not marketing some "GPT Middle Manager" to the executive boards so that they could cut that fat. Surely that is a huge untapped cost-cutting potential?
watwut•2h ago
The ones profiting the most will be consultancies designed to protect the upper management reputation.
justanotherjoe•2h ago
I want the AI to know my codebase the same way it knows the earth is round. Without any context fed to it each time.
Instead we have this weird Memento-esque setup where you have to give it context each time.
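The "Memento" setup described above follows from stateless chat APIs: nothing persists between requests, so the same project context has to be re-sent on every call. A minimal sketch of what that looks like in practice (the message format mirrors common chat APIs, but the function and variable names here are hypothetical):

```python
# Sketch of the stateless-context problem: every request must carry
# the codebase context again, because the model retains nothing
# between calls.

def build_messages(codebase_snippets, question):
    """Assemble the per-request message list, re-injecting the same
    project context on every single call."""
    context = "\n\n".join(codebase_snippets)
    return [
        {"role": "system",
         "content": "You are assisting with this codebase:\n" + context},
        {"role": "user", "content": question},
    ]

# Every question pays the full context cost again, no matter how many
# times the same snippets were already sent.
msgs = build_messages(["def add(a, b):\n    return a + b"],
                      "Why does add() fail on mixed types?")
```

What the commenter is asking for is the opposite: the codebase baked into the model's weights (the way "the earth is round" is), rather than re-supplied as prompt context per request.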
anonzzzies•2h ago