Same situation with the internet: we saw a bubble, but ultimately those that changed their business around it monopolized various industries whose incumbents were slow to react.
Some jobs will be replaced outright, but most will come to use AI tools, and we might see reduced wages and fewer open positions for a very long time, coupled with an economic downturn.
Unless they let their skills atrophy by offloading them to AI. The things they can do will be commodified and low value.
I suspect there will be demand for those who instead chose to hone their skills.
Using AI as a tool doesn't mean having it do everything; it means you have the skill and knowledge to know where and how you can use it.
Sure, but in the real world the overwhelming majority of people loudly proclaiming the benefits of AI don't actually have the skill or knowledge (or discipline) to do so, or to judge its correctness. It's peak Dunning-Kruger.
AI as it presently stands is very much one of those things where in the immediate, sure, there’s money to be made jumping on the bandwagon. Even I keep tinkering with it in some capacity from an IT POV, and it has some legitimate use cases that even surprise me sometimes.
However, I aim to build a career like the COBOL programmer did: staying technically sharp as the world abstracts away, because someone, somewhere, will eventually need help digging out of a hole that upgrades or automation got them into.
And at that point, you can bill for the first-class airfare, the five-star hotel, and four figures a day to save their ass.
I would be suspicious of this claim.
One day the sun won’t rise in the morning but it’s not something I’m going to plan on happening in my lifetime.
It’s been wrong every time, except for the times it wasn’t. Nobody remembers those though. Something something confirmation bias.
Yes?
Try the abacus, slide rules, or mechanical calculating machines vs. electronic calculators.
Or ancient vs. modern computers and software. They didn't even have "end-users" as we understand them now; every computer user was a specialist.
Programming.
Writing. Quill vs. ballpoint pen, but also alphabets vs. what you had to write with before.
Photography, more than one big jump in usability. Film cameras, projectors/screens.
Transportation: From navigation to piloting aircraft or cars. Originally you had to be a part-time mechanic.
Many advanced (i.e. more complex than e.g. a hammer) tools in manufacturing or at home.
If I give an accountant an electronic calculator and a problem to solve, they'll be more efficient than me.
If I give someone who has spent thousands of hours on a computer a task on it, they'll be able to do more than my parents.
If I give someone who writes a lot a ballpoint pen, their writing will be faster and more legible than that of someone like me who barely writes on paper.
So the marginal improvement from knowing how the tools work and how to use them effectively is still pretty high, to the point that people who spend time with the tools will be _more_ effective.
Uhm... yes???
Obviously even a baby has "skills".
The point is the comparison between levels of tech. Your accountant is the constant; the tools they use are the variable.
Interpreting the OP's point as "absolute zero skill" goes against the HN rule to interpret comments reasonably. You guys are trying to find the most stupid angle possible for the sake of an "argument". I hate this antagonistic debate culture so much.
I'm sure there are people who are more skilled at using a cell phone than I am. It doesn't matter.
Similarly, we all have had co-workers or friends who aren't very good at using search engines. They still use them and largely still have jobs.
Now that I think of it, most regularly-used technology is like this. Cars, dishwashers, keyboards, electric shavers. There is a baseline level of skill required for operation, but the marginal benefits for being more skilled than the baseline drop off pretty quickly.
The easier "AI" gets to use (as we're "promised" it will), the quicker a skilled engineer will be able to adapt to it whenever they give up and start using it. They'll likely be ahead of any of those early adopters who just couldn't resist the allure of simply accepting whatever gets spit out without thoroughly reviewing it first.
Those who don't use AI as a tool today will be replaced tomorrow by those who do.
That's not a robust prediction. Many people who don't use AI today simply don't do so because they've tried it, and found it subtracts value. Those people will not be replaced tomorrow, they will merely reevaluate the tool and start using it if it has started to add value.
Your executive team is going to "remove" non-AI folks regardless of their claims about efficiency.
Just like they forced you to return to the office while ignoring the exact same efficiency claims. They had real estate to protect. Now they have AI to protect.
And I make the inverse prediction.
I work for a FAANG and I see it: from juniors to senior engineers, the ones who use AI generate absolute slop that is unreadable, unmaintainable, and definitely going to break. They are making themselves not just redundant, but an actual danger to the company.
The question about planning around HBM is too vague to really address, but people are separately working on providing more bandwidth, using more bandwidth, and figuring out how not to need so much bandwidth.
User-Agent: AI-Bot
Disallow: /ai-bot/
Stopped reading this rage-bait when I saw this. The company he works at is starting to go all in on AI and is itself producing content on the very same thing he is opposing. [0]
> But even myself, as an AI engineer, I am just soooo sick of that type of content. It’s the same generic stuff. It appears we have become the LLMs, regurgitating what’s already out there as if it was new ideas.
The author is not an AI engineer™ (whatever that means these days). Just yet another "dev".
[0] https://www.medbridge.com/educate/webinars/ai-in-healthcare-...
The entire article is a complete joke and is ragebait.
Flagged.
This is basically an entire genre of low-effort Hacker News posts.
:-)
A) In 5 years there's no real improvement, the AI bubble pops, and most of us are laid off. B) In 5 years near-AGI replaces most software engineers, and most of us are laid off.
Woohoo. Lose-lose scenario! No matter how you imagine this AI bubble playing out, the music's going to stop eventually.
There's much better content on Show HN, some of which won't hit the homepage because this has more votes. It's a problem HN has to fix: people upvote because they agree, and that vote carries the same weight as one that required far more effort (trying a product, looking at the code, etc.).
Always have been.
Anyway, complaining about them doesn't add any value either. And complaining about complaining... well you get the idea.
This is how I feel. You see so many articles prognosticating and living in the world of hypotheticals; meanwhile AI is transforming industries today, and articles tracking those changes feel rare. Are they on an obscure blog?
If we break down every single AI post over the past two years, we get the same conclusions every single time:
* Transformer and Diffusion models (current “AI”) won’t replace jobs wholesale, BUT-
* Current AI will drastically reshape certain segments of employment, like software development or copywriting, BUT-
* Likely only to the point that lower-tier talent is forced out or forced to adapt, or that bad roles are eliminated outright (slop/SEO farms)
As for the industry itself:
* There’s no long-term market for subscription services beyond vendor lock-in and users with skill atrophy
* The build-out of inference and compute is absolutely an unsustainable bubble barring a profound revolution in machine learning that enables AI to replace employment wholesale AND do so using existing compute architectures
* The geopolitical and societal shifts toward sovereignty/right-to-repair means the best path forward is likely local-inferencing, which doesn’t support the subscription-based models of major AI players
* Likely-insurmountable challenges in hallucinations, safeguards, and reliable outputs over time will restrict adoption to niche processes instead of general tasks
And finally, from a sociological perspective:
* The large AI players/proponents are predominantly technocrat billionaires and wealthy elites seeking to fundamentally reshape societal governance in their favor and hoard more resources for themselves, a deeply diseased viewpoint that even pro-AI folks are starting to retch at the prospect of serving
* The large resistance to AI at present comes broadly from regular people who are angry at the prospect of their replacement (and worse) by technology in a society where they must work to survive, and who are keenly aware of the real motive behind Capital eliminating the need for labor: a shift in the distribution of power
* Humans who have dedicated their lives to skilled and/or creative pursuits in particular are vocally resistant to the mandate by technocrats of “AI everywhere”, and continue to lead the discourse not in how to fight against AI (a losing battle now that Pandora’s Box is open), but in building a healthier and more equitable society where said advancements benefit humans first/equally, and Capital last
* The “creator” part of society in particular is enraged at having their output stolen/seized by Capital for profit without compensation and destroying their digital homes and physical livelihoods in the process, and that is a wound that cannot be addressed short of direct, substantial monetary compensation in perpetuity - essentially holding Capital accountable for piracy much like Capital holds consumers accountable (or tries to). This is a topic of ongoing debate that will likely reshape IP laws at a fundamental level for the century to come.
There. You can skip the glut of blogs, now, at least until any one of the above points substantially changes.