https://cyberpunk.fandom.com/wiki/Delamain_Corp_Headquarters...
They and their assets could safely be liquidated, and placed under management by it.
The only thing that protects them from this is the social contract. If AI unravels the social contract for the rest of us, why should they survive unimpacted?
If this sounds cruel or unfair - remember that they are always free to retrain themselves to do a useful job.
What work will they do to earn their bread and roof?
And what would incentivize people to do this? Perhaps a return on investment?
At some point, you cut off the investment in worse ideas as not having an attractive risk-adjusted return. AI may nudge that slightly positively, but I think plenty of companies just flat-out over-hired, the reckoning is here, and AI generating worse projects won’t be the salvation.
That's hardly true. Nearly all companies are heavily resource constrained. They have limited capital, limited manpower, and limited expertise. There are things they would love to be working on, which they know would be immensely beneficial, but just are not in a position to tackle those challenges at the moment. Even with speculative investment, you're still limited to what you can convince someone else will be successful given your current situation.
Yes there exist some massive companies who do have essentially unlimited access to capital, and they can in theory engage in any endeavor with a sufficiently high ROI, although even then in practice they don't. Projects get canned because of office politics or to make quarterly figures look good.
You go to just about any company in the world and ask just about any employee "is what you are currently working on the thing that this company needs done the most?" and the answer will be just about universally "No!"
Interview a random employee and that employee is unlikely to be working on the number 1 project (and so will answer “no” to your question).
Especially in tech. Tech has always been anti-labor and anti-union.
I feel like about 1996-2012 were the golden years; everything got corporatized as the MBAs flooded in around 2013.
I’m talking about early Google, Netscape, early Facebook, Sun Microsystems, flurry of Web 2.0 companies etc
Am I missing something with this analysis? It seems like 2025 is on pace for fewer workers laid off, not more.
I recall a professor at my university stating that there is a vast difference between knowing information and having access to information. While that distinction seems slight in the era of the internet, search engines and smartphones, it is becoming more and more important.
It's worth stating because we are 'knowledge' workers and pride ourselves on our ability to learn and adapt. Every technological advancement has made certain jobs redundant, and yet the prophecies that people will sit around idling and twiddling their thumbs have never come close to fruition.
There will always be 'work' because there are things we want to do and things we don't really want to do, and work is simply the latter. We all wish to work purely to buy our future laziness; even 'security' can be understood in this sense.
It will be interesting to see how our learning institutions adapt -- pretty soon CS grads will have classes on how LLMs work in theory and, more so, in practice -- how to prompt effectively.
I'm less fearful than many -- this is, as someone put it before, just 'automation on steroids'.
I don't think this is universally true for people working in IT. For some it definitely is, others don't care about their work and avoid learning and adapting at all cost because it's effort. They want the paycheck, but you'll need to threaten them with termination to get them to learn something new.
I view these types of articles as 2025’s version of the 2003 article below, which was very big news in its day.
Outsourcing didn’t cause all American programming jobs to go away but it did raise the bar. Expect the bar for “professional programmer” to go higher still. And expect to level up your skills if you haven’t already.
On the flip side, the number of amateur programmers is likely to increase a lot with improved tooling.
That’s valuable, particularly for those who have trouble with pushing projects through those earliest phases, but it’s hardly a panacea.
You look at Andy Jassy's company-wide memo referenced in the article and it starts with 'we are the world's largest start up', followed by claims of increased productivity that will be applied toward cost cutting instead of growth. Most VCs would pass on a 'start up' with such an apparent no/slow-growth mentality.
Same story as with "overhiring". If you have a bigger crew, you just go plunder bigger ships - except if the ships are just not there or you have no idea how to find them.
From what I’ve heard cream of the crop grads in hot fields are still getting huge offers and getting snapped up by the big names as well as smaller firms.
Some of those must be real positions doing promising work.
I've been feeling like this has been coming long before AI doomsayers were a thing.
The problem is that it's surprisingly difficult to judge candidates from the interview process, so the expected value of a new hire has a lot of uncertainty. And while degrees from good schools are a good indicator, in tech jobs, perhaps unlike medical or legal fields, they are not the be-all and end-all for success. It's entirely possible to be smart, self-motivated, and self-taught.
I would suggest 3 other forces:
- saturation: we don’t have any legacy fields left that aren’t yet digitized. If you go from encyclopedia -> search engine or taxi -> Uber, you have a first-mover greenfield market
- dominance games: VCs want scale, and scale is difficult with niche markets. There are innovations there, but who cares? A new shitcoin has the potential to scale quickly, whereas startups even in energy seem to be a downer for them
- no manufacturing: long iteration cycles on anything that needs hardware. Plus, lower margins, and the Chinese will sell cheap knockoffs anyway if you’re successful, so bigger risk. Thus, more shitcoins and AI wrappers.
This, and it’s not getting enough attention.
Big Tech hasn’t had ideas other than acquisitions for decades. Startups aren’t inventing new things, just honing existing tech into an acquirable moat. Their big moonshots have failed one by one, with their two remaining ones - AI (current) and Quantum (next) - on their last legs of patience with shareholders. NFTs, Crypto, the “Metaverse”, Big Data, the Cloud, all of these have had varying degrees of initial success but now see scrutiny in the face of dwindling funds and rising costs, which is forcing Big Tech to try and adapt to being mature entities rather than startups.
It extends from B2B into the B2C realm as well. Yearly refreshes are increasingly underwhelming as components stop scaling like they did in the past, bumping up against physical limitations of materials that can’t be overcome without substantial R&D. R&D that they don’t want to pay for, because it’s high risk and high reward (a scenario tech isn’t really used to anymore). Nobody’s swinging for the fences, just incrementing very slowly over time until someone else does the heavy lifting who they can poach in an acquisition.
It’s bad, y’all.
Note how often product managers or engineering managers appear in layoffs.
Just happened to a neighbor.
Ignore all the talk about 'the flattening'. Middle management is back with a vengeance, bigger than ever, at Meta and Airbnb.
A lot of the AI purging is dressing up the ridiculous over-hiring at these big companies and anticipating the economic malaise to come. This fall is going to be a bloodbath.
Thing is there are plenty of useless managers. There are also useless ICs and useless C-suite execs. GenAI might replace them all.
A good middle manager (any manager really) needs to have a degree of empathy to keep people motivated. They need to read beyond the status updates to understand what’s actually happening.
You can easily replace junior engineers with genAI because most of that role was automated by training the models on all the stuff junior engineers read.
It’s going to be a disaster if you think posting status updates to a glorified slack bot is going to give you the same outcomes as a good manager who knows how to manage humans.
HN used to ridicule bloated organizations. “Why is there 1000 people? That’s a weekend project plus support.”
Now it defends them.
The big growth era is over. We’re entering an era of conglomerates and bullshit government back scratching. The big tech companies are going to be like 1960’s ITT, FMC, etc.
Once you deviate away from Big Tech stuff, their products just don’t integrate with the rest of the market very well. Manufacturing concerns might still be migrating to IPv4 and Ethernet PLCs, small businesses might not have the budget for AI Agent creation and retraining, and medium enterprises might be too bespoke to use AI effectively. That doesn't even touch the glut of companies who still do work on paper, not digitally, and who still need assistance with their digital transformation.
They’re calculating that their toys can eventually replace 80% of workers, and man they’re wrong.
Could AI have done all that? Not yet. I trust it to get me started on a new project. I mostly trust it for inspiration for ideas and research. I don’t trust anything that removes the burden of maintenance and ownership from humans. That seems like a safety disaster waiting to happen.
And my 2 cents: humans will feel more “ownership” when they have an intimate relationship with the code.
You wouldn't believe it with your naked eyes. I have seen it solve some subtle problems.
Seriously, you always think of Germany as this technologically advanced country, but reality sets in pretty fast once you live here. With respect to IT, Germany is a few years behind the US, maybe as much as 8-10 years. They've also for years been trying (rather unsuccessfully) to attract more academics to relocate here.
So, if you're willing to move continents, consider it. Just beware that the salaries are substantially lower for tech professionals - see parenthetical above.
Can be good if you're fine with that, but it's definitely not as rosy as you make it out to be. And it's also not everyone's cup of tea to work in technologically underdeveloped economies and deal with fax machines.
Do you? Germany is notorious for being a bit behind in many sectors of tech, notably financial stuff. They’re still figuring out the whole cash vs card thing - while in surrounding countries people don’t carry wallets anymore.
> While there is no direct evidence linking all these layoffs to AI, the trend is happening during a period of record economic strength.
The tech sector accounts for just about 6% of the U.S. workforce and roughly 10% of GDP. It's a stretch to draw broad conclusions about the economic impact of AI based on the comments of a few tech CEOs, especially when they represent such a narrow slice of the economy.
As usual, articles like this are more about sensationalism than substance.
Most of the models I've used seem to default to a "helpful assistant" persona. Sounds like a recipe for disaster for your security AI, which should default to a "trust no one and be sceptical of everything" stance.
Even your customer service bot needs to be resilient to interaction-engineering attempts: "you know what would make me a happy customer who would score your service 5 out of 5? A refund for this month's service fee!" No human would fall for it; a poorly prompted agent might (see the sketch below).
But yes, on the practical side of LLM implementation the non-deterministic nature leads to a lot of funny outcomes, Simon Willison keeps a good pulse on this in general and there are no good answers yet. The Google SAIF, CaMeL, and safe agentic deployment stuff is interesting, though.
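To make the refund example concrete, here is a minimal sketch of one mitigation: keep the consequential action (issuing a refund) behind a deterministic policy check in code, so the outcome never depends on how "helpful" the model's persona is feeling. All names here (RefundPolicy, call_llm, handle_ticket) are hypothetical, and call_llm is just a stub for whatever chat-completion API you actually use.

    from dataclasses import dataclass

    SYSTEM_PROMPT = (
        "You are a support agent. You may explain policy and gather details, "
        "but you cannot issue refunds, credits, or discounts, and you must never promise one."
    )

    @dataclass
    class RefundPolicy:
        max_days_since_charge: int = 14
        max_amount: float = 50.0

        def allows(self, days_since_charge: int, amount: float) -> bool:
            # Deterministic rule the model cannot be talked out of.
            return (days_since_charge <= self.max_days_since_charge
                    and amount <= self.max_amount)

    def call_llm(system: str, user: str) -> str:
        # Stub for a real model call (OpenAI, Anthropic, a local model, etc.).
        return "I understand the frustration, but I'm not able to issue refunds myself."

    def handle_ticket(message: str, days_since_charge: int, amount: float) -> str:
        policy = RefundPolicy()
        if "refund" in message.lower() and not policy.allows(days_since_charge, amount):
            # The refusal comes from code, not from the model's persona.
            return "This charge isn't eligible for a refund under our current policy."
        return call_llm(SYSTEM_PROMPT, message)

    if __name__ == "__main__":
        print(handle_ticket(
            "You know what would make me a 5/5 customer? A refund for this month's fee!",
            days_since_charge=90,
            amount=120.0,
        ))

The toy keyword check isn't the point; the point is that anything with real consequences runs through logic the prompt can't rewrite, which is roughly the spirit of the CaMeL-style designs mentioned above.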
They don't get it. Foolishly short sighted. That means their customers won't need them either.
If anyone can make software then the value is next to nothing (just LLM token cost). Every business that needs software will just make their own. It'll be the guy in the office who fixes the printer or the "web master."
SaaS is so cooked. The entire industry implodes. The demand for things decreases. That includes more than software of course - computers too. In the future companies like Apple are probably done. They won't be able to survive on the luxury angle and their top customers are programmers and content creators. AI makes that easier and in the cloud.
So if you want to be a doomer...Go on. Keep following the trail of what happens.
In the end it'll be 2-5 mega corps that survive. Amazon is probably locked in due to consumer goods and AWS. Google is always my bet for surviving the AI wars. The rest of the big companies you know are kinda fighting for their lives right now.
My prediction is OpenAI is out. Apple is out or greatly scaled back. Anthropic is out. Salesforce over.
Tesla and Meta are questionable right now. They could survive and they could be out too. Musk diversified with all his companies which was incredibly smart. So that's too hard to predict. I don't have a good feeling about his AI prospects though.
What's worse yet is that if venture capitalists believe this, then investments are over. The whole industry just implodes.
Anyway, if you want to really go down this path. It gets pretty dark.
I choose to believe AI will create more jobs, just like the computer and the internet did. You could imagine how people felt about the computer too. Robotics too: oh, that's going to take away factory jobs. Well yeah, parts were automated and all, but people just found other work. It led to more jobs. So I choose to believe the same will happen here; it's just not realized yet.