I believe it's WFH. It taught companies remote work, and it's a small next step to offshore work.
Money was free, so a lot of people were paid out of thin air. When money stopped being free, salaries actually had to come from somewhere, and there weren't enough somewheres to go around.
I'm sure I'm not the only one who remembers all those posts on HN 2-3 years ago about how bad the job market was, right? It has only gotten worse.
I know a kid who interned at a company last summer. He graduated and applied for a full-time job at the same company. He happened to know someone in HR who told him, "we got over a thousand applications for this job req in one day."
How tariffs can be blamed for that kind of situation, which is happening all over the US and has been for literal _years_, defies logic.
It would be true if I could say "hire a person for $X and it will increase our net income by $X+$x", but it doesn't work like that at all. And for me it makes sense to skip candidates from the SF Bay Area altogether; there are a lot of good candidates deep in places like Utah.
Hell, my first job decades ago was as cheap labor in an IT project offshored from the US.
First, the Trump administration's economic impact is much more than tariffs - which are highly significant in themselves. It includes unprecedented interference in the free market and private business; destruction of the regulatory and other institutions that a stable economy depends on (e.g., democracy, the courts, the rule of law, bank regulation); and disruption of the stability of international relations, possibly leading to highly destructive things like wars.
Also, there's the recent trend of business switching from (broadly speaking) innovation and productivity to rent-seeking, epitomized by private equity: cut the workforce, other investment, and the product to the bone, and squeeze as much as possible out of the asset - leaving a dead, dry carcass.
You can't just make a blanket statement about the entire economy and say "it's obvious". We live in a big world. Your perception is not my perception. That's why data is so important.
It doesn't bite me as much due to seniority, but it's still happening.
Tbh, if I were younger I'd just try to relocate myself - seems fun.
What is "mid-sized" ?
There are so many vague descriptors in your post that it's almost entirely meaningless.
Because from personal experience I've seen loads of companies wind down and remove internal teams, like internal QA, in favor of outsourcing to other regions, for example. It's made my job extremely annoying because I can't just tap the shoulder of my nearby QA engineer and see what's up; I have to wait an entire day for them to get the next build and then deal with the wrong stuff they've reported.
Thanks! :)
"Unless you have investigated a problem, you will be deprived of the right to speak on it. Isn't that too harsh? Not in the least. When you have not probed into a problem, into the present facts and its past history, and know nothing of its essentials, whatever you say about it will undoubtedly be nonsense. Talking nonsense solves no problems, as everyone knows, so why is it unjust to deprive you of the right to speak?"
https://www.marxists.org/reference/archive/mao/selected-work...
The only caveat here is that Mao didn't follow his own advice, lol
And that's the problem. "Do your research before spouting off" is good in principle. But it gets weaponized via "if you don't agree with me, you obviously haven't done your research", and used to silence anyone who disagrees.
We have to put up with uninformed opinions, in order to leave space for dissenting views.
I think your comment is great at conflating "allowing criticism against the government and speech on a sidewalk" with "disallowing unconsidered speech in certain private spaces because of the danger that some kinds of information pose". You're very clumsily conflating two discrete things here.
First, you assume that people spouting complete nonsense, without any investigation into something, can be productive. I fundamentally disagree with this, based on experience. I have encountered people with what were essentially delusions of grandeur - they had very limited and grossly incorrect physics knowledge, with no mathematical foundation, and they stated that they saw falsities in Einsteinian Relativity. They then proceeded to argue that, were Einstein alive, they would be entitled to a debate with him, as would presumably everyone else with misgivings born of ignorance. They had absolutely no understanding of what Relativity was, no understanding even of what an electron is, and the entire discussion of trying to explain anything to them was essentially a complete waste of time, as they had neither the mathematical basis nor the understanding of physical law and experimentation to grasp the vast amounts of evidence in support of Einstein's theory of spacetime and relativity. Every single sentence out of their mouth was either mildly incorrect waffling from ignorance or complete and utter nonsense.
I, for one, do not think that this debate was productive. I finished on the note that they should avail themselves of the knowledge of physics so that they could understand the tests that have been done for themselves, and they argued that this was an "argument from authority". From the outset, there was absolutely no possibility of meaningful debate, because they had chosen to remain ignorant despite the internet being flooded with places where they could learn even a modicum of basic knowledge. This is what it means to "do research" and "learn". I argue, based on these experiences, that you do actually owe it to the people around you to learn about something before spouting inane garbage on the matter.
Secondly, I argue we can allow dissenting views without giving platforms to popular subjects of misconception, such as Holocaust denial, or to the hundreds of right-wing grifters trying to sell people on the idea that vaccination is evil.
There are people who engage in debate not truthfully or honestly, to discuss ideas and learn, but instead to spread misinformation, hoping to catch people in the crowd they can profit off. I assert that this is one of the reasons a myriad of right-wing ideas are taking hold right now. Vaccine denialism, Holocaust denial, transphobia, etc. are rife - because it comes down to scientific misinformation and profit. People who are already ignorant find that it is monetarily fruitful to spread this ignorance, whether in the name of ideological malfeasance (such as in the case of transphobia - one big example being the christofascist Heritage Foundation injecting millions of dollars into the UK) or purely out of self-interest (Jordan Peterson being a notable name there).
And all of it is remedied by people who are misinformed about the scientific evidence "shutting the fuck up and learning". The root comes from people abusing platforms to spread their ignorance, and then from people parroting that without research. I think there is room for people to say whatever they want in a public space (in a shopping mall, on a sidewalk), but I do not believe it is right, nor in the best interests of society, for these people to be given room by universities, by public theaters, or by online platforms. These people feed off waffling in front of an audience; the only way we can beat this epidemic of literal bullshit is by denying them that audience, starving them of social oxygen and saying "no, learn more before you deserve a space to speak publicly about this".
It starts with having a spine - which, I get, many people aren't used to having, sure.
In reality, it's likely several factors:
- Offshoring/outsourcing
- Businesses running out of ZIRP cash/returns
- Software replacing a lot of "paper" jobs (AI being a sliver of this)
- Older people needing and not vacating jobs like past generations
- Higher cost-of-living expenses meaning lower-paying jobs that would/could be occupied by a recent graduate no longer are.
- General market sentiment being cautious/conservative due to the whiplash of the last 17 years.
As with most things, it's not one convenient scapegoat, it's many problems compounding into one big clusterf*ck.
I would not be surprised at all if other companies have quietly done the same while touting 'the future of AI', because as a society we seem to grudgingly accept automation far more readily than we accept offshoring.
Why is everyone hell-bent on blaming foreigners, rather than the management that actually makes these decisions, and the domestic elected officials who are actually responsible for the decisions that affect the economy?
The wave of tech layoffs a few years ago was blamed on AI, but it was so obviously attributable to interest rates and changing tax policies that the idea that the proto-AI tools we had at the time were responsible was laughable.
For this shift we at least have better AI tools to blame (though I'd argue they're still not good enough), but we also have a US President who has straight up said that he wants to cause a global recession and has been doing everything in his power to make that happen.
Given that the impact of someone running a bulldozer through the economy like this is well-studied, and that we'd predict exactly what we're seeing here, attributing the damage to AI is silly verging on irresponsible. Place the blame where it belongs!
At the end of this Atlantic article, the author admits:
> Luckily for humans, though, skepticism of the strong interpretation is warranted. For one thing, supercharged productivity growth, which an intelligence explosion would likely produce, is hard to find in the data. For another, a New York Fed survey of firms released last year found that AI was having a negligible effect on hiring.
In other words: did we scare ya? Good, because it got you to read this far. Nothing to actually see here.
Even those who do get employed tend to be underemployed, with low wages.
The old excuse was 'automation' was killing jobs.
The lesser old excuse was offshoring.
Now it's AI?
How about we stop inventing excuses and look at the root cause of the 'recent grad' factor? Perhaps requiring university degrees that aren't worth anything, for jobs that don't need them, is the problem.
I don't know?
Kind of sounds like the problem is more fundamental than that. It sounds like the job is not actually there in the first place. Doesn't matter how qualified you are if there's no money to pay you.
1) Hype, as you said, leading to delayed hiring.
2) Offshore workers using AI becoming more competitive.
3) Reallocation of capital to AI projects, which skews demand towards data scientists at the expense of, say, front-end devs (who ironically might have provided better return).
None of these are actually reducing the number of human workers needed, but the social and economic impact is real.
Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? Check. An A* implementation? Check.
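To make "the sort of stuff you learn in school" concrete, here's a minimal LSD radix sort in Python - my own sketch of the kind of exercise these models now produce on demand:

    def radix_sort(nums, base=10):
        # LSD radix sort for non-negative integers: bucket by each
        # digit place, least significant first
        if not nums:
            return nums
        place = 1
        while max(nums) // place > 0:
            buckets = [[] for _ in range(base)]
            for n in nums:
                buckets[(n // place) % base].append(n)
            nums = [n for bucket in buckets for n in bucket]
            place *= base
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]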
Once upon a time, I had to struggle through these. My code wouldn't run properly because I forgot to free memory or I was off by one in a recursive algorithm. But the struggling is what ultimately helped me actually learn the material [2]. If I could just type out "build a hash table in C" and then shuffle a few things around to make it look like my own, I'd have never really understood the underlying work.
At the same time, LLMs are often useful but still fail quite frequently in real-world work. I'm not trusting Cursor to do a database migration in production unless I myself understand and check each line of code that it writes.
Now, as a hiring manager, what am I supposed to do with new grads?
[1] which I think it might be to some extent in some companies, by making existing engineers more productive, but that's a different point
[2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling
But I've also witnessed interns using them as a crutch. They can churn out code faster than I did at an equivalent stage in my career, but they really struggle with debugging. Often, it seems like they just throw up their hands and pivot to something else (language, task, model) instead of troubleshooting. It almost seems like they are being conditioned to think implementation should always be easy. I often wonder if this is just an "old curmudgeon's" attitude or if it betrays something more systemic about the craft.
I don't feel this is a strong argument, since these are the sorts of things one has been able to easily look up on Stack Overflow, GitHub, and so on for a while now. What "AI" did was become a more convenient code search tool with text manipulation abilities on top.
But you still need to know the fundamentals, otherwise you won't even know what to ask. I recently used GPT to get a quick code sample for a linear programming solution, and it saved me time looking up the API for scipy... but I knew what to ask for in the first place. I doubt GPT would suggest that as a solution if I described the problem at too high a level.
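For illustration, roughly the shape of what I got back (the toy problem and numbers here are mine, not GPT's; scipy.optimize.linprog is the actual API I was after):

    from scipy.optimize import linprog

    # Toy LP: maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6,
    # with x, y >= 0. linprog minimizes, so negate the objective.
    c = [-3, -2]
    A_ub = [[1, 1], [1, 3]]
    b_ub = [4, 6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal point (4, 0) and objective value 12

The point stands: if you don't already know your problem is an LP, nothing in a high-level description nudges the model toward this.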
Honestly though, I recently asked Claude 3.7 Sonnet to write a Python script to upload SSH keys to a MikroTik router, to prompt for the username and password -- etc. And it did it. I wouldn't say I loved the code -- but it worked. The code was written in more of a Go style, but okay. It's fine and readable enough. Hiring a contractor from our usual sources would have taken a week at least by the time you add up the back and forth with code reviews and bugs.
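Not Claude's actual output, but a minimal sketch of the shape of such a script - assuming paramiko for the SSH/SFTP side and RouterOS's `/user ssh-keys import` command (the remote file name is hypothetical):

    import getpass
    import paramiko

    # Prompt for connection details instead of hardcoding them
    host = input("Router address: ")
    user = input("Username: ")
    password = getpass.getpass("Password: ")
    pubkey_path = input("Path to public key file: ")

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)

    # Copy the public key onto the router, then import it for the user
    sftp = client.open_sftp()
    sftp.put(pubkey_path, "uploaded.pub")
    sftp.close()

    _, stdout, stderr = client.exec_command(
        f"/user ssh-keys import public-key-file=uploaded.pub user={user}"
    )
    print(stdout.read().decode(), stderr.read().decode())
    client.close()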
I think for a lot of entry level positions (particularly in devops automation say), AI can effectively replace them. You'd still have someone supervise their code and do code reviews, but now the human just supervises an AI. And that human + AI combo replaces 3 other humans.
If students are using AI now, that is indeed the same thing as looking up solutions on Stack Overflow or elsewhere. It's cheating for a grade at the expense of never developing your skills.
> Write a binary tree in C? Check. Implement radix sort in Python? Check. An A* implementation? Check.
You can look up any of these and find dozens of implementations to crib from if that's what you want.
Computers can now do more, but I'm not (yet) sure it's all that different.
When I change the spark plugs in my car, am I robbing myself if I'm not learning the intricacies of electrode design, materials science, combustion efficiency, etc.? Or am I just trying to be practical enough to get a needed job done?
To the OP's point, I think you are robbing yourself if the "black box" approach doesn't allow you to get the job done. In other words, in the edge cases alluded to, you may need to understand what's going on under the hood to implement it appropriately.
I don't know why we're pretending that individuals have suddenly lost all agency and self-perception? It's pretty clear when you understand something or don't, and it's always been a choice of whether you dive deeper or copy some existing thing that you don't understand.
We know that if we cheat on our math homework, or copy from our friend, or copy something online, that's going to bite us. LLMs make getting an answer easier, but we've always had this option.
Did you drive to work today? Did you learn everything about the tensile strength of nylon seatbelts before you buckled up? How about tarmacadam design before you exited your driveway? Or PID control theory before you turned on the HVAC?
The point I'm making is that some people disagree about how much you need to know. Some people are OK with knowing just enough to get a job done, because it makes them more productive overall. So the question still stands: how do you gauge when learning is enough? To my point above, I think it comes down to whether you can get the job done. Learning beyond that may be admirable in your mind, but it's not exactly what you're being paid for, and I think some experts would consider it a poor use of time/resources.
Do you care if your subordinate wrote a good report using a dictionary to “cheat” instead of memorizing a good vocabulary? Or that they referenced an industry standard for an equation instead of knowing it by heart? I know I don’t.
I'm not in the business of prescribing philosophies on how others should live their lives?
For example, there's actual liability (legal and financial) involved in building a bridge that subsequently falls apart - not so with small blocks of code. Similarly, there's a level of academic rigor involved in the certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering.
But the same is true for code? You are held to the same standards as if you had written it yourself, whatever that may be. Frequently that is nothing.
What change do you want to see here?
Licensing, liability, and enforceable standards for safety critical code that interfaces with the public would be a good start.
Just look at the diverse and haphazard way AI has been used in autonomous driving. I would argue it's a misapplication of the "move fast and break things" ethos (in some cases at least), which has no place in public-facing safety-critical applications.
It brings up some difficult questions regarding adequacy of testing at the very least when the underpinnings are not very interpretable.
Using LLMs or other ML as components in systems themselves is a whole other thing, and I agree with you wholeheartedly.
NCEES has a PE license related to controls software. The difficulty is that most engineering work falls under an industrial exemption. It seems like the way to enforce that type of liability would be to remove the industrial exemption.
I remember going to a cloud meetup in the early days of AWS. Somebody said "you won't need DBAs because the database is hosted in the cloud." Well, no. You need somebody with a thorough understanding of SQL in general and your specific database stack to successfully scale. They might not have the title "DBA," but you need that knowledge and experience to do things like design a schema, write performant queries, and review a query plan to figure out why something is slow.
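A minimal sketch of the kind of query-plan check I mean, using Python's built-in sqlite3 for brevity (the table and index names are made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
    con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # Review the query plan to confirm the index is actually used,
    # rather than a full table scan
    query = "SELECT * FROM orders WHERE customer_id = 42"
    for row in con.execute("EXPLAIN QUERY PLAN " + query):
        print(row)  # e.g. SEARCH orders USING INDEX idx_orders_customer

Whatever the title on the business card, somebody has to know how to read that output.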
I'm starting to understand that you can use an LLM both to do things and to teach you. I say that as somebody who has definitely learned by struggling, but who realizes that struggling is not the most efficient way to learn.
If I want to keep up, I have to adapt, not just by learning how to use tools that are powered by LLMs, but by changing how I learn, how I work, and how I view my role.
I'm using AI to explain things to me.
And I'm still struggling, I'm just struggling less.
That's progress I think.
I was worried at first, but this is an elite journalism product for elites who are facing economic insecurity, not AI.
I’ll be blunt, people don’t want to hire Gen Z because of bad past experiences with that generation.
When 1 in 6 companies are "hesitant to hire Gen Z workers," then yeah, obviously unemployment is going to be higher for them. https://finance.yahoo.com/news/1-6-us-companies-reluctant-10...
That's just one article, but there are plenty more. Do a basic search for "not hiring Gen Z" or something similar and you'll find tons of examples. It's easier for people to believe AI is to blame than to take the answer straight from hiring managers' mouths.
They don’t want to hire Gen Z because they see them as more hassle than they’re worth. I’m not saying whether that’s true or not, but that’s how a lot of managers and business owners feel right now, and you don’t have to look very hard to figure that out.
And for some, it is giving away skills.
A give-and-take tool, on a need-to-know basis.
People have been saying things like that probably since the creation of the corporate model. (Exponential is too much, but I'll take that as exaggeration to make the point.)
More widely:
Businesses, economies, and natural ecosystems are all full of both exponential drivers and limitations. Since the first life form.
It isn’t a model that is going away. Or that can go away. Unless there are no new opportunities for new things, which seems unlikely anytime soon, there will always be new pockets of exponential growth.