Right. Consider:
1) Senior engineer writing code
vs.
2) Senior engineer prompting and code reviewing LLM agent
The senior engineer is essential to the process in each case, because the LLM agent left to its own devices will produce nonfunctional crap. But what about the pleasantness of the job and the job satisfaction of the senior engineer? Speaking just for myself, I'd rather quit the industry than spend my career babysitting nonhuman A.I. That's not what I signed up for.
But I guess if you don't like writing code, and you are "just doing it for the money", having an LLM write all the code for you is fine. As long as it passes some very low bar of quality, which, let's be honest, is enough for most companies (i.e. software factories) out there.
I haven't seen any evidence it's made progress on these, which is nice
And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.
Babysitting and correcting automated tools is radically different from mentoring less experienced engineers. First, and most important IMO, there's no relationship. It's entirely impersonal. You become alienated from your fellow humans. I'm reminded of Mark Zuckerberg recently claiming that in the future, most of your "friends" will be A.I. That's not an ideal, it's a damn dystopia.
Moreover, you're not teaching the LLM anything. If the LLMs happen to become better in the future, that's not due to your mentoring. The time you spend reviewing the automatically generated code does not have any productive side effects, doesn't help to "level up" your coworkers/copilots.
Also, since LLMs aren't human, they don't make human mistakes. In some sense, reviewing a human engineer's code is an exercise in mind reading: you can guess what they were thinking, and where they might have overlooked something. But LLMs don't "think" in the same way, and they tend to produce bizarre results and mistakes that a human would never make. Reviewing their code can be a very different, and indeed unpleasant WTF experience.
I don't mentor juniors because it makes me more productive; I mentor juniors because I enjoy watching a human grow and develop and gain expertise
I am reminded of reports that Ian McKellen broke down crying on the set of one of The Hobbit movies because the joy of being an actor for him was nothing like acting on green screen sets delivering lines to a tennis ball on a stick
just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.
This isn't about satisfying a person's need for socializing; it is about satisfying society's need for well socialized people
You can prefer closed offices and still be a well socialized person
You can prefer remote work and still be a well socialized person
You can even prefer working alone and still be a well socialized person
If you are in favor of replacing all humans with machines, you are pretty much by definition an asocial person and society should reject you
every technological push has been to automate more and more, and every time that's happened we've reduced socialization to some extent or changed the nature of it (social media, anyone? and yes, this also has everything to do with remote vs. in-person, etc., all of which pull the lever on what level of socialization is acceptable).
just because it doesn't fit your particular brand doesn't mean it's wrong, and it's clear this is pushing on your line where you find it unacceptable. i could just as well argue that people who do not show up to an in-person office are not "socialized" to the degree society needs them to be.
the debate has always been to what degree is this acceptable.
Your comment would be improved by simply removing that phrase. It adds nothing and in fact detracts.
> just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.
You're presenting a false dichotomy. If someone doesn't enjoy mentoring juniors, that's fine. They shouldn't have to. But why would one have to choose between mentoring juniors or babysitting LLM agents? How about neither?
sasmithjr was apparently trying to defend babysitting A.I. by making an analogy with mentoring juniors, whereas I replied by arguing that the two are not alike. Whether or not you enjoy using A.I. is an entirely separate issue, independent of mentoring.
maybe you're responding to the wrong person? because i'm not even disagreeing with you on that. maybe they want both or neither, that's fine.
the person i'm responding to is framing mentoring as some kind of must-have from a "socialization" standpoint (which i disagreed with, but i get the practical aspect of it where if you don't have people train juniors there won't be seniors).
I mean "socialization" as in "being a positive part of and building a society worth living in"
if it's rewarding to you that's great, but don't frame it as something bigger than it is. i would hope we are all "being a positive part of and building a society worth living in" in our own way.
If you don't like or want to mentor the younger generation then you are actively sabotaging the future of society because those people are the future of society
Why do I care about the future of society? Because I still have to live in it for another few decades
maybe i like taking care of my friends kids, volunteering, or doing other things that contribute to the "future of society"? personally, i think mentoring junior devs is slightly lower on the priority list, but that's my opinion.
seriously, how arrogant of you to make assumptions about how others think about the future based on a tiny slice of your personal life lol.
That's great, that doesn't absolve you of your responsibility to also mentor juniors at work though
Those are different tasks in different worlds and they all need doing
if you've optimized every facet of your life to do all the "responsible things" society needs then feel free to throw the first stone. anything else is just posturing.
and just a small thing, it's ironic that you're so fixated on socialization for society’s sake while being so tunnel-visioned in defending your own definition of what that even means. i've given you plenty of examples but it just doesn't fit the one you personally adhere to.
I regret adding that last bit to my comment because my main point (which I clearly messed up emphasizing and communicating) is that I think you’re presenting a false dichotomy in the original comment. Now that work can be done with LLMs asynchronously, it’s possible to both write your own code and guide LLMs as they need it when you have down time. And nothing about that requires stopping other functions of the job like mentoring and teaching juniors, either, so you can still build relationships on the job, too.
If having to attend to an LLM in any way makes the job worse for you, I guess we’ll have to agree to disagree. So far, LLMs feel like one of many other automations that I use frequently and haven’t really changed my satisfaction with my job.
I think you're downplaying the nightmare scenario, and your own previous comment already suggests a more expansive use of LLM: "so a senior engineer can send it off to work on multiple issues at once".
What I fear, and what I already see happening to an extent, is a top-down corporate mandate to use AI, indeed a mandate to maximize the use of AI in order to maximize (alleged) "productivity". Ultimately, then, senior engineers become glorified babysitters of the AI. It's not about the personal choice of the engineer, just like, as the other commenter mentioned, open vs. closed offices or remote vs. in-person are often not the choice of individual engineers but rather a top-down corporate mandate.
We've already seen a corporate backlash against remote work and a widespread top-down demand for RTO. That's real; it's happened and is happening.
It may not replace us, or it may. Given the progress of the last decade in AI, it's not far off to say that we will come up with something in the next decade.
I hope nothing will come of it, but it's unrealistic to say that something definitively will not replace us.
One of the more bullish AI people, Sam Altman, has said that model performance scales with the log of compute. Do you know how hard it will be to move that number? We are already well into diminishing returns with current methodologies, and there is no one pointing the way to a breakthrough that will get us to expert-level performance. RLHF is underinvested in currently but will likely be the path to get us from junior contributor to mid-level in specific domains, but that still leaves a lot of room for humanity.
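To make the log-scaling point concrete (a back-of-envelope sketch, not any lab's actual scaling law): if performance grows roughly with the log of compute,

    P(C) \approx a \log_{10} C + b \quad\Rightarrow\quad P(10C) - P(C) \approx a

then every additional fixed increment of performance costs another tenfold jump in compute, which is why "moving that number" gets expensive very quickly.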
The most likely reason for my PoV to be wrong is that AI labs are investing a lot of training time into programming, hoping the model can self improve. I’m willing to believe that will have some payoffs in terms of cheaper, faster models and perhaps some improvements in scaling for RLHF (a huge priority for research IMO). Unsupervised RL would also be interesting, albeit with alignment concerns.
What I find unlikely with current models is that they will show truly innovative thinking, as opposed to the remixed ideas presented as “intelligence” today.
Finally, I am absolutely convinced today’s AI is already powerful enough to affect every business on the planet (yes even the plumbers). I just don’t believe they will replace us wholesale.
But this is not just an endless projection. In one sense we can't have economic growth and energy consumption grow endlessly, as that would eat up all the available resources on Earth; there is a physical hard line.
However, for AI this is not the case. There is literally an example of human-level intelligence existing in the real world. You're it. We know we haven't even scratched the limit.
It can be done, because an example of the finished product is humanity itself. The question is: do we have the capability to do it? And that we don't know. Given the trend, and the fact that a finished product already exists, it is totally realistic to say AI will replace our jobs.
Counterpoint: our brains use about 20 watts of power. How much does AI use again? Does this not suggest that it's absolutely nothing like what our brains do?
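A rough back-of-envelope, assuming a brain at about 20 W and a single H100-class GPU at its published ~700 W TDP (and ignoring cooling, networking, and the several GPUs a large model typically spans):

    \frac{700\ \mathrm{W}}{20\ \mathrm{W}} = 35, \qquad 8 \times 700\ \mathrm{W} = 5600\ \mathrm{W} \approx 280 \times 20\ \mathrm{W}

so one 8-GPU server draws on the order of a few hundred brains' worth of power, before you even count training.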
Evidence: ChatGPT and all LLMs.
You cannot realistically say that this isn't evidence. Neither of these things guarantees that AI will take over our jobs but they are datapoints that lend credence to the possibility that it will.
On the other side of the coin, it is utterly unrealistic to say that AI will never take over our jobs when there is also no definitive evidence on this front.
That's not my position. I'm agnostic. I have no idea where it'll end up but there's no reason to have a strong belief either way
The comment you originally replied to is I think the sanest thing in here. You can't just project out endlessly unless you have a technological basis for it. The current methodologies are getting into diminishing returns and we'll need another breakthrough to push it much further
This is turning into religious debate
The original comment I replied to is categorically wrong. It's not sane at all when it's rationally and factually not true. We are not projecting endlessly. We are hitting a one-year mark on a bumpy upward trendline that's been going for over 15 years. This one-year mark is characterized by slightly diminishing returns in LLM technology that are being exaggerated into an absolute limit of AI.
Clearly we've had all kinds of models developed in the last 15 years so one blip is not evidence of anything.
Again, we already have a datapoint here. You are a human brain; we know that an intelligence up to human intelligence can be physically realized, because the human brain is ALREADY a physical realization. It is not insane to draw a projection in that direction, and it is certainly not an endless growth trendline. That's false.
Given the information we have, you gave it an "agnostic" outlook, which is 50/50. If you'd asked me 10 years ago whether we would hit AGI, I would've given it a 5 percent chance, and now both of us are at 50/50. So your stance actually contradicts the "sane" statement you said you agree with.
We are not projecting to infinite growth, and you disagree with that, because by your own statement you believe there is a 50 percent possibility we will hit AGI.
"You are a human brain, we know that an intelligence up to human intelligence can be physically realized" - not evidence that LLMs will lead to AGI
"trendline that's been going for over 15 years" - not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it
AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
The only evidence that justifies a specific probability is going to be technical explanations of how LLMs are going to scale to AGI. No one has that
1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
2. ???
3. AGI
What's the 2?
It matters... because everyone's hyped up and saying we're all going to be replaced, but they can't fill in the 2. It's a religious debate because it's blind faith without evidence
I take "don't know" to mean the outcome is 50/50 either way because that's the default probability of "don't know"
> not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it
Never said it was. The human brain is evidence of what can be physically realized and that is compelling evidence that it can be built by us. It's not definitive evidence but it's compelling evidence. Fusion is less compelling because we don't have any evidence of it existing on earth.
>AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades
AI winter refers to a singular event in the entire history of AI. It is not a term for a common occurrence, as you seem to imply. We had one winter, and that is not enough to establish a pattern that is going to repeat.
>1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit
What's the thing that got them there? Training data?
>It matters.. because everyone's hyped up and saying we're all going to be replaced but they can't fill in the 2. It's a religious debate because it's blind faith without evidence
The hype is in the other direction. On HN everyone is overwhelmingly against AI and making claims that it will never happen. Also, artists have already been replaced. I worked at a company where artists did in fact get replaced by AI.
Just like the steam engine did, just like robots did, just like computers did.
Oh, wait.
The idea that computers and general purpose robots could actually replace humans is no longer outrageous to me. Especially if we are talking about a few key humans controlling / managing multiple robots to do what was previously the work of N humans.
I do not want to see the bloodbath that will follow
And I do pretty strongly think that is the trajectory we're on. We cannot create utopia for 1% of the population and purgatory for the other 99% and expect people to just sit still and take it
It’s not. My point is that this was already the case. We already have computers, robots, automation, machines which are better than humans in any task you’ll give them. And they still don’t replace humans, because humans will do other things.
You will always only replace tasks, never people. The question isn’t man vs machine, it’s man and machine vs only machine.
- A good lawyer + AI will likely win in court against a non lawyer with AI who would likely win in court against just an AI
- A good software engineer + AI will ship features faster / safer vs a non engineer with AI, who will beat just AI
- A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI
As long as a human adds a marginal boost to AI (whether by needing to supervise it, by regulation, or because AI is simply better with human agency and intuition), jobs won't be lost, but the paradox of "productivity increases, yet we end up working harder" will continue.
p.s. there is the classic example I'm sure we are all aware of: autopilot has been capable of taking off and landing since the '80s, but I personally prefer to keep the pilots there, just in case.
My concern is for the juniors - there’s going to be far fewer opportunities for them to get started in careers.
Things that software developers are extremely allergic to
In Europe it’s different.
When the market pool of seniors runs dry, and as long as hiring a junior + AI is better than a random person + AI, it will balance itself.
I do believe the “we have a tech talent shortage” was and is a lie; the shortage is of tech talent that is willing to work for less. Everyone was told to just learn to code and make 6 figures out of college. This drove over-supply.
There is still a shortage of very good software engineers, just not a shortage of people with a computer science degree.
The problem with that is most skills need to be practiced. When you only need to use your skills unexpectedly in an emergency, that may not end well. The same applies to other fields where AI can do something 95% of the time, with human intervention required in the 5% case. Is it realistic to expect humans to continue to fill that 5% gap if we allow our skills to wane by outsourcing the easiest 95% of a job and keeping only the hardest 5% for ourselves?
Have you ever managed people?
So yes, we keep pilots, but we keep _fewer_ pilots.
$1 per passenger is huge! For Ryanair it's 200m annually.
Richard de Crespigny, who flew the Qantas A380 that blew one of its engines after a departure from Changi, explains very clearly and in a gripping way the amount of stuff happening while trying to save an aircraft.
Lots of accidents already happen at the seams of automation. I don't think we're collectively ready for a world with much more automation, especially in the name of more shareholder value or a 4-dollar discount.
The Air France Rio-Paris crash is a good example of sudden full mistrust of automation and sensors by the crew after a sensor failure appeared and then recovered. Very, very sad transcript and analysis... I'm arguing against myself here, since it was also a huge case of crew management failure, and it might not have ended in a crash with only one person in the cockpit.
Ok, what about an Average doctor with an AI? Or how about a Bad doctor with an AI?
AI-assisted medical care will be good if it catches some amount of misdiagnoses
AI will be terrible if it winds up reinforcing misdiagnoses
My suspicion is that AI will act as a force multiplier to some extent, more than a safety net
Yes, some top percentage of performers will get some percentage of performance gain out of AI
But it will not make average performers great or bad performers good. It will make bad performers worse
I worked on software for electronic medical record note taking and I'm not sure how an LLM can help a doctor speed that up tbh. All of the stats need to be typed into the computer regardless. The LLM can't really speed that up?
I think llm's are alright at speech recognition and that sort of unstructured to structured text manipulation. At least, in my corner of the customer success world I've seen some uses along those lines
Actually any medical data being processed by AI is probably going to be under a ton of scrutiny
Medicine will likely be one of the last fields we start to see widespread usage of AI for this reason, tbh
Its summary was that I wasn't taking my antibiotics (I was; neither I nor my doctor said anything to the contrary). Luckily my doctor was very skeptical of the whole thing and carefully reviewed the notes, but this could be an absolute disaster if it hallucinates something more nefarious and the doctor isn't diligent about reviewing.
Unless somebody manages to make hyper-convincing LLMs and use them for good, I guess. (Note: I think this is a bad path).
Also their care is pretty much completely decided by insurance. What surgeries they can perform, what medicine they can give, how much, what materials they can use for surgery, and on and on. Your doctor is practicing shockingly little medicine, your real doctor is thousands of pages of guidelines created by insurers and peer-to-peer doctors who you will never meet.
My experience with the current stuff on the market is you get out what you put in
If you put in a very detailed and high quality, precisely defined question and also provide a framework for how you would like it to reason and execute a task, then you can get out a pretty good response
But the less effort you put in the less accurate the outcome is
If a bad doctor is someone who puts in less effort, is less precise, and less detail oriented, it's difficult to see how AI improves on the situation at all
Especially current iterations of AI that don't really prompt the users for more details or recognize when users need to be more precise
So until the 1970s, store clerk was a medium-skill, medium-prestige job. Each clerk had to know the prices for all the items in the store because of the danger of price-tag switching (1). So clerks who knew all the prices were faster at checking out than clerks who had to look up the prices in their book, and reducing customer friction is hugely valuable for stores. So during this era store clerk is a reasonable career: you could have a middle-class lifestyle from working retail, there are people who went from clerk to CEO, and even those who weren't ambitious could just find a stable path to support their family.
Then the UPC code, laser scanner, and product/price database came along in the 1970s. The UPC code is printed in a more permanent way, so switching tags is not as big a threat (2). Changing prices is just a database update, rather than printing new tags for every item and having the clerks memorize the new price. And there is a natural-language description of every item that the register can display, so you don't have to keep the clerk around to be able to tell the difference between the expensive dress and the cheap dress: it will say the brand and description. This vastly improved the performance of a new clerk, but also decreased the value of the more experienced clerk. The result was a great hollowing-out of retail-sector employment, the so-called "McJob" of the 1990s.
But the result was things like Circuit City (in its death throes) firing all of their experienced retail employees (3) because the management didn't think that experience was worth paying for. This is actually the same sort of process that Marx had noted about factory jobs in the 19th century (he called it the alienation of labor): capital investment replacing skilled labor, to the benefit of the owners of the investment. But since retail jobs largely code as female, no one really paid much attention to it. It never became a subject of national conversation.
1: This also created a limit on store size: you couldn't have something like a modern supercenter (e.g. Costco, Walmart, Target) because a single clerk couldn't know the prices for such a wide assortment of goods. In department stores in the pre-computer era every section had its own checkout area, you would buy the pots in the housewares section and then go to the women's clothes area and buy that separately, and they would use store credit to make the transaction as friction-less as possible.
2: In the old days a person with a price-tag gun would come along and put the price directly onto each item when a price changed, so you'd have each orange with a "10p" sticker on it. Now it's a code entry and only the database entry needs to change, so the UPC can be printed much more permanently.
3: https://abcnews.go.com/GMA/story?id=2994476 all employees paid above a certain amount were laid off, which pretty much meant they were the ones who had stuck around for a while and actually knew the business well and were good at their jobs.
Not more.
Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.
The problem is the "successful product" part; for a decade or more unsuccessful products were artificially propped up by ZIRP. Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.
My point is simple:
Why would I hire 100s of employees when I can cut the most junior and mid-level roles and make the seniors more productive with AI?
> Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.
Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.
> Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.
The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.
So knowledge workers have a lot more to lose than to gain if they don't use AI.
Because at competent companies juniors and mid-level employees aren't just cranking out code, they're developing an understanding of the domain and system. If all you cared about was cranking out code and features, you'd have outsourced to Infosys etc long ago. (Admittedly, many companies aren't competent.)
> Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.
This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.
> The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.
This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.
So companies like Microsoft, Meta, Salesforce and Google (who are actively using AI and just did layoffs) are somehow not 'competent companies' because they believe that with AI they can do more with fewer engineers and employees?
> This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.
Made total sense for the companies I mentioned above, who just did layoffs based on 'streamlining operations' and 'efficiency gains' with AI just this year (and beat their earnings estimates).
> This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.
It's more than just that, including an increasing over-supply of software engineers in general and lots of them with highly inflated salaries regardless of rank. The point is that it wasn't sustainable in the first place and roles in the junior to mid-level will see a reduction of salaries and jobs.
Once again, knowledge workers still have a lot more to lose than to gain if they don't use AI.
Is there any evidence the layoffs are actually due to AI, or due to a hiring correction using AI as an excuse?
Evidently:
1. After the layoffs that happened at Meta, it is reported that they are building (and using) AI coding agents to become even more efficient; same with Google. [0]
2. Duolingo went all in and replaced their contract workers with AI. [1]
3. Microsoft CEO says "up to 30% of the company’s code was written by AI" [0] then laid off 3% of workers (6K employees), including engineers. [2]
4. Business Insider went "AI first" with 70% of employees using ChatGPT and then laid off 21% of its workers. [4]
5. After Salesforce laid off 1,000 of their workers in February 2025, they have now said that "the use of artificial intelligence tools internally has allowed it to hire fewer workers" and additionally said:
"We view these as assistants, but they are going to allow us to have to hire less and hopefully make our existing folks more productive." [5]
The list goes on and on in 2025 alone, and this further strengthens my whole point that companies will be doing more with fewer knowledge workers, and these workers still have a lot more to lose than to gain if they don't use AI.
[0] https://www.entrepreneur.com/business-news/ai-is-taking-over...
[1] https://www.entrepreneur.com/business-news/duolingo-will-rep...
[2] https://www.forbes.com/sites/chriswestfall/2025/05/13/micros...
[3] https://www.forbes.com/sites/jackkelly/2024/11/01/ai-code-an...
[4] https://www.businessinsider.com/a-note-from-business-insider...
[5] https://finance.yahoo.com/news/salesforce-says-ai-reduced-hi...
Exactly. Fewer of them will be needed, given that a few of them will be more productive with AI than without it. That is the change which is happening right now.
So this is actually cope.
- A good lawyer with or without AI will likely win in court against a mediocre lawyer with AI
- A good SWE with or without AI will likely ship features faster/safer than a mediocre engineer with AI
- A good doctor with or without AI will save more lives than a mediocre doctor with AI.
I've experimented with this personally, stopping all my usage of AI coding tools for a time, including the autocomplete stuff. I by no means found myself barely treading water, soon to be overtaken by my cybernetically enhanced colleagues. In fact, quite the opposite, nothing really changed.
- A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI
I find even entertaining the opposite conclusion comical. Think of, for example, a world acclaimed heart surgeon. Are people seriously entertaining the idea that a rando with some agentic AI setup could outperform such a surgeon in said field, saving more lives? Is this the level of delusion that some people are at now?
Current gen AI taking all the medical jobs is indeed laughable, but the amount of R&D going into AI right now is staggering and the progress has been rapid, with no signs of slowing down. 5 years from now things will be very different IMHO.
However, one constant I've observed over my career: the quality and speed of the work I produce has not significantly contributed to career advancement. I know I'm appreciated for the fact that I don't cause more problems, and I usually make the total number of problems go down. I mention this because if quality/speed was truly valued, I believe I'd see more career-related growth (titles, etc) from it at some point in the last 20 years of my career.
This isn't to say AI won't be helpful. It is, and I use it some. But the whole schtick around, "SWEs must adopt AI or they'll be left behind," reeks of thought-terminating influencer BS. If people had great ways of assessing programmer productivity, we wouldn't need the ceremony-ridden promo culture that we have in some places.
(Arguably most of my career advancement in the last 5 years or so has come mainly from therapy: emotional regulation, holding onto problems that cannot be fixed easily w/o being consumed with trying to fix them or disengaging completely, and applying all that and more to various types of leadership.)
If all jobs are lost then our society becomes fundamentally broken and we need to figure out how to elevate the lives of everyone very quickly before it turns into riots and chaos. The thing is that it will be a very clear signal that something has to change, so change is more likely
If no jobs are lost we continue the status quo which is not perfect but is at least relatively sane and tolerable for now and hopefully we can keep working on fixing some of our underlying problems
If some jobs are lost but not all, then we see a further widening of the wealth gap but it is just another muddy signal of a problem that will not be dealt with. This is the "boiling the frog" outcome and I don't want to see what happens when we reach the end of that track.
Unfortunately that seems like the most likely outcome because boiling the frog is the path we've been on for a long time now.
I am also concerned about a couple of important things: human skill erosion (a lot of new devs who use AI might not bother to learn the basics that can make a difference in production/performance, security, etc.), and human laziness (and thus, gradually growing the habit of trusting/relying on AI's output entirely).
Until then, adding one more engineer (with AI) will have a better ROI than firing one.
Engineers who are purists and refuse to use AI might end up with a wake-up call. But they are smart; they’ll adapt too.
Safer is the crucial word here. If you remove it, I'd argue the ordering should be reversed.
I also will point out that you could replace AI with amphetamines and have close to the same meaning. (And like amphetamines, an AI can only act through humans, never solely on its own.)
>Surprisingly, in many cases, A.I. systems working independently performed better than when combined with physician input. This pattern emerged consistently across different medical tasks, from chest X-ray and mammography interpretation to clinical decision-making.
https://erictopol.substack.com/p/when-doctors-with-ai-are-ou...
"My thing will break our entire economy. I'm still gonna build it, though." - statements dreamed up by the utterly deranged
[0]https://futurism.com/the-byte/openai-ceo-survivalist-prepper
I don't. The cat is out of the bag. The only thing that would accomplish is giving Google and others less competition. Personally I don't have much trust in any tech companies, including OpenAI, but I'd much rather there be a field of competition than one dominant and (unchecked) leader.
Oh, I know it wouldn't, but I know he won't, because there's too much financial incentive to do so, and Altman and his ilk think that all human endeavors can be judged as a net good or net bad by whether or not they make number go bigger.
Maybe the solution might be socialism, except you can own money up to 10 million, I guess. But I am not sure if it's effective or not. Definitely loopholes. Idk
Maybe the solution is a simple as a social market economy, maybe it takes something a bit more radical - but the extreme techno capitalism that our industry's leaders are trying to advance is definitely a step in the wrong direction.
It probably wasn't even a net good for the South, being blamed for locking it into an agrarian plantation economy and stunting manufacturing in the states that depended on cotton.
https://www.reddit.com/r/ChatGPT/comments/1axkvns/sam_altman...
It's crazy. Idk what else to say because my jaw drops every time I hear something like this. Humanity is a mess sometimes.
I wouldn't care nearly as much about AI were there a stronger social safety net in the US. However, that's not going to happen anytime soon, because that requires taxes to pay for, and the very wealthy do not like paying those because it reduces their wealth.
It's the same thing with the atomic bomb. There wasn't really a choice not to do it. All the theoretical physicists at the time knew that it was possible to develop the thing. If the United States hadn't done it, someone else would have. Perhaps a few years or a decade later, but it would have happened somewhere.
There is always a choice. "Someone else will do this if I don't" does not absolve one from moral responsibility. Even if it is inevitable (which things generally are not, claiming they are is a rationalization most of the time), you still are culpable if you're the one who pulls the trigger.
> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
> Tech Company: At long last, we have created the Torment Nexus, from [the] classic sci-fi novel, Don’t Create the Torment Nexus
"AI will replace all programmers within 1 year"
vs
"AI is just another fad like NFTs"
Both sides are very obviously wrong, and the truth lies somewhere in the middle. Most people knowledgeable about AI agree that it will eventually surpass humans in all tasks requiring mental labor (and will thus displace those humans, as using AI will be cheaper than employing people), but no one knows exactly when this will happen. I personally believe it will happen within the next 10 years, but it’s really just a slightly educated guess.
If that's the case, there is a large net increase in demand for experienced devs who know how to use AI for coding. Demand will go up massively, I have zero doubt of that, but will AI get so much better that unskilled MBAs are making large complex apps? ¯\_(ツ)_/¯
What does slavery for labor look like in a high-skill urban setting? Rent, login credentials that are monitored, computer use is monitored, electricity is metered centrally, access to new model updates is monitored, required security updates are controlled via certificates and individual profiles, communication is by phone which is individually monitored for access patterns and location... all very sci-fi, eh?
AI companies are positioning themselves as "the everything machine"
The vast majority of software written today is "capture data -> transform data (optional) -> display data nicely formatted and easily accessible"
If an AI can wire into your database to retrieve the data in the format you want, then a bunch of the job is done
If the AI can also be made to present a form to users to capture the data in the first place, then almost all of the job is done
These are huge IFs. I remain skeptical that we'll reach this level soon. But if we do, the software industry is gonna tank
The AI industry will grow. Maybe.
'the everything machine' is pure fantasy and hype
Except now your code starting point is an absolute mess under the hood so it's a complete crapshoot to build out anything meaningful
Software as we know it will disappear.
I'm a little embarrassed to admit this, but over the past 15 years of my career I have worked on a few products the world really didn't need at all (nor want). I was paid, and life went on but it was a misallocation of resources.
I suppose now someone can build stuff the world doesn't need a lot easier.
The flip side is that now I am using AI for my own entrepreneurial endeavors. Then I get to be the business exec, except my employees will be AI workflows.
And I never have to deal with a business exec ever again.
Even if you do end up having the best kind of problem and have to scale your business, there are other ways to organize work besides the same ol' tired hierarchy.
Enspiral is one real-life example I can think of. They're an entrepreneurial collective in New Zealand that has figured out its own way of organizing collaboration without bosses/execs. Seems to be working fine for them. (Other types of worker cooperatives / collectives too, they're just a great example.)
I'd rather dare to try to make something perhaps more difficult at first but that allows me to avoid recreating the types of working conditions that pushed me to leave the rat race.
With AI, companies will ship more features, but their competitors will too. This will likely result in a net-zero gain, similar to the current situation.
If you're a SWE, Accountant, Marketer, HR person, etc. put out of work by AI, now you can screw together iPhones for just over minimum wage. And if we run out of those jobs, there's always picking vegetables now that all the migrants are getting deported.
It would not surprise me one bit if the Tech CEOs see things this way.
If AI+robotization gets to the point where most jobs are automated, humans will get to do what they actually want to do. For some it's endless entertainment. For others it's science exploration, pollution cleanup, space colonization, curing disease. All of that with the help of the AIs.
That doesn't mean AI won't replace jobs. I don't know the scale, but it certainly will replace some jobs at least. Just probably not the ones we're currently losing.
The ‘white-collar bloodbath’ is all part of the AI hype machine
I don't buy into this at all:
>Assuming AI will have an effect similar to 20th Century farm equipment’s on agriculture, why will that labor force behave differently to their 20th Century counterparts (and either refuse to or be prevented from finding new jobs)?
Because "farm equipment" can't also perform the jobs it creates. I'm assuming if/once AI can do most current jobs, it can also do most if not all the jobs it creates.
And it’s decimated other professions like manual agriculture, assembly line jobs, etc.
It seems like people are debating whether the impact of AI on computer-based jobs will be elimination or decimation. But for the majority of people, what’s the difference?
When Henry Ford introduced the moving assembly line, production went from hundreds of cars to thousands. It had a profoundly positive impact on the secondary market, leading to an overall increase in job creation.
I've yet to see any "AI is gonna take your job" articles that even attempt to consider the impact on the secondary market. It seems their argument is that it'll be AI all the way down which is utter nonsense.
There's a reason we can still spot the sterile whiff of AI written content. When you set coding aside, the evidence just hasn't shown up yet that AI agents can reliably replace anything more than the most formulaic and uninspired tasks. At least with how the tech is currently being implemented...
(There's a reason these big companies spend very very little time talking about the power of businesses using their own data to fine-tune or train their own models...)
So if the white-collar bloodbath is true, we have to tell a bunch of people, who have spent a significant portion of their lives training for specific jobs and may be in debt for that education, to go do manual labor or something. The potential civil unrest from this should really concern everyone.
Seriously, once something is able to do 90% of a white-collar worker's job, general AI has gotten far enough for robotics to take over/decimate the other industries within the decade.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years? And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is. Analogies to industrial agriculture (a very big deal, historically) and other technologies completely miss the scope of what's happening.
I am not sure if we can call the current SOTA models that. Maybe, maybe not. But it's a little disappointing.
Now everyone's saying that AI agents are the hype and the productivity gains are in that; the recently released Darwin Gödel paper, for example.
On the same day (yesterday), the HN front page had an AI blog post by fly.io, and the top comment was worried about AI excelling, saying that as devs we should do something in case companies actually reach the level of intelligence they are hyping.
On the same day, builder.ai turned out to actually be Indian engineers.
The companies are most likely giving us hype because we are giving them valuation. The hype seems not worth it. Everyone's saying that all models are really good and now all that matters is vibes.
So in all of this, I have taken this.
Trust no one. Or at least don't take things from AI hype companies at face value. I genuinely believe that AI is gonna reach a plateau of sorts at a moment like ours, and as someone who tinkers with it, I am genuinely happy at its current scale. I kind of don't want it to grow more, I guess, and I kind of think that a plateau might come soon. But maybe not.
I don't think that it's analogous to species, but maybe that's me being optimistic about the future. I genuinely don't want to think about it too much, as it stresses my brain and makes even my present... well, not a present (gift).
We genuinely have seen a shocking increase in reasoning abilities over the course of only a decade from things that aren't human. There may be bumps in the road, but we have very little idea how long this trajectory of capability increases will continue. I don't see any reason to think humans are near the ceiling of what is possible. We are in uncharted territory.
I am not sure, but in my opinion, a hardware limitation might be real. These models are training on 100k GPUs and, like, the whole totality of the internet. I am not sure, but I wouldn't be too certain about AI.
Also, maybe I am biased. Is it wrong that I want AI to just stay here, at the moment it is right now? It's genuinely good, but anything more feels to me as if it might be terrifying (if the AI companies' hype genuinely comes true).
But even then, people were defending it, saying so what, they never said that they aren't doing it, or something. So I of course assumed that people were defending what's true.
Maybe not, but such a rumour was quite a funny one to hear as an Indian myself.
Anything a human remote worker can do, a super human remote worker will be able to do better, faster and for a fraction of the cost – this includes work that humans currently do in offices but could theoretically done remotely.
We should therefore assume that if (when) AI broadly surpasses the capabilities of a human remote worker, it will no longer make economic sense to hire humans for these roles.
If we assume this, then what is the human's role in the labour market? It won't be their physical abilities (the industrial revolution replaced the human's role here), and it won't be their reasoning abilities (AI will soon replace the human's here), but perhaps in jobs which require both physical dexterity and human-level reasoning ability humans might still retain an edge. Perhaps at least for now we can assume jobs like roofing, plumbing, and gardening will continue to exist, while jobs like coding, graphic design and copy writing will almost certainly be replaced.
I think the only long-term question at the moment is how long it will take for robotics to catch up and provide something akin to human-level dexterity with super-human intelligence? At which point I'm not sure why anyone would hire a human except from the novelty of it – perhaps like the novelty of riding a horse into town.
AI is so obviously not like other technologies. Past technologies effectively just found ways to automate low-intelligence tasks and augment human strength via machinery. Advanced robotics and AI are fundamentally different in their ability to cut into human labour, and combined, it's hard to see any edge left for a human labourer.
But either way, even if you subscribe to the notion that AI will not take all human jobs, it seems very likely that AI will displace many more jobs than the industrial revolution did, and at a much, much faster pace. Additionally, it will target those who are most educated, which isn't necessarily a bad thing, but unlike the working class, who are easy to ignore and tell to re-skill, my guess would be that demands will be made for UBI and large reorganisations of our existing economic and political systems. My point is, the likelihood any of this will end well is close to zero, even if you just believe AI will replace a bunch of inefficient jobs like software engineers.
"Oh no, we invented an AI that is so smart we're afraid of it! We need AI safety!" is literally snake-oil sales pitching. Media outlets that give air to these claims are cringe.
AI isn't replacing people. It's being used to replace labour with capital which weakens labour's negotiating power in order to increase the profits of the capital class. Just as other disruptive technologies have been used in the past.
What the Luddite movement shows us is that society needs to prepare for taking care of highly skilled people. What society didn't do back then was take care of people. Textile workers didn't find jobs elsewhere unless you mean work houses. The myth capitalists fabricated around disruptive technologies is that people displaced by these technologies will acquire new skills and find work elsewhere and that these technologies create new opportunities. It doesn't exactly happen that way.
The same could happen here. The wealthy transfer even more wealth from the labour class to themselves and avoid taxation or doing their part to replace the value they've taken from the labour class.
Update: Changed sentence on the "capitalist myth" to explain what the myth is.
People who don't "believe" in the exponential of computing (even though I find the charts pretty convincing) seem to always assume that AI progress will stop near where it is. With that assumption, the skepticism is reasonable. But it's a poorly informed assumption.
https://en.wikipedia.org/wiki/Technological_singularity#Expo...
I think that some of that gets into somewhat religious territory, but the increasing power and efficiency of compute seems fairly objective. And also the intelligence of LLMs seems to track roughly with their size and amount of training. So this does look like it's about scale. And we continue to increase the scale with innovations and new paradigms. There will likely be a new memory-centric computing paradigm (or maybe multiple) within the next five years that increases efficiency by another two orders of magnitude.
Why can I just throw out a prediction about orders of magnitude? Because we have increased the efficiency and performance by orders of magnitude over and over again throughout the entire history of computing.
It's not unlike the crypto space; you've got your true believers, your skeptics and, thirdly, your financially motivated hype men. The CEOs of these publicly traded companies, and of companies that want to be bought, are the latter, and they are the ones behind the "the AI lies so we don't turn it off!!!" hype that gets spun into clickbait headlines.
You cannot solve for taste and art with frameworks and patterns. Having a fixed-sized canvas and standardized brushes/paints to work with is not much help if you are ass at painting.
Personally, I use AI a lot. It’s great for boilerplate, getting unstuck, or even offering alternative solutions I wouldn’t have thought of. But where it still struggles sometimes is with the why behind the work. It doesn’t have that human curiosity, asking odd questions, pushing boundaries, or thinking creatively about tradeoffs.
What really makes me pause is when it gives back code that looks right, but I find myself thinking, “Wait… why did it do this?” Especially when security is involved. Even if I prompt with security as the top priority, I still need to carefully review the output.
One recent example that stuck with me: a friend of mine, an office manager with zero coding background, proudly showed off how he used AI to inject some VBA into his Excel report to do advanced filtering. My first reaction was: well, here it is, AI replacing my job. But what hit harder was my second thought: does he even know what he just copied and pasted into that sensitive report?
So yeah, for me AI isn’t a replacement. It’s a power tool, and eventually, maybe a great coding partner. But you still need to know what you’re doing, or at least understand enough to check its work.
If you can train AI to insert calls to your malware server, in whatever solutions it provides, that's a huge win.
Hopefully, countermeasures and AI-powered defense tools can keep up. It's going to be some type of an arms race, for sure.
If firing you saves $1.50 a year, they’ll do it.
Do they go back to hiring human expertise then?
I totally agree though, the business mindset of saving a buck often outweighs everything else. I’m actually going through something similar right now with a client being swayed by a so-called “AI expert” just to cut costs. But that’s a whole other story.
Admitting mistakes and correcting them directly is not a common thing for CEOs to do.
I hope that you're right, but the problem is that the regulatory bodies are captured by the players that they are supposed to regulate. Can you name a time in recent history where a company had to pay a penalty for a harmful action, either intentional or neglectful, that exceeded the profit they gained from the action?
It kind of stings, though. That used to be someone’s craft, their livelihood. But like you said, the key now is finding ways to adapt, maybe by leaning more into the human side of the work: real collaboration, client interaction, deeper research.
Still, not everyone can or wants to adapt. What happens to the quiet designer who just wants to spend the day in their bubble, listening to music and creating? Not chasing clients, not pivoting constantly, just doing what they love. That’s the part that saddens me at times when I see AI in action. Thanks for sharing.
ChatGPT and Gemini choked on trying to generate even the simplest of 2D shapes, like a 5-pointed star, as SVG.
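For reference, a correct answer isn't deep math: a 5-pointed star is just ten vertices alternating between an outer and an inner radius, 36 degrees apart. A minimal, hypothetical sketch (center, radii, and colors are arbitrary choices, not anything the models produced):

// Emit a 5-pointed star as an SVG <polygon>: 10 vertices alternating outer/inner radius.
import java.util.Locale;

public class StarSvg {
    public static void main(String[] args) {
        double cx = 100, cy = 100, outer = 90, inner = 35;
        StringBuilder pts = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            double r = (i % 2 == 0) ? outer : inner;   // alternate point and notch
            double a = Math.toRadians(-90 + i * 36);   // start at the top, step 36 degrees
            pts.append(String.format(Locale.US, "%.1f,%.1f ",
                    cx + r * Math.cos(a), cy + r * Math.sin(a)));
        }
        System.out.println("<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"200\" height=\"200\">"
                + "<polygon points=\"" + pts.toString().trim() + "\" fill=\"gold\"/></svg>");
    }
}

That the geometry is this small is what makes the failure notable.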
Huh? Offshoring is a fairly major component of my employer's workforce, and the US software engineering staff has dwindled (along with a lot of other departments). We moved into a much smaller building recently.
Now we work on godawful blended teams with the worst time-zone difference possible (initially projects were either all India or all US, until the powers that be felt the need to push costs down further).
It's the same at my friend's employers, if they're of any size.
Your employer is not the only employer.
I just think it's pretty unbelievable that offshoring is "a fairly minor portion" of the "US Software Engineering industry." It didn't totally kill it, like some overly pessimistic (or enthusiastic, depending on their perspective) people may have predicted, but it would take a lot of evidence to convince me it's not a major part of it.
2. Off-shoring didn't go away, sure, but it's pretty costly because you have to constantly translate culture differences, time differences, etc. Also reading code is harder than writing it, so just dumping more people to write code doesn't help much.
The 60 is mostly farshore, unless you are a high priority project in which case you get nearshore. Rumor is it’s expanding, Vietnam can do copilot just as good as US; requirements are written in English. They’ve started shipping Vietnamese tech leads here under visas, they alone interface between Product/eng leaders and the low-English teams. For all the Merika! I hear on the news, I certainly am not seeing it in the workforce.
Can you? :D
Based on my favorite quote from the I, Robot (2004) movie with Will Smith, when he got roasted by a robot:
W.S.: "You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?*
Robot: "Can you?"
Which I think applies to a lot of anti-AI sentiments. Sure, the AI doesn't know how to do X, Y, Z, but then most people also don't know X, Y, and Z. Sure, people can learn how to do X, Y, and Z, but then so can an AI. ChatGPT couldn't initially draw anime; now they trained it to do that and it can. Similarly, it can also learn how to set up IAM correctly if they bother to train it well enough for that task. But for now, like you discovered, we're far away from an AI that's universally good at everything, but I expect specialisation will arrive sooner rather than later.
Yes, I can. In fact, the entire reason I think AI is not a useful tool (contrary to the hype) is that it can't do many things I find easy to do. I'm certainly not going to trust it with the things I find hard to do (and therefore can't check effectively) in that case!
For example, a month or two ago I was trying to determine if there's a way to control the order in which CloudFormation creates+deletes objects when a replacement is needed. AI (including both ChatGPT and AWS' own purpose built AI) insisted yes, hallucinating configuration options that straight up don't exist if you try to use them. The ability to produce syntactically valid configuration files (not even necessarily correctly doing what you want them to) should be table stakes here, but I find that AI routinely can't do it.
And my point with AI isn't that it can do things that are easy or hard, it's that it can do things I don't know how to do, yet. I can spend time to learn them but the AI knows them already and at that point it's better than me.
Hard and easy are subjective. When you don't know how to do something it's hard; when you know how to do it, it's easy.
You might know IAM but most people don't so it's not meant to replace you, it's to replace them.
But the key difference? I’m not planning to use it as the primary photographer at my cousin’s wedding next week.
As you said, the real danger isn’t just the tool, it’s the false confidence it gives. AI can make us feel a bit too capable and too fast, and that’s when things can go sideways.
LLMs do really poorly with general statements like that, so I'm not sure it's unexpected. If you put "Make sure to make it production worthy", you'll get as many different answers as there are programmers, because not even we human programmers agree on what that really means.
Same for "Make sure security is the top priority", most programmers would understand that differently. If you instead spell out exactly what behavior you expect from something like that (so "Make sure there are no XSS's", "Make sure users can't bypass authentication" and so on), you'll get a much higher probability it'll manage to follow those.
These days, I make sure every “t” is crossed and every “i” dotted when giving instructions. Good point, definitely a lesson worth repeating.
public void getEntityWithParents(List<DBEntity> entityList, String id) {
    DBEntity entity = dao.getById(id);
    entityList.add(entity);
    if (entity.getParent() != null) {
        // recurse on the parent id to walk up the chain
        getEntityWithParents(entityList, entity.getParent());
    }
}
Because I'm working from a make it work first then make it efficient perspective I realize that I'm making a lot of DB calls there. So I pop open gemini, copy/paste that algorithm and ask it "How can I replace this method with a single call to the database, in postgresql?" and it gives me
WITH RECURSIVE parent_chain AS (
    SELECT id, name, parent, 0 AS depth
    FROM fam.dx_concept
    WHERE id = :id
  UNION ALL
    SELECT i.id, i.name, i.parent, pc.depth + 1
    FROM fam.dx_concept i
    INNER JOIN parent_chain pc ON i.id = pc.parent
)
SELECT id, name, parent FROM parent_chain ORDER BY depth;
That might have been a day or two of research; instead it was 5 minutes to come up with a theory and an hour or so writing tests. Gemini saved the day there. But it wasn't able to determine what needed to be done (minimizing DB calls), only how to do it, and it wasn't able to verify correctness. That's where we'll fit into all of this: figuring out what to do and then making sure we actually did it.
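For what it's worth, a sketch of how that query might be wired into the DAO so the whole chain comes back in one round trip. This assumes plain JDBC and a DBEntity(id, name, parent) constructor, neither of which is in the original comment:

// imports assumed: java.sql.*, java.util.*
public List<DBEntity> getEntityWithParents(Connection conn, String id) throws SQLException {
    String sql =
        "WITH RECURSIVE parent_chain AS ("
      + "   SELECT id, name, parent, 0 AS depth FROM fam.dx_concept WHERE id = ?"
      + "   UNION ALL"
      + "   SELECT i.id, i.name, i.parent, pc.depth + 1"
      + "   FROM fam.dx_concept i JOIN parent_chain pc ON i.id = pc.parent) "
      + "SELECT id, name, parent FROM parent_chain ORDER BY depth";
    List<DBEntity> chain = new ArrayList<>();
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // hypothetical constructor; adapt to however DBEntity is actually built
                chain.add(new DBEntity(rs.getString("id"), rs.getString("name"), rs.getString("parent")));
            }
        }
    }
    return chain;
}

Either way, the verification step (does the chain terminate, and does it match the old per-row version on real data?) still falls on the human.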
Feels like the real sweet spot right now is: humans define the goal and validate the work, AI just helps fill in the middle.
That's your problem. You haven't spent the past couple of years trying to get a junior job.
Outside of coding, I’m also a new blogger and writer trying to publish articles and novels in a world already flooded with AI-generated content.
In that sense, I’m in a similar position to a junior dev, just in a different domain. As I seriously consider shifting more into writing than coding, I know I’ll be facing similar competitive pressures with AI. Honestly, I have no idea where that road will lead.
What I do know is that I’ll try to take advantage of every tool and opportunity available to help me succeed. For junior devs, I’d say: keep knocking on doors and focus on how AI can support your employer, not threaten either party.
Funny enough, just a few weeks ago I watched an "AI expert" with no real dev experience sell a vision full of buzzwords to business leaders, and they ate it up. Hats off to him. It reminded me that today, it's not just about skills anymore. It’s about how you communicate, connect, and present your value.
Sharpen your communication and people skills. Like it or not, AI is rapidly taking over the technical part. We have to stand out in the human part.
Just my two cents. And for the record, please know that I don’t consider myself an expert in any of this, just speaking from experience and opinions.
Hope it helps and good luck!
I agree with everything you wrote. The question is, what about in 6 months? 2 years? 4 years? Just a year ago (not sure of the exact timeline) we didn't have the systems we have today that will edit multiple files, iterate over compilation errors and test failures, etc. So what will it be tomorrow?
Staying adaptable feels like the only real option, but I get that even that might not be enough for everyone.
A bit of a counterpoint: I've been programming for 12 years, but only recently started working in webdev. I have a general understanding of cybersecurity, but I never had to actively implement security measures in my code until now—and my boss is always pushing for speed.
AI has been incredible in that regard. I can ask it to review my code for vulnerabilities, explain what they are, and outline the pros and cons of each strategy. In just a month, I've learned a lot about JWT, CSRF, DDoS protection, and more.
In another world, my web apps would be far more vulnerable.
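To give a concrete flavor of the kind of thing such a review surfaces: a minimal CSRF check as a servlet filter. This is a hypothetical sketch, not the commenter's code, and it assumes a javax.servlet container and a per-session token already issued at login:

// Rejects state-changing requests whose X-CSRF-Token header doesn't match the session token.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class CsrfFilter implements Filter {
    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        if (!"GET".equals(request.getMethod())) { // only guard state-changing methods
            Object sessionToken = request.getSession().getAttribute("csrfToken");
            String headerToken = request.getHeader("X-CSRF-Token");
            if (sessionToken == null || !sessionToken.equals(headerToken)) {
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "invalid CSRF token");
                return;
            }
        }
        chain.doFilter(req, res);
    }
}

The useful part of the AI review isn't this boilerplate, it's the explanation of why the token has to live somewhere an attacker's cross-site request can't read it from.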
That one I never considered until it happened to me. It's funny that the AI-provided implementation was mostly off, but it was a start. "Blank canvas paralysis" is a thing.
But the difference is that automotive automation created way more jobs than it destroyed. Programmers, designers, machine maintainers, computer engineers, mechanical engineers, materials scientists, all have a part to play in making those machines. More people are employed by auto manufacturers than ever before, albeit different people.
AI isn’t the same really. It’s not a case of creating more-different jobs. It just substitutes people with a crappier replacement, puts all the power in the hands of the few companies that make it, and the pie is shrinking rather than growing.
We will all pay for the damage done in pure pursuit of profit by shoehorning this tech everywhere.
If you can impose a new way of living on one single generation---an existence directed toward endless free (for now) entertainment---it is enough to change all subsequent generations.
Once this takes hold, there's no going back.
I'm not that pessimistic. If humans are in charge, there'll always be change, but it may be in the far future.
Though AGI might change that, because I can see one important application being the creation of ever-present minds to continuously watch and control each person who remains, on behalf of some totalitarian entity. Basically 1984 telescreens, but real and far cheaper.
Fundamentally, you either continue down the path of pure capitalism and let people starve in the streets or you adapt socially.
The truth is, we don't know how AI will evolve. Maybe it will replace all jobs in 10 years. Or maybe never. Or maybe the world will change bit by bit over the next 50 years until it is utterly unrecognizable. Anyone who tells you that they know for sure is selling you something.
If forced to guess, I would say AI is like electricity or the microprocessor: it will change everything, but it will take decades.
Once you accept that things are going to change, it frees you to focus on what's important. Focus on what you can control: your skills, your effort, and your relationships (business and personal).
When you’re employed, any efficiency gains you get from AI belong to the company, not you.
Perhaps the author is curious whether "AI" will replace "SEO jobs", or "web marketing" jobs
If "AI"-generated answers are replacing www search, and even visits to other websites, then perhaps to some extent "AI" will reduce the number of "SEO" or "web marketing" jobs
https://searchengineland.com/seo-opportunity-shrinking-rand-...
The author is a "web marketer" accusing "Tech Execs" of doing web marketing
Marketers trying to discredit other marketers
From the late 19th century to the 1940s/50s is 50 years. It's not really reassuring to middle aged workers who lose their jobs to new technology that 50 years later there will overall be more jobs available.
Hottakes like these are getting retarded.
This is very real and it's happening now across the industry, with devastating consequences for many (financial, health, etc.)
Idiots everywhere are repeating it for them ad nauseam.
And I suspect the majority of this flywheel is fully manufactured at this point.
Dead Internet Theory is nearly complete.
I checked the US unemployment numbers; they follow a regular trend. But they are very vague and general.
I cannot attribute layoffs at companies like Microsoft to AI, because these things have happened many times before.
I don’t think it’s a realistic understanding of psychology or sociology to think that people will be happy only consuming AI stuff.
Some will smile at the triviality of such things, but if you solve that you will get very rich. It surely is not at all trivial to solve.
Currently I don't think we can even replace a normal clerk for any slightly more complex problem. Programming is essentially a form of translation, and these fields will probably change as they get more supplemental tools.
I first thought artists would suffer from AI and perhaps some would be taken advantage of. On the other hand, people seem to dislike AI-generated content.
Realizing that there are no longer entry-level jobs for tech positions, which stifles a large swathe of tech professions.
Realizing that AI isn't that good, that the bulk of the workload has become technical debt, and that nobody wants to be an AI janitor.
Etc.? It's easy really, you just gotta get your head out of the hole AI put it in.
Also from IBM,
"A computer should never be put incharge because a computer can never be held accountable"
- People will pay more to go to a concert performed by live human musicians than they would to listen to a recording or to go see an artificial performer (e.g. Chuck E. Cheese?)
- A "handmade" product, or a "made in house" dish is valued above a mass-production or factory-frozen meal
- A therapist, doctor etc. may be valued partly for their bedside manner or ability to form genuine rapport with a patient
- Even in AI _defining and measuring value_ ultimately comes from people. E.g. we wouldn't have the LLMs we do if there hadn't been teams of humans providing input to the RL.