If everyone, to satisfy their CEO's emotional attachment to AI, is forced to type into a chat box to get dreck out and then massage it into something usable for their work, we'll see that ineffective mode persist longer, and probably miss out on better modes of interaction and more well-targeted use cases.
https://www.anildash.com//2025/04/19/ai-first-is-the-new-ret...
I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.
For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.
AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner. If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?
An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.
This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.
Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.
With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes that could come if things keep going the way they are with A.I., but even if the technology hits a wall right now that stops it from advancing, things have already changed, and it's important to embrace where we are and adapt."
It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.
New technologies in companies commonly have the same pitfalls that burn out users. The companies have very little ability to tell if a technology is good or bad at the purchasing level. The c-levels that approve the invoices are commonly swayed not by the merits of the technology, but the persuasion of the salespeople or the fears of others in the same industries. This leads to a lot of technology that could/should be good being just absolute crap for the end user.
Quite often the 'best' or at least most useful technology shows up via shadow IT.
I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.
The more experience you have in a given domain, the more narrow your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.
*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.
There was never any widespread resistance to "the Internet", let's be real here.
In any case, adoption of all those things was bottom-up rather than top-down. CEOs were not mandating that tech teams use version control or ORMs or automated testing. It was tech leadership, with a lot of support from ICs in their department.
Tech people in particular are excited about trying new things. I never heard CEOs mandating top-down that teams use Kubernetes and adding people's Kubernetes usage into their performance reviews, yet Kubernetes spread like wildfire--to the point where many software companies which had no business using something as complicated as Kubernetes started using it. Same with other flavor-of-the-month tools and approaches like event sourcing, NoSQL/MongoDB, etc.
If anything, as a leader you need to slow down adoption of new technology rather than force it upon people. The idea that senior leadership needs to push to get AI used is highly unusual, to say the least.
The equivalent of the API mandate for AI would be if CEOs were demanding that all products include a "Summarize Content" button. Or that all code repositories contain a summary of their contents in a README. The use of AI to solve these problems would be an implementation detail.
I was around before version control and I don't remember that reaction from more than an insignificant percentage of devs. Most devs reacted to the advent of version control with glee because it eased a real pain point.
But why do they have to fill out some paperwork? If the new technology is a genuine productivity boost and any sort of meaningful performance review is undertaken, then it will show up if they're performing sub-par compared to colleagues.
The real problem is that senior management are lazily passing down mandates in lieu of trusting middle management to do effective performance reviews. Just as it was with Return To Office.
In my (limited) experience, the tasks you want to assign to elite devs are less amenable to AI in the first place.
I have a few colleagues who like the way they work and would prefer everything to stay the way it is. Such "skilled artisans" might be on the way out, replaced by "AI factory" mass production.
Sure, they could just be kicked out and replaced. But they worked with the company, in some cases for a decade plus. Giving them a fair picture of what seems to be down the road is the very least I'd expect of a company treating its workers as more than just replaceable cogwheels.
My take-away was this is exactly what the OP is targeting. Management's job is to convince you to try and to help you make it demonstrate value; mandating "thou shalt be AI-first" does neither of these effectively. Ironically, some of your best developers will require the most evidence to be convinced, fight the hardest, and have the best options to jump ship if you push far enough. It's just weak management when there's the obvious alternative. Dash is in developer relations/evangelism, so it's not surprising he bristles at this approach.
This way of phrasing it rejects the possibility that maybe the new thing really does suck, and that this can sometimes be identified pretty quickly.
I'm not a beginner though. In fact I'm actually very experienced at doing my job
Which is why I don't need non-technical management and AI consultants to be telling me what tools I should be using
If I thought AI was going to be a useful tool for me then I would use it
But so far it hasn't, so I don't
I'm not investing my time and energy into a "skill" that doesn't seem like it is going to pay off
> even for things we now consider near universally good and productive
We aren't at the point where AI tools provide a major productivity boost. Sometimes they help, sometimes they don't, sometimes working with AI has negative productivity.
Assuming AI improves to the point where employees who use it are significantly more productive... They'll excel relative to their peers. The people who can't figure it out will underperform.
did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack?
I see this is already a favorite quote amongst commenters. It's mine too: I had a job ~15 years ago where the company had introduced an internal social network, obviously trying to ride on the coattails of Facebook et al without understanding why people liked social networks. Nobody used it because it was useless, but management was evidently invested in it, because your profile and use of that internal site did in fact factor into performance reviews.
This didn't last long, maybe only one review cycle before everyone realized it was irretrievably lost. The parallel with the article is very apt, though. The stick instead of the carrot is basically an indication that a dumb management idea is in its death throes.
Where I worked, it was an open secret that the CEO had an alter ego he used on the site. I have no idea if he knew that we all knew who that really was (I have to assume he did), but everyone played along.
By the time I had worked there it had been around for a few years already and once a quarter the head of our group set time aside for everyone to "engage" with it for an hour so that no one would be dinged on their performance review.
It's a great example of how executive group-think can drive whole multi-industry initiatives that are very-obviously, to anyone outside that bubble, pure waste.
To justify owning the useless damn thing, they insisted everyone use it, basically like Slack if it ate 3-4x the resources (really saying something, given Electron already eating 5-10x the resources it ought to need for any given task), monopolized a screen when in use, and added all the awkward elements of physical environments to virtual ones for no reason ("is it weird if 'I' 'sit' in this chair 'next to' this other 'person' when there are other chairs available in the room?", or "oh shit where's that meeting room 'physically' located, again? I think I'm lost...") while removing none of the awkwardness of virtual interactions.
Truly, bizarrely pointless. It was like some shit out of the Silicon Valley TV show, so absurd it was hard to believe it was real. I swear to god, I'm not making this up, they even had in-world presentations, so you could add all the fun of having a bad angle on a screen or being too far away to comfortably read the text to the joy of a Zoom screen-share. Totally nuts. Luckily you could also maximize whatever was being presented, but... hooray, your best feature is that I can ignore all the things that make your dumb crap distinctive? What a win.
This is what I think of every time I see anyone trying to promote Zuckerberg's weird, bad idea. I assure you, being in VR goggles would not have made the experience either more productive or more pleasant. Nobody who's ever tried to work like this even for one week could possibly think it's a good idea to invest in it.
Incidentally, some people on my team have used Copilot for task management, but nobody has found it useful for coding / debugging / testing.
That this would be a significant time savings mostly has to do with most task tracking systems being so very miserable and slow to work in for the majority of the people expected to use them, though. If we used something lighter and closer to where the work is happening (the code) it wouldn't really be that helpful.
This does tend to be a much bigger problem at bigcos than smaller shops, though.
In fact I remember very distinctly the Google TGIF All-Hands where Larry and Sergey stood up and told SWEs they should be trying to do development on tablets, because, y'know, mobile was ascendant, they were afraid of being left behind in mobile, and wanted to develop for "mobile first" (which ended up being on the whole "mobile only" but I'll put that aside for now).
It frankly had the same aura of... not getting it: lack of vision pretending to be visionary.
In the end, the job of upper management is not to dictate tools to engineers to drive them to efficiency. We frankly already have that motivation ourselves. If engineers are skeptical of "AI", it's mostly because we've already engaged with it and understand many of its limitations, not because we're being "luddites".
One sign of a healthy internal engineering culture is when the engineers actually doing the work pick their tools together, rather than have tools foisted on them.
When management sends memos out demanding people use AI, what they're actually reflecting is their own fear of being left behind in the buzzword cycle. Few of us doing the work have that fear. I've seen more projects damaged by excessive novelty and forced "innovation" than the other way around.
My favorite stupid Shopify cult thing is the hiring page having a "skip the line" for "exceptional abilities" which explicitly lists being good at video games as a reason to skip the normal hiring process. The "other" category includes examples like "Olympic athlete".
Hah! Now you have my curiosity. What do they replace the normal hiring process with? A game of LoL?
So if you're hitting (a verifiable) top 0-0.5% in some field, there's a reasonable bias towards assuming a high general competence.
I did once hit 0.5 percentile in a multinational PHP exam in my teenage years however I did have a second window open with an interpreter running for the most fringe questions. -- who knows what that means.
I know a software developer who could well be a concert pianist, for example. Ie., that pool of people who overlap, in that overlap, are probably extraordinarily talented.
Case in point: I have a friend who is a top-32 Magic player in NA. She recently (not even a year ago) made it her goal to become a chess grandmaster, and she's already 2000 Elo. You could argue that maybe some skills transfer, but it's pretty shaky reasoning.
See: https://www.nine.com.au/sport/olympics/olympians-who-changed...
Also it smells like a false metric. People who are in the 0.05% of excellence are probably still heavily invested in the thing they're excelling at.
This is incredibly shady and I wonder if it's even legal here in Europe.
But Shopify isn't just a payment processing service. It's a full-blown ecommerce suite. Do you think there's an online store out there that gets rid of all PII once an order is paid for, or even after it's fulfilled?
We've had people try to return/replace things (or even credit card disputes) years after they bought it. How exactly would that work if we got rid of all information about their order shortly after they made it?
As for legality in the EU/UK, it's just like everything else, on some level they technically asked for consent and you gave it, but yes, dark patterns abound.
This is interesting though: is that data deleted everywhere? It makes no sense just to delete from ‘my store’. But I can delete any customer data at any time.
Perhaps this is a nice example of complexity. From the outside it's easy for us to say "why don't they just…", but as soon as you scratch the surface…
My favorite part:
> I've never worked through a night. The only times I worked more than 40 hours in a week was when I had the burning desire to do so. I need 8ish hours of sleep a night. Same with everybody else, whether we admit it or not.
> For creative work, you can't cheat. My believe is that there are 5 creative hours in everyone's day. All I ask of people at Shopify is that 4 of those are channeled into the company.
Obviously, as I'm replying to someone with first-hand Shopify experience, which I don't have, take all this as you wish. I only know the Twitter Tobi. (and I think his "AI first" memo is ridiculous, to the point that I struggle to imagine that the same person wrote this twitter thread)
Early (pre-IPO) Shopify had a pretty toxic internal culture with a lot of drinking and sexual harassment. I was lucky to be there around the post-IPO, pre-pandemic era when there was a bit of structure and the techbros were getting reined in a bit. Once the pandemic hit I think he just lost his mind.
And if people think about it, it's actually not too different from Leetcoding.
I was told that within Shopify there's something called a "Tobi Tornado" - basically when Tobi swoops in on a program / feature and demands significant change in short order. Carefully planned initiatives can be blown up and then it's maximum effort expected to turn it around.
What everyone had in common was saying that Tobi is quite a smart person and often not wrong, but he's still human, and so there's simply no way he can make 100% good calls because he can't always have full context.
His nickname, which he hasn't worked out, is the first half of a sexual lubricant brand, because he's such a wanker.
I've no idea whether Tobi gets it right, just.. this isn't necessarily a bad thing!
However, I think the more work you blow up when you do this, the more it’s reflective of a poor management style; even if it’s the right call under the circumstances, that call should almost certainly have been made earlier.
Of course we don’t live in a perfect world, and if something’s 75% done but really bad, you press the red button and stop it, even if people will be upset.
But if you’re consistently being described as a “tornado”, that says to me you’re not applying your founder judgement early enough in your company’s development process.
This is terrifying.
Shopify is well past its move fast… phase. It powers a vast percentage of ecommerce. If not in dollar percentage, certainly in human terms.
Please, I beg you, pretend like you work at a bank.
Shopify is good because of how they operate.
Fly in, make a ton of noise, shit on everything, fly away.
You hire passionate people who pour their soul and overtime into a thing, then you parachute in, override half their decisions, micromanage the other half, and then leave, leaving them to live with the mess.
After a few of these stunts, you end up with disillusioned, cynical, burnt out people who just don't care any more, and either quiet quit or leave for greener pastures, or the kind of folks who crack the game and fail upwards while caring nothing about the company and the products.
And as soon as the word spreads that this is your modus operandi, smart folks who have been around the block a few times will avoid you like the plague.
It can work if you're willing to churn people (ahem, Elon), for some definition of "work".
But there's a (slower, harder?) way to right the ship and make the team better, and (quicker, easier?) way to swoop in like a Marvel Avenger and break everything (and everyone) in the process.
I feel Founder Mode should in theory be the former, but is in fact excuse for many to do the latter (I've no evidence for this, just what it looks like to me).
In that case the definition of "work" being "become the wealthiest person in the world".
Care to share the evidence you’d use to back it up?
For the lazy, here’s a fun video summarizing them: https://youtu.be/UBc7qBS1Ujo?feature=shared
Tobi's doing something right.
It reminds me of the time I wanted to go out with a girl and she scheduled a date with me in 2 weeks, not a good outlook. I was happy to have a date so I just counted the days. When this happened with another girl I was less invested in, I told her to forget it, and she literally removed the guy she was seeing that week to go out with me.
I think that when the queue is too long, the solution is to cut the line or find another one (or participate in a meat market as a commodity amongst 100, for a low probability of advancing for a low salary).
You made it sound stupid. But being top 100 in something with a huge global competitive base is neither useless nor easy.
If you are offered a kid who spends 16 hrs per day competing and studying to be the best at something, and they can channel that energy at your company (with probably a shitty salary) wouldn't you take it?
Outside of tech, AI has been phenomenally helpful. I know many tech folk are falling over themselves for non-tech industry problems that can be software-solved then leased out monthly, and there are tons of these problems out there, but very hard to locate and model if you are outside the industry.
But with the current crop of LLMs, people who don't know how to program, but recognize that a program could do this task, finally can now summon that program to do the task. The path still has a tech-ability moat, but I can only imagine the AI titans racing to get programming ability into Supply Chain Technician Kim's hands. Think Steve Jobs designing an IDE for your mother to use.
I believe it will be the CEOs of these non-tech companies who will be pushing "AI first" and having people come in to show non-techy non-tech workers how to leverage LLMs to automate tasks. You have to keep in mind that if you walk into most offices in most places of the world, most workers will say "What the hell is a macro? I just go down the list line by line..."
Brooks has yet to be proven wrong; even if this appears to be the silver bullet, it could just as likely widen the tech moat, as non-programmers paint themselves into corners where they can't do their jobs without all the brittle, impossible-to-maintain code they've written. Think of the skilled trades vacuum we have in much of the Western world. Can Supply Chain Technician Kim Jr do her job without AI if she's never seen that before?
But I don't see that being the future. Instead I think people will just spin up bespoke ultra narrow scope programs (maybe scripts is more fitting here, but people like GUIs - scripts with GUIs?) that are generally under 3K LOC.
You don't need an AI to one-shot Excel.exe if you just want a simple way to track how many plastic pellets came in today and how many went out. A GUI on a simple program on an SQLite database will do that no problem. And you can ditch that bloated Excel doc you have been using for years.
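To make that concrete, here's a minimal sketch of the kind of bespoke tracker being described: a few dozen lines on Python's built-in sqlite3 standing in for the spreadsheet. The table and function names are my own invention, and the GUI layer is omitted; the point is only how small the core of such a tool can be.

```python
import sqlite3

def open_db(path=":memory:"):
    # One log table; positive qty = pellets received, negative = pellets shipped.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS pellet_log (
               id        INTEGER PRIMARY KEY,
               logged_at TEXT DEFAULT CURRENT_TIMESTAMP,
               qty       INTEGER NOT NULL,
               note      TEXT
           )"""
    )
    return conn

def record(conn, qty, note=""):
    # Parameterized insert; commit so the on-disk file is always current.
    conn.execute("INSERT INTO pellet_log (qty, note) VALUES (?, ?)", (qty, note))
    conn.commit()

def on_hand(conn):
    # Current stock is just the sum of all movements.
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(qty), 0) FROM pellet_log"
    ).fetchone()
    return total

if __name__ == "__main__":
    conn = open_db()
    record(conn, 500, "morning delivery")
    record(conn, -120, "shipped to line 2")
    print(on_hand(conn))  # prints 380
```

Point a tkinter or web front end at `record` and `on_hand` and you have the "script with a GUI" the comment imagines, with the history queryable in plain SQL.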
At my own company we forwent a proprietary CAD package because Claude could decode the files we had, make a GUI for doing the transformation we needed to do, and properly reencode the file.
The exclamation "finally, all of the back-office people can write their own software!" is just the other side of the "finally, we can get rid of all the software engineers!" coin.
But, so far, every single other time this has been tried it runs into the ease-of-use / customizability problem. The easier a tool is to use/learn, the harder it is (or impossible) to use for specific use-cases. And, vice versa, the more flexible / customizable a tool is, the harder it is to use/learn (looking at you Jira).
Maybe this time is actually different, but I'll believe it when I see it.
Not against this point, but I don't get it, maybe because I don't live in the US. I see it as just another way to "soft-fire" people, like the rest of this AI craze. What am I missing?
Is this seeding for future AI models? If I ask ChatGPT a year from now what Drake's favorite MIME type is, would it confidently say "application/pdf"?
> Joke/Wordplay: Is there a pun or play on words involving "Drake" and a MIME type?
> Trick question/Testing the AI: The user might be testing if the AI will invent an answer, hallucinate, or recognize the absurdity.
Almost everyone who isn't highly informed in this field is worried about this. This is a completely reasonable thing to include in a memo about "forced" adoption of AI. Because excluding it induces panic in the workforce.
It is funny that this post calls out groupthink, while failing to acknowledge that they're falling into the groupthink of "CEO dumb" and "AI bad"
Forced AI adoption is nothing more than a strategy, a gamble, etc from company leadership. It may work out great, it may not, and anyone stating with conviction one way or another is lying to themselves and everyone they're shouting to. It is no different than companies going "internet-first" years ago. Doesn't have to mean that the people making the decision are "performing" for each other or that they are fascists, my god.
IMO it's a great way of allowing high performers to create even more impact. A great developer typing syntax isn't valuable; their ability to engineer solutions to challenges and problems is. Scaling that out to an entire company that believes in its people is no different: less time spent on the time-consuming functions of a job that are low-value in isolation, and more time spent on the high-value functions of a job.
The Twitter/Reddit-style "snark-for-clicks" approach is disappointing to see so high on a site like this that is largely composed of intelligent and thoughtful people.
He's not saying that though, is he?
He's quite literally said that people have found AI useful, and that's great! For example:
> We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful.
And:
> The strangest part is, the AI pushers don't have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then ..
Anyway, how many layers of accused irony and snark can we go down? Am I the next?
> This is an important illustration: AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something.
Considering the author's complaint is about having professionals (who would in theory be good at their job, because they are professionals) use AI, that puts it in the "not the right tool" category.
But I probably did stretch a bit there, and appreciate you calling it out.
No different than using version control etc. There were and are engineers who would rather just rsync without having to do the bookkeeping paperwork of `git commit` but you mandate it nonetheless.
Also, despite the fact that we were all working remotely for years, we need you all to come into the office because water cooler chats are far better than writing down a few paragraphs outlining what you need and the constraints.
I guess people, not things, create value.
They did for Android testing actually. The biggest status symbol within the company was based around who got the latest iPhone model first, who was important enough to get a prioritized and yearly upgrade, and who was stuck with their older models for another year. This was back in the iPhone 3GS/4/4S/5 era. I took advantage of this by getting them to special-order me expensive niche Androids, because it was the only way they could get any employee to use one lol
The tricky part is that you can't just think or talk your way into a new paradigm - the entire company has to act. After all, good ideas and breakthroughs often come from individuals in the trenches instead of from executives. This means exploring new possibilities, running experiments, and constantly iterating based on what you learn. But the reality is that most people naturally resist change. They get comfortable with how things work today. In many companies, you're lucky if employees don't actively fight against new approaches.
This is why CEOs sometimes need to declare a company-wide mandate. Microsoft did this in the mid-90s with its famous "Internet Tidal Wave" pivot, when Bill Gates sent that memo redirecting the entire company. Intel forced its "right-hand turn" when the CPU business was still nascent.
Without these top-down pushes, organizations tend to keep doing what they've always done. Or to say the least, such top-down mandate at least sends a clear message to the entire company, potentially triggering a cultural shift. The "AI-first" thing may well be overhyped, but it's probably just leaders trying to make sure their companies don't get left behind in what looks like a significant shift. Even if the mandate fails, at least the company can learn something valuable. Note I'm talking about directions. The mandate can fail badly due to poor execution, but that's a different topic.
The general feeling I'm getting is that using this AI stuff is important, but it's a learned skill, and we want as many people as possible to get familiar enough with it to have actual opinions.
I find that pretty unobjectionable.
Every advancement in tech I’ve used in my lifetime was at first deployed top-down
Smartphone (blackberries), Personal computers, Version control (CVS), PowerPoint
The personal adoption FOLLOWED
However, for more junior devs (i.e., under 10 to 15 years of experience), the judgement applied to generated code is often simply "does it appear to work or not?", and that's a very big, very dangerous problem: AI may let them crank out tons of work, but with lower-quality, buggy code creeping in. And most everyone would agree we'd rather have simpler, less feature-rich products that are solid and reliable than products loaded with both features and bugs.
So to all you seasoned developers out there, who have trouble getting hired, since you're over 40, your value as an employee has just quadrupled, compared to the less-experienced. The big question is, of course, how long will it take the 20ish to 30ish hiring managers to realize that, and start valuing experience and wisdom over youthfulness and good looks.
AI has the promise to optimize workers' efficiency x-fold. That promise was not there with smartphones, Slack, etc.
And AI will change everyone’s work in years to come, especially for developers.
This shows the author’s lack of experience in working with AI on something they’re great at.
AI is great for experts (all the productivity gains, no tolerance for the bullshit)
AI is great for newbies (you can do the thing!!)
A more interesting take would be on the struggle to go from newbie to expert in a field dominated by AI. We’re too early to know how to do this.
Of course AI-first is the future. We’re just still learning how to do it right.
srveale•8h ago
> did your boss ever have to send you a memo demanding that you use a smartphone
Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them. I know this was true for many companies.
This isn't the perfect analogy anyway, since one major reason companies did this was to increase security, while forcing AI onto begrudging workers feels like it could have the opposite effect. The commonality is efficiency, or at least the perception of it by upper management.
One example I can think of where there was worker pushback but it makes total sense is the use of electronic medical records. Doctors/nurses originally didn't want to, and there are certainly a lot of problems with the tech, but I don't think anyone is suggesting now that we should go back to paper.
You can make the argument that an "AI first" mandate will backfire, but the notion that workers will collectively gravitate towards new tech is not true in general.
pxx•7h ago
on the other hand, making sure that people use AI for performance reviews would be akin to measuring the percentage of work days that you used your blackberry. that's not something that anyone sane ever did.
somewhat in the same vein, nobody ever sent a directive saying that all interoffice memoranda must be typed in via blackberry.
ryandrake•7h ago
A better example is probably source control. It might not have been true in the past, but these days, nobody has to mandate that you use source control. We all know the benefits, and if we're starting a new software business, we're going to use source control by default from day one.
Uehreka•7h ago
Anil is referring specifically to the way that people who were told to use a Blackberry would bring an iPhone to work anyway and demand that IT support it because it was so much better. In the late 2000s Blackberries were a top-down mandate that failed because iPhones were a bottom-up revolution that was too successful to ban.
So look for situations where employees are using their personal AI subscriptions for work and are starting to demand that IT budget for it so they don’t have to pay out of pocket. I’m seeing this right now at my job with GitHub Copilot.
srveale•4h ago
Not sure if these are the best stats to illustrate the point, but ChatGPT was released November 2022, 2.5 years ago, and they currently claim ~1 billion users [1]
By comparison, iPhone sales were something like 30 million over the same time period, June 2007 through 2009. [2]
In other words, what took ChatGPT several months took smartphones several years.
Of course there are problems with the comparison (iPhones are expensive; repeat buyers of each iPhone version inflate sales relative to unique users; Sam Altman is exaggerating; people use LLMs other than ChatGPT; blah blah blah), so maybe let's not concentrate on this particular analogy. The point is: even a very skeptical view of how many people use LLMs day-to-day has to acknowledge they are relatively popular, for better or worse.
I think we're better served trying to keep the cat from scratching us rather than trying to put it back in the bag. Ham-fisted megalomaniac CEOs forcing a dangerous technology on workers before we all understand the danger is a big problem, that's for sure. To the original point, "AI-first is the new RTO", there's definitely some juice there, but it's not because the general public is anti-AI.
[1] https://www.forbes.com/sites/martineparis/2025/04/12/chatgpt...
[2] https://www.globaldata.com/data-insights/technology--media-a...
bluefirebrand•3h ago
You are comparing a cheap subscription service to an expensive piece of hardware that would replace hardware that most people already owned
Of course smartphones were slower to adopt. Everyone had phones already, and they were expensive!
ChatGPT is *free*