If everyone, to satisfy their CEO's emotional attachment to AI, is forced to type into a chat box to get dreck out and then massage it into something usable for their work, we'll see that ineffective mode persist longer, and probably miss out on better modes of interaction and more well-targeted use cases.
Re WIA getting upset, there's that story about Michael Caine saying that while he hadn't seen Jaws: The Revenge and had heard it was awful, he had seen the mansion it paid for, which he likes very much. Seems the classier route.
https://www.anildash.com//2025/04/19/ai-first-is-the-new-ret...
I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.
For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.
AI can be extremely useful, and there are a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn, and if you don’t see positive results on your first couple of attempts, that doesn’t necessarily mean it’s bad; it just means you are a beginner. If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?
An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.
This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.
Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.
With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes coming if things keep going the way they are with A.I. But even if the technology hits a wall right now and stops advancing, things have already changed, and it's important to embrace where we are and adapt."
It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.
New technologies in companies commonly have the same pitfalls that burn out users. The companies have very little ability to tell if a technology is good or bad at the purchasing level. The c-levels that approve the invoices are commonly swayed not by the merits of the technology, but the persuasion of the salespeople or the fears of others in the same industries. This leads to a lot of technology that could/should be good being just absolute crap for the end user.
Quite often the 'best' or at least most useful technology shows up via shadow IT.
I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.
The more experience you have in a given domain, the more narrow your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.
*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.
There was never any widespread resistance to "the Internet", let's be real here.
In any case, adoption of all those things was bottom-up rather than top-down. CEOs were not mandating that tech teams use version control or ORMs or automated testing. It was tech leadership, with a lot of support from ICs in their department.
Tech people in particular are excited about trying new things. I never heard CEOs mandating top-down that teams use Kubernetes and adding people's Kubernetes usage into their performance reviews, yet Kubernetes spread like wildfire--to the point where many software companies which had no business using something as complicated as Kubernetes started using it. Same with other flavor-of-the-month tools and approaches like event sourcing, NoSQL/MongoDB, etc.
If anything, as a leader you need to slow down adoption of new technology rather than force it upon people. The idea that senior leadership needs to push to get AI used is highly unusual, to say the least.
The equivalent of the API mandate for AI would be if CEOs were demanding that all products include a "Summarize Content" button. Or that all code repositories contain a summary of their contents in a README. The use of AI to solve these problems would be an implementation detail.
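To make the "implementation detail" point concrete, here's a minimal sketch (all names here are hypothetical, not any real product's API): the product commits to a summarize feature, and whether an LLM or a crude extractive heuristic sits behind it is invisible to the button.

```python
# Hypothetical sketch: the product feature is "summarize this content";
# whether an LLM powers it is an implementation detail behind one interface.
from typing import Protocol


class Summarizer(Protocol):
    def summarize(self, text: str) -> str: ...


class FirstSentencesSummarizer:
    """Crude extractive baseline: just the first N sentences."""

    def __init__(self, n: int = 2) -> None:
        self.n = n

    def summarize(self, text: str) -> str:
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        return ". ".join(sentences[: self.n]) + "."


class LlmSummarizer:
    """Placeholder for an LLM-backed implementation; the call is stubbed
    because the vendor and API are implementation choices, not the feature."""

    def __init__(self, client) -> None:
        self.client = client

    def summarize(self, text: str) -> str:
        return self.client.complete(f"Summarize in two sentences:\n{text}")


def handle_summarize_button(summarizer: Summarizer, content: str) -> str:
    # The UI only knows there's a "Summarize Content" button.
    return summarizer.summarize(content)
```

The mandate would be about the feature; swapping FirstSentencesSummarizer for LlmSummarizer is a decision the team that owns the code gets to make.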
I was around before version control and I don't remember that reaction from more than an insignificant percentage of devs. Most devs reacted to the advent of version control with glee because it eased a real pain point.
But why do they have to fill out some paperwork? If the new technology is a genuine productivity boost and any sort of meaningful performance review is undertaken, then it will show up if they're performing sub-par compared to colleagues.
The real problem is that senior management are lazily passing down mandates in lieu of trusting middle management to do effective performance reviews. Just as it was with Return To Office.
In my (limited) experience, the tasks you want to assign to elite devs are less amenable to AI in the first place.
My take-away was this is exactly what the OP is targeting. Management's job is to convince you to try it and to help it demonstrate value; mandating "thou shalt be AI-first" does neither of these effectively. Ironically, some of your best developers will require the most evidence to be convinced, fight the hardest, and have the best options to jump ship if you push far enough. It's just weak management when there's an obvious alternative. Dash is a developer relations/evangelist, so it's not surprising he bristles at this approach.
This way of phrasing it rejects the possibility that maybe the new thing really does suck, and that this can sometimes be identified pretty quickly.
Did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack?
I see this is already a favorite quote amongst commenters. It's mine too: I had a job ~15 years ago where the company had introduced an internal social network that was obviously trying to ride the coattails of Facebook et al. without understanding why people liked social networks. Nobody used it because it was useless, but management was evidently invested in it, because your profile and use of that internal site did in fact factor into performance reviews.
This didn't last long, maybe only one review cycle before everyone realized it was irretrievably lost. The parallel with the article is very apt, though. The stick instead of the carrot is basically an indication that a dumb management idea is in its death throes.
Where I worked, it was an open secret that the CEO had an alter ego he used on the site. I have no idea if he knew that we all knew who that really was (I have to assume he did), but everyone played along.
By the time I had worked there it had been around for a few years already and once a quarter the head of our group set time aside for everyone to "engage" with it for an hour so that no one would be dinged on their performance review.
It's a great example of how executive group-think can drive whole multi-industry initiatives that are very-obviously, to anyone outside that bubble, pure waste.
Incidentally, some people on my team have used Copilot for task management, but nobody has found it useful for coding / debugging / testing.
That this counts as a significant time savings mostly has to do with task-tracking systems being so miserable and slow to work in for the majority of people expected to use them, though. If we used something lighter and closer to where the work happens (the code), it wouldn't really be that helpful.
In fact I remember very distinctly the Google TGIF All-Hands where Larry and Sergey stood up and told SWEs they should be trying to do development on tablets, because, y'know, mobile was ascendant, they were afraid of being left behind in mobile, and wanted to develop for "mobile first" (which ended up being on the whole "mobile only" but I'll put that aside for now).
It frankly had the same aura of... not getting it... lack of vision pretending to be visionary.
In the end, the job of upper management is not to dictate tools to engineers to drive them to efficiency. We frankly already have that motivation ourselves. If engineers are skeptical of "AI", it's mostly because we've already engaged with it and understand many of its limitations, not because we're "luddites".
One sign of a healthy internal engineering culture is that the engineers actually doing the work pick their tools together, rather than having them foisted on them.
When management sends memos out demanding people use AI, what they're actually reflecting is their own fear of being left behind in the buzzword cycle. Few of us doing the work have that fear. I've seen more projects damaged by excessive novelty and forced "innovation" than the other way around.
My favorite stupid Shopify cult thing is the hiring page having a "skip the line" for "exceptional abilities" which explicitly lists being good at video games as a reason to skip the normal hiring process. The "other" category includes examples like "Olympic athlete".
Hah! Now you have my curiosity. What do they replace the normal hiring process with? A game of LoL?
So if you're hitting (a verifiable) top 0-0.5% in some field, there's a reasonable bias towards assuming a high general competence.
I did once hit the top 0.5 percent in a multinational PHP exam in my teenage years; however, I did have a second window open with an interpreter running for the most fringe questions. Who knows what that means.
I know a software developer who could well be a concert pianist, for example. I.e., the people in that overlap are probably extraordinarily talented.
Case in point: I have a friend who is a top-32 Magic player in NA. Recently, not even a year ago, she made it her goal to become a chess grandmaster, and she's already rated 2000 Elo. You could argue that maybe some skills transfer, but it's pretty shaky reasoning.
See: https://www.nine.com.au/sport/olympics/olympians-who-changed...
Also, it smells like a false metric. People who are in the top 0.05% of excellence are probably still heavily invested in the thing they're excelling at.
This is incredibly shady and I wonder if it's even legal here in Europe.
But Shopify isn't just a payment processing service. It's a full-blown ecommerce suite. Do you think there's an online store out there that gets rid of all PII once an order is paid for, or even after it's fulfilled?
We've had people try to return/replace things (or even credit card disputes) years after they bought it. How exactly would that work if we got rid of all information about their order shortly after they made it?
As for legality in the EU/UK, it's just like everything else, on some level they technically asked for consent and you gave it, but yes, dark patterns abound.
My favorite part:
> I've never worked through a night. The only times I worked more than 40 hours in a week was when I had the burning desire to do so. I need 8ish hours of sleep a night. Same with everybody else, whether we admit it or not.
> For creative work, you can't cheat. My belief is that there are 5 creative hours in everyone's day. All I ask of people at Shopify is that 4 of those are channeled into the company.
Obviously, as I'm replying to someone with first-hand Shopify experience, which I don't have, take all this as you wish. I only know the Twitter Tobi. (and I think his "AI first" memo is ridiculous, to the point that I struggle to imagine that the same person wrote this twitter thread)
And if people think about it, it's actually not too different from Leetcoding.
I was told that within Shopify there's something called a "Tobi Tornado": basically, when Tobi swoops in on a program/feature and demands significant change in short order. Carefully planned initiatives can be blown up, and then maximum effort is expected to turn things around.
What everyone had in common was saying that Tobi is quite a smart person and often not wrong, but he's still human, and so there's simply no way he can make 100% good calls because he can't always have full context.
I've no idea whether Tobi gets it right, just.. this isn't necessarily a bad thing!
It is amusing to see Shopify go so hard on AI while, internally, merges (not builds, merges) were taking up to five hours because of all the quality issues. The CTO was threatening to discipline engineers based on the number of exceptions logged, while at the same time the CEO was saying you'd now be reviewed on how you use AI to do your job. I had no interest in working that way, so I moved on, which is sad because there are so many amazing people investing so much time there.
Outside of tech, AI has been phenomenally helpful. I know many tech folk are falling over themselves looking for non-tech-industry problems that can be software-solved and then leased out monthly, and there are tons of these problems out there, but they're very hard to locate and model if you're outside the industry.
But with the current crop of LLMs, people who don't know how to program, but recognize that a program could do this task, finally can now summon that program to do the task. The path still has a tech-ability moat, but I can only imagine the AI titans racing to get programming ability into Supply Chain Technician Kim's hands. Think Steve Jobs designing an IDE for your mother to use.
I believe it will be the CEOs of these non-tech companies who will be pushing "AI first" and having people come in to show non-techy workers how to leverage LLMs to automate tasks. You have to keep in mind that if you walk into most offices in most places of the world, most workers will say "What the hell is a macro? I just go down the list line by line..."
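As a purely hypothetical illustration of the kind of "go down the list line by line" task that can now be summoned: suppose Kim exports her orders to a CSV (the file name and column names below are invented) and asks for every unfulfilled past-due row to be flagged. The script an LLM might hand her looks something like this:

```python
# Hypothetical example: flag past-due orders in a spreadsheet export.
# The file name and the columns ("order_id", "due_date", "status")
# are invented for illustration.
import csv
from datetime import date, datetime

with open("orders.csv", newline="") as f:
    rows = list(csv.DictReader(f))

today = date.today()
for row in rows:
    due = datetime.strptime(row["due_date"], "%Y-%m-%d").date()
    if row["status"] != "fulfilled" and due < today:
        print(f'Order {row["order_id"]} is past due ({row["due_date"]})')
```

Trivial for a programmer, but exactly the sort of job that used to die waiting for someone in the office who knew how to write a macro.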
Brooks has yet to be proven wrong; even if this appears to be the silver bullet, it could just as likely widen the tech moat when non-programmers paint themselves into corners where they can't do their jobs without all the brittle, impossible-to-maintain code they've written. Think of the skilled-trades vacuum we have in much of the Western world. Can Supply Chain Technician Kim Jr. do her job without AI if she's never seen that before?
The exclamation "finally, all of the back-office people can write their own software!" is just the other side of the "finally, we can get rid of all the software engineers!" coin.
But, so far, every single other time this has been tried, it has run into the ease-of-use/customizability problem. The easier a tool is to use and learn, the harder (or impossible) it is to apply to specific use-cases. And vice versa: the more flexible and customizable a tool is, the harder it is to use and learn (looking at you, Jira).
Maybe this time is actually different, but I'll believe it when I see it.
Not against this point, but I don't get it (maybe because I don't live in the US): I see this as just another way to "soft-fire" people, as is this whole AI craze. What am I missing?
Is this seeding for future AI models? If I ask ChatGPT a year from now what Drake's favorite MIME type is, would it confidently say "application/pdf"?
> Joke/Wordplay: Is there a pun or play on words involving "Drake" and a MIME type?
> Trick question/Testing the AI: The user might be testing if the AI will invent an answer, hallucinate, or recognize the absurdity.
Almost everyone who isn't highly informed in this field is worried about this. This is a completely reasonable thing to include in a memo about "forced" adoption of AI. Because excluding it induces panic in the workforce.
It is funny that this post calls out groupthink while failing to acknowledge that it's falling into the groupthink of "CEO dumb" and "AI bad".
Forced AI adoption is nothing more than a strategy, a gamble, etc from company leadership. It may work out great, it may not, and anyone stating with conviction one way or another is lying to themselves and everyone they're shouting to. It is no different than companies going "internet-first" years ago. Doesn't have to mean that the people making the decision are "performing" for each other or that they are fascists, my god.
Imo it's a great way of allowing high performers to create even more impact. A great developer typing syntax isn't valuable; their ability to engineer solutions to challenges and problems is. Scaling that out to an entire company that believes in its people is no different: less time spent on the time-consuming functions of a job that are low-value in isolation, and more time spent on its high-value functions.
The Twitter/Reddit-style "snark-for-clicks" approach is disappointing to see so high up on a site like this, which is largely composed of intelligent and thoughtful people.
He's not saying that though, is he?
He's quite literally said that people have found AI useful, and that's great! For example:
> We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful.
And:
> The strangest part is, the AI pushers don't have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then ..
Anyway, how many layers of accused irony and snark can we go down? Am I the next?
> This is an important illustration: AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something.
Considering the author's complaint is about having professionals (who would, in theory, be good at their jobs because they are professionals) use AI, that puts it in the "not the right tool" category.
But I probably did stretch a bit there, and appreciate you calling it out.
No different than using version control, etc. There were, and are, engineers who would rather just rsync without doing the bookkeeping paperwork of `git commit`, but you mandate it nonetheless.
Also, despite the fact that we were all working remotely for years, we need you all to come into the office because water cooler chats are far better than writing down a few paragraphs outlining what you need and the constraints.
I guess people, not things, create value.
They did for Android testing actually. The biggest status symbol within the company was based around who got the latest iPhone model first, who was important enough to get a prioritized and yearly upgrade, and who was stuck with their older models for another year. This was back in the iPhone 3GS/4/4S/5 era. I took advantage of this by getting them to special-order me expensive niche Androids, because it was the only way they could get any employee to use one lol
> did your boss ever have to send you a memo demanding that you use a smartphone
Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job: they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them. I know this was true for many companies.
This isn't the perfect analogy anyway, since one major reason companies did this was to increase security, while forcing AI onto begrudging workers feels like it could have the opposite effect. The commonality is efficiency, or at least the perception of it by upper management.
One example I can think of where there was worker pushback but it makes total sense is the use of electronic medical records. Doctors/nurses originally didn't want to, and there are certainly a lot of problems with the tech, but I don't think anyone is suggesting now that we should go back to paper.
You can make the argument that an "AI first" mandate will backfire, but the notion that workers will collectively gravitate towards new tech is not true in general.
on the other hand, making sure that people use AI for performance reviews would be akin to measuring the percentage of work days that you used your blackberry. that's not something that anyone sane ever did.
somewhat in the same vein, nobody ever sent a directive saying that all interoffice memoranda must be typed in via blackberry.
A better example is probably source control. It might not have been true in the past, but these days, nobody has to mandate that you use source control. We all know the benefits, and if we're starting a new software business, we're going to use source control by default from day one.
Anil is referring specifically to the way that people who were told to use a Blackberry would bring an iPhone to work anyway and demand that IT support it because it was so much better. In the late 2000s Blackberries were a top-down mandate that failed because iPhones were a bottom-up revolution that was too successful to ban.
So look for situations where employees are using their personal AI subscriptions for work and are starting to demand that IT budget for it so they don’t have to pay out of pocket. I’m seeing this right now at my job with GitHub Copilot.