[1] There was a remote universe where I could see myself working for Shopify; now that company sits somewhere between Wipro and Accenture in my ranking.
There are good books on this: e.g. https://www.amazon.ca/Next-Generation-Performance-Management...
It might be that these companies don't care about actual performance, or that they are too cheap or poorly run to reward and incentivize actual performance gains, but either way the fault lies with leadership.
"X trackers and content blocked
Your Firefox settings blocked this content from tracking you across sites or being used for ads."
Screenshots don't track me so they would be ok.
A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. Some 9,700 were created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause, all of which will have to be fixed in the future.
edit: typo
Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)
Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is.
I was a huge AI skeptic but since Jan 2025, I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years so I've seen both sides and there is nothing that is going to stop it.
But the evangelist insistence that it literally cannot be a net negative in any context or workflow is just exhausting to read and a massive turn-off, as is the refusal to accept that others may simply not benefit the same way from a different work style.
Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole I wouldn't be that surprised.
There are times when, after knocking request after request out of the park, I spend hours wrangling some dumb failure, or run into spaghetti code from the last "successful" session that massively slows down new development or requires painful refactoring, and I start to question whether this is a sustainable, true net multiplier in the long term. The constant time investment of learning and maintaining new tools/rules/hooks/etc. should be counted too.
But I enjoy the work style personally, so I stick with it.
I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices.
2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))
3. there is some subset of ai detractors that refuse to use the tools for whatever reason
the metrics pushed by 1) rarely account for 2) and don't really serve 3)
i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo
I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot?
if you can, use cc or codex through your ide instead, oai and anthropic train on their own harnesses, you get better performance
If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.
For some reason, over the last decade or so, we have confused activity with productivity.
(and words/claims with company value - but that's another topic)
Enforced use means one of two things:
1. The tool sucks, so few will use it unless forced.
2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).
I have friends who are finance industry CTOs, and they have described it to me in realtime as CEO FOMO they need to manage...
Remember tech is sort of an odd duck in how open people are about things and the amount of cross pollination. Many industries are far more secretive and so whatever people are hearing about competitors AI usage is 4th hand hearsay telephone game.
edit: noteworthy someone sent yet another firmwide email about AI today which was just linking to some twitter thread by a VC AI booster thinkbro
Demanding that everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other colored drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).
Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.
Throughout the year, they must coordinate training and enforce attendance for the people in their department, with drill training mandated by the Head of Drilling.
And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.
Drilling cannot fail, it can only be failed.
People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.
It's deeply fascinating psychologically and I'm not sure where this ends.
I've never seen any tech theme pushed top down so hard in 20+ years of working. The closest was the early 00s offshoring boom, before it peaked and was rationalized/rolled back to some degree. The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
This is a great line - evocative, funny, and a bit of wordplay.
I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.
Apparently Anthropic has been in there for 6 months helping them with some back office streamlining, and the outcome of that so far has been... a press release announcing that they are working on it!
A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate.
I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take.
It may turn into another cottage industry like big data / cloud / whatever adoption where "forward deployed / customer success engineers" are collocated by the 1000s for years at a time in order to move the needle.
I mean.. recent FBI files of certain emails would imply.. probably, yes.
https://www.semafor.com/article/04/27/2025/the-group-chats-t...
I am not as negative on AI as the rest of the group here, though. I think AI-first companies will outpace companies that never start to build the AI muscle. From my perspective these memos mostly seem reasonable.
- If all your peers are doing it and you do it and it doesn't work, it's not your fault, because all your peers were doing it too. "Who could have known? Everyone was doing it."
- If all your peers _aren't_ doing it and you do it and it doesn't work, it's your fault alone, and your board and shareholders crucify you. "You idiot! What were you thinking? You should have just played it safe with our existing revenue streams."
And the one for what's happening with RTO, AI, etc.: - If all your peers are doing it and you _don't do it_ and it _works_, your board crucifies you for missing a plainly obvious sea change to the upside. "You idiot! How did you miss this? Everyone else was doing it!"
Non-founder/mercenary C-suites are incentivized by shareholders and boards to be fundamentally conservative. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold, resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines. Imagine a CEO going to their board today and saying, "we're going to sit out potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this; you're fired".
> The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.
Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.
Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.
What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.
Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple grep-and-sed command would've taken me 15 seconds to write, done the job correctly, and cost ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
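For reference, the kind of one-liner being alluded to might look like this (a toy file layout and hypothetical field names `old_name`/`new_name` are assumed; `\b` word boundaries and in-place `-i` are GNU sed features):

```shell
# Toy demo tree with a struct field to rename.
mkdir -p demo && printf 'struct s { int old_name; };\nint x = s.old_name;\n' > demo/a.c

# Find every .c file containing the field, then rewrite it in place.
grep -rl 'old_name' --include='*.c' demo | xargs -r sed -i 's/\bold_name\b/new_name/g'

# Verify: any surviving occurrence means the rename missed something.
grep -rn 'old_name' --include='*.c' demo || echo "rename complete"
```

Unlike an LLM pass, the final grep makes missed instances impossible to overlook.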
Trillions of dollars for this? Sigh... try again next week, I guess.
But I feel you, part of me wants to quit too, but can't afford that yet.
Using sonnet 4 or even just not knowing which model they are using is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is, perhaps not uncommon, but I think a minority opinion among people who are using the tools effectively. Also a rename falls squarely in the realm of operations that will reliably work in my experience.
This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.
Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.
I am aware of a large company, one that everyone in the US has heard of, that is planning to lay off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.
Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.
If you believe anything that is happening with the amount of money and bullshit enveloping this LLM disaster, you should put the keyboard down for a while.
I’d just add a cron job to burn some tokens.
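Tongue-in-cheek, but trivially doable. A sketch, assuming the `claude` CLI's non-interactive print mode (`-p`); the prompt and schedule are made up:

```shell
# Hypothetical crontab entry: burn a few tokens every hour so the
# usage dashboards show "AI adoption" without changing how you work.
0 * * * * claude -p "Write a haiku about synergy" > /dev/null 2>&1
```

Which is exactly the problem with utilization metrics: they measure tokens consumed, not value produced.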
[Company that's getting disrupted by AI: Fiverr, Duolingo]: rush to adopt internal AI to cut costs before they get undercut by competition
[Company that's orthogonal: Box, Ramp, HFT]: build internal tools to boost productivity, maintain 'ai-first' image to keep talent
[Company whose business model is AI]: time to go all in
Relevant article from two days ago https://www.latent.space/p/adversarial-reasoning
happy to be corrected but i'm not aware of any direct improvements llms bring to ultra low latency market making, time to first token is just too high (not including coding agents)
from talking to some friends in the space there's some meaningful improvement in tooling, especially in discretionary trading on longer time horizons, where agents can actually help w/ research and sentiment analysis
That may be all the publicly-posted ones, but I'm skeptical. They have 11.
There were a lot more internal memos.
AI is a broad category of tools, some of which are highly useful to some people - but mandating wide adoption is going to waste a lot of people's time on inefficient tools.
Companies are just groups of employees - and if the companies fail to provide a clear rationale for how this increases productivity, those companies will fail.
But for an individual cobbler, you basically got fired at one job and hired at another. This may come as a surprise to those who view work as simply an abstract concept that produces value units, but people actually have preferences about how they spend their time. If you're a cobbler, you might enjoy your little workshop, slicing off the edge of leather around the heel, hammering in the pegs, sitting at your workbench.
The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
You might not want to quit that job and get a different job running a shoe assembly line in a factory. Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises", then perhaps you might be more excited about putting down your hammer. But the boss isn't saying that. He's saying "all of the cobblers at the other companies are doing this too, so where are you gonna go?".
Of course AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, then the CEO has basically given them a giant benefits cut with zero compensation in return.
I don’t actually think it’ll be a productivity boost the way I work. Code has never been the difficult part, but I’ll definitely have to show I have included AI in my workflow to be left alone.
Oh well…
All the tools that improved productivity for software devs (Docker, K8s/ECS/autoscaling, telemetry providers) took a very long time for management to recognize as valuable, and in some places faced a lot of resistance. In some places where I worked, asking for an IntelliJ license would make your manager look at you like you were asking "hey, can I bang your wife?".
>The misconceptions about Klarna and AI adoption baffle me sometimes.
>Yes, we removed close to 1,500 micro SaaS services and some large. Not to save on licenses, but to give AI the cleanest possible context.
If you remove all your services...
Also notice how the stocks of almost all the companies that have announced AI-first initiatives, Meta excepted, are at best flat and at worst down more than 20% YTD.
What does that tell you?
Then concludes his email with:
> I have asked Shelly to free up time on my calendar next week so people can have conversations with me about our future.
I assume Shelly is an AI, and not human headcount the CEO is wasting on menial admin tasks??
And yes, people did resist IDEs ("I'm best with my Emacs" - no you weren't), people resisted the "sufficiently smart compiler", and so on. What happened was that they were replaced by the sheer growth of the industry, which kept supplying new people who didn't have these constraints.