I’m a former C++ dev turned Product Manager.
I’ve noticed many engineers struggle with the "politics" side of things when they become Leads. To help with this, I’m building a text-based simulator.
It is NOT an AI chatbot. It is a hand-crafted, branching narrative (logic tree) based on real experiences.
I just launched the first scenario: "The Backchannel VP."
The Setup: Your VP Engineering is bypassing you and giving tasks directly to your juniors, causing chaos.
Your Goal: Stop the backchanneling without getting fired.
It’s a short, specific puzzle. I’d love to know if you think the "Correct" path I designed matches your real-world experience, or if I’m off base.
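For the curious, “logic tree” just means hand-written nodes whose choices point at other nodes, plus a little hidden state (e.g. how much trust you’ve burned with the VP). A rough sketch of the shape in Python; the names are illustrative, not my actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class Choice:
        text: str             # what the player says or does
        next_node: str        # id of the node this choice leads to
        trust_delta: int = 0  # hidden "political capital" adjustment

    @dataclass
    class Node:
        node_id: str
        narration: str        # situation text shown to the player
        choices: list[Choice] = field(default_factory=list)

    # A hand-written node in the style of "The Backchannel VP"
    NODES = {
        "vp_pings_junior": Node(
            "vp_pings_junior",
            "You find out the VP assigned a side project directly to your junior.",
            [
                Choice("Call it out in the team channel", "public_blowup", trust_delta=-2),
                Choice("Ask the junior what was requested, then book a 1:1 with the VP",
                       "private_1on1", trust_delta=+1),
            ],
        ),
    }

    def show(node_id: str) -> None:
        node = NODES[node_id]
        print(node.narration)
        for i, choice in enumerate(node.choices, 1):
            print(f"  {i}. {choice.text}")

    show("vp_pings_junior")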
ttul•1mo ago
Sufficiently powerful AI can become the middle manager of everyone’s dreams. Wonderfully effective interpersonal skills, no personality defects. Fair and timely feedback.
Try to convince me this isn’t the case.
bdcp•1mo ago
Have you tried asking an AI to convince you otherwise?
gordonhart•1mo ago
Linking Marshall Brain's ever-relevant novella "Manna" on this: https://marshallbrain.com/manna1
pingananth•1mo ago
The missing piece isn’t intelligence; it’s statefulness and emotional memory.
A human manager (or VP) remembers that you embarrassed them in a meeting three weeks ago, and that hidden state dictates their reaction today. LLMs—currently—are too 'forgiving' and rational. They don't hold grudges or play power games naturally.
Until AI can simulate that messy, long-term 'political capital' (or lack thereof), I think we still need humans to navigate other humans. But I agree, for pure PR review and logical feedback, I'd take an AI manager any day!
wordpad•1mo ago
Managing is about building relationships to coordinate and prioritize work, and even though LLMs have excellent soft skills, they can’t build relationships.
DrScientist•1mo ago
:-)
Where is the AI going to get the information required to do the job?
How is the AI going to notice that Bob looks a bit burnt out, or understand which projects to work on/prioritise?
Who is going to set the AI manager's objectives? Are they simple, or are they multi-factorial and sometimes conflicting? Does the objective function stay static over time? If not, how is it updated?
How are you going to download all the historic experience of the manager to the AI, or is it just going to learn on the job?
What happens when your manager AI starts talking to another team's manager AI? Will you just re-invent office politics, but in AI form? Will you learn how to game your AI manager once you understand and potentially control all its inputs?
ttul•4w ago
I think most of these objections are valid against a “ChatGPT-in-a-box is your manager” framing. That’s not what I meant by “AI replaces middle management”.
What I did mean is: within ~36 months, a large chunk of the coordination + information-routing + prioritization plumbing that currently consumes a lot of EM/PM time gets automated, so orgs can run materially flatter.
A few specifics to the questions:
“Where does the AI get the information?”
Not from vibes. From the same places managers already get it, but with fewer blind spots and better recall: issue trackers, PRs, incident timelines, on-call load, review latency, meeting notes, customer tickets, delivery metrics, lightweight check-ins. The “AI manager” is really a system with tools + permissions + audit logs, not a standalone LLM.
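To make “tools + permissions + audit logs” concrete, here's roughly the thin, scoped read layer I'm imagining. Purely illustrative; the source names and stub adapters are made up:

    import time

    # Per-source adapters you write; stubs here for illustration.
    connectors = {
        "issue_tracker": lambda q: [{"id": "ENG-123", "status": "blocked"}],
        "pull_requests": lambda q: [{"id": 42, "review_latency_hours": 30}],
        "incident_log":  lambda q: [],
    }

    # Which sources the "AI manager" may read at all.
    PERMISSIONS = {
        "issue_tracker": {"read"},
        "pull_requests": {"read"},
        "incident_log":  {"read"},
        "salary_data":   set(),   # explicitly no access
    }

    AUDIT_LOG = []  # in practice an append-only store, not an in-memory list

    def fetch(source: str, query: str) -> list[dict]:
        """Scoped, audited read: the model never talks to raw systems directly."""
        allowed = "read" in PERMISSIONS.get(source, set())
        AUDIT_LOG.append({"ts": time.time(), "source": source, "query": query, "allowed": allowed})
        if not allowed:
            raise PermissionError(f"no read access to {source}")
        return connectors[source](query)

    print(fetch("issue_tracker", "open blockers for the payments team"))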
“How does it notice burnout / team health?”
Two parts: (1) observable signals (sustained after-hours activity, chronic context switching, on-call spikes, growing review queues, missed 1:1s, reduced throughput variance), and (2) explicit human input (quick pulse check-ins, opt-in journaling, “I’m overloaded” flags). Humans are still in the loop for the “I’m not okay” stuff. The AI just catches it earlier and more consistently than a busy manager with 8 directs and 30 Slack threads.
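To be concrete, the “observable signals” part is mostly boring threshold logic, not mind reading. Toy sketch with made-up thresholds; the output is a nudge to a human, never a diagnosis:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class WeeklySignals:
        after_hours_commits: int
        oncall_pages: int
        open_review_queue: int
        pulse_score: Optional[int]  # 1-5 self-reported check-in, None if skipped

    def followup_reasons(weeks: list[WeeklySignals]) -> list[str]:
        """Reasons to prompt the (human) manager for a check-in; empty list = no flag."""
        reasons = []
        if sum(w.after_hours_commits for w in weeks[-3:]) > 15:
            reasons.append("sustained after-hours activity over the last 3 weeks")
        if any(w.oncall_pages > 10 for w in weeks[-2:]):
            reasons.append("on-call spike")
        if len(weeks) >= 2 and all(
            w.pulse_score is not None and w.pulse_score <= 2 for w in weeks[-2:]
        ):
            reasons.append("two consecutive low pulse check-ins")
        return reasons

    history = [WeeklySignals(8, 3, 5, 3), WeeklySignals(6, 12, 9, 2), WeeklySignals(7, 11, 14, 2)]
    print(followup_reasons(history))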
“Who sets objectives / what about conflicting goals?”
Exactly: humans. Strategy is still human-owned. But translating “increase reliability without killing roadmap” into day-to-day sequencing, tradeoff visibility, and risk accounting is where software can help a lot. Think: continuous, explainable prioritization that shows its work (“we’re pushing this because it reduces SEV risk by X and unblocks Y; here are the assumptions”).
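“Shows its work” looks, to me, more like a scoring function with named, human-set weights than an oracle. Toy version; the fields and weights are invented, and the weights are exactly the human-owned policy part:

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        sev_risk_reduction: float  # 0..1, estimated reduction in incident risk
        unblocks: int              # downstream items this unblocks
        roadmap_value: float       # 0..1, product-assigned

    # The human-owned part: strategy expressed as explicit weights.
    WEIGHTS = {"sev_risk_reduction": 5.0, "unblocks": 1.0, "roadmap_value": 3.0}

    def score(item: WorkItem) -> tuple[float, list[str]]:
        terms = {
            "sev_risk_reduction": WEIGHTS["sev_risk_reduction"] * item.sev_risk_reduction,
            "unblocks": WEIGHTS["unblocks"] * item.unblocks,
            "roadmap_value": WEIGHTS["roadmap_value"] * item.roadmap_value,
        }
        reasons = [f"{k}: +{v:.1f}" for k, v in terms.items() if v > 0]  # the "shows its work" part
        return sum(terms.values()), reasons

    items = [WorkItem("pager dedupe", 0.6, 2, 0.1), WorkItem("new onboarding flow", 0.0, 0, 0.8)]
    for item in sorted(items, key=lambda i: score(i)[0], reverse=True):
        total, why = score(item)
        print(f"{item.name}: {total:.1f} ({'; '.join(why)})")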
“What about historic experience?”
You don’t “download” a manager’s career. You encode the org’s policies, past decisions, and constraints into an accessible memory: postmortems, decision records, architecture notes, norms. The AI won’t have wisdom-by-osmosis, but it will have perfect retrieval of “what happened last time we tried this” and it won’t forget the quiet lessons buried in docs.
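And “perfect retrieval” doesn't require anything exotic; even crude search over decision records and postmortems answers “what happened last time we tried this.” A trivial keyword version, with made-up records, just to show the shape:

    # Made-up decision records standing in for an org's postmortems / ADRs.
    DECISION_RECORDS = [
        {"title": "2022: moved the queue to Kafka",
         "lesson": "underestimated the ops burden for a 4-person team"},
        {"title": "2023: trunk-based development rollout",
         "lesson": "needed two months of CI work before it paid off"},
    ]

    def what_happened_last_time(query: str) -> list[dict]:
        """Dumb keyword overlap; a real system would use embeddings, but the shape is the same."""
        words = set(query.lower().split())
        return [r for r in DECISION_RECORDS
                if words & set((r["title"] + " " + r["lesson"]).lower().split())]

    print(what_happened_last_time("should we adopt kafka for the events pipeline?"))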
“Will we reinvent office politics / will people game it?”
We already do. The difference is: an AI system can be designed to be harder to game because inputs can be cross-validated (tickets vs PRs vs customer impact vs peer feedback) and the rules can be transparent and audited. Also: if you try to game an AI that logs its reasoning, you leave a paper trail. That alone changes incentives.
“Relationships and trust can’t be automated.”
Agree. And that’s why I don’t think “management disappears.” I think it unbundles: the human part (trust, coaching, hard conversations, hiring/firing accountability, culture) stays human.
The mechanical part (status synthesis, dependency chasing, agenda generation, follow-up enforcement, draft feedback, metric hygiene, “what should we do next and why”) becomes mostly automated. Did anyone love that part anyway? I didn’t.
So the likely outcome isn’t “everyone reports to an API”. It’s: fewer layers, more player-coaches, and AI doing the boring middle-management work that currently eats the calendar.
In other words: I’m not claiming AI becomes the perfect human manager. I’m claiming it makes the org need less middle management by automating the parts that are fundamentally information processing.