I’ve spent the last 5 years looking at why massive enterprise software deployments usually fail. It’s almost never a technical issue. They fail because the system assumes the official org chart is reality. Now, we’re watching the exact same architectural mistake happen with Enterprise AI.
Right now, AI agents are basically operating on two data sources:
The HR Fantasy (Structural): The official org chart and who supposedly reports to whom.
The Dead Archive (Transactional): Jira tickets, Confluence wikis, and old emails.
Anyone who has worked in a large engineering org knows this isn't how work actually gets done. If you need a hard architecture call made, you don't route it to the "VP of Engineering" like the official policy says. You ping that one grumpy Staff Engineer on Slack because everyone knows they actually own the system.
That’s the "shadow org chart"—the actual behavioral graph of who trusts whom, informal authority, and real escalation paths.
I wrote a post about how we can actually map this behavioral layer and turn it into continuous telemetry. If we want AI agents to route decisions or escalate issues without making completely tone-deaf, politically blind mistakes, they need to query this shadow graph, not just read the docs.
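To make the idea concrete, here's a minimal sketch of what "querying the shadow graph" could look like. Everything here is hypothetical — the names, topics, and event format are made up for illustration; in practice the telemetry would come from mining who actually answers whom in chat threads, code reviews, and incident channels:

```python
from collections import defaultdict

# Hypothetical interaction telemetry mined from chat/review threads:
# (asker, answerer, topic) -- every name and topic below is invented.
events = [
    ("alice", "raj", "auth-service"),
    ("bob", "raj", "auth-service"),
    ("carol", "raj", "auth-service"),
    ("alice", "vp_eng", "auth-service"),  # official owner, rarely the real one
    ("dave", "mei", "billing"),
    ("erin", "mei", "billing"),
]

def build_shadow_graph(events):
    """Weight edges by how often someone actually resolved a topic for others."""
    graph = defaultdict(lambda: defaultdict(int))
    for asker, answerer, topic in events:
        graph[topic][answerer] += 1
    return graph

def route(graph, topic):
    """Route to whoever the behavioral graph says really owns the topic."""
    owners = graph.get(topic)
    if not owners:
        return None  # no behavioral signal; fall back to the org chart
    return max(owners, key=owners.get)

graph = build_shadow_graph(events)
print(route(graph, "auth-service"))  # raj -- not the VP the org chart points to
print(route(graph, "billing"))       # mei
```

This is deliberately naive (raw answer counts, no recency decay, no distinction between "answered" and "answered well"), but even this crude version routes to the grumpy Staff Engineer instead of the VP — which is the whole point.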
Curious to hear from folks building internal tools: Are you seeing your AI deployments make these kinds of socially blind routing mistakes? How are you handling the gap between the official docs and reality?
yumiatlead•2h ago