The product has always felt obvious to us: teams waste 45+ minutes per incident just context-switching between Grafana, AWS Console, PagerDuty, and Slack before they even start debugging. We collapsed that.
But now Anthropic, AWS, and Google are all shipping their own managed agent orchestration layers. And every week another enterprise tells us their infra team wants to "just build it themselves with Claude."
I keep going back and forth. On one hand, three years of development is genuinely hard to replicate - the domain specificity matters. On the other hand, foundation model providers expanding down the stack have killed entire product categories before.
Has anyone else navigated this? Particularly curious whether other vertical AI founders have found a defensible position or are quietly pivoting.
re-thc•57m ago
This sounds more like a symptom than the actual problem. They shouldn't have to context switch. Using LLMs to stitch it together is like adding glue to broken glass.
> And every week another enterprise tells us their infra team wants to "just build it themselves with Claude."
This might not last. Reports of issues with Claude, for example, already keep coming up. Also, any "rewrite" from scratch looks good on the first go. Time will tell.