Teams are smaller, release cycles are tighter, and AI is sneaking into a lot of workflows.
I’m curious what people here are actually relying on these days to keep things from breaking:
- What layers are in your stack? (types/linters, unit, contract, integration, E2E, monitoring, flags, SLOs, etc.)
- Is AI playing a real role for you yet? Test gen, self-healing tests, triage, anomaly detection?
- Anything you dropped recently because it wasn’t worth the effort? (flaky UI tests, snapshot tests, staging envs…)
- For smaller teams, do you still bother with classic QA, or do you lean more on flags/observability/canaries? (Rough sketch of what I mean by that just below.)
- Anyone tried managed or AI-assisted QA instead of DIY? Curious if it actually worked, esp. around trust/cost/lock-in.
- How do you measure “confidence to release” beyond code coverage?
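To make the flags/canaries question concrete, here's roughly the pattern I have in mind. This is a hand-rolled sketch, not a real library: `inCohort`, `shouldHaltRollout`, and every threshold and percentage in it are made up for illustration.

```typescript
// Hypothetical flag + canary sketch. None of these names come from a real
// library; thresholds and percentages are invented for illustration.

type Flag = {
  name: string;
  rolloutPercent: number; // 0-100: share of users routed to the new path
};

// Deterministic bucketing: a given user always lands in the same cohort,
// so the canary population stays stable while we watch the metrics.
function inCohort(userId: string, flag: Flag): boolean {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 100 < flag.rolloutPercent;
}

// Post-deploy safeguard: halt the rollout if the canary cohort's error
// rate drifts well past the baseline (the 1.5x + 1pp margin is arbitrary).
function shouldHaltRollout(canaryErrorRate: number, baselineErrorRate: number): boolean {
  return canaryErrorRate > baselineErrorRate * 1.5 + 0.01;
}

const checkoutV2: Flag = { name: "checkout-v2", rolloutPercent: 5 };

function handleCheckout(userId: string): "v1" | "v2" {
  // New path for the canary cohort, known-good path for everyone else.
  return inCohort(userId, checkoutV2) ? "v2" : "v1";
}
```

The `shouldHaltRollout` check is also part of how I'd answer my own last question: a canary cohort whose error rate stays inside the baseline band tells me more about "confidence to release" than a coverage number does.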
Would love to hear quick snapshots like:
- team size / release cadence
- stack (web, mobile, regulated or not)
- pre-merge checks
- post-deploy safeguards
- tools you kept vs abandoned
- biggest source of flakiness right now
- what you’d do differently if starting today
Looking for real, on-the-ground stories from folks shipping in 2025. What’s working for you?