A feature works. The tests pass. The PR is not huge. The business wants to test it live. Nobody wants to block value delivery because of an architecture concern that may sound abstract in the moment.
But keeping architectural judgment in that loop seems to be getting harder.
AI-assisted development, vibe coding, internal tooling, and better frameworks all reduce the friction of producing code. That is useful. Teams can prototype faster and ship experiments sooner.
The problem is that architectural judgment has not become equally cheap.
The code may work and still make the system worse: duplicated logic, unclear ownership, inconsistent patterns, security gaps, bad boundaries, one-off components that should have been reusable, or features that are hard to remove later.
One option is to force more architecture into code review. But then PRs become slow, frustrating, and full of design debates that are difficult to resolve after the code already exists.
Another option is to merge faster, while making the architecture feedback loop after merge much more explicit. Architecture should already be continuous, but faster code creation may require stronger post-merge mechanisms: reviewing what changed at the system level, checking reuse opportunities, reassessing security assumptions, scheduling refactors, keeping features behind flags, and being willing to disable or rewrite things.
That only works if “refactor later” is an actual process, not a wish.
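As a concrete illustration of the "keep it behind a flag so you can disable or rewrite it" idea: here is a minimal sketch in Python. The flag name, the in-memory store, and the checkout functions are all hypothetical; a real team would likely back this with a config service or a feature-flag platform rather than a module-level dict.

```python
# Hypothetical feature-flag sketch. FLAGS and the checkout
# functions are invented for illustration only.
FLAGS = {"new_checkout": False}  # default off until post-merge review signs off

def is_enabled(name: str) -> bool:
    """Look up a flag; unknown names default to off so a typo fails safe."""
    return FLAGS.get(name, False)

def legacy_checkout_flow(cart):
    return {"total": sum(cart), "path": "legacy"}

def new_checkout_flow(cart):
    return {"total": sum(cart), "path": "new"}

def checkout(cart):
    # The merged-but-unproven code path stays behind the flag,
    # so disabling it is a config change, not a revert.
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

The point is that "refactor later" gets real teeth: if post-merge review finds the new path makes the system worse, it can be switched off in one place while the rewrite is scheduled.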
Has your team changed how it handles architecture as code has become easier to produce? Do you handle this before merge, after merge, or through some continuous review process?
sdevonoes•25m ago