- simulation-heavy environments with strong feedback loops
- experimental AI systems with non-linear couplings
- chaotic or high-entropy subsystems and modules that are sensitive to initial conditions
Each time I formed a mental model of such a system, it had already moved on: it was evolving faster than I could map its structure, especially in large codebases under deadline pressure.
What ended up being more useful under those conditions was not knowing the codebase but controlling it, and thinking in terms of transitions between subsystems.
In practice, this meant shifting focus to the following things:
- boundaries and relations
- state transitions instead of code paths (sketched below)
- perturbing the system deliberately
- imposing hard boundaries
- identifying attractors instead of chasing clean solutions
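To make a couple of these points less abstract, here's a minimal sketch of what I mean by thinking in state transitions and imposing a hard boundary: the transition table, not the call graph, defines what the subsystem is allowed to do. Everything here (`SimState`, `TransitionGuard`, the table itself) is a made-up illustration, not code from any real system:

```python
# Hypothetical sketch: tracking a subsystem as explicit state transitions
# instead of reasoning about code paths. All names are illustrative.
from enum import Enum, auto


class SimState(Enum):
    IDLE = auto()
    WARMUP = auto()
    RUNNING = auto()
    DEGRADED = auto()
    HALTED = auto()


# The boundary is the transition table, not the call graph: anything not
# listed here is treated as a fault, even if the code "works" on the happy path.
ALLOWED = {
    SimState.IDLE: {SimState.WARMUP},
    SimState.WARMUP: {SimState.RUNNING, SimState.HALTED},
    SimState.RUNNING: {SimState.RUNNING, SimState.DEGRADED, SimState.HALTED},
    SimState.DEGRADED: {SimState.RUNNING, SimState.HALTED},
    SimState.HALTED: set(),
}


class TransitionGuard:
    """Hard boundary around a subsystem: refuse illegal transitions loudly."""

    def __init__(self, initial: SimState = SimState.IDLE):
        self.state = initial
        self.history: list[tuple[SimState, SimState]] = []

    def move(self, target: SimState) -> None:
        if target not in ALLOWED[self.state]:
            raise RuntimeError(
                f"illegal transition {self.state.name} -> {target.name}; "
                f"recent history={[(a.name, b.name) for a, b in self.history[-5:]]}"
            )
        self.history.append((self.state, target))
        self.state = target


if __name__ == "__main__":
    guard = TransitionGuard()
    guard.move(SimState.WARMUP)
    guard.move(SimState.RUNNING)
    guard.move(SimState.DEGRADED)
    # guard.move(SimState.WARMUP)  # would raise: not in the table
```

The point is that an illegal transition fails loudly with its recent history attached, which is usually more actionable than a stack trace through code I don't fully understand.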
This isn't an argument against best practices in general; it's an observation from working on unconventional systems. When they become large, stateful, and historically coupled, explanation becomes retrospective.
I've seen this pattern repeat in simulations, neural systems, and production software alike. The versions that survived weren't the ones I fully understood, but the ones whose failure space I had mapped and from which I had derived the most useful invariants.
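And a rough sketch of what "mapping the failure space" meant in the simplest cases: perturb the system deliberately and count which invariants break, instead of tracing why they break. The `step` function and the invariants below are placeholders, not anything from the manual:

```python
# Hypothetical sketch of mapping a failure space: perturb an opaque step
# function with random inputs and record which invariants break, rather
# than trying to understand the step function line by line.
import random


def step(state: float, noise: float) -> float:
    # Placeholder for some opaque, stateful update.
    return state * 1.01 + noise


INVARIANTS = {
    "bounded": lambda s: abs(s) < 1e6,
    "finite": lambda s: s == s and abs(s) != float("inf"),  # catches NaN and inf
}


def probe(trials: int = 1000, horizon: int = 500) -> dict[str, int]:
    """Count how often each invariant breaks under random perturbation."""
    failures = {name: 0 for name in INVARIANTS}
    for _ in range(trials):
        state = random.uniform(-1.0, 1.0)
        for _ in range(horizon):
            state = step(state, random.gauss(0.0, 0.1))
        for name, holds in INVARIANTS.items():
            if not holds(state):
                failures[name] += 1
    return failures


if __name__ == "__main__":
    print(probe())
```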
I wrote up a short field manual collecting these techniques in detail, mainly for incident-mode situations where waiting for clarity isn't an option. Sharing it here in case it's useful for others dealing with similar systems.
Link: https://kolmohorov.gumroad.com/l/tmitwg