This repo contains a proof-of-concept experiment ("The Lazarus Effect"):

1. I train a network until convergence.
2. I destabilize it by inducing catastrophic forgetting.
3. I restore accuracy NOT by retraining on data, but by applying a stability operator to the network's recursive dynamics.
This suggests that memory can be treated as a stability parameter rather than just as stored information. I'd love your feedback on the code and the approach.
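To make the third step concrete, here is a minimal toy sketch of what "applying a stability operator" could look like. It assumes the network's recursion is modeled as a linear map `x_{t+1} = W x_t` and that the operator rescales `W` so its spectral radius drops below 1 (a standard way to make such a recursion contractive); the names `spectral_radius` and `stabilize` and this choice of operator are illustrative assumptions, not necessarily the repo's actual method.

```python
import math

def mat_vec(W, x):
    # Multiply matrix W (list of rows) by vector x.
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

def spectral_radius(W, iters=200):
    # Power-iteration estimate of the magnitude of W's dominant eigenvalue.
    x = [1.0] * len(W)
    for _ in range(iters):
        x = mat_vec(W, x)
        norm = math.sqrt(sum(v * v for v in x))
        x = [v / norm for v in x]
    y = mat_vec(W, x)
    return math.sqrt(sum(v * v for v in y))

def stabilize(W, target=0.9):
    # Hypothetical "stability operator": rescale W so its spectral radius
    # becomes `target` < 1, making the recursion x_{t+1} = W x_t contractive.
    r = spectral_radius(W)
    return [[w * target / r for w in row] for row in W]

# A "destabilized" recurrent map: spectral radius > 1, so iterates diverge.
W_bad = [[1.2, 0.3],
         [0.1, 1.1]]
W_fixed = stabilize(W_bad)
print(round(spectral_radius(W_bad), 2), round(spectral_radius(W_fixed), 2))
```

The point of the sketch is that the repair acts on the dynamics (the map's contractivity), not on the data: no gradient steps or training examples are involved in `stabilize`.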