The remarkable successes of contemporary artificial intelligence (AI) have been
largely driven by empiricism, but the lack of a solid theoretical foundation has
produced deep crises in interpretability, safety, and generalization. This paper
proposes a new mathematical framework called "Dynamic Geometry," aimed at
establishing a rigorous axiomatic basis for AI. The framework reconceptualizes AI
systems as parameter-driven multilayer families of holistic operators on Banach
spaces, whose core dynamics are determined by the evolution of operator spectra
— the so-called "spectral flow." By formalizing representation learning as spectral
projection, relating interpretability to algebraic-topological invariants (such as K-theory
and cyclic homology), linking the realization of "infinite context" with Lyapunov
stability of long-time evolution, and tying safety (e.g., hallucination control)
to verifiable mathematical evidence, Dynamic Geometry provides a fundamental
path to constructing provable, robust, and general AI systems. We prove that
this framework can encompass any finite-parameter deep learning model in terms
of expressive power, and, through an analysis based on spectral stability, we reveal
conditional advantages in its generalization behavior. Dynamic Geometry not only
offers a unified theoretical framework to address the current crises in AI, but may
also open a new era of next-generation AI driven by first principles and rigorously
defined mathematics.
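As a hedged illustration (the symbols $T$, $X$, $\sigma_0$, and $\Gamma$ are generic and not notation taken from the paper) of what "spectral projection" can mean for a bounded operator $T$ on a Banach space $X$: if $\sigma_0$ is an isolated part of the spectrum of $T$ enclosed by a positively oriented contour $\Gamma$, the classical Riesz projection
\[
  P_{\sigma_0}(T) \;=\; \frac{1}{2\pi i}\oint_{\Gamma} (\zeta I - T)^{-1}\,\mathrm{d}\zeta
\]
selects the $T$-invariant subspace $P_{\sigma_0}(T)X$ associated with $\sigma_0$. On this reading, a learned representation would correspond to such a spectrally selected subspace, and the "spectral flow" to the motion of $\sigma_0$ as the model's parameters evolve; the framework's precise formalization may differ.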