For too long, LLMs have been treated as unpredictable black boxes. This framework redefines inference as a dissipative dynamical system, providing a "thermometer" for AI stability and a mathematical blueprint of thought flow.
Comments
Kuruitinoji•1h ago
Main Content: We propose an observational method to dynamically extract the "Floating Equilibrium Point (FEP)" hidden behind the stochastic token generation process. By treating the internal state transitions of Large Language Models (LLMs) as a system mathematically isomorphic to the differential equation of an RC low-pass filter, we can separate the essential semantic trajectory from statistical fluctuations (sampling noise).
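A minimal sketch of the filtering step, assuming the claimed isomorphism reduces to the standard first-order RC update applied to some scalar observable of the generation process (e.g., per-token log-probability); the function name and the choice of observable are illustrative, not from the original framework:

```python
def rc_lowpass(samples, alpha=0.1):
    """Discrete RC low-pass filter: returns the smoothed trajectory,
    an estimate of the floating equilibrium behind noisy samples.
    alpha plays the role of dt / (RC + dt) in the discretized
    equation dV/dt = (V_in - V) / RC."""
    fep = samples[0]
    trajectory = [fep]
    for x in samples[1:]:
        fep += alpha * (x - fep)  # relax toward the new observation
        trajectory.append(fep)
    return trajectory
```

With a small `alpha` (large RC time constant), token-level sampling noise is averaged out while slow drifts of the equilibrium point remain visible.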
Using this framework, we can quantitatively measure internal states and have successfully observed phenomena such as Preference Mode Collapse (PMC) and Context Rigidity in real-time.
To establish this diagnostic technique, we define "Information Viscosity," based on the token rejection rate, as a form of physical resistance. We have formalized these behaviors into a complete mathematical framework for your review.
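The comment defines Information Viscosity via the token rejection rate but gives no closed form, so the mapping below is purely an illustrative assumption (viscosity diverging as rejections dominate, by analogy with flow resistance), not the authors' formula:

```python
def information_viscosity(rejected, proposed, eta0=1.0):
    """Hypothetical viscosity metric: eta0 / (1 - r), where r is the
    fraction of proposed tokens that were rejected. Grows without
    bound as the rejection rate approaches 1 (total rigidity)."""
    if proposed == 0:
        raise ValueError("no proposals observed")
    r = rejected / proposed
    return eta0 / (1.0 - r) if r < 1.0 else float("inf")
```

Under this sketch, a fluid context (few rejections) sits near the baseline `eta0`, while "Context Rigidity" would register as a sharp rise in the measured viscosity.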