I put together an interactive web essay to formalize a structural problem I see emerging in UI/UX and RLHF: the "Generative Crash."
The premise is that human observation acts as a continuous biological execution of Inverse Reinforcement Learning (IRL): we look at an artifact and try to reverse-engineer the creator's hidden reward function. Latent diffusion models, however, denoise by minimizing structural outliers, replacing idiosyncratic human noise with algorithmically compressible stochastic variance.
Once that causally linked structural deviation collapses, the observer's IRL calculation cannot converge: the hypothesis space diverges, and the brain triggers an autonomic metabolic shutoff to protect its roughly 20-watt baseline, which we experience as sudden apathy or cognitive fatigue.
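As a toy sketch of the convergence claim (my own construction, not taken from the essay or preprint): model the observer's IRL step as fitting a single hidden intent theta to an artifact's features. When every deviation is causally tied to one intent, the fit converges with residuals near the expected noise scale; when the same amount of variance is stochastic filler with no single generating intent, no theta explains it. The variable names, distributions, and noise scale `sigma` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # variation the observer expects around a single hidden intent

def best_single_intent_fit(x):
    """Toy IRL step: find the one hidden intent theta that best explains
    the artifact features, and report the leftover deviation."""
    theta_hat = x.mean()                               # MLE under x_i ~ N(theta, sigma^2)
    residual = np.sqrt(((x - theta_hat) ** 2).mean())  # unexplained deviation
    return theta_hat, residual

theta_star = 1.2
# Human-like artifact: every deviation traces back to one hidden intent.
human = theta_star + sigma * rng.standard_normal(200)
# "Denoised" artifact: comparable overall variance, but it is stochastic
# filler drawn independently of any single intent.
filler = rng.uniform(-3, 3, 200) + sigma * rng.standard_normal(200)

_, r_human = best_single_intent_fit(human)
_, r_filler = best_single_intent_fit(filler)
print(r_human, r_filler)  # filler residual far exceeds sigma: no theta converges
```

In this caricature, "the hypothesis space diverges" shows up as a best-fit residual far above the expected noise floor: the observer's model of a single coherent intent simply fails to account for the variance it sees.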
I wrote this as a web-native interactive essay because I'm proposing a specific cognitive affordance for UI (the Ghost Scale) that explicitly broadcasts intent density and relieves this cognitive friction.
Interactive web essay: https://abrahamhaskins.org/art
Formal mathematical preprint (Zenodo): https://doi.org/10.5281/zenodo.19407790
I would greatly value critiques from the ML folks here on the boundary conditions of the diffusion math, or thoughts on applying explicit UX constraints to alignment.