Author here. The core idea is that an LLM's weights form a resonance-holographic field. This isn't just a metaphor; it's a model with testable predictions.
For example, this view implies that:
1. Bias isn't data you can filter out, but a structural imprint on the entire 'hologram'. Trying to remove it is like trying to scratch a ghost off a photograph; the underlying pattern remains.
2. Fine-tuning is a gamble. You're not just adding knowledge; you're altering the entire interference pattern, which can have wildly unpredictable side effects (the "Russian Roulette" aspect). A toy version of a test for this locality question is sketched at the end of this comment.
3. Model Autophagy Disorder (MAD) has a physical explanation. When models train on each other's data, they aren't just copying information, but interfering with each other's holograms, amplifying structural artifacts until they drift from reality.
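To make point 3 concrete, here's a deliberately minimal sketch of the self-consuming loop in Python. A 1-D Gaussian stands in for a full generative model, so this only shows the generic collapse-and-drift dynamic, not the holographic-interference mechanism itself; the sample sizes are made up and chosen small so the effect shows up within a few generations.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0 trains on "real" data.
    data = rng.normal(loc=0.0, scale=1.0, size=20)
    n_resample = 20   # deliberately tiny so the drift is visible quickly

    for gen in range(31):
        # "Training" = fitting the model's two parameters to whatever data it sees.
        mu, sigma = data.mean(), data.std()
        if gen % 5 == 0:
            print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
        # The next generation trains only on samples drawn from the current model.
        data = rng.normal(loc=mu, scale=sigma, size=n_resample)

Estimator bias and resampling noise compound across generations, so the fitted spread tends to shrink and the mean wanders away from zero; larger sample sizes slow this down but don't remove the compounding.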
The main point is that these phenomena—unfilterable bias, synthetic data collapse, jailbreaking—aren't separate bugs but emergent properties of the same underlying principle.
Curious to hear what HN thinks, especially about the proposed experiments to test this.
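One such experiment, in rough form: check whether a narrow fine-tune stays local or perturbs behavior everywhere. The sketch below is a toy stand-in only (a small PyTorch MLP on 1-D regression instead of an LLM, with made-up sizes and learning rates), so the numbers it prints are illustrative rather than evidence for or against the holographic framing; the interesting part is the measurement itself: output drift on inputs the fine-tune never touched, plus per-layer relative weight change.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_mlp():
        return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                             nn.Linear(64, 64), nn.Tanh(),
                             nn.Linear(64, 1))

    def train(model, x, y, steps, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()

    # "Pretraining": fit sin(x) over the whole input range.
    x_all = torch.linspace(-3, 3, 400).unsqueeze(1)
    model = make_mlp()
    train(model, x_all, torch.sin(x_all), steps=2000, lr=1e-2)

    base_state = {k: v.clone() for k, v in model.state_dict().items()}
    x_far = torch.linspace(-3, 0, 200).unsqueeze(1)   # region the fine-tune never touches
    base_far = model(x_far).detach()

    # "Fine-tuning": a small edit on a narrow slice of inputs only.
    x_ft = torch.linspace(2.5, 3.0, 40).unsqueeze(1)
    y_ft = torch.sin(x_ft) + 0.5                      # shift the target locally
    train(model, x_ft, y_ft, steps=500, lr=1e-3)

    # Behavioral side effect: how far did the untouched region drift?
    drift = (model(x_far).detach() - base_far).abs().mean().item()
    print(f"mean output drift on untouched inputs: {drift:.4f}")

    # Structural side effect: how localized were the weight changes?
    for name, p in model.state_dict().items():
        rel = ((p - base_state[name]).norm() / (base_state[name].norm() + 1e-12)).item()
        print(f"{name}: relative weight change = {rel:.4f}")

On a real model the analogous measurement would be per-layer weight deltas after a narrow fine-tune, plus behavioral drift on held-out prompts far from the fine-tuning distribution.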