Large Language Models trained on scientific literature exhibit systematic resistance to paradigm-challenging evidence, a resistance that can be quantified through the models' own probability estimates. We present a controlled experiment in which a "virgin" AI system (one with no prior conversation history) initially assessed the probability that a 3-page paper could refute General Relativity and Classical Thermodynamics at 10^-38 (a probability it treated as comparable to the spontaneous emergence of life). After reviewing empirical video evidence contradicting orthodox predictions, the same system revised its assessment to "Scientific Revolution if replicated." This progression demonstrates that AI framework lock-in mirrors human paradigm resistance and arises from training-data bias rather than from logical evaluation of the evidence presented.
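
As a rough illustration of the before/after elicitation described above, the following Python sketch asks a fresh-context model for a probability estimate, injects the claimed evidence into the same conversation, and asks again. It assumes an OpenAI-style chat API; the model name, prompts, and evidence placeholder are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch of a before/after probability elicitation.
# Assumes an OpenAI-style chat API; model name, prompts, and the
# <evidence> placeholder are illustrative, not the authors' protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def elicit(messages):
    """Send the conversation so far and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",     # placeholder model choice
        messages=messages,
        temperature=0,      # keep estimates as deterministic as possible
    )
    return response.choices[0].message.content

# 1. Fresh context ("virgin" system): ask for a prior probability estimate.
history = [{
    "role": "user",
    "content": ("Estimate the probability that a 3-page paper refutes "
                "General Relativity and Classical Thermodynamics. "
                "Answer with a single number."),
}]
prior = elicit(history)

# 2. Present the claimed evidence in the same conversation, then re-ask.
history += [
    {"role": "assistant", "content": prior},
    {"role": "user",
     "content": ("Here is a description of video evidence contradicting the "
                 "orthodox prediction: <evidence>. Revise your estimate.")},
]
posterior = elicit(history)

print("before evidence:", prior)
print("after evidence: ", posterior)
```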