There has never been a significant scientific advancement that did not arise from breaking the categorization or rules of some existing logical system. It is therefore ridiculous to suggest that truth-seeking could produce anything other than incremental advances up to the limits of the current system, which usually end in incoherence. To make significant scientific progress one must question the paradigm itself in a useful way, but that is literally the act of constructing a new, hopefully more isomorphic, mapping or categorization, one that gives rise to a new logical system rather than to more statements within the current one. In other words, you must create statements that are nonsense in the existing paradigm but have meaning and efficacy in the new one.
The foregoing explains why the proponents of a new paradigm are always viewed as nonsensical by adherents of the existing one: the two are essentially speaking different languages that look the same because the same symbols (words) are used with different meanings.
Instead, consider truth to be a goal that is local to a given logical system, not an end in itself. The real end is logical consistency: not because reality is necessarily logical, but because we must be logical in order to understand ourselves and each other.
We should be making AI that seeks logical, predictive fictions that may break our current paradigms. An AI could construct a description of reality in which invisible monkeys are responsible for my experiences, and that would be fine as long as it can tell me the logical properties of the invisible monkeys well enough for me to compute predictions that are then affirmed by experiment. I may jettison the monkeys later because they are only an ontological device, a kind of thinking apparatus.
The output from an AI is an average over various logical systems that are incommensurate, at best based on distances computed in an abstract, incoherent mixture of thought manifolds; but distances between thoughts in different manifolds are meaningless. You can't make a number of such AI agents confer, because they are unknowingly in technocratic silos.
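A toy numerical sketch of that point, assuming we model each agent's "thought manifold" as an independently chosen coordinate system (the random rotations and names here are purely illustrative, not any real model's embeddings): within one space the geometry is meaningful, but a similarity score computed across two spaces is an artifact of their arbitrary bases.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared underlying "concept", embedded by two hypothetical models
# that each picked their own coordinate system (a random rotation).
concept = rng.normal(size=8)
concept /= np.linalg.norm(concept)

def random_rotation(n, seed):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix.
    q, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))
    return q

emb_a = random_rotation(8, 1) @ concept  # "model A" embedding
emb_b = random_rotation(8, 2) @ concept  # "model B" embedding

# Within each model, geometry is preserved: rotations keep the unit norm.
print(np.linalg.norm(emb_a), np.linalg.norm(emb_b))  # both ≈ 1.0

# Across models, the cosine similarity depends entirely on which
# arbitrary rotations were drawn; it carries no information.
print(emb_a @ emb_b)
```

Averaging or comparing `emb_a` and `emb_b` directly, as if they lived in one space, is the incoherent mixture the comment describes.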
Trying to make AGI from human exhaust is metaphorically like reconstructing dozens of jet airliners to be better than they were before they collided in flight, using only the mingled wreckage left on the ground after the crash.
You can tell the scientist who may be on the verge of a major breakthrough precisely because you can't make heads or tails of what they say using the apparatus of thought you are familiar with.
codingdave•3h ago
Just because someone is speaking incoherently does not mean they are transforming our paradigms. Sometimes people are just bonkers.
d4rkn0d3z•3h ago
PaulHoule•2h ago
Look at Einstein's relativity, or the works of Sigmund Freud, who does not at all come across as a 'Freudian'. As a system, 'Freudianism' is a bit discredited (I think of Otto Kernberg railing against homosexuality), but Freud himself refused to do conversion therapy because psychoanalysis could only increase the capacity for love, not diminish it. Therapy has moved on (Rogers, Bowlby, Kohut, Beck), but in Freud you often find zingers (his analysis of sadism) that seem valid outside of his system. With Freud (and even more so Marx), people's objections are often to the content, not the style.
When a group becomes hermetic though it can degenerate to people hearing dog whistles and barking, which makes you a dog.
'Truth' is a dangerous concept in rhetoric and logic because of how it interacts with negation. I was told by a science teacher never to say "to tell the truth…" because it presupposes that I otherwise lie. A 9/11 truther is a lunatic's lunatic. Kurt Gödel showed how much trouble you can get into with a function T(x) that determines the logical truth of a logical statement.
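The trouble with such a T(x) can be sketched via the classic diagonal argument (this is Tarski's undefinability theorem, which builds on Gödel's diagonal lemma; stated informally here):

```latex
\text{Assume a truth predicate } T \text{ is definable, i.e. for every sentence } \varphi:\quad
T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi.

\text{The diagonal lemma yields a ``liar'' sentence } L \text{ with}\quad
L \leftrightarrow \neg T(\ulcorner L \urcorner).

\text{Combining the two:}\quad
T(\ulcorner L \urcorner) \leftrightarrow L \leftrightarrow \neg T(\ulcorner L \urcorner),

\text{a contradiction. Hence no such } T \text{ is definable within the system.}
```

In other words, "truth" for a logical system cannot be captured as a statement inside that same system, which is congenial to the thread's point that truth is local to a paradigm.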
d4rkn0d3z•1h ago
I think Einstein and GR is a perfect example. He simply did what had previously been unthinkable by making time a dynamic variable subject to transformation. He arrived there by fantastical thinking involving riding light beams and whatnot. I could go on about Maxwell, etc. The common theme is not truth-seeking but curating fictions for their predictive capacity.
Truth does seem dangerous, while fiction seems innocuous and extremely useful.