If we trade understanding for optimization, we begin to lose interest in explanations of how the world works. And if we lose the desire for understanding, we have little left to say about how the world should be governed, how it ought to be, or what the good is. If AI gives us instant answers and an unearned certainty about the world, we lose what Eran Fisher calls the emancipatory interest needed to defend liberal institutions.[10] For him, if our primary goal becomes AI-derived, optimized knowledge, we learn to see “freedom” as coming from outside of us rather than from an internal drive toward self-determination or self-governance. We become indifferent to the fact that AI might obscure causality or nuance and thus make it difficult to understand how it arrives at its results. We won’t care, because we won’t need other humans to help us understand the world. If the algorithm identifies “other” citizens as “threats to the state,” we may lose the desire to challenge the algorithm’s logic, since we neither understand it nor care to understand its rationale.