I've written a two-part series applying Clark & Chalmers' extended mind thesis (1998) to AI tools.
Part 1 covers the setup: if Otto's notebook meets Clark & Chalmers' criteria for extended cognition (reliably available, easily accessible, automatically trusted, previously endorsed), AI tools exceed it.
Part 2 makes what I think is a novel argument: maintained software actually decays on these criteria over time. Disposable software — regenerated fresh each use — scores higher on reliability and trust.
The implication: the most cognitively reliable tools might be the ones we throw away.
I'm not an academic philosopher (though I did study Wittgenstein 20 years ago), so I'd genuinely welcome critique on whether this argument holds.
mcauldronism•6h ago
https://open.substack.com/pub/mcauldronism/p/where-do-you-en...
https://open.substack.com/pub/mcauldronism/p/the-maintenance...