Automated decision systems will probably have smaller margins of error, but who's responsible for a misdiagnosis or miscommunication? It wouldn't be fair to place the entire burden of a mistake on a doctor if the tools they're advised to use are also fallible. Increased AI use in medical settings will likely produce a new insurance model, one where developers and deployers may have to share some of the liability. Why shouldn't that be the case?
tene80i•19m ago
Couldn't you have a system where the AI recommends a diagnosis but a doctor has to be the one to actually make it? Then you'd have expert oversight and accountability, backed up by tooling that can help with the doctor's blind spots.
bookofjoe•1h ago