This paper presents two realistic 2026 case studies involving AI-mediated representations: one in financial services product communication and one in healthcare symptom triage. In both cases, harm arises not from overt malfunction but from reasonable-sounding language, normative framing, and omission of material context. Internal model reasoning remains inaccessible, yet responsibility attaches to the deploying organization.
chrisjj•1mo ago
> harm arises not from overt malfunction
This of course is only because there can be no "malfunction" in a system that carries the qualifier "AI can make mistakes".