There are certainly a lot of ways to interpret it, but accountability assumes that a person will learn from their mistakes and generally not repeat them. LLMs, even within the same chat session, will do something wrong and confidently claim they did it right, over and over.
NavinF•3h ago
That's not how self-driving cars work. You could look at the car's memory, find a segmentation mask for the cat with an associated probability, and replay the data to see why it allegedly ran over the cat. You can make changes like prioritizing avoidance of blurry cat-like blobs if that's what you wanna do. You can't say the same for the human truck driver who slammed into 37 cats a few weeks ago ("Cats missing after animal rescue group involved in crash that killed 8 people on I-85," Atlanta News First, Oct. 15, 2025).
(xposted my reply to a similar comment on the article)
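A minimal sketch of what that replay-and-retune loop might look like, assuming a logged list of per-frame detections with class labels, confidence scores, and a blur measure. The `Detection` record, field names, and thresholds here are all illustrative assumptions, not any vendor's actual log format:

```python
from dataclasses import dataclass

# Illustrative detection record; real perception logs differ per vendor.
@dataclass
class Detection:
    frame: int          # frame index in the replayed log
    label: str          # classifier label, e.g. "cat"
    confidence: float   # classifier probability for the label
    blur_score: float   # 0.0 = sharp, 1.0 = fully blurred

def replay_cat_detections(log: list[Detection],
                          avoid_threshold: float = 0.5) -> list[Detection]:
    """Replay a perception log and flag every cat-like detection that
    should have triggered avoidance, including blurry low-confidence ones."""
    flagged = []
    for det in log:
        if det.label != "cat":
            continue
        # Lowering the effective threshold for blurry detections is one way
        # to "prioritize avoidance of blurry cat-like blobs": the blurrier
        # the blob, the less confidence we demand before reacting.
        effective = avoid_threshold * (1.0 - 0.5 * det.blur_score)
        if det.confidence >= effective:
            flagged.append(det)
    return flagged

if __name__ == "__main__":
    log = [
        Detection(frame=101, label="cat", confidence=0.31, blur_score=0.8),
        Detection(frame=102, label="trash_bag", confidence=0.77, blur_score=0.1),
        Detection(frame=103, label="cat", confidence=0.62, blur_score=0.0),
    ]
    for det in replay_cat_detections(log):
        print(f"frame {det.frame}: cat p={det.confidence:.2f} "
              f"(blur={det.blur_score:.1f}) should trigger avoidance")
```

The point isn't this particular heuristic; it's that the logged masks and probabilities make the decision inspectable and tunable after the fact, which is exactly what you can't do with the human driver.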