Perhaps assign a safety driver who puts their own driving license and criminal liability on the line, so the company cannot evade responsibility.
Pushing companies to investigate, take responsibility for, and report these accidents will, overall, improve the reliability of these systems.
The reality is that without strong penalties, these companies won't have the incentive to fix these issues, or they will push them way down the to-do list.
I think the point is you don't know for certain what you hit if you hit and run. The car should have enough collision detection to know when it's hit something.
That said, this story is sending up red flags with the "allegedly" in the title and lack of evidence beyond hearsay.
This is really only true for Waymo, who appear to be the only folks operating at scale who did the work properly. Robotaxi, Cruise and all the others are in a separate bucket and should be statistically separated.
Waymo? How is this ambiguous? Waymo makes the car, writes the software, and operates the vehicle.
https://waymo.com/blog/2025/05/waymo-making-streets-safer-fo...
Still, Waymo should absolutely be held responsible for this and treated as if it were a human driver who hit the cat.
Also note that there is an enormous issue of trust and dignity.
By "trust" I mean: We have seen how data and statistics are created. They are useful on average, but trusting them on very important, controversial topics, when they come from the private entity that stands to benefit from them, is an unrealistic ask for many normal humans.
By "dignity" I mean: Normal humans will not stand the indignity of their beloved community members, family, or pets being murdered by a robot designed by a bunch of techies chasing profit in silicon valley or wherever. Note that nowhere in that sentence did I say that the techies were negligent - they may have created the most responsible, reliable system possible under current technology. Too bad normal humans have no way of knowing if that's the case. Especially humans who are at all familiar with how all other software works and feels. It's a similar kind of hateful indignity and disgust to when the culpable party is a drunk driver, though qualitatively different. The nature of the cause of death matters a lot to people. If the robot is statistically safer, but when it kills my family it's because of a bug, people generally won't stand for that. But of course we don't know why exactly, as observers of an individual accident - maybe the situation was truly unavoidable and a human wouldn't have improved the outcome. The statistics don't matter to us in the moment when the death actually happens. Statistics don't tell us whether specifically our dead loved one would have died at the hands of a human driver - only that the chances are better on average.
Human nature is the hardest thing for engineers to relate to and account for.
I wonder why the title says "allegedly" but the article doesn't?
Self-driving cars are constantly subject to mini-trolley problems. By training on human data, the robots learn values that are aligned with what humans value. -- Ashok Elluswamy (VP AI/Autopilot at Tesla)
If they were using my data, I'd be partly responsible, due to failing to swerve around the last few suicidal prairie dogs I rolled over. I hate when that happens, but I don't attempt high-speed evasions. I would if it were something larger, human or not, out of self-defense. And it's never happened, but I hope I'd stomp the brakes and swerve for a toddler. I'm happy with an autopilot learning that rule set, even though I've lost too many cats under tires.

You probably get more honest answers by presenting a trolley problem and then requiring a response within a second. It's a great implicit-bias probe.