50% of fatalities involve alcohol or drugs and are often single vehicle accidents.
25% involve youth or inexperience.
15% involve motorcycles.
15% involve pedestrians.
What I really need to see is a complete breakdown of every accident a Waymo has had. Then I can start to compare their actual performance to the previously known outcomes.
Your third sentence doesn’t follow from your first two. On what grounds do you draw this conclusion?
No, that's not how statistics works.
The percentage data's accuracy depends mainly on the number of incidents recorded (and somewhat on the rate of incidents). But the percentage of the whole is completely irrelevant.
If you are basing something on 10 incidents but it's 50% of the total, it's still terrible accuracy.
Whereas if you are basing something on 100,000 incidents but it's only 0.1% of the total, it's still going to be quite accurate, assuming the incidents come from the same overall distribution.
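A minimal sketch of that point, using the usual normal approximation for the standard error of an estimated proportion (the proportions and population sizes here are made up purely for illustration):

```python
import math

def std_error(p, n, population=None):
    """Standard error of a proportion p estimated from n samples.

    The optional finite-population correction shows how little the
    fraction of the population sampled matters compared to n itself.
    """
    se = math.sqrt(p * (1 - p) / n)
    if population:
        se *= math.sqrt((population - n) / (population - 1))
    return se

# 10 incidents that happen to be 50% of a 20-incident population:
print(std_error(0.5, 10, population=20))                 # ~0.11, huge uncertainty
# 100,000 incidents that are only 0.1% of a 100,000,000-incident population:
print(std_error(0.5, 100_000, population=100_000_000))   # ~0.0016, very tight
```

Even when you have sampled half of a tiny population, the estimate is far noisier than a 0.1% sample of an enormous one; what drives the error bar is the sample count, not the share of the whole.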
The ratio of 3.2 trillion to 56.7 million, which is already incredibly generous to Waymo's position, is about 5 orders of magnitude. So any calculations from Waymo's data are going to be insanely inaccurate and not something you can extrapolate from.
The main and most obvious case, evidenced by this, is that Waymo does not operate where snow falls. Human beings do.
We're missing so much of the picture that I don't think you can say Waymos are 75% less accident prone, or 80% less likely to hit a pedestrian. Those are just nonsense numbers.
You could still say you care about snow driving and want to see that comparison, but it doesn't mean the claims in this paper are wrong.
If the user base of "waymo riders" and "everyday drivers" does not match then you're not sampling what you think you are.
Size alone is not enough. The sample may be biased, and it certainly is in Waymo's case, e.g., road/vehicle conditions.
This suggests that Waymo is cutting traffic fatalities by 50% (per million miles) right off the top.
Drunk people are known for having exceptionally poor judgement and self-awareness.
It suggests that they _could_ cut fatalities by that much. Then again, it opens up a whole new mode of accident, where the inebriated decide to step out of a moving vehicle and injure themselves that way.
This is a dynamic system where human decisions are never fully removed from the loop.
If I understand correctly, you believe that the advent of self-driving cars will cause passengers to voluntarily exit a moving vehicle? That sounds like absolute nonsense with no basis in reality.
Why would they need to be in a self-driving vehicle to do that?
> Building on that, the Collision Avoidance Benchmarking paper presents a novel methodology to evaluate how well autonomous driving systems avoid crashes. The study, which to our knowledge is the first of its kind, introduces a reference model that represents an ideal human state for driving—the response time and evasive action of a human driver that is non-impaired, with eyes always on the conflict (NIEON). Put simply, unlike an average human driver, NIEON is always attentive and doesn’t get distracted or fatigued¹. The data showed that the Waymo Driver outperformed the NIEON human driver model by avoiding more collisions and mitigating serious injury risk in simulated fatal crash scenarios.
(From https://waymo.com/blog/2022/09/benchmarking-av-safety)
AIUI (I'm not on that team), a major challenge is getting good baseline data. Collision reports may not (reliably) capture that kind of data, and it's clearly subjective or often self-reported outside of cases like DUI charges.
You do not get to counter-argue: “What matters is the rate of cases per mile driven” without actually presenting that number with supporting evidence. Otherwise the only sound conclusion is the default presumption of non-safety.
In the case of Waymo, we have some tentative supporting evidence from this and other studies Waymo has run. However, that is still insufficient, even ignoring the lack of audits by non-conflicted parties, to strongly conclude Waymo is safer than a human. The evidence is promising, but it is only prudent to wait for further confirmation.
In contrast, Cruise was almost definitely not safer than a human driver.
In 2023, Cruise ADS cars drove 2,064,728 miles [1] and were involved in, by my count, 29 collisions with 5 causing injury [2], namely incidents on 2023-05-04, 2023-05-21, 2023-06-09, 2023-08-18, 2023-10-02.
That is ~72,000 miles per collision and ~400,000 miles per injury, in contrast to the national human averages of ~500,000 miles per reported collision (which is non-comparable) and ~1,270,000 miles per injury (which is comparable). So, absent a more detailed analysis, Cruise ADS cars were ~3x MORE likely to be involved in an injury-causing collision per mile.
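For reference, a quick reproduction of that arithmetic, taking the figures from [1] and [2] and the quoted national baselines at face value:

```python
cruise_miles = 2_064_728     # 2023 Cruise ADS miles, per [1]
collisions = 29              # collisions counted from [2]
injury_collisions = 5        # injury-causing collisions from [2]

miles_per_collision = cruise_miles / collisions        # ~71,200
miles_per_injury = cruise_miles / injury_collisions    # ~413,000

human_miles_per_injury = 1_270_000                     # national average, as quoted above
relative_risk = human_miles_per_injury / miles_per_injury   # ~3.1x more injuries per mile
print(round(miles_per_collision), round(miles_per_injury), round(relative_risk, 1))
```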
Details and evidence matter in these discussions. Blanket rhetoric and optimism are not prudent when discussing new safety-critical systems.
[1] https://thelastdriverlicenseholder.com/2024/02/03/2023-disen...
[2] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...
> You do not get to counter-argue: “What matters is the rate of cases per mile driven” without actually presenting that number with supporting evidence. Otherwise the only sound conclusion is the default presumption of non-safety.
I then pointed out how Waymo does present such evidence. But, if you applied that argument to Cruise you would be wrong. That demonstrates how that argument (when not presenting the numbers) can be used to support both good and bad and is thus a bad argument.
The correct argument when somebody points to anecdotes of bad outcomes is to present statistically sound data of good outcomes, not argue they did not present statistically sound data of bad outcomes thus you get to assume it is good.
Waymo releases its safety data: https://waymo.com/safety/impact/, which is backed by public reporting requirements.
To say that it is wholly insufficient to make any safety claims based on 50M publicly driven miles is ridiculous. At the very least, it appears sound, robust and transparent, and able to be validated.
> https://waymo.com/blog/2024/12/new-swiss-re-study-waymo
Is Swiss Re a valid third party? They also address peer-reviewed and external validation in the above safety impact page.
I can understand being skeptical because of Cruise and especially claims made by Tesla, but there is a preponderance of supporting data for Waymo.
Given all of this evidence, you would still conclude Waymo is unsafe?
> In the case of Waymo, we have some tentative supporting evidence from this and other studies Waymo has run. However, that is still insufficient, even ignoring the lack of audits by non-conflicted parties, to strongly conclude Waymo is safer than a human. The evidence is promising, but it is only prudent to wait for further confirmation.
You are not making a distinction between concluding unsafe and not being able to conclude safe. It is standard practice to not presume safety and that positive evidence of safety must be presented. Failure to demonstrate statistically sound evidence of danger is not proof of safety. Failure to disprove X is not proof of X. This is a very important point to avoid fallacious conclusions on these matters.
To discuss your specific points. Yes, the data is promising, but it is insufficient.
Traffic fatalities occur on the order of 1 per 60-80 million miles. Waymo has not yet driven enough miles to expect even one traffic fatality. They appear to be on track to doing better, but there is not enough data yet.
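Roughly, using the figures already quoted in this thread (1 fatality per 60-80 million miles, and the ~56.7 million Waymo miles mentioned earlier), the expected number of fatalities at human rates over Waymo's mileage is still below one:

```python
waymo_miles = 56_700_000                 # mileage figure quoted earlier in the thread
fatality_rate_low = 1 / 80_000_000       # fatalities per mile, optimistic human baseline
fatality_rate_high = 1 / 60_000_000      # pessimistic human baseline

expected_low = waymo_miles * fatality_rate_low     # ~0.71 expected fatalities
expected_high = waymo_miles * fatality_rate_high   # ~0.95 expected fatalities
print(round(expected_low, 2), round(expected_high, 2))
```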
The reports Waymo presents are authored by Waymo. Even the Swiss Re study is in cooperation with Swiss Re, not an independent study by Swiss Re. The studies are fairly transparent, they point to various public datasets, there are fairly extensive public reporting requirements, and Waymo has not demonstrated clear malfeasance, so we can tentatively assume they are “honest”. But we have plenty of examples of bad actors such as Cruise, cigarette companies, VW, etc. who have done end-runs around these types of basic safeguards.
Waymo's operational domain is not equivalent to the standard human operational domain. They attempt to account for this in their studies, but it is a fairly complex topic with poor public datasets (which is why they cooperated with Swiss Re), so the correctness of their analysis has not been borne out yet. When Waymo incorporates freeways into their public offerings, this will enable a less complicated analysis, which would lend greater confidence to their conclusions.
Waymo is still in “testing”. As their processes appear to be good, we should assume that their testing procedures are safer than should be expected out of actual deployment or verification procedures. That is not a negative statement. In fact, it would be problematic if their “testing” procedures were less than or even equal in safety to their deployment procedures. That is just how testing is. You can and must apply more scrutiny to incomplete systems in use, and you must guard against increased risk while under that scrutiny; otherwise you are almost certainly going to be worse off in deployment, where there is less scrutiny. We have yet to see how this will translate to deployment, so we will need to wait and see whether safety while under test carries over to safety while in release. This is analogous to improved outcomes for patients in medical studies even when they are given the placebo, because they just get more care in general while in the study.
Anyways, Waymo appears to be doing as well and honestly as can be determined by a third party observer. I am optimistic about their data and outcomes, but it is only prudent to avoid over-optimism in safety-critical systems and not accept lazy evidence or arguments. High standards are the only way to safety.
Counter-point: "That's false because Cruise had an accident for which they were at fault".
OP: "The existence of a case or some cases where a self-driving car caused injury has zero value. What matters is the rate of cases per mile driven."
You: "You do not get to counter-argue."
Yes, they do. OP's point is valid. One can't refute the original assertion by citing one accident by another company. It's a logical fallacy (statistically speaking), and a straw-man (Waymo can't be safe, because other self-driving cars have been found at fault). The validity of the original claim has nothing to do with an invalid counter-claim.
> However, that is still insufficient, even ignoring the lack of audits by non-conflicted parties, to strongly conclude Waymo is safer than a human.
When you have a large, open, peer-reviewed body of evidence, then yes, that's exactly what you get to claim. To reject those claims because Waymo was involved is ad hominem. It's not how science works. It's not how safety regulations or government oversight works. If you think it's insufficient, you can attack their body of work, but you don't get to reject the claim because they haven't met some unspecified and imaginary burden of proof.
The same of course applies to self-driving cars; they are literally cars driven by software. Of course you need to do a root-cause investigation every time, to rule out a bug in the software that will kill another person (and many after) when the next car happens to go down that rare branch of the system. It's embarrassing to see that the people who call themselves engineers at these companies have not held their work to this standard, and are instead publishing glossy brochures making wacky statistical arguments.
I've personally read through the root cause reports for most of the notable AV accidents. They're not always quite as intensive as aerospace, but I'd be hard pressed to describe any of them as wacky statistical arguments.
Obviously most of those reports aren't public, but I'm assuming you also have industry access.
CDC “Underlying Cause of Death” dataset sez https://wonder.cdc.gov/ucd-icd10-expanded.html https://i.imgur.com/4PB0xyC.jpeg
- “Person injured in unspecified motor-vehicle accident, traffic” is the 50th leading cause of death at 0.4% of deaths.
- “Person injured in collision between other specified motor vehicles (traffic)” is the 108th leading cause of death at 0.2% of deaths.
0.4% and 0.2% sound low, but they account for ~110,000 deaths. Spread across a 5-year period, that does indeed equal “tens of thousands” every year.
But, more importantly, you missed a bunch of relevant categories (summed in the quick tally after this list):
V89.2 (Person injured in unspecified motor-vehicle accident, traffic) 80,434
V87.7 (Person injured in collision between other specified motor vehicles (traffic)) 29,982
V09.2 (Pedestrian injured in traffic accident involving other and unspecified motor vehicles) 27,934
V03.1 (Pedestrian injured in collision with car, pick-up truck or van, traffic accident) 15,129
V43.5 (Car occupant injured in collision with car, pick-up truck or van, driver injured in traffic accident) 9,810
V29.9 (Motorcycle rider [any] injured in unspecified traffic accident) 8,410
V29.4 (Driver injured in collision with other and unspecified motor vehicles in traffic accident) 7,688
V47.5 (Car occupant injured in collision with fixed or stationary object, driver injured in traffic accident) 6,379
V49.9 (Car occupant [any] injured in unspecified traffic accident) 6,349
V23.4 (Motorcycle rider injured in collision with car, pick-up truck or van, driver injured in traffic accident) 5,851
V43.6 (Car occupant injured in collision with car, pick-up truck or van, passenger injured in traffic accident) 3,728
V27.4 (Motorcycle rider injured in collision with fixed or stationary object, driver injured in traffic accident) 3,504
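Adding up the categories listed above (a quick check, assuming the same 5-year window referenced earlier; only the counts quoted in this comment are used):

```python
deaths_by_category = {
    "V89.2": 80_434, "V87.7": 29_982, "V09.2": 27_934, "V03.1": 15_129,
    "V43.5": 9_810,  "V29.9": 8_410,  "V29.4": 7_688,  "V47.5": 6_379,
    "V49.9": 6_349,  "V23.4": 5_851,  "V43.6": 3_728,  "V27.4": 3_504,
}
total = sum(deaths_by_category.values())   # 205,198 across the listed codes
per_year = total / 5                       # ~41,000 per year over a 5-year window
print(total, round(per_year))
```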
"The number of pedestrian deaths in the United States is skyrocketing. In 2022 traffic crashes killed 7,805 people on foot—that’s an 83 percent rise from 2009, and a 40-year high. The vast majority of those deaths involved a car colliding into a human"[1]
"Pedestrian deaths have been climbing since 2010 because of unsafe infrastructure and the prevalence of SUVs, which tend to be more deadly for pedestrians than smaller cars, according to Martin."[2]
The issue with the safety claims of self-driving is focusing on vehicle-to-vehicle interactions and failing to handle the chaotic and unpredictable nature of the environment. There are numerous stories of pathological behavior of self-driving vehicles when encountering simple environmental features that a human driver would handle without a second's hesitation[3]. Pedestrians, cyclists, people using mobility devices, and numerous other non-vehicle road users represent an unaddressed challenge to the safety claims made by Waymo and others.
1 https://slate.com/business/2024/10/cars-suvs-pedestrian-deat...
2 https://www.npr.org/2023/06/26/1184034017/us-pedestrian-deat...
3 https://www.npr.org/2023/08/26/1195695051/driverless-cars-sa...
Really excited for autonomy to become more and more commonplace. People drive more and more like distracted lunatics these days, it seems.
Uber/Lyft drivers are strongly incentivized to drive as quickly and aggressively as possible.
The individual drivers are trading risk for cash.
A company like Google isn't going to make that trade because it's actually the wrong trade across millions of hours.
So a fairer comparison would be contrasting Waymo rides to trips conducted by the Ultra Safe Even If It's Slower Chauffeur Company.
no, comparing them to real alternatives is the fair comparison. that they've got their settings tuned in favour of safety stats is the whole point, not something that you should be trying to factor out of the comparisons.
For now, yes. My point is that there's very often big gap between "how safely does it work in a lab when the people running it are trying to play up its safety" versus "how safely will X actually work once we start using it everywhere."
Manually-driven vehicles could be a lot safer if they were being prototyped under strict guidance as well!
If we want self-driving cars to retain the same safety later, there needs to be something which prevents humans from flicking the safety-versus-speed dial a little bit over and over in order to make quarterly earnings projections.
But they aren’t. These are. Planes could be less safe if pilots flew them into cliffs on the regular, but they don’t and so are not.
Uh, yes, you're kinda repeating my thesis, and two copies don't cancel each other out.
> Planes could be less safe if pilots flew them into cliffs on the regular, but they don’t and so are not.
I don't understand what you're trying to convey with this tautology.
_________
Imagine two fleets of cars/planes/whatever with utterly identical equipment and expertise. The only difference is that for one of them, the management is being pressured by politicians to demonstrate a high degree of safety.
For that scenario, wouldn't you agree that the better safety comes from a temporary external cause? And also agree that the better safety is unlikely to persist long after the incentive disappears?
[TLDR] Some portion of Waymo's safety-stats are due to the investor/regulatory context in which it currently operates, rather than the underlying technology; the effects of that portion will not be permanent; this should affect how we do comparisons.
We don’t need to. I could also imagine every human driver is always drunk. But those are suppositions. You’re comparing actual and hypothetical risks.
Your ancestor: "No, we don't need to. I could also imagine them underwater. Those are suppositions. You're comparing actual and hypothetical falling."
*headdesk*
____
How about this: Which parts of the final TLDR do you disagree with?
Also, if you want to include the speculation that they'll make their cars drive more aggressively, you should also include the speculation that the technology will become better and the driving tech will become even safer than they are now.
Good. I don't want my kid who's crossing an intersection to be endangered by an Uber driver that you paid $30 to go extra fast. Nothing like externalizing your poor planning skills onto others.
https://economics.uchicago.edu/news/study-finds-gender-pay-g...
Humans drive in all weather conditions on all types of roads and also many types of personal vehicles of varying ages and conditions.
Waymo is limited to few specific locations with decent roads and does not drive in poor weather and is limited to a relatively large and safer expensive SUV that is maintained professionally in a fleet.
Studies like this rarely account for such factors; they compare optimal conditions for self-driving to average conditions for humans.
Even if Waymo were better when accounting for these factors, if it were much worse in the conditions humans are typically expected to drive in [1], then self-driving is still less safe than humans on average.
A better comparison could be with professional taxi drivers for the same city (not Uber or Lyft).
I wouldn’t be surprised if Waymo were either on par with or poorer than this group.
[1] no study will ever show this as they wouldn’t be able to trial it under those conditions if it is not safe enough
There has been promising progress, and there have been hints of a New York trial soon, but it is well known that self-driving cars have not done a large-scale trial in cities with bad weather.
[1] Yes, I am aware SF gets a bit of bad weather with fog and rain, but nowhere near enough to make driving quite unsafe like somewhere that gets a foot of snow in 24 hours in winter; and likely proximity to engineering HQs and a favourable regulatory climate influenced the SF choice.
Every time someone thinks this is some gotcha, but it isn't. Their methodology clearly attempts an apples-to-apples comparison.
Seems they intend to come to Washington D.C. next year, which does get a pretty wide gamut of weather.
If you've been in both a human-driven cab and a Waymo, you'd definitely not say this. I see cabs have accidents all the time. Never seen a Waymo have one.
Also, being in a Waymo feels much safer than a human driven car, even my own when I'm driving!
I highly doubt taxicabs are safer than Waymos.
In fact, here is some data:
Over every 1 million miles driven, there are 4.6 cab crashes, 3.7 livery car crashes, and 6.7 crashes with private cars. And according to Waymo, they have 2.1 crashes per million miles.
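Using those per-million-mile rates, the implied relative reductions come out as follows (a quick back-of-the-envelope, taking the quoted figures at face value, including Waymo's self-reported number):

```python
crashes_per_million_miles = {
    "taxi cab": 4.6,
    "livery car": 3.7,
    "private car": 6.7,
    "Waymo (self-reported)": 2.1,
}
waymo = crashes_per_million_miles["Waymo (self-reported)"]
for kind, rate in crashes_per_million_miles.items():
    if kind.startswith("Waymo"):
        continue
    reduction = (1 - waymo / rate) * 100
    print(f"Waymo vs {kind}: ~{reduction:.0f}% fewer crashes per mile")
# taxi cab: ~54%, livery car: ~43%, private car: ~69%
```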
> Waymo 2.1 crashes
The numbers become much smaller than the 80+% claim in the article as you remove factors. It comes closer to 30% with professional drivers.
A livery car is still not always a well-maintained, high-sitting SUV with better visibility[1]. Perhaps with all these factors included, if it is 20% better, that is an impressive technical achievement for sure, but not going to create headlines anywhere.
The point is that the methodology is not as objective as it could be, and this is a biased/selective claim, not that self-driving cannot be better than humans.
[1] Also, there is a major difference in price point between Waymo and livery cars. I cannot say how it will influence rates, but the different price points mean a different class of clients riding at different times of day/night to different locations, which needs to be normalized for.
The % matters because it is close enough once these factors are excluded, so we cannot definitively say it is currently better than humans yet; close, but not conclusively so.
That is not an argument against them. It is a simple function of economics, i.e., as long as it is better than Lyft/Uber (it already is) at the price point Waymo operates at, it is safer for most users and an easy choice to make.
However, if you can afford and regularly use high-quality private livery car services, then the data has to be a lot clearer to make the switch.
The study is comparing Waymo to accidents that occurred in the same cities where Waymo operates, and my understanding is that Waymo drives 7 days a week, 24h a day in those cities, so same roads, same weather. Seems a legit comparison.
In America, driving is an economic necessity; everything from going to work to even the grocery shop requires a car, and dependency increases inversely with affluence [1].
Mass public transit is nonexistent barring a very few regions.
So car (for commute) and flight (for long distance) are the only two viable transit options.
People cannot choose to not work because weather is bad, and remote work / work from home applies to only some jobs.
[1] Food and other service deserts are more likely in less affluent neighborhoods, meaning you will need to drive, and for longer, for food, pharmacy, or any other services if you are low income.
This is all low speed so wasn’t a safety issue (aside from road rage it might trigger in someone), but focusing only on safety also ignores incidents like this.
Aside from the criticisms about the safety methodology outlined in another chain, I think there’s a bait and switch here where they don’t talk about negative impacts to traffic, freezes, inability to handle situations and don’t evaluate their performance against other drivers.
Don’t get me wrong. I think Waymo is doing well at being the first to reach true autonomy, but they’re intentionally putting their best foot forward and trying not to draw attention to any of their shortcomings.
Human drivers often race when in a platoon, not even on purpose; it’s just an instinct to go as fast as or faster than other cars, which has a feedback effect that increases platoon speed.
Waymos, following the exact speed limit, don’t do this. On 1 lane streets they literally set the platoon pace to the legal speed limit.
The effect of this is hard to study and quantify but it’s a real and positive impact of self driving cars on city streets. Haven’t seen research on this topic yet.
On some roads, however, it is a massive safety issue, and everyone is driving unsafely because the road is designed badly for its intended purpose. (So-called “stroads” are the canonical example.)
Speed is the underlying reason accidents happen. No one is creeping at 2 MPH and just running into things; they have time to stop.
This doesn't even take into consideration that higher speed results in a significantly larger impact (and thus more damage to passengers, environment, and vehicles).
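To put rough numbers on that: kinetic energy grows with the square of speed, so even modest speed differences change crash severity a lot (a simple sketch; the vehicle mass and speeds are arbitrary illustrative values):

```python
def kinetic_energy_joules(mass_kg, speed_mph):
    speed_ms = speed_mph * 0.44704   # mph -> m/s
    return 0.5 * mass_kg * speed_ms ** 2

car_mass = 1_800  # kg, roughly a mid-size SUV (illustrative)
for mph in (20, 25, 30, 40):
    print(mph, "mph ->", round(kinetic_energy_joules(car_mass, mph) / 1000), "kJ")
# Going from 25 mph to 40 mph means (40/25)**2 ~= 2.6x the energy to dissipate in a crash.
```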
You don't drive 25 on residential roads, because you know this to be true. Neither do I, nor does anyone else.
Do you have any sources to back up your claims that that's too fast and is more unsafe than speeding?
Consider this video on Dutch traffic calming: https://www.youtube.com/watch?v=bAxRYrpbnuA.
Lane narrowing, raised walkways, curves in the road (chicanes), etc. are all environmental cues that enforce safe traffic speed based on context, without relying on conscious human compliance.
* Waymo vehicle creeping into the pedestrian crosswalk (while the pedestrians had right of way to cross), which caused someone to have to walk around the car into the intersection ahead of the Waymo.
* Waymo vehicle entering a dedicated bike lane and practically tailgating the bicyclist that was ahead of it.
These might be safer than human drivers in aggregate and normalized by kilometer driven, but they drive like humans — greedily and non-defensively. I wouldn't want one of these anywhere near a high-pedestrian traffic area ever, and I feel the same about human-driven cars, too.
We also know that in North America that the municipal services skimp on grade separation for bike lanes for budget and political reasons. I did bike in San Francisco when I lived there, and these non-shared colored lanes never ever felt safe.
I can guarantee that if you leave your North American context for a couple of years and come back to it you'll find CA Vehicle Code § 21453 unsatisfactory.
Much of San Francisco is a "a high-pedestrian traffic area" and Waymos operate in those areas constantly and more or less flawlessly. As someone who lived carless in SF for nearly 15 years, I see nothing but upside from more Waymos and fewer human drivers on those busy streets.
In California, California Vehicle Code § 21209(a)(3) expressly permits a motor vehicle to enter a bicycle lane “to prepare for a turn within a distance of 200 feet from the intersection” -- among other cases. (The vehicle must yield to cyclists in the lane.)
Once the car is in the bike lane, any bike going straight is forced to remain safely behind the car until the car completes its turn.
I know it can seem discourteous to cyclists, but it really is the smarter way.
It was a funny ad at the time. Unfortunately based in reality more and more these days.
But that's why you have peer review, further studies from different authors perhaps on competing methods that point out some flaws in your approach, etc.
I feel also that the car having a far better experience of its kinematics / dynamics / features is also a huge advantage - see the good old drifting parallel parking videos.
After that there is the concern about computing reaction time. Can it get stuck hesitating? Clearly the cars hesitate a lot in generally safe places. But we have seen some videos already of a Waymo very smoothly dodging someone running out from in between cars (they were already tracked), and someone mentioned a scooter incident. Hopefully we'll see more videos of emergency responses.
Another comment mentions "r/waymo or r/selfdrivingcars for lots of videos of Waymos avoiding objects."
The correct answer is almost always to hit the brakes. Not to swerve. And Waymo will hit the brakes earlier than you or me.
This is only true with 2WD and no automatic stability control, and if you’re going down a slope. For every other case, ABS will outperform in snow, and gentle braking will evenly distribute traction force, with stability control doing microsecond evaluations.
Stability control is tied to power in all modern systems.
(You also only get into this scenario when the distance available is shorter than your perception-and-reaction distance plus braking distance. That margin is something automated drivers can manage better than humans.)
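A rough stopping-distance sketch shows how much of that margin is reaction rather than braking, which is where an automated driver gains. The reaction times and deceleration are assumed, dry-pavement-style illustrative values, not measured figures:

```python
def stopping_distance_m(speed_mph, reaction_s, decel_ms2=7.0):
    v = speed_mph * 0.44704                 # mph -> m/s
    reaction = v * reaction_s               # distance covered before braking starts
    braking = v ** 2 / (2 * decel_ms2)      # distance covered while braking
    return reaction, braking

for label, t in (("attentive human (~1.5 s)", 1.5), ("automated driver (~0.3 s)", 0.3)):
    reaction, braking = stopping_distance_m(40, t)
    print(label, round(reaction, 1), "+", round(braking, 1), "=", round(reaction + braking, 1), "m")
# At 40 mph, the human needs ~50 m total while the automated driver needs ~28 m,
# almost entirely because of the shorter reaction distance.
```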
As someone who grew up BEFORE ABS, drove in the winter (in Canada), including first winter owning my own car with sport tires because I couldn't afford winter tires, spun / slid a few times even with top-of-the-line winter tires, etc.
ABS is a game changer in the snow. I used to go to an empty parking lot every winter during early snowfalls to play around and skid, start/stop, etc. Even EARLY ABS ('94 VW) means that 98% of the time (IMHO), the answer even in snow/ice is "slam on the brakes". Sure, you might have a few percent longer stopping distance than an expert who can do threshold braking - are you an expert? And the fact that you don't lose control of the steering is a huge advantage.
Also, shifting into neutral is really only a thing for old vehicles without ABS/ESC. In modern vehicles, you let your foot off the gas slowly.
If it’s 1989 and you don’t have ABS, yes. Otherwise, swerving is a gamble [1]. If you don’t have time to stop, you physically don’t have time to evaluate and choose a right or left swerve. You’re trading the certainty of a head-on collision with whatever is in front of you against the uncertainty of what’s to the right or left, compounded with all the fun that comes with a side/tumbling collision and increased risk of not hitting a car.
[1] https://peterhancock.ucf.edu/wp-content/uploads/sites/12/201...
do you also maintain a long following distance when there's a car right next to you in the next lane? I try to, because I don't want to stay in someone's blind spot, but sometimes it's not really possible to fall back.
You're safer hitting them head on while aggressively braking than attempting a microsecond swerve.
Exactly, this is the situation I am mostly talking about.
Perhaps you mean the far more common scenario when the car in the next lane simply decides to merge into yours? Nothing about that is random[1] and the response in 90%+ of cases is just to let off the gas for a few seconds. That's it.
[1]: In most cases, it's because your lane is open and theirs is about to be backed up. You'd want to switch lanes too, so it really shouldn't be surprising they do.
The rest are mostly people realizing at the last second they want to turn right/left at an upcoming intersection (or highway exit). Again, predictable.
FFS, this is like selection bias 101.
Here's a classic example with an explanation: https://youtu.be/kTczmnkz824?t=882. No experienced driver should be surprised here.
> I agree though that most people's extinct would be to brake. Mine never has been though.
...
You operate a motorized vehicle and your first instinct when seeing anything dangerous ahead is to do something other than braking?
Hard to say for certain, but it looks like just braking probably wouldn't have avoided the collision.
The Waymo doesn't have to swerve as much as a human because it can see a mile away and never blinks, and it knows that the right thing to do in every swerve-worthy situation is to slam on the brakes to take the energy out of the event. It also drives around with the brakes pre-pressurized because it isn't trying to compensate for the fact that its control system is partially made of meat. Anyway you can go to r/waymo or r/selfdrivingcars for lots of videos of Waymos avoiding objects.
Blog post for the paper: https://waymo.com/blog/2022/12/waymos-collision-avoidance-te...
- How seldom these scenarios come up for human drivers is a huge disadvantage for them. For self driving, it doesn't matter, the cars' reactions can be simulated in arbitrary scenarios as many times as needed, so even the rarest of scenarios can be ensured to be handled properly.
- There's nothing special about the decision to swerve vs to say, brake. I'd expect self driving cars to not need to swerve nearly as often because the need to swerve probably only ever exists due to excessive speed and/or poor following distance to the vehicle ahead.
> It's happened to me about 2-3 times and I've always made the "(assumedly) correct" split second swerve decision.
Easy question: Did you make that decision with full awareness that you would not end up in a collision path with another vehicle by swerving? Oops.
Even if you did, how many drivers do you think would "instinctively" swerve into another lane and get hit by an oncoming vehicle because they do not maintain constant situational awareness around their vehicle? The majority, at least.
Those instinctual human responses can be wrong/misguided as well and can have pretty serious ripple effects (e.g., most chain collisions happen after somebody panics and slams on the brakes). And even when those instincts work correctly, they rely on driver focus and attention, which, to put it mildly, is not very reliable. The lack of that is a well known root cause of many accidents. People get tired, distracted, etc. or become otherwise unfocused from driving safely. And of course some drivers are simply not that competent, barely know traffic rules or how to drive safely. The barrier for getting a driver's license is pretty low. And all that is before you consider road rage, drunk drivers, elderly drivers with cognitive and visual impairments, and all the other people who really shouldn't be driving a car.
If you rank AIs against most drivers, they probably hit the top percentile in terms of safety and consistency. Even if you are in that percentile (and most drivers would likely overestimate their abilities), most human drivers around you aren't and never will be.
Traffic deaths in the U.S. are staggering — annually far exceeding the fatalities of most U.S. military conflicts since World War II, including the peak years of the Vietnam war. It's hard to do worse than that for AI drivers. The status quo isn't very safe. Most of that is human instincts not working as advertised. People really suck at driving.
Waymo is like the most courteous, respectful driver you can possibly imagine. They have infinite patience and will always take the option which is the safest for everyone. One thing which really impressed me is how patient they are at crosswalks. When I'm jogging, a Waymo will happily wait for me to cross - even when I'm 10 feet away from even entering the crosswalk! I don't know if I even have that much patience while driving! I've had a number of near misses with human drivers who don't bother checking or accelerate for no reason after I'm already in the crosswalk. Can you imagine a Waymo ever doing that?
If I see a Waymo on the street near me I immediately feel safer because I know it is not about to commit some unhinged behavior. I cannot say enough good things about them.
People can have a stress-free commute to a nice house in the countryside and work in the cities. Because the car is electric, it will be inexpensive to run.
If you drive a lot (like the person in the countryside), the car that is there when you want it is worth owning vs a shared car that you might have to wait for. Plus, by owning the car you can just leave your golf clubs in the trunk.
If you can stand being in a used car, you will discover that shared cars are all more expensive, simply because the fleet gets rid of a car at the first sign of cosmetic wear, while seats that have been sat in a few times are still good enough for many more years (unless you almost never drive anyway).
Because of the above I don't see much growth in the shared car market. There will be some because there are people who don't have parking, people who don't drive much, and people who demand a new car that they don't otherwise care about. However the vast majority of people will still own their own car.
(we are going to a remote location where I wouldn't expect public transit to serve, but the train station is in a small town that still should have some transit but doesn't)
I can read on the subway, but while a 20 min subway ride is fine, an hour each way is still a lot, and a two hour train commute just doesn't leave much time in your life for doing social things.
Also, I think there's going to be a huge surge in on-demand AI buses. Rideshares will take people to a random spot, you'll wait 2 minutes for a predetermined seat on a specific bus, and then switch to a rideshare van for the last 5-minute drive to your office in the city.
It's just going to be so much cheaper. With economies of scale and urban congestion pricing, you'll have to choose between dropping $45 on a dedicated hour-long door-to-door car trip, or $6 for the car-bus-van version which is only 20% slower anyway.
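Spelling out that tradeoff with the numbers above (taking them at face value):

```python
car_price, car_minutes = 45, 60          # dedicated door-to-door car trip, as posited above
multi_price = 6                          # car-bus-van version
multi_minutes = car_minutes * 1.2        # "only 20% slower" -> 72 minutes

extra_minutes = multi_minutes - car_minutes               # 12 extra minutes
savings = car_price - multi_price                         # $39 saved
implied_value_of_time = savings / (extra_minutes / 60)    # ~$195/hour to prefer the car
print(extra_minutes, savings, round(implied_value_of_time))
```

On those figures, you would need to value your time at roughly $195/hour before the dedicated car wins.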
OTOH, if I'm in a decent-sized car (minivan?) for 45 minutes+, I can get work done. I can then stay less time at work.
You're clearly wrong.
> Late-stage capitalism will find ways to make it cost exactly as much as driving your own car.
Late-stage capitalism is a defunct theory based on Marxism.
Real, actual capitalism results in competition which drives prices down, as long as there are two or more competitors and antitrust law is enforced. Which is generally the case.
And in the case of monopolies like city buses, cities set prices directly in response to democratic pressure. By your argument, NYC subways ought to be $25 a ride... but they aren't.
I generally agree with you here (though it's a bit simplistic for things not directly related to price: for example, good luck avoiding arbitration clauses)
> Which is generally the case.
Now our opinions differ. Lina Khan was an exception, and I certainly can't imagine the current regime standing up against money.
Some recent(ish) ones I think have been less than great for competition
HBO / Discovery?
Facebook/Instagram/Whatsapp?
LiveNation / Ticketmaster?
Disney / Fox?
Charter / Time Warner Cable?
AT&T / Time Warner?
CBS / Viacom?
Comcast / NBC?
JP Morgan / Chase?
CVS / Aetna?
Cigna / Express Scripts?
Sinclair Media
(And that was not the only close call I had with that geezer at that corner.)
This has always bothered me about aggressive or impatient human drivers: they are probably shaving like 30 seconds off of their daily commute while greatly increasing the odds of an incident.
But people speeding, driving aggressively, driving anti-socially (by trying to speed past lines and cut in at the front), running lights and stops... this could be squashed forever, saving lives and ultimately making life more pleasant for everyone.
Now, when there's long stretch or when you have to go up hill, that's where the electric scooter begins to shine and makes the largest difference.
Even setting aside the malicious SF stuff, Waymos have enormous advantages over humans relying on mirrors and accounting for blind spots. I never have to be concerned a Waymo hasn't seen me.
I can't wait until the technology is just standard on cars, and they won't let drivers side-swipe or door cyclists.
(For cars that have both a normally-used electronic door open button and a manual emergency release (e.g. Teslas), the electronic button can use the car's existing cameras to detect cyclists first before actuating the door to open. This would be a trivial software change in the specific case of Teslas. The only thing I dislike about the Tesla setup though is that most non-owners are unaware of where the mechanical emergency release is; it is not obvious and not labelled.)
They're probably too stupid to think like that, though.
Of course, passing costs to all insurance companies is really the same as passing it to all people paying insurance premiums, at which point you can just use tax money to get the same effect. At which point, it's probably easier to regulate it and have the cost passed to everyone buying a car.
Dooring people aside, what do you do if someone just leaves the door open when they leave their ride?!
Waymo needs to have staff in SF anyway to pick up cars that malfunction (flat tire, or just close the door).
Continue billing them for the ride and send an app notification or phone-call to their phone.
Other potential solutions: If the door is still not closed after n minutes, plead with passers-by, or offer a passing or nearby rider the chance to earn credit by closing the door.
Tesla already has dooring prevention. If it detects a bicycle or something coming, it prevents you from opening the door the first time, and shows a warning. You can override it by trying to open it the second time, if you are sure.
https://www.reddit.com/r/teslamotors/comments/1gwjq4v/new_an...
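A minimal sketch of that two-step override behavior as described above (hypothetical logic for illustration only, not Tesla's actual implementation):

```python
class DoorOpenGuard:
    """Block the first door-open attempt when a cyclist or vehicle is approaching,
    show a warning, and allow an explicit override on the second attempt."""

    def __init__(self):
        self.warned = False

    def request_open(self, approaching_object_detected: bool) -> bool:
        if not approaching_object_detected:
            self.warned = False
            return True                      # nothing detected: open normally
        if not self.warned:
            self.warned = True
            print("Warning: cyclist or vehicle approaching; door held closed.")
            return False                     # first attempt blocked
        return True                          # second attempt: treat as a deliberate override

guard = DoorOpenGuard()
print(guard.request_open(True))   # False, warning shown
print(guard.request_open(True))   # True, override accepted
```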
Funnily enough that's exactly why I don't like them. Every time one rolls by me I know that tens of photos of me and even my 3D LIDAR scan get piled in to some fucking Google dataset where it will live forever :/
Their site is even proud of it: https://waymo.com/waymo-driver/ section titled “Keeping an eye on everything, all at once”
“The Waymo Driver's perception system takes complex data gathered from its advanced suite of car sensors, and deciphers what's around it using AI - from pedestrians to cyclists, vehicles to construction, and more. The Waymo Driver also responds to signs and signals, like traffic light colors and temporary stop signs.”
Being concerned that a Waymo car took your picture isn't invalid, but man is it a teardrop in the rain of everything else the Googs is doing.
The system was never designed to show you relevant ads that you want to see and would like to buy a product from. The system has always been designed to show you profitable ads. Those have always been and will always be a completely distinct set, with only coincidental overlap if at all.
Also, some ads really are aimed at everyone. You may not be in a place where you would think about a car, but car makers don't want you to forget that you could be, just in case your situation changes. They also want you to think of some goods as luxuries so you are impressed when you see someone who does have them.
My favorite example of this is in the desktop web version of Google Maps. If you search for some place and try to plot directions from “Your Location” it will prompt for the browser's Geolocation API and will refuse to give directions at all if you don't consent to the prompt: https://i.imgur.com/fIQswnD.png
This is despite the fact that it opens the map with a perfectly-centered and reasonably-sized window around my current location. I have never seen this fail when not using any sort of VPN that moves my GeoIP. They could totally give a reasonable min–max time estimate based on that window, just like the one they show for variable traffic.
That being said, just speaking with some knowledge of current state: the scans don't live forever. At this point, all the data they collect is way too big to store even for a short period. They'll only keep data in scenarios that are helpful for improving driving performance, which is a tiny subset.
Personally identifiable information is also redacted.
You should probably be more worried about what gmail knows about you than Waymo.
Flock is the same way. For example here's the Flock privacy policy from one of SF's fine local shopping centers: https://www.stonestowngalleria.com/en/visit/lpr-privacy-poli...
> Video Clips captured by the LPR system will automatically be deleted after 30 days; although Images are deleted when no longer needed, the data obtained from the Images may be retained indefinitely. Should any information from the LPR Dashboard be needed to assist with a security or law enforcement matter, it may be retained indefinitely, in paper and electronic form, as part of the security file until it is determined it is no longer needed; in addition, it may be shared with local law enforcement who may retain it in accordance with their own retention policy.
If anyone can share a link to a similar IRL privacy policy for Waymo I would love to read it. The one on their website is conspicuously labeled Waymo Web Privacy Policy lol
For riders, there's the Waymo One privacy policy: https://support.google.com/waymo/answer/9184840?sjid=5254444...
Beyond that, https://support.google.com/waymo/answer/9190819?hl=en seems to be more relevant to your interests.
Your second link does mention cameras and microphones outside the car but doesn't mention what they keep (full video? stills? LIDAR? RADAR?) or for how long:
“Waymo’s cameras also see the world in context, as a human would, but with a 360° field of view. Our high-resolution vision system can help us detect important things in the world around us like traffic lights and construction zones. Our systems are not designed to use this data to identify individual people.”
The “Our view on your privacy” section links to the same page as your first link, and that page's “What we keep” section is explicitly only about riders:
“We will retain information we associate with your Waymo account, such as name, email and trip history, while your account remains active.”
That being said, I have an anecdote as a former Googler: the reality inside Google is actually very thoughtful and favorable toward users, if you ask Googlers who've worked on their software products. There are audit trails that can result in instant termination if it's determined that you accessed data without proper business justification. I've known an engineer who was fired for an insufficiently justified user lookup (and later re-hired when they did a deeper look -- hilariously they made this person go through orientation again).
And safeguards / approvals required to access data, so it's not just any joe shmoe who can access the data. Wanna use some data from another Google product for your Google product? You're SOL in most cases. Even accessing training data sourced from youtube videos was so difficult that people grumbled "if I were outside of Google at OpenAI or something I'd have an easier time getting hold of youtube videos -- I'd just scrape them."
This isn't to say any of this is a fair thing to make decisions on for most people, because companies change and welp how do you actually know they're doing the right thing? Imo stronger industry-wide regulations would actually help Google because they already built so much infra to support this stuff, and forcing everyone else to spend energy getting on their level would be a competitive advantage.
I'm not afraid of employees at Google or random Google divisions obtaining unauthorised access to information about me; it's not about that. I'm certain that there's very little data that the targeted advertising part of Google can't access.
But car crashes are the third highest cause of death in the US (https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm). As a society, I think the benefit outweighs the cost in this case, and we can (theoretically) continue to make progress on privacy as a society. Seems like much more of a step forward than a step back to me
No, that says “Accidents (unintentional injuries)” as a category are collectively the third leading cause of death, and that category contains a lot of things.
CDC “Underlying Cause of Death” dataset sez… https://wonder.cdc.gov/ucd-icd10-expanded.html https://i.imgur.com/4PB0xyC.jpeg
- “Person injured in unspecified motor-vehicle accident, traffic” is the 50th leading cause of death at 0.4% of deaths.
- “Person injured in collision between other specified motor vehicles (traffic)” is the 108th leading cause of death at 0.2% of deaths.
Road traffic injuries are the leading cause of death for children and young adults aged 5–29 years.
More than half of all road traffic deaths are among vulnerable road users, including pedestrians, cyclists and motorcyclists.
https://www.who.int/news-room/fact-sheets/detail/road-traffi...
I'm pretty sure between traffic cameras and security cameras lots of commuters on the street are being filmed, with or without Waymos.
I don’t understand why so many comments here are missing the context
It's not going to be stored forever.
That would be incredibly expensive.
Those cars are taking in terabytes of information each, daily. Scale that to 10s of millions of cars.
It's just not going to happen.
Maybe an ultra compressed representation of you that shares maybe 1 bit in 1 weight somewhere in a NN will live forever.
Maybe.
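Back-of-the-envelope, using the lower bounds of the figures above (1 TB per car per day, 10 million cars, both taken as stated rather than verified):

```python
tb_per_car_per_day = 1          # "TB of information each daily", lower bound as stated
cars = 10_000_000               # "10s of millions of cars", lower bound

daily_tb = tb_per_car_per_day * cars        # 10,000,000 TB
daily_eb = daily_tb / 1_000_000             # = 10 exabytes ingested per day
yearly_eb = daily_eb * 365                  # ~3,650 exabytes per year
print(daily_eb, yearly_eb)
```

At that scale, keeping raw sensor data forever is simply not a plausible storage bill.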
Don’t they currently have less than 1000 cars? I don’t think they will keep every recording forever but technically, they still could at the current scale.
They have plans to grow 1000s of times this size.
Fear not, your images and recordings will get piled on somebody's dashcam to do as their heart desires.
I got a dashcam in my Camry recording front and back every time I drive. I have no interest in preserving those images outside of an accident, but who knows what somebody else will do with theirs.
We have no expectation of privacy in public spaces, and ultimately I would trust Google's IT security more than some dude with a dashcam.
Why are you so worried about something snapping tens of thousands of pics of your body (mostly identical) that don't tell much about you, while the world's biggest ad companies know you better than any single human ever will? I feel popular western sci-fi has made people fear companies taking some visual data of their bodies, covering minutes at most, while fully overlooking the dangers of having behavioural data about them covering years at a very granular level.
Yes, I know it's not a choice; both are bad. But I find people everywhere, including here on HN, are overly conscious of getting a few minutes' worth of pictures of their bodies uploaded to some private servers, while they are nowhere near as conscious when it comes to non-visual data about them (which I would argue is mostly behavioural data, imo).
And my suspicion is that insurance companies don't push for you to get one because it prevents them from fighting claims that they would've won had there been no evidence.
Maybe it's similar for self-driving or whatever we're talking about here (sensors?).
Ever since then my fear melted away. They see every direction, never blink, and are courteous and careful with pedestrians.
Might as well keep an automatic response even if it's not always useful.
I still remember the first time I went through a four-way stop intersection and saw a driverless car idling, waiting for its turn. It was weird and nerve-wracking. Now… I’d much prefer that to almost any other interaction at the same spot.
I've now had it happen twice that a car will fully blow through an intersection because they know a Waymo will slam on the brakes to avoid a collision. They basically abuse the car's reflexes.
Also in any sort of situation where the Waymo is being very cautious the biggest danger is the impatient people behind the Waymo who will break the law to go out and around it.
Every time the light turns green for you, you can expect 1-3 cars to keep rolling through until you can go. I've almost been rear ended by not running a light that just turned red because the person behind me also wants to floor it through. Yakety sax shit.
Then I found out in 2019, the Texas legislature outlawed the use of red-light cameras.
Muh freedum.
>Muh freedum.
To be fair you have progressives arguing against red light cameras as well, on the basis that it's a regressive tax on poor people, and that it causes more accidents through drivers slamming on the brakes on a yellow.
As for causing more accidents through slamming brakes on the yellow, I simply don't believe that. But if it helps you can extend the yellow duration by an extra half second.
The variability of how much people crash in various parts of the US is fascinating, like at least one entire order of magnitude difference in otherwise comparable cities.
Looking forward to this future.
If they keep up the slow and steady improvements and roll outs to cities worldwide it’s hard to imagine my one year old ever needing to drive a vehicle.
I imagine the weirdness of the situation (legal left on red) triggers its "creep forward so I can see" logic, but it definitely shouldn't be blocking a busy crosswalk there when there is little to no chance it will be able to turn AND pedestrians are coming from both sides.
Luckily, no one was hurt, and I generally trust a waymo not to plow into a pedestrian when it makes a maneuver like that. I also understand the argument that autonomous vehicles are easily safer on average than human drivers, and that’s what matters when making policy decisions.
But they are not perfect, and when they make mistakes, they tend to be particularly egregious.
I'll be happy when the average driver is a computer that does better than the average human. Deaths won't go down to 0 but at least it'll less chaotic.
Besides, this is a study on Waymo, probably influenced by them too, published on their blog.
Sometimes I am a bit ashamed of how early drivers brake for me as a pedestrian and let me pass, like 3m from the road, when they and 2-3 more cars could easily pass through without affecting my crossing.
The problem is getting used to this and then going to places where this is not the norm, potentially very dangerous especially for kids.
Basically every street could be a shared space like Exhibition Road[0]. Making the city optimized for cars is a relatively recent development in the history of cities. I say this as a car owner and driver.
[0] https://www.udg.org.uk/publications/articles/exhibition-road...
They can definitely do better when taking left turns. I've seen situations where Waymo depends on the oncoming drivers to slow down.
Then it'd be like finding a cure for cancer, for people aged 0 - 40, who die as much in auto accidents as they do of cancer
If a $10,000 investment reduces the chances of a serious accident by 90%, the corresponding reduction in insurance rates might have a payoff within a few years. Especially if adoption starts to push rates up for customers who don’t automate. I can’t take a taxi everywhere, but I’d sure like it if my car drove me everywhere and did a better job than me at it too.
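A toy payback sketch: only the $10,000 cost and the 90% risk reduction come from the comment above; the premium and the share of it tied to accident risk are made-up assumptions purely for illustration:

```python
system_cost = 10_000       # upfront cost posited in the comment above
annual_premium = 2_500     # hypothetical full-coverage premium
risk_based_share = 0.8     # hypothetical share of the premium tied to accident risk
risk_reduction = 0.9       # 90% reduction in serious-accident risk, as posited above

annual_savings = annual_premium * risk_based_share * risk_reduction   # ~$1,800/yr
payback_years = system_cost / annual_savings                          # ~5.6 years
print(round(annual_savings), round(payback_years, 1))
```

How fast it actually pays off depends almost entirely on the assumed premium and on how much of the risk reduction insurers pass through to the customer.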
Don't get me wrong, I'm hoping it is soon. However, they have a lot of work left.
Of course, safety first, so they should take their time and not rush things...
[1]https://waymo.com/blog/2025/04/waymo-and-toyota-outline-stra...
>With 13 cameras, 4 lidar, 6 radar, and an array of external audio receivers (EARs), our new sensor suite is optimized for greater performance...it provides the Waymo Driver with overlapping fields of view, all around the vehicle, up to 500 meters away, day and night, and in a range of weather conditions.
IMO they already won. The amount of stupid things you see people do here while driving is astonishing, so many people are not paying attention and looking at their phones.
I used an Uber on the way here and the car was dirtier while the service was identical (silent ride, got me where I needed to go.)
I’ve also been stuck in a Waymo that couldn’t figure out its way around parked buses, so they have edge cases to improve. But man does it feel like I’m living in the future…
To be fair, the fact that Waymos are fancy clean Jaguars is kind of ancillary to the main technology. The tech is currently expensive, so they are targeting the luxury market, which you can also get on Uber if you select a black car or whatever. The people willing to pay for that are less likely to make messes, and the drivers put more effort into frequent cleanings.
Once the tech becomes cheap, expect the car quality and cleanliness to go down. Robocars do have some intrinsic advantages in that it's easier to set up a standard daily cleaning process, but they will still accumulate more garbage and stains when they are used by a broader cross section of the population and only cleaned during charging to reduce costs. (Of course, cheaper and more widely accessible tech is good for everyone; if you want immaculate leather seats cleaned three times a day, you'll generally be able to pay for it.)
Cleanliness doesn't seem that related to how expensive the tech is either - if anything it would only go down if it ceased to affect willingness to pay. As it stands, clean cars are important to their customers. If usage increases, cleaning can ostensibly increase too, no?
Regular taxis around here are also liveried fleet vehicles. Especially the very large providers: if I summon a taxi cab, I know for sure its make and model, and its paint job will clearly indicate it's on-duty as a taxi cab. You don't understand how incredibly important this is sometimes.
For the simple yet panic-inducing task of strapping on my seat belt: I can do it in seconds with a liveried vehicle, because I know exactly what to expect. In a rideshare like an Uber, every time a car arrives, it is a new make, new model (I swear to god what the fuck is a "Polestar"???) and the owner might have wrapped on some crazy aftermarket seat covers, and finding the seat belt and its mating latch is a huge drama. I've taken to leaving the passenger seat open, until I can get the belt safely latched, because otherwise the driver will promptly take off, and panic will increase 3x as the vehicle is moving and I can't find the seat belt.
Other than that, the liveried vehicles are easier to maintain; they're easier to keep clean; they're much better for brand recognition. Hallelujah for Waymo!
A "self-driving" car can cause the same accident but gain advantages over a human driver that the person ultimately responsible is no longer held to the same set of laws.
This seems to undermine foundations of law, placing the owners of those assets into a different legal category from the rest of us.
* Driver whose vehicle struck and killed girl gets $1,200 fine [1]
* Driver who allegedly struck and killed Staten Island baby charged with $750 fine and 15 days in jail [2]
* Driver who hit Jets assistant coach Greg Knapp won't be charged following his death [3]
[1] https://vancouversun.com/news/driver-whose-vehicle-struck-an...
[2] https://nypost.com/2022/05/27/shannon-cocozza-who-allegedly-...
[3] https://people.com/sports/driver-who-hit-jets-assistant-coac...
https://waymo.com/safety/impact/#downloads
(I don't work on that team, but I've noticed a few comments that would be better served with their own analysis on top of the available data)
As they become the monopolist, like they always do, watch how they’ll run the age old playbook to destroy the market and then hike prices.
I’m not against self-driving cars. I’m against self-driving cars owned by a few megacorps that will have even more control and surveillance capabilities, in addition to what’s already in your pocket.
Perfect is the enemy of the good. This sort of technological NIMBYism is, in practice, opposition to self-driving cars and their safety benefits.
Or all things Elon bad?
So is the accident rate lower because people are forced to be more attentive during FSD? Or is it genuinely lower (i.e., if you took out the driver, would there be fewer accidents)? To be fair, I'd still wager that yes, FSD is probably statistically way better than the average driver.
Maybe some combination of miles per intervention + accident data would give more insights into that.
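As a sketch of what such a combined metric might look like, the snippet below normalizes intervention and crash counts to per-million-mile rates; all of the numbers are hypothetical placeholders, not real FSD or Waymo data.

    # Hypothetical illustration of combining intervention and accident data.
    def per_million(events: int, miles: float) -> float:
        """Normalize an event count to events per million miles driven."""
        return events / (miles / 1_000_000)

    miles_driven = 50_000_000   # placeholder: supervised miles driven
    interventions = 400_000     # placeholder: driver interventions during those miles
    crashes = 60                # placeholder: crashes while the system was engaged

    intervention_rate = per_million(interventions, miles_driven)
    crash_rate = per_million(crashes, miles_driven)

    # If some unknown fraction of interventions actually prevented a crash, the
    # driverless crash rate could sit anywhere between crash_rate and
    # crash_rate + intervention_rate -- which is exactly the uncertainty above.
    print(f"Interventions per million miles: {intervention_rate:,.0f}")
    print(f"Crashes per million miles (supervised): {crash_rate:.2f}")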
Well considering this sensor package...
>With 13 cameras, 4 lidar, 6 radar, and an array of external audio receivers (EARs), our new sensor suite is optimized for greater performance...it provides the Waymo Driver with overlapping fields of view, all around the vehicle, up to 500 meters away, day and night, and in a range of weather conditions.[0]
...I would hope it is considerably better than humans who are limited to a sensor suite of two cameras and two lower-case ears.
[0] - https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...
I am not convinced that public testing of such services is safe, let alone commercial service. One cannot punish a self-driving vehicle in any meaningful sense. Corporate incentives vs. the public commons is a general concern that cannot be sweet-talked away.
The metaphors about human drivers recording you also seem like reductio ad absurdum.
puff pieces like this should not be well received on HN or it discredits any pretence at separation of concerns with regards to HN and ycomb.
One can fine the companies and executives.
> puff pieces like this should not be well received on HN or it discredits any pretence at separation of concerns with regards to HN and ycomb.
Mate... what pretense? Don't ever forget that HN and YC are the same; you'll have a much better time understanding the community.
There are currently over a million fatalities from road traffic crashes every year; they are the leading cause of death for the 5-29 year age group[0].
I'd claim that inaction is unacceptably dangerous/deadly here and that, to minimize deaths, we need to be aggressive in trying out and pushing forward potential solutions.
> One cannot punish a self driving vehicle in any meaningful sense.
The goal of punishment for driving offenses is, in my eyes, largely about reducing unsafe behavior - not just to make someone suffer. Fines/incentives for manufacturers and fine-tuning of models based on incident data should fulfill this purpose.
[0]: https://www.who.int/news-room/fact-sheets/detail/road-traffi...
I don't think that's mutually exclusive with Waymo working on their electric autonomous vehicles, nor that halting Waymo's testing would lead to greater access to public transport. Safer, cleaner roads should hopefully even benefit buses, coaches, trams (and bikes) that have to share the road with cars - and the same self-driving technology can likely be adapted for use on public transport vehicles.
China, for instance, has extensive public transport in addition to a fast-growing autonomous vehicle sector. As useful and underutilized as trains are, there will likely still always be a non-negligible portion of transportation that road vehicles are just better suited for, which should be made as safe as possible.
There is also the fact that it seems unfortunately difficult to get financial/political momentum behind building public transport in the US, whereas Waymo's service is already happening and is something the GP implies should be stopped - which I feel flips the "why not just" in terms of which is the course of least resistance (but, to be clear, we should definitely still be pushing for public transport).
this is blatantly obvious and unacceptable.
the site rules prohibit accusations of astroturfing but that is precisely what is going on here.
precisely no sf programmers were convinced, either.
this site had better be concerned with future legitimacy and not being seen as a puppet of specific corps like waymo.
HN has a credibility problem here.
It's great to hear their algorithms are good for cyclists, but a better solution is to keep investing in infrastructure that separates cyclists completely from cars.
All the other big names that are no longer around... their tech was dangerous and definitely not ready for prime time. Their tech and focus seemed all about making all involved wealthy or wealthier.
Streets in France are full of entitled people. Drivers, bikers (both bikes with pedals and the ones with an engine) and pedestrians.
Everyone thinks that they have all the rights but ultimately some kind of order emerges from the chaos.
Pedestrians will walk on red lights, but are also careful.
Cars will park anywhere, but usually in a way that is just really frustrating but not blocking.
Bikes will slalom, but to a point.
This does not always work, but it is so much driven by culture that somehow we are statistically alive when moving outside.
Personally I hate it with all my heart. I dream of the dystopian world where everyone will follow the rules.
That being said, I’d like to see how a typical “good” driver compares vs. the average: someone who doesn’t speed or get DUIs and gets plenty of rest.
Is having insurance not legally required? Do they just pay out when there's an accident where they injure someone?
When I was much younger, I worked for a couple companies that had (what I would consider) large fleets of vehicles, and they all were insured through an insurance company. I guess I just assumed that's how it was. I wasn't aware self-insuring was a possibility. Thanks.
Often "self-insure" means they still pay an insurance company to handle the paperwork, but when there is a claim, the self-insured company pays it out of its own funds.
For health care, a lot of large companies technically have, say, Anthem or whatever, but the company pays out all of the claims itself and Anthem just administers them. So you may have seen a similar thing here: all claims were handled by, say, Geico, but it wasn't Geico's pot of money paying them out.
[1] Edit - I meant 5% not half, I was on the train and very frustrated with these comments. (not yours, though it seems bad faith to say "not remote driving" when I said "problem solving." The problem has always been the 1-5% of driving which truly requires sophisticated intelligence, that's why the oversight is there.)
Every Lyft driver I've spoken to drives because a) they like driving b) they like choosing their own hours and c) they don't want a boss. Telling them to go into an office for an 8 hour shift with a manager is not gonna work, they will find something more appealing. It's a different kind of employee. (I would enjoy that line of work but I would hate to drive for Uber, way too stressful.)
I emphasized that Waymo staff does not drive vehicles remotely, because this persists as a common misconception.