I understand the reasoning behind it, but watching the video of the test shows that the car did not warn the driver, and even if it had, it was going too fast, leaving almost no time for the driver to respond.

Disclaimer: I have never used FSD before.

https://dawnproject.com/the-dawn-project-and-tesla-takedowns...

The warnings occur when you look away or don't touch the steering wheel for a while. Not saying that Tesla is without error (it isn't), but just clarifying what the warnings are for.
So they are useless. My car warns me even if I don't look.
Heck, my car not only warns me, it slams on the brakes for me.
Scared the heck out of me when it happened, but it saved me from hitting something I didn't realize was so close.
No, they serve a very specific purpose -- (attempting) to ensure the driver is at the controls and paying attention.
Don't confuse the FSD attention "nag" warnings with collision warnings. The collision warnings will sound all the time, with and without FSD enabled and even if you're not looking at the road. If you don't start slowing down quickly, Automatic Emergency Braking (AEB) will slam on the brakes and bring the car to a stop.
This test is about whether Tesla's self driving technology is safe for the people outside the vehicle.
Crash testing is much more than that. Check out NCAP for example. They specifically include safety ratings for other, vulnerable road users (i.e. pedestrians). And the Model 3 currently sits at the top spot of their 2025 rankings in that category.
This test shows the self driving software disobeys traffic laws (the school bus stop sign requires everyone else to stop), resulting in a pedestrian collision that would not have happened had traffic laws been followed.
Comparing "Tesla Takedown" with ExxonMobile is way too quick, you should have said Greenpeace. I'd say that TT has to do this, Is part of the point.
https://www.theverge.com/news/646797/nhtsa-staffers-office-v...
When the regular, in-theory bipartisan mechanisms fail, protest is all you have left.
Elon Musk, who also owns Tesla.
Why not take an objective, fact-based regulatory approach from the start?
When J&J found out about potential asbestos contamination in their baby powder in the 70s, they managed to convince the FDA that they would research and handle it. It took until 2020 for it to come to light that they did not do that, and that, in fact, their baby powder was contaminated.
They ran multiple studies, and some of them even showed that the amount of asbestos in their product was dangerous. But those studies never saw the light of day, and the company acted in a self-preserving manner. It's a similar story with 3M and byproducts of Teflon.
But federal or state agencies have no allegiance to a company's bottom line. They don't have the same incentives to lie or downplay. So, I think, it only makes sense that they should be responsible for testing directly, not just supervising.
I also think we need to adopt legislation requiring that products be tested before they are released. You may be shocked to know that you're allowed to release new chemical products without proving their safety, and you can even release new food products without proving their safety. Most of these products end up being okay in the long run, but some we have to retroactively ban. It would be easier for everyone if products started out in a banned state.
Companies will usually be doing a lot of internal testing on new products, so they'll have a lot of the necessary tech and processes already in place. The trick is to ensure that faking results is penalised enough to make it not worth the risk. Most of the time it's cheaper to trust companies and then focus on the safety stats, though that fails with your examples of non-obvious issues.
The point is you either willingly spend the resources to independently test things for safety or companies WILL kill you to save a dollar.
This has been proven time and time again, big and small. We have the FDA entirely because a small company made "medicine" by taking a drug powder and dissolving it in an acutely toxic solvent that they didn't even think to test on an animal first. They literally just sold a random liquid to people as "medicine", and it killed a hundred people in utter agony.
Leaded gas was never a technical requirement. We could have been using ethanol as our anti-knock/octane booster from the very beginning and never poisoned anyone, but the company chose to put tetraethyl lead into gas instead, because they could patent it, despite it being literal poison.
Same with PFOA from 3M, which knew it was lethally toxic to mammals at fairly low doses decades and decades ago, but made no attempt to tell anyone or even to reduce how much it was pouring into waterways upstream of small communities. When they finally got sued by a lawyer who dug this all up in discovery, they said "Okay, we will replace it" and replaced it with a nearly identical chemical that is just as toxic to mammals but that they hope will break down more easily in the environment. How long will it take to show that hope is false, and what will they switch to then?
Nixon created the EPA because companies would rather everyone die than change literally anything about their process or product; they don't care about people dying. The idea that "bad press", or people simply refusing to buy your products, will keep companies honest has demonstrably failed.
So suck it up, pay some taxes, test things for safety, and stop letting people die for such minuscule boosts in private company profits.
Independent tests are what's needed, and preferably done by agencies who won't get gutted because they find something that's inconvenient.
What other hidden dangers slip by without public knowledge?
As for the fanbois and f*cbois, they have always existed and will always exist. They are the pawns. Smart mature people learn to lead, direct, manipulate, deflect and ignore them.
So if you want to get into the details and figure out why a video from "Tesla Takedown" should or should not be believed, I'm all ears, but I'm some random on the Internet. I don't work at NHTSA or anywhere that could effect change based on the outcome of this video. It's not going to affect my buying decisions one way or another; it'll only divide people who have decided this matters to them and can't get along with others.
https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed%3A_The_Des...
I am wondering if there is a safety certification body for self driving technology. If not, one is needed because consumers can't be expected to be aware of all the limitations of the latest update they have installed.
There must be basic safety standards these systems need to meet; a disclaimer can't be the solution here.
- the failure here is that the car didn't stop for the bus on the other side of the road with the extended stop sign. (Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid)
- the FSD version for the robotaxi service is private and wasn't the one used for this test. The testers here only have access to the older public version, which is supposed to be used with human supervision
- the Dawn Project is a long-time Tesla FSD opponent that acts in bad faith - they are probably relying on a false equivalence between FSD beta and robotaxi FSD
Nevertheless, this is a very important test result for FSD Supervised! But I don't like that the Dawn Project is framing this as evidence for why FSD robotaxi should not be allowed, without noting that they tested a different version.
That's why you slow down when you pass a bus (or a huge American SUV).
Every driver/car that obeys the law has absolutely no problem avoiding this collision with the child, which is why the bus has flashing lights and a stop sign.
Worst case, you needlessly slow down around a roadside parked schoolbus.
Best case, you save a kid's life.
if(bus and sign) stop
Then at least it fails slightly safer.
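To make that one-liner concrete, here's a minimal sketch in Python of the conservative default being argued for. The inputs (bus_detected, stop_arm_deployed, road_divided) are hypothetical perception outputs I'm making up for illustration, not anything from Tesla's actual stack:

    # Hypothetical perception inputs; err on the side of a full stop.
    def should_stop_for_school_bus(bus_detected: bool,
                                   stop_arm_deployed: bool,
                                   road_divided: bool) -> bool:
        # A bus with its stop arm out on an undivided road always means stop.
        return bus_detected and stop_arm_deployed and not road_divided

Worst case this fires needlessly next to a stopped bus; best case it keeps the car stopped until the arm retracts.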
https://www.mass.gov/info-details/mass-general-laws-c90-ss-1...
This is a major failure; failing to observe a stop sign and a parked school bus are critical mistakes. If you can't manage those, you're not ready to be on the road without a safety driver yet. There was nothing particularly difficult about this situation; these are the basics you must handle reliably before we even get to all the trickier situations those cars will encounter in the real world at scale.
I guess the thing I'm trying to reconcile is that even very safe drivers make critical mistakes extremely rarely, so the threshold at which FSD is safer than even the top 10% of human drivers likely includes some nonzero level of critical mistakes. Right now Tesla has several people mining FSD for any place it makes critical mistakes and these are well publicised so I think we get an inflated sense of their commonality. This is speculation, but if true it leaves some possibility of it being significantly safer than the median driver while still allowing for videos like this to proliferate.
I do wish Tesla released all stats for interventions/near misses/crashes so we could have a better and non-speculative discussion about this!
Do you really want to trust a heartless, profit-motivated corporation with 'better than human is good enough'?
What happens when Tesla decides they don't want to invest in additional mistake mitigation, because it's incompatible with their next product release?
No, but the regulator helps here - they do their own independent evaluation
> What happens when Tesla decides...
the regulator should pressure them for improvements and suspend licenses for self driving services that don't improve
https://techcrunch.com/2025/02/21/elon-musks-doge-comes-for-...
But I've read through your chain of replies to OP, and maybe I can help with my POV.
OP is replying in good faith showing "this sampling incident is out of scope of production testing/cars for several reasons, all greatly skewing the testing from this known bad actor source."
And you reply with "Zero systemic, reproducible mistakes is the only acceptable criterion."
Well then, you should know that is the current situation. In Tesla's testing, they achieve this. The "test" in this article, which the OP is pointing out, is not a standardized test by Tesla on current platforms. So be careful with your ultimatums, or you might give the corporation a green light to say "look! we tested it!".
I am not a Tesla fan. However, I am also aware that yesterday, thousands of people across the world were mowed down by human operators.
If I put out a test video showing that a human runs over another human with minimum circumstances met (i.e. rain, distraction, tires, density, etc.), would you call for a halt on all human driving? Of course not; you'd investigate the root cause, which most of the time is distracted or impaired driving.
Manual driving is the status quo.
Tesla would like to replace that with FSD. (And make a boatload of money as a result)
My point is that we therefore can (and should!) hold Tesla to higher standards.
'Better than human' as a bar invites conflict of interest, because at some point Tesla is weighing {safety} vs {profit}, given that increasing safety costs money.
If we don't severely externally bias towards safety, then we reach a point where Tesla says 'We've reached parity with human, so we're not investing more money in fixing FSD glitches.'
And furthermore, without mandated reporting (of the kind Musk just got relaxed), we won't know at scale.
If the Tesla can't stop for the bus (not the kid) in 12 car lengths, that's not p-hacking, that's Tesla FSD being both unlawful and obviously unsafe.
Ma'am, we're sorry your little girl got splattered all over the road by a billionaire's toy. But, hey, sampling errors happen.
Not really - it's a case of slowing down and anticipating a potential hazard. It's a fairly common situation with any kind of bus, or similarly if you're overtaking a stationary high-sided vehicle as pedestrians may be looking to cross the road (in non-jay-walking jurisdictions).
One of the prime directives of driving, for humans and FSD systems alike, must be "never drive faster than your brakes can stop you within the area you can see". This must account for scenarios such as obstacles stopped in the road, or possibly coming the wrong way around a mountain turn.
Here in the UK, that's phrased in the Highway Code as "Drive at a speed that will allow you to stop well within the distance you can see to be clear". It's such a basic tenet of safety as you never know if there's a fallen tree just round the next blind corner etc. However, it doesn't strictly apply to peds running out from behind an obstruction as your way ahead can be clear, until suddenly it isn't - sometimes you have to slow just for a possible hazard.
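As a rough illustration of that rule, here's some back-of-envelope stopping-distance arithmetic. The ~0.8 g deceleration and 1 s reaction time are assumptions (dry pavement, unprepared driver), not figures from the Highway Code:

    # stopping distance = reaction distance + braking distance
    def stopping_distance_m(speed_mph, reaction_time_s=1.0, decel_g=0.8):
        v = speed_mph * 0.44704              # mph -> m/s
        return v * reaction_time_s + v ** 2 / (2 * decel_g * 9.81)

    for mph in (20, 25, 40):
        print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
    # roughly 14 m at 20 mph, 19 m at 25 mph, 38 m at 40 mph

So if you can't see at least that far to be clear, you're going too fast for the conditions.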
Your argument that a newer version is better simply because it's newer does not convince me. The new version could still have that same issue.
I actually didn't say that and am not arguing it formally - I said what I said because I think that the version difference is something that should be acknowledged when doing a test like this.
I do privately assume the new version will be better in some ways, but have no idea if this problem would be solved in it - so I agree with your last sentence.
Did they anywhere refer to this as Robotaxi software rather than calling it FSD, the term Tesla has chosen for it?
My understanding is that Tesla is the only manufacturer trying to make self-driving work with just visual-spectrum cameras --- all other vendors use radar/lidar _and_ visual-spectrum cameras.
As for being unrepresentative of the next release of FSD, we've had what, eight or ten years, of "it's going to work on the next release".
... eh? I mean, what?
- FSD has been failing this test publicly for almost three years, including in a Super Bowl commercial. It strains credulity to imagine that they have a robust solution that they haven't bothered to use to shut up their loudest critic.
- The Robotaxi version of FSD is reportedly optimized for a small area of Austin, and is going to extensively use tele-operators as safety drivers. There is no evidence that Robotaxi FSD isn't "supposed" to be used with human supervision; its supervision will just be subject to latency and limited spatial awareness.
- The Dawn Project's position is that FSD should be completely banned because Tesla is negligent with regard to safety. Having a test coincide with the Robotaxi launch is good for publicity, but the distinction isn't really relevant, because the fundamental flaw is with the company's approach to safety regardless of FSD version.
- Tesla doesn't have an inalienable right to test 2-ton autonomous machines on public roads. If they wanted to demonstrate the safety of the robotaxi version they could publish the reams of tests they've surely conducted and begin reporting industry standard metrics like miles per critical disengagement.
Shouldn't that be the one case where a self-driving system has an enormous natural advantage? It has faster reflexes, and it doesn't require much, if any, interpretation or understanding of signs or prediction of other drivers' behavior. At the very worst, the car should be able to detect a big object in the road and try to brake and avoid it. If the car can't take minimal steps to avoid crashing into any given thing that's in front of it on the road, what are we even doing here?
In that case it could've learnt that almost nobody ever fully stops at a stop sign, or, in this case, for a bus stopped with its stop sign out.
People are way more accepting/understanding of drunk driving than passing school bus stop signals.
You're disagreeing with something I didn't say. There's a difference between afterthought and the primary initiator of the stop in a situation like this.
>Of course the car should recognize the stop sign, but there are cases in which stop signs are obstructed or missing, and in those cases pedestrians should still not be hit by a car.
The surprise pedestrian test is one that any vehicle can be made to fail by sheer physics. Avoiding errant pedestrians like in the video will likely only come as a byproduct of better situational behavior by self driving vehicles. The overwhelming majority of drivers know to ignore the speed limit if the situation is rife with pedestrians or otherwise sus and are generally fine with missing/obstructed stop signs. I don't know what route self driving software will take to approximate such behavior but it likely will need to.
There are different degrees of failure as well: did the Tesla try to brake beforehand, or did it only apply the brakes after hitting the doll?
This sentence makes me want to cry. Sure, I may know drivers treat speed limits as minimum acceptable speed at best, but seeing it unironically spelled out like this hurts.
Please, it's just two simple words: 1) speed 2) limit. L I M I T. It's the maximum allowable speed. No, it's not the minimum speed, no it's not the target speed, no, it's not "more of a guideline than anything". You are not allowed to go faster.
It says nothing about a minimum speed. It says nothing about what speed you should drive at. All it does is limit the maximum speed. Repeat after me, it's a speed L I M I T.
Just kidding though, I know even speaking like I would to a 5 year old won't do it, this mentality runs far too deep. There's no hope.
What you WANT to recognize is conditions when such an event is possible (obstructed vision) and to slow down in advance even if you don't see/detect any pedestrians at the moment.
This obviously includes the case of the school bus and the stop sign but, as you correctly point out, is not limited to that. There are more cases where a pedestrian, especially a child, can run out in front of your car from behind a big vehicle or other obstacle.
Recognizing these situations and slowing down in advance is a characteristic trait of a well-intentioned, experienced driver. Though I think that most of the time it's not a skill you have straight out of driving courses; it takes time and a few close calls to burn it into your subconscious.
Speed is the factor in collisions (other than weight), and modern brakes are incredibly good.
Not to mention that the car, with its 360-degree sensors, could safely and efficiently swerve around the children even faster than it can brake, as long as there's not a car right next to you in another lane -- and even if there is, hitting another car is far less dangerous to its occupants than hitting the children is to them.
These things should be so much better than we are, since they're not limited by unidirectional binocular vision, but somehow they're largely just worse. Waymo is, at best, a bit better. On average.
25mph is too fast for any street where kids may jump out behind parked cars. Not just school zones, but all residential streets. There's a knee at about 20mph in the traffic fatality stats where below that speed pedestrian collisions are unlikely to be fatal. Above 20mph fatalities become more common quite quickly.
It should never be the case that someone is surprised by an instantaneous bus stop. There are plenty of context clues in advance, including the fact that the bus is around at all, which should already heighten attention.
Stopping is nice, but not the entire point of braking. The lower the collision speed, the better.
> Human reaction time is about 750ms
No. Human reaction time is around 250ms on average. What you're pointing to is the time it takes to react to an unexpected stimulus. The number I've seen quoted is about 1s, but that assumes a completely unexpected event that you're not prepared for at all.
So if you're mindlessly passing a school bus at 25mph, then a 1s delay is expected. But if you're doing so with your foot covering the brake while hyper focused on reacting to any sudden movement in front of you, you can do much, much better than 1s. Of course, at that point you might as well drive correctly by slowing down.
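Just to put numbers on that: here's the distance covered at 25 mph before braking even begins, for the reaction times being discussed (0.25 s prepared, 0.75 s as quoted, 1 s surprised). Purely illustrative arithmetic:

    v = 25 * 0.44704                      # 25 mph is about 11.2 m/s
    for t in (0.25, 0.75, 1.0):
        print(f"reaction {t:.2f} s -> {v * t:.1f} m travelled before braking")
    # about 2.8 m, 8.4 m and 11.2 m respectively, before any braking starts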
I regularly drive on a two lane 55mph highway that school buses stop on and let kids out.
It runs through a reservation and has no sidewalks at all.
> modern brakes are incredibly good.
They're probably not worth that much of a superlative, and they're fundamentally limited by the tires.
This is just a pet peeve of mine, since it is used by people to argue that modern vehicles are so much vastly better than cars in the 1980s that we should be able to drive at 90mph like it is nothing.
But reaction times and kinetic energy are a bitch, and once traction control / stability assist hits its limits and can't bail you out, you might find out the hard way that you're not as good of a driver as you think you are, and your brakes won't stop you in time.
> Speed is the factor in collisions
This I will definitely agree with. Say it louder for everyone in the back.
Signs are often obstructed by trees or are simply inadequate for safe driving, even when combined with the rules of the road. Any even "partially" automated driving should be able to trivially recognize when its view is compromised and proceed accordingly.
They're obviously not arguing that the car shouldn't stop with a sign deployed.
Arguing from a point of bad faith doesn't advance the discussion.
Until there are federal standards and rigorous testing of self-driving vehicle technologies, they should not be installed or advertised.
This article is about "testing" conducted by a guy who is trying to sell competing software products and has produced videos in the past that weren't replicable and were likely staged. They never release unedited footage or car diagnostic data, just a produced video presented with the intent of damaging Tesla's reputation and boosting their own sales.
This happens all the time, from the New York Times' false claims that their review car ran out of charge, to the most recent Mark Rober video that cost the CEO of a lidar company his job.
The video in this article requires independent validation. It is not from a trusted source, as it is produced by an extremely biased and financially motivated Tesla competitor.
What competing software products?
The founder went as far as to run for CA senator just to regulate Tesla and bought a Super Bowl ad to smear the company. https://techcrunch.com/2024/02/12/dawn-project-tesla-ad-fsd-...
Somehow operating systems and software development tools are in direct competition with Tesla ADAS software. Turns out Tesla is not in competition with GM, it is actually in competition with Linux, GCC, and Windows. Amazing.
The smear campaign to silence whistleblowers is still going strong.
But these stories are still reaching the front page, so arguably the flaggers are up against the algorithm and the algorithm is winning.
I'd expect school bus stop signs to be challenging to learn because of how context-dependent they are compared to regular stop signs. Some examples, drawn from the rules in my state (Washington), are below. These may be different in other states.
With a regular stop sign you stop, then you go when it is safe to do so.
When you stop for a school bus sign you are supposed to stay stopped as long as the sign is there. You don't go until the school bus retracts the sign. It is essentially like a red light rather than a stop sign.
Whether or not a school bus stop sign applies to you depends on the direction you are traveling and on the structure of the roadway.
It applies if either of the following is true: (1) you are traveling in the same direction as the bus, or (2) it is a two-lane roadway and the lanes are not separated by a physical barrier or median. With three or more lanes, or a barrier or median, cars traveling opposite the bus don't have to stop.
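Encoded as logic, the Washington rule as described above looks roughly like this. It's a sketch only, not legal advice, and the inputs (lane count, barrier detection) are assumed to be available:

    def must_stop_for_school_bus(same_direction_as_bus: bool,
                                 total_lanes: int,
                                 divided_by_barrier_or_median: bool) -> bool:
        if same_direction_as_bus:
            return True    # traffic behind the bus always stops
        # oncoming traffic stops only on a two-lane, undivided roadway
        return total_lanes <= 2 and not divided_by_barrier_or_median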
My 2025 Tesla stops for road work flaggers spinning their signs from "Slow" to "Stop".
The issue here, correctly, is that it should come to a stop for 1) a bus flashing red, 2) with or without stop signs, 3) on an undivided road. Or, in our automated future, at least come to a crawl, like FSD does now when entering a parking lot or a pedestrian-likely location.
:)
Any good reason to believe these tests?
https://www.reddit.com/r/teslamotors/comments/1l84dkq/first_...