Overall, yikes.
Oh good, Tesla vehicles apparently struggle with the task of "Hey, there's a car there" in degraded conditions.
Probably don't need to worry about that while driving though.
>Tesla also described internal data and labeling limitations that prevented a uniform identification and analysis of crash events with the subject system engaged. ODI believes this limitation could have led to under-reporting of subject crashes over portions of the defined time-period.
I thought Tesla was a "Software" company!
This report is insanely vague, though. It's very preliminary; the investigation was only opened yesterday.
Yeah I think posting this here is premature without any details.
Maybe I'm misremembering things, but I feel like 4-5 years ago we didn't have these clickbait headlines that fed political discourse. It feels like reddit culture has permeated this place for a while.
Anytime one of Elon Musk's companies has a misstep, the headline shoots violently to the top of the front page.
This is an expected and understood result given the hardware and software involved.
You will not get past these issues without a RADICAL improvement in camera technology paired with specialized, dedicated processing hardware matched against several (and I mean several several) "common" environment profiles.
FSD is a scam. It's not safe. It is not technically sound.
The fact that there aren't many more accidents with the system is a byproduct of consistent and well-thought-out road standards, car standards, other safety systems present on cars, and driver education.
The report is not premature, and it's not premature to comment on it.
Can you clearly and explicitly state why you feel like the report or the commentary is premature?
Eh, while I agree with you on the permeation of reddit culture on this board, this post is in no way clickbait or political in nature.
In fact, the title of this post is literally copied and pasted from the problem description.
The reason this stuff shoots to the top is because Elon Musk and his companies are a red-alert menace to society. People are sick of him and the damage he causes with his money, and wish he and his cars would just fuck off for good. Whether it's his cars slamming into people and property, his website spreading hate, his starships raining fiery debris, or his personally taking an axe to government programs we rely on, everyone has cause to be absolutely done with his antics.
But since businesses can apparently unleash autonomous murderbots onto the public roadways with zero repercussions for 10+ years, I guess we'll have to settle for endless flamewars about Musk's campaign of destruction on HackerNews instead.
It is a shameful engineering decision to leave out LIDAR, and it has cost human lives.
Let's hope Musk does not leave out something important for the moon landing. His proposal for it is absolutely ridiculous; it looks like a children's book fantasy, and many smaller top-heavy craft have already toppled on the moon!
One giant leap?
The gravity is weaker so just jump down /s
If Tesla FSD is better than the average driver using it, isn't it still a net win, even if it might crash in different scenarios than a human? Especially because a human has a window to override FSD, but FSD doesn't really get a chance to override a human, except in limited scenarios like automatic emergency braking. And it gets more people using it by providing FSD at a lower cost?
https://www.thedrive.com/news/tesla-autopilot-fails-wile-e-c...
> If Tesla FSD is better than the average driver using it, isn't it still a net win, even if it might crash in different scenarios than a human?
That was the analysis when the industry was in its infancy. Now that the driverless-car industry has been operating for a decade-plus, I think a lot more work has to go into that argument for people to accept it; it's not really clear that this pans out.
For example, today you can look at a car and predict how it's going to behave because you have a good model for how people drive. But let's say in the future driverless cars are much "safer" on paper than human drivers, but they behave very differently from them such that it's hard for people to predict their behaviors.
Now you've created a highly dynamic system where you don't have a good model for all the actors, because some of them behave one way and others behave a completely different way. Does this increase the overall safety of the system or decrease it, despite the new actors being statistically safer than the current ones?
I don't think you can say with great confidence what's going to happen just by looking at crash rates and comparing them to the current system. You're going to change the whole system by introducing large numbers of actors who "crash in different scenarios than a human".
Yes!!! Thank you hopefully I will get credit for inventing this attack :)
What are Wile E. Coyote attacks? Painting a tunnel entrance on a wall?
>> If Tesla FSD is better than the average driver using it, isn't it still a net win, even if it might crash in different scenarios than a human?
I don't think so, because it is fooled by simple things that could easily be prevented, and counting on a human to override is very risky because the human is simply not alert in passive mode.
I think cameras are great, but there is no excuse not to also use LIDAR.
Tesla cars killing people would be all over the news every time, and nobody would care that a similar or even marginally smaller number of people would have died anyway. People simply expect way more when giving up control.
Is it really that hard to see?
I'm not sure that's the case anymore. Each Tesla model has gotten more spartan over the years, and the interiors have never been all that "premium" compared with other manufacturers'. They should still offer the most comprehensive safety features, but whether that's because of "luxury" or not, I'm not sure.
Some people upvote everything slightly negative about the topic: "see how bad it is!!!"
Some people flag everything slightly negative about the topic: "we'd rather not let you see how bad it is"
Animats•1h ago
Does it not detect them at all, or fail to deal with detected sensor degradation adequately? Does "Full Self Driving (assisted)" slow down under conditions of poor visibility?
Does Tesla even look for the road surface? One big advantage of those up-top LIDAR units is that you have a good scan of the pavement ahead. If you're not sensing flat pavement ahead, don't go there. That's basic. Vision-only systems, going back to Mobileye, have been overly dependent on looking for known kinds of obstacles. Original Mobileye could only detect car rear ends.
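If you wanted to sketch that check, it could be as simple as fitting a plane to the near-ground returns ahead and refusing to proceed when the fit is bad. Toy version (numpy; the thresholds, corridor dimensions, and vehicle-frame convention are all my own invented assumptions, not anyone's shipping logic):

    import numpy as np

    def pavement_ahead_ok(points, x_min=2.0, x_max=20.0,
                          max_slope=0.08, max_residual=0.05,
                          min_inlier_frac=0.6):
        # points: Nx3 LIDAR returns in the vehicle frame
        # (x forward, y left, z up, meters). All thresholds invented.
        ahead = points[(points[:, 0] > x_min) & (points[:, 0] < x_max)
                       & (np.abs(points[:, 1]) < 2.0) & (points[:, 2] < 0.5)]
        if len(ahead) < 50:
            # Too few ground returns: drop-off, occlusion, or sensor fault.
            return False
        # Least-squares plane fit: z = a*x + b*y + c.
        A = np.c_[ahead[:, 0], ahead[:, 1], np.ones(len(ahead))]
        coef, *_ = np.linalg.lstsq(A, ahead[:, 2], rcond=None)
        if abs(coef[0]) > max_slope or abs(coef[1]) > max_slope:
            return False  # surface ahead is steeper than plausible road
        residual = ahead[:, 2] - A @ coef
        # Require most points to lie close to the fitted plane.
        return (np.abs(residual) < max_residual).mean() >= min_inlier_frac

The point is that this is a positive check for drivable surface, not a search for known obstacle classes.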
altairprime•1h ago
Under conditions of poor camera visibility?
Human drivers can see under conditions that cameras cannot, and people will otherwise interpret "visibility" unpredictably (and with personal bias) as referring to human sight, camera processing, or lidar processing.
carlmr•56m ago
Obvious things first, cameras have way worse contrast and low light sensitivity than human eyes.
Humans have much more evolved logical thinking capacity, even the stupid ones can figure stuff out that modern AI struggles with.
Humans have other senses, too, that they use to plausibility-check the picture they see, i.e. one of the best sensor fusion systems on the planet.
When in doubt, humans can figure out whether it's a lens occlusion or some other artifact in their vision simply by moving their head around.
There are probably other things I'm not thinking of. In any case, to make full self-driving work we should first start by using all available tech to make it safe. Once you have safe tech, you can slowly start removing individual sensors while verifying that safety remains high. As the experience and the system evolve, there will be optimization potential.
And until we have the low-light and high-contrast problems figured out, cameras alone don't cut it.
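To make that concrete, here's a toy version of the kind of per-frame self-check I mean (OpenCV; every threshold is made up for illustration, and a real system would need far more than this):

    import cv2

    def camera_looks_degraded(gray, min_mean=30.0, min_std=15.0,
                              min_lap_var=50.0):
        # gray: 8-bit grayscale frame. Thresholds invented for illustration.
        problems = []
        if gray.mean() < min_mean:
            problems.append("low light")      # night, tunnel, blocked lens
        if gray.std() < min_std:
            problems.append("low contrast")   # fog, glare, uniform occlusion
        # Variance of the Laplacian is a standard blur/occlusion proxy.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < min_lap_var:
            problems.append("blur or smear")  # rain, dirt, film on the glass
        return problems                       # empty list means frame looks OK

Even something this crude would at least let the car warn the driver before the crash, which per the report it apparently didn't.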
Eridrus•25m ago
I personally feel like that isn't really true any more.
XorNot•1h ago
They're just not that good - nowhere near human vision performance. And a human in a car has a surprisingly good view of the road and a very fast pan tilt system to look around.
Teslas do not actually have 360-degree full binocular vision coverage, nor the ability for a camera to lean left or right to improve an ambiguous sensor picture.
So while I fully believe that vision-only self-driving could work, within the constraints of automobile platforms, and particularly the Tesla and its current camera deployments, it is not remotely close enough to human visual fidelity for that argument to hold.
buildbot•1h ago
Humans are hard to compete with! I'd want LIDAR & RADAR just to give me an edge.
giantrobot•1h ago
And this is an amazingly stupid statement. Humans drive with most of their senses, not just vision. In fact our proprioception plays an important role in driving.
Even Tesla's use of cameras is poor because they're monocular and fixed in place. Most humans have binocular vision and those visual sensors have multiple degrees of freedom and the ability to adjust focus.
Even if you wanted to only use vision for navigation it's irresponsible to not use binocular configurations that get more reliable depth sensing.
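The depth point is just textbook pinhole stereo, Z = f * B / d. A quick sketch with invented focal-length and baseline numbers shows why matching error hurts at range:

    def stereo_depth_m(disparity_px, focal_px=1000.0, baseline_m=0.3):
        # Pinhole stereo: Z = f * B / d. The focal length and baseline
        # here are invented example values, not any vehicle's specs.
        if disparity_px <= 0:
            raise ValueError("zero disparity: point at infinity or bad match")
        return focal_px * baseline_m / disparity_px

    # One pixel of matching error matters more with range:
    # stereo_depth_m(10) -> 30.0 m, stereo_depth_m(9) -> 33.3 m,
    # so a 1 px mistake already shifts the estimate by over 3 m.

Noisy, yes, but it's a direct geometric measurement, which a monocular camera simply doesn't give you.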
UltraSane•1h ago
At this point I truly don't understand why anyone cares what that liar says.
maxerickson•1h ago
It might be dishonest (if he doesn't believe it is possible), but I don't think he's saying that the current systems have reached the mark.
renlo•1h ago
I also own a Tesla, and there is no indication shown to the user that FSD's vision is degraded. They need to add this in.
For example, numerous times I have been driving my Tesla with FSD activated, with an ostensibly clean and clear windshield, when suddenly the car does the "clean the windshield in front of the camera" routine without any indication that the camera is degraded. If people haven't seen this routine: wiper fluid is dispensed and the wiper vigorously wipes in front of the camera only; the rest of the windshield gets just a cursory wipe.
This indicates to me that the camera has poor visibility and that I am not informed or aware of it as a driver, which is concerning.

I am often curious whether there is a thin occluding film on the windshield inside the camera box, or something else that has degraded FSD's vision, but they do not give you the ability to view the camera feed, nor do they notify you that the vision is degraded. I suspect a thin occluding film may be in the camera box because my normal windshield, outside the camera box, started to show a thin chemical film after a couple of months, which apparently (according to a Google search) happens when a new car off-gasses, depositing a thin film of chemical byproduct on the windshield. This is my first new car, so I've no idea if this is normal or not.
conductr•56m ago
In any case, it seems reasonable to me that the human should be making the decisions once conditions become adverse. It's a simple liability issue for the car company, but also I'd rather trust my own judgment than a system that's only 80% certain it isn't driving me off a cliff.
lateforwork•48m ago
Systems built from cameras that are only nearly as capable as human eyes, and software that is only nearly as capable as the human brain, will fall short overall. To match or surpass human performance, the individual components need to exceed human abilities where possible, and that's where LiDAR provides an advantage.
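As a back-of-the-envelope illustration (the numbers are invented): if the cameras deliver 95% of human visual capability and the software 95% of human driving judgment, and the two stages have to work in series, you land near 0.95 x 0.95 = 0.90 of human performance overall. Two "nearly as good" components compound into something clearly worse, so at least one stage has to be better than human to pull the product back up.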
Sohcahtoa82•1h ago
But it's that 0.1% of situations where the results will be catastrophic. Sure, you can detect vehicles, traffic cones, bikes (both bicycles and motorcycles), people, mopeds, traffic lights, lane markings, everything you'd expect on a road.
But what about the mattress that fell out of someone's truck? If the car doesn't know what a mattress is and what it looks like, it can't really adequately determine its size based on the monocular vision that Tesla has. Sure, maybe it could use motion vectors between video frames to make a guess, but I'm not convinced that's going to work well, especially relative to LIDAR.
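(For what it's worth, that motion-vector guess is the classic time-to-contact calculation, and it only works under strong assumptions: a static obstacle, a camera translating straight ahead at known speed, and no rotation. A toy version, with invented numbers:)

    def depth_from_flow_m(offset_px, flow_px_per_s, ego_speed_mps):
        # For a STATIC point and a camera moving straight ahead,
        # time to contact is tau = r / r_dot and depth is Z = v * tau.
        # Moving objects, camera rotation, or points near the focus of
        # expansion all break this, which is exactly the problem.
        if flow_px_per_s <= 0:
            raise ValueError("no expansion: point may be moving with us")
        return ego_speed_mps * (offset_px / flow_px_per_s)

    # Invented example: a mattress 40 px from the focus of expansion,
    # expanding at 10 px/s, ego speed 20 m/s -> Z = 20 * 4 = 80 m.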
Steering back to the subject at hand...
> "In the crashes that ODI has reviewed, the system did not detect common roadway conditions that impaired camera visibility and/or provide alerts when camera performance had deteriorated until immediately before the crash occurred."
I don't think I've ever had my Tesla disable Autopilot based on road conditions, though maybe it's because when conditions are bad, I've just taken manual control preemptively. I've let it go through construction areas where cones are guiding traffic outside the painted lines, and surprisingly, it's handled it fine, though I've only done this at low speeds (~20 mph).
Camera visibility is another story. In heavy rain at night, I've had it not allow me to enable AP, though I've never had it disable AP and tell me to take control. However, it HAS limited the cruise speed based on visibility.
All this to say...
...anybody buying Tesla's FSD is being swindled, as far as I'm concerned. "FSD (Supervised)" is a scam. If you have to supervise it, it's not self-driving. It's just a party trick that you have to watch to make sure nothing goes wrong.
cornell532•1h ago
FSD is a better driver than me 99% of the time.
hn_acc1•1h ago
In 99.9% of sea-level, non-rainy, near-the-coast California/Austin driving weather, maybe. I would guess it's a no-go in inland foggy conditions in CA, for example, or in freezing rain in TX.
In terms of ALL conditions? 60-70% maybe.