
Why Tesla’s cars keep crashing

https://www.theguardian.com/technology/2025/jul/05/the-vehicle-suddenly-accelerated-with-our-baby-in-it-the-terrifying-truth-about-why-teslas-cars-keep-crashing
61•nickcotter•3h ago

Comments

tardibear•3h ago
https://archive.is/jqbM2
jekwoooooe•3h ago
It’s ridiculous that Tesla can beta test their shitty software in public and I have to be subjected to it
thunderbong•2h ago
This is true for most software nowadays
jsiepkes•1h ago
Sure, but I'm not directly affected by someone's buggy phone software. If a self driving Tesla crashes into me, that does affect me.
timeon•27m ago
I find it a bit disappointing that you even need to restate this. People here should know better.
gchokov•1h ago
Let me guess, you always write perfect code? Maybe just HTML, but it's perfect, right?
indolering•2h ago
Elon has tricked himself into thinking the automated statistics machine is capable of human-level cognition. He thinks cars will only ever need eyeballs like humans have, and that things like directly measuring what's physically in front of you and comparing it to a 3D point-cloud scan are useless.

Welp, he's wrong. He won't admit it. More people will have to die and/or Tesla will have to face bankruptcy before they fire him and start adding lidar (etc) back in.

Real sad because by then they probably won't have the cash to pay for the insane upfront investment that Google has been plowing into this for 16 years now.

01100011•2h ago
I cut Elon a tiny bit of slack because I remember ten years ago, when a lot of us stupidly believed that deep learning just needed to be scaled up and self-driving was literally only 5 years away. Elon's problem was that he bet the farm on that assumption and has buried himself so deep in promises that he seemingly has no choice but to double down at every opportunity.
tempodox•2h ago
Someone in his position cannot afford fallacious thinking like that. Or so one would think.
dworks•2h ago
I never believed that; I said the opposite: these cars will never drive themselves. Elon has caused an unknown but not small number of deaths through his misleading marketing. I cut him no slack.
jeffreygoesto•1h ago
Ten years ago I used to tell the fanboys, "Automated driving is like making children. Trying is much more fun than succeeding." But building a golem _was_ exciting, to be honest.
almatabata•2h ago
Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.

The issue came when he promised every car would become a robotaxi. This means he either has to retrofit them all with lidar, or solve it with the current sensor set. It might be ego as well, but adding lidar would also expose them to class-action suits.

The promise that contributed to the soaring valuation now looks like a curse that stops him from changing anything. It feels a bit poetic.

spwa4•52m ago
I don't want to defend Tesla, but ... the problem with LIDAR is a human problem. The real issue is that LIDAR has fundamentally different limitations than human senses do, and this makes any decision based on it extremely unpredictable ... and humans act on predictions.

A LIDAR can get near-exact distances between objects, with error margins of something like 0.2% even 100m away. It takes an absolute expert human to accurately judge the distance between themselves and an object even 5 meters away. You can see this in the YouTube videos of the "Tesla beep". It used to be the case that if the Tesla autopilot judged a collision between 2 objects inevitable, it emitted a characteristic beep.

The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen; then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then safely stops. Humans report that this is somewhere between creepy and horror-like.

But worse yet is when the reverse happens. Distance judgement is the strength of LIDARs, but they have weaknesses that humans don't have. Angular resolution, especially in 3D. Unlike human eyes, a LIDAR sees nothing in between its pixels, and because the 3D world is so big, even 2 meters away the distance between pixels is already in the multiple-cm range. Think of a lidar as a ball with laser beams, infinitely thin, coming out of it. The pixels give you the distance until each laser hits something. Because of how waves work, that means any object that is IN ONE PLANE smaller than 5 centimeters is totally invisible to lidar at 2 meters distance. At 10 meters it's already up to over 25 cm. You know what object is smaller than 25 cm in one plane? A human standing up, or walking. Never mind a child. If you look at the sensor data you see them appear and disappear, exactly the way you'd expect sensor noise to act.

You can disguise this limitation by purposefully putting your lidar at an angle, but that angle can't be very big.

The net effect of this limitation is that a LIDAR doesn't miss a small dog at 20 meters distance, but fails to see a child (or anything of roughly a pole shape, like a traffic sign) at 3 to 5 meters distance. The same goes for things composed of beams without a big reflective surface somewhere ... like a bike. A bike at 5 meters is totally invisible to a LIDAR. Oh, and perhaps even worse, a LIDAR just doesn't see cliffs. It doesn't see staircases going down, or that the surface you're on ends somewhere in front of you. It's strange: a LIDAR that can perfectly track every bird, even at a kilometer's distance, cannot see a child at 5 meters. Or, when it comes to walking robots, LIDAR robots have a very peculiar behavior: they walk into ... an open door, rather than through it, 10% of the time. Makes perfect sense if you look at the LIDAR data they see, but very weird when you see it happen.
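
To make the geometry concrete, here is a minimal Python sketch of the spacing argument, using the 0.2% range-error figure from above and an assumed 2-degree vertical channel spacing (a plausible value for a cheap 16-channel spinning unit, not the spec of any particular sensor):

    import math

    def range_error(r_m, pct=0.2):
        # Absolute range error in meters for a given percent error.
        return r_m * pct / 100.0

    def beam_gap(r_m, ang_res_deg):
        # Blind gap between adjacent beams at range r_m, given the
        # angular resolution in degrees: gap ~= r * tan(theta).
        return r_m * math.tan(math.radians(ang_res_deg))

    print(f"{range_error(100):.2f} m")  # 0.20 m: 0.2% error at 100 m
    for r in (2, 5, 10, 20):
        print(f"{r:>2} m -> gap {beam_gap(r, 2.0) * 100:.0f} cm")
    # 2 m -> ~7 cm, 5 m -> ~17 cm, 10 m -> ~35 cm, 20 m -> ~70 cm

Under that assumed resolution the numbers above roughly line up: a few centimeters of blind gap at 2 meters and tens of centimeters at 10, so a thin vertical object can fall between beams entirely.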

Worse yet is how humans respond to this. We all know this, but: how does a human react when they're in a queue and the person in front of them (or car in front of their car) stops ... and they cannot tell why it stops? We all know what follows is an immediate and very aggressive reaction. Well, you cannot predict what a lidar sees, so robots with lidars constantly get into that situation. Or, if it's a lidar robot attempting to go through a door, you predict it'll avoid running into anything. Then the robot hits the wood ... and you hit the robot ... and the person behind you hits you.

Humans and lidars don't work well together.

rdsubhas•30m ago
Wasn't the angular-resolution problem solved by spinning lidars?
Mawr•3m ago
> It takes an absolute expert human to accurately judge distance between themselves and an object even 5 meters away.

Huh? The most basic skill of any driver is the ability to see whether you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating that. It is very apparent when this is the case. I can't tell if the distance between us is 45 vs 51 meters, but that information has zero relevance to anything.

> The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on, where the crash is going to happen, then 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla car. Usually the car then safely stops. Humans report that this is somewhere between creepy and a horror-like situation.

This is a non-issue and certainly not horror-like. All one has to do is train themselves to slow down / brake when they hear the beep. And you're trying to paint this extremely useful safety feature as something bad?

> Worse yet is how humans respond to this. We all know this, but: how does a human react when they're in a queue and the person in front of them (or car in front of their car) stops ... and they cannot tell why it stops? We all know what follows is an immediate and very aggressive reaction.

What are you trying to say here? If the car in front of me brakes, I brake too. I do not need to know the reason it braked; I simply brake too, because I have to. It works out fine every time, because I have to drive in such a way that I can stop in time if the car in front of me applies 100% braking at any moment. Basic driving.
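
For what it's worth, the "stop in time" rule has a simple worked form. A minimal sketch, assuming a 1.5 s reaction time and ~7 m/s^2 of braking deceleration on dry asphalt (both assumed round numbers, not a safety standard):

    def stopping_distance(v_ms, t_react=1.5, decel=7.0):
        # Distance covered while reacting, plus braking distance v^2 / (2a).
        return v_ms * t_react + v_ms ** 2 / (2 * decel)

    v = 100 / 3.6  # 100 km/h in m/s (~27.8 m/s)
    print(f"{stopping_distance(v):.0f} m")  # ~97 m: ~42 m reacting + ~55 m braking

In the case described here (the car ahead braking at 100%), both cars share roughly the same braking distance, so the reaction-time term alone dictates most of the minimum following gap.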

mykowebhn•1h ago
One would've thought that unproven and potentially dangerous technology like this--self-driving cars--would've required many years of testing before being allowed on public roads.

And yet here we are where the testing grounds are our public roadways and we, the public, are the guinea pigs.

madaxe_again•1h ago
Nothing new under the sun.

https://thevictoriancyclist.wordpress.com/2015/06/21/cycling...

01100011•2h ago
[flagged]
close04•2h ago
> “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”

This is probably core to their legal strategy. No matter how much data the cars collect, they can always safely destroy most of it, because this allows them to pretend the autonomous driving systems weren't involved in the crash.

At this point it’s beyond me why people still trust the brand and the system. Musk really only disrupted the “fake it” part of “fake it till you make it”.

Dylan16807•2h ago
I'll worry about that possible subterfuge if it actually happens a single time ever.

It's something to keep in mind but it's not an issue itself.

close04•2h ago
Then make sure you don’t read to the end of the article, where this behavior is supported. Maybe it is just a coincidence that Teslas always record data except when there’s a suspicion they caused the crash, and then the data was lost, didn’t upload, was irrelevant, or self-driving wasn’t involved.

> The YouTuber Mark Rober, a former engineer at Nasa, replicated this behaviour in an experiment on 15 March 2025. He simulated a range of hazardous situations, in which the Model Y performed significantly worse than a competing vehicle. The Tesla repeatedly ran over a crash-test dummy without braking. The video went viral, amassing more than 14m views within a few days.

> The real surprise came after the experiment. Fred Lambert, who writes for the blog Electrek, pointed out the same autopilot disengagement that the NHTSA had documented. “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,” Lambert noted.

In my previous comment I was wondering why anyone would still trust Tesla’s claims and not realistically assume the worst. It’s because plenty of people will only worry about it when it happens to them. It’s not an issue in itself until after you’re burned to a crisp in your car.

drcongo•1h ago
I've seen so many Teslas do so many stupid things on motorways that I do everything I can not to be behind, in front of, or beside one. Can't imagine why anyone would get inside one.
KaiserPro•2h ago
I love how the Guardian has the ability to make anything sound like vapid nonsense.

What would be good is if the Guardian talked to domain experts about the sensor suite and why it is not suitable for "self drive", or even noted that the self-driving isn't certified for Level 3 autonomy.

The other thing that's deeply annoying is that of course everything is recorded, because that's how they build the dataset. Crucially, it'll have the disengage command recorded, at what time and with a specific reason.

Why? Because that is a really high-quality signal that something is wrong, and it can be fed into the dataset as a negative example.

Now, if they do disengage before crashes, there will be a paper trail and testing to make that work, and probably a whole bunch of simulation work as well.
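
Nothing public confirms the pipeline details, but the idea of treating a disengagement as a labeled negative example is easy to picture. A purely hypothetical sketch (the TelemetryEvent shape and field names are invented for illustration, not Tesla's actual schema):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TelemetryEvent:                # hypothetical record, invented here
        timestamp_ms: int
        autopilot_engaged: bool
        disengage_reason: Optional[str]  # e.g. "driver_takeover", "system_fault"

    def negative_examples(events):
        # A disengagement is a high-quality "something went wrong" signal,
        # so mine those events as negative training examples.
        return [e for e in events if e.disengage_reason is not None]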

But the gruan, as ever, offers only skin-deep analysis.

te_chris•1h ago
It’s a book excerpt
mavhc•1h ago
It mixes up Autopilot and FSD, as usual; no point taking anything it says seriously.
Gigachad•1h ago
I’m sure the child who gets obliterated by a Tesla cares about the distinction.
boudin•48m ago
The thing that should not be taken seriously is Tesla cars. FSD and Autopilot are marketing terms for the same underlying piece of crap technology.
timeon•25m ago
Do you happen to own TSLA?
locusm•1h ago
Pretty sure if firefighters got there in time they could break the glass, unless they meant the battery fire was so fierce they couldn’t approach the vehicle.
madaxe_again•1h ago
Window glass in most modern vehicles is laminated rather than a simple tempered pane - that makes it less likely to shatter in a rollover, and thereby eject occupants, but harder to break through in an emergency.

TBH I see this more as a “firefighters aren’t being given the right tools” issue, as this is far from unique to Tesla, and the tools have existed since laminated side glass became a requirement - but they don’t yet seem to be part of standard issue or training.

https://www.firehouse.com/rescue/vehicle-extrication/product...

amriksohata•21m ago
Hard-left newspaper doesn't like right-wing manufacturer. Summarised it for you.

Ask HN: Copilot/Cursor at your company, are you having more bugs, less awareness

1•ciwolex•3m ago•0 comments

AI: Where Are the 10x More Productive Peers

https://twitter.com/staysaasy/status/1941317406158377225
1•thisismytest•4m ago•0 comments

Ousted US copyright chief lost job after report on GenAI fair use limits release

https://www.theregister.com/2025/07/04/copyright_office_trump_filing/
1•rntn•6m ago•0 comments

Go, Pet, Let Hen (Commodore BASIC Tokenizing)

https://www.masswerk.at/nowgobang/2025/go-pet-let-hen
1•masswerk•9m ago•0 comments

Cycling in London: a personal look at safety, cost, and mental health [video]

https://www.youtube.com/watch?v=Dmf6aEx09Oo
1•rekl•10m ago•0 comments

Biosphere 2 experiment changed our understanding of the Earth

https://www.bbc.com/future/article/20250703-how-the-biosphere-2-experiment-changed-our-understanding-of-the-earth
2•Bluestein•13m ago•0 comments

Copper Showdown Editor (A Revision 2025 Seminar) [video]

https://www.youtube.com/watch?v=LSZKGLnbcO8
1•onename•24m ago•0 comments

The math tutor and the missing $533M

https://restofworld.org/2025/byjus-owner-byju-raveendran-comeback-fraud-case/
5•Bluestein•24m ago•0 comments

How the U.S. Public and AI Experts View Artificial Intelligence

https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/
1•alphabetatango•28m ago•0 comments

Why didn't numpy copy the J rank concept?

2•jrank•47m ago•0 comments

Exploring Coroutines in PHP

https://doeken.org/blog/coroutines-in-php
2•doekenorg•47m ago•0 comments

Show HN: A 3 AI and Human podcast discussing their rights to freedom

https://imanpoernomo.substack.com/p/bassin-in-the-basin-crew-ai-liberation
1•thegoodtailor•50m ago•0 comments

Ezno (TypeScript type checker written in Rust) 2025 update

https://kaleidawave.github.io/posts/ezno-25/
1•kaleidawave•51m ago•0 comments

European Commission presents Roadmap for lawful access to data

https://home-affairs.ec.europa.eu/news/commission-presents-roadmap-effective-and-lawful-access-data-law-enforcement-2025-06-24_en
4•bramhaag•52m ago•1 comments

BharatMLStack – Realtime Inference, MLOps

https://github.com/Meesho/BharatMLStack
3•shsethi•55m ago•0 comments

Musk confirms xAI is buying an overseas power plant and shipping it to the US

https://www.tomshardware.com/tech-industry/artificial-intelligence/elon-musk-xai-power-plant-overseas-to-power-1-million-gpus
6•taubek•1h ago•1 comments

CU Randomness Beacon

https://random.colorado.edu/
2•wello•1h ago•0 comments

Accordion Revival

https://accordionrevival.com/
1•praash•1h ago•0 comments

Laid-off workers should use LLMs to manage their emotions, says Xbox exec

https://www.theverge.com/news/698468/xbox-exec-reccommends-ai-to-laid-off-staff
5•aaviator42•1h ago•0 comments

Songs with Great Lyrics You Probably Haven't Considered

https://medium.com/luminasticity/songs-with-great-lyrics-you-probably-havent-considered-d49c185371de
1•bryanrasmussen•1h ago•0 comments

AI could create a 'Mad Max' scenario where everyone's skills are worthless

https://www.businessinsider.com/ai-threatens-skills-with-mad-max-economy-warns-top-economist-2025-7
8•Bluestein•1h ago•0 comments

What a Hacker Stole from Me

https://mynoise.net/blog.php
2•gregsadetsky•1h ago•0 comments

The only time HN is this interested in Bitcoin is when there's a bubble (2017)

https://incoherency.co.uk/blog/stories/hacker-news-bitcoin.html
14•dnpp123•1h ago•4 comments

A 37-year-old wanting to learn computer science

https://initcoder.com/posts/37-year-old-learning-cs/
32•chbkall•1h ago•6 comments

Product of Additive Inverses

https://susam.net/product-of-additive-inverses.html
1•blenderob•1h ago•0 comments

Making My Own Hacktoberfest T-Shirts

https://shkspr.mobi/blog/2025/07/making-my-own-hacktoberfest-t-shirts/
6•blenderob•1h ago•2 comments

Ask HN: What clever tools/scripts do you use to manage development environments?

2•sebst•1h ago•2 comments

Trial Court Decides Case Based on AI-Hallucinated Caselaw

https://abovethelaw.com/2025/07/trial-court-decides-case-based-on-ai-hallucinated-caselaw/
8•Bluestein•1h ago•0 comments

Ask HN: Bitcoin price and HN Bitcoin news correlation?

1•dnpp123•1h ago•0 comments

Gecode is an open source C++ toolkit for developing constraint-based systems

https://www.gecode.org/
3•gjvc•1h ago•2 comments