frontpage.

Why Tesla’s cars keep crashing

https://www.theguardian.com/technology/2025/jul/05/the-vehicle-suddenly-accelerated-with-our-baby-in-it-the-terrifying-truth-about-why-teslas-cars-keep-crashing
58•nickcotter•3h ago

Comments

tardibear•2h ago
https://archive.is/jqbM2
jekwoooooe•2h ago
It’s ridiculous that Tesla can beta test their shitty software in public and I have to be subjected to it
thunderbong•1h ago
This is true for most software nowadays
jsiepkes•1h ago
Sure, but I'm not directly affected by someone's buggy phone software. If a self driving Tesla crashes into me, that does affect me.
gchokov•1h ago
Let me guess, you always write perfect code? Maybe just HTML, but it's perfect, right?
indolering•2h ago
Elon has tricked himself into thinking the automated statistics machine is capable of human-level cognition. He thinks cars will only need eyeballs like humans have, and that things like directly measuring what's physically in front of you and comparing it to a 3D point cloud scan are useless.

Welp, he's wrong. He won't admit it. More people will have to die and/or Tesla will have to face bankruptcy before they fire him and start adding lidar (etc) back in.

Real sad because by then they probably won't have the cash to pay for the insane upfront investment that Google has been plowing into this for 16 years now.

01100011•2h ago
I cut Elon a tiny bit of slack because I remember, ten years ago, when a lot of us stupidly believed that deep learning just needed to be scaled up and self-driving was literally only 5 years away. Elon's problem was that he bet the farm on that assumption and has buried himself so deep in promises that he seemingly has no choice but to double down at every opportunity.
tempodox•1h ago
Someone in his position cannot afford fallacious thinking like that. Or so one would think.
dworks•1h ago
I never believed that; I said the opposite: these cars will never drive themselves. Elon has caused an unknown but not small number of deaths through his misleading marketing. I cut him no slack.
jeffreygoesto•1h ago
I used to tell the fanboys "Automated driving is like making children. Trying is much more fun than succeeding." ten years ago. But building a golem _was_ exciting to be honest.
almatabata•1h ago
Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.

The issue came when he promised every car would become a robotaxi. This means he either has to retrofit them all with lidar, or solve it with the current sensor set. It might be ego as well, but adding lidar will also expose them to class action suits.

The promise that contributed to the soaring valuation now looks like a curse that stops him from changing anything. It feels a bit poetic.

spwa4•16m ago
I don't want to defend Tesla, but ... The problem with LIDAR is a human problem. The real issue is that LIDAR has fundamentally different limitations than human senses do, and this makes any decision based on it extremely unpredictable ... and humans react based on predictions.

A LIDAR can get near-exact distances between objects, with error margins of something like 0.2% even 100 m away. It takes an absolute expert for a human to accurately judge the distance between themselves and an object even 5 meters away. You can see this in the YouTube videos of the "Tesla beep". It used to be the case that if the Tesla autopilot judged a collision between two objects inevitable, it emitted a characteristic beep.

The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen; then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then safely stops. Humans report that this is somewhere between creepy and horror-like.

But worse yet is when the reverse happens. Distance judgement is the strength of LIDAR, but it has weaknesses that humans don't: angular resolution, especially in 3D. Unlike human eyes, a LIDAR sees nothing in between its pixels, and because the 3D world is so big, even 2 meters away the distance between pixels is already in the multiple-centimeter range. Think of a LIDAR as a ball with infinitely thin laser beams coming out of it; each pixel gives you the distance until that beam hits something. Because of how waves work, that means any object that is, in one plane, smaller than 5 centimeters is totally invisible to LIDAR at 2 meters distance. At 10 meters it's already over 25 cm. You know what object is smaller than 25 cm in one plane? A human standing up, or walking. Never mind a child. If you look at the sensor data you see them appear and disappear, exactly the way you'd expect sensor noise to act.

You can disguise this limitation by purposefully putting your lidar at an angle, but that angle can't be very big.

The net effect of this limitation is that a LIDAR doesn't miss a small dog at 20 meters, but fails to see a child (or anything roughly pole-shaped, like a traffic sign) at 3 to 5 meters. The same goes for things composed of thin beams without a big reflective surface somewhere ... like a bike. A bike at 5 meters is totally invisible to a LIDAR. And perhaps even worse, a LIDAR just doesn't see cliffs. It doesn't see staircases going down, or that the surface you're on ends somewhere in front of you. It's strange: a LIDAR that can perfectly track every bird, even at a kilometer's distance, cannot see a child at 5 meters. And with walking robots, LIDAR robots have a very peculiar behavior: they walk into ... an open door, rather than through it, 10% of the time. It makes perfect sense if you look at the LIDAR data they see, but it's very weird when you see it happen.
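The pixel-spacing numbers above can be sketched with basic trigonometry. This is a minimal sketch: the 1.5-degree spacing between beams is a hypothetical figure chosen to match the comment's examples, and real units vary widely in both axes.

```python
import math

def beam_gap(distance_m: float, angular_spacing_deg: float) -> float:
    """Linear gap between adjacent lidar beams at a given distance.

    An object thinner (in one plane) than this gap can fall entirely
    between two beams and return no points at all.
    """
    return distance_m * math.tan(math.radians(angular_spacing_deg))

# With a hypothetical 1.5-degree beam spacing, the gap grows linearly with range:
for d in (2, 5, 10, 20):
    print(f"{d:>2} m -> gap of roughly {beam_gap(d, 1.5) * 100:.1f} cm")
```

At 2 m this gives a gap of about 5 cm and at 10 m about 26 cm, consistent with the figures quoted above; whether a given unit actually misses a pole-shaped object also depends on its scan pattern and how many frames it integrates.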

Worse yet is how humans respond to this. We all know this, but: how does a human react when they're in a queue and the person in front of them (or the car in front of their car) stops ... and they cannot tell why? What follows is an immediate and very aggressive reaction. Well, you cannot predict what a LIDAR sees, so robots with LIDARs constantly get into that situation. Or, if a LIDAR robot is going through a door, you predict it'll avoid running into anything. Then the robot hits the wood ... and you hit the robot ... and the person behind you hits you.

Humans and lidars don't work well together.

mykowebhn•1h ago
One would've thought that unproven and potentially dangerous technology like this--self-driving cars--would've required many years of testing before being allowed on public roads.

And yet here we are where the testing grounds are our public roadways and we, the public, are the guinea pigs.

madaxe_again•44m ago
Nothing new under the sun.

https://thevictoriancyclist.wordpress.com/2015/06/21/cycling...

01100011•2h ago
[flagged]
close04•2h ago
> “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”

This is probably core to their legal strategy. No matter how much data the cars collect they can always safely destroy most because this allows them to pretend the autonomous driving systems weren’t involved in the crash.

At this point it’s beyond me why people still trust the brand and the system. Musk really only disrupted the “fake it” part of “fake it till you make it”.

Dylan16807•2h ago
I'll worry about that possible subterfuge if it actually happens a single time ever.

It's something to keep in mind but it's not an issue itself.

close04•1h ago
Then make sure you don’t read till the end of the article where this behavior is supported. Maybe it is just a coincidence that Teslas always record data except when there’s a suspicion they caused the crash, and then the data was lost, didn’t upload, was irrelevant, or self driving wasn’t involved.

> The YouTuber Mark Rober, a former engineer at Nasa, replicated this behaviour in an experiment on 15 March 2025. He simulated a range of hazardous situations, in which the Model Y performed significantly worse than a competing vehicle. The Tesla repeatedly ran over a crash-test dummy without braking. The video went viral, amassing more than 14m views within a few days.

> The real surprise came after the experiment. Fred Lambert, who writes for the blog Electrek, pointed out the same autopilot disengagement that the NHTSA had documented. “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,” Lambert noted.

In my previous comment I was wondering why anyone would still trust Tesla's claims and not realistically assume the worst. It's because plenty of people will only worry about it when it happens to them. It's not an issue in itself until after you're burned to a crisp in your car.

drcongo•34m ago
I've seen so many Teslas do so many stupid things on motorways that I do everything I can not to be behind, in front of, or beside one. Can't imagine why anyone would get inside one.
KaiserPro•1h ago
I love how the Guardian has the ability to make anything sound like vapid nonsense.

What would be good is if the Guardian talked to domain experts about the sensor suite and why it isn't suitable for "self drive", or even noted that the self-driving isn't certified for Level 3 autonomy.

The other thing that's deeply annoying is that of course everything is recorded, because that's how they build the dataset. Crucially, it'll have the disengage command recorded, at what time and with a specific reason.

Why? Because that is a really high quality signal that something is wrong, and can be fed into the dataset as a negative example.

Now, if they do disengage before crashes, there will be a paper trail and testing to make that work, and probably a whole bunch of simulation work as well.

But the gruan, as ever, offers only skin-deep analysis.

te_chris•40m ago
It’s a book excerpt
mavhc•59m ago
It mixes up Autopilot and FSD, as usual; no point taking anything it says seriously
Gigachad•58m ago
I’m sure the child who gets obliterated by a Tesla cares about the distinction.
boudin•11m ago
The thing that should not be taken seriously is Tesla cars. FSD and Autopilot are marketing terms for the same underlying piece of crap technology
locusm•54m ago
Pretty sure if firefighters got there in time they could break the glass, unless they meant the battery fire was so fierce they couldn’t approach the vehicle.
madaxe_again•37m ago
Window glass in most modern vehicles is laminated rather than a simple tempered pane - makes them less likely to shatter in a rollover, and thereby eject occupants, but harder to break through in an emergency.

TBH I see this more as a “firefighters aren’t being given the right tools” issue, as this is far from unique to Tesla, and the tools have existed since laminated side glass became a requirement - but don’t seem to yet be part of standard issue or training.

https://www.firehouse.com/rescue/vehicle-extrication/product...

Baba Is Eval

https://fi-le.net/baba/
161•fi-le•1d ago•22 comments

You will own nothing and be happy (Stop Killing Games)

https://www.jeffgeerling.com/blog/2025/you-will-own-nothing-and-be-happy-stop-killing-games-0
29•gond•2h ago•2 comments

OBBB signed: Reinstates immediate expensing for U.S.-based R&D

https://www.kbkg.com/feature/house-passes-tax-bill-sending-to-president-for-signature
320•tareqak•9h ago•206 comments

The messy reality of SIMD (vector) functions

https://johnnysswlab.com/the-messy-reality-of-simd-vector-functions/
23•mfiguiere•3h ago•4 comments

A 37-year-old wanting to learn computer science

https://initcoder.com/posts/37-year-old-learning-cs/
5•chbkall•52m ago•0 comments

Being too ambitious is a clever form of self-sabotage

https://maalvika.substack.com/p/being-too-ambitious-is-a-clever-form
291•alihm•12h ago•91 comments

Why AO3 Was Down

https://www.reddit.com/r/AO3/s/67nQid89MW
113•danso•7h ago•39 comments

Learn to love the moat of low status

https://usefulfictions.substack.com/p/learn-to-love-the-moat-of-low-status
123•jger15•2d ago•54 comments

Mini NASes marry NVMe to Intel's efficient chip

https://www.jeffgeerling.com/blog/2025/mini-nases-marry-nvme-intels-efficient-chip
358•ingve•18h ago•175 comments

N-Back – A Minimal, Adaptive Dual N-Back Game for Brain Training

https://n-back.net
22•gregzeng95•2d ago•9 comments

Telli (YC F24) Is Hiring Engineers [On-Site Berlin]

https://hi.telli.com/join-us
1•sebselassie•2h ago

The History of Electronic Music in 476 Tracks (1937–2001)

https://www.openculture.com/2025/06/the-history-of-electronic-music-in-476-tracks.html
39•bookofjoe•2d ago•7 comments

A new, faster DeepSeek R1-0528 variant appears from German lab

https://venturebeat.com/ai/holy-smokes-a-new-200-faster-deepseek-r1-0528-variant-appears-from-german-lab-tng-technology-consulting-gmbh/
49•saubeidl•2h ago•8 comments

Amiga Linux (1993)

https://groups.google.com/g/comp.sys.amiga.emulations/c/xUgrpylQOXk
29•marcodiego•6h ago•14 comments

Show HN: Semcheck – AI Tool for checking implementation follows spec

https://github.com/rejot-dev/semcheck
8•duckerduck•3d ago•0 comments

EverQuest

https://www.filfre.net/2025/07/everquest/
220•dmazin•17h ago•111 comments

Incapacitating Google Tag Manager (2022)

https://backlit.neocities.org/incapacitate-google-tag-manager
166•fsflover•15h ago•111 comments

Why I left my tech job to work on chronic pain

https://sailhealth.substack.com/p/why-i-left-my-tech-job-to-work-on
317•glasscannon•20h ago•191 comments

Sleeping beauty Bitcoin wallets wake up after 14 years to the tune of $2B

https://www.marketwatch.com/story/sleeping-beauty-bitcoin-wallets-wake-up-after-14-years-to-the-tune-of-2-billion-79f1f11f
141•aorloff•15h ago•335 comments

Nvidia is full of shit

https://blog.sebin-nyshkim.net/posts/nvidia-is-full-of-shit/
642•todsacerdoti•11h ago•326 comments

Larry (cat)

https://en.wikipedia.org/wiki/Larry_(cat)
313•dcminter•1d ago•71 comments

Scientists capture slow-motion earthquake in action

https://phys.org/news/2025-06-scientists-capture-motion-earthquake-action.html
9•PaulHoule•3d ago•0 comments

The Amiga 3000 Unix and Sun Microsystems: Deal or No Deal?

https://www.datagubbe.se/amix/
60•wicket•12h ago•13 comments

In a milestone for Manhattan, a pair of coyotes has made Central Park their home

https://www.smithsonianmag.com/science-nature/in-a-milestone-for-manhattan-a-pair-of-coyotes-has-made-central-park-their-home-180986892/
130•sohkamyung•3d ago•124 comments

ADXL345 (2024)

https://www.tinytransistors.net/2024/08/25/adxl345/
32•picture•7h ago•0 comments

Show HN: I AI-coded a tower defense game and documented the whole process

https://github.com/maciej-trebacz/tower-of-time-game
249•M4v3R•21h ago•126 comments

Writing a Game Boy Emulator in OCaml (2022)

https://linoscope.github.io/writing-a-game-boy-emulator-in-ocaml/
240•ibobev•1d ago•45 comments

The story behind Caesar salad

https://www.nationalgeographic.com/travel/article/story-behind-caesar-salad
105•Bluestein•13h ago•54 comments

Robots move Shanghai city block [video]

https://www.youtube.com/watch?v=7ZccC9BnT8k
102•surprisetalk•1d ago•34 comments

The ITTAGE indirect branch predictor

https://blog.nelhage.com/post/ittage-branch-predictor/
42•Bogdanp•9h ago•11 comments