
Tesla Robotaxi had 3 more crashes, now 7 total

https://electrek.co/2025/11/17/tesla-robotaxi-had-3-more-crashes-now-7-total/
41•juujian•2h ago

Comments

elsjaako•1h ago
I had a discussion with a colleague today. He claimed Tesla had Full Self Driving, had had it for years, and was the first. That's the message some folks believe.

I look forward to telling him about this.

temperceve•1h ago
I've never understood why some people insist that FSD means there can never ever be any crashes of any kind, otherwise it obviously just doesn't work.
pu_pe•1h ago
It needs human operators and still crashes 2x more than their competitor. And they have been calling it FSD for years now.
thatwasunusual•1h ago
> humans generally have a crash, whether they are at fault or not, every 700,000 miles. Tesla has 7 in probably ~300,000 miles

This is the important part.
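
As a rough back-of-the-envelope check (sketched in Python; the ~300,000-mile figure is the estimate quoted above, not an official number):

    # Crash-rate comparison using the figures quoted above.
    # Assumption: ~300,000 robotaxi miles is an estimate from the thread, not Tesla's disclosure.
    human_crashes_per_mile = 1 / 700_000
    robotaxi_crashes_per_mile = 7 / 300_000

    per_million = 1_000_000
    print(f"Humans:   {human_crashes_per_mile * per_million:.1f} crashes per million miles")
    print(f"Robotaxi: {robotaxi_crashes_per_mile * per_million:.1f} crashes per million miles")
    print(f"Ratio:    {robotaxi_crashes_per_mile / human_crashes_per_mile:.0f}x the human rate")
    # Prints roughly 1.4 vs 23.3 crashes per million miles, i.e. about 16x.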

poszlem•1h ago
Exactly. I can accept those cars not being perfect, but I can’t accept them performing worse than human drivers.
lnsru•1h ago
But humans are operating under conditions that aren't feasible for those cars. Winter is starting in Germany, with rain turning to snow, blizzards with poor visibility, and ice on the road. Humans still drive; a vision-only system would fail within the first minute.
poszlem•1h ago
And that changes what I said in what way? If cars aren't as safe as people (including in those conditions), they shouldn't drive. But once they match us, waiting for them to be perfect is a waste of time.
lnsru•38m ago
That means human accidents happen anytime and everywhere, while robotaxis cause accidents in less harsh conditions, making the statistics not really comparable. On the other hand, statistics say that most accidents happen in summer: https://injuryfacts.nsc.org/motor-vehicle/overview/crashes-b... My bad.
JKCalhoun•24m ago
I guess I can't accept them not being perfect. Lacking accountability, they have a much higher bar in my mind to replace human drivers.
hack_edu•1h ago
Human drivers that are at fault face repercussions that affect the rest of their lives. Robot drivers that are at fault face repercussions of a minuscule fine and a "sorry... again" press release.
tialaramex•47m ago
In practice drivers don't face repercussions proportional to the real impact, because the police, prosecutors, and any potential jury all see themselves in the driver, since they too are likely drivers.

If you do something dumb and it kills another person, there's an excellent chance you get tried for manslaughter and a decent chance you go to jail. Unless it was in a car, then it's just a "car accident" and most likely you won't even be arrested because well, you were driving a car, sometimes killing people just happens right?

luke5441•1h ago
And that is with professional safety drivers. Give humans a professional safety co-pilot and compare numbers with that...
ants_everywhere•1h ago
It seems like they should add lidar or radar.

What is the argument for deliberately impoverishing the Tesla sensory input?

cj•1h ago
There was a time when saying this on HN would have gotten you downvoted into oblivion. People felt extremely strongly that everything should be possible with cameras.

I wonder if that sentiment is changing.

ants_everywhere•1h ago
Even if one believes everything should be possible with cameras, the goal posts are moving. Other automated vehicles use radar or lidar. Even if Tesla achieves fatality levels comparable with human drivers, other vehicles will outperform them. There's not going to be a great market for the most fatal automatic vehicle. It makes more sense to chase state of the art rather than the state of the median human.
steveBK123•1h ago
And Tesla pumps don't even realize how poor the camera specs are on Teslas for their "we don't need lidar/radar, all we need is vision" strategy.

Front camera is sub-4K, the rest of the cameras are 1440p, all of which are processed at 24 fps. We are talking 10-year-old iPhone specs here.
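
For a sense of what 24 fps means at road speed, a quick back-of-the-envelope sketch (speeds chosen purely for illustration):

    # Distance the car covers between consecutive camera frames at 24 fps.
    FPS = 24
    for mph in (25, 45, 70):
        meters_per_second = mph * 0.44704  # mph -> m/s
        print(f"{mph} mph -> {meters_per_second / FPS:.2f} m between frames")
    # Roughly 0.47 m at 25 mph, 0.84 m at 45 mph, 1.30 m at 70 mph.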

Veserv•58m ago
The cameras actually do not even meet the minimum vision requirements to hold a license in California and most other states.

Here is one of my posts with a detailed breakdown and analysis: https://news.ycombinator.com/item?id=43605034

steveBK123•43m ago
It's fascinating that FSD is driving real cars on public roads at a frame rate most gamers would disdain. It's only a couple tons of moving metal; what could go wrong?
robotswantdata•1h ago
How many crashes per human Uber mile?
atwrk•1h ago
How many crashes per mile for Waymo etc. would be the more interesting metric - if the competition has better numbers there is no excuse for risking people's safety with inferior tech
input_sh•1h ago
Last sentence in the article.
JKCalhoun•22m ago
Why not compare the numbers to bus, train or other mass-transit fatalities?

I suppose because in the U.S. we are forever stuck in our car culture.

lnsru•1h ago
I really don’t envy the supervisor’s job. Sitting there bored to death for weeks, waiting for an accident to happen. And when it happens you’re too bored and too tired to engage in a timely manner.

On the other hand… the unpaid version of cruise control continuously fails in my two-year-old Model Y. Realistically, it’s too early to fantasize about robotaxis when the simple phantom-braking problem isn’t solved yet.

steveBK123•1h ago
Phantom braking has been a huge issue since at least 2017, constant whack-a-mole from release to release. It really nuked my confidence in the product over time.
neither_color•59m ago
The unpaid version of cruise control/autopilot is actually different software from the one that comes bundled with FSD. I actually think the semi-smart autopilot that comes bundled with FSD is better than FSD itself.

I tried to get into FSD but I felt that it made me an obnoxious driver. Chill is too slow and makes unnecessary lane changes. Hurry makes too many unnecessary lane changes while speeding beyond the flow of traffic. When you encounter a "Mormon roadblock", e.g. two cars going the speed limit on a two-lane road, FSD goes into a loop changing lanes back and forth hoping for an overtake that never comes. If you're the type of driver who picks his exit lane early because you know exits are prone to jamming and drivers blocking each other later, FSD will still try to get out of the merge lane to pass; ditto for busy intersection queues.

Removing the human driver makes one thing SPECIFICALLY worse, and that is the ability to correct navigation errors and override sub-optimal routing. For example: there is one block on my commute where you can take either an uncontrolled left turn, or go up to a light. The difference is one block, and the light is usually faster during rush hour because the uncontrolled turn takes forever to get a safe gap. Navigation always chooses the uncontrolled left, to the point that you have to disengage. There are other quality-of-life issues too, like wanting to approach your destination from the left or the right because you know the parking situation ahead of time. These can be communicated to a human driver. You can't explain that to Tesla FSD though. It's tapped into the car-machine-god hivemind and can't be bothered with instructions from mere mortals.

But I digress; I think the paid, semi-smart autopilot is their best product. I can set an objective speed limit. It stops at stop signs and red lights automatically. It stays in its lane until I tap the blinker to have it change lanes. It can autopark. These things actually augment my driving and reduce cognitive strain, while keeping me just alert enough. FSD is all or nothing, while requiring full, non-interactive attention, like a sentinel.

xnx•57m ago
> Realistically, it’s too early to fantasize about robotaxis

...for Tesla. Waymo is already doing >250K rider-only rides/week.

ants_everywhere•1h ago
If Tesla's robotaxis develop a reputation for accidents, they'll create an unpredictable traffic bubble around them.

Some people will slow down to minimize the fatality of an impact and to increase reaction time (similar to people slowing down around a marked cop car). Others will speed up to ensure they don't get stuck behind or around one.

That happens with other unsafe vehicles (e.g. a truck that doesn't have its load well secured). But it makes me wonder what will happen if Tesla trains on the data of erratic driving created by its presence.

boudin•59m ago
I'm doing this with Teslas on the road already. When I see one I'm extra cautious.

This company is so shady around all the driver-assistance and FSD issues that I have zero trust and won't until it is thoroughly investigated. They are already quite behind other manufacturers on simple stuff like lane assistance and automated braking, and they go out of their way to make every reported incident sound like others are to blame. It just looks bad from end to end.

Rushing those robotaxis is just trying to hide the fact that they are quite behind the competition on all those fronts.

JKCalhoun•21m ago
"Honey, don't spook 'em!" (A thought that went through my head when the wife and I were recently driving in the Bay Area surrounded by Teslas, ha ha.)
lordnacho•1h ago
> Unlike other companies reporting to NHTSA, Tesla abuses the right to redact data reported through the system. The automaker redacts the “narrative” for each reported crash, preventing the public from knowing how the crashes happened and who is responsible.

This part seems pretty bad

lcnPylGDnU4H9OF•1h ago
> preventing the public from knowing

So just assume the information looks really bad for Tesla.

input_sh•37m ago
They must've learned their lesson from Cruise, which got caught lying in these reports, lost its license to operate in California, stopped offering robotaxis nationwide, and was divested by GM.
steveBK123•1h ago
Flagged in under 15 minutes; seems the fever has still not broken.
mahkeiro•14m ago
The flagging system on HN is really not efficient and often abused.
juancn•29m ago
But how many miles driven? Severity? Is it worse or better than a human driver?

With just a total number it's hard to reason about what it means.

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in LLMs

https://arxiv.org/abs/2511.15304
71•capgre•2h ago•45 comments

Interactive World History Atlas Since 3000 BC

http://geacron.com/home-en/
137•not_knuth•4h ago•80 comments

Red Alert 2 in web browser

https://chronodivide.com/
35•nsoonhui•2h ago•10 comments

Show HN: Awesome J2ME

https://github.com/hstsethi/awesome-j2me
49•catstor•3h ago•25 comments

40 years ago, 'Calvin and Hobbes' burst onto the page

https://www.npr.org/2025/11/18/nx-s1-5564064/calvin-and-hobbes-bill-watterson-40-years-comic-stri...
114•mooreds•2h ago•27 comments

Android/Linux Dual Boot

https://wiki.postmarketos.org/wiki/Dual_Booting/WiP
165•joooscha•3d ago•95 comments

CUDA Ontology

https://jamesakl.com/posts/cuda-ontology/
154•gugagore•3d ago•20 comments

Basalt Woven Textile

https://materialdistrict.com/material/basalt-woven-textile/
148•rbanffy•8h ago•69 comments

Towards Interplanetary QUIC Traffic

https://ochagavia.nl/blog/towards-interplanetary-quic-traffic/
44•wofo•2d ago•9 comments

Students fight back over course taught by AI

https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-la...
33•level87•1h ago•13 comments

Europe is scaling back GDPR and relaxing AI laws

https://www.theverge.com/news/823750/european-union-ai-act-gdpr-changes
834•ksec•23h ago•945 comments

Meta Segment Anything Model 3

https://ai.meta.com/sam3/
557•lukeinator42•21h ago•111 comments

Loose wire leads to blackout, contact with Francis Scott Key bridge

https://www.ntsb.gov:443/news/press-releases/Pages/NR20251118.aspx
377•DamnInteresting•18h ago•167 comments

The lost cause of the Lisp machines

https://www.tfeb.org/fragments/2025/11/18/the-lost-cause-of-the-lisp-machines/
100•enbywithunix•18h ago•99 comments

Scientists Reveal How the Maya Predicted Eclipses for Centuries

https://www.sciencealert.com/scientists-reveal-how-the-maya-predicted-eclipses-for-centuries
34•rguiscard•6d ago•7 comments

DOS Days – Laptop Displays

https://www.dosdays.co.uk/topics/laptop_displays.php
35•nullbyte808•5h ago•7 comments

Researchers discover security vulnerability in WhatsApp

https://www.univie.ac.at/en/news/detail/forscherinnen-entdecken-grosse-sicherheitsluecke-in-whatsapp
272•KingNoLimit•17h ago•101 comments

Verifying your Matrix devices is becoming mandatory

https://element.io/blog/verifying-your-devices-is-becoming-mandatory-2/
158•LorenDB•14h ago•175 comments

New Proofs Probe Soap-Film Singularities

https://www.quantamagazine.org/new-proofs-probe-soap-film-singularities-20251112/
25•pseudolus•1w ago•0 comments

Wrapping my head around AI wrappers

https://www.wreflection.com/p/wrapping-my-head-around-ai-wrappers
17•nowflux•4d ago•7 comments

Building more with GPT-5.1-Codex-Max

https://openai.com/index/gpt-5-1-codex-max/
441•hansonw•20h ago•268 comments

Precise geolocation via Wi-Fi Positioning System

https://www.amoses.dev/blog/wifi-location/
206•nicosalm•16h ago•78 comments

A surprise with how '#!' handles its program argument in practice

https://utcc.utoronto.ca/~cks/space/blog/unix/ShebangRelativePathSurprise
70•SeenNotHeard•1d ago•56 comments

Details about the shebang/hash-bang mechanism on various Unix flavours (2001)

https://www.in-ulm.de/%7Emascheck/various/shebang/
54•js2•9h ago•13 comments

Show HN: I made a fireplace for your wrist (and widgets)

11•kingofspain•6d ago•7 comments

What really happened with the CIA and The Paris Review?

https://www.theparisreview.org/blog/2025/11/11/what-really-happened-with-the-cia-and-the-paris-re...
81•frenzcan•1w ago•10 comments

PHP 8.5

https://stitcher.io/blog/new-in-php-85
177•brentroose•8h ago•113 comments

Launch HN: Mosaic (YC W25) – Agentic Video Editing

https://mosaic.so
130•adishj•23h ago•121 comments

CLI tool to check the Git status of multiple projects

https://github.com/uralys/check-projects
48•chrisdugne•6d ago•28 comments

How Slide Rules Work

https://amenzwa.github.io/stem/ComputingHistory/HowSlideRulesWork/
143•ColinWright•17h ago•38 comments