The family of the first person killed by that will know who to sue for a punitive trillion dollars.
Depends on the type of stop-and-go driving. Crawling along at 15 mph, sure. But the most dangerous driving scenario - whether a human or a machine is driving - is one with large variations in speed between vehicles combined with limited visibility.
For example suddenly encountering a traffic jam that starts around a blind corner.
Drivers have tackled this problem by wearing polarized sunglasses.
I really hope someone asks Tesla how they plan to solve the Sun glare issue.
Why was that OK? Why was it safe to let people use it like that without informing them?
Wasted an hour apiece for the 5 co-workers who ended up reviewing unusable slop? Silence.
As with Marmite, I find it very strange to be surrounded by a very big loud cultural divide where I am firmly in the middle.
Unlike Marmite, I wonder if I'm only in "the middle" because of the extremities on both ends…
As far as hallucinations go, it is useful as long as its reliability is above a certain (high) percentage.
I actually tried to come up with a "perceived utility" function as a function of reliability. The best I came up with is U(r) = Umax ⋅ e^(−k(100−r)^n) with k = 0.025 and n = 1.5, plotted here: https://imgur.com/gallery/reliability-utility-function-u-r-u...
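If you'd rather play with the curve than follow the link, here's a minimal sketch of the same function (the function and parameter names are mine; the constants are the ones quoted above):

    import math

    def perceived_utility(r, u_max=1.0, k=0.025, n=1.5):
        # U(r) = Umax * exp(-k * (100 - r)**n), where r is reliability in percent (0-100).
        # With k=0.025 and n=1.5 the curve punishes low reliability heavily and only
        # climbs steeply toward Umax as r approaches 100%.
        return u_max * math.exp(-k * (100 - r) ** n)

    for r in (80, 90, 95, 99, 99.9, 100):
        print(f"r = {r:5}%  ->  U(r) = {perceived_utility(r):.3f}")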
Like, why can an LLM create a nicely designed website for me, but asking it to do edits and changes to the design is a complete joke? Lots of the time it creates another brand new design (not what I asked for at all), and its attempts at editing it, LOL. It makes me think it does no design at all; rather it just went and grabbed one from the ether of the Internet, acting like it created it.
For example, I've had numerous conversations where people will point to safety ratings in vehicles to defend their purchasing decisions. It's simple to understand really: I want the safest car for my family/child etc., so I refuse to buy an older, used vehicle, or I prefer a sedan over an SUV. Safety becomes cover for preference, and for defending trends like the growth of pickup truck sizes since the 2000s, while there is no safety rating or even an objective measure of the efficacy of these self-driving systems.
Hopefully I haven't wasted your time; it's just a psychological trend that I think exists.
Tesla, though, is still hyping a technology that seems to have maxed out years ago.
2,000 may be stretching it, but it is possible if the driver is trusting enough. Personally, many of my disengagements aren't because it is being dangerous, but just sub-optimal: not driving as aggressively as I want to, not getting into the off-ramp lane as early as I'd like, or just making weird navigational choices.
Trying to recall, but I haven't had a safety-related disengagement in probably a few months across late 13 and 14. I am just one data point, and the main criticisms I've seen of 14 are: 1) getting rid of fine speed controls in favor of driving-style profiles, and 2) its car and obstacle avoidance being overtuned, so it will tap the brakes if, for instance, an upcoming perpendicular car suddenly appears and starts to roll its stop sign.
Personally, I prefer it to be overly protective, though I'd turn it down slightly and fix issues where it hilariously thinks large clouds of leaves blowing across the road are obstacles to brake for.
Yeah, trust and a lot of creative accounting of what constitutes successfully driving by itself for that long.
Would you put your child in one and let it cross a large city 100 times unattended?
IMHO, it's okay for the driver profiles to affect everything other than max speed, including aggressiveness of acceleration and propensity to change lanes. But since exceeding speed limits is "technically" breaking the law [0], the default behaviour of FSD should be to strictly obey speed limits, and drivers should be given a set of sliders to manually override them. Perhaps like a graphic EQ with a slider for every 10 MPH band, where you manually decide how many MPH over that limit is acceptable.
This would be an inelegant interface, and intentionally so. Drivers should be fully in control of the decision to exceed the speed limit, and by how much. FSD should drive like a hard-nosed driving instructor unless the driver gives unambiguous permission to do otherwise.
[0] Note that I am describing this based on my understanding of the US environment. I am Australian, and our speed limits are strictly enforced at the posted speed, without exception. On any road, you should expect a fine if going 3-6 km/h [2-4 MPH] over the limit and caught by a fixed or mobile camera. This applies literally anywhere, including highways. By contrast, in the USA, I understand that 5-10 MPH over the limit on highways has been socially normalised, and law enforcement generally disregards it.
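For illustration only, here's a rough sketch of what that per-band override table could look like (the band boundaries and names are my own invention, not anything any vendor ships):

    # Hypothetical per-band overrides: for each posted-limit band (MPH),
    # how many MPH over the limit the driver has explicitly allowed.
    # Default is 0 everywhere, i.e. strictly obey the posted limit.
    overrides_mph = {
        (0, 30): 0,     # residential / school zones: never exceed
        (30, 45): 0,
        (45, 60): 3,
        (60, 80): 5,    # highways: driver has opted in to +5 MPH
    }

    def target_speed(posted_limit_mph: float) -> float:
        # Maximum speed the system may use for a given posted limit.
        for (lo, hi), extra in overrides_mph.items():
            if lo <= posted_limit_mph < hi:
                return posted_limit_mph + extra
        return posted_limit_mph  # unknown band: fall back to the posted limit

    print(target_speed(65))   # 70.0, but only because the driver set that slider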
I've heard this so many times it's starting to be a meme. The system was claimed to be very capable from the beginning, then every version was a massive improvement, and yet we're always still in very dangerous, and honestly underwhelming territory.
Teslas keep racking up straight line highway miles where every intervention probably counts at most as 1 mile deducted from the total in the stats. Have one cross a busy city without interventions or accidents like a normal human driver is expected to.
> You are judging past tech by 2025 standards.
That's very presumptuous of you. Every single person I know driving a Tesla told me the FSD (not AP) is bad and they would never trust it to drive without very careful attention. I can tell Teslas on FSD in traffic by the erratic maneuvers that are corrected by the driver, and that's a bad thing.
I really don't believe this because everyone I know who drives a Tesla tells me the opposite. I tend to think this is an artifact of people who just irrationally hate Tesla because IRL every negative thing I hear about Teslas comes from people who don't own the cars and hate Elon Musk.
> they would never trust it to drive without very careful attention
Of course, because the product is not designed to drive without human supervision.
> I can tell Teslas on FSD in traffic by the erratic maneuvers that are corrected by the driver, and that's a bad thing.
I don't believe you actually can, because I don't notice any difference in the quality of driving between Teslas and any other car on the road. (In fact, the only difference I can notice between drivers of different cars is with large trucks.) So, again, I write off such statements as more of the same emotionally driven desire to see a pattern where there isn't one.
> I don't believe
> this is an artifact of people who just irrationally hate Tesla
> more of the same emotionally driven desire to see a pattern
Don't you find it curious that every opinion you don't like must be from irrational people hating Tesla, but opinions you do like are all rational and objective? It's as if we didn't define the sunk cost fallacy for exactly this. You're a rational person; if Tesla were confident in the numbers, wouldn't we have an avalanche of verifiable stats? Instead we're here playing these "nuh-uh" games, with you pretending you're speaking with an authority nobody else has. Does any other company go to such lengths to bury the evidence of their success?
And of course I can tell FSD drivers: literally nobody else on the street brakes hard for absolutely no reason so often, and nobody veers abruptly then corrects and straightens out with a wobble, both on highways and in the city. If it's not the car then it must be the drivers, but they wouldn't make such irrational moves.
Maybe the manufacturer should try the next version. And test it. And then try the next version. And test it. And then continue until they have something that actually works.
I had a party at my house a couple months ago, mostly SF tech people. I found the Tesla owners chatting together, and the topic was how much FSD sucks and they don’t trust it.
I asked and no-one said they would buy a Tesla again. Distrust because they felt suckered by FSD was a reason, but several also just said Elon’s behavior was a big negative.
We're on the cusp of trading the Tesla in for a Rivian most likely. I should be Tesla's target customer, but instead I'm exactly who you described:
- I don't like the brand. I don't like Elon. I don't like the reputation that the car attaches to me.
- I don't trust the technology. I've gotten two FSD trials, both scared the shit out of me, and I'll never try it again.
- I don't see any compelling developments with Tesla that make me want to buy another. Almost nothing has changed or gotten better in any way that affects me in the last four years.
They should be panicking. The Cybertruck could have been cool, but they managed to turn it into an embarrassment. There are so many alternatives now that are really quite good, and Tesla has spent the last half a decade diddling around with nonsense like the robot and the semi and the Cybertruck and the vaporware roadster instead of making cars for real people that love cars.
I'm sure they would be if the stock price had ever shown any signs of being based in reality.
But for now Elon can keep having SpaceX and xAI buy up all the unsold Teslas to make number go up.
If that ever stops working, just spin up a new company with a hyper-inflated valuation and have it acquire Tesla at some made up number. Worked for him once, why not try it again.
And at this point he can get even fraudier, with the worst possible realistic outcome being that he might get forced to pay a relatively small bribe and publicly humiliate himself for Trump a bit.
But there's really no more consequences to any sort of business fraud (for now) as long as you can afford the tribute.
#WorldLibertyFinancial
IIRC the deposit was 250K, and I know people who signed up on the first day. Can you imagine a more dedicated fan?
How do you not deliver to that group? How big an own-goal is that?
Whoosh. They've been saying Tesla is an AI company for nearly a decade. AI has been propping up the entire US economy for the last few years. The EV bandwagon left a long time ago.
Having said all that, I wouldn't mind an even cheaper Tesla - small screen, 1 camera instead of 11, fully offline, fully stainless steel, fully open source - basically minimal tech, maximally maintainable, maximum longevity.
Mine has been an extremely well-done vehicle, and I was (and kind of still am) bullish on FSD as a driver-assistance technology, but a car is a 6-7 year investment for me and I have big doubts about their direction. They seem to have abandoned the idea of being a car company, instead chasing this robotaxi idea.
Up until 2023/2024, that was fine for my 6-7 year car lifecycle. Tesla was really cool when they let you do all sorts of backwards-compatible upgrades, but they seem to have abandoned that.
I’ve found it incredibly disappointing seeing their flailing direction now.
Rivian seems to still have a lot of the magic that Tesla had. They’re definitely a strong contender for my next vehicle in a year or two.
Tesla FSD 12 -> 13 was a massive jump that happened earlier this year. 14 is still rolling out.
Testing out 13 this weekend, it drove on country roads, noticed a road closure, rerouted, did 7 miles on non divided highways, navigated a town and chose a parking space and parked in it with zero interruptions. It even backed out of the driveway to start the trip. I didn't like the parking job and reparked; other than that, no hands, no feet, fully autonomous. Unlike 12, I had no 'comments' on the driving choices - rotaries were navigated very nicely, road hazards were approached and dealt with safely and properly. It was genuinely good.
Dislike Elon all you want, but Tesla FSD is improving rapidly, and to my experienced eye it is adding 9s. Probably two or three more 9s to go, but it's not a maxed-out technology.
> It was genuinely good.
You lack data to draw this conclusion. The most important factor is deaths per mile, which is a sparse signal, so it requires aggregating data from many drivers before you have enough statistical power.
Which means if Tesla can really build that Cybercab - with an underpowered motor, small battery, plastic body panels, just cameras (which I think they promised to sell under $20k) - they'll be able to hit a cost and profitability level that Waymo will only be able to reach in, say, 10 years.
Even if you don't want to talk about non-existing hardware, a Model 3's manufacturing cost is surely much lower than a Waymo.
Once (if) they make self driving work at any point in time before Waymo gets to the same level of cost - they'll be the more profitable business.
Not only that, they'll be able to enter markets where the cost of Waymo and what you can charge for taxi rides is so far apart that it doesn't make sense for them - in this sense, they'll have a first mover advantage.
Having driven Tesla FSD and coded with Claude/Codex, it suffers from the exact same issues- Stellar performance in most common contexts, but bizarrely nonsensical behavior sometimes when not.
Which is why I call it "thunking" (clunky thinking) instead of "thinking". And also why it STILL needs constant monitoring by an expert.
His foot was on the gas though
Looking at this author's other articles, he seems more than a bit unhinged when it comes to Tesla: https://electrek.co/author/jamesondow/ Has Hacker News fallen for clickbait? (Don't answer)
The driver admitted he looked down after dropping his phone and blew a stop sign; Tesla argues his foot was on the accelerator, but the jury still assigned partial fault because Autopilot was allowed to operate off limited-access highways and the company didn’t do enough to prevent foreseeable misuse. The driver had already settled separately.
If the wheels of the car fell off, would Tesla have any blame for that? If we had laid wires all along the road to allow for automatic driving, and Tesla's software misread that and caused a crash, would it be to blame?
When is Autopilot safe to use? Is it ever safe to use? Is the fact that people seem to be able to entirely trick the Autopilot to ignore safety attention mechanisms relevant at all?
If we have percentage-based blame then it feels perfectly fine to share the blame here. People buy cars assuming that the features of the car are safe to use to some extent or another.
Maybe it is just 0%. Like cruise control is a thing that exists, right? But I'm not activating cruise control anywhere near any intersection. Tesla calls their thing autopilot, and their other thing FSD, right? Is there nothing there? Maybe there is no blame, but it feels like there's something there.
A foot on the gas overrides braking on Autopilot and causes it to flash a large message on the screen: "Autopilot will not brake / Accelerator pedal is pressed."
And by the way - I have heard big tech folks repeat that phrase, not really understanding the moral of that Steve Jobs story.
Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.
This becomes especially relevant when the system reaches an intermediate regime where it can go 10,000 miles without a catastrophic incident. At that level of reliability you can find lots of people who claim "it's driven me around for 2 years without any problem, what are you complaining about?"
A 10,000-miles-per-incident fault rate is actually catastrophic. It means the average driver has a serious, life-threatening incident about every year at average driving rates. That would be a public safety crisis.
We run into the problem again in the 100,000-miles-per-incident range. This is still not safe, yet it's reliable enough that many people can get lucky and live their whole life without seeing the system cause a catastrophic incident. It's still 2-5x worse than the average driver.
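To put rough numbers on that, here's the back-of-the-envelope arithmetic, assuming about 13,500 miles per year for an average US driver (my assumption, not a figure from the thread); the 500,000-mile figure stands in for a typical human driver, consistent with the "2-5x worse" claim above:

    MILES_PER_YEAR = 13_500   # assumed average annual mileage for a US driver

    for miles_per_incident in (10_000, 100_000, 500_000):
        incidents_per_year = MILES_PER_YEAR / miles_per_incident
        print(f"{miles_per_incident:>7,} miles/incident -> "
              f"{incidents_per_year:.2f} incidents/year "
              f"(one every {1 / incidents_per_year:.1f} years)")

    # 10,000  -> ~1.35 serious incidents per driver per year (a crisis)
    # 100,000 -> one every ~7.4 years, still several times worse than a human
    #            if humans average a few hundred thousand miles per incident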
100% agreed, and I'll take it one step further - level 3 should be outright banned/illegal.
The reason is that it allows blame shifting, exactly as is happening right now. Drivers mentally expect level 4, while legally the company will position the fault, as much as it can get away with, on the driver - effectively level 2.
I worry that when it gets to 10,000 mile per incident reliability that it's going to be hard to remind myself I need to pay attention. At which point it becomes a de facto unsupervised system and its reliability falls to that of the autonomous system, rather than the reliability of human + autonomy, an enormous gap.
Of course, I could be wrong. Which is why we need some trusted third party validation of these ideas.
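One way to make that human + autonomy gap concrete, with made-up numbers purely for illustration: if the supervising human catches some fraction of the system's mistakes, the combined miles-per-incident scales with how attentive they are, and collapses back to the system's own rate once nobody is really watching.

    # All rates here are illustrative assumptions, not measurements.
    SYSTEM_MILES_PER_INCIDENT = 10_000

    def combined_miles_per_incident(p_human_catches: float) -> float:
        # Each would-be incident is independently caught or missed by the human.
        p_miss = 1.0 - p_human_catches
        return SYSTEM_MILES_PER_INCIDENT / p_miss if p_miss > 0 else float("inf")

    for p in (0.99, 0.90, 0.50, 0.0):
        print(f"human catches {p:.0%} of mistakes -> "
              f"{combined_miles_per_incident(p):>12,.0f} miles per incident")

    # An attentive supervisor (99%) turns 10,000 into 1,000,000 miles/incident;
    # a checked-out one (0%) leaves you with the bare 10,000.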
I wonder what the driving force behind this is, because at some point money doesn't matter anymore.