I don't think that's right; I think the stock price entirely depends on people seeing it as a vehicle to invest in Musk. If Musk died tomorrow, but nothing else changed at Tesla, the stock price would crater.
I guess it really does depend on which reality distortion field we’re talking about haha.
Right now it’s easily double to triple that, even with Musk's behavior.
The passive investing / market cap weighted ETF complex tends to help big valuations stay big, but a company like Tesla still needs that sharp shot in the arm followed by frenzied buying occasionally in order to stay aloft (be it by traders, retail participants, shorts covering, etc).
I suppose they could replace Musk with another hype salesman, but the "hate" that Tesla gets is a big part of these upside shock cycles for the stock, because the ticker is a siren call for short sellers, who are ultimately guaranteed future buyers.
I suspect a significant proportion of Tesla's stock price comes from people who are using it as a proxy for his other companies that the public can't invest in, primarily xAI (as all AI companies are in a horrific bubble right now) and SpaceX.
Elon can't levitate TSLA and other valuations by himself. There has to be at least the appearance of substance. That appearance is wearing thin. While I'm going to observe the caution that the market can stay irrational longer than I can stay solvent, once reality asserts itself, Elon will be powerless to recreate the illusion.
I get that it’s way overhyped, but they have real results that can’t be denied.
They have the best selling vehicle by a little under 1%, with the Tesla Model Y just edging out the Toyota Corolla. But Toyota also has the 3rd best selling model (RAV4) that is about 7% behind the Model Y. And they have a third model in the top 10, the Camry, at a little over half the Model Y sales.
Just those 3 Toyota models combined sell about 30% more than all Tesla models combined.
Across all models Toyota sells 6 times as many cars as Tesla.
By number of cars sold per year Tesla is the 15th biggest car maker. The list is Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, Geely, and then Tesla.
If we go by revenue from sales rather than units sold it is 12th. The list is: Toyota, Volkswagen, Hyundai-Kia, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, SAIC Motor, and then Tesla.
Yet Tesla has something like 6 times the market cap of Toyota and around 30 times the market caps of VW and Honda. That's pretty much all hype.
Of course not.
They don’t make as many vehicles or have the revenue of other auto manufacturers, but who cares.
What they do, they do very, very well. They led the way on mass-market EV adoption. Even if they crumble tomorrow their contribution is immense. Who cares about market cap, it’s all just gambling.
Have a minimum quorum of sensors, disable one if it generates impossible values (while deciding carefully what is and isn't possible), use sensors that are much more durable, reliable, and can be self-tested, and test integration and subsystems thoroughly, then test some more.
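A minimal sketch of that quorum-plus-plausibility-check idea, in Python; the sensor names, range limits, and quorum size are made-up values for illustration, not anyone's actual design:

    from statistics import median

    RANGE_LIMITS_M = (0.1, 300.0)  # readings outside this band count as "impossible" (assumed limits)
    MIN_QUORUM = 2                 # minimum number of healthy sensors needed to trust the fused value

    def fuse_range_estimates(readings: dict[str, float]) -> float | None:
        """Fuse distance estimates from several sensors (e.g. camera, radar, lidar).

        Sensors reporting impossible values are excluded. If fewer than
        MIN_QUORUM healthy sensors remain, return None so the caller can
        degrade gracefully (slow down, alert the driver, hand back control).
        """
        lo, hi = RANGE_LIMITS_M
        healthy = {name: r for name, r in readings.items() if lo <= r <= hi}
        if len(healthy) < MIN_QUORUM:
            return None
        # The median is robust to one sensor that is wrong but still "plausible".
        return median(healthy.values())

    # Example: the camera reports a nonsense value and is excluded from the vote.
    print(fuse_range_estimates({"camera": -5.0, "radar": 42.3, "lidar": 41.8}))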
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is: AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
That got us some of the way towards self-driving, but not all the way. AI was the main bottleneck back then. 20 years later, it still is.
Because it's geofenced to shit. Restricted entirely to a few select, fully pre-mapped areas. They only recently started trying to add more freeways to the fence.
Musk has, IIRC, actually admitted that this was their purpose.
It was about scuttling the expansion of the monorail to the airport.
Musk just picked up after the taxi cartel collapsed.
That word choice.
Kinda sorta.
It only operates a few hours a day, and the cars are not self-driving.
It's like a dedicated tunnel for Ubers.
Did we give him wayyy too much free money via subsidies? Yes. But that was our mistake. And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other. So even in the case of the counterfactual, we could expect a similar outcome. Just different scumbags.
No we wouldn’t have. Not every dollar we give goes to scam artists. And there are a whole lot of industries and companies far less deceitful.
From a car design and development point of view, it's a massive lost opportunity.
From the perspective of someone interested in self-driving, it's a joke.
It really depends on how you view things; purely in stock-market terms, Tesla is doing great.
Only isolated people need Starlink, and 95% of people on Earth live in cities with pop > 100,000. So it's a product for the 5%.
What kind of absurd argument is that? Like any company that doesn't serve 100% of people on the planet is by definition not a success? That's the dumbest fucking argument I have ever heard and I hope you are not actually serious about that.
Because Apple doesn't serve 100% of the population either, so clearly Apple is not successful. By your logic right?
And it's not even true. People who live in big cities do a thing called 'flying' between big cities quite often. And Starlink is already starting to be dominant in the airline market, meaning all those city people use Starlink when they fly.
And Starlink is used in agriculture and mining, and shipping. So all those city people do actually use things that have Starlink in the supply chain.
Come on, man, it's fine to hate Musk, but at some point reality is a thing.
Musk companies are essentially reinventing the wheel for no reason whatsoever (or for political purposes), chasing what can at best be described as extremely marginal quality-of-life improvements for the general population.
Uber and the food delivery apps are much more consequential, but they are not as performative as a rocket landing on its butt, so easily amused people are not as enthusiastic (even though they are on the aforementioned food delivery apps 3+ times per day).
> Come on, man, it's fine to hate Musk, but at some point reality is a thing.
Reality implies that a person actually knows where their quality of life comes from. If you disappeared Musk's companies, the world would go on without a single problem. Disappear Microsoft or Apple or Aramco or Google and you'd have civil war within a week. Maybe you should get a dose of reality, because the stock market is not it, however pleasant the gainz might be.
I never argued that Starlink is as important as the iPhone. That's just a strawman that you set up.
You are just fundamentally arguing in bad faith.
> Musk companies are essentially reinventing the wheel
Even if this was true, and it isn't, it's just another argument in bad faith. The wheel has been reinvented many times, and often rather successfully, by many different companies.
> Uber and the food delivery apps are much more consequential
Again, I have no fucking idea what your argument is. We were not arguing about what is more successful. This is again just a strawman.
But outside of that, if Uber and food delivery apps didn't exist there are hundreds of companies who could reproduce them in just a couple of months. And in fact there are likely hundreds of smaller competitors, and one of them would grow to be Uber or DoorDash. If Uber wasn't there, Lyft could replace it just fine. Without DoorDash, Delivery Hero or Uber Eats would take over in no time.
This is not the case with SpaceX: if SpaceX disappeared, it would take 10-20+ years for any other company to replace it, likely longer. Maybe a combination of companies could do it faster: if Boeing ever finished Starliner, if Amazon invested heavily in Kuiper, and if Blue Origin continued to burn money on New Glenn, then maybe in 10 years we might have SpaceX's level of capability back.
The facts are this: if SpaceX collapsed tomorrow, there is a 100% chance that the government would step in and save it. It's absolutely, fundamentally critical to US military and civilian space. And thanks to Starshield it's fundamentally critical to the whole Navy and Air Force.
If however Uber collapsed tomorrow, in a couple of months nobody would be talking about it anymore except in one of those 'Bankrupt - The Rise and Fall of Uber' YouTube videos.
> If you
Literally just more strawman stuff that has literally nothing to do whatsoever with my original argument.
The position you seem to be outlining here is that any company that is not as big and successful as Microsoft, Apple or Google, three of the largest companies on the planet, cannot be considered a 'success'. Again, this is fundamentally fucking stupid, as by that definition most companies on the planet are not successful.
And only a person with severe Musk reality distortion would try to passionately make the argument that SpaceX isn't a success. In fact nobody would spend time even typing out such a dumb argument if it wasn't for Musk being CEO.
The trillion dollar pay package will make it happen, that's what was missing.
The only time I had to take over was for road debris on the highway. Off the highway it’s very good about avoiding it. My guess is Tesla has not been focusing on this issue as it's not needed for robotaxi for phase one.
Later, V12, which is the end-to-end neural network, worked on highways as well, but they use different stacks behind the scenes.
> Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, page 40
A human would rather be involved in a crash because of their own doing than because they let the machine take control and put trust in it.
But, if you watch the car’s display panel, it looks as if the car didn’t see anything and just went full speed ahead. That’s not great.
It should have slowed down and alerted the driver that something was there. I didn’t watch the complete video so maybe there’s more.
So if that was a human and they ran them over it'd be okay because they were testing FSD?
They're putting themselves (fine) and everyone around them (far less fine) in danger with this stunt.
A competent human driver would instinctively slow down, look at the potential obstruction, and think about changing lanes or an emergency stop.
Most probably, the visible object is just some water spilled on the road or that kind of thing. But if it isn’t then it’s very dangerous
This car appeared to be blind to any risk. That’s not acceptable
At 56, I don't expect to see it on country roads in my lifetime, but maybe someday they'll get there.
Anything "could" happen, but it would take an inordinately inattentive driver to be this bad.
They had 7-8 seconds of staring and talking about the debris before hitting it (or perhaps more, the video starts the moment the guy says "we got eh, a something", but possibly he saw it some instants before that).
So a person would need to be pretty much passed out to not see something with so much time to react.
Elon's estimates have always been off, but it is irresponsible to see an obstacle up ahead and assume the computer will do something about it while the driver and passenger debate what said obstacle is. I am not sure if they were trying to win a Darwin Award, and I say that as no particular fan of Musk!
Also of course you're avoiding an unknown object that large, especially when there's plenty of space to go around it on either side.
If you still don't think you can avoid something like that, please get off the road for everyone's safety.
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
Sad to see HN give in to mob mentality.
If it can't see something like that in ideal conditions, then god knows what else it'd miss in less ideal conditions.
Contrast with Tesla's "vision-only" system, which uses binocular disparity along with AI to detect obstacles, including the ground plane. It doesn't have as good a range, so with a low-profile object like this it probably didn't even see it before it was too late. Which seems to me a theme for Tesla autonomy.
This is why FSD is still shit in late 2025 and drives like it's drunk.
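For a sense of why a disparity-based depth estimate loses precision with distance, here is a back-of-the-envelope sketch; the baseline, focal length, and pixel-matching error are assumed round numbers, not Tesla's actual camera geometry:

    # Stereo depth: Z = f * B / d, so a fixed disparity-matching error of
    # delta_d pixels gives a depth error that grows roughly with Z squared.
    def depth_error_m(z_m: float, baseline_m: float, focal_px: float, disparity_err_px: float) -> float:
        return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

    # Assumed: 0.3 m baseline, 1000 px focal length, 0.5 px matching error.
    for z in (10, 50, 100):
        print(f"{z} m -> +/- {depth_error_m(z, 0.3, 1000, 0.5):.1f} m")
    # Roughly 0.2 m of depth error at 10 m, 4.2 m at 50 m, 16.7 m at 100 m.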
The car reacted the opposite of how a human would. If you saw something unidentified ("road kill?") in the distance, you'd be focusing on it and prepared to react according to what it was. With an empty lane beside you, I think most drivers would be steering around it just based on size, even before they realized exactly what it was (when emergency braking might be called for).
Do they? "Many humans" would hit that? The humans in the car spotted the debris at least 8s before the impact. I don't think any humans would hit that in broad daylight unless they were asleep, completely drunk, or somehow managed to not look at the road for a full 10s. These are the worst drivers, and there aren't that many because the punishment can go up to criminal charges.
The argument that "a human would have made that mistake" backfires, showing that every Tesla equipped with the "safer than a human driver" FSD is in fact at best at "worst human driver" level. But if we still like the "humans also..." argument, then the FSD should face the same punishment a human would in these situations and have its rights to drive any car revoked.
All that to say that I don't feel this is a fair criticism of the FSD system.
Yes it is, because the bar isn't whether a human would detect it, but whether a car with LiDAR would. And without a doubt it would, especially given those conditions: a clear day, a flat surface, and a protruding object are a best-case scenario for LiDAR. Tesla's FSD was designed by Musk, who is neither an engineer nor an expert in sensors or robotics, and therefore it fails predictably in ways that other systems designed by competent engineers do not.
Imagine there was a human driver team shadowing the Tesla, and say they got T-boned after 60 miles. Would we claim that human drivers suck and have the same level of criticism? I don't think that would be fair either.
This is a cross-country trip. LA to New York is 2776 miles without charging. It crashed the first time in the first 2% of the journey. And not a small intervention or accident either.
How you could possibly see this as anything other than FSD being a total failure is beyond me.
>They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.
This really does need to be considered preliminary data based on only one trial.
And so far that's 2.5% as good as you would need to make it one way, one time.
Or 1.25% as good as you need to make it there & back.
People will just have to wait and see how it goes if they do anything to try and bring the average up.
That's about 100:1 odds against getting there & back.
One time.
Don't think I would want to be the second one to try it.
If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there, like Charles Lindbergh :)
Statistically yes, but look at the actual facts of the case.
A large object on the road, not moving, perfect visibility. And the Tesla drives straight into it.
Not hitting static objects in perfect visibility is pretty much baseline requirement #1 of self driving. And Tesla fails to meet even this.
I really couldn't justify 1000:1 with such "sparse" data, but I do get the idea that these are some non-linear probabilities of making it back in one piece.
It seems like it could easily be 1,000,000:1 and the data would look no different at this point.
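One way to see why a single run pins the odds down so poorly: assume a memoryless per-mile failure rate and watch how the round-trip success probability swings with the assumed mean miles between safety-critical failures (the numbers below are illustrative guesses, not measured rates):

    import math

    TRIP_ONE_WAY_MI = 2776  # LA -> NY distance quoted upthread

    # One observed crash ~60-70 miles in only loosely constrains the mean
    # miles between failures under an exponential (memoryless) model.
    for mean_miles in (70, 300, 1000):
        p_one_way = math.exp(-TRIP_ONE_WAY_MI / mean_miles)
        p_round_trip = p_one_way ** 2
        print(f"mean {mean_miles} mi -> one-way {p_one_way:.2e}, round trip {p_round_trip:.2e}")
    # The implied odds against a clean round trip range from a few hundred
    # to one up to astronomically long, and one data point can't tell them apart.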
Roboticists in 2016: "Tesla's sensor technology is not capable of this."
Tesla in 2025: coast-to-coast FSD crashes after 2% of the journey
Roboticists in 2025: "See? We said this would happen."
The reason the robot crashed doesn't come down to "it was just unlucky". The reason it crashed is because it's not sufficiently equipped for the journey. You can run it 999 more times, that will not change. If it's not a thing in the road, it's a tractor trailer crossing the road at the wrong time of day, or some other failure mode that would have been avoided if Musk were not so dogmatic about vision-only sensors.
> The video does not give me that information as a prospective Tesla customer.
If you think it's just a fluke, consider this tweet by the person who is directing Tesla's sensor strategy:
https://www.threads.com/@mdsnprks/post/DN_FhFikyUE/media
Before you put your life in the hands of Tesla autonomy, understand that everything he says in that tweet is 100% wrong. The CEO and part-time pretend engineer removed RADAR thinking he was increasing safety, when really he has no working knowledge of sensor fusion or autonomy, and he ended up making the system less safe. Leading to predictable jury decisions such as the recent one: "Tesla found partly to blame for fatal Autopilot crash" (https://www.bbc.com/news/articles/c93dqpkwx4xo)
So maybe you don't have enough information to put your life in the hands of one of these death traps, but controls and sensors engineers know better.
This argument makes no sense. I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?
Well, sure. Also, not interesting.
In a real world drive of almost 3000 miles there will nearly always be things to avoid on the way.
Not quite. I am saying that basing the judgment on a rare anomaly is a bit premature. It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.
> Also, not interesting
I would have liked to see the planned cross-country trip completed; I think that would've provided more realistic information about how this car handles with FSD. The scenario of when there is a damn couch or half an engine on the highway is what's not interesting to me, because it is just so rare. Seeing regular traffic, merges, orange cones, construction zones, etc. etc. now that would have been interesting.
https://abc13.com/post/loose-spool-involved-in-crash-near-be...
This was the 5th time in two months.
Now 2018 might have been a record year, but there have been a number of others since then.
Fortunately for us all, drivers don't have to go through Houston to get from CA to NY, but you're likely to encounter unique regional obstacles the further you go from where everything is pre-memorized.
As we know 18-wheelers are routinely going between Houston and Dallas most of the way autonomously, and a couple weeks ago I was walking down Main and right at one of the traffic lights was one of the Waymos, who are diligently memorizing the downtown area right now.
I'll give Tesla the benefit of the doubt, but they are not yet in the same league as some other companies.
How is that relevant? You may not have personally encountered this precise circumstance, but that doesn't mean anything.
If you were to encounter this same scenario, however, it is a near certainty that you wouldn't crash into it. And yet the self-driving did. That's what matters.
> I would have liked to see the planned cross-country trip completed
I mean once the car significantly damaged itself, it's not like it can continue.
Big credit to the people running the experiment for letting it run and show failure. Many vloggers might've just interfered manually to avoid the accident and edited that part of the video out in order to continue and claim success.
Now you can certainly argue that "objects in the road" is a category of failure mode you don't expect to happen often enough to care about, but it's still a technical flaw in the FSD system. I'd also argue it points to a broader problem with FSD because it doesn't seem like it should have been all that hard for the Tesla to see and avoid the object since the humans saw it in plenty of time. The fact that it didn't raises questions for me about how well the system works in general.
But also, I doubt you would break your swaybar running over some retreads
Probably a good parallel for Waymo vs Tesla here. One is a generalized approach for the entire world while the other is carefully pre-mapped for a small area.
More likely you simply drove around the debris and didn't register the memory because it's extremely unlikely that you've never encountered dangerous road debris in 30 years of driving.
Really? The people in the video clearly identify a large stationary object in the road a good 7 seconds before hitting it. You don't exactly need lightning quick reflexes to avoid hitting something in that scenario. Maybe more importantly, the Tesla did not seem to see the object at all at any distance. Even if you don't think you could have avoided it, do you think you would have entirely failed to see it and driven right into it at full speed? Because that's what the Tesla did.
Such an event might not be super common, but that doesn't make it an unfair criticism of Tesla's self-driving. Even if they've never seen a large object in the road before, or react the wrong way to it, humans are generally capable of recognizing it when it happens and at least considering whether it's something they should take action on. The fact that Tesla can't do the same makes this an area where FSD is categorically worse than humans, and "avoiding stuff in the road" feels like an area where that's not a good deficit to have, even if there generally isn't stuff in the road.
I don't think FSD has the intelligence to navigate this.
This was not one of those situations.
and some debris large enough to damage your car showing up from out of nowhere after several hours of boring driving along a largely straight highway with little traffic is definitely one of these situations.
Again, this was definitely not one of those situations. It was large, it was in their lane, and they were even yapping about it for 10 seconds.
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
This is what humans already do (and if we didn't do it, we'd be driving off the road). Based on what you're saying, I question whether you're familiar with driving a car, or at least driving on a highway between cities.
I read this comment before seeing the video and thought maybe the debris flies in with the wind and falls on the road a second before impact, or something like that.
But no, here we have bright daylight, perfect visibility, the debris is sitting there on the road visible from very far away, the person in the car doing commentary sees it with plenty of time to leisurely avoid it (had he been driving).
Nothing unexpected showed up out of nowhere, it was sitting right there all along. No quick reaction needed, there was plenty of time to switch lanes. And yet Tesla managed to hit it, against all odds! Wow.
My impression of Tesla's self driving is not very high, but this shows it's actually far worse than I thought.
However, I'm sure a human driver in the loop would have reacted. Expecting the driver to sit there watching a machine do its thing 99% of the time and then step in to save that situation is a horrific misreading of human nature.
By the way, playing a lot of racing videogames is great training for dealing with that sort of stuff, except maybe getting good at mirrors. I've been in a few dangerous situations and they felt like just the ten-thousandth averted crash. No thinking, only reflexes.
It's one of the best ways to figure out what it feels like to drive at the limits of your car and how you and it react in a very safe and controlled environment.
The best time was a very badly designed speed bump that I didn't even hit at high speed, but one side was ridiculously inclined vs. the other, so the entire Civic's front end just smashed right into the pavement and dragged for 3 feet. I wouldn't be surprised if I went into a body shop and found the front end tilted upwards by a few centimeters. lol
[Cut, google ai provided wrong numbers]
A company actively covering up things that egregious can only be assumed to be doing things even worse regularly.
[1] https://www.reuters.com/investigates/special-report/tesla-mu...
Not sure how to take this reply.
They saw it, called it out, and very deliberately let the car (not) deal with it because that was the point of what they were doing.
They did not "miss" anything.
But it doesn’t help disprove that it’s entirely the computer's fault. They could have taken action if they were a rational driver.
From my perspective I think many people would have failed to take action. Swerving or hard braking at those speeds is very dangerous. And many things on the road, like roadkill or bags, can be driven over.
You said the driver and car both missed it.
I say the fact that you can hear them discuss the object well before hitting it yet clearly not try to avoid it means they did not actually "miss it", which is also why my first comment in this thread was in response to the notion that "a human did hit that..."
These drivers hitting the object while intentionally not intervening does not actually provide information as to whether other drivers not running the same "experiment" would've hit it.
Listen to the audio in the video. The humans do see it and talk about it for a long time before the car hits it. Had a human been driving, plenty of time to avoid it without any rush.
They do nothing to avoid it presumably because the whole point of the experiment was to let the car drive, so they let it drive to see what happens. Turns out Tesla can't see large static objects in clear daylight, so it drives straight into it.
Hopefully we get there. Waymo hasn’t even started highway testing yet but maybe they will be sufficiently better that cameras in cheaper cars simply aren’t worth it. Or maybe a driver will always be in the loop and cross country FSD only is dumb
That's not to say that there aren't many drivers who shouldn't be driving, so both can be true at once, but this is certainly not a bar against which to gauge autonomous driving.
Even if they did not understand what it was, in the real world when you see something on the road you slow down or maneuver to avoid it, no matter if it is a harmless piece of cloth or something dangerous like this. People are very good at telling when something is off; you can see it in the video.
Another thing to keep in mind is that video footage is much lower quality than what we can see with our human eyeballs. At no point in the video can I clearly identify what the debris is, but it's clearly evident that the humans in the car can, because they're clearly reacting to it seconds before it's even visible to us in the dash-cam-quality footage. I will freely accept that many drivers are in fact bad drivers, but a carcass (I think?) on a lane visible for >10 seconds away is something that anyone who can't avoid needs to have their license revoked.
The very same video demonstrates this is not true, since the human in the video sees the debris from far away and talks about it, as the self-driving Tesla continues obliviously towards it. Had that same human been driving, it would've been a non-issue to switch to the adjacent lane (completely empty).
But as you said, the friend loves Tesla. The fanboys will always have an excuse, even if the same video contradicts that.
From what I understand, in the USA when a truck tire wears down they put a new layer of tread on it. But it doesn't seem to work very well as it eventually peels off.
Every now and then the Karens get to screeching about it and it reaches a critical mass and the NHTSA does a study and they find that most of the debris you're seeing has nothing to do with retreads. Here's the summary of a recent one:
https://www.moderntiredealer.com/industry-news/commercial-bu...
All I heard was "taxes".
/s
On a more serious note, in the US we generally go in the direction of fewer services rather than more services. It, of course, leads to massive inefficiencies, like police removing shredded tires. But it's very difficult here, politically speaking, to get Americans to agree to add government services.
But also the dominance of car culture in the US is such that the principal state police force may actually be designed principally as a component of the highway traffic authority, as is the case in California, where the Highway Patrol was always larger than and in 1995 absorbed the State Police.
Have you compared distances?
Germany is also the country with the most traffic because it's kind of at the center of everything and pays for roads with income tax instead of tolls.
The French charge really high tolls and have very little traffic compared to Germany. They really don't have an excuse.
Oh but we do. Most of the state-owned motorways have been sold off to conglomerates about two decades ago, dividends and exec comps have to come from somewhere.
So still the cheapest roads to be on for foreigners.
I mean, I have seen some dangerous things. Not at the rate you describe, though. Not even close.
??
They are not..
They'd be liable even if it was road kill, as they're responsible for ensuring big animals don't get onto roads.
So also as a consequence of this: if the US were to spend the same per-person dollar amount on road upkeep, US roads would have WAY more money for maintenance. Yet the outcome is obviously worse.
I honestly can't recall ever feeling like I was going through a markedly different area in either better or worse directions.
Around here anything big enough to matter gets reported, the cops fling it to the side of the road and it gets picked up on a schedule because they always pick things up a few weeks before they mow (presumably because hitting garbage isn't great for the mowers).
But yes there are folks whose job it is to clean that stuff up.
In the linked video, highway patrol comes out to remove the debris just a few minutes after they hit it. (Highway patrol had been called out prior to them hitting it.)
I recall I was completely flabbergasted by all the cars just ditched along the highway. There were lots of them, just off the road into the grass on the side or whatever.
I asked my buddy about it and he said it was usually tires, as it was cheaper to buy another car than get new tires... Well that didn't help my blown mind one bit.
Mind you, on the way to his house I passed a Kia dealer which literally had a huge "buy one car - get one free" sign outside...
Your rake is much less likely to fall out of a van than out of the bed of a pickup.
So if stuff can't fall out, it won't get cleaned up.
Second, this is a metal ramp used to load vehicles on a trailer (think bobcat-like).
To tow a trailer like that in Europe requires additional licenses, which comes with training around tying down EVERYTHING and double checking.
In the USA you are allowed to drive with this with the same license you need to drive a Smart Car.
That they can today doesn't mean they can do the same route 10,000 times without a safety-critical intervention. In fact, public tracking indicates Tesla's numbers are MUCH lower than that.
No. If you continue to watch the video after the collision, it's clear that there was no traffic tailgating. They even slowed down and pulled over to the side of the road. No other cars.
That kind of consideration is where having traffic tailgating factors into it. A risk of collision (or a minor collision) may be a better option than a certain (or worse) collision with the tailgating idiot.
You shouldn’t take pills a stranger gives you at a music festival without checking them for loads of things, for example. Even if you don’t ever intend on consuming them it’s nice to have specific accusations with evidence.
That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
Comparing to a human is not a valid excuse...
Not presumably, we know for sure since they are talking about it for a long time before impact.
The point of the experiment was to let the car drive so they let it drive and crash, but we know the humans saw it.
I don't think that is the case. We will judge FSD on whether it causes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes that a human wouldn't, if in return it makes a lot fewer mistakes in situations where humans would.
Given that >90% of accidents are easily avoidable (speeding, not keeping enough safety distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD be safer on average very quickly.
A self-driving car of today still underperforms the top of the line human driver - but it sure outperforms the "0.1% worst case": the dumbest most inebriated sleep deprived and distracted reckless driver that's responsible for the vast majority of severe road accidents.
Statistics show it plain and clear: self-driving cars already get into less accidents than humans, and the accidents they get into are much less severe too. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.
Tesla's crash reporting rules:
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
NHTSA's reporting rules are even more conservative:
> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
At highway speeds, "30 seconds" is just shy of an eternity.
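To put that 30-second window in concrete terms (the speeds are just typical highway values):

    # Distance covered in 30 seconds at typical highway speeds.
    for mph in (55, 70):
        meters = mph * 1609.34 / 3600 * 30
        print(f"{mph} mph -> about {meters:.0f} m ({meters * 3.281:.0f} ft) in 30 s")
    # ~740 m (~2,400 ft) at 55 mph; ~940 m (~3,100 ft) at 70 mph.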
Gaps in Tesla’s telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic [airbag] deployment, which are a minority of police reported crashes. A review of NHTSA’s 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
From: https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf
This subverts all the accumulated experience other road users have about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will now have to learn the quirks of FSD and keep an eye out for abnormalities in its behaviour?
That's just unrealistic. Not only will people have to deal with what other drivers can throw at them (e.g. veering off lane due to inattention), they will also have to be careful around Teslas, which can phantom brake out of nowhere, fail to avoid debris (shooting it onto unpredictable paths), etc.
I don't think we should accept new failure modes on the road for FSD, requiring everyone else to learn them to be on alert, it's just a lot more cognitive load...
This is what Musk has been claiming for almost a decade at this point and yet here we are
It's the standard Tesla set for themselves.
In 2016 Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...
Wasn't true then, still isn't true now.
This was really best possible driving conditions - bright day, straight dry road, no other cars around, and still it either failed to see it, or chose to run over it rather than steering around it or stopping. Of all the random things that could happen on the road, encountering a bit of debris under ideal driving conditions seems like it should be the sort of thing it would handle better.
And yet Tesla is rolling out robo taxis with issues like this still present.
True, but not even relevant to this specific example. Since the humans clearly saw it and would not have hit it, so we have a very clear example where Tesla is far inferior to humans.
I know that the basic autopilot is a completely different system than the so-called FSD.
But equipped with that experience from the basic autopilot, it does not surprise me that a large piece of debris on the road was completely missed by the FSD.
At least for me there was nothing indicating there is not enough clearance.
It’s a huge chunk of metal, well over a foot long, in the middle of a lane. The center of a lane is usually higher than the sides, and an uneven patch of road can cause a slight bounce before it, further reducing clearance. I’d worry about my RAV4 (8 inches) clearing it safely.
At first, I thought it was possibly a tire tread, which tend to curl up.
When you're wielding immense amounts of money, power, and influence I think it's worth trying to do the bare-minimum to hold people accountable for their words and claims. Otherwise your words are meaningless.
I feel like this is the case across the board.
People who always have an excuse, try to shift blame, etc., are assumed to be lacking in competency (let alone ethics and trustworthiness).
If an organisation is constantly retrenching experienced staff and cutting corners to increase earnings rather than being driven by engineering first, it doesn't matter what the engineers do amongst themselves. This culture, in fact, rewards engineers doing a bad job.
I confess to a selection bias, because I won't work at a company that doesn't behave that way. Life is too short for that BS. However, that I maintain employment at the expected pay rates while doing so indicates that there are a lot of companies who don't behave the way you describe.
All that said, I certainly don't deny that there are also a lot of companies who do behave as you describe.
Any company that does engineering "well" likely has slower growth and a smaller PE multiple.
Consequently, you don't hear about it nearly as much as the splashy, full-financial-speed-ahead companies.
Tl;dr - don't buy products or services from companies with high valuation stock prices... they're making that profit somewhere
https://www.motortrend.com/reviews/tesla-cybertruck-beast-vs...
https://www.youtube.com/watch?v=5J3H8--CQRE
The lead engineer on the Cybertruck sadly tried to defend the lie:
https://x.com/wmorrill3/status/1746266437088645551
They never even ran that quarter mile.
> Knowing when to say thanks and when to say sorry is the key for success.
...and I have used this piece of advice since then; it has paid me handsomely. Of course, this doesn't allow you to be shameless; on the contrary, it requires you to stick to your values as a prerequisite.
I think what allows Elon to behave like that is how he can retaliate without any repercussions since he has tons of money and influence in some circles.
For self driving, he simply decided X is right and talked about exponentials, and no matter how many times it fails, there is no reflection whatsoever.
Not to get too political, but the last I've heard of Elon Musk is that he was speaking to/mobilizing right wing extremists at a big protest in London. I am also pretty sure he has been trying to do similar things in other European nations (for whatever reason).
It seems to me that it is a bit late to plead for "basic decency" at this moment.
But at the same time let me ask you, what has he got to lose? What financial or reputational risk is he taking by not taking any accountability?
I know because I’m one of them. FSD paid in full almost seven years ago, still does absolutely nothing in Europe. A five-year-old would do a better job at driving because the Tesla can’t even see speed limit signs correctly.
Tesla takes no responsibility for their misleading marketing and years of lies. Most recently Musk promised in early 2025 that these old cars would get a hardware update that will finally enable the paid-for FSD (as if…) The company itself pretends to know nothing about this latest promise made by its CEO.
It’s insane that a business with a trillion-dollar market cap operates like this. It seems to be more of a cult than a real company.
This is sad and atrocious. Not only can a Ford Puma (an econobox compared to a Tesla) read almost all speed limit signs correctly, it can also pull speed limit data correctly from its onboard maps when there are no signs. These maps can be updated via WiFi or an onboard modem too.
Tesla mocked "big auto industry", but that elephant proved that it can run if it needs to.
For instance UNECE regulation 79 prevents FSD from using its full capacity to turn the steering wheel.
It's a rat's nest of red tape.
The real problem is that Tesla sold this in Europe, but never put the slightest effort towards actually developing the localized ML model that could make it work. (Of course the hardware sucks too, but they could have done a much better job with it.)
If it's really due to a patent, that just makes it worse. Tesla has known all along that they can't deliver the product they sold.
FSD is literally improving exponentially, looking at it in Europe (where it is blocked by regulators with economic incentives to delay Tesla's progress) is like looking at a fully fueled rocket ship at ignition and complaining that it is not moving.
Go take a ride in a new Model Y with hardware 4 and V13 software in North America and you'll realize how the EU regulators are screwing over European customers.
yeah, how could we expect software developers to write code that can replace "z" with "s", handle extra "u"s and divide numbers by 1.6? </s>
My mistake was assuming this company had the slightest decency to sell things they could actually deliver.
Isn't that the bare minimum requirement for how commerce is supposed to function?
But my guess is that it's a defense mechanism, essentially the Just World fallacy. "It would really suck to have that bad thing happen to me through no fault of my own. *spoink!* I'm careful (and smart) therefore bad things won't happen to me" (https://dilbert-viewer.herokuapp.com/1999-09-08)
On one hand, I agree with you that this is maddening.
On the other hand this is, as the kids would say, "the meta" in 202X for business and politics.
Lying at scale and never taking responsibility for anything obviously works exceptionally well, and not just for Elon. I wish it didn't, but it does.
I've written off pretty much everything he says since sometime before 2020, too many lies about self-driving to count.
But I'm not someone with any influence (nor do I really want that kind of attention).
"How do we get self-driving into millions of regular consumer cars without doubling the price by adding expensive sensors and redesigning the vehicle around huge chunky sensors"
Waymo is focused on taxis for a reason: it's likely going to be unaffordable except to people driving Lambos. But that may also be fine for a big chunk of the public who don't want to own cars (or want to rent their own as a service).
Some consumer car companies are experimenting with adding smaller LIDAR sensors, like the recent Volvo EX90, which costs ~$100k. But they aren't as sophisticated as Waymo's.
Back then Comma AI did the same thing, choosing cheap cameras. Geohot, the founder, is similar to Elon in that he prefers bucking trends and taking risks. He made a good public argument around that time for using cameras and betting on more data = better output. And I guess Elon thought so too. It also happened to involve less hardware engineering and let them focus on software, which is an easier decision as a business.
If it doesn’t work out I’m sure it will die a quick death and LIDAR will take over. State regulators won’t tolerate it long.
Except you'd need to map "everywhere" in high-fidelity 3D, save it somewhere in the car, and have it accessible near-realtime. The real reason Waymo can't service "everywhere" is that their approach doesn't scale.
And don't get me wrong - it's clearly a better service where it works (at this point in time), but it'll realistically only ever work in pre-mapped cities. Which of course would remove a ton of drivers and accidents, so still a win.
That was my hunch, but Google Lens was able to ID it. Possible that Waymo vehicles can do this too, but that must take some serious compute and optimization to do at highway speeds.
https://www.proxibid.com/_next/image?url=https%3A%2F%2Fimage...
It's huge, far bigger than it looks in the video. You would have to be asleep at the wheel to miss it.
Has Elon lied about the capabilities? Yes, on many occasions.
Crashing your car to prove it seems like a waste, when the documentation is clear.
""" Tesla Autopilot is an advanced driver-assistance system (ADAS) developed by Tesla, Inc. that provides partial vehicle automation, corresponding to Level 2 automation as defined by SAE International """ https://en.wikipedia.org/wiki/Tesla_Autopilot
"""
Drive Autonomously: No manufacturer has achieved Level 4 or Level 5 autonomy for road use. Tesla has not achieved Level 3 conditional autonomy like Mercedes-Benz’s DrivePilot system. A Tesla cannot drive itself. The driver must remain attentive at all times. Tesla now qualifies Autopilot and Full Self-Driving with a (Supervised) parenthetical on its website. Tesla’s Level 2 system is a hands-on system that requires the driver to regain control immediately. Torque sensors confirm the driver’s hands are on the wheel or yoke. Level 2 Driving on City Streets: Tesla does list Autosteer on City Streets as a feature of Full-Self Driving. But notably, its website provides no further """ https://insideevs.com/news/742295/tesla-autopilot-abilities-...
""" In a statement addressing the US recall, Tesla declared its technology is a ‘Level Two’ semi-autonomous driving system – not the more advanced ‘Level Three’ system which is already being developed and rolled out by rival car-makers. """ https://www.drive.com.au/news/tesla-full-self-driving-level-...
Waymo has achieved Level 4, with hundreds of thousands of paid rides per week and a stellar safety record. But they're not technically a manufacturer I guess.
And here, the too-frequently posted excuse that "oh, many humans would have hit that too" is utter nonsense.
In that situation, with light traffic, clear daylight visibility, and wide shoulders, any human who would have hit that is either highly distracted, incompetent, or drunk.
Both driver and passenger saw the object at least 7-8 seconds ahead of time; at 0:00sec the passenger is pointing and they are commenting on the object, at 0:05sec passenger is leaning forward with concern and the Tesla drives over it at 0:08sec. The "Full Self Driving" Tesla didn't even sound any warning until a second AFTER it hit the object.
Any alert half-competent driver would have certainly lifted off the accelerator, started braking and changing lanes in half that time. They didn't because of the expectation the Tesla would take some corrective action — bad assumption.
"My 'Full Self Driving' is as good as a drunk" is not a worthwhile claim.
Worse yet, the entire concept of [it drives and then hands control to the human when it can't handle a situation] is actively dangerous to levels of insanity.
Human perceptual and nervous systems are terrible at tasks requiring vigilance; it is as if our brains evolved for attention to wander. Having a life-critical task that can literally kill you or others ALMOST fully handled autonomously is a situation designed for the human to lose attention and situational awareness. Then, demanding in a split second that (s)he immediately become fully oriented, think of a reaction plan, and then execute it, is a recipe for disaster.
In this case, it is even worse. The Tesla itself gave the humans zero warning.
The driver and passenger saw the object well in advance of the Tesla, with 3-4 times the time and distance needed to react effectively. But they assumed nothing was wrong, because they assumed the Tesla would handle the situation, and they were not in a driving mindset, instead waiting to see what the Tesla would do. They were not actively driving the car in the world. Fortunately, the only result was a mangled Tesla — this time.
- Elon bullshits wildly and sometimes delivers
- FSD is far ahead of other available personally owned autonomy - no other system even attempts to drive on surface level streets. FSD works better than you would think - I've been in several flawless rides across town in conditions I didn't expect it to handle.
- FSD doesn't work well enough to rely on it yet, so what's the point for real consumers who don't want to sit there nervously hovering their hands over the wheel even if it is impressive and you might have to ride several hours before anything goes awry.
- We can't really know how close FSD is to being reliable enough because all we have is marketing claims from Tesla, fan boy clips on YouTube, and haters who can't seem to discern where FSD really is ahead, even as it falls far short of hype
What I wish we had was a common metric audited by a third party reliably published in terms of hours until disengagement or something like that across all systems in various conditions.
The owner's manual Tesla has posted on www.tesla.com says, "Full Self-Driving (Supervised) is a hands-on feature that requires you to pay attention to the road at all times. Keep your hands on the steering wheel at all times..." [0]
The owners of Tesla vehicles are saying that they can just relax and not keep hands on the wheel.
Is Tesla maintaining a more conservative stance while encouraging drivers to do dumb things? Or is the owner's manual just out of date?
0: https://www.tesla.com/ownersmanual/modely/en_us/GUID-2CB6080...
"To be honest with you. {talked over} you decide to change lane. How come you, did you think to change lanes or you just kinda froze up in that moment?"
He clearly thought they had plenty of time to react to it.
Musk's army must've missed this one.
lazide•4mo ago
For automated tools like CNC machines, there is a reason it’s just ‘emergency stop’ for instance, not ‘tap the correct thing we should do instead’.
But doing an e-stop on a road at speed is a very dangerous action as well, which is why that isn’t an option here either.
UncleMeat•4mo ago
The alternative is taking control of the car all the time. Something nobody is going to practically do.
detaro•4mo ago
Passively monitoring a situation continuously and reacting quickly when needed is something humans are not good at. Even with airline pilots, where we do invest a lot in training and monitoring, it's a well-understood issue that having to take over in a surprising situation often leads to confusion and time needed to re-orient before they can take effective action. Which for planes is often fine, because you have the time buffer needed for that, but not always.
anonymars•4mo ago
And then when something goes wrong, you are unceremoniously handed a plane with potentially some or all of those protections no longer active.
As an analogy, imagine your FSD car was trying to slow down for something, but along the way there is some issue with a sensor. So it gives up and hands control back to you while it's in the middle of braking, yet now your ABS is no longer active.
So now the situation is much more sudden than it would have been (if you had been driving the car you would have been aware of it and slowing down for it yourself earlier in the game), you likely weren't paying as much attention in the first place because of the automation, and some of the normal protection isn't working.
So it's almost three levels of adding insult to injury. Potentially related discussion: https://news.ycombinator.com/item?id=43970363
kayodelycaon•4mo ago
The level of training required to oversee full automation is non-trivial if you have to do more than press a stop button.
Coffeewine•4mo ago
It does have the same problem: if 99.999% of your flight time is spent in normal law, you are not especially ready to operate in one of the alternate laws or, god forbid, direct law. Which is similar to the case of a driver who, accustomed to the system, perhaps forgets how to drive.
But I think we have a ways to go before we get there. If the car could detect issues earlier and more gradually notify the driver that they need to take control, most every driver at present retains the knowledge of how to directly operate a car with non-navigational automation (ABS as you mentioned, power steering, etc.).
anonymars•4mo ago
I was thinking of something similar to XL Airways Germany 888T. I was trying to find it and came across this thread making a similar comparison so I'll link that: https://www.reddit.com/r/AdmiralCloudberg/comments/18ks9nl/p...
But I think there was some other example with an engine asymmetry (an autothrottle issue?) that the autopilot was fighting with bank, and eventually it exceeded the bank limit and dumped a basically uncontrollable aircraft in the pilots' lap. It would have been more obvious if you were seeing the yoke bank more and more. (Though it looks like this was China Airlines 006, a 747SP, which contradicts that thought.)
I agree that we can make the situation less abrupt for cars in some cases (though people will probably get annoyed by the car bugging them for everything going on)
anonymars•4mo ago
> "In trying to explain why Ho never took this critical step and subsequently failed to notice the plane’s increasing bank, the NTSB looked at two areas: fatigue, and overreliance on automation. Regarding the latter, investigators noted that during cruise flight, the job of a Boeing 747 pilot is to monitor the automation, not to fly the airplane. Studies have shown that humans are naturally poor monitors of automation, because it’s boring and does not actively engage our brains and bodies. As a result, when something goes wrong, the brain has to “wake up” before it can assess the situation and take corrective action. Therefore, when flying on autopilot pilots have increased reaction times to unexpected events, as opposed to flying manually, when a sudden change in the state of the aircraft can be instinctively assessed using physical cues transmitted via the control column."
So who knows what we can do. I've definitely experienced this to varying degrees with the fancier cruise controls (e.g. "Autopilot"). It's one thing to just take pressure off the gas and/or steering wheel, but another entirely when you aren't actively "driving the car" at full attention anymore.
catapart•4mo ago
They want to pretend you'll only need to actually intervene in 'edge case' situations, but here we have an example of perfect conditions requiring intervention. Regardless of the buzzwords they can attach to whatever methods they are using, it doesn't feel like it works.
rsynnott•4mo ago
... wait, do they actually claim that? I mean that's just nonsense.