I don't think that's right; I think the stock price entirely depends on people seeing it as a vehicle to invest in Musk. If Musk died tomorrow, but nothing else changed at Tesla, the stock price would crater.
Right now it's easily double to triple that, even with Musk's behavior.
The passive investing / market cap weighted ETF complex tends to help big valuations stay big, but a company like Tesla still needs that sharp shot in the arm followed by frenzied buying occasionally in order to stay aloft (be it by traders, retail participants, shorts covering, etc).
I suppose they could replace Musk with another hype salesman, but the "hate" that Tesla gets is a big part of these upside shock cycles for the stock, because the ticker is a siren call for short sellers, who are ultimately guaranteed future buyers.
I suspect a significant proportion of Tesla's stock price comes from people who are using it as a proxy for his other companies that the public can't invest in, primarily xAI (as all AI companies are in a horrific bubble right now) and SpaceX.
Elon can't levitate TSLA and other valuations by himself. There has to be at least the appearance of substance. That appearance is wearing thin. While I'm going to observe the caution that the market can stay irrational longer than I can stay solvent, once reality asserts itself, Elon will be powerless to recreate the illusion.
I get that it's way overhyped, but they have real results that can't be denied.
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is: AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
Musk has, IIRC, actually admitted that this was their purpose.
Did we give him wayyy too much free money via subsidies? Yes. But that was our mistake. And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other. So even in the case of the counterfactual, we could expect a similar outcome. Just different scumbags.
No we wouldn’t have. Not every dollar we give goes to scam artists. And there are a whole lot of industries and companies far less deceitful.
From a car design and development point of view, it's a massive lost opportunity.
From the perspective of someone interested in self-driving, it's a joke.
Really depends on how you view things; purely in terms of money in the stock market, Tesla is doing great.
The only time I had to take over was for road debris on the highway. Off the highway it's very good about avoiding it. My guess is Tesla has not been focusing on this issue as it's not needed for phase one of robotaxi.
> Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, page 40
But, if you watch the car’s display panel, it looks as if the car didn’t see anything and just went full speed ahead. That’s not great.
It should have slowed down and alerted the driver that something was there. I didn’t watch the complete video so maybe there’s more.
So if that was a human and they ran them over it'd be okay because they were testing FSD?
They're putting themselves (fine) and everyone around them (far less fine) in danger with this stunt.
A competent human driver would instinctively slow down, look at the potential obstruction, and think about changing lanes or an emergency stop.
Most probably, the visible object is just some water spilled on the road or that kind of thing. But if it isn't, then it's very dangerous.
This car appeared to be blind to any risk. That's not acceptable.
At 56, I don't expect to see it on country roads in my lifetime, but maybe someday they'll get there.
Elon's estimates have always been off, but it is irresponsible to see an obstacle up ahead and assume the computer will do something about it while the driver and passenger debate what said obstacle is. I am not sure if they were trying to win a Darwin Award, and I say that as no particular fan of Musk!
Also of course you're avoiding an unknown object that large, especially when there's plenty of space to go around it on either side.
If you still don't think you can avoid something like that, please get off the road for everyone's safety.
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
Do they? "Many humans" would hit that? The humans in the car spotted the debris at least 8s before the impact. I don't think any humans would hit that in broad daylight unless they were asleep, completely drunk, or somehow managed to not look at the road for a full 10s. These are the worst drivers, and there aren't that many because the punishment can go up to criminal charges.
The argument that "a human would have made that mistake" backfires, showing that every Tesla equipped with the "safer than a human driver" FSD is in fact at best at "worst human driver" level. But if we still like the "humans also..." argument, then the FSD should face the same punishment a human would in these situations and have its rights to drive any car revoked.
All that to say that I don't feel this is a fair criticism of the FSD system.
Yes it is, because the bar isn't whether a human would detect it, but whether a car with LiDAR would. And without a doubt it would, especially given those conditions: a clear day, a flat surface, and a protruding object are a best case scenario for LiDAR. Tesla's FSD was designed by Musk, who is neither an engineer nor an expert in sensors or robotics, and it therefore fails predictably in ways that other systems designed by competent engineers do not.
I don't think FSD has the intelligence to navigate this.
However, I'm sure a human driver in the loop would have reacted. The driver sitting there watching a machine do its thing 99% of the time and being expected to step in to save that situation, though, is a horrific misreading of human nature.
By the way, playing a lot of racing videogames is great training for dealing with that sort of stuff, except maybe getting good at mirrors. I've been in a few dangerous situations, and each felt like just the ten-thousandth averted crash. Nothing to think, only reflexes.
The best time was a very badly designed speed bump that I didn't even hit at high speed, but one side was ridiculously inclined vs. the other, so the entire Civic's front end just smashed right into the pavement and dragged for 3 feet. I wouldn't be surprised if I went into a body shop and found the front end tilted upwards by a few centimeters. lol
[Cut, google ai provided wrong numbers]
Not sure how to take this reply.
That's not to say that there aren't many drivers who shouldn't be driving, so both can be true at once, but this is certainly not a bar against which to gauge autonomous driving.
Even if they did not understand what it was, in the real world when you see something on the road you slow down or do a maneuver to avoid it, no matter if it is a harmless piece of cloth or something dangerous like this. People are very good at telling if something is off; you can see it in the video.
Another thing to keep in mind is that video footage is much lower quality than what we can see with our human eyeballs. At no point in the video can I clearly identify what the debris is, but it's evident that the humans in the car can, because they're clearly reacting to it seconds before it's even visible to us in the dash-cam-quality footage. I will freely accept that many drivers are in fact bad drivers, but a carcass (I think?) in a lane, visible from >10 seconds away, is something that anyone who can't avoid should have their license revoked over.
From what I understand, in the USA when a truck tire wears down they put a new layer of tread on it. But it doesn't seem to work very well as it eventually peels off.
Every now and then the Karens get to screeching about it and it reaches a critical mass and the NHTSA does a study and they find that most of the debris you're seeing has nothing to do with retreads. Here's the summary of a recent one:
https://www.moderntiredealer.com/industry-news/commercial-bu...
All I heard was "taxes".
/s
On a more serious note, in the US we generally go in the direction of fewer services rather than more services. It, of course, leads to massive inefficiencies, like police removing shredded tires. But it's very difficult here, politically speaking, to get Americans to agree to add government services.
Have you compared distances?
Germany is also the country with the most traffic, because it's kind of at the center of everything, and it pays for roads with income tax instead of tolls.
The French charge really high tolls and have very little traffic compared to Germany. They really don't have an excuse.
Oh, but we do. Most of the state-owned motorways were sold off to conglomerates about two decades ago; dividends and exec comp have to come from somewhere.
I mean, I have seen some dangerous things. Not at the rate you describe, though. Not even close.
??
They are not..
I honestly can't recall ever feeling like I was going through a markedly different area in either better or worse directions.
Around here anything big enough to matter gets reported, the cops fling it to the side of the road and it gets picked up on a schedule because they always pick things up a few weeks before they mow (presumably because hitting garbage isn't great for the mowers).
But yes there are folks whose job it is to clean that stuff up.
In the linked video, highway patrol comes out to remove the debris just a few minutes after they hit it. (Highway patrol had been called out prior to them hitting it.)
That they can today doesn't mean they can do the same route 10,000 times without a safety-critical intervention. In fact, public tracking indicates Tesla's numbers are MUCH lower than that.
No. If you continue to watch the video after the collision, it's clear that there was no traffic tailgating. They even slowed down and pulled over to the side of the road. No other cars.
You shouldn’t take pills a stranger gives you at a music festival without checking them for loads of things, for example. Even if you don’t ever intend on consuming them it’s nice to have specific accusations with evidence.
That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
Comparing to a human is not a valid excuse...
I don't think that is the case. We will judge FSD on whether it makes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes that a human wouldn't, if in turn the computer makes far fewer mistakes in situations where humans would.
Given that >90% of accidents are easily avoidable (speeding, not keeping enough safety distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD become safer on average very quickly.
A self-driving car of today still underperforms the top of the line human driver - but it sure outperforms the "0.1% worst case": the dumbest most inebriated sleep deprived and distracted reckless driver that's responsible for the vast majority of severe road accidents.
Statistics show it plain and clear: self-driving cars already get into fewer accidents than humans, and the accidents they do get into are much less severe too. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.
Tesla's crash reporting rules:
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
NHTSA's reporting rules are even more conservative:
> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
At highway speeds, "30 seconds" is just shy of an eternity.
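For a rough sense of how much road those two windows cover, here's a back-of-the-envelope sketch in Python (the 65 and 75 mph speeds are my own illustrative assumptions, not figures from either policy):

    # Distance a car covers during the 5-second and 30-second reporting windows.
    # Speeds chosen for illustration only.
    MPH_TO_MS = 0.44704

    def distance_covered(speed_mph, seconds):
        """Metres travelled at a constant speed."""
        return speed_mph * MPH_TO_MS * seconds

    for window in (5, 30):
        for speed_mph in (65, 75):
            d = distance_covered(speed_mph, window)
            print(f"{speed_mph} mph for {window:2d}s -> {d:,.0f} m ({d / 1609.34:.2f} mi)")

At 75 mph the 30-second window spans roughly a kilometre of road, versus about 150-170 m for the 5-second window.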
This subverts all the accumulated experience other road users have about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will now have to learn the quirks of FSD and keep an eye out for abnormalities in its behaviour?
That's just unrealistic: not only will people have to deal with what other drivers can throw at them (e.g. veering off lane due to inattention), they'll also have to be careful around Teslas, which can phantom brake out of nowhere, fail to avoid debris (shooting it along unpredictable paths), etc.
I don't think we should accept new failure modes on the road for FSD, requiring everyone else to learn them and stay on alert; it's just a lot more cognitive load...
This is what Musk has been claiming for almost a decade at this point and yet here we are
I know that the basic autopilot is a completely different system than the so-called FSD.
But equipped with that experience from the basic autopilot, it does not surprise me that a large piece of debris on the road was completely missed by the FSD.
At least for me there was nothing indicating there is not enough clearance.
It's a huge chunk of metal well over a foot long in the middle of a lane. The center of a lane is usually higher than the sides, and an uneven patch of road can cause a slight bounce before it, further reducing clearance. I'd worry about my RAV4 (8 inches) clearing it safely.
At first, I thought it was possibly a tire tread, which tend to curl up.
When you're wielding immense amounts of money, power, and influence I think it's worth trying to do the bare-minimum to hold people accountable for their words and claims. Otherwise your words are meaningless.
I feel like this is the case across the board.
People who always have an excuse, try to shift blame, etc., are assumed to be lacking in competency (let alone ethics and trustworthiness).
> Knowing when to say thanks and when to say sorry is the key for success.
...and I have used this piece of advice since then; it has paid me handsomely. Of course, this doesn't allow you to be shameless; on the contrary, it requires you to stick to your values as a prerequisite.
I think what allows Elon to behave like that is how he can retaliate without any repercussions since he has tons of money and influence in some circles.
For self driving, he simply decided X is right and talked about exponentials, and no matter how many times it fails, there is no reflection whatsoever.
Not to get too political, but the last I've heard of Elon Musk is that he was speaking to/mobilizing right wing extremists at a big protest in London. I am also pretty sure he has been trying to do similar things in other European nations (for whatever reason).
It seems to me that it is a bit late to plead for "basic decency" at this moment.
But at the same time let me ask you, what has he got to lose? What financial or reputational risk is he taking by not taking any accountability?
I know because I’m one of them. FSD paid in full almost seven years ago, still does absolutely nothing in Europe. A five-year-old would do a better job at driving because the Tesla can’t even see speed limit signs correctly.
Tesla takes no responsibility for their misleading marketing and years of lies. Most recently, Musk promised in early 2025 that these old cars would get a hardware update that would finally enable the paid-for FSD (as if…). The company itself pretends to know nothing about this latest promise made by its CEO.
It’s insane that a business with a trillion-dollar market cap operates like this. It seems to be more of a cult than a real company.
This is sad and atrocious. Not only can a Ford Puma (an econobox compared to a Tesla) read almost all speed limit signs correctly, it can also pull speed limit data correctly from its onboard maps when there are no signs. These maps can be updated via WiFi or an onboard modem too.
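A minimal sketch of that kind of fallback logic (the function name and both inputs are hypothetical; this is not the Puma's actual software):

    from typing import Optional

    def current_speed_limit(sign_kmh: Optional[int], map_kmh: Optional[int]) -> Optional[int]:
        # Prefer a freshly read sign; fall back to the onboard map when none was seen.
        if sign_kmh is not None:
            return sign_kmh   # camera read a sign recently: trust it
        return map_kmh        # no sign on this stretch: use the (WiFi/modem-updated) map

    print(current_speed_limit(None, 80))  # no visible sign -> 80 from the map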
Tesla mocked the "big auto industry", but that elephant proved it can run if it needs to.
"How do we get self-driving into millions of regular consumer cars without doubling the price by adding expensive sensors and redesigning the vehicle around huge chunky sensors"
Waymo is focused on taxis for a reason: it's likely going to be unaffordable except to people driving Lambos. But that may also be fine for a big chunk of the public who don't want to own cars (or want to rent their own as a service).
Some consumer car companies are experimenting with adding smaller LIDAR sensors, like the recent Volvo EX90, which costs ~$100k. But they aren't as sophisticated as Waymo's.
That was my hunch, but Google Lens was able to ID it. Possible that Waymo vehicles can do this too, but that must take some serious compute and optimization to do at highway speeds.
Has Elon lied about the capabilities? Yes, on many occasions.
Crashing your car to prove it seems like a waste when the documentation is clear.
""" Tesla Autopilot is an advanced driver-assistance system (ADAS) developed by Tesla, Inc. that provides partial vehicle automation, corresponding to Level 2 automation as defined by SAE International """ https://en.wikipedia.org/wiki/Tesla_Autopilot
"""
Drive Autonomously: No manufacturer has achieved Level 4 or Level 5 autonomy for road use. Tesla has not achieved Level 3 conditional autonomy like Mercedes-Benz’s DrivePilot system. A Tesla cannot drive itself. The driver must remain attentive at all times. Tesla now qualifies Autopilot and Full Self-Driving with a (Supervised) parenthetical on its website. Tesla’s Level 2 system is a hands-on system that requires the driver to regain control immediately. Torque sensors confirm the driver’s hands are on the wheel or yoke. Level 2 Driving on City Streets: Tesla does list Autosteer on City Streets as a feature of Full-Self Driving. But notably, its website provides no further """ https://insideevs.com/news/742295/tesla-autopilot-abilities-...
""" In a statement addressing the US recall, Tesla declared its technology is a ‘Level Two’ semi-autonomous driving system – not the more advanced ‘Level Three’ system which is already being developed and rolled out by rival car-makers. """ https://www.drive.com.au/news/tesla-full-self-driving-level-...
And here, the too-frequently posted excuse that "oh, many humans would have hit that too" is utter nonsense.
In that situation, with light traffic, clear daylight visibility, and wide shoulders, any human who would have hit that is either highly distracted, incompetent, or drunk.
Both driver and passenger saw the object at least 7-8 seconds ahead of time: at 0:00 the passenger is pointing and they are commenting on the object, at 0:05 the passenger is leaning forward with concern, and the Tesla drives over it at 0:08. The "Full Self Driving" Tesla didn't even sound any warning until a second AFTER it hit the object.
Any alert half-competent driver would have certainly lifted off the accelerator, started braking and changing lanes in half that time. They didn't because of the expectation the Tesla would take some corrective action — bad assumption.
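As a rough check on that margin (the 70 mph speed, 1.5 s reaction time, and 0.7 g braking below are my own assumptions, not numbers from the video):

    # How much room ~8 seconds of warning buys at highway speed,
    # versus the distance needed to react and brake to a full stop.
    MPH_TO_MS = 0.44704
    G = 9.81

    speed = 70 * MPH_TO_MS      # ~31.3 m/s, assumed highway speed
    warning_s = 8.0             # time the occupants had the object in view
    reaction_s = 1.5            # generous perception-reaction time
    decel = 0.7 * G             # firm but non-emergency braking

    warning_dist = speed * warning_s
    stop_dist = speed * reaction_s + speed**2 / (2 * decel)

    print(f"road covered during the warning: {warning_dist:.0f} m")
    print(f"react + full stop needs:         {stop_dist:.0f} m")

That works out to roughly 250 m of warning against about 120 m to come to a complete stop; a simple lane change needs far less than either.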
"My 'Full Self Driving' is as good as a drunk" is not a worthwhile claim.
Worse yet, the entire concept of [it drives and then hands control to the human when it can't handle a situation] is actively dangerous to levels of insanity.
Human perceptual and nervous systems are terrible at tasks requiring vigilance — it is as if our brains evolved for attention to wander. Having a life-critical task that can literally kill you or others ALMOST fully handled autonomously is a situation designed for the human to lose attention and situational awareness. Then demanding in a split second that (s)he immediately become fully oriented, think of a reaction plan, and then execute it is a recipe for disaster.
In this case, it is even worse. The Tesla itself gave the humans zero warning.
The driver and passenger saw the object well in advance of the Tesla, with 3-4 times the time and distance needed to react effectively. But they assumed nothing was wrong because they expected the Tesla to handle the situation, and they were not in a driving mindset, instead waiting to see what the Tesla would do. They were not actively driving the car in the world. Fortunately, the only result was a mangled Tesla — this time.
gdulli•1h ago
lazide•1h ago
For automated tools like CNC machines, there is a reason it’s just ‘emergency stop’ for instance, not ‘tap the correct thing we should do instead’.
But doing an e-stop on a road at speed is a very dangerous action as well, which is why that isn’t an option here either.
UncleMeat•1h ago
The alternative is taking control of the car all the time. Something nobody is going to practically do.
detaro•1h ago
Passively monitoring a situation continuously and reacting quickly when needed is something humans are not good at. Even with airline pilots, where we do invest a lot in training and monitoring, it's a well-understood issue that having to take over in a surprising situation often leads to confusion and time needed to re-orient before they can take effective action. Which for planes is often fine, because you have the time buffer needed for that, but not always.
anonymars•1h ago
And then when something goes wrong, you are unceremoniously handed a plane with potentially some or all of those protections no longer active.
As an analogy, imagine your FSD car was trying to slow down for something, but along the way there is some issue with a sensor. So it gives up and hands control back to you while it's in the middle of braking, yet now your ABS is no longer active.
So now the situation is much more sudden than it would have been (if you had been driving the car, you would have been aware of it and slowing down for it yourself earlier in the game), you likely weren't paying as much attention in the first place because of the automation, and some of the normal protection isn't working.
So it's almost three levels of adding insult to injury. Potentially related discussion: https://news.ycombinator.com/item?id=43970363
kayodelycaon•1h ago
The level of training required to oversee full automation is non-trivial if you have to do more than press a stop button.
Coffeewine•1h ago
It does have the same problem - if 99.999% of your flight time is spent in normal law, you are not especially ready to operate in one of the alternate laws or, god forbid, direct law. Which is similar to the case of a driver who, perhaps accustomed to the system, forgets how to drive.
But I think we have a ways to go before we get there. If the car could detect issues earlier and more gradually notify the driver that they need to take control, most every driver at present retains the knowledge of how to directly operate a car with non-navigational automation (ABS as you mentioned, power steering, etc.)
anonymars•44m ago
I was thinking of, for example XL Airways Germany 888T. I was trying to find it and came across this thread making a similar comparison so I'll link that: https://www.reddit.com/r/AdmiralCloudberg/comments/18ks9nl/p...
I agree that we can make the situation less abrupt for cars in some cases (though people will probably get annoyed by the car bugging them for everything going on)
catapart•1h ago
They want to pretend you'll only need to actually intervene in 'edge case' situations, but here we have an example of perfect conditions requiring intervention. Regardless of the buzzwords they can attach to whatever methods they are using, it doesn't feel like it works.