> Tesla is widely known for its so-called advanced driver-assistance systems, including Autopilot and Full Self-Driving (FSD).
(emphasis mine)
It’s not a term Musk dreamed up.
https://en.m.wikipedia.org/wiki/Advanced_driver-assistance_s...
No point trying to get folks who will move again soon when something else changes in the US.
There are ~42k traffic fatalities in the US each year. Cameras-only self driving cars are a tiny fraction of the number of total cars.
The highest estimates I've seen for annual traffic deaths with an ADAS involved (not even implying causation) is in the range of dozens. Cameras-only self driving cars would be a fraction of those. As a result, there are quite possibly more than a thousand traffic fatalities each year caused by human-driven cars for each ONE caused by a cameras-only self driving car.
But my original claim was only that they kill a lot fewer. That seems self-evident.
Polonium ingestion also kills fewer people than self driving cars, so by that token, polonium ingestion is perfectly safe.
Meanwhile, there are ~42k traffic deaths per year that could be prevented by focusing on human-driven cars.
Eh, there are over a hundred thousand deaths attributable to Musk's actions in the White House [1]. I agree with you on the short-term calculus. But trusting Tesla to help reduce those traffic deaths--and furthermore, enabling its position of power--puts those states' sovereignty in jeopardy. Swapping lives for sovereignty is an old (and trusted) trade.
[1] https://www.bu.edu/sph/news/articles/2025/tracking-anticipat...
https://wolfstreet.com/2021/12/13/vehicles-in-operation-and-...
Here's an alternative idea that would save a lot more lives:
Take the camera-based driver attention monitoring that works in my seven-year-old Tesla, which notices IMMEDIATELY if I look away from the windshield for more than a second or two, and then require that in human-driven cars.
Estimates for annual deaths in the US from distracted driving are between 3,250 and 12,400. An in-cabin camera is not expensive or specialty hardware. The tech is there, the costs are low. We could save a lot of lives!
If we're ignoring that to focus on Tesla's FSD, the goal is not sensible regulation or saving lives.
A: Citation? Just because it's less than 0.3% of cars on the road doesn't mean it's less than 0.1% of fatalities. And a citation that doesn't let Tesla pass off any FSD crash as "driver error", which they have a horrible habit of doing. If FSD disengages at impact, they call that driver error, which is absolute bullshit.
And because Tesla is taking 0 accountability for it, they are passing it onto the driver. They want to have their cake and eat it too. If you or I are driving distractedly and kill someone, we face serious criminal and financial repercussions.
If FSD decides to swerve out of the lane and into oncoming traffic, Tesla wants to shrug and say "I guess the driver should have been better". That's trash, and should be banned from our roads summarily.
This. Put it on all vehicles that are driven (exc. waymo, zoox, and the like).
It looks like something similar is already happening in the EU (with momentum in the US too.) (See https://spyro-soft.com/blog/automotive/driver-monitoring-sys...)
This exists already in Subaru vehicles, even ones with ICEs. It's called "DriverFocus." It's super helpful. However, I don't believe the technology is mandated in all vehicles yet.
The driver drowsiness alert in Teslas seems to be much more limited than that. It only activates at speeds over 60km/h, when driven for more than 10 minutes, and when Autopilot is not engaged. I wonder why they disable it when Autopilot is on?
The drowsiness alert would be superfluous when it's watching your eyes. It's already going to yell at you if it can't see your open eyes looking out the windshield for more than a second or two.
Whether or not the "steering wheel torque" method is better than a vision-based driver drowsiness alert is probably debatable, but it would be pretty tough to fall asleep while also applying exactly the right amount of torque to keep Autopilot engaged.
It yells at me sometimes when I'm driving down my driveway and looking at my goats instead of the driveway, which is fair. It also yells at me when I look at the mirrors for 'too long' or if I look for 'too long' for potential cross traffic when crossing an intersection (when driving late at night, I try to look for potential red light runners, but you have to spend more time looking). It also tells my spouse to sit up when she already is. Chances are this alert is going to be disabled, because it's a bigger distraction than anything else.
It also likes to alert about cross traffic when I start moving to fall in after traffic I already saw and that is in motion. Those alerts would be handy if they were about traffic I didn't see though, so I don't want to turn them off, even though so far they've been unhelpful.
that sounds rough; hopefully they're OK! did the car drive into them from the side or from behind?
where did it happen? googling "Tesla ditch self-driving accident" turns up nothing, but I would have thought it would have made the news.
Traffic deaths per 100,000 population:
- Mississippi 23.9 (highest)
- Texas 14.7
- CA 11.3
- Rhode Island and DC 4.8 (lowest)
Anecdotally, the huge trucks and driving culture (aggressive, fast) would have made me guess Texas has higher deaths before seeing the data.
Texas has a plurality of fatal car accidents (for the USA), but California is not far behind, and in 2022 California had slightly more deaths. (This page doesn't have the number of fatal car accidents for 2022, which is a bit odd.)
2023 here, Texas has plurality again, with California close behind: https://www-fars.nhtsa.dot.gov/States/StatesCrashesAndAllVic...
You're not looking at absolute numbers, which is what plurality means. I don't see how "someone in the US is more likely to die in a car wreck in Texas even if they never go to Texas" could make sense.
A driver in the US dies while driving due to a crash/wreck/whatever.
Statistically, that occurred with the highest probability in TX. As i said, this was like 2015-2019 when i used to claim this. There's a sign on freeways in TX that says "highway deaths so far in <year>: <16 bit int>" which led me to start looking into it, and i think my little quip is just a way to draw attention to how dangerous it is to drive in TX. But Texas is quite large.
Although there's a good argument to be made that Tesla's entire system has fundamental design flaws which they have negligently disregarded.
I also believe that marketing it as FSD means it should be held liable and scrutinized as a level 4 system, because when the public hears FSD, they naturally think of the abilities described in level 4, arguably even 5.
(Though a consequence is that levels 3 and 4 are very close together in difficulty. We might not see many level 3 cars.)
https://www.sae.org/binaries/content/gallery/cm/content/news...
The time is up to the manufacturer, isn't it?
Mercedes uses 10 seconds right now and that seems pretty good to me. At that point I know it can't be too dire or the car would have already emergency stopped.
The time depends on how quickly an event unfolds in traffic. You can't guarantee 10s of notice for an event that becomes imminent in 2s, or one that the system can't handle or can't detect.
The car could become temporarily "blind" for some reason with just 4-5s to brake before a collision. It's enough for a human driver even considering reaction time. But it's impossible to guarantee a minimum time without the ability to predict every issue that will happen on the road.
If it becomes "blind" because of an unexpected total system failure, that's an exception to the guarantee just like your transmission suddenly exploding is an exception. It had better be extremely rare. If it happens regularly then it needs a recall.
When dealing with unpredictable real life events there are no guarantees, unless we're considering the many carveouts to that definition from a legal perspective. A blind car (fluke weather, blown fuse, SW glitch, trolley problem) can no longer guarantee anything. Giving the driver 10s, or assuming the worst and braking hard could equally cause a crash.
> your transmission suddenly exploding is an exception
As long as the brakes or steering work a driver could still avoid a crash. The driver having a stroke is closer to a blind car.
The guarantee here is that the human isn't obligated to intervene for a moment.
If you call that guarantee impossible, then what about level 4 cars? They guarantee that the human isn't obligated to intervene ever. Are level 4 cars impossible?
Is this a wording issue? What would you say level 4 cars promise/provide? Level 3 cars need to promise/provide the same thing for a limited time. And that time has to be long enough to do a proper transfer of attention.
Ah, understood. So the guarantee is that the driver is not legally responsible for anything that happens in those 10s. I always took that as a guarantee of safety rather than from legal consequences.
The guarantee is that you will be very safe and you can go ahead and look away from the road and pay attention to other things. But at most this is as good as a level 4 or 5 car, not an impossibly perfect car.
Yes but there is a minimum time (if a bit under-specified)
> "At Level 3, an ADS is capable of continuing to perform the DDT (Dynamic Driving Task) for at least several seconds after providing the fallback-ready user with a request to intervene."
J3016 Section 3.12, Note 3: https://wiki.unece.org/download/attachments/128418539/SAE%20...
Which makes me think: if FSD requires constant hands on the steering wheel and constant concentration, what is the point? May as well drive yourself.
The highest level of ADAS system I use regularly has facial attentiveness tracking. If you spend too much time drinking coffee or even looking out the sides of the car it will alert you and eventually turn off. So you're not spending a ton of time drinking coffee or reading emails.
It's really nice having the car just want to stay in the center of the lane and keep the following distance all on its own. It's less fatiguing on your hands and arms having the car feel like it's in a groove following all the curves for you instead of resisting your input all the time for hours and hours. It's incredibly nice not having to switch between the brake and the gas over and over in stop and go traffic. Instead, the only thing I need to focus on are the drivers around me and be ready to brake.
I've driven between Houston, Dallas, and Austin dozens of times with ADAS systems and another dozen or so times with only basic cruise control. It's way nicer when the only time I have to touch the gas and brake are getting on and off the highways. I'm considerably more relaxed and less exhausted getting to my destination.
Let's assume all these options are either the same price or an immaterial difference to the price of your next car. If you had an option for a car with basic cruise control or no cruise control, which one would you take? If the option was basic cruise or adaptive cruise which kept pace with traffic and operated in stop and go conditions, which would you choose?
And then there's the skill atrophy. How do you learn to perform in stressful situations? By building up confidence and experience with constant repetition in more mundane ones, which this robs you of.
This isn't correct. Level 4 doesn't require driver intervention[0]. Hence, why I'm arguing "Full Self Driving" starts here, level 4.
So now if you're explicitly not required to "be present" then the system should be liable or at least the "driver" isn't to blame directly.
Actually, level 5 is the same as level 4 but adds heavy rain, snow, ice, name-your-adverse-condition.
Level 2 requires the driver to choose whether to intervene at all times. This is an unreasonable task for humans.
Level 3 puts the car in charge of when intervention is needed, and even once it wants intervention it still has to maintain safe control for several seconds as part of the system spec.
Level 4 puts the car in charge of when intervention is wanted, but you can refuse to intervene and it has to be able to park itself.
So I will double down on my claim. Until the car requests intervention AND the timer runs out, level 3 and 4 are the same. They require the same abilities out of the car. And that section of time, between wanting intervention and getting intervention, is the hardest part of level 3 driving by far. If you can solve that, you're 90% of the way to level 4.
A level 3 car has to be able to handle emergencies several seconds long, and turning it into level 4 is mostly adding the ability to park on the shoulder after you get out of the initial emergency.
The gap between 4 and 5 is a bunch bigger. A level 4 car can refuse to drive based on weather, or location, or type of road, or presence of construction, or basically anything it finds mildly confusing. 5 can't.
I edited a bit for clarity, but also I'll append a thought experiment as an extra edit:
A level 3 car with an hours-long driver intervention timer is basically identical to a level 4 car.
If you have a 0 second intervention timer, you're barely better than a level 2 car.
How long does the timer have to be before developing your level 3 system is almost as difficult as a level 4 system?
I don't think it's very long.
I still stand by level 3 != level 4 in terms of real world liability.
Level 3 allows too much wiggle room and sloppiness to be able to legally shift liability away from the driver. At that point you're playing games with that "intervention period" length: manufacturers claiming Level 3 will want to lower it as much as possible, regulators will want to raise it. To me, Level 3 simply shouldn't exist.
Only at Level 4 is the expectation, without a doubt, that the machine is in control. A person in the driver's seat is optional because the steering wheel and pedals are as well. When people bought "Full Self Driving" they seriously believed it belonged at the "when can I go to sleep?" level of ability, which always put the expectation at Level 4.
It looked like the Mercedes system is 10 seconds which seems like plenty to me.
And while it would be nice to sleep I'll be pretty happy just looking away from the road.
I've clocked nearly a half million miles on the road (I'll be there sometime in the next 9 months), and the range of technical ability you need to drive in just the US, no, scratch that, any given state or even county varies so much and potentially so often that FSD is just a lie to sell cars. I'm willing to upload a full hour drive touring a few parishes around here in my quite heavy Lexus, front and rear cameras, just to prove my point. I'd do it in the subaru but the dashcam isn't very good and also its lineage is rally so it exaggerates how poor the roads are. My YouTube has dashcam footage of drives that I'm willing to bet no automated system could handle, even if it claimed to be "level 5". Driving after a storm or hurricane is another issue. I know the hazards in general and specifically for the areas I'd need to travel during or after an emergency. I cannot fathom the amount of storage and processing that would take, to have that for every location with roads. On board, in the car? Maybe in 20 years.
Doing some napkin math, with 4 million miles of road in the US, if you wanted to store 1KB of data per meter of road, hundreds of data points, you'd only need 7TB for the entire database.
And the processing to make it shouldn't be anything special, should it? Collection would be hard.
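For what it's worth, the napkin math checks out (a quick sketch; the road mileage and the 1KB-per-meter figure are the assumptions above, not measurements):

  # Napkin math only; both inputs are assumptions from the comment above.
  MILES_OF_ROAD = 4_000_000       # rough US public road mileage
  METERS_PER_MILE = 1609.34
  BYTES_PER_METER = 1024          # the hypothetical 1 KB per meter

  total_bytes = MILES_OF_ROAD * METERS_PER_MILE * BYTES_PER_METER
  print(f"{total_bytes / 1e12:.1f} TB")   # ~6.6 TB, i.e. roughly the 7TB figure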
Currently that would probably cost ~$500 per car to implement based on retail pricing of 8TB SSDs. It would need to be updated constantly, too, with road closures, potholes, missing signage, construction. With an external GPS unit like a tomtom, they had radio receivers in the power cord that tuned to traffic frequencies, if available, and could route you around closures, construction, and the like, so you need a nationwide network to handle this. Cellphone won't cut it. Starlink might, but regardless, you need to add that radio and accoutrements to the BOM for each car.
and i'm not talking about the processing of the dataset that gets put onto the 8TB SSD in the car; i am talking about the processing of the data on the 8TB SSD on the car while at speed.
furthermore, i am fairly certain that it would take, on average, more than 1.6MB per mile to describe the road, road condition, hazards, etc. a shapefile of all roads in the US - that which gets one closer to knowing where the lanes are, how wide the shoulders are, etc is 616MB. and it's incomplete - i put in two roads near me with fairly unique names and neither are in the dataset. So your self driving car using these GIS datasets won't know those roads.
I had an idea to put an atomicpi in my car, with two cameras. it has a bosch 9-dof sensor on the board, coupled with the cameras you can map road surface perturbations, hazards, and the like, which i believe will be much more than 1KB per meter, especially as you need "base" conditions and updates and current conditions (reported by the cars in front of you, ideally). the csv GIS dataset looks like this:
>OBJECTID,ID,DIR,LENGTH,LINKID,COUNTRY,JURISCODE,JURISNAME,ROADNUM,ROADNAME,ADMIN,SURFACE,LANES,SPEEDLIM,CLASS,NHS,BORDER,Shape__Length
> 568143,964990,0,0.07,02_36250355,2,02_39,Ohio,S161,DUBLIN GRANVILLE RD,State,Paved,4,88,3,7,0,0.000759951397761616
and i ran, for example `awk -F, '/PACIFIC COAST/ {sum += $4} END {print sum}' NTAD*.csv` and it spat out 79.04, which i think is a bit shorter than reality. Looks like the dataset i pulled is only "major roads" as well - but that doesn't explain 79.04 as the sum of lengths of all rows with "PACIFIC COAST" in them. It does show the total length of interstate 10 is 3986.55, which is roughly double what the actual length is (2460mi), so perhaps i'm just not understanding this dataset.
Anyhow 600+ MB for just that sort of information (plus shapes) for only a really quite small subset of roads in the US.
anyhow my thoughts are scattered, this input box is too small, and i'm not really arguing. Maybe it is possible, but it will raise the price thousands of dollars per auto, you need infrastructure (starlink will work) to update the cars, and so on. I'm prepared to admit i am wrong, but your comment didn't move the needle for me.
also, just to be fun, which self driving car could manage this entire drive? https://www.youtube.com/watch?v=sNqFN7KeOYE
> i am talking about the processing of the data on the 8TB SSD on the car while at speed.
I'm not worried about that. The actual driving takes such powerful computers that even if there was a petabyte of total data, the amount the car would have to process as it moves would be a trickle in comparison to what it's already doing. Max 50KB per 10 milliseconds. And obviously the data would by sorted by location so there's very little extra processing required.
But you tell me, how many data points do you think you need per meter of road?
I really don't think you need millimeter-level surface perturbations all the way across. Mapping the precise edges of the road and lanes should only need dozens of data points, 4 bytes each. And then you can throw a few more dozen at points inside the lanes to flesh it out. You can throw a hundred data points at each pothole without breaking a sweat. Measuring the surface texture in various ways and how it responds to weather is only going to take a handful of bytes per square meter, in a way that repeats a lot and is easy to compress.
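To put rough numbers on that budget (every figure below is an illustrative assumption, not a measurement):

  # Per-meter budget under the assumptions sketched above.
  EDGE_POINTS = 36          # lane/road-edge points per meter
  INTERIOR_POINTS = 24      # extra points inside the lanes
  BYTES_PER_POINT = 4
  SURFACE_BYTES = 40        # surface texture / weather response, per meter of road
  HAZARD_BUDGET = 400       # amortized allowance for potholes, signage, oddities

  bytes_per_meter = (EDGE_POINTS + INTERIOR_POINTS) * BYTES_PER_POINT + SURFACE_BYTES + HAZARD_BUDGET
  print(bytes_per_meter, "bytes/meter")            # 680, inside the 1 KB budget

  # Read rate while driving: new map data needed per second at highway speed.
  speed_mps = 40                                   # ~90 mph
  print(speed_mps * bytes_per_meter / 1e3, "KB/s") # ~27 KB/s, a trickle next to sensor data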
> 568143,964990,0,0.07,02_36250355,2,02_39,Ohio,S161,DUBLIN GRANVILLE RD,State,Paved,4,88,3,7,0,0.000759951397761616
That's an extremely inefficient format. Unnecessary object ids, repeating metadata over and over, way too many decimal places, and all stored as text.
But even then, your database is so tiny compared to the size I suggested that I don't think we can extrapolate anything useful. Even if we 4x it or whatever to compensate for a lack of rural roads.
none of this has to do with vision or proprioception. It's knowing "every inch" of road. It's knowing how far i can leave the center of the lane if someone else crowds me or goes over the center divider, because the shoulder is soft through here because logging trucks have been exiting the forest onto the highway. It's knowing what part of I-605 floods - not the whole thing, some lanes, some places, and "flood" means 2+ inches of water on the road surface, hitting it at speed makes a tidal wave flying into other lanes. If someone hits that in front of you, you're blind for a couple of seconds minimum.

If we want to have semi trucks be "FSD" it needs to know, for the traffic and other conditions, how fast to go and what gear to be in to climb each hill, and then the hazards that are over the hill - that a trucker would know. Where's the gravel bed on more mountainous passes? Or more simply, what time of day neighborhoods are more likely to have people approaching or going through / out of intersections, blind or otherwise. How many "bytes" is that information, times every neighborhood?

If many cars brake at the same place, there's probably a reason, and that needs to be either in the dataset or updated somehow if conditions change. You ever used Waze and had a report of something on the road or a cop parked somewhere, and it's nowhere to be seen? And that's updated much more frequently than the radio-info on the GPS systems i referred to earlier. Some roads become impassable in the rain, some roads ice more readily.
If this was easy/simple/solved, waymo et al would be bragging about it, the tech in their cars. Waymo (or the other one) specifically, because they cover less than 0.1% of road surfaces in the US, in some of the most maintained and heavily traveled corridors in the world. So, if anyone from a robotaxi company happens by and knows roughly how much storage is needed for <0.1% of the road surfaces in the US, then we could actually start to have this dialog in a meaningful way. Also i am unsure how much coverage robotaxis actually have in their service area. A "grid system" of roads makes mapping and aggregate data "simple", for sure.
This reminded me a bit of the idea that somewhere in the US there's a database of every sms sent to or from US cellular phones. "it's just text; it'll compress well" - belies how much text there is, there.
for reference, the map in my lexus is ~8GB, for the US. And that's just "shapes" and POI and knowing how the addressing works on each road. It doesn't know what lane i'm in, it doesn't track curves in the road effectively (the icon leaves the road while i'm driving quite often), and overpasses and the like confuse all GPS systems i've ever used - like in Dallas, TX where it's 4 layers high and parallel roads stacked. furthermore, just the road data on google maps for the nearest metro area to my house is 20MB. i have a recollection it goes real quick into hundreds of MB if you need to download maps for the swaths of areas where there is no cellphone reception, like areas in western Nevada. given 20MB for my metro, that's 40GB of just road shapes and addresses for the US, which is much more than the 600MB incomplete GIS files i downloaded.
so we've moved from fencing 600MB "text" data; to the actual data needed by a GPS to give directions, 8000MB. Your claim is that a mere 1000x more data is enough to autonomously self-drive anywhere in the US, at any time of day or year, etc...
you know who actually has this data and would know how big it is? Tesla.
The part of the computer that knows how to drive is completely separate from the 7TB database of the exact shape and location of every lane and edge and defect.
> knowing how far i can leave the center of the lane if someone else crowds me or goes over the center divider
Experience, not in the database.
> knowing what part of I-605 floods
> Where's the gravel bed on more mountainous passes?
That goes in the database but it's less than one byte per meter.
> How many "bytes" is that information, times every neighborhood?
I don't know why you would want that data, you should be wary of blind traffic at all times, but that's easy math. There's less than a million neighborhoods and time based activity levels for each neighborhood would be about a hundred bytes. So: Less than 1 byte per meter and less than 100MB total.
> If this was easy/simple/solved, waymo et al would be bragging about it
This doesn't happen for two reasons. One they are collecting orders of magnitude more data than road info, two like I keep saying the collection is extremely difficult and I'm only defending the storage and use as being feasible.
> This reminded me a bit of the idea that somewhere in the US there's a database of every sms sent to or from US cellular phones. "it's just text; it'll compress well" - belies how much text there is, there.
Well we know how many meters of road there are. So it's basic multiplication.
I can tell you how many hard drives you need to store a trillion texts. It's five hard drives.
Google thinks the human race sends almost ten trillion text messages per year. So I guess you could store them all very easily? Why do you think it's not doable?
> Your claim is that a mere 1000x more data is enough to autonomously self-drive anywhere in the US, at any time of day or year, etc...
My claim is that 1000x is enough for utterly exhaustive road maps. Figuring out how to drive is another thing entirely.
an SMS isn't just "140 characters/bytes" or whatever (i honestly don't care what your definition of "SMS" is). Of course you could fit 140 characters * 1e12 onto 5 hard drives. Where are you going to put the 1PB (for 1e12, but your own cite says it's 1e13, so 10PB) of metadata, minimum? the most barebones amount of metadata you need to actually have actionable "intelligence" is 1KB per message (technically i was able to finagle it to ~1016 bytes.) And that's for every message, even an SMS that is the single character "K".
you need the metadata to derive any information from the SMS. "Lunch?" "yeah" "where?" "the place with the wheel" "okay see you in 25, bring Joel" This is what you propose to save. (quick math shows you went off something like ~32TB of sms data per 1e12 messages)
in the same way that you propose that the shapes of a road and its direction and distance "plus 1KB of metadata per meter" is enough to derive the ability to drive upon those roads.
It's pretty obvious that just using sensors is not going to get FSD. Maybe in the next 20 years we will develop sensor technology (and swarm networking and whatever else) that will allow us to dispense with the "7TB" of metadata. My argument is that: we need much more "metadata" than 1KB per meter to "know the road baseline, current conditions, hazards", much in the same way a text message is more than 140 bytes. Driving with "only sensors" and rough GPS has killed people. It does not matter if human drivers have more death per million miles or whatever, because i am strictly talking about FSD, what other people are calling level 5 (i'd even concede level 4; although i wouldn't be able to use a level 4 car where i live for roughly 1/4th the year - and other areas would have more than 1/4th the year.)
enjoy your night!
note: the metadata for a meter of road could be:
{
"road_segment": {
"segment_id": 3500000,
"meter_position": 128534,
"coordinates": {
"latitude": 32.5385,
"longitude": -92.9222
},
"timestamp_added": "2024-06-05T23:57:34Z",
"last_updated": {
"timestamp": "2025-06-05T23:57:36Z",
"delta_seconds": 2.0
},
"hash_signature": "89a25b6f3cd829e671bb9d42e8fae2c6",
"road_type": "highway",
"lane_data": {
"lane_count": 4,
"lane_width_m": 3.5,
"shoulder_width_m": 2.0,
"divider_type": "concrete barrier",
"markings": ["solid white", "dashed white", "double yellow"]
},
"speed_limit_mph": 65,
"road_material": "asphalt",
"incline_percent": 1.2,
"curve_radius_m": 150,
"surface_condition": "dry",
"weather_conditions": {
"timestamp": "2025-06-05T23:57:36Z",
"temperature_c": 25,
"precipitation_mm": 0,
"visibility_m": 5000,
"wind_speed_mps": 2.5
},
"baseline_hazards": [
{
"type": "grade_crossing",
"description": "Railroad crossing with signal lights",
"location": { "latitude": 32.5386, "longitude": -92.9224 }
},
{
"type": "roadwork",
"description": "Permanent lane narrowing from past construction",
"location": { "latitude": 32.5390, "longitude": -92.9230 }
}
],
"current_hazards": [
{
"type": "construction",
"description": "Active roadwork zone with lane closure",
"severity": "high",
"location": { "latitude": 32.5387, "longitude": -92.9226 }
},
{
"type": "downed_power_line",
"description": "Reported electrical hazard near shoulder",
"severity": "critical",
"location": { "latitude": 32.5395, "longitude": -92.9232 }
}
]
}
}
Obviously you can reduce this, but there's a minimum viable amount of metadata, that's my claim, and it's more than 1KB per meter. That snippet is ~1800 bytes as-is. The "current conditions" would not be part of the dataset on the "7TB" disk; that would need to be relayed or otherwise ingested by the car as it drives - the way my 2012 lexus tells me that i'm about to drive into a wild storm, but that's all the extra information i get out of its infotainment system. Waze is a better example of the sort of realtime updates i expect a FSD to need, although i expect many times more points of information than Waze has, maybe dozens, maybe hundreds more. And each "trick" you do to reduce the size of the metadata necessarily implies more CPU needed to parse and process it.

How did you reach that number?
I figure the most important metadata is source and destination phone numbers and a timestamp, and I guess what cell tower each phone was on. A phone number needs 8 bytes, and timestamp and cell tower can be 4 bytes, so that's 28 bytes of important metadata.
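A minimal fixed-width layout shows where 28 bytes comes from (field widths and numeric tower IDs are my assumptions above, not any carrier's actual schema):

  import struct

  # sender number, receiver number (8 bytes each), unix timestamp,
  # then sender and receiver cell tower ids (4 bytes each)
  RECORD = struct.Struct(">QQIII")          # 8 + 8 + 4 + 4 + 4 = 28 bytes

  packed = RECORD.pack(13185551234, 12255555678, 1_749_160_654, 5321, 6723)
  print(RECORD.size, "bytes per message")   # 28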
> (quick math shows you went off something like ~32TB of sms data per 1e12 messages)
I was going for a full 140TB of data. 20-30TB hard drives are available.
I did consider metadata, but I figured you could probably put that in the savings from non-full-length messages.
> Where are you going to put the 1PB (for 1e12, but your own cite says it's 1e13, so 10PB) of metadata, minimum?
Well for just the US it would be closer to 1PB. But, uh, I'd store it in a single server rack? (ideally with backups somewhere) As of backblaze's last storage pod post, almost three years ago, it cost them $20k per petabyte. That's absolutely trivial on the scale of telecoms or governments or whatever.
> My argument is that: we need much more "metadata" than 1KB per meter to "know the road baseline, current conditions, hazards", much in the same way a text message is more than 140 bytes.
I mean, I agree with you about needing extra information.
But that's why the number I gave is 10000x larger than your CSV. My number is supposed to be big enough to include those things!
> note: the metadata for a meter of road could be:
I really appreciate the effort you put into this. I have two main things to say.
A) That's less than a kilobyte of information. Most of the bytes in the JSON are key names, and even without a schema for good compression, you can replace key names with 2-byte identifier numbers. And things like "critical" and "Active roadwork zone with lane closure" should also be 1-byte or 2-byte indexes into a table. And all the numbers in there could be stored as 4 byte values. Apply all that and it goes down below 300 bytes. If you had a special schema for this, it would be even lower by a significant amount.
B) Most of those values would not need to be repeated per meter. Add one byte to each hazard to say how long it lasts, 0-255 meters, instant 99% savings on storing hazard data.
> each "trick" you do to reduce the size of the metadata necessarily implies more CPU needed to parse and process it.
CPUs are measured in billions of cycles per second. They can handle some lookup tables and basic level compression easily. Hell, these keys are just going to feed into a lookup table anyway, using integers makes it faster. And not repeating unchanged sections makes it a lot faster.
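As a sketch of what I mean (the lookup tables, field widths, and layout are illustrative assumptions, not a proposed format):

  import struct

  # Key names are implied by position; enumerated strings become table indexes;
  # numbers are fixed-width integers (lat/lon as 1e-7 degrees, widths in cm).
  ROAD_TYPES = ["highway", "arterial", "residential", "gravel"]
  SURFACES   = ["asphalt", "concrete", "gravel", "dirt"]
  HAZARDS    = ["grade_crossing", "roadwork", "construction", "downed_power_line"]

  FIXED  = struct.Struct(">IIiiBBBHHBhH")   # ids, lat, lon, road type, surface, lanes,
                                            # lane/shoulder width, speed, incline, curve radius
  HAZARD = struct.Struct(">BBiiB")          # type, severity, lat/lon offset, length in meters

  fixed = FIXED.pack(3_500_000, 128534, 325385000, -929222000,
                     ROAD_TYPES.index("highway"), SURFACES.index("asphalt"),
                     4, 350, 200, 65, 12, 150)
  hazards = b"".join(HAZARD.pack(HAZARDS.index(t), sev, dlat, dlon, length)
                     for t, sev, dlat, dlon, length in
                     [("grade_crossing", 1, 100, -200, 30),
                      ("construction", 3, 200, -400, 120)])
  record = fixed + bytes([len(hazards) // HAZARD.size]) + hazards
  print(len(record), "bytes")               # 51, well under the original ~1800-byte JSON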
a phone number is not a 64 bit integer, like, just off the top of my head, a phone number can start with "0"
{
"message_id": "b72f9a6c-34d2-4ef9-89a5-623c1d7b890a",
"timestamp_sent": "2025-06-05T23:57:34Z",
"timestamp_received": "2025-06-05T23:57:35Z",
"sender": {
"phone_number": "+1-318-555-1234",
"carrier": "AT&T",
"device_id": "IMEI-354812345678901",
"location": {
"cell_tower_id": "LA5321",
"latitude": 32.5385,
"longitude": -92.9222
}
},
"receiver": {
"phone_number": "+1-225-555-5678",
"carrier": "Verizon",
"device_id": "IMEI-869712345678902",
"location": {
"cell_tower_id": "LA6723",
"latitude": 30.4515,
"longitude": -91.1871
}
},
"network": {
"protocol": "GSM",
"message_type": "SMS",
"message_size": "160 bytes",
"sms_center": "+1-800-555-9876",
"routing_path": [
"Cell Tower LA5321",
"Switching Center AT&T Baton Rouge",
"SMS Center",
"Switching Center Verizon New Orleans",
"Cell Tower LA6723"
]
},
"status": "Delivered"
}
and again - if you use clever tricks to reduce this, you increase the overhead to actually use the data. Get a celltower snooper on your phone and watch the data it shows - that's the metadata for your phone. An SMS dragnet would need that for both phones, plus the message itself.
It's not an integer. But you can store it inside 64 bits. You can split it into country code and then number, or you can use 60 bits to store 18 digits and then use the top 4 bits to say how many leading 0s to keep/remove. Or other things. A 64 bit integer is enough bits to store variable length numbers up to 19 digits while remembering how many leading zeros they have.
If you want really simple and extremely fast to decode you can use BCD to store up to 16 digits and pad it with F nibbles.
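A quick sketch of the BCD variant (assuming at most 16 digits, leading zeros preserved, F-nibbles as padding):

  def phone_to_bcd64(number: str) -> int:
      # One nibble per digit, remaining nibbles padded with 0xF.
      digits = [c for c in number if c.isdigit()]
      assert len(digits) <= 16
      value = 0
      for d in digits:
          value = (value << 4) | int(d)
      for _ in range(16 - len(digits)):
          value = (value << 4) | 0xF
      return value

  def bcd64_to_phone(value: int) -> str:
      nibbles = [(value >> shift) & 0xF for shift in range(60, -4, -4)]
      return "".join(str(n) for n in nibbles if n != 0xF)

  encoded = phone_to_bcd64("+1-318-555-1234")
  print(hex(encoded))               # 0x13185551234fffff -- fits in 64 bits
  print(bcd64_to_phone(encoded))    # 13185551234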
> JSON
Most of this is unimportant. Routing path, really? And we don't need to store the location of a cell tower ten million times, we can have a central listing of cell towers.
I don't think we really need both phone number and IMEI but fine let's add it. Two IMEI means another 16 bytes. And two timestamps sure.
Phone number, IMEI, timestamp, cell tower ID, all times two. That's still well under 100 bytes if we put even the slightest effort into using a binary format.
> and again - if you use clever tricks to reduce this, you increase the overhead to actually use the data.
No no no. Most of the things I would do are faster than JSON.
even if that 80% was 99%, that last 1% will be the cause of some mishaps.
my subaru is within a few percent of 80% FSD if everything is turned on. I still technically have to hold the wheel, but with that requirement met, the steering only shuts off about 20% of the time.
I’ve heard rumors of that happening.
That amount of time is also nowhere near enough for a human to switch tasks like that. I would say it should be at least a minute or two, and even that is pushing it with how people are bound to use the system (spacing out while watching a TV show, etc).
Let's say you were parked, taking a nap in the driver's seat - when you woke up, would you immediately start driving or would you wait a minute to get your bearings? How about at a highway rest stop? It feels like trying to push back on that by asserting "the driver is supposed to always be alert and ready to be attentive!" is another bout of fanciful fiction like L2. Being outright asleep would seem to be derelict, but I can imagine many fuzzy mental middle grounds, especially in a droning car.
(Somewhat related, if I haven't driven in a month I would say it takes me tens of minutes (maybe 20?) to get back up to the usual groove. Obviously the only way to do it is to do it, but I drive much differently until then)
It would take me a minute to wake up. Napping is one of the few things you can't do with level 3.
> How about at a highway rest stop?
Wait, doesn't this go against your argument?
Someone at a rest stop can go from "not driving at all" to "full speed down the highway and merging" in a handful of seconds. And it works fine.
> It feels like trying to push back on that by asserting "the driver is supposed to always be alert and ready to be attentive!" is another bout of fanciful fiction like L2.
Depends on what "alert" means. I can be in a general readiness state, with no particular requirements on my focus, for hours on end.
> Being outright asleep would seem to be derelict, but I can imagine many fuzzy mental middle grounds, especially in a droning car.
"The boring droning car made me zone out" is something that can happen while you're driving. A TV show could actually reduce the risk of falling half asleep.
> if I haven't driven in a month I would say it takes me tens of minutes (maybe 20?) to get back up to the usual groove. Obviously the only way to do it is to do it, but I drive much differently until then
How much differently? Also an autonomous car asking for takeover is probably driving super cautiously too.
It's really not inhuman levels of constant vigilance, not anymore than actually driving the car regularly. I just don't have to actively be keeping the pedals just right to maintain the following distance myself and I don't have to constantly fight the wheel.
If we're going to allow companies to write code in which human safety is in danger if that code misbehaves, that code should be auditable by a 3rd party, and those audits should regularly happen.
Code which affects the safety of humans should be reviewed with AT LEAST as much rigor as code for slot machines.
^ Reading anecdotes about accident scenes, someone, if not everyone, is always lying about what happened.
In one case there was a claim the driver was in the backseat. This got widely published in all media outlets. And turned out to be complete nonsense, it wasn't even in autopilot.
But of course it could be true; I would wait for the data.
FSD's failures are either far more boring (imagining a stop sign) or put the user in danger (driving onto train tracks).
If that’s the case they should show it.
With how popular Musk is these days I can 100% see where Tesla is coming from here.
Like Tesla?
If your airbags don't deploy, Tesla doesn't consider it an accident for the purposes of reporting (modern safety systems don't blindly deploy airbags, they evaluate g-forces, speeds, angles of impact, etc., so you can hit something at 25mph and the vehicle decides your seatbelts are sufficient. Tesla decides "that's not a reportable collision"). Know when else your airbags might not deploy? Very serious accidents, when hardware or controllers are damaged.
Speaking of which, fatalities are not included in that report. "It was a collision where someone died, but doesn't merit inclusion in a safety report" is a weird position to take.
The actual article is about how Tesla claims that providing this data would be a competitive disadvantage that rivals could use.
Would we accept Pfizer releasing a new pill without evidence?
“It’s better at preventing heart attacks than anything else. But we can’t show you data, that would hurt our competitive advantage.”
https://www.reuters.com/legal/government/wait-what-fda-wants...
> The records must be reviewed to redact “confidential business and trade secret information of Pfizer or BioNTech and personal privacy information of patients who participated in clinical trials,” wrote DOJ lawyers in a joint status report, filed Monday.
...
> But we can’t show you data
Tesla shows the data to the NHTSA whose experts look at it and can force recalls so your analogy and argument make no sense.
Literally felt like the difference between flying a helicopter (actively trying to kill u lol) and an airplane.
I honestly did not get the hype until this specific HW4/v12 combination which didn't exist until last summer or so. It's the first time FSD felt like a safety feature for just $99 a month.
That's exactly the problem. It's great right until it isn't, at which point it's likely to make a decision that will kill you or someone else if you aren't lucky.
(most) Humans are REALLY good at paying attention to something that will actively kill them at any moment - you don't see a lot of people running a chainsaw while sending a text to their friend about drinks later in the day.
Humans are REALLY bad at stopping something they trust (IMO foolishly), with less than a half a second of notice, from killing them or someone else. It is completely natural to get lulled into a sense of security when something mostly works exactly as you'd expect.
Meanwhile Tesla wants to act as if it's the driver's fault anytime there's a crash without acknowledging they are actively perpetuating the myth of: "this thing drives itself". It's literally called "Full Self Driving" and Tesla expects the average person to look at that name and think: you need to be vigilant anytime you turn this on because it is a beta feature that may drive into oncoming traffic at any moment.
So for example, if I look at the screen, my phone, or start day-dreaming for even a few seconds, it'll beep and quickly strike me out from using FSD. "FSD (supervised)" is how it shows up in the UI too at least giving some expectation of it not being autonomous.
So in practice, I'm picturing the right driving inputs and watching what it's doing.
I haven't tested it but I assume the same is true if you put tape over the camera
An unsubstantiated claim given that there are many, many safe human drivers who have neither LIDAR sensors nor hyper-accurate pre-mapping at their disposal.
Entertaining a No True Scotsmen is a bit of a silly exercise anyway, but this semantic game is extra silly.
The gap between humans and computers is enormous, not some weird gotcha tactic.
No True Scotsman was obviously in reference to GC, not you.
> No True Scotsman was obviously in reference to GC, not you.
I'm unsure what they said that would qualify. Was it adding "true unsupervised"? I think that's a fair qualification, because most of the point of self driving is lost if I can't look away from the road.
Note Waymo announced a partnership with Toyota, pretty hand wavy, but at least it seems there’s hope the technology may come to regular car owners at some point.
I get that the vast majority aren't car enthusiasts and that's ok, but there is actual pleasure in driving.
And even me - when I want a rest from it - lane assist and cruise control are MORE than enough. I can even add these two to old classics without much bother.
Going all in on autonomy doesn't interest me at all.
Polarised sunglasses. Works on my Subaru. Works on my buddy’s Tesla.
Tesla can't help but know that the supposed non-driver getting constantly nagged to be vigilant as if actively driving destroys most of the value proposition of FSD. Vision Attention Monitoring has quite a bit of potential to be very useful … precisely in situations in which vehicles are not driving themselves.
This should be weighed against the fallibility of human drivers, surely? Our point of comparison is not "perfect", it's "human." As such, with millions of miles driven, FSD appears to be many times safer than humans: https://www.notebookcheck.net/Tesla-Autopilot-and-FSD-are-no...
Not perfect, and there will be crashes, but much better, and I think that's the yard stick we should be using, because no system will ever be perfect.
Also to note, FSD disengagements are probably common enough to still be on the left-hand side of the "Valley of Degraded Supervision"[2], where mistakes are common enough that users stay vigilant. As mi/de increases to 5,000 or 50,000, the quality of the supervision could degrade to the point that the supervised system is less safe than an unaided driver.
[1] https://teslafsdtracker.com/Main
[2] https://users.ece.cmu.edu/~koopman/pubs/koopman19_TestingSaf...
That said, something being excessively baked does not mean it is good.
The point is - it didn't deliver, and still doesn't. It's securities fraud out in the open, but clearly from a guy who is above the threshold of applicable law.
Are you hearing yourself?
How can a "safety feature" be a subscription? Next they'll charge you a microtransaction every time you fasten your seatbelt?
There is a bunch of data that is "missing", both from Tesla and other manufacturers.
Tesla declines/redacts data far more than other manufacturers, to the point where in many cases, the majority of data is redacted for a given incident.
Data about a hitting a pedestrian or having an accident isn't proprietary tech. They're not asking for source code, but for data that should arguably be made available for people to see in the interest of transparency and this information is sought consistently from other car makers.
Tesla is of course sticking out like a sore thumb: because they have put the most investment into EVs and "autopilot" features, the data might show that they stick out.
Trump may not care about reelection but congressmen do.
Countries take time to decide how to implement the UN regulations so in countries such as Australia, there is (from a quick check) still no regulation requiring light passenger road vehicles to record any telemetry. The US already had a form of regulation requiring limited telemetry about a vehicle for -20 to +5 seconds around a crash event to be recorded.[2] This US regulation also did not require recording of fields relevant to ADS/ADAS.[2]
What this article describes is access to telemetry data that manufacturers such as Tesla are voluntarily recording within vehicles that may include some idea of ADS/ADAS operation during a crash event. For example, Tesla may be recording the human throttle input separate from recording of the ADS/ADAS throttle input, showing whether it was the driver or vehicle who caused the car to accelerate dangerously before a crash. But the UN regulation and older US regulation didn't expect Tesla to record more than just a single throttle position field, ignoring whether ADS/ADAS or the driver directed the throttle position.
[1] UN Regulation No. 160 - Event Data Recorder (EDR) - https://unece.org/sites/default/files/2023-10/R160E.pdf
[2] CFR Title 49 Subtitle B Chapter V Part 563 - https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...
[3] https://unece.org/fileadmin/DAM/trans/doc/2019/wp29grva/GRVA...
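As a purely hypothetical illustration of that throttle-recording difference (field names invented here, not taken from UN R160 or Part 563):

  # Hypothetical EDR samples; field names are made up for illustration only.
  minimal_edr_sample = {
      "t_minus_s": -2.0,
      "accelerator_pct": 87,        # one blended field: can't tell driver from ADAS
  }

  richer_edr_sample = {
      "t_minus_s": -2.0,
      "driver_accelerator_pct": 0,  # what the human pedal input was
      "adas_accelerator_pct": 87,   # what the automation commanded
      "adas_engaged": True,
  }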
VW, BYD, or Toyota are judged on how many cars they sell. What is Tesla judged on?
But then I remember that investors just want to dump their shares on the next sucker; they don't really care about the underlying business case.
Like who cares if software version numbers are released on crash reports or not?
Me giving them capital helps them keep more people from ending up writing boilerplate code for fintech startups and other fairly fruitless endeavors.
So in a way it’s great that they’ve been convinced to buy zero-emissions vehicles by giving them a reactionary edgelord option that’s just like every other EV. (Except for the suicide FSD mode which is more like a Darwin awards filter.)
The only part I don't know why people would trust is the FSD/Autopilot, which I wouldn't recommend people buy. But as an EV it's an excellent car.
However, if you look at the latest press release by JDPower (https://www.jdpower.com/business/press-releases/2025-us-vehi...) you'll see that Tesla now ranks right near the average. Significantly better than in previous years and ahead of other common car makers.
The dependability study could do with a segment purely on EV's, given that EV's as a whole are improving by roughly 33 PP100 per year.
You are citing a study that specifically excludes Tesla. And even then you’re bragging about Tesla being … average?
If they really wanted to exclude them they wouldn’t have included them at all.
I cited the study, because it was the same study commented above where the most recent study wasn’t mentioned.
I’m not bragging at all, but they’re just not as bad a car as some people are giving them a rap for. Heck, they had the same number of problems reported as Ford; does that mean Fords are bad cars? No.
If you think about issues this way, I think it's fully unproductive. No one was bragging. The fact you see this as either unthinking bragging or unthinking criticism makes it very hard to talk about what matters: the facts of the matter.
> Tesla's are excellent cars and are still ranked at the top compared to the rest of the market.
The actual facts show otherwise, of course, so now he’s trying the squid ink approach of emitting a lot of verbiage trying to say that average and excellent mean the same thing, but that’s just a distraction from the fact that their first statement was based on brand loyalty, not data.
If I said "BMWs are excellent cars" would I be bragging?
A study has found that perhaps I was wrong in the eyes of a dependability study, but I still think they're an excellent car. I still think that if a Ford owner said that they think their Ford Ranger is an excellent car people wouldn't disagree because a dependability study put them as average.
> you'll see that Tesla now ranks right near the average. Significantly better than in previous years and ahead of other common car makers.
That was my mistake
Which is an absolutely asinine axis of comparison seeing as we live in a reality where different brands take different product lines seriously.
Interestingly, MotorEasy found that gas and hybrids were the most reliable. Diesel were the least reliable.
I'll never buy another one as long as Elon Musk is associated with the company, but I'd be crazy to sell it now because it's paid for and it's a great car.
Some of the comments I hear almost universally from prior Model 3 owners when they switch to an Ioniq 5 is how much nicer the ride quality is and how nice it is to have buttons on the dash again.
There have been comments around the ride harshness of the model 3. In the latest model it has been specifically fixed.
I think any self driving tech at the moment is just terrible and shouldn’t be used, for the record
I think Elon found a glitch in tech-bro reasoning: you promise a product, people believe you and buy it, then they realise that it is not what was advertised, but it is OK, because the new version has it now, so you can't complain; it is your fault in the eyes of your peers, you should have done better research or whatever.
Your FSD has almost got you killed? Don't worry, this is your fault, and anyway the bug was already fixed in the update. Probably. This time for sure.
The new 2024 version specifically changed the suspension setup, and you can mechanically verify the difference between the models because there were mechanical changes.
You need to drop the FSD rhetoric; there are an enormous number of Tesla drivers who couldn't care less about FSD and simply enjoy the car as an EV.
From the consumer POV who doesn't own shares in the brand, it makes no sense at all.
> enormous number of Tesla drivers who couldn't care less about FSD and simply enjoy the car as an EV
Are there an enormous amount of disappointed Tesla drivers as well who stuck with it because of a sunk cost fallacy?
Should, for example, CyberTruck owners who spend 150K on a bullet-proof car be happy that in the new 2026 version the panels finally will stop falling off?
Should the few million people who purchased Tesla Model 3 feel better that the new version finally drives like a car?
Either way, in prior comments you couldn't come to terms with them possibly fixing something like that in the first place.
Tesla is a low-trust company. Too much grifting
Is this the new "how would you feel if you didn't eat breakfast yesterday?"?
Successive versions always make slight changes like that over time; that's how car companies work. Old owners don't need to be happy about it, or think about it at all.
Combine that with FSD rubbish bait and switch, nutjob CEO, and the cratering of any brand goodwill ... I won't touch those cars if you paid me (seriously).
And honestly, I recognise the innate progression of the technology ... but others have caught up and surpassed it to the point that I don't have to worry about buying the lesser car anymore when I go elsewhere.
On the whole seems a little... over enthusiastic...
It’s easy to pick the worst model of the lot and use it to disregard the entire brand, but I can’t expect any more critical thought from some people online.
Thank you for contributing to lower the overall amount of critical thinking on the internet.
whilst surely top ranked, they apparently share the top with other makers (https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...) (Large Family Cars category, last two years, ordered by occupant protection).
https://www.carscoops.com/2024/11/tesla-model-3-comes-bottom...
The Cybertruck is a complete disaster of a vehicle with so many issues (eg [1]) that the only reason people buy them is to make a political statement from a group that 3+ years ago wouldn't have been caught buying an EV.
Teslas are drivable iPads. Many people (myself included) not only hate this (because it's hard to use without looking) but it's also lazy design. By this I mean, it allows manufacturers to say "we'll fix it with a software update" (and then probably never get around to it) whereas haptic controls require more thought and effort to be put into the UI/UX during manufacturing.
For other Teslas, there have been a host of other issues, some small, some not. For example, the seats were unreliable if adjusted too often so Tesla made an OTA update to limit how much you can adjust the seats to avoid failure [2].
The only thing propping up Tesla sales now are trade restrictions on BYD.
[1]: https://apnews.com/article/cybertruck-recall-tesla-elon-musk...
[2]: https://driveteslacanada.ca/news/tesla-now-monitors-how-ofte...
Teslas don’t even have a Speedo, let alone a HUD. It’s just an iPad with proprietary software which can’t sync with your phone. Terrible
Bidirectional charging would be great, everything else is already far up there.
For me this is a clear win for Tesla. Anything under 5s is crazy fast to me anyway, so the other things I mention are worth a lot more.
Subjectively, the boot of the Ioniq 5 on the normal models is huge, and I say that as someone for whom boot size was the primary requirement. Moving the seats forward creates epic amounts of space at the cost of reduced leg room for rear passengers.
And for when I want to carry weirder stuff I fold the backseats. I've carried entire Ikea wardrobes like this.
And the point remains, Tesla leans harder into the deception and lying.
Companies used to play these tricks in the 90s. I am NOT happy that Tesla is trying to bring them back.
Other times, I have something that should fit, but can't readily fit through the oddly shaped rear hatch and it becomes a problem.
I'd agree with you with regard to luggage. That's a scenario that really hits the strength of the Y setup, especially with the relatively large frunk. I often end up with a few backpack sized items up there on longer trips.
That’s great but you’re missing the fact that the Ioniq has a HUD. What does the Tesla have? Everything you’ve mentioned is irrelevant. The Ioniq has CarPlay so you don’t touch the software.
Do you mean I could get something significantly better if I choose not to get the things which are important to me? Because that would be worse for me, not better. I have never cared about the badge.
Out of interest, if I reduced the trunk space a little, which similarly priced EVs could you recommend with more range and better performance than the Model Y AWD Long Range?
My Model 3 Performance is rated for I think 300 miles. In the real world, it can be as low as 230 miles if it's 30F outside and I'm going 75 mph. If it's warm, I'm still looking at ~270 miles at 75 mph. The colder it gets, the lower your range because cold air is denser (increasing drag) and higher usage of the heater drains the battery.
And the chinese models are just much better.
Remember what Musk said many years ago, something along the lines of that he wants to get the global EV movement started, and that for this to happen he'd gladly let anyone use his patents without retaliating?
Now he doesn't even want data which might save lives to get out into the public.
> June 12, 2014
> Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
> Tesla Motors was created to accelerate the advent of sustainable transport. If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to that goal.
https://web.archive.org/web/20160722033909/https://www.tesla...
Perhaps Musk's persona has kind of killed that though. Or at least he causes one to weigh the status aspect of the car against the politics they increasingly represent.
[1] The thing I've always disliked most about Tesla actually — not a car "for the people" — way too rarefied, elite.
Of course, that's the ideal situation. Tesla in 2025 is very different from what they were talking about in 2014.
Meanwhile their competitors are moving downmarket and releasing cheaper cars
But what do I know, I assume their self driving AI hype is what drives their hugely inflated stock price, so it has made a lot of people very rich, which is a goal in itself. It's hard to point at the richest man in the world and say he made strategic errors.
It should be done carefully, but it should be done.
More than one company has been imploded by a leader who's been successful in the past and no longer has anyone to tell them "No."
Honestly, the best thing for Tesla would be to evict Musk as a leader, install someone who can focus on excellent delivery (like SpaceX), and create a separate R&D org for Musk to lead.
He already has Neuralink - he should put his efforts into that; perhaps as a test subject.
You know, I’ve thought about this too. What makes us think he hasn’t done this already? He could have an org structure where someone else is in charge of everything and still be this “veto guy”.
Personally, I don’t think he’s very excited about electric cars anymore. Tesla has mostly achieved what it set out to do. Electric cars are undeniably mainstream now. His next passion is possibly Optimus (which would also help with Tesla manufacturing and Mars settlement) and AI (same - would help with everything, make Optimus smarter). Maybe the only thing he might still be excited about, related to cars, is the self-driving taxi service. That could become a highly profitable business with a massive entry barrier for anyone that wants to compete with them. I believe in this thesis even more after the success of Starlink.
As for competition: Waymo has been too cautious and slow in its rollout, to a fault. Much like Google’s AI policy before ChatGPT. Tesla can still beat them to the punch. Being a fully vertically integrated car company, they can churn out robo-taxis faster than anyone else.
Only if they're actually better, because Waymo is currently 5.5-6.5 years* ahead of where Tesla wants to be with this month's launch.
Also, BYD has their own one; don't rule them out as a viable competitor for anything Tesla does: https://cleantechnica.com/2025/02/12/byd-gods-eye-more-advan...
* depending on how the safety drivers part goes
Questionable.
> Tesla can still beat them to the punch.
Tesla has released nothing but a kind of nice driver assist.
The claims about robo-taxis are literally just claims. I'll believe it when I see it.
I agree with this. I'd also think that Tesla's board has got to be concerned about his generally erratic behavior. I know that CEOs and high-profile engineers can be pretty erratic ("DEVELOPERS! DEVELOPERS! DEVELOPERS!") but the drug use and constant tabloid exposure can't be worth whatever actual talent he's bringing to the table anymore...right?
I suspect it's too late for that.
Musk, like Jobs before him, has a reality distortion bubble; this is how the Tesla P/E ratio is now… 189.49? Huh, it went up since I last checked.
Anyway, the point is that that number would be 30 even in an aggressive growth scenario (which no longer seems plausible given their shinies are now being done better by others), and BMW's P/E is 7.41.
If the Tesla stock price fell to realistic (i.e. not Musk-boosted) levels, that's a reduction by a factor of 189.49/7.41 ≈ 25.6, which would put the share price at about 13 USD.
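Spelled out, with the current share price (roughly $330) being my own assumption and only the two P/E figures coming from above:

    # What the share price would be at BMW's earnings multiple.
    # The ~$330 current price is an assumption; the P/E figures are from the comment above.
    tesla_pe, bmw_pe = 189.49, 7.41
    assumed_price_usd = 330.0

    compression = tesla_pe / bmw_pe                    # ~25.6x
    implied_price = assumed_price_usd / compression

    print(f"compression factor: {compression:.1f}x")   # 25.6x
    print(f"implied price: ${implied_price:.0f}")      # ~$13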
I've heard Musk has a lot of loans with Tesla stock as collateral, where margin calls will trigger sales if the price goes under about $240.
I have no idea what happens when you mix that combination of margin call, price shock, corporate debt, etc.
Pump, pump, pump. BS announcements and promises. Whatever shit he has to spew to keep the stock up.
Great for Tesla as a company. Terrible for its shareholders. It's not an exaggeration to say that Musk's value add at Tesla--today--isn't building cars, but hyping the stock. (That wasn't always true. And I wouldn't say the same about SpaceX or Neuralink.)
It doesn't always work out. Sometimes another technology or a competitor gets over that hump first, and the other (LaserDisc, Betamax) never gets the volume it takes to become an affordable commodity. And it doesn't necessarily have anything to do with which one was better. But that's the path to selling a new tech to the masses: sell with a high price tag to the wealthy first.
This seems a little crazy. They started with the fastest one, but it was still much cheaper than equivalents, and Model 3s and Model Ys have been selling like hotcakes. These are cars for the people.
This makes them pretty affordable for people on average income, especially given the lack of fuel expenses and road tax.
https://www.cnet.com/home/electric-vehicles/tesla-model-3-ch...
The average selling price for a new car in the U.S. was around $47K; both the Model 3 and some Model Y variants come in below that.
>> Now that the Tesla Model 3 is eligible for the full $7,500 [US federal] electric vehicle tax credit
>> The Model 3 is eligible for the full $7,500 [California tax credit], bringing the total amount of federal and state tax credits to $15,000.
So the cheapest Model 3 (as of 2023) was $14,000 more expensive than the cheapest Camry, but offset by tax policy.
Tesla's biggest achievement on the pricing front was creating a viable scaling pathway.
Although it's debatable that BYD wouldn't have done it absent Tesla, because of Chinese market incentives.
Climate change would have still been discussed. The Paris Agreement would have still happened. China would have looked at its lack of long term oil reserves and pushed to shift away from non-essential consumption.
There are too many incentives to develop a mass market model, so if not Tesla then someone else.
For people who use leases to get a new car (average lease is 36 months) they’d be doing more harm to the environment, but for people who hold onto their cars longer, they’d be reducing CO2e.
Those are just rough generalizations, and of course it depends on driving distance, grid emissions, etc. For example, if you get your electricity primarily from coal, the break even is closer to 12 years. But as others have said, the EV market tends towards the type of people who don’t hold on to cars very long.
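If anyone wants to play with that break-even arithmetic, here's the shape of it. Every number below (manufacturing-emissions premium, grid intensity, per-mile figures) is a rough assumption of mine, and the answer swings a lot with the inputs:

    # Years until an EV's extra manufacturing emissions are paid back by lower
    # per-mile emissions. All inputs are rough assumptions, not sourced figures.
    def ev_breakeven_years(miles_per_year=12000.0,
                           manufacturing_debt_kg=10000.0,  # assumed extra CO2e to build the EV
                           grid_kg_per_kwh=0.45,           # ~0.9-1.0 for a coal-heavy grid
                           ev_kwh_per_mile=0.30,
                           ice_kg_per_mile=0.38):
        ev_kg_per_mile = ev_kwh_per_mile * grid_kg_per_kwh
        yearly_savings = (ice_kg_per_mile - ev_kg_per_mile) * miles_per_year
        return manufacturing_debt_kg / yearly_savings

    print(round(ev_breakeven_years(), 1))                      # ~3.4 years with these guesses
    print(round(ev_breakeven_years(grid_kg_per_kwh=0.95), 1))  # ~8.8 years; pessimistic enough inputs stretch it past a decade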
>expensive to repair would be the easy environmental improvement that would also save money.
This line of thinking seems to miss the financial reality of the vast majority of Americans. Most people aren’t choosing between an $1800 repair vs a $50k new EV for environmental reasons, it’s because they can only afford one of those options.
And the choice isn't "smaller ICE car" and "EV tank". There are many EV hatchbacks, sedans, and compact cars available, arguably more models than ICE vehicles in North America. Most ICE drivers are the ones who buy tanks anyway.
Yeah this is what bothers me about all the EV haters.
"But but what about the tire dust"
"But but what about the battery fires"
blah blah. It's like a pathological hatred of anything anyone else might do that you think maybe makes them "better" than you in some way.
It works pretty well, although there is some maintenance cost when bits pack up. It's quite easy to find similar ones on Facebook Marketplace.
An angle grinder wins against any lock. Maybe your bike isn't the easiest one to steal thanks to your very expensive lock, but in certain areas (NYC being a big one) that basically doesn't matter. And the insurance claims from those bike locks still suck, since you have to deal with a lost bike and an insurance claim every time that happens...
Note how infrequently iPhones are stolen since Apple got serious about preventing it.
That's a lot of flaunters.
Having a large lineup is good for customer satisfaction and for attracting customers on the fence, but it definitely hurts you on this one very specific, mostly meaningless metric.
Regardless, I do think that even with a smaller line-up of cars, it is still an impressive metric that it is the most sold car in the world. It means that, despite the smaller line-up, Tesla hit the mark on what people are looking for.
It is still a very in-demand car for a reason, and quite honestly I almost don't believe the metric.
Whilst I agree a larger line-up of cars would dilute per-model sales, it is still impressive. After all, people wouldn't buy that model, or a Tesla at all, if they didn't like their cars.
I agree that if they did this by brand, Tesla would be much further down the list.
The combined sales of Toyota's sedan models dwarf Tesla's sales.
The Model Y being the best selling car in the world for 2 years in a row is a part of that.
There's nothing rarefied at all about it.
Based on what? They are at or below the average car price. They are literally, by definition, average.
In fact, the Model 3 was one of the cheapest electric cars at the time.
And even today the Model Y isn't all that expensive. And it's the most sold car in the world. How can the most sold car in the world be considered elite?
Not generally a fan of Teslas, but this just rings hollow. You can get Model 3s and Model Ys for under $40k, which is much less than the average cost of a new car in the US (~$49k in 2025). I would consider a car priced below the average well within the reach of "the people". Even a top-specced Model S is nowhere near what actually rarefied elites drive. A base 911 Carrera is ~$130k, a 911 Turbo S is $230k. A new Ferrari 296 is over $400k, and you can't buy one even if you wanted to.
Elon knew that EVs weren't sexy, so he decided to risk it and build a fast and ultimately expensive EV first, to show people that EVs were worth buying and could be fast.
Only now, through the Model Y and the Model 3, are we seeing the more consumer-friendly models, which is what Elon always wanted from the start.
Here in Australia you can buy a model 3 for around the same price as our most sold car.
> asserted, helped others assert or had a financial stake in any assertion of (i) any patent or other intellectual property right against Tesla
You had to agree to let Tesla use any of your patents, copyrights, trademarks, trade secrets, and all other forms of intellectual property. In return Tesla lets you use just their patents.
Yes, it is actually explicitly that blatantly unfair.
[1] https://www.tesla.com/legal/additional-resources#patent-pled...
I disagree with Tesla about this case at the moment, but the issues are very different.
That's not what he said, anyone can invent excuses after the fact but that doesn't change the facts.
Musk simply pulled the "Don't be evil" trick, in so many words. Oops, sorry, not being evil helps the competition - which has also been slapped with 150% tariff, just in case.
They offered this statement along with a "good faith" patent pledge that required reciprocity.
Just like the annual "robotaxis this year", nothing has changed. lol
Yeah, good idea to hide the crash data.
I find it laughable that there still are Musk fanboys who, after a decade of lies about this, still believe in "Robotaxis". 90% of them have clearly never tried to drive a Tesla anywhere the minimal protection for kids using public street space is something other than "kids should get an SUV so they don't get killed".
It is also amusing to watch videos of Tesla fanboys on YouTube who proudly show that their Tesla can now use FSD for up to 500 miles without a single crash (or "critical disengagement"). A human driver statistically causes a crash every 500,000 miles.
But yes, we will have flying Robotaxis in 2 weeks from now, that will solve this problem. Musk said so.
:)
Not sure what your argument is here. The visualization you get using "Enhanced Autopilot" is completely different from the one you get using "FSD Beta", because the software you are running is completely different as well.
What you see is what you get.
I'm no Tesla fan - but it would be real-world obvious if even 0.1% of Teslas actually were that "eager" to kill children. In most western countries, covering up child-killing accidents scales very poorly.
On a more serious note: Where do we as a society put the bar? What are the numbers, at which we accept the risk? Do we put the bar higher than for humans? Or same level? Or does the added convenience for car drivers tempt us to accept a lower bar?
Safety implementation is never objective. You can only implement a system by subjecting it to context. Traffic safety is a world of edge cases, and each driving implementation will engage with those edge cases from a different subjective context.
We are used to framing computation as a system of rules: explicit logic that is predictably followed. Tesla is using the other approach to "AI": statistical models. A statistical model replaces binary logic with a system of bias. A model that is built out of good example data will behave as if it is the thing creating that data. This works well when the context that model is situated in is similar to the example. It works poorly when there is a mismatch of context. The important thing to know here is that in both cases, it "works". A statistical model never fails: that's a feature of binary logic. Instead, it behaves in a way we don't like. The only way to accommodate this is to build a model out of examples that incorporate every edge case. Those examples can't conflict with each other, either. The model must be biased to make the objectively correct decision for every unique context it could possibly encounter in the future; or it will be biased to make the wrong decision.
The only real solution to traffic safety is to replace it with a fail-safe system: a system whose participants can't collide with each other or their surrounding environment. Today, the best implementation of this goal is trains.
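A toy contrast of that rules-versus-statistics point, purely illustrative (the threshold, weights, and scenario are made up):

    # Rule-based logic: an explicit condition that either fires or doesn't.
    def should_brake_rule(distance_m):
        return distance_m < 30.0  # a wrong rule for some context is visibly wrong, and fixable

    # Statistical model: a learned score. It never "fails" in the logical sense;
    # it always returns an answer, biased by whatever examples built it.
    def should_brake_model(distance_m, closing_speed_ms):
        score = 0.08 * closing_speed_ms - 0.03 * distance_m + 0.5  # made-up weights
        return score > 0.5

    # In a context resembling the example data the two agree; in a mismatched
    # context the model still confidently returns *something*, just not what we wanted.
    print(should_brake_rule(20.0), should_brake_model(20.0, 10.0))  # True True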
Humans have the same problems that statistical models have. There are two key differences, though:
1. Humans are reliably capable of logical deduction.
2. Humans can be held directly accountable for their mistakes.
Tesla would very much like us to be ignorant of #1, and to insulate their platform from #2.
Could not agree more.
You have to separate those. And the default in car nations like Germany or the US has always been to ban the humans. After having seen how other nations handle it, and what it does for quality of life, whenever I see what German cities look like (and of course most US cities), it feels totally alien to me.
Anyway: no, robotaxis are clearly not the solution to the problem. In school kid vs. Tesla, the car will always win. And that holds even if you blame the kid for having made a mistake according to road regulations; making mistakes with traffic rules as a young human should not be punished by death.
What I have seen in my German home town is also a downward spiral: hockey mums thinking it is safer for their kids if they pick them up in their SUVs. But because those SUVs are so big that it is impossible to see the other kids, the risk of accidents actually rises, causing even more mums to drive their kids in SUVs, and so on.
Also, due to the narrow roads, it's standard practice to make eye contact with other users of the shared space to settle who drives or walks next.
Car AIs cannot make eye contact, so this is where the problem starts.
And this one, of course, is very specific to Germany: on parts of the Autobahn you always have to expect another car approaching in the left lane at 250 km/h / 155 mph, so you have to check the rear-view mirror very early to judge how fast that car is going. The range of Tesla's rear camera is far too short to spot a driver at that speed early enough for them to still brake and not crash into your back.
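Rough numbers on why camera range matters there; the detection range, reaction time, and braking rate below are my assumptions, not measured figures:

    # Rearward gap needed for a safe lane change with a 250 km/h car approaching.
    # All constants are assumptions for illustration.
    ego_kmh, approach_kmh = 120.0, 250.0
    closing_ms = (approach_kmh - ego_kmh) / 3.6  # ~36 m/s closing speed

    reaction_s = 1.0   # assumed reaction time of the approaching driver
    decel_ms2 = 7.0    # assumed hard braking

    gap_needed = closing_ms * reaction_s + closing_ms**2 / (2 * decel_ms2)
    print(f"gap needed before pulling out: ~{gap_needed:.0f} m")  # about 129 m with these numbers
    # A rear camera that only resolves cars out to ~100 m (my guess) cannot provide that margin.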
So, when it comes to Germany even if the system worked better, there simply is no place where you could really make use of it without either killing people or getting killed.
It’s hard for me to understand how everyone doesn't geek out about it all the time.
It is fairly obvious that the loudest have little experience with the product.
This is why - because people like you are killing other people due to technologically-inspired negligence.
Right?
Criticism of Tesla would deconstruct their dualist narrative. Tesla has sold the public on the notion that "good enough" self-driving is objectively safer than human driving. Anyone who accepts this narrative can consider the failure of human driving safety as an ultimate bad, which implies that Tesla's automated driving alternative is an ultimate good. This dogmatic thinking hinges on Tesla's vague assertion that automated driving in general is statistically safer than human driving in general. As soon as people engage with any criticism of this narrative whatsoever, the dualist perspective is lost, and the narrative itself falls apart.
andsoitis•1d ago
The nice thing is we can look for ourselves to what extent that is true by downloading the CSV: https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...
For example, in the case of BMW, in every single case the field for ADS/ADAS Version is either blank or redacted.
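If anyone wants to reproduce that check, something like this pandas sketch should do it. Note I'm guessing at the exact column names, so compare them against df.columns in the real file:

    import pandas as pd

    # The SGO incident CSV linked above, saved locally; "Make" and "ADS/ADAS Version"
    # are my guesses at the column names.
    df = pd.read_csv("sgo-2021-01-incident-reports.csv", low_memory=False)
    make_col, version_col = "Make", "ADS/ADAS Version"

    bmw = df[df[make_col].astype(str).str.contains("BMW", case=False, na=False)]
    version = bmw[version_col].fillna("").astype(str).str.strip()
    blank_or_redacted = (version == "") | version.str.contains("REDACT", case=False)

    print(f"{len(bmw)} BMW rows, {blank_or_redacted.sum()} with the version blank or redacted")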
andsoitis•1d ago
Not true. There are many rows for other manufacturers where fields are redacted or blank.
For example:
- Row 7. BMW. ADAS/ADS Version: blank
- Row 8. BMW. ADAS/ADS Version: redacted
- Row 9. Subaru. ADAS/ADS Version: redacted
etc.
andsoitis•1d ago
Again, not true.
I just filtered for BMW, and in every single instance, without fail, the ADS/ADAS Version cell is either redacted or blank.
I didn't check other manufacturers.
genewitch•1d ago
I don't own, nor do I want to own, a Tesla, but stuff like this is what gets reported, and the corrections or actual facts get buried in the resulting noise. I don't even really care that this is about Tesla.
If this was some sort of rendering or CSV error on your part, then that could happen at CBS or MSNBC just as easily, and tomorrow the headlines scream "Tesla only automaker shirking reporting responsibilities".
ra7•1d ago
Tesla also has a problem of their telematics underreporting crashes. One of the reasons for that is they don’t consider it a crash if airbags don’t deploy. This was called out by the NHTSA in a prior investigation: https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf.
Here’s the relevant paragraph from that report:
> Gaps in Tesla's telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic deployment, which are a minority of police reported crashes.
bumby•1d ago
>and the crash involves a vulnerable road user being struck or results in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.