If Tesla didn't want Lemonade to provide this, they could block them.
Strategically, Tesla doesn't want to be an insurer. They started the insurance product years ago, before Lemonade also offered this, to make FSD more attractive to buyers.
But the expansion stalled, maybe because of state insurance bureaucracy, or maybe because Tesla shifted its priorities elsewhere.
In conclusion: Tesla is happy that Lemonade offers this. It makes Tesla cars more attractive to buyers without Tesla doing the work of starting an insurance company in every state.
If the math was mathing, it would be malpractice not to expand it. I'm betting their scheme simply wasn't workable, given the extremely high cost of claims (Tesla repairs aren't cheap) relative to the low premiums they were collecting. The cheap premiums are probably a form of market dumping to get people to buy FSD, sales of which boost their share price.
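To make the "claims outrun premiums" point concrete, here's a minimal sketch of the loss-ratio arithmetic an insurer cares about. The dollar figures are invented for illustration, not Tesla's actual numbers:

```python
# Hypothetical illustration: an insurer's loss ratio is claims paid
# divided by premiums earned. If repair costs stay high while premiums
# are kept artificially low, the ratio exceeds 1.0 and every policy
# loses money before overhead is even counted.

def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Claims paid out per dollar of premium collected."""
    return claims_paid / premiums_earned

# Made-up numbers: average annual premium of $1,800 vs. an expected
# annual claims cost per policy of $2,300 (pricey parts and labor).
ratio = loss_ratio(claims_paid=2_300, premiums_earned=1_800)
print(f"loss ratio: {ratio:.2f}")  # anything > 1.00 is underwriting at a loss
```

With numbers like these, every policy written deepens the hole, which would explain halting the expansion rather than scaling it.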
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well-known challenge in the nuclear energy sector and the airline industry (Air France 447): how do you keep operators fully engaged even though they almost never need to intervene? Otherwise they're likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that's often - but not always - correct.
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to go.
In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's any law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate under limited state laws that allow them to provide such a service, but the law doesn't allow selling such cars to individuals.
That's changing. Quite likely this year we will have a federal law that allows selling cars with fully unsupervised self-driving, in which case the insurance liability will obviously land on the maker of the system, not the person present in the car.
FSD isn't perfect, but it is everyday amazing and useful.
And this is before we even discuss the threats against our allies and destruction of trade partnerships, devaluation of our hard earned dollars, etc.
Sorry, I know politics aren’t “relevant,” but actually, they are.
Glad you “didn’t harm anyone.”
Maybe people will find this comment distasteful and irrelevant. Personally, I hope Tesla owners who decide to brag about their purchase never hear peace about it and become motivated to sell their vehicles out of sheer embarrassment.
genocide /jĕn′ə-sīd″/ noun
The systematic and widespread extermination or attempted extermination of a national, racial, religious, or ethnic group. The systematic killing of a racial or cultural group.

“Uhm aktually it’s not a genocide it’s just a fascist police state”
Multiple humanitarian organizations define mass displacement as genocide and/or ethnic cleansing.
The Holocaust literally started with mass deportations/detentions. Then the Nazis figured out that it was easier to kill detainees.
It may not be on the marketing copy but it’s almost certainly present in the contract.
> Fair prices, based on how you drive [...] Get a discount, and earn a lower premium as you drive better.
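The quoted marketing line implies a telematics-style score that converts driving behavior into a premium discount. Here's a hedged toy sketch of how such pricing might work; the inputs, weights, and discount tiers are all invented for illustration and are not Lemonade's actual model:

```python
# Hypothetical "pay how you drive" pricing sketch. All weights and
# thresholds below are made up; real usage-based insurance models are
# proprietary and far more involved.

def driving_score(hard_brakes_per_100mi: float,
                  pct_miles_on_fsd: float) -> float:
    """Toy score in [0, 100]: fewer hard-braking events and more
    supervised-FSD miles score higher."""
    score = 100.0
    score -= min(hard_brakes_per_100mi * 5, 50)   # penalize harsh braking
    score += min(pct_miles_on_fsd * 0.2, 20)      # reward FSD usage
    return max(0.0, min(score, 100.0))

def premium(base: float, score: float) -> float:
    """Up to a 30% discount at a perfect score, scaling linearly."""
    return base * (1 - 0.30 * score / 100)
```

The point of the sketch: "earn a lower premium as you drive better" just means the insurer continuously re-rates you from telemetry, which is exactly the data-sharing the contract would have to authorize.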
jgbuddy•1h ago
Are you saying that Tesla's investments in FSD have been made with the goal of letting drivers get away with accidents? The law is black and white.
deelayman•54m ago
As an extreme end of the spectrum, there's been worry and debate for decades over automating military capabilities to the point where it becomes "push button to win war". There used to be, and hopefully still is, lots of restraint towards heading in that direction - in recognition of the need for ethics validation in automated judgements. The topic comes up now and then around Teslas, and the impossible decisions that FSD will have to make.
So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability seems to run into ethical constraints. In reality it's the ruthless, bottom-line-focused corps - which don't seem to be the norm, but may have an outsized impact - that actually push up against ethical constraints. But even then, as an executive I would be wary of documenting a decision to disregard potential harms at one of them shops. That line is being tested, but it's still there.
In my actual experience with automations, they've always been derived from laziness / reducing effort for everyone, or "because we can", and sometimes a need to reduce human error.