You were responsible for something, say, child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.
Don't take the right safety precautions and burn down a customer's house - liability insurance
Click on a link in a phishing email and open up your network to a ransomware attack - cyber insurance
Forget to lock your door and get burgled - property insurance
Write buggy software which leads to a hospital having to suspend operations - PI (or E&O) insurance
Fail to adequately adhere to regulatory obligations and get sued - D&O insurance
Obviously there will be various conditions etc. which apply, but I've been in insurance a long time, and cover for carelessness and stupidity is one of the things that keeps the industry going. I've dealt directly with (paid) claims for all of the above situations.
It doesn't absolve responsibility though, it just protects against the financial loss. I suspect if you leave a child alone with an AI and the house burns down that's going to be the least of your problems.
I’m pretty sure this will be the same for the other insurance you mentioned, but for property insurance, if you left your front door open you'll have a hard time getting the insurer to actually pay out your claim. At least here they require a burglar alarm, and they require it to be armed when nobody is on site, or they will absolutely decline the claim.
Insurance insures against risk, but there’s a threshold to that and if you prove to be above it they will decline your claim or void your insurance in totality.
Two main exceptions:
1 - if you are letting the property to someone else, e.g. a lodger, or have paying guests staying with you, then this is typically excluded.
2 - if you have had previous theft claims, live in a high-crime area, or have a particularly high risk (e.g. lots of valuables), the insurer will add an endorsement requiring a minimum standard of locks, engaged when the property is unoccupied.
Outside of those, if you accidentally leave a door unlocked, your claim will likely be paid. The situation obviously may be different in other countries. I worked for a property insurer and saw hundreds of these claims (entry via an unlocked entry point) paid during my time there - I also saw many declined because of the above.
I suspect that over time the number of policies in the 'budget' category will continue to increase as price continues to trump everything else for most people
edit: it is the same for the other lines I mentioned as well - e.g. a cyber policy I saw recently has no conditions relating to the use of MFA. It will have been factored in when writing the risk (they will have said they use it), and if that turned out to be a lie there would be an issue with cover, but if it was just a case of an admin forgetting to include an OU in the MFA group policy, the claim would almost certainly be covered. Policies aimed at the SME space are much more likely to have specific conditions, though.
How is this supposed to be assessed? You can demonstrate that a door was locked if some obvious measure was taken to circumvent it (destroying the lock, the door, a window...), but you can't demonstrate that it was unlocked. Burglars aren't limited to destroying things to bypass locks; one obvious approach is to pick them.
All that said, I can't recall many instances where the theft wasn't either breaking and entering, or entry through an open access point. As easy as lock picking might be, it's not a common burglary technique.
Is that commercial or residential?
I've never seen a residential insurance that requires an alarm system, let alone a monitored system. Though many carriers will offer a discount for having this.
Where is here? I'm not aware of that being common anyplace in the US. I'm guessing you're in some country where crime is significantly higher than in the US.
Insurance will only pay out if you can show that you have done everything a reasonable person would be expected to do to avoid the loss/damage.
> Don't take the right safety precautions and burn down a customers house - liability insurance
You mean someone burnt a customer's house down /because of something like an electrical or equipment malfunction that they could not have reasonably foreseen or prevented/, right?
> Forget to lock your door and get burgled - property insurance
That seems unlikely. Compare this: https://moneysmart.gov.au/home-insurance/contents-insurance
> It's worth checking what isn't included. For example, damage caused by floods, intentional or criminal damage, or theft if you leave windows or doors unlocked.
Happy to be shown that I'm wrong but please do not give people the impression that liability insurance or property insurance will absolve them of losses no questions asked.
Would you want to insure people who think they have no responsibility because they've delegated it to an AI? They might as well have delegated the responsibility to a child or a dog. To sell them insurance, you as the insurer are making a financial bet on the ability of the dog to take care of anything that does go wrong.
And still, as the insured, using an AI imbued with your responsibility risks horrible outcomes that could ruin your life. The AI has no life to ruin; it was never really responsible.
Aside from the fact that your insurance rate just went up, possibly by a lot.
More generally, I think “if something is bad, we should not be able to insure it because then we incentivise it” is not right.
Insurance just covers financial damage; it's the insurer making a bet with you that they will profit off the premiums they calculated for your particular coverage, rather than you causing a payout that puts them in the red.
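To make the bet concrete (purely illustrative numbers, not anything from this thread): if the insurer expects 1 in 200 policyholders like you to trigger a $40,000 payout in a year, the expected cost per policy is $200, so a $300 premium leaves an expected $100 margin per policy before expenses. The premium prices the bet; it isn't a promise that nothing will go wrong.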
And if you intentionally committed an act that would cause a payout, the insurance would almost certainly void your coverage and claim.
You could. Insurance companies will sell you insurance for just about anything; in custom situations they work out the risk somehow. You probably wouldn't like how much it would cost you, though.
> A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.
If a human sales representative had made that mistake instead of a chatbot, I wonder if companies will try to recover that cost through insurance. Or perhaps AI insurance won't cover the chatbot for that either?
cf. an airline chatbot agreeing to an inappropriate refund, or giving wrong advice that leaves the airline deciding to apologise and pay the customer's holiday-related expenses. Those are costs it makes more sense for the airline to eat than to have their insurers price up (unlike other aviation insurance, which can be for eye-wateringly large sums), even if it happens several times a month (which, if your chatbot is an LLM supposed to handle a wide variety of questions, it probably does). The same goes for human sales representatives, who may work on higher-stakes relationships than chatbots, but the consequence of their error is usually not much bigger than issuing a refund or losing the client relationship.
I guess chatbots/LLMs will end up as a special case for professional indemnity insurance in a lot of those regulated firms as lawyers/accountants start to use them in certain contexts.
A customer trusted the policy that the chatbot provided to make a decision, and the tribunal said that it was reasonable for the customer to make a decision based on that policy, and that the airline had to honor that policy.
Insurance generally offsets low precision with higher premiums and a wide range of clients. One employee has a lot of variability, but 100,000 become reasonably predictable.
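A toy simulation of that pooling effect (the numbers are invented for illustration: a 1% annual claim rate and a $50,000 claim size; none of this is from the parent comment):

```python
import random
import statistics

P_LOSS, LOSS_SIZE = 0.01, 50_000  # assumed: 1% yearly claim rate, $50k claim

def yearly_loss(n_policies):
    """Total paid out in one simulated year across n independent policies."""
    return sum(LOSS_SIZE for _ in range(n_policies) if random.random() < P_LOSS)

random.seed(1)
for n in (1, 100, 10_000):
    per_policy = [yearly_loss(n) / n for _ in range(500)]  # 500 simulated years
    print(f"n={n:>6}: mean ${statistics.mean(per_policy):>8,.0f}/policy, "
          f"stdev ${statistics.stdev(per_policy):>8,.0f}/policy")
```

The per-policy average is around $500 in every case, but the year-to-year swing shrinks roughly with the square root of the portfolio size, which is what makes the aggregate book predictable.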
Sorry couldn’t resist.
Insuring against localized risk is old hat for insurance (fire and flood insurance, for example) and is generally handled by having lots of localities in the portfolio. This works very well for one-off events, but occasionally leaving localities is warranted: it becomes impossible to insure profitably if the law won't let insurers raise premiums to levels commensurate with the risk.
If the AI screws up, what do you fire/retrain? It seems like eventually the AI would get wrapped in so many layers of hard-coded business logic to prevent repeat offenses that you may as well just be using hard-coded business logic.
If the insurance company models loss-causing outputs as Bernoulli trials (i.e. each use of the LLM is an independent, identically distributed event with an equal chance of error), but the errors are actually correlated due to information sharing, then that could make it harder for them.
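A sketch of why that distinction matters (all parameters are my own assumptions: a 2% marginal error rate and a rare "shared flaw" event that hits every use at once): both books below have the same average error rate, but only the correlated one has a catastrophic tail.

```python
import random

N_USES = 10_000   # assumed: LLM uses per year in the insured book
P_ERR = 0.02      # assumed: marginal chance any single use causes a loss
P_COMMON = 0.01   # assumed: yearly chance of a flaw shared by every use
P_INDEP = (P_ERR - P_COMMON) / (1 - P_COMMON)  # keeps marginal rate at P_ERR

def errors_in_a_year(correlated: bool) -> int:
    """Count loss-causing outputs in one simulated year."""
    if correlated and random.random() < P_COMMON:
        return N_USES  # a shared flaw (same model everywhere) hits every use
    p = P_INDEP if correlated else P_ERR
    return sum(random.random() < p for _ in range(N_USES))

random.seed(0)
for correlated in (False, True):
    worst = max(errors_in_a_year(correlated) for _ in range(500))
    print(f"correlated={correlated}: worst of 500 years = {worst:,} errors")
```

Under the i.i.d. assumption the worst year looks only modestly worse than the average one; with a shared failure mode, the worst year can be the entire book at once, which is exactly the scenario that premiums calibrated to Bernoulli trials won't cover.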
AI models hallucinate, and by their black-box nature can't have hard safeguards put in, as evidenced by the large body of research on prompt jailbreaking.
Inherently, too, AI is operating in a non-deterministic environment, but its architecture for computation is constrained by determinism and decidability. The two are foundationally incompatible for reliable operations.
Language is also one of those trouble areas, since meaning is floating. It seems quite likely that a chatbot will get stuck in an infinite loop (halting problem) with the paying customer failing to be served, and worse, the company involved imposing personal cost on them in the process (frustration and lack of resolution). If the company eliminates all but that single point of contact, either in structure or in informal process, I don't see how you can control costs sufficiently when the lawsuits start piling up.
We know this stuff isn’t ready, is easily hacked, is undesired by consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.
If you’re doing it wrong to a meaningful extent, you won’t be able to get insurance, or it will be very expensive.
https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...
How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?
For all my being pwned, I should get paid for my part in purifying the gene pool and raising average human intelligence by a minuscule fraction of a percentage, as well as the right to smugly say I told you so! Because that was ALL part of the plan. Don't let the coffin lid hit you on the way out.
The Conservatives Who’d Rather Die Than Not Own the Libs
https://www.theatlantic.com/ideas/archive/2021/09/breitbart-...
>Rarely has so significant a faction in American politics behaved in a way that so directly claims the life of its own supporters. [...]
>In Nolte’s account, however, a conspiracy of evil leftist elites are to blame for vaccine skepticism on the right. “I sincerely believe the organized left is doing everything in its power to convince Trump supporters NOT to get the life-saving Trump vaccine,” Nolte writes. They are “putting unvaccinated Trump supporters in an impossible position,” he insists, “where they can either NOT get a life-saving vaccine or CAN feel like cucks caving to the ugliest, smuggest bullies in the world.” [...]
>Nolte theorized:
>In a country where elections are decided on razor-thin margins, does it not benefit one side if their opponents simply drop dead? If I wanted to use reverse psychology to convince people not to get a life-saving vaccination, I would do exactly what Stern and the left are doing … I would bully and taunt and mock and ridicule you for not getting vaccinated, knowing the human response would be, Hey, fuck you, I’m never getting vaccinated! …
>Have you ever thought that maybe the left has us right where they want us? Just stand back for a moment and think about this … Right now, a countless number of Trump supporters believe they are owning the left by refusing to take a life-saving vaccine—a vaccine, by the way, everyone on the left has taken. Oh, and so has Trump.