{Task, model, coverage} --> bid.
It can even be circular: AI all the way down, with the insurer's AI doing the evaluation and bidding.
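A minimal sketch of what that bidding step might look like, assuming a simple expected-loss pricing rule (all names, the loading factor, and the failure-rate input are hypothetical, not anything from the thread):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    task: str        # what the model will be used for
    model: str       # which model/provider is being insured
    coverage: float  # policy limit in dollars

def bid(q: Quote, estimated_failure_rate: float) -> float:
    """Hypothetical premium: expected loss times a loading factor.

    In the circular version, estimated_failure_rate would itself come
    from the insurer's AI evaluating the {task, model} pair.
    """
    expected_loss = q.coverage * estimated_failure_rate
    loading = 1.3  # illustrative margin for expenses and profit
    return expected_loss * loading

premium = bid(Quote("invoice triage", "some-llm", 1_000_000), 0.002)
```

The interesting (and fragile) part is that nothing in the loop forces `estimated_failure_rate` to be produced by anything other than another model.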
Insurance is very different. Nobody is looking to insure the unit test they vibe-coded late on a Friday afternoon; rather, it would be the multi-million-dollar "we replaced all our accountants with a ChatGPT-based system" decisions. Getting one of those decisions wrong will absolutely be a problem for your AI-insurance company. In addition, in most cases you won't even know whether you were right or wrong until many years later, so you have to keep reserves locked up for much, much longer.
Whichever jurisdiction and "justice" system hears the case will set the precedent for all others to reference, depending on their alignment with the jurisdiction that uses state power to enforce a resolution.
I expect people will host, or make remotely available, systems that fall outside the acceptable limits of whatever regional jurisdiction's laws apply.
As usual, pirates and the powerful will steer around those.
Interestingly, see [1] for
“a teletype from General Groves to Oppenheimer from February 1944, instructing the latter as to what to tell Underhill [UC’s secretary and finance officer] about the hazards to be insured against at an unspecified site”
[1]: https://blog.nuclearsecrecy.com/2012/03/28/weekly-document-i...
I expect home insurance to cost more than it pays out (in both median and mean terms), but I take the negative-expected-value deal to protect against rare, financially ruinous outcomes.
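The arithmetic behind taking a negative-expected-value policy can be sketched with log utility. The numbers below are purely illustrative assumptions, not figures from the comment:

```python
import math

wealth = 100_000.0
premium = 1_500.0   # annual cost, more than the expected payout
ruin_prob = 0.005   # chance of a catastrophic loss in a year
loss = 99_000.0     # size of the catastrophic loss

# Expected monetary value: buying insurance loses money on average.
ev_insured = wealth - premium                # 98,500
ev_uninsured = wealth - ruin_prob * loss     # 99,505

# Expected log utility: insurance wins because near-ruin is so costly.
u_insured = math.log(wealth - premium)
u_uninsured = ((1 - ruin_prob) * math.log(wealth)
               + ruin_prob * math.log(wealth - loss))

print(ev_insured < ev_uninsured)  # True: negative-EV deal
print(u_insured > u_uninsured)    # True: but higher expected utility
```

This is the standard risk-aversion argument: a concave utility curve makes the certain small loss (the premium) cheaper in utility terms than a small chance of a devastating one.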
Quality underwriting and minimizing adverse selection give an insurance company a massive advantage over competitors, but they don't make or break the market on their own.
I'm also not sold on model-provider diversity being the measure of risk diversity: surely most of the risk comes from application errors, not failures of the "safety" tuning of models (which is mostly about preventing LLMs from saying things you wouldn't want in the newspaper; I assume AI E&O isn't interested in insuring reputation risk).
E&O insurance exists because the client expects accuracy, but AI products do not yet carry any material expectation of accuracy. If there is an error, that is currently part of the product.
There are, of course, cases of material damage (e.g., AI in a self-driving vehicle hitting someone or something) that would be insurable, but that would be more about insuring that specific industry than about E&O.
baobun•3d ago
Indeed.
https://en.wikipedia.org/wiki/Post_office_scandal
doctorpangloss•3h ago
> We have to be careful, that we are not creating a cottage industry that damages the brand and makes clients like the DWP and the DVLA think twice. The DWP would not have re-awarded the Post Office card account contract, which pays out £18 billion a year, in the last month if they thought for a minute that this computer system was not reliable
I know that's something someone said, but is it true? So what if a lot of people say it? Nobody really knows what does or doesn't lead to sales. If sales were all that mattered, they wouldn't have done the IT upgrade at all.
People use shitty software all the time.
> The new Horizon project became the largest non-military IT contract in Europe.
Also... really doubt that is true.
The Horizon IT report's first volume "will focus on redress (compensation) and the human impact of the Horizon scandal." Okay. But why did people feel so strongly about the technology in the first place? Who gives a fuck about bugs?