{Task, model, coverage} --> bid.
It can even be circular: AI insuring AI, with the insurer's AI doing the evaluation and bidding.
Insurance is very different. Nobody is looking to insure the unit test they vibe-coded late on a Friday afternoon; rather, it would be the multi-million-dollar "we replaced all our accountants with a ChatGPT-based system" decisions. Getting one of those decisions wrong will absolutely be a problem for your AI-insurance company. In addition, in most cases you won't even know whether you were right or wrong until many years later, so you have to keep reserves locked up for much, much longer.
Whichever jurisdiction's "justice" system hears the case will set the precedent for all others to reference, to the extent they align with the jurisdiction that uses state power to enforce a resolution.
I expect people will host, or make remotely available, systems that fall outside the acceptable limits of whatever regional jurisdiction's laws apply.
As usual, pirates and the powerful will steer around those.
Interestingly, see [1] for
“a teletype from General Groves to Oppenheimer from February 1944, instructing the latter as to what to tell Underhill [UC’s secretary and finance officer] about the hazards to be insured against at an unspecified site”
[1]: https://blog.nuclearsecrecy.com/2012/03/28/weekly-document-i...
this sounds very interesting! any more details?
I expect home insurance to cost more than it pays out (both in median and mean terms) but I take the negative-value deal to protect against rare financially ruinous outcomes.
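This is the standard expected-value vs. expected-utility argument, and it can be sketched with toy numbers (all hypothetical, just to show the shape of the trade-off): a policy can lose money on average yet still be the rational choice once you account for risk aversion, modeled here with log utility.

```python
import math

# Hypothetical numbers, purely illustrative.
wealth = 500_000      # home value plus savings
premium = 3_000       # annual premium
p_loss = 0.005        # chance of a ruinous loss in a given year
payout = 400_000      # size of the insured loss

# In pure expected-money terms, buying the policy is the worse deal:
ev_insured = -premium                 # -3000
ev_uninsured = -p_loss * payout       # -2000

# Under a concave (log) utility, the ranking flips, because the rare
# ruinous outcome is weighted far more heavily than the steady premium:
eu_insured = math.log(wealth - premium)
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - payout)

print(ev_insured < ev_uninsured)   # True: negative-value in money terms
print(eu_insured > eu_uninsured)   # True: still better in utility terms
```

The same mechanism is why "median and mean both negative" is consistent with the policy being worth buying.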
Quality underwriting and minimizing adverse selection give an insurance company a massive advantage over competitors, but they don't make or break the market on their own.
I'm also not sold on model-provider diversity being the right measure of risk diversity: surely most of the risk comes from application errors, not from failures of "safety" tuning of models (which is mostly about preventing LLMs from saying things you wouldn't want in the newspaper; I assume AI E&O isn't interested in insuring reputation risk).
If I have insured a whole population by the river, then I'm heavily incentivized to sell an additional policy to someone else by that same river: after all, if there is a flood, I cannot become more bankrupt, and if there isn't, I collect one more premium.
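The limited-liability incentive above can be made concrete with a toy expected-value calculation (all numbers hypothetical). Once the insurer's correlated exposure exceeds its capital, bankruptcy caps its downside, so each additional policy on the same river is pure premium in expectation:

```python
# Toy model of an insurer whose downside is capped by bankruptcy:
# it cannot lose more than its capital. Numbers are made up.
capital = 1_000_000
premium = 1_000
claim = 50_000
p_flood = 0.01   # one flood hits every insured house at once (fully correlated)

def insurer_ev(n_policies: int) -> float:
    """Expected value to the insurer of holding n fully correlated policies."""
    exposure = min(n_policies * claim, capital)  # losses beyond capital aren't borne
    return n_policies * premium - p_flood * exposure

# Below the cap, each extra policy adds premium minus expected claim:
print(insurer_ev(11) - insurer_ev(10))   # 1000 - 0.01 * 50000 = 500
# Past the cap (20 policies * 50k = capital), an extra policy is pure premium:
print(insurer_ev(21) - insurer_ev(20))   # 1000
```

This is exactly why regulators impose reserve and reinsurance requirements: they push the tail losses back onto the insurer's books so the marginal policy isn't free money.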
This is not true, and nothing you have said contradicts my argument to that effect.
baobun•8mo ago
Indeed.
https://en.wikipedia.org/wiki/Post_office_scandal
janice1999•8mo ago
jlarocco•8mo ago
doctorpangloss•8mo ago
> We have to be careful, that we are not creating a cottage industry that damages the brand and makes clients like the DWP and the DVLA think twice. The DWP would not have re-awarded the Post Office card account contract, which pays out £18 billion a year, in the last month if they thought for a minute that this computer system was not reliable
I know that's something someone said, but is it true? So what if a lot of people say it? Nobody really knows what leads to sales or lost sales. If sales were all that mattered, they wouldn't have done the IT upgrade at all.
People use shitty software all the time.
> The new Horizon project became the largest non-military IT contract in Europe.
Also... really doubt that is true.
The Horizon IT report's first volume "will focus on redress (compensation) and the human impact of the Horizon scandal." Okay. But why did people feel so strongly about the technology in the first place? Who gives a fuck about bugs?
idopmstuff•8mo ago
Every organization is built around tradeoffs between time (which is very much money here) and quality. You route appraisals to people of different skill levels depending on the likelihood of issues and the likelihood that those issues matter. With appraisals, for example, if the buyer is putting down 40% where only 20% is required, you check only the super critical stuff, because even if you're meaningfully off on the appraised value it doesn't actually matter that much.
There are a ton of these tradeoffs of whether you should review something as well as whether you should have your most senior people review it. On top of that, reviewers are judged on speed, so naturally they're making tradeoffs of accuracy vs. time themselves on each review.
Using AI means you're able to review things that wouldn't even have been looked at otherwise, because there's no longer a tradeoff of time/cost vs. accuracy. Even the world's best reviewer can't catch things they're not looking at.
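The routing tradeoff described above can be sketched as a simple triage rule. The tiers, thresholds, and flag counts here are entirely hypothetical; the point is just that review depth scales with how much an appraisal error would actually matter:

```python
def route_appraisal(down_payment_pct: float, risk_flags: int) -> str:
    """Route an appraisal to a review tier (hypothetical thresholds).

    The deeper the buyer's equity cushion and the fewer the risk flags,
    the less an appraisal error matters, so the lighter the review.
    """
    if down_payment_pct >= 0.40 and risk_flags == 0:
        return "automated-check"   # big cushion: spot-check the critical items
    if down_payment_pct >= 0.20 and risk_flags <= 2:
        return "junior-reviewer"   # routine file: time-boxed human review
    return "senior-reviewer"       # thin equity or many flags: full review

print(route_appraisal(0.45, 0))   # automated-check
print(route_appraisal(0.25, 1))   # junior-reviewer
print(route_appraisal(0.10, 3))   # senior-reviewer
```

An AI reviewer slots into this as a cheap extra tier below "automated-check", which is what lets files that previously got no review at all get looked at.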