So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country, and we're going to use something like Bitcoin to sidestep national currencies (and maybe also to avoid existing regulatory structures; it's not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
Some of the ways to measure that, off the top of my head, are number of customers, number of active policies, premium amount, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, […]
I’m pretty sure. Although, the original comment was basically putting that issue aside, so I’m not sure what there is to say about it.
For the points you brought up, why is stagnation for the purposes of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
However, I believe there's a middle ground and endeavor to find it. Based on your response it doesn't appear as though you believe a middle ground exists.
The author ignores the fact that in any normal market there are insurance products at various prices, yet somehow not all people flock to the cheapest one; quite the contrary (at least where I live). Higher fees mean, e.g., a less stressful experience when dealing with the insurer.
Dealing with a machine is unlikely to be worse.
Also, I think you may have made a typo that negated the meaning of some of your comment (but I believe I can understand what you meant anyway).
But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
And then they will add a low-cost arbitration clause, where disputes are also handled by AI. Free market goes brrr
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
https://worldpopulationreview.com/countries/deaths-per-day
So 100,000 employees actually puts it surprisingly close to just having one case handled per day per employee.
Of course, a ton of people don’t have life insurance. And also, a lot of deaths are pretty straightforward.
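Back-of-envelope, using the ~170k deaths/day figure cited elsewhere in the thread (rough, assumed numbers, not from the article):

```python
# Rough arithmetic only; both figures are approximations from this thread.
deaths_per_day = 170_000   # approximate worldwide deaths per day
employees = 100_000        # Allianz-scale headcount from the quoted vision

print(deaths_per_day / employees)  # ~1.7 potential claims per employee per day,
                                   # before accounting for the uninsured or for
                                   # any single insurer's market share
```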
That's not an assumption.
I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".
I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.
That's really all that you need in order to make the judgement that you're not going to get a human.
Side-note: I did a quick search and found that Allstate has 23k reps who actually handle claims out of 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, about 1,500 customers per rep. That's much better odds than 1 for every 10 million.
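For concreteness, the same figures worked through (all approximate):

```python
# Ratio check using the numbers quoted in the comment above; treat as rough.
allstate_claims_reps = 23_000
us_population = 340_000_000
allstate_market_share = 0.10

covered = us_population * allstate_market_share   # ~34M people, an upper bound
print(covered / allstate_claims_reps)             # ~1,478 customers per claims rep

# Meanwhile's stated target, for contrast: 1B customers, 100 employees.
print(1_000_000_000 / 100)                        # 10,000,000 customers per employee
```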
> A reasonably decent AI
And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.
Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc. Which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions aren't here yet.
E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:
a. incorrectly approves someone, then you need to kick them off the policy later?
b. incorrectly denies someone initial or continuing coverage?
Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.
And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
What part of that is suffering, if it enables 100k constituents to put food on the table?
I agree with you (except in classifying the genuine effort of my fellow people to be "fake jobs" just because a computer can do some of the work) and believe making a resilient, trustworthy, proven system for the former is a prerequisite to withdrawing the latter, to avoid suffering.
Unfortunately for us, the barrier to the former is ideological in nature and imposed by the elite few in power now, before any matters of capital allocation (human or financial) come into play.
This was previously stated: the good being done is that 100,000 people can feed their families. What good comes of doing without that? You'll enrich some private equity dudes and leave a lot more people unemployed and a lot more families unhappy.
We shouldn't employ people in economically un-viable ways just because they need income. We can just give them money directly, or redirect them to other work, or a combination of the two.
If that is what's necessary to provide a social safety net, then maybe so. See the Works Progress Administration for an example of this.
> We can just give them money directly, or redirect them to other work
Ideally yes, but that isn't happening, hence the first option.
We may be straying here, though: this discussion didn't start out with someone saying what someone else should or shouldn't do. We were discussing the ethical and economic consequences of an idea.
The way I've come to think of the current moment in history is that capitalism allocates resources via markets, and we use this system because in many situations it's highly efficient. But governments allocate resources democratically precisely because we do not always want to allocate resources efficiently with respect to making money.
Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.
And I think there is a distinction between different kinds of efficiency that can be optimized for, not just monetary cost. If we desire clean, paved, safe roads that can be used by all equally for efficient movement of goods, because we recognize that as a prerequisite for a strong economy, we cannot rely on the free market to deliver that, much less optimize for it. It can be more efficient, in terms of actually delivering the desired goal versus not delivering it at all (or delivering a grossly bastardized version of it), to pool our resources and explicitly work toward making something available, rather than hoping the free market will deliver it.
The free market did not deliver on reducing congestion in New York (in fact, one might say that over the decades, the free market is what made it worse), but the congestion pricing program has, and has resulted in a bunch of valuable/desirable knock-on effects.
I do not think that a centrally planned economy is workable; but collectively being deliberate about building the things we need/want, and taking a longer view, can result in significant efficiencies.
The free market ends up simply wasting resources in its drive to discover where efficiencies lie and how to take advantage of them.
That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.
It's easy to describe a business process with written down rules, and those are easy to find in legal discovery. It's much easier to obfuscate with an AI model, because "nobody knows what it's actually doing - it's AI!".
I am personally 7 for 8 in lifetime wins in my city's parking ticket appeals process. That doesn't mean that I think that 7 out of 8 tickets my city issues are incorrect.
I suppose whole life where there is a cash value and investments being managed might have a more ongoing service need, but I'm not familiar with that.
It seems a bit high to me, but I don't know anything about the industry. FWIW, around 170k people die per day.
https://news.ycombinator.com/item?id=43918053
This doesn’t establish any sort of mathematical bounds, but it gives an idea of the size of the problem. I suspect 100k employees is an over-estimate just because a lot of people are uninsured…
Allianz has ~150k employees, but they certainly don't all work on the term life business in the USA; they do all kinds of other insurance all over the world and have hundreds of different products.
For term life specifically, there still are some pretty significant back office teams that a customer probably never interacts with directly, though. A few that come to mind:
- underwriters: you won't be able to make a decision for every applicant based on the info they provide and the info you can pull from automated sources, so some number of humans are on the phone with your applicants asking clarifying questions, doing additional research, and making risk decisions. They're also routinely doing retrospective analysis that looks back on claims paid out to make sure the claims are reasonable and there's not some sort of gap in the underwriting approach that's leaving unknown risk on the table, plus audits of automated underwriting decisions to make sure the rules engines are correctly categorizing risks (a toy sketch of such a rules engine appears after this list)
- actuaries: every company has varying risk tolerance for both the policies they issue and the cash they hold/invest. These people are advising on how to take risks and working with underwriters and finance people to try and figure out the financial impact of various underwriting decisions: can a product remain viable if it is purchased by a heavier balance of smokers vs nonsmokers, etc
- accountants and finance: it's a capital-intensive business that requires large cash reserves and a sane investment strategy for that cash, often subject to tests by regulators or industry associations and all sorts of lengthy audits
- compliance: in the US, life insurance is individually regulated by each state. Many states join the ICC Compact and agree to all follow the same rules and have a single set of regulatory filings, but you still have plenty of other states to do filings with, analyze changing requirements from, maintain relationships with regulators, respond to regulatory complaints or investigations, etc
- industry reporting: most insurance carriers participate in information-sharing programs like the MIB (Medical Information Bureau) and these memberships come with various reporting and code-back obligations. The goal is to prevent you from getting declined at one life insurer because you say you have some sort of uninsurable illness and then turning around and lying about not having that illness to another life insurer the next day. These sort of conflicting answers get flagged for manual review, someone will need to talk to the applicant and figure out why they gave conflicting info to multiple insurers and what the truth really is.
- claims and fraud investigations: many, many people lie to try and get insurance they aren't qualified for or to take out insurance on someone they aren't supposed to. Claims investigations start by asking "is the insured really dead" but then try to answer the questions like "did the insured know this policy was taken out on them", "were the responses the insured gave during underwriting truthful", etc. These investigations are extremely time consuming and often involve combing public records, calling doctors, interviewing family, and more. You'd probably be shocked how common it is for former-spouses to try and take out insurance policies without the other knowing during divorces. Some level of this investigation is happening in the first couple of years a policy is in force, too, as insurers can rescind the policy and refund the premiums if they determine it was obtained under false pretenses
- reinsurance: even the biggest insurers typically pool and share some amount of risk so that a bad claims year can't take down an entire carrier. reinsurance treaties are complex things to negotiate and maintain, and have lots of reporting obligations and collaboration between the reinsurer and the actuaries to validate the risks are what everyone thinks they are
The customer-facing part of a term life company is really just the tip of the iceberg. Small companies are certainly better at doing this with tech than bigger incumbents (that's a big part of the reason we exist at Wysh), and a narrow product focus really helps, but there's still some pretty significant levels of human expertise involved to keep it all running.
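To make the underwriting bullet concrete, here's a toy sketch of the kind of rules engine those audits would cover (hypothetical rules and thresholds, invented purely for illustration):

```python
# Toy automated-underwriting triage. Every rule and threshold here is
# an invented assumption, not any real insurer's logic.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    smoker: bool
    bmi: float
    flagged_by_mib: bool  # conflicting answers reported via MIB, per above

def triage(app: Applicant) -> str:
    """Return 'approve', 'decline', or 'refer' (send to a human underwriter)."""
    if app.flagged_by_mib:
        return "refer"            # conflicting info always gets manual review
    if app.age > 70 or app.bmi > 40:
        return "refer"            # outside the automated risk box
    if app.smoker and app.age > 55:
        return "decline"
    return "approve"

# The retrospective audit mentioned above: re-run decided cases and compare.
cases = [Applicant(34, False, 24.0, False), Applicant(61, True, 28.5, False)]
print([triage(c) for c in cases])  # ['approve', 'decline']
```

The point of the sketch is the `refer` branch: however good the rules get, some slice of applicants falls outside the automated box and lands on a human.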
If they were receiving spousal support (“alimony”) or child support, this seems unsurprising and sensible.
You need both an insurable interest and consent of the insured in order to buy an insurance policy on someone else’s life.
Couples separating and holding policies on each other is pretty common and carriers have some specific rules to follow to make sure there’s appropriate mutual consent for policy changes etc
Although I would still agree that there would need to be a mechanism for escalation to a human.
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.
Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.
The Catholic church has 1B "customers" and seems to be doing OK with human-to-human interaction, without the need (or desire) for AI. They do so via ~500K priests and another 4M lay ministers.
The comparison to the Church seems not really super useful, their business model is pretty different.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" and couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
And right now, the LLMs aren't really that smart; they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. That's better than every response coming from a randomly selected customer support agent (as I've experienced), agents who don't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, but it's not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
No it's not, but that's not what I described. I described replacing mediocre humans with better AI for at least the first level of customer service.
Real world evidence supporting your argument:
United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
https://www.healthcarefinancenews.com/news/class-action-laws...
This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials would be appealed at a higher rate (the appeals select on a hidden variable), and therefore the true error rate is something less than 90%.
That's not to say UHG are without blame, I just thought this was really interesting.
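To see how strong that effect can be, here's a toy numbers-check (every parameter invented for illustration): even if only 15% of all denials are actually wrong, the overturn rate among appealed denials can land near 90%, simply because wrongly denied people appeal far more often.

```python
# Toy model of the selection effect. Every number here is an invented
# assumption, chosen only to show how "90% of appealed denials reversed"
# can coexist with a much lower error rate across all denials.
wrongful_denial_rate = 0.15   # fraction of ALL denials that are wrong (assumed)
appeal_if_wrongful   = 0.50   # wrongly denied people appeal often (assumed)
appeal_if_correct    = 0.05   # correctly denied people rarely appeal (assumed)
reversal_if_wrongful = 0.90   # appealed wrongful denials usually reversed (assumed)
reversal_if_correct  = 0.15   # a few correct denials get reversed anyway (assumed)

appealed_wrong = wrongful_denial_rate * appeal_if_wrongful
appealed_right = (1 - wrongful_denial_rate) * appeal_if_correct

overturned = (appealed_wrong * reversal_if_wrongful
              + appealed_right * reversal_if_correct)
overturn_rate_among_appeals = appealed_wrong * reversal_if_wrongful / overturned

print(f"error rate among all denials:     {wrongful_denial_rate:.0%}")   # 15%
print(f"overturn rate among appeals only: {overturn_rate_among_appeals:.0%}")  # ~91%
```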
This case is not like that; if the insurance agency wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, go through the appeal process for those, and publish the resulting number without selection bias.
As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.
It stands to reason that the overwhelming majority of cases where the claim was approved were approved correctly. Unless that rate is well under 15%, it’s impossible to have the claimed “90% error rate”.
[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".
feature, not bug
We'll send the appeals through Mechanical Turk.
Happy now?
The junior guy started crying in the meeting. Like, just blubbering. My wife still feels bad about it, but still…
Weird thing: instead of firing him, McKinsey kept him and stipulated that he could only be in meetings when the partner was present.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
You want the senior people focusing on the problems, strategy, and comms, not data aggregation and PowerPoint formatting.
Half the time it doesn't actually matter who the consultant is, the business is just looking for an arbiter to provide a second opinion or justify a decision.
Modern consulting seems like one of the better deaths inflicted by GenAI. The entire industry is a means to commit corporate espionage legally.
They can do something more useful with that education.
This is your mistake. The point of a consultant is to tell the business to do what the business was already planning to do anyway. This way the consultant takes the risk/blame for the decision. It's similar to the classic "no one ever got fired for buying IBM": "I did what the McKinsey consultant told me" is CYA. The last piece is that, since everyone is in on the game, when a decision leads to bad outcomes they don't blame the consultant, but something they could not have foreseen.
The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.
The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.
Why would they fire him after a single incident?
Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)
Saying the work sucks isn't bullying, unless you didn't know you were incompetent.
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Isn't that... good? What else would you expect?
You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure seems to limit sales to occurring only in Bermuda seems intentional. I suspect that this product would be highly illegal in most if not all US states, so they must offer it only in Bermuda to avoid that issue.
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
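To make the quoted mechanics concrete, here's a hypothetical worked example with invented numbers (not tax advice, and whether this basis treatment would actually hold up is exactly what's in question above):

```python
# Hypothetical illustration of the quoted loan mechanics (invented numbers).
funded_btc       = 1.0        # BTC paid into the policy
basis_at_funding = 10_000     # USD per BTC when the policy was funded (assumed)
price_after_10x  = 100_000    # USD per BTC later (assumed)

# Selling directly: capital gain on the appreciation.
gain_if_sold = funded_btc * (price_after_10x - basis_at_funding)
print(gain_if_sold)            # 90,000 USD of taxable gain

# The quoted claim: borrow BTC against the policy instead. The borrowed coin
# carries a cost basis at today's price, so selling it immediately realizes
# no gain (you still owe the loan back, of course).
borrowed_basis = price_after_10x
gain_on_borrowed_sale = funded_btc * (price_after_10x - borrowed_basis)
print(gain_on_borrowed_sale)   # 0 USD of gain at sale time
```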
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
perhaps a typo in year?
My understanding of this reputation: it often comes at the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. I think short term? Maybe, albeit expensive. Long term? I'd say no.
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person on my team, and I'm being held back because innovating isn't my role (I'm not a dev but a data analyst at the moment).
If I stuck to my role, however, we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can build actual AI automations. And the IT department has basically been stripped out by upper management.
If anyone wants an actual dev building AI automations and thinking about how to disrupt with the state of the art, my email is in my profile.
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
They essentially lied about any anticipated KPI potential and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS that was such a mess it made the second year's CTO start from scratch. After some heavy arguments about their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "engineers" felt like talking to AWS sales: they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
one nitpick:
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
ChatGPT launched in late 2022...
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance the way companies do for health insurance. Having to deny a life insurance claim is a very different set of circumstances.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory you can even estimate when.
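A minimal sketch of what that extreme-value estimate might look like, on synthetic data (not the commenter's actual method; a real analysis would use the insurer's own loss history):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic stand-in for 40 years of annual worst-case claim totals, in $M.
# (Invented data; real block maxima would come from claims records.)
annual_maxima = rng.gumbel(loc=50, scale=10, size=40)

# Fit a generalized extreme value distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# Probability that next year's worst claim total exceeds the reserves,
# and the implied return period (expected years until it happens).
reserves = 90.0  # $M, assumed
p_exceed = genextreme.sf(reserves, shape, loc=loc, scale=scale)
print(f"P(annual max > {reserves}) = {p_exceed:.3f}")
print(f"return period = {1 / p_exceed:.0f} years")
```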
I would rather work for someone honest than for a bullshit artist. But I wouldn't necessarily decline to work for the world's best bullshit artist. Just that you want to be very sure you know who at the table is the sucker.
Ideally you get someone who's good at both, or at least competent at one and really good at the other, such as Jobs or Gates.
Look at DEC, a classic engineering company failure. DEC failed because they were led by engineers who didn't understand the market. It apparently was a great place to work, because they were so NIH that they built everything from scratch.
Then look at Intel, a company that is in the process of failing because they listened to their customers too much. None of their customers wanted GPUs, or mobile chips, or power savings - until they did. By that time Intel was already behind the curve.
Then look at Microsoft under Ballmer - a company that probably illustrates the point you're trying to make. But then they won with Nadella, luckily.
Apple is a bit different and a bad example because unlike other companies they attempt to define the future. Most companies aren't in a position to try, much less succeed, at this.