OTOH they're probably planning to charge full price anyway but massively reduce costs, because profit.
The CEO is an employee of the board of directors and the stockholders. An AI CEO would no doubt be as ruthless as a human CEO, if not more so. In other words, I wouldn't anticipate any improvement in CEO behavior.
This is a false dichotomy. Why not both?
I think it's a bit strange to hope or assume that an AI CEO would somehow preserve human jobs.
Does this CEO of a small hospital realize that their hospital will bear the legal liability if there's no doctor to sue for malpractice?
My wife and I are both physicians. Our house doesn't belong to either of us, strictly speaking; it belongs to our marriage. You'd have to have a legal claim against both of us to put it in jeopardy.
I think the cases where judgements differ (between humans, between AIs, or between the two) will be the difficult-to-discern cases, where no human and no LLM will have 99% confidence.
From the votes I see that this is an unpopular opinion, but apparently there are close to 400 million companies in the world, of which about 60K are publicly traded.
I am sure there's enough data to train a top-notch CEO on this, since CEOs are required to keep records all the time and give speeches for a living.
Surely privately owned companies where the CEO is also the owner wouldn't like it, but replacing the CEO with an AI in institutions with professional CEOs seems overdue. The radiologist AI will certainly be much better served by an AI CEO.
Main reasons…
1. HR doesn’t work with you. They work for your CEO or Board. Consider them a toxic entity if you ever have a real problem.
2. HR is a socially accepted jobs program for people without any discernible skills, beyond basic data entry and organization. Effectively no one else wants to do it. The issue is that with point one, these people are told they are important and it immediately goes to their heads.
Agree that AI should replace CEOs. They're often biased in unhelpful ways that AI isn't, and it costs people wellbeing.
> “Undeniable proof that confidently uninformed hospital administrators are a danger to patients: easily duped by AI companies that are nowhere near capable of providing patient care,” [Radiologist Dr.] Suhail told Radiology Business. “Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive. But in some sense, they’re correct: Hospitals are happy to cut costs even if it means patient harm, as long as it’s legal.”
Of course, given that these are legal cases, it would take years for any consequences to materialize.
That hasn't stopped them any other time they cut costs. Have you ever spoken to a nurse who works in a hospital?
Getting rid of radiologists is as much nonsense and saber-rattling as suggesting that using AI would harm patients.
The answer is clearly just the same as in software development or any other AI impacted field: Let the best professionals handle 10x+ the volume. What that means for all the rest of employees is the question of the century though...
Did a chatbot tell you that? What makes you think it is so?
As others in the thread note, there are plenty of concerns around operational use of AI solutions in the medical space, but radiology has a much larger target painted on it than other practices as a fair portion of the job (but certainly not all!) can boil down to high-skill pattern recognition from visual inputs. The current list of AI-enabled devices going through FDA approval is public, more than 3/4 of the list are targeting radiology use cases: https://www.fda.gov/medical-devices/software-medical-device-...
AI at 50% would be notably worse (also where are you getting that number?)
I worry that rational takes like this end up completely lost in the battle between motivated parties who yell far louder, but have minimal investment in actual outcomes for those who will be depending on these technologies. The debate over self-driving vehicles is another example.
A radiologist should review a scan independently, an AI should review it independently, and then the two results should be combined for review.
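A minimal sketch of that two-reader idea in Python. All names here (`Read`, `combine_reads`) and the confidence threshold are invented for illustration; the point is just that agreement passes through and any disagreement gets escalated to a human:

```python
from dataclasses import dataclass

@dataclass
class Read:
    finding: str          # e.g. "normal" or "suspicious lesion"
    confidence: float     # reader's self-reported confidence, 0..1

def combine_reads(human: Read, ai: Read, conf_floor: float = 0.99) -> str:
    # Agreement at high confidence: accept the shared finding.
    if human.finding == ai.finding and min(human.confidence, ai.confidence) >= conf_floor:
        return human.finding
    # Any disagreement, or low confidence on either side, goes to a
    # second human review: exactly the hard cases where neither reader
    # will have 99% confidence.
    return "escalate to second radiologist"

print(combine_reads(Read("normal", 0.995), Read("normal", 0.999)))
print(combine_reads(Read("normal", 0.99), Read("suspicious lesion", 0.97)))
```

In practice the combination rule would be far more nuanced (per-finding thresholds, structured reports), but the escalate-on-disagreement shape is the core of the proposal.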
Don't we have more radiologists than we did five years ago?
Okay so demand for imaging is up, so we should GET RID of the radiologists? How about we AUGMENT them with AI so that they can do their job better and faster? Why does it need to be either or?
Thinking about this some more: US tax laws really favor income from investment over income from wages. So ideally a co-op member would put something in to join, get a wage, and have an appreciating asset in a tax advantaged account.
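To make the wages-vs-investment point concrete, here is a toy comparison under assumed flat rates (24% marginal income tax plus 7.65% employee payroll vs. a 15% long-term capital gains rate). These numbers are illustrative only; real US tax is bracketed and far more complicated:

```python
amount = 10_000  # hypothetical $10k received either way

# As wages: assumed 24% marginal income tax + 7.65% employee payroll tax.
as_wages = amount * (1 - 0.24 - 0.0765)

# As long-term capital gains: assumed flat 15% rate.
as_ltcg = amount * (1 - 0.15)

print(f"wages: ${as_wages:,.0f}, long-term gains: ${as_ltcg:,.0f}")
```

Under these assumptions the same $10,000 nets roughly $6,835 as wages versus $8,500 as long-term gains, which is the asymmetry a co-op membership stake could exploit.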
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
What's the false negative rate for human beings? And what about women who are considered high risk? Is it better or worse?
Or did they run 5 tests, find zero inaccuracies, and extrapolate to 10,000, but think 0 mistakes was too unbelievable and would give away the game?
Did they test it on only uncomplicated cases, like young healthy people with no deformities? Or did they test it on complex cases too, maybe cases where there are multiple issues and some should be ignored, like the elderly or people with differently shaped bodies?
Also, what is "wrong" here? Is it a false negative, or a false positive? Is it a misdiagnosis? There are levels of wrongness, especially in the medical field.
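One way to make these questions concrete: the quoted "3 in 10,000" reads as P(disease | negative test), and a quick Bayes calculation shows how strongly that figure depends on prevalence. The sensitivity, specificity, and prevalence numbers below are invented for illustration, not taken from the article:

```python
def p_disease_given_negative(prevalence, sensitivity, specificity):
    # Bayes: P(D | neg) = P(neg | D) * P(D) / P(neg)
    p_neg_given_d = 1 - sensitivity                       # false negative rate
    p_neg = p_neg_given_d * prevalence + specificity * (1 - prevalence)
    return p_neg_given_d * prevalence / p_neg

# Assumed low-risk screening population: 0.5% prevalence, 94% sensitivity, 90% specificity.
low_risk = p_disease_given_negative(0.005, 0.94, 0.90)

# The same assumed test in a high-risk group with 5% prevalence.
high_risk = p_disease_given_negative(0.05, 0.94, 0.90)

print(f"low-risk:  {low_risk:.6f}")   # roughly 3 in 10,000
print(f"high-risk: {high_risk:.6f}")  # roughly 10x worse
```

So a "3 in 10,000" figure quoted for low-risk women says very little about the high-risk group, which is exactly why the false-negative vs. false-positive distinction matters.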
Here are my opinions, after a 20-year career as a diagnostic radiologist and 45 years as a hobbyist computer programmer:
1. There are no products currently on the market that can replace a radiologist.
2. If you can't fully and completely replace radiologists, you will still need them around in significant numbers.
3. Because of the infinite variation in human anatomy, physiology, and pathology, it is my opinion that AGI will be required to fully and completely replace radiologists.
4. Once AI is strong enough to replace radiologists, it will be strong enough to replace every other job as well.
5. Based on current RVU compensation models, any cost savings achieved by hospitals replacing radiologists with AI will quickly be lost by reimbursements being adjusted down. There is no way an insurance company will pay the same for an AI interpretation and a human interpretation.
6. There are significant unanswered medicolegal questions that will need to be addressed before AI can operate unsupervised.
In conclusion, I will work as a human radiologist until I retire in 10 years.
The dentist reviewed it and told me that there's just too much variation in how places do fillings, and in the different densities of the filling and the replaced tooth material, for the AI to make good judgements. He didn't think any of my fillings would need replacing; I likely have many more years before they fail.
The NYT ran a story about "AI taking over radiology", where they talked to radiologists at the Mayo Clinic (who have an AI research lab), who flatly told the NYT that no, AI will not be replacing radiologists; the AI is not good enough.
Here is the rub though: the "AI Lab" was doing research using local CNNs with ~30M parameters. Basically 2017 consumer-GPU-tier AI tech.
I don't know yet whether there has been a modern datacenter-scale transformer explicitly pre-trained for medicine/radiology, along with extensive medical/radiology RLHF.
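For a rough sense of the scale gap being described, here is some back-of-the-envelope parameter counting. The CNN layer shapes are invented for illustration, and the transformer figure uses the common ~12 · n_layers · d_model² rule of thumb with GPT-3-class dimensions; only the arithmetic is the point:

```python
def conv_params(in_ch, out_ch, k):
    # 2D conv layer: weights (in * out * k * k) plus one bias per output channel.
    return in_ch * out_ch * k * k + out_ch

# A hypothetical small CNN stack; real ~30M-parameter models repeat blocks like these.
cnn = sum(conv_params(c_in, c_out, 3)
          for c_in, c_out in [(3, 64), (64, 128), (128, 256), (256, 512), (512, 1024)])
cnn += 1024 * 1000  # final classifier head

# Transformer rule of thumb: ~12 * n_layers * d_model^2 (GPT-3-class dimensions).
transformer = 12 * 96 * 12288 ** 2

print(f"CNN-ish: {cnn/1e6:.1f}M params")            # CNN-ish: 7.3M params
print(f"Transformer-ish: {transformer/1e9:.0f}B")   # Transformer-ish: 174B
```

That's roughly four orders of magnitude between "2017 consumer GPU tier" and a frontier datacenter model, which is the gap the comment is pointing at.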
Perhaps they cost a great deal of money?
In my personal experience interacting with the medical system, it’s, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at, and see something quite different. And it’s nearly always the case that a specialist or a reasonable careful non-specialist who is willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.
So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.
[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.
They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.
Are you sure?
"You're right to push back. Upon reinspection, it appears to be something else."
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
Sounds like 3 wrong results per 10,000 is an acceptable level of risk for this CEO. It would be interesting to put radiologists up against AI to see which has better results, but I would still rather have a human read my chart and then have AI give the second opinion, rather than the other way around.
I mean, if I could choose, I think I would prefer to have a human radiologist review AND an AI review. 3/10,000 sounds like a very good rate, but a false negative on a cancer diagnosis is life-threatening, no?