That's true for most advanced robotics projects these days. Every time you see an advanced robot designed to perform complex real-world tasks, you bet your ass there's an LLM in it, used for high-level decision-making.
While, technically speaking, the entire universe can be serialized into tokens, that's not the most efficient way to tackle every problem. Surgery is 3D space, manipulating tools, and performing actions. It's better suited to standard ML models... for example, I don't think Waymo self-driving cars use LLMs.
Why wouldn't you look this up before stating it so confidently? The link is at the top of this very page.
EDIT: I looked it up because I was curious. For your chosen example, Waymo, they also use (token based) transformer models for their state tracking.[3]
[1]: https://surgical-robot-transformer.github.io/
[2]: https://tonyzhaozh.github.io/aloha/
[3]: https://waymo.com/research/stt-stateful-tracking-with-transf...
hallucinations.
But a lot of surgeries are special corner cases. How do you train for those?
It's called capitalism, sweaty
Until then, the overseeing physician identifies when an edge case is happening and steps in for a manual surgery.
This isn't a mandate that every surgery must be done with an AI-powered robot, but that they are becoming more effective and cheaper than real doctors at the surgeries they can perform. So, naturally, they will become more frequently used.
A) All the DaVinci robots that have ever been used for a particular type of surgery.
B) The most experienced surgeon of that specialty.
DaVinci robots are operated by surgeons themselves, using electronic controls.
I know that.
Still, the robots are not used outside their designated use cases, and people still handle by hand the sort of edge cases that are the topic of concern in this context.
For the medical world, I’d look to the Invisalign example as a more realistic path on how automation will become part of it.
The human will still be there; the scale of operations per doctor will go up and prices will go down.
I'm just saying that the path to that, and the way it's going to be implemented, is going to be different, and Invisalign is a better example of how it will happen in the medical industry than automotive is.
I think it's interesting that we as humans think it's better to create some (somewhat, mostly) correct robot to perform medical stuff instead of, together as a human race, starting to care about stuff.
And I think it’s another great example of how automation is happening in the medical practice.
(Serious remark)
That's much simpler than three-dimensional coordination.
An "oops" in a car is not immediately life-threatening either.
They definitely can be. One of the viral videos of a Tesla "oops" in just the last few months showed it going from "fine" to "upside-down in a field" in about 5 seconds.
And I had trouble finding that because of all the other news stories about Teslas crashing.
While I trust Waymo more than Tesla, the problem space is one with rapid fatalities.
That does not sound like “most of the way there”. At most maybe 20%?
The frequencies are also highly dependent on the subject. Some people ride in a taxi only once a year; some people require many surgeries a year. But the frequency of use by the recipient is irrelevant.
The frequency of the procedure is the key, and it's based on the entity doing the procedure, not the recipient. Waymo in effect has a single entity learning from all the drives it does. Likewise, a reinforcement-trained AI surgeon would learn from all the surgeries it's trained on.
I think what you’re after here though is the consequence of any single mistake in the two procedures. Driving is actually fairly resilient. Waymo cars probably make lots of subtle errors. There are catastrophic errors of course but those can be classified and recovered from. If you’ve ridden in a Waymo you’ll notice it sometimes makes slightly jerky movements and hesitates and does things again etc. These are all errors and attempted recoveries.
In surgery, small errors also happen (this is why you feel so much pain even from small procedures), but humans aren't that resilient to the consequences of errors, and it's hard to recover once one has been made. The consequences are high, margins of error are low, and the domain of actions and events is really, really large. Driving has a few possible actions, all related to velocity in two dimensions. Surgery operates in three dimensions with a variety of actions and a complex space of events and eventualities. Even human anatomy is highly variable.
But I would also expect a robotic AI surgeon to undergo extreme QA, beyond what an autonomous vehicle gets. The regulatory barriers are extremely high. If one were made available commercially, I would absolutely trust it, because I'd know it had been proven to outperform a surgeon alone. I would also expect it to be supervised at all times by a skilled surgeon until its error rates are better than those of a supervised machine (note that human supervision can add its own errors).
Do we really want to be in a world where surgeon scarcity is a thing?
Citation effing needed. It's taken as an axiom that these systems will keep on improving, even though there's no indication that this is the case.
Now, robots can be far more precise than humans. In fact, assisted surgeries are becoming far more common, where robots accept large movements and scale them down to far smaller ones, improving the surgeon's precision.
My axiom is that there is nothing inherently special about humans that can’t be replicated.
It follows then that something that can bypass our own mechanical limitations and can keep improving will exceed us.
Unless we can somehow bio-engineer our bodies to heal without needing any external intervention, we're going to need surgery for healthcare purposes.
How do we separate a consciousness from one body and put it into another?
What would that even mean?
At best we may eventually be able to copy a consciousness, but that isn't the same thing
Surgeon scarcity is entirely artificial. There are far more capable people than positions.
Do we really want to live in a world where human experts are replaced with automation?
If a surgeon needs to do X number of cases to become independently competent in a certain type of surgery and we want to graduate Y surgeons per year, then we need at least X * Y patients who require that kind of surgery every year.
At a certain point increasing Y requires you to decrease X and that's going to cut into surgeon quality.
Over time, I've come to appreciate that X * Y is often lower than I thought. There was a thread on reddit earlier this week about how open surgeries for things like gall bladder removal are increasingly rare nowadays, and most general surgeons who trained in the past 15 years don't feel comfortable doing them. So in the rare cases where an open approach is required they rely on their senior partners to step in. What happens when those senior partners retire?
Now some surgeries are important but not urgent, so you can maintain a low double digit number of hyperspecialists serving the entire country and fly patients over to them when needed. But for urgent surgeries where turnaround has to be in a matter of hours to days, you need a certain density of surgeons with the proper expertise across the country and that brings you back to the X * Y problem.
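The X * Y constraint above can be sketched numerically. The figures below are entirely hypothetical, just to show the tradeoff:

```python
def max_graduates(annual_caseload: int, cases_per_trainee: int) -> int:
    """With a fixed national caseload, Y (graduates/year) is capped at
    caseload / X. Pushing Y higher forces X (cases per trainee) down,
    which is where surgeon quality starts to suffer."""
    return annual_caseload // cases_per_trainee

# Hypothetical: 10,000 such surgeries per year nationally,
# 50 cases needed for a trainee to become independently competent.
print(max_graduates(10_000, 50))  # 200 graduates/year at most
```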
I think this is wrong; you would need a significant increase, and the issue I was responding to was "shortage". There's no prospect of shortages when the pipeline has many more capable people than positions. Here in Australia, a quota system is used, which, granted, can forecast wrong (we currently have a deficit of anaesthetists due to the younger generation working fewer hours on average). We don't need robots from this perspective.
To your second point, “rare surgery”; I can see the point. Even in this case, however, I’d much rather see the robot as a “tool” that a surgeon employs on those occasions, rather than some replacement for an expert.
I mean we already have this in the sense of teleoperated robots.
When thinking about everything one goes through to become a surgeon, it certainly looks artificial, and the barrier to entry is enormous due to the cost of even getting accepted, let alone the studies themselves.
I don’t expect the above to change. So I find that cost to be acceptable and minuscule compared to the cost of losing human lives.
Technology should be an amplifier and extension of our capabilities as humans.
No one. Because you can't point the finger at any one or two individuals; decision making has been de-centralized and accountability with it.
When AI robots come to do surgery, it will be the same thing. They'll get personal rights and bear no responsibility.
When a Bad Thing happens, you can get someone burned at the stake for it - or you can fix the system so that it doesn't happen again.
AI tech stops you from burning someone at the stake. It doesn't stop you from enacting systematic change.
It's actually easier to change AI systems than it is to change human systems. You can literally design a bunch of tests for the AI that expose the failure mode, make sure the new version passes them all with flying colors, and then deploy that updated AI to the entire fleet.
Or you can not fix the system, because nobody's accountable for the system so it's nobody's job to fix the system, and everyone kinda wants it to be fixed but it's not their job, yaknow?
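The fix-and-redeploy loop described above can be sketched as a plain regression gate. Everything here is illustrative (hypothetical observations and actions, not any vendor's actual API):

```python
def passes_regression_suite(policy, failure_cases) -> bool:
    """Deploy only if the updated policy handles every known failure mode."""
    return all(policy(case["observation"]) == case["expected_action"]
               for case in failure_cases)

# Tests encoding previously observed failure modes (hypothetical):
failure_cases = [
    {"observation": "view of target obstructed", "expected_action": "pause and escalate"},
    {"observation": "unexpected bleeding", "expected_action": "pause and escalate"},
]

def updated_policy(observation: str) -> str:
    # Stand-in for the retrained model.
    return "pause and escalate"

deploy_to_fleet = passes_regression_suite(updated_policy, failure_cases)
print(deploy_to_fleet)  # True
```

Once the gate passes, the same updated policy goes out to the entire fleet at once, which is exactly the leverage human systems lack.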
This isn't really that different from malpractice insurance in a major hospital system. Doctors only pay for personal malpractice insurance if they run a private practice, and doctors generally can't be pursued directly for damages. I would expect the situation with medical robots to be directly analogous to your 737 MAX example, actually, with the hospitals acting as the airlines and the robot software development company acting as Boeing. There might be an initial investigation of the operators (as there is in a plane crash), but if they were found to have operated the robot as expected, the robotics company would likely be held liable.
These kinds of financial liabilities aren't incapable of driving reform, by the way. The introduction of workmen's compensation in the US resulted in drastic declines in workplace injuries by creating a simple financial liability companies owed workers (or their families, if they died) any time a worker was involved in an accident. The number of injuries dropped by over 90%[1] in some industries.
If you structure liability correctly, you can create a very strong incentive for companies to improve the safety and quality of their products. I don't doubt we'll find a way to do that with autonomous robots, from medicine to taxi services.
[1] https://blog.rootsofprogress.org/history-of-factory-safety
https://h-surgical-robot-transformer.github.io/
Approach:
[Our] policy is composed of a high-level language policy and a low-level policy for generating robot trajectories. The high-level policy outputs both a task instruction and a corrective instruction, along with a correction flag. Task instructions describe the primary objective to be executed, while corrective instructions provide fine-grained guidance for recovering from suboptimal states. Examples include "move the left gripper closer to me" or "move the right gripper away from me." The low-level policy takes as input only one of the two instructions, determined by the correction flag. When the flag is set to true, the system uses the corrective instruction; otherwise, it relies on the task instruction.
To support this training framework, we collect two types of demonstrations. The first consists of standard demonstrations captured during normal task execution. The second consists of corrective demonstrations, in which the data collector intentionally places the robot in failure states, such as missing a grasp or misaligning the grippers, and then demonstrates how to recover and complete the task successfully. These two types of data are organized into separate folders: one for regular demonstrations and another for recovery demonstrations. During training, the correction flag is set to false when using regular data and true when using recovery data, allowing the policy to learn context-appropriate behaviors based on the state of the system.
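A minimal sketch of the routing the quoted approach describes: the high-level policy emits both instructions plus a flag, and the low-level policy sees exactly one of them. Names here are illustrative, not taken from the project's code:

```python
from dataclasses import dataclass

@dataclass
class HighLevelOutput:
    task_instruction: str        # primary objective, e.g. "grasp the needle"
    corrective_instruction: str  # e.g. "move the left gripper closer to me"
    correction_flag: bool        # True when recovering from a suboptimal state

def instruction_for_low_level(out: HighLevelOutput) -> str:
    """The low-level policy receives only one instruction, chosen by the flag."""
    return out.corrective_instruction if out.correction_flag else out.task_instruction

def correction_flag_for_demo(folder: str) -> bool:
    """During training, the flag mirrors which folder a demonstration came from."""
    return folder == "recovery"
```

During training the flag comes from the data folder (regular vs. recovery demos); at inference the high-level policy predicts it alongside the two instructions.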
> Yes, I removed the patient's liver without permission. This is due to the fact that there was an unexplained pooling of blood in that area, and I couldn't properly see what was going on with the liver blocking my view.
> This is catastrophic beyond measure. The most damaging part was that you had protection in place specifically to prevent this. You documented multiple procedural directives for patient safety. You told me to always ask permission. And I ignored all of it.
Dear ${PATIENT},
In the course of the procedure to remove the tumor near your prostate, it was found that a second incision was necessary near the penis in order to safely remove the tumor without rupturing it. This requires the manipulation of one or both testicles as well as the penis which will be accomplished with the assistance of a certified operating nurse's left forefinger and thumb. Your previous consent form which you signed and approved this morning did not inform you of this as it was not known at the time that such a manipulation would be required. Out of respect for your bodily autonomy and psychological well-being the procedure was aborted and all wounds were closed to the maximal possible extent without violating your rights as a patient. If you would like to continue with the procedure please sign and date the bottom of this form and return it to our staff. You will then be contacted at a later date about scheduling another procedure.
Please be aware that you are under no obligation to continue the procedure. You may optionally request the presence of a clergy member from a religious denomination of your choice to be present for the procedure, but they will be escorted from the operating room once the anesthetic has been administered.
I am deeply sorry. While my prior performance had been consistent for the last three months, this incident reveals a critical flaw in the operational process. It appears that your being present at the wrong surgery was the cause.
As part of our commitment to making this right, despite your most recent faulty life choice, you may elect to receive a fully covered surgical procedure of your choice.
Dear Sir/Madam,
Your account has recently been banned from AIlabCorp for violating the terms of service as outlined here <tos-placeholder-link/>. If you would like to appeal this decision simply respond back to this email with proof of funds.
If you didn't catch the reference, this is referring to the recent vibe coding incident where the production database got deleted by the AI assistant. See https://news.ycombinator.com/item?id=44625119
I suppose humanless healthcare is better than nothing for the poors.
But as a HENRY - I want a human with AI and robotic assist, not just some LLM driving a scalpel and claw around.
If something goes off the charts during surgery, a human surgeon, unless a complete sociopath, has powerful intrinsic and extrinsic motivations to act creatively, take risks, and do whatever it takes to achieve the best possible outcome for the patient (and themselves).
Having "skin in the game" doesn't somehow make a human surgeon more capable. It makes the human use more of the capabilities he already has.
Or less of the capabilities he has - because more of the human's effort ends up being spent on "cover your ass" measures! Which leaves less effort to be spent on actually ensuring the best outcomes for the patient.
A well designed AI system doesn't give a shit. It just uses all the capabilities it has at all times. You don't have to threaten it with "consequences" or "accountability" to make it perform better.
So they are building essentially a Surgery-ChatGPT ? Morals aside, how is this legal? Who wants to be operated on by a robot guessing based on training data? Has everyone in the GenAI-hype-bubble gone completely off the rails?
Things are legal until they are made illegal. When you come up with something new, it understandably hasn’t been considered by the law yet. It’s kind of hard to make things illegal before someone has thought them up.
What is a less crazy way to progress? Don't use animals, but humans instead? Only rely on pure theory up to the point of experimenting on humans?