The artificial intelligence boom is already making its way into medicine in the form of AI-generated visit summaries and analyses of patient conditions. Now, new research demonstrates how AI training techniques similar to those used for ChatGPT could be used to train surgical robots to operate on their own.
Researchers at Johns Hopkins University and Stanford University built a training model using video recordings of human-controlled robotic arms performing surgical tasks. By teaching the robot to imitate the actions it sees in video, the researchers believe they can reduce the need to program each individual movement required for a procedure. From the Washington Post:
Robots have learned to manipulate needles, tie knots and suture wounds on their own. Furthermore, the trained robots went beyond mere imitation, correcting their own slip-ups without being told (picking up a dropped needle, for example). Scientists have already begun the next stage of work: combining all of the different skills in full surgeries performed on animal cadavers.
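The report doesn't include the team's code, but for readers curious how learning from demonstrations works in principle, here is a minimal, hypothetical sketch of behavioral cloning: a small policy network learns to map per-frame visual features to robot-arm actions by regressing on recorded human demonstrations. Everything here is an illustrative assumption, including the feature and action dimensions, the network shape, and the synthetic stand-in data; the researchers' actual system reportedly uses ChatGPT-style techniques that this toy example does not replicate.

```python
# Hypothetical behavioral-cloning sketch (not the researchers' code):
# a policy network imitates logged human demonstrations by mapping
# video-frame features to robot-arm actions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM = 512   # assumed size of a per-frame visual embedding
ACT_DIM = 7     # assumed action: 6-DoF arm pose delta + gripper command

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs):
        return self.net(obs)

# Synthetic stand-in data; a real pipeline would pair features extracted
# from surgical video with the actions logged from the human controller.
obs = torch.randn(10_000, OBS_DIM)
actions = torch.randn(10_000, ACT_DIM)
loader = DataLoader(TensorDataset(obs, actions), batch_size=64, shuffle=True)

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(5):
    for batch_obs, batch_act in loader:
        # Supervised imitation: minimize error between the policy's
        # predicted action and the demonstrated action.
        loss = nn.functional.mse_loss(policy(batch_obs), batch_act)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The appeal of this setup is exactly what the researchers describe: instead of hand-programming every movement, the robot's behavior is distilled from recordings of skilled human operators.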
To be sure, robotics have been used in the operating room for years. In 2018, the viral "surgery on a grape" meme highlighted how robotic arms can assist with procedures, providing a high level of accuracy, and approximately 876,000 robot-assisted surgeries were performed in 2020. Robotic instruments can reach places in the body and perform tasks where a surgeon's hand would never fit, and they do not suffer from tremors. Slim, precise tools can help avoid nerve damage. But the robots are typically driven manually by a surgeon with a controller; the surgeon is still in charge.
The concern among skeptics of more autonomous robots is that AI models like ChatGPT aren't "intelligent"; they simply mimic what they have already seen and don't understand the underlying concepts they're dealing with. The endless variety of pathologies across an incalculable variety of human bodies then poses a challenge: what if the AI model hasn't seen a specific scenario before? Something can go wrong during surgery in a fraction of a second, and what if the AI isn't trained to respond?
At a minimum, autonomous robots used in surgeries would have to be approved by the Food and Drug Administration. In other cases, where doctors use AI to summarize their patient visits and make recommendations, FDA approval is not required because the doctor is technically supposed to review and sign off on anything the AI produces. That's worrying, because there is already evidence that AI bots give bad advice, or hallucinate and insert information into meeting transcripts that was never spoken. How many times will a tired, overwhelmed doctor rubber-stamp whatever an AI produces without looking closely?
It feels reminiscent of recent reports about how soldiers in Israel have relied on AI to identify attack targets without scrutinizing the information very closely. "Soldiers who were poorly trained in using the technology attacked human targets without corroborating the (AI's) predictions at all," a Washington Post story read. "At times the only corroboration needed was that the target was a male." Things can go wrong when people become complacent and aren't sufficiently in the loop.
Health care is another field where the stakes are high – certainly higher than in the consumer market. If Gmail summarizes an email incorrectly, it's not the end of the world. An AI system incorrectly diagnosing a health problem, or making a mistake during surgery, is a much more serious matter. Who is responsible in that case? The Post interviewed Dipen Parekh, director of robotic surgery at the University of Miami, and this is what he had to say:
"The stakes are so high," he said, "because this is a matter of life and death." The anatomy of each patient differs, as does the way a disease behaves in each patient.
"I look at (the images from) CT and MRI scans and then do the surgery," controlling the robotic arms, Parekh said. "If you want the robot to do the surgery itself, it will have to understand all the images, like reading the CT and MRI scans." In addition, robots will need to learn how to perform keyhole, or laparoscopic, surgery, which uses very small incisions.
The idea that AI will ever be infallible is hard to take seriously when no technology has ever been perfect. This autonomous technology is certainly interesting from a research perspective, but the fallout from a botched surgery performed by an autonomous robot would be monumental. Who gets punished when something goes wrong? Whose medical license gets revoked? Humans are not infallible either, but at least patients have the peace of mind of knowing their surgeon has spent years in training and can be held accountable if something goes wrong. AI models are crude simulacra of humans, sometimes behave unpredictably, and have no moral compass.
If doctors are tired and overworked – a reason the researchers suggest this technology could be valuable – perhaps the systemic problems causing the shortage should be addressed instead. It has been widely reported that the United States is experiencing an extreme shortage of doctors due to the growing inaccessibility of the field. The country is on track to face a shortage of 10,000 to 20,000 surgeons by 2036, according to the Association of American Medical Colleges.