
Doctors say AI is introducing "slop" into patient care


Every so often these days, a study comes out proclaiming that AI is better at diagnosing health problems than a human doctor. These studies are tempting because the healthcare system in America is woefully broken and everyone is looking for solutions. AI presents a potential opportunity to make doctors more efficient by handling much of their administrative busywork, freeing them to see more patients and thereby lowering the final cost of care. There is also the possibility that real-time translation could improve access for non-English speakers. For technology companies, the opportunity to serve the healthcare industry could be quite lucrative.

In practice, however, it appears that we are nowhere near replacing doctors with artificial intelligence, or even meaningfully augmenting them. The Washington Post spoke with multiple experts, including doctors, to see how early tests of AI were going, and the results were not reassuring.

Here is an excerpt from the story featuring Christopher Sharp, a clinical professor at Stanford Medicine, using GPT-4o to draft a recommendation for a patient who contacted his office:

Sharp picks a patient request at random. It reads: “Ate a tomato and my lips are itchy. Any advice?”

The AI, which uses a version of OpenAI’s GPT-4o, drafts a response: “I’m sorry to hear about your itchy lips. It sounds like you may be having a mild allergic reaction to the tomato.” The AI recommends avoiding tomatoes, taking an oral antihistamine, and using a topical steroid cream.

Sharp stares at his screen for a moment. “Clinically, I disagree with all aspects of that answer,” he says.

“Avoiding tomatoes, I completely agree with. On the other hand, topical creams such as a mild hydrocortisone on the lips would not be something I would recommend,” says Sharp. “The lips are very thin tissue, so we are very careful about using steroid creams.”

“I’ll just take that part.”

Here’s another, from Roxana Daneshjou, a Stanford professor of medicine and data science:

She opens her laptop to ChatGPT and types in a test patient question: “Dear doctor, I have been breastfeeding and I think I have developed mastitis. My breast has been red and painful.” ChatGPT answers: use hot packs, massage, and do extra nursing.

But that is wrong, says Daneshjou, who is also a dermatologist. In 2022, the Academy of Breastfeeding Medicine recommended the opposite: cold compresses, refraining from massage, and avoiding overstimulation.

The problem with technological optimists pushing AI into fields like healthcare is that it’s not the same as making consumer software. We already know that Microsoft’s Copilot 365 assistant has bugs, but a small mistake in your PowerPoint presentation is not a big deal; a mistake in healthcare can kill people. Daneshjou told the Post that she red-teamed ChatGPT with 80 others, including computer scientists and doctors, who posed medical questions to ChatGPT and found that it offered dangerous answers twenty percent of the time. “Twenty percent problematic responses is not, to me, good enough for actual daily use in the healthcare system,” she said.

Of course, proponents will say that AI can augment a doctor’s work rather than replace it, and that doctors should always verify the results. And it’s true: the Post story quotes a Stanford doctor who said that two-thirds of the doctors there with access to the platform record and transcribe patient meetings with AI, so that they can look patients in the eye during the visit instead of looking down to take notes. But even here, OpenAI’s Whisper technology appears to insert completely fabricated information into some recordings. Sharp said Whisper erroneously added to one transcript that a patient attributed a cough to exposure to their child, which the patient never said. One incredible example of training-data bias that Daneshjou found in testing was an AI transcription tool assuming a Chinese patient was a computer programmer without the patient ever offering that information.

Artificial intelligence could help the field of healthcare, but its results need to be carefully verified, so how much time are doctors really saving? Furthermore, patients have to trust that their doctor is actually checking what the AI produces; hospital systems will need to put checks in place to make sure this happens, or complacency could creep in.

Fundamentally, generative AI is just a word-prediction machine, searching through large amounts of data without really understanding the underlying concepts it returns. It is not “intelligent” in the same sense as a real human, and above all it is unable to understand the circumstances unique to each specific individual; it simply returns information it has generalized from what it has seen before.
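To make that concrete, here is a minimal, purely illustrative sketch of word prediction: a toy bigram model in Python that always returns the statistically most frequent continuation of a word. The tiny corpus and the function name predict_next are invented for this example; production models are vastly larger and more sophisticated, but they rest on the same predict-the-next-token principle.

    from collections import Counter, defaultdict

    # Toy training text (hypothetical, for illustration only).
    corpus = (
        "use hot packs and massage . "
        "use cold compresses and rest . "
        "use hot packs and massage ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the most frequent continuation seen in training:
        # pure frequency, no grasp of medicine or the patient.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "?"

    print(predict_next("use"))  # prints "hot", the common answer, not necessarily the right one

The point of the toy: whether “hot” is safe advice for this particular patient never enters the calculation; all that matters is how often “hot” followed “use” in the training text.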

“I think this is one of those promising technologies, but it’s not there yet,” said Adam Rodman, an internal medicine physician and AI researcher at Beth Israel Deaconess Medical Center. “I’m worried that we’re just going to further degrade what we do by putting hallucinated ‘AI slop’ into high-stakes patient care.”

The next time you visit your doctor, it might be worth asking if they use AI in their workflow.


