Will bots replace docs? Not any time soon, AI experts say
Hershey, Pa. – With artificial intelligence, predicting a doctor’s decision could be like Hulu predicting what to watch next.
Just as Hulu gives you recommendations based on your watch history, software could sift through medical records and make recommendations to the doctor based on similar cases.
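The article does not describe the underlying method, but one common way to "recommend based on similar cases" is a nearest-neighbors lookup. The sketch below is purely illustrative: the patients, features, and treatments are invented, and the similarity measure (plain Euclidean distance over a few numbers) is a stand-in for whatever a real system would use.

```python
from math import dist
from collections import Counter

# Hypothetical past cases: (feature vector, treatment the physician chose).
# The features are made-up numbers, e.g. [age, systolic BP, A1C].
past_cases = [
    ([54, 130, 6.1], "lifestyle changes"),
    ([61, 145, 7.8], "metformin"),
    ([58, 150, 8.2], "metformin"),
    ([47, 120, 5.9], "lifestyle changes"),
    ([66, 155, 8.0], "metformin"),
]

def recommend(new_patient, k=3):
    """Suggest the most common treatment among the k most similar past cases."""
    nearest = sorted(past_cases, key=lambda case: dist(case[0], new_patient))[:k]
    votes = Counter(treatment for _, treatment in nearest)
    return votes.most_common(1)[0][0]

print(recommend([60, 148, 7.9]))  # prints "metformin"
```

Note that the output is only a suggestion surfaced to the physician, which is exactly the point the sources below keep making: a human still makes the call.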
The computer models are not especially difficult for trained programmers to build, but they raise the question: Who is actually making the decision?
“You have to be careful of the ethics,” said Dr. Lincoln Smith, director of analytic enablement, customer engagement and insights for Highmark. “You’re not taking the physician out of it.”
The potential benefits of artificial intelligence in health care are clear – machines can quickly pore over large volumes of data and learn to detect disease or determine the best courses of treatment. But as AI becomes more powerful and more prevalent, it raises new questions about the roles of man and machine.
Artificial intelligence – a system designed to think like a human and learn from experience – has become an increased focus of health care research.
Earlier this year, the FDA approved the first medical device using AI to detect diabetic retinopathy, a common cause of vision loss for diabetics. Dr. Michael Abramoff of the University of Iowa led the research into the technology, which would help non-specialists like family doctors – who are more likely to encounter diabetes patients – detect the condition.
Using more than 100,000 images of skin disease, researchers at Stanford University trained a computer to classify skin lesions. Their study, published in Nature, found that their system could recognize malignant melanomas and carcinomas just as accurately as trained dermatologists.
Last month, a team from NYU published the results of a study that used an open-source algorithm from Google to analyze images of lung tumors. The AI system could distinguish between two types of lung cancer with 97 percent accuracy and could potentially be applied to other cancer types.
Select Medical, which owns specialty hospitals and outpatient rehabilitation centers, is exploring natural language processing to comb through its lab data. Natural language processing is the ability for machines to interpret written or spoken human language – a fundamental concept for AI-powered voice assistants like Siri or Alexa.
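Select Medical has not published how its system works, but the core idea of natural language processing on lab data can be illustrated with a deliberately tiny sketch: tokenize free-text notes and flag ones containing concerning terms. Everything here is invented for illustration; a real system would use a trained model rather than a hand-written keyword list.

```python
import re

# Toy free-text clinical notes (invented for illustration).
notes = [
    "Patient stable, wound healing well, tolerating oral intake.",
    "Elevated WBC, fever persists, possible infection, recommend IV antibiotics.",
    "No acute distress; cleared for transfer to outpatient rehabilitation.",
]

# A tiny keyword lexicon standing in for a learned model.
FLAG_TERMS = {"fever", "infection", "elevated", "sepsis"}

def tokenize(text):
    """Lowercase a note and split it into word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def needs_review(note):
    """Flag a note if it contains any concerning term."""
    return bool(FLAG_TERMS & tokenize(note))

flagged = [n for n in notes if needs_review(n)]
print(len(flagged))  # prints 1 (only the second note is flagged)
```

Even in this toy form, the system only surfaces notes for a human to review; it does not decide the level of care, which mirrors the point Rusignuolo makes below.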
Brian Rusignuolo, senior vice president and CIO for Select Medical, said the goal is to see whether the AI system can come up with the same conclusion as humans when deciding to transition a patient from one level of care to the next.
He emphasized that with AI there still has to be a trained human making the decision.
“It comes down to choices,” he said. “The AI will make recommendations based on the algorithm, but from the provider standpoint, there’s still a human making a decision based on knowledge. I don’t think we’re at a point of completely autonomous health care.”
At Highmark, Smith wants to use natural language processing to scour clinical and non-clinical review data, which can total 5 million to 10 million pages a year. His team has also been working to develop AI systems to assist with other exhaustive data reviews, such as prior authorizations (deciding whether a procedure will be covered by an insurer) and drug formularies (the lists of prescriptions preferred by an insurance plan).
Smith said we will be “seeing a human in the loop for a long time.” AI systems are only as good as the data that’s fed into them: if the data is biased, the resulting model will be biased, too.
“There’s serious ethics around AI use and we’re just now coming to terms with that,” he said.
Craig Limoli is the founder and CEO of Wellsheet, a New York-based startup that works with health care providers to integrate AI into medical records systems. Its projects include one with Robert Wood Johnson Barnabas Health to help keep congestive heart failure patients on track with their treatment plans.
Limoli said the system can “accelerate the physician’s ability to make the right decision.” He also emphasized that humans will continue to be involved in the decision process, especially because AI systems can exhibit what is known as the “black box problem.”
“They might reach conclusions that are not readily explainable. If an algorithm determines a particular course of treatment, the doctor has to explain, ‘We recommend this course of treatment.’ When the patient says, ‘Why?’ the doctor says, ‘The AI told me.’”
The use of AI in health care is only expected to grow, particularly as providers discover the potential cost savings – estimated to be $150 billion by 2025, according to one industry consultant.
In light of this, researchers are acknowledging ethical questions like whether AI systems will replace doctors. A group of Hungarian scientists looking at health care staffing shortages addressed this exact question in a recently published paper.
“AI is not meant to replace caregivers, but those who use AI will probably replace those who don’t,” they wrote. “And it is possible to prepare for that.”
This blog post was a writing assignment for my science writing class at Penn State based on a health care innovation event at Penn State College of Medicine.