Can AI Save Lives? Only If We Let It
Despite artificial intelligence’s great promise, health-care channel partners face slow AI adoption.
Late one evening, Gunjan Bhardwaj got a phone call from his friend and mentor — a call that would change his career. His friend had cancer. Not long after, Bhardwaj spent days in the hospital, as his friend underwent painful chemotherapy.
As a former consultant at Ernst & Young and Boston Consulting Group, Bhardwaj understood the flow of information in the Internet age. He knew artificial intelligence (AI) and machine learning were capable of digesting large data sets and arriving at powerful connections and insights. What Bhardwaj didn’t understand was why the health-care information that could have helped his friend wasn’t available.
“Everyone wanted to know alternative therapies, what other physicians to talk to, studies going on elsewhere in the world, all of which could have helped create a targeted therapy,” Bhardwaj says. “For the first time in my life, I saw the hopelessness and helplessness.”
His friend survived, thankfully; yet the feeling of helplessness stuck with Bhardwaj, so he started a company called Innoplexus, which democratizes data and deploys machine learning and AI to help researchers develop cures more quickly and doctors identify diseases and treatments faster.
Like many startups, tech giants and integrators, Innoplexus wants to bring AI to health care even as the industry shows cultural resistance. That resistance is unfortunate, too. Health care is fertile ground for AI applications, given the industry’s data explosion and thirst for innovation. Most of all, AI carries the promise of hope to people who desperately need it.
“This is really hard, it’s really humbling, and it’s really complicated,” former Alphabet Executive Chairman Eric Schmidt said at Healthcare Information and Management Systems Society’s (HIMSS) conference in Las Vegas last week. “But if we all work together, we can really save lives at a scale that is unimaginable, because of the impact of these technologies.”
Is AI at a Tipping Point?
AI can impact health care in all sorts of ways.
“It touches everything,” Schmidt says.
For instance, AI is making great strides in imaging, particularly in radiology. It can free up doctors and nurses who currently spend two-thirds of their time doing administrative work and data entry. At Innoplexus, AI crawls, aggregates, analyzes and visualizes data.
AI also works alongside physicians.
“The most common examples of how AI is used in health care right now are to help with better decision making for doctors,” says Kaveh Safavi, M.D., J.D., global head of Accenture Health. “The focus is most commonly on cancer, because of the complexity of the diagnosis and the amount of information that needs to be managed. It’s also used in behavioral health; how to infer if a patient is at risk for, say, suicide.”
A lot of people want AI in health care, too.
A recent Accenture report found that nearly one in five consumers has used health services that are powered by AI, such as virtual clinicians and home-based diagnostics. They like the AI-powered virtual doctor because it’s always available, lets them avoid a trip to the hospital, and assesses vast amounts of relevant information.
Tech platform vendors, independent software vendors and integrators haven’t missed the signs, either.
“When walking the floor and talking to people at HIMSS, I was struck by how much the industry focus has moved from operating efficiencies and cost savings to a much bigger focus on patient outcomes and patient experience,” says Neeracha Taychakhoonavudh, senior vice president of partner and industry innovation at Salesforce.
At HIMSS, Salesforce unveiled new features for its Health Cloud, such as a tool for monitoring and closing gaps in a patient’s health treatment plan, mainly through mobile devices. Accenture, Deloitte and PwC are top channel partners for Health Cloud. So are smaller integration partners such as Jitterbit and MuleSoft, which provide services for integrating Health Cloud with electronic health-record providers.
Moreover, consumers are showing that they are willing to wear technology to track their fitness, lifestyle and vital signs, according to the Accenture report. Use of wearables has more than tripled since 2014, from 9 percent to 33 percent. Nearly half of health-care consumers are using mobile apps, compared to just 16 percent in 2014. Wearables and mobile devices are looking more like health-care devices every day.
“A decade from now, health care and health-care experiences will be much less dependent on physical location,” Safavi says. “Rather than going to a place, you get care wherever and whenever you want it. Self-service becomes a bigger part through mobile, wearables, virtual health care, and patients helping each other through social platforms.”
While many consumers and most of the major tech providers seem to be on board with AI, the problem is that most health-care providers lag behind.
In a recent survey of almost 200 health-care IT leaders conducted by TEKsystems, an IT staffing and services company, half of the respondents had nothing on the planning horizon with AI, while the other half were in various phases of evaluation and implementation. Health-care providers are hindered by tough regulatory requirements, a high security standard and data management challenges.
“What they’re saying is, ‘Yeah, we understand it, but we also understand that security, interoperability, data enablement are critical components to this,’” says Ben Flock, chief healthcare strategist at TEKsystems. “If you really don’t have a good data strategy, AI projects are really tough to achieve from an IT perspective.”
All of this puts AI at a kind of tipping point. Technologies and applications that have the potential to shape health care in the coming years, such as connected health and telemedicine, are being delayed, TEKsystems says.
Health-care providers should take these reports seriously.
“On a broader social context, if our health-care system fails to keep up with citizens’ expectations, then we run the risk of public backlash,” Safavi says. “Think about dissatisfaction expressed through voting actions, pressure on regulations, and general lack of willingness to provide financial resources for the health-care system.”
AI Researchers Make Strides
Until more health-care providers open up to AI, much of the AI magic will continue to happen deep inside research labs.
Last fall, for instance, Stanford University researchers developed a machine-learning algorithm that can diagnose pneumonia better than radiologists. The algorithm makes its diagnosis based on chest X-ray images after learning from hundreds of thousands of diagnoses.
A machine-learning algorithm’s ability to learn by optimizing an objective function – in this case, accurately flagging pneumonia – across hundreds of thousands of examples is beyond anything a human can do.
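Under the hood, that kind of system comes down to a network minimizing a loss (the objective function) over labeled examples. The sketch below is a hedged, minimal illustration of that recipe rather than the Stanford model itself; the DenseNet backbone, the random stand-in data and every hyperparameter here are assumptions chosen for brevity.

```python
# Illustrative sketch only, not the Stanford model: random stand-in images
# and labels replace the hundreds of thousands of labeled chest X-rays such
# a system actually learns from.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

images = torch.randn(64, 3, 224, 224)          # fake "X-rays"
labels = torch.randint(0, 2, (64,)).float()    # 1 = pneumonia, 0 = clear
loader = DataLoader(TensorDataset(images, labels), batch_size=16)

model = models.densenet121(num_classes=1)      # CNN backbone (an assumption)
objective = nn.BCEWithLogitsLoss()             # the "objective function"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for x, y in loader:
    optimizer.zero_grad()
    loss = objective(model(x).squeeze(1), y)   # how far predictions are from diagnoses
    loss.backward()                            # work out how to adjust the weights
    optimizer.step()                           # adjust them to shrink the error
```

In practice the objective never changes; only the scale of the data and the network do, which is why the same recipe that sorts photos can, with enough labeled studies, flag disease.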
Witness the awesome power of AI: Alphabet-owned AI research company DeepMind built a machine-learning engine, called AlphaZero, that learned chess by playing 44 million games against itself in nine hours, surpassing the world-champion chess program Stockfish after just four hours of training.
Despite AI’s advances in imaging and radiology, “it’s still very much a research focused effort today,” says Kim Garriott, principal consultant of healthcare strategies at Logicalis Healthcare Solutions, an arm of Logicalis US, an IT solutions and managed service provider.
Another example of AI researchers making inroads in health care comes from Alphabet. At HIMSS, Schmidt talked about Alphabet recently publishing a paper on AI that looks at retinal images – the retinal fundus gives a view into a person’s vascular system – and predicts heart-risk factors, such as age, A1C and blood pressure, more accurately than the doctors did in the trials.
“Imagine a situation where we take that result, which is just the beginning of the result, and somewhere some hospital runs an appropriate multiyear trial where they actually do retinal images and train against what the doctors do,” Schmidt says. “The first thing they do is take the retinal image and say, ‘We see the path of your disease now, and we know exactly when we’re going to have to intervene unless you do A, B or C.’ That’s the kind of model that’s predictive and now possible.”
Why Isn’t This Happening Faster?
Not all AI researchers got it right.
In the 1990s, Rich Caruana, a Ph.D. student at Carnegie Mellon University, was asked to train a neural net to evaluate risks for patients with pneumonia, the New York Times reports. The neural net wrongly surmised that asthmatic patients with pneumonia fared better than other patients and would’ve told doctors to send them home — even though these patients are in a high-risk category. The pattern showed up in the data only because those high-risk patients were routinely admitted straight to intensive care, where aggressive treatment improved their outcomes.
Caruana said he could have fixed the problem, but when someone from the University of Pittsburgh Medical Center wanted to use the algorithm, he declined.
“I said we don’t understand what it does inside,” Caruana told the New York Times. “I said I was afraid.”
Herein lies the rub.
Especially in health care, the biggest AI stumbling block is trust. There’s lack of trust in AI’s black box: How does AI arrive at its decisions when no one can inspect what happens inside the model? There’s lack of trust in the data: Was there enough data to train on to make reliable judgments about a person’s health? There’s lack of trust in the people creating AI: Did human bias creep into the data? There’s lack of trust about a company’s motives: Did the AI come from a reliable source?
“There’s a low level of trust that needs to be established, where organizations are very transparent about their motivations and rules,” Accenture’s Safavi says. “If someone offered an AI agent to help a person make decisions about their health care, and it was from an insurance company, maybe that person is skeptical about the motives.”
In the Accenture survey, one out of four respondents who wouldn’t use an AI-powered virtual doctor says it’s because they don’t understand enough about how AI works.
On the flip side, how good does AI have to be? Consider AI that helps a doctor make a diagnosis or a patient understand their condition. If the AI is like the worst doctor, is it still good enough? Or does AI have to be flawless before people are willing to let it fulfill its promises and take a greater role in their health care?
“As a society, we struggle with those questions, because we didn’t know we would have to answer them,” Safavi says. “The way we answer them will have an effect on the speed of adoption.”