
DHIS 2019: Actualizing the Potential of Artificial Intelligence

Artificial intelligence (AI) is a hot topic in healthcare, and many investors, vendors, and providers are hopeful that AI solutions will gain traction in the industry and help revolutionize providers’ work and patients’ care. However, many players in the healthcare industry are still wondering what AI looks like in healthcare, what the challenges are with implementing AI, and whether it’s really worth the hype.

At DHIS, we were excited to have Geoff Gordon of UAB Medicine, Jason Wiesner of Sutter Health, and Nathan Patrick Taylor of Symphony Post Acute Network join together on a panel to further discuss AI and its potential.

What is AI?

The moderator of the panel, Chad Konchak of NorthShore University HealthSystem, opened up the discussion by asking the panelists to define AI.

“If I can replace a task with an automated tool, I call that AI,” Taylor said. “Here is an example: we have a detailed process for getting records from a hospital. The records come over as PDFs, Word documents, and text files, and then our team copies and pastes everything into the right place.”

Taylor continued, “We set a goal to automate that process. In order to do that, we needed a machine-learning algorithm to determine what kind of document the record was in, and we also needed a tool to physically move the document to the right spot. Those tools automated the process.”
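As a rough illustration of the kind of pipeline Taylor describes, here is a minimal sketch in Python. The document categories, keywords, and folder layout are hypothetical, and a simple keyword score stands in for the machine-learning classifier his team actually built:

```python
from pathlib import Path

# Hypothetical record categories and keyword cues. A real system would use
# a trained text classifier here rather than raw keyword counts.
KEYWORDS = {
    "discharge_summary": ["discharge", "follow-up", "disposition"],
    "lab_report": ["hemoglobin", "specimen", "reference range"],
    "progress_note": ["subjective", "assessment", "plan"],
}

def classify_document(text: str) -> str:
    """Pick the category whose keywords appear most often in the text."""
    scores = {
        category: sum(text.lower().count(word) for word in words)
        for category, words in KEYWORDS.items()
    }
    return max(scores, key=scores.get)

def route_document(filename: str, text: str, inbox: Path) -> Path:
    """Return the destination path for a record based on its predicted type."""
    category = classify_document(text)
    return inbox / category / filename
```

For example, a file whose text reads "Specimen collected; hemoglobin within reference range" would be classified as a `lab_report` and routed to that folder, replacing the copy-and-paste step Taylor's team started with.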

Gordon added that AI consists of tools that help with signal detection. “A lot of AI is finding the needle in the haystack, or finding the one patient who has one specific condition.”

Is Healthcare Too Complicated for AI?

Konchak then went on to explain that AI solutions need some kind of cognitive learning component so that they can take in data and make sense of it. Other industries have already adopted this component, but healthcare is behind in terms of adoption. The panelists responded by explaining why they think healthcare lags behind and why it is a more complicated industry than others.

Gordon responded by saying that healthcare has a lot of variables, and that is what makes AI solutions hard to adopt. “AI algorithms function better when there are fewer variables of significance, and we don’t have that luxury in healthcare,” he said. “If you’re trying to build a list of high-risk diabetic patients, you have to take a lot of things into account in the algorithm. That is complicated, especially if the data isn’t all that good.”

Wiesner added that money is a challenge when it comes to making AI solutions work for an organization. “One challenge is that AI tools that interact with the physician at the time of diagnosis require FDA approval, and most companies don’t have the dollars to tackle that problem. We are a big organization, and we have made a big investment in data. So we are looking for tools that can handle our big organization.”

In addition, Wiesner said that big organizations need AI solutions that will integrate with their core systems. “We want solutions that can affect a large population and work with our platform already, so integration is key. AI solutions are popular now because of the state of the technology, but those solutions must integrate. I don’t think that problem has really been addressed yet. Some vendors are now going for a platform approach where algorithm developers can plug algorithms into the platform, and that approach resonates with me because I’m not going to integrate with 50 different solutions.”

What Is Important to Look for in a Vendor?

The panelists were also asked how they have gone about engaging with vendors for AI solutions. The panelists agreed that partnership is a big priority when it comes to engaging with vendors, especially because using AI tools often means treading new ground.

Wiesner said, “Any vendor has to be our partner. I want to be interacting with the CTO of the company, and I want to know about how the solution can work with our workflow. I don’t want a vendor to tell me about the bells and whistles from the get-go. Any vendor that is just smoke and mirrors and doesn’t actually execute on things gets eliminated from our process pretty quickly.”

Taylor explained that when his organization selected a vendor for its machine-learning platform, he and his team really needed to have their hands held throughout the process. Fortunately, people on the vendor side helped them with that, and it made a huge difference.

How Have You Tried to Retain Value from Your AI Solutions?

One audience member brought up the topic of value—how far had the panelists’ organizations gone to create and retain value in their organizations versus passing along that value elsewhere?

“That’s a double-edged sword,” Gordon replied. “You don’t want to develop an algorithm just to drive all your patients to an ancillary service. But you don’t want to be perceived as self-serving either. That is a tricky conversation to have internally. We just have to ask ourselves what is best for our patients and what things will actually help us be preventative.”

Taylor added that one of his organization’s key mistakes in terms of value was developing an algorithm for predicting readmissions. “I soon realized that the algorithm didn’t really predict anything—it just told me what would most likely lead to a readmission. We never got into the ROI, and that was a mistake.”

In Taylor’s situation, the hospitals were benefiting from the tool, but Taylor and his organization were not. “We had to go back to the hospitals and tell them that with the algorithm, we probably wouldn’t have fewer readmissions, but the tool would allow us to collaborate better on how to treat patients.” Now, with the tool, Taylor and his organization do the best they can to get ahead of what issues they think could happen with a patient.

What Barriers Are Preventing Organizations from Progressing?

Just because an AI tool gets trained by using accurate data doesn’t mean the tool will always be accurate. Some data can be very specific to patient populations in a geographical area, or sometimes an AI tool won’t recognize results that it wasn’t trained to recognize.

Wiesner and Taylor both elaborated on this. “One interesting aspect of AI is Explainable AI, or XAI,” Wiesner said. “Basically, that means an AI tool can take complicated algorithms and explain the data in a way that a human can understand. But to make sure people know what data they can trust, institutions have pilots for AI tools. The outcomes of limited-scope pilots are important so that we know what the opportunities are and how results can be actualized and trusted by providers and patients.”
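As a rough illustration of the idea behind XAI, here is a minimal sketch assuming a hypothetical linear readmission-risk model: breaking the score into per-feature contributions lets a clinician see what drove a prediction instead of receiving a bare number.

```python
def explain_prediction(weights: dict, features: dict) -> list:
    """Break a linear model's score into per-feature contributions,
    largest magnitude first, so a human can see what drove it."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical model weights and one patient's feature values.
weights = {"prior_admissions": 0.8, "age_over_65": 0.3, "has_caregiver": -0.5}
patient = {"prior_admissions": 2, "age_over_65": 1, "has_caregiver": 1}

for name, contribution in explain_prediction(weights, patient):
    print(f"{name}: {contribution:+.1f}")
```

Real XAI tools work on far more complex models, but the goal is the same: a ranked, human-readable account of why the tool said what it said.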

Taylor added, “We have had to retrain our AI tool whenever our patient population has changed. At first, I trained our AI solution on data that was available six months prior. That data worked okay in the solution for a time, but then the model started to decay as the data changed.”

Taylor concluded that it is important to keep the AI tools up to date by retraining them. That way, the tools use the most current data to determine outcomes for patients.
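The retraining discipline Taylor describes can be sketched as a simple decay check: compare the model's live accuracy against the accuracy measured at training time, and flag it for retraining when the gap grows too large. The tolerance value here is a hypothetical placeholder.

```python
def needs_retraining(recent_outcomes, recent_predictions,
                     baseline_accuracy, tolerance=0.05):
    """Flag model decay: retrain when live accuracy drops more than
    `tolerance` below the accuracy measured at training time."""
    correct = sum(p == o for p, o in zip(recent_predictions, recent_outcomes))
    live_accuracy = correct / len(recent_outcomes)
    return live_accuracy < baseline_accuracy - tolerance
```

Running a check like this on a rolling window of recent patients would surface the kind of gradual decay Taylor saw as his patient population drifted away from the six-month-old training data.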

What Success Looks Like with AI

Ultimately, providers are looking for AI tools that will save them time and provide the most beneficial information so providers can more efficiently help patients. One audience member lamented that the time and attention of clinicians are totally oversubscribed and that clinicians are overwhelmed with all the information that they could look at.

The audience member explained, “If we are going to succeed with AI, the data has to come to a conclusion that saves the physicians’ time. It should allow physicians to see fewer things.” That way, physicians are focused on the most important data and results to truly help patients.

Although there is still a lot of progress to be made in AI, many providers have already seen success, and it was exciting for those in the room to hear how three organizations in healthcare have made strides with their AI tools.




Photo credit: Adobe Stock, sdecoret