
AI Blazing the New Healthcare Frontier

Dr. Gary Call, Chief Medical Officer, HMS

Artificial Intelligence (AI) has been used in healthcare since the 1960s and ’70s, with systems such as Dendral and MYCIN, but it is getting far more attention now that the technology has matured, cheap and powerful hardware is readily accessible, and vast amounts of data and diagnostic imagery are available via Electronic Health Records (EHRs). Venture capital firm Rock Health reports that 121 health companies working in AI and machine learning raised $2.7 billion between 2011 and 2017.

There are several exciting applications being developed today, though not all are without growing pains.

Diagnostic Image Analysis and Decision Support

There’s great potential for AI in medical image analysis, and some of the most successful applications to date are in eye care. These systems generally work the same way: hundreds of thousands of diagnostic images are fed to the computer, each labeled as coming from a diseased or disease-free patient. The patterns learned from those training images allow the system to diagnose images it has never seen.
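
To make that training process concrete, here is a minimal sketch in Python using scikit-learn. It is purely illustrative: the “images” are random arrays standing in for labeled retinal scans, and the simple model is a stand-in for the deep convolutional networks real diagnostic systems typically use; nothing here reflects any particular vendor’s pipeline.

```python
# Toy sketch of supervised training on labeled diagnostic images.
# The "images" below are random arrays standing in for retinal scans;
# a real system would train on hundreds of thousands of expert-labeled
# images with a far more capable model (typically a deep convolutional
# neural network).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 2,000 synthetic 64x64 grayscale "scans"; label 1 = diseased, 0 = healthy.
n_samples, height, width = 2000, 64, 64
labels = rng.integers(0, 2, size=n_samples)
images = rng.normal(size=(n_samples, height, width))
# Inject a faint label-dependent pattern so there is something to learn.
images[labels == 1, :8, :8] += 0.5

# Flatten each image into a feature vector and hold out a test set.
X = images.reshape(n_samples, -1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# "Training" means fitting the model to images whose diagnoses are known.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The fitted model can then score images it has never seen before.
scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, scores):.2f}")
```

In practice, a system like this would be validated against held-out expert readings before it went anywhere near a clinic.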

A system called IDx-DR is the first AI diagnostic to be FDA approved, and it can accurately detect diabetic retinopathy (DR) in under a minute, without the need for an ophthalmologist. If not detected early, DR can cause blindness, and patients in rural areas often live hours away from eye specialists. About 50% of diabetics don’t get the recommended annual eye exams, so it’s easy to see how the system could greatly improve outcomes for those who find it difficult to get regular checkups with qualified doctors.

Systems exist or are being developed to identify, or assist in identifying, several types of cancer, coronary heart disease, congenital heart defects, abnormal brain pathology, Alzheimer’s disease, and other conditions.

AI-Assisted Robotic Surgery

While this one sounds more like sci-fi than most applications, it’s happening today, and it’s improving outcomes. A system invented by Mazor Robotics analyzes medical records and helps physically guide the surgeon’s instruments during orthopedic operations. In one study, AI-assisted procedures resulted in five times fewer surgical complications and 21% shorter hospital stays than operations performed by surgeons working alone. It’s estimated that AI-assisted surgery could save up to $40 billion annually.

Other Applications

AI is finding its way into many other areas of healthcare:

  • Sensely’s “Molly” is a smartphone app that allows patients to interact with an AI-powered virtual nursing assistant before being directed to an appropriate provider or self-care regimen. Molly is currently being used by the NHS, Dudley CCG, Novartis and UCSF.
  • Beth Israel Hospital uses AI to identify patients likely to miss appointments so that staff can preemptively intervene.
  • AI-powered speech-to-text systems are streamlining back-office activities such as ordering prescriptions and tests.
  • Payers are beginning to use neural networks to identify billing errors and gain an edge in the ongoing arms race with fraudsters.
  • Sensitive patient data is being protected by AI-powered security systems that can identify unusual access patterns.

The “Black Box” Problem

These developments are enormously exciting, but there are still issues to be worked out.

One concern with neural networks and deep-learning systems is that, while their diagnostic abilities can be impressive once well trained, most are unable to explain their decisions, which troubles physicians and patients alike. This so-called “black box” problem makes many uneasy about blindly accepting an AI-generated diagnosis. Many are trying to crack it, including the Defense Advanced Research Projects Agency (DARPA), whose Explainable Artificial Intelligence (XAI) program aims to address these concerns across all applications.
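
To give a sense of what an “explanation” can look like in practice, the sketch below uses permutation feature importance, one common, model-agnostic technique: shuffle one input at a time and see how much the model’s accuracy drops. It illustrates the general idea only, on made-up data, and is not how IDx-DR, Watson, or DeepMind actually generate their explanations.

```python
# Toy sketch of one explainability technique: permutation importance.
# Shuffling a feature and measuring the drop in accuracy gives a rough
# picture of which inputs the model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient records" with six features; only the first two
# actually determine the (made-up) outcome.
X = rng.normal(size=(1000, 6))
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Importance = average drop in accuracy when a feature is shuffled.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Pixel-level analogues, such as saliency maps that highlight the regions of an image driving a prediction, play a similar role for diagnostic imaging models.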

Turing anticipated this issue nearly 70 years ago in his 1950 paper “Computing Machinery and Intelligence”: “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior.”

IBM’s flagship technology “Watson,” which put AI front and center by beating the world’s best “Jeopardy!” champions, has (among other issues) a black box problem with its highly touted oncology diagnosis system. Google’s DeepMind, by contrast, can not only identify 50 eye diseases with 94% accuracy, matching or exceeding top eye doctors, but can also explain how it reached its conclusions.

GIGO

A long-held truth in computer science is “Garbage In, Garbage Out” (GIGO). No matter how perfect a system’s algorithms and program logic, if its input is flawed, its output will be too.

In medical image analysis, this means the quality of both the input images and the human diagnoses used to label them is paramount; it’s crucial that an AI system be trained on high-quality sample data.

A physician’s handwritten or free-text notes are often excluded from the analysis, meaning important data about a case may be unavailable to the AI.

Outlook

While AI is already being used successfully today, in almost all cases it serves as a second opinion that requires human supervision. It’s widely expected that this will change as the black box problem is resolved, systems mature, and larger amounts of high-quality training data become available. The Harvard Business Review estimates that by 2021, AI applications could save the healthcare system up to $150 billion. That’s the kind of money that could really help more people receive affordable, high-quality care.

We are actively using AI where it makes sense today: to identify member health risks, to score the propensity of actions or claims to be fraudulent or improper, and to build predictive models of outcomes and engagement, among other applications. We’re excited to see what the future holds. How is your organization using, or thinking of using, these technologies?

The editorial staff had no role in this post's creation.