ORLANDO, Fla.—A St. Louis, Missouri-based health system that was struggling to control costs in the surgical suite turned to big data to solve the problem. And they didn’t just save money, they reduced or eliminated targeted surgical products, reduced variation in surgical protocols and established best practices across surgical departments—all while ensuring quality postoperative results for patients.
Mercy, one of this year’s Healthcare Information and Management Systems Society (HIMSS) Enterprise Davies Award recipients, saved $9.42 million last year in perioperative costs alone. And how much did the system cost? “Less than that,” says Mercy’s CMIO, Todd Stewart, M.D.
Mercy, the fifth largest Catholic healthcare system in the nation, has 43 hospitals and more than 700 physician practices in Arkansas, Kansas, Missouri and Oklahoma. Mercy also has outreach ministries in Louisiana, Mississippi and Texas.
Mercy has been using SAP HANA, an application server that includes an in-memory, column-oriented analytical database management system, to reduce variations in surgical case costs.
We sat down with Stewart and Jamie Oswald, the health system's data analytics and engineering pro, to talk through the ins and outs of making these dramatic changes—and their dramatic results.
FierceHealthcare: What’s the biggest difference between your old system and the new one?
Oswald: It’s really fast. What sets it apart is that in the old world you had to pre-aggregate everything. Now we can redefine measures and dimensions at the transaction level and roll them up into whatever level is appropriate. The power of that is that we can then, from that high-level dashboard, drill into it and get all the way down. Transparency has helped us, as a provider, to get a lot of credibility.
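The shift Oswald describes, from pre-built summary tables to aggregating raw transaction rows on demand, can be sketched roughly like this. The records, field names and dollar figures below are hypothetical illustrations, not Mercy's actual data model:

```python
from collections import defaultdict

# Hypothetical transaction-level surgical case records (illustrative only).
cases = [
    {"hospital": "A", "service": "Ortho",   "surgeon": "Dr. X", "cost": 5000},
    {"hospital": "A", "service": "Ortho",   "surgeon": "Dr. Y", "cost": 7000},
    {"hospital": "B", "service": "Cardiac", "surgeon": "Dr. Z", "cost": 9000},
]

def roll_up(rows, *dims):
    """Aggregate total cost at whatever level of detail is requested,
    straight from the transaction-level rows -- no pre-aggregation."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["cost"]
    return dict(totals)

# High-level dashboard view ...
by_hospital = roll_up(cases, "hospital")
# ... and, from the same rows, a drill-down all the way to the surgeon.
by_surgeon = roll_up(cases, "hospital", "service", "surgeon")
```

Because every view is computed from the same transaction rows, the drill-down and the dashboard number can never disagree, which is the credibility point he is making.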
FH: How so?
Stewart: The difficulty with users, especially the clinicians, is getting them to trust the data enough to change how they operate. One of the first things they want to know is what the data means and whether they can trust it. The only way we can answer those questions is by having really good governance of the data—agreeing at the concept level on what it is we’re talking about, and defining that concept. That may sound simple. But on the back end, we might have to resolve three or four different data sources that have to be mapped and remapped. We have to do that in a well-governed manner. And that gives your data reliability.
So look at surgical block time utilization. How do we define start time and stop time? Where do we get that data? When we pull that in, we can drill down to each data element. So everyone understands that it’s valid.
Oswald: Block utilization is a big area of concern for perioperative users. We want to keep the operating rooms humming. So we have a dashboard around perioperative space. First, there’s the block utilization percentage. From there, you can drill down into that information. We have links out to a wiki page that defines all of the measures: the numerator, the denominator, whether start time means the time the patient hits the door or the bed. The contact information for the head of perioperative services is on there. So surgeons can go right to her if they feel like the data is wrong.
We let them go from the top level of detail to the bottom. Surgeons will say “those aren’t my numbers. The aggregation is wrong.” When that happens, we go down to the smallest level of data we have. We can ask “Which patient isn’t yours? Which one wasn’t here on this day? It says they left the room at 10:15. Is that wrong?”
It builds credibility. The next time you talk to them it makes the conversation easier and faster. And they can go in and check it themselves.
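A minimal sketch of the kind of measure definition that lives on that wiki. The in-room/out-of-room convention and all the minute counts here are assumptions for illustration, not Mercy's actual definitions or figures:

```python
def block_utilization(used_minutes, allocated_minutes):
    """Numerator: minutes of allocated block time actually used.
    Denominator: minutes of block time allocated to the surgeon."""
    if allocated_minutes == 0:
        return 0.0
    return used_minutes / allocated_minutes

# Hypothetical drill-down: the surgeon's percentage is just the sum of
# per-case in-room minutes, so "which patient isn't yours?" is answerable.
cases = [
    {"patient": "MRN-001", "in_room": 8 * 60,       "out_of_room": 10 * 60 + 15},
    {"patient": "MRN-002", "in_room": 10 * 60 + 45, "out_of_room": 12 * 60},
]
used = sum(c["out_of_room"] - c["in_room"] for c in cases)  # 210 minutes
utilization = block_utilization(used, allocated_minutes=4 * 60)
```

Publishing the formula alongside the number is what lets a skeptical surgeon audit it case by case.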
FH: Did you get input from end users on the system? How did you ensure it’s user-friendly?
Stewart: Yes. And it’s in plain English. It has big bold headings that are easy to see. Truly, it’s not a technical manual. It has technical information, but it’s not a technical manual. It’s digestible.
Oswald: The surgeons contribute to the wiki—they or someone on their team.
Stewart: It’s not top-down. You want them to own it. They make decisions about how they manage their block time. They’re able to roll that data up to a high level and down to the most granular level very quickly. It allows the clinicians to own that whole information stack.
FH: What are the key data points they can see?
Stewart: Block utilization, turnaround rates and—one of the biggest ones—perioperative cost-per-case. If somebody is doing total knees for $5,000 and someone else is doing them for $7,000, we want to bring that cost down—obviously, as clinically appropriate.
Oswald: In terms of motivation, we have a surgeon scorecard with the cases they work on most by volume, how much it costs to do each one and a ranking of all specialties.
Stewart: We’re a competitive lot.
Oswald: Type A.
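The scorecard logic Oswald describes might look something like the following. The surgeon names and per-case costs are made up for illustration:

```python
from statistics import mean

# Hypothetical per-case costs for the same procedure (e.g., total knees).
case_costs = {
    "Dr. X": [5000, 5200, 4900],
    "Dr. Y": [7000, 6800, 7100],
    "Dr. Z": [5500, 5600],
}

# Average cost per case, ranked cheapest first -- the peer comparison
# that gets a competitive, Type A audience to pay attention.
scorecard = sorted(
    (mean(costs), surgeon) for surgeon, costs in case_costs.items()
)
for rank, (avg, surgeon) in enumerate(scorecard, start=1):
    print(f"{rank}. {surgeon}: ${avg:,.0f} per case")
```

Ranking by average rather than total cost keeps high-volume surgeons from being penalized simply for doing more cases.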
FH: How do you deal with physician preference items?
Stewart: Pick lists and prep lists are peer to peer. So the cardiologists manage themselves as a group. They are unified in that they want the highest quality at the lowest cost. We track down to the sponge. But that’s the point of the technology. Most places are dependent on these high-level metrics and it becomes an IT scavenger hunt to figure out the data underneath that, much less to manage and govern it through time. This technology stack has the ability to consume very large, very heterogeneous data sets and have them organized and maintained. We’re a very large health system, so the data we’re talking about is not insignificant.
Now we can compress it and pull it into memory selectively. We can right-size what we need. Without that we cannot do peer-to-peer management on pick lists and prep lists. We don’t have the cost, quality or utilization data.
Oswald: You can compare on a case by case basis—the cost of implants for example. And then we go and talk to the physician, and a lot of times they just say “I use what they hand me.”
Stewart: Or it’s what they trained on.
Oswald: So if we can save $500 a case by switching, we have to do it.
FH: And because the data is all there, you can show them that the outcomes aren’t affected by that switch.
Stewart: Those two have to go together.
FH: What advice would you give to other organizations attempting this kind of project?
Oswald: You have to have a leader who’s invested. Our perioperative lead is great. She got buy-in, but also said “This is how we’re going to do it. You can’t come to me with your spreadsheet anymore. If you don’t like the number, let’s fix the number. But this is it.”
The technology was also vitally important. We tried to do it with our existing tools and we could get about two and a half weeks’ worth of data in a searchable space.
Stewart: That made it virtually impossible to have the discussions on the governance and the clinical sides. The technology has to be the servant, not the master.
Oswald: A little Skynet there.
Stewart: It can be very distracting. Very disruptive. We want the right information to the right person at the right time. Everyone else stops there. But the fourth part is “in the right manner.” It needs to be at that particular point in time because we’re so swamped clinically. People are amazed at the number of steps that nurses, particularly, take and at how quickly clinicians are pounded with information. They’re making decisions every few seconds. You have to get it right in the workflow.
Oswald: Before I moved into healthcare, it was “If you can make my life better, I’ll do it.” For care providers, it’s “You have to make it significantly better.” If you’re taking me from five clicks to four, those are still four new clicks I have to learn.
FH: What’s next?
Stewart: It’s continuous improvement. Now that we have the data, we can keep wringing the sponge.
Oswald: It makes it so much easier to see where the opportunities are, to see the variance.
Stewart: The data opens up the ability to ask a multitude of questions, to dial up and dial down. Our challenge now is to extend this platform across other areas, from perioperative to lab. There’s lots of opportunity.