On Demand: More than Meets the Eye: Assessing The ...
Webinar Recording
Video Transcription
We're grateful today to the Edwards Lifesciences Corporation. They, as you know, are global leaders when it comes to patient-focused innovations for structural heart disease. They helped to sponsor MedAxiom's new white paper, The Value of a Structural Heart Program: Impact Beyond the Procedure. Today's webinar highlights a lot of the findings from that white paper, and we're delighted to be able to bring the information to you today. Today's program has speakers from Baylor Scott and White Health, and they do not have any disclosures relevant to the content that you'll be viewing today.

During the webinar, if you have any questions, we'd like you to type them into the Q&A section at the bottom of your screen. If you have any questions during the program, or comments on other people's questions, we'll leave time at the end of the presentation to discuss them with the panelists. A copy of today's slides will be placed in the chat. In addition, everyone who registered for the webinar will receive a link to the recording within about three to five business days. If you have any colleagues who couldn't attend the session today, a copy of the recording will be available on our MedAxiom website within about three to five days as well.

I am joined today, hopefully in a moment again, by our folks from Baylor Scott and White Health in Dallas, Texas, who, as I mentioned, were instrumental in telling their story for the white paper. With us today is Trey Wick. He is the Chief Financial Officer at Baylor Scott and White Health. Mohamed Safa, Mo, is the Director of Advanced Analytics, and hopefully he'll be back with us ever so shortly; he seems to be having a little difficulty with Zoom on his end. And then we hope to have Dr. Michael DiMaio. He is Chief of Staff at the Heart Hospitals and Medical Director for Surgical Services, Research and Education. We weren't sure he was going to be able to join us, but we have a last-minute indication that he will be able to join us for part of the call anyway.

So with that, Trey, I'm not sure what's honestly going on with Mo. Here's his picture, and here's your picture. So I think, if you're okay, Trey, shall I go ahead and advance the slides, or do we want to give Mo just another minute? Well, I think I can get us started here. I think you probably can. So let me just advance the slides, and why don't you take it away, and we hope to have Mo back with us ever so shortly.

We really do hope to have Mo back. He has a lot to say about this, and he was an integral part of making all of this happen and partnering with us. But I'll go ahead and get us kicked off here. What we wanted to do initially was just go over the steps we went through for analyzing this program. We will touch on all of these areas as we walk through the slides, but we wanted to get them out in front of you now and then wrap up by going back over them at the end, so we can tie all of this together. As you can see, our steps were defining our problem, identifying an executive champion and the key stakeholders, and doing a data inventory: what do we have? What are all of the items we have available to study this program? Defining measures and parameters: we had to come up with some definitions on how we were going to do things, how we were going to measure our impact, and get buy-in from all of the stakeholders. Starting with an iterative process.
Expanding our analysis beyond a single site to multiple sites. Building dashboards that would really help us communicate findings and share the results that we discovered. Scaling that to multiple sites. And then maintaining that continuous improvement. We didn't just do this one time and have it come out right; there were multiple continuous improvement cycles that we went through to get this to where we have it today. So we'll touch on all of these as we go through the slides. I'm very hopeful that Mo will be able to join us and talk about his pieces. But if you advance to the next slide, please.

This was our opportunity statement. This is really what started the discussion, and I'll start with the first one here: TAVR carries a perception of being only marginally sustainable. When this new technology came out, it was groundbreaking, and it was high cost. The reimbursement in most instances did not cover the cost of the device and all the other things used in the TAVR case. So the perceptions of the program really started to stem from those early days of just measuring the single TAVR episode of care on its own. That really started the discussion.

Another hurdle we had was that getting accurate TAVR financials might be burdensome. At the patient level, below certain thresholds, there might be supplies and other items used that are not charged for or documented. There could be staff time, you know, that doesn't get allocated automatically; you may need systems to do that. You may need allocation methods to really measure your overhead and indirect costs that have to be accounted for, and those allocation methodologies might be imperfect. All of this is highly dependent on the resources available at your own locations and how much time and effort you want to put into measuring these details down at the patient level. So those were two of the bigger challenges we had.

Once we got into some of the analysis, we also noticed that a lot of the work was being done at each individual location, so it was siloed. That might have led to different conclusions about the program's performance based on, you know, the sites making their own assumptions and definitions. So we really wanted to get away from a siloed, individualized, by-site analysis and work toward something that could be used across multiple locations, where everyone agreed on the definitions. And that's what moved us to the last point here, a system-wide methodology: getting away from each location, quote, doing their own thing and reaching those different conclusions, to a more collaborative, consistent process with definitions that were understood and accepted by all of the stakeholders. And we felt that once we reached that point, we could really spread this across multiple locations within our system. Go to the next slide here.

I see that Mo is on the video. I'm gonna see if he can jump in here. Let's give him a second and see if he's able to. Nope, he's back off. This slide is just to give you a snapshot of the size of our programs here, from the inception and how many cases we've seen over those years. Going all the way back to 2011 up through 2024, you can see we have seven facilities here that are participating in the TAVR program, starting from about 20 cases, what's that, about 13 or 14 years ago, to doing over a thousand a year now, and over 8,000 during the entire lifetime of this program over the past 13 or 14 years. So, next slide.
So this is some of the work that we did to really build the approach we had. Identifying the executive champion and key stakeholders is a very collaborative approach. We really started with the physician leadership, engaging the clinical as well as the business units, collaborating with our internal and external resources to gather their feedback, and optimizing the data model to ensure that we had the appropriate inclusions and exclusions, really refining those definitions to make sure we were measuring what we wanted to measure and weren't getting off into areas that maybe were not associated with the program or maybe would lead us into other services.

So the physician leadership, obviously, they're one of the big stakeholders here at the table, to help guide us in the pathway for these patients and all of the services that they consume. It's not just about the single TAVR episode; it's about things that happen prior to those patients receiving their new valve, but also those things that happen afterward. Finance leadership is at the table to help measure all those things and also to hear how those clinical and care pathways are defined, so that we can appropriately measure those impacts and share the results of the performance accurately, and in a way that the physicians and clinical leadership can understand. And then we also included clinical coordinators and other supporting personnel, for their insights into the program, the care that these patients receive, the timing, and what we needed to include or consider as we looked at all of these areas and how we would measure them. So it was really a team effort that involved multiple areas of each organization and really got everybody, I guess you would say, on the train as far as how things were going to be defined and measured and how those results would be shared. Next slide.

So really, for me, from a finance perspective, I'm purely from the finance side of things, no clinical expertise here, but I knew there was a bigger picture out there. I knew these patients would obviously be coming here to have their TAVR procedure, but we knew that there was a lot more happening before they were selected and a lot more happening after their procedure was completed. And really, to assist me and other finance leaders in the organization, we needed to understand that bigger clinical picture. What is the care path for these patients? What are all of the other services that they are receiving beyond the TAVR episode of care? When are those services being supplied? How can all of that be defined and measured and the results be shared? So we really leaned on the physicians and the clinical professionals to help us with that.

And as you can see, we started to develop a map of all of the services that these patients were provided during their course of treatment. It could include things such as surgical valve replacement, obviously ECHO, CT scans, clinic visits, follow-up visits, cath lab and PCI procedures, stress tests. So we really started to develop a larger profile of what these patients consumed, and we started to expand the scope of what we were measuring as a TAVR case, going away from just the individual TAVR episode and expanding it to a much larger, bigger picture, what we ended up calling the halo effect. All right, next slide.

So what data did we need to tell the story? We needed to look at: what do we have readily available?
Well, we had a lot of information from our systems as far as the patients' utilization and resources, all of those detail items that they used as they received their care. What data needs and gaps did we see? I think a lot of that was around defining and filling in the missing pieces that we may not have known about, and learning about the best practices for follow-up and pre-procedure care that these patients had. So we identified where those items were and what gaps there were. There may have been data we weren't collecting that we needed to start collecting, so we identified those areas and began a method of collecting them. That really led into determining action plans to obtain any additional data that we needed, and then also partnering with our data stewards. There may have been areas where we did not have access to data, and we needed to request access, see what was available, and collect and share that information with our larger team that included the clinicians, the physicians, and the other stakeholders, so we could determine what was valuable and what we needed to continue to collect to tell the whole story. Next slide.

So how did we achieve those meaningful insights? Defining the measures and parameters. For finance, that was really second nature for me: it's around charges, payments, and contribution margin. Those were the areas we really focused on. Contribution margin is simply payments less the direct cost for those services that we provided. So think about the things done for the patient when they're receiving care; that was what we were really measuring as direct costs, those things that were happening for the patient during their episode or encounter. We looked at the inpatient visits and the outpatient service encounters, and this is really where we expanded the lengths of time pre and post the TAVR episode. We accumulated the data from all of those activities and put them together.

Code mapping and grouping. This was really one of the more important ones, because we needed to make sure that we were measuring consistent definitions for the activity. I know from a finance perspective, when someone asks me to tell them whether a service is profitable or not, one of the first things I ask them is, well, how do you define it? Are you looking at DRGs? Are you looking at UB revenue codes? Are you defining it by your general ledger cost centers? So we had to come up with those definitions, and the stakeholders as a group, including the clinical people and physicians and finance and everyone else, came together and agreed on those definitions. That really allowed us to move forward much faster, and with those definitions, to collect the relevant data and be able to share that information and those results. And then aligning that with data governance, really making sure that what we were collecting was consistent and there weren't any unusual characteristics to it. Sometimes the data would come back and it wouldn't look like we expected it to look, and we needed to do some refining there on how the data was collected, and maybe a little work to make it cleaner data. So we did all of those things as we were preparing to measure this program. Next slide.

I'm just going to quickly ask, Mo, can you... Yeah, can you hear... my apologies, everyone, can you all hear me now? Yeah, we can. Great.
I had to go get a different device for my computer slides, because I was really winging it there for a bit. Yeah, I can talk about this one a little bit, especially the last one; I think, Trey, you covered the first two really well. So the third one, talking about code mapping and grouping: we wanted whatever we were reporting on to be meaningful. Rather than having the users looking at DRGs and revenue codes and cost centers, we wanted to create groupings. So, what is a TAVR? Instead of looking at 266 and 267, there's a TAVR component. What is a PCI? For revenue codes, the same thing. So when you come to the reporting aspect of it, when you start putting things on the dashboard, it's fewer codes and more meaningful groupings that folks are looking at. We did the same thing for the cost centers. Anytime we're looking at those services, we want to bring things down to what is considered imaging, what is nuclear medicine, what is labs, all that kind of grouping. And we did it once and we use it over and over again. With the data governance component, it's really very similar: you define a metric or a term, you catalog it once, you put it in a data dictionary, and then it becomes useful in future dashboards or products or anything that you want to scale and reuse in the future. Next slide, please.

So this is kind of talking about the iterative analysis. I missed those slides; maybe, Trey, you might have covered them for me. Again, thank you, and my apologies. So, in 2018, we wanted to start somewhere. We just did not know what we didn't know at the time. We said the easiest thing to do is capture those two codes for TAVR, 266 and 267, and go look at all of the encounters for those patients who had a TAVR, 12 months before and then 12 months after the procedure discharge, and look at all the revenues and charges and payments and calculate the contribution margin. We continued doing that for a couple of years. This was only done at one site. We learned a lot during that time, one piece of which is that ease of access to data is not necessarily going to be there. I was an analyst at the time at one site, and if I don't have access to certain data components, now I have to go outside of my comfort zone and try to see who in the organization can provide me that data access.

It was a little bit different in 2020, and this is kind of what inspired this TAVR halo product, as we call it now, to become what it is today. Trey might have covered that there was a collaborative approach across the system to share best practices and standardize the quality of care and the programs. We had six programs, so we were asked to do something similar to what we did in 2018 for the one facility. Now it's a little bit more challenging because there are a lot of nuances; at the time we were on two different systems, so there was work in reconciling what a given data element is for our North Texas region versus the Central Texas region. We wanted to provide something that would be meaningful, so that when you're showing this data analysis to our stakeholders, it all makes sense and flows. There were challenges, as I mentioned, in bridging between the two, but we made that analysis, and it really expedited getting to that TAVR halo, which is the 1.5 you see on the right of the screen. One thing we added, in addition to making it work for all six programs, came through the collaboration.
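As a rough, illustrative sketch of the grouping and windowing logic described here, the Python below maps raw DRG codes to friendly group labels and keeps encounters that fall within 12 months of a patient's index TAVR discharge. The record layout, field names, and the non-TAVR code are hypothetical; only the DRG 266/267 index definition and the 12-months-before/12-months-after window come from the discussion.

```python
from datetime import date, timedelta

# Hypothetical grouping: map raw DRG codes to meaningful labels so reporting
# shows "TAVR" or "PCI" instead of bare codes.
DRG_GROUPS = {
    "266": "TAVR",
    "267": "TAVR",
    "246": "PCI",  # illustrative only
}

def drg_group(drg_code: str) -> str:
    """Return the friendly group label for a DRG code (or 'Other')."""
    return DRG_GROUPS.get(drg_code, "Other")

# Hypothetical encounter records: (patient_id, discharge_date, drg_code)
encounters = [
    ("P001", date(2019, 3, 10), "266"),  # index TAVR
    ("P001", date(2018, 9, 2),  "246"),  # earlier PCI, inside the 12-month look-back
    ("P001", date(2020, 1, 15), "999"),  # follow-up visit, inside the 12-month look-forward
    ("P002", date(2021, 6, 1),  "999"),  # patient with no TAVR index; excluded
]

WINDOW = timedelta(days=365)  # halo 1.0: 12 months before and after the index discharge

# Find each patient's index TAVR discharge (DRG 266/267).
index_dates = {
    pid: disch for pid, disch, drg in encounters if drg_group(drg) == "TAVR"
}

# Keep every encounter that falls inside the window around that index discharge.
halo = [
    (pid, disch, drg_group(drg))
    for pid, disch, drg in encounters
    if pid in index_dates and abs(disch - index_dates[pid]) <= WINDOW
]

for row in halo:
    print(row)
```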
Some of the clinical coordinators suggested that instead of looking at 12 months post-procedure, we look at 14 months, because we had a lot of patients coming back for follow-up visits up to 14 months out. So that is where we got what we call the TAVR halo 1.5. And then, as one of the slides Trey covered showed, throughout the ongoing collaboration, every time we made a change we went and presented it to additional stakeholders and others who gave us feedback. One time we were presenting it with several physicians in the room, and one of them suggested: this is great, this is giving me the analysis for the halo effect, but this is only covering the patients who had a TAVR. What if we look at it as a program, for all the patients that are coming to the clinic? The first thing in my mind was, yeah, nothing is impossible, that could be done; we'll just have to tweak the model and look at it from a completely different angle. So instead of making the index visit the TAVR episode, when the patient is coming for the TAVR procedure, we went back and started capturing who those patients are that are coming to the clinic. From that, we were able to say, of those patients that came to the clinic, X had a TAVR, Y had a SAVR, and then all the other percentages that had additional services or inpatient visits. We were also able to capture those outpatient visits. You see here on the slide, it shows about the 30 days prior and the 90 days post the initial clinic visit. This is the window that we determined as the tier 1 TAVR halo. I think in one of the slides previously, you saw the three concentric circles. The first circle is capturing what's in the realm of tier 1 of the halo, which is what you see now here in that halo 2.0. And we're also able to capture the subsequent visits for any time after that.

One thing we would like to capture in the future is more around patient loyalty. For those patients that came initially to have a TAVR or aortic stenosis treatment, what was their long-term revenue stream to Baylor Scott & White, with TAVR, or the aortic stenosis treatment, being the point of entry? This is not done yet, but it is something we're hoping to be able to calculate in the future. Next slide, please.

So, this is a little bit more into the detail of what goes into the halo. Initially, like I mentioned previously, we talked to the different coordinators and the physicians. We wanted to know what should be included and what should be excluded, and this is what we came up with. But also, the dashboard that we built is flexible enough that if something shows up that we had not previously determined should be included in the TAVR halo, they have the ability to filter it out so that it's excluded. And it's pretty user-friendly, so folks can simply see a value and say, okay, well, why is it showing an endoscopy? This patient maybe had an endoscopy in that window, but I don't want it to be included because it's not part of the TAVR halo. So then they can simply exclude it. You can see here we looked at the contribution margin only for their hospital bills or hospital visits; we don't want to capture the professional billing. Again, with the 30- and 90-day window pre and post, we're capturing pretty much all their inpatient visits.
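To make that clinic-indexed reframing concrete, here is a minimal sketch with invented patient data and hypothetical group labels, showing how a cohort indexed on the clinic visit might be broken down into the shares that went on to TAVR, SAVR, or something else, along the lines described next.

```python
from collections import Counter

# Hypothetical: each clinic-indexed patient mapped to the most significant
# downstream service (if any) after the initial aortic stenosis clinic visit.
clinic_cohort = {
    "P001": "TAVR",
    "P002": "TAVR",
    "P003": "SAVR",
    "P004": "Other CV service",
    "P005": "No intervention",
}

counts = Counter(clinic_cohort.values())
total = len(clinic_cohort)

for group, n in counts.most_common():
    print(f"{group}: {n}/{total} ({100 * n / total:.0f}%)")
```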
So we're able to tell how many, or what percentage, was TAVR, what percentage was SAVR, and then other procedures after they had that first clinic visit. And then from an outpatient services standpoint, we capture all their outpatient visits to the hospitals and the CV clinics, as well as things like x-rays, CT scans, PFTs, stress tests, ECHOs, and other things like cardiac rehab and therapy. Next slide, please.

All right, so this is the obligatory finance slide. You can't have finance participate and not have a slide that's a bar chart with some dollar signs on it. I just really wanted to use this slide to illustrate the values, and the difference in the values, when we measure these cases just as a single TAVR case versus measuring the entire episode and the halo impact. As you look on the left-hand side there, this is a measure, say, for 100 TAVR cases, for example, that had 71 pre-index visits and 276 post-index visits. On the left-hand side, you can see the index visit, or the TAVR episode, really only had a small contribution margin of just over $40,000. But when you measure all of the activity that these patients had, including those visits prior to their TAVR index case as well as the multiple visits post their TAVR, you can see that that jumps considerably, from the smaller number all the way up to just over $73,000. So when measuring the program's value and planning and making decisions about where investments are going to be made in the healthcare ecosystem, it makes a huge difference in how this program is viewed, how the investments are made going forward, and where these programs are going to be growing in the future. So, next slide.

So yeah, with the halo effect, or the TAVR halo, we also built it as one of the modules in our product. We have a financial dashboard that doesn't only look at the halo effect; the halo became a component of it. We look at volumes, we look at the payer mix and whether it's shifting, and we have some analysis where folks can do year-over-year comparisons: are we trending in the favorable direction or not? There's also a section we created around quality and outcomes, and we keep integrating all these data sets together, and the halo has served as a great component of this overall product, what we call the service line analytics product. It's self-serve: folks can just go in and request access to it. They don't have to go through a big data request where we have to provide them actual data dumps and things like that; they can go to a dashboard. It's automated, and we have folks always with us at the table. So we're not building these products on our own; Trey and others, from quality, from clinical, are telling us, this is the guideline we should follow when we're building any analytics product, these are the things that we should look at, and these are the business rules, essentially. We're able to calculate the contribution margin for the program. We started, like we've been talking about, with the TAVR, but now we're able to tell what the LAAO halo effect is; also, recently, TEER, that's another one. We kept improving the halo product to where now we can pretty much feed it any definition we want and set those parameters, like with the TAVR, where we set the 30 days and the 90 days.
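As a rough illustration of the index-versus-halo comparison on that finance slide, the sketch below computes contribution margin as payments less direct cost, first for the index TAVR encounter alone and then across the pre- and post-index encounters as well. The dollar amounts and field names are invented for illustration; only the payments-less-direct-cost definition and the index-versus-halo framing come from the presentation.

```python
# Hypothetical encounters for one patient, labeled by where they fall
# relative to the index TAVR episode.
encounters = [
    {"phase": "pre-index",  "payments": 4_000,  "direct_cost": 2_500},   # e.g. ECHO, CT, clinic visit
    {"phase": "index",      "payments": 52_000, "direct_cost": 47_000},  # the TAVR episode itself
    {"phase": "post-index", "payments": 9_000,  "direct_cost": 5_000},   # follow-up visits, rehab
]

def contribution_margin(rows):
    """Contribution margin = payments less direct cost, summed over encounters."""
    return sum(r["payments"] - r["direct_cost"] for r in rows)

index_only = contribution_margin(r for r in encounters if r["phase"] == "index")
full_halo = contribution_margin(encounters)

print(f"Index TAVR episode only: ${index_only:,}")
print(f"Index plus halo:         ${full_halo:,}")
```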
We can specify what should be included and excluded, and then the dashboard is just going to display it, and folks can use this to see what their halo is for a program that might not have been identified already. Some of the things we show there, or the insights those provide, are opportunities and outliers, things that facilities need to improve on, and also potential growth and market share. So all of these different insights can be obtained within those dashboards. And it's across the system: we have 18 different hospitals that have cardiovascular services, seven of which do TAVR, but this is now being scaled to look not only at TAVR but at any of the specialties or subservice lines within the cardiovascular service line. Next slide, please.

All right, so to summarize a little bit on measuring the program's total value, I'll walk through these from left to right. This is some of our advice on how we would tell someone to do this, starting from day one. Start small: focus on one site or a subset of patients initially. It really helps to simplify the process, and it allows you to be very nimble. You can really micromanage those fewer cases to validate your charging, your coding, your billing, your payments, to ensure everything is working like you expected. And if you need to make any adjustments to the approach, it's a lot easier to make those on the small scale. So perfect the approach at a small scale and then scale it up from there.

Number two, securing an executive champion. This is going to be your advocate, not just for the service that you're measuring, but also to get resources, maybe access to information, and introductions to all of the other key stakeholders. That champion really provides the support and authority to make this happen and undergird the whole process.

Number three, involving key stakeholders. This could be, obviously, clinicians and physicians, all the support staff; it could even be a service line committee if one exists. But be transparent with all of your data, gather all of those diverse perspectives and input, and use those key stakeholders as your testing group for how effectively you're communicating the results. It also helps you ensure that the financial and clinical teams are aligned from the very beginning. Remember, we talked a little bit about how we defined and set up all those definitions on the front end, so we were all measuring what we expected and we were able to understand those results.

Prioritizing data governance. That's critical to making meaningful measurements and achieving acceptance of the results, because all of those key stakeholders have participated in selecting and accepting the definitions of what they're measuring. So clear definitions and data integrity are really critical to the success of this whole process.

And lastly, iterate and refine. And I said refine, refine, and refine again. Each subsequent review cycle is an improvement over the preceding one, and it ultimately produces more meaningful analysis and results. So don't expect to do this one time and be done. As you saw earlier, we had a 1.0, we had a 1.5, and a 2.0, but believe me, there were multiple cycles within each of those to get to that point. These multiple iterations really are continuous learning, and just expect that to be a part of the process. Next slide.

Yeah, so you probably saw this slide at the beginning.
It was more like an overview, if you will, of the steps that we followed, but I can talk a little bit more to it now. So this is just, in summary, the step-by-step to capture the halo, or the program value, whether it's structural heart or TAVR or LAAO. Really, for any of those programs, you can probably follow the same steps and accomplish the same thing.

The first thing is to understand what problem we're trying to solve. In this case, we had an opportunity to show the sustainability of the TAVR program, which is usually perceived as not sustainable. Folks think of the TAVR as not going to make money, so this is really uncovering all of these hidden pieces that can show actual value from a financial angle. And then there's the very important step that I think helped a lot in shaping what the TAVR halo looks like today, which is the collaboration: having a champion and partner like Trey, and also having folks from the MDs, from the clinical coordinators, and other folks across the system, because having their perspective is very important. Someone in analytics with a technical background is not going to figure that out without their guidance and help.

From a technical standpoint, a lot of times you're going to face some roadblocks with data access. You don't always have the data at your fingertips, so either you have to go look at what you have and see if that will work, or start reaching out in the organization: is there a team that manages certain data that we need, like the financial data? And then there's also the categorization of terms that allows meaningful reporting and analytics. And then, of course, what is it that we're reporting on? This is where we defined the measures and parameters. Are we doing the cost, the contribution margin? And what are the parameters that define what's in the scope of the halo? Do we want to have it open-ended, or is it specific to a certain timeframe? What are the inclusions and exclusions? Those are very important things to define. Data governance is very important; more recently we've been integrating a lot more with data governance. Like I mentioned earlier, we define something once, we catalog it, and we use it over and over again, so that you don't have to figure out how to calculate contribution margin each time you're trying to solve the same problem.

And then start small; you have to start somewhere. I feel that if we had just said, we can't figure this out, we wouldn't be where we are today. We said, let's start with what we know, which is DRGs 266 and 267, and let the process teach you how to get to the final outcome. That's when we learned, and it sparked some other conversations: hey, it should be extended to 14 months for the follow-up visits, which we now consider a tier two. But it also led us to realize that instead of capturing only these two DRG codes, we should look at TAVR as all these patients that are coming to the clinic, because that's actually what the program is. We have a program that's treating aortic stenosis, and TAVR happens to be the procedure for that treatment, but we also capture everything else in that realm.

And then build the dashboards. I think this is very important also for scalability and wider use and adoption. Having one person in the organization that everyone has to reach out to, to provide the data as a data dump or put it in Excel to do analysis, is not sustainable.
Having something in self-service dashboards, it sort of becomes like a product that everyone knows where to go to, to find the answers to their problems. And with these products that we're building today, we started with one site, but now, for TAVR at least, we're up to seven programs, and if we open additional programs down the road, it's going to be used for those programs just like it's already being used for the seven. There's nothing additional we have to do, so it's easy to scale and make available across the system. And we're always iterating and making improvements. We have folks like Trey and other stakeholders at the table with us providing feedback, telling us what things are working and what things are not working. Sometimes we come up with new ideas that we implement as enhancements to the product, so we're continuously making it more valuable and useful to the organization. Next slide, please.

Thank you. Yeah, I'll just chime in here. Thanks, guys. This is just to point out to everyone that the white paper has just been published, titled The Value of a Structural Heart Program and the Impact Beyond the Procedure. It highlights much of the information that Mo and Trey have talked about today. You'll be able to download the white paper; the link is presented here on the slides, and you can also go to the MedAxiom website to get your copy. So it gives you a version that you can look at and spend a little more time with to understand the steps and the process that the Baylor Scott and White Health team went through. So thanks so much.

I was struck by a few things, and we've got a few questions that came in during the presentation. A couple of things really struck me from my past history working in a hospital system. I think one of the biggest struggles was really landing on those data definitions, and I like what you did in terms of how you cataloged them: you did it once and then you had a reference that everybody used. Did you find any challenges in actually getting to the point where everybody could agree on the definitions? And this relates to one of the questions that came in: was there a concern over how you capture a particular patient population when, well, they sort of belong to another program, if you will? And how did you avoid double counting the benefit that those patients brought, let's say in some of the imaging, because imaging wants to count that as their patient, or the value that they bring to the organization, but so does the TAVR program. So how did you manage issues like that, either definitionally or in how you described the end impact?

Yeah, these are all great questions, thank you. So in the product, in the analytics dashboards, we allow looking at the data through different lenses. If you are someone like a unit leader, let's say you're in the cath lab or EP lab or non-invasive or the OR, you can look at that granular level of detail. So if you're trying to capture and give credit for every imaging exam you've done, every cath lab procedure you've done, you're able to get that from there. But we also have other areas in the product that are going to be more comprehensive, things like the halo. And we also have something at the patient-visit grain, which is going to give you, for the entire visit, what is the cost for the patient and what is the contribution margin for that patient.
So we've given different users and different stakeholders across the organization different ways to look at the data, because a lot of these questions are kind of related, but they're different. When you're looking at the halo effect, you are specifically looking at that patient clinic program and then the downstream revenue tied to those patients. But you can still go and look at their imaging separately in a different format. So it's almost like you're catering to the needs of different stakeholders, depending on how they want to look at it, what story they're trying to tell, and what questions they're trying to answer.

I think it's exactly that. What story are you trying to tell? You can tell the imaging story by itself, you can talk about the TAVR program as a single episode, or you can expand it. So it gets back to making sure that the context is appropriate for what you are telling your audience or your stakeholders.

Mm-hmm, yeah, really great information. Another person wanted to just clarify the total time for the patient that you evaluated. I think it was in your final version, your 2.0, where you were looking at, what is it, 30 days pre, and then 14 months post, is that right?

Yeah, so in the final data model, the 2.0, we have it down to 30 days prior to that index visit, as we call it, which is when they came for that initial clinic evaluation of aortic stenosis, and then 90 days after that. The reason it came down to those time periods is that typically, when a patient is referred to the aortic stenosis clinic, within those 30 days leading up to the clinic appointment they're asked to do any preliminary tests and x-rays and things like that leading to the visit. And then within the 90 days post-clinic, they have pretty much done their intervention or inpatient visit or episode for the TAVR or SAVR. After that, we're still capturing the additional value, but it's more like a tier two. So that 120 days is really what we call tier one, and then we capture everything else out to the 14 months as a tier two. And one thing I mentioned that we would like to capture beyond that in the future: for a patient who had a TAVR or came for aortic stenosis treatment, if they have a hip replacement five years from now, and it goes back to the fact that they initially came to Baylor Scott & White with the aortic stenosis treatment as the point of entry, what is that revenue capture? We're not there yet; this is something that we're hoping to calculate down the road. Yeah, yeah.

Well, and part of what you mentioned too: creating that self-serve dashboard provides transparency, and, like you said, it automates and allows people to go in and get information whenever they need it, so they can use it for their own understanding. So that's really a great way to approach it. I had another question here that someone asked: what did you do with, or did you include, patients with final diagnoses other than the 266 and 267 DRGs? What did you do with those folks? Or was that part of your definition of the impact of the halo?

Yeah, in the latest model, the 2.0, that became just part of it. In the initial definition, in the 1.0 and the 1.5, we were only focused on the 266 and 267 DRGs. But then, when we started looking at a program for TAVR, for aortic stenosis, it was also very essential to find out what percentage of those patients that come to the clinic don't end up with a TAVR. And I think that by itself is valuable to know.
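A minimal sketch of that tiering logic, with hypothetical function and constant names: encounters from 30 days before through 90 days after the index clinic visit fall in tier one, and anything after that out to roughly 14 months falls in tier two. The dates are invented examples.

```python
from datetime import date, timedelta

TIER1_PRE = timedelta(days=30)        # look-back before the index clinic visit
TIER1_POST = timedelta(days=90)       # tier 1 window after the index clinic visit
TIER2_POST = timedelta(days=14 * 30)  # roughly 14 months, per the follow-up pattern

def halo_tier(index_visit: date, encounter_date: date) -> str:
    """Classify an encounter relative to the aortic stenosis clinic index visit."""
    delta = encounter_date - index_visit
    if -TIER1_PRE <= delta <= TIER1_POST:
        return "tier 1"
    if TIER1_POST < delta <= TIER2_POST:
        return "tier 2"
    return "outside halo"

index = date(2024, 1, 15)
print(halo_tier(index, date(2024, 1, 2)))   # pre-visit imaging -> tier 1
print(halo_tier(index, date(2024, 3, 20)))  # TAVR episode -> tier 1
print(halo_tier(index, date(2025, 2, 1)))   # late follow-up -> tier 2
```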
And it's interesting: statistically speaking, over the last three years, since we started looking at TAVR from a halo perspective, it's always been in around the same ballpark, roughly 65% to 75% TAVR, about 25% to 30% SAVR, and then about 5% to 10% other cardiovascular services.

And have you found, or have you had any indication or request, I guess I should say, from clinicians to look at the data from the perspective of low, intermediate, high, or extreme risk patients for TAVR, just in terms of consumption of services?

We haven't gone that far yet, but this is an interesting one. Yeah, definitely something for one of the enhancements maybe we'll do to the model; that would be cool.

Yeah, interesting. And I know one of the big impacts of the team is that you had that multidisciplinary approach. I liked what Dr. DiMaio said in the paper, that it was like a heart team approach when it came to looking at TAVR programs financially to understand what was going on. You had the eyes and ears and understanding of a variety of folks who really would appreciate the impact of the program, so that you could truly represent what TAVR meant to the organization.

One thing I do want to add on that, though, it just reminded me: we have been working on integrating different data sets in our analytics product development. One thing we're doing is bringing in registry data to combine with the financial and outcomes data. We're able to do it today with the cardiac surgery and cath PCI populations and even ECMO, and eventually we're going to get the TVT registry. What that would allow us to do is calculate the impact of complications on the TAVR contribution margin or the direct costs and things like that. So we're not far from there; it's just a matter of time before we'll have that as part of our product for folks to look at these insights.

I'm really excited about that, because I know that those definitions are already agreed upon by the clinical folks and the physicians. So if we can tie financial impacts to those definitions, it's going to communicate so effectively. That's going to be great.

Yeah, that's great. Hey, just a couple of other questions rolling in here. I'm wondering about using particular programs for data visualization, wondering about Power BI. The statement here is that most healthcare financial systems are stuck in the past and avoid changing their current models, which hinders structural heart program growth. Are you using, I don't remember, Power BI or Tableau, or what are you doing?

Yeah, right now we're using Tableau, sorry, we're now using Power BI. We started migrating about four years ago; previously we were in Tableau. Honestly, I don't have a preference. I think the tool is just the tool; it's about what value it's providing you. And Trey can probably speak more about the financial resistance, or the finance folks' resistance. I feel nothing is wrong with that; you're just adding value to what they can bring. So if someone still wants to use Excel, that's totally fine, but the only way to scale the products is to have something that is available online, self-service, something like Power BI or Tableau.

Sure, yeah. I'm sorry, Trey, go ahead.

I was just going to say, you know, with the Power BI product, we were bringing all of those things forward. Mo and I were very involved with some of that.
And some of the things we asked for, you know, from a finance perspective were, hey, we want the ability to see the granular detail. So they provided that ability. You can see the results there, but you can also drill down to the patient-level detail and demonstrate from the bottom up, you know, how those results were obtained. And that does a lot to build trust in the data.

Yeah, I had a clarifying comment here from someone on the earlier question about patients outside the 266 and 267. They wanted to clarify: what if it was a TAVR patient, so not a non-TAVR patient, but rather they had a complication post-procedure that changed the DRG? Maybe they ended up on, you know, life support or ECMO, that sort of thing that shifted the DRG. Did you still capture that patient, or did they fall out of the definition for inclusion?

We still captured them in the global data set, because if you go back to that index visit of the aortic stenosis clinic, or the TAVR clinic, they are going to be part of this population. However, it'll tell you. I've seen it where maybe we had one or two patients that showed ECMO, because it's going to map to DRG 3. But if Trey says, well, this is an outlier, out of line, he can easily exclude it. Okay. So we have thought about these use cases. We want it to be part of the model, because it goes back to that initial definition: these patients meet the criteria of the clinic visit. But if the financial analyst, or whoever is doing the analysis, wants to exclude those outliers, they can do that easily within the filters of the dashboard.

Great, thank you. And then one last question as we wrap things up here: how long did it take to complete this project the first go-around? And then, how did you convince other folks? I know you had Trey on the finance side; how did you convince folks to give you resources, whether Mo's time or anybody else's time, to get the analyst help that you needed, just the ancillary team members? How did you get their support for time?

Well, from that perspective, I think we could obviously see the value of this service to our patients. We knew it was going to be groundbreaking, and we wanted to be a part of that for the future. So that was an easy sell to have the resources available to support the program and continue to see it grow. Mo, what other comments would you have?

To comment on the time aspect: this was such an iterative process, and I see us continuing to iterate and evolve in where we get to. As far as the other component, how did we justify it: having this driven by a strategic initiative like the TAVR collaborative that we talked about before helped inspire something like this and really expedited it. Because now you're not bringing it as a one-off kind of request, hey, this is just one program that would like to do something like that; this is really about improving patient care, sharing best practices across the system, making sure that a patient who comes to hospital A is going to get the same service if they go to hospital B or hospital C. It becomes more of a strategic initiative, so it can justify any of those activities that go toward meeting that strategic goal.

Yeah, great. Well, that's a great summary as we hit the top of the hour. So thank you both for your time today.
It's really been insightful, and I think it's aspirational for so many programs that are looking to understand the value of their TAVR programs outside of the procedure itself. Again, the slides are available in the chat, and you can download the white paper if you're interested in looking at it in more detail. I'll just add that contact information for both myself and Katie Willer, who was unable to join us today, is here as well if you're interested in other aspects of TAVR, understanding program efficiency and clinical delivery of care. So thanks again for your time today. It's been great having you. If you have colleagues who couldn't make the webinar today, they can go to the MedAxiom website, where they'll be able to find a link to the recording, so that you can share and learn with others as well. So thanks again for your time. Glad you joined us. Thank you. Bye-bye.
Video Summary
This webinar, sponsored by Edwards Lifesciences Corporation, discussed the findings of MedAxiom's white paper, "The Value of a Structural Heart Program: Impact Beyond the Procedure." The discussion emphasized the importance of assessing the broader impact of TAVR (transcatheter aortic valve replacement) programs beyond the procedural level.

Speakers from Baylor Scott and White Health, including Trey Wick, CFO, and Mohamed Safa, Director of Advanced Analytics, explained the process of measuring the total value of a TAVR program. They described a methodology built on multidisciplinary collaboration, clear data definitions, and a systematic approach to understanding both pre- and post-procedural patient care.

Their iterative approach involved starting small, defining measures, building dashboards for accessibility, and achieving agreement on data governance to ensure consistent results. The ultimate goal was to scale the program beyond individual sites and to incorporate other structural heart procedures.

The speakers emphasized the financial insights gained by expanding the analysis to include the "halo effect," capturing additional patient care episodes around TAVR cases within a specified timeframe and illustrating the broader value to hospital systems.

The session concluded by highlighting the importance of ongoing interaction between finance and clinical teams, and the value of capturing data for continuous improvement in program delivery and profitability.
Keywords
TAVR program
Edwards Lifesciences
structural heart program
MedAxiom white paper
Baylor Scott and White Health
financial insights
data governance
multidisciplinary collaboration
patient care episodes