On Demand - Valve AI - Collaborative Intelligence ...
Webinar Recording
Video Transcription
Just a few housekeeping items. Everybody will be off camera and on mute during the presentation. If you do have any questions, please pose them in the Q&A section down below, which is right next to the participant chat section. Please use the Q&A section, and once we have those questions, we'll pose them to Dr. Scalia at the end of the presentation. So with that, we're going to get started. I'd like to kick off Valve AI: Collaborative Intelligence Is More Powerful Than You Might Expect. Today's presenter is Professor Gregory Scalia, who's the director of echocardiography at The Prince Charles Hospital in Brisbane, Australia, leading their structural heart program and serving as a key decision-making member for the advancement and integration of new technologies. He trained at the Cleveland Clinic. In addition to practising at both Prince Charles and Evara Heart Care, he's a professor of medicine at the University of Queensland, building on chairman and leadership roles throughout the imaging societies of Australasia, including the Cardiac Society of Australia and New Zealand, Cardiac Structural Heart Disease in Australia, Echo Australia, and the National Echo Database of Australia. He's an editor of the American Society of Echocardiography's CASE journal and of the journal Echocardiography, with over 130 publications himself. Greg has drawn on 27 years of clinical cardiology experience to advise the federal government of Australia regarding credentialing for echo in structural intervention, and served as a co-author of the 2021 ASE/EACVI international guidelines for echo in structural heart interventions. Dr. Scalia will be joined by Don Fowler for Q&A. Don is the president of Echo IQ USA. He brings more than 30 years of executive leadership in high-technology medical systems to the company, with expertise in business scale-up and commercialization. Don spent 26 years working for Siemens Healthcare, both nationally and internationally, in several executive roles. He subsequently joined Toshiba America Medical Systems, where he held roles as president and CEO and sat on the board of directors. Don's technical knowledge, industry experience, and network of leading decision makers in Echo IQ's priority channels are expected to help the company fast-track the rollout of its novel AI-backed solutions for structural heart health. So again, if you have any questions at the end of the presentation, please post them in the Q&A section. But for right now, we are off to a very exciting presentation. Dr. Scalia, I'll turn it over to you. Very good. Well, thank you very much, Chris, and it's a pleasure to be here with you folks. I'm actually sitting in Madrid right now, at a conference myself, as you can see. I have some conflicts and disclosures to make. I'm a cardiologist, as you heard, in two big hospitals, but this talk is not representing any hospital or health service. Across the services that I work at, we're scanning somewhere in the order of 75,000 cases a year, so by American standards a medium to large size service. The other thing that I should point out is that I'm a member of the steering committee of NEDA, the National Echo Database of Australia, and that's going to feature strongly in this talk.
I've done a lot of proctoring and so on for device companies in echo, and I've been a content expert for Echo IQ, although I'm not a shareholder. And I did do some training at MIT Sloan in the United States a couple of years ago about business AI. So what I did, when I wrote this lecture, was ask ChatGPT to write the lecture for me about AI in echo, just because that was exciting and fun, and everybody's talking about ChatGPT right at the moment. So I thought, well, I'll give it a go. And this is what the machine spat out and said I should say: here are some examples of AI-powered echo applications, such as automated view labelling, measurements, interpretation, image acquisition, identification for review, segmentation of valves and structures, and 3D echo. Remember that ChatGPT is a giant word finder, which actually doesn't know what it's saying, but puts together everything it has read from the entire internet and packages it in a way which kind of makes sense. In fact, I actually wrote this talk earlier this year and I'm re-presenting it today, updated. When I wrote it, this was all exciting new news, and even in the three or four months since I first thought about it, it's almost ho-hum now; we're sort of completely comfortable with ChatGPT and ready to move on to the next thing. It absolutely hit with a vengeance and now it's real. But for those of us who aren't data engineers, there's a bunch of buzzwords in this whole AI field. And I thought I'd go through each of these four terms, try to bring them into clinical medicine, and particularly into echo, and talk about how they will be useful to us as clinicians: supervised learning, unsupervised learning, deep neural networks, and natural language processing, NLP. So what's it all about? This slide was made by ChatGPT, and I don't want you to read it, but I did think the picture was nice. And it did give me the three things that I wanted to talk about: supervised learning, unsupervised learning, and a smaller subset, so-called reinforcement learning. These three sorts of learning, and in fact the first two, supervised learning and unsupervised learning, are really where we are now with these AI tools in medicine, and in particular in echo. Now, how are these algorithms developed? Well, all of these algorithms, as everybody knows, are related to big data: taking data and putting it through learning algorithms, basically giant logistic regressions, which we in medicine have been doing for years. Although when we do logistic regressions, we think about, say, three or four parameters and how they might affect outcome, age, gender, cholesterol, something like that. Whereas these big models can take hundreds or thousands of parameters, and then do many, many, many logistic regressions, just like we do with our p-values and so on in our research, but in a much more grand and multi-parametric way. Now, think about that for a second, and let's talk about NEDA for a second, the National Echo Database of Australia, because this has been fuel for massive big-data learning within echocardiography. NEDA is a cooperative, not-for-profit unit, which has accumulated 2.05 million echoes across the last 15 or so years in our country, from all of those hospitals there, from all of the states. We have a population of 25 million, and the database holds over 100 million individual numbers.
Remember, echo is very number heavy: ejection fraction, heart size, gradient, valve area, right ventricular systolic pressure. And within those 2.05 million echoes are 1.45 million people. Now, we have a unique thing in our country, and that is a national death registry, which means that for all 1.45 million of those people we know their vital status, living or dead, and if they passed away, what they passed away from and when. So this is an incredibly powerful prognostic tool at a scale of numbers never heard of before in clinical medicine. So how does this work, and how does this data pour into the four sorts of AI? Let's talk about supervised learning. Supervised learning is kind of what you're already used to from Facebook and these sorts of things, Google Photos and so on. In supervised learning, you have a relatively small number of choices, say your friends in a photo that you've taken or some other image, and the model matches these things against a small number of known knowns. And so what happens is you show the computer a bunch of apples, you show the computer a bunch of bananas, and you tell it what they are; you show it a good number, but not millions, this is hundreds or thousands of bits of data. And then when you show it a bunch of fruit, it can say, well, I recognise the yellow things are bananas and the red things are apples. So this is supervised learning. It is extremely labour intensive from the human point of view. Now, we've actually had supervised learning models for over a decade. In fact, these pictures from the Siemens Acuson, I think, were about 15 years ago. What happened with this was that one of the technical staff whom I met sat in a dark room for two years and hand-traced all of these aortas. If you look down that middle row of three pictures there, they are human hand-traced mappings of the aortic root that the machine had inside it. And I remember one of the people from the company telling me that inside this machine are 1,000 hearts, which back in those days was a quizzical thing to think about. And then when it saw another heart, another aortic root, it would try to match the one that you showed it, the index one, against its bank of, say, 1,000 stored cases, all of which had been hand labelled. So they have had supervised learning of all their pixels and then tried to approximate your data with their known knowns. Even now, Philips has this model called HeartModel, and one of the other vendors has just brought out something just like it, which again fits the pixels that you give it against a known set of pixels from, in this case, about 1,000 cases, which have been carefully mapped out. And the computer has been told that that's a left ventricle, LV, and that's a left atrium, LA. So the computer matches the pixels that you give it against a relatively small set of known knowns. So what's the point? Well, the point about this is that this is really old news; it's reinforcing old news. It's not something you didn't know. In fact, you had to know it to tell the computer what the supervised data meant in the first place. The other thing about supervised learning is a real risk of bias. That is to say, if you feed it, say, 1,000 apples and two bananas, the chances that it's going to get it right are obviously skewed towards the input data. And because the numbers are relatively small, the concern regarding bias in supervised data is obviously very real.
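For readers who want to see the mechanics, here is a minimal sketch of supervised learning in the sense just described: a handful of human-labelled, made-up echo measurements are used to fit a simple logistic regression, which then classifies a new study. The feature set, the numbers and the toy model are all invented for illustration; this is not Echo IQ's, Siemens', Philips' or anyone else's actual model.

```python
# Minimal supervised-learning sketch: human-labelled examples in, labels out.
# All values are invented for illustration; this is not any vendor's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [peak AV velocity (m/s), mean gradient (mmHg), aortic valve area (cm^2)]
X_train = np.array([
    [1.2,  5, 3.0],   # normal valves ...
    [1.5,  8, 2.6],
    [2.6, 25, 1.6],
    [4.1, 45, 0.9],   # ... through to severe aortic stenosis
    [4.5, 52, 0.7],
    [5.0, 60, 0.6],
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # human-assigned labels: 1 = severe AS

# The "supervision" is the human labelling above; the model simply fits it.
model = LogisticRegression().fit(X_train, y_train)

new_study = np.array([[4.2, 48, 0.8]])   # an unseen echo, values invented
print(model.predict(new_study))          # expected [1], i.e. severe AS, on this toy data
print(model.predict_proba(new_study))    # class probabilities
```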
And let's just go across to clinical for a second, and let's talk about aortic stenosis, because that's really what this lecture, the software, and these whole algorithms are about. Aortic stenosis is about that valve on the left, the trileaflet aortic valve, which opens and closes in systole and lets all blood go out of the heart, and which progressively gets more and more sclerotic, calcified, chunky. If you look at that right-hand picture, you can almost not even see an orifice there, and that would be critical aortic stenosis. And if you look at the standard transthoracic pictures, which are the fuel for the algorithms that we're talking about, in that bottom left picture there you can see that valve barely opening compared to the normal one as the index picture. And if you look at this short-axis picture, three leaflets there at the point of that cursor that says aortic valve, again barely opening compared to that normal one in the top right corner. So this is the problem of aortic stenosis. 100% of all blood that goes out of this heart has to go through that aortic valve. And I tell my patients that when you've got aortic stenosis, it's like driving your car around with the park brake on three clicks. It's a fixed resistance; 100,000 times a day, every beat, the heart has to push against this fixed obstruction. Now, luckily for us in echo, the amount of obstruction to flow is directly related to the speed of blood. Yours and mine, blood going out through the aortic valve, goes at about one metre per second. And that correlates, by the so-called simplified Bernoulli equation, with the gradient across the valve, which is four times that velocity squared. So you and I have a gradient across our valves of about four millimetres of mercury, and we have a valve area, that's the orifice, if you look at that top right picture there, of three square centimetres, which is about the size of an American quarter. Now, if you have significant aortic stenosis, all of the blood still has to get out that doorway, and to get it out through such a small orifice, it has to go a lot faster. In critical or severe aortic stenosis that's said to be four metres per second or more, that is, four times faster, because the orifice area is roughly a quarter of what it was, less than one square centimetre. And that is easily accessible with Doppler and the velocities that Doppler gives us. So the things that we measure are velocity and its correlates: the gradient, four v squared, and a valve area of less than one square centimetre. And just to make things slightly more complicated, doctors talk about two different sorts of gradients: peak gradient, which is that four v squared, four times four times four, which would be 64, and mean gradient, which is the average gradient across systole, where anything more than 40 is severe. And I'm sorry that there are two different words; it's actually a remnant from the days of the cath lab, where pressure was measured with catheters at angiography. The key formulas are sketched below. Now, we were taught from the 1990s onwards that it was a relatively safe thing to live with anything short of severe aortic stenosis, based on this data, extremely esteemed data by the doyens of cardiology, whose names you'll recognise there, two from the United States and one from Europe. But look at the total number, I think if you add it up it'll be just over 1,000 people, and the data is from 30-plus years ago.
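For reference, a sketch of the formulas behind those numbers, in the usual notation. The worked values simply re-derive the figures quoted above for 1 m/s and 4 m/s jets, and the continuity equation shown is the standard textbook way a valve area is derived from Doppler measurements.

```latex
% Simplified Bernoulli: peak gradient (mmHg) from peak jet velocity (m/s)
\[
\Delta P_{\mathrm{peak}} \approx 4\,v_{\max}^{2},
\qquad
v_{\max}=1 \Rightarrow \Delta P \approx 4~\mathrm{mmHg},
\qquad
v_{\max}=4 \Rightarrow \Delta P \approx 4\times 4^{2}=64~\mathrm{mmHg}
\]

% Continuity equation: flow through the LVOT equals flow through the valve,
% so the effective aortic valve area (AVA) follows from measured quantities
\[
\mathrm{AVA}
= \frac{A_{\mathrm{LVOT}}\,\mathrm{VTI}_{\mathrm{LVOT}}}{\mathrm{VTI}_{\mathrm{AV}}},
\qquad
A_{\mathrm{LVOT}} = \pi\left(\frac{D_{\mathrm{LVOT}}}{2}\right)^{2}
\ \text{(assuming a circular LVOT)}
\]
```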
Now, one of the things that's important to remember about this, when you're talking about what you do with aortic stenosis, is that when this data was all created, all you could do was full open-heart surgery with a cut down your chest. So remember, the hurdle was higher to do something. And what we were trying to tell ourselves, and I believe this, and I think none of us would deny the data, is that when open-heart surgery was all that you could do, what you wanted to show was that it was safe to wait. And these studies showed that it was safe to wait unless things started to go badly, and by badly that meant things like chest pain, fainting, profound breathlessness and edema, the four cardinal symptoms where things were going badly and you had to move from observation to surgery. Now, our first big foray with NEDA we presented at the ESC in Paris, the European Society of Cardiology, where Geoff Strange and David Playford, who are the leaders of this study, presented data from a supervised model from NEDA of prognosis in aortic stenosis. So it's not new news. This is old news. We just told the computer what to look for, and we didn't do anything other than look at the standard numbers: four v squared of that velocity, mean gradient and valve area. And what NEDA found was, of course, that if you had no aortic stenosis or mild, versus moderate or severe, your survival, and look, this is survival, not just whether you had a bad day or whether you had to have surgery, was profoundly influenced by these lesions. But more importantly, what we found in NEDA, as we often have with NEDA when we look at things, is that as you go from no stenosis to mild stenosis to moderate stenosis, the risk of mortality goes up and up and up. But look at this. This data showed us that there was no difference between severe and moderate aortic stenosis, the exact opposite of how we practised for the last 30 years, which assumed there was no penalty for living with moderate, as opposed to severe, aortic stenosis. But this wasn't new news. It was just massive data reinforcing the things that we thought we knew in the past, using the exact same rules, the exact same cutoffs, the exact same criteria, but with a population base here of, I think, about 500,000 people. So that was supervised. Let's go across now to unsupervised, which is the real powerhouse of how AI actually works. So what is unsupervised learning? How does it work? Well, in unsupervised learning, you throw the computer millions of data points, not hundreds or thousands, but millions. And somewhere in that data will be a hidden meaning, which you wouldn't even know, no human could know; the computer doesn't even know how it knows. It just knows that there's a meaning. And how does it do it? Well, what it does is this, and this is probably the pivot point of this talk: unsupervised learning does clustering. It looks at a bunch of parameters, which are drawn as coloured dots here on XY coordinates. This is not cardiology; it could be anything. But the computer tries to find clusters of like-minded things. And once it's found a cluster, pixels in this example which looked the same, or, to go back to our fruit analogy, you show it thousands or millions of pictures of fruit or millions of pictures of cars or whatever, and the computer finds a bunch of red things and it finds a bunch of green things.
And then content experts like me come in and tell the computer that cluster is apples and that cluster is melons. Or if it was cars and you showed it millions of cars and it found a bunch that looked kind of the same, you'd say, well, they're Toyotas and they're Mercedes, something like that. So clustering will find groups of things that look the same (a small sketch follows below). It doesn't know why they're the same, and it certainly doesn't know what they are; it still requires humans as content experts. But it's found a bunch of alike things. And the benefit is that no human could have the breadth of vision to determine this bunch of alike things. Nobody looking at an echocardiogram, or population data, or, I don't know, things from the stock market, would have enough breadth of vision to have this clustering in their mind the way the AI does. I might say that doctors who've been practising a long time sort of do a version of this. For example, I do remember an old doctor mentor of mine telling me that he felt that ventricular septal defects must close in adulthood, because you never see 80-year-olds with ventricular septal defects, and yet a lot of people in their 20s still have these holes in their heart; presuming they're not dying, the defects must be closing up. So that's sort of unsupervised learning at a human level, as opposed to an AI level, but it's an observation that nobody looking at cases up close could have got. So what does this tell us? Well, this tells us new news. Supervised learning was old news, better perhaps, but unsupervised learning tells us new news, things that we could never know. And in fact, slightly concerningly, we don't even know how it knows. It just knows the clusters. Now, echo in particular is a very measurement-driven art, or science, or clinical test, if you like. We generate numbers like mean velocity, peak velocity, aortic valve area, velocity time integral, left ventricular outflow tract diameter, and so on. Lots of numbers. And we were all amazed to see the computer doing this smoothing when we first got the data back from the engineers, how it found trends within millions and millions of pixel points. So let's go forward a little bit, and then we're going to loop back to supervised learning. How is this unsupervised learning done? As I said before, these are lots and lots of logistic regressions trying to find a fit, a curve of best fit, but no simple regression, or even the so-called multiple logistic regressions that we do in science, is going to cope with this. This is where deep neural networks come in. I'm not an engineer, but my understanding is that these neural networks take a bunch of input data, which could be pixels, could be numbers, could be ejection fractions, and then pass them through multiple hidden layers of 'if this, then that; if this, then that', which can be 20 layers deep and can feed back to each other. For example, the pixels of the dog versus the pixels of the cat go through many different filtering levels so that the computer spits out the right answer. So this is how, as I understand it, deep neural networks allow the computer to do the massive computations of unsupervised learning.
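Here is the small sketch promised above: a toy version of that clustering step, in which k-means groups a handful of invented echo-like measurements into two clusters that a human content expert then has to name. The features, values and cluster labels are all made up for illustration; real phenotyping work uses far more parameters and more sophisticated methods than this.

```python
# Toy unsupervised-learning sketch: the algorithm finds clusters,
# and a human content expert then decides what each cluster means.
# All numbers are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [mean gradient (mmHg), ejection fraction (%), LA volume index (mL/m^2)]
echoes = np.array([
    [ 8, 62, 24], [10, 65, 26], [ 9, 60, 28],   # one group of alike studies
    [45, 48, 44], [50, 42, 48], [42, 45, 46],   # another group of alike studies
])

# No labels are given: k-means simply looks for two groups of alike studies.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(echoes)
print(clusters)   # e.g. [0 0 0 1 1 1]; the cluster numbering itself is arbitrary

# Only now does a human step in and name the clusters, e.g.
# "normal-like phenotype" versus "severe-AS-like phenotype".
```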
But that brings us to this, and this is the second most important thing I'll say today: the concept of the phenotype. Now, phenotype is a word which we've borrowed from the geneticists. When you talk about genetics, you talk about the genotype, that is, what chromosomes you've got, and the phenotype, which is how you manifest. For example, you might carry the gene for hypertrophic cardiomyopathy, but you might not express the gene. So you're genotype positive, but phenotype negative. Phenotype is the expression. Now, what we've done in NEDA, and what many of the other big AI machine learning algorithms have done, is find a phenotype, in this case of aortic stenosis. And I can remember David Playford speaking in 2019, in fact it was actually 2018, at a conference, I think in Atlanta, where David got up and said that his algorithm could detect aortic stenosis without Doppler. The number one thing that makes the diagnosis of aortic stenosis is the Doppler, that four v squared thing I showed you before. But the computer was so good at detecting the phenotype of aortic stenosis patients, that is, thickness of the heart, ejection fraction, right ventricular systolic pressure, left atrial size, diastology, multiple other parameters, and remember, echoes generate about 100 parameters each, multiple parameters put into a model, that when the machine looks at a study, even before you've turned the Doppler on, it goes: there's a 95% chance this person has aortic stenosis. And this was a radical concept, generated by unsupervised learning, by cluster recognition of the things that cluster around somebody with a severely stenotic aortic valve. And to me, this is the place where AI has the value. It's got more things in its vision than I could possibly have. As somebody who's been doing this for 25-plus years, it's got more things in its vision of the things that go with severe aortic stenosis than any human being could have. Now, there are challenges to this. One of the big challenges is that we know what we want, and the algorithms are basically curves of best fit, and there are phenomena within the data engineering schools of thought about overfitting and so on. But just to give a cute example to teach on, let's have a look at this picture of red and blue pixels. That actually is a ventricle in blue and the ventricular wall in red. Now, you and I do this every single day when we look at echoes: you see black is the chamber and the white speckles are the muscle, and our eye can easily pick the difference. But in fact, again, having been doing this for a very long time, finding that edge is extremely difficult for a computer. So we've had all kinds of attempts at curve-fitting algorithms to try to find the line of best fit. But the harder you try, the more likely you are to overshoot. And so the concept of reinforcement learning is basically rewarding the algorithm when it's closer to the line and giving it negative feedback when it starts to veer away wildly. Lastly, NLP, natural language processing, is the obvious thing that Google has given us when we speak to Siri or Google or Alexa and they hear our voices, and things like Dragon Dictate and all these other software packages which can understand the spoken word and turn it into something interpretable. Now, there are lots of things that use this. There are lots of bots that you talk to when you're dealing with your insurance company on their webpage. All of us have dealt with this all the time. But why it's relevant to medicine, and particularly to echo, is that many things that we do don't have numbers.
So whilst aortic stenosis is actually pretty good because it's got quite a lot of numbers, there are words like this: there is diffuse thickening of the trileaflet aortic valve with reduced leaflet excursion. Now I read that and I see number C up there. If I read the words 'there's a bicuspid aortic valve with a raphe', I'd see number B. But there are no numbers for that, and it's almost impossible to imagine how an algorithm could work with that. But NLP could turn that into a number, or a categorisation, a type B or a type C, so at least we can think about it. Or even composite numbers, for example the idea that if you've got a small valve area per metre squared of your body size, that's a worse problem to have. And doctors describe that picture on the right there, and regurgitation lesions like that in particular, with words: there is grade 0-1/4 aortic regurgitation. I mean, what is that? And yet that is completely how we write about regurgitation, that is, leaky valves like you're seeing there, not stenotic ones. 100% of regurgitation, or actually I'll take that back, 98% of regurgitation in echo reports is textual, not numeric. So NLP will come into its own when we start to look at those sorts of lesions, particularly the regurgitant lesions. So let's talk now about how echo can be assisted by AI at the present time, not going forward. Where are we right now? Certainly this is a hot research topic, and when you go to conference, I'd say nine-tenths of the late-breaking trial type material in echo is about AI. Now, this is Jae Oh. He's the father of echo, or one of the fathers of echo, the esteemed author of the textbook that we all learn from, and a good friend of mine from the Mayo Clinic, and in a recent review article he told us that he felt these are the five stages where AI could help in echo: view labeling and quality control, segmentation of chambers and structures, measurements, diagnosis, and reporting. Let's go back to the beginning here. Now, Jeffrey Zhang in 2018, and I really liked this paper, actually tried to do this: he took some pixels, fed an echo into the machine, got it to figure out what view it was looking at, what pixels, what chambers, detect edges and fit them to models, and then tried to make diagnoses like left ventricular hypertrophy, amyloid, myocardial infarction, and so on. He took bunches of pixels like this, cluster-analysed them, got the machine to figure out whether it was looking at a long axis or a four-chamber or a short axis or whatever, and then tried to give us some results. And this was, I think, groundbreaking, and the beginning of pixel reading. He was able to get the machine to identify chambers, identify views, and tell us things like wall thickness, diastolic function, and some diseases. And for 2018, when this was done, this was a revolution. Now, that's dealing with pixels that have already been obtained, but this sort of hardware, the Caption Health device, which my friend Jim Thomas has been working on, actually sits upstream from that, helping people with variable levels of training to actually acquire the data, because remember, it takes over a year to train a sonographer or a doctor to acquire echo. It's a very user-dependent technology. But this machine can actually guide your hand in real time with that bar graph there where the green tick is, and tell you when you're getting warmer, warmer, warmer. Got it.
And when you've got the right image, with the probe in the right place, it presses the Save button. So already there are tools allowing a broader application of these devices by different levels of users, in things like COVID when we were in a situation that was almost like a combat zone. So this is already here. So we've got the pixels. The next thing we do is the measurements. How good are these machines at getting the measurements? Because as I said, there are about 100 numbers in your average echocardiogram. I was delighted, actually, to be giving this talk today, because this email came to me this morning from Dr. Mor-Avi and also Roberto Lang, also a doyen of echo, about a study where many of us went to a conference one time and reported, I can't remember exactly how many, but something like 50 echoes. I remember spending a good part of one afternoon doing this at a conference in Seattle. So these were measurements from content experts put against the AI pixel-recognition measurements of things like wall thickness and Doppler. And in fact, it was extremely impressive how accurate the computers were compared to this very esteemed group of senior echo doctors from around the world. And that's hot off the press today. So now we've acquired the pixels, we've taken the measurements, either manually or by some sort of sophisticated computer analysis or algorithmic software, but what next? The software we're going to talk about today basically deals with the latter: let's assume we've got the measurements, and we want to then use the AI to help us make a diagnosis, confirm a diagnosis, quality-assess the diagnosis. So downstream, once we've already got the measurements. And let's go back now to the thought of phenotyping. Partho Sengupta, our friend who was in New York at the time, has told us about his read on the phenotype of severe aortic stenosis. What Partho did was generate these graphics, which, if I just zoom in, show that he analysed four parameters. Aortic valve area, that thing that's supposed to be three square centimetres, and if it's very bad, it's down to one square centimetre. The ejection fraction, the power of the heart: is the heart getting weaker or is it still full power? Remember, full power for a heart is 60 or better on that scale, and a weak heart might be 50, 40, 30, 20. The mean gradient, so remember we talked about the gradients: peak gradients of 64 or more are severe and mean gradients of 40 or more are severe. And the stroke volume index, which reflects the amount of blood going out of the heart with every heartbeat, the concept being that if a heart is starting to fail, whereas normally a heart would generate 70 or 80 mL per beat, more than 50, if it's going down below 30, then it's a failing heart. So those four parameters: the severity of the valve area, the severity of the valve gradient, the ejection fraction of the ventricle and how much blood is coming out of the heart. And when he analysed those together, he was able to find a phenotype, presumably the worst end of each of those four things, which he called high risk, versus people, in green there, who are at the better end of all four of those parameters. And he was able to clearly show that this AI-generated phenotype predicted how long it took before you had to go to, in this case, open-heart surgery. Mark Lachmann did the exact same thing with many more parameters, not just four; he actually looked at this many parameters.
I'll show you the parameters in a second, but he looked at 2,500 people having TAVR, that's transcatheter valve replacement rather than open-heart surgery. And what he did was look at all these parameters. There are too many to talk about, but it's a lot. He just fed them into this multi-parametric, unsupervised learning engine, and was then able to generate four clusters, exactly as we said. And depending on which cluster you were in: in one or two there, yellow or blue, which is basically the good end of the stick for all of those things, you had a very good recovery of heart function after your TAVR and therefore good survival; and if you were in the third or fourth cluster, that is the bad end of those parameters as deemed by this complicated curve-fitting algorithm, you obviously had a much poorer outcome and a much poorer recovery of heart function after your procedure. So these multi-parametric unsupervised models can pick people who are at higher risk, and they can pick people who are at higher risk of not recovering after they have the job done. So Geoff Strange and NEDA at ESC last year presented the NEDA version of this, where NEDA of course doesn't require individual follow-up because we've got death registry follow-up. So we're not talking about whether you recovered from your heart surgery; we're talking all-cause mortality now. At that stage, we were up to a million echoes, give or take, in 630,000 people. We trained a model to look for high-risk, medium-risk and low-risk phenotypes in about 440,000, and then retested it in a test cohort. And as with NEDA, what we found, as we have found almost every time we've done anything in NEDA, is that you get these divergent curves, which are very clearly different: the low-risk group, the people with the better end of all of the parameters, in green there, and the highest-risk group with the bad end of all the parameters, in the purple curve there. And the odds ratios for a bad outcome were phenomenally bad: 1.8 times worse, that's 80% worse survival, if you're in the moderate group, and 180% worse, an odds ratio of 2.8, if you're in the severe end. So the algorithms, just like Partho's and Lachmann's, can pick people who have, if you like, a much more severe version of the disease, which is not just the gradient; it's the ventricle, it's the ejection fraction, it's the diastology and so on. So this year, Geoff Strange and David Playford presented at ESC, we were in Amsterdam, where they took that algorithm, which I believe is what's in the Echo IQ package, and ran it at the two big teaching hospitals in Sydney and Melbourne, both called St. Vincent's, on 21,000 or so people who had echoes, whittled down to 9,189 people who were appropriate, taking out things like people who had already had heart surgery or didn't have enough values. In fact, the minimum data set you needed for this algorithm was height, weight, ejection fraction and aortic valve Doppler. And within that, everybody had routine echo reporting, because they were routine echoes. Within that, 218 of the 9,000 or so people, or 2.4%, had severe aortic stenosis. But then using the algorithm, they upconverted from those 218 to a total of 376 people who had the high-risk phenotype as per the algorithm generated a couple of slides ago. And taking that forward, if it was just human reporting, which is how we practise, 218, or 2.4%, of people had severe AS.
But on a revision of the data by the sophisticated algorithms, asking not only whether they are severe but whether they have the severe-risk phenotype, that upgrades from 218 to 376. And if you look down that flowchart on the right there, they've modelled a sort of imaginary 20,000-person cohort and how that would change. But the bottom line is that the AI-driven, risk-based algorithms will detect more folk, and in particular, interestingly, detect more women, who seem to be underrepresented in the good old-fashioned clinical diagnosis, and will actually detect more people who should go to heart surgery: 2.1-fold more women, 1.6-fold more men. Now you might say, gee whiz, I don't need any more work. I don't know that that's exactly the point here. I think the point is you don't want to be letting people slip through to, as you'd say, the catcher, or as we would say in cricket, the wicketkeeper, because we just didn't have enough tools to pick out all the features that would make somebody a high-risk person who could be a candidate for intervention. And in particular, you might use this sort of tool as the so-called physician prompt, to say that when they come back for their appointment next year, because typically these people have annual or six-monthly appointments, we're a lot more vigilant, particularly in that subgroup of people who had their diagnosis upgraded by the algorithm. So the algorithm detected 72% additional patients with severe features, and that holds even after adjusting for all other things, because a lot of folks say, well, maybe they're older or fatter or weaker or whatever; it's not that. These algorithms take that into account, and that's the great thing about huge data sets: they wash out a lot of those confounding variables. And it's particularly relevant in women. So as you do clinical echo, I think the fear factor here is that we're only seeing what our standard old-fashioned parameters show us, which basically are Doppler and a couple of other things. By going to a phenotype read, we'll pick out people who, if they don't need to go to surgery or have something intensive done, certainly need to be more intensively looked at. And I think that's where the aid of this physician prompt will be. Now, I'm not really going to talk about the software; we've got Don online who can teach us a little about the software, it's not really my thing, but the software is something that works as a plug-in, as I understand it, to most echo management systems. Lastly, second-last slide: natural language processing, which we just talked about before. Word-based or textual report data in echoes is the bane of big data analysis for echocardiography. And I'm slightly embarrassed to say, as a leader in this field for a very long time, that only 2% of scans, even in NEDA, anywhere you go, actually have quantitation of regurgitation lesions. Stenosis lesions are exquisitely quantitated; in fact, there's virtually no place for angiograms now in the quantitation of stenotic lesions, echo is so good. But in regurgitation lesions, it's all words. 'There was moderate regurgitation.' Now, if you put that into a number, you might say that's a grade two leak. 'There was moderately severe regurgitation', that's grade three. I've seen written 'there was moderate to moderately severe regurgitation'. I'm not sure what that is, two point something. Now, NLP, theoretically, is ideal for disambiguating all of this stuff.
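As a very rough sketch of what that disambiguation could look like, here is a toy keyword-to-grade mapping in the spirit of the grades just mentioned. A real NLP pipeline would be far more sophisticated, and the specific numeric convention below, including the 2.5 assigned to the in-between phrase, is just an invented example.

```python
# Toy text-to-number sketch for regurgitation severity.
# The phrase-to-grade mapping is an illustrative convention, not a standard.
import re

GRADES = [
    (r"moderate\s*(to|-)\s*moderately severe", 2.5),  # "two point something"
    (r"mild\s*(to|-)\s*moderate", 1.5),
    (r"moderately severe", 3.0),
    (r"severe", 4.0),
    (r"moderate", 2.0),
    (r"mild", 1.0),
    (r"trivial|trace", 0.5),
    (r"\bno\b|none", 0.0),
]

def regurg_grade(report_text):
    """Return a numeric grade for the first severity phrase matched, else None."""
    text = report_text.lower()
    for pattern, grade in GRADES:   # compound phrases are checked before single words
        if re.search(pattern, text):
            return grade
    return None

print(regurg_grade("There was moderately severe mitral regurgitation."))  # 3.0
print(regurg_grade("There was moderate mitral regurgitation."))           # 2.0
```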
And David Playford presented again at ESC in Amsterdam two weeks ago the NLP read of mild, moderate and severe mitral regurgitation. And as with all NEDA things, we've shown that there is a very obvious gradation of risk from mild to moderate to severe mitral regurgitation. I expect that there will be automated algorithmic software to deal with this coming very soon. So my summary is like this. Valve AI is already here and it's ready to go. It's not just something that we'll keep talking about and talking about. My experience of echo, and probably medical science in general, is that the uptake of a new technology will be directly proportional to its incremental value. Those of you who know echo might remember the concept of harmonic imaging. Back in the old days, in the 1990s, echo was extremely blurry, and you were lucky sometimes to even see where the heart was. And then this button called harmonics came along and it just worked. Its uptake was overnight; 100% of people just wanted it straight away, because it worked. On the other side of that same conversation, speckle-tracking strain has been available for nearly 15 years. But it added such a small incremental value, at least that's what we felt, to how we treat patients that it's really taken 10-plus years for people to adopt strain, and it was only when it showed a use in cardio-oncology and chemotherapy that it got a life at all. So the incremental benefit will set the rate of uptake. Small-model supervised learning tools are commercial, working, useful, available, and we have them right now. They will calculate the ejection fraction off your pictures. They will curve-fit against known models within the system. Large-model phenotype algorithms have now reached the clinical arena. And what you're seeing here, from NEDA through into these commercial applications like the one you're about to hear about from the folks today, these things are right here right now and ready to go. And what will they do? Well, I think they'll help the yield, they'll bridge the gaps in the data, they'll interpolate when somebody doesn't have all the elements of a phenotype because you couldn't get some pixels, and hopefully improve the diagnostic yield and accuracy. And therefore they will democratize echo, because echo is extremely user dependent; as I say, cardiac ultrasound is a difficult technology, not something where you just buy a machine and turn it on. It takes years of training, and even with that, there's lots of inter-observer variability. Hopefully these large AI models will help more people be able to do better echo. And these things, I think, really are ready to go now as AI physician prompts. And so with that, I'll thank you for listening to my talk. Thank you very much, Dr. Scalia and Don Fowler from Echo IQ. Again, we're proud corporate partners with Echo IQ. And we do have a few questions. For those that are joining, the chat section has the slides, so if you'd like to download the slides, you can go there to get them, and pose your questions in the Q&A box. I do have a few here. How can this EchoSolv AI find more patients if it's finding in-guideline cases? So there are two parts to the software, and Don, you might want to elaborate. But as it was taught to me, one is the in-guideline part, which is basically the supervised learning model, like I said. And how it would find more cases, if it's basically just applying the rules, is that it means you didn't apply the rules. And we are all guilty of those crimes.
And in fact, when I've actually audited myself and looked at some of these things, I say, that looks severe to me, but I called it moderate because some parameter was slightly low or something; it just didn't quite fit what I felt at the time. So the supervised learning model, and Don, you can tell us what your version of that's called, basically is much more rigid and rigorous in applying the actual rules. The unsupervised model, the other package, doesn't do that. It's not looking at the rules as you know them; you don't know what the rules are that it thinks it knows. It's phenotyping. And there are many, many parameters that go into the wash to generate high-risk patients. These are not guideline patients. These are high-risk patients for whom it's to be determined whether or not going sooner to surgery is the right thing to do. Don? Yeah, Professor Scalia, you're exactly right. You know, first and foremost, at the highest level, we use all the possible computations of the equation variants to find the in-guideline cases. And what we've found is that some of these are direct measurements, aortic velocity and mean pressure gradients, like you've talked about, but things like AVA are actually calculations from other measurements. So we have the continuity equation that I think everybody on this call is probably familiar with; it's used in clinical practice, and it uses the area of the LVOT. But what we have found is that it makes a certain assumption, that the LVOT cross-section is actually a round shape. And in fact, what we know is that's not the case, right? It's more elliptical. That's not typically seen in a 2D echo. So what we do, by taking a look at all the information that's available to us, including 3D echo measurement data, is adapt the formulas to account for an elliptical LVOT shape, for example, which yields a better result (there's a rough numerical sketch of that point below). And because of that, we're able to take advantage of all the measurements that we have at our disposal, and look at all the different permutations of making these calculations, to make sure something doesn't fall through the cracks. And then, you're right, as it relates to EchoSolv from an AI perspective, we ignore the LVOT data in total. We're able to train this on a massive data set; we're not talking, as you said earlier, about hundreds, but millions of pieces of data. We're able to train the system that says, look, this is what an AVA of less than one square centimeter looks like; now take a look at every other parameter, and look at it hundreds of thousands of times, and understand what's taking place inside this particular patient, to come up with the right phenotype. So looking at and taking advantage of these massive data sets is something that humans alone can't do. So when people say, are we replacing people? No, we're not replacing people. We're enhancing people's skill sets, right? It's collaborative intelligence; it's not even artificial, in my opinion. We're working together to help give you the best pieces of information so that you can make the best diagnosis and come up with the best treatment paths for your patients.
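For readers, here is the rough numerical sketch of that circular-versus-elliptical LVOT point, using the continuity equation. The measurements are invented, and the elliptical variant shown is simply the textbook area-of-an-ellipse substitution, not Echo IQ's proprietary adaptation.

```python
# Continuity-equation sketch: AVA = (A_LVOT * VTI_LVOT) / VTI_AV.
# Compares the usual circular-LVOT assumption with an elliptical one.
# All measurements below are invented for illustration.
import math

vti_lvot = 22.0   # cm, LVOT velocity-time integral
vti_av   = 110.0  # cm, aortic-valve velocity-time integral

# Circular assumption: one 2D diameter, area = pi * (d/2)^2
d_lvot = 2.0                               # cm
a_circle = math.pi * (d_lvot / 2) ** 2     # ~3.14 cm^2

# Elliptical assumption (e.g. from 3D echo): two diameters, area = pi * a * b
d_major, d_minor = 2.3, 2.0                          # cm
a_ellipse = math.pi * (d_major / 2) * (d_minor / 2)  # ~3.61 cm^2

ava_circle  = a_circle  * vti_lvot / vti_av
ava_ellipse = a_ellipse * vti_lvot / vti_av
print(f"AVA, circular LVOT:   {ava_circle:.2f} cm^2")   # ~0.63 cm^2
print(f"AVA, elliptical LVOT: {ava_ellipse:.2f} cm^2")  # ~0.72 cm^2
```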
Thanks, Don. I do have another question. When it comes to training data, what kind of requirements would you look for in an AI? Well, I'll speak first, and this is going to be a lot of repeating what you just heard from Professor Scalia, but the data that you build this information on is critical. And it's not just the data, but also the methodologies used to train it. So we learned a lot today about the different types of training, and that becomes critical. What we also know is that AI is not equal across all data sets. So if you want to understand prognosis and risk stratification better, you want to have data that's specifically linked to mortality, which is critical. And that's, I think, the key advantage and something that really sets us apart: our relationship with the National Echo Database of Australia. You also need to make sure that the data is unbiased, and we see how unconscious bias affects our healthcare systems today. We saw a great example earlier of women being underdiagnosed versus men; we also know the same thing happens with race. So you want to be able to collect all this data, you want to know how the machine is learning this process, and then you want to make sure that you're training with maximum computational capacity. So again, we're talking about large, large data sets. You do all these permutations hundreds of thousands of times, and you take a look at this data and you look at all the variables, not just what's taking place at the valve, not just what's taking place in the ventricles, but the entire heart. You build these multidimensional models and you keep on training the system hundreds of thousands of times. The science piece is essential. So if you're going to use AI, you want to make sure that you've got AI that gives you all the computational and data power that you have access to. Don, I've got a question. When you do this training of the model and you've got two million echoes and you put them into the system, is it like you turn it on and go away and get a cup of coffee and come back, or do you come back in a week? How long does it take to do a run on one of these things? Great question. If you're talking about a specific case where we actually send data up, let me just give you a quick description. As you had shown in a model, we've tried not to change anybody's workflow. So you acquire your data set like you normally would on your echo cart, or even a portable device. You take your measurements, whether they be manual or whether you use an AI tool to obtain your measurements. Those measurements are then typically sent to a PACS or a reading station. At that point, we have a very simple and light application that watches for when a new patient comes into the PACS system. We send just the measurement data, not the images, up to our cloud. It takes one second, because it's just data, and then it takes two seconds to process it and send it back. So when you talk about the power of computing: to take all of this information, do that in two seconds, and then give a report back to the cardiologist that assists in making the right determination, I think that's the power of what we're doing. That's good. But the question I actually had was, when you were building this thing in the first place, does it take months? You know, have you got rooms full of machines generating heat? Do you have to have a nuclear reactor hooked up to the room full of computers? I mean, how much horsepower does it take to do this? It was a tremendous amount of horsepower. And I will tell you that we also had engineers assigned to try to figure out how to break it. Right.
So you develop these systems and then you say, now what's the breaking point for the system? When does it not perform? And we spent six months; it wasn't just a few weeks. We actually spent six months with an engineer trying to figure out: if I hold back this piece of information, can you still get to the right answer? If I hold back that piece of information? We did this for six months until we found a breaking point, at which we said, okay, now we're comfortable that we can make the appropriate diagnosis with the information that we're given. One more good question that I think is top of mind for a lot of people: I have so many patients already; I'm more concerned about making sure I'm seeing the most urgent cases the soonest. Can AI help me prioritize patients? Certainly. I was at a CT conference not long ago, another hat that I wear, and the CT people were telling me that a hundred percent of their scans now go into an AI algorithm that floats the people with something bad to the top and sinks the normal ones to the bottom. And all of us will have had the experience where some bad thing was scanned this morning and you don't get to see it until nine o'clock tonight, just before you go home, and it's a dissection or some other really bad thing. So that's called triaging, obviously. Now, I think this is a kind of triage, so that if you've got somebody where the machine says high risk based on the algorithm, and they've got maybe actual severe AS, or borderline by the old-fashioned guideline, that floats that person to the top. It's a triage tool. It puts them top of mind: either you do something now, or you get them back not in 12 months but in six, or you float them to a higher priority in your mental thinking. Is it going to create more patients? Well, patients are patients, and most people with aortic stenosis are going to come to intervention sometime if they live long enough. But I think using the algorithm to float people to the front of the line, front of thinking, front of mind, that's where the value is. I agree. And I also think it acts as a good negative predictor as well, right? We not only want to catch those critical cases, but for the thousands of cases that aren't critical, we also don't want to get them mixed up, right? So it acts as a very good negative predictor, and that's what some of our clients have said as well. One other thing that we talk about from a triage standpoint: one of the features that we have available, and that's entirely customizable by our clients, is that we can send email alerts or text alerts, either to your structural heart team or to your physicians, that say your patient has been identified by the computer as having in-guideline severe aortic stenosis, as an example. We can send that as a text message, and we can batch-send those out. So we even try to help with that process as well, to bubble up those most critical patients. And that's something that's easily customizable to whatever alert system you like, or not. Well, thank you very much. That's all we have for the questions. Don, any closing comments or contact information in case any of our members would like to get in touch with Echo IQ? Well, thanks, Chris. And first of all, let me personally thank Professor Scalia for taking his time today, especially with the time zone challenges. It was very, very informative.
At Echo IQ, we're very excited about having the opportunity to bring this to commercialization here in the United States. You can contact us through our website. And for MedAxiom customers, there's a direct link on our MedAxiom profile, so just click on to us. You can also reach me on my cell phone, and I'm happy to give you that right now: it's 714-290-9568. You won't go to a machine; you'll just come to a human. So by all means, feel free to reach out if there's anything I can do for you. All right. With that, the Echo IQ team, Elizabeth and Don, thank you very much. Dr. Scalia, thank you very much for your time as well. We've been recording this and will be sending it out. And with that, we'll conclude the presentation. Thank you very much.
Video Summary
Valve AI, powered by Echo IQ, is a technology that utilizes artificial intelligence (AI) to assist in the analysis and interpretation of echocardiograms. The AI algorithms can help in various aspects of echocardiography, including view labeling, measurements, diagnosis, and reporting. The technology is designed to provide more accurate and efficient analysis, allowing clinicians to make better decisions in patient care. The AI models are trained using large datasets and deep neural networks, which enable them to identify patterns and clusters that may not be apparent to human observers. For example, the AI algorithms can detect the phenotype of severe aortic stenosis, which includes parameters such as aortic valve area, ejection fraction, and mean gradient. By analyzing these parameters together, the AI can identify patients at higher risk and help guide treatment decisions. The Valve AI technology is ready for clinical use and can be integrated into existing echocardiography systems. It offers the potential to improve diagnostic accuracy and patient outcomes by providing clinicians with additional insights and support in their decision-making process.
Keywords
Valve AI
Echo IQ
artificial intelligence
echocardiograms
diagnosis
patient care
AI algorithms
severe aortic stenosis
treatment decisions