Instilling Trust in AI

Rama Chellappa

Rama Chellappa, PhD, Johns Hopkins University Bloomberg Distinguished Professor in electrical, computer, and biomedical engineering, and co-author of “Can We Trust AI?” looks at the promise of AI in health care and how we can best utilize this extraordinary tool to save lives and improve health equity.

Artificial intelligence (AI) has become one of the most exciting and promising technologies in health care and MedTech, thanks to its ability to sift through huge volumes of data and quickly detect patterns. As commercial applications for AI continue to expand, questions have been raised regarding privacy, bias and even safety. MedTech Intelligence spoke with Rama Chellappa, PhD, a Johns Hopkins University Bloomberg Distinguished Professor in electrical, computer, and biomedical engineering, member of the University’s Center for Imaging Science, Center for Language and Speech Processing, and the Malone Center for Engineering in Healthcare, and co-author of Can We Trust AI? about the promise—and how to combat the risks—of AI in health care.

What led you to write “Can We Trust AI?”

Dr. Chellappa: I have been working on computer vision, which is a subfield of AI, for almost four decades. And it has had its ups and downs. The issue with AI is that everybody expects too much of it because it is so fascinating. We have seen these Hollywood movies where all kinds of fantastical things happen. There is so much expectation, and sometimes the technology is not there yet; it falls short, and people immediately sour on it.

Since 2012, we have learned how to mine data, so AI has taken a stronger form and it’s doing well. But whenever you work directly with data, questions arise: Where did you get the data? How did you get the data? Are you invading my privacy? Do we have bias because we didn’t design the loss functions properly? And, if it is purely data-driven, can I attack it or hack it? I felt we needed a positive perspective on AI, one that also acknowledges some of these issues.

I had great help from my co-author, Eric Niiler, and I hope we have pointed out the positive things that can happen, have happened and will happen, while at the same time acknowledging the concerns and telling people that, “Yes, we are aware of this and we are working on it and making it better.”

How do you define AI versus machine learning?

Dr. Chellappa: AI broadly means using domain data to make informed decisions. This could be based on logic or calculus or optimization. Machine learning tends to be more mathematical and focuses on working directly with data. Data science and machine learning both work directly with data, but AI, in theory, should also incorporate domain expertise or domain knowledge.

For example, if it’s a smart car, there are certain rules we need to follow regarding how you drive and so forth. In any application, we have a domain expert or domain knowledge, and AI in its true form should exploit that.

People are using the term AI very broadly these days. Machine learning is part of AI, but it’s more mathematically inclined—it lives in the math and statistics department. AI tends to live mostly in computer science.

We’ve heard about risks and concerns related to biased data. There are also concerns with outdated data. If there’s something we’ve learned that is not correct or is no longer relevant, how do we make sure the AI is filtering that out?

Dr. Chellappa: AI is very elaborate software, and all software can be probed. For example, we have loss functions. We ask: What do we need to get out of this data? We declare it, and then we look at the loss functions to see if the model is behaving well or if there are some bumps that need to be fixed. We can also audit how the data was collected: was it done willy-nilly, or was there a science behind it?
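To make that kind of auditing concrete, here is a minimal sketch (ours, not from the book) of one check a team might run on loss curves: flag a sustained stretch where validation loss climbs while training loss keeps falling, a classic sign that something needs fixing.

    # Flag a sustained "bump": validation loss rising while training
    # loss is still improving, for `patience` consecutive epochs.
    def audit_losses(train_losses, val_losses, patience=3):
        """Return the epoch where a sustained divergence begins, or None."""
        rising = 0
        for epoch in range(1, len(val_losses)):
            train_improved = train_losses[epoch] < train_losses[epoch - 1]
            val_worsened = val_losses[epoch] > val_losses[epoch - 1]
            rising = rising + 1 if (train_improved and val_worsened) else 0
            if rising >= patience:
                return epoch - patience + 1
        return None

    # Example: training loss falls steadily while validation turns around.
    train = [1.0, 0.7, 0.5, 0.4, 0.3, 0.25]
    val = [1.1, 0.8, 0.7, 0.75, 0.85, 0.9]
    print(audit_losses(train, val))  # -> 3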

It is a very well-known fact in statistics that if you have noise in the data, your estimates are going to be biased. And if we know what the noise is, we can compensate for it.
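Here is a small, self-contained illustration of that statistical point (the numbers are ours, for illustration only): noise in a regression input shrinks the estimated slope toward zero, and a known noise variance lets you undo the shrinkage.

    # Measurement noise in x biases the fitted slope toward zero
    # (attenuation bias); knowing the noise variance lets us compensate.
    import random

    random.seed(0)
    true_slope, noise_var = 2.0, 1.0
    x = [random.gauss(0, 1) for _ in range(100_000)]
    y = [true_slope * xi for xi in x]
    x_noisy = [xi + random.gauss(0, noise_var ** 0.5) for xi in x]

    def slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var = sum((a - mx) ** 2 for a in xs)
        return cov / var

    naive = slope(x_noisy, y)  # biased toward zero, roughly 1.0
    mx = sum(x_noisy) / len(x_noisy)
    var_noisy = sum((xi - mx) ** 2 for xi in x_noisy) / len(x_noisy)
    corrected = naive * var_noisy / (var_noisy - noise_var)
    print(naive, corrected)  # ~1.0 vs. ~2.0 (the true slope)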

We have worked on a bias-reduction algorithm for facial recognition, and there are other things that we can do for algorithms that are used in healthcare and medicine. For example, we introduce complex data slowly. First, we teach the machine with simple data and, as it gets better, we give it harder and harder data. You don’t teach calculus to a 5-year-old. We teach them arithmetic, then algebra and geometry, and then we go to calculus.
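That “arithmetic before calculus” idea is known as curriculum learning. A minimal sketch, assuming some scoring function that rates how hard each example is (the names below are illustrative, not Dr. Chellappa’s code):

    # Curriculum learning: order training data from easy to hard and
    # widen the pool at each stage, so the model sees simple examples
    # first and harder ones only as it improves.
    def curriculum_stages(examples, difficulty, n_stages=3):
        """Yield successively larger, harder training pools.

        examples:   list of training samples
        difficulty: function scoring a sample (higher = harder)
        """
        ordered = sorted(examples, key=difficulty)
        step = max(1, len(ordered) // n_stages)
        for stage in range(1, n_stages + 1):
            # Each stage keeps the easy data and adds the next-harder slice.
            yield ordered[: step * stage]

    # Usage (hypothetical model API):
    # for pool in curriculum_stages(data, difficulty=my_score):
    #     model.fit(pool, epochs=2)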

Sometimes data is corrupted, so we do what we call adversarial training. We tell the machine that sometimes it may have to deal with imperfect data: can you learn in a general way so that when you get poor-quality data, you will still do reasonably well?
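A hedged sketch of that idea, in the spirit of the description rather than any specific system: corrupt a fraction of each training batch so the model learns to tolerate imperfect inputs. The corruption here is additive noise; real pipelines would use domain-appropriate corruptions, and the model, train_step, and loader names are assumptions.

    # Robustness training sketch: randomly corrupt part of each batch
    # so the model learns to do reasonably well on poor-quality data.
    import random

    def corrupt(sample, noise_std=0.1):
        """One simple corruption: add Gaussian noise to a feature vector."""
        return [v + random.gauss(0, noise_std) for v in sample]

    def robust_batch(batch, corrupt_fraction=0.3):
        """Corrupt a random fraction of the batch before training on it."""
        return [corrupt(x) if random.random() < corrupt_fraction else x
                for x in batch]

    # Training loop (model, train_step, and loader are assumed):
    # for batch, labels in loader:
    #     loss = train_step(model, robust_batch(batch), labels)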

Technology is not frozen. It always gets better. So, when these issues are raised, we need to pay attention to them and then we must fix them. In terms of bias, a technology that does not serve everybody in a society is hard to justify. So, we have to be sure that it serves everybody that it is supposed to serve. The bottom line is, we need to continuously monitor and evaluate our algorithms. We should not assume they are correct.

You mentioned facial recognition, and that is one of the examples you look at in your book. In South Korea, they used facial recognition to find super-spreaders during the COVID-19 pandemic. It’s very interesting, but that also scares people a little bit when we think about the future of AI and privacy issues. Is that something that can be addressed, or do we need to choose between the pros and cons of using this technology?

Dr. Chellappa: That’s a great way to enter this question: pros versus cons. What I tell people is you need to be savvier because the technology has gotten savvier. It is able to mine your data. That is a reality, so we need to be careful about what we share.

How privacy issues are handled also depends on the country. There is no universal law. In some countries you have absolutely no expectation of privacy, and they can capture and mine any data about you.

I do think that we are more careful in the U.S., and I do think medicine has figured out how to keep data private. When we do machine learning in the medical space, there are all kinds of guidelines that must be followed to even have access to that data.

In your book, you also look at areas where AI is outperforming doctors in terms of diagnoses and detection. How do we guide the technology when AI is outperforming people?

Dr. Chellappa: I don’t do the AI versus humans thing—that is Hollywood stuff. I do AI and humans. In medicine, it’s a trio: AI, the doctor and the patient. AI can give suggestions based on the data. It also can look at your data going back to the first time you had your immunization shots. AI can keep track of troves of data and then help in the decision making. Nobody is going to send you to an OR for surgery based on what AI says. AI may make suggestions for the surgeon to look at—it is an additional piece of inference for the physician.

What are some of the biggest challenges of incorporating AI into devices?

Dr. Chellappa: I call them the four fearsome elephants in the AI room. The first is what we call domain shift. Is the data that I used to make inferences for one population applicable to other populations? For example, South Indians tend to have higher rates of diabetes. So, when people develop these technologies, we have to be aware of the population where they are going to be applied. If domain shift is not taken into account, the performance will be lower.
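One simple, illustrative response to this kind of shift (our sketch, with made-up prevalence numbers) is to reweight the training data so its base rates match the deployment population’s:

    # Domain-shift sketch: if a condition's prevalence differs between
    # the training data and the deployment population, reweight training
    # examples so their base rates match where the model will be used.
    def importance_weights(train_rate, deploy_rate):
        """Per-class weights making training data mimic deployment prevalence."""
        return {
            1: deploy_rate / train_rate,               # positive cases
            0: (1 - deploy_rate) / (1 - train_rate),   # negative cases
        }

    # Hypothetical rates: 8% prevalence in the training data, 20% in
    # the population where the device will actually be deployed.
    w = importance_weights(train_rate=0.08, deploy_rate=0.20)
    print(w)  # {1: 2.5, 0: ~0.87} -> upweight positives during training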

The second is whether these devices can be hacked. If a Tesla wants to go right, but someone hacks its system and it goes left, that is not a good thing—so we do worry about that.

The third is something we already talked about: bias. Will it serve certain populations with less accuracy? That could skew the decisions we make.

And privacy, of course, is the fourth one. Who has access to the data we are collecting, and will someone else be able to access it? If it is a wearable device, the data is out there. I cannot connect a wearable to the central computer using a 100-mile-long wire. It is going to go through WiFi, which means someone can hack into it and get information about me.

Cybersecurity has become very important. This information has to be completely secure so that only authorized people can access it.
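As one deliberately minimal illustration of securing such data in transit, a wearable could encrypt each reading before it ever touches WiFi. The sketch below uses the Python cryptography package’s Fernet authenticated encryption; a real device would still need careful key management, which this example does not address.

    # Encrypt a sensor reading before transmission so only holders of
    # the shared key can read it, and tampering is detected on
    # decryption. Key distribution and rotation are out of scope here.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # shared securely with the server in advance
    cipher = Fernet(key)

    reading = b'{"heart_rate": 72, "glucose_mgdl": 95}'
    token = cipher.encrypt(reading)  # this ciphertext is what travels over WiFi

    assert cipher.decrypt(token) == reading  # only the key holder can do this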

What do you see as the areas where AI has shown (or will show) the most value in the healthcare industry?

Dr. Chellappa: The FDA has already approved close to 120 AI-driven methods, but we are still at the experimental stage. There are efforts to improve diagnosis of Alzheimer’s and dementia through continuous gait monitoring using wearable sensors. There are machine learning methods that look at signals collected while someone is under anesthesia to predict the probability that they will experience low blood pressure. Machine learning is also being used for sepsis detection. There are many, many exciting things being done right now.

One area that I feel is exciting in terms of device development is wearable sensors and continuous monitoring. Nowadays, even entry-level cars have sensors to recognize when a car is on either side of you or behind you. In some cases, the car will slow down if you’re getting too close to the car in front of you.

Similarly, wearable sensors are being put on humans to alert us if we are going to fall and to monitor blood sugar. If we were living in three-generation families under one roof, as we did in the past, these devices might not mean much. But now a lot of elderly people are living alone. Just as cameras in the car help us drive more safely and reduce accidents and deaths on the highways, these wearable sensors can make us much safer and improve our health.

If you have an Apple Watch that nudges you to get up, take a little walk, and drink some water when you’ve been sitting on the couch for more than 30 minutes, people will do it. These tools can make us safer and help us lead a better, healthier life.
