Biomed in the Media
October 14, 2024
Joran Michiels
With the increasing use of complex black-box models in medical care, it has become ever more important to understand a model's decisions and correct them when necessary. Today it is possible to locally explain the decision of an AI model. These explanations can be reviewed by medical experts, and if an explanation is correct, the experts can validate and trust the model's decision. But what if the explanation is incorrect? This is an ideal opportunity to improve the model's performance and trustworthiness by correcting its explanation. In the following pitch we explain how you can implement such interactive explanations in your own model optimization setup. This video was recorded at the Flanders AI research day.
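The pitch itself covers the approach in detail, but as a rough, hedged illustration of what "correcting an explanation" can look like inside a training loop, the sketch below adds a penalty on input-gradient saliency over features that an expert has flagged as irrelevant (in the spirit of explanation-regularized training). The toy model, the `irrelevant_mask`, and the 0.1 penalty weight are all hypothetical placeholders for this example, not the specific method presented in the video.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier over tabular clinical features.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()

def explanation_penalty(x, logits, irrelevant_mask):
    """Penalize saliency (input gradients) on features an expert marked as irrelevant."""
    # The gradient of the summed log-probabilities w.r.t. the input serves as a
    # simple local explanation; create_graph=True lets us backpropagate through it.
    grads = torch.autograd.grad(
        logits.log_softmax(dim=1).sum(), x, create_graph=True
    )[0]
    return (irrelevant_mask * grads).pow(2).sum()

# Dummy batch: features, labels, and an expert mask (1 = feature should not drive the decision).
x = torch.randn(16, 10, requires_grad=True)
y = torch.randint(0, 2, (16,))
mask = torch.zeros(16, 10)
mask[:, 0] = 1.0  # e.g. the expert flags the first feature as clinically irrelevant

for _ in range(5):  # a few illustrative optimization steps
    optimizer.zero_grad()
    logits = model(x)
    loss = ce_loss(logits, y) + 0.1 * explanation_penalty(x, logits, mask)
    loss.backward()
    optimizer.step()
```

In this sketch the expert feedback enters the objective directly: the standard classification loss keeps the model accurate, while the added term pushes the model's local explanation away from features the expert rejected.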