Team members

Silja Renooij

Robust and trustworthy advice

This project is one of the work packages within PersOn: Explainable, Maintainable, and Trustworthy Decision Support Systems for Personalised Care in Oncology.

Scientific challenge

Probabilistic graphical models, such as Bayesian networks, provide a principled approach to modelling highly complex interactions among stochastic variables. As such, they can support reasoning and decision making under uncertainty by computing any probability of interest, such as the expected effect of a particular treatment for an individual patient. The models can be constructed from knowledge provided by domain experts and/or learned from data, and their inner workings are considered explainable. These characteristics make them an attractive foundation for a decision support system (DSS).
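
To make this concrete, the sketch below computes such a probability of interest by exhaustive enumeration in a tiny, entirely hypothetical network with three binary variables (Severity, Treatment, Recovery). The structure and all numbers are invented for illustration only and are not taken from any PersOn model; real networks are far larger and use dedicated inference software.

```python
# A toy Bayesian network with three binary variables:
#   Severity -> Recovery <- Treatment
# All probabilities below are hypothetical and for illustration only.
from itertools import product

P_severity  = {True: 0.3, False: 0.7}   # P(Severity)
P_treatment = {True: 0.5, False: 0.5}   # P(Treatment)
P_recovery  = {                         # P(Recovery=True | Treatment, Severity)
    (True, True): 0.6, (True, False): 0.9,
    (False, True): 0.2, (False, False): 0.7,
}

def joint(severity, treatment, recovery):
    """Probability of one full assignment, via the chain rule of the network."""
    p_r = P_recovery[(treatment, severity)]
    return P_severity[severity] * P_treatment[treatment] * (p_r if recovery else 1.0 - p_r)

def prob_recovery(evidence):
    """P(Recovery=True | evidence), by summing the joint over unobserved variables."""
    numerator = denominator = 0.0
    for severity, treatment, recovery in product([True, False], repeat=3):
        assignment = {"Severity": severity, "Treatment": treatment, "Recovery": recovery}
        if any(assignment[var] != value for var, value in evidence.items()):
            continue  # inconsistent with the observed findings
        p = joint(severity, treatment, recovery)
        denominator += p
        if recovery:
            numerator += p
    return numerator / denominator

# Expected effect of treating versus not treating, given the available findings.
print(f"P(recovery | treated)   = {prob_recovery({'Treatment': True}):.2f}")   # 0.81
print(f"P(recovery | untreated) = {prob_recovery({'Treatment': False}):.2f}")  # 0.55
```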

Various approaches to explaining the reasoning from findings to advice have been proposed and studied from a technical perspective. However, current approaches often lack an evaluation in context, with the intended users of the DSS, such as physicians or patients, to establish whether the type and form of explanation are actually comprehensible and useful. From the healthcare perspective, a major issue is that current explanation methods are unable to make the trustworthiness of advice explicit to the end user. A relevant line of research has studied the robustness of the output of probabilistic models to changes in the model or its inputs. The outcomes of such analyses, as well as information about the general performance of a model, are currently used during the model construction phase, but not yet for explaining models in use. Furthermore, current explanation approaches typically return multiple possible explanations in a language that does not align with the medical decision-making process, leaving it to the user to weigh and interpret this information.

We aim to design and evaluate interactive approaches to explanation, in which 1) the user can indicate preferences for the type and form of explanatory features, 2) explanations can be tied to relevant domain knowledge available outside the probabilistic model, such as knowledge from clinical guidelines, and 3) the uncertainty about and volatility of the model output are conveyed in the explanation.
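
As a rough illustration of the kind of robustness analysis referred to above, the sketch below reuses the hypothetical toy network from the previous example, varies a single conditional probability, and reports how far the posterior underlying the advice can shift. This crude parameter sweep only conveys the idea; dedicated sensitivity analysis for Bayesian networks typically exploits the known functional form of the output in a parameter and is not shown here.

```python
# A crude one-way robustness check on the toy network above: vary one
# conditional probability and see how far the advice can shift. All
# numbers remain hypothetical and purely illustrative.

def recovery_if_treated(p_recover_severe, p_severe=0.3, p_recover_mild=0.9):
    """P(Recovery=True | Treatment=True) as a function of one network parameter,
    namely P(Recovery=True | Treatment=True, Severity=True)."""
    return p_severe * p_recover_severe + (1.0 - p_severe) * p_recover_mild

nominal = recovery_if_treated(0.6)                                # value in the toy network
sweep = [recovery_if_treated(0.4 + 0.05 * i) for i in range(9)]   # parameter varied over [0.4, 0.8]

print(f"nominal advice: P(recovery | treated) = {nominal:.2f}")                   # 0.81
print(f"range under parameter variation: [{min(sweep):.2f}, {max(sweep):.2f}]")   # [0.75, 0.87]
```

A wide interval signals that the advice is volatile with respect to this parameter and should be communicated with caution; a narrow interval supports a robust recommendation.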

Role of users

Healthcare professionals will be involved in the design and evaluation of the form and type of explanations, and of the interactive features used in them. The interactive algorithms for computing explanations that we design can be implemented both in the systems of medical technology providers and in software tools for reasoning under uncertainty.