Abstract
In the rapidly evolving field of machine learning, it is important to quantify model uncertainty and explain algorithmic decisions, especially in safety-critical domains such as healthcare. In this talk, we present a novel approach to explaining Gaussian processes (GPs), which we term GP-SHAP. Our method is based on the popular solution concept of Shapley values, extended to stochastic cooperative games, so that the resulting explanations are random variables. GP-SHAP’s explanations satisfy favourable axioms analogous to those of standard Shapley values and possess a tractable covariance function across features and data observations. This covariance allows us to quantify explanation uncertainty and to study statistical dependencies between explanations. We further extend our framework to the problem of predictive explanation, proposing a Shapley prior over the explanation function that predicts Shapley values for new data based on previously computed ones. This work has been accepted to NeurIPS 2023 as a spotlight paper.
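As a rough illustration of the underlying idea (a sketch not taken from the talk itself, and glossing over how GP-SHAP actually constructs its value function from the GP posterior): for a game with d features and value function \nu, the Shapley value of feature i is

\[
\phi_i(\nu) \;=\; \sum_{S \subseteq \{1,\dots,d\} \setminus \{i\}} \frac{|S|!\,(d-|S|-1)!}{d!}\,\bigl(\nu(S \cup \{i\}) - \nu(S)\bigr).
\]

In a stochastic cooperative game the value function \nu is itself random (here, induced by the GP posterior). Because \phi_i is a linear functional of \nu, a Gaussian \nu yields jointly Gaussian attributions \phi_1(\nu), \dots, \phi_d(\nu), which is what makes a closed-form mean and covariance across features and observations plausible in this setting.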
Biography
Dr. Siu Lun Chau (https://chau999.github.io/) is a postdoctoral researcher at the Rational Intelligence lab led by Prof. Krikamol Muandet at the CISPA Helmholtz Center for Information Security in Germany. His research focuses on improving methods for uncertainty quantification and interpretability of machine learning algorithms. He is also interested in the intersection of kernel methods and Gaussian processes, causal inference, and preference learning. Siu Lun holds a PhD in Statistics from the University of Oxford, where he was supervised primarily by Prof. Dino Sejdinovic.