Making Sense of Interpretable Machine Learning

17 - 21 October 2022

Venue: Lorentz Center@Snellius


Interest in machine learning has exploded worldwide, and it is not expected to fade any time soon. Researchers develop new algorithms almost daily, and a growing number of companies are eager to put machine learning into practice. Both research and practice urgently need to make sense of how these algorithms work and produce decisions. In-depth discussion of interpretability in machine learning is needed now more than ever.

The aim of this workshop is to stimulate discussion among scientists and practitioners and to lay the foundations for future collaborations in interpretable machine learning. We focus on three application domains: finance, healthcare, and business analytics. Our objective is to discuss relevant problems, both conceptual and technical, and to spark new research directions. In particular, we shall collectively address the following questions:

- What are the measures of interpretability in different disciplines, specifically in finance, healthcare, and business analytics?

- Do different post-hoc explanation methods provide consistent explanations?

- How can inherently interpretable algorithms match the accuracy of black-box algorithms?

- In which cases from research or practice are the provided interpretations or explanations robust, or not robust, to changes in the data?

- What types of statistical inference for interpretations are useful in practice?

- What are the main barriers in practice to adopting these new transparent algorithms?



Niels Bohrweg 1 & 2

2333 CA Leiden

The Netherlands

+31 71 527 5400