Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding
Staff - Faculty of Informatics
Date: 11 October 2022 / 13:15 - 14:15
USI Campus EST, room D0.02, Sector D
Speaker: Kacper Sokol, University of Bristol & RMIT University
Abstract:
Myriad approaches exist to help humans peer inside automated decision-making systems based on artificial intelligence and machine learning algorithms. These tools and the insights they produce, however, tend to be complex socio-technological constructs themselves, hence subject to technical limitations as well as human biases and (possibly ill-defined) preferences. Under these conditions, how can we ensure that explanations are meaningful and fulfil their role by leading to understanding?
In this talk Kacper will provide a high-level introduction to and overview of explainable AI and interpretable ML, followed by a deep dive into practical aspects of a popular explainability algorithm. He will demonstrate how different configurations of an explainer – often presented as a monolithic, end-to-end tool – may impact the resulting insights. Kacper will then show the importance of the strategy employed to present these insights to a user, arguing in favour of a clear separation between the technical and social aspects of such techniques. Importantly, understanding these dependencies can help us to build bespoke explainers that are robust, reliable, trustworthy and suitable for the unique problem at hand.
Biography:
Kacper is a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, affiliated with the School of Computing Technologies at RMIT University, Australia, and an Honorary Research Fellow at the Intelligent Systems Laboratory, University of Bristol, UK. His main research focus is explainability – transparency and interpretability – of data-driven predictive systems based on artificial intelligence and machine learning algorithms. Kacper holds a Master's degree in Mathematics and Computer Science, and a doctorate in Computer Science from the University of Bristol. Throughout his career Kacper has been a visiting researcher at the University of Tartu, Estonia, as well as the Simons Institute for the Theory of Computing at UC Berkeley, USA; prior to joining RMIT he worked on numerous projects at the University of Bristol. In his research, Kacper has collaborated with industry partners such as THALES, and provided consulting services in explainable artificial intelligence and transparent machine learning to companies such as Airbus.
Host: Prof. Marc Langheinrich