Explainable AI and Society

an interdisciplinary, hybrid lecture series

Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical and legal desiderata can actually be satisfied via explainability?


20 Oct ’22, 6:00 pm – What AI can Learn from Law (Law) – Réka Markovich, University of Luxembourg

17 Nov ’22, 6:00 pm – Hybrid, Explanatory, Interactive Machine Learning – Towards Human-AI Partnership (Computer Science) – Ute Schmid, University of Bamberg

15 Dec ’22, 6:00 pm – What Advertising Data Tells us about Society (Computer Science) – Ingmar Weber, Saarland University

19 Jan ’23, 6:00 pm – Minimal Ethics – A Framework for Applied Ethics in the Digital Sphere (Philosophy) – Vincent Müller, Friedrich-Alexander-University Erlangen-Nuremberg

Next lecture: Ute Schmid — Hybrid, Explanatory, Interactive Machine Learning – Towards Human-AI Partnership

17 Nov ’22 – 6:00 pm 

Location: H 24 (RW I)

For many practical applications of machine learning, it is appropriate or even necessary to draw on human expertise to compensate for insufficient or low-quality data. Taking into account knowledge that is available in explicit form reduces the amount of data needed for learning. Furthermore, even if domain experts cannot formulate their knowledge explicitly, they can typically recognize and correct erroneous decisions or actions. This type of implicit knowledge can be injected into the learning process to guide model adaptation. These insights have led to the so-called third wave of AI with a focus on explainability (XAI). In the talk, I will introduce research on explanatory and interactive machine learning. I will present inductive programming as a powerful approach to learning interpretable models in relational domains. Arguing for the need for specific explanations for different stakeholders and goals, I will introduce different types of explanations based on theories and findings from cognitive science. Furthermore, I will show how intelligent tutor systems and XAI can be combined to support constructive learning. Algorithmic realisations of explanation generation will be complemented with results from psychological experiments investigating the effect on joint human-AI task performance and trust. Finally, I will introduce current research projects to illustrate applications of the presented work in medical diagnostics, quality control in industrial production, file management, and accountability.

To register, either click on the registration button and fill out the Google form or send an email with the subject “Registration” to Leon Weiser: leon.weiser@uni-bayreuth.de.

The email should state the dates on which you want to participate and whether you will attend online or in person.


Click here for information on past lectures.

Scientific organizers

Kevin Baum, Georg Borges, Holger Hermanns, Lena Kästner, Markus Langer, Astrid Schomäcker, Andreas Sesing, Ulla Wessels, Timo Speith

Funded by the