Explainable AI and Society

an interdisciplinary, hybrid lecture series

Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical and legal desiderata can actually be satisfied via explainability?


19 Oct ’23, 6:15pm
Explained – agreed. On the consequences of informed consent on explainability
Claus Beisbart (Philosophy), University of Bern

16 Nov ’23, 6:15pm
Trustworthy Machine Learning
Emmanuel Müller (Computer Science), Technische Universität Dortmund

14 Dec ’23, 6:15pm
A Legal Perspective on Explainable AI: Why, How Much and For Whom?
Anne Lauber-Rönsberg (Law), Technische Universität Dresden

18 Jan ’24, 6:15pm
Organizing AI: How to shape accountable AI development and use
Gudela Grote (Psychology), ETH Zurich

Next lecture: Claus Beisbart – Explained – agreed. On the consequences of informed consent on explainability

19 Oct ’23 – 6:15pm

Location: International Meeting Centre (IBZ), TU Dortmund, Emil-Figge-Straße 59, 44227 Dortmund

Further information will follow soon.

To register, send an e-mail with the subject “Registration” to sara.mann@tu-dortmund.de, stating which lecture(s) you would like to attend and whether you will participate online or in person.

Information on past lectures is available on the lecture series website.

Scientific organizers

Kevin Baum, Georg Borges, Holger Hermanns, Lena Kästner, Markus Langer, Astrid Schomäcker, Andreas Sesing, Ulla Wessels, Timo Speith, Eva Schmidt

Funded by the