Explainable AI and Society
an interdisciplinary, hybrid lecture series
Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical and legal desiderata can actually be satisfied via explainability?
19 Oct ’23 | Explained – agreed. On the consequences of informed consent for explainability | University of Bern
16 Nov ’23 | Verification of Neural Networks – And What It Might Have to Do With Explainability | Technische Universität Dortmund
14 Dec ’23 | A Legal Perspective on Explainable AI: Why, How Much and For Whom? | Technische Universität Dresden
18 Jan ’24 | Organizing AI: How to Shape Accountable AI Development and Use
To register, send an e-mail with the subject “Registration” to firstname.lastname@example.org, stating which lecture(s) you would like to attend and whether you will attend online or in person.
Organizers: Kevin Baum, Georg Borges, Holger Hermanns, Lena Kästner, Markus Langer, Astrid Schomäcker, Andreas Sesing, Ulla Wessels, Timo Speith, Eva Schmidt