We are hosting and contributing to a number of interdisciplinary research and public events.

Explainable AI and Society

Interdisciplinary Lecture Series (hybrid)

Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical and legal desiderata can actually be satisfied via explainability?

Current Lectures

The next Explainable AI and Society Lecture Series will take place at TU Dortmund in Fall 2023. Click here for more information about the upcoming events and the registration procedure.

Former Lectures

Fall 2022 · Summer 2022 · Fall 2021

Click on the poster for more information about the former lectures.

Issues in Explainable AI (XAI)

Workshop Series “Issues in XAI”

The spread of increasingly opaque artificial intelligence (AI) systems in society raises profound technical, moral, and legal challenges. The use of AI systems in high-stakes situations raises worries that their decisions are biased, that relying on their recommendations undermines human autonomy or responsibility, that they infringe on the legal rights of affected parties, or that we cannot reasonably trust them. Making AI systems explainable is a promising route towards meeting some of these challenges. But what kinds of explanations are able to generate understanding? And how can we extract suitable explanatory information from different kinds of AI systems?

The workshop series Issues in Explainable AI aims to address these and related questions in an interdisciplinary setting. To this end, the workshops in this series bring together experts from philosophy, computer science, law, psychology, and medicine, among others.

The series was initiated by the project Explainable Intelligent Systems. Since its first iteration at Saarland University, it has been hosted by the Leverhulme Centre for the Future of Intelligence (Cambridge), TU Delft, and the project BIAS (Leibniz University Hannover). Coming iterations will be held at TU Dortmund, the Center for Perspicuous Computing (Saarbrücken-Dresden), and the University of Bayreuth.

Since the series keeps expanding, we have appointed a steering committee (SC) consisting of some of its initiators: Juan M. Durán (Delft), Lena Kästner (Bayreuth), Rune Nyrup (Cambridge), and Eva Schmidt (Dortmund). If you are interested in hosting one of the upcoming events, please get in touch with one of the SC members.

Upcoming events

Issues in XAI #6: Understanding Feedback Loops (Saarbrücken)
Issues in XAI #5: Understanding Black Boxes: Interdisciplinary Perspectives (TU Dortmund, 5 – 7 September 2022)

Past events

Issues in XAI #4: Explainable AI: between Ethics and Epistemology (Delft University of Technology, 23 – 25 May 2022)
Issues in XAI #3: Bias and Discrimination in Algorithmic Decision-Making (Hanover, 8 – 9 October 2021)
Issues in XAI #2: Understanding and Explaining in Healthcare (Cambridge, May 2021)

Intelligent Systems in Context

Artificial intelligence (AI) systems are all around us. In our modern lives, we interact with them on a daily basis – whether we ask them to switch on the light, take us home, filter our email for spam, or recommend a course of medical treatment. Developments in modern AI technology thus raise not only technical questions. They also present us with profound moral and legal questions, alongside foundational questions from philosophy and psychology. This workshop brings together experts from various disciplines to take a look at how we build, use, understand, and interact with AI systems in various forms.

11 – 13 July 2022 in Bayreuth

Click here for more information about the event and the registration process.

Public Events

Past Events

02.10.2019: Erklärbare Intelligente Systeme: Verstehen, Vertrauen, Verantwortung (Explainable Intelligent Systems: Understanding, Trust, Responsibility)

At a time when nearly all areas of life are becoming increasingly automated, the question of which conditions must be met for intelligent systems to legitimately gain acceptance in society is more pressing than ever.

Lecture by Prof. Silja Vöneky: “Wie lässt sich Künstliche Intelligenz regulieren? Eine Herausforderung des 21. Jahrhunderts” (“How can artificial intelligence be regulated? A challenge of the 21st century”)

Followed by a panel discussion with Prof. Dr. Silja Vöneky, JProf. Dr. Lena Kästner, Dr. Steffen-Werner Meyer, and Dr. Clemens Stachl, moderated by Kevin Baum.