Events

 

We are hosting Explainable AI and Society, an interdisciplinary, hybrid lecture series starting in October 2021.

We are one of the co-organizers of Issues in XAI, an interdisciplinary workshop series addressing the core questions about making AI systems explainable.

We also strive to include the interested public in the scientific dialogue by inviting them to public events.

Explainable AI and Society

Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet, the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical, and legal desiderata can actually be satisfied via explainability?


21 April ’22, 6:15 pm – Why We Need a Science of Machine Behavior (Psychology) – Iyad Rahwan, Max Planck Institute for Human Development
19 May ’22, 6:15 pm – The Invisible Hand of Prediction (Computer Science) – Moritz Hardt, Max Planck Institute for Intelligent Systems
9 June ’22, 6:15 pm – Justification, Decision Threshold, and Randomness (Philosophy) – Kate Vredenburgh, London School of Economics
14 July ’22, 6:15 pm – Liability for AI (Law) – Herbert Zech, Humboldt Universität zu Berlin

Click here for more information about the event and the registration process.

Intelligent Systems in Context

Artificially intelligent (AI) systems are all around us. In our modern lives, we interact with them on a daily basis – whether we ask them to switch on the light, take us home, filter our email for spam, or recommend a course of medical treatment. Developments in modern AI technology thus raise not only technical questions. They also present us with profound moral and legal questions, alongside foundational questions from philosophy and psychology. This workshop brings together experts from various disciplines to look at how we build, use, understand, and interact with AI systems in their various forms.

11 – 13 July ’22 in Bayreuth


Click here for more information about the event and the registration process.

Issues in XAI

Workshop Series Issues in Explainable AI (XAI)

The spread of increasingly opaque artificially intelligent (AI) systems in society raises profound technical, moral, and legal challenges. The use of AI systems in high-stakes situations raises worries that their decisions are biased, that relying on their recommendations undermines human autonomy or responsibility, that they infringe on the legal rights of affected parties, or that we cannot reasonably trust them. Making AI systems explainable is a promising route towards meeting some of these challenges. But what kinds of explanations are able to generate understanding? And how can we extract suitable explanatory information from different kinds of AI systems?

The workshop series Issues in Explainable AI aims to address these and related questions in an interdisciplinary setting. To this end, the workshops in this series bring together experts from philosophy, computer science, law, psychology, and medicine, among others.
The series is organized by the project Explainable Intelligent Systems (Saarland University/TU Dortmund), the Leverhulme Centre for the Future of Intelligence (Cambridge), TU Delft and the project BIAS (Leibniz University Hannover).

Upcoming events

Issues in XAI #5: Understanding Black Boxes: Interdisciplinary Perspectives (TU Dortmund, 5 – 7 September 2022)

Past events

Issues in XAI #4: Explainable AI: between Ethics and Epistemology (Delft University of Technology, 23 – 25 May 2022)
Issues in XAI #3: Bias and Discrimination in Algorithmic Decision-Making (Hanover, 8 – 9 October 2021)
Issues in XAI #2: Understanding and Explaining in Healthcare (Cambridge, May 2021)
Issues in XAI #1: Blackboxes, Recommendations, and Levels of Explanations (Saarbrücken, 30 September – 2 October 2019)

Public Events

Past events

2 October 2019: Erklärbare Intelligente Systeme: Verstehen, Vertrauen, Verantwortung (Explainable Intelligent Systems: Understanding, Trust, Responsibility)

At a time when nearly all areas of life are becoming increasingly automated, the question of which conditions must be met for intelligent systems to legitimately gain acceptance in society is more pressing than ever.

Lecture by Prof. Silja Vöneky: “Wie lässt sich Künstliche Intelligenz regulieren? Eine Herausforderung des 21. Jahrhunderts” (“How Can Artificial Intelligence Be Regulated? A Challenge of the 21st Century”)

Followed by a panel discussion with Prof. Dr. Silja Vöneky, JProf. Dr. Lena Kästner, Dr. Steffen-Werner Meyer, and Dr. Clemens Stachl, moderated by Kevin Baum.