We are hosting the lecture series Explainable AI and Society, an interdisciplinary, hybrid lecture series starting in October 2021.
We are one of the co-organizers of the workshop series Issues in XAI, an interdisciplinary event aimed at addressing the core questions about making AI systems explainable.
We also strive to include the interested public in the scientific dialogue by inviting them to public events.
Explainable AI and Society
This interdisciplinary lecture series discusses the potential of, and open questions raised by, the rising field of Explainable Artificial Intelligence (XAI). Modern AI can be used to drive cars, to decide on loans, or to detect cancer. Yet the inner workings of many AI systems remain hidden – even to experts. Given the crucial role that AI systems play in modern society, this seems unacceptable. But how can we make complex, self-learning systems explainable? What kinds of explanations are we looking for when demanding that AI must be explainable? And which societal, ethical, and legal desiderata can actually be satisfied via explainability?
21 Oct ’21 – Discrimination by AI Systems: An Analysis from a Legal Perspective
18 Nov ’21 – Action Regulation in Human-AI-Interaction: The Psychology of Intelligent Automation – Thomas Franke & Tim Schrills (University of Lübeck)
16 Dec ’21 – Theory and Practice
20 Jan ’22 – Explaining Machine Learning: A New Kind of Idealization? (Eindhoven University of Technology)
More information about the event and the registration process can be found on the event page.
Issues in XAI
Workshop Series Issues in Explainable AI (XAI)
The spread of increasingly opaque artificial intelligence (AI) systems in society raises profound technical, moral, and legal challenges. The use of AI systems in high-stakes situations raises worries that their decisions are biased, that relying on their recommendations undermines human autonomy or responsibility, that they infringe on the legal rights of affected parties, or that we cannot reasonably trust them. Making AI systems explainable is a promising route towards meeting some of these challenges. But what kinds of explanations are able to generate understanding? And how can we extract suitable explanatory information from different kinds of AI systems?
The workshop series Issues in Explainable AI aims to address these and related questions in an interdisciplinary setting. To this end, the workshops in this series bring together experts from philosophy, computer science, law, psychology, and medicine, among others.
The series is organized by the project Explainable Intelligent Systems (Saarland University/TU Dortmund), the Leverhulme Centre for the Future of Intelligence (Cambridge), TU Delft and the project BIAS (Leibniz University Hannover).
Issues in XAI #4: Explainable AI: between Ethics and Epistemology (Delft University of Technology, 23–25 May 2022)
Issues in XAI #3: Bias and Discrimination in Algorithmic Decision-Making (Hanover, 8–9 October 2021)
Issues in XAI #2: Understanding and Explaining in Healthcare (Cambridge, May 2021)
Issues in XAI #1: Blackboxes, Recommendations, and Levels of Explanations (Saarbrücken, 30 September – 2 October 2019)
In times of increasing automation across nearly all areas of life, the question arises more than ever which conditions must be met for intelligent systems to legitimately gain acceptance in society.
Lecture by Prof. Dr. Silja Vöneky: "How Can Artificial Intelligence Be Regulated? A Challenge of the 21st Century" (original title: "Wie lässt sich Künstliche Intelligenz regulieren? Eine Herausforderung des 21. Jahrhunderts")
Followed by a panel discussion with Prof. Dr. Silja Vöneky, JProf. Dr. Lena Kästner, Dr. Steffen-Werner Meyer, and Dr. Clemens Stachl, moderated by Kevin Baum.