Workshop Series: Issues in Explainable AI (XAI)
The spread of increasingly opaque artificial intelligence (AI) systems in society raises profound technical, moral, and legal challenges. The use of AI systems in high-stakes situations raises worries that their decisions are biased, that relying on their recommendations undermines human autonomy or responsibility, that they infringe on the legal rights of affected parties, or that we cannot reasonably trust them. Making AI systems explainable is a promising route towards meeting some of these challenges. But what kinds of explanations can generate understanding? And how can we extract suitable explanatory information from different kinds of AI systems?
The workshop series Issues in Explainable AI aims to address these and related questions in an interdisciplinary setting. To this end, the workshops in this series bring together experts from philosophy, computer science, law, psychology, and medicine, among others.
The series is organized by the project Explainable Intelligent Systems (Saarland University/TU Dortmund), the Leverhulme Centre for the Future of Intelligence (Cambridge), TU Delft, and the project BIAS (Leibniz University Hannover).
Issues in XAI #3: Bias and Discrimination in Algorithmic Decision-Making (Hannover, October 8th – October 9th 2021)
Issues in XAI #2: Understanding and Explaining in Healthcare (Cambridge, May 2021)
Issues in XAI #1: Black Boxes, Recommendations, and Levels of Explanation (Saarbrücken, September 30th – October 2nd 2019)