Last week, Lena Kästner and Markus Langer had the opportunity to present the topic of Explainable Artificial Intelligence (XAI) and the idea behind our research project Explainable Intelligent Systems (EIS) to an attentive audience at the AOW 2019. Their presentation, which bridged the terminological gap between philosophy and psychology, prompted a fascinating discussion that once again made clear the relevance of XAI as a research topic and the importance of interdisciplinary cooperation.

In the first part of the presentation, Markus Langer introduced one of the core problems EIS is investigating: What if systems make recommendations which human agents can neither understand nor fathom nor question? To make the topic more accessible, personnel selection served as an example. When applicants receive a rejection letter (jointly created by a personnel manager and an AI system), they usually want to know the reasons behind the rejection. At the same time, for personnel managers tasked with making a competent choice between two applicants, an AI-based decision support system that merely produces a ranking of the candidates is not sufficient. In both cases, some kind of explanation is needed to allow for traceability, scrutiny, and responsible decision-making.

In the second part of the talk, Lena Kästner compared different approaches to explanations from a philosophical perspective. On the one hand, she discussed which types of explanations are relevant for which kinds of problems; on the other hand, she argued that certain types of explanations are less likely to get to the heart of many questions connected with XAI.

The subsequent discussion centered on the role of psychology not only in the field of XAI but also in the whole domain of human-AI interaction. Three main conclusions emerged from the presentation and the discussion:

  • There is a need for more interdisciplinary collaborations and people who can translate from one discipline (e.g., philosophy or computer science) to another (e.g., psychology).
  • When AI systems pick up on and replicate biases in their data, they can serve as a kind of "bias mirror" that makes human agents aware of biases that may have gone undiscovered until then. For example, personnel managers could analyze their decision history with the help of an AI system and thereby become conscious of potential individual biases and heuristics in their decision-making.
  • Psychology has been examining the black box "human" for decades. Psychological methods could therefore also help in the investigation of newly created artificial black boxes.

From left: Prof. Cornelius König, JProf. Lena Kästner, Dr. Markus Langer