Our first workshop: Issues in Explainable AI

After months of preparation, we were delighted to launch our very first workshop, “Issues in Explainable AI: Black Boxes, Recommendations, and Levels of Explanation”, on September 30, 2019. Researchers from different disciplines presented their work to a broad and engaged audience. Interest was so high that the workshop was fully booked early in the registration period. Despite moving to a bigger location, the room was still packed with explainable AI (XAI) enthusiasts.

Since Explainable Intelligent Systems brings together scientists from Law, Informatics, Psychology, and Philosophy, we focused on inviting researchers from these disciplines. We also aimed to present different research foci to an interdisciplinary audience and to kickstart new scientific dialogues. To this end, we left ample room after every presentation for clarificatory questions and new lines of thought.

While all presentations provided highly interesting input and prompted remarkably active discussions, two slots deserve special mention: the two sessions we had designated specifically for discussion. Dan Brooks, Ph.D., presented his view on levels of explanation. On the last day, Prof. Georg Borges, supported by his assistant Andreas Sesing, gave his talk “New Technologies and the Law – A Complicated Relationship?”. Both sessions served as excellent starting points for extensive discussion and drew especially lively participation from our audience of researchers, students, and professionals from various branches of industry.

We also reached our goal of including junior researchers. Nadine Schlicker, a post-graduate student of Psychology, presented together with Prof. Cornelius König. Together with other students from her department and Prof. König's staff, she had examined the significance of Psychology for AI, with very informative results. The research team had investigated the health care sector, comparing responses to human agents with responses to systems equipped with XAI. The main focus was on professional nurses and the way their vacation requests might be handled. When a system equipped with XAI processed a request, the nurses placed less importance on interpersonal justice and on the presence of an explanation than when a human agent did. These results are in tension with the expectations of users of other XAI systems and with general demands for XAI.

All in all, it was a rewarding experience, and we connected with many people with whom we hope to exchange ideas in the future. We are sure the vast majority of participants shared this experience. However, there is one thing left to say. During our discussions, we became aware of how important our internal colloquia are when it comes to finding a common language. In our view, bridging terminological gaps and eliminating domain-specific “false friends” is crucial for an optimal interdisciplinary XAI research setting. This is why we have decided to continue our internal colloquia, even though we have already covered the basic XAI terminology of all four disciplines (Informatics, Psychology, Law, and Philosophy).

Our team and our cooperation partners. From left: Prof. Holger Hermanns, Prof. Juan M. Durán (TU Delft), Prof. Ulla Wessels, Andreas Sesing, Prof. Georg Borges, JProf. Lena Kästner, Kevin Baum, Prof. Cornelius König, JProf. Eva Schmidt, Geoff Keeling (Leverhulme Centre for the Future of Intelligence), Daniel Oster, Dr. Rune Nyrup (Leverhulme Centre for the Future of Intelligence), Jeanette Lang
