Explainable Intelligent Systems

A Joint Saarland University / University of Bayreuth / TU Dortmund Research Project


The project Explainable Intelligent Systems investigates how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata, such as responsible decision-making and the trustworthiness of AI.

Artificially intelligent systems increasingly augment or take over tasks previously performed by humans. This concerns both low-stakes tasks, such as recommending books or movies, and high-stakes tasks, such as suggesting which applicant to hire, which medical treatment to give a patient, or how to navigate autonomous cars through heavy traffic. Such situations raise a variety of moral, legal, and broader societal challenges. To meet these challenges, it is often claimed, we need to ensure that artificially intelligent systems deliver reliable, trustworthy, and understandable explanations for their decisions. But how can this be achieved?

Explainable Intelligent Systems (EIS) is a highly interdisciplinary and innovative research project based at Saarland University, TU Dortmund University, and the University of Bayreuth, Germany. It brings together experts from informatics, law, philosophy and psychology. Together, we investigate how intelligent systems can and should be designed in order to provide explainable recommendations and thus meet important societal desiderata.

Our research focuses on what society can appropriately desire from intelligent systems, how precisely understandability contributes to meeting the various challenges raised by the increasing use of AI in modern society, and how explainability enables this much-needed understandability. Given that both the adequacy of a given explanation and its success in evoking understanding vary depending on a range of factors, we believe that the answers to our research questions will vary with the specific case or context under consideration. Therefore, a major goal of EIS is to develop a context-sensitive framework for explainability. In the near future, our framework shall help scientists, politicians, and other stakeholders guide research and policy-making regarding intelligent systems.

EIS is kindly funded by the Volkswagen Foundation as part of its "Artificial Intelligence and the Society of the Future" track.




We work with national and international partners in research, politics, and industry.


Our context-sensitive framework shall help guide future research and policy-making regarding intelligent systems.

Our Partners