Project

The project Explainable Intelligent Systems (EIS) brings together experts from philosophy, computer science, psychology and law to respond to the overarching question:

How can and should intelligent systems be designed to provide explainable recommendations?

Intelligent systems affect decisions that directly influence individual lives and entire societies. They play a vital role in shaping decisions about freedom versus imprisonment, healing versus death, or justice versus discrimination. Their purpose is to take the workload off decision-makers and to enable them to make informed decisions. In all of these cases, however, we want human agents, rather than intelligent systems, to make the final decision: first, because only humans can take responsibility for decisions, and second, because only humans can give appropriate weight to case-sensitive considerations in their decision-making. Intelligent systems must therefore be designed to assist humans in deciding responsibly and in a well-informed way. To achieve this, human decision-makers need more than mere recommendations; after all, unsubstantiated recommendations can only be blindly obeyed or rejected. Decision-makers need explanations.

But how can the explainability of recommendations made by intelligent systems be achieved? Which forms of explanation do human decision-makers need in order to understand and trust algorithm-based recommendations? What are the ethical and legal requirements regarding the content and precision of such explanations? And how can intelligent systems be developed that meet the standards arising from these questions?

Our researchers from different disciplines aim to answer these questions by combining their expertise, building fertile ground for the future of AI.

EIS is funded by a Planning Grant from the VolkswagenStiftung as part of the funding initiative Artificial Intelligence and the Society of the Future.
