Explainable Intelligent Systems
A Joint Saarland University / University of Bayreuth / TU Dortmund Research Project
The project Explainable Intelligent Systems investigates how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata, such as responsible decision-making and the trustworthiness of AI.
Artificially intelligent systems increasingly augment or take over tasks previously performed by humans. This concerns both low-stakes tasks, such as recommending books or movies, and high-stakes tasks, such as deciding which applicant to hire, what medical treatment to give a patient, or how to navigate autonomous cars through heavy traffic. Such situations raise a variety of moral, legal, and broader societal challenges. To meet these challenges, it is often claimed, we must ensure that artificially intelligent systems deliver reliable, trustworthy, and understandable explanations for their decisions. But how can this be achieved?
Explainable Intelligent Systems (EIS) is a highly interdisciplinary research project based at Saarland University, TU Dortmund University, and the University of Bayreuth, Germany. It brings together experts from informatics, law, philosophy, and psychology. Together, we investigate how intelligent systems can and should be designed in order to provide explainable recommendations and thus meet important societal desiderata.
EIS is kindly funded by the Volkswagen Foundation within its Artificial Intelligence and the Society of the Future funding track.
Our context-sensitive framework is intended to help guide future research and policy-making on intelligent systems.