For Otherwise We Don’t Know What They Are Doing: Why AI Systems Need to Be Explainable

Intelligent systems increasingly support or take over human decisions. This development affects all areas of human life: intelligent systems recommend new shows on streaming portals, suggest results for online searches, automatically screen thousands of applicants, predict crime, and support medical diagnosis. Such systems assist human decision-making and work tasks by filtering and analyzing information, providing […]

Paper accepted for RE’19

We are proud to announce that the EIS-influenced paper »Explainability as a Non-Functional Requirement« by Maximilian Köhl, Dimitri Bohlender, Kevin Baum, Markus Langer, Daniel Oster, and Timo Speith has been accepted for the 27th IEEE International Requirements Engineering Conference on Jeju Island, South Korea (23rd to 27th September).

Delivering EIS to the General Public

Kevin Baum will deliver fresh insights from our project to the general public in four public talks this month alone. The first two are taking place next week. On June 11, Kevin will give a talk on ethical aspects of the use of algorithms in health and medical care at the 372nd Saarlandic Conference […]
