EIS goes Athens: ESPP 2019

At the beginning of September, the EIS team presented its interdisciplinary symposium on “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” for the third time. We presented at the 27th International Conference of the European Society for Philosophy and Psychology (ESPP), with Viviane Hähne, Lena Kästner, Daniel Oster, and Timo Speith as speakers. In […]

For Otherwise We Don’t Know What They Are Doing: Why AI Systems Need to Be Explainable

Intelligent systems increasingly support or take over human decisions. This development affects all areas of human life: intelligent systems recommend new shows on streaming portals, assist online searches with suggestions, automatically screen thousands of applicants, predict crime, and support medical diagnosis. Systems support human decision-making and work tasks by filtering and analyzing information, providing […]

New Blood for EIS

At the beginning of this month, Daniel Oster submitted his master’s thesis, entitled “Explanation Requirements Concerning Artificial Systems – How to Transmit Transparency from Designer Side to Non-Designer Side”. Shortly thereafter, it was confirmed that Daniel had passed his thesis. He will henceforth support EIS as a full-fledged collaborator, and not just as […]

EIS goes Rome

Last week, Timo Speith, Felix Bräuer, and Markus Langer from the EIS team presented the panel “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” at the 9th International Conference on Information Law and Ethics in Rome. Over the course of the project, we are going to present this panel a total of three times, […]
