One of the biggest coups for EIS was to have a paper accepted at the renowned IEEE Requirements Engineering Conference during the project’s planning grant period. Consequently, Maximilian Köhl and Timo Speith, two of the paper’s authors, flew to South Korea to present their work to an interested community. During their stay, both of them […]
Last week, Lena Kästner and Markus Langer had the opportunity to present the topic of Explainable Artificial Intelligence (XAI) and the idea behind our research project Explainable Intelligent Systems (EIS) to an attentive audience at the AOW 2019. The two researchers' presentation prompted a fascinating discussion, once they had bridged the terminological […]
At the beginning of September, the EIS team contributed its interdisciplinary symposium on “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” for the third time, presenting at the 27th International Conference of the European Society for Philosophy and Psychology (ESPP) with Viviane Hähne, Lena Kästner, Daniel Oster, and Timo Speith as speakers. In […]
The EIS team contributed an interdisciplinary symposium on “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” to the EuroCogSci conference, with Tina Feldkamp, Daniel Oster, Eva Schmidt, and Timo Speith as speakers. We used the symposium to give some of our junior researchers a chance to present in front of an international audience, and […]
Kevin Baum and Andreas Sesing will take part in the 20th Fall Academy of the German Foundation for Law and Computer Science (DSRI). Their paper on “Requirements for the Explainability of Machine-aided Decisions” (“Anforderungen an die Erklärbarkeit maschinengestützter Entscheidungen”) was accepted. The Fall Academy takes place in Bremen from September 11 to 14, 2019. […]
Last week, Timo Speith, Felix Bräuer, and Markus Langer from the EIS team presented the panel “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” at the 9th International Conference on Information Law and Ethics in Rome. In the course of the project, we are going to present this panel a total of three times, […]
We are proud to announce that the EIS-influenced paper “Explainability as a Non-Functional Requirement” by Maximilian Köhl, Dimitri Bohlender, Kevin Baum, Markus Langer, Daniel Oster, and Timo Speith has been accepted for the 27th IEEE International Requirements Engineering Conference on Jeju Island, South Korea (September 23 to 27).
In March 2019, Felix Bräuer (Assistant Professor of Philosophy, Saarland University) gave a talk on “Trusting Artificial Experts” at the workshop Epistemic Trust in the Epistemology of Expert Testimony (University of Erlangen-Nuremberg). In his talk, he discussed which notions of trust are and are not applicable to our dealings with artificial intelligent recommender […]