Last week, Lena Kästner visited the philosophy department at the University of Stuttgart. Her visit was hosted by Prof. Dr. Ulrike Pompe-Alama. In the local colloquium, Lena presented recent work on mechanistic explanations and scientific discovery to philosophers and experts in the history of science and technology. The core idea of Lena’s discovery account is […]
Last week, the Bertelsmann Stiftung and the Stiftung Neue Verantwortung published a new position paper: “Wie Algorithmen verständlich werden” (“How Algorithms Become Understandable”). The paper calls for making algorithmic decision-making processes understandable, arguing that this is vital for stakeholders to be able to comprehend how these systems function. If stakeholders comprehend the functioning of […]
At the beginning of September, the EIS team contributed the interdisciplinary symposium on “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” for the third time, presenting at the 27th International Conference of the European Society for Philosophy and Psychology (ESPP) with Viviane Hähne, Lena Kästner, Daniel Oster, and Timo Speith as speakers. In […]
At the beginning of this month, Daniel Oster submitted his master’s thesis, entitled “Explanation Requirements Concerning Artificial Systems – How to Transmit Transparency from Designer Side to Non-Designer Side”. Shortly thereafter, it was confirmed that Daniel had passed. He will henceforth support EIS as a full-fledged collaborator, not just as […]
Last week, Timo Speith, Felix Bräuer, and Markus Langer from the EIS team presented the panel “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” at the 9th International Conference on Information Law and Ethics in Rome. Over the course of the project, we will present this panel a total of three times, […]
Last week, Kevin Baum, Lena Kästner, and Eva Schmidt from the EIS team presented the core ideas of our project at the Leverhulme Centre for the Future of Intelligence in Cambridge. Our thanks go to our collaborator Rune Nyrup for inviting us to talk at his lunch seminar. We got some great questions and comments […]
We are proud to announce that the EIS-influenced paper “Explainability as a Non-Functional Requirement” by Maximilian Köhl, Dimitri Bohlender, Kevin Baum, Markus Langer, Daniel Oster, and Timo Speith has been accepted for the 27th IEEE International Requirements Engineering Conference on Jeju Island, South Korea (September 23rd to 27th).
In March 2019, Felix Bräuer (Assistant Professor of Philosophy, Saarland University) gave a talk on “Trusting Artificial Experts” at the workshop Epistemic Trust in the Epistemology of Expert Testimony (University of Erlangen-Nuremberg). In his talk, Felix discussed which notions of trust are and aren’t applicable to our dealings with artificial intelligent recommender […]
On Monday, May 27, at 4:15 pm, Eva Schmidt will give a talk entitled “Künstliche Intelligente Systeme: Vertrauen und Verstehen” (“Artificial Intelligent Systems: Trust and Understanding”) as part of a lecture series on artificial intelligence organized by the philosophy students at the University of Zurich.