Research
In addition to publishing our work in scientific articles and presenting it in scientific talks, we use our findings on the issues surrounding explainable AI both in teaching our students and in public outreach events.
Publications
Forthcoming
- Baum, K., Biewer S., Hermanns, H., Hetmank, S., Langer, M., Lauber-Rönsberg, A., & Sterz, S. Forthcoming. Taming the AI Monster: Monitoring of Individual Fairness for Effective Human Oversight. 30th International Symposium on Model Checking Software.
- Kares, F., König, C., & Langer, M. Forthcoming. Generative KI in der Wissenschaft: Ein Seminar zur Stärkung der KI-Kompetenzen von Studierenden. In Psychologische Rundschau.
- Schmidt, Eva. Forthcoming. Hume and the Unity of Reasons. In Hume and Contemporary Epistemology, edited by Scott Stapleford and Verena Wagner.
- Schmidt, Eva. Forthcoming. “Stakes and Understanding the Decisions of AI Systems”. In Philosophy of Science for Machine Learning: Core Issues, New Perspectives, eds. Juan Durán and Giorgia Pozzi.
- Kästner, Lena and Crook, Barnaby. Forthcoming. “Don’t Fear the Bogeyman: On Why There is No Prediction-Understanding Trade-Off for Deep Learning in Neuroscience.” In Philosophy of Science for Machine Learning: Core Issues, New Perspectives, eds. Juan Durán and Giorgia Pozzi.
Published
- Kästner, L. and Crook, B. 2024. Explaining AI Through Mechanistic Interpretability. European Journal for Philosophy of Science.
- Speith, T., Crook, B., Mann, S., Schömacker, A., & Langer, M. 2024. Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach. Ethics and Information Technology.
- Biewer, S., Baum, K., Sterz, S., Hermanns, H., Hetmank, S., Langer, M., Lauber-Rönsberg, A., & Lehr, F. 2024. Software doping analysis for human oversight. In Formal Methods in System Design.
- Borges, Georg & Keil, Ulrich (editors). 2024. Big Data: Grundlagen, Rechtsfragen, Vertragspraxis. Nomos.
- Mann, Sara. 2024. Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. AI & Society.
- Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R., Del Ser, J., … Speith, T., & Stumpf, S. 2024. Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. In Information Fusion.
- Borges, Georg. 2023. Der Begriff des KI-Systems. Tatbestandsmerkmale und Auslegungsgrundsätze. In Computer und Recht.
- Borges, Georg. 2023. Liability for AI Systems Under Current and Future Law. In Computer Law Review International.
- Speith, Timo. 2023. Building bridges for better machines: from machine ethics to machine explainability and back. Saarländische Universitäts-und Landesbibliothek.
- Köhl, M.A., & Hermanns, H. 2023. Model-Based Diagnosis of Real-Time Systems: Robustness Against Varying Latency, Clock Drift, and Out-of-Order Observations. In ACM Transactions on Embedded Computing Systems.
- Langer, Markus. 2023. Fehlgeleitete Hoffnungen? Grenzen menschlicher Aufsicht beim Einsatz algorithmusbasierter Systeme am Beispiel Personalauswahl. In Psychologische Rundschau.
- Kästner, L. & Schomäcker, A. (2023). KI-Systeme in der modernen Gesellschaft: Potenziale und Grenzen [AI-Systems in modern Society: Potentials and Pitfalls].
- Schlicker, N., Langer, M., & Hirsch, M. C. 2023. How trustworthy is artificial intelligence? A model for the conflict between objectivity and subjectivity. In Innere Medizin (Heidelberg, Germany).
- B. Crook, M. Schlüter and T. Speith, “Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI),” 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW).
- Langer, Markus and Cornelius J. König. 2023. “Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management”. In Human Resource Management Review.
- Schmidt, Eva, Andreas Sesing-Wagenpfeil and Maximilian Köhl. 2023. “Bare statistical evidence and the legitimacy of software-based judicial decisions”. In Synthese.
- Crook, B. (2023). Understanding as a bottleneck for the data-driven approach to psychiatric science. Philosophy and the Mind Sciences.
- 2023. Trust in hybrid human-automated decision-support. In International Journal of Selection and Assessment.
- Schmidt, Eva. 2023. “Facts about Incoherence as Non-Evidential Epistemic Reasons”. In Asian Journal of Philosophy.
- Mann, Sara, Barnaby Crook, Lena Kästner, Astrid Schomäcker and Timo Speith. 2023. “Understanding and Addressing Sources of Opacity in Computer Systems”. In Proceedings of the Third International Workshop on Requirements Engineering for Explainable Systems (RE4ES) co-located with the 31st IEEE International Requirements Engineering Conference (RE’23).
- Kästner, L. (2023). Modeling Psychopathology: 4D Multiplexes to the Rescue. Synthese.
- Sesing-Wagenpfeil, A., Biniok, M., Kares, F., Kästner, L., Langer, M., Metz, C., Peshteryanu, T. V., & Wessels, U. (2023). Legal Tech im Richterzimmer? Streiflichter aus Wissenschaft und Praxis zum KI-Einsatz bei Flugverspätungen. Proceedings of the IRIS23 Internationale Rechtsinformatik Symposion.
- Leonie Nora Sieger, Julia Hermann, Astrid Schomäcker, Stefan Heindorf, Christian Meske, Celine-Chiara Hey, and Ayşegül Doğangün. 2022. User Involvement in Training Smart Home Agents: Increasing Perceived Control and Understanding. In Proceedings of the 10th International Conference on Human-Agent Interaction (HAI ’22). Association for Computing Machinery.
- Schmidt, Eva. 2022. ”Wie können wir autonomen KI-Systemen vertrauen? Die Rolle von Gründe-Erklärungen” [How can we trust autonomous AI systems? The role of reason explanations]. In Gratwanderung Künstliche Intelligenz – Interdisziplinäre Perspektiven auf das Verhältnis von Mensch und KI [Walking the tightrope of Artificial Intelligence – Interdisciplinary Perspectives on the Relationship between Humans and AI], edited by Britta Konz, Karl-Heinrich Ostmeyer and Marcel Scholz.
- Langer, M., Hunsicker, T., Feldkamp, T., König, C. J., & Grgić-Hlača, N. (2022). “Look! It’s a computer program! It’s an algorithm! It’s AI!”: Does terminology affect human perceptions and evaluations of intelligent systems? Proceedings of the CHI22 Conference on Human Factors in Computing Systems.
- Langer, M., König, C. J., Back, C., & Hemsing, V. (2022). Trust in Artificial Intelligence: Comparing trust processes between human and automated trustees in light of unfair bias. Journal of Business and Psychology.
- Baum, Kevin and Sarah Sterz. 2022. “Ethics for Nerds“. In International Review of Information Ethics.
- Borges, Georg. 2022. “Der Entwurf einer neuen Produkthaftungsrichtlinie”. In Der Betrieb.
- Borges, Georg. 2022. “Haftung für KI-Systeme Konzepte und Adressaten der Haftung”. In Computer und Recht.
- Chazette, Larissa, Wasja Brunotte and Timo Speith. 2022. “Explainable software systems: from requirements analysis to system evaluation”. In Requirements Engineering.
- Brunotte, Wasja, Larissa Chazette and Timo Speith. 2022. “Quo Vadis, Explainability? – A Research Roadmap for Explainability Engineering“. In REFSQ 2022.
- Speith, Timo. 2022. “How to Evaluate Explainability? – A Case for Three Criteria”. In IEEE 30th International Requirements Engineering Conference Workshops.
- Speith, Timo. 2022. “A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods”. In FAccT 2022: ACM Conference on Fairness, Accountability, and Transparency.
- Baum, Kevin, Susanne Mantel, Eva Schmidt and Timo Speith. 2022. “From Responsibility to Reason-Giving Explainable Artificial Intelligence“. In Philosophy and Technology.
- Langer, Markus and Cornelius J. König. 2022. “Applied Explainable Artificial Intelligence (XAI) in Human Resource Management: How to address issues of AI opacity and understandability in HRM“. In Handbook of Research on Human Resource Management and Artificial Intelligence, edited by S. Strohmeier. Cheltenham: Elgar Publishing.
- Chazette, Larissa, Brunotte Wasja and Timo Speith. 2021. “Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue“. In 2021 IEEE 29th International Requirements Engineering Conference (RE).
- Kästner, Lena, Markus Langer, Veronika Lazar, Astrid Schomäcker, Timo Speith and Sarah Sterz. 2021. “On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness“. In 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW).
- Langer, Markus, Kevin Baum, Kathrin Hartmann, Stefan Hessel, Timo Speith and Jonas Wahl. 2021. “Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives“. In 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW).
- Sterz, Sarah, Kevin Baum, Anne Lauber-Rönsberg and Holger Hermanns. 2021. “Towards Perspicuity Requirements“. In 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW).
- Schmidt, Eva, Tobias Schlicht, Johannes L. Brandl, Frank Esken, Hans-Johann Glock, Albert Newen, Josef Perner, Franziska Poprawe, Anna Strasser, and Julia Wolf. 2021. “Teleology first: Goals before knowledge and belief“ (Commentary on Buckwalter et al. “Knowledge before belief”). In Behavioral and Brain Sciences.
- Schlicker, Nadine, Markus Langer, Sonja Ötting, Kevin Baum, Cornelius J. König, and Dieter Wallach. 2021. “What to Expect from Opening up ‘Black Boxes’? Comparing Perceptions of Justice Between Human and Automated Agents“. In Computers in Human Behavior.
- Langer, Markus, Kevin Baum, Cornelius J. König, Viviane Hähne, Daniel Oster, and Timo Speith. 2021. “Spare me the Details: How the Type of Information about Automated Interviews Influences Applicant Reactions“. In International Journal of Selection and Assessment.
- Langer, Markus and Richard N. Landers. 2021. “The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers.“ In Computers in Human Behavior.
- Sesing, Andreas. 2021. “Systemische Transparenz bei automatisierter Datenverarbeitung. Umfang und Grenzen der Pflicht zur Bereitstellung aussagekräftiger Informationen über die involvierte Logik“. In MMR – Zeitschrift für IT-Recht und Digitalisierung, pp. 288-292.
- Glock, Hanjo and Eva Schmidt. 2021. “Pluralism about practical reasons and reason explanations“. In Philosophical Explorations, online.
- Langer, Markus, Cornelius J. König, and Victoria Hemsing. 2020. ʺIs Anybody Listening? The Impact of Automatically Evaluated Job Interviews on Impression Management and Applicant Reactions.ʺ In Journal of Managerial Psychology.
- Langer, Markus, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum. 2021. “What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research“. In: Artificial Intelligence.
- Schmidt, Eva. 2020. “Where Reasons and Reasoning Come Apart”. In Noûs, pp. 1–20.
- Glock, Hanjo and Eva Schmidt. 2019. ʺObjectivism and Causalism about Reasons for Action.ʺ In Explanation in Action Theory and Historiography: Causal and Teleological Approaches, edited by Gunnar Schumann, pp. 124-45. London: Routledge.
- Kästner, Lena and Philipp Haueis. 2019. ʺDiscovering Patterns: On the Norms of Mechanistic Inquiry.ʺ In Erkenntnis.
- Köhl, Maximilian A., Dimitri Bohlender, Kevin Baum, Markus Langer, Daniel Oster, and Timo Speith. 2019. “Explainability as a Non-Functional Requirement.” IEEE Xplore, published online on Dec 5, 2019.
- Langer, Markus, Cornelius J. König, Diana Ruth-Pelipez Sanchez, and Sören Samadi. 2019. “Highly-Automated Job Interviews: Acceptance Under the Influence of Stakes”. In International Journal of Selection and Assessment, pp. 217–234.
- Langer, Markus, Cornelius J. König, and Maria Papathanasiou. 2019. “Highly-Automated Job Interviews: Applicant Reactions and the Organizational Context.” In International Journal of Selection and Assessment, pp. 217–234.
- Sesing, Andreas and Kevin Baum. 2019. ʺAnforderungen an die Erklärbarkeit maschinengestützter Entscheidungen.ʺ In Die Macht der Daten und der Algorithmen: Regulierung von IT, IoT und KI, edited by Jürgen Taeger, pp. 435-49. Oldenburg: olwir-Verlag.
Scientific Talks
- 04.24: “Taming the AI Monster: Monitoring of Individual Fairness for Effective Human Oversight” (Prof. Dr. Holger Hermanns). SPIN 2024. Luxembourg.
- 03.24: “That’s not fair! Explainability as a means to increase algorithmic fairness” (Dr. Astrid Schomäcker). Ethics of AI (Un-)Explainability. University of Münster.
- 03.24: “Navigating Towards Trustworthy AI” (Prof. Dr. Lena Kästner). International Congress on Biophotonics. Jena.
- 03.24: “Reasons of AI Systems” (JProf. Dr. Eva Schmidt). Ethics of AI (Un-)Explainability. University of Münster.
- 02.24: “Risks Deriving from the Agential Profiles of Modern AI Systems” (Barnaby Crook). Bayreuth Day of Digital Sciences. Bayreuth.
- 02.24: “What is AI Ethics and What Is It Good for Anyways” (Thorsten Helfer). Bayreuth Day of Digital Sciences. Bayreuth.
- 02.24: “Stakes and Understanding the Decisions of Artificial Intelligent Systems” (JProf. Dr. Eva Schmidt). Trust and Opacity in AI. TU Dresden.
- 02.24: “Stakes and Understanding the Decisions of Artificial Intelligent Systems” (JProf. Dr. Eva Schmidt). Epistemological Issues of Machine Learning in Science. TU Dortmund.
- 02.24: “Opacity as a Stepping Stone” (Prof. Dr. Lena Kästner). Epistemological Issues of Machine Learning. Dortmund.
- 02.24: “AI in Society: On Trustworthiness, Fairness and Explainability” (Prof. Dr. Lena Kästner). Weizenbaum Institute. Berlin.
- 01.24: “The ECJ’s SCHUFA Judgement (C-634/21)” (Dr. Andreas Sesing-Wagenpfeil). Projekttag Explainable Intelligent Systems.
- 31.01.24: “The Reasons of AI Systems” (JProf. Dr. Eva Schmidt). Philosophie am Mittag, University of Siegen.
- 12.23: “The Weal and Woe of Black Box Systems” (Prof. Dr. Lena Kästner). Lingnan-Cambridge Workshop on AI in Science. Cambridge.
- 15.12.23: “Risks Deriving from the Agential Profiles of Modern AI Systems” (Barnaby Crook). Philosophy of Artificial Intelligence, Erlangen.
- 07/08.12.23: “Hume and the Unity of Reasons” (JProf. Dr. Eva Schmidt). Workshop Epistemic Dilemmas, Epistemic Rationality, and Higher-Order Evidence, TU Dortmund.
- 12.23: “Bias in human and algorithmic decision making” (Dr. Astrid Schomäcker). Humans, Animals, and AI Systems. University of Luxembourg.
- 12.23: “Challenges in regulating AI. The Example of the European AI Act” (Prof. Dr. Georg Borges). SACAIR 2023. Muldersdrift, South Africa.
- 28/29.11.23: “Hume and the Unity of Reasons” (JProf. Dr. Eva Schmidt). Goethe Epistemology Meeting 2023, Goethe University Frankfurt.
- 16.11.23: “Stakes and Understanding the Decisions of Artificial Intelligent Systems” (JProf. Dr. Eva Schmidt). Workshop Trust and Opacity in AI: Perspectives from Epistemology, Ethics, and Political Philosophy, TU Dresden.
- 11.23: “What Explainable AI can Learn from Philosophy of Science” (Prof. Dr. Lena Kästner). CamPos Talk Series. Cambridge.
- 15.11.23: “Hume and the Unity of Reasons” (JProf. Dr. Eva Schmidt). Research colloquium practical philosophy & normativity, University of Bielefeld.
- 27.10.23: “Model-Development in Neuroscience: Generalizability and Simplicity in Mechanistic Explanations” (Prof. Dr. Lena Kästner). GeSiMeX Workshop. Berlin.
- 10.23: “How do we assess system trustworthiness?” (Prof. Dr. Markus Langer). socialBridges e-conference. TU Dresden.
- 10.23: “How do we assess system trustworthiness?” (Prof. Dr. Markus Langer). Trust and Resilience in Digital Societies. Ruhr Uni Bochum.
- 10.23: “Liability for Malfunction of AI Systems” (Prof. Dr. Georg Borges). AISoLA. Crete
- 10.23: “(Over-)Regulating AI? On the principles and possible white spots of the European Union’s AI Act” (Dr. Andreas Sesing-Wagenpfeil). AISoLA Conference 2023. Crete.
- 26.10.23: “Understanding AI Systems and the Stakes” (JProf. Dr. Eva Schmidt). AISoLA, Crete.
- 09.23: “Was sagt das Recht zu Diskriminierungsrisiken durch KI?” (Dr. Andreas Sesing-Wagenpfeil). KI und Diskriminierung. Diakonie Deutschland.
- 09.23: “Trust Dynamics in XAI” (Tim Schrills & Prof. Dr. Markus Langer). Arbeits-, Organisations- und Wirtschaftspsychologie. Kassel.
- 09.23: “Datenrecht und Datenschutz” (Prof. Dr. Georg Borges). EDV-Gerichtstag. Saarbrücken.
- 22.09.23: “Symposium: Explainable AI in Scientific Research” (Barnaby Crook and Lena Kästner). European Philosophy of Science Association 2023, Belgrade.
- 04.09.23: “Keynote: To trust or not to trust? – On Challenges for AI in Modern Society” (Lena Kästner). RE4ES, Hanover.
- 04.09.23: “Understanding and Addressing Sources of Opacity in Computer Systems” (Sara Mann, co-authored by Barnaby Crook, Lena Kästner, Astrid Schomäcker and Timo Speith). RE4ES, Hanover.
- 04.09.23: “Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)” (Barnaby Crook). 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW), Hannover.
- 09.23: “Personal Responsibility when Interacting with AI: The Role of Human Autonomy and System Transparency” (Felix Kares, Prof. Dr. Jürgen H. Lenz, & Prof. Dr. Markus Langer). Arbeits-, Organisations- und Wirtschaftspsychologie. Kassel.
- 08.23: “Explainable AI systems in judiciary” (Prof. Dr. Georg Borges & Dr. Andreas Sesing-Wagenpfeil). Judgements Made by AI. Saarbrücken.
- 14.07.23: “The Structure of Arguments About Intelligence” (Barnaby Crook). Artificial Intelligence and Intelligent Matter: An Interdisciplinary Perspective. Münster.
- 12.07.23: “Wunderpille KI? Zu Chancen, Risiken und Nebenwirkungen [Magic Pill AI? Chances, Risks, and Side Effects]” (Lena Kästner). Wissenschaftstag 2023. Erlangen-Nuremberg.
- 04.07.23: “Explaining AI Through the Scientific Perspective” (Barnaby Crook). Ulm University.
- 06.06.2023: “The Reasons of AI Systems” (JProf. Dr. Eva Schmidt). Kolloquium, Leibniz Universität, Hannover.
- 23.-24.05.2023: “The Reasons of AI Systems” (JProf. Dr. Eva Schmidt). Workshop Rational Agency, Reflection, and the Varieties of Metacognition, Vienna.
- 15.05.23: “Machine Learning and Scientific Discovery” (Lena Kästner). Bielefeld University.
- 13.05.2023: “Erklärbare Künstliche Intelligenz” [Explainable Artificial Intelligence] (JProf. Dr. Eva Schmidt). Fachtag Praktische Philosophie, TU Dortmund.
- 09.05.2023: “The Reasons of AI Systems” (JProf. Dr. Eva Schmidt). Colloquium of the University of Mannheim.
- 05.2023: “A Legal Framework for AI. The Example of European Legislation” (Prof. Dr. Georg Borges). Pitfalls of Artificial Intelligence. Discussion Forum Johannesburg.
- 05.2023: “Legal Aspects of Generative AI” (Prof. Dr. Georg Borges). 7th International Software Days. Vienna.
- 05.2023: “Trust in Hybrid Human-Automated Decision-Support” (Felix Kares, Prof. Dr. Cornelius König, Richard Bergs, Clea Protzel, & Prof. Dr. Markus Langer). European Association of Work and Organizational Psychology. Katowice, Poland.
- 28.04.2023: “ADM-Systeme in der modernen Gesellschaft: ihr Potenzial und ihre Grenzen” (Prof. Dr. Lena Kästner). Juristentagung. München.
- 25.02.2023: “For Better or Worse? AI in Modern Society: The Case of ChatGPT“ (Lena Kästner). IRIS 2023. Salzburg, Austria.
- 24.02.2023: “Gedanken aus der Philosophie: Ist KI im Gerichtssaal vertretbar?“ (Lena Kästner and Ulla Wessels). IRIS 2023. Salzburg, Austria.
- 26.01.2023: “Explaining AI Through the Scientific Perspective” (Prof. Dr. Lena Kästner). RWTH/HLRS Philosophy of AI series. Aachen.
- 15.11.2022: “Understanding via Exemplification in XAI” (Sara Mann). Thought in Humans and Machines: Multidisciplinary Perspectives, Hamburg, Germany.
- 19.10.2022: “How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (Prof. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). Keynote talk at the Graduate Workshop Philosophy Meets Machine Learning, Tübingen University, Germany.
- 19.10.2022: “Mary Hesse, Machine Learning, and the Learning Machine” (Barnaby Crook). Philosophy Meets Machine Learning Graduate Workshop, Tübingen.
- 13.10.2022: “Introducing the XiC (Explainability in Context) Framework” (Lena Kästner and Markus Langer). Herrenhausen Conference on AI and the Future of Society.
- 07.10.2022: “Discovering the Inside of the Blackbox” (Prof. Dr. Lena Kästner). ESDIT NL Keynote lecture. Oegstgeest (near Leiden), the Netherlands.
- 06.10.2022: “Do we expect fairness in algorithmic decision-making?” (Markus Langer). Keynote at the small group meeting “AI and machine-learning algorithms in personnel recruitment, selection, and assessment”, Vrije Universiteit, Amsterdam.
- 15.09.2022: “Discovering Emergent Structure in AI Systems” (Barnaby Crook and Prof. Dr. Lena Kästner). GAP11, Berlin.
- 12.-15.09.2022: “How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (JProf. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). GAP11, Berlin.
- 12.09.22: “Equal treatment is important to me – but not that important: Does applicant suitability affect applicant reactions to automated systems in personnel selection?” (Langer, M., König, C. J., & Kramp, M.). Presentation at the conference of the Deutsche Gesellschaft für Psychologie (DGPS) 2022, Hildesheim, Germany.
- 12.09.22: “Perceiving trustworthiness of AI-based systems – A theoretical model” (Schlicker, N., Langer, M., & Uhde, A.). Presentation at the conference of the Deutsche Gesellschaft für Psychologie (DGPS), Hildesheim, Germany
- 05.09.2022: “Understanding via Exemplification in XAI” (Sara Mann). Issues in XAI #5: Understanding Black Boxes — Interdisciplinary Perspectives, Dortmund, Germany. https://explainable-intelligent.systems/issuesinxai5/
- 16.08.2022: “The Compact Core – Emergent Structure Distinction in Artificial and Biological Neural Networks” (Barnaby Crook). GWP, Berlin.
- 16.08.22: “Multiplexes: New Directions for Computational Psychiatry?” (Lena Kästner). International Congress of the German Society for Philosophy of Science (GWP), Berlin, Germany.
- 15.08.2022: “How to Evaluate Explainability? – A Case for Three Criteria” (Timo Speith). RE4ES, Oldenburg.
- 27.07.2022: “A Review of Taxonomies of Explainable Artificial Intelligence Methods” (Timo Speith). Valera Reading Group, Saarbrücken.
- 13.07.2022: “A Review of Taxonomies of Explainable Artificial Intelligence Methods” (Timo Speith). Intelligent Systems in Context, Bayreuth.
- 11.07.2022: ”Understanding via Exemplification and XAI. Why We Should Explain with Exemplars Instead of Examples” (Sara Mann). Intelligent Systems in Contexts, Bayreuth, Germany.
- 28.-29.06.2022: ”Understanding via Exemplification and XAI. Why We Should Explain with Exemplars Instead of Examples” (Sara Mann). First Luxembourg Workshop on Epistemology and Artificial Intelligence, University of Luxembourg, Luxembourg.
- 23.06.2022: “A Review of Taxonomies of Explainable Artificial Intelligence Methods” (Timo Speith). FAccT, Seoul, South Korea.
- 15.-17.06.2022: “How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (JProf. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). European Epistemology Network Conference, Glasgow.
- 10.06.2022: “How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (JProf. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). Rhine Ruhr Epistemology Network Meeting, TU Dortmund.
- 23.-25.05.2022: “How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (Prof. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). Workshop Issues in XAI #4, TU Delft, Netherlands.
- 11.05.2022: “The application of Artificial Intelligence to selection: Possibilities and Principles.” (Markus Langer). Invited discussant at a panel discussion in the EAWOP Work Lab series.
- 28.-29.04.2022: ”How Might the Use of Opaque Artificial Intelligence in Medical Contexts Undermine Knowledge?” (Prof. Dr. Eva Schmidt, co-authored by Paul Martin Putora and Rianne Fijten). Workshop, University of Luxembourg, Luxembourg.
- 30.03.2022: ”Modelling Mental Illnesses: Multiplexes to Rescue?” (Prof. Dr. Lena Kästner). Guest talk in the series on the foundations and ethics of AI, IDSIA USI-SUPSI (Università della Svizzera italiana), Lugano, Switzerland.
- 23.03.2022: ”The Reasons of AI Systems” (Prof. Dr. Eva Schmidt). Workshop Understanding Others Factively, TU Dortmund, Germany.
- 21.02.22: “Differences in trust processes between human and automated trustees in light of unfair bias” (Langer, M., König, C. J., Back, C., & Hemsing, V.). Presentation at the Tagung experimentell arbeitender Psycholog:innen (TeaP), Köln, Germany (online).
- 14.01.2022: “Wer ist verantwortlich für KI-gestützte Entscheidungen?” (Prof. Dr. Eva Schmidt). Guest lecture in Beate Bollig’s seminar Informatik und Ethik, TU Dortmund, Germany.
- 13.01.2022: “Machine Ethics ⟺ Machine Explainability” (Kevin Baum). Guest talk in Rune Nyrup’s seminar Designing artificial moral agents – could we, and should we?, Leverhulme Centre for the Future of Intelligence, Online.
- 12.01.2022: “Modelling Mental Illness: Multiplexes to Rescue?” (JProf. Dr. Lena Kästner). MCMP Colloquium, Ludwig-Maximilians-Universität München, Online.
- 01.12.2021: “Bare Statistical Evidence and the Legitimacy of Software-Based Judicial Decisions” (Prof. Dr. Eva Schmidt). Ringvorlesung Jenseits des Menschen, Universität Hamburg, Online.
- 12.11.2021: “Wer trägt Verantwortung für KI-gestützte Entscheidungen?” (Prof. Dr. Eva Schmidt). 17. Dortmunder Wissenschaftstag, Dortmund.
- 09.11.2021: “Grasping Psychopathology: On Complex and Computational Models” (JProf. Dr. Lena Kästner). Philosophy of Science meets Machine Learning, Tübingen, Germany.
- 06.11.2021: “On the Relation between Epistemic Reasons and Evidence” (Prof. Dr. Eva Schmidt). Workshop Norms and Nature, University of Luxembourg, Luxembourg.
- 09.10.2021: “Do we expect bias and discrimination in algorithmic decision-making?” (Dr. Markus Langer). Keynote at the workshop “Bias and Discrimination in Algorithmic Decision-Making”, Online.
- 06.10.2021: “Psychological Dimensions of Explainability” (Dr. Markus Langer). Keynote at the RE21 Requirements Engineering conference, Online.
- 22.09.2021: “Künstliche Intelligenz und Automatisierung im Management: Vertrauensprozesse und Gerechtigkeitswahrnehmungen gegenüber automatisierten Systemen” (Dr. Markus Langer). Conference of the Arbeits- Organisations- und Wirtschaftspsychologie (AOW) 2021, Online.
- 07.09.2021: “Pragmatic Encroachment With Reasons“ (JProf. Dr. Eva Schmidt). XXV. Kongress der Deutschen Gesellschaft für Philosophie, Online.
- 01.09.2021: “Investigating trust in automated systems and human-automation teams in personnel selection” (Dr. Markus Langer). European Network for Selection Researchers (ENESER), Zürich, Switzerland.
- 10.08.2021: “Philosophy meets computer science: Opening the black box?” (Timo Speith). IT and Law Summer Camp, Saarbrücken.
- 20.07.2021: Keynote “Mysterious Multiplexes” (JProf. Dr. Lena Kästner). ECR Workshop in Philosophy of Mind and Cognitive Science, Online.
- 12.07.2021: “Reason-Giving XAI and Responsibility“ (JProf. Dr. Eva Schmidt, paper co-authored with Kevin Baum, Susanne Mantel and Timo Speith). Rhine-Ruhr Epistemology Meeting, Online.
- 26.05.2021: “The Role of XAI for Meta-Trust and Trust Propagation – Psychological and Philosophical Perspectives” (Dr. Markus Langer, Kevin Baum, and Sarah Sterz). Issues in XAI: Understanding and Explaining in Healthcare, Online.
- 25.05.2021: “The Promises of: Understanding, Explanation and Discovery” (JProf. Dr. Lena Kästner and Timo Speith). Issues in XAI: Understanding and Explaining in Healthcare, Online.
- 19.05.2021: “Wer ist verantwortlich für KI-gestützte Entscheidungen?“ (JProf. Dr. Eva Schmidt). Ringvorlesung “Gratwanderung Künstliche Intelligenz – eine interdisziplinäre Ringvorlesung zum Verhältnis von Mensch und Künstlicher Intelligenz”, Technische Universität Dortmund, Online.
- 18.05.2021: “XAI in Context: Glassboxing vs. Interpretability Methods“ (Lena Kästner). Eindhoven University of Technology, Online.
- 18.05.2021: “Ethical Issues of Artificial Intelligence“ (Timo Speith). Kolloquium des Wichmann Labs. Universität Tübingen, Online.
- 13.04.2021: Keynote “What is the Epistemology of XAI?” (JProf. Dr. Lena Kästner). Explainable medical AI: Ethics, Epistemology, and Formal Methods, Lorentz Center Leiden, Online.
- 05.02.2021: “Perspicuous Computing“ (with Holger Hermanns, Bernd Finkbeiner, Christof Fetzer, Raimund Dachselt, and Rupak Majumdar) Session at Design, Automation & Test in Europe (DATE) 2021, Online
- 18.12.2020: “Pragmatic Encroachment With Reasons“ (JProf. Dr. Eva Schmidt). Knowledge and Decision Colloquium, Online.
- 15.12.2020: “Modelling Psychopathology: When Correlation Waggles Its Eyebrows” Research Colloquium in Philosophy of Cognition, TU Berlin, Germany
- 29.11.2020: “Verantwortung, Vertrauen, Rechte: Über die ethische Dimension erklärbarer KI” [Responsibility, Trust, Rights: On the Ethical Dimension of Explainable AI] (Kevin Baum). Verantwortlichkeit digitalisierter Unternehmen – die ethischen und rechtlichen Auswirkungen des Einsatzes künstlicher Intelligenz. Universität Salzburg, Salzburg.
- 05.11.2020: “Mehr Autonomie durch erklärbare künstliche Intelligenzʺ [More Autonomy through Explainable Artificial Intelligence] (JProf. Dr. Eva Schmidt, Kevin Baum, and Nadine Schlicker). UX and beyond – Digitalisierung menschenzentriert gestalten. Usability in Germany conference, University of Mannheim, Mannheim, Germany.
- 05.11.2020: Panel discussion: “Ethik in der Mensch-Technik Interaktion – lohnt sich digitale Verantwortung? ʺ [Ethics in Human-Technology Interaction – Is Digital Responsibility Worth It?] (JProf. Dr. Eva Schmidt). UX and beyond – Digitalisierung menschenzentriert gestalten. Usability in Germany conference, University of Mannheim, Mannheim, Germany.
- 21.10.2020: “Causal Modelling and Computational Models” (JProf. Dr. Lena Kästner). Department of Philosophy, University of Bern, Switzerland.
- 21.10.2020: “Pragmatic Encroachment With Reasons“ (JProf. Dr. Eva Schmidt). Guest lecture series, Innsbruck, Online.
- 05.10.2020: “Moralische Hürden von Profiling und die Herausforderung der Erklärbarkeit” (Kevin Baum). Tagung »Profiling 2.0«. Thüringer Landesbeauftragten für den Datenschutz und die Informationsfreiheit (TLfDI), Erfurt.
- 21.09.2020: “Commentary on the paper ‘Ideal and Nonideal Justice, Discrimination and the Design of ADM Systems’ by Jürgen Sirsch“ (Timo Speith). Workshop of the FairAndGoodADM Project. FairAndGoodADM, Kaiserslautern, Germany.
- 21.08.2020: “Pluralism About Practical Reasons and Reason Explanations“ (JProf. Dr. Eva Schmidt). Zoom Epistemology Group, Online.
- 23.06.2020: “Pragmatic Encroachment With Reasons“ (JProf. Dr. Eva Schmidt). Virtual Metaethics Colloquia, Online.
- 16.-30.06.2020: “A Question of Morality: Is There a Double Standard When It Comes to Algorithms?” (Prof. Dr. Cornelius König, Dr. Markus Langer, and Tina Feldkamp). SIOP 2020 Virtual Conference, Online.
- 16.-30.06.2020: “Interview Technology and AI: Effects on Applicants, Evaluators, and Adverse Impact” (Dr. Markus Langer). SIOP 2020 Virtual Conference, Online.
- 19.05.2020: “Pragmatic Encroachment With Reasons“ (JProf. Dr. Eva Schmidt). Research Seminar Bielefeld, Online.
- 06.-07.03.2020: “Pragmatic Encroachment with Reasons“ (JProf. Dr. Eva Schmidt). Rutgers-Bochum Workshop in Philosophy. Rutgers University, New Brunswick, New Jersey, United States of America.
- 05.03.2020: “Fair Algorithmic Decision-Making: Unrealizable Dream or Actual Possibility?” (Timo Speith). Colloquium session, Fraunhofer Institute for Industrial Mathematics (ITWM), Kaiserslautern, Germany.
- 10.12.2019: “Pragmatic Encroachment with Reasons“ (JProf. Dr. Eva Schmidt). Winter workshop of the ERC Project Competence and Success in Epistemology and Beyond, Helsinki, Finland.
- 29.11.2019: “Verantwortung, Vertrauen, Rechte – Über die ethische Dimension erklärbarer KI“ [Responsibility, Trust, Rights – On the Ethical Dimensions of Explainable AI] (Kevin Baum). Conference on the Responsibility of Digitized Companies – The Ethical and Legal Consequences of the Use of Artificial Intelligence, Edmundsburg, Salzburg, Austria.
- 26.11.2019: “Phänomene erklären, Mechanismen entdecken” [Explaining Phenomena, Discovering Mechanisms] (JProf. Dr. Lena Kästner). Departmental colloquium for philosophy, University of Stuttgart, Stuttgart, Germany.
- 07.11.2019: “Explainable Intelligent Systems (EIS)“ (Prof. Holger Hermanns, JProf. Dr. Eva Schmidt, JProf. Dr. Lena Kästner, and Kevin Baum; poster and presentation as planning grant lightning talk). Kick-off symposium Artificial Intelligence and the Society of the Future by the Volkswagen Foundation, Herrenhausen Castle, Hanover, Germany.
- 06.10.2019: “Psychological Perspectives on the Automation in Management“ (Dr. Markus Langer). University of Calgary, Canada.
- 02.10.2019: “XAI and Psychology – Issues and Experimental Investigations“ (Prof. Dr. Cornelius König, Dr. Markus Langer, Nadine Schlicker). Workshop: Issues in Explainable AI – Black Boxes, Recommendations and Levels of Explanation, Saarland University, Saarbrücken, Germany.
- 26.09.2019: “Explainable Artificial Intelligence als neues Thema für die AOW Psychologie” [Explainable Artificial Intelligence as a New Topic for Work, Organizational, and Business Psychology] (Prof. Dr. Cornelius König, JProf. Dr. Lena Kästner, Dr. Markus Langer). 11th Convention of the Panel for Professional, Organizational and Economic Psychology of the German Society for Psychology, Brunswick, Germany.
- 25.09.2019: “Explainability as a Non-Functional Requirement“ (Maximilian Köhl and Timo Speith). 27th International Requirements Engineering Conference, Jeju Island, South Korea.
- 13.09.2019: “Anforderungen an die Erklärbarkeit maschinengestützter Entscheidungen” [Requirements on the Explainability of Machine-Aided Decisions] (Kevin Baum and Andreas Sesing). 20th Fall Academy of the German Foundation for Law and Computer Science (DSRI), Bremen, Germany.
- 01.07.2019: “How Can We Make Sense of Explainable Intelligent Systems?” (JProf. Dr. Lena Kästner, JProf. Dr. Eva Schmidt, and Kevin Baum; invited talk). Leverhulme Centre for the Future of Intelligence, Cambridge, United Kingdom.
- 07.06.2019: “Modelling Mental Disorders – Cognitive, Causal, Both or Neither?“ (JProf. Dr. Lena Kästner; invited talk). Modelling workshop, Australian National University, Canberra, Australia.
- 06.-07.06.2019: “Pragmatic Encroachment with Reasons“ (JProf. Dr. Eva Schmidt). Workshop: Dimensions of Rationality – Practical Reasons for Belief and Other Attitudes, Goethe University Frankfurt, Frankfurt am Main, Germany.
- 27.05.2019: “Künstliche intelligente Systeme – Vertrauen und Verstehen” [Artificial Intelligent Systems: Trust and Understanding] (JProf. Dr. Eva Schmidt; part of a lecture series on artificial intelligence). University of Zurich, Switzerland.
- 22.05.2019: “Explaining Cognitive Systems: Mechanisms, Predictions, or Both?” Psychology Department of Saarland University, Germany.
- 04.-05.04.2019: “Statistical Evidence and the Role of Algorithms in Criminal Justice” (JProf. Dr. Eva Schmidt and Andreas Sesing). Evidence in Law and Ethics (ELE 2019), Jagiellonian University, Kraków, Poland.
- 29.03.2019: “Die Evolution der künstlichen Intelligenz – Warum wir uns einmischen können und sollten“ [The Evolution of Artificial Intelligence – Why We Can and Should Interfere] (Dr. Markus Langer). Opening talk at the Be-In Student Congress of the Association of German Professional Psychologists, Heidelberg, Germany.
- 07.-09.03.2019: “Artificial Intelligence in Psychological Assessment: Why Psychology Needs to Care“ (Dr. Markus Langer). International Convention of Psychological Science, Paris, France.
- 07.03.2019: “Trusting Artificial Experts“ (Felix Bräuer and Kevin Baum). Workshop: Epistemic Trust in the Epistemology of Expert Testimony, Friedrich-Alexander University, Erlangen, Germany.
Interdisciplinary panels
The EIS panel “Explainable Intelligent Systems and the Trustworthiness of Artificial Experts” was accepted to and held at the following conferences:
- 9th International Conference on Information, Law and Ethics (ICIL), University of Rome Tor Vergata, July 11-13, 2019
- European Conference for Cognitive Science 2019, Ruhr University Bochum, Germany, September 2-4, 2019
- 27th Annual Meeting of the European Society for Philosophy and Psychology, Athens, September 5-8, 2019
The panel included the following papers:
- “Why Explainable AI Matters Morally” (Kevin Baum, Holger Hermanns, and Timo Speith)
- “Artificial Intelligent Systems: Reasonable Trust Requires Rationalizing Explanations” (Felix Bräuer, Eva Schmidt, and Ulla Wessels)
- “What Kind of Explanation Shall Artificial Experts Provide?” (Lena Kästner, Daniel Oster, and Andreas Sesing)
- “How the Kind of Explanation Affects People’s Reactions to Artificial Experts” (Tina Feldkamp, Cornelius König, and Markus Langer)
Public Outreach Events
Future talks and presentations
Past public outreach presentations:
- 04.24: “Die ethische Landschaft der KI – Zwischen Dystopie und Realität” [The Ethical Landscape of AI – Between Dystopia and Reality] (Thorsten Helfer), talk at the DUCAH-Forum, change hub, Berlin.
- 12.23: “KI-Regulierung in der EU: Was der AI Act bringen wird (und was nicht)” [AI Regulation in the EU: What the AI Act Will Deliver (and What It Won’t)] (Dr. Andreas Sesing-Wagenpfeil), talk at the Alumni Day of Helmut Schmidt University, Hamburg.
- 11.23: “Künstliche Intelligenz und Ethik: Navigieren im ethischen Dilemma” [Artificial Intelligence and Ethics: Navigating the Ethical Dilemma] (Thorsten Helfer), talk at a teacher training, Bildungscampus Saarland.
- 09.23: “Transatlantic Fireside Chat – Real-World Innovations of AI” (JProf. Dr. Eva Schmidt with Rebecca Nugent and Sebastian Buschjäger), Digitale Woche Dortmund.
- 09.23: “Künstliche Intelligenz: Erkenntnistheoretische und ethische Fragen” [Artificial Intelligence: Epistemological and Ethical Questions] (JProf. Dr. Eva Schmidt), talk at Zukunftskongress Logistik, Dortmund.
- 07.23: “ChatGPT und Datenschutz” [ChatGPT and Data Protection] (Prof. Dr. Georg Borges), talk at an online workshop, Saarbrücken.
- 05.23: “Erklärbare Künstliche Intelligenz” [Explainable Artificial Intelligence] (JProf. Dr. Eva Schmidt), talk at Fachtag Praktische Philosophie, TU Dortmund.
- 03.23: “Entmystifizierung eines Hypes: ChatGPT zum Anfassen” [Demystifying a Hype: Hands-On ChatGPT] (Prof. Dr. Markus Langer), talk at the German Research Center for Artificial Intelligence (DFKI).
- 01.03.2023: “Transparenz – warum ist das wichtig? Digitalisierung, Künstliche Intelligenz, Politik” [Transparency – why is it important? Digitalization, Artificial Intelligence, Politics] (JProf. Dr. Eva Schmidt), talk at Alliance 90/The Greens Dortmund-Hombruch.
- 02.2023: Conference on Legal Aspects of Digitalising Public Administration in Japan and Germany (hybrid) (Prof. Dr. Georg Borges).
- 12.01.2022: “Künstliche Intelligenz und Verantwortung” [Artificial Intelligence and Responsibility] (Kevin Baum). Paul Fritsche Stiftung Wissenschaftliches Forum, Homburg, Germany.
- 11.-13.07.2022: “Understanding via Exemplification in XAI“ (Sara Mann), Intelligent Systems in Context, University of Bayreuth, Germany.
- 29.03.2022: “Erklärbare Intelligente Systeme (mit Zweck)” [Explainable Intelligent Systems (with Purpose)] (Kevin Baum and Markus Langer). Round table “KI und Medien gemeinsam gestalten” [Shaping AI and Media Together], Online.
- 23.03.2022: “AI. Friend, foe or fad” (Kevin Baum). Panel contribution at the Booster Conference 2022, Bergen, Norway.
- 17.03.2022: “Eckstein-Stammtisch: Das GUTE ABEND-Gespräch mit Kevin Baum” (Kevin Baum). Interview with the Arbeitskammer des Saarlandes on AI and ethics, Saarbrücken, Germany.
- 26.-27.11.2021: “Diskriminierende Algorithmen – Wie kommt das Vorurteil in die Maschine und wie gehen wir damit um?” [Discriminatory Algorithms – How Does Bias Get into the Machine, and How Do We Deal with It?] (Kevin Baum). Workshop Digitalethik & junge politische Philosophie. Spotlight: Digital-Verantwortung, Online.
- 20.11.2021: “Diskriminierende Algorithmen – Wie kommt das Vorurteil in die Maschine und wie gehen wir damit um?” [Discriminatory Algorithms – How Does Bias Get into the Machine, and How Do We Deal with It?] (Kevin Baum). RCDS Westkonferenz 2021, Saarbrücken, Germany.
- 30.06.2021: “Fake News und Desinformation” [Fake News and Disinformation] (Kevin Baum, Stephan Schweitzer, and Robert Reick). 2. Tag der digitalen Bildung im Saarland: “Perspektiven auf die Zukunft der digitalen Bildung”. Ministerium für Bildung und Kultur, Online.
- 30.06.2021: “Erklärbarkeit künstlicher Intelligenz” [Explainability of Artificial Intelligence] (Timo Speith). 2. Tag der digitalen Bildung im Saarland: “Perspektiven auf die Zukunft der digitalen Bildung”. Ministerium für Bildung und Kultur, Online.
- 01.06.2021: “Diskriminierung durch künstliche Intelligenz“ [Discrimination Through Artificial Intelligence] (Prof. Dr. Georg Borges). Trierer Gespräche zu Recht und Digitalisierung, Online.
- 20.04.2021: “Algorithmische Diskriminierung” [Algorithmic Discrimination] (Kevin Baum). Union Stiftung, Online.
- 10.03.2021: “Erklärbare KI“ [Explainable Artificial Intelligence] (Timo Speith). e-Studientage für die Begabtenförderung. InfoLab Saar, Online.
- 09.02.2021: “Welche Rolle spielen Algorithmen im Zusammenhang mit Desinformation? Entscheiden zukünftig Algorithmen und Künstliche Intelligenz nicht mehr nur darüber, was wir wann und warum sehen, sondern erstellen gar Fake News?” [What Role Do Algorithms Play in the Context of Disinformation? Will Algorithms and Artificial Intelligence in the Future Not Only Decide What We See, When, and Why, but Even Create Fake News?] (Kevin Baum). Safer Internet Day 2021. Landesmedienanstalt Saarland, Saarbrücken.
- 26.01.2021: “Neues Verbundprojekt zu Entscheidungen von Künstlicher Intelligenz” [New Collaborative Project on Decisions of Artificial Intelligence] (JProf. Dr. Eva Schmidt).
- 19.11.2020: “Künstliche Intelligenz” [Artificial Intelligence] (Kevin Baum). Transformationsdialog “Künstliche Intelligenz”. Arbeitskammer des Saarlandes, Saarbrücken.
- 12.11.2020: “Verhaltensmanipulation durch digitales ‘Hexenwerk’” [Behavior Manipulation Through Digital ‘Witchcraft’] (Kevin Baum). Online talk with panel discussion. Landeszentrale für politische Bildung des Saarlandes together with the non-profit association Algoright e.V., Online.
- 07.10.2020: “KI: Fairness von Algorithmen” [AI: Fairness of Algorithms] (Timo Speith). e-Studientage für die Begabtenförderung. InfoLab Saar, Online.
- 19.09.2020: “Ethics for Nerds – Ethische Grundlagen für Informatiker*innen und Naturwissenschaftler*innen” [Ethics for Nerds – Ethical Foundations for Computer Scientists and Natural Scientists] (Timo Speith). Summer School “Auch Nerds brauchen Ethik! – (Unternehmens-)Verantwortung in der Digitalisierung, Informatik und Künstlichen Intelligenz” [Nerds Need Ethics, Too! – (Corporate) Responsibility in Digitalization, Computer Science, and Artificial Intelligence]. Consulting Akademie Unternehmensethik, Online.
- 16.09.2020: “Fünf ethische Herausforderungen im Zeitalter der digitalisierten Medizin” [Five Ethical Challenges in the Age of Digitized Medicine] (Kevin Baum). Opening of the 2020/2021 continuing education year, Ärztekammer des Saarlandes, Saarbrücken.
- 15.11.2019: “Was ist XAI, warum ist das wichtig und was hat UX damit zu tun?“ [What is XAI, Why Is It Important and What Has UX Got to Do with It?] (JProf. Dr. Eva Schmidt and Kevin Baum). Ergosign, Saarbrücken, Germany.
- 02.10.2019: “Erklärbare Intelligente Systeme: Verstehen. Verantwortung. Vertrauen.” [Explainable Intelligent Systems: Understanding. Responsibility. Trust.] (Lena Kästner). Issues in Explainable AI: Black Boxes, Recommendations, and Levels of Explanation. Saarland University, Saarbrücken, Germany.
- 23.09.2019: “KI und Klimaschutz” [AI and Climate Protection] (Kevin Baum). 3rd fireside chat of the Fridays for Future group at Saarland University, Saarbrücken, Germany.
- 03.09.2019: “Warum und wie Maschinenethik und die Erklärbarkeit künstlicher Intelligenz zusammengehören” [Why and How Machine Ethics and the Explainability of Artificial Intelligence Belong Together] (Kevin Baum). As part of the “Stipendiatinnen und Stipendiaten machen Programm” scholarship holder seminar Autonomous Systems – Between the Poles of Human and Machine by the German Academic Scholarship Foundation, Saarbrücken, Germany.
- 25.06.2019: “Was ist vertrauenswürdige künstliche Intelligenz?” [What Is Trustworthy Artificial Intelligence?] (Kevin Baum). Working Group for Technology, Innovation and Research (AG-TIF), Saarlandic Task Force on Economy, Saarbrücken, Germany.
- 24.06.2019: “Wenn Computer Lebenserwartung vorhersagen” [When Computers Predict Life Expectancy] (Kevin Baum). Sankt Jakobus Hospice, Saarbrücken, Germany.
- 12.06.2019: “Erklärbare KI – Was ist gewünscht, was notwendig, was ist möglich?” [Explainable AI – What Is Desired, What Is Necessary, What Is Possible?] (Kevin Baum). 37th Conference of the Commissaries for the Freedom of Information, Saarbrücken, Germany.
- 12.06.2019: “Transparenz und Erklärbarkeit algorithmischer Entscheidungssysteme – Begründung, Schwierigkeiten, Trade-Offs” [Transparency and Explainability of Algorithmic Decision Systems – Justification, Difficulties, Trade-Offs] (Kevin Baum). 37th Conference of the Commissaries for the Freedom of Information, Saarbrücken, Germany.
- 11.06.2019: “Algorithmen in der Medizin – Ethische Aspekte” [Algorithms in Medicine – Ethical Aspects] (Kevin Baum). 372nd Saarlandic Conference on Pain.
- 25.05.2019: “Dimensionen der Maschinenerklärbarkeit” [Dimensions of Machine Explainability] (Timo Speith). Saarland University Open Day, Saarbrücken, Germany.
- 25.05.2019: “Wie Maschinen lernen” [How Machines Learn] (Timo Speith; AI workshop). Saarland University Open Day, Saarbrücken, Germany.
- 25.05.2019: “Warum wir KI verstehen können müssen” [Why We Have to Be Able to Understand AI] (Kevin Baum). Saarland University Open Day, Saarbrücken, Germany.
- 21.05.2019: “Soll es in der Informatik eine Professionsethik geben?” [Should There Be Professional Ethics in Computer Science?] (Timo Speith; invited workshop participation). Office of the Protestant Church, Hanover, Germany.
- 02.05.2019: Forum discussion “Saarbrücken – Digital?” with the candidates for the mayoral election in Saarbrücken in 2019 (Dr. Markus Langer, Kevin Baum, and Timo Speith). Saarland University campus, Saarbrücken, Germany.
- 08.04.2019: “KI! Aber wie? – Wie Künstliche Intelligenz gestaltet werden sollte” [AI! But How? – How Artificial Intelligence Should Be Designed] (Timo Speith). Lions Club Saar East, Neunkirchen/Saar, Germany.