Philosophy & Computer Science

Summer School 17th to 22nd July 2023, Bayreuth, Germany

Welcome to our summer school at the intersection of computer science and philosophy!

This intensive program is designed for Ph.D. students, graduate students, and advanced undergraduates who are interested in exploring topics at the intersection of philosophy and computer science.

During the summer school, participants will have the opportunity to engage with leading experts in the field, discuss cutting-edge research, and work on collaborative projects. The program will cover a wide range of topics, including intelligent systems, fairness, explainability, privacy, security, and healthcare.

In addition to formal lectures and seminars, the summer school will include hands-on workshops, a range of interactive group discussions, and social activities. This is a unique opportunity to learn about and reflect on the exciting and sometimes contentious relationship between computer science and philosophy, connect with your peers, and meet international experts in the field.

We’re looking forward to meeting you for this intellectual adventure!



Joseph Halpern

Joe Halpern received his Ph.D. in Mathematics from Harvard, after spending two years as head of the Mathematics Department at Bawku Secondary School, in Ghana. After a year as a postdoc at MIT, he joined the IBM Almaden Research Center, where he remained until 1996. He then joined the CS department at Cornell, where he is currently a full professor. Halpern received the Gödel Prize and the Dijkstra Prize, jointly with his former student Yoram Moses, for his work on applying reasoning about knowledge to analyzing multi-agent systems. He also received the ACM Autonomous Agents Research Award and the ACM/AAAI Newell Award, was a Guggenheim and Fulbright Fellow, and is a Fellow of AAAI, ACM, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences. He has coauthored six patents, three books (“Reasoning About Knowledge”, “Reasoning about Uncertainty”, and “Actual Causality”), and over 300 technical publications.

A Causal Analysis of Harm
Friday, 21.07.2023 at 4:30pm

It has proved notoriously difficult to define harm. Indeed, it has been claimed that the notion of harm is a “Frankensteinian jumble” that should be replaced by other well-behaved notions. On the other hand, harm has become increasingly important as concerns about the potential harms that may be caused by AI systems grow. Indeed, the European Union’s draft AI Act mentions “harm” over 25 times and points out that, given its crucial role, it must be defined carefully.

I start by defining a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of the definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. I show that our definition is able to handle the problematic examples from the literature. I extend the definition to a quantitative notion of harm, first in the case of a single individual, and then for groups of individuals. I show that the “obvious” way of doing this (just taking the expected harm for an individual and then summing the expected harm over all individuals) can lead to counterintuitive or inappropriate answers, and discuss alternatives, drawing on work from the decision-theory literature.

This is joint work with Sander Beckers and Hana Chockler.

Johanna Thoma

Johanna Thoma works on ethics, philosophy of public policy, decision theory and economic methodology, and has been Professor of Ethics at the University of Bayreuth since March 2023. Previously, she was an Associate Professor at the Department of Philosophy, Logic and Scientific Method at the London School of Economics. She is also an External Member of the Munich Center for Mathematical Philosophy, and remains a Visiting Professor at LSE. In her recent work, she has focused on philosophical questions to do with the regulation of new technologies, in particular how to take into account risk and uncertainty.

Risk Imposition by Artificial Agents: The Moral Proxy Problem
Saturday, 22.07.2023 at 9:30am

Does AI raise radically new problems for ethics? It is tempting to think that this is not so; that the same ethical questions arise for artificial agents as for human agents and that they should be answered in the same ways. In this talk, I will explore one reason why this null hypothesis is mistaken. I will argue that, where artificial agents replace previously decentralised human decision-making, the possibility of centralised control raises new and distinct problems. Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy, and this is a problem that does not arise for human agents. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual users of the artificial agents — or whether they should be moral proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Who we think an artificial agent is a moral proxy for determines from which agential perspective the choice problems artificial agents will be faced with should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we, rather, consider the expected aggregate effects of the many choices made by all the artificial agents of a particular type all at once? The talk will focus in particular on how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the moral proxy problem to be both practically relevant and difficult.

Marija Slavkovik

Marija Slavkovik is a full professor at the Faculty of Social Sciences of the University of Bergen in Norway. Her background is in computer science and artificial intelligence. She holds a master’s degree in computational logic and a PhD in computational social choice. She has been doing research in machine ethics since 2012. Marija works on formalising ethical collective decision-making. She has held several seminars, tutorials, and graduate courses on machine ethics and AI ethics.

Machine Ethics from an AI Perspective
Monday, 17.07.2023 at 4:30pm

In 2006, machine ethics was defined as the field that “is concerned with the behaviour of machines towards human users and other machines”. Since then, the terms computational ethics and AI ethics have also been used. Sometimes they are used to denote completely different problems and sometimes different dimensions of the same concern: how should machines behave? From the point of view of philosophy, an oversimplified statement of the machine ethics problem is: what does it mean for a machine to behave ethically? From the point of view of AI, what is studied in machine ethics (again putting it overly simply) is: how do we automate moral reasoning and decision-making? The talk gives an overview of machine ethics from its official inception to today, following the work in AI. A related but pertinent point that will also be discussed is: are AI-enhanced machines and software de facto acting as our moral arbiters?

Invited Speakers

Invited speakers are Aldo Faisal, Susanne Gaube, Joseph Halpern, Hendrik Heuer, Eva Lermer, Dimitri Coelho Mollo, Wojciech Samek, Elizabeth Seger, Tobias Seitz, Marija Slavkovik, Johanna Thoma, Mariya Toneva, Kate Vredenburgh, and Herbert Zech. See below for pictures and biographies of our speakers.

Susanne Gaube

Center for Leadership and People Management

Dr. Susanne Gaube currently leads the project “Human-AI Interaction in Healthcare” at the Center for Leadership and People Management at LMU Munich and works at the Department of Infection Prevention and Infectious Diseases at the University Hospital Regensburg. She completed her Ph.D. in Psychology at the University of Regensburg, focusing primarily on health behavior change. Currently, her research concentrates on topics around the use of digital technologies, especially AI-enabled systems, in healthcare and medicine. Susanne Gaube is interested in understanding how AI-enabled decision support systems influence clinical decision-making and patient outcomes. Her interdisciplinary work has been published in many international, high-impact journals.


The Effect of Diagnostic AI-Generated Advice on Clinical Decision-Making

Thursday, 20.07.2023 at 9:30am

Hendrik Heuer

Image Copyright: Cosima Hanebeck

Institute of Information Management Bremen

Dr. Hendrik Heuer is a postdoctoral researcher at the Institute of Information Management Bremen (ifib) and the Centre for Media, Communication, and Information Research at the University of Bremen. His research focuses on human-computer interaction and machine learning. He is currently working on ways to combat misinformation. He studied and worked in Bremen, Stockholm, Helsinki, and Amsterdam and was a Visiting Postdoctoral Research Fellow at Harvard University.


Fairness, Accountability, Transparency & Ethics in Machine Learning

Wednesday, 19.07.2023 at 2:30pm

Anne-Kathrin Kleine

Center for Leadership and People Management

Dr. Anne-Kathrin Kleine is a postdoctoral researcher in the “Human-AI Interaction in Healthcare” project at LMU Munich. She completed her Ph.D. at the University of Groningen in the Netherlands, focusing on healthy entrepreneurship practices and challenges of organizational change and transition processes. She has a strong background in data analysis and data-driven storytelling, focusing on meta-analysis, longitudinal and person-centered data modeling, and visualization. Most of her current research projects revolve around human-AI interaction in occupational contexts, with the aim of maximizing its benefits for individuals and organizations. Her work has been published in high-impact international journals. In addition to conducting research, she regularly gives talks and workshops for scientists and practitioners at international events on data analysis techniques and on opportunities for optimizing the ways AI-enabled tools and recommendations aid human decision-making.


A Glimpse Into the Future – How AI-Enabled Precision Psychiatry Tools Advance Mental Healthcare

Thursday, 20.07.2023 at 11:30am

Eva Lermer

University of Applied Sciences Augsburg

Center for Leadership and People Management

Eva Lermer is a Research Professor of Organizational Psychology at the University of Applied Sciences Augsburg and leads the research project “Human-AI Interaction in Healthcare” at the LMU Munich. She has a background in Psychology and Sociology and holds a Ph.D. and habilitation in Psychology from LMU Munich. Her research interests include decision making, positive psychology, and the impact of challenges like COVID-19 on diverse groups. Her work has been published in international journals and she has authored several books and book chapters to promote practical application of psychological research findings.


The Effect of Diagnostic AI-Generated Advice on Clinical Decision-Making

Thursday, 20.07.2023 at 9:30am

Dimitri Coelho Mollo

Umeå University, Sweden

I am an Assistant Professor with a focus on Philosophy of Artificial Intelligence at the Department of Historical, Philosophical and Religious Studies at Umeå University, Sweden, and focus area coordinator at TAIGA – centre for transdisciplinary AI, for the area Understanding and Explaining Artificial Intelligence. I am also an external Principal Investigator at the Science of Intelligence Cluster in Berlin, Germany.

My areas of research are Philosophy of Artificial Intelligence and Cognitive Science, and Philosophy of Science. I am also interested in the ethics of current and future use of AI systems, Philosophy of Biology, Philosophy of Mind, and Philosophy of Climate Science.

My main research focus is on foundational and epistemological issues in Artificial Intelligence and the cognitive sciences, regarding, among others, the concepts of representation, computation, and intelligence.


Models of (Artificial) Intelligence: Idealisation and Behaviour

Tuesday, 18.07.2023 at 9:30am

Wojciech Samek
Image Copyright: Christian Kielmann 


Technical University Berlin

Fraunhofer Heinrich Hertz Institute


Wojciech Samek is a professor in the EECS Department at the Technical University of Berlin and jointly heads the AI Department at the Fraunhofer Heinrich Hertz Institute. He studied Computer Science at Humboldt University of Berlin and received his PhD in Machine Learning from the Technical University of Berlin in 2014. He is a Fellow at BIFOLD – the Berlin Institute for the Foundations of Learning and Data, the ELLIS Unit Berlin, and the DFG Research Unit DeSBi. Furthermore, he is a senior editor of IEEE TNNLS, an editorial board member of Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee and of Germany’s Platform for Artificial Intelligence. He is co-author of more than 180 publications, co-editor of two Springer books on Explainable AI, and recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award and the 2022 Digital Signal Processing Best Paper Prize.


From Black-Box Models to Human-Understandable Explainable AI

Wednesday, 19.07.2023 at 9:30am

Elizabeth Seger

Centre for the Governance of AI

Centre for the Study of Existential Risk

Elizabeth Seger is a research scholar at the Centre for the Governance of AI (GovAI) and a research affiliate at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. Elizabeth leads GovAI’s research streams on AI Democratization – seeking to better understand what it means to democratize AI and how this understanding might be reflected in AI model publishing standards – and on Epistemic Security – investigating the impacts of emerging AI capabilities on how people produce, access, share, and appraise information.
Elizabeth completed her PhD in Philosophy of Science at the University of Cambridge. Her PhD research investigated foundations for trust in user-AI relationships in analogy to trust in human lay-expert relationships. She also holds an MPhil in History and Philosophy of Science from the University of Cambridge and a BSc in Human Biology and Society from UCLA.


AI Democratization: What it Means, How It’s Achieved, and the Role of Open-Source Model Sharing
Tuesday, 18.07.2023 at 11:30am

Tobias Seitz

Google Safety Engineering Center in Munich

As a User Experience Researcher, I’m the voice of the user when teams make product decisions at the Google Safety Engineering Center in Munich. My job is to uncover people’s needs, attitudes, and behaviors and translate the insights into a product that makes people’s lives easier. I’ve interviewed and surveyed thousands of people around the world. Before joining Google in 2018, I conducted research in Usable Security & Privacy and earned my PhD at the LMU Munich.


A Practical Primer on User Research in Security & Privacy

Friday, 21.07.2023 at 11:30am

Mariya Toneva

Max Planck Institute for Software Systems

Mariya Toneva is a faculty member at the Max Planck Institute for Software Systems, where she leads the Bridging AI and Neuroscience (BrAIN) group. Her research is at the intersection of Machine Learning, Natural Language Processing, and Neuroscience, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems.


Bridging Language in Machines with Language in the Brain

Tuesday, 18.07.2023 at 2:30pm

Kate Vredenburgh

Kate Vredenburgh is an Assistant Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics. She works in the philosophy of social science, political philosophy, and the philosophy of computing, on topics that intersect with ethics, epistemology, and metaphysics.


Algorithmic Fairness Beyond Error Rates

Thursday, 20.07.2023 at 11:30am

Herbert Zech

Humboldt-Universität zu Berlin

Weizenbaum Institute

Please download the curriculum vitae of Herbert Zech here.


Legal Responsibility for AI

Friday, 21.07.2023 at 9:30am



Monday, 17.07.2023 – Opening (Bldg.: NW II, Room: H 18)

Time Activity Person Title
14:30 – 16:00 Opening Lena Kästner, Olivier Roy
16:30 – 18:00 Keynote Marija Slavkovik Machine Ethics from an AI Perspective


Tuesday, 18.07.2023 – Intelligent Systems (Bldg.: RW I, Room: S 62)

Time Activity Person Title
09:30 – 11:00 Lecture Dimitri Coelho Mollo Models of (Artificial) Intelligence: Idealisation and Behaviour
11:30 – 13:00 Lecture Elizabeth Seger AI Democratization: What it Means, How It’s Achieved, and the Role of Open-Source Model Sharing
14:30 – 16:00 Lecture Mariya Toneva Bridging Language in Machines with Language in the Brain
16:30 – 17:30 Poster Session
18:00 – 19:00 Social Event Botanical Garden Tour


Wednesday, 19.07.2023  – Fairness & Explainability (Bldg.: RW I, Room: S 62)

Time Activity Person Title
09:30 – 11:00 Lecture Wojciech Samek From Black-Box Models to Human-Understandable Explainable AI
11:30 – 13:00 Lecture Kate Vredenburgh Algorithmic Fairness Beyond Error Rates
14:30 – 16:00 Lecture Hendrik Heuer Fairness, Accountability, Transparency & Ethics in Machine Learning
16:30 – 18:00 Group Work Group A: Auditing Recommendations of Machine Learning Systems (Hendrik Heuer; Bldg.: RW II, Room: S 40)
Group B: Evaluating the alignment between human brains and language models (Mariya Toneva; Bldg.: RW II, Room: S 46)


Thursday, 20.07.2023 – AI in Healthcare (Bldg.: RW I, Room: S 62)

Time Activity Person Title
09:30 – 11:00 Lecture Eva Lermer & Susanne Gaube The Effect of Diagnostic AI-Generated Advice on Clinical Decision-Making
11:30 – 13:00 Lecture Anne-Kathrin Kleine A Glimpse Into the Future – How AI-Enabled Precision Psychiatry Tools Advance Mental Healthcare
14:30 – 16:00 Lecture Aldo Faisal AI for Healthcare: Why It Exemplifies Everything That Makes AI Hard and Important
16:30 – 18:00 Social Event City Tour


Friday, 21.07.2023 – Privacy & Security (Bldg.: RW I, Room: S 62)

Time Activity Person Title
09:30 – 11:00 Lecture Herbert Zech Legal Responsibility for AI
11:30 – 13:00 Lecture Tobias Seitz A Practical Primer on User Research in Security & Privacy
14:30 – 16:00 Group Work Group A: Crafting a Great Research Plan (Tobias Seitz; Bldg.: GW II, Room: S 6)
Group B: TBA (TBA; Bldg.: GW II, Room S 8)
16:30 – 18:00 Keynote Joseph Halpern A Causal Analysis of Harm (Bldg.: RW I, Room: H 25)


Saturday, 22.07.2023 – Summary & Closing (Bldg.: RW I, Room: H 25)

Time Activity Person Title
09:30 – 11:00 Keynote Johanna Thoma Risk Imposition by Artificial Agents: The Moral Proxy Problem
11:30 – 13:00 Final Discussion


Deadline: April 9, 2023 (extended from March 31, 2023) – Application is closed!

The participation fee for the summer school is 90€. This includes lunch and coffee as well as a guided city tour.

If you’re interested in joining the summer school, please fill in this document and send it together with your current transcript of records and a CV to phil+cs(at)uni-bayreuth(dot)de. There is a limited number of scholarships (max. 500€ per person) available to support those in need of assistance with travel and accommodation costs. Please indicate in your application if you’d like to be considered for this.

The keynote lectures are public and free to attend for everyone; no registration is required. Places for joining the full program are limited, so please apply. There will be no streaming of lectures or keynotes.

Senior researchers who are interested in joining certain lectures or days should contact us at phil+cs(at)uni-bayreuth(dot)de.

If you are a student at University of Bayreuth, you might be able to take certain parts of the summer school as coursework. For details please refer to CMlife teaching entries; there’s no need to fill in the application form. Of course, you can also apply as a regular participant to join the full program.

Poster Session

The Summer School will include a poster session on Tuesday afternoon. All participants will be invited to present a poster on the topic they are working on. We can accommodate posters up to a size of A0.

If you are a participant wanting to present a poster and have not contacted us concerning that, please get in touch at phil+cs(at)uni-bayreuth(dot)de.


Travel & Accommodation

Transportation Guide

To get to Bayreuth you have three main options:

By plane:

If you plan to come to Bayreuth by plane, the nearest airport is in Nuremberg. From there, take a train from Nuremberg Airport to Nuremberg main station and then a train to Bayreuth (Bayreuth Hauptbahnhof). The journey takes about 1h 15min.

Better flight connections may also be available to Munich Airport, from which you can reach Bayreuth by train via Nuremberg. This takes around 3h 30min, and you need to change trains at Munich main station and Nuremberg main station.

Detailed information about the train can be found here.

By train:

If you travel by train, the best option is to take a train to Bamberg or Nuremberg, since Bayreuth, unfortunately, does not have a long-distance train station. From there you can take a local train to Bayreuth. Be careful not to end up on the wrong train: from both directions, the service runs as two coupled trains, only one of which continues to Bayreuth, so make sure you board the correct one.

More detailed information can be found on the Deutsche Bahn website here. The train station in Bayreuth is called Bayreuth Hauptbahnhof.

By car:

If you come by car, Bayreuth is very easy to reach. The university is located right next to the Autobahn A9 exit “Bayreuth-Süd”. There are plenty of free parking spots at the university.


Sights in and around Bayreuth

Hermitage Bayreuth

The Hermitage in Bayreuth is a historical park with water features and buildings, created from 1715 onwards, which is one of the city’s main sights.

Text: Wikipedia

Image: © Meike Kratzer

New Palace

The New Palace was built from 1753 onwards, after a fire destroyed the margrave’s previous residence in January 1753. By 1758, it was essentially completed. Despite its almost modest appearance, it is one of the major pieces of German architecture of the 18th century.

Text: Wikipedia, Bayrische Schlösserverwaltung

Image: © Ramona Schirner

Bayreuth Festival Theatre

This opera house north of Bayreuth was built by the 19th-century German composer Richard Wagner and is dedicated solely to the performance of his stage works. It is the venue for the annual Bayreuth Festival, for which it was specifically conceived and built.

Text: Wikipedia

Image: © Corinna Weih

Ecological-Botanical Garden

The Ecological-Botanical Garden is a central scientific institution of the University of Bayreuth with a focus on ecology and the environment in research and teaching. On 16 hectares of open space and 6,000 m² of greenhouse area, it presents near-natural vegetation types from all over the world.

Text: Universität Bayreuth

Image: © Universität Bayreuth


Wilhelminenaue

Created specifically for the 2016 state horticultural show, the Wilhelminenaue (roughly: “Meadow of the Wilhelmine”) has been freely accessible since 2017. The park offers recreation and nature experience for young and old as well as new habitat for rare plants and animals.

Text: Bayreuth Tourismus

Image: © Meike Kratzer


Friedrichstrasse

With its uniformly designed sandstone buildings, the northern section of Friedrichstrasse is the city’s boulevard. It was laid out in the middle of the 18th century together with Jean-Paul Square and has survived the times essentially unchanged.

Text: Wikipedia

Image: © Roman Henn

Scientific Organizers

Daniel Buschek – Lena Kästner – Olivier Roy – Timo Speith