This week, our head PI Lena Kästner gave a talk at the workshop “Philosophy of Science meets Machine Learning”, organized by Thomas Grote, Thilo Hagendorff and Eric Raidl from the Cluster of Excellence “Machine Learning for Science” at the University of Tübingen. At the workshop, international experts debated how machine learning transforms society as well as science and philosophy of science. Key topics included the potential machine learning offers for scientific inquiry, the challenges it raises for both science and society, and how the use of machine learning might be regulated. During the discussions, clarifying core concepts such as “understanding”, “explanation”, “modelling”, “prediction” and “causal inference” took center stage. Once again it became clear that the field of explainable AI is young and vibrant. While important foundational questions still need answering, it seems evident that collaborations between philosophers and machine learning experts will direct future scientific inquiries into both natural and artificial systems.
Machine learning not only transforms businesses and the social sphere; it also fundamentally transforms science and scientific practice. The workshop focuses on this latter issue. It aims to discuss whether, and how exactly, recent developments in the field of machine learning might transform the process of scientific inquiry. To this end, it sets out to analyse the field of machine learning through the lenses of philosophy of science, epistemology, research ethics and cognate fields such as the sociology of science. The workshop will bring together philosophers from different backgrounds (from formal epistemology to the study of the social dimensions of science) and machine learning researchers. The workshop's central topics are:
- A critical reflection on key concepts such as ‘learning’, ‘inference’, ‘explanation’ or ‘understanding’.
- The implications of machine learning for the special sciences, e.g. cognitive science, social science or medicine.
- The ethics of machine learning-driven science, e.g. the moral responsibilities of researchers.
- Social aspects of machine learning-driven science, e.g. the impact of funding structures on research.