Last week, the Bertelsmann Stiftung and the Stiftung Neue Verantwortung published a new position paper, “Wie Algorithmen verständlich werden” (“How algorithms become understandable”). The paper calls for making algorithmic decision-making processes understandable for stakeholders: only if stakeholders comprehend how these systems function can they critically engage with them, contest them where necessary, and use them appropriately. Likewise, the paper demands that decision-making processes be traceable and transparent to those affected by the decisions in question.
The paper points out that there are, as yet, neither concrete ideas nor suggestions for how these goals might be reached: “Wie aber sollten Transparenz und Nachvollziehbarkeit von Algorithmen umgesetzt werden? Hier fehlt es bislang an konkreten Ideen und Vorschlägen.” (“But how should transparency and traceability of algorithms be implemented? So far, concrete ideas and suggestions are lacking.”, page 35). We could not agree with the authors more. Fortunately, our EIS team is already working on solutions! We firmly believe that the right kind of explanation depends on the characteristics of the given situation: different explanations deliver transparency, traceability, and general understandability of the system in question (and the decisions it proposes) to users, stakeholders, and affected people in different contexts. The position paper by the Bertelsmann Stiftung and the Stiftung Neue Verantwortung underlines the importance of our work and motivates us to push EIS forward. As if that were not enough, Initiative D21 presented its #AlgoMon (“Algorithm Monitoring”) guidelines, which carry a very similar message (including the importance of context sensitivity), last week as well.
Our team of psychologists, legal experts, computer scientists, and philosophers has already proposed a formal approach for ascertaining whether a given system is explainable, and therefore understandable, to its stakeholders (see our recent paper “Explainability as a Non-Functional Requirement”). Furthermore, the psychologists among us are already conducting several studies to investigate exactly how explainability contributes to understanding.