For Otherwise We Don’t Know What They Are Doing: Why AI Systems Need to Be Explainable

Intelligent systems increasingly support or take over human decisions. This development affects all areas of human life: intelligent systems recommend new shows on streaming portals, provide suggestions for online searches, automatically screen thousands of job applicants, predict crime, or assist with medical diagnosis. Such systems support human decision-making and work tasks by filtering and analyzing information, providing […]

Predictive Policing: Andreas Sesing at the Filmhaus Saarbrücken

Software predicting when and where a crime is about to be committed: what may sound like a science-fiction scenario in the style of "Minority Report" has since become reality in cities such as Chicago, London, and Munich. And these days, computers already decide whether we are classified as dangerous. […]

EIS at the 20th Fall Academy of the German Foundation for Law and Computer Science

Kevin Baum and Andreas Sesing will take part in the 20th Fall Academy of the German Foundation for Law and Computer Science (DSRI). Their paper on "Requirements for the Explainability of Machine-aided Decisions" ("Anforderungen an die Erklärbarkeit maschinengestützter Entscheidungen") was accepted. The Fall Academy takes place in Bremen from September 11 to September 14, 2019. […]
