by Barnaby Crook | Aug 19, 2021 | Reading Group (internal)
1. Feature Visualization
One way to make progress in rendering deep artificial neural networks intelligible is to figure out what is going on inside their hidden layers. ‘Hidden’ in this context just means located between the input and output layers. Drawing...
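As a rough illustration of what the post discusses, the sketch below shows the core of one feature visualization technique (activation maximization): starting from noise, an input is optimized by gradient ascent so that a chosen hidden channel responds strongly to it. The tiny untrained network, layer index, and channel used here are placeholders for illustration, not the models discussed in the post.

```python
import torch
import torch.nn as nn

# Tiny stand-in convolutional network (placeholder; any trained model would do).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)
model.eval()

def visualize_channel(model, layer_idx, channel, steps=200, lr=0.05):
    """Gradient ascent on the input to maximize one hidden channel's mean activation."""
    activation = {}
    def hook(_module, _inputs, output):
        activation["value"] = output
    handle = model[layer_idx].register_forward_hook(hook)

    # Start from random noise and treat the image itself as the optimized parameter.
    image = torch.randn(1, 3, 64, 64, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model(image)
        # Maximize the mean activation of the chosen channel (minimize its negative).
        loss = -activation["value"][0, channel].mean()
        loss.backward()
        optimizer.step()
    handle.remove()
    return image.detach()

# Example: what does channel 5 of the second conv layer respond to?
visualization = visualize_channel(model, layer_idx=2, channel=5)
```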
by Timo Speith | Aug 6, 2021 | Reading Group (internal)
LIME (short for Local Interpretable Model-Agnostic Explanations) is one of the most cited techniques in the Explainable Artificial Intelligence (XAI) debate (around 5.5k citations according to Google Scholar). Indeed, the whole (renewed) XAI...
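To give a flavour of what LIME does, here is a minimal sketch of its core idea for tabular data: perturb the instance, query the black-box model on the perturbations, weight the perturbed points by proximity to the instance, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The sampling scale, kernel, and Ridge surrogate below are simplifying assumptions, not the settings of the official lime package.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_fn, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Local linear surrogate around x (continuous tabular features): LIME's core idea."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise around it.
    perturbed = x + rng.normal(scale=0.5, size=(num_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed points.
    preds = predict_fn(perturbed)
    # 3. Weight each sample by its proximity to x (exponential kernel on L2 distance).
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable (linear) model on the weighted local data set.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, preds, sample_weight=weights)
    # The coefficients are the local feature attributions.
    return surrogate.coef_

# Toy black box: a nonlinear function of three features.
black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]
attributions = explain_instance(black_box, x=np.array([0.1, 1.0, -0.3]))
print(attributions)  # local importance of each feature near x
```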