Interpretable Deep Neural Networks for EEG-based Auditory Attention Detection with Layer-Wise Relevance Propagation
by Gabriel Ivucic, Felix Putze, Siqi Cai, Haizhou Li, Tanja Schultz
Abstract:
Deep Neural Networks (DNNs) have recently found their way into cognitive neuroscience, serving as powerful computational models. However, the complexity of deep learning models results in an uninterpretable black box, preventing neurophysiological insight into the processes behind the model's decisions. In this work, we present an explanation approach, based on Layer-Wise Relevance Propagation (LRP), for a DNN used in spatial auditory attention detection (AAD) with electroencephalography (EEG). LRP decomposes the prediction of the DNN into relevance heatmaps that represent the importance of the spectro-spatial image features for the decision of the network, as illustrated in Figure 1. To validate the LRP explanation for the DNN, (1) the relation between relevance heatmaps and the output of the network is examined via relevance-guided input perturbation, and (2) structural features and potential prediction strategies in the LRP heatmaps are investigated by spectral clustering of the relevance heatmaps. The results indicate that the explanation heatmaps generated by LRP highlight areas in the cortical activation images that predominantly impact the decision of the network. The clustering approach found distinct, subject-specific patterns in the relevance maps, revealing the importance of neurophysiologically plausible frontal, lateral, and rear brain areas for auditory attention. This work demonstrates that LRP can fill the interpretability gap in the development of DNNs for EEG-based AAD. The relevance heatmaps of single input samples, combined with knowledge of global prediction strategies, make it possible to investigate sample groups of interest at will, establishing LRP as a tool to reveal potential neural or decisional processes underlying the deep learning model.
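To make the two ingredients of the approach concrete, the following minimal NumPy sketch shows LRP with the epsilon rule on a toy dense ReLU classifier, followed by relevance-guided input perturbation. The layer sizes, the two-class (left/right attention) output, and the perturbation_curve helper are illustrative assumptions for this sketch only; the paper's actual network architecture and feature dimensions are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# Toy network: 1024 flattened spectro-spatial features -> 64 hidden -> 2 classes
# (left / right attended speaker). Sizes are arbitrary placeholders.
W1, b1 = rng.normal(scale=0.1, size=(1024, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.1, size=(64, 2)), np.zeros(2)

def forward(x):
    # Returns the inputs of each layer (needed by LRP) and the output logits.
    a0 = x
    a1 = np.maximum(0.0, a0 @ W1 + b1)
    return [a0, a1], a1 @ W2 + b2

def lrp_epsilon(layer_inputs, logits, target, eps=1e-6):
    # Decompose the target logit into relevances of the input features,
    # using the LRP epsilon rule for dense ReLU layers.
    R = np.zeros_like(logits)
    R[target] = logits[target]
    for W, b, a in [(W2, b2, layer_inputs[1]), (W1, b1, layer_inputs[0])]:
        z = a @ W + b
        z = z + eps * np.where(z >= 0.0, 1.0, -1.0)  # stabilise the denominator
        R = a * ((R / z) @ W.T)                      # redistribute relevance downwards
    return R                                         # one relevance score per input feature

def perturbation_curve(x, relevance, target, steps=10):
    # Relevance-guided perturbation: occlude the most relevant features first
    # and track the target logit; a fast decay indicates a faithful explanation.
    order = np.argsort(relevance)[::-1]
    chunk = len(order) // steps
    scores, x_pert = [], x.copy()
    for i in range(steps):
        x_pert[order[i * chunk:(i + 1) * chunk]] = 0.0
        scores.append(forward(x_pert)[1][target])
    return scores

x = rng.normal(size=1024)                            # one flattened input sample
layer_inputs, logits = forward(x)
target = int(np.argmax(logits))
R = lrp_epsilon(layer_inputs, logits, target)
print(perturbation_curve(x, R, target))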
Reference:
Interpretable Deep Neural Networks for EEG-based Auditory Attention Detection with Layer-Wise Relevance Propagation (Gabriel Ivucic, Felix Putze, Siqi Cai, Haizhou Li, Tanja Schultz), In The Third Neuroadaptive Technology Conference, 2022.
Bibtex Entry:
@inproceedings{ivucic2022interpretable,
  title={Interpretable Deep Neural Networks for EEG-based Auditory Attention Detection with Layer-Wise Relevance Propagation},
  author={Ivucic, Gabriel and Putze, Felix and Cai, Siqi and Li, Haizhou and Schultz, Tanja},
  booktitle={The Third Neuroadaptive Technology Conference},
  year={2022},
  address={L{\"u}bbenau, Germany},
  month={October 9--12},
  pages={1--5},
  url={https://neuroadaptive.org/wp-content/uploads/2022/10/NAT22_Programme.pdf},
  abstract={Deep Neural Networks (DNNs) have recently found their way into cognitive neuroscience, serving as powerful computational models. However, the complexity of deep learning models results in an uninterpretable black box, preventing neurophysiological insight into the processes behind the model's decisions. In this work, we present an explanation approach, based on Layer-Wise Relevance Propagation (LRP), for a DNN used in spatial auditory attention detection (AAD) with electroencephalography (EEG). LRP decomposes the prediction of the DNN into relevance heatmaps that represent the importance of the spectro-spatial image features for the decision of the network, as illustrated in Figure 1. To validate the LRP explanation for the DNN, (1) the relation between relevance heatmaps and the output of the network is examined via relevance-guided input perturbation, and (2) structural features and potential prediction strategies in the LRP heatmaps are investigated by spectral clustering of the relevance heatmaps. The results indicate that the explanation heatmaps generated by LRP highlight areas in the cortical activation images that predominantly impact the decision of the network. The clustering approach found distinct, subject-specific patterns in the relevance maps, revealing the importance of neurophysiologically plausible frontal, lateral, and rear brain areas for auditory attention. This work demonstrates that LRP can fill the interpretability gap in the development of DNNs for EEG-based AAD. The relevance heatmaps of single input samples, combined with knowledge of global prediction strategies, make it possible to investigate sample groups of interest at will, establishing LRP as a tool to reveal potential neural or decisional processes underlying the deep learning model.}
}