
Deep learning techniques are increasingly applied in medicine, including image reconstruction, segmentation, and classification. Despite their strong performance, these models are not easily interpretable by humans. Medical applications in particular require verification that a model's decisions do not result from exploiting data artifacts.

Our experiments on Alzheimer's disease (AD) classification showed that deep neural networks (DNNs) may learn from features introduced by the skull stripping algorithm. We are therefore investigating how preprocessing (registration and brain extraction) determines which and how many features in the MR images are relevant for separating patients from healthy controls.

We develop a relevance-guided approach that minimizes the impact of preprocessing (e.g. skull stripping and registration), rendering it a practically usable and robust method for DNN-based neuroimaging classification studies. Additionally, our relevance-guided approach focuses feature identification on the parenchyma and provides physiologically more plausible results.
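The relevance-guided idea can be illustrated as an augmented training objective: the classification loss is combined with a penalty on relevance that falls outside a brain (parenchyma) mask, steering the network away from skull-stripping artifacts. The following NumPy sketch is an illustrative assumption about the form of such a loss, not the authors' exact formulation; the function name and the weighting parameter `lam` are hypothetical.

```python
import numpy as np

def relevance_guided_loss(ce_loss, heatmap, brain_mask, lam=0.1):
    """Classification loss plus a penalty on relevance outside the mask.

    ce_loss    -- scalar classification loss (e.g. cross-entropy)
    heatmap    -- per-voxel relevance scores, e.g. from LRP
    brain_mask -- 1 inside the parenchyma, 0 outside
    lam        -- weight of the relevance penalty (assumed hyperparameter)
    """
    # Relevance attributed to voxels outside the brain mask
    outside = np.abs(heatmap) * (1.0 - brain_mask)
    return ce_loss + lam * outside.sum()

# Toy example: relevance partly outside the mask raises the loss.
heatmap = np.array([[0.1, 0.8], [0.05, 0.05]])
mask = np.array([[0, 1], [1, 1]])  # top-left voxel is outside the brain
loss = relevance_guided_loss(0.5, heatmap, mask, lam=0.1)
```

With a perfect mask overlap the penalty vanishes and only the classification loss remains, so a network that already attends to the parenchyma is not perturbed.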

 

References

Tinauer et al., ISMRM, 2020: Relevance-guided Deep Learning for Feature Identification in R2* Maps in Alzheimer's Disease Classification

Tinauer et al., ISMRM, 2019: Relevance-guided Feature Extraction for Alzheimer's Disease Classification

 

Heatmapping 

A wonderful resource on explainability in deep learning: http://heatmapping.org.

 

LRP Extension for Keras

https://github.com/christiantinauer/keras-LRP-DTD-layer
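To give a sense of what such an LRP layer computes, here is a minimal NumPy sketch of the LRP epsilon rule for a single dense layer. This is an illustrative reimplementation of the standard rule, not code taken from the repository above; the function name and signature are my own.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out back to the inputs x.

    x     -- (d_in,) input activations
    W     -- (d_in, d_out) weight matrix
    b     -- (d_out,) bias
    R_out -- (d_out,) relevance of the layer's outputs
    eps   -- stabilizer to avoid division by near-zero activations
    """
    z = x @ W + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # epsilon-stabilized denominator
    s = R_out / z                 # relevance per unit of activation
    return x * (W @ s)            # redistribute back to the inputs

x = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0], [0.5, 1.0]])
b = np.zeros(2)
R_out = np.array([1.0, 1.0])
R_in = lrp_epsilon(x, W, b, R_out)
```

With zero bias and a small epsilon, the rule is approximately conservative: the input relevances sum to the output relevance, which is the core property LRP heatmaps rely on.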