VISUAL MODELS EXPLANATION OF DEEP NEURAL NETWORKS FOR FMRI NEUROIMAGING DATA
THESIS PROPOSAL DEFENSE – Graduate Program in Computer Science
STUDENT: Laura Angélica Tomaz da Silva
ADVISOR: Dr. Duncan Dubugras Alcoba Ruiz
CO-ADVISOR: Dr. Felipe Rech Meneguzzi (PUCRS)
EXAMINATION COMMITTEE: Dr. Alexandre Rosa Franco (NKI), Dr. Marcio Sarroglia Pinho (PUCRS)
DATE: October 28, 2021
LOCATION: Videoconference
TIME: 14:00
Link to access the videoconference
Meeting ID: 918 5807 2801
Passcode: 177465
ABSTRACT:
Dyslexia is a developmental disorder affecting several specific learning skills, most commonly reading. It is a complex learning difficulty characterized by significant impairment in the development of reading skills that is unrelated to problems with visual acuity, schooling, or overall mental health. Recent research has sought to identify biomarkers of dyslexia using various forms of neuroimaging, to serve as tools for diagnosing dyslexia and other learning disorders. While deep neural network models provide ever-improving tools for classification tasks in most medical areas for which data is available, high classification performance alone offers medical researchers virtually no insight into the underlying conditions. Explainable artificial intelligence attempts to explain why and how these models produce their predictions. Post-hoc explanation is one of the research lines in explainable deep neural networks: post-hoc techniques produce explanations for existing, accurate models without modifying them. Currently, post-hoc explanations excel at explaining only a single prediction, and often fail to provide both an understanding of the entire model and a metric to assess the accuracy of the generated explanations. This thesis proposal aims to address this gap in post-hoc explanation research. We intend to investigate how post-hoc explanation approaches are employed in the medical domain. Building on this investigation, we intend to create a novel post-hoc explanation technique capable of generalizing explanations to a global scope in the medical domain. Finally, we plan to introduce novel metrics to evaluate our post-hoc explanations at the global scope.
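To make the idea of a single-prediction post-hoc explanation concrete, the following is a minimal sketch of gradient-based saliency: the attribution of each input feature is the magnitude of the output's gradient with respect to that feature. The toy two-layer network, its random weights, and the input vector (standing in for, e.g., a flattened vector of voxel values) are all illustrative assumptions, not the model or technique proposed in the thesis.

```python
import numpy as np

def relu(z):
    """Elementwise rectified linear unit."""
    return np.maximum(z, 0.0)

def saliency(x, W1, b1, W2):
    """Post-hoc saliency for a scalar output y = W2 @ relu(W1 @ x + b1).

    Returns |dy/dx|, computed analytically for this tiny network:
    dy/da1 = W2,  da1/dz1 = 1[z1 > 0],  dz1/dx = W1.
    """
    z1 = W1 @ x + b1
    grad = (W2 * (z1 > 0)) @ W1  # chain rule through the ReLU layer
    return np.abs(grad)

# Hypothetical toy data: 4 input features, 3 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))
b1 = rng.normal(size=3)
W2 = rng.normal(size=3)

s = saliency(x, W1, b1, W2)
print(s)  # one non-negative attribution score per input feature
```

The key property illustrated is that the explanation is local: the gradient, and hence the attribution, is specific to the single input `x`, which is precisely why such techniques do not directly yield a global understanding of the model.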