FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning

Authors
Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec
Publication year
2023
Language
English
Resource type
Article
Status
Published version
Description
The performance of Computer-Aided Diagnosis (CAD) systems for retinal diseases depends on the quality of the retinal images being screened. Many studies have therefore been developed to evaluate and assess the quality of such retinal images. However, most of them did not investigate the relationship between the accuracy of the developed models and the quality of the visualizations produced by interpretability methods for distinguishing between gradable and non-gradable retinal images. Consequently, this paper presents a novel framework called "FGR-Net" to automatically assess and interpret underlying fundus image quality by merging an autoencoder network with a classifier network. The FGR-Net model also provides an interpretable quality assessment through visualizations. In particular, FGR-Net uses a deep autoencoder to reconstruct the input image, extracting the visual characteristics of the input fundus images through self-supervised learning. The features extracted by the autoencoder are then fed into a deep classifier network to distinguish between gradable and ungradable fundus images. FGR-Net is evaluated with different interpretability methods, which indicate that the autoencoder is a key factor in forcing the classifier to focus on the relevant structures of the fundus images, such as the fovea, optic disk, and prominent blood vessels. Additionally, the interpretability methods can provide visual feedback for ophthalmologists to understand how our model evaluates the quality of fundus images. The experimental results showed the superiority of FGR-Net over state-of-the-art quality assessment methods, with an accuracy of > 89% and an F1-score of > 87%. The code is publicly available at https://github.com/saifalkh/FGR-Net.
Instituto de Investigación en Informática
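The abstract above describes a two-part architecture: a deep autoencoder trained by self-supervised reconstruction, whose encoder features are then fed to a classifier that labels images as gradable or ungradable. The following is a minimal numpy sketch of that idea, not the authors' implementation; the single-matrix "networks", toy dimensions, and weight initialization here are illustrative placeholders for the deep convolutional networks described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    """Map a flattened image to a latent feature vector."""
    return np.tanh(x @ W_enc)

def decoder(z, W_dec):
    """Reconstruct the image from the latent features."""
    return z @ W_dec

def classifier(z, w_clf):
    """Score the latent features; sigmoid > 0.5 means 'gradable'."""
    return 1.0 / (1.0 + np.exp(-(z @ w_clf)))

# Toy dimensions: a 64-pixel "image" and an 8-dimensional latent space.
d_in, d_lat = 64, 8
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))
w_clf = rng.normal(scale=0.1, size=d_lat)

x = rng.normal(size=d_in)                 # one flattened fundus image
z = encoder(x, W_enc)                     # self-supervised features
x_rec = decoder(z, W_dec)                 # reconstruction of the input
recon_loss = np.mean((x - x_rec) ** 2)    # autoencoder training signal
p_gradable = classifier(z, w_clf)         # classifier head on the same features
```

The key design point mirrored here is that the classifier never sees raw pixels: it operates only on the latent features the autoencoder was forced to learn for reconstruction, which is what the paper credits for directing attention to clinically relevant structures.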
Subject
Computer Science
Retinal image
Quality assessment
Autoencoder network
Ocular diseases
Deep learning
Interpretability
Explainability
Gradability
Access level
open access
Terms of use
http://creativecommons.org/licenses/by-nc-nd/4.0/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI Identifier
oai:sedici.unlp.edu.ar:10915/160108

URL
http://sedici.unlp.edu.ar/handle/10915/160108
ISSN
0957-4174
DOI
10.1016/j.eswa.2023.121644
License
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Format
application/pdf