Quantitative Evaluation of White & Black Box Interpretability Methods for Image Classification

Authors
Stanchi, Oscar Agustín; Ronchetti, Franco; Dal Bianco, Pedro Alejandro; Ríos, Gastón Gustavo; Hasperué, Waldo; Puig Valls, Domenec; Rashwan, Hatem; Quiroga, Facundo Manuel
Publication year
2024
Language
English
Resource type
conference paper
Status
published version
Description
The field of interpretability in Deep Learning faces significant challenges due to the lack of standard metrics for systematically evaluating and comparing interpretability methods. The absence of quantifiable measures impedes practitioners' ability to select the most suitable methods and models for their specific tasks. To address this issue, we propose the Pixel Erosion and Dilation Score, a novel metric designed to assess the robustness of model explanations. Our approach applies iterative erosion and dilation to the heatmaps generated by various interpretability methods, using them to hide and reveal the important regions of an image to the network, which allows for a coherent and interpretable evaluation of model decision-making processes. We conduct quantitative ablation tests using our metric on the ImageNet dataset with both VGG16 and ResNet18 models. The results show that our new measure provides a numerical and intuitive means of comparing interpretability methods and models, facilitating more informed decision-making for practitioners.
Red de Universidades con Carreras en Informática
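The abstract describes an algorithmic procedure: morphologically eroding and dilating heatmap-derived masks and measuring how the classifier reacts as salient regions are hidden or shown. The sketch below is a minimal illustration of that idea, not the authors' reference implementation; the helper names (`predict_fn`, `threshold`, `baseline_value`) and the exact masking convention are assumptions introduced here for the example.

```python
# Illustrative sketch of an erosion/dilation-based ablation score.
# Assumptions: `image` is an (H, W, 3) array, `heatmap` an (H, W) attribution
# map, and `predict_fn` returns a probability vector for a single image.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def erosion_dilation_curves(image, heatmap, predict_fn, target_class,
                            threshold=0.5, steps=10, baseline_value=0.0):
    """Track classifier confidence while the salient region is iteratively
    hidden (via erosion) or revealed (via dilation)."""
    # Binarize the attribution heatmap into an initial "important pixels" mask.
    mask = heatmap >= threshold * heatmap.max()

    hide_scores, show_scores = [], []
    eroded, dilated = mask.copy(), mask.copy()
    for _ in range(steps):
        eroded = binary_erosion(eroded)
        dilated = binary_dilation(dilated)
        # "Hide": occlude the (shrinking) salient region, keep the background.
        hidden = np.where(eroded[..., None], baseline_value, image)
        # "Show": keep only the (growing) salient region, occlude the background.
        shown = np.where(dilated[..., None], image, baseline_value)
        hide_scores.append(predict_fn(hidden)[target_class])
        show_scores.append(predict_fn(shown)[target_class])
    return np.array(hide_scores), np.array(show_scores)
```

For the models mentioned in the abstract, `predict_fn` could wrap a pretrained VGG16 or ResNet18 forward pass followed by a softmax over a single preprocessed image; the resulting confidence curves can then be summarized (e.g., by their area) to compare interpretability methods.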
Subject
Ciencias Informáticas
Ablation
Black Box
Computer Vision
Deep Learning
Interpretability
Quantitative Measure
White Box
Access level
open access
Terms of use
http://creativecommons.org/licenses/by-nc-sa/4.0/
Repository
SEDICI (UNLP)
Institution
Universidad Nacional de La Plata
OAI Identifier
oai:sedici.unlp.edu.ar:10915/176288

Publication date
2024-10
URL
http://sedici.unlp.edu.ar/handle/10915/176288
ISBN
978-950-34-2428-5
Related resource
info:eu-repo/semantics/reference/hdl/10915/172755
Format
application/pdf
Pages
125-134