Explainable Artificial Intelligence: Analysis of Methodologies and Applications
- Authors
- Pezzini, María Cecilia; Pons, Claudia Fabiana
- Publication year
- 2025
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- Explainability is essential in healthcare, finance, and security, where black-box models can undermine trust and decisions. Recent advances in eXplainable Artificial Intelligence (XAI) across structured/tabular data, computer vision, and natural language processing are surveyed. Thirty articles (2022–2024) were selected through a structured search with explicit inclusion criteria, and emerging approaches are compared with established techniques such as LIME and SHAP, alongside rule-, logic-, and ontology-based methods. Methods are organized along key dimensions—post-hoc vs. ante-hoc, model-agnostic vs. model-specific, scope, problem type, input data, and output format—and their effectiveness and applicability are evaluated. The review highlights innovations including spatially explainable architectures (e.g., SAMCNet) and entropy-based logic explanations, and identifies persistent challenges in robustness, cross-domain generalization, and deployment. Overall, findings consolidate the evolving XAI landscape and indicate directions toward reproducible techniques that strengthen transparency, accountability, and user trust in AI systems.
- Facultad de Informática
- Subjects
Ciencias Informáticas
artificial intelligence
explainability
explainable artificial intelligence
machine learning
aprendizaje automático
explicabilidad
Inteligencia artificial
inteligencia artificial explicable
- Accessibility level
- open access
- Terms of use
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Repository
- SEDICI (UNLP)
- Institution
- Universidad Nacional de La Plata
- OAI identifier
- oai:sedici.unlp.edu.ar:10915/186930
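The abstract above contrasts emerging approaches with established post-hoc, model-agnostic baselines such as LIME and SHAP. For readers unfamiliar with those baselines, here is a minimal sketch of SHAP-style feature attribution on a tabular classifier; the dataset, the random-forest model, and the version-handling around the returned array shapes are illustrative assumptions, not details taken from the reviewed articles.

```python
# Minimal SHAP attribution sketch (illustrative; dataset and model are
# assumptions, not drawn from the surveyed papers).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Tabular binary-classification setup.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# TreeExplainer computes exact Shapley values for tree ensembles;
# shap.KernelExplainer is the slower, fully model-agnostic fallback.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)

# Depending on the shap version, binary-class output is either a list of
# per-class arrays or a single (samples, features, classes) array.
sv = np.stack(shap_values, axis=-1) if isinstance(shap_values, list) else np.asarray(shap_values)
sv = sv[..., 1] if sv.ndim == 3 else sv  # attributions for the positive class

# Global explanation: rank features by mean absolute attribution.
importance = np.abs(sv).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```

For a fully model-agnostic variant, shap.KernelExplainer (or the unified shap.Explainer interface) can wrap any prediction function at higher computational cost; that trade-off is exactly what the post-hoc vs. ante-hoc and model-agnostic vs. model-specific dimensions in the abstract capture.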
Complete record metadata
| Field | Value |
|---|---|
| id | SEDICI_793e380d148ec555bd839875b1f471f5 |
| OAI identifier | oai:sedici.unlp.edu.ar:10915/186930 |
| Repository | SEDICI (UNLP), repository id 1329 |
| Institution | Universidad Nacional de La Plata |
| URL | http://sedici.unlp.edu.ar/handle/10915/186930 |
| ISSN | 1666-6038 |
| DOI | 10.24215/16666038.25.e07 |
| Publication date | 2025-10 |
| Format | application/pdf, pp. 75-86 |
| Language | English (eng) |
| Rights | Open access; Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0), http://creativecommons.org/licenses/by-nc-sa/4.0/ |
| Contact | alira@sedici.unlp.edu.ar |