Rule Extraction in Trained Feedforward Deep Neural Networks: Integrating Cosine Similarity and Logic for Explainability
- Authors
- Negro, Pablo Ariel; Pons, Claudia Fabiana
- Year of publication
- 2024
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- Explainability is a key aspect of machine learning, necessary for ensuring transparency and trust in decision-making processes. As machine learning models become more complex, the integration of neural and symbolic approaches has emerged as a promising solution to the explainability problem. One effective solution involves using search techniques to extract rules from trained deep neural networks by examining weight and bias values and calculating their correlation with outputs. This article proposes incorporating cosine similarity in this process to narrow down the search space and identify the critical path connecting inputs to final results. Additionally, the integration of first-order logic (FOL) is suggested to provide a more comprehensive and interpretable understanding of the decision-making process. By leveraging cosine similarity and FOL, an innovative algorithm capable of extracting and explaining rule patterns learned by a trained feedforward neural network was developed and tested in two use cases, demonstrating its effectiveness in providing insights into model behavior.
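The abstract describes scoring elements of a trained network by their cosine similarity with the outputs in order to narrow the search space down to a critical path. The following is only an illustrative sketch of that general idea, not the authors' published algorithm: it scores each hidden neuron of a toy feedforward network by the cosine similarity between its activation profile over a probe set and the output profile, then keeps the best-aligned neurons as a candidate "critical path". The toy network, probe data, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained feedforward network (assumption: in the
# article these weights would come from an actual trained model).
W1 = rng.normal(size=(4, 3))   # input (3) -> hidden (4)
W2 = rng.normal(size=(1, 4))   # hidden (4) -> output (1)

def forward(X):
    """ReLU hidden layer followed by a linear output."""
    H = np.maximum(0.0, X @ W1.T)   # hidden activations, shape (n, 4)
    y = (H @ W2.T).ravel()          # output values, shape (n,)
    return H, y

def cosine(u, v):
    """Cosine similarity; defined as 0 when either vector is zero."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0.0 or nv == 0.0 else float(u @ v / (nu * nv))

# Probe the network on a batch of inputs.
X = rng.normal(size=(64, 3))
H, y = forward(X)

# Score each hidden neuron by how well its activation profile over
# the probe set aligns with the output profile, then keep the top 2
# as the candidate critical path.
scores = [cosine(H[:, j], y) for j in range(H.shape[1])]
critical = sorted(range(len(scores)), key=lambda j: -abs(scores[j]))[:2]
```

Ranking by absolute similarity treats strongly anti-aligned neurons as equally informative; the retained neurons would then be the ones over which a rule-extraction search is run.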
- Subject
-
Ciencias de la Computación e Información
Artificial Intelligence
Black Box Models
Cosine Similarity
Deep Learning
Distance Function
Entropy
Explainability
Feedforward Neural Network
Logic
Regularization
Rule Extraction
- Access level
- open access
- Terms of use
- http://creativecommons.org/licenses/by/4.0/
- Repository
- CIC Digital (CICBA)
- Institution
- Comisión de Investigaciones Científicas de la Provincia de Buenos Aires
- OAI identifier
- oai:digital.cic.gba.gob.ar:11746/12471
Full record metadata
id | CICBA_a026cffceac62214998aa6b2d253b6f5
oai_identifier_str | oai:digital.cic.gba.gob.ar:11746/12471
network_acronym_str | CICBA
repository_id_str | 9441
network_name_str | CIC Digital (CICBA)
dc.title | Rule Extraction in Trained Feedforward Deep Neural Networks: Integrating Cosine Similarity and Logic for Explainability
dc.creator | Negro, Pablo Ariel; Pons, Claudia Fabiana
dc.subject | Ciencias de la Computación e Información; Artificial Intelligence; Black Box Models; Cosine Similarity; Deep Learning; Distance Function; Entropy; Explainability; Feedforward Neural Network; Logic; Regularization; Rule Extraction
dc.date | 2024-12
dc.type | info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion; http://purl.org/coar/resource_type/c_6501; info:ar-repo/semantics/articulo
dc.identifier | https://digital.cic.gba.gob.ar/handle/11746/12471
dc.language | eng
dc.relation | info:eu-repo/semantics/altIdentifier/issn/2642-1585; info:eu-repo/semantics/altIdentifier/doi/10.4018/IJAIML.347988
dc.rights | info:eu-repo/semantics/openAccess; http://creativecommons.org/licenses/by/4.0/
dc.format | application/pdf
dc.source | reponame:CIC Digital (CICBA); instname:Comisión de Investigaciones Científicas de la Provincia de Buenos Aires; instacron:CICBA
repository.name | CIC Digital (CICBA) - Comisión de Investigaciones Científicas de la Provincia de Buenos Aires
repository.mail | marisa.degiusti@sedici.unlp.edu.ar