Measuring (in)variances in Convolutional Networks
- Authors
- Quiroga, Facundo; Torrents-Barrena, Jordina; Lanzarini, Laura Cristina; Puig, Domenec
- Year of publication
- 2019
- Language
- English
- Resource type
- conference paper
- Status
- published version
- Description
- Convolutional neural networks (CNNs) offer state-of-the-art performance in various computer vision tasks such as activity recognition, face detection, and medical image analysis. Many of these tasks require invariance to image transformations (e.g., rotations, translations, or scaling). This work proposes a versatile, straightforward, and interpretable measure to quantify the (in)variance of CNN activations with respect to transformations of the input. Intermediate output values of feature maps and fully connected layers are also analyzed with respect to different input transformations. The technique is applicable to any type of neural network and/or transformation. We validate the technique on rotation transformations and use it to compare the relative (in)variance of several networks. More specifically, ResNet, AllConvolutional, and VGG architectures were trained on the CIFAR10 and MNIST databases with and without rotational data augmentation. Experiments reveal that the rotation (in)variance of CNN outputs is class conditional. A distribution analysis also shows that the lower layers are the most invariant, which appears to contradict previous guidelines that recommend placing invariances near the network output and equivariances near the input.
Instituto de Investigación en Informática
- Subject
- Computer Science (Ciencias Informáticas); transformation invariance; rotation invariance; Neural networks; variance measure; MNIST dataset; CIFAR10 dataset; Residual Network; VGG Network; AllConvolutional Network
- Access level
- open access
- Terms of use
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0): http://creativecommons.org/licenses/by-nc-sa/4.0/
- Repository
- SEDICI (UNLP)
- Institution
- Universidad Nacional de La Plata
- OAI Identifier
- oai:sedici.unlp.edu.ar:10915/80387
- Handle
- http://sedici.unlp.edu.ar/handle/10915/80387
- ISBN
- 978-3-030-27713-0
- DOI
- 10.1007/978-3-030-27713-0
- Format
- application/pdf
- Pages
- 98-109
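
The abstract above describes a measure that quantifies how much a network's activations change when its input is transformed. The snippet below is a minimal illustrative sketch, not the paper's exact formulation: it assumes the activations of a layer have already been collected into an array indexed by sample, transformation (e.g., a set of rotations), and feature, and it scores each feature by the ratio of its variance across transformations to its variance across samples. The function name, array layout, and normalization are assumptions made for this sketch.

```python
# Minimal sketch (NumPy only) of a transformation-(in)variance score for one layer.
# Assumption: activations are pre-collected into an array of shape
# (n_samples, n_transforms, n_features); the score normalizes the variance
# across transformations by the variance across samples. Low scores suggest
# the feature is invariant to the transformation set.
import numpy as np


def variance_score(acts: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """acts: activations with shape (n_samples, n_transforms, n_features)."""
    # Variance over the transformation axis, averaged over samples:
    # how much each feature changes when only the input transformation changes.
    transform_var = acts.var(axis=1).mean(axis=0)   # shape (n_features,)
    # Variance over the sample axis, averaged over transformations:
    # a scale reference so features with a large dynamic range are not
    # flagged as "variant" merely because of their magnitude.
    sample_var = acts.var(axis=0).mean(axis=0)      # shape (n_features,)
    return transform_var / (sample_var + eps)


if __name__ == "__main__":
    # Random data standing in for real CNN activations collected over a set
    # of rotated versions of each input (e.g., one value per channel).
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(128, 16, 64))   # 128 samples, 16 rotations, 64 features
    scores = variance_score(acts)
    print(scores.shape, float(scores.mean()))
```

Under these assumptions, a score near zero suggests a feature is approximately invariant to the transformation set, while a score around one or larger suggests it changes as much under transformations of a single input as it does across different inputs.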