A comprehensive system for facial animation of generic 3D head models driven by speech
- Authors
- Terissi, Lucas Daniel; Cerda, Mauricio; Gómez, Juan Carlos; Hitschfeld-kahler, Nancy; Girau, Bernard
- Publication year
- 2013
- Language
- English
- Resource type
- article
- Status
- published version
- Description
- A comprehensive system for facial animation of generic 3D head models driven by speech is presented in this article. In the training stage, audio-visual features are extracted from audio-visual training data and used to compute the parameters of a single joint audio-visual hidden Markov model (AV-HMM). In contrast to most methods in the literature, the proposed approach does not require segmentation or classification stages for the audio-visual data, avoiding the error propagation associated with those procedures. The trained AV-HMM provides a compact representation of the audio-visual data without the need for phoneme (or word) segmentation, which makes it adaptable to different languages. Visual features are estimated from the speech signal by inverting the AV-HMM. The estimated visual speech features are used to animate a simple face model. The animation of a more complex head model is then obtained by automatically mapping the deformation of the simple model onto it, using a small number of control points for the interpolation. The proposed algorithm allows the animation of 3D head models of arbitrary complexity through a simple setup procedure. The resulting animation is evaluated in terms of visual speech intelligibility through perceptual tests, showing promising performance. The computational complexity of the proposed system is analyzed, showing the feasibility of a real-time implementation.
Fil: Terissi, Lucas Daniel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico - CONICET - Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y Sistemas; Argentina;
Fil: Cerda, Mauricio. Universidad Austral de Chile; Chile;
Fil: Gómez, Juan Carlos. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico - CONICET - Rosario. Centro Internacional Franco Argentino de Ciencias de la Información y Sistemas; Argentina;
Fil: Hitschfeld-kahler, Nancy. Universidad de Chile. Departamento de Ciencias de la Computación; Chile;
Fil: Girau, Bernard. Loria - INRIA Nancy Grand Est. Cortex Team. Vandoeuvre-lès-Nancy; France;
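The description above centers on two technical steps: estimating visual speech features from audio by inverting a joint AV-HMM, and transferring the resulting deformation to an arbitrary head mesh via control-point interpolation. The following is a minimal sketch of the first step under common assumptions (one Gaussian observation density per state over concatenated audio-visual features); it is not the authors' code, and every name, dimension, and the causal-filtering choice is an assumption made for illustration.

```python
# Illustrative sketch only (not the authors' implementation): estimating visual
# speech features from audio features by "inverting" a joint audio-visual HMM
# with one Gaussian observation density per state. State posteriors come from a
# causal forward pass over the audio marginals; the visual output is the
# posterior-weighted conditional mean E[v | a, state]. Names, dimensions, and
# the filtering choice are assumptions for this example.
import numpy as np
from scipy.stats import multivariate_normal


def estimate_visual_features(audio, pi, A, means, covs, d_a):
    """audio: (T, d_a) acoustic features per frame.
    pi: (S,) initial state probabilities; A: (S, S) transition matrix.
    means: (S, d_a + d_v) joint audio-visual means.
    covs: (S, d_a + d_v, d_a + d_v) joint covariances.
    Returns (T, d_v) estimated visual features."""
    T, S = audio.shape[0], pi.shape[0]
    d_v = means.shape[1] - d_a

    # p(a_t | state s): audio marginal of each state's joint Gaussian.
    lik = np.empty((T, S))
    for s in range(S):
        lik[:, s] = multivariate_normal.pdf(
            audio, mean=means[s, :d_a], cov=covs[s, :d_a, :d_a])

    # Normalized forward pass -> filtered state posteriors p(s_t | a_1..t).
    gamma = np.empty((T, S))
    gamma[0] = pi * lik[0]
    gamma[0] /= gamma[0].sum()
    for t in range(1, T):
        gamma[t] = (gamma[t - 1] @ A) * lik[t]
        gamma[t] /= gamma[t].sum()

    # Per-state conditional mean of the visual block given the audio block:
    # E[v | a, s] = mu_v + Sigma_va Sigma_aa^{-1} (a - mu_a),
    # then average over states with the posterior weights.
    visual = np.zeros((T, d_v))
    for s in range(S):
        Saa = covs[s, :d_a, :d_a]
        Sva = covs[s, d_a:, :d_a]
        cond = means[s, d_a:] + (audio - means[s, :d_a]) @ np.linalg.solve(Saa, Sva.T)
        visual += gamma[:, [s]] * cond
    return visual
```

A smoothing (forward-backward) or Viterbi pass could replace the causal filter shown here; the system's second stage would then map the deformation driven by these features onto the target head mesh through interpolation over a small set of control points.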
- Subject
Facial Animation
Hidden Markov Models
Audio Visual Speech Processing
- Access level
- open access
- Terms of use
- https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
- Repository
- CONICET Digital (CONICET)
- Institution
- Consejo Nacional de Investigaciones Científicas y Técnicas
- OAI Identifier
- oai:ri.conicet.gov.ar:11336/1438
- Publisher
- Springer
- Publication date
- 2013-02
- Citation
- Terissi, Lucas Daniel; Cerda, Mauricio; Gómez, Juan Carlos; Hitschfeld-kahler, Nancy; Girau, Bernard; A comprehensive system for facial animation of generic 3D head models driven by speech; Springer; EURASIP Journal on Audio, Speech and Music Processing; 2013; 5; 2-2013; 1-37
- ISSN
- 1687-4722
- Identifier (handle)
- http://hdl.handle.net/11336/1438
- Alternative URL
- http://asmp.eurasipjournals.com/content/2013/1/5
- FORD subjects
- https://purl.org/becyt/ford/2.2; https://purl.org/becyt/ford/2
- Format
- application/pdf
- Repository contact
- dasensio@conicet.gov.ar; lcarlino@conicet.gov.ar