Conference paper, 2023

Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent

Abstract

This paper introduces a new model to generate rhythmically relevant non-verbal facial behaviors for virtual agents while they speak. In terms of synchronization with speech and believability, the model's perceived performance is comparable to that of behaviors directly extracted from the data and replayed on a virtual agent. Interestingly, we found that training the model on two different datasets, instead of one, did not necessarily improve its performance; the expressiveness of the people in the dataset and the recording conditions are key factors. We also show that employing an adversarial model, in which fabricated fake examples are introduced during training, increases the perceived synchronization with speech. Videos demonstrating the results, along with the code, are available at: https://github.com/aldelb/non_verbal_facial_animation.
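The paper itself specifies the adversarial setup; purely as an illustration of the general idea, the sketch below shows one common way "fabricated fake" examples can be built for a synchronization-aware discriminator: real motion is paired with mismatched speech, so the discriminator must judge speech/motion alignment rather than motion plausibility alone. The function name and the mismatching strategy (a cyclic shift) are assumptions for this sketch, not the authors' actual method.

```python
import numpy as np

def make_discriminator_batch(speech, motion):
    """Build a labeled batch for an adversarial discriminator.

    Real examples: aligned (speech, motion) pairs, label 1.
    Fabricated fakes: the same motion sequences paired with
    mismatched speech (cyclically shifted by one), label 0.
    """
    n = len(speech)
    fake_speech = np.roll(speech, 1, axis=0)  # guaranteed mismatch for n > 1
    x_speech = np.concatenate([speech, fake_speech])
    x_motion = np.concatenate([motion, motion])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x_speech, x_motion, y

# Toy usage: 4 one-dimensional "speech" and "motion" features.
speech = np.arange(4, dtype=float).reshape(4, 1)
motion = np.arange(4, 8, dtype=float).reshape(4, 1)
x_speech, x_motion, y = make_discriminator_batch(speech, motion)
```

The cyclic shift guarantees that every fake pair is genuinely mismatched, which a random permutation would not (it can map an index to itself).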
Main file: Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent.pdf (919.2 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04206768, version 1 (14-09-2023)

Identifiers

  • HAL Id: hal-04206768, version 1

Cite

Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache. Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. GENEA : GENERATION AND EVALUATION OF NON-VERBAL BEHAVIOUR FOR EMBODIED AGENTS during ICMI ’23 Companion, Oct 2023, Paris, France. ⟨hal-04206768⟩