Abstract
The level of realism that real-time virtual humans have reached in recent years enables their use as an alternative to pictures and videos in the remediation of social cognition deficits. This paper presents the engineering principles and tools used to design facial expressions on virtual humans to display basic emotions. The proposal is based on the Facial Action Coding System (FACS), which makes it possible to represent facial expressions in a straightforward way. The paper then describes how the designed virtual human facial emotions were assessed by healthy people. For that purpose, 204 healthy participants took part in an experiment in which they had to recognize the six basic emotions (each with two levels of intensity) depicted by the virtual humans. The overall accuracy of the emotion identification task was 88.25%, which outperforms most results obtained by other authors using virtual humans and/or pictures. The best-recognized emotions were neutral, happiness, and anger. Particularly striking was the high success rate achieved for disgust, far superior to that of previous studies based on virtual reality. Unlike other works, no significant differences were found between women and men in the recognition of emotions, probably owing to the enhanced dynamism and realism of the designed faces. However, age-related differences were found for some emotions in favor of the younger participants. In addition, higher emotion identification rates were obtained for higher-intensity representations of each emotion, for more dynamic avatars, and for faces shown frontally rather than laterally. The results of the evaluation experiment therefore demonstrate that virtual humans can effectively convey emotions through facial expressions.