Abstract
The details of facial expressions can effectively improve the realism of virtual animated characters. However, many expression generation algorithms suffer from problems such as inaccurate facial feature recognition. Therefore, building on virtual reality technology, this study proposes a real-time facial expression simulation model for animated characters that integrates an improved multi-task cascaded convolutional network (MTCNN), and conducts experiments to verify the model's effectiveness and superiority. The experimental results showed that the lowest normalized mean error for feature point recognition on the datasets was only 2.48, and the accuracy of facial action unit extraction was consistently above 95%, with the highest average across datasets reaching 97.68%, together with high intraclass correlation coefficients and low mean absolute errors. In addition, the rigid pose estimation algorithm that introduced head-eye coordination achieved the smallest mean absolute gaze estimation error, only 4.98°. Finally, the model simulated normal and exaggerated expressions in real time, with average frame rates of 54 and 38 frames per second and matching degrees of 96.48% and 92.74%, respectively. Overall, this research can advance virtual reality technology, which is significant for the metaverse, for enriching virtual interaction experiences, and for enhancing the realism of animated characters.
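The full evaluation code is not included with the abstract. For reference, the sketch below shows one conventional way the normalized mean error (NME) reported above is computed for facial landmarks. The function name, the 5-point MTCNN-style landmark layout, and the inter-ocular normalization are illustrative assumptions, not the authors' implementation; some benchmarks normalize by face bounding-box size instead.

```python
import numpy as np

def normalized_mean_error(pred, gt, left_eye_idx, right_eye_idx):
    """Normalized mean error (NME) for one face's landmarks.

    pred, gt: arrays of shape (N, 2) with predicted and ground-truth
    (x, y) landmark coordinates. Per-point Euclidean errors are
    normalized by the inter-ocular distance, a common convention.
    """
    errors = np.linalg.norm(pred - gt, axis=1)                        # per-landmark error
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return errors.mean() / inter_ocular * 100                         # reported as a percentage

# Hypothetical usage with 5 MTCNN-style landmarks
# (left eye, right eye, nose, left mouth corner, right mouth corner):
gt = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0],
               [35.0, 80.0], [65.0, 80.0]])
pred = gt + np.random.default_rng(0).normal(0.0, 1.0, gt.shape)       # simulated predictions
print(f"NME: {normalized_mean_error(pred, gt, 0, 1):.2f}%")
```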
