Abstract: Social skills are important throughout human life. Therefore, systems that can synthesize emotions, for example virtual characters (avatars) and robotic platforms, are gaining special attention in the literature. In particular, such systems may be important tools for promoting social and emotional competences in children (or adults) with communication/interaction impairments. The present paper proposes a mirroring emotion system that uses the recent Intel RealSense 3D sensor together with a humanoid robot. The system extracts the user's facial Action Units (AUs) and head motion data and sends this information to the robot, enabling on-line imitation. The first tests were conducted in a laboratory environment using the FaceReader software in order to verify the system's correct functioning. Next, a perceptual study was performed to assess the similarity between the expressions of a performer and those of the robot, using a quiz distributed to 59 respondents. Finally, the system was evaluated with typically developing children aged 6 to 9 years, with the robot mimicking the children's emotional facial expressions. The results indicate that the proposed system can accurately map a user's facial expressions onto the robot on-line.
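The abstract describes a per-frame pipeline (AU and head-pose extraction followed by on-line imitation on the robot). As a rough illustration only, the sketch below shows what such a mirroring loop could look like; every name in it (read_frame, RobotFace, the AU keys, the actuator names, the joint ranges) is a hypothetical placeholder and not the authors' implementation or the Intel RealSense SDK API.

```python
# Illustrative sketch of the mirroring loop: per frame, facial Action Unit (AU)
# intensities and head pose are read from the 3D sensor and remapped to robot
# actuator commands. All names here are placeholders, not the paper's code.
import time
from typing import Dict, Tuple

def read_frame() -> Tuple[Dict[str, float], Tuple[float, float, float]]:
    """Placeholder for the sensor read-out: AU intensities in [0, 1] and head
    pose (yaw, pitch, roll) in degrees. A real system would call the sensor SDK."""
    return {"AU12_lip_corner_puller": 0.8, "AU4_brow_lowerer": 0.1}, (5.0, -2.0, 0.0)

class RobotFace:
    """Placeholder robot interface; a real system would send servo/motor commands."""
    def set_actuator(self, name: str, value: float) -> None:
        print(f"{name} -> {value:.2f}")

def au_to_actuators(aus: Dict[str, float]) -> Dict[str, float]:
    """Linearly remap selected AU intensities onto hypothetical facial actuators."""
    return {
        "mouth_smile": aus.get("AU12_lip_corner_puller", 0.0),
        "brow_frown": aus.get("AU4_brow_lowerer", 0.0),
    }

def mirror_loop(robot: RobotFace, frames: int = 3, rate_hz: float = 30.0) -> None:
    """On-line imitation: stream AU and head-pose data to the robot each frame."""
    for _ in range(frames):
        aus, (yaw, pitch, _roll) = read_frame()
        for actuator, value in au_to_actuators(aus).items():
            robot.set_actuator(actuator, value)
        robot.set_actuator("neck_yaw", yaw / 90.0)     # normalize degrees to [-1, 1]
        robot.set_actuator("neck_pitch", _roll and 0.0 or 0.0 + (-2.0 / 90.0) if False else (aus and 0.0) or 0.0)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    mirror_loop(RobotFace())
```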