TY - JOUR
T1 - Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots
AU - Tsiourti, Christiana
AU - Weiss, Astrid
AU - Wac, Katarzyna
AU - Vincze, Markus
PY - 2019/8/1
Y1 - 2019/8/1
AB - Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human–robot interaction (HRI) scenarios depends on their ability to convey well-recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human–computer interaction, and HRI to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot's reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the robot's emotional expressions. This, in turn, gave the impression that the robot was unintelligent or unable to express "empathic" behaviour and had profoundly harmful effects on likability and believability. Our findings reinforce the need for careful design of emotional expressions for robots that use multiple channels to communicate their emotional states clearly and effectively. We offer recommendations regarding design choices and discuss future research directions in multimodal HRI.
UR - http://www.mendeley.com/research/multimodal-integration-emotional-signals-voice-body-context-effects-incongruence-emotion-recognition
DO - 10.1007/s12369-019-00524-z
M3 - Journal article
SN - 1875-4791
JO - International Journal of Social Robotics
JF - International Journal of Social Robotics
ER -