Abstract
Previous research has shown that redundant emotional information in faces and voices leads to faster emotional categorization than incongruent emotional information, even when only one modality is attended. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than to interference at earlier (e.g., perceptual) processing stages. In Experiment 1, participants categorized the valence and rated the intensity of happy, sad, angry and neutral unimodal or bimodal face-voice stimuli. They were asked to rate either the facial or the vocal expression and to ignore the emotion expressed in the other modality. Participants responded faster and more accurately to emotionally congruent than to incongruent face-voice pairs in both the Attend Face and the Attend Voice conditions. Moreover, when attending to faces, emotionally congruent bimodal stimuli were processed more efficiently than unimodal visual stimuli. To examine the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects remained significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.
Original language | English |
---|---|
Journal | Acta Psychologica |
Volume | 137 |
Issue number | 1 |
Pages (from-to) | 36-47 |
Number of pages | 12 |
ISSN | 0001-6918 |
DOIs | |
Publication status | Published - May 2011 |
Keywords
- Adult
- Analysis of Variance
- Attention
- Emotions
- Facial Expression
- Female
- Humans
- Male
- Reaction Time
- Social Perception
- Visual Perception