Examining the Social Signals Hypothesis in Face Perception and Recognition

Research output: Contribution to conference › Paper


Research has shown that people are more accurate at recognising faces from moving images than from static ones. Three experiments were conducted to examine the validity of the social signals hypothesis, which proposes that social cues carried in movement (e.g. expression and speech) benefit face recognition by attracting attention to identity-specific facial features such as the eyes, nose and mouth. Participants (N=51) completed three tasks while their eye movements were recorded: an unfamiliar face learning task, a familiar (famous) face recognition task and a face-matching task. In the famous and learning tasks, participants were more accurate when recognising faces from motion, relative to a static presentation. However, motion did not affect performance in the matching task. Across all three tasks, participants directed a significantly higher proportion of dwell time and fixations to the internal features (eyes, nose and mouth) when faces were presented in motion, relative to static. Conversely, the proportion of time and fixations directed to the external features (cheeks, chin and hair) was significantly higher during the presentation of static faces. These results support the social signals hypothesis by demonstrating that social cues present in facial motion attract attention to identity-specific features, facilitating identity processing.
Original language: English
Publication status: Published - 2 Jul 2020
Event: Experimental Psychology Society Meeting 2020 - Online
Duration: 2 Jul 2020 – 2 Jul 2020


Conference: Experimental Psychology Society Meeting 2020

