Motion Disabled:Faces

    Research output: Non-textual form › Digital or Visual Products


    Motion Disabled:Faces was an experimental time-based installation by artist Simon McKeown that explored perceptions of the human face and voice using audio and facial motion capture animation.

    The work featured five disabled actors, whose words, poetry, and emotions such as joy and sadness were digitally mapped and manipulated to explore difference and belonging. The process used techniques normally associated with feature films and computer games to create an immersive three-screen experience. The project was completed using OptiTrack's Expression facial motion capture tool to capture face movement data, linked directly to the 3D animation system Softimage and its proprietary tool Face Robot.

    For the project, McKeown undertook computer graphics research into synthetic character development. He researched models of emotion such as the OCEAN model and later the Universal Facial Expressions model (seven emotions: anger, contempt, disgust, fear, joy, sadness, and surprise), first proposed by Darwin in 1872 and later confirmed through the ‘universality studies’ of 1972, as outlined in ‘Facial expression analysis’ by David Matsumoto and Paul Ekman (2008). In addition to the universal emotions, and to develop a comprehensive understanding of face movement, McKeown triangulated Ekman's Facial Action Coding System, an anatomically based structure for describing all observable facial movement for every emotion (Ekman, 1978), with phonemes, the units of sound that distinguish one word from another, and with visemes, the units of speech that look the same when produced, for example when lip-reading and in 3D animation (Osipa, 2003/2007).
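The relationship between phonemes and visemes described above is a many-to-one mapping: several distinct sounds share a single mouth shape, which is why lip-reading is ambiguous and why 3D lip-sync works from viseme sequences rather than phonemes. A minimal illustrative sketch of this idea (the phoneme symbols and viseme names here are hypothetical examples, not the project's actual data):

```python
# Illustrative sketch: visemes group many phonemes into one mouth shape.
# The symbols and viseme labels below are invented for demonstration only.
PHONEME_TO_VISEME = {
    # bilabials /p/, /b/, /m/ all look identical on the lips
    "p": "MBP", "b": "MBP", "m": "MBP",
    # labiodentals /f/, /v/ share a lip-to-teeth shape
    "f": "FV", "v": "FV",
    # open vowels share a wide-open jaw shape
    "aa": "AH", "ae": "AH",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to its viseme (mouth-shape) sequence,
    collapsing consecutive duplicates as a lip-sync animator might."""
    out = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "REST")  # unknown sounds fall back to rest pose
        if not out or out[-1] != v:
            out.append(v)
    return out

print(visemes_for(["m", "aa", "p"]))  # ['MBP', 'AH', 'MBP']
```

The collapsing of consecutive duplicates reflects how an animator keys one mouth shape per visible change rather than per sound.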

    Furthermore, McKeown researched the challenge to believability presented by the perceived negative effect of the ‘Uncanny Valley’, first proposed by Masahiro Mori in 1970. This was later developed into a computer graphics concept predicting that as virtual characters become more realistically human, their presentation crosses a threshold at which viewers notice unsettling imperfections (McDonnell, Breidt, Bülthoff, 2012). Creative responses to the Uncanny Valley, along with state-of-the-art production methods, were researched as exemplified in feature film production (Duncan, 2009 and 2011) and in computer game production such as L.A. Noire, which used the (now retired) MotionScan/Depth Analysis system. McKeown further researched human behavioural characteristics (Morris, 2002), acting methods pertaining to animation (Hooks, 2003; Kundert-Gibbs, 2009), and traditional animation (Williams, 2009).

    Technically, McKeown researched developing methods of production, including 3D laser scanning of faces and bodies, along with methods of motion capture, both marker-based (OptiTrack, VICON, Motion Analysis, etc.) and markerless, pattern-recognition-based video approaches (Image Metrics). Additionally, he researched methods of procedural content generation (Smelik et al., 2010), often used to create environments as well as motion; AI reuse (Dragert, 2012); photogrammetry-derived modelling, in which 3D models are extracted from photographs (Wang, 2013); and automatic body-rig creation, adaptation, and randomisation of body sections (Seo, 2010).
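The randomisation of body sections mentioned above can be pictured as drawing per-section parameters from a seeded random source, so that each generated character is varied but reproducible. A minimal sketch of this idea, assuming a simple scale-factor-per-section scheme (the section names, bounds, and seeding scheme here are hypothetical, not the project's actual pipeline):

```python
# Hypothetical sketch of body-section randomisation in the spirit of
# procedural character generation; not the project's actual tooling.
import random

SECTIONS = ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]

def random_body(seed=None):
    """Return a per-section scale factor within plausible bounds.
    A rigging pipeline would apply these to the corresponding joints."""
    rng = random.Random(seed)  # seeded so a given character is reproducible
    return {s: round(rng.uniform(0.9, 1.1), 3) for s in SECTIONS}

body = random_body(seed=42)
```

Seeding the generator is the key design choice: the same seed always yields the same character, which lets a procedural system regenerate any individual on demand instead of storing every model.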
    Original language: English
    Media of output: Film
    Publication status: Published - 2013


