Face Capture 101

When you think of motion capture, the first image that typically comes to mind is an actor in a lycra suit moving around a sound stage while a computer records their movements. While that is a key part of the process, it leaves out an increasingly common aspect of motion capture: face capture.

As the name suggests, it's the process of capturing an actor's facial expressions and mapping those movements onto a digital character. In the early days of motion capture, the technology to capture facial expressions didn't exist. The first iteration of Gollum in The Lord of the Rings: The Two Towers involved a team of VFX artists animating facial expressions manually, separate from the motion capture data gathered from Andy Serkis' body movements.

The first iterations of facial capture focused on taking an actor's likeness rather than digitally translating expressions. The process involved a 3D scanner circling a motionless actor (or at least one aiming to be as still as possible) and mapping out their 3D geometry. These scans then became digital assets that VFX teams could manipulate and transform. An early adopter of this technology was James Cameron, who first used it in The Abyss and Terminator 2: Judgment Day.

The next step in the evolution of facial capture technology was the introduction of retroreflective markers. Placed over the muscle groups of an actor's face and tracked by an assortment of cameras surrounding the performer, these markers would drive the 'digital mask' of a character. This development meant that facial and body acting were no longer separate and could instead be recorded simultaneously, resulting in a more nuanced, connected performance. An example of this type of motion capture can be seen in Peter Jackson's King Kong, in which Andy Serkis portrayed the titular character.
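One common way tracked markers can drive a 'digital mask' is by solving for blendshape weights: the character rig defines a neutral face plus a set of expression offsets (e.g. "smile", "jaw open"), and each captured frame is decomposed into a weighted mix of those offsets. The sketch below is a hypothetical toy illustration of that idea, not Animatrik's actual pipeline; the marker layout, blendshape names, and least-squares solve are all assumptions for demonstration.

```python
import numpy as np

# Toy example: 4 facial markers, each a 3D position.
# A rig is assumed to define a neutral pose plus per-marker
# offsets for each blendshape (names here are illustrative).
neutral = np.zeros((4, 3))
smile = np.array([[0.0, 0.1, 0.0],   # mouth corners rise
                  [0.0, 0.1, 0.0],
                  [0.05, 0.0, 0.0],
                  [-0.05, 0.0, 0.0]])
jaw_open = np.array([[0.0, -0.2, 0.0],  # chin drops
                     [0.0, 0.0, 0.0],
                     [0.0, -0.1, 0.0],
                     [0.0, -0.1, 0.0]])

# Basis matrix: one column of flattened offsets per blendshape.
B = np.stack([smile.ravel(), jaw_open.ravel()], axis=1)  # shape (12, 2)

# A captured frame: markers displaced by 70% smile + 30% jaw-open.
captured = neutral + 0.7 * smile + 0.3 * jaw_open

# Solve for the blendshape weights that best explain the marker motion.
delta = (captured - neutral).ravel()
weights, *_ = np.linalg.lstsq(B, delta, rcond=None)
print(weights.round(2))  # recovers [0.7, 0.3]
```

Real systems track dozens of markers at high frame rates and constrain the weights (e.g. clamping them to [0, 1]), but the core retargeting step is the same: express each frame of facial motion in the character rig's own expression space.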

Though a groundbreaking step, it still had its limitations. Actors were restricted by the positions of the cameras, and the markers were often uncomfortable to wear. Early tracking systems were also far less sensitive than today's, meaning an actor often had to exaggerate a performance for it to be picked up at all. As the technology evolved, makeup eventually took the place of physical markers, improving the experience for performers significantly. This is still a widely used technique, and one that Animatrik used during the mocap sessions for Star Wars: The Last Jedi. Billie Eilish also posted about performing with this technology while shooting her concert film Happier Than Ever: A Love Letter to Los Angeles in 2021.

We then began to see the introduction of markerless facial capture technology in the form of head-mounted cameras. These kept the actor's face in frame no matter which way they moved, meaning they were untethered and could perform more naturally. Head-mounted cameras were first introduced during pre-production on James Cameron's Avatar. The cameras would capture every nuance of an actor's facial performance, analyzing how their muscles were moving and mapping this onto a digital character.

4D reconstruction technology in Power Rangers (2017)

There have been multiple iterations and developments of head-mounted cameras, from upgrades in image quality (standard-definition video to HD and 2K) to lighter rigs with greater clearance between the performer and the camera.

Today's cutting-edge technology offers end-to-end reconstruction pipelines that capture all the subtle movements of an actor's performance, enabling digital characters to be driven by photorealistic facial animation. 4D facial capture and processing technology is becoming popular with AAA game developers and film studios for bringing digital characters to life.

As we enter the age of digital doubles, the demand for high-fidelity facial capture keeps growing. Whether it's making the actor more comfortable with the gear itself or ensuring the data translation process is as smooth as possible, each development in facial capture comes down to making a digital character feel not just more realistic but more alive. Face capture superimposes human emotions onto a digital character, helping an audience maintain their suspension of disbelief and deepening immersion, be that in a game, a blockbuster, or a virtual concert in the metaverse.

To find out more about Animatrik's motion capture services, click here.