Outlook: How motion capture is changing film production
Brett Ineson
Issue: November/December 2022


Motion capture has been a staple within the film industry for many years now. Significant advances in technology over the last two decades have enabled filmmakers to capture and translate an actor’s physical performance to create a huge array of character designs, human or non-human, in CGI form. 

Thanks to motion-capture suits and head-mounted cameras, it’s now possible to capture an entire physical performance, simultaneously recording every nuance of both the body and the face. This results in fully synchronized movements that can be combined with animation, as well as voice acting, for a unified performance.

This improvement has been a game changer for filmmakers, enabling them to streamline the production process and making the technology not only more advanced but also more accessible. Yet every advancement and enhanced capability brings a brand-new challenge to be addressed within the production pipeline.



Acting with performance capture

It cannot be denied that actors and directors are far more familiar with the motion-capture process today than ever before. The technology is now widely used on almost every major blockbuster production, from Planet of the Apes to Avengers: Endgame. Yet, although it may seem as simple as putting a special suit on your actors and saying, ‘Away you go,’ there are many variables in play and therefore many new challenges encountered on every new production.

Motion capture’s advances over the last two decades in both stability and reliability have streamlined this process. Less time is therefore wasted on set; hardware and software are more dependable, handled by crew members who know what they’re doing. As a result, more attention can be given to the physical spaces that actors now inhabit, and particularly how they might inhabit them.

Starting at the very beginning of the filmmaking process, those casting a motion-capture character have had to adapt significantly how they identify the right person for a role. This challenge has been given more focus thanks to the reliability of the technology: more time can now be spent on the performance aspects of the process, minimizing mistakes that could leave the design team with a difficult role to work with later.

Take the roles of the apes in War for the Planet of the Apes, for example. In the past, embodying the characters’ dominant physicality could only be achieved by casting physically imposing actors. However, facial expressions and body language can now be digitally rendered in such detail that filmmakers are less reliant on an actor’s physique, and there’s less need to be overly specific with the casting.


Photo: Terry Notary

Rather than actors’ physical traits, it’s their mocap skills that are now in demand for these roles. Helping to bridge this gap are mocap acting trainers, like the renowned movement coach Terry Notary, with whom we’ve worked across a wide range of projects, including Planet of the Apes, Warcraft and Kong: Skull Island. His role extends to helping less experienced actors better understand the physical demands of bringing digital characters to life, and it has since become an important part of the process.

As it becomes easier to capture performers, more emphasis falls on the actors to deliver bigger and better performances. On-set training roles help achieve consistency across the board, and providing them remains an ever-evolving challenge for motion-capture filmmakers.

Realtime visualization

Motion-capture performances aren’t limited to filmmaking, either. When creating video games, one of the most important aspects of streamlining production is realtime visualization. With motion capture, users can record both facial and body performances for video-game cinematics that are rendered in realtime, in the same game engine as the interactive gameplay.

Many actors approaching the motion-capture process for the first time find it difficult to adapt their style across different disciplines, and accommodating this is a challenge in itself. For The Matrix Awakens: An Unreal Engine 5 Experience, Keanu Reeves and Carrie-Anne Moss both reprised their roles from The Matrix, as well as playing themselves: actors made famous by the success of those iconic characters.



The actors morph almost seamlessly from Reeves and Moss in a movie setting to interactive versions of Neo and Trinity. One of the major challenges for film and game productions is ensuring that actors and production teams can adapt their processes and create characters that translate across multiple disciplines in realtime.

With the help of Epic Games’ powerful Unreal Engine, motion capture crossed disciplines within a game-changing open-world experience, blurring the lines between film and game, with motion capture now creating the cut scenes of that open world. The Matrix Awakens: An Unreal Engine 5 Experience is more than a game; it’s a vision of what the future of interactive content could be, with motion capture creating visually authentic characters.

Virtual production & LED walls 

For production crews, one of the biggest challenges arrives on the motion-capture stage. Bringing physicality to a digital production can be a difficult task, but one that proves vital in getting the authentic movement and expression a filmmaker needs from the performers. 

If your shot requires complex choreography, with multiple actors performing backflips or jumping from great heights, the stage will require the safety equipment, wire rigs and correct scale to make it all possible without compromising the final look of the scene.

To overcome these challenges, productions opt to use virtual production stages and LED walls. Commonly associated with Disney+’s The Mandalorian, LED stages allow filmmakers to capture a significant number of complex visual effects in-camera, using realtime game-engine technology and LED screens that surround the entire production.

To do this, the LED stage needs to work in combination with a motion-capture volume, which is aware of where the camera is at all times and how it is moving. These volumes can be notoriously difficult to set up, so choosing motion-capture technology that is easy to use within the volume is one of the most important decisions a production crew makes.
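At its core, that camera-to-wall handoff is a per-frame transform: the volume reports the physical camera’s pose, and a calibration offset maps it into the engine’s virtual world so the wall renders the correct parallax. A minimal sketch of the idea, with invented names (`Pose`, `to_virtual_camera`) and yaw-only rotation for simplicity:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float    # position in metres, relative to the volume origin
    y: float
    z: float
    yaw: float  # heading in degrees; full 3D rotation omitted for brevity

def to_virtual_camera(tracked: Pose, offset: Pose) -> Pose:
    """Map a pose from the mocap volume into the virtual scene by
    applying a calibration offset measured during stage line-up."""
    return Pose(
        tracked.x + offset.x,
        tracked.y + offset.y,
        tracked.z + offset.z,
        (tracked.yaw + offset.yaw) % 360.0,
    )

# Each frame, the latest tracked camera pose updates the engine's
# virtual camera so the LED wall shows the right background parallax.
calibration = Pose(10.0, 0.0, -2.0, 90.0)
virtual_cam = to_virtual_camera(Pose(1.5, 1.2, 0.0, 275.0), calibration)
```

Real systems solve this with a full rigid-body transform plus lens calibration, but the principle is the same: a reliable tracked pose in, a virtual camera pose out, every frame.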



The result suits the needs of the performers, too. Virtual production stages with LED walls give productions dynamic, photoreal digital landscapes and sets that are live whilst filming, reducing the need for green screen. This can prove immensely beneficial to the actors, who can visualize the world around them whilst their movements are tracked in realtime.

Virtual production brings the physical aspect back to digital production, working in tandem with motion-capture technology to seamlessly capture shots in realtime. But with every new breakthrough, there are emerging challenges. As films grow in scale, so will the physical spaces. Finding the right space for the right scene will become more important, but motion capture is constantly evolving to meet that challenge.

Conclusion

Motion capture has come a long way since the early days of rotoscoping, and even Andy Serkis’s Gollum, a character that audiences traditionally associate with the technology. The processes involved are constantly evolving, yet one aspect will always remain key to success: the cohesive and synchronized relationship between performers, the technology and the crew.

Every new production experience brings increased knowledge, training and awareness of the motion-capture process, which enables both filmmakers and game developers to overcome persistent challenges and test the boundaries of motion capture.

Brett Ineson is the President of Animatrik (www.animatrik.com). Headquartered in Vancouver, with a second location in Los Angeles, Animatrik offers motion capture, previsualization and virtual cinematography services, with credits that include Aquaman, Spider-Man: Homecoming, Gears of War 5, Avengers: Endgame, Ready Player One, X-Men: Dark Phoenix and Rogue One: A Star Wars Story.