'Rise of the Planet of the Apes'
Issue: August 1, 2011

LOS ANGELES — For director Rupert Wyatt’s Rise of the Planet of the Apes, the ability to create photorealistic chimps was key, and that challenge fell to Wellington, New Zealand-based Weta Digital under the leadership of visual effects supervisor Joe Letteri.

Letteri and his team are no strangers to such challenges. They were the artists behind the ground-breaking visual effects and technology of James Cameron’s Avatar.

This iteration of the Apes story (a prequel to the original) was going to be different from those that came before. No longer would actors have to endure hours of make-up, and viewers would have an easier time suspending their disbelief: no more “that’s a guy in an ape suit!” Photorealism would bring another layer of believability to the story, which focuses on a father-and-son scientist team who genetically engineer an intelligent chimp named Caesar; Caesar goes on to lead an uprising against mankind.

Weta Digital (www.wetafx.co.nz), which created 1,000 visual effects shots for Rise of the Planet of the Apes, was the only house on the film. According to Letteri, who took time to chat with us during a recent visit to LA, that was because there were so many apes, all of which had to work together; there was just no way to break up the work. “It was all performance driven, so Fox wanted to keep it all with us so there would be consistency of performance throughout the film.”

POST: You used performance capture for the apes to help with the photorealism?

JOE LETTERI: “Yes. With the main character being so much like a human, it just made sense to do it that way. But we didn’t want to do a post process type of thing where the actors do their parts and we put Caesar in later. 

“We built on the performance capture technology we developed for Avatar. We thought, what if we take everything we did for Avatar and have the actor right there on set, so we could capture the performance and the motion would go directly into the shot, right on top of the original performance? That was the next step up from Avatar.”

POST: How else was the technology used differently here than on Avatar?

LETTERI: “In Avatar we did everything on a closed stage: a performance capture volume. Here, we had to take it out onto a film set and out on location, so we had to develop new camera technology and new markers and everything for the mocap systems to get that part working. There were a lot of other advances to get Caesar (played by Andy Serkis) working as an ape — figuring out a better way to do fur than in the past and teeth and eyes and skin and muscle and all the things that go into making a character.”

POST: What were some of the challenges of shooting outside in this way? Sunlight?

LETTERI: “Exactly. There were a couple of issues: One was dealing with sunlight. So rather than using reflective markers, we made active LED markers. Using infrared light, we could distinguish the markers from the sunlight and whatever set lights were being used. That was half the battle. The other part was getting coverage, because if you are shooting in a forest area, you have lots of trees and things in the way.

“We also did a big set for the Golden Gate Bridge, where we built a 300-foot-long section. Then, after the fact, we put it into our digital bridge inside a digital environment. But because we had the apes swarming over the bridge, we had to performance capture all of that and set up rows and rows of motion capture cameras to cover this whole volume. We built little birdhouses to keep them safe in the weather. Just to be able to manage and calibrate that would take a long time every day.”

POST: Would you say the bridge scene was one of the most challenging in the film? 

LETTERI: “Yes, from a purely technical point of view, because it was so vast. You have outdoor lighting, you have dozens of vehicles working on the set, and actors performing amongst those vehicles as chimps, with their performance capture suits on and their arm extensions and everything they needed to work. They are obviously jumping up and down on the cars and running around them; there is lots of action. Then we have to take that set and put it into the Golden Gate Bridge at various parts along the bridge as the storyline progresses, and put that all in a digital environment.”

POST: Were the tools used for the effects and animation based on off-the-shelf technology with proprietary software on top of that?

LETTERI: “Yes and no. We use Autodesk Maya as our base platform, and that really gives us something to hook into. All the tools we developed to do the animation, the rigging, the skinning and all that type of thing are proprietary software.”
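
To give a sense of what that layering can look like in practice, here is a minimal sketch of a rigging helper built on top of Maya’s Python API. It is purely illustrative and not Weta’s code; the chain and control names are hypothetical.

```python
# Hypothetical sketch: a small studio rigging helper layered on Maya's
# Python API (maya.cmds). Names like "caesar_spine" are illustrative only.
import maya.cmds as cmds

def build_fk_chain(name, positions):
    """Create a simple FK joint chain with a NURBS-circle control per joint."""
    cmds.select(clear=True)
    joints = []
    for i, pos in enumerate(positions):
        joints.append(cmds.joint(name="{}_jnt_{}".format(name, i), position=pos))

    controls = []
    for jnt in joints:
        ctrl = cmds.circle(name=jnt.replace("_jnt_", "_ctl_"), normal=(1, 0, 0))[0]
        cmds.delete(cmds.parentConstraint(jnt, ctrl))  # snap the control to the joint
        cmds.orientConstraint(ctrl, jnt)               # the control now drives the joint
        controls.append(ctrl)
    return joints, controls

# Example: a short spine chain for a quadruped-style rig
build_fk_chain("caesar_spine", [(0, 10, 0), (0, 11, 1), (0, 12, 2)])
```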

POST: What about the performance-capture software?

LETTERI: “It was a mix. We used Motion Analysis software, Giant Software from Giant Studios and a mix of software we’ve written.”

POST: Can you talk about capturing the actors’ facial movements?

LETTERI: “We used the same technology we developed for Avatar — the idea of having a single camera mounted on a head rig in front of the actor’s face. So we were able to get that part to work the same way. You get a faithful capture of the actor’s performance, and then because chimps and humans are so alike in their underlying facial structure, the way their muscles and tissues work, we were able to then translate the actor’s performance into the chimp’s performance.”
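
Conceptually, that translation amounts to solving the actor’s capture into a shared set of facial controls and replaying those values on the chimp’s own shapes. The toy sketch below, written in plain Python with NumPy, illustrates the idea; the control names and numbers are invented, and it is not Weta’s facial solver.

```python
# Toy sketch of facial retargeting: control values solved from the actor's
# capture drive the chimp's differently sculpted blendshapes. All names and
# numbers are illustrative.
import numpy as np

# Per-frame control values solved from the actor's face (range 0..1).
actor_controls = {"brow_raise": 0.7, "jaw_open": 0.3, "lip_corner_pull": 0.5}

# The chimp rig exposes the same controls, but each maps to its own
# per-vertex displacement on the chimp mesh.
n_vertices = 4
neutral = np.zeros((n_vertices, 3))
chimp_deltas = {
    name: np.random.default_rng(seed).normal(size=(n_vertices, 3)) * 0.1
    for seed, name in enumerate(actor_controls)
}

def apply_controls(neutral, deltas, controls):
    """Linear blendshape evaluation: neutral + sum(weight * delta)."""
    mesh = neutral.copy()
    for name, weight in controls.items():
        mesh += weight * deltas[name]
    return mesh

chimp_frame = apply_controls(neutral, chimp_deltas, actor_controls)
```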

POST: How many digital chimps did you create?

LETTERI: “There were four lead chimps, probably about another dozen secondary ones, and from those we have scenes where there are hundreds. We reused those 12 or so and just varied them slightly to populate the town.”

POST: Did you use Massive software?

LETTERI: “Yes, we did.”

POST: What did you use for compositing, and can you talk about that process?

LETTERI: “We used Nuke. We had a number of techniques that we’ve developed, especially through the course of Avatar. Even though we didn’t do this as a 3D stereoscopic film (it was shot on 35mm), we did use the 3D compositing pipeline that we developed on Avatar, and that really is helpful because you now have control of every pixel in depth. You aren’t just breaking things down into planes or elements like we used to in the early days, so it really makes it a lot more flexible to figure out how you need to combine elements and work with them after the fact.

“We also used Nuke’s 3D capabilities... to projection map backgrounds and skies and everything, so we could get the backgrounds working pretty quickly. There are a lot of things you have to do when dealing with paint out — there is a lot of cut and paste and re-projection and things like that, and Nuke, because it has that built-in 3D engine, is really helpful for that.”
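
For readers unfamiliar with the technique, the sketch below shows a bare-bones camera-projection setup built with Nuke’s Python API: a plate projected through a camera onto simple geometry and re-rendered from the shot camera. The file path and node choices are placeholders, not taken from the production.

```python
# Minimal camera-projection sketch using Nuke's Python API: project a painted
# background plate through a camera onto stand-in geometry, then re-render it.
# The file path is a placeholder.
import nuke

plate = nuke.nodes.Read(file="/path/to/clean_background_plate.####.exr")
camera = nuke.nodes.Camera2()       # shot camera (would be matchmoved in practice)
card = nuke.nodes.Card2()           # stand-in geometry for the background

project = nuke.nodes.Project3D()    # projects the plate through the camera
project.setInput(0, plate)
project.setInput(1, camera)

card.setInput(0, project)           # the card is textured by the projection

render = nuke.nodes.ScanlineRender()
render.setInput(1, card)            # obj/scene input
render.setInput(2, camera)          # render from the shot camera
```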

POST: So no 2D-to-3D conversion?

LETTERI: “No stereo at all on this one. We thought we couldn’t shoot it stereo; we just didn’t have the time because of the tight release schedule. Everyone wanted all the work to really go into the characters rather than worrying about stereo. So Fox sort of took that off the table early on. Fox knows, because we’ve done this with Avatar, if you are not shooting in stereo, you really don’t want to convert. It’s better to do it right or not do it at all; commit one way or the other.”

POST: Backtracking a bit, what was the previs process like?

LETTERI: “We didn’t do too much of the previs on this film because director Rupert Wyatt had a previs team working with him directly, so a lot of that was to flesh out story and script ideas and blocking for the stage and things like that. 

“What we did was a little bit more animation driven, where we were just working out real performance beats and figuring out what the motions of the chimps would be. Because as much as you have actors performing this with their arm extensions and trying to be quadrupedal and everything, you still have to translate that to realistic chimp motions.

“You have to discover when an actor does something, what the analogous motion is because you kind of understand what they are after — but because the knees and ankles and everything bend in different ways, you can’t just take all the capture data literally. At some point you have to blend into what the chimp has to do to get the right foot contact and right joint angles. So we spent a lot of time in the early days doing character studies to figure out that motion and how we would put that together.”
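
A heavily simplified way to picture that blend is sketched below: per-joint rotations from the capture are mixed toward a chimp reference pose, after which a real pipeline would solve foot contacts with IK. The joints, angles and weights are invented for illustration.

```python
# Toy sketch: blend captured human joint rotations toward chimp-appropriate
# angles. Values are in degrees and purely illustrative; a production solver
# would follow this with IK to pin foot contacts.
import numpy as np

def blend_pose(captured, chimp_reference, weight):
    """Linearly blend per-joint rotations from the capture toward a chimp pose."""
    return {
        joint: (1.0 - weight) * np.asarray(captured[joint])
               + weight * np.asarray(chimp_reference[joint])
        for joint in captured
    }

captured_frame = {"knee": [55.0, 0.0, 0.0], "ankle": [10.0, 0.0, 0.0]}
chimp_reference = {"knee": [95.0, 0.0, 5.0], "ankle": [35.0, 0.0, 0.0]}

# Heavier weights push the result further toward chimp anatomy.
blended = blend_pose(captured_frame, chimp_reference, weight=0.6)
```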

POST: Did you rely on videos or live chimps?

LETTERI: “We saw live chimps because we work out of Wellington and the zoo there has a great population of them. They allowed us to spend a lot of time there photographing and filming. Then the animators had to go in and look at that and any other references, and start tracking and matching the animation, and learning about how chimps move.”

POST: And making them believable?

LETTERI: “Yes, the biggest challenge is really just building the believability of the characters. Just looking for those moments when you can really get what the character is thinking, and they become part of the story. Every scene just requires one of those keystone moments where you figure out this is what’s working and then all the other shots can hang around that and come together.”

POST: How was it working with the director Rupert Wyatt?

LETTERI: “We treated it very much like a live-action film. We had Andy Serkis and the other actors on the set, and because they were the same size as the chimps, we decided early on to just shoot it like it was live action. That way you get everything you want. If you want a close-up, you have Andy turn to camera, or if he’s running and the cameraman needs to track with him, then use that performance because you can cut with it… everyone has a guide and it just works.

“So, even though we were capturing all the performance live, we didn’t overlay Caesar onto the camera or anything. Not only was it not necessary, it actually would have gotten in the way. Andy’s performance was the one you wanted, not the plastic Caesar that you could do in realtime, so we just treated it like live action and kept that as our model all the way through.”