VISUAL EFFECTS: 'BEOWULF'
Iain Blair
Issue: November 1, 2007

CULVER CITY - Based on the Old English epic poem of the 8th century, and with a script by graphic novelist Neil Gaiman (Stardust) and Roger Avary (Pulp Fiction), Robert Zemeckis’s new film version of Beowulf doesn’t pull any punches in terms of gore and violence. It also brings the ancient tale of the mighty warrior Beowulf (Ray Winstone), who slays the demon Grendel and incurs the wrath of its monstrous-yet-seductive mother (Angelina Jolie), firmly into the modern world of digital cinema.

Here, visual effects supervisor and Sony Pictures Imageworks staffer Jerome Chen, who previously collaborated with Zemeckis on The Polar Express and who earned his first Academy Award nomination for the groundbreaking visual effects in Stuart Little, talks about the challenges of making Beowulf and pushing the digital envelope even further than they did on The Polar Express.

POST: What were the biggest challenges of making the film?

JEROME CHEN: “The film’s sheer scope in terms of the 6th century fantasy-oriented environments that we needed to make, including castles, caves and the ocean. Beyond that, we also had to create very realistic humans and creatures, including a dragon — which is tough in itself, as there have been so many dragons in film and audience expectations are so high — and the Grendel creature. Grendel had to be ferocious but also emotionally vulnerable and able to elicit sympathy from the audience, so we had a lot to contend with.”

POST: Where did you do all the visual effects, and what tools did you use?

CHEN: “It was all done at Imageworks with a team of between 400 and 500, and I’ve been on it for three years now, so it’s been huge — the biggest job I’ve ever done. We used [Autodesk] Maya as the backbone of our animation pipeline, and [Side Effects] Houdini as our effects pipeline. We rendered in Pixar RenderMan and composited in our in-house software, Katana and Bonzai. Katana also doubles as our RenderMan interface: it’s a combined lighting/compositing front end.

“It’s odd talking about visual effects because the film is totally CG. It’s really a hybrid animation/visual effects film with a lot of live-action components because we deal with actors in terms of capturing their performance. The whole front end of the production process in terms of designing sets, costumes and the motion capture all feels very live action, and that’s why I love these projects because they’re these hybrids that give you the best of both worlds. You still interact with the live action, then in post the whole thing is a visual effects film. The only difference is you’re doing everything in the film as opposed to a select group of shots that enhance the live-action photography. So here, you’re not only lighting the humans and creatures in CG, you’re also compositing them onto a CG-created environment.”
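To make the layering Chen describes concrete, here is a minimal sketch of a premultiplied-alpha “over” composite, placing a rendered CG character onto a CG environment. This is generic compositing math, not Imageworks’ Katana or Bonzai code; the random arrays simply stand in for rendered frames.

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Porter-Duff 'over': premultiplied foreground composited onto background."""
    return fg_rgb + (1.0 - fg_alpha) * bg_rgb

h, w = 270, 480
environment = np.random.rand(h, w, 3).astype(np.float32)  # CG set render
character = np.random.rand(h, w, 3).astype(np.float32)    # character color
alpha = np.random.rand(h, w, 1).astype(np.float32)        # character matte
character *= alpha                                        # premultiply by the matte
final = over(character, alpha, environment)               # character over environment
```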

POST: How big a step forward was this from Polar Express, or was it more of a refinement of those techniques?

CHEN: “I’d definitely call it a leap forward, in both technology and creativity, which work hand in hand. The more advanced the technology we developed, the more time we had to work on the creative aspects of creating the imagery. What I’ve found in most visual effects work is, if you spend, say, 75 percent of your time trying to work out how to do it, you only get 25 percent of the time to make it look good. But here, it was more a 50/50 split, and it shows.”

POST: So what new technology enabled you to also raise the creative bar?

CHEN: “We used Imageworks’ patented Imagemotion performance capture system to bring it all to life. Advancing the motion capture volume, with more and higher-quality motion capture cameras so we could get more of the facial and body movements, was crucial. The capturing part is the equivalent of shooting negative, except it’s negative that can only pick up human motion and nothing else. And pre-production is the scanning of the actors into the computer and the whole modeling of costumes and so on.”
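As an illustration of that “motion-only negative” idea, here is a hypothetical sketch of what a single capture frame might hold: timestamped marker positions and nothing else, with no camera, lighting or imagery attached. All names and fields are invented for clarity; this is not Imagemotion’s actual data format.

```python
from dataclasses import dataclass

@dataclass
class MarkerSample:
    name: str                             # e.g. "face_brow_L" or "body_hip"
    position: tuple[float, float, float]  # world-space marker position

@dataclass
class CaptureFrame:
    timecode: float              # seconds from the start of the take
    markers: list[MarkerSample]  # every optical marker seen on this frame

# One frame of "negative": pure motion data, no picture.
frame = CaptureFrame(
    timecode=12.041,
    markers=[MarkerSample("body_hip", (102.4, 998.1, -15.7))],
)
```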

POST: How did the pipeline work once that part was complete?

CHEN: “The post process involves taking all that movement data, cleaning it up and applying it to the virtual skeletons of the characters. We had to create a library of CG characters that was affordable, in the sense of being able to make enough high-quality humans to fill the movie, as some scenes have over 100 characters in a hall or a crowd. We didn’t have enough time to make 100 different people, so we made about 24 male and female bodies, and from those created variations of costumes and hairdos, which were extremely complex and time-consuming to do.
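A rough sketch of that crowd-variation arithmetic, assuming the roughly two dozen base bodies Chen mentions are crossed with invented costume and hairdo variants; the combinatorics, not the names, is the point.

```python
import itertools
import random

# About 24 base bodies, per the interview; variant names below are invented.
bodies = [f"male_{i:02d}" for i in range(12)] + [f"female_{i:02d}" for i in range(12)]
costumes = ["thane_tunic", "mead_hall_dress", "guard_mail", "peasant_wool"]
hairdos = ["braided", "cropped", "long_loose", "beaded"]

# Every unique body/costume/hairdo combination.
variants = list(itertools.product(bodies, costumes, hairdos))
print(len(variants))  # 24 * 4 * 4 = 384 distinct-looking extras

# Fill a 100-character mead hall without repeating a combination.
hall_crowd = random.sample(variants, 100)
```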

“The next step was creating all the CGI environments and sets, and there were numerous rooms in the castle, a forest, a beach, countryside, a cave and so on. All those had to be built and then texture-painted. Then we had all the creatures in addition to Grendel and the dragon, including horses and dogs, which was kind of like building miniatures. Next, you get into the process of dealing with the motion capture elements, and while you’re capturing actors’ movements, you’re not actually creating a camera POV yet. All you do is record human data moving — you don’t have the cinematic POV. So there’s an editorial process that happens: Bob Zemeckis would select the performance he liked, and then we’d process that onto the CG characters. And for any given scene, we’d take the characters and place them into the set they’re meant to be in, so you need very careful records when you do motion capture. Once that’s done, we turn it over to layout, the equivalent of the camera department, which starts creating the cinematic POVs Bob wants. The interesting thing is, the performance is locked, and now you shoot your coverage.”
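The “very careful records” Chen mentions suggest bookkeeping along these lines; a hypothetical sketch, with every identifier invented, of the metadata a selected take would need to land the right performance on the right character in the right set.

```python
from dataclasses import dataclass

@dataclass
class TakeRecord:
    take_id: str        # editorial's selected performance, e.g. "sc12_tk04"
    actor: str          # who was captured on the mocap stage
    character: str      # which CG character receives the motion
    set_name: str       # which CG environment the scene plays in
    frame_range: tuple  # the selected portion of the take

selects = [
    TakeRecord("sc12_tk04", "Ray Winstone", "Beowulf", "heorot_hall", (101, 384)),
]

def takes_for_set(records, set_name):
    """Everything layout needs to assemble one environment's scenes."""
    return [r for r in records if r.set_name == set_name]

print(takes_for_set(selects, "heorot_hall"))
```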

POST: What technology was used for this?

CHEN: “We used the realtime capabilities of game technology to help us visualize this part, specifically the realtime rendering engine inside [Autodesk] MotionBuilder. MotionBuilder has better realtime capabilities than Maya, and realtime is important because we have an in-house camera layout system that works in realtime; that way we can use the skills of a real camera operator to shoot the characters.

“Rather than having an animator keyframe these cameras, we actually have the motion characters play back in realtime; the camera operator looks at them on a screen and basically pans and tilts to follow the characters walking across the screen, and we record that live. So you get all the nuances of a cameraman’s style, and it doesn’t feel too keyframed. It worked well for giving the film fluid human movements.
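A minimal sketch of that recording loop, assuming a hypothetical pan/tilt encoder head: the operator’s input is sampled once per frame against the realtime playback, so the raw curve, nuances and all, is kept rather than keyframed by hand.

```python
import math
import random

class FakeEncoderHead:
    """Stand-in for a real pan/tilt encoder head (hypothetical API)."""
    def __init__(self):
        self.t = 0.0

    def read(self):
        # Smooth follow plus a little hand-held jitter.
        self.t += 1.0 / 24
        jitter = random.uniform(-0.02, 0.02)
        return 10.0 * math.sin(0.2 * self.t) + jitter, 2.0 * math.cos(0.1 * self.t)

def record_camera_pass(device, num_frames):
    """Sample operator input once per frame of the locked performance."""
    curve = []
    for frame in range(num_frames):
        pan, tilt = device.read()
        curve.append({"frame": frame, "pan": pan, "tilt": tilt})
    return curve

# Ten seconds of coverage at 24 fps, recorded live rather than keyframed.
camera_curve = record_camera_pass(FakeEncoderHead(), 240)
```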

“That’s the first pass of performance integration onto the CG character and camera layout; it’s what the editor edits with. Now he has a whole bunch of shots to make his cut with, and after these camera layout shots are created and the editor cuts together a sequence, then we start, in some sense, a more traditional filmmaking process. The big difference is if we decide, for instance, to add a tighter reaction shot, the editor can then do it immediately. There’s no compromise, or ‘we missed that on the day.’ The performance is exactly the same, as you’ve locked off on it, and that’s part of the flexibility Bob likes about the performance capture process, because he gets the best of both worlds. He gets the actors’ performances, and then has the strength of the virtual technology to make the best film he can make.”

POST: Once you have a cut of the film, how do you deal with the more traditional aspects of post production?

CHEN: “We take the characters and start doing digital cloth and hair simulations on them, and since that’s the time-consuming part, we do all that after we have a cut put together, so we know exactly how long a shot will be and how many people will be in it. For cloth and hair simulation, we used the Maya cloth simulators as the backbone of our system, with a customized toolset that we built here. Then we have our own in-house hair styling system that’s rendered in RenderMan. And while all the hair and cloth are being done, the visual effects shots are happening in parallel.
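One way to picture why simulation waits for the cut: once shot lengths are locked, cloth and hair only need to run over the frames that actually appear, plus a short pre-roll to let the simulation settle. A hedged sketch follows, with invented frame numbers.

```python
PRE_ROLL = 24  # frames of run-up so cloth and hair settle before the cut point

def sim_range(cut_in, cut_out):
    """Frame range a cloth/hair pass must cover for one shot in the locked cut."""
    return (cut_in - PRE_ROLL, cut_out)

# A three-shot sequence from the locked edit: (first frame, last frame).
cut = [(1001, 1080), (1081, 1200), (1201, 1260)]
for cut_in, cut_out in cut:
    start, end = sim_range(cut_in, cut_out)
    print(f"simulate frames {start}-{end} for shot {cut_in}-{cut_out}")
```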

“So if a scene calls for snow or fire or debris, we use the Houdini effects pipeline to create the elements, which are also then rendered in RenderMan. On top of that, we spent almost 18 months writing customized RenderMan shaders to create the look of the characters’ skin, eyes and clothes. Then digital lighting starts on a shot-by-shot basis once cloth and hair are done, and the lighting layers are rendered separately and then composited together. All the final color balancing and lighting tweaks are done in the composite.”
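A generic sketch of that layered-lighting composite: separately rendered passes are summed back into a beauty image, and the color balance is applied in the comp rather than by re-rendering. This is standard render-pass math, not Imageworks’ actual shader or compositing code.

```python
import numpy as np

h, w = 270, 480
passes = {
    "diffuse": np.random.rand(h, w, 3).astype(np.float32),
    "specular": np.random.rand(h, w, 3).astype(np.float32),
    "subsurface": np.random.rand(h, w, 3).astype(np.float32),  # e.g. skin
}

# Rebuild the beauty image by summing the separately rendered lighting layers.
beauty = sum(passes.values())

# Final color balance as a per-channel gain done in the composite,
# not by re-rendering the shot.
gain = np.array([1.02, 1.0, 0.97], dtype=np.float32)  # warm the image a touch
graded = beauty * gain
```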

POST: Beowulf seems far less stylized than The Polar Express?

CHEN: “Exactly. It had to have a component of realism and very dynamic performances from the actors. One of the biggest challenges was making sure that those performances translated. It’s not yet a perfect system. You don’t just turn the cameras on and their soul comes through automatically. Creating human characters that aren’t distracting is very, very hard. Even after doing this film I’d still consider that to be the hardest thing to do. We also had to tackle all the hard CG elements, like cloth, water, hair and so on.”

POST: After three years on this film, just how groundbreaking is Beowulf for you?

CHEN: “It definitely feels like we’re in the middle of some big change that’s happening in the industry. It’s not a sweeping change, but it’s a new avenue that’s opening up, because as we’ve been working on the movie, we’ve started speaking to other directors, writers and creatives who are becoming very interested in this whole process of performance capture and CG.

“Jim Cameron is using something similar on his new film Avatar, and other directors are exploring this kind of filmmaking process. The big appeal is that the process has a live-action component; it’s not just making an animated feature where the performances are storyboarded and created by animators. This is a different genre. My background is animation, but this won’t replace animation and it’s not intended to. It seems to be becoming its own medium, another avenue for creatives to tell a story in, one that requires all this new technology.”