LOS ANGELES — Marvel Studios’ new Avengers movie, distributed under the Disney banner, capitalizes on the existing big-name franchises of Iron Man, Hulk, Thor and Captain America. Alongside the main VFX houses, ILM and Weta Digital, the creative team includes writer/director Joss Whedon, Captain America editor Jeffrey Ford and Serenity editor Lisa Lassek.
“It was always going to be a two-editor movie,” says Ford. “That was always the plan because of the scale of the thing. It’s enormous. There are multiple characters and it’s an ensemble piece, so it has a lot of range, a lot of complexity to it. And there’s a ton of incredible action stuff. It’s just mind-blowing; there’s so much of it.”
Ford interviewed with director Whedon in early 2011, got the job, and started editing Avengers at the end of July. “I never worked with him before, but I knew his work. We hit it off right away. We have a similar sense of humor. We’re both comic book fans from way back, so we both understood the world and speak the same language and shorthand.”
Ford says Whedon was the perfect man for the job “because of his ability to weave super compelling and really exciting storylines together. He’s really brilliant at balancing all those characters and giving them a unique, separate voice.”
“There’s a lot of humor in the movie,” Ford says excitedly. “It’s character-driven humor: humor driven by the absurdity and irony of the situation that the characters find themselves in. It’s not really related to the fact that they are superheroes or that they have incredible powers. It’s related to the human complications that they find themselves in dealing with each other and the crisis that they are in.”
Captain America was a man out of time, he continues. “He was frozen 70 years ago. He’s completely behind the times and has no idea what’s going on in the modern world. He has to catch up and deal with this crisis that’s beyond everyone’s understanding. Then you have Tony Stark, who’s always looking at things from a totally unique perspective and is always making a wisecrack to break the tension. You have Thor; he’s from Asgard, so his understanding is completely shaped by the fact that he’s not from Earth. When everyone has to compare notes about what’s what, it can be really fun.”
Most of the movie was shot with the Arri Alexa with some additional high-speed photography done on film and some Canon 5D footage. Ford and Lassek edited on Avid systems cutting in DNx36.
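Cutting in DNxHD 36 keeps the offline footprint manageable on a footage-heavy show, since the codec runs at roughly 36 megabits per second for 1080p material. A back-of-envelope storage estimate (illustrative figures only, not from the article):

```python
# Rough storage math for an Avid offline cut in DNxHD 36 (~36 Mb/s at 1080p),
# compared against a finishing-quality flavor. Illustrative estimate only;
# real numbers vary with frame rate, audio tracks and media overhead.
def gb_per_hour(mbit_per_sec: float) -> float:
    """Convert a video bitrate in megabits/second to gigabytes/hour (decimal GB)."""
    bytes_per_sec = mbit_per_sec * 1_000_000 / 8
    return bytes_per_sec * 3600 / 1_000_000_000

dnx36 = gb_per_hour(36)    # offline editorial proxy
dnx175 = gb_per_hour(175)  # finishing-quality comparison point
print(f"DNxHD 36: {dnx36:.1f} GB/hr, DNxHD 175: {dnx175:.1f} GB/hr")
```

At about 16 GB per hour versus nearly 79, the proxy flavor lets editorial keep an entire multi-unit shoot online at once.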
“It’s sort of a fact of life that there’s going to be a lot of footage on a movie of this scale, because you have to use multiple units to make the schedule, and the only way you can do that is to run those units in parallel,” says Ford. “So Joss was moving back and forth between two units all the time.”
According to Ford, “Movies like this pull the director in a lot of directions. It’s difficult to have a director sitting with you all the time because they are needed in so many different areas — visual effects reviews going on every day, directing the composer — there’s just so much to do. Early on, when he was available, we spent a lot of time discussing the story and then Lisa (Lassek) and I would work and look at cuts, and he’d give us notes and feedback, and we’d turn the cuts around fairly quickly. Then we went into a process of screening the film frequently, and that helped a lot because when you see it in a run you really get a clear picture of what you’re trying to achieve.”
PREVIS & POSTVIS
The daunting task of managing the VFX workflow on Avengers was expedited by the previsualization and postvisualization work done by The Third Floor and VFX editors George McCarthy and Jeremy Bradley. McCarthy and Bradley worked interactively with Ford, Third Floor supervisors Nick Markel (previs) and Gerardo Ramirez (postvis), and uber VFX supervisor Janek Sirrs to create previs and postvis sequences for various moments in the film. “The Third Floor was essential to everything that we did,” says Ford.
The previsualization team worked first with Whedon, Avengers storyboard artists and production designers to digitally model and animate representations of the bulk of the scenes in the movie, with CG backgrounds and dimensionally accurate virtual versions of the sets.
Postvisualization takes the previs material and makes a first pass on marrying the effects to the actual plates in the movie. “We use those to define the motion of the characters,” says Ford. “Especially when you have an animated character that's full CG like Hulk. His movements and actions have to be mapped out with other characters in the plate and you have to do that in a very specific and careful way before you send it to the vendor to do their work. They need a guide of some kind.”
In most VFX movies, the editor is constantly cutting scenes with greenscreen in them. By doing postvis, the editor and director get a better idea of how the scene plays out and whether they need to make changes. In the evenings, Ford would finish cutting together a VFX-intensive scene and then hand it over to McCarthy and Bradley. They would then identify the plates required for the VFX shots and pass the plates to Ramirez and his postvis team. Some of the shots would have two or more plates: a clean background plus, for example, one with an explosion and another with a crowd running.
Ramirez and his team tracked the plates using Boujou to extract technical data such as camera height, speed, angle and lens. The postvis team then used that data in Maya and After Effects to add CG elements that included background set extensions, animated characters, vehicles and effects. McCarthy and Bradley would analyze the postvis shots in the edit and send data such as frame count information to the various vendors.
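What that tracked camera data buys the postvis team is the ability to place CG elements so they stick to the plate. Underneath any of those tools sits ordinary pinhole projection; a minimal sketch of the math (not Boujou's or Maya's API, and the level-camera setup is a simplifying assumption):

```python
def project(point, cam_height, focal_px, cx=960.0, cy=540.0):
    """Project a 3D world point into pixel coordinates through a level pinhole
    camera sitting at (0, cam_height, 0) and looking down +Z.
    A minimal illustration of how tracked camera data (height, lens) anchors
    CG elements to a plate; a real camera solve also carries rotation,
    per-frame motion and lens distortion."""
    x, y, z = point
    yc = y - cam_height          # shift the point into camera space
    u = cx + focal_px * x / z    # perspective divide onto the image plane
    v = cy - focal_px * yc / z   # screen Y grows downward
    return u, v
```

A point straight ahead of the lens at camera height lands on the image center, which is a quick sanity check that a solve and a render line up.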
For simpler postvis shots, McCarthy and Bradley did comps directly in their Avid system with CG elements provided by Ramirez or one of the vendors. Everyone was stationed at the same location, recalls Ramirez, and in close communication with director and editors so when they sent shots to the vendors “they know where the director is going” regarding timing and composition.
Ford would treat previs and postvis material just like a shot of photography when he cut it in. “Those things are helpful in trying to determine what’s really essential to making the cuts work.”
“No matter what happens on any visual effects movie, you have to be open to making changes until the very last minute,” explains Ford, “because as the (VFX) shot comes in, you have to re-cut it. Every shot that comes in, I re-cut it. The cut points change for every single shot when it’s finalled, almost without fail. A few of them drop in as-is, maybe because they have sync dialogue and a cut point that’s defined by the plate. But if you get a new visual effects shot, it’s your responsibility to evaluate it again.”
Editorial used cineSync or Polycom to communicate globally with the visual effects vendors. Shots were delivered to editorial in Los Angeles via a secure file-transfer service such as Signiant or Aspera. Vendors would send DPX files as well as QuickTime movies; the DPXs were screened in the visual effects room and the QuickTimes were put into the cut.
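Before a delivered DPX sequence is screened or cut in, a VFX editorial department typically sanity-checks that the frame numbering is contiguous. A hypothetical checker, assuming a common `name.frame.dpx` pattern (the naming convention is my assumption, not from the article):

```python
import re

def missing_frames(filenames):
    """Given delivered DPX filenames like 'shot_0101.1001.dpx', return any
    frame numbers missing from the sequence.
    The naming pattern is hypothetical; real deliveries follow whatever
    convention the show's VFX editorial team specifies."""
    frames = sorted(int(m.group(1))
                    for f in filenames
                    if (m := re.search(r'\.(\d+)\.dpx$', f)))
    if not frames:
        return []
    expected = range(frames[0], frames[-1] + 1)
    present = set(frames)
    return [n for n in expected if n not in present]
```

A gap in the returned list means the vendor gets a call before the shot goes anywhere near the cut.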
Did the movie change much over the course of the production? “Whedon’s a writer by trade,” explains Ford. “Any writer who’s making a film is rewriting the movie as they are shooting it. Every writer/director I’ve worked with approached it that way, and it’s a huge benefit. As you learn about the characters and [what the cast brings], you change the emphasis on things and change the dialogue to fit their voice. He did that a lot, and we had very little re-shooting on this film.”
San Francisco’s ILM (www.ilm.com) and Wellington, NZ’s Weta Digital (www.wetafx.co.nz) were two of the main visual effects houses on Avengers. Others include Digital Domain, Cantina Creative, Evil Eye Pictures, Fuel VFX, Hydraulx, Lola Visual Effects, Luma Pictures, New Deal Studios and Trixter Film.
ILM VFX supervisor Jeff White says his crew, which reached 250 people at one point, was responsible for character creation and revision work on Hulk, Iron Man, the aliens and digital doubles for the characters, as well as building out sections of New York City, work that added up to over 700 shots in the finished movie.
Early in 2011, White went to New York City to supervise the still photography used to build highly detailed digital environments employed as sets and location extensions. The process uses a combination of LIDAR and high-resolution digital photography.
ILM had four photographers in the Big Apple for almost eight weeks. Each photographer had a camera rig that shoots 360-degree panoramas “that look like the street views on Google Maps, except it’s much higher resolution,” says White. They hung cameras off of rooftops to get high perspectives, and they’d shoot a “sphere” every hundred feet or so.
“We shot 1,300 of these spheres and each sphere is composed of 72 bracketed images, so there’s almost 275,000 photos in all and this is all super high resolution,” recalls White. The goal was a virtual environment dense enough that, when they shoot a scene, especially with a moving camera, the digital version holds enough information to be indistinguishable from the actual location.
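The quoted figures imply roughly three exposure brackets per camera position; the article doesn't state the bracket count, so treat that as an inference from the arithmetic:

```python
# Back-of-envelope check on ILM's photo-sphere numbers. The bracket count is
# my inference; the article gives only the sphere count, per-sphere image
# count and the ~275,000 total.
spheres = 1300
views_per_sphere = 72
single_exposures = spheres * views_per_sphere   # 93,600 camera positions
quoted_total = 275_000
brackets = quoted_total / single_exposures      # ~2.9 exposures per position
print(single_exposures, round(brackets, 1))
```

Multiple exposures per position is standard practice when the images will be merged into high-dynamic-range textures for lighting and set reconstruction.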
Most of the cameras were Canon EOS-1D Mark IIIs shooting “raw” images. “We have the pipeline where we shoot them as raw, plus you have little jpeg previews so you can do a quick stitch just to make sure you got all the tiles you needed,” notes White.
One area they needed was the viaduct at Grand Central Station. Says White, “We had the set in New Mexico where they had built a strip of the viaduct.” It was a pretty good set, but it was literally just the upper roadway of the viaduct, with only 40-foot greenscreens beyond that. White was going to be responsible for everything past the set, with the understanding that it wasn’t going to be just one view of it. “Our camera was going to be moving up and down on the viaduct for all the different shots.”
A lot of the work is processing all the imagery and building the digital environment. That means projecting the images onto the LIDAR and painting out anything that doesn’t have LIDAR dimension information. If there’s a tree or a building or cars, all that has to be painted out and later replaced with new CG versions of those objects.
“Then we had to go and break out all of the windows in the buildings because those need to have moving reflections on them and they need to have room interiors,” explains White. So ILM artists replaced each window on every building. They have a special “shader” for reflections that also adds a window blind and room behind the window that you can see. Says White, “It’s having that kind of variety and those kind of details that actually make the city look believable.”
This virtual environment work is something White had done before on Transformers. “We’ve done it before, but not to this level. This was that to the nth degree and stretching that technology much further than we pushed it before.”
With Transformers, since it was shot in Chicago, they were able to shoot plates with cameras hanging from helicopters. Chicago has much looser restrictions on where you can fly and how low you can get. In New York City, for obvious reasons, you can’t get below 500 feet above the tops of the buildings. “So for anything that would be an aerial plate within the city, or even for a lot of the photography on the ground, we knew we were going to have to work extensively from photographs of the city,” he explains.
White is also especially proud of the new digital Hulk ILM created. “We had so much focus on Hulk. We knew that this movie would really ride on that character. Pulling off a digital character that’s believable is so difficult. We really threw everything we had at it in order to make the eyes believable, for the skin to react in the right way; the right level of sweat, all the things you can see in the trailer shots took iteration after iteration to get right.”
The idea to incorporate the actor Mark Ruffalo into the design of the Hulk was “one of the great decisions they made on this film,” White explains. Ruffalo gave performances on set that were recorded with a film camera as well as a witness camera zoomed right on his face so they could reference all the nuances of his expressions.
Ruffalo later went to ILM and did a mocap performance, equipped with a head-cam for facial tracking, for each shot. ILM artists incorporated his physical performance and acting, and shaped a digital double using life casts of his hands, feet and face, and full Light Stage body scans for textures. They even took a dental mold to use as the basis for Hulk’s teeth. A lot of time was invested making the Ruffalo digital double perfect. “We took that Mark Ruffalo mesh and sculpted that to be the Hulk. So they share the same topology and textures,” notes White.
ILM took a multilayered approach to realizing the Hulk character. In Maya they built a complex rig with a muscle system for volume preservation. On top of all that they had a truly dynamic “sim” (simulation). “We have a great sim engine in Zeno that uses tetrahedron meshes to simulate skin to get the gross jiggle dynamics when the Hulk is jumping or hits the ground,” explains White. “We then run a thin-wall version of that to simulate what the skin did to get wrinkles and folds.”
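The “jiggle” such a sim adds is, at its simplest, damped oscillation of the flesh around the animated pose after an impact. A toy one-degree-of-freedom version, nothing like Zeno's tetrahedral solver, just the underlying behavior:

```python
def jiggle(impulse, stiffness=80.0, damping=6.0, dt=1/240, steps=240):
    """Damped spring responding to an impact: returns the offsets of a flesh
    point relative to its rest (animated) position over one second.
    Semi-implicit Euler integration; a toy stand-in for a full tetrahedral
    soft-body solve, which couples thousands of such points together."""
    x, v = 0.0, impulse   # start at rest position with an impact velocity
    offsets = []
    for _ in range(steps):
        a = -stiffness * x - damping * v   # spring pull back + damping
        v += a * dt
        x += v * dt
        offsets.append(x)
    return offsets
```

The point overshoots, oscillates and settles, which is exactly the secondary motion that sells a heavy character hitting the ground; the thin-wall wrinkle pass White describes would then run on top of this gross motion.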
ILM is known for relying heavily on proprietary, in-house tools like Zeno. On this movie, White says, the focus was “to open the toolsets to allow the artist to solve problems the best way possible.” Sometimes off-the-shelf software has the best tool to get the shot done a lot faster: Autodesk Maya 2011 was their animation backbone, Side Effects Houdini handled effects work and The Foundry’s Nuke was used for compositing. The environments team used a combination of Zeno for building New York, plus Autodesk 3ds Max and Chaos Group’s V-Ray for rendering cars and other props.
One major workflow change was that they ingested footage as DPX files, converted it to EXR and worked in P3 color space instead of Rec. 709. Part of that was making sure all the artists were equipped with HP DreamColor monitors that can display P3 color space. “There were some real headaches at the beginning, but it helped us immensely being able to work in that color range and having confidence on how it was going to look when it was projected,” concludes White.
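Working in P3 rather than Rec. 709 means a wider gamut: every Rec. 709 color fits inside P3, but not the reverse. For linear RGB, moving between the two is a 3×3 matrix trip through XYZ. A sketch using published primaries (this illustrates the color-space relationship, not ILM's actual DPX-to-EXR pipeline):

```python
import numpy as np

# RGB -> XYZ matrices for linear Rec.709 and Display P3 (both use D65 white).
REC709_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                          [0.2126729, 0.7151522, 0.0721750],
                          [0.0193339, 0.1191920, 0.9503041]])
P3_TO_XYZ = np.array([[0.4865709, 0.2656677, 0.1982173],
                      [0.2289746, 0.6917385, 0.0792869],
                      [0.0000000, 0.0451134, 1.0439444]])

def rec709_to_p3(rgb):
    """Convert a linear Rec.709 RGB triple into linear P3-D65 RGB.
    Every Rec.709 color lands inside the P3 gamut, since P3 is wider."""
    return np.linalg.inv(P3_TO_XYZ) @ REC709_TO_XYZ @ np.asarray(rgb, float)
```

Since both spaces share a D65 white point, neutral values pass through unchanged, while a saturated Rec. 709 red lands comfortably inside the P3 cube, leaving headroom that artists can grade into.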
Weta Digital had at peak about 500 people working on just under 400 shots for Avengers from early 2011 to March of 2012, according to Weta visual effects supervisor Guy Williams.
Williams says the two most challenging sequences were the “Mountain Top Battle,” where Captain America and Iron Man fight Thor for custody of Loki, and the “Engine Three” destruction scene, when Captain America and Iron Man have to survive a crashing Helicarrier.
The big challenges in the “Mountain Top Battle,” says Williams, were the large number of shots, the scope of the effects and the extensive use of all-CG animation with digital doubles and environments. “Some shots were deemed from the start to be too hard to capture on location, and at an early date, they were planned as all-CG shots. Other shots evolved into all-CG shots to add even more head-slamming action into the scene.”
In the “Engine Three” destruction scene, Hawkeye fires an explosive arrow into the ventilation shaft of the Helicarrier. A large concussive fireball, created using Maya’s high-resolution fluid sims, tears up the engine, gets sucked down into the blades and is spat out the bottom. Inside the plumes of digital smoke, Iron Man and Captain America work through the wreckage to fix the engine before the Helicarrier crashes.
“The detail placed into the sets allowed us freedom to enhance action later by swapping out existing shots with new, fully-digital shots with more extreme action and camera moves,” explains Williams.
Advances in a cloud simulation plug-in enabled artists to populate numerous shots with complex volumetric cloudscapes featuring improved light scattering and indirect lighting, including a new control for anisotropy, which is important for getting the proper amount of light bleeding into the cloud without the key side of the cloud blowing out too fast.
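The article doesn't name the scattering model behind that control, but anisotropy in volumetric cloud rendering is classically parameterized with the Henyey-Greenstein phase function, where a single value g in (-1, 1) biases scattering forward or backward:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: probability density (per steradian)
    of light scattering at angle theta from its incoming direction.
    g > 0 favors forward scattering, which is what lets light bleed through a
    cloud and brighten its back-lit edges; g = 0 is isotropic scattering."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

The function integrates to one over the sphere for any valid g, so raising g redistributes light toward the forward direction rather than adding energy, which is exactly the "bleed without blowing out" trade-off Williams describes.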
“We worked a lot with ILM on this show,” describes Williams. “They created many of the models that we started with, such as Iron Man and the Helicarrier. When we needed to change the models or the textures of an asset, we would pass it back to them.
“This movie is geek food,” he adds. “Joss is a brilliant writer and director. Marvel is a great studio that understands their properties to the fullest and best knows how to present them. Tack onto that how fun it was to deal with the production and how cool the specific shots were to work on, and I couldn’t ask for a better set-up. Listen, it’s hard to remember with projects like Avengers that this is a job and not a hobby.”