By Daniel Restuccio
Issue: June 1, 2004

THE DAY AFTER TOMORROW'S PHOTOREAL EFFECTS

The genesis of The Day After Tomorrow came when director Roland Emmerich was shooting The Patriot in North Carolina. The threat of impending hurricanes had production personnel glued to The Weather Channel. In the hotel bookstore Emmerich found a copy of The Coming Global Superstorm by Art Bell and Whitley Strieber, which was inspired by the article "The Great Climate Flip-Flop" by William H. Calvin in the January 1998 issue of The Atlantic Monthly.

At the core of Calvin's thesis is mounting, credible evidence that Earth's climate abruptly flip-flops every thousand years, suggesting another ice age might be coming. This new deep freeze, triggered by global-warming trends, could arrive sooner than anyone expects. Like the last ice age, such a climatic catastrophe would wipe out entire populations, permanently changing civilization as we know it.

Emmerich and co-writer Jeffrey Nachmanoff brainstormed a storyline in which the sudden climate change occurs in weeks, not years, culminating in a dramatic second-act superstorm that leaves much of the northern hemisphere encased in snow and ice.

They tied that plot to the story of Jack Hall, a heroic paleoclimatologist played by Dennis Quaid, who desperately tries to warn the world of the impending disaster. The scenario becomes personal for the hero when his son Sam travels to endangered New York City to compete in an academic decathlon, forcing Hall to embark on a grueling rescue to save his son's life. Connecting these narratives are the themes of a father trying to emotionally reconnect with his son, and the arrogance of an entire nation that suddenly realizes it can't exploit the planet anymore.

Emmerich said in an earlier interview that no matter how grand the effects are, human drama is still at the heart of the film. The Day After Tomorrow echoes a theme common to many of Emmerich's films: the struggle of regular people in extraordinary circumstances, and the heroic aspects of their personalities that these situations bring out.

"Roland wanted to get right into pre-viz," says VFX supervisor Karen Goulekas. One of the major effects sequences is the flooding of New York City. So the first thing effects pre-production did was purchase the low-rez geometry of New York City from Urban Data Solutions (now known as EarthData Solutions). They started pre-viz'ing the storm tide sequence with a team of nine artists, including pre-viz supervisor Josh Kolden.

"We cranked out a lot of stuff the first three months," says Goulekas. "We were right downstairs from Roland so he could come down and look at stuff we'd work on. A few of the sequences had storyboards: the twister, the ice shelf and space sequences, but a lot of them didn't, which was fun. We'd just talk it through and try different angles."

"Often we'd sit with Roland," recalls Kolden, "and he'd say, "We need a boat and we need it to go down the street." So we had to build the boat on the spot in Maya or find a place to buy it and modify it."

Urban Data's New York City model contains millions of polygons, says Kolden, making for a huge database. Even though Crack Creative has pretty robust PCs - single- and dual-processor 3.06GHz machines, all with at least 1GB of memory - it was still a chore for any machine with less than 2GB of memory just to load the file into Maya, let alone move interactively through the database.

Faced with sluggish performance, Kolden upgraded the graphics cards to nVidia Quadro FX 1000, 2000 and 3000 cards. "It's worth it," he says, "because when you're sitting with the director you are no longer testing his patience as you wait for the software to render a new angle on a shot." For post-viz shots in particular, Kolden was working with graphically intensive particle snow. "We put in the new cards and it's the difference between working in molasses and working completely smoothly."

While the Urban Data Solutions models were fine for pre-viz, the demands of ultra-photorealism prompted Goulekas to augment the geometry with high-resolution laser scans from Lidar VFX (www.lidarvfx).

"The laser is intelligent and has intensity mapping built in," says Lidar VFX president Paul Maurice. "It has a range of three miles and is accurate down to the centimeter. We had 1.5 terabytes of data just for the NYC geometry."

"Based on our pre-viz," explains Goulekas, "we knew we had to do 13 blocks of New York, around 5th Avenue, 41st to 42nd street, all around the library, and the Empire State Building. It took three months to get 13 blocks. And at the same time we had three teams of photographers take over 50,000 building texture photos."

What you get with the Lidar scans, says Goulekas, is the natural anomalies of the building, like the asymmetrical character lines in a human face. "You get every nook and cranny. Like the building sinking under its weight, and getting crooked, and shifting and all the really cool stuff that happens with real buildings."

When Goulekas, Chambers and Emmerich completed principal photography, they entered a post-visualization stage, combining plate photography, live action and bluescreen with preliminary visual effects. It was important for editorial to understand the timing of events in the shots: the movement of water, how fast ice breaks apart, and the speed of twisters tearing through downtown Los Angeles.

"Crack Creative would take the Avid plates and mock up the scene and add the full CG," explains Goulekas. "We could change stuff. I would do an end sequence shot and ILM would say, "You know, it would be a lot faster if you guys do it. It's right there, it's iteractive, you cut it in, the director sees it, you like it and then send it to us." It's faster for the vendors too when they're long distance. How do you describe camera motion over the phone? Otherwise it takes a couple days to turn it around. We had the Crack Creative guys turn in 10, 15 iterations."

Kolden says, "Typically with this type of movie you see slugs of black or storyboards where the effects go. In this post-viz situation we were required to generate something that was compelling enough to edit directly into the movie, more or less seamlessly. Have it not be distracting and not pull you out of the flow of the movie."

"Post viz is adding extra detail to the pre-viz before you execute the effects shots. You can catch problems before you get into heavy stuff," says Kolden.

Kolden notes that when pre- and post-viz sequences were handed off to effects houses, there was very little freedom to rework the shots. "There's a good chance that they are using the original camera moves that we designed." When Crack Creative handed off shots to, say, Digital Domain, it would transform the camera curves created in Maya into curves compatible with Houdini, one of the effects packages DD uses. "The storm tide sequence looks identical to the pre-viz," he says.
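The article doesn't detail the hand-off mechanics, but a minimal sketch of one plausible route is below: bake the Maya camera to per-frame values and write them out as a .chan channel file, a plain-text format Houdini reads natively through its File CHOP. The camera names and attribute list here are hypothetical.

```python
# Minimal sketch: bake a Maya camera to a Houdini-readable .chan file.
# Assumes a camera transform named 'shotCam' and a focalLength attribute
# on its shape node 'shotCamShape' -- both names are hypothetical.
import maya.cmds as cmds

def export_chan(path, cam='shotCam', cam_shape='shotCamShape'):
    start = int(cmds.playbackOptions(q=True, minTime=True))
    end = int(cmds.playbackOptions(q=True, maxTime=True))
    channels = ['translateX', 'translateY', 'translateZ',
                'rotateX', 'rotateY', 'rotateZ']
    with open(path, 'w') as f:
        for frame in range(start, end + 1):
            # Sample every channel at this frame, baking the animation
            # curves into plain per-frame values.
            values = [cmds.getAttr('%s.%s' % (cam, ch), time=frame)
                      for ch in channels]
            values.append(cmds.getAttr('%s.focalLength' % cam_shape,
                                       time=frame))
            # One whitespace-separated row per frame, the .chan convention.
            f.write(' '.join('%.6f' % v for v in values) + '\n')

export_chan('/tmp/shotCam.chan')
```

One practical caveat with any such transfer is that the two packages must agree on rotation order and units, or the baked curves will not line up.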

During pre-production Kolden's pre-viz shots were cut together into complete sequences by Dan Fort using Final Cut Pro. Those pre-vizes were handed off to visual effects editor Peter S. Elliot as reference material for the actual editorial process. "David Brenner, the editor, was unavailable at the beginning of principal photography so he asked me to begin the post process first in Los Angeles then in Montreal," recalls Elliot.

Elliot did the first cut of the Los Angeles twister scene on an Avid Film Composer, combining pre-viz footage with actual location photography. "The back plates to the Los Angeles helicopter shots weren't available, or the pre-viz simply told the story better, so I put the shots from Josh's pre-viz [in] until the new plates were shot or there was a temp available."

Brenner and Elliot worked closely together during the initial stages of the first cut. "I would do rough composites on the Avid," describes Elliot. "There's the whole driving scene where Sam gets driven to the airport by Dennis Quaid. All the car interiors were shot on the soundstage in Montreal in front of bluescreen. Dave [Brenner] cut the scene using the bluescreen actors, and then I composited the Washington backplates with the interior car scenes."

The Day After Tomorrow is Oscar-winning editor David Brenner's (Independence Day, The Patriot) third film collaboration with Emmerich. "This film was closer to Independence Day than The Patriot due to its structure and its reliance on missing images that you had to imagine," he says. "This was the longest post production I've ever been on," he adds, referring to the roughly two-year span - early 2002 to May 2004 - he spent working on the film.

Brenner's philosophy of editing is that you don't have any philosophy at all until you start seeing actual film footage. "The biggest mistake you can make is to walk in with a preconceived notion of what you are going to do."

The style and rhythm of the cut has to come from the film, Brenner says emphatically. "Watch the flow of the actors on screen, observe the movement of the camera, see what the director does and let that dictate what the edit should look like. Don't try to force a style, let it be organic."

Emmerich, he says, knows what he wants, but is very collaborative. "During the shooting process, Roland lets me do my first cut," says Brenner, describing his working relationship with Emmerich, "and then comes in on weekends and gives me notes. Sometimes this may go on for a couple of passes. When the shoot ends he goes on vacation for a few weeks to clear his mind so that he can be fresh when he comes back and see the entire first cut. After that, he works with me every day throughout the director's cut."

"On action sequences he wants to see the assembly right away, before shooting ends, to make sure he has all the coverage he needs," notes Brenner. When the Frank Harris character falls through the roof of the mall, he recalls, they went back and re-shot the bluescreen a couple of times to get all the pieces.

On a film like The Day After Tomorrow, "Pre-viz is the editors' best friend," he says. "It's often hard to be drawn into a scene when you suddenly cut from live-action to a low-resolution computer animatic. That's why sound and music are so important. If you find the right piece of temp music to drive the scene, and if you create sound for visuals that are not there, you will fill in the blanks for an audience. If you did your job well, they will see the missing images in their minds, and they will go along for the ride."

The Day After Tomorrow opens dramatically with a fast, low fly-over of the Antarctic terrain. Three-dimensional titles appear and hover over the cold, icy white landscape. Large icebergs appear and recede. The aerial shot eventually settles on a tiny outpost of scientists, led by Dennis Quaid's character, who are studying prehistoric weather by drilling and extracting layered ice cores. That 4,000-frame, 2.5-minute shot, created by Hydraulx, is believed to be the longest all-CG flyover ever created.

The big challenge, say the Strause brothers, was making ice look photorealistic and not like slabs of flat plaster. The brothers solved that problem with the intricate technique of subsurface scattering, the internal refraction of light within a translucent material. "This was the technology that brought Gollum to life," says Greg Strause.

"It's not perfect," he continues, "it took weeks that to get the shaders right." The technique is so computationally intensive that single shots took 3 - 4 weeks to render on 100 computers with dual 3.2 GHz processors.

Instead of modeling the icebergs from scratch, the Strause brothers hired model maker Dan O'Quinn to sculpt the ice out of foam. These large models, up to eight feet long, were scanned with a Polhemus 3D laser and "got the modelers 80 percent of the way to a finished look," says Greg Strause. "The 3D modelers could then focus on just doing the details of the ice floes."

As the scientists begin to take another core sample the ice surface begins to crack and opens a huge chasm, created by digital set extensions, with chunks of ice tearing and falling away.

The entire Larsen B ice shelf falls into the ocean, an event made more startling by the fact that the real Larsen B ice shelf collapsed into the ocean a few weeks after Emmerich scripted the scene.

"The edit was locked," explains Greg Strause. "There was very specific cues to how the ice should fall, but it had to look random. We brought in character animators to take the ice pieces and hand animate it to look like a simulation. Sometimes there's something to be said for brute force keyframing."

The Hydraulx animators used Maya Fluids and particle systems to add snow layers to the scenes. And for the dramatic Antarctic skies the brothers photographed days' worth of sunrises and sunsets, which were later tiled together in a 3D panorama.
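How the tiled sky plates wrap around the CG scene isn't spelled out; one standard approach stores the stitched photography as an equirectangular (latitude-longitude) panorama and maps each view direction to a pixel of it, as in this small sketch.

```python
# Sketch: mapping a 3D view direction into a latitude-longitude sky
# panorama, a common way to wrap tiled sky photography around a scene.
# Assumption: the stitched plates live in one equirectangular image.
from math import atan2, asin, pi, sqrt

def latlong_uv(direction):
    x, y, z = direction
    n = sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    u = 0.5 + atan2(x, -z) / (2.0 * pi)  # longitude -> horizontal coord
    v = 0.5 + asin(y) / pi               # latitude  -> vertical coord
    return u, v

print(latlong_uv((0.0, 0.0, -1.0)))  # straight ahead -> image center
print(latlong_uv((0.0, 1.0, 0.0)))   # straight up -> top row (zenith)
```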

For compositing, Hydraulx relied on their Discreet Inferno and Flame systems. When you have shots with 120 layers of compositing, you really need a high-end system, says Colin Strause. Emmerich really liked the fact, Strause continues, that he could come in and see the composited shots at full resolution and make changes in realtime.

Jack Hall tries to warn the powers that be of the dire implications of his discoveries. However Hall miscalculates the timing of the climate shift even as dramatic weather changes are ominously occurring all around him.

Ian Holm plays Professor Rapson, a colleague of Jack Hall who lives and works in a remote northern weather outpost in Scotland. Rapson warns Hall that the changes are happening faster than anyone anticipated. With the storm closing in on the British Isles, the Royal Air Force is sent to airlift the royal family out of harm's way. The helicopters, however, head straight into a storm supercell, an extreme weather configuration that pulls super-cooled air down from the stratosphere, instantly freezing any object. As the choppers enter the cyclone, their fuel lines freeze and they crash. One of the surviving pilots who ventures outside is immediately frozen solid.

That scene was started at Digital Domain and handed off to The Orphanage. "We received assets from Digital Domain: Maya files of the camera blocking and the choppers, textures, on-set photography and match-move information," recalls Echegaray.

In the beginning, he says, The Orphanage was exploring different "looks." "How does snow look? How does falling snow look? How does it look with different haze levels? We had to envision this environment hit by a giant snowstorm and make it look like it could be possible."

Continues Echegaray, "We used their models and the positions of the cameras as a jumping-off point and built on top of that. We did the terrain, all the dynamics and all the lighting." Orphanage programmers wrote an application to translate the DD Maya files to fit into The Orphanage pipeline, which uses Discreet 3DS Max, AfterBurn and Splutterfish's Brazil rendering system.

When the chopper moves into the eye of the storm, you see the big wall of the tornado around it, describes The Orphanage's visual effects supervisor Remo Balcells. That was done in what is called "two-and-a-half D," which combines enhanced, volumetrically rendered images projected back onto 3D geometry. Balcells insisted that the clouds appear internally churned by the strong winds, and he used 3D fluid dynamics to get that extra punch of realism. It's important, he says, that the clouds "not have the shape of random noise. To make it convincing you have to sculpt them like in nature."
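The Orphanage's solver isn't documented here, but the distinction Balcells draws - churning, wind-driven cloud versus static random noise - can be illustrated by advecting the noise lookup through a velocity field, as in this toy sketch (the vortex field and value noise are stand-ins, not the production fluid simulation).

```python
# Toy sketch of "sculpted", wind-driven cloud noise: instead of sampling
# static fractal noise, trace the sample point backward through a velocity
# field so features visibly churn over time.
import math, random

random.seed(7)
GRID = [[random.random() for _ in range(65)] for _ in range(65)]

def value_noise(x, y):
    """Bilinear value noise on a small random grid (toy stand-in)."""
    xf, yf = math.floor(x), math.floor(y)
    fx, fy = x - xf, y - yf
    xi, yi = int(xf) % 64, int(yf) % 64
    a = GRID[yi][xi] * (1 - fx) + GRID[yi][xi + 1] * fx
    b = GRID[yi + 1][xi] * (1 - fx) + GRID[yi + 1][xi + 1] * fx
    return a * (1 - fy) + b * fy

def swirl_velocity(x, y, cx=32.0, cy=32.0, strength=2.0):
    """Toy vortex around (cx, cy), standing in for a fluid solver."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy) + 1e-6
    return (-dy / r * strength, dx / r * strength)

def cloud_density(x, y, t, steps=8):
    # Advect the sample point backward along the wind for time t, so the
    # noise pattern rotates around the eye of the storm instead of
    # sitting still -- the difference between random noise and cloud.
    dt = t / steps
    for _ in range(steps):
        vx, vy = swirl_velocity(x, y)
        x, y = x - vx * dt, y - vy * dt
    return value_noise(x, y)

print(cloud_density(40.0, 32.0, t=0.0), cloud_density(40.0, 32.0, t=3.0))
```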

The Orphanage started using the Brazil r/s render engine on Hellboy and now uses it on The Day After Tomorrow. Brazil r/s has lighting features such as global illumination and photon mapping, raytracing and a snappy shader library. Brazil r/s can also output separate passes for each attribute of the frame's content: a diffusion pass, a reflection pass and shadow passes. This allows the team to fine-tune the images inside their main compositing tool, Adobe After Effects (www.adobe.com).
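The payoff of separate passes is that each can be graded on its own before the merge. The recombination below is a generic sketch using an assumed multiplicative/additive split, not Brazil r/s's or The Orphanage's documented formula.

```python
# Sketch of why separate render passes matter: each pass can be adjusted
# independently, then merged, without re-rendering anything. The exact
# pass math varies per facility; this split is an assumption.
import numpy as np

h, w = 4, 4  # stand-in frame
diffuse = np.full((h, w, 3), 0.6)     # surface color under full light
shadow = np.full((h, w, 3), 0.8)      # 1.0 = unshadowed, 0.0 = occluded
reflection = np.full((h, w, 3), 0.1)  # mirror-like contribution

def comp(diffuse, shadow, reflection, refl_gain=1.0):
    # Shadows attenuate the diffuse pass; reflections add on top.
    # Tuning refl_gain re-balances the look in the composite.
    return diffuse * shadow + reflection * refl_gain

beauty = comp(diffuse, shadow, reflection)
punchier = comp(diffuse, shadow, reflection, refl_gain=1.5)
print(beauty[0, 0], punchier[0, 0])
```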

"We've been compositing in After Effects for a long time," says The Orphanage founder Stu Maschwitz. "Which is controversial in the high end visual effects world because it does not have a 32-bit floating point color space. We like AE for a lot of features. It has 3D. It has expressions. So the challenge was to find a smart way to work with Cineon. Elin redistributes the Cineon log data into the After Effects 16-bit buffer and then it provides a look up table for viewing that."

Elin works with Industrial Light & Magic's new high-dynamic-range file format, OpenEXR (www.openexr.com). "So if during compositing you need to make a shot one stop brighter, you can."

In The Day After Tomorrow, continues Maschwitz, they put the helicopters in front of a bright sky. "You want that motion-blur and defocusing effect when the helicopter blades spin fast. It looks a lot better when you are working in Elin."
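The numbers behind that workflow are public: Kodak's Cineon spec puts the white point at code value 685 and the black point at 95, with 0.002 printing density per code value and a 0.6 display gamma. This sketch uses those published defaults (not necessarily Elin's internals) and shows why "one stop brighter" is just a 2x multiply once the data is linear.

```python
# Sketch of the standard Kodak Cineon log <-> linear conversion (10-bit
# code values, 685 white point, 95 black point, 0.6 display gamma).
# These are the published defaults, not necessarily Elin's constants.
import math

BLACK, WHITE, DENSITY_PER_CV, GAMMA = 95, 685, 0.002, 0.6
_black_lin = 10.0 ** ((BLACK - WHITE) * DENSITY_PER_CV / GAMMA)

def cineon_to_linear(cv):
    lin = 10.0 ** ((cv - WHITE) * DENSITY_PER_CV / GAMMA)
    return (lin - _black_lin) / (1.0 - _black_lin)  # black point -> 0.0

def linear_to_cineon(lin):
    lin = lin * (1.0 - _black_lin) + _black_lin
    return WHITE + GAMMA * math.log10(lin) / DENSITY_PER_CV

# "One stop brighter" is a 2x multiply once the data is linear; in log
# space that works out to a shift of roughly 90 code values.
cv = 470  # an arbitrary mid-tone code value
brighter = linear_to_cineon(2.0 * cineon_to_linear(cv))
print(cv, '->', round(brighter, 1))
```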

"We texture the 3D geometry," he explains, "and render that with radiosity to one frame of the shot. That gets it 85 percent realistic looking. Then that goes to the matte painter who adds all the nuances that are missing. So now you have this giant matte painting. We project that on to the same geometry and then render it again with a camera move."

While Hall tries to make sense of the bizarre weather patterns, his son finally arrives in the doomed New York City. Three days of heavy rain in Manhattan is modest foreshadowing of the impending storm tide that will soon wash down its streets with 90-foot waves. While many of these shots were started at Digital Domain, few of them survived intact in the final film, and some were salvaged only for their building geometry.

"I called Tweak films," recalls Goulekas, "and said can you guys take on five really hard aerial water shots and finish them in three months?" Another company wouldn't touch it, she said, "but these eight guys at Tweak said, 'Yeah, I think we can do that.'"

So Tweak Films inherited five of the major storm tide shots. "We did all the aerial water shots," say Tweak visual effects supervisors Chris Horvath and Jim Hourihan.

"We got a ton of assets from Digital Domain including the Lidar models of New York City," describes Horvath. "This was a massive 400 gigs of material that we had to write software just to sift through the data to see what we needed. The Lidar also contained projected textures as well. We had to up-rez some of the low-rez building textures to match the camera moves we were doing. We finally got it down to around eight gigs of data for our shots," recalls Hourihan.

Shot SL040, the wide shot of lower Manhattan and the submerging Statue of Liberty, combines Digital Domain elements and Tweak elements in roughly equal measure.

"That shot is composed of textures, the statue, the body of water itself, the splash around the statue, the sky and the lightning, and the city," describes Horvath. The statue is Digital Domain's, the torch element and lightning is all Tweak. They hired Peter Lloyd, a veteran matte painter, to do all the sky and lightning elements.

"The body of water in that shot is Digital Domain in the foreground and Tweak water in the mid and background. We matched the water to add more foam and white caps," explains Horvath. "We made the sea a lot more stormy, we added atmospheric effects and changed the splash elements somewhat," says Hourihan. "We had to match move ocean, and I don?t know if anyone?s ever done that before, but it's really hard."

The Digital Domain splash had a nice shape but was judged too thin and vaporous. Hourihan created a more robust 3D splash that matched and fleshed out the DD splash. Tweak compositor Mike Root hand-painted and rotoscoped that shot, pulling the Tweak splash in, out and through the Digital Domain splash.

"We were given a panoramic image of the city on a card," describes Horvath. "That was repainted to have more detail and aerial depth cuing. We separated the single card and projected it on to multiple cards at different depths. So when a camera move was applied they would slide with parallax ever so slightly against each other."

In shot ST020, water barrels down multiple avenues like surging rivers rushing down canals, tossing cars, bouncing off of buildings and meeting at street intersections. In that shot the buildings are by Digital Domain, the surging water by Tweak and the lightning was done by Hydraulx.

In another shot, ST006, the first very, very wide shot of the storm surge overtaking the city, 360 buildings have individual splash simulations, continues Horvath. Each is a mammoth simulation of water splashing against a single building. The splashes are governed by different physics than the solid body of water, so they were rendered separately and composited together.
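That "different physics" split can be pictured with a toy model: the bulk surge is a fluid-surface simulation, while splash water behaves more or less ballistically. The sketch below, with entirely illustrative parameters, launches splash particles at an impact point and integrates them under gravity, mirroring the render-separately-then-composite approach.

```python
# Minimal stand-in for splash physics: ballistic particles launched
# where the surge hits a wall, integrated under gravity, then rendered
# and composited over the separately simulated water surface.
import random

random.seed(1)
G = -9.8  # gravity, m/s^2

def emit_splash(hit_x, hit_y, n=5, speed=12.0):
    return [{'x': hit_x, 'y': hit_y,
             'vx': random.uniform(-speed, speed),
             'vy': random.uniform(0.5 * speed, speed)} for _ in range(n)]

def step(particles, dt=1.0 / 24.0):  # one film frame
    for p in particles:
        p['vy'] += G * dt
        p['x'] += p['vx'] * dt
        p['y'] += p['vy'] * dt
    # Droplets falling back below the water line rejoin the fluid pass.
    return [p for p in particles if p['y'] > 0.0]

splash = emit_splash(0.0, 2.0)
for _ in range(24):  # one second of a building-impact splash
    splash = step(splash)
print(len(splash), 'droplets still airborne after 1s')
```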

Emmerich wanted to do things in the simulation that accentuated how the water behaved. So they adjusted the timing of two of the shots where the water goes around the library.

Mike Root is Tweak's lead Shake compositor and, according to Horvath, the only reason the shots look good at all. "Most of them have weird edges, multiple takes and dozens of different elements," he muses.

The Statue of Liberty shot, he says, was very difficult. "There were hundreds of layers. Mike doesn't like doing pre-comps, but prefers to do everything in a single monolithic procedural script. The composite script was so complicated we joked that no one other than Mike could understand it without experiencing brain damage."

Tweak handed off shots ST020 and ST150 to Hydraulx, who added interactive light from lightning bolts off camera, recalls Goulekas. "They sent us the final composites, clear city shots, with the rain as a separate element," describes Colin Strause. Hydraulx rotoscoped parts of the cityscape, isolating windows and sides of buildings, so that the lightning flashes would illuminate the skyscrapers in a realistic, natural fashion.