By Daniel Restuccio
Issue: July 1, 2004


CULVER CITY - Sony Pictures Imageworks gets to dazzle audiences for a second time with new-and-improved visual effects in Spider-Man 2. The challenge of topping the first film's $820 million worldwide box office is enormous and expectations are high. For the effects team, headed by visual effects designer John Dykstra, the goal is not to pack the movie with more gee-whiz effects but to refine the technique of delivering images that complement the depth and complexity of the story.

Two years have passed since the first Spider-Man movie. One of the major conceits of the first film was being able to fly with Spider-Man through the buildings and skyscrapers of New York. The richness and detail of those buildings, says Dykstra, play a big part in creating that immersive feeling of being there and flying with the web slinger, and that's been taken to a whole new level this time around.

"We improved upon the building texture-mapping technique in the second film," says Dykstra. Imageworks' artists "figured out new ways to stitch the tiles together in a way that made them much more flexible. You are basically shooting a building at one time of day, but that building has to be represented in the film at all times of day and into the night. So the texture maps have to be shot in a certain way to allow you the flexibility to change the apparent lighting on those textures and make it look like the building is real."

"We got a lot more geometric detail with the buildings, which allowed us to get a lot closer," says visual effects supervisor Scott Stokdyk. "We combined that with ambient occlusion lighting in both RenderMan and Mental Ray to get a global illumination look and feel."

They also got 18 new buildings for the New York City digital set, reports Stokdyk. "Our best quality foreground buildings from the last show were considered medium quality for use in Spider-Man 2, and were put behind our newer buildings."

Texture maps for buildings were dramatically improved for Spider-Man 2, says John Dykstra.
Imageworks designers also improved the look of all their digital set interiors, as well as practical set extensions for both day and night, by using high dynamic range imaging techniques. Imageworks uses Alias Maya and Side Effects Houdini as the backbone of its CG pipeline and extends their functionality with in-house toolsets and proprietary compositing software.
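The core of high dynamic range imaging can be sketched simply. This is a simplified weighted average with made-up numbers, not the studio's pipeline: bracketed exposures of the same scene are merged into one radiance map so that dim interiors and bright windows both keep detail.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """images: float arrays in [0, 1]; exposure_times: seconds.
    Merges bracketed exposures into an estimate of scene radiance."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # trust mid-range pixels most; clipped shadows/highlights least
        w = 1.0 - np.abs(img - 0.5) * 2.0
        num += w * (img / t)   # each exposure's estimate of radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Two exposures of a patch of true radiance 0.5: one at 1s, one at 0.5s.
radiance = merge_exposures([np.full((2, 2), 0.5), np.full((2, 2), 0.25)],
                           [1.0, 0.5])
```

The resulting radiance map, unlike any single photograph, can drive lighting on digital sets across the full brightness range of the location.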


As in the first film, when Spider-Man swings from building to building he's CG, not a real stunt man. "We did reuse the Spider-Man model," says Stokdyk referring to the digital double created to blend seamlessly with the live-action Spider-Man. "We had one artist on the first show spend eight or nine months with the videotape of the stunt guy, just lining our model up and matching it up, sculpting it, and molding it."

To add to the flexibility of the CG characters in this film, CG heads of actors Tobey Maguire (Spider-Man) and Alfred Molina (Spider-Man's nemesis Doctor Otto Octavius, a.k.a. Doc Ock) were built using data from the Light Stage device designed by Paul Debevec at the USC Institute for Creative Technologies.

More geometric detail, adds Scott Stokdyk, allowed for more close-ups.

Debevec says, "For Spider-Man 2, the actor is lit from numerous different directions, with light from a special apparatus, and filmed from several viewing angles, which creates a complete digital record of how their face transforms incident illumination into reflected light. All of the face's coloration, shininess, self-shadowing and subsurface translucence is captured in this dataset. The dataset is a bit like a hologram, which records how something looks from many angles - our dataset records how a face looks under any kind of lighting."

By contrast, for The Matrix: Reloaded the universal-capture system used to create virtual heads of Keanu Reeves, Laurence Fishburne and Hugo Weaving works differently, he says. In that case, "the actor is filmed from several directions under one relatively diffuse lighting condition, where they use crossed polarizers to separate out the shininess of the face. The diffuse lighting yields a diffuse texture map for the face, and the way that the face reflects light is then simulated according to analytical models of surface reflectance and subsurface scattering."

The Light Stage/Sony system works differently, explains Debevec. "To create a rendering of the actor in a real or virtual set, the computer program can simply recombine the images of the face to simulate its appearance under any combination of lighting directions - the light from a real set, or the light envisioned by a lighting designer, or a combination of both.
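Because light transport is linear, the recombination Debevec describes reduces to a weighted sum: each basis image records the face lit from one Light Stage direction, and any new lighting environment is a linear combination of those images. A minimal sketch, with invented shapes and weights:

```python
import numpy as np

def relight(basis_images: np.ndarray, light_weights: np.ndarray) -> np.ndarray:
    """basis_images: (n_lights, H, W, 3) array, one image per Light
    Stage direction. light_weights: (n_lights,) intensities sampled
    from the target lighting environment (real set, virtual set, or
    a designer's mix of both)."""
    # Weighted sum over the per-direction basis images.
    return np.tensordot(light_weights, basis_images, axes=1)

# Toy example: 4 light directions, a 2x2 "face".
basis = np.ones((4, 2, 2, 3))
weights = np.array([0.5, 0.25, 0.25, 0.0])
out = relight(basis, weights)
```

This is what makes the dataset "a bit like a hologram": no reflectance model is fitted, the measured images themselves carry the face's coloration, shininess, self-shadowing and translucence into the new lighting.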

"Algorithms implemented by Mark Sagar and his team at Sony Imageworks made it possible to animate the shape and viewpoint of the face based on several expressions of the actors, creating fully animated photoreal faces," he continues. "A particular advance was a novel rendering pipeline created at Sony Imageworks that allowed the digital lighting team to light the reflectance field faces and traditional CG elements using the same sets of traditional lighting controls."

"We get all the textural information from the light stage," adds Stokdyk, "but the facial expressions we get from motion capture." Animators were able, he says, to combine bits and pieces of performances chosen by director Sam Raimi and animation director Anthony La Molinara, and customize them to get a specific expression that matched the action and could work under any lighting condition.

Whenever possible for reference, adds Dykstra, "we have the actual actor, preferably in the actual lighting that was used, actually performing the scene."

In Spider-Man 2 Dr. Otto Octavius is a brilliant scientist trying to harness fusion energy. During a demonstration where he manipulates the raw plasmas with mechanical arms, an accident occurs. The robotic appendages fuse to his body and he transforms into the deranged, multi-tentacled Doc Ock, who becomes obsessed with destroying Spider-Man. Animating those arms became a personal challenge for Dykstra.

He says, "We started out with Doc Ock as a concept" that evolved to over a year in tight collaboration with animation director Anthony LaMolinara, costume designer James Acheson and production designer Neil Spisak. They were aided by the concept drawings of James Carson, Alex Tavoularis and Paul Catling, with Raimi "orchestrating the entire deal."

Doc Ock doesn't want to move like a spider, explains Dykstra, and he doesn't want to move like a jointed quadruped. An octopus looks great under water, but put it on dry land and it loses its strength and power. The trick, he notes, was coming up with a design for the tentacles that gave them personality... a sense of them being an organic extension of Doc Ock's physiognomy, but still able to maintain some sense of dangerous mechanics.

Artists gave Doc Ock's tentacles personality and a sense that they were an organic extension of the character's physical makeup.
"The determination was made that we would put practical tentacles or puppets in the scene to interact with the actors as much as possible," Dykstra continues. He says Raimi loved having the puppets in the scene because he could be spontaneous with his direction. In full rig, 16 puppeteers were required to move in concert with Molina. Even scenes where the puppets would ultimately be replaced by CG, they were still used for realistic action and lighting reference.

For the CG versions of the tentacles, Imageworks broke down the practical tentacles built by Edge FX piece by piece and cyber-scanned them, then textured the models and set them up for the animators.

"For our CG tentacles," explains Stokdyk, "we needed the ability to stretch and un-spool out of Doc Ock's back so we setup slinky-like squash and stretch controls. That helped a lot on impact when the tentacles hit a building where we could put a little shock wave, a little ripple, through it."


Visual effects digital content creation would be very difficult to accomplish without an ultra-sophisticated computer infrastructure to support it.

"We are a 24/7 operation," says Alberto Velez, executive director of systems engineering at Sony Pictures Imageworks. "A challenge we face is increasing the technical infrastructure to handle the complexity of digital shots, which on this project we were always moving toward more dynamic photorealism. This includes providing additional processing power, network bandwidth and high performance and highly available storage."

Two of the biggest changes that Velez has managed over the past few years are the migration from SGI Irix systems to higher performance commodity workstations and render servers and the transition to the Linux OS, both on the front and back end of the facility. In an ideal world, once all the applications Imageworks uses are ported to the open-source operating system they will be "Linux on all fronts."

"At the beginning ofSpider-Man 2," Velez recalls, "we were in transition to a Linux-based renderfarm. Some of the creative leads were a bit concerned about the transition, but once they realized its benefits they welcomed the significant performance gains."

Another key technology deployed at Imageworks for Spider-Man 2 was an improved color-correct viewing process using in-house proprietary tools and an NEC DLP projector. This setup let production leads review digital dailies in a correct color space, resulting in quicker feedback and faster turnaround on shots for their artists.

Imageworks' technical infrastructure scales up to bring as many workstations and servers online as needed during a project, sometimes numbering in the hundreds during crunch time. "In the last three months on Spider-Man 2 we brought on a lot more machines, particularly for the renderfarm. Since the first Spider-Man movie," continues Velez, "the renderfarm evolved from a thousand 1GHz processors to roughly twice that many with 3GHz processors."

Imageworks uses a mixed array of IBM, Hewlett-Packard and Dell workstations and servers in single- and dual-processor 3GHz configurations. IBM and HP workstations are used interactively and include Nvidia Quadro FX graphics cards and an average of 2GB of memory. Dual-processor IBM and Dell servers comprise the renderfarm. All systems are networked via a Gigabit Ethernet backbone LAN.

Visual effects productions store all their files on Network Appliance network-attached storage (NAS) filers with a current capacity of more than 50 Terabytes to handle all in-house projects. The assets of Spider-Man 2 alone took up over 10 Terabytes. Imageworks has developed its own Oracle-based asset management system.

Not resting on their laurels, Velez says his department is investigating the eventual migration from 32-bit to 64-bit systems. "Workstations and servers in a 64-bit architecture," he says, "offer inherently more precision, higher memory addressability and enable rendering of large, complex datasets at an increased level of detail. They also give us the ability to simulate physical phenomena much more realistically."


Santa Monica- and San Francisco-based Radium worked closely with Sony Imageworks on 52 effects shots for the film. The Radium team, led by VFX supervisors Scott Rader and Jonathan Keeton, as well as VFX producer Tom Ford, performed complicated wire and rig removal, CG and compositing work for various shots and sequences. The wire and rig removal involved re-creating buildings and removing the cranes and apparatus that enabled the characters to fly, as well as CG geometry and projection mapping of high-res images. They used Discreet Inferno, Combustion, After Effects, Maya and Boujou.

"What was most challenging from an artistic standpoint was that each sequence required different looks," explains Rader. "Each round of shots entailed a fresh start, and we completed a wide array of effects work. We approached each shot differently, even though there were eight or 10 shots per sequence."

"Another interesting aspect to our work was that there was a high level of protection with the film," adds Ford. "We were provided with single frame references, but were never allowed to see the context of our shots. It presented an exciting challenge for us to create effects for certain shots and sequences without seeing how our shots fit together in the final film."