Daniel Restuccio
Issue: June 1, 2006


CULVER CITY, CA - Academy Award nominee Richard Hoover was born in Walla Walla, WA, but grew up near Cambridge, MA. He attended the University of Oregon, where he majored in design with an emphasis in animation. "I first became interested in effects," says Hoover, "when a U of Oregon alumnus visited an art class I was taking. He showed his work, which was backlight animation, and that got me excited. I eventually got my first job working for him at Midocean Motion Pictures in Hollywood."

Some of his other tours of duty in the field of visual effects were with Robert Abel & Associates and Dream Quest Images, the company that later became Disney Studios' The Secret Lab.  He was visual effects supervisor on Reign of Fire, Unbreakable, Inspector Gadget, Armageddon, Darkness Falls and Seabiscuit.  He joined Sony Pictures Imageworks in 2002 and recently completed work on Superman Returns as senior visual effects supervisor, working with production effects supervisor Mark Stetson and director Bryan Singer.

Post: Overall, approximately how many shots did Imageworks do, what was the schedule, and when did you start?

We created 302 shots, including 17 shared shots. We started in October 2004 with R&D of the Superman character. We showed test shots by January 2005. First unit completed principal photography in September 2005, as I recall, and we received our first shots at the end of that month. We are delivering our final shots in two days (May 2006).

Post: There's a heroic sequence in the movie where Superman saves a jet from crashing. It's quite spectacular, so how did you do it?

Mark Stetson, the visual effects supervisor for the entire production, whom I have known for 20 years, created pre-vis shots for the whole sequence with Pixel Liberation Front.

We followed their design religiously, to the point of importing the pre-vis camera and rendering the first pass of the shot with that camera. PLF exported Maya files, and we would replace their models. In some cases we would provide them models for their pre-vis.

I would say 90 percent of the sequence is digital. There are several greenscreen Superman elements. Dodger Stadium was used for the background; otherwise, all other elements are CG. We made a photo double of Superman capable of speaking dialog, but he actually never had to speak. There are, however, several close-up fully digital Superman shots in the sequence, as well as images made of part CG and part greenscreen.

Post: What are the challenges to making that look realistic?

The biggest challenge [was] making the likeness and shape be honest to what the character is. As artists, we all want to play God and correct the imperfections that make us human. We needed to resist that. When you go into a close-up of a real character, you see the differences. We shot greenscreen for everything. A lot of the greenscreen had Brandon hanging horizontally, and his skin was hanging down, subject to gravity. The digital double was created vertically from a scan of Brandon's body. Gravity mattered. We did make corrections shot by shot and blended the shapes in the face to make the expression Bryan wanted.

The aircraft were all modeled to the manufacturers' plans except the shuttle, which was a production design. However, we textured the shuttle with known shuttle materials. The fire and smoke were simulated in software and rendered in RenderMan, and the embers were created in Houdini.

The stadium was photographed with the Genesis camera, but many of the plates were replaced with texture maps made from those plates and projected onto a lidar model of the stadium. This allowed us to move the camera to match the pre-vis instead of the other way around.

We filled the stadium with a CG-animated crowd. We had about 70 different animations of crowd behaviors, each several hundred frames long. Software, with artist direction, could set their orientation and select similar performances to give a lifelike feel to their overall movement.
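The clip-library crowd Hoover describes can be sketched roughly as follows. This is an illustrative outline of the general technique, not Imageworks' actual system; the clip names, counts, and frame lengths are assumptions:

```python
import random

# Hypothetical clip-library crowd: each agent plays one of ~70
# pre-animated behavior clips, offset in time so that neighbors
# playing the same clip never move in lockstep.
CLIP_LIBRARY = [f"behavior_{i:02d}" for i in range(70)]  # ~70 behaviors
CLIP_LENGTH = 300  # several hundred frames per clip (illustrative)

def assign_crowd(num_agents, seed=0):
    """Pick a behavior clip and a random start offset for every agent."""
    rng = random.Random(seed)
    return [
        {
            "clip": rng.choice(CLIP_LIBRARY),
            "offset": rng.randrange(CLIP_LENGTH),  # de-sync neighbors
        }
        for _ in range(num_agents)
    ]

def frame_for(agent, shot_frame):
    """Which frame of its looping clip an agent shows at a shot frame."""
    return (shot_frame + agent["offset"]) % CLIP_LENGTH

crowd = assign_crowd(50000)  # fill the stadium
```

Artist direction would then come in as constraints on top of this random assignment, for example forcing a section of seats toward one clip family or orienting agents toward the action.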

The clouds were volumetric particle renders. We created layouts of cloud shapes to give the shots speed and strong compositions, then rendered from foreground to background with the clarity required for realism.

Post: What was it like working with the Genesis footage?

The Genesis camera creates a very sharp picture with a great range of exposure, about eight stops. I found the images very clear and crisp compared to film. It is just a different look. For example, I shot some of the high-speed elements for the film at a 15-degree shutter angle to simulate the Genesis look on film. Although the grain is different, the result went together well because of the sharper nature of the image.

The chips do have unique anomalies that make each camera different. This would show up only under microscopic analysis. We went through enormous pixel scrutiny. Eventually a software program was written to address this issue. 

Post: How did you deal with the color space issues?

Each shot we worked on was converted from Sony HDCAM SR 4:4:4 tape to the DPX file format in Panavision's Panalog format. From there, we converted to linear or log files as necessary and worked as we would on a film project. We reversed our work back out to Panalog and delivered the final shots in DPX format. All in all, this worked with no problems. The most difficult part of the process was that we never filmed out our work: we had to judge our shots based upon the viewing device and the LUT applied at the time, rather than against a constant like a film print, which was challenging at times.
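The log-to-linear round trip in that pipeline can be illustrated with a generic 10-bit log curve. To be clear, the actual Panalog transfer function is Panavision's own; the constants below are Cineon-style placeholders chosen only to show the shape of the conversion:

```python
import math

# Illustrative 10-bit log encode/decode pair (NOT the real Panalog
# math): DPX log code values -> scene linear for compositing, then
# back to log for delivery.  685 is reference white; 0.6 is a
# Cineon-style film gamma.  Constants are assumptions.
WHITE, GAMMA = 685.0, 0.6

def log_to_linear(code):
    """10-bit log code value -> scene-linear light (1.0 at ref white)."""
    return 10.0 ** ((code - WHITE) * 0.002 / GAMMA)

def linear_to_log(lin):
    """Scene-linear light -> 10-bit log code, clamped to legal range."""
    code = WHITE + GAMMA * math.log10(lin) / 0.002
    return max(0.0, min(1023.0, code))
```

The key property, which the pipeline above depends on, is that the conversion is lossless in principle: encode(decode(x)) returns x, so work done in linear can be reversed back out to log for delivery.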

Post: How about greenscreen footage?

I supervised most of our greenscreens. Most of our shots were of Brandon on wires, flying around the stage with a variety of rigs. The stunt guys rigged one of the stages at Fox Studios in Sydney with a ceiling-mounted trolley system powered by descender winches to puppeteer Superman's movements. The greenscreens pulled with very sharp edges compared to film. You would think something is wrong with the matte at first look, because there is no roll-off between the value of what you are shooting and the greenscreen color. It changes in one pixel. We used all the normal tools and tricks for a film greenscreen, and the shots came out very well.
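The sharp-edge behavior Hoover notes can be seen in even the simplest keyer. Below is a minimal sketch of the classic "green minus max(red, blue)" matte pull, not any particular production tool, with an illustrative threshold; on a grainless digital plate the spill term snaps from full screen value to zero across a single pixel, which is why the matte looks unnaturally crisp at first glance:

```python
def pull_matte(pixel, threshold=0.1):
    """Return matte alpha for one RGB pixel (floats in 0..1):
    1.0 = keep foreground, 0.0 = pure green screen."""
    r, g, b = pixel
    spill = g - max(r, b)  # how much "extra" green the pixel carries
    if spill <= 0.0:
        return 1.0          # no excess green: solid foreground
    alpha = 1.0 - spill / threshold
    return max(0.0, min(1.0, alpha))
```

On a film plate, grain and halation smear `spill` across several pixels at every edge, producing the gradual roll-off compositors expect; the Genesis plates skip that transition almost entirely.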

Post: I understand there are animated character shots that come to a full-frame close-up. What were the new techniques you applied since the days of Spider-Man 2?

There are several close-ups of Superman as a digital character throughout the film. We captured skin detail and lighting in the same manner as Spider-Man, with a few new embellishments. We used more cameras and tweaked our shaders and blending techniques quite a bit. We have really raised the bar of detail capture to an amazing level. Our only limitations were the camera proximity algorithm and the averaging of detail with distance in RenderMan.

Post: Did you work on any of the shots that were collaborative between various effects houses?

Yes, we worked with a number of other vendors. The collaboration on this film was amazing. I give Mark Stetson the credit for his choices and his ability to keep us all on the same page. I think the effort everyone had to make to get the film working in the new environment of the Genesis camera made everyone talk to each other in a positive way. We were all in this together.

There were the usual file format/color space issues, but otherwise there were no problems. Imageworks was mostly responsible for the CG Superman, so we provided character renders and lighting passes to other vendors. They would send us their cameras and background renders, and we would light Superman and send back the elements. We also provided a number of CG capes for greenscreen elements of Superman where the cape did not work on stage or got in the way of the wires holding him up.