SIGGRAPH: Green Steve And Wondermoss

Posted By David Blumenfeld on August 07, 2012 06:26 am | Permalink
It seems like Siggraph 2010 was just here, and now I'm back two years later in my hometown for another round. While I was unable to attend yesterday, I showed up early to get the party started Monday morning. Attendance seemed to be down quite a bit from the last time I was here, though the three sessions I went to had long lines. I'm sure tomorrow will draw a larger turnout as the expo floor will be open.

After getting the lay of the land once more and doing some back and forth meandering to get registered, I was off to the Keynote speech in the West Hall. After a nice, warm, down-to-earth introduction by the conference chair Rebecca Strzelec, awards were presented to various researchers by the ACM President and CEO. This was followed by a comical audience participation exercise in self-help by author, games developer, and futurist Jane McGonigal.

From here, it was time to head on over to the production session for "Assembling Marvel's The Avengers" in the South Hall. Jeff White, VFX Supervisor for ILM, started off the talk, presenting a fantastic array of work on this effects-heavy film. Topics included the creation of digital doubles, the Leviathan creature, building destruction, new suit-damage and transformation techniques for Iron Man, the digital recreation of New York, and an in-depth look at the character and look development for the Hulk. One of the more interesting things I found during this talk dealt with their HDR acquisition for New York.

While they shoot their spheres much the same way I do (they use a Canon 1D Mark III, while I still use a Mark II), what impressed me was the lengths they went to in order to capture the entire area of the city their characters would be moving through. They sent a team out to shoot environments this way every few hundred feet down city blocks, as well as up on cranes and on rooftops where possible. If I recall correctly, they shot something on the order of over two thousand environment balls. Additionally, they acquired LIDAR scan data of the buildings, and then, using the GPS coordinates of the HDR images, were able to pinpoint the exact location of each environment sphere and project those photos back onto the building geometry. Combined with scripted vehicle and prop placement tools (and hundreds of models for that purpose), along with clever building window reflection generation and office interior replacement (using ILM's own offices as a substitute), they were able to create not only a highly believable 3D city they could traverse, but also enough image-based lighting spheres to provide a relatively complete HDR lighting scheme for everywhere their characters needed to move through the city. While this was of course augmented with traditional lighting, the sheer scope of the acquisition was very impressive. I would've liked to find out how they rectified the fact that all these HDRs were necessarily shot at different times and under different environmental lighting conditions, but I imagine it was simply a job for an artist to go in and color correct the stitched maps to match more closely and be done with it. Impressive nonetheless.
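Conceptually, the per-shot lookup becomes simple once everything is georeferenced. Here's a tiny sketch (entirely my own speculation, not ILM's pipeline, with made-up filenames) of picking the nearest captured sphere for a character's position, assuming the GPS capture points have already been converted into scene units:

# Hypothetical sketch (not ILM's actual pipeline): pick the HDR environment
# sphere captured closest to a character's current position, assuming the
# GPS-tagged capture points have been converted into scene/world units.
hdr_spheres = [
    {"file": "block12_street_level.hdr", "pos": (120.0, 2.0, -430.0)},
    {"file": "block12_rooftop.hdr",      "pos": (118.0, 95.0, -425.0)},
    {"file": "block13_crane.hdr",        "pos": (150.0, 40.0, -510.0)},
    # ...imagine a couple thousand of these
]

def nearest_environment(char_pos, spheres):
    """Return the sphere whose capture position is closest to char_pos."""
    def dist_sq(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(spheres, key=lambda s: dist_sq(s["pos"], char_pos))

print(nearest_environment((140.0, 5.0, -480.0), hdr_spheres)["file"])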

One humorous aspect of the Hulk portion focused on the on-set reference bodybuilders. The primary man, nicknamed Green Steve since he was decked out shirtless in green body paint, really got into the role, acting out shots to the best of his ability. While the outtakes were certainly humorous, the reference he and the other stand-in provided was genuinely useful, as it was clearly visible in some of the animation roughs as well as during lookdev and lighting tests. They also talked about the various facial, hand, and dental casts they took of the actor Mark Ruffalo, as well as a full lightstage capture session.

Next up was Guy Williams, VFX Supervisor for Weta. He spoke briefly about their HDR and LIDAR acquisition, mentioning that they also took multiple-exposure photographs of individual stage lights to use as texture-mapped area lights in their image-based lighting/spherical harmonics setup, providing the same high-dynamic-range detail for reflectors and such that environmental HDRs give. Of added interest to me was his mention of capturing their HDRs from six positions; I typically shoot three (using a Sigma 8mm).
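For anyone unfamiliar with the capture side, merging a bracketed set of exposures into a single radiance map is the standard weighted-average trick; the sketch below is just that textbook merge in a few lines, nothing specific to Weta's or my own setup (camera response curves, alignment, and ghost removal all omitted):

import numpy as np

# Minimal sketch of the standard bracketed-exposure merge into a single HDR
# radiance image: a weighted average of linearized exposures divided by their
# shutter times, trusting mid-tones and distrusting clipped/noisy extremes.
def merge_brackets(images, shutter_times):
    """images: list of float arrays in [0,1], already linearized."""
    acc = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for img, t in zip(images, shutter_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # "hat" weighting
        acc += w * (img / t)                # per-exposure radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

# Example: seven exposures, each one stop apart.
times = [1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15]
frames = [np.clip(np.random.rand(4, 4, 3), 0, 1) for _ in times]  # stand-in data
hdr = merge_brackets(frames, times)
print(hdr.shape)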

I used to capture these on a panoramic head kit to ensure the camera rotated about the lens's nodal point, but I eventually stopped using it: the extra inch or two of offset never gave me any trouble stitching, and without the rig there was significantly less to paint out. For a while I also tried shooting four positions (every 90 degrees instead of every 120), but my stitch results were no better than with three (in other words, I get quality stitches almost all the time anyway). I assume they shoot every 60 degrees simply to provide more data for stitching, but with seven stops that's 42 pictures to my 21, and more opportunity for the crew and other people who prefer to look directly into the camera and smile, rather than walk off set for a few seconds, to spoil the shots. I'm not sure what the specific advantage is, and I would've liked to find out, but time became a concern and I was unable to get in there and ask.

Finally, Aaron Gilman spoke about the more than 200 shots his 30 animators tackled, complete with personally filmed reference of himself and his team; having an in-house motion capture stage is nice for this purpose as well. In all, both studios did a truly remarkable job on the incredibly complex shots they were tasked with, and my kudos to all the artists who surely put in some long hours to achieve such high-quality results.

The final presentation I attended today was for Pixar's Brave. They touched on the art department's visual development of sets, characters, and props; the development of facial performance, posture, mannerisms, and style for a number of the movie's cast; and the cloth, hair and fur, and simulation dynamics setups and their challenges.

Sets and environment modeling and lookdev were discussed, as were color and lighting, from both a technical and a creative/artistic point of view. The part I found of particular interest, however, was their custom development of the moss, lichen, grass, and undergrowth system. Rather than taking a guide curve/hair approach, a Paint Effects-style stroke/tube system, or a particle instancing method, what they came up with was an almost entirely render-time solution affectionately dubbed Wondermoss. Starting from an underlying surface, whether an uneven ground plane, a rock, or a tree trunk, the system quickly creates an offset upper bound using some simple trig (sine waves added together with offset functions), then builds a subdivided cubic volume encompassing a minimal area of interest via a raymarching algorithm. Within these small volumes, shading densities can be interpolated to essentially fake self-shadowing and color darkening, as well as to provide quick solves for pseudo ambient occlusion. Semi-random patterns are then applied along with predefined plant shapes, allowing this growth to render very quickly with little user input or tweaking required. While the complexity of the system and the additional artist input it requires didn't get much attention in the talk, the end results were nothing short of phenomenal, and this shading-based approach seems easily extensible to other types of detail fill in varying CG sets, not just plant life. I'm sure the fine folks at Pixar will get a great deal of mileage out of this development, and I for one was definitely impressed by it.
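To make the idea a little more concrete, here's a toy sketch of the flavor of the technique as I understood it from the talk: an upper bound built from summed sine waves and a short march through the resulting slab, with accumulated density driving a cheap darkening term. This is strictly my own back-of-the-napkin interpretation, not Pixar's actual shader.

import math

# Toy interpretation of a render-time "moss volume" (my own sketch, not
# Pixar's Wondermoss): the canopy is a summed-sine offset above the base
# surface, and a short vertical raymarch through that slab accumulates
# density, darkening toward the base to fake self-shadowing/occlusion.
def moss_height(x, z):
    """Offset of the moss canopy above the underlying surface."""
    return 0.5 + 0.2 * math.sin(3.1 * x) * math.sin(2.7 * z) \
               + 0.1 * math.sin(9.3 * x + 1.7) * math.sin(8.1 * z + 0.4)

def shade_moss(x, z, base_y=0.0, steps=16):
    """March vertically through the moss slab; return (density, darkening)."""
    top = base_y + moss_height(x, z)
    dy = (top - base_y) / steps
    density = 0.0
    for i in range(steps):
        y = base_y + (i + 0.5) * dy
        t = (y - base_y) / (top - base_y)    # 0 at the base, 1 at the canopy
        density += (1.0 - t) * dy            # denser toward the base
    darkening = 1.0 / (1.0 + 4.0 * density)  # cheap pseudo-occlusion term
    return density, darkening

print(shade_moss(0.3, 1.2))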

At the very end of the talk, during the question and answer session, a comment was made that they were taking advantage of Renderman's Deep Compositing feature. For those who recall my blog from two years ago, I gave this technique some extra coverage, as I was greatly intrigued by the benefits it provided Weta in their making of Avatar. At the time, they had written their own solution for this into Renderman and implemented the back end in Shake. It seems from today's comment that this functionality is now part of Renderman Studio 3 (I'm assuming). While we have RMS3 at our studio, I'm still using RMS2 at the moment, though shortly I will be making the switch, as well as using VRay to a greater degree. I'm definitely interested in researching not only how to access this secondary output of depth metadata, but also whether it can be read into both Nuke and Flame on the compositing side. If anyone reading this knows offhand, please drop me a line or a comment. When I get a chance to look this up (hopefully in the next few days), I'll add an update to that blog.
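My understanding is that newer Nuke releases ship deep nodes out of the box; if that holds up for whatever RMS3 writes, I'd expect the wiring to look something like this from the Script Editor (node classes DeepRead, DeepMerge, and DeepToImage; the file paths here are purely hypothetical):

# Assumption-laden sketch: wiring up Nuke's deep nodes via Python
# (hypothetical file paths; assumes a Nuke build with deep support).
import nuke

fg = nuke.nodes.DeepRead(file="/shots/av010/fg_element.deep.exr")
bg = nuke.nodes.DeepRead(file="/shots/av010/bg_env.deep.exr")

# The deep merge needs no holdout mattes; samples interleave by depth.
merged = nuke.nodes.DeepMerge(inputs=[fg, bg])

# Flatten the deep samples back to a regular 2D image for the rest of the comp.
flat = nuke.nodes.DeepToImage(inputs=[merged])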

For those who are unsure, Deep Compositing allows the artist to output a pass which stores floating point depth data as a per-pixel table of samples, meaning not only is the fidelity of the data far greater than that of a traditional z-depth render pass, but any given element can automatically be composited correctly without the need to create holdout mattes, and without the edge blending issues inherent in z-depth passes. Even when custom z-depth compositing nodes are created to handle transparency, motion blur, and anti-aliasing (as we did during my time on Beowulf), the ability to re-render an element and update it in the comp without creating holdouts is a huge timesaver, and one which I would love to take advantage of directly out of the box.
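As a rough illustration of why the holdouts disappear, here's a toy merge of two "deep pixels", each just a list of depth-sorted samples. This shows the general idea only, not Renderman's or Weta's actual sample format:

# Toy illustration of deep compositing for a single pixel: each element
# contributes a list of (depth, color, alpha) samples, and merging is just
# a depth sort followed by a front-to-back "over" -- no holdout mattes.
def merge_deep_pixel(*sample_lists):
    samples = sorted(s for lst in sample_lists for s in lst)  # sort by depth
    out_color, out_alpha = 0.0, 0.0
    for depth, color, alpha in samples:
        out_color += (1.0 - out_alpha) * color * alpha
        out_alpha += (1.0 - out_alpha) * alpha
    return out_color, out_alpha

# A re-rendered foreground element just replaces its own sample list; the
# background's samples never need to know about it.
fg = [(2.0, 0.8, 0.5), (2.5, 0.6, 0.5)]   # (depth, grey color, alpha)
bg = [(1.0, 0.2, 0.3), (10.0, 0.1, 1.0)]
print(merge_deep_pixel(fg, bg))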

Well that pretty much sums up my first day at the conference.  I ran into a few old friends and garnered a few tidbits worth looking into, so all in all, it was a nice start.  I'm looking forward to tomorrow, though I'll have to pick and choose as a number of talks I want to attend are happening concurrently, so we'll see which one wins out depending on my mood.  Here's hoping you enjoyed these random thoughts, and as always, drop me a line if you have any of your own to add!  Goodnight all.

David Blumenfeld is with Brickyard VFX. Check out their Website at: www.brickyardvfx.com