
SIGGRAPH: A Case Of The Mondays

Today was a full day at the show, with a number of interesting talks and sessions. I arrived nice and early and snagged the first spot right in front again. If I can keep this up, I definitely won’t have to worry about getting lost in the parking lot. This also came in handy around midday when my iPhone needed a bit of recharging.

After arriving, I headed over to the “All About Avatar” talk. Moderated by Jim Hillin, it featured Stephen Rosenbaum (on-set VFX Supervisor), Kevin Smith (lighting artist and shader writer from Weta), Antoine Bouthors (effects artist from Weta), Matthew Welford (from MPC), and Peter Hillman (compositing at Weta). The talk was interesting and full of well-presented visual examples and on-set photography, including the use of the virtual camera to create shot layout “templates” that were handed over to Weta for effects creation. A number of technical aspects were discussed, including the custom development of a stereo compositing pipeline in Shake, spherical harmonics in relation to pre-baked image-based environment lighting, and various interactive artist toolsets for creating volumetric effects such as clouds, atmospherics, and fluids. But perhaps the most interesting aspect of this talk for me was their development of what they have termed “deep compositing”.

While this was not a technical paper presentation on the technique (something I would be very interested to read), from what I gather, the methodology stores all depth data for a given sampled pixel along with the color and alpha, so that all recorded depths for that pixel (essentially a mathematical z-depth) can be accessed automatically at compositing time. To elaborate with a simple example (my apologies to the developers if I’m getting this wrong): a 1x1x1 unit opaque blue cube, rendered orthographically perpendicular to the camera view at a distance of 5 units from the camera, would, in addition to storing an rgba value of 0,0,1,1, also contain information about its “existence” between depths of 5 and 6 from the camera (and all points in between, since the cube is solid). This can be visualized as a Cartesian graph for purposes of illustration, but in reality it is stored as a deep opacity voxel field, similar to RenderMan’s deep shadow format. During the compositing phase, any other objects read into the composite tree would read their own depth data and place themselves in front of the cube if their depth was less than 5, behind the cube if their depth was greater than 6, or inside the cube if their depth was somewhere between 5 and 6. This method doesn’t suffer from the drawbacks of z-depth compositing, where edges are poorly sampled and aliased, and it doesn’t require separate z-depth channels to be written out. While z-depth solutions can be developed to handle transparency much better without loss of depth integrity (something I developed along with Matt Pleck back at Imageworks during the production of Beowulf), this solution is considerably more elegant and easier to work with when brought into a compositing package written to handle this type of data.
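To make the idea concrete, here’s a toy sketch of that per-pixel depth-sample logic. This is strictly my own illustration of the concept as I understood it from the talk, not Weta’s actual format or code; the function names and sample layout are invented for the example, and colors are assumed to be premultiplied by alpha.

```python
# Toy illustration of per-pixel "deep" samples: each sample stores a depth
# interval plus a premultiplied rgba value. A simplification of the idea
# described in the talk, not an actual deep-image implementation.

def merge_deep_pixels(*pixels):
    """Combine deep samples from several renders for one pixel.

    Each pixel is a list of (z_front, z_back, (r, g, b, a)) tuples.
    Sorting all samples by front depth lets every element land in the
    right place automatically, with no holdout mattes required.
    """
    samples = sorted((s for p in pixels for s in p), key=lambda s: s[0])
    # Flatten front-to-back with the standard "over" operation.
    r = g = b = a = 0.0
    for _, _, (sr, sg, sb, sa) in samples:
        r += sr * (1.0 - a)
        g += sg * (1.0 - a)
        b += sb * (1.0 - a)
        a += sa * (1.0 - a)
    return (r, g, b, a)

# The blue cube from the example above: occupies depths 5..6, fully opaque.
cube = [(5.0, 6.0, (0.0, 0.0, 1.0, 1.0))]
# A half-transparent red element in front of it, at depth 3 (premultiplied).
sprite = [(3.0, 3.0, (0.5, 0.0, 0.0, 0.5))]

print(merge_deep_pixels(cube, sprite))  # → (0.5, 0.0, 0.5, 1.0)
```

The nice property, as described in the talk, is that the merge is order-independent: each render contributes its own samples, and depth sorting at composite time resolves the layering.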
Furthermore, the alternate solution of generating holdout mattes becomes unnecessary, which not only provides considerable time savings but also allows greater flexibility in asset workflow and in render planning and management. This is definitely a cool development (and one which has also been developed at Animal Logic), and I personally plan on looking into something along these lines at Brickyard, though for us, it would require implementing a reader not only in Nuke, but in Flame as well.

After this talk, I had planned on attending the “Illustrating Using Photoshop CS5 New Painting Tools” talk. Unfortunately, I became strangely lost inside the Studio section, and by the time I found the workshop, it was already underway and overflowing into other areas of the room, so I opted out and instead grabbed a bite to eat. I also took this opportunity to charge my phone within the confines of my strategically parked car. Once back, I decided to stop in on the “Do-It-Yourself Time-Lapse Motion Control Systems” presentation. This turned out to be a presentation by xRes Studio, where a longtime friend and colleague, Eric Hanson, is currently a VFX Supervisor. He was one of the presenters, and it was nice to run into him and say hello; we last worked together back at Digital Domain on Stealth. What they presented was a methodology for helping photographic artists create low-cost, ad-hoc time-lapse motion control systems of varying complexity without the large cost of a turnkey solution. By utilizing an Arduino controller board, as well as some additional circuit boards they are developing, hobbyists and professionals alike can construct these camera rigs with inexpensive products and custom-written or open-source code, for hundreds of dollars at the high end, or even less depending on the choice of materials. I found this to be an interesting alternative to higher-priced motion control systems, as it may be something we want to look into for various practical shoots we do using the Canon 5D in our motion graphics department.
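The arithmetic behind planning such a rig is simple enough to sketch. The function and numbers below are my own illustration of the general idea, not anything from the xRes presentation:

```python
# Back-of-the-envelope planner for a time-lapse motion control move:
# given how long the real event lasts and how long the final clip should
# run, work out the shot interval and the motor advance per frame.
# All names and values here are illustrative.

def plan_move(event_seconds, playback_seconds, fps, travel_mm, steps_per_mm):
    """Return (frame count, seconds between exposures, motor steps per frame)."""
    frames = playback_seconds * fps            # frames needed in the final clip
    interval = event_seconds / frames          # seconds between exposures
    steps_per_frame = (travel_mm * steps_per_mm) / frames
    return frames, interval, steps_per_frame

# A 2-hour sunset played back as a 10-second clip at 24 fps, while
# sliding the camera 500 mm along a rail geared at 40 steps/mm.
frames, interval, steps = plan_move(2 * 3600, 10, 24, 500, 40)
print(frames, interval, steps)  # 240 frames, one exposure every 30 seconds
```

In an actual rig, numbers like these would drive the microcontroller’s shoot-move-shoot loop: fire the shutter, advance the stepper by the computed amount, wait out the interval, repeat.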

Immediately after this talk and in the same location, Dan Collins from Arizona State University gave an overview talk on “LIDAR Scanning For Visualization and Modeling”. While this was a broad presentation on the evolution of the technology, with some real-world examples of how it can be used, I found it interesting to actually see a selection of capture devices and the point cloud data they produce. While I have worked on features that used LIDAR data for large-scale geometry acquisition, such as The Day After Tomorrow, I was not directly involved with the capture process, and I have from time to time wondered whether LIDAR might be beneficial to our studio for quick set acquisition during a shoot. I will definitely be looking more into this in the near future.
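Raw scans like the ones shown are typically far denser than a set reconstruction needs, so a common first step is voxel-grid decimation: snap each point to a coarse grid cell and keep one representative per cell. Here is a minimal sketch of that standard technique; it is my own illustration, not tied to any particular scanner or toolset:

```python
# Minimal voxel-grid decimation of a point cloud: bucket each point into
# a coarse grid cell and keep the centroid of each occupied cell.
from collections import defaultdict

def voxel_downsample(points, cell_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cells[key].append((x, y, z))
    # Average each cell's points axis by axis, in a stable cell order.
    return [
        tuple(sum(axis) / len(pts) for axis in zip(*pts))
        for pts in (cells[k] for k in sorted(cells))
    ]

# Four noisy scan points that fall into two cells at 1-unit resolution.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0), (5.1, 5.2, 5.0)]
print(voxel_downsample(cloud, 1.0))
```

For quick set acquisition, a pass like this turns millions of samples into something a modeler can actually work over.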

From here, it was time to head over to “The Making Of Avatar” presentation, featuring VFX Supervisor Joe Letteri. This talk was more of a general overview of some of the various challenges faced and strategies used to tackle the overall film. In addition to touching more on the spherical harmonics and image based lighting, deep compositing, and stereoscopic issues and solutions, there was also some discussion of the muscle deformation system, FACS targeting and facial performance capture methodology, and overall scene reconstruction breakdown.

I decided to finish my day with the panel discussion of “CS 292: The Lost Lectures – Computer Graphics People and Pixels in the Past 30 Years”. This was an interview (more of a recollection of memories) by Richard Chuang (co-founder of PDI) of Ed Catmull (current President of Walt Disney Animation Studios and Pixar). The talk focused on a class that Ed taught back in 1980 (in which Richard was a remote student via microwave broadcast). Richard managed to record these classes, and his video footage of Ed is the only surviving record of them. The talks were all at once entertaining, historically significant, and surprisingly relevant despite 30 years of changes and development in the field of computer graphics and animation. This was a rare glimpse into the thoughts and recollections of a handful of people (including Jim Blinn among others) whose contributions to the field and our industry are perhaps too great to measure and fully appreciate. Throughout the panel, I was reminded of my own foray into this realm, which I thought I might briefly share here. As a child, I was fairly artistic and enjoyed drawing, though I was always very technical and enjoyed building things as well. Though I had used a number of computers early on, including the Commodore 64, TI-99, and Apple II and IIe, the first computer my family purchased was an Apple IIc. In school, a close friend of mine, a very talented artist, would draw comic strips and flipbooks with me, and animation was always something I was interested in creating myself. I began my foray into computer graphics using Logo, creating interesting pictures (and doing very basic computer programming) as well as animating trucks moving across the screen, but my first real animation program was a tool called Fantavision, published by Broderbund.
This tool allowed for a number of drawn shapes with different fill patterns (after upgrading to an RGB composite monitor, I learned these patterns were actually colors) as well as tweening and keyframing. My 5th grade science project had me construct a backyard meteorology station, and for my presentation, I created a simple animated tornado (which I still have on a 5 ¼” floppy disk in a box in the garage somewhere) using this program and presented it to the class in the computer lab at my school. A number of changes in hardware, software, and life goals transpired between then and now, but sometimes I find myself thinking about how lucky I am to have spent a large portion of my life working in a field where I can create beautiful images, solve technical challenges, and go to work every day with such a fun and amazing task. In large part, I owe a round of thanks to these pioneers whose vision, hard work, and pursuit of the unknown have made this career choice possible. A former colleague and friend of mine at Disney named Tim Bergeron once told me something which I think about often, especially when I’m having a particularly rough time on a project or things just aren’t going my way. He would say, “Whenever you think things are really bad and they couldn’t get any worse, always remember, we get paid to make cartoons”. I don’t know if I could put it any better than this.

For tomorrow, I have a full day planned. In the morning, I’ll either be checking out more about Iron Man 2, where they discuss some of the shading techniques further as well as their keyframe/mocap integration, or a talk on Simulation In Production, where representatives from a few different studios will talk about fluids, hair simulation, fractured geometry with texture fidelity and continuity, and large-count object simulation. I really wish I could go to both, but I guess I’ll decide in the morning. Next, I’ll be hearing about a paper on Expressive Rendering and Illustrations, where a slew of different motion graphics techniques will be discussed. At lunch, I’ll be attending the Autodesk 3ds Max 20th anniversary press lunch. In the early afternoon, there’s a nice talk called Blowing $h!t Up, which will cover a number of destruction effects and rigid body dynamics in Avatar, Transformers 2, and 2012. After that, I’ll have to choose between a Pipeline and Asset Management talk (a topic near and dear to my heart, as I used to specialize in developing systems for this purpose), or a talk on the making of Tron given by some former colleagues. I’ll likely end up choosing the former, even though again I’d like to see both. Finally, MPC is presenting their views on a Global Visual Effects Pipeline, dealing with the challenges faced by multi-site operations like the one I work at. I may try to catch this one before calling it a day. Well, looks like it’s pretty late again, so I’d better call it a night. Stop back in tomorrow to hear some more recaps and random thoughts!

Posted By David Blumenfeld on July 27, 2010 12:00 am | Permalink 