Good evening loyal readers! Well, SIGGRAPH is officially more than halfway through. It’s been an interesting show so far, with some really great talks and courses that have definitely prompted me to start a list of R&D projects for when I return to the studio. I’ll recap some of the presentations from today while interjecting a few thoughts and ideas here and there, as I am clearly so fond of doing, and so without further ado, on to the day in review.
The morning began much like the previous two, with my best-of-the-best elite parking spot right in front eagerly awaiting my arrival. Driving in at the crack of dawn definitely has its advantages, well, at least one. Just as yesterday, the buses were letting the throngs of attendees off in droves, and for a moment, I thought perhaps a hasty jaunt up the escalator in search of a fast pass was in store, but alas, this was quite a different attraction indeed. Ahh, to wax poetic about a technical exposition, the humanity of it all… what a world! I stopped in briefly at the Media Suite for an orange juice and to check the press releases, then made my way to the 9:00am presentation of “Iron Man 2: Bringing In The Big Gun”. The presentation was given by Doug Smythe (digital production supervisor at ILM) and Marc Chu (animation supervisor at ILM). They talked about a number of production challenges, including the CG expo environments; the various suit designs, rigging, and animation; and motion-capture acquisition and processing. As in some of the other talks I’ve seen on Iron Man, as well as talks on Avatar and courses on the new physically based shading models, one area that particularly interested me was projecting HDR imagery onto texture cards for use as area lighting. Before getting into that, I wanted to touch back on something that was cleared up a bit for me. In one of my other posts, I mentioned how Ben Snow (VFX supervisor at ILM) talked about shooting his HDR bracket exposures 3 stops apart, where I am used to shooting closer to 1-1.5 stops apart. In today’s presentation, Doug gave more detail on how they shoot their HDR images, and it turns out their camera is one generation older than the one I shoot mine with. It only allows them to shoot 5 brackets per angle, where I am able to shoot 7. What this really means is that with their method, they are achieving a difference of 15 stops from their darkest to their brightest image.
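To keep my own numbers straight before testing any of this, here’s the bracket arithmetic in a quick sketch, using the same brackets-times-spacing counting as above (the function names are just mine, not anything from the talk):

```python
def coverage(brackets, stops_apart):
    """Total stop coverage of a bracketed HDR set, counted as
    brackets x spacing (the convention used in the talk)."""
    return brackets * stops_apart

def bracket_evs(brackets, stops_apart):
    """Relative EV of each exposure, centered on the middle bracket."""
    mid = (brackets - 1) / 2.0
    return [(i - mid) * stops_apart for i in range(brackets)]

print(coverage(5, 3))      # ILM's older body: 15 stops
print(coverage(7, 1.5))    # my current habit: 10.5 stops
print(coverage(7, 2))      # what I'd need to match their range: 14 stops
print(bracket_evs(5, 3))   # [-6.0, -3.0, 0.0, 3.0, 6.0]
```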
For me to approximate that range, I would need to shoot at 2 stops apart, giving me a range of 14. Right now, I’m capturing between 10 and 11. One test I plan to do back at the studio is to shoot a series of four sets of HDR domes, using spacings of 1, 1.5, 2, and 3 stops. I’ll merge and stitch them all up, clean out the rig, and then ensure that they are all color balanced to match at their midpoints. Then I’ll light a test scene with environment lighting only, using each of these four different HDR images, and wedge out the results to compare on both a gray and a chrome ball over the backplate, as well as over a neutral plane, to see what sort of results I get and whether the increased stop range actually provides better lighting or not. If anyone is interested, I would be happy to post those wedges as a follow-up; simply let me know. Anyway, back to the HDR texture card as area light. Up until now, the way we generally set up our lighting (using non-physically based shading models in RenderMan, essentially appearance networks built out using the Delux shader which ships with Slim) is with two environment lights with the same HDR image mapped onto them: one emitting the diffuse component only, along with traced occlusion, and the other emitting the specular component only (providing the ability to control each one’s intensity and other values independently). A minimum of one spot light with shadows (either deep shadows or raytraced, depending on the scene and desired look) is placed at the highest-intensity position of the environment ball, sometimes adding additional lighting and other times casting a shadow only. Additional spots may be added for more complex shadowing as necessary. From here, we’ll add additional environment lights mapped with HDR images of studio lights, such as big area light boxes in different shapes and patterns, to achieve additional rim lighting or specular hits.
Next, we’ll add blocking geometry where we want to darken areas, and mapped cards for reflection purposes if desired. While this is a relatively standard way of combining image-based lighting with standard lighting, what intrigues me is the talk of using HDR images as textures on cards and being able to use those as additional environment lighting. To do this currently, I would have to create a lat-long image of my texture on a black background and then map it onto an environment light (ball). What I would like to be able to do (and what I’m assuming these fine folks are talking about) is simply shoot a non-fisheye bracketed exposure photo of an actual light on my stage, merge it into a single radiance file, map that onto a card I can place in my scene, and have RenderMan treat that card as environment lighting. This is something I will definitely look up, and if anyone has some info on it, I wouldn’t mind a point in the right direction. While none of the current steps is hard (shooting stitchable images, merging and stitching them, cleaning out all the unwanted portions of the map, and then assigning the result to an environment light), it adds up to a lot of extra work that could be avoided if I could go directly to a card with a single image. Anyway, definitely something I will be looking into.
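For reference, the current workaround really amounts to compositing the light’s HDR onto an otherwise black lat-long map at the direction where the card should sit. A toy sketch of that idea follows; the helper names are mine, and it ignores the spherical distortion near the poles that a production-quality map would need:

```python
def make_latlong(width, height, value=0.0):
    """An all-black lat-long environment map (rows of scalar radiance)."""
    return [[value] * width for _ in range(height)]

def paste_card(env, card, lon_deg, lat_deg):
    """Paste a small HDR 'card' into the lat-long map, centered on the
    given direction. Longitude wraps; pole distortion is skipped, so
    this is only a conceptual stand-in for a real projection."""
    h, w = len(env), len(env[0])
    ch, cw = len(card), len(card[0])
    cx = int((lon_deg + 180.0) / 360.0 * w)   # x maps longitude
    cy = int((90.0 - lat_deg) / 180.0 * h)    # y maps latitude
    for j in range(ch):
        for i in range(cw):
            y = cy - ch // 2 + j
            x = (cx - cw // 2 + i) % w        # wrap in longitude
            if 0 <= y < h:
                env[y][x] = card[j][i]
    return env

env = make_latlong(360, 180)            # one pixel per degree, all black
card = [[4.0] * 8 for _ in range(4)]    # a small bright "light box"
paste_card(env, card, 0.0, 45.0)        # 45 degrees up, straight ahead
```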
The next talk I attended was “Expressive Rendering And Illustrations”. It covered a number of different non-photorealistic rendering styles, mostly created as research projects, all with very interesting and thought-provoking results. These ranged from Joe Schmid’s system for non-traditional motion depiction, where the commonly used motion blur is done away with and instead substituted with strobing, weighted lagging, and colored speed-line generation (both blurred and tubular), to Wilmot Li, Dong-Ming Yan, and Maneesh Agrawala’s presentation on “Illustrating How Mechanical Assemblies Work”, which provided automatic solving of spatially configured systems such as gear chains and their accompanying causal chains of motion (complete with arrows, sequences, and automatically solved animation). The rendering techniques themselves, as well as the logic these programs use to identify and solve the motion with minimal user input, were definitely impressive. Stephane Grabli’s presentation on “Programmable Rendering of Line Drawing From 3D Scenes” provided a fantastic style of hand-sketched tracing, with a far superior look to traditional cel-shader implementations. The availability of his development from his website (http://freestyleintegration.wordpress.com) is a fantastic bonus, and I’ll definitely play with this when I return to the studio. Finally, Alec Rivers’ presentation on “2.5D Cartoon Models” illustrated his clever solution for creating a fully tumbleable 2D drawing in 3D space without the result looking like a cel-shaded piece of geometry; it was very creative and definitely headed in a cool direction. His research and a working application can be downloaded and experimented with at his website (http://www.alecrivers.com).
I had wanted to attend the “Blowing $h!t Up” talk, but a prior commitment created a timing conflict, so alas I was unable to go. I did end up with a bit of free time before my next scheduled course, so I took the opportunity to get out on the expo floor. For some reason (not sure if this was reality or just my skewed perception), the number of vendors seemed much smaller to me this year than at recent shows. Freebies were almost non-existent as well, and while I really don’t need another box of mints, keychain, squeeze toy, or t-shirt, it would’ve been nice to have something to bring home for my young son. While there was the standard slew of software vendors and the requisite demonstration lectures, the rest of the floor was mostly dominated by 3D printers and samples of the models they create, a number of stereo displays and televisions, and a handful of GPU-based rendering engines that frankly I’d never heard of before today. There was also a selection of booths showcasing different 3D scanning solutions, and at least three or four motion capture technologies as well. I definitely intend to spend some more in-depth time on the floor, but that will likely have to wait until Thursday due to my course schedule.
My final course of the day ended up being “Pipelines and Asset Management”, moderated by an old friend and colleague of mine, Erick Miller. While the talks were interesting, I mostly went to get a chance to catch up with Erick, whom I hadn’t seen in over three years. I spent the better part of a decade of my career building and supervising the creation of large-scale feature film and facility pipelines, and while each studio and set of artists has unique problems to tackle and creative ways of solving them, there’s frankly not much new in the methodology for this particular task. File referencing; nesting and swappable proxy representations of collections of publishable assets; level-of-detail generation; automated scene population and crowd manipulation; metadata storage and retrieval; and push-pull scene propagation are the standard fare. While I still find the creation of pipeline solutions interesting and fun, I think over the last four years my interests have shifted more towards the creation of a final image and all the parts in between. For a long time, I was always part of a large-scale facility, with highly departmentalized structures and specializations. Working for a much smaller studio like Brickyard, where my close-knit team of technical artists and I are responsible for every aspect of the process from design through final renders, really gives you a different perspective on the process and a greater appreciation for the sum of all the parts. There is no such thing as “over-the-fence”, and at the end of the day, like all of us, I simply want to tell the story well with the most beautiful images possible, on time and under budget. With that said, one part of particular interest to me during the talk was Christian Eisenacher’s discussion of “Example Based Texture Synthesis”, which he has developed at Disney.
While the result of his process is fantastic, relying on exemplar analysis to find the most appropriate missing pixels in an image and thereby create non-repeating seamless tiles for large-scale textures, what interested me most was their use of non-UV’d polygon meshes via something called Ptex, now an open-source code base available to everyone at http://ptex.us/. What this essentially provides is a meaningful, flow-based texture coordinate system at every poly face or subdivision, with adjustable density on a per-face basis. While the notion of not having to create UV maps for polygonal meshes is terribly appealing, from the demo videos I have seen online, it feels much like a camera-view projection system which works well either in 3D paint scenarios (using brush strokes or actual pixel maps) or in viewport projection situations. I am not sure how this would work if I wanted to take a flattened representation of the object into Photoshop to actually paint a number of aligned textures (such as signage on a complexly curved building surface). In one video, a snapshot of the current 3D view is brought into Photoshop, where an image is stuck onto the surface projection-style and then viewed back in the 3D system, but this is not the same thing. That workflow is much closer to painting through geometry from an underlying picture in ZBrush or Mudbox, which, although also quite useful, is definitely not the same thing. In either case, my lack of knowledge about the software simply means it is an area I would like to look into more, to find out how it can potentially be leveraged in my own workflows, either now or in the future. I may have to hit up some old colleagues from Disney who are intimately involved with this process, such as Chuck Tappan, to see what I am missing and whether this has another aspect to it.
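To make sure I have the per-face idea straight in my head, here’s a toy version of what I believe the lookup amounts to: each face carries its own little texel grid at its own resolution, and shading samples by (face id, local u, v) rather than through a shared UV atlas. This is just my mental model of it, not the actual Ptex API:

```python
class PerFaceTexture:
    """Toy per-face texturing: every face owns its own texel grid."""

    def __init__(self):
        self.faces = {}  # face id -> (res_u, res_v, texel rows)

    def add_face(self, face_id, res_u, res_v, fill=0.5):
        """Register a face with its own resolution (density per face)."""
        texels = [[fill] * res_u for _ in range(res_v)]
        self.faces[face_id] = (res_u, res_v, texels)

    def sample(self, face_id, u, v):
        """Nearest-neighbor lookup in this face's grid (u, v in [0, 1])."""
        res_u, res_v, texels = self.faces[face_id]
        i = min(int(u * res_u), res_u - 1)
        j = min(int(v * res_v), res_v - 1)
        return texels[j][i]

tex = PerFaceTexture()
tex.add_face(0, 4, 4)        # a coarse background face
tex.add_face(1, 256, 256)    # a hero face, densely textured on its own
print(tex.sample(0, 0.9, 0.1))   # 0.5
```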
All in all, it was a pretty interesting day, as the others have been as well. Tomorrow, I’ll try to hit up “The Last Airbender – Harnessing the Elements: Air, Water and Fire” first thing. At 11:30, I’m attending a press luncheon where discussions about new processor technology, real-time raytracing, stereoscopic impact on production, and cloud computing for rendering purposes will take place. At 2:00, I thought I might check out the Molecular Graphics talk, simply because it’s always interesting to see cutting-edge graphics applied to scientific research and visualization (which of course spills over into entertainment), followed by a talk on 3D Printing for Art and Visualization. Though I forgot to register last week, hopefully one of the kind folks at Pixar will get me and one of my co-workers a spot at the evening RenderMan User Group meeting, which is how I intend to end the day. Check in tomorrow night for an overview of how my day pans out, and please feel free to comment on anything I have written if it sparks your interest in any fashion. And now, off to my four hours of sleep!