|Siggraph is pretty much a wrap. I would have written earlier, but I literally have not stopped since stepping off the first plane into Burbank Monday morning. I headed straight into the talk on computational photography; I have to say this is one of the more exciting research areas to me. Development in this sector has been evolving for a little while and is now getting more focus (ha). A few years ago at Siggraph there were a number of fun emerging technologies related to this field (coded apertures, origami lenses, lens arrays) and plenty of work on up-res-sing, deblurring, infinite depth of field, etc. As with most tech there are tradeoffs: loss of contrast, ringing, and other artifacts. But what I find really exciting is to think of this research more philosophically: thinking about light in a new way, and changing photography and imaging by being creative about how we analyze light and how that will change the creative process. This is definitely a sector to watch.
On with the show. With a full conference pass, I always feel like I need to be in three places at once; luckily ILM sends a few qualified folks to cover the subject matter (and contribute, of course). I tend to hit up the replicating-realism sessions: anything to do with faces, lighting acquisition, 3d spatial things, and human motion. As usual there are a few good pieces here and there; sometimes they cover things we know but haven't implemented, and sometimes a result from a different approach can be leveraged in an entirely new way. One thing I'd like to see more of is research collaboration between industry and academics. I find that some of the research going on is solving something certain companies already have but can't or don't share. I understand the protection of intellectual property, but at a minimum I think the big shops can help consult on and guide some of this academic research by collaborating (not saying it doesn't happen, just that I'd like to see more of it). Quick example: I attended a talk on analyzing and extracting 3d human performance from a single 2d video source. The research included a number of secondary techniques that helped the process and rounded out the work in a nice way, but that also distracted from focusing on a more elegant solution to the principal issue. Of course research can head in a multitude of directions for many reasons; I just like to think that with some more collaboration, I could integrate some of the techniques sooner and keep raising the visual bar.
I always enjoy seeing the research coming out of ICT, specifically the facial capture work using normal mapping, and even the headrig version of that tech. It yields great datasets for shapes, bump, spec, and performance, if and when you have a controlled setup. The headmount frees some of that, but still leaves plenty of fun in the research arena for capturing facial performance in the context of a live-action environment with minimal tech and footprint.
On to the floor: it was nice to see the major shops all with recruiting efforts. I know all the Lucas divisions have a number of opportunities for folks on the upcoming slate of work. As for the vendors, it was nothing like the magnitude of a show like NAB, but I guess that is a good thing: you can get right down to seeing the tools you really need. There was plenty of opportunity to play with desktop scanners and 3d printers, a good all-around representation of the market options that you could touch and feel. And as usual, all the mocap players were representing: optical systems, accelerometer-based systems, etc. Nothing super new here, except that everyone is jumping on the virtual cinematography bandwagon. It is nice to see the tools becoming packaged and available for expanded use at reasonable price points. Most form factors look like shoulder-mount broadcast cameras, and they give that kind of look to your virtual camera move. Guess what, people: if you have a tracking technology, you already have the capability of doing virtual cinematography (you still need the talent, but piecing the tech together isn't that hard). Heck, put your tracking object on a dolly, a hot head (or just port the rotations directly), or a steadicam (make sure to add mass). The point here is that you can leverage traditional tools with the technology to replicate the cinematic look.
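The "piece the tech together" point can be sketched in a few lines. This is a minimal, hypothetical example, not any vendor's SDK: `Pose` and `smooth_poses` are made-up names, the pose stream stands in for whatever your tracker emits, and the exponential low-pass is a crude stand-in for the "add mass" trick above. A real pipeline would smooth rotations as quaternions rather than raw Euler angles to avoid wraparound.

```python
# Hypothetical sketch: drive a virtual camera from raw 6-DOF tracking
# samples, with a simple exponential low-pass to mimic the damped,
# added-mass feel of a steadicam rig.
from dataclasses import dataclass

@dataclass
class Pose:
    # Translation in meters, rotation as Euler angles in degrees.
    # (Euler smoothing is fine for a sketch; real code should use
    # quaternions to avoid angle-wrap artifacts.)
    tx: float; ty: float; tz: float
    rx: float; ry: float; rz: float

def smooth_poses(samples, alpha=0.2):
    """Exponentially smooth a stream of tracked poses.

    Lower alpha -> heavier virtual camera (more lag, smoother move);
    alpha=1.0 passes the raw handheld jitter straight through.
    """
    out = []
    state = None
    for p in samples:
        if state is None:
            state = p  # seed the filter with the first sample
        else:
            # Blend each channel toward the incoming sample.
            state = Pose(*(alpha * new + (1.0 - alpha) * old
                           for old, new in zip(vars(state).values(),
                                               vars(p).values())))
        out.append(state)
    return out
```

The same filtered stream could just as easily be written out as camera keyframes, which is really all "virtual cinematography" asks of the tracking layer.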
One observation: emerging tech always has the most random graphical interaction technologies. It cracks me up to use augmented reality to get a virtual smell, try crazy haptic feedback contraptions, and play fun little graphic games. One cool piece of tech let us peer into the future: a 3d display with a little 3d video game running inside its hologram-like spinning LED display. This tech will be fun in the near term. These projects make me think it would be fun to be back in school working on something like this. I was working on animatronics back then, which I guess runs in the same vein. Come to think of it, a lot of the research we do to solve a vfx problem is extremely similar: we leverage hardware and software in some undiscovered way, usually nowhere near solid state, yet just capable enough to get the job done, and then we start all over again.
Thanks to Jim Morris for his keynote covering many of the pivotal moments in computer graphics; it is amazing to see how far this industry has evolved in such a short amount of time. The moments when we get to see our hard work and innovation pay off, through the amazing visuals and the audience's appreciation, are what keep me inspired. I rounded out the conference with plenty of socializing and networking; thanks to all the vendors and sponsors for Bordello Bar, J Lounge, Club Nokia, and 7 Grand. Good thing the conference is only a few days long, because now it's back to the pile of deadlines waiting for me at the studio.
Posted By Michael Sanders on July 29, 2010 12:00 am | Permalink