SIGGRAPH 2015: Day 1 - VR, keynote, parallax mapping & chance meetings

Posted By David Blumenfeld on August 11, 2015 06:40 am
Hello SIGGRAPH friends! After skipping a year, I'm back again at the show, ready to dive into the latest and greatest happenings in our industry. As everyone already knows, this year's big topic has been the resurgence, or "second renaissance", of virtual reality (and augmented reality). While the expo floor doesn't open until Tuesday, I'm quite sure that booths filled with this tech will dominate, much the way 3D printing, stereoscopic viewers and monitors, and motion capture tech have in expos past.

My day began with a 9am panel on this very subject, "The Renaissance of VR: Are We Going to Do It Right This Time?" The title was addressed during the talk: it refers to how virtual reality never quite took off the last time it was hyped in the media. While the technology has been in development for over 50 years (with some impressive archival footage offered as proof), it was the late 1980s and early 1990s when the terminology and the promise of this tech went mainstream. The problem back then was not really the tech itself, but how it was hyped as the be-all-end-all solution to every conceivable problem, when in fact the state of the art at the time (and to some degree even now, for that matter) was nowhere near what was billed, leaving early adopters disappointed by the gap between the dream and achievable reality.

Many of the problems faced then have since had nearly two decades of research poured into them, and some (namely latency, wireless communication, and realtime rendering performance) have been solved this time around. It is clear, though, that hurdles remain, and two specifically come to mind. First, there is no viable solution now, or on the horizon, for the discrepancy between perceived motion in an HMD (head-mounted display), such as when you view the VR world as if flying through the sky, and the real motion detected by the fluid in your inner ears while you sit still. The mismatch produces an uneasy feeling, similar to what many people experience after playing first-person shooter games. And in scenarios where you physically move around, unless some of your real vision is preserved, you can easily become disoriented or collide with obstacles.

The second hurdle is the target market. Industry currently has strong demand for these products, in design, manufacturing, and product testing to name a few areas, but outside of the gaming and entertainment world it remains to be seen whether the average consumer will demand them the way they did personal computers, smartphones, and televisions. The panel of four speakers all had unique perspectives on these topics, and the one I found most interesting was that of Elizabeth Baron of the Ford Motor Company, who demonstrated how Ford uses this technology for programmable vehicle modeling (complete with interactive touch points), for virtual spaces where the car can be viewed from the exterior with every component present down to the last screw and fastener, and inside a CAVE system where the interior of the car can be explored.

The second session I attended was the Keynote Address. After awards were presented to key players in art and academia, Joi Ito, director of MIT's Media Lab, gave the keynote speech, presenting his vision of how technology will progress in relation to his department's areas of study at the university. Of greatest interest to me was his vision of the convergence of digital technology with biotechnology, and how interrelated the two have become given our growing ability to program living organisms to perform increasingly complex tasks that may forever change the environment around us.

After a brief lunch and a chance to catch up with a few colleagues from years past, I headed to the Disney presentation "Building San Fransokyo: Creating the World of Disney's Big Hero 6." They focused on four major areas of production: creating the models of the city and characters, lighting the film, rendering it all (and writing their new renderer, Hyperion, for the task), and creating the wormhole sequence. While there is far too much to cover in this article, I will start by mentioning that Hyperion was written to trace many bounces of indirect illumination while greatly speeding up this typically memory-intensive process by presorting the rays into coherent batches and then streaming the geometry against that sorted "list," rather than firing rays off at random. On average this method required approximately 10.5 million rays per frame, yet it gave the artist a straightforward tool: rather than a sprawling set of options for the render globals and lighting engine, the list was literally under 20 settings. This allowed the artists to focus significantly more of their time on the art of the shot rather than dialing knobs around trying to get the proper look.
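
To make that presorting idea concrete, here is a toy sketch of my own (not Disney's actual code; the bucketing scheme and names are hypothetical) showing rays grouped by direction so each batch can traverse the scene coherently:

```cpp
// Toy illustration of ray presorting: bucket rays by direction so each
// batch touches the scene coherently, instead of each ray fetching
// geometry in a random order. Not Hyperion's real scheme.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Ray {
    float ox, oy, oz;   // origin
    float dx, dy, dz;   // normalized direction
};

// Map a direction to a coarse bucket key (here, 8 octants by sign).
// A production system would use a much finer quantization.
int directionBucket(const Ray& r) {
    return (r.dx > 0 ? 1 : 0) | (r.dy > 0 ? 2 : 0) | (r.dz > 0 ? 4 : 0);
}

int main() {
    std::vector<Ray> rays = {
        {0, 0, 0, 1, 0, 0}, {0, 0, 0, -1, 0, 0},
        {0, 0, 0, 0, 1, 0}, {0, 0, 0, 0.7f, 0.7f, 0},
    };

    // Presort so rays with similar directions are adjacent in memory.
    std::sort(rays.begin(), rays.end(), [](const Ray& a, const Ray& b) {
        return directionBucket(a) < directionBucket(b);
    });

    // Each contiguous bucket can now be intersected against the scene as
    // a batch, streaming geometry once per batch rather than per ray.
    for (size_t i = 0; i < rays.size();) {
        size_t j = i;
        while (j < rays.size() &&
               directionBucket(rays[j]) == directionBucket(rays[i])) ++j;
        std::printf("bucket %d: %zu rays traced together\n",
                    directionBucket(rays[i]), j - i);
        i = j;
    }
    return 0;
}
```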

Of course, this ease of use still comes at the cost of render resources, of which they have a considerable amount. They claimed an average of 89 core-hours per frame; if you assume render nodes with 16 cores per machine, that's still over 5.5 hours per frame. To be fair, they were rendering a tremendous amount of geometry, so this average is impressive to say the least. Fortunately, they were able to utilize two of their own full render farms (including artist workstations at night), plus the farms of Pixar and ILM, totaling approximately 55,000 cores. This yielded an average render throughput of 1.1 million core-hours per day, for a total film render time of just under 200 million core-hours. There's nothing insignificant about that, and the final look is just beautiful.
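
As a quick back-of-envelope check on how those quoted figures hang together (the 16-core node size is my assumption, as above):

```cpp
// Back-of-envelope check of the render-farm figures quoted in the talk.
#include <cstdio>

int main() {
    const double coreHoursPerFrame = 89.0;     // quoted average
    const double coresPerNode      = 16.0;     // assumed node size
    const double farmCores         = 55000.0;  // quoted combined farm
    const double dailyCoreHours    = 1.1e6;    // quoted daily throughput
    const double totalCoreHours    = 200e6;    // quoted total (just under)

    // Wall-clock time per frame on one 16-core node: ~5.56 hours.
    std::printf("hours per frame on one node: %.2f\n",
                coreHoursPerFrame / coresPerNode);

    // 55,000 cores running flat out would give 1.32M core-hours/day;
    // the quoted 1.1M implies roughly 83% utilization.
    std::printf("implied farm utilization: %.0f%%\n",
                100.0 * dailyCoreHours / (farmCores * 24.0));

    // At 1.1M core-hours/day, ~200M core-hours is about six months.
    std::printf("days of rendering: %.0f\n",
                totalCoreHours / dailyCoreHours);
    return 0;
}
```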

It was interesting to note that for the wormhole, they created the look out of mandelbulbs. For those who don't know what fractals are: in the escape-time family used here, each point of the image is fed through an iterated formula and tested against a condition, such as whether or not the sequence runs off toward infinity, and the outcome of that test (and how quickly it happens) determines the color assigned to that point's pixel. Viewed across millions of points, the results produce fantastic imagery resembling phenomena witnessed in nature. The most famous of these sets is the Mandelbrot set, named after the mathematician who popularized it. When an analogous formula is iterated in three-dimensional space, a uniquely detailed "flower ball", the mandelbulb, is produced, and its specific shape can be altered by offsetting the numerical inputs.
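
For a feel of how little machinery the 2D version needs, here's a minimal escape-time sketch of my own (the 3D mandelbulb iterates an analogous power formula in spherical coordinates):

```cpp
// Minimal escape-time test for the 2D Mandelbrot set. Each point c is
// iterated through z -> z^2 + c; whether (and how fast) |z| escapes
// past 2 determines the "color" of that point.
#include <complex>
#include <cstdio>

// Returns how many iterations it takes |z| to exceed 2, or maxIter if it
// never does (the point is then treated as inside the set).
int escapeTime(std::complex<double> c, int maxIter) {
    std::complex<double> z = 0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::abs(z) > 2.0) return i;   // escaped: outside the set
    }
    return maxIter;                        // never escaped: inside
}

int main() {
    // Coarse ASCII view of the plane; a real renderer maps the escape
    // count to a color per pixel instead of a character.
    const int kMaxIter = 50;
    for (double y = -1.2; y <= 1.2; y += 0.1) {
        for (double x = -2.0; x <= 0.6; x += 0.04) {
            int n = escapeTime({x, y}, kMaxIter);
            std::putchar(n == kMaxIter ? '#' : (n > 8 ? '+' : '.'));
        }
        std::putchar('\n');
    }
    return 0;
}
```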

A friend of mine was recently exploring some of these shapes for a job he is working on, and we discussed them briefly before I did some online exploration of the various open-source software out there for turning these sets into animation and exportable geometry. I had done that research after seeing the Big Hero 6 movie without making the connection between the two, so when they showed this in the presentation, I instantly recognized it, which made the story of how it was created far more interesting to me.

One final topic of interest from the presentation was how they rendered the windows in the buildings. When flying by large offices and skyscrapers, especially at night when they are internally lit or when the exterior lighting allows you to see into the windows, a dead giveaway can be the interior, which is often just a flat texture map. To rectify this problem and increase the realism without taking the hit of modeling and rendering the interiors, they wrote what is called a parallax mapping shader.

The best way to explain parallax mapping (which comes in about four different levels of complexity/quality) is by analogy with bump mapping: rather than perturbing the normals of the surface, the actual texture coordinates are shifted sideways based on the viewing angle, creating a false parallax that makes the surface look dimensional when there is in fact none.

Rather than explain this further, I'll simply point you to one of the many nice write-ups on the Web which explain the technique and give some GL shading example code.
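
In the meantime, the core single-step offset fits in a few lines. Here's a rough sketch of my own, written as plain C++ rather than GLSL, with the height-map fetch stubbed out (in a real shader it would be a texture lookup):

```cpp
// Rough sketch of basic (single-step) parallax mapping in plain C++.
// sampleHeight() stands in for a height-map texture fetch; in a real
// shader the whole function would run per-fragment in GLSL.
#include <cstdio>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Hypothetical height-map lookup: 0 = deepest point, 1 = at the surface.
float sampleHeight(Vec2 uv) {
    return 0.5f;  // stub; a real implementation samples a texture
}

// Shift the texture coordinates along the view direction (expressed in
// tangent space) in proportion to the sampled depth, so "nearer" texels
// appear to slide more than farther ones as the view angle changes.
Vec2 parallaxUV(Vec2 uv, Vec3 viewTS, float heightScale) {
    float depth = 1.0f - sampleHeight(uv);   // how far below the surface
    float px = viewTS.x / viewTS.z * depth * heightScale;
    float py = viewTS.y / viewTS.z * depth * heightScale;
    return {uv.x - px, uv.y - py};           // offset against the view
}

int main() {
    Vec2 uv{0.5f, 0.5f};
    Vec3 view{0.4f, 0.2f, 0.89f};            // view direction, tangent space
    Vec2 shifted = parallaxUV(uv, view, 0.05f);
    std::printf("uv (%.3f, %.3f) -> (%.3f, %.3f)\n",
                uv.x, uv.y, shifted.x, shifted.y);
    return 0;
}
```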

This is absolutely a technique that I will be exploring more when I'm back at the studio, and I'll attempt to create a setup to render this in VRay if something doesn't already exist for this.

After that talk, I headed over to a studio course called "Build Your Own Game Controller." In a few short days, I'll be one of those lucky manchildren purchasing my first pinball game for my 40th birthday (thanks to a wonderfully accommodating wife), and at some point down the road I'll finish setting up the MAME system I started putting together with that same friend mentioned above, so this topic was of interest to me.

Unfortunately, due to the short gap between the last talk and this course, all the seats with the hands-on electronics kits were already taken, so I stood outside the area and just watched and listened for a while. The course was taught by Josef Spjut of Nvidia, and the entire thing revolved around a small kit he was selling, which included switches (buttons), cables, two breadboards, and an Arduino Micro USB controller board. I only stayed partway through, but it was great to see how simple it is to configure, test, and operate the unit with the provided IDE software. This will definitely be something I explore more down the road, and for anyone interested in robotics, home automation, or similar areas, these tools are user friendly, inexpensive, and easily accessible even for the novice tinkerer.
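
To give a flavor of how little code a one-button controller takes, here's a minimal sketch of my own (not the course material), assuming a pushbutton wired between pin 2 and ground; the Micro's ATmega32U4 lets it present itself to the host as a USB keyboard:

```cpp
// Minimal one-button "game controller" for an Arduino Micro: a pushbutton
// between pin 2 and ground becomes a spacebar press over USB.
#include <Keyboard.h>

const int kButtonPin = 2;
bool wasPressed = false;

void setup() {
  pinMode(kButtonPin, INPUT_PULLUP);  // button pulls the pin LOW when pressed
  Keyboard.begin();                   // start acting as a USB keyboard
}

void loop() {
  bool pressed = (digitalRead(kButtonPin) == LOW);
  if (pressed && !wasPressed) {
    Keyboard.press(' ');              // key down on press...
  } else if (!pressed && wasPressed) {
    Keyboard.release(' ');            // ...key up on release
  }
  wasPressed = pressed;
  delay(10);                          // crude debounce
}
```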

After I left the studio area, I sat for a few minutes to gather my notes, and ended up being approached by two gentlemen who just happened to be nearby. While I'll keep their names private out of courtesy, I have to say that it was one of the more interesting conversations I have had at SIGGRAPH, and very exciting as they were quite unique people.

One was highly accomplished in a completely different field and directly related to a pioneer of the computing industry; the other was a visionary futurist and accomplished writer on virtual reality dating back to its early emergence in the 1990s. My chance meeting with these gentlemen and the charming conversation that ensued was a real treat, easily my highlight of the day, and I thank them both for taking the time to chat with me and letting me learn more about their histories and perspectives.

All in all, it was a fun first day. As expected, I ran into some old friends and colleagues, and it was interesting, to say the least, to hear what turns their lives have taken since we last met. I'm looking forward to Tuesday, and hope to have some more interesting thoughts and ruminations to share with you then!

David Blumenfeld is the Head of CG/VFX Supervisor at Brickyard VFX (http://brickyardvfx.com) in Santa Monica, CA. He can be reached at: dblumen@brickyardvfx.com.