SIGGRAPH 2015: Day 4 - A Peanut & an Ant walk into a bar

Posted By David Blumenfeld on August 17, 2015 12:51 pm
As quickly as it all began, another SIGGRAPH has come and gone. Overall, this year's show had some really interesting talks and presentations. Similar to the last time I attended, the visual effects in all the films that were presented looked incredible. It's great to see that level of quality almost across the board as the current standard in our business. 

Only a few years ago, that wasn't really the case. At a few of the presentations, I overheard some folks talking about how attendance seemed to be down overall. While I suppose that's possible, I personally felt there was a pretty good turnout this year; almost all of the presentations I went to were relatively full, and over half were held in halls with 2,000-plus capacity seating. 

I also walked the expo floor about four separate times. In shows past, I have mentioned that the expo floor seems to be getting smaller and smaller (fewer companies represented) with greatly diminished budgets (very little swag and freebies, and what is there amounts to pens, candy, a few buttons, and t-shirts you have to work far too hard for). 

Additionally, the list of companies that used to be there, or that I would have thought would make it a point to have a presence, but are now absent continues to grow, and the floor space for those who do still come seems to keep shrinking. To me personally, this was all a bit of a disappointment. Sadly, the floor is such a blatant sales environment that I avoided some booths because I felt I was being hard-pitched rather than left free to explore and get information. At a few larger booths, I asked meaningful, technical questions, and very few of them could be answered because the people staffing the floor lacked the knowledge or experience, which I found both perplexing and annoying. 

This is a highly technical show with a large number of highly experienced attendees. If the people on the floor can't answer more than what is written in the brochure, they shouldn't be there, in my opinion. I did, however, have a great conversation with Robert Slater, co-founder of Redshift Rendering Technologies, and I want to thank him for answering my barrage of difficult questions for nearly 45 minutes. Their software was recommended to me by a former co-worker I ran into at the show, and I intend to look far more deeply into it as a potential render solution for some of our studio's upcoming needs.

And now, on to the talks...

The first production session I attended was "The Peanuts Movie: From Comic Strip to Feature Film," presented by Blue Sky Studios. They discussed their process of bringing a highly recognizable, classic cartoon into 3D, and the challenges of solving all the perspective cheats inherent in the comic strip. I recall going through these same issues back at Disney on the production of the ride film Mickey's PhilharMagic, where decisions had to be made about Mickey Mouse's ears, which traditionally are always shown side by side regardless of whether he is viewed straight on or in profile. The character was eventually rigged to maintain the double-ear cheat, though in later 3D incarnations of the character, such as Mickey Mouse Clubhouse, they chose to handle it in a proper 3D manner instead. 

Blue Sky remained true to the original in their setup, and ended up using camera-based morphs, likely through a combination of blend shapes, pose-space deformers, and similar techniques. Additionally, in order to obtain poses that matched the original material, such as keeping a character's raised arm from passing through the open mouth of their enormous head when seen in profile, a significant amount of rig stretching and camera-perspective hiding needed to be done. 
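
To make the idea of a camera-based morph a little more concrete, here is a minimal sketch of how such a cheat could be driven; this is my own hypothetical illustration (the function names and the linear ramp are assumptions), not Blue Sky's actual rig. The weight of a corrective shape, such as a double-ear or profile cheat, is derived from the angle between the camera and the character's facing direction, then used to blend the mesh toward the cheated pose.

```python
import numpy as np

def camera_morph_weight(cam_dir, char_facing):
    """Return a 0..1 corrective-shape weight from the viewing angle.

    cam_dir:     unit vector from the character toward the camera
    char_facing: unit vector the character's head is facing
    Weight is 0 when viewed head-on and ramps to 1 in full profile,
    so the cheated shape only appears when the cheat is needed.
    """
    cos_angle = np.clip(np.dot(cam_dir, char_facing), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))      # 0 = head-on, 90 = profile
    return float(np.clip(angle / 90.0, 0.0, 1.0))

def apply_morph(neutral_pts, cheat_pts, weight):
    """Linearly blend the neutral mesh points toward the camera-cheat shape."""
    return neutral_pts + weight * (cheat_pts - neutral_pts)

# Example: camera sitting almost in profile relative to the character
cam = np.array([1.0, 0.0, 0.1])
w = camera_morph_weight(cam / np.linalg.norm(cam), np.array([0.0, 0.0, 1.0]))
```

In a production rig this kind of weight would more likely drive pose-space deformers or sculpted corrective shapes per shot, but the principle of keying the deformation off the camera angle is the same.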

This type of cheat is often a staple on stereoscopic animated projects as well, to ensure that perceived depths end up where intended rather than where they would actually land if left uncheated. This was also Blue Sky's first time animating on 2s, where only 12 drawings are created per second instead of the usual 24 and each is held for two frames, the way most traditional 2D animation has always been done, except in standout scenes with fast motion or other extenuating circumstances. Special care has to be taken in rendering because of this stepped technique: motion blur typically depends on each frame being unique to calculate properly, so they had to write custom software to compensate. 
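
As a rough illustration of why stepped animation breaks motion blur, and one way a tool might compensate (my own sketch, not necessarily Blue Sky's approach), consider that differencing consecutive frames gives zero velocity on the held frame of every pair, so the renderer sees nothing to blur. One fix is to derive velocities from the underlying unique poses and assign them to both frames of each hold:

```python
import numpy as np

def velocities_on_twos(poses, fps=24.0):
    """Per-frame velocity vectors for motion blur when animating on 2s.

    poses: (num_frames, num_points, 3) array where each pose is held for
           two frames (frames 2k and 2k+1 are identical).
    Naive frame-to-frame differencing yields zero velocity on held frames,
    so instead we difference the unique poses (one per "drawing") and
    repeat that velocity across the two frames of the hold.
    """
    unique = poses[::2]                              # one pose per drawing
    vel_unique = np.zeros_like(unique)
    if len(unique) > 1:
        vel_unique[:-1] = (unique[1:] - unique[:-1]) * (fps / 2.0)  # units/sec
        vel_unique[-1] = vel_unique[-2]              # carry the last velocity
    return np.repeat(vel_unique, 2, axis=0)[:len(poses)]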

To nail down the unique look of the Peanuts animated holiday specials, they added further cheats and techniques, including multiples (where a fast-moving arm may show three hands in a ghosted, or onion-skinned, manner), smears (where an object is elongated far beyond its proportions to better indicate motion or curvature), and motion lines (2D streaks painted onto the frames afterward to indicate shaking, fast motion, and impacts, as well as effects like the dust coming off the character Pigpen). From what they showed, it looks like they really nailed the look, feel, and movement of the original cartoons, and I'm excited to see the final result when it's released on November 6th.
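
For readers unfamiliar with the multiples trick, here is a tiny compositing-style sketch of the idea, purely my own illustration (the function and parameters are hypothetical, not Blue Sky's tooling): several intermediate renders of the fast-moving limb are layered over the frame with falling opacity to produce the ghosted trail.

```python
import numpy as np

def composite_multiples(base_frame, ghost_renders, opacities=(0.6, 0.35, 0.15)):
    """Overlay ghosted 'multiples' of a fast-moving limb onto a frame.

    base_frame:    (H, W, 3) float image in 0..1
    ghost_renders: list of (rgb, alpha) tuples for each intermediate pose,
                   rgb is (H, W, 3) and alpha is (H, W, 1), both in 0..1
    opacities:     how strongly each successive ghost shows (newest first)
    """
    out = base_frame.copy()
    for (rgb, alpha), opacity in zip(ghost_renders, opacities):
        a = alpha * opacity
        out = out * (1.0 - a) + rgb * a          # simple "over" composite
    return np.clip(out, 0.0, 1.0)
```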

After some more time on the expo floor, I headed to the production session "The Making of Marvel's 'Ant-Man,'" presented by Marvel, Double Negative, Luma Pictures, and Method Studios. This film was uniquely shot: in addition to a standard first and second unit, they also found it necessary to have a macro unit, which was responsible for creating and filming all the close-up photography, model miniatures, and the like. The main tools used during this process included a Phantom camera (not sure which model) shooting at 1,000 frames per second, and Frazier lenses, a special patented lens type that allows for very deep depth of field, keeping a large range of the footage in focus even in the macro world, along with the ability to get close to the ground, adjust a prism for reverse tilt-shift effects, and rotate the ground plane (see https://en.wikipedia.org/wiki/Frazier_lens for more information).

They also used microscopic lenses, also called objectives (see https://en.wikipedia.org/wiki/Objective_(optics) ); an electric bike with a camera mounted to it, used like a dolly but with bicycle-like movement characteristics for chase scenes and fast-moving point-of-view shots; and real ants, which could be filmed and analyzed for their look, motion, and general behavior. It was quite interesting to note that the animation of the ants in the film was greatly influenced by other animals, such as horses and dogs, because the correct motion (and look, including translucency, hair, and other features) of real ants proved far too frightening for the story.

In all, the macro unit shot 82 painstakingly-created environments over the course of 40 days with a crew of 25 people, using equipment including technocranes, remotely-operated heads and cameras, high-speed motion-control robots such as the Bolt, and other amazing pieces of technology.

Additional techniques were employed on the project, including scanning of objects and environments with Mephisto scanners, LIDAR (LIght Detection And Ranging) scanners, and other structured light scanning devices. This, coupled with super high resolution HDR (high dynamic range) 360-degree panoramic photography, allowed the sets to be completely reconstructed digitally with incredible detail and texture via projection (as discussed in my Day 3 blog about CHAPPIE). 

Nearly all of the set pieces and props were then photographed for reference using three exposure brackets (+/- 2 stops) and 13 focus brackets, a technique where the same picture is captured with the focus set progressively further away each time, and the frames are then blended to create an image that is sharp at every depth, much the way light field photography produces a changeable focus depth after the fact. In addition, these images were captured with cross-polarized gels (filter material) placed over the set lighting and the camera lens, so that shooting with matching and opposing polarization provided a way to generate what's called a reflectance pass. This essentially means you can separate the diffuse and specular contributions of the images, allowing you to see the object without any highlights visible at all, or to see only the highlights (by calculating a difference of the two images in a piece of compositing or photo-processing software). This information is useful for the look development department when creating shaders and texture maps.
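
To make the reflectance-pass idea concrete, here is a minimal sketch of the separation math under the usual cross-polarization assumptions; this is my own illustration (the function name and the assumption of linear-light, exposure-matched images are mine), not the studios' actual tool. The crossed-polarizer photo contains essentially only the diffuse contribution, and subtracting it from the matching-polarization photo isolates the specular highlights.

```python
import numpy as np

def separate_reflectance(parallel_img, cross_img):
    """Split a cross-polarized photo pair into diffuse and specular passes.

    parallel_img: (H, W, 3) linear-light image shot with matching
                  polarization (specular highlights visible)
    cross_img:    (H, W, 3) linear-light image shot with opposing (crossed)
                  polarization (highlights suppressed)
    Returns (diffuse, specular): the crossed image approximates the diffuse
    term, and the per-pixel difference approximates the specular term.
    """
    diffuse = cross_img
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    return diffuse, specular
```

The same difference operation is what you would set up in a compositing package; look development can then fit shader parameters against the two passes independently.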

Another unique technique involved the Ant-Man and Yellowjacket characters. There are numerous shots where these characters are fully CG, but their helmets allow the eyes to remain visible. The actors were captured simultaneously by multiple cameras at different angles, performing all of their lines for the film and attempting to properly act out the part facially. The eyes were then cut out and composited back into the CG shots so they looked realistic and performed correctly, matching the feel, look, and behavior of the real actors.

The last session I attended was a panel entitled "The Original VR MeetUp." This informally-moderated session included a group of pioneers of the virtual reality industry from the 1980s and 1990s, whose work, influence, and knowledge shaped the principles and groundwork upon which today's VR revolution is based. 

While it was entertaining to hear them wax poetic about stories from the good old days, there were a number of key takeaways I got from listening to them speak. One of the greatest concerns they collectively shared was that many of today's VR practitioners lack historical knowledge about the first round of VR. There is a feeling that the extensive research performed back then produced a clear and concise list of requirements, or "do's and don'ts," for virtual reality implementations to be successful, and that many of these problems are not being addressed, or even looked at, today simply because people are unaware they exist. There is also a feeling that some of the claimed "new technology" was actually invented and solved decades ago, but the new generation is simply unaware of this. 

I think this is a good example of where taking the time to do some extensive historical research before embarking on new development would serve everyone quite well. Another concern they shared was about the current use of VR, which they feel is a bit misguided and misses the true intention and unique benefit of the technology. The current resurgence in development is largely due to heavy investment by certain sectors, specifically gaming and entertainment. One of the speakers made the point that back in the first round, they did not take this new technology and try to remake Pac-Man or Space Invaders inside of it, but that is essentially what is happening now. Similarly, trying to force narrative-directed stories, such as short films, into VR misses the entire point of an immersive world, which is supposed to be explored at the pace and intention of the user, not the presenter.

This supports the notion that VR is not about telling a story but about an interactive experience, where one can explore an alternate place and time. They further suggested that, much like the incredible success of companies such as Facebook, Twitter, and other social media, virtual reality is intended to be a social experience, where multiple people can interact with each other in a created world, as opposed to the solitary experience currently provided by strapping glasses onto your head and being alone the entire time. 

Examples of this behavior show up in similar non-VR scenarios, such as games and simulations like Second Life and The Sims. An interesting question was raised about how this new technology can be monetized and made profitable, since that is typically the sole impetus for the investment that drives further development. The interesting answer was that only a few years back, a strange phenomenon was discovered, easily exemplified by many of the apps people use on their phones and computers today. Contrary to what seemed like common sense, it turns out people are more than willing to pay REAL money for VIRTUAL goods. For example, while playing a free game, users will actually buy extra set pieces, clothing items, animals, or tokens granting them access to another part of the "world," or simply to set their experience apart or enhance it in some way, when in fact they are receiving nothing tangible for their money. 

This may be the key to how virtual reality experiences can become a profitable business model, and it leads to the next evolution, the one that could finally deliver the original promise of the technology, as I will now explain. 

During the first round of VR development, one of the major obstacles was the technical limitations of the graphics and computing platforms. Graphics were primitive, I/O speed and data throughput were greatly limited, and shading and rendering techniques had not evolved far enough. When VR was essentially shelved in the early 2000s (except in university and scientific settings), computer graphics continued to evolve, primarily led by the gaming and visual effects industries, as well as advanced visualization for specialized domains such as manufacturing, medicine, and scientific research and exploration. In today's world, graphical creation capabilities have evolved to the point where it's common practice to create visual effects that are visually indistinguishable from the real-world photography they augment or reproduce.

If we continue to make advances in near-realtime processing and rendering, at some point this should become truly instantaneous (realtime), allowing a VR participant to interact in a false world that looks just as believable as the one they live in, similar to the current movie Tomorrowland or the Holodeck of Star Trek fame. It is hard to put a timetable on this, but considering how rapidly some of the current advances have taken place in just the last 20 years, I can easily see it becoming a reality within little more than a decade from now, so long as significant investment is put into it.

All of this is great food for thought, and it hopefully sends the message that there are times when research should be conducted simply for the sake of knowledge and improvement, much as I also believe in sometimes creating art for the sake of art. Not everything should be driven by profit; if that were the case, we would lack the fantastic advancements and discoveries that have come from endeavors like our space program. As it turns out, spending time and money on discovery and progress tends to have a net positive return in the long run because of the ancillary demands and industries it spawns, both at the local and global scale. Tools such as VR can bridge hurdles and gaps that no other technology currently can. 

Imagine, much as the Internet and email have done for information and communication, people from disparate locations on the globe all coming together in a unique environment, with no barriers or sociopolitical differences keeping them apart, behaving as one global community and connecting through a true human experience in a digitally-created realm of discovery. Think of the ideas and progress that could be made when all of humanity collectively interacts and shares ideas together. As a species, it is my firm belief that we will only survive and excel if we can all participate and work with each other without barriers and conflict, sharing a common ground. 

Perhaps this is really the ultimate goal of virtual reality after all, and if not, maybe it should be. I hope you enjoyed some of this food for thought. Personally, I feel I gained quite a bit from the convention this year. I have some new ideas and techniques to try out in my work, as well as a greater understanding and appreciation of a new technology that isn't really new after all, and I hope you as the reader shared in that, even if only in a small way. 

Next year, SIGGRAPH will be held in Anaheim, CA, and since I typically only attend the semi-local shows, I hope to go to that one as well if time and other conditions permit. I can't wait to see what new things the future holds in store. Until then, thanks for reading!

David Blumenfeld is the Head of CG/VFX Supervisor at Brickyard VFX (http://brickyardvfx.com) in Santa Monica, CA. He can be reached at: dblumen@brickyardvfx.com.