SIGGRAPH 2017: Cities, Mandelbulbs & Beyond

Posted By David Blumenfeld on August 08, 2017 07:08 am
Today was another great day at the show, and I arrived inspired and ready to dive into a few Production Sessions as well as spend some time on the expo floor. I try to break up my limited schedule pretty evenly among the large-scale "making-of" presentations and the technical talks, panels, and papers, but it's always a difficult juggling act as you simply can't be in two places at one time. 

This year, I think I skewed much more heavily towards the former, as it's one of the few times over the course of the year I get to see what everyone else is doing out there and what technologies are being used in production (as opposed to being developed but not ready for implementation). So without further ado, let's dive in!

I began my day with the Production Session entitled "Behind the Headset: The Making of Google Spotlight Stories' Son of Jaguar, Sonaria, and Oculus Story Studio's Dear Angelica." All of these presentations revolved around creating short-form experience films in 360-degree VR, which poses a number of its own challenges. While I won't entirely recap each one, I will tell you about some interesting tidbits from each. 

As an unfortunate footnote, Oculus Story Studio, which made Dear Angelica, has since been shut down by Facebook, and most of the people presenting were either out of a job or had moved on to other companies - an all-too-familiar tale in the world of visual effects these days, and one I find quite ironic. There are far too few companies spending money on making art for the sake of making art and pushing the boundaries of new technology, and to be fair, only a select few have the funding necessary to do such a thing. To see a company of that size, with what would seem like money to spare for such a task, reallocate its funding towards more "profitable" realms hinders, or at least in my mind greatly limits, the speed and quality with which these boundaries can be explored and improved upon. 

Not all research has to be in the scientific realm; animation, visual effects, art, and expression accomplish that task as well, providing benefits that, while seemingly intangible, have the potential to yield far greater profits if forward-thinking economics are applied. Again, I have no real knowledge of the inner workings of that company, so this is not a direct criticism, but more a trend or pattern I see repeat itself time and time again, and for lack of a better word, it honestly saddens me.

At any rate, Son of Jaguar was a visually stylized narrative set in the world of Lucha Libre, and two technical features stood out to me. The first had to do with character interaction. At certain points in the film, the hero character looks at and talks to the camera, à la Ferris Bueller. Of course, the viewer has the ability to look all around, so they created a unique setup where the animation of the head and eyes always looks towards the camera, regardless of where it is in 3-space. The second, similarly interesting piece of tech came in the form of graphical two-dimensional effects, akin to the old comic book "pow" starbursts, which show up when a character gets hit in the face within the wrestling ring. These effects are intended to sit behind the character's head and radiate outward, but they must remain view-dependent for the movable camera. To achieve this, they created a sphere centered on the character's head, completely transparent (masked) outside the graphical starburst shape, which behaves like a card sprite and always shows on the opposite side of the sphere in relation to the view camera. 
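
To make that concrete, the placement math behind the trick is simple vector work. Here's a minimal sketch in Python; the function and variable names are my own hypothetical stand-ins, not anything from the actual Spotlight Stories pipeline:

```python
import numpy as np

def place_starburst(head_pos, camera_pos, radius):
    """Place a billboard sprite on an invisible sphere around the
    character's head, on the far side of the head from the camera,
    so it always reads as 'behind' the character for a free-look viewer."""
    view_dir = head_pos - camera_pos
    view_dir /= np.linalg.norm(view_dir)       # camera -> head direction
    sprite_pos = head_pos + view_dir * radius  # far side of the sphere
    sprite_normal = -view_dir                  # orient back toward the camera
    return sprite_pos, sprite_normal

# Example: head at roughly eye height, camera roaming freely in VR
pos, normal = place_starburst(np.array([0.0, 1.7, 0.0]),
                              np.array([2.0, 1.6, 3.0]), 1.0)
```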

I thought this was a very clever solution to the problem, and one that could operate with very low processing overhead for realtime 90fps playback on the VR headsets. There was also an interesting solution for volumetric shadowing, where they rendered a depth channel (similar to z-depth) and then packed the floating-point result into an 8-bit buffer, using that projected depth to darken the atmosphere and cast shadows on 3D objects, again with little overhead.
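
Packing a float into 8-bit channels is a long-standing realtime graphics trick. I don't know the exact scheme they used, but the classic approach spreads the fractional precision across the channels of an RGBA8 target. A sketch, assuming a normalized depth in [0, 1):

```python
import numpy as np

def pack_depth_rgba8(depth):
    """Encode a normalized depth in [0, 1) across four 8-bit channels,
    in the spirit of the classic GLSL fract/dot packing trick."""
    scales = np.array([1.0, 255.0, 255.0**2, 255.0**3])
    enc = (depth * scales) % 1.0              # fractional parts per channel
    enc -= np.append(enc[1:], 0.0) / 255.0    # remove bits the next channel stores
    return np.round(enc * 255.0).astype(np.uint8)

def unpack_depth_rgba8(rgba):
    """Reconstruct the depth value from the packed channels."""
    weights = np.array([1.0, 1.0/255.0, 1.0/255.0**2, 1.0/255.0**3])
    return float(np.dot(rgba / 255.0, weights))

# Round trip: far more precision than a single 8-bit channel could hold
packed = pack_depth_rgba8(0.123456789)
print(unpack_depth_rgba8(packed))   # ~0.1234568
```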

In the Sonaria film, the prime focus of the short was to explore and raise the bar on 360-degree sound. Up until now, most VR sound has been binaural stereo, which works fine in certain specific orientations but loses spatial cues as you move and reorient yourself. For this, they recorded everything in Ambisonic format (a 360-degree sound field) and used some interesting techniques to refocus the sounds based on the user's position and orientation. 
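
The core property of Ambisonics that makes this work is that the encoded sound field can be rotated to counter the listener's head motion before decoding. A minimal first-order (B-format) yaw rotation looks something like the sketch below; sign and channel conventions vary between Ambisonic flavors, so treat this as an illustration rather than their production code:

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, head_yaw):
    """Counter-rotate a first-order B-format sound field (W, X, Y, Z)
    by the listener's head yaw, so sources stay fixed in world space.
    W is the omni channel, X front/back, Y left/right, Z up/down."""
    c, s = np.cos(-head_yaw), np.sin(-head_yaw)  # inverse of head rotation
    x_rot = c * x - s * y
    y_rot = s * x + c * y
    return w, x_rot, y_rot, z   # W and Z are unaffected by pure yaw
```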

For instance, some of the visuals take place both above and below the water line, and the viewer can duck down below or stand up above. The audio system senses this spatial positioning and, making use of pre-coded reverb zones, transitions the sounds and their processing depending on where the viewer is, properly reflecting what you would hear and how you would hear it. 
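
I don't know how their reverb zones were authored, but a simple way to picture the transition is a crossfade between two reverb buses driven by the listener's height relative to the waterline. A hypothetical sketch:

```python
def smoothstep(edge0, edge1, x):
    """Standard smooth interpolation clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def blend_reverb_zones(listener_y, waterline_y, blend_width=0.1):
    """Return (above_gain, below_gain) for crossfading two pre-authored
    reverb buses as the viewer ducks under or stands above the waterline."""
    above = smoothstep(waterline_y - blend_width,
                       waterline_y + blend_width, listener_y)
    return above, 1.0 - above

# Viewer just below the surface: mostly 'underwater' reverb, a bit of 'air'
print(blend_reverb_zones(listener_y=0.95, waterline_y=1.0))
```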

Additionally, musical audio was recorded in six unique radial segments (think a Trivial Pursuit game piece "pie"), so no matter how you rotated, you could still experience a stereo version of the music and keep the two ears somewhat separate.
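
One way to imagine folding six radial stems back down to stereo: each ear's world-space direction selects, and crossfades between, the two nearest segments. This is my own guess at the mechanics, not their actual decoder:

```python
import numpy as np

def stem_gains(ear_azimuth, num_stems=6):
    """Gains for radial music stems (the 'Trivial Pursuit pie') given one
    ear's world-space azimuth in radians; linearly crossfades between the
    two nearest segment centers so rotation stays smooth."""
    seg = 2.0 * np.pi / num_stems
    pos = (ear_azimuth % (2.0 * np.pi)) / seg   # fractional segment index
    i = int(pos) % num_stems
    frac = pos - int(pos)
    gains = np.zeros(num_stems)
    gains[i] = 1.0 - frac
    gains[(i + 1) % num_stems] = frac
    return gains

def stereo_from_stems(head_yaw):
    # Left and right ears point 90 degrees off the facing direction.
    return stem_gains(head_yaw + np.pi / 2), stem_gains(head_yaw - np.pi / 2)
```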

For Dear Angelica, a new 3D VR painting tool, named Quill, was developed. Similar to other stroke-based paint systems, where the mathematical curves are recorded and the rasterized "paint" is applied afterwards, this system allowed the art director/artist to create stylized paint strokes all around the viewer, so the canvas was an actual volumetric assembly of surrounding colored gradient line art of varying widths and tapers. 
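
To make the idea concrete, here's a toy version of turning a recorded 3D stroke centerline into a tapered ribbon of geometry. This stands in for the general stroke-based approach, not Quill's actual data format:

```python
import numpy as np

def stroke_ribbon(points, base_width, taper=1.0, up=np.array([0.0, 1.0, 0.0])):
    """Tessellate a 3D stroke centerline into the two edges of a ribbon,
    with width tapering toward the stroke's end. Assumes the stroke is
    not parallel to the 'up' vector (a real tool would track a stable frame)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    left, right = [], []
    for i in range(n):
        # Tangent from neighboring samples; clamp indices at the stroke ends.
        tangent = points[min(i + 1, n - 1)] - points[max(i - 1, 0)]
        tangent /= np.linalg.norm(tangent)
        side = np.cross(tangent, up)
        side /= np.linalg.norm(side)
        half_w = base_width * (1.0 - taper * i / max(n - 1, 1)) * 0.5
        left.append(points[i] + side * half_w)
        right.append(points[i] - side * half_w)
    return np.array(left), np.array(right)

# A short straight stroke, full taper: the ribbon narrows to a point
l, r = stroke_ribbon([[0, 0, 0], [1, 0, 0], [2, 0, 0]], base_width=0.1)
```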

Aside from the obviously unique visuals this created, the point here was that rather than fitting traditional artwork into a VR medium, this provided a new, direct VR creation tool, something that is greatly lacking in this realm. I found this and the other shorts to be quite novel and, most importantly, to push the envelope on how stories are both told and presented in a VR format, opening the door for further development and exploration in this area of growing popularity.

From here, I took to the expo floor to explore what was new. I find it interesting how in every prior year there was a noticeable trend towards two or three specific areas of focus. In the past, you might find yourself at the show and almost every other booth seemed to be about motion capture or 3D printing. Other times, everything focused on stereo content creation and playback, or VR headsets. When physically-based rendering became the de facto standard, it felt like every course, panel, and technical paper had something to do with BRDFs, importance sampling, and image-based lighting. Of course this makes sense, as the expo is primarily about the latest trends in visual effects and computer graphics, as well as the newest emerging technologies. I found there to be less of a focus like that this year, and as I mentioned on Monday, the number of companies, or rather the lack thereof, was pretty striking. Along with its noticeably smaller footprint, there was no real main theme to the booths, but rather a hodgepodge of the same old things. 

Among the clutter, I did spy a just-in-development realtime terrain generation tool called Instant Terra from a company called Wysilab. The software enters beta in September, and while it is lacking a few features I was hopeful it might have, I am eager to see it develop; hopefully some of those advanced features will make their way into the release version. Please check it out if you are interested. 

I also spent a few minutes at the Allegorithmic booth, getting a demo of Substance Designer and Painter, both tools that I am looking to integrate into our current pipeline. 

Of course, I also stopped at the Redshift booth to speak with Rob Slater (co-founder and VP of engineering). Rob has always been very helpful since we began using this rendering system, and I can't say enough good things about it. Its quality, speed, and production-level feature set allow us to churn through large, demanding jobs in a fraction of the time of prior renderers, and I couldn't be happier with the system. In time for the show, Version 2.5 is now available (which I have since installed and am using on my latest project), and it incorporates a number of features I have been waiting for. Check their website (https://www.redshift3d.com) to see what some of them are. 

This of course brings me to the next thing I was looking for on the floor: GPU expansion chassis. There are very few manufacturers of these, which I find incredibly surprising considering how many graphics card manufacturers were on the floor and how much GPU computing has taken off in the past few years. One company that makes render blades and the like advertised systems capable of running a number of GPUs, but when asked if they made any chassis without the computing portion, they simply dismissed me and said that's not what they do, which seemed like a huge lost opportunity for them. Only one company there seemed able to acquire and sell these, and I am currently speaking with them to see what they offer, as I'd like to expand our capabilities. (We currently use Cubix and have looked into Cyclone.)

After this, it was time to check out three more Production Sessions: "Valerian and the City of a Thousand Planets" (featuring the work of ILM, Weta Digital, and Rodeo FX), "Game of Thrones: Building and Destroying Meereen" (featuring the work of Rodeo FX), and "Sony Pictures Imageworks: Celebrating 25 Years of Innovation, Imagination, and Creativity." While the Imageworks session was really just a look back at what they've done over the years, the other two focused on the significant amount of work in each project. 

While all of the work produced was fantastic and of the utmost quality, I'm not going to go into too much depth here, as you can simply watch the sessions to learn more. Instead, I'll offer up an interesting, though possibly controversial, observation about all of the presentations I saw like this.

It seems as if every studio that presented had to create a large-scale CG city for the film they were working on, and if they were responsible for building any sort of interesting architecture, there was a strong chance it had volumetric fractal-based shapes called Mandelbulbs in its design. Not that there's anything wrong with any of this; it's just funny to see everyone basically doing the same thing, all with the similar goal of trying to create unique visuals that end up being, well, not quite so unique, at least from each other. 
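
For the curious, the Mandelbulb is the 3D analogue of the Mandelbrot set, built by iterating z -> z^n + c in spherical coordinates (usually with power n = 8). Its standard distance estimator makes it cheap to ray march, which is part of why it keeps showing up in hero architecture. A compact sketch of that well-known estimator:

```python
import math

def mandelbulb_de(px, py, pz, power=8, max_iter=12, bailout=2.0):
    """Distance estimator for the classic power-8 Mandelbulb, suitable
    for sphere tracing (ray marching). Returns an approximate distance
    from point (px, py, pz) to the fractal surface."""
    x, y, z = px, py, pz
    dr, r = 1.0, 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            break
        # Convert to spherical coordinates and apply z -> z^power + c.
        theta = math.acos(z / r) if r > 0 else 0.0
        phi = math.atan2(y, x)
        dr = power * r**(power - 1) * dr + 1.0   # running derivative
        zr = r**power
        theta *= power
        phi *= power
        x = zr * math.sin(theta) * math.cos(phi) + px
        y = zr * math.sin(theta) * math.sin(phi) + py
        z = zr * math.cos(theta) + pz
    return 0.5 * math.log(r) * r / dr if r > 0 else 0.0
```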

The other main thing that's become quite apparent is that, as the technology continues to improve, the effects being made for tentpole films (and TV shows) all seem to take on the same level of complexity. After all, there are only so many things you can make. Don't get me wrong, I love visual effects and make my living off of them, so the more the merrier. However, in some ways it's become a little overwhelming, even for me. With an average film now comprising more than 2,500 effects shots, it's become a bit of a visual information overload. It seems like every shot contains simulated water, volumetric smoke and fire, fully animated CG characters with hair, cloth, muscle, and skin, partial set extension/augmentation or entire environment replacement, and any number of other props, vehicles, and more. 

It's nearly impossible for any of these films to be completed by anything less than a handful of the largest companies all working in unison and even sharing assets (companies which in reality are in competition with each other in an already low-margin business), supplemented by even more small boutique studios filling in the one-off shots or easier-to-accomplish tasks.

Again, on its face this is great, as more work means more business and jobs for artists and technicians, but I see a flip side as well. The amount of work and money spent creating shots so detailed that the audience has no way to see even five percent of what was accomplished undermines the value this artwork is supposed to convey. There is simply so much going on, and half the time the camera is whizzing by so fast that the bulk of it is lost in motion blur or intense (bright or dark) mood lighting. It also commoditizes a task that is really not mundane in the least. 

An example (perhaps not great, but humor me) would be the construction of a house. From a commodity viewpoint, architects and engineers design a house and create all the concepts and plans in consultation with the owner (or whoever is paying for it to be built), and then builders are hired to construct it. While of course some builders are better than others, for the most part they follow the plans, use the proper materials and building techniques, and voilà, you have a functional house. This is not to say builders aren't also highly skilled, value-adding individuals, but in this example, they are very task-oriented, with a clear result to be achieved. 

However, visual effects companies don't only take on that task; they are required to provide design, concept, and story development to the project as well. In the analogy, the builder is not also involved in the architecture, design, and engineering portion, and at that point, it's no longer a commodity in even the loosest sense of the word. To top it off, it often seems that for one reason or another, things that should be fleshed out and resolved before a finger even touches a camera, let alone a computer, are left for someone else to sort out after initial production has already wrapped, wasting time and money that could've been better allocated to the things that must be made digitally because they can't be built. 

For example, there was a vehicle in one film that was built as a practical prop for the actor to sit in, and after the entire thing was shot, the company that sponsored the product (in-film ad placement) was shown the item and decided it didn't meet their brand aesthetic. By the time all was said and done, nearly the entire vehicle had to be rebuilt and then replaced in every shot in which it appeared. Of course I don't know the specifics of this situation, and this is not intended to place blame or point the finger at anyone. It's merely an example of something I've witnessed many times, and again, while this provides work for artists, my preference would be to skip this seemingly unnecessary task and instead spend the money saved to give the artists more time to work on the parts that really need to be created, allowing them to take more fulfillment in a higher-quality product without having to work so many long hours to achieve it. 

In many shots, nearly the entire frame consists of actors in front of a blue/green screen with a small stand-in prop, and it makes you wonder how a cinematographer gets so much credit for shooting these when, in fact, the entire set and other assets were created digitally by other artists - artists who in turn built a system whereby a DP can look into a virtual representation on an iPad and frame a shot that way. 

I guess what I'm getting at is that the old adage "just because you can do something doesn't mean you should" applies in many of these cases. I am fully aware that many of these projects are creating worlds which simply don't exist, and that it's far more cost-effective and feasible to create them digitally rather than try to build practical sets, but it seems we've reached a point where things have gone a bit overboard and could be dialed back without stifling the creative freedom and imaginative filmmaking that this medium allows. 

I'm definitely curious to hear your take on this, and as always, you can email me at blumenfeldvfx@gmail.com and let me know what you think. As I said, my goal here is not to take away work from anyone, or to downplay anybody's contribution in a project, be it physical production, post production, etc. I just think perhaps a more thoughtful approach can be taken to plan this work so it doesn't end up driving companies out of business or destroying the very industry that makes this all possible.

With that food for thought, tomorrow is quickly approaching, and with it the final day of SIGGRAPH. I have three Production Sessions on the books, so please tune in again to hear some unique thoughts on what is presented. Thanks for reading, and now off to bed!