SIGGRAPH 2015: Day 3 - From Domes to Dinosaurs

Posted by David Blumenfeld on August 13, 2015 06:53 am
Today was interesting for different reasons than the past two days. I attended four talks, one on a topic I knew very little about and three others for movies I have not yet seen. I figured this would be a good opportunity to learn about something more obscure but interesting-sounding, as well as to see some behind-the-scenes work before enjoying the finished product for a change.

My day began with a 9am panel called "Digital Domes: Theaters Without Borders." Four speakers took turns sharing their experience administering and creating content for fully immersive, 180-degree domed theaters, such as the ones you might find at many planetariums and science museums. By comparing and contrasting this format with that of virtual reality glasses, some interesting things came to light. But first, a bit of my own history with the format.



Large immersive formats have long been a technological goal of theater operators and the viewing public, and I can still remember my early exposure to them. At seven years old in 1982, I vividly recall standing in an early-morning line wrapped entirely around the block at the Cinerama Dome in Hollywood to see E.T. the Extra-Terrestrial on its opening weekend. I returned to the theater two years later for the opening of 2010.

My family would often visit the iconic theater to see special presentation films displayed in Cinerama, which used three projectors to stitch wide images together into a large panoramic view. We also took weekend trips to San Diego, and one of my favorite things to do was visit Balboa Park, home not only to a fantastic zoo and museum complex but also to the Reuben H. Fleet Science Center. Built in 1973, it was the first museum to house an Omnimax theater: a special, partially domed, extra-large-format IMAX screen displaying footage shot in their dual 70mm format but with a near-180-degree lens, which, when projected onto the dome, created a far more immersive experience.

As a child, I remember seeing a space presentation, and being both terrified and awed at their depiction of the thunderously-loud, blindingly-bright Big Bang explosion. In 1985, I returned to see a very special 42-minute film entitled Chronos, an award-winning dialogue- and actor-free abstract film beautifully shot in varying scales of timelapse across five continents.

I often visited the Griffith Observatory as well, famously known for its terribly uncomfortable wooden seats (replaced a few years back in an extensive remodel) and its Zeiss Mark-series mechanical starfield projector, shaped like a gigantic ant (since replaced by a more modern projector, though still on display in the museum portion). When I was first dating my now-wife, some of our evenings were spent in that dome watching the since-discontinued Laserium shows set to Pink Floyd's The Wall and Dark Side of the Moon, as well as U2. Less than two months ago, I returned to the completely updated dome (now nicely appointed with plush, comfortable chairs) for my son's first astronomy presentation in that same theater, Centered in the Universe, where, much like me almost four decades ago, he witnessed his first recreation of the Big Bang with the same mix of trepidation and delight.

I feel very lucky to have large-scale presentation theaters of this caliber located so close by, including the large IMAX theater at the California Science Center. These immersive cinematic experiences present the world to us in a vibrant, encompassing way, but they come with their own unique challenges, and that is what today's presentation focused on.

There were a few key points I found interesting about how full 180-degree dome presentations are made and what some of their limitations are. For one, because of their scale, all modern presentations must be created at a minimum of 4K square resolution, with some theaters supporting 8K as well. While 2K theaters still exist, that size is not terribly useful for modern films incorporating CG and other digitally composited footage. Unfortunately, there are currently no capture devices with a square sensor, which would allow the circular image to be shot at that resolution.



All digital and film-back sensors are rectangular in varying aspect ratios, meaning that in order to capture at that resolution, you must in fact use a higher-resolution device (such as a Sony 8K F65) to obtain a vertical resolution high enough to meet this requirement. This is further limited by lens type. While an extreme wide-angle 180-degree lens, such as an 8mm on a full-frame sensor, will give you a full hemisphere, most lenses produce less-than-optimal image quality towards the edges, which can also cause problems at this large scale.
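
As a quick illustration of that point, here is a tiny Python sketch (my own example, not something from the panel) of why the sensor's shorter dimension is what limits the square dome master you can capture; the resolutions below are round illustrative numbers, not the specs of any particular camera.

```python
# Sketch only: the circular fisheye image must fit entirely inside the frame,
# so the largest square dome master a rectangular sensor can deliver is
# limited by its shorter (vertical) dimension.

def dome_master_size(sensor_w_px: int, sensor_h_px: int) -> int:
    """Largest square dome master (pixels per side) the sensor can capture."""
    return min(sensor_w_px, sensor_h_px)

REQUIRED = 4096  # the 4K-square minimum mentioned in the talk

# Illustrative (hypothetical) sensor resolutions.
for name, (w, h) in {
    "4K-wide sensor (4096 x 2160)": (4096, 2160),
    "8K-wide sensor (8192 x 4320)": (8192, 4320),
}.items():
    size = dome_master_size(w, h)
    verdict = "meets" if size >= REQUIRED else "falls short of"
    print(f"{name}: {size} x {size} max dome master, which {verdict} the {REQUIRED}px minimum")
```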

There are other interesting things to note in the realm of cinematography for dome projection. For one, unlike on standard partial-field-of-view screens (such as in your typical movie house, where even a large screen encompasses at most one third of your field of view), frequent editorial cuts can be very distracting, so shots tend to be much longer and motion much slower. With very smooth motion, it appears to the viewer that either the world around them is slowly rotating or they are flying through it, but handheld-style shots can be very disconcerting, as though a giant has scooped up the audience and is carrying them around.

Although a viewer can look anywhere in the scene, most people tend to focus their attention in the direction of motion, so action needs to be placed deliberately, as requiring people to turn their heads too far to one side or tilt them backwards is quite uncomfortable and sometimes nausea-inducing.

Of interest, though it may sound obvious, there is no such thing as a zoom shot in a full dome. A zoom by definition pushes into a limited field of view and enlarges it, but in a fully immersive dome this effectively becomes a dolly, since there is nothing to zoom into given the all-encompassing field of view. Finally, a big drawback of these theaters involves contrast. In a typical planetarium-style setting where the dome is completely black with bright stars, the contrast is more than adequate, but in a daytime shot or one where the environment is fully lit, light from one area of the dome spills and reflects onto other parts (known as cross contamination), effectively washing out the dark areas and significantly reducing contrast.

The only real solution to this will be the eventual implementation of a non-projected dome image, essentially an enormous LED screen covering the entire surface. While that technology will surely not be long in coming, I would expect it to be rather cost-prohibitive for quite some time.

I showed up to this talk not really knowing what to expect, but I found the topic quite interesting and thought-provoking, and I learned a few things along the way. Of the approximately 1,600 domed theaters in America, many struggle to have enough appealing content to stay profitable. The format doesn't lend itself well to standard movie production, and it's often a struggle to generate enough traffic with the scientific content these venues are so well suited for. Perhaps this particular market is actually ripe for innovation and investment.

Unlike the rising popularity of VR, which tends to be a very personal experience due to the enclosed nature of the glasses and head-mounted displays, large domed theaters provide a social setting where families, friends, and large groups can enjoy this unique immersive entertainment together.

I next attended the 10:45am production session "Image Engine Presents: Breathing Life Into CHAPPIE." For starters, the CG work on this film was remarkably convincing and lifelike. In the film, the main character needed to evolve throughout the story to incorporate his numerous changes in appearance (graffiti, stickers, damage, etc.). For simplicity, and to reduce potential problems, they decided to place every piece of the model and its many variations in a single file. While this produced a very large data set, it ensured that any updates were always propagated downstream without having to update and track multiple versions.

They next mentioned shooting bilevel HDRs, which I know Sony Imageworks has done in the past on some of their productions. The idea behind this is as follows...

Let's say you intend to shoot a single HDR panorama in the center of a courtyard surrounded by buildings. You would shoot your three (or more) multi-exposure, bracketed angles at a height of, say, three feet, and then shoot another set in the same location, but this time at a height of six feet. Using these two known heights (one side of a triangle), you can then correlate any given point in the two stitched images (say, the corner of a specific doorway) and calculate the difference in angle to that point between the two images. With that information, simple trigonometry lets you solve for the distance to that point, and when combined with the point's X/Y location in the image, a specific position in three-dimensional space can be determined.
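
To make the trigonometry concrete, here is a small Python sketch of that triangulation as I understand it; this is my own illustration rather than anything shown in the session, and it assumes the two panoramas are shot directly above one another and that the azimuth and elevation angles to the matched feature have already been measured from each stitched image.

```python
import math

def triangulate_point(h_low, h_high, azimuth_deg, elev_low_deg, elev_high_deg):
    """
    Triangulate a feature seen in two panoramas shot at different heights.

    h_low, h_high   -- capture heights of the lower and upper panoramas (e.g. 3 and 6 ft)
    azimuth_deg     -- horizontal direction to the feature (identical in both panoramas)
    elev_low_deg    -- elevation angle to the feature measured from the lower panorama
    elev_high_deg   -- elevation angle to the feature measured from the upper panorama
    Returns the feature's (x, y, z) position in the same units as the heights.
    """
    baseline = h_high - h_low                      # the known vertical baseline
    t_low = math.tan(math.radians(elev_low_deg))
    t_high = math.tan(math.radians(elev_high_deg))

    # The same feature appears at a slightly lower elevation from the higher
    # camera; that angular difference over a known baseline yields the distance.
    horizontal_dist = baseline / (t_low - t_high)

    z = h_low + horizontal_dist * t_low            # height of the feature
    az = math.radians(azimuth_deg)
    return (horizontal_dist * math.cos(az),
            horizontal_dist * math.sin(az),
            z)

# Example: a doorway corner seen at 45 degrees azimuth, at 20 degrees elevation
# from the 3 ft panorama and 12 degrees elevation from the 6 ft panorama.
print(triangulate_point(3.0, 6.0, azimuth_deg=45.0,
                        elev_low_deg=20.0, elev_high_deg=12.0))
```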

This is often used to create a light at that location, giving the lighting artist more control over the intensity and color of that area's contribution to the IBL (image-based lighting) map. They didn't mention in the presentation whether this was what they were using the dual HDRs for, but I see another potential use for it, which I'll discuss shortly.

They indicated they were using another technique which I've actually written about from a previous SIGGRAPH (when I saw Digital Domain using it on Real Steel and ILM on the first Avengers movie). While using a spherical HDR (dome light) is a great way to begin lighting a scene, this method carries no distance information, as the lighting is treated as infinitely far away in all directions. A more precise way to go about it is to survey the set and create a digital model of it. This can be done with LIDAR (Light Detection and Ranging) scanning (expensive), with traditional land-surveying techniques (expensive and difficult), or with photogrammetry (inexpensive but imprecise).

With photogrammetry, you simply take a large number of digital photos of the environment (the more the better, and the more positions you shoot from, the more accurate the results), then use software such as Autodesk 123D Catch, Agisoft PhotoScan, ImageModeler, or the like to stitch the images together. Using an overlapping-point correlation algorithm similar to the one described above for the dual-height HDRs, the software creates a point cloud and/or geometric mesh representative of the actual environment, complete with the photographs projection-mapped and blended onto the geometry.

With some amount of model cleanup, a fairly simple and somewhat accurate representation of the environment can be created. From there, the HDR image must be spherically projected onto this geometry, creating texture maps in the high-bit-depth Radiance format to be used with a renderer that supports geometry as a light source, such as V-Ray, RenderMan, 3Delight, and others.
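
As a rough sketch of what that projection step involves (my own approximation of the general approach, not the actual pipeline from the talk), each vertex of the reconstructed set geometry can be assigned equirectangular (lat-long) UVs based on the direction from the HDR capture position to that vertex, so the baked texture carries the HDR's color and intensity onto geometry at roughly the correct distance:

```python
import math

def latlong_uv(capture_pos, vertex_pos):
    """
    Equirectangular (lat-long) UV for a vertex, as seen from the position
    the spherical HDR was captured from. Assumes Y-up; note that lat-long
    orientation conventions vary between packages.
    """
    # Direction from the HDR capture point to the vertex.
    dx = vertex_pos[0] - capture_pos[0]
    dy = vertex_pos[1] - capture_pos[1]
    dz = vertex_pos[2] - capture_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    dx, dy, dz = dx / length, dy / length, dz / length

    u = 0.5 + math.atan2(dx, dz) / (2.0 * math.pi)   # azimuth -> U
    v = 0.5 + math.asin(dy) / math.pi                # elevation -> V
    return (u, v)

# Example: a wall vertex ten units in front of and three units above a
# capture point at roughly head height.
print(latlong_uv(capture_pos=(0.0, 1.7, 0.0), vertex_pos=(0.0, 4.7, 10.0)))
```

Sampling the HDR at those UVs (per vertex, or per texel when baking) produces the textures the renderer uses for geometry-based lighting; the accuracy of the result depends on how closely the assumed capture position matches where the HDR was actually shot, which is exactly the issue discussed next.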

When used in this manner, the lighting, shadowing, and reflections are far more accurate and produce a much better result right out of the gate. The only problem with this method is that, since the location in the digital set from which the HDR was shot is not precisely known, these projections (as well as the cleaned-up model geometry) will be off, making the projection map somewhat sloppy. I wonder whether the dual-height HDR method I described above for correlating points in space could instead be used to snap the corner points (important vertices) of the reconstructed environment geometry into far more accurate positions, in which case the projection should line up almost perfectly. This is an area I will have to look into more, as the technique seems potentially very useful.

Next was the 2pm production session "The Park is Open: Journey to Jurassic World with Industrial Light & Magic." The effects work on the film spanned 988 shots (700 of which had dinosaurs in them) and took eight months across five facilities (three ILM locations, Image Engine, and Hybride, with practical models from Tippett Studios and LegacyFX). Two things in particular stood out to me from their talk; the first was their use of motion capture for the dinosaurs.

Early on, they decided motion capture might give them a good starting point for their animation, and they opted to hold an all-company internal casting call for anyone and everyone who wanted to try their hand at acting like a dinosaur on their in-house mocap stage. After narrowing the participants down to around five or six people, they ended up holding somewhere around 4,050 sessions (if I recall correctly) of these actors performing as Velociraptors, Indominus Rex, T-Rex, and a few other dinosaur types, using reference footage of real animal behavior (mostly fighting and the like) from bears, lions, komodo dragons, birds, and, of all things, a home video of Steven Spielberg's dogs.

It was fun to watch, and the performances were actually surprisingly convincing, providing a great jumping off point for their talented animators to really bring these creatures to life.

The other thing I found interesting was their use of iPads on set to block out shots. While I've used other commercially available lens-blocking software on this platform, they wrote their own custom tool called Cineview, which, in addition to letting them preview any existing camera/lens combination, made it possible to bring in textured OBJ-format 3D geometry and place it in the scene for proper framing and blocking. They also showed their use of a relatively new, commercially available accessory called the Structure Sensor (http://www.structure.io), a sub-$400 camera-and-software combo that attaches directly to the iPad and allows for depth scanning and scene reconstruction, as well as tracking and object occlusion, in semi-realtime.

This allowed them to pre-scan a set or location and then have their dinosaur model track into the view for instant feedback. This is definitely something I'll have to check out.

My final presentation of the day was the 3:45pm production session "Fix the Future: Industrial Light & Magic and Visual Effects for Tomorrowland." The film looks beautiful, most notably the impressive and intricately detailed futuristic cityscape, which exists in three different time periods. The sheer volume of data and the model complexity required to create it were pretty astounding, and the concept art and design directives were entertaining and gorgeous to look at.

There was a whole set of "rules" governing how the city must work, from the familiar "hub and spoke" layout made famous by Walt Disney at Epcot Center and many of the other Disney parks, to the integration of buildings into the shape and natural flow of the surrounding environment and terrain. One unique design element was the merging of actual living plants, such as enormous trees, grasses, and hedges, with building elements and curved walkways and bridges, a fusion they dubbed "archinature."

They apparently strove to render the city in a single pass rather than in multiple layers. While this approach certainly saves time and confusion (and reduces the potential for errors) downstream in the compositing phase, I can only imagine the memory and render times required to actually pull it off.

Another interesting aspect was that not only did they create the entire film at 4K resolution, but this was the first feature to be finished in the new HDR projection format. I recently had the opportunity to view a Sony 4K OLED HDR monitor, and to say that its capabilities and color/exposure range are nothing short of breathtaking would be an understatement. I can see the desire to work natively in this format and finish/color grade everything this way, but the biggest drawback I saw is that, as with any new technology, most viewers cannot play content back this way, since very few people have access to these devices.

What that means is that an image finished while viewing on one of these displays not only looks far more drab and dull on a standard monitor, but the typical color and compositing tricks you would use to enhance highlights and the like for a normal monitor will actually look wrong or poorly done when viewed in this new format. I'm not sure how this particular dilemma can be solved easily, and it's a rather unfortunate side effect of adopting this superior new technology.

In all, it was another interesting day with some thought-provoking presentations. I hope you take the time to look up a few of the things I mentioned; perhaps you'll find one of these techniques useful in your own work. Until tomorrow!

David Blumenfeld is the Head of CG/VFX Supervisor at Brickyard VFX (http://brickyardvfx.com) in Santa Monica, CA. He can be reached at: dblumen@brickyardvfx.com.