3D Stereo Compositing Roundtable
Issue: October 1, 2010

IMAGE: ILM’s Jon Alexander on Avatar: “Even in the best-case scenario, there were numerous technical fixes that needed to happen to the original left and right eye views to make them acceptable to composite into.”

David Cox - VFX Artist/Colorist/Stereoscopic Consultant - (www.davidcox.tv) - London

Tim Crean - Creative Director - Suspect - (www.suspect.tv) - New York

Jon Alexander - Compositing Supervisor - ILM - (www.ilm.com) - San Francisco

Westley Sarokin - Co-Head of 2D - The Mill - (www.the-mill.com) - New York

Paul Lambert - Compositing Supervisor - Digital Domain - (www.digitaldomain.com) - Venice, CA

As the trend toward 3D stereoscopic projects continues to grow, so does the need for information about working in this new world. Even if job titles aren’t changing, the intricacies of the jobs are. You might be experienced in compositing in 2D, but that doesn’t mean the transition to 3D stereo will be seamless. The same rules no longer apply. Things that weren’t an issue before need to be addressed in order to make sure the audience is literally comfortable with their viewing experience. 

Below, a handful of compositors who have worked on recent 3D stereo projects share their wisdom.

Post: How different is compositing in 3D stereo than compositing in 2D? What are those differences and can you point to a recent stereo project as an example?

DAVID COX: “The addition of the ‘3rd dimension’ is a surprisingly powerful issue that needs to be understood so good composites can be made without giving people headaches. Although we know the world is 3D in reality, our perception of it is in fact the result of a number of mental processes. One of these is ‘stereopsis,’ which is where the brain takes the two 2D images from our eyes and draws some understanding of distance from them. But it’s not the only method it uses — there are quite a few.

“For example, those horses in the distance aren’t miniature horses — they are just further away. Holding our hand out at arm’s length proves that our hand is closer than the rest of the world because we can’t see the world through our hand. The brain adds up all these ‘depth cues’ to decide where objects really are. The thing with 3D compositing is that we can accidentally mix these up and that causes a nasty headache. A classic one is to place a title on the screen which is optically in front of an object (we can’t see the object where the title is over it), but stereoscopically behind it — i.e., the title is further away from the viewer than the object it is covering. This is a great example of how instantly a viewer can be given a big headache. The brain is being given two conflicting ideas about where the title is. It is both in front of and behind the background object at the same time, and the human reaction against this is powerful.
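
To make the occlusion-versus-stereopsis conflict concrete, here is a minimal Python sketch (an illustration of my own, not anything from the roundtable) of how a title’s placement could be sanity-checked against the parallax of the plate region it covers. The sign convention is an assumption: parallax measured as right-eye x minus left-eye x, so negative values read as in front of the screen.

    # Hypothetical helper: verify a title reads as nearer than everything
    # it occludes, so occlusion and stereopsis give the brain one answer.
    def title_depth_is_safe(title_parallax_px, covered_parallaxes_px):
        """covered_parallaxes_px: parallax values sampled from the plate
        region under the title, e.g. from a disparity map."""
        # The title must have a smaller (more negative, i.e. nearer)
        # parallax than the nearest object it covers.
        return title_parallax_px < min(covered_parallaxes_px)

    # A title at +4 px over an actor at -2 px is the classic headache case:
    print(title_depth_is_safe(+4, [-2, 0, 3]))   # False: cues conflict
    print(title_depth_is_safe(-6, [-2, 0, 3]))   # True: cues agree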

“I did a recent commercial for Samsung where I was compositing CGI butterflies onto a real-world background. In the 2D version, it was easy to suggest the placement of the butterfly by casting a few fake shadows over objects that I wanted the butterfly to feel near. However, in the 3D version it was clear that the butterfly was six feet closer to the camera than where it should have been. No amount of fake shadows tricked the viewer, and the only answer was to make sure the effect of the parallax was identical between the shots.”
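
For readers who want the geometry behind matching parallax, here is a short sketch under an assumed idealized model: a parallel stereo rig re-converged in post, pinhole optics, and illustrative numbers. None of this comes from the Samsung job itself.

    # Screen parallax (px) of a point at a given depth, for a parallel rig
    # whose convergence is set in post so that `convergence_m` sits on screen.
    def screen_parallax_px(depth_m, interaxial_m, focal_mm,
                           sensor_w_mm, image_w_px, convergence_m):
        focal_px = focal_mm / sensor_w_mm * image_w_px
        # Points nearer than the convergence distance come out positive
        # here and read as in front of the screen.
        return focal_px * interaxial_m * (1.0 / depth_m - 1.0 / convergence_m)

    # A CG butterfly meant to hover 2 m from a 65 mm-interaxial rig:
    p = screen_parallax_px(depth_m=2.0, interaxial_m=0.065, focal_mm=35.0,
                           sensor_w_mm=24.89, image_w_px=2048,
                           convergence_m=4.0)
    print(f"offset each eye by {p / 2:+.1f} px")   # half the parallax per eye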

TIM CREAN: “We like to tell clients that stereoscopic, live-action compositing takes 2.5 times as long as a traditional 2D compositing job. You’ve got to execute all of the compositing and prep work that goes into it twice. Once that’s done you’ve got to spend time combining the two eyes properly, not only on a technical level but I believe on an artistic level as well, so you are utilizing the 3D environment to its fullest. On a recent job for fashion design house Armani Exchange, we tried to live up to every expectation of what the stereoscopic medium can be: a very dynamic and engaging experience.”

JON ALEXANDER: “The big difference is that you can’t get away with as much compositing in stereo. If you look at pretty much any effects shot frame for frame you can probably find some minute technical or artistic error. That’s just the nature of what we do. It’s not that we are trying to get away with a less-than-perfect shot but the idea is to not blow the budget chasing things that in context no one will ever see. The goal is always to make sure that the effects play a supporting role in service to the story and not pull the audience out of the moment by obvious technical flaws.

“When working on a stereo production, even a slight difference or minor flaw in one eye or the other can be disastrous. The discrepancy gets magnified as your brain tries to resolve what it thinks should be just slightly skewed views of the same image. Irregularities that last for an instant in a shot with a lot of motion are not so much the problem. It’s the long lingering scenes where you have lots of time to notice that something is just not right. 

“Live-action stereo is the most challenging because you have so little control over the background plates. Before you can even begin to composite elements into the plates you need to focus on correcting the differences simply caused by shooting with two cameras. I’m ignoring, for the moment, movies shot traditionally with a single camera and ‘converted’ to stereo after the fact. Obviously, the most successful recent stereo movie that ILM contributed to was Avatar. Even with the most technically savvy director — James Cameron — a healthy budget and state-of-the-art cameras, you can’t absolutely control optics and physics. There are going to be slight to major differences in the original photography mainly based on illumination positions relative to the two cameras. So even in this best-case scenario, there were numerous technical fixes that needed to happen to the original left and right eye views to make them acceptable to composite into. That preparatory stage is the biggest difference when you move into a stereo pipeline. After that, it is just a matter of being consistent in your approach to compositing both eyes.”

WESTLEY SAROKIN: “Compositing in stereo is tricky in that when you’re working on a shot, any effect or fix done without considering the stereo implications can create visual disparity between left and right eye, which can cause visual discomfort for the viewer. In a traditional 2D composite, all one needs to focus on is how the individual image looks, whereas in stereo, you need to make both left and right eye images work both individually and together. Tasks like paint, roto and keying become much more intensive as these are typically hand-done and can be tricky to make work in stereo.”

PAUL LAMBERT: “In general we’ve found the main difference is precision. Over the years we developed little tricks to more quickly achieve a final composite in 2D, but those tricks don’t work on a 3D film because every element is assigned a specific depth on screen. This requires much more precision to ensure that each element exists in the correct 3D space, otherwise it would negatively affect your viewing experience.”

Post: Did the changes you made to your tools and workflow integrate well with your current 2D environment?

COX: “Being a user of Mistika, I didn’t change anything, as Mistika is as happy in 3D as in 2D. It has some great tools to deal with the difficulties of 3D, yet also allows me to ‘hide’ the fact that I am working in 3D, so I can just concentrate on making pretty pictures without the distraction of worrying about the logistics of 3D.”

CREAN: “Although most of the tools we use have stereo capabilities, being that stereo is a relatively new medium, there were certainly gaps we had to come up with solutions for. Some of these challenges were quite surprising in that we don’t think twice about them in the realm of traditional 2D compositing.

“One example of this was CG-generated, stereoscopic lens flares. When working in 2D, not much thought is given to light traveling across a physical space in relation to its origin and other objects. In stereo this is especially important when you want to have sexy lighting originating in the far background, yet still have it wrap and bleed around the edges of actors in the foreground. Another interesting challenge was typography and motion graphics. The Armani project is bookended with animated title sequences. Figuring out a stereo workflow for our designers while keeping them in their application of choice, with no stereoscopic tools, was also a hurdle.”
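
As a rough illustration of the flare problem Crean describes, here is a numpy sketch (my own, not Suspect’s pipeline) that offsets a flare element per eye for its background depth, holds it out behind the foreground matte, and lets a fraction of it bleed over the edges.

    import numpy as np

    def comp_flare_eye(plate, fg_alpha, flare, parallax_px, eye, bleed=0.35):
        """plate, flare: float HxWx3 images; fg_alpha: float HxW in [0, 1].
        parallax_px: total left/right separation for the flare's depth."""
        # Opposite half-offsets per eye place the flare source in depth.
        shift = int(round(parallax_px / 2)) * (1 if eye == "left" else -1)
        flare_eye = np.roll(flare, shift, axis=1)  # wrap-around: fine for a sketch
        # The flare sits far behind the actors, so the matte holds it out,
        # but `bleed` lets some of the glow wrap over the foreground edges.
        holdout = 1.0 - fg_alpha[..., None] * (1.0 - bleed)
        return np.clip(plate + flare_eye * holdout, 0.0, 1.0)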

ALEXANDER: “Yes, the changes we made integrated well with our current pipeline. We’ve made a concerted effort to not change the way artists are used to working. I wouldn’t say that we’ve made the stereoscopic aspects of the projects an afterthought, but there is probably 80-90 percent of the work you can do concentrating on just one view before you really need to see how the images work in the stereoscopic world. From the beginning, our layout department establishes a working stereoscopic value for each shot. But whether the shot is in stereo or 2D, all the environment and creature work takes the same amount of attention to detail to get it to work within the plate.”

SAROKIN: “Yes, for the project we most recently worked on — Honda’s Eclipse — we used Flame, Nuke and Ocula to do the compositing, with Maya and Mental Ray to create the CG. We use Flame as our primary compositing tool for most jobs, but because Nuke has such an excellent stereoscopic toolset in conjunction with Ocula, we decided to use it more than usual. The most recent Flame upgrade had a number of stereo features, which we certainly put to the test with great success, and along with Nuke we were able to tackle any of the problems we came across.”

LAMBERT: “Nuke already has robust 3D capabilities, so it handles two cameras very well and we did not have to make any significant pipeline modifications to accommodate 3D work.”

Post: Did you have any problems with alignment from the stereo rig or color/lens differences from the two cameras?

COX: “Misalignment and color shifts are a fact of life for stereo shoots. They need to be dealt with efficiently on every 3D shoot if you want to provide a strong 3D image that isn’t tiring to watch. Mistika scores well here — point-and-click line-ups between left and right images, with the alterations applied in realtime with no need for rendering. My method for working in 3D is that immediately after the conform, I run through the edit correcting the misalignments and color shifts and apply a ‘first-level’ depth grade to make the edit comfortable to watch. Thereafter, I just work as normal, applying visual effects and color grades as required.”
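
The kind of ‘first pass’ Cox describes can be prototyped in a few lines of OpenCV. This is a generic sketch, not Mistika’s implementation; the file names are placeholders, and the measured shift’s sign convention should be verified against your OpenCV build.

    import cv2
    import numpy as np

    left = cv2.imread("left.png")      # hypothetical plate paths
    right = cv2.imread("right.png")

    gl = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gr = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Phase correlation measures the global translation between the eyes.
    (dx, dy), _response = cv2.phaseCorrelate(gl, gr)

    # Keep the horizontal component: that is legitimate stereo parallax.
    # Remove only the vertical component, which carries no depth
    # information and only tires the viewer.
    h, w = gr.shape
    m = np.float32([[1, 0, 0], [0, 1, -dy]])
    right_aligned = cv2.warpAffine(right, m, (w, h))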

CREAN: “Yes. This issue is inescapable in the realm of stereoscopic live-action shooting and compositing — even minuscule differences in vertical positioning can create alignment problems. Additionally, you may be surprised how different the light on an object or actor can look when viewed from just 64mm to the left or right. Most native stereo shooting will happen on a beam splitter as well, so right off the bat you are taking a luminance, saturation and sharpness hit in one eye. Finally, with a ‘toe-in’ lens configuration, there is bound to be some keystoning at the edges of your image, which needs to be corrected. Before you even get into compositing, all these issues should be dealt with as they will certainly haunt you throughout the post process — not to mention causing eyestrain and fatigue for viewers.”
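
The keystoning Crean mentions comes from the cameras being rotated toward each other, and a pure rotation can be undone with a single homography once the toe-in angle and focal length are known. In this sketch of mine, both are assumed to come from rig metadata.

    import cv2
    import numpy as np

    def remove_toe_in_keystone(image, toe_in_deg, focal_px):
        """Rectify a toed-in view toward a parallel one via H = K R K^-1."""
        h, w = image.shape[:2]
        k = np.array([[focal_px, 0.0, w / 2.0],
                      [0.0, focal_px, h / 2.0],
                      [0.0, 0.0, 1.0]])
        a = np.deg2rad(toe_in_deg)   # sign depends on which eye this is
        # Rotation about the vertical axis, undoing the convergence angle.
        r = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(a), 0.0, np.cos(a)]])
        h_mat = k @ r @ np.linalg.inv(k)
        return cv2.warpPerspective(image, h_mat, (w, h))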

ALEXANDER: “We always knew there would be differences in the cameras that needed to be adjusted before we could begin work on shots. The alignment issues fall to our layout department. We have developed some very specialized tools that make this adjustment seem nearly invisible to the rest of the disciplines contributing to a given shot. Although there are certainly times when hand-tweaking is necessary, the plan is to make this step as procedural as possible. The same can be said for the color adjustments. It’s really no different than color timing shots in a sequence. Although you can pick up some highlight differences randomly as the camera moves, you’re going through a set configuration of lenses, mirrors or beam splitters so within the shot it should be a pretty constant difference that needs to be adjusted. We are not quite there yet but our goal is to make this a completely procedural process that will resolve the issue without much, if any, individual tweaking per shot.”

SAROKIN: “Definitely. Because we shot the Honda commercial using a beam splitter rig, there were significant color discrepancies between the plates. Highlights, reflective surfaces and overall color balance were very different and we went to great lengths to correct the disparity between them. Alignment wasn’t such an issue as we shot parallel rather than converged. Minor adjustments were necessary but it wasn’t nearly as much of an issue as the color differences.”
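
A crude first pass at the beam-splitter color disparity Sarokin describes is per-channel histogram matching: pull the reflected eye’s tonal distribution toward the through-lens eye before any hand grading. This is a generic numpy sketch, not The Mill’s actual fix.

    import numpy as np

    def match_histogram(source, reference):
        """Remap `source` values so their distribution matches `reference`.
        Both are float arrays in [0, 1] (one color channel each)."""
        s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
        r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size
        r_cdf = np.cumsum(r_counts) / reference.size
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        return np.interp(source.ravel(), s_vals, mapped).reshape(source.shape)

    def match_eyes(reflected_eye, direct_eye):
        """Match each channel of the mirror eye to the straight-through eye.
        Per-shot speculars will still differ, but the broad balance returns."""
        return np.stack([match_histogram(reflected_eye[..., c],
                                         direct_eye[..., c])
                         for c in range(3)], axis=-1)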

LAMBERT: “When shooting with a 3D camera rig, there can be a polarization issue. The rig’s mirror system produces slightly different colors, reflections and specular highlights in each eye, so we worked closely with The Foundry to update the color matcher node that is now included in their latest release of Ocula. Having this solution in place will be a big help from now on, and the latest 3D camera rig also includes an updated mirror system designed to reduce some of the issues mentioned above.”

Post: How much additional roto and paint work did you have to do (if you shot stereo) specifically to fix stereo problems?

COX: “If you are rotoscoping to cut out objects, then although there are some tricks using disparity maps, you are most likely going to be rotoscoping twice. It’s also worth bearing in mind that any horizontal difference in your matte edges will appear as different depth planes in a 3D composite. In terms of fixes specifically for stereoscopic shoots, a common one is dealing with light reflections that are different between the left and right eye. Imagine a shiny tabletop that has a sheen across it from a light. The chances are it’s got a different sheen in the left and right images because of the different angles. If this difference is too strong, it makes the viewer uncomfortable because it makes them feel like one eye isn’t working properly — like when you have water in one eye or something obscuring one of the lenses of your sunglasses. The fix is often to roto out the sheen from one image and warp/comp it into the other image.”
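
The disparity-map trick Cox alludes to can be sketched as follows: warp a matte drawn on one eye into the other along a per-pixel horizontal disparity map (assumed to come from a tool such as Ocula). This is my own simplified forward-splat; a production tool would resample properly and fill occlusions.

    import numpy as np

    def warp_matte_left_to_right(matte_left, disparity):
        """matte_left: float HxW matte on the left eye. disparity: float HxW
        horizontal offset, in pixels, from each left-eye pixel to its match
        in the right eye."""
        h, w = matte_left.shape
        xs = np.clip(np.arange(w)[None, :] + disparity, 0, w - 1)
        rows = np.repeat(np.arange(h)[:, None], w, axis=1)
        matte_right = np.zeros_like(matte_left)
        # Nearest-column splat; any horizontal error in the result reads
        # as a depth error, so edges still need checking by hand.
        matte_right[rows, np.round(xs).astype(int)] = matte_left
        return matte_right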

CREAN: “The nature of stereo compositing is such that roto and paint work must be very precise: ideally every brush stroke or garbage mask will be duplicated accordingly in the other eye. So while there wasn’t any specific situation where a stereo problem was ‘fixed’ with roto, there was plenty of it to cut out the mattes of all the talent, with their windblown hair and clothing, as well as plenty of beauty retouching. We did cut additional mattes in order to ‘dimensionalize’ or convert some 2D scenes into 3D stereoscopic.”  

ALEXANDER: “In our experience, we’ve found it is pretty close to being twice the work. We have amazing in-house roto and paint applications and amazing roto and paint artists. I have to constantly remind myself, when collaborating with other houses on a film, that they probably do not have the options we have for working on and finishing shots. We have the absolute luxury of knowing that if we can get close, the shots here can be finished in roto and paint for those final frames of touch-up. That being said, and with the continual upgrading of our in-house software, you can only clone over so much procedural work. Although strokes can be duplicated, you can’t pull over plate reconstruction because then you bring the grain or noise structure with it. That sticks out like a sore thumb on stereo projection. Roto is conceivably more straightforward, but at a certain point the interocular values and toed-in cameras make it impossible to just use one set of offset splines for both eyes and ignore the issue of where in the motion blur to set the spline. You see that problem exaggerated in poorly converted 2D movies, which often use less experienced roto artists or try to automate too many things in an effort to reduce the time and effort necessary to extract the other eye.”

SAROKIN: “On a stereoscopic project, little fixes can often be a very big deal. In a traditional 2D job, a small paint fix can often be accomplished rather quickly. With stereo, not only does one have to fix two plates, but those two plates have to match up and not create any visual disparity. This makes things much more tricky because the images can look perfectly fine individually but look wrong when combined. Stereo compositing really removes a whole bag of tricks that a compositor can rely on, but at the same time it simply pushes you to create a new bag of tricks that work for a stereo project.”

LAMBERT: “Some shots we can roto the dominant eye and then apply that to the non-dominant eye very easily; other shots require us to roto each eye separately. I would say it’s roughly half and half, with the same rule of thumb applying to paint as well.”

Post: How different is working on a 3D stereo shot project as opposed to a conversion?

COX: “Conversions are a bad thing. The same as taking a black & white film and making it color. To a programmer, the result of black & white to color conversion looks great. To a filmmaker, it’s all wrong because if it were originally shot in color, the lighting would be different, so would the make-up, art direction, etc. The same for 2D to 3D conversion. Yes it might look 3D, after a fashion, but that’s not how it would have been shot if it were known to be 3D. The only reason to be positive about 2D to 3D conversion is if you’re making money from it.”

CREAN: “Although converting 2D to 3D stereo is certainly more engaging and enjoyable to look at than traditional 2D imagery, there is no substitute for shooting native stereoscopic. This is especially true for extreme close-ups on complex, organic shapes like that of a human being. You will have to spend an inordinate amount of time in a conversion figuring out how to separate the elements of your subject matter into appropriate layers for placement in 3D space, and then rotoscope all of those different pieces. It’s a painstaking process, which will eventually get you a ‘feeling’ of depth without the real intricacies of true stereoscopy.”

ALEXANDER: “You get a certain sense of ‘stereo’ for free when working on a project that was developed and shot with the process in mind. For conversions, the need to rotoscope out all the levels you want means compromise must be made as to how much continuous depth you will give the shot. Folks are working furiously to automate a faux full-dimensional world to project and extract continuous layers of images. The algorithms will get better over time but the challenge remains — you still have to make up lots of slivers of images for the second eye that do not exist. Compounding this, if you miss by a pixel or two you drag a foreground edge into the layer behind it. The level of rotoscoping detail necessary to do a conversion successfully is quite high. There is definitely an art to resolving all the edge issues, and for it to be economic you really have to have stereo workstations where an artist can look deeply into the shot and fix every little glitter or stretch that is incorrect without having to walk somewhere else to see their work. On a project that was shot in stereo you get lots of ancillary depth cues. You’ll see a table or a plant you would never have bothered to convert on its own because of the additional cost. But those elements give so many more continuous depth cues that make the shot look less like a multiplane extraction.”
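
To make the ‘slivers’ problem concrete: shifting a rotoscoped layer by its parallax exposes strips of background that were never photographed, which then have to be invented. Below is a toy OpenCV sketch (an illustration, not ILM’s tools) that shifts a layer and patches the exposed holes with off-the-shelf inpainting as a stand-in for the hand-painted reconstruction Alexander describes.

    import cv2
    import numpy as np

    def shift_layer(rgb, alpha, parallax_px):
        """Offset a premultiplied layer horizontally for the second eye."""
        h, w = alpha.shape
        m = np.float32([[1, 0, parallax_px], [0, 1, 0]])
        return cv2.warpAffine(rgb, m, (w, h)), cv2.warpAffine(alpha, m, (w, h))

    def fill_slivers(background_8bit, hole_mask_8bit):
        """hole_mask_8bit: uint8, 255 where the shifted foreground no
        longer covers the plate and background must be made up."""
        return cv2.inpaint(background_8bit, hole_mask_8bit, 3,
                           cv2.INPAINT_TELEA)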

SAROKIN: “Well, for a conversion, the process is basically finishing the shot in 2D, then deriving a stereoscopic version from it, which can have some issues. Doing the shot from the ground up in stereo is a much better and truer form, in that the footage and visuals represent much more accurately what the human eye would see.”

LAMBERT: “I haven’t yet worked on a 2D to 3D conversion.”