Compositing: Stereo 3D vs. 2D
Issue: November 1, 2010

Post: How different is compositing in stereo 3D from compositing in 2D? What are those differences, and can you point to a recent stereo project as an example?

Rob Trent: “Obviously, the goal is to create a great-looking comp for your master left eye, but when you go to apply your processes to the right eye a new crop of issues rears up on you. I was lucky; my first foray into 3D commercials was fairly tame. Bud Light Pimp Your Ride was shot on Red with a convergence rig and needed to deliver on a very compressed schedule. But there were no real dramatic camera moves to track, which simplified our cleanup tasks. And our CG department here at Asylum has a lot of experience doing stereo work for feature films, so they nailed their renders quickly.

“However, shots from each eye were sometimes dramatically different with respect to lighting and color. For example, one shot had a lens flare in the right eye, but not in the left. The two camera angles were offset just enough for that to happen. We had to clean up a row of palm trees behind BBQ smoke and lens flare artifacts, then make sure the parallax for both eyes married well, which was a challenge. If you track a matte painting into a stereo shot and the parallax is misjudged, your eye immediately picks up on it, even on a very still shot.”
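The parallax Trent describes follows from basic stereo geometry: an element's left/right screen offset is set by its depth relative to the rig's interaxial separation and convergence distance. A minimal sketch of that relationship, using made-up rig numbers rather than anything from the spot, shows how far a tracked-in matte painting would need to be offset in the right eye to sit at its intended depth:

    def screen_parallax_px(depth_mm, interaxial_mm, focal_mm,
                           convergence_mm, sensor_width_mm, image_width_px):
        # Disparity on the sensor, relative to the convergence plane, for a
        # parallel rig converged by horizontal image translation. Positive
        # values sit behind the screen; the convergence distance returns 0.
        disparity_mm = focal_mm * interaxial_mm * (1.0 / convergence_mm - 1.0 / depth_mm)
        # Convert sensor millimetres to pixels.
        return disparity_mm * image_width_px / sensor_width_mm

    # Hypothetical numbers: a matte painting meant to read 30 m away,
    # on a rig converged at 4 m.
    offset = screen_parallax_px(depth_mm=30_000, interaxial_mm=65, focal_mm=35,
                                convergence_mm=4_000, sensor_width_mm=24.6,
                                image_width_px=4096)
    print(f"offset the right-eye matte by about {offset:.0f} px")  # ~82 px behind screen

Misjudge that offset by even a few pixels and the painting appears to float at the wrong depth, which is exactly what the eye catches on a locked-off shot.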

Post: Did the changes you made to your tools and workflow integrate well with your current 2D environment?

Trent: “We were using Flame 2011 to put this job together. Autodesk has incorporated many useful tools to ease the transition to stereoscopic onlining and comping. In the batch module, you can load all of the most useful nodes as stereo nodes so that once you have adjusted and aligned your two source clips to match, any batch processing will be more or less duplicated. Of course, you need to keep an eye on roto or tracking from one eye to the other to assure your offsets are correct.

“And once you have comped your shots you can edit them on the desktop, or in the timeline, in stereo with most all of your usual onlining tools.”
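Flame's stereo batch nodes are proprietary, so the following is only a rough analogue of the workflow Trent outlines, with hypothetical names and values: one comp operation is run over both eyes, and the per-eye offset from roto or tracking is carried explicitly so it can be checked:

    import numpy as np
    from functools import partial

    def comp_both_eyes(operation, left, right, right_eye_shift_px=0):
        # Run one comp operation over both eyes; the right eye carries the
        # horizontal offset measured for elements authored against the left eye.
        return operation(left, shift_px=0), operation(right, shift_px=right_eye_shift_px)

    def over_matte(plate, matte, shift_px=0):
        # Toy 'over' of a matte element, rolled horizontally by the per-eye offset.
        shifted = np.roll(matte, shift_px, axis=1)
        return np.where(shifted > 0, shifted, plate)

    # Dummy single-channel plates and a matte element, just to exercise the sketch.
    left_plate = np.zeros((1080, 1920))
    right_plate = np.zeros((1080, 1920))
    matte = np.zeros((1080, 1920))
    matte[400:600, 800:1000] = 1.0

    op = partial(over_matte, matte=matte)
    left_out, right_out = comp_both_eyes(op, left_plate, right_plate, right_eye_shift_px=82)

The point of keeping the offset as an explicit parameter is the same one Trent makes about the stereo nodes: the processing is duplicated automatically, but the per-eye roto and tracking still has to be checked by a human.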

Post: Did you have any problems with alignment from the stereo rig or color/lens differences from the two cameras?

Trent: “I think that any stereo shoot yields alignment issues. I talked a bit about alignment already, so you can refer back to that. But basically, the camera crew on set should be professional enough to minimize problems for post; however, there will always be basic post corrections to line things up. The most problematic issue could be lack of shutter sync between the two cameras. If this is off even fractionally, a shot with dynamic action or camera work will give the viewer a real 3D headache, and it would be very time-intensive to fix. This could blow your delivery schedule big time.”
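A back-of-the-envelope sketch, with assumed numbers, of why a fractional shutter offset hurts so much: any screen motion that happens during the offset between the two cameras' exposures shows up as a false disparity, which the viewer reads as a depth error:

    def spurious_disparity_px(speed_px_per_frame, sync_offset_frames):
        # Screen motion that occurs during the shutter offset between the two
        # cameras shows up as a false left/right disparity.
        return speed_px_per_frame * sync_offset_frames

    # An object whipping across frame at 60 px/frame with a quarter-frame offset
    # picks up 15 px of false parallax -- enough to visibly jump in depth.
    print(spurious_disparity_px(speed_px_per_frame=60, sync_offset_frames=0.25))

On a static frame the error disappears, which is why the problem only bites, and bites hard, on fast action or camera moves.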

Post: How much additional roto and paint work did you have to do (if they shot stereo) specifically to fix stereo problems?

Trent: “On our particular job, I would say we spent about 30-40 percent more time on roto/paint. On a more effects-heavy concept spot, though, I could see the effort in those areas doubling, possibly more than doubling.”

Post: How different is working on a project shot in stereo 3D from working on a 2D-to-3D conversion?

Trent: “The process of converting 2D to 3D is painstaking and tedious. I have never had to do it myself, but can imagine that I would not enjoy it. However, if our compositing work is done on one master eye, then sent along to be dimensionalized somewhere else, there is obviously no difference in our process. Asylum's work on G-Force was done this way, I believe. And it can look great.

“There is a right way and a wrong way to do a 3D project (budgets notwithstanding). Shooting properly in stereo will always give you real depth at the end of the day, and there is no disputing that. Seeing a dimensional object in space from two slightly different perspectives lets you see around to the sides of that object, giving your brain a pretty true scene to sort out. When you shoot one camera and build the other eye, it can still be a dramatic sensory experience for the viewer, but it will probably appear a bit more like the flat, tricky diorama that it is (2D cards artificially placed in a projected 2.5D environment).

“The photography for our end tag on Pimp was actually converted from 2D. They had shot and designed this tag beforehand in 2D and wanted to take it into 3D land. The 2D photography was converted very nicely by a company called Legend. We then rebuilt the type and swoosh graphics in CG to give them nice depth behavior in the scene. As an added flourish, the client requested a new layer of ice chunks exploding toward the camera. The final is pretty engaging and works quite well in the cut and on its own as a billboard element.

“It's an interesting study when considering these two techniques. Part of the reason the conversion works well, I think, is that the ‘real’ 3D supers and ice Asylum generated in CG give your brain something truly dimensional to grab onto, which takes the card-y conversion edge off the photography. I don't know if I'm explaining it well, but I hope that makes sense.”
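A rough sketch of the "flat cards in a 2.5D environment" idea Trent mentions, with made-up layers: each card gets one uniform horizontal shift for its assigned depth, so there is parallax between layers but none within a layer, which is why a conversion can read as a diorama unless truly dimensional CG elements are added on top:

    import numpy as np

    def build_right_eye_from_cards(cards):
        # cards: list of (layer, shift_px) ordered back to front; each flat layer
        # gets one uniform horizontal shift for its assigned depth.
        right = np.zeros_like(cards[0][0])
        for layer, shift_px in cards:
            shifted = np.roll(layer, shift_px, axis=1)
            right = np.where(shifted > 0, shifted, right)  # painter's-order 'over'
        return right

    # Made-up single-channel layers standing in for a converted shot.
    sky = np.full((1080, 1920), 0.2)
    street = np.zeros((1080, 1920))
    street[700:1080, :] = 0.5
    actor = np.zeros((1080, 1920))
    actor[300:900, 900:1100] = 0.9

    # Farther layers shift less; the actor card shifts the most but stays flat,
    # so the eye never sees around its edges the way it would with a real rig.
    right_eye = build_right_eye_from_cards([(sky, 0), (street, 6), (actor, 18)])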