We’re all familiar with such popular film stunts as a movie character bursting into pieces, turning into a beast, or jumping from a rooftop and landing on two feet completely unharmed. These would be impossible without companies like TNG Visual Effects. Relying on Artec’s Eva and Spider for 3D scanning, LA-based TNG, which has worked on such Hollywood films as Man of Steel, The Twilight Saga, and The Girl with the Dragon Tattoo, produces 3D models of characters and props that are later incorporated into movies to explode, shatter, and transform. Let’s take a peek behind the scenes.
3D SCANNING IN THE FILM INDUSTRY
“3D scanning brings the ability for directors to do anything they want,” says Nick Tesi, founder of TNG, who started in 3D computer graphics in 1986. “It’s a new era for filmmaking, and today directors can really follow their vision in the storytelling process, having the chance to achieve any thought process or concept they have in mind.”
Artec's Andrei Vakulenko, pictured
Tesi adds that while filmmakers are the storytellers, they’re not typically the ones who decide what to use to create a scene. That decision is most commonly made by the VFX supervisor, who also decides whether to use 3D scanning for the job. The project’s budget plays a big part. On a tighter budget, the supervisor may elect to use practical effects instead, actually blowing up a house or car rather than creating a completely computer-generated scene. This may save a few dollars if the first shot is perfect, but the production could end up losing money if the scene needs to be reshot. Using a digital double of a vehicle, prop or even a character gives the director more control over the scene by allowing multiple takes until it looks just right.
HOW THEY DO IT
TNG has been working with Artec scanners for five years, calling themselves “early adopters.” They usually use Eva for heads and bodies, and Spider for scanning props and other details on the body that call for a finer scan. Among Eva and Spider’s strengths, Tesi names their small footprint that travels well, better data with newer software, high accuracy and pre-calibration.
One of the most challenging jobs performed by TNG has been the scanning of a character wearing a lot of intricate armor. For that, TNG used a combination of Eva and Spider, which helped them deliver a quality digital double to the customer.
To begin the process of 3D scanning an object, reference pictures are first taken. These aid the 3D modeler and the 3D texture artist, and are helpful during the final quality check to make sure the 3D object matches its real-life counterpart. After the object is 3D scanned, the scan data is aligned and fused together. Once the 3D scan technician confirms they have as close to 100 percent coverage as possible, the scan is complete and can be processed.
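The alignment step mentioned above is commonly done with some variant of the iterative closest point (ICP) algorithm: two overlapping scans are registered by repeatedly matching points and solving for the rigid motion between them. Artec’s actual registration pipeline is proprietary, so the following is only a minimal illustrative sketch of the idea, not their implementation:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=50):
    """Iteratively pair each point with its nearest neighbour and re-solve the fit."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

Production software replaces the brute-force matching with spatial indexing and adds outlier rejection, since real scans only partially overlap; after registration, the aligned scans are fused into a single watertight mesh.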
The time it takes to process the data depends on the resolution needed. Once the data has been processed, the 3D modeler can begin their work. Using the images captured during the professional photo shoot, along with the perfect silhouette and scale of the item provided by the 3D scan, the modeler creates a 3D model out of the many components and unwraps the geometry for the 3D texture artist.
The texture artist then paints the object (or projects images onto it) and paints over the seams where the UV coordinates were cut. The completed texture is handed back to the 3D modeler, who brings out further detail through sculpting. As the model is finished, normal maps and displacement maps are generated so the sculpted detail can be reproduced however the 3D model is viewed.
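The normal maps mentioned above encode fine surface detail as per-pixel directions, so a low-polygon model can still shade as if the sculpted detail were present. As a simplified, hypothetical illustration (real bakers work from the high-resolution sculpt against the low-resolution mesh, not from a plain heightfield), a tangent-space normal map can be derived from a displacement-style height map like this:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height array to an 8-bit RGB tangent-space normal map."""
    dy, dx = np.gradient(height.astype(np.float64))
    # The surface normal leans against the slope in x and y; z points outward.
    n = np.dstack([-dx * strength, -dy * strength, np.ones(height.shape)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap components from [-1, 1] to the usual [0, 255] RGB encoding.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

A perfectly flat region encodes as the familiar uniform bluish color of normal maps, since every normal there points straight out of the surface.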
“As a 3D scanning company that primarily scans human bodies and human heads, it was only a natural progression for us to branch out into motion capture,” says Tesi. This technology brings static objects to life. A 3D scan captures the surface of a person’s skin and clothing. “Once it’s been put together, we unwrap the UVs and remesh it to prep it for texturing,” Tesi adds. “After this step we may render the model, but the next step is to insert joints [a skeleton] so that there is something to drive the skin of the character into movement. This is the rigging process.”
Once the skeleton is sitting nicely within the computer-generated digital double, the weighting process takes place. This determines how much of the skin each joint drives. Dialed-in weights create lifelike movement when the character is animated. To animate these joints without having to grab the joints themselves, a GUI is created and connected to the joints via orientation constraints, allowing an animator to work more easily and intuitively. After animation, the video is rendered frame by frame on a render farm.
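The weighting step described above is the standard technique known as linear blend skinning: each deformed vertex is a weighted sum of the vertex transformed by every joint that influences it. The names and array shapes below are illustrative, not taken from TNG’s pipeline:

```python
import numpy as np

def skin_vertices(rest_verts, weights, joint_transforms):
    """
    Linear blend skinning sketch.

    rest_verts:        (V, 3) rest-pose vertex positions
    weights:           (V, J) per-vertex joint weights, each row summing to 1
    joint_transforms:  (J, 4, 4) current joint matrices relative to rest pose
    Returns the deformed (V, 3) positions: v' = sum_j w_j * (M_j @ v).
    """
    num_verts = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((num_verts, 1))])        # (V, 4)
    # Transform every vertex by every joint, then blend by weight.
    per_joint = np.einsum('jab,vb->vja', joint_transforms, homo)   # (V, J, 4)
    blended = np.einsum('vj,vja->va', weights, per_joint)          # (V, 4)
    return blended[:, :3]
```

For example, a vertex weighted half-and-half between a stationary joint and a rotating one ends up halfway between the two transformed positions, which is exactly what produces smooth bending at elbows and knees once the weights are dialed in.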
CREATING A DIGITAL CHARACTER VS. DEVELOPING A DIGITAL CHARACTER
You can have the character modeled from scratch, have a maquette of the character created, use a stunt person or animatronics, or 3D scan the person. The fastest and most effective way is to 3D scan.
Fully developing a character, especially when a well-known actor is in the scene, involves a much larger effort: creating the facial performance, cyber hair and cloth, and applying character lighting and final scale, all while making sure it is an exact match to the original actor so the transition stays smooth. For example, if the digital copy you’re working on is the main character and needs to be on screen for a long period of time, a lot of time will be spent matching the digital copy to the actor.
There are also those characters that will be some distance from the camera or appear for a very short time on the screen. In this case, less time would be spent on perfecting the details and making sure it was an exact match, but it would retain the illusion of being perfect.
In the event a shot is needed and the actor is unavailable, another approach is the concept of insurance scanning: scanning everything that might be needed and archiving it until the need arises. This ensures that there is always a way to finish the project. Archived assets can be reused and carried over into interactive media, such as a video game made from a film concept, or vice versa.
WHAT MAKES A DIGITAL DOUBLE LOOK TRUE TO LIFE?
Tesi says there are three components to creating a great 3D character. The first is making sure the character design itself looks like a real human. The overall structure of the body and the look of the skin all need to look like a real person. From the hair to the cyber clothing, every part needs to flow naturally. The real-life character and the virtual character should make the viewer wonder which is which; ideally, it shouldn’t even be a question.
The second component is full-body animation: seeing how the character moves, whether it’s stiff, jerky, ill-timed, or even too smooth, like an astronaut on the moon. In live-action work, the character’s movement needs to be as fluid as a live performer’s.
Last, the facial animation must read as a strong, realistic performance. We all know what faces look like, since we see them every day; it’s easy to read a person’s emotion from their face, but in 3D that must be created. If not done correctly, the upper part of the face, in particular dead, flat eyes with a thousand-yard stare and the lower forehead, can give it all away. Eyes need the correct proportions and as much detail as possible, especially in the corners. The same goes for the mouth and ears. Finally, the nose needs proper volume and flare, with nostril placement that is not perfectly symmetrical.
THE FUTURE OF VISUAL EFFECTS
As 3D scanning and 3D modeling technology evolves, it will become nearly impossible to tell which characters are computer-generated and which are the real thing, making the experience all the more personal and real. We’ll see movies and television, like video games, become completely digital — the backgrounds, foregrounds, characters and all — providing fully digital entertainment across all media, though on an unpredictable timeframe.
“As more visual effects are used through 3D scanning, motion capture and other types of visual effects, the more the world will follow suit,” says Tesi.
Given that the cost and time of 3D scanning services and equipment are shrinking (while still producing a high-quality product), it’s highly likely that filmmakers outside Hollywood and the U.S., including in Asia, Europe and Latin America, will catch up with the trend of using 3D scanning to create visual effects for film. The technology will also become more accessible as prices drop, through rental agencies and service bureaus, and as it becomes easy enough to use without a 3D scanning technician who knows the entire pipeline for producing 3D models.
“In the past there were a lot less visual effects on television compared to the film world, but as time has moved on, television episodics and even commercials started to include visual effects,” says Tesi. “They’re giving a big budget look when they use the most up-to-date technology and will stay relevant and exciting in this market.”
If 3D scan data could be stitched together automatically, in full color, into a high-quality digital asset usable in production, it would be a huge breakthrough. The possibilities for computer-generated assets would be endless, because delivery time would no longer be such a factor.
Tesi believes that in the near future, movies, commercials, and television programs will be 90 to 100 percent digital, which could lead to the viewer being able to create their own ending. “Before you know it, there will be a world of digital images accessible to the entire industry through a virtual library,” he says.
“Shows and projects will be completely computer-generated from the characters to the weapons to even the locations. The human element will exist only through motion capture and voiceovers. Story generators exist today, but maybe in a dozen years such programs will develop blockbuster scripts.”
Today, 3D scanning is used as a starting point to create a digital asset, but if these predictions become reality, are you ready for a completely digital world?
Andrei Vakulenko is the VP of New Markets for Luxembourg-based Artec 3D (www.artec3d.com).