A special from Computer Graphics World, Post's sister publication
Barbara Robertson
Issue: July 1, 2007

Any hard-surface vehicular shape, that is.

In Transformers, Paramount Pictures and DreamWorks SKG’s hotly anticipated summer blockbuster, two alien, robotic races take their desperate fight for survival to planet Earth. The good Autobots, which protect the Earthlings, shape-change to and from cars and trucks; their leader, Optimus Prime, is a semi tractor rig. The enemy, the bad, very bad, Decepticons, twist into military vehicles, and the baddest of them all, Megatron, is an interstellar jet. When they fight, they fight as agile, fast-moving, 20- to 30-foot agglomerations of moving, mechanical parts in the shape of anthropomorphic robots that crush, flip, and smash into anything—cars, trucks, buildings—that is in their way. Cars land on tanks that transform into robots. A robot smashes straight through a bus. Megatron rips the legs off an Autobot.

Directed by Michael Bay (Bad Boys, The Rock, Armageddon, Pearl Harbor, The Island), the action-packed, over-the-top visual fest features 14 highly detailed robots and their alternate shapes. Industrial Light & Magic created, transformed, and sent into battle these robots under the supervision of Scott Farrar, who won an Oscar for Cocoon and received nominations for The Chronicles of Narnia, Artificial Intelligence: AI, and Backdraft. In addition, Digital Domain created several one-off robot shots.

“There are shots in this movie that are absolute no-no’s in the world of visual effects,” says Farrar. “It’s a tribute to the artists. A few years ago, I would never have put an actor in direct contact with a CG robot. But we have several shots in the movie where somebody is lying on the robot, or sitting or hanging on the robot, and they fit right into it. The reflections are fabulous.”

Machine Shop

Most of the time, and any time the robots run, jump, and transform, the machines are CG, although there are also quieter shots of the CG Autobots talking with people. “Here’s the problem,” says Farrar, picking up an old toy, an Optimus Prime in its robotic form, from his desk. “How do you build these things on a set? This toy has 51 parts. Our Optimus has 10,108 parts.”

Dave Fogler, who was a model maker for the last two Matrix films and for Star Wars: Episodes I and II, AI, Peter Pan, Pearl Harbor, and Terminator 3, supervised the CG modeling team. “I think it makes a difference that Dave used to work in the model shop,” says Farrar. “He knows what objects should look like. It’s not like it’s all mathematics.”

Fogler started with concept art of the robots and photographs of the vehicles, provided by Michael Bay’s Los Angeles production office. Knowing that all but one of the robots, Scorponok, transformed from vehicle to robot, and that the vehicles would be real as well as CG, the team began by exploring how to build the robots from the vehicle parts.

After a few weeks they gave up. The approved artwork had key pieces from the vehicles but didn’t show explicitly which part went where. Moreover, the robots needed to look heroic, but the cars didn’t have enough mass to create them. And, the schedule was tight. “We needed to start building,” says Russell Earl, associate visual effects supervisor. “We decided to match the artwork and figure out how to transform them later.”

Once the modelers had built the robots in Autodesk’s Maya, they started on the vehicles. When they finished the vehicles, they sent each pair of machines to creature developers who experimented with transformation. After deciding which parts should move on the vehicles, the creature developers sent the machines back to the modelers, who cut the models to help with the transformation.

Initially, the modelers expected to build only three or four robots in full detail. “We had some pegged as background robots,” says Earl. “But, Michael decided he wanted to see more of them. How can you not want to see more of the robots?”

Of all the robots, modelers first put the most detail into the Autobot Bumblebee, a Camaro, although he has fewer total parts than Optimus and he’s smaller. “We see a lot of him, so he might have gotten the most attention,” says Fogler. “And also, his artwork was sparse so we felt we had to do something. We packed him with all these little bits—gears, cogs, wheels, springs, rubber hoses. That second round of modeling probably doubled, maybe tripled, his weight.”

Then, one by one, each of the 14 robots got that same level of detail. “Eventually, you get to the position where you want to see that stuff in each one, and each one becomes its own character,” says Earl, “with its own facial expressions.”
 

Director Michael Bay preferred shooting explosions on location, so ILM often had to fit its digital robots into smoke-filled live-action plates.
 
In Your Face

On set, Bay had one actor do readings for the various robots with speaking parts. Production assistants and others on the crew held window-washer poles to map out the robots’ movements and give the camera operator something to frame. “We have robots running in, leaning down, and delivering dialog,” says Farrar. “We knew right away that the illusion would be broken if the audience didn’t believe those are real metal parts standing in front of the human actors.”

To prove to Michael Bay how important it was to cast actors for the robots, Farrar had the animators create a test using Optimus and video clips of several actors: Hugo Weaving, Robert De Niro, Peter O’Toole, and Al Pacino. The animators had the robot imitate the actors, using dialog from the clips for the performance. “It was astounding,” says Farrar. “You knew right away who it was.” Bay eventually cast Weaving as Megatron and Peter Cullen, who voiced Optimus Prime in the animated television series The Transformers, as Optimus.

To create expressions on Optimus’s face, animators controlled 200 parts, individually or with preset groups. Because a robot couldn’t have a tongue or teeth, a series of rigid pins inside his mouth provided movement for phonemes, as did his segmented lips. Lip movement could affect his cheek plates. For his eyes, moving bars simulated eye blinks, and inside, 50 metal arms that looked like the rotating blades in an electric razor opened and closed to imitate dilating pupils.
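
To make the preset idea concrete, here is a loose Python sketch of how named expression groups can layer onto individual part controls; the control names and the set_attr hook are hypothetical, not ILM’s facial rig.

```python
# Loose sketch of preset-group facial control: a named expression is a
# dictionary of offsets layered onto individual part controls, which the
# animator can still adjust one by one afterward. Control names and the
# set_attr hook are hypothetical, not ILM's rig.
PRESETS = {
    "smile": {
        "lip_segment_03.rotateZ": 8.0,
        "lip_segment_04.rotateZ": 6.0,
        "cheek_plate_L.translateY": 0.2,
    },
}

def apply_preset(name, weight, set_attr):
    """Apply a preset at a given strength; set_attr writes one control value."""
    for attribute, value in PRESETS.get(name, {}).items():
        set_attr(attribute, value * weight)
```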

“We played with the rotating blades as if he adjusted his focus,” says Scott Benza, animation supervisor. “We felt it was important to get as much movement in the eyes as we could to sell Optimus as a living machine.”
 
 
Optimus Prime
Pieces: 10,108
Polygons: 1,830,898
Rig nodes: 27,744
Texture maps: 2,336
Volume of all pieces: 5,445 cubic feet

 
Scorponok

The crew used two methods to move the living machines in their robotic forms: rigid-body simulation enhanced by animators, and animation with added simulation to enhance performances. Scorponok, one of the Decepticons, is the only robot with rigid-body simulation underlying its performance. “The technology seemed appropriate because Scorponok has insect-like behavior, and it doesn’t have much personality,” says Benza.

Benza dialed in values and activated switches that set the creature in motion, and then it could move on its own. As Scorponok moved, spin forces in its tail gave it a natural follow-through. By adding gravitational forces to the scorpion-like robot’s pincers, Benza caused it to move left or right. “I don’t physically touch the model,” he says. “I move it in the same way a puppeteer might move it.”

This technique helped animators spin the five-ton creature into a realistic 360-degree roll. Benza simulated how Scorponok would move given its real weight and rotational velocities on its tail, and animators used that as a starting point. “They baked out the simulation onto their animation controllers,” Benza says. “It essentially became motion-capture data, so they could use all the tools built for motion capture.”
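
The baking step itself is conceptually simple. A minimal sketch, assuming a generic solver call and keyframe-writing call (both are placeholders, not ILM’s tools):

```python
# Minimal sketch of baking a rigid-body simulation onto animation
# controls so it behaves like motion-capture data. The solver call and
# the keyframe-writing call are placeholders, not ILM's actual tools.
def bake_simulation(controls, start_frame, end_frame, solve_frame, set_keyframe):
    """Sample the sim once per frame and key every control so animators
    can edit the result with ordinary animation and mocap tools."""
    for frame in range(start_frame, end_frame + 1):
        state = solve_frame(frame)                    # dict: control -> (translate, rotate)
        for control in controls:
            translate, rotate = state[control]
            set_keyframe(control, frame, translate=translate, rotate=rotate)
```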

Parts Department

For the other Decepticons and the Autobots, creature developers created rigging systems that gave animators control over the thousands of parts. Because they knew no system could work in every shot, the team devised a dynamic rigging system so that animators could arbitrarily group pieces together. This meant the animators could move and adjust every piece of geometry seen from the camera view.

“It was quite complicated to set up,” says Jeff White, digital production supervisor, “but it really paid off.” He gives an example: “If I were an animator working with a particular character and wanted to move a panel composed of five parts, I could pick those parts and have the system build a rig. I could put the pivots exactly where I wanted and animate it just for that shot.”
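
As a rough illustration of that kind of on-the-fly grouping, here is a short sketch written with Maya’s standard Python commands; ILM’s actual dynamic rigging system was proprietary, so treat this only as the general idea.

```python
# Rough illustration of per-shot, on-the-fly grouping in the spirit of
# the dynamic rigging described above, written with Maya's standard
# Python commands; ILM's actual system was proprietary.
import maya.cmds as cmds

def build_shot_rig(part_names, pivot_point, rig_name="panel_ctrl"):
    """Parent an arbitrary set of parts under one new transform and put
    its pivot exactly where the animator wants to rotate from."""
    ctrl = cmds.group(part_names, name=rig_name)            # one transform over the chosen pieces
    cmds.xform(ctrl, worldSpace=True, pivots=pivot_point)   # set rotate and scale pivots together
    return ctrl

# Hypothetical usage: rig a five-piece hood panel to hinge from its rear edge.
# hood_ctrl = build_shot_rig(["hood_A", "hood_B", "hood_C", "hood_D", "hood_E"], (0.0, 1.2, 0.4))
```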

The dynamic rigging helped animators correct or prevent interpenetration among the thousands of pieces, more easily handle changes in robot and shot designs, and adjust parts in close-ups.

And, equally important, the dynamic rigging made the transformations possible. “The transformations were more of an animation exercise than a technical one,” says White. “The technical challenge was in figuring out what kind of rig to build to make the pieces move.”  Most transformations happen while the vehicle or robot is on the move, but the transformation speed varies. Some happen in one frame; others in slow motion.

“There’s one shot where Optimus is opening up, and you’re right underneath him,” says Tom Martinek, sequence supervisor. “The animators make sure there’s lots of stuff and everything is moving and shaking and turning around.”

A second technical challenge was in enhancing the animation with simulations. Inside the robots, pistons, wheels, and other internal parts move procedurally. “If an animator moves a robot’s arm, maybe a spring compresses and a wheel turns,” says White, “and you want that to be somewhat consistent between shots.”
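
A toy example of that kind of procedural relationship, with made-up constants, might look like this:

```python
# Toy version of procedurally driven internals: secondary parts derive
# their motion from the animated arm value instead of being keyed by
# hand, so the relationship stays consistent from shot to shot.
# All constants are made up for illustration.
import math

def internals_from_arm(arm_angle_deg, lever_length=1.0, wheel_radius=0.15, spring_rest=0.4):
    """Map one animated value (arm angle) to a wheel spin and a spring length."""
    travel = math.radians(arm_angle_deg) * lever_length            # arc length swept by the arm
    wheel_spin_deg = math.degrees(travel / wheel_radius)           # wheel rolls to match that travel
    spring_length = spring_rest * (1.0 - 0.25 * abs(math.sin(math.radians(arm_angle_deg))))
    return wheel_spin_deg, spring_length
```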

Outside, the robots needed to interact with the environment. “When you have 30-foot-tall robots fighting in a fairly closed environment, they’re going to destroy a lot of stuff that’s in their way,” says White. “So, we did explosions, concrete destruction, blowing trees, and a water-interaction simulation. We’re using every type of simulation possible.”
 
Appetite for Destruction

At its most basic, Transformers is a wartime drama, with Autobots fighting for their lives against the Decepticons. And, in the third act, all hell breaks loose. 

“In these sequences, it’s not just ‘put the robot there, match the lighting, and you’re done,’” says Hilmar Koch, CG supervisor. “It’s layer after layer of elements to get the destruction and mayhem Michael wanted.”

During a final showdown, Megatron and Optimus fly into and crash through buildings—practical, miniature, and CG—as they make their way to a rooftop. Then, they fall between two tall buildings, smashing into the sides and pulling off fire escapes as they fall. These tall buildings are CG, the destruction is CG, and the robots are CG. An actor cradled in Optimus’s hand, though, is real, filmed on a greenscreen stage.

ILM used its own software and Maya plug-ins to wreak much of the destruction, by cracking, bending, and shattering NURBS surfaces and creating debris. “Pieces could fly off, hit something else, and break again,” says Koch. “It might not sound like much of a challenge, but it was.” For smoke trails, the crew used Maya and ILM’s Zeno particle systems.
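
The recursive part of that behavior, pieces breaking and then breaking again, can be sketched in a few lines; the thresholds and fragment counts below are arbitrary illustration values, not ILM’s solver.

```python
# Toy recursion for the "pieces break, then break again" behavior: a
# fragment that takes a hard enough hit splits into smaller chunks, each
# of which can split again on a later, weaker impact. Thresholds and
# fragment counts are arbitrary illustration values, not ILM's solver.
import random

def shatter(piece_size, impact_energy, break_threshold=1.0, min_size=0.05):
    """Return the fragment sizes produced by one impact."""
    if impact_energy < break_threshold or piece_size <= min_size:
        return [piece_size]                          # piece survives intact
    count = random.randint(2, 4)
    fragments = []
    for _ in range(count):
        fragments.extend(shatter(piece_size / count, impact_energy * 0.4,
                                 break_threshold, min_size))
    return fragments
```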

To tear away the fire escapes, though, the technical directors used, in effect, the same kind of motors and “sticktion” technology that moved Davy Jones’s tentacles in Pirates of the Caribbean. “On Pirates, we had Davy Jones’s tentacles stick to his body and tear away,” says White. “So we pushed that technology forward to have parts of a fire escape stick together as we tear it off a building.”
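
Stripped to its essence, the stick-then-tear idea is a per-joint force threshold. A minimal sketch, with a made-up threshold value:

```python
# Stripped-down version of the stick-then-tear idea: neighboring pieces
# stay glued until the force pulling on a joint exceeds its threshold,
# at which point that joint breaks and the piece tears free. The
# threshold value is a made-up placeholder.
def surviving_joints(joints, joint_forces, break_force=50.0):
    """joints: iterable of (piece_a, piece_b) pairs; joint_forces: dict keyed by joint."""
    return {joint for joint in joints if joint_forces.get(joint, 0.0) <= break_force}
```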

Bay filmed many of the buildings, the explosions, and the destruction. “Michael [Bay] always shoots stuff in-camera when he can,” says Koch, “which sometimes makes things hard. But, we’re really grateful because if we had to destroy everything he destroyed, it would be horrible.” Because Bay shot scenes in several locations, especially those for the long fight in the city, the technical directors assembled synthetic plates.

“We’ve got shots from downtown Los Angeles, Detroit, and another city mixed into an all-synthetic background with a miniature shot in San Rafael,” says Koch.
 

ILM created the reflections on Optimus Prime (at top) using environment maps and raytracing. For particle-based effects (at bottom), the studio used Autodesk’s Maya and ILM’s Zeno.
 
Looking Good
 
Bumblebee
Pieces: 7,608
Polygons: 1,511,727
Rig nodes: 19,722
Texture maps: 8,094
Volume of all pieces: 1,069.01 cubic feet

Some of the hardest work on the film was in lighting the synthetic machines and environments to match the location shots. Although lighters didn’t try to even out the lighting from shot to shot, the CG elements had to fit into the plates. “Michael says that matching one shot to another is old style,” says Koch. “I agree. It’s part of the dirt and grit of a movie that the lighting doesn’t match up. As long as the robots fit within the shots, in most cases we say, ‘We’re done here.’”

Easier said than done, though. Making the robots look good in the shots started with the viewpainters, who painted thousands of parts with texture, dirt, bump, displacement, and other maps in Adobe’s Photoshop, and then used Zeno to lay out UVs and fit the maps onto the models. Optimus Prime had 2,336 texture maps.

“We started off with a pie-in-the-sky idea of using shaders,” says Ron Woodall, viewpaint supervisor. “But, it just doesn’t work. It’s the whole dirty factor. They spit-polished the real cars between takes, and the early assumption was that because the cars are clean, we have to make clean robots. But, when they’re clean they look fake, because CG inherently looks fake.”

Moreover, bump maps and displacement maps added complexity without adding geometry. The combination of detailed textures and displacements helped make the robots look like they were really built from car parts. “We’ve pushed hard to make the brass, the engine-cast parts, the metallic paint subsurface that you see with the clear coat on top, and other surfaces look good,” says Farrar. “It’s all in there. The paint department is extraordinary. Making this stuff look good is all about textures and surfaces and how the light reflects. And, the lighting methods are a real departure from a lot of computer graphics lighting.”

Although the surface properties are painted, the reflections are calculated.
 

At top, a robot is poised to do battle in LA. At bottom, modelers created two versions of the Autobot Bumblebee, one for the 1974 Camaro and another for the spiffy 2007 model.
 
Re-Actions

Lighting the vehicles in Transformers when they were CG was difficult, but nothing compared to lighting the robots. “It’s all about the reflections,” says Earl. “Cars are actually designed so that all the curves reflect beautifully, which makes it difficult enough to light a car in CG to look real. But, imagine if you split that car into a thousand pieces, put it into the form of a robot, and tried to get that same feeling.”

The lighters soon discovered that environments lit for the CG cars didn’t work for the robots; they had to light the robots like actors. Moreover, the robots had thousands of reflecting parts; many shots had several robots moving at once, some of them transforming, though not simultaneously; and sometimes the robots and vehicles would transform in little more than a frame.

“It was some of the most difficult lighting we’ve done,” says Earl, “because so much of it relies on reflections.”

On set, Duncan Blackman, location matchmover, took 8k-resolution bracketed photographs of the location, chrome spheres, and gray spheres. “The high resolution on the spheres enabled us to get nice reflections,” says Earl.
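
The brackets matter because a single photograph clips either the highlights or the shadows. A simplified merge into one high-dynamic-range environment image, assuming the inputs are already linearized (real pipelines calibrate the camera response first), might look like this:

```python
# Simplified merge of bracketed exposures into one high-dynamic-range
# environment image, so both bright highlights and dark surroundings
# survive into the reflections. Assumes the inputs are already in linear
# light; real pipelines calibrate the camera response curve first.
import numpy as np

def merge_brackets(images, exposure_times):
    """images: list of float arrays (same shape, linear light); times in seconds."""
    hdr = np.zeros_like(images[0])
    weights = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = np.exp(-4.0 * (img - 0.5) ** 2)    # trust mid-exposed pixels the most
        hdr += w * (img / t)                   # scale each exposure back to radiance
        weights += w
    return hdr / np.maximum(weights, 1e-6)
```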

Hilmar Koch shows an example on screen: a shot with five Autobots (Ratchet, Jazz, Ironhide, Bumblebee, and Optimus). Ratchet, which transforms from an emergency vehicle, has blinking emergency lights on its rear, and you can see those lights reflected in the other robots, along with the actors’ faces and the environment.

“When we first rendered this shot, we thought, ‘Oh, it’s not working well,’” Koch says. “And then we found out that the reason it wasn’t working was because of these tailpipes. It sounds ridiculous now that I’m saying it, but the tailpipes weren’t reflective. Once we started seeing a face reflecting in the tailpipes, it made all the difference.”

For rendering, ILM used Mental Images’ Mental Ray for raytracing and Pixar’s RenderMan for scan-line rendering and raytracing. Lighters could switch from raytracing to scan-line rendering using an environment map within a shot.

“We could turn raytracing off and get a decent robot because we had environments that existed as 8k texture maps,” Koch says, “which is far, far higher resolution than anything we’ve done before. Typically we do half a k, so this was 16 times the information. And we needed it for the shiny surfaces.”

When lighters see something in the shot that needs raytracing, they can select that piece, click a button, and it’s raytraced. “Everything else uses the environment maps,” says Koch. “They’re compatible.”
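
In outline, that per-piece choice is just a flag checked at shading time. A hedged sketch, with placeholder callables standing in for whatever the shaders actually invoked:

```python
# Sketch of the per-piece choice: pieces the lighter flags get full
# raytraced reflections; everything else falls back to a lookup into the
# 8k environment map. Both callables are placeholders for whatever the
# shaders actually invoked.
def reflection_for_piece(piece, ray, trace_reflection, sample_environment):
    if piece.get("raytrace", False):
        return trace_reflection(piece, ray)        # exact inter-reflections, expensive
    return sample_environment(ray["direction"])    # cheap environment-map lookup
```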

That helped reduce render times, as did using level of detail. “We start out with subdivision surfaces and then drop to polygons when we hit 4GB, and have robots small enough in screen space,” says Koch. “Even so, we couldn’t have rendered this without the 64-bit machines.”
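
Koch’s level-of-detail rule can be sketched as a simple test; only the 4GB budget comes from his description, and the screen-space cutoff below is a made-up value.

```python
# Sketch of the level-of-detail rule Koch describes: subdivision
# surfaces by default, plain polygons for robots that are small in
# screen space once the scene nears the memory ceiling. Only the 4GB
# figure comes from the article; the screen-space cutoff is made up.
MEMORY_BUDGET_BYTES = 4 * 1024**3   # the 4GB ceiling mentioned above
SMALL_ON_SCREEN = 0.05              # hypothetical fraction of frame height

def choose_geometry(robot_screen_fraction, scene_memory_bytes):
    if scene_memory_bytes >= MEMORY_BUDGET_BYTES and robot_screen_fraction < SMALL_ON_SCREEN:
        return "polygons"
    return "subdivision_surfaces"
```
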
Mash Up

Compositors on this show used Apple’s Shake to bring all the elements together and fit the robots into shots with fire and smoke. “Michael [Bay] shoots dirty,” says Earl. “One day, we were shooting a big scene for some of the city fight stuff. I could see smoke and sparks going off. But, just before Michael said, ‘Action,’ he said, ‘Pop the smoke.’ All this green smoke started going off, and I’m thinking, ‘Oh, god, we have to put our robots in there.’”

To help bury the characters inside the smoky scenes, the crew rendered depth mattes to pull through some parts of the robot and keep the original smoke. Otherwise, they might put the robots on top of the smoke and look for matching smoke to layer on top.  “Michael likes to blow stuff up because he likes the atmosphere,” Earl says. “So that’s what you get.”
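
In simplified form, the depth-matte trick compares the robot’s rendered depth against an estimate of where the smoke sits and layers the plate’s smoke back over the robot wherever the smoke is closer to camera. A highly simplified numpy sketch, assuming such a smoke-depth estimate exists:

```python
# Highly simplified depth-matte composite: where the estimated smoke
# depth is closer to camera than the robot's rendered depth, the plate's
# smoke is layered back over the robot. The smoke-depth estimate is
# assumed to exist already (in production it came from rendered depth
# mattes and artist-made mattes, not from the plate itself).
import numpy as np

def composite_with_smoke(plate_rgb, robot_rgb, robot_alpha, robot_depth,
                         smoke_rgb, smoke_alpha, smoke_depth):
    a_robot = robot_alpha[..., None]
    over = plate_rgb * (1.0 - a_robot) + robot_rgb * a_robot           # robot over the plate
    in_front = (smoke_depth < robot_depth)[..., None].astype(np.float32)
    a_smoke = smoke_alpha[..., None] * in_front                        # only smoke nearer than the robot
    return over * (1.0 - a_smoke) + smoke_rgb * a_smoke
```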

All told, ILM created 450 shots, relatively few compared to some visual-effects-laden films. (Digital Domain and Asylum added 100 or so.) Yet, these shots once again prove the studio’s mettle. ILM rendered giant machines made from thousands of animated, reflecting car parts and convinced audiences that robots with human emotions could crash through downtown LA.

“I think it will be really rewarding for people to see this film,” says Farrar. “I think there’s going to be a bit of a wow factor.” 

Through most of the film, the huge Decepticon leader, Megatron, is 35 feet tall, but at 40 feet, the frozen Megatron is even bigger.


Autobot Totals
Pieces: 35,592
Polygons: 7,435,478
Rig nodes: 95,247
Texture maps: 20,258
Total miles if pieces placed end to end: 3.11
 
 
Total for All Robots
Pieces of geometry: 60,217
Polygons: 12,509,502
Rigging nodes: 144,341
Texture maps: 34,215
 

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.