<I>Avatar: Fire and Ash</I>: A VFX shot breakdown
Eric Saindon
Issue: November/December 2025


For a movie that spends so much time in and around water, it probably isn’t surprising that Avatar: Fire and Ash relied on a lot of very real, very wet filmmaking. The rapids escape sequence is a good example of how practical water work, performance and realtime visual effects all came together on-set.
 
To support the live-action photography, multiple custom pools were built for the production. The largest was a truly mammoth 250,000-gallon tank constructed at MBS Studios. Outfitted with a wave machine, the tank could produce calm water for underwater performance capture, open-ocean swell, or churning river rapids, depending on what the scene needed. It gave the filmmakers a huge amount of flexibility while keeping everything in a controlled environment.
 


At the same time, Steve Ingram and his special effects team in New Zealand built several additional pools specifically for scenes involving Jack Champion and the live-action cast. These were designed to be smaller, safer, and more focused on plate photography.

For this particular shot, the story finds the kids fresh off an escape from the Ash People, spilling downriver from dangerous rapids. Instead of shooting this in a massive tank, the team built a compact practical set: a shallow pool dressed with real river stones. Water movers kept the water flowing continuously, creating the sense of fast-moving rapids while still giving the performers a stable environment to work in.
 


What really changed the game on this sequence was how it was filmed. The shot used live depth compositing on-set. In addition to the main stereo camera, two extra cameras were rigidly attached to the rig. Those cameras fed realtime data into the system, allowing the team to calculate the depth of every pixel in the frame and generate a live depth map as the camera rolled. That depth information was then combined with the CG scene, which was being rendered in realtime using our in-house renderer. The result was a live composite that showed proper layering between the live-action, CG characters, water extensions and the surrounding digital environment. Instead of waiting for post to find out if everything lined up, Jim Cameron could see a close version of the final shot as it was being filmed. 
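The per-pixel layering described above amounts to classic depth (z) compositing: at every pixel, whichever source is nearer the camera wins. The sketch below is purely illustrative, not Wētā FX's actual system; it assumes the plate and CG depth maps arrive as plain arrays and ignores edge blending, holdouts, and soft depth transitions.

```python
import numpy as np

def depth_composite(plate_rgb, plate_depth, cg_rgb, cg_depth):
    """Per-pixel depth composite of a live-action plate and a CG render.

    plate_rgb, cg_rgb:     H x W x 3 color arrays
    plate_depth, cg_depth: H x W per-pixel distances from the camera
    Returns the nearer source's color at each pixel.
    """
    plate_nearer = plate_depth < cg_depth          # H x W boolean mask
    # Broadcast the mask over the color channels and pick per pixel.
    return np.where(plate_nearer[..., None], plate_rgb, cg_rgb)
```

In production, the live-action depth would come from the stereo solve of the witness cameras and the CG depth from the realtime renderer, updated every frame as the camera rolled.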
 
Once the live-action plate was filmed and delivered, the first step was reconstructing the physical environment digitally. A Lidar scan of the dry live-action set was used to build a detailed CG version of the riverbed. This digital riverbed wasn't just a replica; it was extended beyond the limits of the practical set to create a fully digital environment that could support complex simulations. With the digital set in place, the FX team used Wētā FX's in-house simulation system, Loki, to pour water through the CG riverbed. The goal at this stage wasn't spectacle, but fidelity. By adjusting the placement and shape of submerged CG rocks, the team could subtly influence currents and surface patterns until the simulated water closely matched the behavior seen in the live-action plate.
 


This represented a significant evolution from older workflows. In the past, animation would typically place CG characters into an environment where the water surface was represented by a relatively simple geometric plane. Any interaction between characters and water would be approximated later. On Fire and Ash, the process was inverted. A first-pass “blocking” water simulation was created before animation began, giving animators a physically grounded foundation to work against. Once that blocking simulation was delivered, the animation team introduced the CG characters, using performance capture as the basis for motion. 

Performances were adjusted to respond naturally to the simulated water: shifts in balance, resistance from currents, and physical interaction with the flow were all refined so the characters felt truly embedded in the river. That updated character motion was then published back to the FX team, who ran a high-resolution simulation with the characters fully integrated. At this stage, details like thin films of water sliding across skin, localized splashes driven by character movement, and secondary interactions that sell scale and weight were added into the simulation. The water simulation also became a central driver for other character systems: cloth, hair and muscle simulations were coupled to the water, ensuring that every element reacted consistently to the same physical forces. Rather than layering effects independently, the shot was built as a single, cohesive system.
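The architectural point here, several secondary systems driven by one shared simulation rather than layered independently, can be illustrated with a toy example. This is not Loki or any real Wētā FX code; `water_velocity` and `DragDrivenSolver` are hypothetical stand-ins showing only that every coupled system samples the same force field each frame.

```python
def water_velocity(pos, t):
    """Stand-in for a simulated water velocity field (hypothetical):
    a steady downstream push with a slight alternating cross-current."""
    x, y, z = pos
    return (0.8, 0.0, 0.1 if int(t) % 2 == 0 else -0.1)

class DragDrivenSolver:
    """Minimal point solver: relaxes its velocity toward the local
    water velocity, standing in for a cloth/hair/muscle system."""
    def __init__(self, name, pos, drag):
        self.name = name
        self.pos = list(pos)
        self.vel = [0.0, 0.0, 0.0]
        self.drag = drag

    def step(self, t, dt):
        w = water_velocity(self.pos, t)
        for i in range(3):
            self.vel[i] += self.drag * (w[i] - self.vel[i]) * dt
            self.pos[i] += self.vel[i] * dt

# One loop advances every coupled system against the same field,
# so all of them react consistently to the same forces.
solvers = [DragDrivenSolver("cloth", (0.0, 0.0, 0.0), 4.0),
           DragDrivenSolver("hair",  (0.0, 1.0, 0.0), 8.0)]
for frame in range(24):
    for s in solvers:
        s.step(frame / 24.0, 1.0 / 24.0)
```

The design choice being sketched: because each solver reads the same field instead of its own approximation, no two systems can disagree about which way the river is pushing.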
 


With all elements in place, the lighting and compositing teams brought the shot home. Using on-set lighting reference from the live-action shoot, CG lighting was carefully matched to the plates, allowing digital elements to integrate seamlessly with the photographed material. The result is a shot where technology disappears into performance: water flows with intent, characters move with purpose, and the boundary between live action and CG becomes impossible to see.

Eric Saindon is Senior VFX Supervisor at Wētā FX.

Images: © 2025 20th Century Studios. All Rights Reserved