British director Gareth Edwards got his start when his debut feature, Monsters, a low-budget independent film, premiered at SXSW in 2010 and went on to screen at Cannes. The acclaimed sci-fi thriller established Edwards as a multi-faceted filmmaker, who also worked as the movie’s writer, production designer, cinematographer and visual effects artist.
Hollywood quickly took notice. He was soon tapped to direct Godzilla, the hit 2014 reboot of the famed franchise, and then 2016’s Rogue One: A Star Wars Story, the first installment of the Star Wars anthology series and a global billion-dollar box-office blockbuster.
His new film is another epic sci-fi action thriller, The Creator, which stars John David Washington (Tenet), Gemma Chan (Eternals), Ken Watanabe (Inception), Sturgill Simpson (Dog), newcomer Madeleine Yuna Voyles and Academy Award winner Allison Janney (I, Tonya). The film’s screenplay, by Edwards and Chris Weitz, from a story by Edwards, is set in the middle of a future war between the human race and the forces of artificial intelligence. Joshua (Washington), a hardened ex-special forces agent grieving the disappearance of his wife (Chan), is recruited to hunt down and kill the Creator, the elusive architect of advanced AI, who has developed a mysterious weapon with the power to end the war — and mankind itself. Joshua and his team of elite operatives journey across enemy lines, into the dark heart of AI-occupied territory, only to discover the world-ending weapon he’s been instructed to destroy is an AI in the form of a young child.
Here, in an exclusive interview with Post, I spoke with Edwards about making the film, working on the VFX, and his love of post, editing and the DI.
You co-wrote this with Chris Weitz, who co-wrote Rogue One. What sort of film did you set out to make?
“A whole bunch of films rolled into one. It’s interesting how it came to be. I was writing another sci-fi film, and I hate writing, and I’d gone to Thailand to work on it, and I finished it and then got a text from Jordan Vogt-Roberts, who did Kong: Skull Island. He was in Vietnam, and I ended up going there to meet him, and touring the whole country, and you can’t do all that without thinking about the war and films I grew up loving, like Apocalypse Now. But I was seeing it all through the prism of science fiction, and I’d see monks walking into temples and imagined them as robots. So I got fascinated with the idea of, if someone made Apocalypse Now, but in the Blade Runner universe. That’s a style of film I’ve not seen before, and that gave me the ‘world.’ And then the whole story idea was sparked by seeing this factory in the middle of nowhere in the Midwest as I was driving past, and as a joke I thought, ‘What if they’re building robots in there? And imagine if you were a robot and you escaped, and you saw all the farmland and sky for the first time?’ That idea got me really excited, and by the time I arrived at my destination I had the whole movie in my head. And when things come together that quickly, it usually tells you it’s worth pursuing, and it ended up being my next film.”
I heard you did a lot of location scouting and test shooting all over Asia, including Vietnam, Cambodia, Japan, Indonesia, Thailand and Nepal. What were the main challenges of pulling all this together?
“As important as the story and screenplay are, the actual filmmaking process is equally important, and I wanted to do this quite differently from the normal way, and essentially do it backwards.
“Normally, you work with concept artists to imagine this crazy, ambitious world. The studio looks at it and says, ‘It’ll cost $250 million, you can’t possibly find these locations, so you’ll have to build it all with green screen.’ I didn’t want to do that. Instead, I wanted to shoot amazing locations in the real world that match what’s in the screenplay, edit the movie, then design the world on top of what we’ve edited. That approach is more efficient. So we scouted all these amazing locations, from Tokyo to the Himalayas, and active volcanoes in Indonesia, and I took a prosumer camera and a 1970s anamorphic lens, and shot all this material. Then we went to ILM and asked them, ‘Can you do the VFX process without the usual tracking markers, without people in motion capture? Let’s try and reverse-engineer it.’
“So they went for it and everyone was very surprised it went so well, and it cost very little. So we showed that teaser to the studio and they greenlit it.”
So you didn’t do all the usual previs for all the huge action sequences?
“No, but I did storyboard the set pieces so everyone could see it was doable, even though we didn’t stick to the storyboards.”
I assume you started integrating post and all the VFX on day one?
“It was tricky as we didn’t actually decide who was going to be a robot and who wasn’t until halfway through post. I wanted the robots and the AI to be very human and naturalistic in their behavior, and I found that if you told someone they’d be AI, they’d begin to behave differently. The way I wanted to shoot it was to go into these villages and have our background artists and main cast all interact with the locals in these scenes, and then once we’d cut it all, decide who’d be most interesting as a robot. And the best performances were always the most naturalistic and casual. Usually, when you spend a lot of money making someone into a CG AI, you want them front and center, but when you throw it away, it’s far more effective and real.”
You had two DPs – Oscar winner Greig Fraser (Dune), who shot Rogue One, and Oren Soffer. How did that work?
“Greig was on board from the start, but he had to leave to shoot Dune 2. Oren’s his protégé, so Greig worked remotely for the shoot in Thailand and Oren was on-set, and then they teamed up at the end for some stage work at Pinewood.”
Is it true the film was shot on the low-cost prosumer Sony FX3 camera?
“It’s true. We used it because the color science is so good now and it’s so small and lightweight, and by the time you add a gimbal to it, I could still hold it all day long, which you can’t with a bigger camera and all the backpacks and gear it needs. It also has an ISO of 12,800, which meant we could shoot in moonlight if necessary. So we didn’t need loads of trucks with massive lights all the time. We could use lightweight handheld LEDs to light scenes, so we could move faster and be freer in the way we shot. We also shot a little with the Sony FX9 when we did the Pinewood LED screen stage work, so we could synchronize the refresh rate of the LED screen to the camera.”
It sounds like it was a real run-and-gun, guerrilla filmmaking approach?
“It was. We had a very small crew of just five around the camera, as I wanted to shoot the real locations with people and animals wandering in and out, and I think people thought we were just YouTubers, not a movie shoot. But we had a bigger infrastructure for the main shoot. I wanted to be able to shoot 360 degrees, so the video village was always hidden.”
Where was post?
“The original plan was to edit and do post at ILM, but COVID destroyed that, and we ended up doing it all on the Fox lot. We edited for over nine months, and we had a small VFX producer team with us, and did the sound there too.”
Talk about editing with three editors: Hank Corwin, Scott Morris and Joe Walker.
“Joe was on from the start and did all the assembly while we were shooting, and it was a really great foundation, but, just like Greig, he had to leave to do Dune 2. Luckily, I then got Hank when I got back from Thailand, and he and Scott basically cut the whole film.”
What were the main editing challenges?
“Joe’s assembly was about five hours, and the big thing was, ‘How do we get it down to two hours without missing or destroying anything?’ It was a very hard trial-and-error process, and Hank was brilliant at turning 20 minutes of flashbacks into two minutes, for instance.”
There's obviously a ton of visual effects shots. What was your approach to dealing with them?
“We had over 1,700 shots, and ILM did most of the VFX. We also had a lot of other vendors, including Wētā FX, Folks VFX, MARZ, Misc Studios, Fin Design + Effects, Supreme Studio, Outpost VFX, Crafty Apes, Jellyfish Pictures, VFX Los Angeles, Frontier VFX and Clear Angle Studios. The most difficult VFX challenge was designing the AI look of the young child, mainly because it was so important and we had to get it right. You were going to see her more than any of the other VFX, and the movie lives or dies by the audience’s response to her, so we had to start very early on it and work on it right up until the very last day of post.”
Tell us about the DI. Where did you do it?
“At FotoKem with colorist Dave Cole, who’s done a lot of films with Greig. I was there for all the sessions and Dave was amazing. The thing with this was, we shot digitally, but it was really important to me that it looked like film. So, we spent a lot of time on it, and the first week we just concentrated on the look with their color scientist, trying to emulate film stocks from the ‘70s and early ‘80s, and we added grain, and all the work paid dividends. We were originally going to scan out to film, but in the end we just didn’t have time, so everything in the movie is digital. FotoKem has their own emulation process, and they did a challenge test and scanned out a little bit of film for me, and then did their emulation, and I was 100 percent certain I’d be able to tell which was which, but I was wrong. So that hurt my whole argument for putting it out on film. They’re one of the last film labs in the world, so they really understand what makes film look the way it does, and they do a lot of digital work and know how to emulate a film look. I’m really happy with the way it looks, and I think the film is the closest I’ve ever got to it being like what I had in my head.”