Kingdom of the Planet of the Apes’ VFX lead argues that the movie uses AI ethically
10.05.2024 - 14:13
/ polygon.com
Right now, every industry faces discussions about how artificial intelligence might help or hinder work. In movies, creators are concerned that their work might be stolen to train AI replacements, their future jobs might be taken by machines, or even that the entire process of filmmaking could become fully automated, removing the need for everything from directors to actors to everybody behind the scenes.
But “AI” is far more complicated than ChatGPT and Sora, the kinds of publicly accessible tools that crop up on social media. For visual effects artists, like those at Wētā FX who worked on Kingdom of the Planet of the Apes, machine learning can be just another powerful tool in an artistic arsenal, used to make movies bigger and better-looking than before. Kingdom visual effects supervisor Erik Winquist sat down with Polygon ahead of the movie’s release and discussed the ways AI tools were key to making the movie, and how the limitations on those tools still make the human element key to the process.
For the making of Kingdom of the Planet of the Apes, Winquist says some of the most important machine-learning tools were called “solvers.”
“A solver, essentially, is just taking a bunch of data — whether that’s the dots on an actor’s face [or] on their mocap suit — and running an algorithm,” Winquist explains. “[It’s] trying to find the least amount of error, essentially trying to match up where those points are in 3D space, to a joint on the actor’s body, their puppet’s body, let’s say. Or in the case of a simulation, a solver is essentially taking where every single point — in the water sim, say — was in the previous frame, looking at its velocity, and saying, ‘Oh, therefore it should be here [in the next frame],’ and applying physics every step of the way.”
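The simulation solver Winquist describes — take every point's previous position, look at its velocity, apply physics, and predict where it should be next frame — can be sketched as a toy explicit-Euler step. This is a generic textbook formulation, not Wētā's actual water solver; all names and values are illustrative.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, acting on every point

def solver_step(positions, velocities, dt):
    """Advance every simulation point by one frame.

    positions, velocities: (n_points, 3) arrays.
    Applies gravity, then moves each point along its new velocity:
    "Oh, therefore it should be here [in the next frame]."
    """
    velocities = velocities + GRAVITY * dt   # apply physics every step
    positions = positions + velocities * dt  # predict next position
    return positions, velocities

# Three points starting at rest, stepped once at 24 fps
pos = np.zeros((3, 3))
vel = np.zeros((3, 3))
pos, vel = solver_step(pos, vel, 1 / 24.0)
```

Real production solvers add pressure, viscosity, and collision terms and use far more stable integration schemes, but the frame-to-frame structure is the same.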
For the faces of Kingdom’s many ape characters, Winquist says the solvers might manipulate digital ape models to roughly match the actors’ mouth shapes and lip-synching, giving the faces the vague creases and wrinkles you might expect to form with each word. (Winquist says Wētā originally developed this technology to map Josh Brolin’s Thanos performance onto a digital model in the Avengers movies.) After a solver works its magic, the Wētā artists get to work on the hard part: taking the imagery the solver produced as a starting point and polishing it until it looks perfect. This, for Winquist, is where the real artistry comes in.
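Wētā's face solver is proprietary and far more sophisticated, but the core idea Winquist describes — “trying to find the least amount of error” when matching tracked points on an actor to a digital puppet — maps onto a standard least-squares formulation: solve for the facial-shape (“blendshape”) weights whose combined marker offsets best reproduce the tracked positions. This is a hypothetical textbook sketch, not the studio's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_markers, n_shapes = 50, 4

# Neutral marker positions on the digital face, flattened to one vector
neutral = rng.normal(size=(n_markers, 3)).ravel()

# Each column of B: the marker displacements one facial shape produces
B = rng.normal(size=(n_markers * 3, n_shapes))

# Pretend "ground truth" performance: known weights drive the markers
true_w = np.array([0.8, 0.1, 0.0, 0.5])
tracked = neutral + B @ true_w  # simulated tracked actor markers

# The solver: find weights minimizing || B w - (tracked - neutral) ||^2
w, *_ = np.linalg.lstsq(B, tracked - neutral, rcond=None)
```

With clean synthetic data the recovered weights match the driving weights; on a real capture the residual error is exactly the part the human animators then hunt down and polish by hand.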
“It meant that our facial animators can use it as a stepping-stone, essentially, or a trampoline,” Winquist explains with a laugh. “So [they can] spend their time really polishing and looking for any places where the solver was doing something on an ape face that didn’t really convey what the actor was doing.”
Instead of having to