
‘Pleroma’: Can A Robot Prototype Save Workers from an Unleashed AI?

Visual effects supervisor Chris Browne has big plans for his sci-fi project, conceived as a feature film and TV series – for the short film proof-of-concept, he created 200 digitally augmented shots featuring photoreal robots, virtual environments, and swarming drone bots.

Conceived as a feature film and television series by animation veteran Tim Hedrick and visual effects supervisor Chris Browne, Pleroma revolves around an AI-run corporation that turns against its human employees, who place their hopes of survival in a prototype robot.

The proof-of-concept short film was directed by Browne, who also created its 200 digitally augmented shots containing six photoreal robot characters, virtual environments, destruction, and swarming drone bots.

The project was conceptualized and executed during the pandemic. “I was also employed at DreamWorks full-time, so I spent evenings and weekends working on it for about a year and a half to two years,” remarks Browne. “The feature script was written over a span of six months, and the short is pulling chunks out of that and sculpting it into its own story.”      

Robots are a science fiction staple and have driven the narratives of Hollywood franchises established by filmmakers like Paul Verhoeven, James Cameron, and Ridley Scott.  “A lot of those films like RoboCop, The Terminator and Blade Runner take place so far into the future, whereas I try to keep it as close to present day as possible,” notes Browne.  “The robots look and feel as real as what you would see in a Boston Dynamics promotional video; the only difference is that they’re a little more advanced in their intelligence.”

Check out the trailer, then learn more about how Browne produced the short film.

The film enlists several different robot types. According to Browne, “There is a big clunky industrial robot built by an amateur, which has wires and cables hanging out so you can see its guts, and is a mishmash of junkyard scraps thrown together; that’s the hero of the piece.  The villain is a slick robot designed by the company which resembles a mannequin.”

Motion-capture, keyframe animation, and rotomation were combined to get the proper motion and poses for the robots.  “The clunky robot is called STAN and his proportions are quite different from those of a human, so the motion-capture I converted over to him needed a lot of adjustments,” states Browne.  “An important part of keeping it as realistic as possible was that I didn’t use ball joints but hinges and swivels. STAN is much more machine-like.”  There was only one CG model of STAN.  “I had switches for texture damage [such as dirt and grime] which I then enhanced quite a bit.”  The company robot, named ZED, was much closer to a human, enabling Browne to do a direct translation of the motion-capture.
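
Browne doesn’t detail the math, but the adjustment he describes (collapsing a three-degree-of-freedom mocap rotation onto a one-degree-of-freedom hinge) corresponds to the standard swing-twist projection used in retargeting. Below is a minimal Python sketch of that idea; the function name and values are hypothetical illustrations, not STAN’s actual rig code.

```python
import numpy as np

def hinge_angle(q, axis):
    """Project a ball-joint quaternion (w, x, y, z) onto a hinge axis.

    Swing-twist decomposition: keep only the 'twist' component about the
    hinge axis and return its angle in radians -- the single degree of
    freedom a hinge-and-swivel joint can actually reproduce.
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    w, v = q[0], np.asarray(q[1:], dtype=float)
    proj = np.dot(v, axis) * axis              # rotation component along the hinge
    twist = np.concatenate(([w], proj))
    n = np.linalg.norm(twist)
    if n < 1e-9:                               # pure swing: the hinge stays put
        return 0.0
    twist = twist / n
    return 2.0 * np.arctan2(np.dot(twist[1:], axis), twist[0])

# Example: a 90-degree mocap rotation about an axis tilted away from the
# hinge's Z axis yields a smaller usable hinge angle (roughly 87 degrees here).
theta = np.pi / 2
tilt = np.array([0.0, 0.3, 0.954])
tilt = tilt / np.linalg.norm(tilt)
q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * tilt))
print(np.degrees(hinge_angle(q, [0.0, 0.0, 1.0])))
```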

All the robots were entirely CG and integrated into the live-action plates, including a robot that rolls as a disk, unfolds into a crab, and starts attacking in that manner. “It works mechanically and I didn’t use any scale cheats,” Browne shares. “The disk unfolds in sections that fit back in perfectly.  I spent a couple of weeks researching different kinds of hinges and pistons to make sure when I built it, they would work when it folds in and out.  It was complicated to figure out but there are only a few sections that unfold.” 

Key components for the VFX pipeline were Unreal Engine, Houdini, Maya, and Nuke. “I am at the point where I do a lot of scripting and coding tools within Houdini, Maya and Nuke to streamline the pipeline process,” notes Browne, who coded over 20 custom tools over the course of the production.  “I created a tool for the dirt coming off the rolling disk robot that allows you to plug in a piece of geometry which is animating along the ground. It will automatically create dirt spraying off of it whenever it connects with the ground.  Every time you light a CG character you create all of these individual layers that you can tweak to help integrate them into the environment.  I built a gizmo that allowed me to quickly use the render layers.  There are numerous spherical nanobots, so I created a tool that would customize the level of detail to make them different.  Sometimes there are lots of panels and on other occasions only a few.  I wanted to procedurally alter how they move and the energy bolts attached to them.  It all had to work as a tool that I could randomize. Then I needed to make tools that allowed them to swarm.  In the film, nanobots swarm into different shapes and patterns.  I had to make a tool that caused the nanobots to swarm and deform to the geometry I plugged in.”
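
Browne’s tools live inside Houdini, Maya, and Nuke and haven’t been published, but the swarm-to-geometry idea he describes can be sketched generically: scatter target points uniformly over the plugged-in mesh, then steer each nanobot toward its assigned point every frame. The Python/numpy sketch below is purely illustrative; every name and parameter is an assumption, not Browne’s code.

```python
import numpy as np

def scatter_on_mesh(verts, tris, count, rng):
    """Scatter target points uniformly over the plugged-in geometry."""
    v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    # Area-weighted triangle selection so larger faces receive more targets.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    pick = rng.choice(len(tris), size=count, p=areas / areas.sum())
    # Square-root trick gives uniform barycentric coordinates per triangle.
    r1 = np.sqrt(rng.random(count))[:, None]
    r2 = rng.random(count)[:, None]
    return (1 - r1) * v0[pick] + r1 * (1 - r2) * v1[pick] + r1 * r2 * v2[pick]

def step(pos, vel, targets, dt=1 / 24, accel=6.0, drag=0.92):
    """Advance one frame: each bot accelerates toward its target point."""
    vel = drag * vel + accel * (targets - pos) * dt
    return pos + vel * dt, vel

# Usage: 200 nanobots swarm onto a unit quad (two triangles).
rng = np.random.default_rng(7)
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
targets = scatter_on_mesh(verts, tris, 200, rng)
pos = rng.random((200, 3)) * 5.0            # bots start scattered in space
vel = np.zeros_like(pos)
for _ in range(120):                        # ~5 seconds at 24 fps
    pos, vel = step(pos, vel, targets)
```

Swapping the quad for any other mesh deforms the swarm to that shape, which is the behavior Browne describes for the nanobots forming different patterns.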

Check out the VFX breakdown:

Principal photography occurred at TRIUMF, Canada’s national laboratory for particle and nuclear physics situated in Vancouver, which was no small feat in itself.  “Back in the day I ran my own boutique animation and visual effects studio, and TRIUMF hired us to do these physics simulations of their experiments,” reveals Browne.  “They were like technical videos.  They gave us a tour of the facility and you feel like you’re in this massive science fiction set, except it’s real.  There is a hadron collider, gigantic machinery, and scientists with lab coats and clipboards working.  Visually it was incredible.  I felt that I had to film there, so I reached out to TRIUMF, and because of our previous relationship they agreed. We had to embed scanners for radiation levels because you’re only allowed a certain time in particular areas. It was scary but I had to seize the opportunity.”

Footage was captured with the Canon 5D Mark II, GoPros and the DJI Osmo Pocket camera.  “The DJI Osmo Pocket is a cool piece of tech,” notes Browne.  “It shoots 4K, has a gimbal that keeps it smooth and steady, and I had an attachment for a telescoping pole, so I was able to hold it up and run.  I could be tracking shots of the crab robot or hanging it out the side of a building or outside a window. There is a point where the crabs are crawling up the wall of the building.  I could see on my cellphone what the camera was viewing, and you could control the pivot of the gimbal.  That was handy for those crazy dynamic robot shots that were happening.” Wide Canon lenses were favoured.  “I was probably using 22mm and 35mm.  For a few interior shots with the actress, I used a 200mm.”

Big sequences were done in Unreal Engine for logistical reasons.  “The reason for that was I had shot locations in Vancouver that I no longer had access to,” explains Browne.  “I took a lot of photogrammetry shots of those environments, so I was able to rebuild them virtually here in Los Angeles. I could stick a camera on the robots and compose any shot that I wanted.  One of them was the interior of a warehouse where ZED is gunning people down. I wanted to be able to shoot it dynamically from high-up angles and get shots that would be difficult to achieve. I brought in ZED and the human characters covered with surgical masks.”

Explosions were a must for the storyline, with Browne revealing, “I had to have a sequence where STAN is running with explosions going off all around him.  Before I had even started, that was a must.  I was trying to think how I could pull this off.  Should there be a helicopter firing at him? That didn’t make sense because he’s the hero and the humans would be flying it.  That’s when I came up with the idea of the swarm bots that were divebombing him.”

Editing was difficult because the footage was cut by Browne before any CG robots were in the shots. “I hadn’t tracked the shots yet, so I had shots that were panning by or the camera would tilt off and there would be nothing there,” states Browne.  “I was guessing what was going to be in the frame.  If it was an action shot and you’re on a big wide, I had to imagine the action that was unfolding in terms of pacing.”

Ultimately, one of the most intriguing, unique, and impressive things about Pleroma was its one-man production crew.  “I had to keep the bigger picture in mind and have many spreadsheets that tracked every stage of everything,” Browne says. “I had a breakdown of every single shot and which phase had to be completed.  I’d be jumping all over the place.  I didn’t have access to a renderfarm.  I had three desktop computers.  While one shot was rendering, I might be caching a simulation on another computer, and on my third computer I would be animating the next shot.  I was trying to stack up the things that were happening.  Sometimes all three computers might be caching a giant simulation, so I’m either waiting or going out to shoot some new plates. To have the opportunity to get my hands on every single phase is so rare in the business that I wanted to do it this time.”

Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.