
Creating Ahmanet in The Mummy

SIGGRAPH Talk 2017

August 1, 2017

by Curtis Andrus, Endre Balint, Chong Deng, Simon Coupe

ABSTRACT

In The Mummy, much of MPC’s work involved augmenting the Ahmanet character with various CG elements. These include eye splitting, runes, rotten and torn skin, and other effects. See Figure 1 for an example.

These elements needed to be added on top of a live performance, so tracking a 3D model to Ahmanet’s face was necessary. This sort of work is not uncommon, but with the high volume of shots MPC completed for this show, it was clear that new tools would be needed to simplify the process.

Figure 1: Close-up shot of Ahmanet’s face with additional CG elements. ©2017 Universal Pictures. All rights reserved.

1 OPTICAL FLOW BACKGROUND

The core of our software work was a deformer built to handle facial tracking on a previous show, based on optical flow methods. Artists use Nuke to generate a motion vector pass of the plate, which is an image containing velocity information for each pixel. A deformer in Maya then uses this motion information to push the vertices of a model in the direction that the plate is moving. This allows the deformer to automatically capture changes in facial expression, for example.
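The deformer itself is proprietary, but the core advection step is simple to sketch. The following minimal NumPy sketch, assuming a basic pinhole camera (the function name and conventions are ours, not MPC’s implementation), samples the motion pass at each vertex’s projected pixel and offsets the vertex parallel to the image plane:

    import numpy as np

    def advect_vertices(verts, motion, focal_px, width, height):
        """Push camera-space vertices along one frame of a motion vector pass.

        verts  : (N, 3) positions, camera looking down +Z
        motion : (H, W, 2) per-pixel velocity in pixels (the Nuke pass)
        """
        out = verts.copy()
        for i, (x, y, z) in enumerate(verts):
            # Project the vertex to pixel coordinates with a pinhole model.
            px = focal_px * x / z + 0.5 * width
            py = focal_px * y / z + 0.5 * height
            u = int(np.clip(px, 0, width - 1))
            v = int(np.clip(py, 0, height - 1))
            dx, dy = motion[v, u]  # image-space motion, in pixels
            # Convert the pixel offset back to a camera-space offset at this
            # depth, moving the vertex parallel to the image plane.
            out[i, 0] += dx * z / focal_px
            out[i, 1] += dy * z / focal_px
        return out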

2 WORKFLOW TOOLS

The technique proved very successful and has found use on several of our other shows. However, we found there was a learning curve for newer artists; many artists in our Technical Animation department have little experience with Nuke, for example. The deformer also isn’t a one-click solution: artists would run the deformer for a few frames, stop as the model diverged from the plate, make a few corrections, and run the deformer again. So we built a set of tools to simplify this process.

2.1 Motion Vector Generation

First, we decided to automate the motion vector generation process, so artists wouldn’t have to use Nuke at all. For this we used Renderflow [Auty et al. 2016], MPC’s rendering/preview automation tool, to automatically load the plate into a Nuke template set up to generate a motion pass on the renderfarm. The result was then released into our asset management system and could be picked up by the Maya deformer. This process was all wrapped in a simple UI.
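MPC’s Renderflow template is internal, but a minimal stand-in for the Nuke side of the job might look like the sketch below (assuming NukeX, whose VectorGenerator node computes per-pixel motion; the node and knob choices here are our assumptions, not the production template):

    import nuke

    def render_motion_pass(plate_path, out_path, first, last):
        """Read the plate, compute motion vectors, write the pass to disk."""
        read = nuke.nodes.Read(file=plate_path, first=first, last=last)
        vecgen = nuke.nodes.VectorGenerator()  # NukeX-only node
        vecgen.setInput(0, read)
        write = nuke.nodes.Write(file=out_path, channels='all')
        write.setInput(0, vecgen)
        # Render the frame range, exactly as the farm job would.
        nuke.execute(write, first, last)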

This tool could also generate alpha images of existing animated models, to mask motion vector generation to certain areas. This way we could avoid contamination from motion vectors slightly outside of the character (such as background motion).
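Conceptually, the masking step amounts to zeroing the vector pass wherever the character’s matte is empty. A minimal NumPy sketch of the idea (in production this lived inside the Nuke template):

    import numpy as np

    def mask_motion(motion, alpha, threshold=0.5):
        """Discard motion vectors that fall outside the character's alpha.

        motion : (H, W, 2) vector pass; alpha : (H, W) matte of the CG model.
        """
        mask = (alpha > threshold).astype(motion.dtype)
        return motion * mask[..., None]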

Many shots in The Mummy were dimly lit, so the original plate often wasn’t enough to generate a good motion pass. To help with these cases, our motion vector generation tool allowed for a small set of basic plate modifications: brightness/contrast, blur/sharpen, and RGB channel selection. Under the hood these simply connected to a Python script that modified the Nuke template before execution. By running a scaled-down image through this Nuke template in the background, our tool was also able to display a small preview to the user to provide visual feedback, all without opening Nuke.
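As an illustration, these plate modifications could be wired up by inserting a few standard nodes ahead of vector generation; the sketch below is an assumption about how such a script might look, not MPC’s actual template code:

    import nuke

    def prep_plate(node, gain=1.0, blur_size=0.0, channel=None):
        """Apply the tool's basic plate tweaks before vector generation."""
        if channel:
            # Channel selection: copy one channel (e.g. 'red') into all three.
            shuffle = nuke.nodes.Shuffle(red=channel, green=channel, blue=channel)
            shuffle.setInput(0, node)
            node = shuffle
        # Brightness via a simple gain on a Grade node.
        grade = nuke.nodes.Grade()
        grade['white'].setValue(gain)
        grade.setInput(0, node)
        node = grade
        if blur_size > 0:
            # Blur to suppress grain in dark plates (a Sharpen node would
            # cover the sharpening case).
            blur = nuke.nodes.Blur(size=blur_size)
            blur.setInput(0, node)
            node = blur
        return node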

2.2 Presets

After the motion vector generation, we turned our attention to the Maya deformer. Artists found that simply pushing vertices along motion vectors would eventually diverge significantly from the image on the plate. Atmospheric effects, such as smoke in front of the character, could also introduce undesirable artifacts. To work around this, artists got into the habit of running the deformer in small chunks, blurring the deformation in some areas and removing it in others, repeating this for each piece of the sequence until they had a complete track. To avoid all this manual work, we built preset functionality into the tool to bundle these recurring settings.
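A preset of this kind might plausibly bundle the chunk length and the per-region blur and removal settings described above. The sketch below is purely illustrative: the preset names, the fields, and the deformer.solve call are hypothetical, not MPC’s API:

    # Hypothetical preset table; field names are illustrative only.
    PRESETS = {
        "default": {
            "chunk_size": 10,      # frames to advect before pausing to check
            "blur_weight": 0.25,   # smooth the deformation in noisy regions
            "mask_regions": [],    # vertex sets where deformation is discarded
        },
        "smoke_heavy": {
            "chunk_size": 5,       # diverges faster, so re-check more often
            "blur_weight": 0.5,
            "mask_regions": ["occluded_by_smoke"],
        },
    }

    def run_tracking(deformer, preset_name, start, end):
        """Run the deformer chunk by chunk with a preset's settings."""
        p = PRESETS[preset_name]
        for frame in range(start, end, p["chunk_size"]):
            deformer.solve(frame, min(frame + p["chunk_size"], end),
                           blur=p["blur_weight"], masks=p["mask_regions"])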

2.3 Quality Check

Finally, to validate the quality of their tracking, we modified an existing tool to project the plate onto the tracked model. It would then unwrap the image into UV space and render out the result. This let artists visualize how much the model was “sliding” relative to the plate. Ideally, the tracked part of the model should look static in this space. See Figure 2 for an example image created by this tool.

Figure 2: The QC tool provides an unwrapped version of the plate. Motion in these videos indicates flaws in the tracking. ©2017 Universal Pictures. All rights reserved.
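To illustrate the unwrapping step, the point-sampled sketch below projects each tracked vertex into the plate, samples a colour, and scatters it to the vertex’s UV position. A production tool would rasterise full triangles; the camera model and output resolution here are assumptions. If the tracking is good, successive frames of this UV-space render should look static:

    import numpy as np

    def unwrap_plate(plate, verts, uvs, focal_px, size=512):
        """Sample the plate at each vertex's projection, write into UV space."""
        h, w = plate.shape[:2]
        out = np.zeros((size, size, 3), plate.dtype)
        for (x, y, z), (u, v) in zip(verts, uvs):
            # Project the camera-space vertex into the plate (pinhole model).
            px = int(np.clip(focal_px * x / z + 0.5 * w, 0, w - 1))
            py = int(np.clip(focal_px * y / z + 0.5 * h, 0, h - 1))
            # Scatter the sampled colour to the vertex's UV texel (V up).
            tu = int(np.clip(u * (size - 1), 0, size - 1))
            tv = int(np.clip((1.0 - v) * (size - 1), 0, size - 1))
            out[tv, tu] = plate[py, px]
        return out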

3 RESULTS

Overall, the tools were successful, making the optical flow deformer accessible to artists with no Nuke experience and allowing them to get an acceptable first-pass tracking result without much work. Artists could then refine from there, using our UV checking tool to validate their work along the way. Unfortunately, the effectiveness of the technique is highly dependent on the shot, so a lot of manual work was still required of the artists. In the future we hope to extract more information from the motion vectors and build a better 3D tracking system.

ACKNOWLEDGMENTS

We’d like to thank Ben Jones, the original creator of the optical flow deformer, as well as Francesco Pinto for testing our workflow tools on The Mummy.

REFERENCES

Jared Auty, Marlene Chazot, Ruben D. Hernandez, and Marco Romeo. 2016. Rapid, High Quality Dailies with RenderFlow for The Jungle Book. In ACM SIGGRAPH 2016 Talks (SIGGRAPH ’16). ACM, New York, NY, USA, Article 70, 2 pages. DOI: http://dx.doi.org/10.1145/2897839.2927415
