
Directable Crowd and Ragdoll Simulation for The Lone Ranger

For the Comanches vs. Cavalry sequence in Disney’s The Lone Ranger, MPC was tasked with complex horse and rider simulations for scenes in which a crowd of stampeding Comanches is mowed down by the cavalry’s Gatling guns.

R&D
August 14, 2014

SIGGRAPH Talk 2014

by Jo Plaete, Adam Davis, Alan Stanzione

Abstract

To achieve this, MPC had to extend its proprietary crowd system ALICE to allow for highly art-directable ragdoll physics simulations blended with motion-captured and hand-animated clips.

Figure 1: Comanche massacre – © 2013 Disney Enterprises, Inc. All rights reserved.

Horse and Rider Crowd Workflow

MPC’s crowd engine ALICE has a long history of simulating armies and battles, including hordes of horses and their accompanying riders. ALICE’s motion engine and skeleton operators kept the base motion-capture or animation data in sync at all times during the crowd simulation, even when artists triggered dynamic effects. On top of this, we staged additive motions throughout the crowd to create interesting variation on both horse and rider agents. Because simulating ragdolls added complexity, we streamlined this workflow as much as possible so artists could focus fully on the simulations.
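To make the layering concrete, here is a minimal sketch of how additive motion clips could be staged on top of a shared base clip to vary individual agents. The MotionClip and Agent classes and their methods are hypothetical illustrations, not the actual ALICE API.

```python
# Sketch: layering additive motion on agents that share a base clip.
# All names here are illustrative assumptions, not the ALICE interface.
import random

class MotionClip:
    def __init__(self, name, pose_fn):
        self.name = name
        self.pose_fn = pose_fn            # frame -> {joint: offset}

class Agent:
    def __init__(self, agent_id, base_clip):
        self.id = agent_id
        self.base_clip = base_clip        # shared mocap/animation clip
        self.additive_layers = []         # per-agent variation layers

    def add_additive(self, clip, weight):
        self.additive_layers.append((clip, weight))

    def evaluate(self, frame):
        # Base pose keeps every agent on the same clip timing...
        pose = dict(self.base_clip.pose_fn(frame))
        # ...while additive layers introduce per-agent variation on top.
        for clip, weight in self.additive_layers:
            for joint, offset in clip.pose_fn(frame).items():
                pose[joint] = pose.get(joint, 0.0) + weight * offset
        return pose

# Usage: stage a random head-toss variation across part of the crowd.
gallop = MotionClip("gallop", lambda f: {"spine": 0.1 * f})
head_toss = MotionClip("head_toss", lambda f: {"neck": 0.5})
riders = [Agent(i, gallop) for i in range(100)]
for agent in random.sample(riders, 30):
    agent.add_additive(head_toss, weight=random.uniform(0.3, 1.0))
```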

Directable Ragdoll Simulation

For the ragdoll work we extended MPC’s crowd physics framework. To build better ragdolls, we reworked the way our crowd ragdolls connect to the physics engine, which gave us both more controllable and stable ragdoll constraints and the ability to dynamically compute physically correct masses for predefined ragdoll volumes from real-world densities. Thanks to the seamless marriage between ALICE and our physics framework PAPI, we were able to inject custom scripted logic into various shared simulation events and so expose advanced functionality for massaging the simulation.

Exploiting this, we created a toolset that let artists place virtual guns in their scenes, giving them detailed control over force injection and fine-tuning of how, for example, a Comanche would be shot from his horse. The guns defined the direction of the impacts and exposed attributes such as strength, falloff and which body parts of the targeted ragdoll they would affect. They could be triggered randomly throughout the crowd, when crowd agents ran into user-defined areas, or specifically on hand-picked agents. When either a horse or rider was triggered to transition from a clip-driven crowd agent into a ragdoll, the forces would be applied. Thanks to the interconnected nature of a ragdoll system, this created very natural and visually pleasing motion mimicking bullet impacts and riders flying off horses, inter-colliding with other agents and the environment. Furthermore, artists also had control over the fate of the agent’s partner: they could, for example, trigger the horse simultaneously when its rider was hit, introduce a delay, or allow the horse to run riderless, with the added possibility of attaching on-the-fly foot-to-stirrup constraints so that shot riders were dragged along by their horses.
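The sketch below illustrates how such a virtual gun and its partner logic could be modelled. The classes, attributes and the way forces and constraints are recorded (VirtualGun, fire, Agent.forces) are hypothetical stand-ins and do not reflect the actual ALICE or PAPI interfaces.

```python
# Sketch: a "virtual gun" that transitions a clip-driven agent to ragdoll,
# injects directional forces and decides what happens to its partner.
from dataclasses import dataclass

@dataclass
class VirtualGun:
    direction: tuple                         # impact direction in world space
    strength: float                          # base impulse magnitude
    falloff: float                           # distance falloff exponent
    body_parts: tuple = ("chest", "shoulder_l", "shoulder_r")

    def impulse(self, distance):
        # Impulse decays with distance from the gun according to the falloff.
        return self.strength / (1.0 + distance) ** self.falloff

class Agent:
    def __init__(self, agent_id, partner=None):
        self.id = agent_id
        self.partner = partner               # rider <-> horse link
        self.state = "clip"                  # clip-driven until triggered
        self.frame = 0
        self.ragdoll_at_frame = None
        self.forces = []                     # (body_part, direction, magnitude)
        self.constraints = []                # (body_part, other_agent, other_part)

def fire(gun, target, distance, partner_mode="delay", delay_frames=12):
    """Transition a clip-driven agent to ragdoll and inject the gun's forces."""
    target.state = "ragdoll"
    for part in gun.body_parts:
        target.forces.append((part, gun.direction, gun.impulse(distance)))

    horse = target.partner
    if horse is None:
        return
    if partner_mode == "simultaneous":
        horse.state = "ragdoll"
    elif partner_mode == "delay":
        horse.ragdoll_at_frame = target.frame + delay_frames
    elif partner_mode == "riderless":
        # Horse keeps running; an on-the-fly foot-to-stirrup constraint
        # lets the shot rider be dragged along.
        target.constraints.append(("foot_l", horse, "stirrup_l"))

# Usage: shoot a hand-picked rider off his horse and let the horse run on.
horse = Agent("horse_07")
rider = Agent("rider_07", partner=horse)
gatling = VirtualGun(direction=(0.0, 0.3, -1.0), strength=800.0, falloff=2.0)
fire(gatling, rider, distance=15.0, partner_mode="riderless")
```

In a production setting the same trigger could equally be fired at random across the crowd or when an agent enters a user-defined area, as described above.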

Partially Blending in Animation

No matter how much we enhanced the simulation, the inherent problem with triggering characters into a permanent ragdoll state is the sudden loss of life and realism, which is amplified when applied to a crowd. To solve this we added another layer on top of our crowd ragdoll simulation that allowed us to partially blend animation clips back in at any point during a motor-driven ragdoll simulation. This allowed us, for example, to have a running horse go into a ragdoll state and fall under gravity but, once on the ground, blend back in a specific limb-kicking animation for a certain amount of time. To maintain realism, the horse would still inter-collide with the environment and other ragdoll characters. This tremendously helped keep the pile of ragdoll characters alive and avoid the dull ragdoll feel. To control this, artists defined function curves that drove the timing and the blend between the ragdoll state and the various animation clips.
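A minimal sketch of the blend-back idea follows, assuming a simple ramp-shaped function curve and per-joint linear blending between the physics pose and the clip pose driving the ragdoll motors. The names and the blending formulation are illustrative assumptions, not the production implementation.

```python
# Sketch: function-curve-driven partial blend of an animation clip back into
# a motor-driven ragdoll.

def blend_curve(frame, start, duration, peak=0.8):
    """Function curve driving how much animation is blended back over time."""
    if frame < start:
        return 0.0
    t = min((frame - start) / float(duration), 1.0)
    return peak * t                       # ramp up to a partial blend

def motor_target(ragdoll_pose, clip_pose, weight):
    """Per-joint target fed to the ragdoll motors:
    weight 0 = pure physics pose, weight 1 = pure animation clip."""
    return {j: (1.0 - weight) * ragdoll_pose[j]
               + weight * clip_pose.get(j, ragdoll_pose[j])
            for j in ragdoll_pose}

# Usage: once the horse hits the ground (frame 40), blend in a leg-kick clip
# over 30 frames while the physics engine keeps resolving collisions.
ragdoll_pose = {"leg_fl": 0.0, "leg_fr": 0.1, "neck": -0.4}
kick_clip    = {"leg_fl": 0.6, "leg_fr": -0.6}

for frame in (30, 45, 80):
    w = blend_curve(frame, start=40, duration=30)
    print(frame, motor_target(ragdoll_pose, kick_clip, w))
```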

Conclusions

By fine-tuning the subtle balance between the motion-capture-driven crowd and highly art-directed ragdoll simulations blended with specifically tailored animation clips, we were able to produce the degree of realism required to recreate believable performances for the Comanche massacre. The approach proved so successful that we also employed it to generate clips for close-up hero characters, providing a physically correct base and increasing the efficiency of our animation team.

Acknowledgements

Mariano Blanc, Kai Wolter, David Schneider and Marco Carboni.
