» Mon May 07, 2012 10:48 pm
Thing about those physical rig mocap setups is that they tend to be fragile.....and expensive to fix. What they are trying to do is come up with a facial mocap solution that isn't face-specific. The current king is pointcloud animation. You've seen actors with lots of arranged dots on their faces; they do the mocap close up while the actors act......but the face it can be applied to has to be a digital scan of that actor's face, with either minimal or very carefully choreographed variances. The captured 'dots' are actually deforming the mesh directly, not moving rigging or controlling morphs. So the structure has to stay almost if not exactly identical, or you get weird deformations.
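To make the "dots deform the mesh directly" idea concrete, here's a minimal sketch (my own illustrative code, not any studio's pipeline): each tracked marker's displacement from its rest position is pushed onto nearby mesh vertices with a simple linear falloff. The function name, the falloff scheme, and the radius are all assumptions for illustration.

```python
import numpy as np

def deform_mesh(vertices, marker_rest, marker_current, radius=0.05):
    """Toy point-cloud-driven deformation: apply each marker's captured
    displacement directly to mesh vertices near its rest position.
    No rig or morph targets in between -- the mesh itself is moved."""
    deformed = vertices.astype(float).copy()
    deltas = marker_current - marker_rest  # per-marker displacement
    for m_rest, delta in zip(marker_rest, deltas):
        # distance from every vertex to this marker's rest position
        dist = np.linalg.norm(vertices - m_rest, axis=1)
        # linear falloff: full weight at the marker, zero beyond radius
        weight = np.clip(1.0 - dist / radius, 0.0, 1.0)
        deformed += weight[:, None] * delta
    return deformed
```

Note why this only works on a scan of the same face: the marker rest positions are compared against vertex positions in mesh space, so if the target mesh's structure differs from the captured face, the displacements land on the wrong geometry and you get exactly those weird deformations.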
Oh, and that Unreal 3 demo, 'Samaritan', wasn't on a supercomputer; it -was- on a computer running three 580s in SLI mode, I do believe. But the computer the video cards were in was nothing really special. They used the same box to demo the cityscape with all the features and fog and tessellated gargoyles.