Lifelike Animation Heralds New Era for Computer Games

Extraordinarily lifelike characters are to begin appearing in films and computer games thanks to a new type of animation technology developed by Image Metrics.

Emily - the woman in the animation - was produced using a new modelling technology that captures and recreates the most minute details of a facial expression. She is considered one of the first animations to have overleapt the long-standing barrier known as the "uncanny valley" - the tendency of computer-generated characters to look eerie, rather than more lifelike, as they approach human likeness.

Emily recreates every nuance of human facial expression, yet what you are actually looking at is the face of a digital actor. The project, created through a partnership with USC's Institute for Creative Technologies (ICT), had a single primary objective: a completely convincing, animated computer-generated face.

Using ICT's scanning system, which can capture facial detail down to the individual pore, the face of actress Emily O'Brien was turned into a digital double that could then be manipulated entirely by machine. A special spherical lighting rig captured O'Brien in 35 reference facial poses using a pair of high-resolution digital cameras, and the resulting facial maps were converted into 3D data using Image Metrics' proprietary markerless motion-capture technology.

"Ninety per cent of the work is convincing people that the eyes are real," Mike Starkenburg, chief operating officer of Image Metrics, said. "The subtlety of the timing of eye movements is a big one. People also have a natural asymmetry - for instance, in the muscles in the side of their face. Those types of imperfections aren't that significant but they are what makes people look real".

Previous methods for animating faces have involved placing dots on an actor's face and tracking the way the dots move. Image Metrics instead analyses facial movement at the level of individual pixels in a video, meaning that the subtlest variations - such as the way the skin creases around the eyes - can be tracked.
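
Image Metrics' pixel-level tracker is proprietary, so the following is only an illustrative sketch of the general idea using off-the-shelf dense optical flow (OpenCV's Farneback method), which estimates a motion vector for every pixel between consecutive video frames; the input filename is hypothetical.

```python
# Illustrative sketch only - not Image Metrics' algorithm. Dense optical
# flow assigns a displacement vector to every pixel, so even a 1-2 pixel
# crease forming around the eyes registers as motion, with no marker dots.
import cv2
import numpy as np

cap = cv2.VideoCapture("performance.mp4")  # hypothetical input video
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy) displacement of pixel (x, y) between frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print(f"max per-pixel displacement: {magnitude.max():.2f} px")
    prev_gray = gray

cap.release()
```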

"There's always been control systems for different facial movements, but say in the past you had a dial for controlling whether an eye was open or closed, and in one frame you set the eye at 3/4 open, the next 1/2 open etc. This is like achieving that degree of control with much finer movements. For instance, you could be controlling the movement in the top 3-4mm of the right side of the smile," Mr Starkenburg said.

For many years, animators have come up against a barrier known as the "uncanny valley", which refers to the way a computer-generated face, as it approaches human likeness, begins to take on a corpse-like appearance reminiscent of some horror films. As a result, computer game animators have deliberately simplified their creations so that players realise immediately that the figures are not real.

"There came a point where animators were trying to create a face and there was a theory of diminishing returns," said Raja Koduri, chief technlology officer in graphics at AMD, the chip-maker. AMD last week released a new chip with a billion transistors that will be able to show off creations such as Emily by allowing a much greater number of computations per second. "If you're trying to process the graphics in a photo-realistic animation, in real-time, there's a lot of computation involved," said Mr Koduri. He said that AMD's new chip - the Radeon HD 4870 X2 - was able to process 2.4 teraflops of information per second, meaning it had a capability similar to a computer that - only 12 years ago - would have filled a room. AMD's chip fits inside a standard PC. But he said that the line between what was real and what was rendered would not be blurred completely until 2020.

There have been several advances in computer-generated imagery (CGI) in recent years. One project at the University of Southern California places an actor inside a giant metallic orb that fires more than 3,000 lights at him or her - from a range of angles and at varying intensities - while the actor is being filmed performing an action. The captured image can then be transferred into another piece of film, with the lighting effect on the actor chosen to match the ambient lighting in the scene.
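
The reason this works is that light is additive: an actor lit by any new environment can be approximated as a weighted sum of photographs taken under each of the rig's lights in turn. A hedged sketch of that weighted-sum relighting, with placeholder data and far fewer lights than the real rig:

```python
# Weighted-sum image-based relighting sketch; all data is placeholder and
# the light count is reduced from the rig's 3,000+ to keep memory small.
import numpy as np

N_LIGHTS = 100
H, W = 120, 160

# basis[i] = photo of the actor lit only by light i.
basis = np.random.rand(N_LIGHTS, H, W, 3).astype(np.float32)

# Target scene: colour/intensity of light arriving from each of the same
# directions, measured from the footage the actor will be inserted into.
env = np.random.rand(N_LIGHTS, 3).astype(np.float32)

# Relit image: for each light, scale its basis image by the environment's
# colour from that direction, then sum over all lights.
relit = np.einsum('lc,lhwc->hwc', env, basis)
relit = np.clip(relit / env.sum(axis=0), 0.0, 1.0)  # normalise for display
```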

Sources: Times Online (Jonathan Richards) and Technabob (Paul Strauss) via Marketsaw (Jim Dorey)