Monsters vs Aliens: Behind-the-Scenes Look at 3D Tech

To the outside observer, creating a 3D film must look like a piece of cake. Rather than filming with two cameras and adjusting settings by hand, as directors must do in live-action stereoscopic films, animators can set cameras with computers. Need another view to represent the right eye? Accomplished with the click and drag of a mouse. Not to mention that none of your computer-generated actors will refuse to leave their trailers or throw fits when you move a light during a scene.

But animators of Dreamworks' Monsters vs Aliens, out this weekend, beg to differ. Creating an animated 3D film is a complex process, and directors are still using trial and error to find the true potential of the format, which is enjoying a resurgence thanks to improved digital technology. Monsters vs Aliens is the first release in Dreamworks' plan to put out all of its animated films in 3D.

The 3D Trick
Our brains combine the different views from our eyes, which are positioned about 2.5 inches apart, to give us depth perception. To create a 3D film, animators mimic natural human vision by building a stereoscopic camera rig within the computer. Each rig is equipped with two cameras - one representing the view of each eye. The most important setting on the rig, according to Phil McNally, head of stereoscopic filmmaking at Dreamworks, is the interaxial setting, or the distance between the two cameras.

"If the two cameras are in the same position, you get no stereo. Everything is in the same position in the world, and you get a 2D movie," he says. "The wider you separate the cameras, the more each point of view can see around an object. So literally, the wider you separate the cameras, the more 3D volume in the scene."

The zero parallax setting (ZPS) is also important to the experience. The ZPS determines where the two camera views converge on screen — and therefore what appears in front of the screen, in what filmmakers call "personal space," and behind the screen, in what they call "world space."
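
In the same toy model, the ZPS is the depth at which parallax is zero: the two views are shifted so that points at the convergence distance land on exactly the same spot on screen, points nearer end up with negative parallax and read as personal space, and points farther end up with positive parallax and read as world space. A hedged sketch, with the sign convention and numbers purely illustrative:

```python
# Classify points relative to the zero parallax setting (ZPS).
# Parallel-rig model with image shift; illustrative only.

def parallax(depth, convergence_depth, interaxial, focal_length=1.0):
    """Signed screen parallax of a point at `depth`; zero at the ZPS,
    negative in front of the screen, positive behind it."""
    return focal_length * interaxial * (1.0 / convergence_depth - 1.0 / depth)

def classify(depth, convergence_depth, interaxial=0.065):
    p = parallax(depth, convergence_depth, interaxial)
    if p < 0:
        return "personal space (in front of the screen)"
    if p > 0:
        return "world space (behind the screen)"
    return "on the screen plane"

for depth in (2.0, 5.0, 20.0):
    print(depth, classify(depth, convergence_depth=5.0))
```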

Most of the time, filmmakers at Dreamworks use computer software — including a suite of commercial programs such as Maya as well as in-house tools and software applets — to precisely control the cameras' stereoscopic settings. Automated control has two benefits: it saves the huge amounts of time filmmakers would otherwise spend setting shots manually, and it lets them sidestep one of the problems of old-school 3D, viewer headaches, by establishing safe parameters for the objects on screen.
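
Part of that "safe parameters" idea can be pictured as a simple clamp. The comfort limits below are invented placeholders, not Dreamworks' numbers; the point is only that any requested on-screen parallax is kept inside a range known to be comfortable before a shot is finalized.

```python
# Clamp a requested screen parallax into an illustrative comfort range.
# The limits here are invented placeholders, not Dreamworks' actual numbers.

MAX_NEGATIVE_PARALLAX = -0.02   # strongest allowed "out of the screen" effect
MAX_POSITIVE_PARALLAX = 0.03    # strongest allowed "behind the screen" effect

def clamp_parallax(requested):
    """Keep a requested parallax value inside the comfort window."""
    return max(MAX_NEGATIVE_PARALLAX, min(MAX_POSITIVE_PARALLAX, requested))

print(clamp_parallax(-0.05))  # too aggressive; clamped to -0.02
print(clamp_parallax(0.01))   # already comfortable; unchanged
```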

Programmer Paul Newell is responsible for building the rigs to McNally's specifications. "As an artist, I don't really want to have to get the calculator out every time I'm trying to set up my stereo," McNally says. Some shots must be set manually, but for the majority of shots, "I want tools that allow me to set the stereo and calculate the interaxial and convergence point for me, based on what I want to achieve. Paul takes that idea and actually has to make it work by doing the programming and math calculations and building a stereoscopic camera rig inside the computer that gives me handles and dials to turn."
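
A rough sense of what such a tool computes, under the same simplified parallel-rig model used above (Dreamworks' real rig code and conventions are not public): given the nearest and farthest depths in the shot and the parallax the artist wants at each, two equations can be solved directly for the interaxial and the convergence distance.

```python
# Solve for interaxial and convergence distance from a desired parallax
# "budget", using the same simplified model: p(Z) = f * i * (1/Zc - 1/Z).
# Illustrative only; the studio's actual tools and conventions may differ.

def solve_rig(near_depth, far_depth, near_parallax, far_parallax, focal_length=1.0):
    """Return (interaxial, convergence_depth) hitting the requested parallax
    at the nearest and farthest depths in the shot."""
    # Subtracting the two parallax equations eliminates the convergence term.
    interaxial = (far_parallax - near_parallax) / (
        focal_length * (1.0 / near_depth - 1.0 / far_depth)
    )
    # Back-substitute to recover the convergence (zero parallax) distance.
    inv_convergence = far_parallax / (focal_length * interaxial) + 1.0 / far_depth
    return interaxial, 1.0 / inv_convergence

# Example: the shot spans 2 m to 50 m; the artist wants -0.01 parallax at the
# near point (personal space) and +0.02 at the far point (world space).
interaxial, convergence = solve_rig(2.0, 50.0, -0.01, 0.02)
print(f"interaxial = {interaxial:.4f}, convergence at {convergence:.2f} m")
```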

Animators also create a dynamic stereoscopic window — a black box that frames the film — which is part of composition and is even used to enhance the action of a film. "If we want someone to run toward us, it's an optical trick to put the stereo window close to the audience while the character is distant, and as the character runs toward the audience, we actually push the stereo window away to magnify, or amplify, the feeling that the person is coming closer," McNally says. "The character runs forward and the window recedes at the same time, but the audience never sees the window move."
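
The timing trick McNally describes can be sketched as a pair of animation curves with made-up numbers: as the character's distance to camera shrinks over the shot, the stereo window is quietly pushed back, so the character closes on the window faster than its own motion alone would account for, and ends the shot out in personal space.

```python
# Receding stereo window sketch with made-up numbers: the character runs
# toward camera while the window is pushed away, amplifying the approach.

def lerp(a, b, t):
    return a + (b - a) * t

FRAMES = 5
for frame in range(FRAMES + 1):
    t = frame / FRAMES
    character_depth = lerp(20.0, 4.0, t)   # character closes from 20 m to 4 m
    window_depth = lerp(3.0, 8.0, t)       # window recedes from 3 m to 8 m
    relative = character_depth - window_depth  # goes negative: personal space
    print(f"frame {frame}: character {character_depth:5.1f} m, "
          f"window {window_depth:4.1f} m, relative to window {relative:5.1f} m")
```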

Using 3D Wisely
Still, McNally says, good stereo settings do not a good 3D movie make, nor do techniques widely used in mono moviemaking, and this is where filmmakers still have a lot to learn. "Something that we're learning is that the type of layout and composition that you create can be pretty different between normal 2D filmmaking and 3D filmmaking," McNally says. "Something that's very interesting as a graphically flat painting is not necessarily interesting as a 3D spatial environment. Typical shots that have been used a lot in 2D filmmaking — two characters standing side-by-side, against a plain or a blurry background — really don't offer that much in 3D. So really, making a successful 3D movie is about creating, or reinventing, cinematic techniques."

That's where the InterSense camera room comes in. Few other studios have the digital tracking technology, which helps filmmakers plan shots. "It's the coolest thing ever," says Damon O'Beirne, head of layout on Monsters vs Aliens. "It's a display with handles, and in the screen you can actually look into the digital set. And as you walk around with the camera, it starts to walk you through the set. When you angle the camera up into a corner of the room, the computer recalculates and then you see into the corner of the set." The space is also scalable. "Imagine if we were shooting on a football field, and we have a room, which is maybe 10 meters by 5; we can scale that room up to fit the football field. So when you walk across the 5-meter room, you've just crossed a football field," O'Beirne says. "So very, very easily, you can walk a huge set and look for angles and talk about possible shot construction."
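
The scaling O'Beirne describes amounts to mapping tracked positions in the physical room onto the much larger virtual set with a uniform scale factor. A toy version, with invented dimensions:

```python
# Map a tracked camera position in the small physical room onto a much
# larger virtual set with a uniform scale. All dimensions are invented.

ROOM_SIZE = (10.0, 5.0)      # tracked room, meters (width, depth)
SET_SIZE = (200.0, 100.0)    # virtual set, meters (width, depth)

def room_to_set(room_pos):
    """Scale an (x, y) position in the room into set coordinates."""
    sx = SET_SIZE[0] / ROOM_SIZE[0]
    sy = SET_SIZE[1] / ROOM_SIZE[1]
    return (room_pos[0] * sx, room_pos[1] * sy)

# Walking the 5 m depth of the room covers 100 m on the virtual set,
# roughly the length of a football field.
print(room_to_set((0.0, 0.0)))   # (0.0, 0.0)
print(room_to_set((0.0, 5.0)))   # (0.0, 100.0)
```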

The room has helped them create realistic hand-held camera movement; animators have even taken finished animations into the room for tweaking — such as the sequence where Susan finds the robot on the bridge. "We wanted a really handheld feeling," O'Beirne says. "And we could take the animation back up into the camera capture room and start reacting to every action that she might do. Whereas in the past, we probably would've just left that a lot simpler and maybe even locked it off. But now we can get a lot of intensity. And we couldn't do it without the camera capture room."

Fixing Compression and Distortion
With shots storyboarded, it's time to call action. In the computer, there is a CG environment with the characters in low resolution. McNally inputs the parameters of a particular scene into the computer, which calculates the stereoscopic camera settings within seconds. "The difficulty with that system is you can take a very deep environment and start to compress the scene, and I can make that comfortable by bringing the background in and pushing the foreground away," McNally says. "But the result is you might get characters that look like they're cardboard cutouts in the middle of the scene."

To get around that problem, animators use a tool that measures just how much a character has been squashed to fit into the near and far boxes. "Based on testing and the experience of looking in the theater at full size, we can look and decide what we want the character to be like, what we think is the perfect on-model representation of the character, and then we can put those numbers into the calculator of the stereo rig," McNally says. "So, as I'm setting my safe near and far points, I can also look at the character compression and see if the character has been squashed too much or if I need to give it more or less stereo volume to make him look right."
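
One hypothetical way to express such a compression check, using the same toy parallax model as above (the studio's actual metric is not public): compare the parallax spread across the character's own depth to the spread a reference, on-model setup would give. In this simplified model the ratio collapses to a ratio of interaxials, but it still illustrates the idea of reducing "how squashed is he" to a single number.

```python
# Hypothetical "character compression" check: compare the parallax spread
# across a character's own depth to the spread a reference, on-model
# interaxial would produce. 1.0 = full roundness; smaller = cardboard cutout.

def parallax(depth, convergence_depth, interaxial, focal_length=1.0):
    return focal_length * interaxial * (1.0 / convergence_depth - 1.0 / depth)

def compression(front, back, convergence_depth, interaxial,
                reference_interaxial=0.065, focal_length=1.0):
    actual = abs(parallax(back, convergence_depth, interaxial, focal_length)
                 - parallax(front, convergence_depth, interaxial, focal_length))
    on_model = abs(parallax(back, convergence_depth, reference_interaxial, focal_length)
                   - parallax(front, convergence_depth, reference_interaxial, focal_length))
    return actual / on_model

# A character 0.5 m thick, 6 m from camera, shot with a squeezed interaxial.
print(round(compression(front=6.0, back=6.5, convergence_depth=6.0, interaxial=0.02), 2))
```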

Sometimes, however, that tool isn't enough. If the near and far settings are at maximum, and the character is still compressed, animators will switch strategies and shoot the scene with multiple stereoscopic rigs — up to eight at a time. "We might have a shot which has a distant hill all the way down the road, and in the foreground there are some branches, and there are two characters talking relatively close up who are behind the branches but in front of the hills," McNally says. "If we set the camera so that the branches and the hills are at a comfortable distance, we now have these really compressed cardboard-cutout characters. In CG, we can have one set of cameras see just the branches; another set of cameras can see the background; and a third set of cameras — or even a third and a fourth, one for each character — where we can manipulate the space for each individual element in the scene."
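
The multi-rig idea can be sketched as a mapping from depth layers to independent stereo settings, with each layer's left-eye and right-eye renders composited back to front. Layer names, settings, and the stand-in render step below are invented for illustration.

```python
# Multi-rig layering sketch: each depth layer gets its own stereo settings,
# and the per-layer left/right "renders" are composited back to front.
# Layer names, settings, and the render step are invented placeholders.

LAYERS = [
    # (name, interaxial, convergence_depth), ordered far to near
    ("distant hills", 0.030, 40.0),
    ("characters",    0.065,  6.0),
    ("branches",      0.020,  1.5),
]

def render_layer(name, interaxial, convergence_depth, eye):
    """Stand-in for rendering one layer for one eye with its own rig."""
    return f"{name}[{eye}] (i={interaxial}, zc={convergence_depth})"

def composite(eye):
    """Stack the per-layer renders back to front for one eye."""
    return [render_layer(*layer, eye=eye) for layer in LAYERS]

print(composite("left"))
print(composite("right"))
```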

In-house tools, in conjunction with Maya 3D animating software, allow filmmakers to compose — and see how the scene will play out — in real time in the stereo-preview window. "At the desktop we can sit and look with our glasses on and manipulate these different stereo camera rigs and build a stereo scene," McNally says. "The scene could be put together from two, four, six or eight cameras all working together to create this final result, which is a perfect blend of comfort but also retaining as much stereo volume as we can get. And all of these parameters are animatable as well."

Computing Power
Animating in 3D, previewing 3D live and, finally, rendering two separate films — one for the right eye and one for the left — requires incredible amounts of computing power. Dreamworks used Intel's Core i7 microprocessor (just recently released to the public) on Monsters vs Aliens. "As you might imagine, the computing demands for calculation and processing of all the pixels and all the images that have to be rendered in a feature animation, they take a step function and go way up when you go to 3D," says John Middleton, director of software and services group marketing at Intel. The company set up processors in Dreamworks' servers, where films are rendered, and at its animators' workstations, where the films are created.

"One of the most fundamental ways that people with intensive computing demands can take advantage of these processors is being able to tune and create their software so it knows how to use four or eight processing cores at once," Middleton explains. "This is so-called parallel programming, where different pieces of your application are operating on different threads. This is a great methodology and a great attribute for the type of animation software that Dreamworks uses in creating its movies."

Though animators still have a lot to learn, the guys at Dreamworks are pleased with how 3D plays in Monsters vs Aliens. "It was just the perfect movie for it," O'Beirne says. "In a mono movie you'd look at the robot and you'd go 'well, maybe it's 50 feet, maybe it's 100 feet.' But in 3D, you immediately have the spatial clues to actually gauge how big these things are. There's a shot where Susan falls onto the ground and the camera angles up and is looking up at this 400-foot robot. It's towering over the audience. Graphically it's a well-staged shot, but in 3D, it's terrifying."

By Erin McCarthy, Popular Mechanics