In-Three on the Workflow Behind 3D Conversions

James Cameron's Avatar and Tim Burton's Alice in Wonderland constituted the one-two punch that convinced Hollywood — and everyone else under the sun — that there's a lot of money to be made in releasing stereo 3D entertainment to movie theaters. In fact, the case made by Avatar's box-office gross was so compelling that studios started cramming 3D tentpole releases into their schedules left and right, with Warner Bros. raising more than a few eyebrows by announcing in January that its late March release of Clash of the Titans would be converted to 3D post-haste (and pushed back a mere week to compensate). That decision pointed to a new frontier in Hollywood tech, with its own gold rush for companies equipped to do the 3D conversion work.

In-Three, one of the companies with solid experience in this realm (it converted the guinea-pig action movie G-Force to 3D last year and has demo'd some impressive scenes from the original Star Wars for industry audiences), converted the live-action "real world" material bookending Alice in Wonderland, adding depth to everything except the carriage-ride scene and a dance sequence.

Film & Video got In-Three VFX Producer Matthew DeJohn and VP of Business Development Damian Wader to sit for a Q&A addressing the tools, the workflow, and the costs involved in "dimensionalizing" (the term refers to In-Three's patented Dimensionalization process) everything from cinema commercials and feature films to movie trailers and TV programming.

Tell me how this process works. Is there a little black box with inputs and outputs and a red button that says “Dimensionalize!”?
Damian Wader: We wish it worked like that!

Matthew DeJohn: It breaks down into three broad artistic phases. One is segmenting the image. You break it up in terms of major layers of depth. You cut out a person [in the scene], maybe somebody else behind him, and then you have the background. The next stage is actually generating the depth for that scene. You model the scene, and our system allows you to create a new perspective, or see the scene from a new perspective once you’ve modeled it out in 3D. And once you are finished with that phase, you’ve revealed parts of the background that weren’t visible in the original perspective, so you have to paint those surfaces, inserting image information where there was none.

The fact that this is happening as a moving picture instead of a still image makes that complicated in terms of tracking and roto work.
MDJ: Absolutely. And there are special cases like transparencies. How do you get that to work in stereo? Smoke particles and sparks are always difficult. So is motion blur. The detail and transparent nature of hair has to be captured accurately to fool the brain into thinking it’s real 3D.

Are you always using the original, flat version of the scene as one eye, and building a second, synthetic eye-view to go with it?
MDJ: That’s one way to do it. Often, especially in cases when you deal with smoke and other transparencies, it’s easier to recreate both perspectives. If you remove all the smoke by painting it out, you would have a new background and you would have to insert new CG smoke. Even for more normal shots, you’re doing a lot of visual-effects-type work. A live-action 2D-to-3D conversion, even if it’s a drama, is turned into a 100 percent visual-effects show. To get the really nice shaping to people’s faces and to avoid it looking like a cardboard cut-out, you need to model the scene with pretty good detail.
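
As a purely geometric illustration of that choice (reusing the hypothetical helper and disparity numbers from the sketch above, which are assumptions rather than anything from In-Three's pipeline), recreating both perspectives amounts to splitting the parallax symmetrically around the original camera instead of treating the source frame as one finished eye:

```python
def synthesize_stereo_pair(image, depth, max_disparity=12):
    # Each eye is a half-shift away from the original camera, so neither eye
    # is the untouched source frame; both get their own revealed-background holes.
    left, left_holes = synthesize_new_view(image, depth, -max_disparity // 2)
    right, right_holes = synthesize_new_view(image, depth, max_disparity // 2)
    return (left, left_holes), (right, right_holes)
```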

This sounds like a completely hands-on process. Is there any way to automate, say, a first pass? Or does an artist have to be working on every frame of film?
MDJ: There are some automated techniques and processes out there, but we found they just don’t get to the level of quality necessary when you’re creating the actual depth choices for the scene. At the end of the day, the artist has to analyze and interpret the scene. ‘OK, where is this object in space?’ A computer can’t do that for you. There are some approaches that will assist an artist who is rotoing or keying certain elements. But if a character has bushy hair, we have to pull a key that’s as good as you could pull if it were on a green screen. That’s a significant challenge when you have a character baked into the rest of the scene. There’s not a lot of automation that’s up to snuff.

DW: In terms of the depth itself, those values are propagated fairly automatically in our system. Once the choices are made, it’s not as if you’re re-keyframing every frame. Depth-grading is a fairly quick process.

MDJ: That’s a good point. Once we’ve set up the depth of the scene, our adjustment period to tweak that is very quick. For Alice in Wonderland, we submitted an entire sequence that they thought looked great, but it wasn’t the artistic direction they wanted to go in. We revised the depth on that entire scene in a day or two.
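
Neither DeJohn nor Wader describes how In3gue propagates those values, but the keyframe principle itself is simple enough to sketch: the artist grades depth on a handful of frames and everything in between is interpolated, which is why revising a whole sequence's depth can be a matter of moving a few keys rather than reworking every frame. The function below is a hypothetical illustration of that idea, not In-Three's software.

```python
import numpy as np

def propagate_depth(keyframes, num_frames):
    """keyframes: {frame_index: depth_value} set by the artist; returns one value per frame."""
    frames = sorted(keyframes)
    values = [keyframes[f] for f in frames]
    return np.interp(np.arange(num_frames), frames, values)

# A character graded at depth 0.2 on frame 0 and 0.5 on frame 48 gets a smooth
# ramp on every in-between frame; changing the grade means moving two keys,
# not re-keyframing 49 frames.
depths = propagate_depth({0: 0.2, 48: 0.5}, 49)
```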

Is that information kept in metadata attached to the shots? How do you keep it with the files?
MDJ: It’s inside of our internal software, called In3gue [pronounced “intrigue”]. We have our artists working in real time to construct the depth and then tweak it to where the filmmakers want to go. When there’s a full shot, in motion, with depth, that's the first time the director sees it. We submit shots to be approved for depth, and they have rough mattes and rough paintwork. Once the depth is locked down, we final all the fine-tuning stuff, detailed key work and detailed paint work.

How big is the team for the 2D-to-3D conversion?
MDJ: Generally, for a 100-minute or 120-minute 2D-to-3D conversion, you would need about 300 to 400 artists phasing in and out of production over about four to six months.

Are those 400 artists all working on dimensionalization? That’s not at all a trivial process.
MDJ: It’s very serious visual effects work. We have a lot of ways to attack the process that make it easier for us to get people up to speed and control the quality, but it takes a lot of effort to get a very high quality product.

How often do those people have to look at shots in 3D, and how do they do that?
MDJ: Internally, we’re working in 3D the entire time, save creating roto splines and stuff like that. For our depth artists, 50 to 75 percent of the day is spent looking through 3D glasses. We’re submitting stuff for review in 3D internally, because we’ll go through three or four internal reviews before we send to the client.
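
The interview doesn't say what review hardware In-Three uses, but the cheapest way to put stereo in front of glasses at a desk is a red/cyan anaglyph composite. The snippet below is a generic example of that kind of quick preview, not a description of their review setup.

```python
def anaglyph(left, right):
    """left, right: (H, W, 3) uint8 frames; returns a red/cyan composite for 3D glasses."""
    out = right.copy()          # green and blue channels come from the right eye
    out[..., 0] = left[..., 0]  # red channel comes from the left eye
    return out
```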

I understand a lot of work was done on Alice using Imagineer Systems Mocha. Are there aspects of Mocha that make it especially well-suited to this process, or could any high-end tracking-and-roto software be used?
MDJ: We really like Mocha, and it’s really easy to teach our people how to use it. At the time we made the choice, it was the only one that offered planar tracking. It’s been very powerful and good to us, so that’s what we continue to move forward with. Other companies are implementing forms of planar tracking, but Mocha dovetailed nicely with our internal software.


An example of what the first phase of roto work for a 3D conversion looks like in Imagineer Systems' Mocha
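
Mocha's planar tracker is proprietary, but the underlying idea, estimating how a roughly planar region moves from frame to frame so a roto spline can ride along with it, can be sketched generically with OpenCV. This is an illustration of planar tracking in general, not of Mocha's or In3gue's implementation.

```python
import cv2
import numpy as np

def track_plane(prev_gray, next_gray, region_mask=None):
    """Estimate the homography carrying a roughly planar region from one frame to the next."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, region_mask)  # features inside the planar region
    kp2, des2 = orb.detectAndCompute(next_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H  # apply H to a spline's control points to move the roto with the plane
```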


And how big is In3gue’s role? Are there more proprietary tools involved in the process?
MDJ: We have other, smaller, tools in our proprietary bag and stuff in development, but In3gue is our main software and where a lot of our IP lies. That really is the depth-creation tool. It’s unique. No other software does what we’re doing. It’s designed to model a scene quickly and provide quick feedback.

And the big question: how much does this cost?
DW: Generally speaking, it’s about $80,000 to $100,000 a minute. As we move forward through development, we can define things that make it more efficient for us and more cost-effective for the production. A lot of animation houses are coming to us. They want to deal with their 2D story, not stereo or CG. Some projects do come to us as legacy pictures, but getting us involved at the beginning is the best way to take advantage of the process. It will not affect principal photography as a whole, but it will help our efficiency.

MDJ: Because dimensionalization is a post process, you get more artistic control compared to shooting in stereo. We can tailor the depth to match perfectly with the edit. No shots are ever lost if [a stereo rig’s] camera fails. We can go into a stereo show and dimensionalize a flat shot to match the rest of a scene. It provides more flexibility in your depth choices. But you are limited by the realities of the world. If you shot with a long lens, you can only get so much shape out of the character’s face without making the scene way too deep. We can go in and make specific choices and use the available depth budget to its best advantage.

DW: Everything we do is to help the creative get his vision out there. It’s not to take anything away. We want to work with creatives who want to use 3D as a storytelling tool.
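
The numbers below are illustrative assumptions, not In-Three's grading rules, but they show what the "depth budget" DeJohn mentions constrains: whatever depth the artists assign, the resulting on-screen parallax has to stay inside a comfortable range, so carving extra shape into a long-lens close-up spends parallax the rest of the scene can no longer use.

```python
def parallax_px(depth, screen_width_px=2048, positive_budget=0.01, negative_budget=0.005):
    """depth: 0 = screen plane, +1 = farthest allowed, -1 = nearest allowed.
    Budgets are fractions of screen width; the values here are illustrative only."""
    budget = positive_budget if depth >= 0 else negative_budget
    return depth * budget * screen_width_px

# With a 2K-wide frame and a 1 percent positive budget, even the farthest object
# gets only about 20 px of parallax, which is the whole scene's room to work in.
```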

Do you ever get a shot from production that’s just a bear to add depth to?
MDJ: We had a shot on G-Force that was shot through clear plastic freezer curtains, and that was unbelievable. It’s distorting the background, and you still have to be able to see through it in stereo! It’s incredibly hard. We wish they hadn’t done that. [Laughs.] But we know we’re going to encounter that stuff. In fact, our pipeline was originally set up to deal with legacy work. But it becomes an issue because it’s more expensive to deal with those shots.

I was curious whether you would be consulted during a shoot, like a VFX supervisor.
DW: We’re sort of looked on as the red-headed stepchild of the industry. But we’re starting to get some play as VFX supervisors. Two years ago, we were an afterthought, but now we’re getting clients saying, 'We need to get you guys in and utilize your expertise in the development of this.' We were on set for one picture, and I think we’re going to be on set a lot more.

Does everything happen in-house at In-Three, including roto and tracking, or just the depth-adding process in In3gue?
DW: The projects so far were done here. We have a joint venture with Reliance Media Works at a dedicated facility in India, and half of our staff are ramping up our outsourcing capabilities. We’re exclusively using them to outsource, and they’re exclusively using us to dimensionalize. The key artistry will remain here (keyframing, approval and QC), and they will handle the bulk of the manual labor: the roto, the key-pulling, the paint.

How do you expect your work to evolve in the future across features and television work? What about hybrid projects that were only partly acquired in 3D?
DW: This company was designed to repurpose old libraries, and to date we haven’t done that. We’ve been working on pictures where we were an afterthought, but the Alice production thought of us and got us involved before the movie started, so that is changing. We’re going to see shooting and dimensionalization happening inside single projects moving forward. It will be the best of both worlds to use both tools. [Native stereo] shooting has its own domain in terms of broadcast TV, sports, stuff that’s live. We have our domain in terms of legacy pictures and repurposing old libraries, but it comes together on film projects moving forward, repurposed for TV broadcast or direct to DVD in stereo. Another area is advertising, especially theatrical advertising. We did the Air Force commercial that ran in front of Avatar on about 1,000 screens, and we’re getting a lot of advertising companies coming at us. And there will be trailer work. I get calls every once in a while for 3D trailer work, even for 3D pictures that, for whatever reason, need trailers converted to 3D.

It was several years ago that I first saw 3D-converted footage from one of the Star Wars movies at an industry gathering. Can you say anything about the status of 3D Star Wars?
DW: You know as much as we do. George Lucas has said he wants to do all six films. Five years ago, when we showed our stuff at ShoWest, he said he was going to do it. He hasn’t made any commitment as yet, but we’re as anxious as you are.

One last question about dimensionalization. Alice in Wonderland got mainly good reviews for its 3D visuals. But as soon as Clash of the Titans started screening for fans, there was a backlash. On the Internet and in some print reviews, a buzz started building that 2D-to-3D conversions are no good. [Writing in the Los Angeles Times, Kenneth Turan said, "Consider the possibility that Clash of the Titans is the first film to actually be made worse by being in 3-D."] How do you feel about that?
MDJ: You bring up Alice as a contrast to that, which is good. I would go back even further, to G-Force. I don’t think anybody knew that was converted. They all assumed it originated in 3D, because it was really clean. In terms of Clash of the Titans and all of the negative press, we look at that situation and see an eight-to-10-week schedule to do an entire movie, and it sounds completely unrealistic to get a high-quality product in that timeframe. If that’s how it turns out [and 3D conversions get a bad reputation], it will be disappointing for us. We’ve been there at the forefront, talking about our dimensionalization process specifically as an extremely viable and artistically competitive approach to generating 3D. If it doesn’t get there, hopefully that doesn’t paint all conversion houses with the same brush. That’s what we’re afraid of. The audience should be able to trust they’ll get a high-quality product, especially on a huge movie.

By Bryant Frazer, StudioDaily