Tapeless Workflow, Real Resolution, and Rolling Out D-Cinema
While 3D production and post was the headline issue at this year’s HPA Technology Retreat in Rancho Mirage, CA, the topics under discussion were typically wide-ranging. Contentious issues on the broadcast side included the coming government-mandated DTV transition (mark your calendars for February 17, 2009, the day you may start getting phone calls from relatives wondering why their rabbit ears no longer work) and the effects of different compression algorithms on the quality of the final signal. On the post side, speakers addressed everything from the state of the art in tapeless workflows to the bleeding edge of digital-cinema distribution. Here’s a brief round-up.
Tips for the Tapeless
Peter Mavromates, the post supervisor for director David Fincher’s Zodiac and The Curious Case of Benjamin Button, was too busy to attend HPA, but appeared via pre-recorded footage to talk about his tapeless workflow. “The speed at which [Fincher] works is phenomenally fast,” Mavromates said. “We no longer use clappers. He can say ‘cut’ and then be rolling on his next take in 10 seconds, and he often is. When you do the math, the savings in time and the momentum he can get from actors and crew is quite phenomenal.”
Wayne Tidwell, the data-capture engineer on Benjamin Button, entered basic scene-and-take information into the system, which was developed specifically so that Fincher could delete takes on set. Footage is recorded to one of about 25 to 30 hard drives; each 400 GB drive holds 30 minutes of material. The drives are delivered to the “digital lab,” which is to say the film’s editing room. (“There’s a security benefit implicit right there,” Mavromates noted.) There, the footage is batch-digitized, with audio, into Final Cut Pro as DVCPRO HD files. It is also backed up, at full resolution and with no compression, onto two LTO tapes, and finally the hard drive is recycled on set. On the rare occasions when 35mm film was shot, the reels were telecined to D5 on a tape deck rented for a single day; otherwise, there were no tape decks in the edit room.
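As a rough sanity check on those figures (the article doesn’t specify the recording format, so the frame size below assumes uncompressed 10-bit 4:4:4 1080p at 24 fps purely for illustration), the per-drive capacity works out like this:

```python
# Back-of-the-envelope check: does 30 minutes of uncompressed HD fit on a 400 GB drive?
# Assumption for illustration only: 10-bit 4:4:4 1920x1080 at 24 fps.
BITS_PER_SAMPLE = 10
SAMPLES_PER_PIXEL = 3          # R, G, B (4:4:4, no chroma subsampling)
WIDTH, HEIGHT, FPS = 1920, 1080, 24

bytes_per_frame = WIDTH * HEIGHT * SAMPLES_PER_PIXEL * BITS_PER_SAMPLE / 8
gb_per_30_min = bytes_per_frame * FPS * 30 * 60 / 1e9

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")   # ~7.8 MB
print(f"{gb_per_30_min:.0f} GB per 30 minutes")      # ~336 GB
```

That lands comfortably inside a 400 GB drive, consistent with the 30-minutes-per-drive figure above.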
Dailies are viewed via a secure, Web-based system that grants specific people access to dailies for a specific time frame. “VFX companies send pre-comps to [Fincher] that way,” according to Mavromates. “A lot of that is happening at David’s desk, on his desktop computer. We visit Digital Domain every Friday, but they’re feeding him shots every other day of the week, and sometimes getting feedback in 20 minutes. He can draw notes on frames, and type notes, and the VFX people can look at them. By the time we get there on Friday, we might be seeing shots on the big screen that have already gone through four or five iterations with director feedback.”
Filmmaker Suny Behar took a crack at explaining how to stop worrying and love the solid-state P2 system from Panasonic. “P2 is not your master,” he insisted. “It’s a temporary transport medium. It allows you to not be tethered to a record deck but still get very high quality.” He pointed to a P2 workflow where crew members carried P2 cards around in “hot pouches” and “cold pouches,” depending on whether the cards held “hot” camera footage or were “cold” and ready to be re-used. Large blue and yellow stickers were affixed to each card to indicate its stage in the post workflow.
And Panavision’s Marker Karahadian demonstrated the new SSR-1 solid-state recorder that snaps onto the Genesis (or the Sony F23) just like an HDCAM SR tape deck. Plugging the unit into a docking station (dubbed the SSRD) gives it full VTR functionality, including HD SDI output.
Paul Bamborough of Codex Digital appealed for a dramatic change in the production and post-production mindset, arguing that the two disciplines are bleeding into each other. “There are complicated commercial, political and all kinds of other issues, and nobody is forced to do anything overnight,” he said. “But this is the way people will end up working, and this is the right thing to happen.”
What’s in a Pixel?
Canon’s Larry Thorpe and Panavision’s John Galt made an effective tag team on the subject of digital cinema resolution — and why current ideas about the “resolution” of any given camera are at best sketchy and at worst misleading. Thorpe kicked off the session by noting that, while “pixels are synonymous with resolution,” it’s a mistake to think that you can measure a camera’s resolution by simply counting the number of pixels it outputs.
“More pixels do not necessarily create more resolution,” declared Galt, “but can harm overall image performance.” To explain, he recalled “a big argument with Japan” from his tenure as project leader on the group that developed the Panavision Genesis. “I argued vociferously for 1920x1080 RGB,” Galt said. “They were keen on building a 4K camera — which would have been one of the UHD 3840x2160 versions [proposed by NHK]. The main reason we didn’t do that is the pixel would get so small we’d lose two stops of sensitivity and two stops of dynamic range.” That’s because the size of the individual photosites on a 35mm-sized imager determines sensitivity and dynamic range: bigger is better, because bigger photosites capture more light.
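Galt’s two-stop figure follows directly from photosite geometry. As a quick check (assuming the same overall sensor area for both pixel counts, and that sensitivity scales with the light-gathering area of each photosite):

```python
# Quick sanity check on Galt's "two stops" figure. With the same sensor area,
# going from 1920x1080 to 3840x2160 photosites halves the pixel pitch, which
# quarters the area of each photosite; the loss in stops is log2 of that ratio.
import math

area_ratio = (3840 * 2160) / (1920 * 1080)   # 4x as many photosites in the same area
stops_lost = math.log2(area_ratio)           # each photosite collects 1/4 the light

print(f"{stops_lost:.0f} stops of sensitivity lost")   # 2 stops
```

The dynamic-range loss follows a similar, if rougher, argument: a smaller photosite also has a smaller full-well capacity, so highlights clip sooner relative to the noise floor.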
Galt then mounted an argument for MTF (Modulation Transfer Function), cascaded across all the components in an imaging system, as the single best measurement of “system resolution.” Determining the resolution of a film-based system, for instance, would require accounting for the MTF of a camera’s lens, the film negative, the interpositive, the internegative, the print, and the lens in the projector. The weakest link in that chain can have a dramatic detrimental effect on the final quality of an image. “Even if each of those parameters has a 90 percent MTF, the final system is only 53 [percent]. If each parameter is 90 percent except for one parameter that is 60, the final will be 35 percent.”
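Galt’s percentages come straight from multiplying the individual MTFs together (at a given spatial frequency), which a few lines of arithmetic confirm:

```python
# Cascaded MTF of a chain (camera lens, negative, interpositive, internegative,
# print, projector lens) is the product of the individual MTFs.
def system_mtf(components):
    """Return the cascaded MTF of a chain of elements."""
    result = 1.0
    for mtf in components:
        result *= mtf
    return result

print(f"{system_mtf([0.9] * 6):.0%}")           # ~53%: six elements at 90% each
print(f"{system_mtf([0.9] * 5 + [0.6]):.0%}")   # ~35%: one weak link at 60%
```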
“We have fallen into this trap of defining cameras in the context of 1K, 2K, 4K or megapixels,” Galt continued. “It’s only one parameter. The system MTF measurement is the only way to characterize a complete system. If the MTF is less than 35 or 40 percent, the image is going to be out of focus.”
A working knowledge of MTF factors in a given system can lead to some important conclusions, Galt said. For instance, he estimated that a “good Nikkor lens” has an MTF of only about 30 percent at 4K resolution. “If you’re scanning film at 4K — unless you have extraordinary scanning optics — you’re wasting money,” he said.
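To put that 4K figure in physical terms, here is a rough Nyquist calculation; the 24.9 mm full-aperture width is an approximation, and actual scan widths vary by format:

```python
# What "4K" asks of scanning optics on a 35mm frame: a rough Nyquist calculation,
# assuming a ~24.9 mm-wide full camera aperture (approximate; scan widths vary).
scan_width_px = 4096
aperture_width_mm = 24.9

pixels_per_mm = scan_width_px / aperture_width_mm
nyquist_lp_per_mm = pixels_per_mm / 2          # one line pair spans two pixels

print(f"{pixels_per_mm:.0f} pixels/mm, Nyquist ~{nyquist_lp_per_mm:.0f} lp/mm")
```

If the scanner’s optics deliver only around 30 percent contrast at that spatial frequency, much of the extra detail a 4K scan could theoretically capture over a 2K scan never reaches the file, which is the thrust of Galt’s point about wasted money.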
Thorpe said camera design involves a “fundamental compromise between MTF, sharpness and aliasing,” but noted that much of the pertinent information about a camera’s resolution is rarely published by manufacturers, including the designers’ use of optical or electrical low-pass filters to tweak the captured image and the sensor’s “fill factor” (the percentage of an imaging pixel that is actually light-sensitive rather than taken up by circuitry). He recommended the “MTF profile” as an important camera metric, suggesting that four MTF measurements be taken, at 200, 400, 600, and 800 lines of resolution, to create an accurate profile of a given camera.
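As a sketch of how such a four-point profile might be read against the 35-to-40-percent threshold Galt cited, here is a minimal example; the measurements in it are invented for illustration only:

```python
# Reading a four-point "MTF profile" against a 35 percent threshold.
# The profile values below are hypothetical, for illustration only.
FOCUS_THRESHOLD = 0.35

example_profile = {200: 0.85, 400: 0.62, 600: 0.41, 800: 0.28}   # lines -> measured MTF

for lines, mtf in example_profile.items():
    verdict = "ok" if mtf >= FOCUS_THRESHOLD else "below threshold"
    print(f"{lines} lines: MTF {mtf:.0%} ({verdict})")
```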
Digital Cinema: The Ugly Years
Wade Hanniball, VP of cinema technology at Universal, spoke on what he called “the ugly years” of digital cinema, as the studios and exhibitors struggle to maintain two parallel systems: traditional 35mm film distribution alongside digital cinema. “Digital cinema is a long-term strategy that demands discipline and no short-term expediencies,” he said.
Hanniball brought some horror stories with him, including the skin-of-our-teeth tale of how American Gangster was prepped for digital-cinema release. Previously, Universal had been taking a pass on digital releases of movies created without a DI (digital intermediate), like the film-finished Breach earlier in 2007, since that complicated the creation of a digital-cinema master. But by the time American Gangster hit screens, some circuits had actually gotten rid of their film projectors (!), which meant a digital release was suddenly mandatory.
Universal started with an HD telecine master (in Rec. 709 color space) and tried to convert the whole thing to a digital-cinema master (in XYZ color space). They got close, but ended up doing what Hanniball called “a painstaking, cut-by-cut conform of the picture” to get the color exactly right. The DCP was completed less than a week before the film’s release. “We learned a very valuable lesson,” Hanniball said. “Never do it this way again.”
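For a sense of what that conversion involves, here is the nominal, textbook math for taking Rec. 709 code values to DCI X’Y’Z’ code values. This is only a sketch, not Universal’s actual pipeline: the matrix is the standard Rec. 709/D65-to-XYZ matrix, the DCI constants (48 cd/m² reference white, the 52.37 normalizing constant, 2.6 gamma, 12-bit code values) are the published ones, and the flat 2.4 display gamma assumed for the HD master is an illustration-only simplification.

```python
# Textbook sketch of mapping a Rec. 709 HD master to DCI X'Y'Z' code values.
# Not Universal's actual pipeline; the 2.4 display gamma is an assumption.
import numpy as np

RGB709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rec709_to_dcdm(rgb, reference_white_nits=48.0):
    """Map normalized Rec. 709 code values to 12-bit DCI X'Y'Z' code values."""
    linear = np.power(rgb, 2.4)                           # assumed HD display gamma
    xyz = linear @ RGB709_TO_XYZ.T * reference_white_nits # scale peak white to 48 cd/m^2
    normalized = np.clip(xyz / 52.37, 0.0, 1.0)           # DCI normalizing constant
    return np.round(4095 * np.power(normalized, 1 / 2.6)).astype(int)

print(rec709_to_dcdm(np.array([1.0, 1.0, 1.0])))          # HD reference white in X'Y'Z'
```

Even when the math is right, differences in gamut, white point, and viewing environment between an HD grading suite and a DCI projector mean a one-shot transform only gets you close, which helps explain why the cut-by-cut trim pass Hanniball described was still needed.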
And, on a not-unrelated note, James Mathers did show-and-tell with a Red camera, deflating some expectations that the camera would be a truly affordable option for digital-cinema acquisition. As an example, he said a good lens will cost about three times the camera’s $17,500 price tag. Mathers’ own Red package, including prime lenses, represents an investment of about $100,000. As the required cash outlay becomes clearer, Mathers predicted some reservation-holders will get cold feet. “I don’t think they’ll sell anywhere near 4000, but still a sizable number,” he said.
By Bryant Frazer, StudioDaily