The Practicalities of Stereographic TV Production

While broadcasters continue to experiment with 3DTV and urge greater involvement of the production community, it’s worth considering how an independent producer might get to grips with this new medium. The good news is that, as daunting as stereography might appear, the workflow, technology and creative decision-making are not far removed from those of conventional shoots. The concern, though, voiced by the field’s few practitioners, is that stereo 3D can’t be learned overnight: it requires an understanding of what works and what doesn’t that can only be gained through experience.

“The process of self-shooting and editing 2D with off-the-shelf equipment is second nature for producers but each step in the 3D production chain needs re-thinking,” says Andy Millns, director, Inition. “Suddenly capture, monitoring, and editing seem more technologically intensive. The pitfalls are trying to shoot too quickly and not sorting out those problems that are easily resolvable at the camera end.”

With fellow 3D specialist Can Communicate, Inition has taken on most UK 3D trials, regularly combining resources on the larger projects.

“There have been several occasions where people have asked us to shoot for them and they want their director to work on it but unless that person has a deep understanding of 3D that won’t work,” says Duncan Humphreys, Partner, Can Communicate. “You have to understand what to shoot and how to shoot it. 3D requires a more regimented approach to filming. Nothing is more crucial than balancing the lenses at the start.”

Both companies have opted to work with existing broadcast infrastructure, largely because the cost of specialised rigs from Pace and 3ality (which need shipping from LA) is prohibitive outside of feature film budgets or shoots lasting only a few days. A wide array of standard broadcast cameras can be attached to P+S Technik rigs (exclusively supplied by Inition) or the ‘Calcutta’ rigs devised by Can, including SI-2K minicams, the Hitachi DK-32, the Toshiba IK-HD1 and Sony HDC-900 and HDC-950 T-block cameras.

“There are a dozen different ways you can screw up a 3D image and you need to recognise, isolate and diagnose them,” says Millns. “Physical errors include lens misalignment, sync issues, rolling shutters, and even subtle anamorphic squeezes created by some lenses. Currently available broadcast lenses and mounts aren't designed to work at the high tolerances required for 3D, especially for live broadcast where there is no margin for error. Mirror rigs in particular can introduce colour distortion into each channel and have issues with polarised light reflected off some surfaces.”
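Much of that diagnosis starts with simply measuring how far out of register the two eyes are. As a rough illustration, and not Inition's own tooling, the Python sketch below estimates vertical misalignment between a genlocked left/right pair by correlating their row-mean luminance profiles; the function name and search range are hypothetical.

```python
import numpy as np

def estimate_vertical_offset(left, right, max_shift=20):
    """Estimate vertical misalignment (in pixels) between a stereo pair.

    left, right: 2D luminance arrays of the same shape. Returns the shift of
    the right eye relative to the left that best matches their row-mean
    profiles; anything persistently non-zero points at rig misalignment.
    """
    lp = left.mean(axis=1)   # one luminance value per row, left eye
    rp = right.mean(axis=1)  # one luminance value per row, right eye
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = lp[max(0, s):len(lp) + min(0, s)]    # overlapping rows, left
        b = rp[max(0, -s):len(rp) + min(0, -s)]  # overlapping rows, right
        score = np.corrcoef(a, b)[0, 1]          # normalised correlation
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```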

Monitoring can be conducted locally at the camera or in the truck using devices which take two HD-SDI feeds, such as Transvideo’s CineMonitorHD 3DView, which includes an option to preview the image in anaglyph. Inition has devised the StereoBrain (SB-1) video processing unit, which does a similar job by allowing live viewing of a stereoscopic camera pair, or other genlocked 3D video source, on any of the current breed of 3D TVs (from Hyundai, Samsung, JVC). The SB-1 outputs the images via a DVI/HDMI signal in interlaced or side-by-side mode. It can also output anaglyph, or overlay left and right on a standard 2D HD-SDI monitor.
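As an illustration of the preview formats these monitoring tools offer, rather than the SB-1's actual processing, a red/cyan anaglyph composite can be built from a stereo pair along these lines:

```python
import numpy as np

def anaglyph_preview(left_rgb, right_rgb):
    """Red/cyan anaglyph: the red channel comes from the left eye, green and
    blue from the right, so the pair can be checked on any 2D monitor with
    anaglyph glasses. Both inputs are HxWx3 arrays of the same shape."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red from the left eye
    out[..., 1:] = right_rgb[..., 1:]  # green and blue from the right eye
    return out
```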

In using everyday broadcast equipment, the zoom function is sacrificed for live transmission. Since no two lenses track identically when zooming, even a fractional misalignment will lead to uncomfortable 3D viewing. Pace and 3ality overcome this with unique software which accounts for the particular calibration of each lens at every focal length and then makes automatic adjustments by way of a series of motors.
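The sketch below illustrates the general idea of such per-lens calibration, interpolating an alignment correction from a table of offsets measured at a handful of focal lengths. The figures and names are hypothetical and the snippet is not drawn from the Pace or 3ality systems.

```python
import numpy as np

# Hypothetical calibration table for one lens pair: measured image offsets
# (in pixels) of the right eye relative to the left at a few focal lengths.
FOCAL_LENGTHS_MM = np.array([9.0, 20.0, 45.0, 90.0, 180.0])
OFFSET_X_PX = np.array([0.0, 1.2, 2.8, 4.1, 6.5])
OFFSET_Y_PX = np.array([0.0, -0.4, -1.1, -1.9, -2.6])

def correction_for(focal_length_mm):
    """Interpolate the correction to apply at a given zoom setting; a
    motorised rig or downstream processor would apply the negative of the
    measured offset so the two eyes stay registered while zooming."""
    dx = np.interp(focal_length_mm, FOCAL_LENGTHS_MM, OFFSET_X_PX)
    dy = np.interp(focal_length_mm, FOCAL_LENGTHS_MM, OFFSET_Y_PX)
    return -dx, -dy
```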

Inition is developing the StereoBrain SB-1 unit to address this, but in the interim, and until major manufacturers solve the issue with readily available 3D rigs, it is the positioning of cameras which is critical.

“You don’t need high-end rigs since you can compensate with cranes, booms and peds, but if your camera placements are limited then you need more advanced equipment,” notes Brian Lenz, Sky's head of product design and innovation.

“If you structure the camera positions well enough I’m not convinced that zooming is totally necessary,” adds Humphreys. “Much like a top wide-view camera on a 2D football match you want to be able to breathe the zoom, to edge it in a little bit, but you don’t need to zoom from wide angle to very tight angle.”

A top wide-angle shot of a football match, however, will only give you the spectator’s 2D view of the field of play, so 3D needs to be introduced either in the form of spectators’ heads (as you’d experience at any stadium) or from close-ups, and it is those positions which are currently at a premium. The issue is compounded by the absence of a Steadicam rig light enough to hold two cameras and to transport dual RF streams in sync back to the truck.

“If you have a position a long way from the pitch the 3D will naturally look fairly flat,” explains Humphreys. “You can exaggerate it by widening the distance between the lenses, but that leads to miniaturisation, where the depth of the image doesn’t match a viewer’s expectation of the size of the players.” This is one issue that the BBC’s main 3D TV project, 3D4YOU, aims to solve.
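The trade-off Humphreys describes follows from the standard parallel-rig approximation, in which on-sensor disparity grows linearly with interaxial separation and shrinks with subject distance. A minimal sketch with illustrative, not measured, numbers:

```python
def sensor_disparity_mm(interaxial_mm, focal_length_mm, subject_distance_mm):
    """Parallel-rig, thin-lens approximation: disparity = interaxial *
    focal length / subject distance. Widening the interaxial at a distant
    position restores disparity, at the cost of miniaturisation."""
    return interaxial_mm * focal_length_mm / subject_distance_mm

# Illustrative figures only: a pitch-side pair at roughly eye separation
# versus a distant gantry pair widened to exaggerate depth on a long lens.
pitch_side = sensor_disparity_mm(65, 25, 20_000)    # ~0.08 mm on the sensor
gantry     = sensor_disparity_mm(300, 100, 80_000)  # ~0.38 mm on the sensor
```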

“You can bring in rigs for zoom but for the same price you have five different camera positions at different focal lengths,” says Millns. “The fact is that your best shots in a football or rugby match in 3D are achieved from closer positions pitch-side or low down in the crowd for that depth perspective, but you also need storytelling cameras.”

In a live environment, recording will typically be made to HDCAM SR, which can take dual-stream 4:2:2 on a single deck. Provided the timecode is interlocked and the cameras are genlocked, there should be no issue. Mixing can occur as standard, since the signal pairs will be received in the truck as one camera position.

In Sky’s model the left and right images are squeezed side-by-side into a single HD frame and transmitted by satellite (at up to 18Mbps) via current-generation Sky+HD boxes to 3D Ready TVs, where the images are re-interlaced. “In theory half of the pixels are thrown away for each eye, but in practice the resolution loss is negligible since the viewer’s brain is merging the two different perspectives, using depth cues, to create the 3D picture,” says Lenz. “The viewing zone is surprisingly wide in the living room: 20 degrees vertically and 45 degrees left and right.”
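A minimal sketch of the frame-compatible side-by-side packing described above, and of the receiver-side unpacking, assuming simple decimation and sample repetition in place of the proper filtering a broadcast encoder and receiver would apply:

```python
import numpy as np

def pack_side_by_side(left, right):
    """Each eye is horizontally decimated to half width and the halves sit
    side by side in one HD frame (crude nearest-sample decimation here)."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def unpack_side_by_side(frame):
    """Split the frame and stretch each half back to full width by sample
    repetition; a real receiver or 3D Ready TV would interpolate instead."""
    w = frame.shape[1]
    left = frame[:, :w // 2].repeat(2, axis=1)
    right = frame[:, w // 2:].repeat(2, axis=1)
    return left, right
```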

Another essential recommendation is to cut, whether live or recorded, more slowly than normal. “To me the biggest issue is how your camera operator frames, pans and tracks, and how a director chooses to edit,” advises Lenz. “We need to go back to some basics and linger longer on shots: fewer rapid jump cuts, slower pans. All of those things can highlight discontinuity in depth, but if you let the depth happen with good framing you will get a rich experience. The real value of 3D is to cause you to forget you are watching a video, and that comes down to slower, longer looks.”

Without the ability to view 3D in offline, making editorial decisions is tricky. “There are workarounds, such as giving a client an anaglyph version (encoded from the rushes) or using a 3D TV. If you don't have 3D viewing in offline, then you lose the ability for the 3D elements of a shot to influence and inform your editorial decisions,” says Millns.

Inition will perform a pre-grade using Iridas SpeedGrade in order to balance the left- and right-eye colorimetry differences introduced by mirror rigs, before the creative grade. Can Communicate prefers Quantel Pablo as its finishing tool. “You need to optimise the 3D in the online, taking a look at every shot and asking whether it works,” says Millns. “Is there pixel misalignment or keystoning (perspective distortion)? Are there any lens mismatch issues? Special attention must be placed on every cut point. Keyframing can be used to fade the transitions and prevent viewer discomfort as we shift perspectives.”
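As one example of the geometry fixes that crop up in the online, the sketch below uses OpenCV (assumed available in the finishing environment) to remove keystoning from one eye by warping four measured reference corners onto their counterparts in the other eye. The corner coordinates would come from a chart or alignment grid; nothing here is derived automatically.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def correct_keystone(eye_image, src_corners, dst_corners):
    """Warp one eye so its four reference corners land on the matching
    corners in the other eye, removing perspective distortion from a
    toed-in or misaligned rig.

    src_corners, dst_corners: 4x2 float32 arrays of matching points.
    """
    h, w = eye_image.shape[:2]
    homography = cv2.getPerspectiveTransform(src_corners, dst_corners)
    return cv2.warpPerspective(eye_image, homography, (w, h))
```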

Another consideration is where and how to place on-screen graphics. “A strong 3D image with graphics or subtitles placed over the top of it can get confusing because it interferes with your perception of the main 3D object,” says Humphreys. “One route is to show match stats much like we see a golf leader board with the graphic brought up over a single image.”
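One way to read the confusion Humphreys describes is as a depth conflict: a graphic's apparent depth is set by its screen parallax, and if it overlaps scenery that appears nearer than the graphic the eye cannot reconcile the two. A minimal sketch, with hypothetical function and parameter names, of burning a caption into both eyes at a chosen parallax (zero keeps it at the screen plane; a negative value floats it in front of the scene):

```python
import numpy as np

def composite_caption(left, right, caption, x, y, parallax_px=0):
    """Burn a caption into both eyes at a chosen screen parallax (pixels).

    parallax_px = 0 places the caption at the screen plane. A negative value
    shifts the left-eye copy right and the right-eye copy left (crossed
    disparity), floating the caption in front of the scene so it never
    appears to sit inside a strong 3D object behind it.
    """
    ch, cw = caption.shape[:2]
    d = parallax_px // 2
    left_out, right_out = left.copy(), right.copy()
    left_out[y:y + ch, x - d:x - d + cw] = caption   # left-eye placement
    right_out[y:y + ch, x + d:x + d + cw] = caption  # right-eye placement
    return left_out, right_out
```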

It’s extremely difficult right now to produce 3DTV on a budget. At the very least you need an extra crew member (stereographer) to advise the DP and DI process on every shot. Lenz puts the cost at an additional 40-50% but wisely notes that this profile fits that of HD’s early days. Naturally as more 3D is produced, prices will drop.

“I would say Keane from Abbey Rd in 3D was more than experimentation, it was a proof of concept,” says Vicki Betihavas of Nineteen Fifteen who co-produced the event for Sky. “In order for content to be immersive and well made you need to apply the basic standards: good creative overview; budget; content that lends itself to the process; talented technicians and production team.

“These are no different for 2D, but in 3D you need to understand, as a producer, why you are doing it. It’s all about good 3D. If we just produce any 3D then we won't move the case for 3DS very far.”

By Adrian Pennington, TVB Europe