EBU Acquisition Technical Metadata Set

This version 1.0 of the “Acquisition Metadata Set” specification has been developed by the EBU’s Expert Community on Metadata (ECM), under the umbrella of the EBU HIPS Strategic Programme (SP-HIPS). The goal of this Strategic Programme has been to define solutions to improve interoperability in HDTV production (audio and video encoding, wrappers, metadata and SDI interfacing). HIPS-META is the SP-HIPS subgroup on metadata.

The “Acquisition Technical Metadata Set” is a set of metadata collected at capture, through interfaces, from live cameras or camcorders. It is intended to improve interoperability in the exchange of material. The set has been jointly agreed by EBU Members (users) and manufacturers for use in tapeless file-based or live production environments.

The “Acquisition Technical Metadata Set” is organised into three clusters of video and audio metadata: camera device (shooting parameters), lens device (settings and identification) and microphone device (identification). This document provides only the definitions of the relevant metadata attributes.

For cameras and camcorders, it is expected that the file format used for exporting the essence will be MXF. The metadata structure should support exchange in the natively available MXF (export) and XML (import and export) formats. Additional EBU specifications provide implementation guidelines for different configurations (e.g. using KLV and XML encodings for exchange inside MXF files, or using separate XML files). Descriptive metadata sets are specified in a separate EBU specification.
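
To make the clustering concrete, the sketch below shows one way such a set could be serialised as a standalone XML sidecar file, using Python’s xml.etree.ElementTree. All element names and values are illustrative placeholders, not the normative attribute names defined in the EBU specifications.

```python
# Hypothetical sketch: serialise the three acquisition metadata clusters
# (camera, lens, microphone) to a standalone XML sidecar file.
# Element names and values are placeholders, not the EBU-defined ones.
import xml.etree.ElementTree as ET

root = ET.Element("AcquisitionMetadata", version="1.0")

camera = ET.SubElement(root, "CameraDevice")      # shooting parameters
ET.SubElement(camera, "IrisFNumber").text = "2.8"
ET.SubElement(camera, "ShutterSpeed").text = "1/50"
ET.SubElement(camera, "WhiteBalance").text = "5600K"

lens = ET.SubElement(root, "LensDevice")          # settings and identification
ET.SubElement(lens, "Manufacturer").text = "ExampleLens Co."
ET.SubElement(lens, "SerialNumber").text = "LENS-1234"
ET.SubElement(lens, "FocalLength").text = "35mm"

mic = ET.SubElement(root, "MicrophoneDevice")     # identification
ET.SubElement(mic, "Manufacturer").text = "ExampleMic GmbH"
ET.SubElement(mic, "SerialNumber").text = "MIC-0042"

ET.ElementTree(root).write("acquisition_metadata.xml",
                           encoding="utf-8", xml_declaration=True)
```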

More information on the role of this specification with regard to other related EBU metadata specifications is provided in the ‘metadata’ section of the EBU TECHNICAL website.

Metadata 3D Initiative

The Metadata 3D Initiative (m3Di) is proposing a standard for 3D metadata communication, intended to make the lenses, cameras, rigs and stereoscopic image processors used in 3D productions interoperable. Products complying with the standard would let customers replace any item in the production chain with no major impact on the 3D workflow, giving freedom of choice and ensuring interoperability.

SMPTE Makes Closed-Captioning Standard Freely Available for Online Video

The Society of Motion Picture and Television Engineers (SMPTE) announced that it will make its standard for closed-captioning of online video content, known as SMPTE Timed Text and designated SMPTE 2052, available free of charge. The announcement comes as the FCC prepares to adopt rules giving people with disabilities access to Internet video content with closed captioning.

The SMPTE closed-captioning standard provides a common set of instructions for authoring and distributing captions or subtitles for broadband video content. TV content providers therefore need only one method for providing captions, covering both new digital content and previously captioned analog programs, rather than custom approaches for each Web browser or media player. The standard, which leaves room for innovation, is media-device and media-player agnostic, allowing manufacturers to develop products without worrying about interoperability issues.
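
By way of illustration only, the sketch below writes a minimal caption document in W3C TTML, the format on which SMPTE Timed Text is built; the timings and caption text are invented, and none of SMPTE-TT’s additional features are exercised.

```python
# Minimal, hypothetical TTML caption document (SMPTE Timed Text is a
# profile of W3C TTML). Timings and text are invented for the example.
TTML_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Example caption line.</p>
      <p begin="00:00:04.000" end="00:00:06.000">A second caption.</p>
    </div>
  </body>
</tt>
"""

with open("captions.ttml", "w", encoding="utf-8") as f:
    f.write(TTML_SAMPLE)
```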

Source: Broadcast Engineering

Broadband Enabled Device Matrix



Click to download the full resolution document


By Dan Rayburn, Business of Video

Camera-Mounted Recorders Comparison

One of the big stories at NAB 2011 was the release of numerous camera-mounted HD recorders. From the AJA Ki Pro Mini to the Sound Devices PIX 240, a huge variety of portable recorders is becoming available.

These recorders all aim to increase recording quality and streamline workflows, but they differ greatly in form, function and price point. To help navigate the options, we created a Camera-Mounted Recorder Comparison Chart covering several of these recorders.



The chart includes details on recording format, media, inputs and other information that should help you decide which recorder might be right for you. We also included the high-end recorders from Codex and S.two, which feature ARRIRAW recording capability. ARRIRAW is a 14-bit raw, uncompressed format that can be output from the ARRI ALEXA camera.

NOTE: Several excellent recorders from Sony, Panasonic, AJA and others are not included, because we wanted to focus on recorders that are designed to be mounted on a camera for production.

By Ian McCausland, CineTechnica

Introduction to the Interoperable Master Format (IMF)


Click to watch the video


By Jerry Pierce, IMF Forum

Switchable Lenses and Polarizers Grab Attention at SID

I’m attending and reporting from SID’s DisplayWeek confab in Los Angeles this week. What I have seen so far is a lot of interest in LCD technology that can create switchable lenses or can rotate polarization. The first is of interest to autostereoscopic display developers, while the latter is of interest to display makers who offer passive-polarized 3D glasses-based solutions.

One of the more common approaches to making an eyewear-free 3D display is to use a lenticular sheet. This is a plastic sheet that consists of long cylindrical lenses that are aligned diagonally to the LCD imaging panel. This helps direct views to the eyes in 3D mode, but it can distort the image if you want to use the panel for 2D content.

To address this, researchers are developing liquid crystal device structures that act as a lens in one state and are optically transparent, with no lens effect, in the other. This allows the lenses to be switched on or off, increasing the utility of the panel. An undistorted 2D image is certainly a requirement for an autostereoscopic 3DTV solution, and switchable lenses are one way to get there.

The first products based on switchable lenticular arrays are coming this year, most notably a laptop from Toshiba. I don’t know if this is on exhibit at SID yet (I will find out), but there were a number of interesting papers given by researchers from the U.S. and Taiwan.

For example, researchers from Kent State University described a project that created an LC lens with a tunable focus, exploring and explaining both diffractive and refractive designs.

The University of Central Florida is quite active, with several projects including a design that electronically shifts the switchable lens laterally and a new switchable-lens design using blue-phase liquid crystal. You will be hearing more about blue-phase LCs because they are capable of very fast switching, potentially enabling field-sequential LCDs. That would let panel makers triple resolution (no spatial color-filter matrix) and reduce costs. The university is very active in this area, and commercialization of products now seems 2-3 years off.

National Chiao Tung University in Taiwan is also quite active and showcased several design options.

Polarization switching is the other big theme. The concept was highlighted two years ago at SID by LG Display, which calls the process Active Retarder. They presented a paper on it again this year, but it would seem most of their efforts at this time are focused on Film Patterned Retarder (FPR) rather than Active Retarder. The key advantage of the Active Retarder approach is that it allows full-resolution images to be delivered to each eye with passive polarized glasses and without vertical viewing-angle restrictions (FPR offers half resolution per eye with some vertical viewing-angle restrictions).

But two other groups are taking up the charge. One is RealD and Samsung LCD, which showed their Active Shutter version of the technology at SID. We saw their demos in a private suite at CES and we were quite impressed. I have not yet had a chance to see it on the show floor at SID.

The other group is led by LC Tech, which has developed a very fast (400 Hz switching) double-LCD-cell approach. This has been commercialized as a polarization switcher by Lightspeed Design for cinema and pro-AV applications. LC Tech is now working with Seiko Epson to adapt the technology to LCD panels.

These polarization switching panels are fairly simple devices. Both LGD and RealD use a scanning approach (RealD uses 8 bands that switch in synchrony with the row scanning), while LC Tech uses a single cell.

The big question for commercialization in flat panels is the cost of this additional LCD element. Clearly, current TFT LCD fabs are far too costly for making this component, so these parties are trying to understand the manufacturing options and fab investment costs to determine if and when commercial products will come. Samsung and RealD seem comfortable enough with the solution to have announced availability of 23-inch and 27-inch monitor products by the end of 2012, but did not say when TVs of up to 55 inches would come to market. Essentially, the industry will need a Gen 8 to Gen 11 fab for TN/STN panels for these polarization switchers if cost-effective LCD TVs are to come to market.

By Chris Chinnock, Display Daily

Sky Trials 3Ality Software + Integrated Cameras

BSkyB has begun testing 3Ality’s 3space suite of software tools for automated alignment and convergence, and is confident its use can reduce the number of 3D crew required on location. Sky Sports is also to test a number of twin-lens integrated camcorders.

The 3space software is being tested on a range of Sky Sports productions, including the Manchester City vs Tottenham match and the Heineken Cup rugby final.

“The principle is to get from a situation now in which five to eight operators all have slightly different ideas of depth budget toward a more controlled environment,” said Darren Long, Director of Operations at Sky Sports. “3Ality suggest one operator could manage five camera pairs. We think 2-3 cameras are manageable, with operators treated more like vision mixers and intelligent software between cameras allowing smoother transitions in depth. Even one operator managing 2-3 cameras is a massive step change.

“The software isn’t perfect, but that is what the testing is about. As the software matures, and as we build more logic into it, we’ll be able to take an analysis of 50 football matches and say this is the kind of depth and pace we are looking for and build that into the editorial plan. The Holy Grail is to have one or two stereographers managing the whole show. Long term, that may happen.”

Sky is also trialling 3Space on entertainment shows, with music concerts perhaps better suited to bringing multiple camera pairs under the control of a couple of stereographers because of the slower editorial pace of such events.

3Ality’s automatic lens line-up procedure (IntelleCal) shows particular promise. IntelleCal profiles and matches lenses and performs alignment on five axes to automatically align two cameras on a rig, at the push of a button.

According to Long: “The issue with rigs is the amount of time it takes to set the lenses on them. We have to get to a position where the process is automated. I saw a demo of the system which effectively had the cameras lined up in five minutes. That sort of speed would be an incredible boost to our operations.”

Sky is also about to beta test a range of new integrated cameras, including shoulder-mount camcorders such as the Sony PMW-TD300 and Panasonic’s AG-3DP1, for Steadicam work.

“I want to see more of these types of camera because we are desperate for them,” said Long, who has tasked Aerial Camera Systems with devising smaller form factor stereo-cams for behind the goal soccer action.

Long added that Sky was also to test Avid’s new stereo 3D editing software with a view to installing it at Sky’s production facility “as a middle ground between Final Cut Pro and Mistika.”

He added: “The manufacturing market has gone from phase one of stereo into phase two, which is to fill in the gaps in the 3D production toolbox. There is a lot that is promising out there but we need to keep pushing manufacturers to deliver.”

By Adrian Pennington, TVB Europe

CPG Outlines its 2D-3D Vision

The Cameron Pace Group (CPG) wants to demonstrate its technology in Europe by covering a football match, to prove its vision of simultaneous 2D and 3D production. Vince Pace, co-founder and CEO of CPG, insists that he and partner James Cameron share a fundamentally different view from most European 3D broadcasters of how TV coverage can be enhanced.

“I designed our technology with soccer in mind,” Pace told TVB Europe. “I would love to do a soccer game and CPG will get us to that effort. Until CPG gets the tools into their [broadcaster’s] hands it is hard for us to change prevailing perceptions. It would be great to collaborate with Sky on rugby or soccer and they could get a chance to see those differences.”

Pace was responding to the somewhat sceptical reaction of key European 3D sports producers to Pace and Cameron’s vision (evangelised at NAB) that all TV productions should be able to derive a 3D feed from 2D editorial and camera positions.

Darren Long, Director of Operations at Sky Sports admits to feeling “incredibly surprised by [Cameron’s] comments, bearing in mind how knowledgeable he is about 3D, because I don’t see how you can treat every production the same. It’s true to an extent that some sports can be done in 2D and 3D. But there are certain sports where a normal 2D cut with lots of jumping around will simply not work for the viewer in 3D.”

Sky is trialling dual 2D/3D operation but picking its sports carefully, Long revealed. “Darts works. Boxing is totally possible, snooker also, and other sports where the action is constrained in one area and we’re not swinging cameras around. With football though, you will get away with some joint editorial, but not all. How can a director focus on getting the excitement into 2D and 3D if there is one cut and one set of cameras?”

Pace acknowledges the difference of opinion. “The prevailing view is that 3D is a standalone product which is getting 2D to convert to a 3D methodology, but that is not our direction,” he said. “We recognise that 2D is the revenue stream and we don’t want to be disturbing that. Our view of the broadcast challenge is to concentrate on enhancing the viewing experience without treating 3D as a different product. When people went to see Avatar they enjoyed a good film, which 3D enhanced but wasn’t solely responsible for that enjoyment.”

CPG’s headline-grabbing launch at NAB and its declaration that it would target broadcasting were heavyweight messages intended to shake up the 3DTV space, but it’s not as if PACE (his previous company) was not already active in it.

PACE has completed over 40 live 3D sports broadcasts (and several non-sport events), working closely with ESPN, Fox Sports and the NBA and designing and building two dedicated 3D trucks for NEP Visions. PACE recently won a Sports Emmy Award with CBS Sports for the 2D/3D production of the 2010 US Open Tennis Championship.

“Although I think the direction is fundamentally different for us we’re not just the Hollywood guys shouting at broadcast with loudhailers,” said Pace. “We have learnt from working with ESPN and we continue to learn. Our approach to designing technology is to use as many of the 2D assets as we can, to tell the story of sports with a 2D foundation and elevate the viewing experience to another level.”

For Pace, the best way of telling the story of a live sports game has already been devised by 2D producers, so why reinvent the wheel?

“I used to say, like many others, that 3D is the best seat in the house. But I now realise that the person sitting in the best seat in the house is the 2D camera guy and the 2D director. That’s how it has been planned so we have to find ways of working with that, not against it.

“Another example - there is a lot of value in the colour and commentary of a 2D sportscast with which people are familiar. Are we going to have to condition people to accept separate commentaries?”

The experience of the Sony/HBS production of the FIFA World Cup and also of Sky Sports’ 3D coverage of English Premiership Football leans toward a slower pace of cuts, judicious use of Steadicams at pitchside, slower pans and a belief that the Camera One gantry position doesn’t provide the depth of field to add anything for the 3D viewer. Sky is negotiating with English soccer stadia to locate its rigs on lower positions.

This editorial philosophy is at odds with that of CPG, and Pace recognises that he has a fight on his hands to convince broadcasters to alter their perceptions.

“The technology feels like it is restricting the editorial vision at this point – that you can’t move cameras fast, that you have to frame differently in 3D, that the high up angle is flat,” he said. “When you are dealing with tools at a basic level it pushes you into an interpretation of 3D that is unfair. This is what happens when you don’t have the right tools to experiment with. The technology should be working with the subject matter, not against it.”

Pace points to the Shadow system (a combination of rig and software), used by CBS Sports to produce 2D and 3D coverage of the US Masters golf tournament in April, as an asset that CPG believes can deliver shared 2D/3D technical and editorial operations.

The Shadow D rig stacks a 3D camera next to a longer 2D camera lens and allows one camera operator to control and drive both cameras with one set of controls, capturing 2D and 3D images simultaneously. This allows 2D camera operators to capture 3D without learning new production techniques, reduces 3D production costs and, says Pace, solves the conflict of 2D and 3D camera placement around a sports venue.

“The key is camera balance,” says Pace. “If you have continuity in stereo you will have continuity in 3D storytelling so that when cutting between 8 or 18 cameras you are aware of how all the cameras complement each other.

“With Shadow Vision I can read a distance to a subject, read focal length and adjust my framing based on that information. I can either take the 2D shot or I can decide to adjust my shot for 3D by keyframing the difference between the 2D and 3D frame.”

In practice, Pace says, he often finds that the standard 2D shot is the optimal one for 3D.

“On tennis for example we initially decided to be close up on the player’s face for 2D because 2D likes reaction shots, and we keyframed for a wider shot on the player for 3D. But after a while we decided to match 2D exactly and it worked perfectly. The reason was that 2D was going for the reaction shots, which is critical to a viewer’s understanding of the flow of the game regardless of the format you are watching in.”

There is, in fact, a great deal of common ground between Cameron-Pace and, say, the BSkyB position, not least in the drive to reduce operational costs by using technology to streamline production and cut the number of convergence and 3D technicians as far as possible.

“I had six cameras on the US Masters event with one person supervising camera performance and overall creativity - no operators. You can divest yourself of the expense of travel, hotel, salary and food costs for additional and unnecessary crew today,” said Pace.

“The problem is that we are all discussing the 10-30% difference between 2D and 3D when we should be working on the 60% that is positive and the same. I think the viewer wants an enhanced viewing experience, yet we get so caught up on how this camera pan won’t work, or this cut won’t make it in 3D – shots which wind up being 5% of the total show. As an industry we are fixated on that 5% instead of the real heart of the production. If we can elevate 2D production into an entertainment experience people are willing to pay for then we will have accomplished our goal.”

By Adrian Pennington, TVB Europe

3D Lens Adapter for ENG

A prototype lens adapter made by South Korean developer Wasol in co-operation with Ikegami and marketed by Korea's K2E can turn any standard 2D professional camera into one capable of capturing 3D.

The 3D Lensys adapter can be fitted to a conventional lens in about 15 minutes and works by means of a field-sequencing device rotating in front of two small lenses. It weighs 21 kg.

The field sequencer rotates 60 times a second, alternately exposing the adapter’s two lenses so that each receives light at 30 frames per second. The combined 3D signal is output as HD-SDI or component.

“3D Lensys allows you to use only one camera and to shoot in the same way as you would with existing cameras,” said Jin Kin, overseas sales and marketing for K2E. “You save time and money by using your existing equipment throughout the production process.”

K2E said the Lensys can be built to order. It was shown at NAB working on an Ikegami shoulder mount body.


Click to watch the video


Source: TVB Europe

Le Livre Blanc du Relief (3Ds) au Cinéma et à la Télévision

For French readers: a white paper on stereoscopic 3D (“relief”) in cinema and television, published by Ficam and several other organizations (CST, UP3D, HD-Forum and AFC).

The Care and Feeding of 3D Signals

If transporting live stereoscopic video streams for 3D television was as simple as using two standard video circuits, then the decision to broadcast live 3D would only be a matter of economics. Producers could just book two contribution circuits in place of one, and deliver their content to the production studio.

The reality, unfortunately, is much more difficult. 3D video signals require a great deal of care throughout the transport network to ensure that they arrive in a usable form at their destination.

To create the illusion of 3D using stereoscopic video streams, the human eye/brain combination must be persuaded to see the two different images shown to each eye as being from the same original scene, with just a slight horizontal offset. Maintaining this illusion requires that essentially all other aspects of the video signal are identical—the exposure, the color balance, the resolution, and of course the synchronization between the two sets of image streams. Transmission systems must be designed to avoid any changes that would change the appearance of one stream relative to the other.

3D Transmission Impairments
One of the most important issues in transporting 3D video is maintaining frame-accurate synchronization between the two video streams. If the left eye image gets out of timing alignment with the right eye image by even one frame, the 3D illusion can be lost. In transmission, loss of synchronization can occur for a number of reasons, including:

Different Routes: If the two video streams take different routes through the network, they may arrive at the destination at different times. This can occur if one route is longer than the other, or if one stream traverses more devices.

Data Loss: If one stream is affected by bit errors or other data loss during transmission, its delivery can be delayed by error-correction techniques such as ARQ (Automatic Repeat reQuest).

Queuing Delay: IP routers commonly use packet queues to manage the loads on telecom channels. Delays can occur if one stream passes through a congested router or is preempted by higher-priority traffic. (One common receiver-side mitigation for all three causes, re-pairing frames by timestamp rather than by arrival order, is sketched after this list.)
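
A rough sketch of that timestamp-based re-pairing, with a hypothetical Frame type (this is not any particular vendor’s implementation): frames whose presentation timestamps (PTS) match are emitted as a stereo pair, and frames that lose their partner are dropped rather than shown one-eyed.

```python
from collections import deque

class Frame:
    """Hypothetical received frame carrying a presentation timestamp."""
    def __init__(self, pts, data):
        self.pts = pts            # presentation timestamp, in ticks
        self.data = data

def pair_stereo(left_buf: deque, right_buf: deque):
    """Yield (left, right) frame pairs whose PTS match.

    Frames whose partner never arrives are discarded, so route or
    queuing skew between the two streams cannot break eye pairing.
    """
    while left_buf and right_buf:
        l, r = left_buf[0], right_buf[0]
        if l.pts == r.pts:
            yield left_buf.popleft(), right_buf.popleft()
        elif l.pts < r.pts:
            left_buf.popleft()    # left frame lost its partner; drop it
        else:
            right_buf.popleft()   # right frame lost its partner; drop it
```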

Compression can also cause impairments in 3D video streams. For example, the two streams could be compressed using MPEG encoders that are not synchronized with respect to their GOP (Group of Pictures) structure. This would allow, say, one frame of the left eye stream to be encoded as an I-frame while the corresponding frame of the right eye stream is compressed as a B-frame. The viewer could then notice, consciously or subconsciously, a difference between the two image streams, impairing the 3D effect.
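
One simple way to detect this kind of mismatch is to compare the picture-type sequences of the two encoded streams. The sketch below assumes those sequences have already been extracted with a stream analyser; it only checks that the GOP structures are in phase.

```python
def gop_aligned(left_types: str, right_types: str) -> bool:
    """True if corresponding frames in both eye streams use the same
    picture type ('I', 'P' or 'B'), i.e. the GOP structures are in phase."""
    return len(left_types) == len(right_types) and all(
        l == r for l, r in zip(left_types, right_types)
    )

# The failing case below is the one described above: an I-frame in the
# left eye stream lines up with a B-frame in the right eye stream.
print(gop_aligned("IBBPBBPBB", "IBBPBBPBB"))  # True  -> in phase
print(gop_aligned("IBBPBBPBB", "BBIBBPBBP"))  # False -> out of phase
```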

Broadcasters’ Choices
Two primary alternatives are available today to broadcasters for contribution video: dual stream transport and frame compatible transport. As shown in Fig. 1A, the dual stream approach creates two distinct video streams, one for the left eye image and one for the right eye image. Fig. 1B shows the same 3D sequence in a side-by-side frame compatible mode, where the left eye and right eye images are combined into a single video frame prior to transport.


Fig. 1A: Dual Stream Method, Fig. 1B: Frame-compatible Method
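
As a minimal sketch of the side-by-side packing in Fig. 1B (using NumPy; a real packer would low-pass filter before decimating rather than simply dropping columns):

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Halve each eye's horizontal resolution and tile the halves into
    one frame of the original size (crude column-dropping decimation)."""
    assert left.shape == right.shape, "eye images must match in size"
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

# Example: two 1920x1080 eye images become one 1920x1080 packed frame.
left  = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(pack_side_by_side(left, right).shape)   # (1080, 1920, 3)
```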


For contribution, ESPN has chosen to use the dual-stream approach, combined with some safeguards, according to Emory Strilkauskas, Principal Engineer, Transport Technologies at ESPN.

"For contribution, we elected to deliver full resolution left/right feeds from the venue to our production facility," he said. "This preserves the highest picture quality for use in our 3D workflow. The challenge with this choice is maintaining frame alignment between the left and right picture. The easiest way to accomplish that with what is available is to combine both signals into one MPTS [Multi Program Transport Stream] which ensures that both signals always travel the same path.

"Several manufacturers have also worked on the encode/decode process to ensure that that is also consistent between the left and right signal," he adds. "For us, contribution requires us to double our bandwidth. We use the newest compression solutions and modulation schemes to offset some of that cost."

Rick Ackermans, Engineering Technology Fellow at Turner Broadcasting, on the other hand, advocates the frame-compatible transport approach. "Given the current state of the available technology, I have had good success in transporting contribution video using frame-compatible stereoscopic streams," he said. "I feel that this approach ensures that video framing and compression GOP are always synchronized between left and right eye signals. While the loss of resolution is an admitted trade-off, the current economics of 3D make it a worthwhile method for production and transmission at this time."

Clearly, the approaches being used for live 3D content contribution are evolving. There are trade-offs in using either dual stream or frame-compatible transport in areas such as image quality, cost of equipment and bandwidth consumption. As these technologies mature, look for improvements in encoder and transport technologies that will help bring down the costs of 3D to make it feasible for more live events.

By Wes Simpson, TV Technology