Delivering a Tapeless Model

When it comes to delivering programmes for playout, the broadcast industry has hit the rewind button on recent technical advances: most programmes are still delivered on HDCAM SR tape, even though an increasing number of shows are shot with tapeless cameras and edited in a file-based workflow.

The earthquake and tsunami that struck Japan in March, and the knock-on effect on manufacturing in the country, brought the industry’s reliance on the Sony format sharply into focus.

With the Sony factory not due to restart production of SR tape until next month, the priority for broadcasters has been to secure enough tape stock to keep programmes on air. But the debate has moved on to broadcasters’ long-term plans for the file-based delivery of programmes.

The move towards a truly tapeless, end-to-end workflow has been planned for some time, but there are still some significant obstacles that need to be overcome. The key, according to Sohonet chief technical officer Ben Roeder, is the agreement of a common file standard.

“With tape, there is only one codec, so if it is a DigiBeta tape, it has a DigiBeta codec, and the same for HDCAM. But once you deliver a file in a container format like MXF, it can have JPEG 2000, MPEG-2, MPEG-4 AVC and so on. Because there isn’t any standardisation, broadcasters don’t have the workflows in place.”

Adopting Standards
The Digital Production Partnership (DPP), which comprises representatives from the BBC, ITV, Channel 4, Channel 5 and Sky, plans to release a basic specification for file-based delivery towards the end of July.

The DPP HD file format will be based on the AVC-Intra (AVCI) standard and MXF-wrapped. Kevin Burrows, chief technology officer, broadcast and distribution, Channel 4, and the broadcaster’s lead at the DPP, describes it as a “futureproof, industry standard”.

“We’re also working on a metadata scheme, which will typically include the information found on a tape-based record report, with additional information such as editorial data, aspect ratio, audio track layouts, text, access services and product placement information,” he says.

As part of the process to agree a specification, the DPP is working with the Advanced Media Workflow Association to make some changes to the AS-03 standard, because it doesn’t support HD at the required profile. The new standard will be named AS-11, and the DPP is working on standardising the remaining data fields so that they are based on existing SMPTE, EBU and TV Anytime standards.

“All vendors currently support MXF and the key is to base it on an open standard - in our case, AVCI, which is already integrated by many vendors such as Panasonic and Avid,” says Burrows.

One of the problems presented by internet-based delivery is confirming that a programme has been received. Where a programme is delivered on a drive, processes similar to tape-based delivery are in place, but those that opt for FTP will need a record of delivery. The DPP’s guidelines won’t deal with the mechanism of delivery but will include a checksum to establish whether a file has been corrupted.
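
That kind of checksum comparison is simple to implement. Here is a minimal sketch using Python’s standard hashlib; the file name and expected digest are hypothetical stand-ins for what a delivery manifest would carry:

```python
import hashlib

def file_checksum(path, algorithm="md5", chunk_size=1 << 20):
    """Compute a checksum in chunks so large media files never
    have to be held in memory at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The sender publishes the digest alongside the file; the receiver
# recomputes it after transfer. A mismatch means the file was
# corrupted in transit and should be redelivered.
received = file_checksum("programme.mxf")            # hypothetical file name
expected = "9f86d081884c7d659a2feaa0c55ad015"        # hypothetical manifest value
if received != expected:
    raise ValueError("checksum mismatch - request redelivery")
```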

The issue of security when delivering over FTP is also something that is likely to be specific to each broadcaster. “In terms of receipts and confirmation of delivery, that really is a workflow issue,” says Burrows. “It is not something we have thought about in detail, but it might be something we could look at if it looks like it will be important to have.”

Early Adopter
Earlier this year, Sky updated its programme delivery guidelines to include file-based delivery. Head of broadcast strategy John Lennon says the task of engaging with content suppliers to share technical formats was made easier because 85% of content comes from 18 key suppliers.

At the moment, only “a small percentage” of the 25,000 hours of programming that is delivered to Sky every year is delivered as files, but that will change by October, when Sky expects “wholesale” delivery of file-based programme content.

“Pretty much every supplier is up for file-based delivery because it is cheaper, faster and kinder to the environment,” says Lennon. “And for programmes that come from the US, it means we have the ability to get on air as soon as our rights deals allow.”

Sky asks productions to deliver programmes using the AVC-Intra 100 codec: intra-frame video encoded at 100Mbps, with 4:2:2 luminance and colour-difference sampling, at level 4.1 and with a 10-bit sampling depth. Despite the stipulation, Lennon says Sky appreciates that it won’t always be possible for all programmes to be delivered according to the guidelines, but he wants deviations to be “the exception rather than the norm”.

Sky’s move to its Harlequin 1 building has provided the perfect opportunity to ditch legacy systems in favour of new kit and workflows. But many broadcasters still have tape-based kit that was installed less than ten years ago.

Michelle Munson, president and co-founder of California-based file transfer software firm Aspera, says there is a marked difference between the approach of broadcasters in the US and UK.

“Because there are more state broadcasters in Europe, the operation is typically very focused on long-term processes,” she says. “They often have systems they expect to keep for many years to come, and they have a very careful process of selection.”

It is not just the broadcasters that need to update their systems and practices. Steve Plunkett, director of innovation at Red Bee Media, says when his company moved into the Broadcast Centre in 2003, the intention was to be entirely file-based.

“We very quickly found ourselves having to build large tape ingest systems because the industry just wasn’t ready,” he says. “It’s difficult to overestimate the number of legacy systems out there, such as production companies that work off tape and have found it difficult, commercially and from an expertise point of view, to move to file-based systems.”

As well as kit, working practices need to be updated because without any modification to the workflow, the benefits of moving away from tape won’t be realised. But implementing adjustments to the way people work is not always easy.

JCA commercial director Matt Bowman says first and foremost, broadcasters need to pay the same qualitative attention to a file as they would to a tape.

“Often, file-based management becomes the domain of the IT department, but they aren’t always experts in video and those two departments need to be kept apart. In a broadcast environment, you have to be disciplined - you can’t, for example, drag and drop as you would in a consumer situation.”

Another problem posed by file-based workflows is keeping track of multiple changes and making sure multiple versions are brought back to one master copy. “Tape locked you into a linear process of editing, compliance, QC and so on, so everyone knew where the asset was,” says Plunkett.

But if it is done correctly, the main benefit of a file, he says, is that it can be in a multitude of different places at the same time and worked on by different departments, potentially saving broadcasters both time and money.

Chello DMC on Going Tapeless
Chello DMC plays out 60 channels including Fox International Channels and 10 National Geographic and NatGeo Wild-branded channels in Amsterdam. Jon Try, vice-president of technology, says almost all HD content arrives at the Amsterdam playout centre as files.

“Whenever we launch a channel, we try to set up a tapeless workflow,” he says. “We can’t tell clients [to go down that route], but we do say it’s better that way because it saves tape and delivery costs, and fits into our workflow more easily.

“A common standard is a good idea but, in practice, people already have a set standard for themselves and their suppliers - it needs to be something they can transition to relatively easily. We started out dictating what spec they delivered to us, but everyone has a different format. We have the ability to transcode so we can re-wrap it for TX if necessary.”

By George Bevir, Broadcast

EBU 3D TV Webinar

This webinar gives an hour-long overview of multiview cameras and displays (as covered by the EU projects MUSCADE and 3D VIVANT), the transmission side (e.g. the difference between frame-compatible and service-compatible approaches, and compression), and the current status of standardisation activities.

3D-TV Production Standards - First Report of the ITU-R Rapporteurs

This article is based on the first report written by the ITU-R co-Rapporteurs on 3D-TV production formats: Andy Quested (BBC) and Barry Zegel (CBS). Published in early May 2011 as ITU-R Document 6C/468-E, the report assembles information on the image systems and techniques currently being utilized for 3D-TV programme production, along with their pros and cons.

The report also discusses their effect on a number of related elements that are likely to influence the choice of programme production solutions, and hence the success of any eventual deployment of permanent international 3D-TV broadcasting services.

Metadata

By now, most engineers have heard that metadata refers to data about data. As facilities move from tape to files, and from tight integration to service-oriented media workflows, it becomes critical for engineers to have a solid understanding of this important topic.

The definition of metadata as “data about data” may be accurate, but it is perhaps not entirely useful. Revising the definition a bit, in our industry, metadata typically refers to data about the video and audio — the essence of the program. System designers have found it useful to distinguish between essence and metadata. But just as things are starting to become clear, someone may ask, “What about closed captioning? Surely that is data — right? How about time code?”

When talking about data in relation to video and audio, it is important to ask yourself whether the data you are talking about is part of the program (essence), or whether it is descriptive information about the program (metadata). Using this as a guide, we can classify different elements of a program as either essence or metadata. (See Table 1 below.)
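
A sketch of how such a classification might look in code. The element names and their placements here are illustrative only; as the discussion below notes, some elements can reasonably be argued either way:

```python
# Illustrative classification only - elements such as AFD can be argued
# either way depending on the systems involved.
ESSENCE = {"video", "audio", "closed captioning"}    # part of the programme itself
METADATA = {"time code", "title", "aspect ratio"}    # information about the programme

def classify(element: str) -> str:
    """Apply the guide question: is it part of the program, or about it?"""
    if element in ESSENCE:
        return "essence"
    if element in METADATA:
        return "metadata"
    return "ambiguous - depends on the systems involved"

print(classify("closed captioning"))   # essence
print(classify("AFD"))                 # ambiguous - depends on the systems involved
```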


Essential Metadata vs. Descriptive Metadata
The types of metadata are practically limitless. Metadata can include identifiers, time code, geospatial data and even free-form user metadata such as field notes entered by a camera person. That said, there are two main classifications of metadata that can be useful. The first is Essential Metadata. Essential Metadata is data that is critical in order to play the content. Examples include unique identifier, frame rate, compression coding parameters and time code. The second is Descriptive Metadata. Descriptive Metadata describes the essence. Examples include house number, title, program length, sponsor or advertiser, and ISCI code.

It is useful to divide metadata into these two classifications because, in cases where data storage space is severely limited, it may be that only Essential Metadata is stored with the content. As we move further away from legacy videotape-oriented systems, storage space for metadata becomes less of a problem.

In Table 1, some of these classifications may seem arbitrary. To some extent, this is true. Is AFD essence data or metadata? Is a house number Essential Metadata or Descriptive Metadata? It may depend upon the systems involved. While there is no definitive answer to these questions, the concepts of essence, metadata, Essential Metadata and Descriptive Metadata can be helpful as you learn more about this topic.

Media Identification
One can argue that the most critical metadata component is the identifier assigned to the essence. After all, once the tape is converted to a file, there are only two ways to identify the content — either by the file name or by an identifier that is closely associated with the content. Gone is the trusty label stuck to the cassette.

In many facilities, file names are used to identify the content of a file, and this can work very well. But there are some problems with this approach. First, in many cases, nothing enforces file naming conventions beyond a policy that has been established by the company. This is good because changing your naming convention is as simple as sending a memo or an e-mail. But this is bad because you are tied to the file name limitations of the operating system.

Another issue is that the file name can be easily changed. Again, this can be a good thing because some workflows may rely on the name of the file being changed once a QC check or some other step has been performed on the file. But this could be a problem because the workflow process relies on humans to correctly type the file name and to correctly change the file name as the work progresses through your system.

Finally, the strongest argument for not using file names for identification is that there is no guarantee that the file name is unique. This can cause some major headaches, which I will get to later.

In many master control operations, content is identified by a house number. House numbers are typically assigned by the traffic or programming departments, they are internally generated, and they are used by various computer systems to identify content in the facility. House numbers may be incorporated into file names, and they may also appear within the metadata of the file itself.

Unfortunately, most house number systems suffer from the same limitations as the file name systems previously discussed. In fact, a long time ago, a station where I worked used the same house numbers over and over. The promo for the Friday night movie, for example, was always 50555. This system worked well until we accidentally promoted last week's Friday night movie because no one had replaced the old promo with the new one. Even though we knew there was a problem with our numbering system, we continued to use it until we accidentally played a national automobile commercial during the wrong week. I do not remember the particulars, but I do remember some very uncomfortable meetings over the issue. Finally, we stopped reusing house numbers and went to a different system.

The MXF standard specifies that Unique Material Identifiers (UMIDs) be used as labels within the MXF file to uniquely identify the content. UMIDs are computer-generated 32-byte values that can be locally generated, meaning that you do not need any outside references to create the UMID. Statistically, UMIDs are almost guaranteed to be unique. This means that it is possible to uniquely identify a piece of media no matter where that content came from. One rub is that UMIDs are not meant to convey any information at all in and of themselves. For many media companies, it can be challenging to stop relying on the media ID as a way to convey information about the content. But file-based workflows rely on unique identifiers. In fact, it is a key assumption that the identifiers are unique.
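
The UMID format itself follows the SMPTE structure, but the key property, a locally generated identifier that is statistically unique, is easy to demonstrate with Python’s standard uuid module. This is purely an analogy, not a real UMID:

```python
import uuid

# uuid4() generates a random 128-bit identifier locally, with no
# outside registry required; collisions are statistically negligible.
# Real UMIDs are structured per SMPTE ST 330, but the uniqueness
# principle is the same.
media_id = uuid.uuid4()
print(media_id)        # e.g. 1b4e28ba-2fa1-4f7a-883f-0016d3cca427 (varies per run)
print(media_id.bytes)  # the raw 16-byte value

# Note that the identifier conveys nothing about the content itself;
# descriptive information lives in metadata keyed to this ID.
```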

The topic of media identification in metadata is an important one, and I expect you will see more on this topic as more companies move to file-based workflows.

Metadata Synchronization
Recently, the industry has spent a lot of time focusing on metadata contained in file wrappers. This is great, because without some consistency in how we treat metadata, interoperability at the file level is impossible. But one question facing system architects is how we can ensure metadata synchronization. Remember that metadata is not only contained in file wrappers, but it is also contained in databases that are used in many places in our facility. What should we do when metadata is modified? Should we always strive to ensure that the metadata in the file header matches what is contained in the database? Since it takes time to modify metadata in file headers, should we only modify the file header metadata to match the database when we export the file?

Perhaps the best approach would be to write the minimum metadata to the file header and keep all the rest of the nonessential metadata in the database. This is fine, but some media facilities want to be able to rebuild metadata contained in databases from the metadata contained in the files in the case of a database failure.
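
A hedged sketch of that minimal-header approach, with the database treated as the system of record and essential metadata checked at export time. The field names below are hypothetical:

```python
# Hypothetical sketch: essential metadata travels in the file header;
# descriptive metadata lives in the database and is merged in only
# when the file is exported to another facility.
ESSENTIAL_FIELDS = {"umid", "frame_rate", "codec", "timecode_start"}

def export_metadata(header: dict, db_record: dict) -> dict:
    """Reconcile header and database metadata at export time,
    with the database winning on any conflict."""
    merged = dict(header)
    merged.update(db_record)
    missing = ESSENTIAL_FIELDS - merged.keys()
    if missing:
        raise ValueError(f"cannot export, essential metadata missing: {missing}")
    return merged
```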

There are no easy answers. Metadata is a complex topic, and thinking on this topic is evolving. As you transition to file-based workflows, it is a good idea to spend time with your vendors, understanding how metadata is treated in a wide variety of scenarios.

By Brad Gilmer, Broadcast Engineering

Media Wrappers

Broadcasting today relies on the ingest, storage and playout of content involving many different tape- and file-based media. With the migration toward digital media, numerous media container systems are now in use as well. An overview of these different media packaging standards will make repurposing of content to different fixed, mobile and Web-based devices a more manageable task.

Containers Facilitate Handling of Multipurposed Content
A media container is essentially a “wrapper” that contains video, audio and data elements, and can function as a file entity or as an encapsulation method for a live stream. Each of the various containers used today is based on a particular specification that describes how the media elements are defined within it. While containers usually do not describe how data or metadata is encoded, specific containers will often constrain the types of video and audio contained within, often excluding other types. Containers can be assembled offline and then stored as finite computer files. They can also be generated on the fly and transmitted to real-time receiving devices. A real-time container is thus an open-ended stream, not necessarily intended for storage. Receiving devices, however, can capture such a stream and store it as a file.

The most familiar container to digital broadcasters is the MPEG Transport Stream, which almost always contains an MPEG video stream and some form of audio stream, such as Dolby Digital in ATSC systems and MPEG audio in DVB systems. The MPEG Transport Stream comprises various layers, enabling media players (TVs, etc.) to quickly parse the stream and select the desired elements. This enables decoders to easily separate out video, audio and data elements such as the EPG. Transport Streams are also assembled in such a way that multiple programs can be easily and separately accessed. Specific elements of the Transport Stream include packets, containing the elementary data; PIDs (packet IDs), which address elements such as elementary streams; programs; and program specific information (PSI). Null packets are also inserted into Transport Streams to satisfy the bit rate requirements of the stream.
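
The fixed 188-byte packet layout is what makes that quick parsing possible. A minimal sketch of walking a captured transport stream and extracting PIDs, with error handling reduced to a sync-byte check (the header layout is standard MPEG-2 TS):

```python
PACKET_SIZE = 188
SYNC_BYTE = 0x47
NULL_PID = 0x1FFF  # null packets pad the stream to a constant bit rate

def iter_pids(ts_bytes: bytes):
    """Yield the PID of each 188-byte transport packet."""
    for offset in range(0, len(ts_bytes) - PACKET_SIZE + 1, PACKET_SIZE):
        packet = ts_bytes[offset:offset + PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            raise ValueError(f"lost sync at offset {offset}")
        # PID is the low 5 bits of byte 1 plus all 8 bits of byte 2.
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        if pid != NULL_PID:
            yield pid
```

A real demultiplexer would go on from here to read the PSI tables, which map PIDs to programs and elementary streams.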

Alternate Containers Emerge
When digital video became viable on consumer devices, the rapid availability of sizeable storage on PCs meant that media could be stored locally within files. Various container formats thus emerged in competition, each with its own set of compatibilities and incompatibilities.

Microsoft Advanced Systems Format (ASF) is an extensible file format comprising one or more digital media streams; the most common file types contained within an ASF file are Windows Media Audio (WMA) and Windows Media Video (WMV). ASF files are logically composed of three types of top-level objects: a header object, a data object and the index object(s). The header object contains a recognizable start sequence identifying the file (or stream) and generally contains metadata about the file. The data object contains all of the digital media data, which can be defined as having a stream property or a file property. The index object can contain time- and/or frame-based content indices pointing to locations within the digital media. Although ASF files may be edited, ASF is specifically designed for streaming and/or local playback.

Apple QuickTime (QT or MOV) is a container format that encapsulates one or more tracks, each of which stores audio, video, effects or text. Each track contains either a media stream or a data reference to a media stream located in another file, and these tracks are arranged in a hierarchical data structure consisting of objects called “atoms.” One advantage of this track structure is that it enables in-place editing of the files without requiring rewriting after editing. The various video codecs that can be encapsulated in QuickTime include MPEG-4 Part 2, H.264 (MPEG-4 Part-10/AVC), DivX, 3ivx, H.263 and FLV1 (Sorenson H.263). The MPEG-4 Part 14 (MP4) multimedia container format is essentially an extension of the ISO base media file format (MPEG-4 Part 12), which was based on the QuickTime file format.
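
Because MP4 inherits QuickTime’s structure, one sketch covers both: every atom (or “box”) begins with a 4-byte big-endian size and a 4-byte type code. A minimal walker follows; the file name is hypothetical, and the size == 0 (“to end of file”) case is left out for brevity:

```python
import struct

def iter_atoms(f):
    """Walk the top-level atoms of a QuickTime/MP4 file,
    yielding (type, offset, size) for each."""
    while True:
        offset = f.tell()
        header = f.read(8)
        if len(header) < 8:
            return
        size, kind = struct.unpack(">I4s", header)
        if size == 1:  # 64-bit extended size follows the type code
            size = struct.unpack(">Q", f.read(8))[0]
        if size < 8:   # size 0 ("to end of file") not handled in this sketch
            return
        yield kind.decode("latin-1"), offset, size
        f.seek(offset + size)  # skip to the next sibling atom

with open("movie.mov", "rb") as f:   # hypothetical file name
    for kind, offset, size in iter_atoms(f):
        print(f"{kind} at offset {offset}, {size} bytes")  # e.g. moov, mdat
```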

Adobe Flash (FLV), which has become popular on the Internet, most often contains video encoded using Sorenson Spark, On2 Technologies' VP6 compression, or H.264. (On2 claims that VP6 offers better image quality and faster decoding performance than other codecs, including Windows Media Video, Real Media, H.264 and QuickTime MPEG-4.) A different version of the Flash container format is F4V, which is based on MPEG-4 Part 12, and supports H.264.

RealMedia (RM) carries the proprietary RealVideo and RealAudio streams. RealVideo was originally based on H.263, but is now a proprietary video codec. A RealMedia file consists of a series of chunks, each of which carries information on data type, size, version, and of course, the video and audio payload. Content description and metadata can be carried as well.

The Material eXchange Format (MXF), defined by SMPTE-377M, is a file format that encapsulates video, audio, metadata and other bit streams (“essences”). MXF was initially targeted to production, as a middle-state format for content exchange and archiving; the container format is specifically called MXF_GC, for MXF Generic Container. MXF_GC is “fully streamable,” i.e., A/V content can be continuously decoded through mechanisms such as interleaving essence components with stream-based metadata. The benefits of MXF include interoperability, a high level of sophistication and the use of international standards.

Structurally, MXF comprises a file header, file body and file footer. The header contains file identification and metadata, the latter including timing and synchronization parameters. The file body incorporates the data essence, i.e., video, audio and other media. The footer closes out the MXF file.
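
Within that layout, MXF data is coded as KLV (Key-Length-Value) triplets: a 16-byte SMPTE Universal Label, a BER-encoded length, then the value. A simplified sketch of reading the triplets, assuming a well-formed file with no run-in:

```python
def iter_klv(f):
    """Yield (key, value) pairs from an MXF KLV stream."""
    while True:
        key = f.read(16)             # 16-byte SMPTE Universal Label
        if len(key) < 16:
            return
        first = f.read(1)
        if not first:
            return
        first = first[0]
        if first & 0x80:             # long-form BER: low 7 bits give the
            n = first & 0x7F         # number of length bytes that follow
            length = int.from_bytes(f.read(n), "big")
        else:                        # short form: the byte is the length
            length = first
        yield key, f.read(length)
```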

MXF also shares a common object model with the Advanced Authoring Format (AAF), a sophisticated data model and software toolset that allows post-production devices to share essence data and metadata. MXF can therefore store completed works with metadata, allowing for a viable means of tape replacement. It can also package together a playlist of files and store the synchronization information, as well as store cuts-only EDLs and the material they affect. Another strength of MXF is that it can encapsulate audio and video of any compression format, making it truly universal.

Conclusion
With the continuing evolution of container formats and storage media, content archiving is increasingly susceptible to “data rot,” where an original version of material will eventually require migration to various (newer) storage means. As if efficient workflow wasn't enough of a challenge, short- and long-term planning (and budgets) must consider the growing issue of content permanence.

By Aldo Cugnini, Broadcast Engineering

Patterned Retarder to Beat Shutter Glasses in 2H11 Shipments, Says AUO

An estimated one million shutter glasses 3D TV panels were shipped globally in first-quarter 2011, while shipments of patterned retarder 3D TV panels reached nearly 900,000. But the latter group is expected to surpass the former in shipments in second-half 2011, according to AU Optronics (AUO).

Of the global top-four LCD panel makers, Samsung Electronics and Chimei Innolux (CMI) adopt shutter glasses for their 3D TV panels while AUO and LG Display choose patterned retarder. AUO develops chiefly glass patterned retarder (GPR) technology, but also makes a small volume of FPR (film PR) 3D TV panels.

AUO, using switchable lens technology and a face-tracking system, is technically able to produce dead-zone-free naked-eye 3D TV panels, but the production cost is still high, the company said.

The 3D penetration of global LCD TV shipments stood at less than 4% in first-quarter 2011 and is expected to rise to 10% in the second quarter.

By Rebecca Kuo and Adam Hwang, DigiTimes

Beta VLC Media Player for HLS

As there was no free Windows PC video player compatible with the HLS format, Anevia decided to co-sponsor the development of an HLS client within VLC Media Player. It will be officially integrated in release 1.2.0, which will mark the 10th birthday of this great open-source software.

However, you can download an exclusive beta version from this web page. We provide a Windows binary installer, and a source pack that can be compiled for Windows, Mac OS X and Linux. Once this version of VLC is installed on your PC, open “.m3u8” playlists as a VLC network stream.

Initiatives Promise Efficiency Gains For Multi-Screen Service Operations

While the Internet has yet to become an ideal medium for distributing premium content, the good news is there’s much more to come from technology advances that have already gone a long way toward enabling secure, efficient delivery of high-quality entertainment to devices of all descriptions.

In fact, these advances promise not only much better performance for unmanaged over-the-top (OTT) content; they will also streamline efforts of service providers that are developing hybrid managed services where IP-based content is a critical component of multi-device service strategies.

One key development along these lines is progress toward a unifying standard for adaptive streaming under the auspices of the International Organization for Standardization’s Moving Picture Experts Group (ISO-MPEG). The MPEG Dynamic Adaptive Streaming over HTTP (DASH) working group has proposed a solution that does away with maintaining separate manifest formats for the two most prominent streaming platforms on the Web – Apple’s HTTP Live Streaming (HLS) and Microsoft’s Smooth Streaming.

At the same time, Hollywood studios through their electronic sell-through initiatives, including most notably the Digital Entertainment Content Ecosystem’s (DECE) UltraViolet platform, have created mechanisms essential to a more interoperable and consumer-friendly online content marketplace. Along with equipping UltraViolet to provide in-the-cloud support for single user accounts, DECE has specified a common file format for all UltraViolet content together with rigorous certification procedures to enable the supply of content protection from multiple vendors.

The Benefits of Adaptive Rate Streaming
When it comes to delivering long-form video over the Internet and bandwidth-constrained access links, one of the great tools invented to compensate for bandwidth fluctuations and other disruptions is Adaptive Rate Streaming (ARS). ARS dynamically adjusts bit rates to changing network and device conditions so that content is delivered at the highest level of quality that can be sustained at any given time without dropping frames or interrupting the flow for buffering.

Thus, for example, for streams targeted to HD sets, ARS bit rates might support an optimum resolution of 1080p when conditions allow and then drop down to rates suited to lower resolutions such as 720p when conditions would cause the stream to stop and buffer if the higher bit rates were maintained. Different ranges of bit rates can be set for different ranges of resolution, depending on the screen sizes of targeted devices.

ARS is a device-driven “stateless” technology, which means the ARS system must match the ARS mode that is supported by the client software running on any given device. At the initiation of a video streaming session by any user on any device, the device is presented with manifest files that define each of the available bit rate profiles for the chosen content. The device signals which profile should be streamed based on the available access data rate on the network and how much processing power is available on the device to handle the video stream.

To accommodate fluctuations in network bandwidth and other conditions, the ARS servers rely on feedback from the device to continually update the bit rate profile throughout the session. Every few seconds another file segment or “chunk” is sent to the device at the requested bit rate.
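
The client side of that loop is conceptually simple. A sketch under stated assumptions: the bit rate ladder, chunk URLs and the throughput estimate below are all hypothetical, and a real player would smooth its bandwidth measurements and buffer several chunks ahead:

```python
import time
import urllib.request

# Hypothetical bit rate ladder parsed from a manifest, in bits per second.
PROFILES = [1_500_000, 3_000_000, 6_000_000, 10_000_000]

def pick_profile(measured_bps, headroom=0.8):
    """Choose the highest profile that fits under the measured throughput,
    keeping headroom so fluctuations don't immediately force a rebuffer."""
    fitting = [p for p in PROFILES if p <= measured_bps * headroom]
    return max(fitting) if fitting else min(PROFILES)

def fetch_chunk(url):
    """Download one chunk and report the observed throughput."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = max(time.monotonic() - start, 1e-6)
    return data, len(data) * 8 / elapsed   # chunk plus observed bits/sec

bps = min(PROFILES)                  # start conservatively
for index in range(10):              # ten chunks of a hypothetical stream
    profile = pick_profile(bps)
    chunk, bps = fetch_chunk(f"http://example.com/video/{profile}/chunk{index}.ts")
    # ...hand the chunk to the decoder and update the playout buffer...
```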

The two most commonly used ARS modes, Apple’s HLS and Microsoft’s Smooth Streaming, have much in common. They both rely on H.264 encoding to set the bit rates. And they employ HTTP (HyperText Transfer Protocol), the streaming protocol that supports the sequence of client request and server response transactions that are the foundation to communications on the World Wide Web. But they are incompatible because they employ different transport container formats.

HLS uses the MPEG Transport Stream format while Smooth Streaming uses the Fragmented MP4 file format for its chunks. The differences in container file sizes impose different time sequences on the client-server communications of the respective ARS systems.

The DASH Initiative
If these differences could be resolved, the fact that H.264 has emerged as the dominant codec by far in IP video communications would make it possible to fashion an ARS standard that would easily fit into the existing video streaming ecosystem. This is precisely what the MPEG DASH working group is attempting to do.

In October 2010, the DASH platform achieved Committee Draft status, with expectations that it would reach Final Draft International Standard status by July 2011. The core of the draft is closely aligned with the mobile industry’s 3GPP Adaptive HTTP Streaming (AHS) specification.

AHS, adopted by 3GPP in March 2010, relies on the ISO Base Media File Format ISO/IEC 14496-12, the basis for the MP4 container, to create a uniform approach to streaming video and audio content to mobile devices. DASH has expanded on AHS (now commonly referred to as 3GP-DASH) to create the means by which clients can order ARS segments from servers that use either MP4 encapsulation or MPEG-TS.

This functionality is accomplished through a server-to-client communications mode that delivers a structured collection of data in a format known as the Media Presentation Description (MPD), which defines the various segments within the stream, each associated with a uniquely referenced HTTP URL. Together, these descriptions tell a DASH client how to use the information to establish a streaming service for the user.
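
Since the Media Presentation Description is XML, extracting the available representations takes only a few lines. A sketch (the file name is hypothetical, and real MPDs carry namespaces, periods and segment templates that this ignores):

```python
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the XML namespace for brevity; production code should
    handle namespaces explicitly."""
    return tag.rsplit("}", 1)[-1]

tree = ET.parse("manifest.mpd")      # hypothetical file name
for elem in tree.iter():
    if local(elem.tag) == "Representation":
        # Each Representation advertises one rung of the bit rate ladder.
        print(elem.get("id"), elem.get("bandwidth"), elem.get("codecs"))
```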

Thus, DASH has been structured to not only provide the means by which the DASH client can order chunks in MP4 or MPEG-TS mode, but to open standardized means of accessing many other functionalities as well. These include support for live streaming as well as progressive download of on-demand content; fast initial startup and seeking; enhanced trick modes and random access capabilities; dual streams for stereoscopic 3D presentations; Multi-view Video Coding used in Blu-ray 3D displays, and dynamic implementation of diverse protection schemes. The platform is designed to work with any HTTP 1.1-compliant server and to support distribution through regular Web infrastructures such as HTTP-based CDNs. This is critical to scaling a common streaming mode to a mass consumer base through use of the embedded base of server farms associated with CDNs worldwide.

Support for DASH
The fact that Microsoft and Apple have joined a global lineup of major firms to bring coherence to adaptive streaming represents a major step forward in the evolution of IP-based distribution of premium content. In a recent blog post, Christian Kaiser, vice president of engineering at Netflix, extolled the progress on DASH, saying it has addressed most of the major points required to facilitate a standardized approach to video streaming in conjunction with use of the video playback facility in HTML5, the new multimedia-optimized version of the Web’s HyperText Markup Language.

“Since HTML5 includes a facility to embed video playback (the <video> tag), it seems like a natural next step for us to use it for streaming video playback within our HTML5-based user interfaces,” Kaiser said. “However, as of today, there is no accepted standard for advanced streaming through the <video> tag.”

He said such a standard should define acceptable A/V container formats; audio and video codecs; the streaming protocol to be used (such as HTTP); how the streaming protocol adapts to available bandwidth; a way of conveying information about available streams and other parameters to the streaming player module, and a way of exposing these functionalities into HTML5. Netflix has resolved all but the last of these requirements with its own proprietary technology, he noted.

If Netflix could replace its proprietary solution with a standardized ARS platform that met all these conditions, the company and “any other video streaming service could deliver to a standard browser as a pure HTML5 web application, both on computers and in CE devices with embedded browsers,” Kaiser said. “Consumers would benefit by having a growing number of continually evolving choices available on their devices, just like how the web works today for other types of services.”

In Kaiser’s view, DASH provides a solution that would meet all his criteria, with the exception of the last one pertaining to having a way of tying the standard into the HTML5 <video> tag. He said Netflix is engaging with the community to help achieve this integration. Kaiser noted that Netflix intends by early next year to publish a limited subset of the MPEG DASH standard, which will refine the requirements for premium on-demand streaming services. Critically, he noted, the Netflix profile “will take advantage of hooks included in the DASH standard to integrate the DRM technologies that we need to fulfill our contractual obligations to the content providers, thus covering the sixth item [conveying parameters to the player] on our list.”

The fact that DASH creates a way to readily communicate to devices what the specific DRM parameters are for a given piece of content overcomes a major barrier to scaling availability of premium content cost effectively. Today, a content supplier who wants to provide a greater level of security than is supported on any given ARS platform must take specific steps to secure each targeted device with the appropriate DRM client. DASH will allow any DASH-compliant DRM to be implemented automatically with no need for intervention by the content supplier.

Challenges to DASH
Needless to say, DASH will not immediately eliminate all the chaos surrounding competing ARS platforms with their variations in codecs, DRMs and streaming profiles. Notably, it remains to be seen how Adobe will adjust to the new standard, although the firm’s move away from sole reliance on proprietary compression with its embrace of H.264 in Flash 9 and 10, and its implementation of an HTTP-based ARS system in Flash 10.1, points to a more open approach in the future. Most recently, Adobe indicated it is preparing to provide means by which broadcasters who publish video in Flash can deliver that video to HLS-equipped devices, including the iPhone and iPad.

Another development moving against the tide of ARS standardization is Google’s WebM initiative, which aims to provide what it says is an open-source alternative to H.264 without relying on HTTP streaming. Google says its Chrome browser will abandon support for H.264 in favor of WebM, although its YouTube videos continue to use H.264-based Flash as well as older VP6 compression in videos based on pre-9.0 versions of Flash. Suppliers of the Firefox and Opera browsers, which have never used H.264, will go to WebM in their latest versions.

WebM uses the VP8 video compression format developed as a successor to VP6 by On2, which Google acquired in 2010. It uses the Vorbis codec for audio, which comes out of an open-source project headed by the Xiph.Org Foundation. The transport container used by WebM is based on the Matroska Multimedia Container, the product of another open-source initiative.

Rather than relying on the chunk-based process used by ARS, WebM employs proprietary techniques in conjunction with variable bit rate encoding to effect rate fluctuations within a continuous stream. Describing WebM as an “untried technology,” Ben Schwartz, CTO of Innovation Consulting, in a white paper produced by Harmonic and Verimatrix, questioned the practicality of the variable bit rate (VBR) alternative to ARS. “It is much harder to benefit from caching with this approach, and head-end scalability might become an area of concern,” Schwartz said.

At this nascent stage, it is hard to say how much traction WebM will gain or even whether it will prove to be as royalty-free as Google would like. In February, MPEG-LA, which manages the H.264 patent pool, issued a call for patents related to VP8 in what appeared to be preparations for an assessment as to what royalties might be due from users of VP8.

Microsoft, while not opposed to WebM, indicated it wants to know how Google will provide protection to WebM users against patent claims before it whole-heartedly embraces WebM in its Internet Explorer browser. While Internet Explorer 9 users who install third-party WebM video support on their Windows operating systems will be able to play WebM video in IE9, IE9 will otherwise play HTML5 video in the H.264 format, because H.264 “is a high-quality and widely used video format that serves the Web very well today,” said Dean Hachamovitch, Microsoft’s corporate vice president for IE.

Writing in a blog posted in February, Hachamovitch cited Microsoft’s experiences with previous efforts to offer its Windows Media Video (WMV) compression technology (later standardized by SMPTE as VC-1) on an open-source basis as a strong reason to doubt that patent infringement claims won’t be made against VP8. “Asserting openness is not a legal defense,” Hachamovitch said. “The risk question is a legitimate business concern,” he continued. “There are hundreds if not thousands of patents worldwide that read on video formats and codec technologies. Our experience with trying to release WMV for free and open use, and the subsequent claims against Microsoft, support this history as do the cases against JPEG, GIF, and other formats.”

Microsoft offered to work with Google to resolve the risk issues. “Microsoft is willing to commit that we will never assert any patents on VP8 if Google will make a commitment to indemnify us and all other developers and customers who use VP8 in the future,” Hachamovitch said. “We would only ask that we be able to use those patent rights if we are sued first by somebody else. If Google would prefer a patent pool approach, then we would also agree to join a patent pool for VP8 on reasonable licensing terms so long as Google joins the pool and is able to include all other major providers of playback software and devices.”

Clearly, with wide support and the blessing of Apple and Microsoft, DASH is by far the best hope for creating an ARS template that content distributors worldwide can rely on to ease the pain of providing premium content with acceptable quality of experience across all networks and a broader range of device types. As DASH comes into commercial use, distributors employing advanced protection mechanisms will be able to upgrade their distribution networks for content delivered to any DASH-compatible client.

UltraViolet
Another major force for simplifying IP-based content distribution is UltraViolet, the electronic sell-through platform developed by the Digital Entertainment Content Ecosystem (DECE). The consortium of more than 60 companies is driven by major studios, including Sony Pictures Entertainment, Fox, Lionsgate, NBC Universal, Paramount, and Warner Bros. Many of the powerhouses in media distribution, retailing and technology are also involved with DECE, including Adobe, Best Buy, BT, CableLabs, Cisco, Comcast, Cox Communications, HP, Huawei, IBM, Intel, LG Electronics, Liberty Global, Microsoft, Motorola, Netflix, Neustar, Nokia, Panasonic, Philips, Samsung Electronics, Thomson and Toshiba.

Once the UltraViolet ecosystem is operational, which is scheduled to happen this summer, consumers will be able to create a cloud-based account with digital rights locker via one of many UltraViolet service providers or through the UltraViolet website. They will then be able to access and manage all of their UltraViolet entertainment for use on all their devices, regardless of where the content was purchased.

The UltraViolet Digital Rights Locker serves as the hub for this new marketplace. The authentication service and account management system allows consumers to access content from multiple registered devices over multiple service outlets operating on broadband and mobile networks. DECE will provide an open API (Application Programming Interface) that allows any Web-enabled storefront, service or device to integrate access to the digital rights locker into its own consumer offering.

To the extent UltraViolet eventually scales to mass usage, the platform will drive uniform approaches to file formatting and DRM that could become a force for standard practices across the entire IP video ecosystem. The UltraViolet Common File Format can be licensed by any participating company to create a consumer offering that will play on any service or device built to DECE specifications. Content providers will be able to encode and encrypt one file type in portable, standard and high definition modes with assurance their files can be accessed by consumers from home storage or the cloud from any registered device.

By mandating a common encryption format and setting rigorous requirements for certifying DRMs for use in UltraViolet, DECE is taking the guesswork out of DRM selection by content distributors and greatly simplifying their ability to accommodate multiple DRMs. Because the file format establishes where in the file sequence various DRM functionalities must be implemented, the cloud-based and local storage systems are able to implement whatever UltraViolet-certified DRM is appropriate to a given piece of content.

DECE has gone a step further toward streamlining content protection by embracing Marlin as one of the approved DRM systems – one of the true multi-vendor DRM systems in the marketplace today, and also the security standard at the heart of the Open IPTV Forum’s open standards efforts. Marlin, originated by Panasonic, Philips, Samsung Electronics, Sony and Intertrust Technologies, establishes sophisticated, uniform processes for managing rights on devices in a way that can be integrated into a flexible commercial revenue security offering.

Marlin, which is gaining traction in many parts of the world, was recently selected by the U.K. YouView initiative (previously known as Project Canvas) as the common protection model for all content distributed through the group’s new YouView video store. YouView will aggregate and deliver catchup and on-demand content from the BBC, Channel Five, ITV, Channel 4, BT, Arqiva and TalkTalk to set-top box devices from a range of CE vendors.

Critically, there’s a direct tie-in between Marlin and DASH insofar as DASH has drawn heavily on the work of the Open IPTV Forum. Thus the fact that Marlin is intrinsically compatible with DASH serves as another step forward in the content industry’s efforts to overcome encumbrances to efficient operations resulting from DRM incompatibilities.

By Steve Christian, Screen Plays

SpectSoft Releases Free Rave Software

SpectSoft, a leading provider of uncompressed video solutions on the Linux platform, announced that its RaveHD product is now available at no charge. SpectSoft, which has been developing products since 1997, has released RaveHD as part of its “Giving back to the community” program. This is the second high-end solution SpectSoft has released under this program, following last year’s release of a version of its 3D live product.


SpectSoft RaveHD

Today, RaveHD is available for download to anyone who has the need to work with high-quality images in demanding pipelines. Designed to be a solid VTR replacement, Rave is being used on television and feature film work globally. Rave offers features that expand beyond the traditional VTR replacement, which has made this product line very popular in various facilities worldwide. Rave allows users to bridge film, video and data workflows and pipelines in a non-proprietary, flexible manner.

A few of the features that can be found in RaveHD are:
- Uncompressed Frame Support (DPX, Cineon, TIFF, etc)
- Compressed File Support (MPEG, DNxHD, H.264, etc)
- Conversion (From one format to another)
- LTC Support (In/Out)
- SAN Support (Standard)
- Unlimited Storage (No additional fees to add on storage)
- Burn-In (Frame info, Keycode, Timecode, etc)
- Conforming
- Standard (Open) File System (Immediate access and use of all files)
- Various Capture Methods
- External Control (RS422, Ethernet, etc)

Source: SpectSoft

Explore the Principles Behind 3D TV

An introductory online course published by the IABM Training Academy. The course takes about an hour to complete and provides an introduction to 3D and how it works for TV and in the cinema. No prior knowledge is required to take the course.

Wi-Fi Direct Connects Television to Other Devices

Digital televisions are among the first devices to adopt Wi-Fi Direct, a set of peer wireless networking protocols that enable compatible products to communicate directly, with or without a wireless access point or internet connection. A report forecasts that there will be 80 million digital televisions with Wi-Fi Direct enabled by 2015. Furthermore, it predicts that by 2014 every personal computer, consumer electronics device or mobile phone that ships with Wi-Fi will support the new standard.

In-Stat forecasts that more than 170 million devices with Wi-Fi Direct will ship in 2011, almost a fifth of the Wi-Fi products expected to be manufactured. Within a few years Wi-Fi Direct will be widely available in many types of consumer electronics product, including smart phones and televisions.

Wi-Fi Direct is a certification for devices that can connect with one another using a wireless network, without joining a traditional home, office or hotspot network access point.

Although Wi-Fi has long had limited support for ad-hoc networks, Wi-Fi Direct is designed to make it easier to print, share, synch and display, enabling mobile phones, cameras, printers, computers and gaming devices to connect to each other directly to transfer media and share applications.

Wi-Fi Direct competes with Bluetooth, which is generally more limited, although well-suited to applications such as wireless headsets.

A number of digital televisions, Blu-ray players, home theatre systems and smartphones from LG have already received the Wi-Fi Direct designation. A list of compatible devices is maintained on the Wi-Fi Alliance web site.

A Wi-Fi Direct device effectively includes an embedded software access point and will signal to other local devices that it can make a direct connection. Compatible devices will include Wi-Fi Protected Setup, which means it can be as simple as pushing a button or entering an identification number displayed by the device to set up a secure connection.

Wi-Fi Direct devices will generally be able to work in the traditional Wi-Fi networking mode and interoperate with existing 802.11 Wi-Fi products. Some devices will be able to connect simultaneously to a group of Wi-Fi Direct devices and a regular infrastructure network.

It could make it easier to connect a smart phone or tablet to a television display, to act as a controller, or to exchange or synchronise media.

“The technology behind Wi-Fi Direct presents an incredibly compelling solution for digital home applications,” said Brian O’Rourke, Research Director of In-Stat. “As application developers define new solutions to run over Wi-Fi Direct connections, we expect to see a very strong adoption curve over the coming years.”

The potential applications demonstrate that a connected television is more than just a television that can connect to the internet to stream and download media and applications. For such purposes a wired connection may still be preferable. However, a network-connected television also becomes a large screen display that can be used to show and share a variety of media experiences.

This has been the case since the first videocassette recorders and video games consoles were connected to the television, but in the future there will be less need to mess around with clumsy cables and connections.

While a wireless network may not be able to deliver uncompressed high-definition video in the way that an HDMI connecting cable can, smart screens will be able to display digitally compressed images and video from other sources, or indeed make media available for display on other devices.

Wireless communication, which once defined radio and television, could yet further transform the television experience.

Source: Informitv

Spate of ACR Initiatives Brings New Efficiencies to Ads & Apps

Suddenly, it seems, anyone moving into the advanced advertising and apps spaces who doesn’t have an Automatic Content Recognition (ACR) strategy in play risks being left behind in the race to facilitate efficient use of connected devices to drive new revenues.

ACR refers to a range of techniques that can employ audio and/or video fingerprinting, digital watermarking or, in at least one case, “video signaturing” to trigger a real-time implementation of an ad or app in conjunction with what any given user is viewing or listening to.

“The beauty of this technology is we can use existing devices without huge infrastructure issues,” says Alex Terpstra, CEO of Civolution, which is one of many companies bringing ACR capabilities to market.

Civolution previewed its forthcoming ACR solution, which can be used with either the firm’s fingerprinting or watermarking technologies, at the recent NAB Show in Las Vegas. “We port our software to the devices like tablets and smartphones where we can help applications developers synchronize the applications automatically to the main [TV] screen,” Terpstra says.

“If you would have a TV show, even a live show, that has an appealing app on a tablet device, a lot of things could happen in that app that are related to the actual program on TV if you were able to accurately synchronize the two,” he says. “That’s what we do with our watermarking and fingerprinting technologies.”

Civolution has its eye on the connected TV space as well, Terpstra notes. “We have our technology run on the chipsets within these Web-connected devices to provide the ability to recognize content within the TV sets,” he says.

“Especially with using watermarks, we can determine the precise location of an advertisement that is being played out, and we can recognize which advertisement it is,” he explains. “If you know the exact location and which advertisement it is, you can also replace it with another advertisement that’s a local ad or targeted ad if the user’s profile is also known to the server. We’re working with advertising companies to close the whole loop.”

ACR has been in play for a while in a wide range of applications, such as the Nielsen Audio Video Encoding (NAVE) system, the audio watermarking element of which has been used for the past decade or so as a way to identify and track program content for TV ratings purposes. The Nielsen meter equipped with NAVE capabilities registers what the viewer is watching automatically by virtue of identifying the source of the audio watermark in the program.

Another long-standing ACR user is Shazam, the provider of fingerprinting-based music discovery apps for mobile devices, which last year extended its business with introduction of the Shazam Audio Recognition Advertising (SARA) program. First used by clothing brand Dockers in the “Wear the Pants” campaign during the 2010 Super Bowl, SARA employs Shazam’s music recognition system to let brand advertisers tag a commercial for direct interaction with the Shazam user base, now numbering over 100 million people worldwide.

The tag induces viewers to engage with the promotion by pointing their mobile device in the direction of the commercial and hitting the Shazam button on the device. Shazam’s audio technology recognizes the specific advertisement and returns a customized result to the mobile device.

In a sign of how hot ACR has become, Yahoo! last month acquired a three-month-old startup in the field, IntoNow, for a reported $20-$30 million, winning a battle to buy the firm that included Twitter and Facebook, according to the online outlet TechCrunch. IntoNow, led by former Google and Viacom executive Adam Cahan, uses audio fingerprinting to allow users to identify and share television programs with their friends via iPhones.

Yahoo! believes the technology will help socialize the video programming offered through its platform. “Relying on social channels as a means for discovering content – whether it’s on a PC, mobile device, or TV – is rapidly on the rise,” says Bill Shaughnessy, Yahoo!’s senior vice president of product management. With a growing database currently comprising 150 million minutes of content, IntoNow can fingerprint TV shows in real time with 99 percent accuracy, without requiring the participation of the networks.

But there’s a big advertising play as well, as revealed by IntoNow’s recent deal with Pepsi, which allows users to respond to commercials in a new campaign for Pepsi Max. The first 50,000 viewers who use IntoNow to tag a Major League Baseball-themed Pepsi MAX commercial will receive coupons for a free 20-ounce soda at retailers like Target and CVS. The company says automotive and movie advertisers are likely to be the next entities to sign on to the platform.

While the applications are similar, there’s actually a big difference between how watermarking and fingerprinting technologies get the job done. In the case of audio or video watermarking, the identification of the content is based on a digital code embedded invisibly in the audio or video stream that can be readily matched against a database containing all such codes. Fingerprinting, on the other hand, relies on matching marked slices of content with the same slices in a database, which entails a search across the database, typically in a two-step approach that first finds a set of likely matches and then drills down to get to a precise match.

Both technologies have their drawbacks and advantages. Watermarking is subject to precise placement of the identifying code where it won’t interfere with the content and with sufficient robustness to survive through all the processing steps between the content source and the end user. Fingerprinting avoids the robustness and precision placement issues but requires a processing-intensive search support infrastructure.
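
That two-step search can be sketched compactly. A heavy hedge applies: a real system derives robust perceptual features that survive compression and noise, whereas the raw hashes below would not, so this shows only the shape of the algorithm, not a workable fingerprinting scheme:

```python
import hashlib
from collections import defaultdict

def fingerprints(samples: bytes, window=4096, hop=2048):
    """Hash overlapping slices into short fingerprint keys. A real
    system would use robust perceptual features, not raw hashes."""
    return [hashlib.sha1(samples[i:i + window]).digest()[:8]
            for i in range(0, max(len(samples) - window, 0), hop)]

def build_index(tracks):
    """Map every fingerprint key to the reference tracks containing it."""
    index = defaultdict(list)
    for track_id, samples in tracks.items():
        for key in fingerprints(samples):
            index[key].append(track_id)
    return index

def match(query: bytes, index):
    """Step 1: coarse lookup of candidate tracks sharing any key.
    Step 2: rank candidates by how many keys they share."""
    scores = defaultdict(int)
    for key in fingerprints(query):
        for track_id in index.get(key, ()):
            scores[track_id] += 1
    return max(scores, key=scores.get) if scores else None
```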

In general, for ACR purposes, audio fingerprinting or watermarking is preferred over video, insofar as the signals are easier to identify with companion devices using their microphones rather than their Web cams, and, in the case of fingerprinting search matches, there’s less information to process with an audio file. But audio can’t deliver a fingerprint when the track is silent, and there can be synchronization issues with the use of language dubbing in foreign films. Thus, some suppliers are using video fingerprinting as a backup.

Ultimately, the business model may be the determining factor in choosing between watermarking and fingerprinting-based systems for advertising. Watermarking precisely identifies not only the content but the network it appears on, but it also requires participation of the content provider as well as the advertiser or app supplier. Fingerprinting doesn’t convey where the ad appears but can operate independently of any involvement by content providers.

Still another startup, TV Interactive Systems, has come up with another approach to ACR, which it calls video signaturing. While it works somewhat like fingerprinting, VS doesn’t look for a unique file segment as required with fingerprinting. Instead, VS identifies “clues” or characteristics of a given piece of content which individually don’t make for an accurate match, but which taken together in batches of about ten per second ensure an accurate match with the database without requiring the more costly processing used for identification in fingerprinting.

Beyond the mobile device realm, one of the key opportunities for ACR on the advertising front is the connected TV. TV Interactive Systems is focused exclusively on this end of the device market.

Among more recent developments in ACR is the new fingerprinting system offered by Vobile, which for several years has been supplying VideoDNA technology along with a database of authorized video fingerprints, metadata and business rules from movie studios and programming networks to enable fully automated identification, tracking and management of content. Vobile’s integrated ACR solution will drive novel smart TV applications, such as broadcast monitoring and advanced advertising models, says CEO Yangbin Wang.

“Internet-enabled smart television brings TV entertainment to a new level,” Wang says. “With our latest ACR solution, content owners can track content distribution anywhere, anytime to any device. This opens up a world of new revenue opportunities for television programmers.”

Startup Zeitera, along with pursuing ACR TV apps for companion devices, is offering its fingerprinting cloud-based service to TV set manufacturers, providing them a means of working directly with advertisers to enable placement of targeted ads in linear programming without requiring any participation by the content suppliers.

In its latest move, Zeitera has introduced a software development kit and API development toolkit for programmers, CE manufacturers and app developers who want to implement the firm’s Vivid audio and video fingerprinting ACR solution in conjunction with companion apps on tablets and smartphones. Zeitera offers its solution as a cloud-based service designed to eliminate much of the heavy lifting associated with individual users’ implementation of ACR.

“The Vivid Mobile API and SDK Development Platform for iOS and Android fills the needs of those broadcasters and content owners that would like to have smartphone and tablet applications that are able to synchronize with their TV programs, commercials and movies,” says Zeitera CEO Dan Eakins. “Zeitera is pioneering the market for white label ACR services and has several partners signed on developing applications with these APIs.”

Source: Screen Plays