MPEG-DASH Content Generation Using MP4Box and x264

An interesting article describing how to produce MPEG-DASH content with open source tools.
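
For flavor, here is a minimal sketch of the kind of pipeline such tutorials describe: encode fixed-GOP H.264 with x264, then segment and generate the MPD with MP4Box. All paths, bitrates and GOP settings below are illustrative, and exact flags vary between tool versions.

```python
# Minimal sketch of an x264 + MP4Box DASH workflow. Paths, bitrates and GOP
# settings are illustrative; check `x264 --help` and `MP4Box -h dash` for the
# flags your versions support.
import subprocess

SOURCE = "input.mkv"   # hypothetical source file
FPS = 24
GOP = 96               # 4-second GOPs at 24 fps, so segments start on IDR frames

def encode(bitrate_kbps: int, out_h264: str) -> None:
    """Encode one H.264 rendition with fixed, scenecut-free GOPs."""
    subprocess.run([
        "x264", "--output", out_h264,
        "--fps", str(FPS), "--preset", "slow",
        "--bitrate", str(bitrate_kbps),
        "--vbv-maxrate", str(2 * bitrate_kbps),
        "--vbv-bufsize", str(4 * bitrate_kbps),
        "--keyint", str(GOP), "--min-keyint", str(GOP), "--no-scenecut",
        SOURCE,
    ], check=True)

def package(renditions):
    """Mux each rendition into MP4, then cut 4 s DASH segments plus an MPD."""
    mp4s = []
    for h264 in renditions:
        mp4 = h264.replace(".264", ".mp4")
        subprocess.run(["MP4Box", "-add", h264, "-fps", str(FPS), mp4], check=True)
        mp4s.append(mp4)
    subprocess.run(["MP4Box", "-dash", "4000", "-frag", "4000", "-rap",
                    "-profile", "dashavc264:live", *mp4s], check=True)

if __name__ == "__main__":
    encode(2400, "video_2400k.264")
    encode(800, "video_800k.264")
    package(["video_2400k.264", "video_800k.264"])
```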

RGB Networks Announces an Open Source Software Transcoder Initiative

RGB Networks has announced that it is developing an open source version of its popular TransAct Transcoder. Called ‘Ripcode Transcoder’, after the company Ripcode, which was acquired by RGB Networks in 2010 and which originally developed TransAct, the new, cloud-enabled software transcoder will provide RGB Networks’ customers with greater control, integration and flexibility in their video delivery workflows.

In a pioneering move, and harnessing the industry momentum toward developing cloud-based solutions, RGB Networks is actively welcoming operators and vendors to be part of a community of contributors to the open source project.

RGB Networks’ CloudXtream solution for nDVR and dynamic Ad Insertion for Multiscreen (AIM) environments, launched in October 2013 and built on the industry standard open source cloud operating system OpenStack, has paved the way for this latest innovation. The company intends to build on this success with Ripcode, which will be an “open core” project, in which core technology from the TransAct Transcoder is used to create the foundation of the open source offering.

Suitable for a variety of applications, the Ripcode Transcoder will include the full feature set expected of an industrial-strength, professional transcoder, leaving customers free to select and integrate the packaging solution of their choice to produce their desired Adaptive Bit Rate output formats.

The intended feature set of the open source Ripcode Transcoder will include:

  • Both Linear (live) and Video on Demand (VOD) transcoding
  • Full cluster management, load balancing, and failover
  • Linear and VOD transcoding of MPEG-2, H.264, H.265, AAC, AC-3, and other industry-leading video and audio codecs
  • File-to-File watch folders
  • Full reporting and logging of events
  • Commercial-grade GUI
  • RESTful APIs

Unlike many open source projects, an open source transcoder is difficult to release because of the licensing attached to professional codecs. RGB Networks will release Ripcode Transcoder with only the codecs that can be legally used with open source software. Additionally, in order to facilitate use of the transcoder in professional environments that require licensed, third-party codecs and pre/post-processing filters, the Ripcode Transcoder will include a plug-in framework that will allow use of best-of-breed codecs and filters.

A number of vendors of such components and related technologies have expressed interest in participating in the Ripcode initiative including the following:
  • Video Codecs (including H.264/AVC and H.265/HEVC): eBrisk Video, Intel Media Server Studio, Ittiam Systems, MainConcept (A DivX company), Multicoreware, NGCodec (providing HW acceleration for H.264/AVC and H.265/HEVC), Squid Systems, Vanguard Video
  • Audio Codecs: Dolby Laboratories, Fraunhofer IIS
  • Video Optimization: Beamr

The release of the first version of Ripcode Transcoder – 1.0 – with all the appropriate licensing is targeted for Q1 2015.

Source: RGB Networks

SMPTE Publishes Archive eXchange Format Standard

The Society of Motion Picture and Television Engineers has published a standard that codifies the Archive eXchange Format (AXF). An IT-centric file container that can encapsulate any number and type of files in a fully self-contained and self-describing package, AXF supports interoperability among disparate content storage systems and ensures content’s long-term availability, no matter how storage or file system technology evolves.

Designed for operational storage, transport, and long-term preservation, AXF was formulated as a wrapper, or container, capable of holding virtually unlimited collections of files and metadata related to one another in any combination. Known as “AXF Objects,” such containers can package, in different ways, all the specific information different kinds of systems would need in order to restore the content data. The format relies on the Extensible Markup Language to define the information in a way that can be read and recovered by any modern computer system to which the data is downloaded.

AXF Objects are essentially immune to changes in technology and formats. Thus, they can be transferred from one archive system into remote storage—geographically remote or in the cloud, for instance—and later retrieved and read by different archive systems without the loss of any essence or metadata.

AXF Objects hold files of any kind and any size. By automatically segmenting, storing on multiple media, and reassembling AXF Objects when necessary, “spanned sets” enable storage of AXF Objects on more than one medium. Consequently, AXF Objects may be considerably larger than the individual media on which they are stored. This exceptional scalability helps to ensure that AXF Objects may be stored on any type or generation of media. The use of “collected sets” permits archive operators to make changes to AXF Objects or files within them, while preserving all earlier versions, even when write-once storage is used.
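
As a rough illustration of the spanning idea only (the real on-media structures are defined by the SMPTE standard), here is a sketch of how an object larger than any single medium can be cut into per-medium spans and reassembled on retrieval:

```python
# Toy illustration of "spanned sets": an AXF Object larger than one medium is
# cut into per-medium spans and reassembled in order on read. This mimics the
# concept only; the actual AXF on-media layout is defined by SMPTE.
from typing import List, Tuple

def span_object(object_size: int, medium_capacity: int) -> List[Tuple[int, int, int]]:
    """Return (medium_index, offset, length) spans covering the object."""
    spans, offset, medium = [], 0, 0
    while offset < object_size:
        length = min(medium_capacity, object_size - offset)
        spans.append((medium, offset, length))
        offset += length
        medium += 1
    return spans

# A 5,200 GB object spanned across 1,500 GB media yields four ordered spans.
print(span_object(5_200, 1_500))
# [(0, 0, 1500), (1, 1500, 1500), (2, 3000, 1500), (3, 4500, 700)]
```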

The nature of AXF makes it possible for equipment manufacturers and content owners to move content from their current archive systems into the AXF domain in a strategic way that does not require content owners to abandon existing hardware unless or until they are ready to do so. In enabling the recovery of archived content in the absence of the systems that created the archives, AXF also offers a valuable means of protecting users’ investment in content. By maintaining preservation information such as fixity and provenance as specified by the OAIS model, AXF further enables effective long-term archiving of content. Resilience of data is ensured through use of redundant AXF structures and cryptographic hash algorithms.

AXF already has been employed around the world to help businesses store, protect, preserve, and transport many petabytes of file-based content, and the format is proving fundamental to many of the cloud-based storage, preservation, and IP-based transport services available today.

Source: TV Technology

DASH AVC/264 Support in GPAC

This article shows you how to set up GPAC for your OnDemand and Live content.

CableLabs Boots Up 4K Video Sharing Website

CableLabs has launched a 4K-focused microsite that provides access to Ultra HD/4K video clips to help platform developers, vendors, network operators and other video pros conduct tests with the emerging eye-popping format.

CableLabs said it’s offering the content under the Creative Commons License, meaning it can be used freely for non-commercial testing, demonstrations and the general advancement of technology.

As vendors utilize content from the site to test new technology, CableLabs helps the industry get one step closer to standardizing 4K content and delivering it to the home.

As of this writing, the site hosts seven videos, all shot with a Red Epic camera. The longest of the batch is a fireman-focused clip titled “Seconds That Count” that runs 5 minutes and 22 seconds.

On the site, CableLabs has integrated an upload form for anyone who wants to share their 4K videos for the purpose of testing. Interested participants are directed to provide a lower bit-rate HD file for preview purposes along with a 4K version. CableLabs is accepting pre-transcoded versions using MPEG HEVC or AVC, or an Apple ProRes version. CableLabs will take on the task of transcoding the content into two high-quality versions available for download on the website. CableLabs notes that uploaded content might be used for demos at forums, shows, and conferences.

CableLabs is launching the site as the cable industry just begins to develop plans around 4K. Among major U.S. MSOs, Comcast plans to launch an Internet-based, on-demand Xfinity TV 4K app before the end of the year that will initially be available on new Samsung UHD TVs. The MSO is also working with partners on a new generation of boxes for its X1 platform that use HEVC and can decode native 4K signals.

On the competitive front, DirecTV president and CEO Mike White said on the company's second quarter earnings call that the satellite TV giant will be ready to deliver 4K video on an on-demand basis this year, and be set up to follow with live 4K streaming next year or by early 2016.

Source: Multichannel News

DVB MPEG-DASH Profile Specification

The DVB MPEG-DASH Profile Specification defines the delivery of TV content via HTTP adaptive streaming. This includes the following (an illustrative manifest sketch follows the list):
  • A profile of the features defined in MPEG DASH (referred to by MPEG as an "interoperability point") largely based on the "ISOBMFF live" profile defined by MPEG.

  • Constraints on the sizes or complexity of various parameters defined in the MPEG DASH specification.

  • A selection of video and audio codecs from the DVB toolbox that are technically appropriate for use with MPEG DASH, and constraints and/or requirements on their use, without mandating any particular codec.

  • Use of MPEG Common Encryption for content delivered according to the present document.

  • Use of TTML subtitles with MPEG DASH.

  • Requirements on Player behaviour needed to give interoperable presentation of services.

  • Guidelines for content providers on how to use MPEG DASH.
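
As promised above, here is what a bare-bones DASH manifest (MPD) of the general kind this profile constrains looks like, printed from Python for illustration. Element and attribute names follow MPEG-DASH (ISO/IEC 23009-1); every value is a placeholder, and no claim of DVB-profile conformance is implied.

```python
# Bare-bones MPEG-DASH MPD, printed for illustration. Element/attribute names
# follow ISO/IEC 23009-1; all values are placeholders and no DVB-profile
# conformance is implied.
MPD = """<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="static"
     mediaPresentationDuration="PT30S"
     minBufferTime="PT4S"
     profiles="urn:mpeg:dash:profile:isoff-live:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <SegmentTemplate timescale="1000" duration="4000"
                       initialization="$RepresentationID$/init.mp4"
                       media="$RepresentationID$/seg_$Number$.m4s"
                       startNumber="1"/>
      <Representation id="v720p" codecs="avc1.64001f"
                      width="1280" height="720" bandwidth="2400000"/>
      <Representation id="v360p" codecs="avc1.64001e"
                      width="640" height="360" bandwidth="800000"/>
    </AdaptationSet>
  </Period>
</MPD>"""
print(MPD)
```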

Overhead and Performance of Low Latency Live Streaming using MPEG-DASH

HTTP Streaming is a recent topic in multimedia communications, with ongoing standardization activities, especially the MPEG DASH standard, which covers on-demand and live services. One of the main issues in deploying live services is reducing the overall latency. Low or very low latency streaming is still a challenge.

In this paper, we push the use of DASH to its limits with regard to latency, down to fragments containing only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked transfer encoding; and the associated ISOBMFF packaging.

We experiment with DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead on the order of 13% for HD sequences, and thus validate the feasibility of very low latency DASH live streaming in local networks.
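
The key transport ingredient is HTTP/1.1 chunked transfer encoding: the server can start pushing bytes of a fragment before the fragment is complete. Below is a minimal, self-contained sketch with a simulated encoder; a real deployment would stream ISOBMFF chunks from the packager rather than dummy bytes.

```python
# Minimal HTTP/1.1 chunked-transfer server illustrating the low-latency idea:
# bytes of a still-growing fragment are flushed to the client as they are
# produced instead of waiting for the whole segment. The "encoder" is faked
# with a timer; real deployments would feed ISOBMFF chunks from the packager.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_encoder(n_chunks: int = 10, size: int = 4096):
    """Stand-in for a live packager emitting partial-fragment chunks."""
    for i in range(n_chunks):
        time.sleep(0.04)            # ~40 ms per chunk, one frame at 25 fps
        yield bytes([i % 256]) * size

class LowLatencyHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # required for chunked transfer encoding

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for chunk in fake_encoder():
            # HTTP/1.1 chunk framing: <hex length>\r\n<data>\r\n
            self.wfile.write(b"%X\r\n%s\r\n" % (len(chunk), chunk))
            self.wfile.flush()      # push each chunk immediately
        self.wfile.write(b"0\r\n\r\n")  # zero-length chunk ends the body

if __name__ == "__main__":
    HTTPServer(("", 8000), LowLatencyHandler).serve_forever()
```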

By Nassima Bouzakaria, Cyril Concolato and Jean Le Feuvre

DVB Approves UHDTV HEVC Delivery Profile

A significant step in the road to Ultra High Definition TV services has been taken with the approval of the DVB-UHDTV Phase 1 specification at the 77th meeting of the DVB Steering Board. The specification includes an HEVC Profile for DVB broadcasting services that draws, from the options available with HEVC, those that will match the requirements for delivery of UHDTV Phase 1 and other formats. The specification updates ETSI TS 101 154 (Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream).

The new DVB-UHDTV Phase 1 will allow images with four times the static resolution of the 1080p HDTV format, at frame rates of up to 60 images per second. Contrast will be drastically improved by increasing the number of bits per pixel to 10. From the wide range of options defined in the HEVC Main 10 profile, Level 5.1 is specified for UHD content at resolutions up to 2160p. For HD content, HEVC Main profile Level 4.1 is specified, supporting resolutions up to 1080p.

The DVB-UHDTV Phase 1 specification takes into account the possibility that UHDTV Phase 2 may use higher frame rates in a compatible way, which will add further to the image quality of UHDTV Phase 1.

“HEVC is the most recently-developed compression technology and, among other uses, it is the key that will unlock UHDTV broadcasting,” said DVB Steering Board Chairman, Phil Laven. “This new DVB–UHDTV Phase 1 specification not only opens the door to the age of UHDTV delivery but also potentially sets the stage for Phase 2, the next level of UHDTV quality, which will be considered in upcoming DVB work,” he continued.

Also approved was the specification for Companion Screens and Streams, Part 2: Content Identification and Media Synchronization. Companion Devices (tablets, smartphones) enable new user experiences for broadcast service consumption, many of which require synchronisation between the Broadcast Service at the TV Device and the Timed Content presented at the Companion Device. The specification focuses on the identification and synchronisation of a Broadcast Service on a TV Device (a Connected TV, or an STB and screen) and Timed Content on a Companion Screen Application running on a Companion Device, covering the identification of, and synchronisation with, broadcast content, timed content and trigger events.

Another specification to gain approval from the Steering Board was the MPEG-DASH Profile for Transport of ISO BMFF Based DVB Services over IP Based Networks. This specification defines the delivery of TV content via HTTP adaptive streaming. MPEG-DASH covers a wide range of use cases and options. Transmission of audiovisual content is based on the ISOBMFF file specification. Video and audio codecs from the DVB toolbox that are technically appropriate with MPEG-DASH have been selected. Conditional Access is based on MPEG Common Encryption and delivery of subtitles will be XML based. The DVB Profile of MPEG-DASH reduces the number of options and also the complexity for implementers. The new specification will facilitate implementation and usage of MPEG-DASH in a DVB environment.

The three new specifications will now be sent to ICT standards body ETSI for formal standardisation and the relevant BlueBooks will be published shortly.

Source: Advanced Television

CEA: UHDTV is 8-bit, 3840x2160

The Consumer Electronics Association has announced updated core characteristics for ultra high-definition TVs, monitors and projectors for the home. As devised and approved by CEA’s Video Division Board, these characteristics build on the first-generation UHD characteristics released by CEA in October 2012.

These expanded display characteristics (CEA’s Ultra High-Definition Display Characteristics V2) – voluntary guidelines that take effect in September 2014 – are designed to address various attributes of picture quality and help move toward interoperability, while providing clarity for consumers and retailers alike.

Under CEA’s expanded characteristics, a TV, monitor or projector may be referred to as Ultra High-Definition if it meets the following minimum performance attributes (restated as a toy checker after the list):

  • Display Resolution – Has at least eight million active pixels, with at least 3,840 horizontally and at least 2,160 vertically.
  • Aspect Ratio – Has a width to height ratio of the display’s native resolution of 16:9 or wider.
  • Upconversion – Is capable of upscaling HD video and displaying it at ultra high-definition resolution.
  • Digital Input – Has one or more HDMI inputs supporting at least 3840x2160 native content resolution at 24p, 30p and 60p. At least one of the 3840x2160 HDMI inputs shall support HDCP revision 2.2 or equivalent content protection.
  • Colorimetry – Processes 2160p video inputs encoded according to ITU-R BT.709 color space and may support wider colorimetry standards.
  • Bit Depth – Has a minimum color bit depth of eight bits.
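
As flagged above, here is a toy restatement of the quantifiable core attributes. It is purely illustrative; things like upconversion quality and "equivalent" content protection are obviously not machine-checkable.

```python
# Toy restatement of the CEA V2 core display characteristics listed above.
# Only the quantifiable attributes are checked; upconversion quality and
# "equivalent content protection" cannot be verified by a function like this.
def is_ultra_hd(width: int, height: int, aspect: float,
                hdmi_2160p_inputs: int, hdcp22_inputs: int,
                bt709_2160p: bool, bit_depth: int) -> bool:
    return (width >= 3840 and height >= 2160
            and width * height >= 8_000_000      # at least 8 million pixels
            and aspect >= 16 / 9                 # 16:9 or wider
            and hdmi_2160p_inputs >= 1           # 2160p-capable HDMI input
            and hdcp22_inputs >= 1               # at least one with HDCP 2.2
            and bt709_2160p                      # processes BT.709 2160p input
            and bit_depth >= 8)                  # minimum 8-bit color

# A 3840x2160, 16:9 panel with one HDCP 2.2-capable HDMI input qualifies.
print(is_ultra_hd(3840, 2160, 16 / 9, 1, 1, True, 8))  # True
```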

Because one of the first ways consumers will have access to native 4K content is via Internet streaming on connected ultra HDTVs, CEA has defined new characteristics for connected UHDTV displays. Under these new characteristics, which complement the updated core UHD attributes, a display system may be referred to as a connected ultra HD device if it meets the following minimum performance attributes:
  • Ultra High-Definition Capability – Meets all of the requirements of the CEA Ultra High-Definition Display Characteristics V2 (listed above).
  • Video Codec – Decodes IP-delivered video of 3840x2160 resolution that has been compressed using HEVC* and may decode video from other standard encoders.
  • Audio Codec – Receives and reproduces, and/or outputs multichannel audio.
  • IP and Networking – Receives IP-delivered Ultra HD video through a Wi-Fi, Ethernet or other appropriate connection.
  • Application Services – Supports IP-delivered Ultra HD video through services or applications on the platform of the manufacturer’s choosing.

CEA’s expanded display characteristics also include guidance on nomenclature designed to help provide manufacturers with marketing flexibility while still providing clarity for consumers. Specifically, the guidance states, “The terms ‘Ultra High-Definition,’ ‘Ultra HD’ or ‘UHD’ may be used in conjunction with other modifiers,” for example “Ultra High-Definition TV 4K”.

*High Efficiency Video Coding Main Profile, Level 5, Main tier, as defined in ISO/IEC 23008-2 MPEG-H Part 2 or ITU-T H.265, and may support higher profiles, levels or tiers.

Source: TV Technology

H.265 - Technical Overview

A nice introduction to HEVC by Fabio Sonnati.

Delivering Breaking Bad on Netflix in Ultra HD 4K

This week Netflix is pleased to begin streaming all 62 episodes of Breaking Bad in UltraHD 4K. Breaking Bad in 4K comes from Sony Pictures Entertainment’s beautiful remastering of Breaking Bad from the original film negatives. This 4K experience is available on select 4K Smart TVs.

As pleased as I am to announce Breaking Bad in 4K, this blog post is also intended to highlight the collaboration between Sony Pictures Entertainment and Netflix to modernize the digital supply chain that transports digital media from content studios, like Sony Pictures, to streaming retailers, like Netflix.

Netflix and Sony agreed on an early subset of IMF for the transfer of the video and audio files for Breaking Bad. IMF stands for Interoperable Master Format, an emerging SMPTE specification governing file formats and metadata for digital media archiving and B2B exchange.

IMF specifies fundamental building blocks like immutable objects, checksums, globally unique identifiers, and manifests (Composition Playlists, or CPLs). These building blocks hold promise for vastly improving the efficiency, accuracy, and scale of the global digital supply chain.

At Netflix, we are excited about IMF and we are committing significant R&D efforts towards adopting IMF for content ingestion. Netflix has an early subset of IMF in production today and we will support most of the current IMF App 2 draft by the end of 2014.

We are also developing a roadmap for IMF App 2 Extended and Extended+. We are pleased that Sony Pictures is an early innovator in this space and we are looking forward to the same collaboration with additional content studio partners.

Breaking Bad is joining House of Cards season 2 and the Moving Art documentaries in our global 4K catalog. We are also adding a few more 4K movies for our USA members. We have added Smurfs 2, Ghostbusters, and Ghostbusters 2 in the United States. All of these movies were packaged in IMF by Sony Pictures.

By Kevin McEntee, VP Digital Supply Chain, Netflix

HTML5 Video in Safari on OS X Yosemite

We're excited to announce that Netflix streaming in HTML5 video is now available in Safari on OS X Yosemite! We've been working closely with Apple to implement the Premium Video Extensions in Safari, which allow playback of premium video content in the browser without the use of plugins.

If you're in Apple's Mac Developer Program, or soon the OS X Beta Program, you can install the beta version of OS X Yosemite. With the OS X Yosemite Beta on a modern Mac, you can visit Netflix.com today in Safari and watch your favorite movies and TV shows using HTML5 video without the need to install any plugins.

We're especially excited that Apple implemented the Media Source Extensions (MSE) using their highly optimized video pipeline on OS X. This lets you watch Netflix in buttery smooth 1080p without hogging your CPU or draining your battery. In fact, this allows you to get up to 2 hours longer battery life on a MacBook Air streaming Netflix in 1080p - that’s enough time for one more movie!

Apple also implemented the Encrypted Media Extensions (EME) which provides the content protection needed for premium video services like Netflix.

Finally, Apple implemented the Web Cryptography API (WebCrypto) in Safari, which allows us to encrypt and decrypt communication between our JavaScript application and the Netflix servers.

The Premium Video Extensions do away with the need for proprietary plugin technologies for streaming video. In addition to Safari on OS X Yosemite, plugin-free playback is also available in IE 11 on Windows 8.1, and we look forward to a time when these APIs are available on all browsers.

Congratulations to the Apple team for advancing premium video on the web with Yosemite! We’re looking forward to the Yosemite launch this Fall.

By Anthony Park and Mark Watson, Netflix Tech Blog

UHD HEVC Data Set

As part of the 4Ever project, we have released an HEVC and DASH ultra high definition dataset, ranging from 8-bit 720p at 30 Hz up to 10-bit 2160p at 60 Hz. The dataset is released under CC BY-NC-ND.

The data set web page is here, and more information on the dataset can also be found in this article.

Source: GPAC

Reconciling Mozilla’s Mission and W3C EME

With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy. Read on for some background on how we got here, and details of our implementation.

Digital Rights Management (DRM) is a tricky issue. On the one hand content owners argue that they should have the technical ability to control how users share content in order to enforce copyright restrictions. On the other hand, the current generation of DRM is often overly burdensome for users and restricts users from lawful and reasonable use cases such as buying content on one device and trying to consume it on another.

DRM and the Web are no strangers. Most desktop users have plugins such as Adobe Flash and Microsoft Silverlight installed. Both have contained DRM for many years, and websites traditionally use plugins to play restricted content.

In 2013 Google and Microsoft partnered with a number of content providers including Netflix to propose a “built-in” DRM extension for the Web: the W3C Encrypted Media Extensions (EME).

The W3C EME specification defines how to play back such content using the HTML5 video element, utilizing a Content Decryption Module (CDM) that implements DRM functionality directly in the Web stack. The W3C EME specification only describes the JavaScript APIs to access the CDM. The CDM itself is proprietary and is not specified in detail in the EME specification, which has been widely criticized by many, including Mozilla.

Mozilla believes in an open Web that centers around the user and puts them in control of their online experience. Many traditional DRM schemes are challenging because they go against this principle and remove control from the user and yield it to the content industry.

Instead of DRM schemes that limit how users can access content they purchased across devices we have long advocated for more modern approaches to managing content distribution such as watermarking. Watermarking works by tagging the media stream with the user’s identity. This discourages copyright infringement without interfering with lawful sharing of content, for example between different devices of the same user.

Mozilla would have preferred to see the content industry move away from locking content to a specific device (so called node-locking), and worked to provide alternatives.

Instead, this approach has now been enshrined in the W3C EME specification. With Google and Microsoft shipping W3C EME, and content providers moving their content from plugins to W3C EME, Firefox users are at risk of not being able to access DRM-restricted content (e.g. Netflix, Amazon Video, Hulu), which can make up more than 30% of the downstream traffic in North America.

We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.

This makes it difficult for Mozilla to ignore the ongoing changes in the DRM landscape. Firefox should help users get access to the content they want to enjoy, even if Mozilla philosophically opposes the restrictions certain content owners attach to their content.

As a result we have decided to implement the W3C EME specification in our products, starting with Firefox for Desktop. This is a difficult and uncomfortable step for us given our vision of a completely open Web, but it also gives us the opportunity to actually shape the DRM space and be an advocate for our users and their rights in this debate. The existing W3C EME systems Google and Microsoft are shipping are not open source and lack transparency for the user, two traits which we believe are essential to creating a trustworthy Web.

The W3C EME specification uses a Content Decryption Module (CDM) to facilitate the playback of restricted content. Since the purpose of the CDM is to defy scrutiny and modification by the user, the CDM cannot be open source by design in the EME architecture. For security, privacy and transparency reasons this is deeply concerning.

From the security perspective, for Mozilla it is essential that all code in the browser is open so that users and security researchers can see and audit the code. DRM systems explicitly rely on the source code not being available. In addition, DRM systems also often have unfavorable privacy properties. To lock content to the device DRM systems commonly use “fingerprinting” (collecting identifiable information about the user’s device) and with the poor transparency of proprietary native code it’s often hard to tell how much of this fingerprinting information is leaked to the server.

We have designed an implementation of the W3C EME specification that satisfies the requirements of the content industry while attempting to give users as much control and transparency as possible. Due to the architecture of the W3C EME specification we are forced to utilize a proprietary closed-source CDM as well. Mozilla selected Adobe to supply this CDM for Firefox because Adobe has contracts with major content providers that will allow Firefox to play restricted content via the Adobe CDM.

Firefox does not load this module directly. Instead, we wrap it into an open-source sandbox. In our implementation, the CDM will have no access to the user’s hard drive or the network. Instead, the sandbox will provide the CDM with only a communication mechanism with Firefox for receiving encrypted data and for displaying the results.

Traditionally, to implement node-locking DRM systems collect identifiable information about the user’s device and will refuse to play back the content if the content or the CDM are moved to a different device.

By contrast, in Firefox the sandbox prohibits the CDM from fingerprinting the user’s device. Instead, the CDM asks the sandbox to supply a per-device unique identifier. This sandbox-generated unique identifier allows the CDM to bind content to a single device as the content industry insists on, but it does so without revealing additional information about the user or the user’s device. In addition, we vary this unique identifier per site (each site is presented a different device identifier) to make it more difficult to track users across sites with this identifier.
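
As a conceptual sketch of how per-site, per-device identifiers can be derived (this is generic keyed hashing, not Mozilla's actual sandbox code), a single device secret can yield an unrelated opaque identifier for every site:

```python
# Conceptual sketch of per-site device identifiers: one device secret, but a
# different opaque ID for every site, so the ID cannot be correlated across
# sites. Generic HMAC construction -- not Mozilla's actual sandbox code.
import hashlib
import hmac
import os

device_secret = os.urandom(32)   # per-device secret held by the sandbox

def site_identifier(site_origin: str) -> str:
    """Derive a stable, site-scoped identifier from the device secret."""
    return hmac.new(device_secret, site_origin.encode(), hashlib.sha256).hexdigest()

# The same device presents unrelated identifiers to different sites.
print(site_identifier("https://example-video.test"))
print(site_identifier("https://another-site.test"))
```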

Adobe and the content industry can audit our sandbox (as it is open source) to assure themselves that we respect the restrictions they are imposing on us and users, which includes the handling of unique identifiers, limiting the output to streaming and preventing users from saving the content. Mozilla will distribute the sandbox alongside Firefox, and we are working on deterministic builds that will allow developers to use a sandbox compiled on their own machine with the CDM as an alternative. As plugins today, the CDM itself will be distributed by Adobe and will not be included in Firefox. The browser will download the CDM from Adobe and activate it based on user consent.

While we would much prefer a world and a Web without DRM, our users need it to access the content they want. Our integration with the Adobe CDM will let Firefox users access this content while trying to maximize transparency and user control within the limits of the restrictions imposed by the content industry.

There is also a silver lining to the W3C EME specification becoming ubiquitous. With direct support for DRM we are eliminating a major use case of plugins on the Web, and in the near future this should allow us to retire plugins altogether. The Web has evolved to a comprehensive and performant technology platform and no longer depends on native code extensions through plugins.

While the W3C EME-based DRM world is likely to stay with us for a while, we believe that eventually better systems such as watermarking will prevail, because they offer more convenience for the user, which is good for the user, but in the end also good for business. Mozilla will continue to advance technology and standards to help bring about this change.

By Andreas Gal, Mozilla

Netflix’s Many-Pronged Plan to Eliminate Video Playback Problems

For all of Netflix’s complaints about Internet service providers harming video performance, one of the company’s top technology experts is confident that the streaming company can solve most of its customers’ problems.

David Fullagar, Netflix’s director of content delivery architecture, spoke about the company’s plans Monday at the Content Delivery Summit in New York. He described the hardware Netflix uses in its Open Connect content delivery network (CDN), noting that the company has a technological advantage over traditional CDNs because it’s always delivering content to devices running Netflix’s own software rather than using a hodgepodge of products built by other companies.

The best-known parts of Open Connect are probably the storage boxes that Internet service providers can take into their own networks to bring content closer to consumers. ISPs can also peer with Netflix, exchanging traffic directly without hosting Netflix equipment. But these aren’t the only ways Netflix’s Open Connect technology can deliver good quality.

Netflix used to use third-party CDNs such as Akamai, but it has moved most of its traffic over to Open Connect in the past couple of years. Outside the US, 100 percent of Netflix traffic is distributed using Open Connect equipment. The percentage is in the “high 90s” in the US, with plans to hit 100 percent this summer. Even if the storage boxes aren’t inside an ISP’s network, they’re not too far away. They could even be in the same data centers, the Internet exchange points where Netflix transit providers connect to ISPs.

Fullagar was asked by an audience member how Netflix works with ISPs who offer competing products. “From a quality point of view we don’t need to be that close to the end user for the sort of video we serve,” Fullagar said. “Having extremely low latency is nice” because it allows videos to start playing faster. However, “what we’re most interested in is a good, uncongested link, and that doesn’t necessarily have to be very low latency.”

Netflix’s peering with ISPs has been controversial because some of the Internet providers have demanded payment in exchange for accepting Netflix traffic. Netflix gave in to Verizon and Comcast, agreeing to pay both companies, but it has claimed that the Federal Communications Commission should force the ISPs to provide free peering. Netflix has sent its traffic through congested links when its business disputes have gone unresolved, letting quality deteriorate despite the other steps Netflix takes to improve it. (Comcast and analyst Dan Rayburn accused Netflix of purposely sending traffic through congested links.)

When asked how much Netflix can affect streaming performance given that it controls the server end of the connection as well as the user’s software, Fullagar said, “I think we’re on the tip of the iceberg of being able to do quite a lot there.” Netflix’s access to information about each customer’s device and Internet connection will fuel some as-yet-unrevealed strategies for improving quality, he said.

“We have extra information beyond just, hey this is someone wanting this file,” he said. “At connection time we know the sort of client they are, whether it’s a Wii or a PS4 or a streaming stick. We know the network they’re on, we know a bunch of historical information about latency and quality of service we’ve had to those networks. We know whether they’re connected on a device that’s wired or wireless. There’s a bunch of hints that we have there.”

The company has started some “experiments that are working out really well, and in the future we’ll talk more about that.”

Netflix itself has equipment at about 20 Internet exchange points in North America and Europe and has "tens if not hundreds of embedded caches in ISP networks," Fullagar said.


The Network Team
Netflix’s Open Connect division has about 40 people, Fullagar said. About 20 are software engineers who either build software for Netflix servers or work on the company's management software, which runs on Amazon’s cloud network and performs functions such as load balancing. Another 10 Open Connect employees are network architects, and another 10 are in operations.

Netflix stores video on two types of boxes that it designed, one that’s heavy on HDDs and another that’s all SSDs. Netflix built them in part because it couldn’t find the right mix of compute and storage capabilities in products from hardware vendors.

The HDD unit is a 4U-sized chassis that holds 216TB on 36 drives of 6TB each. It has 64GB RAM, a 10 Gigabit NIC, and some SSD for frequently accessed content.

The smaller, 1U, SSD-only unit contains 14 drives of a terabyte each, 256GB of RAM and a 40 Gigabit NIC. About 75 percent of the cost of both the HDD and SSD boxes is taken up by storage. Each unit uses Intel CPUs.

Netflix refreshes hardware annually to improve performance. At its biggest locations, Netflix keeps multiple copies of its entire video library in case of failure. That’s more than a petabyte of video files for its North American catalog.

The company relies heavily on open source software, including FreeBSD and the Nginx Web server, as well as several management applications the company wrote itself.

Netflix distributes multiple terabits per second and accounts for an astonishing one-third of North American Internet traffic at peak times, i.e. the traditional TV “prime time” each evening. During off-peak hours in the middle of the night, Netflix fills disks with the videos its algorithms say people are most likely to watch the next day. This dramatically reduces network utilization during peak hours.

The management software Netflix runs on Amazon Web Services handles distribution of content, analyzes network performance, and connects users to the proper video sources. Netflix wrote its own adaptive bitrate algorithms to react to changes in throughput, and a CDN selection algorithm to adapt to changing network conditions such as overloaded links, overloaded servers, and errors, the company said.
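
Netflix has not published these algorithms. As a flavor of the simplest ingredient (picking a ladder rung from measured throughput), here is a deliberately naive sketch; real ABR logic also weighs buffer level, throughput history, and device capability.

```python
# Deliberately naive throughput-based ABR rung selection, to give a flavor of
# what "adaptive bitrate algorithms" decide. Netflix's actual logic is
# unpublished and far more sophisticated (buffer state, history, device, ...).
BITRATE_LADDER_KBPS = [235, 375, 750, 1750, 3000, 5800]  # illustrative rungs

def pick_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Choose the highest rung below a safety fraction of measured throughput."""
    budget = throughput_kbps * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(pick_bitrate(4200))   # -> 3000
print(pick_bitrate(300))    # -> 235
```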

When Netflix used multiple third-party CDNs, connections would fail over from one to another in case of error. Netflix still uses the same failover technology, but with “multiple hierarchies” within Open Connect instead of multiple CDNs, Fullagar said.

Although Netflix is moving all its data onto Open Connect hardware, that doesn’t automatically reduce the controversial role its transit providers Level 3 and Cogent have played in carrying traffic. Level 3 and Cogent have warred with ISPs over whether they should have to pay in order to send Netflix traffic onto their networks. As a result, interconnections between these transit providers and ISPs have gotten congested, reducing the quality of Netflix and other Web services that travel over the links.

The role of transit providers is only reduced when Netflix signs direct interconnection agreements with ISPs, as it has done with Verizon and Comcast, a Netflix spokesperson said. In the absence of such agreements, Netflix data passes through the company’s own CDN and then through a transit provider before hitting an ISP's network.

The payment controversies don’t necessarily affect the working relationship between the technical teams of Netflix and ISPs, though. “Engineering people at companies, whether large or small, operate independently of commercial interests,” Fullagar said. “In the UK, one of our biggest competitors is one of our best networking partners.”

Source: Ars Technica

PrestoCentre Standards Register

The PrestoCentre Standards Register gathers information on standards for content and metadata used across all communities involved in audiovisual digital preservation.

Free Loudness Meter for Windows & Mac

Orban has introduced Version 2.7 of the free Loudness Meter for Windows (Vista/7/8) and Mac (OS X 10.6 or greater). The software adds two important new features: support for up to 7.1-channel surround, and the ability to analyse files in several common formats offline to measure their ITU-R BS.1770-3 Integrated Loudness and Loudness Range. This combination of new features allows any organization to qualify files, whether stereo or surround, for compliance with the CALM Act and EBU R128.

Like its predecessor, version 2.7 measures loudness using both the Jones & Torick (CBS Technology Center) and BS.1770-3 algorithms, displaying BS.1770-3 Short-Term, Momentary, and Integrated loudness in addition to the Jones and Torick loudness.

The meter also provides PPM and VU meters, and a "Reconstructed Peak" meter with an 8X-upsampled sidechain to predict the peak levels that will appear after digital-to-analogue conversion. This reconstructed peak meter exceeds the requirements of the "true-peak" meter described in BS.1770-3, which specifies a 4X-upsampled sidechain.
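
For readers curious what BS.1770-3 "Integrated Loudness" gating involves, here is a heavily simplified mono sketch. The mandatory K-weighting pre-filter and multichannel weights are omitted, so its numbers illustrate the mechanism only and are not compliant measurements.

```python
# Heavily simplified sketch of BS.1770-3-style integrated loudness for a mono
# signal: mean square over 400 ms blocks with 75% overlap, a -70 LUFS absolute
# gate, then a -10 LU relative gate. K-weighting and channel weights are
# omitted, so results are illustrative only.
import math

def block_loudness(block):
    """Loudness of one 400 ms block (no K-weighting applied)."""
    ms = sum(x * x for x in block) / len(block)
    return -0.691 + 10 * math.log10(ms) if ms > 0 else float("-inf")

def mean_loudness(levels):
    """Energy-average a list of block loudness values."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels) / len(levels))

def integrated_loudness(samples, rate=48000):
    step, size = int(0.1 * rate), int(0.4 * rate)   # 75%-overlapped 400 ms blocks
    blocks = [samples[i:i + size] for i in range(0, len(samples) - size + 1, step)]
    levels = [block_loudness(b) for b in blocks]
    levels = [l for l in levels if l > -70.0]       # absolute gate at -70 LUFS
    if not levels:
        return float("-inf")
    threshold = mean_loudness(levels) - 10.0        # relative gate at -10 LU
    gated = [l for l in levels if l > threshold]
    return mean_loudness(gated) if gated else float("-inf")
```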

Source: TV Technology

Faroudja’s Bit Rate Reducer Gambit

There's no end in sight for the consumer's insatiable appetite for bandwidth. That demand continues to stress transmission networks and storage systems throughout the world.

But what if someone said there's a way to significantly reduce the bit rate requirements of video content -- other than current compression schemes -- without damaging the image quality?

That would make people sit up and listen -- especially when that someone is Yves Faroudja, a legend in the video industry. Faroudja has been behind almost every video processing and scaling technology development for decades and is a recipient of three Emmy technology awards.

So, of course, everyone in Las Vegas at this year's NAB (National Association of Broadcasters) Show listened when Faroudja Enterprises, a privately funded startup, trotted out its new technology, a video bit rate reducer designed to provide up to 50 percent reduction in video bit rates without reduction in image quality.

In a hotel suite at the NAB show in Las Vegas, Faroudja Enterprises showed off its new technology, a video bit rate reducer.

Faroudja's scheme doesn't alter current compression standards (MPEG-2, MPEG-4, HEVC). It's rooted in Faroudja's belief that such compression systems aren't using all the available redundancies to improve compression efficiency.

Under the new scheme, Faroudja introduces a new pre-processor (prior to compression) and post-processor (after compression decoding). "We take an image and simplify it; and that simplified image goes through the regular [standards-based] compression process," he explained. "But we never throw away information."

Instead, in parallel with the conventional compression path, Faroudja inserts what he calls a "support layer." This compresses signals not used in Faroudja's so-called simplified image. Together with the decompressed simplified image, the support layer helps reconstruct the original image in full resolution -- at a reduced bit rate.
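
Faroudja's actual processing remains undisclosed. The sketch below illustrates only the generic base-plus-residual layering that his description evokes, using a 1-D signal, pairwise averaging as the "simplification", and no real codec:

```python
# Generic base-plus-residual layering, to make the architecture concrete.
# Faroudja's actual processing is undisclosed; this only illustrates the idea
# of a "simplified image" plus a "support layer" that together restore the
# original. 1-D signal, averaging as the "simplification", no real codec.
import numpy as np

def simplify(signal: np.ndarray) -> np.ndarray:
    """Pre-processor: 2x decimation by pairwise averaging ('simplified image')."""
    return signal.reshape(-1, 2).mean(axis=1)

def expand(base: np.ndarray) -> np.ndarray:
    """Post-processor side: upsample the decoded simplified signal."""
    return np.repeat(base, 2)

signal = np.array([4.0, 6.0, 10.0, 2.0, 8.0, 8.0])
base = simplify(signal)            # goes through the standard codec
support = signal - expand(base)    # "support layer": nothing is thrown away
restored = expand(base) + support  # receiver reconstructs full resolution
assert np.allclose(restored, signal)
print(base, support, sep="\n")
```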

System overview shows an example of how Faroudja Enterprises' process works when applied to 4K video.

Faroudja claims "a bit rate reduction of 35% to 50% for an equivalent image quality [and] a significant improvement of the image quality on low bit rate content."

Details, of course, remain tightly wrapped in the black box. Faroudja hasn't decided on the initial market focus for his new technology. He said he's keeping his options open for a business model -- for now. Faroudja's team, however, has completed proof-of-concept and done a public demonstration.

Potential markets for the new technology include broadcast, cable and satellite, streaming media over the Internet, or Skype-like video conferencing.

Faroudja Enterprises is about to embark upon "market implementation" of the technology "over the next couple of months" by tailoring it to market demands, Faroudja told EE Times in an interview this week.

Jon Peddie, president of Jon Peddie Research, said that there is no one else who has rolled out anything similar to what Faroudja has done. "Faroudja is not displacing any company or technology. They are augmenting the existing codecs, including H.265," said Peddie.

But when it comes to its potential competitors, there are many. Richard Doherty, research director at Envisioneering Group, says future competitors are "universities with strong analytical math departments," including Cambridge, Oxford, Harvard, MIT, Columbia, Stanford, Princeton, Georgia Tech, U.C. Berkeley, Purdue, and the Fraunhofer Institute.

Doherty predicts, "Once this is known to work, other low-key efforts might get ramped up." They'd all be racing to "receive funding to uncover other methods which do not infringe on Yves's."


Market Acceptance?
Faroudja's video bit rate reducer has many advantages. It's compatible with existing standards: the process is compression-standard agnostic, applies to all existing standards from MPEG-2 to HEVC, and reduces bit rate in all cases, the company claims.

Faroudja explained that a final image will be preserved even if just the pre-processor is used, without the post-processor. The image survives even if the support layer is interrupted. Images with two levels of resolution can be made available at the same time -- for example, simultaneous availability of a program in both 1080 lines and 4K resolutions, according to the firm.

Moreover, Faroudja's support layer "can further be configured as a transcoder" to help convert video among existing compression formats, such as MPEG-2, H.264, etc., to and from Faroudja's support layer format "to save bandwidth, bit rate, or file size in the cloud without sacrificing image quality."


The Industry Reaction
After the NAB show, Faroudja reported that his company's technology demonstration in a Las Vegas hotel suite produced "a lot of very positive feedback" from audience members in the high-end video business, including broadcasters and cable and satellite operators.

However, still missing are "responses from streaming Internet companies and those with Skype-like teleconferencing technologies. We will know more [about the target market] after we attend the Streaming Media East conference in New York in May."

Asked about potential business and technology hurdles, Envisioneering's Doherty said, "If [the technology is] as simple as Yves suggests, not many. It would be a slam dunk for satellite, wireless, fiber, cable, anything." Asked who would embrace this first, Doherty predicted that bandwidth constrained back-hauls -- broadcast, cable, satellite, and cellular -- will benefit most.


Highly Dependent on Content
Tom McMahon, a video expert at Del Rey Consultancy, pointed out that the industry still needs to see how the technology works in a variety of scenarios -- fast motion video, noisy images, video in lower resolution, video streamed at a lower bit rate -- and how much gain it could achieve.

He noted that "a bunch of experts" discussed similar layered approaches a decade ago when the industry was still developing the H.264 standard. Dolby Laboratories worked on a similar scheme, he said, but the technology was never implemented. "The issue [of such an approach] is that it is highly dependent on content." However, he quickly added that service providers intent on bandwidth conservation and delivery of better quality video at lower bit rates are hungry for a technology such as Faroudja's. It's also applicable in smart encoder designs.

Satellite operators such as DirecTV, operating in a controlled environment, might be interested, said McMahon. Even next-generation set-tops or TVs/media boxes from Google or Apple could take advantage of meta-data offered in a support layer such as Faroudja's, he added.

In the end, market acceptance will depend on how much investment is needed for Faroudja's system (pre- and post-processing, and the inclusion of a support layer).

Jon Peddie pointed out that one of the initial hurdles in garnering market acceptance is "getting people to 'grok' it." He called it a "double-edged sword," because it is "black magic until Yves goes into detail." That's something Faroudja "doesn't want to do with anyone who isn't willing to commit to it. Anyone selling IP has to walk that balancing act: disclosure vs. trust/risk."

Peddie predicts that telecoms wanting to compete with cable and satellite should be early to embrace Faroudja's video bit rate reducer.


Complexity
Faroudja explained that the processing power necessary for Faroudja Enterprises' process is roughly a "doubling of H.264." Most of the complexity, he noted, is in the pre-processor side.

In the technology demonstration, Faroudja used a multiple-unit rack system including off-the-shelf GPU boards. "We probably used 10 to 15 percent of the hardware [processing power] we had."

Asked if the technology is applicable to the consumer market -- including TVs, mobile phones, tablets -- he said it will be in the future, if it's licensed to chip vendors. However, he added that the process's present form isn't yet consumer-ready.

Faroudja hasn't exactly decided on the company's business model. Options he's considering include making the technology available in hardware, software, or through licensing. He said his company can do licensing agreements with system vendors, license IP to silicon vendors, or make its software downloadable on customers' hardware.

Faroudja Enterprises, which has developed its latest technology in complete secrecy over the last year, already has two patents granted and six more pending.

Faroudja's reputation as a veteran inventor over age 65 helped grease the skids for the patents. Faroudja was surprised that his patent applications were entitled to accelerated examination. Briefly assuming the aspect of an absent-minded professor, he said, "I didn't know that."

By Junko Yoshida, EE Times

HEVC Demonstrates its High Efficiency

HEVC Version 1 demonstrates it has achieved more than 50% bitrate savings compared to MPEG-4 AVC/H.264, the MPEG standards group announced following its 108th meeting, held at the beginning of April in Valencia, Spain.

In a verification test campaign of the HEVC (ITU-T Rec. H.265 | ISO/IEC 23008-2) compression standard following its finalisation last year, “a formal subjective quality assessment has been executed using a large variety of video material, ranging from wide-screen VGA resolution up to 4K. The material had not previously been used in optimizing HEVC's compression technology”, MPEG announced in a prepared statement, adding: “Clear evidence was found that HEVC is able to achieve 50% bitrate savings and more, compared to the AVC High Profile”. The results will be made publicly available in report N14420, to be published on the MPEG website.

HEVC's scope continues to be extended with a call for proposals on Screen Content Coding, the compression of video containing rendered graphics and animation. This extension is scheduled for completion by the end of 2015.

HEVC's 2nd edition includes support for additional colour formats and higher precision. The ‘Range Extensions amendment’, with technology allowing efficient compression of video content for colour sampling formats beyond 4:2:0 and up to 16 bits of processing precision, has been finalised. In particular, the lossless and near-lossless range of visual quality is more efficiently compressed than is possible with the current version 1 technology.

Web Video Coding, the standard for a worldwide, reasonable and non-discriminatory, free-of-charge licensable compression scheme for use in browsers, has reached the final stage before approval. MPEG expects to complete the Final Draft International Standard in February 2015.

MPEG is also working on standardising what it refers to as ‘free-viewpoint television’. A public seminar on FTV (Free-viewpoint Television) will be held on July 8th to align MPEG's future standardization of FTV technologies with user and industry needs. Targeted application scenarios are Super Multiview Displays “where hundreds of very densely rendered views provide horizontal motion parallax for realistic 3D visualization, extracted from a dense or sparse set of input views/cameras in a circular or linear arrangement”.

“Integral Photography, where 3D video with both horizontal and vertical motion parallax are captured for realistic display”. And “Free Navigation that allows the user to freely navigate or fly through the scene, not just along predefined pathways”.

MPEG expects that future FTV systems will require new functionalities such as a substantial increase in coding efficiency and rendering capability compared to technology currently available. The FTV initiative will also consider novel means for acquiring 3D content that have recently emerged, e.g. plenoptic and light field cameras. You are invited to join the FTV seminar to learn more about MPEG activities in this area and to help revolutionise the viewing experience.

The increases in spatial resolution and colour resolution, scalable coding, and autostereoscopic 3D (in MPEG speak, Multi-view) lead to amendments to the trusty old MPEG-TS transport stream layer. The amendment specifies transport of layered coding extensions for the scalable and multiview enhancements of HEVC, and the signaling of associated descriptors, so that different layers can be encapsulated and transported individually.

To support new standards offering 4K/8K UHDTV services using the newly developed MPEG Media Transport (MMT) standard, an effort to promote the new transport layer has begun in Japan, with a growing number of companies implementing MMT for various applications. MPEG is organising an MMT Developers' Day in conjunction with its 109th meeting this July in Sapporo, Japan.

MPEG is also working on 3D audio and dynamic range control. The DRC system provides comprehensive control to adapt the audio as appropriate for the particular content, the listening device, environment, and user preferences. The loudness control can be applied to meet regulatory requirements and to improve the user experience, especially for content with large loudness variations.

By Donald Koeleman, Broadband TV News

LG Electronics Expands 'Second Screen' TV Ecosystem with Open-Source, Multi-Platform 'Connect SDK'

LG Electronics is making Connect SDK, an open source software development kit, available to Google Android and iOS developers to extend their mobile experience across tens of millions of big TV screens around the world.

By unifying device discovery and connectivity across multiple television platforms, Connect SDK is the first to truly address the complexity associated with implementing second screen capabilities while reaching the largest installed base of smart TVs and connected devices.

For consumers, this means that LG's new Smart TVs, powered by the webOS platform, as well as other popular TV devices, will be able to connect and interact with more mobile apps – further enhancing their second screen television experience.

Using Connect SDK, application developers can connect their mobile applications with 2014 LG webOS Smart TVs, LG Smart TVs from 2012 and 2013, Roku, Google Chromecast, and Amazon Fire TV devices:

  • Mobile applications with photos, videos, and audio can beam media to four TV platforms. Applications with YouTube videos can beam them to all LG Smart TVs dating back to 2012, Roku 3, Google Chromecast, Amazon Fire TV, and most DIAL-enabled devices.
  • Developers can build TV-optimized web applications and media viewers and use them across LG webOS Smart TVs and Google Chromecast.
  • TV application developers can use their mobile apps to promote the existence of their TV app on 2014 LG webOS Smart TVs and 2013 LG Smart TVs, as well as Roku devices.

Source: LG

Mezzanine Film Format Plugfest

The adoption of Digital Cinema projection has opened new opportunities for the distribution of films. The digitization of commercial film collections and film heritage is developing rapidly. Up to now there has been no specific format for the interoperable exchange and conservation of cinematographic works at the highest required quality.

In February 2011, at the request of the French Centre National du Cinema, the Commission Supérieure Technique started to work on the technical recommendation CST-RT021. The goal of this recommendation is to ensure that the commercial exploitation of digitized cinematographic works on modern digital distribution channels is made possible with the required quality.

The first version of the CST-RT021-MFFS specification has been published. It has been designed to be consistent with current developing standards, specifically the SMPTE Interoperable Master Format (Technical Committee 35 PM). The expert group led by CST is proposing that film laboratories, audio-visual equipment manufacturers and all interested parties participate in a Plugfest to test the specification.

This event will allow testing of the wrapping, the image and sound encoding, and the colour coding detailed in CST-RT021-MFFS, in order to ensure the interoperability of commercial solutions.

Source: ETSI

DASH Talks NAB 14

During the 2014 NAB show in Las Vegas, the DASH Industry Forum organized a two-hour session with nine presentations giving you the latest technical, business and deployment updates on MPEG-DASH. Speakers represented companies from across the DASH ecosystem. Here is the video replay of this event.

DPP Launches a Producer’s Guide to File Delivery

As the UK broadcast industry moves towards full digital delivery, the DPP has launched A Producer’s Guide to File Delivery, a complete handbook that explains – from a production point of view – all that is involved and required in the new process.

The guide is published six months ahead of 1st October, the date UK broadcasters will move to full digital delivery, and walks through the process step by step. The handbook includes guidance on the key stages of file delivery:
  • Completing the Programme – final Video and Audio
  • Creating MXF Files
  • QC Checks – Eyeball tests and Automated QC
  • PSE Checks
  • Adding DPP Metadata
  • Delivering DPP Standard Programme File
  • Late Changes before TX

Source: DPP

Content Versioning is Out of Control

At the Technology Summit on Cinema at NAB, Walt Disney's Howard Lukk said there can now be a total of 35,000 possible versions of a movie that have to be generated to serve all the possible ways the movie can be seen. This is apparently based on a permutation (or multiplication) of all of the variables in creating a particular version.

This presumably means cinematic versions, packaged media versions and versions for cable, satellite, broadcast and internet distribution. While a number this high may be possible, it is also unlikely; nonetheless the count is in the thousands and represents a huge challenge for the industry.

For theatrical distribution, for example, Walt Disney's Leon Silverman said they typically need to create over 100 masters of each film. That includes versions with different audio mixes, different languages and different platform-specific needs.

To illustrate, he described two movies in his talk. The new movie “Planes”, for example, had 126 different masters, with one of the plane characters having a different name and artwork depending on the country in which it was screened.

He also showed a clip from the movie “Frozen” playing the song “Let It Go”, in which every verse was sung in a different language. There must have been 40 different languages, each performance by a different singer, yet they were blended perfectly to sound seamless. Incredible! He also lamented the versioning required to market a film, with dozens of thumbnail images that must be produced for various web sites.

Overall, he described what he called the “new post post world order,” which he said has changed the landscape for just about every aspect of the way movies are made today. He started his talk by noting that the workflow of cinema is increasingly being merged with the TV production workflow and that it may be hard to tell the difference in the future. He then gave details on ways the industry is complex (versioning being one aspect), connected, global and secure. While the title of the session was “From Camera to Consumer”, he renamed it “From Camera to Netflix”.

Filmmakers must work in a connected, networked, global environment, but he did not seem particularly concerned about technology being able to handle the needs going forward. Security is more of a concern for Disney, with isolated networks, multiple security protocols and audits done to help protect its IP. Success or failure here can have huge impacts on the studio, and on careers as well.

His description of what is needed was so incredible it led others on the panel to hope they never had his job.

How one gets to 35,000 versions is still a little unclear, but the figure presumably includes all the different aspect ratios, video formats, audio formats, broadcast formats and Internet formats, and may include encoding formats and all of those variables as well. While a studio would not necessarily have to produce all those versions itself, someone somewhere would, adding enormous overhead to the process.

One solution to the format issue is a project that was also described at the event called IMF (Interoperable Master Format). This is an industry-wide effort started by the major studios, which began as a business-to-business counterpart to the business-to-consumer cinema formatting standards effort known as DCI (Digital Cinema Initiatives). Speakers from Disney, Sony and Fox described their efforts to create a SMPTE standard (now issued) and to implement the format at their studios.

The basic idea is to have a “core framework” consisting of the main visual aspects, plus a series of “modular applications” that plug into the format and add specific functionality such as codecs, particular resolutions and frame rates, and other aspects. This is all managed by a Composition Playlist (CPL), which allows localized versions to be generated from a single file format.


Basic structure of IMF package


While IMF doesn’t reduce or eliminate versioning needs, it does create a file structure that makes version creation much more efficient and dramatically reduces the storage space needed for all the versions. Both Sony and Fox cited incredible storage savings (on the order of 25x) for projects they have initiated using IMF.

Sony’s project, for example, was to create 100 UHD-resolution titles for use in the roll-out of its UHD/4K TVs, which it has now done. For Sony, this meant going back to the original masters of each film and remastering the finished film in UHD resolution in the xvYCC/Rec. 709 color space, encoding in YCbCr using IMF App 2 (broadcast profile level 5) at a 400 Mbps average and 800 Mbps maximum rate.




As shown in the graphic, Sony has now created IMF versions of 104 feature films and 140 TV episodes. And the space savings are incredible: the uncompressed versions of these films total a whopping 1,001 TB, while the IMF versions occupy only 43 TB.

By Chris Chinnock, Display Central

Ultra HD in 2014: A Real World Analysis

An interesting white paper about UHD by Harmonic.

Vantrix Open Sources Free HEVC Encoder

Vantrix announced the creation of an open source H.265 encoder, called the F265 project. The project aims to accelerate the industry-wide development and adoption of H.265, also known as High Efficiency Video Coding (HEVC).

H.265 is the successor to the industry-standard H.264 codec used for video compression. The new specification, ratified in 2013, provides double the data compression ratio of H.264 while ensuring the same level of visual quality. It is expected to be a major driver in the adoption of 4K Ultra High Definition and beyond by reducing the amount of transmission bandwidth required versus current standards.

Vantrix's F265 encoder will be licensed under the OSI BSD terms, enabling access to source code, free redistribution, and derived works. The intent is to encourage both researchers and commercial entities to contribute to the refinement and evolution of the code to accelerate the implementation of both software and hardware systems.

The project will initially target high quality offline encoding, but will not be limited to this scope. It is designed from the ground up to maximize quality and performance in offline and real time scenarios using recent hardware technology and interfaces such as the Intel AVX2 instruction set.

The F265 project site is in the process of being finalized and those interested in being notified of its availability can sign up at www.vantrix.com/f265.

Source: Vantrix

EBUCore v1.5 Includes the New EBU Audio Model (ADM)

The new version of EBUCore can be downloaded as EBU Tech 3293 v1.5. Thanks to the efforts of the metadata experts in the EBUCore developer and user community, the new version of the EBU's metadata flagship includes several enrichments.

The most prominent update probably is the integration of the recently published EBU Audio Definition Model (ADM) (EBU Tech 3364). The ADM provides a complete set of technical and informative metadata to describe a file's audio content.


Graphical representation of the EBU ADM


It is designed not only to support current channel-based audio configurations such as 5.1 and 15.1, but also to be ready for future formats, by using ADM extensions. The ADM is shared with other standards organisations, such as the AES, AMWA/EBU FIMS, ISO/IEC MPEG, ITU, SMPTE and W3C.

All EBUCore additions are clearly documented in the new version. Special attention has been paid to simplifying the specification, by focussing it on examples of implementations. A 'Download Zone' chapter provides links to the related Schema, including semantic technology in the form of the updated EBUCore RDF/OWL ontology.

Source: EBU

Microsoft Smooth Streaming Client 2.5 with MPEG DASH Support

The PlayReady team, working in conjunction with the Windows Azure Media Services team, is pleased to announce the availability of the Microsoft Smooth Streaming Client 2.5 with MPEG DASH support.

This release adds the ability to parse and play MPEG DASH manifests in the Smooth Streaming Media Engine (SSME), providing a Windows 7/Windows 8 and Mac OS solution that uses MPEG DASH for on-demand scenarios.

Developers who wish to move content libraries to DASH have the option of using DASH wherever Silverlight is supported. The existing SSME object model forms the basis of DASH support in the SSME. For example, DASH concepts like Adaptation Sets and Representations have been mapped to their logical counterparts in the SSME.

Specifically, Adaptation Sets are exposed as Smooth Streams, and Representations are exposed as Smooth Tracks. Existing track selection and restriction APIs can be expected to function identically for Smooth and DASH content.

In most other respects, DASH support is transparent to the user and a programmer who has worked with the SSME APIs can expect the same developer experience when working with DASH content.

Some details on the DASH support compared to Client 2.0:

  • A new value of ‘DASH’ has been added to the ManifestType enum. DASH content that has been mapped into Smooth can be identified by checking this property on the ManifestInfo object. Additionally the ManifestInfo object’s version is set to 4.0 for DASH content.
  • Support has been added for the four DASH Live Profile Addressing modes: Segment List, Segment Timeline, Segment Number, and Byte Range.
  • For byte range addressable content, segments defined in the SIDX box map 1:1 with Chunks for the track.
  • A new property, MinByteRangeChunkLengthSeconds, has been added to Playback Settings to provide SSME with a hint at the desired chunk duration.
  • Multiple movie fragments will be addressed in a single chunk such that all but the last chunk will have a duration greater than or equal to this property. For examples of how to set Playback Settings see the Smooth documentation.

There are some limitations in this DASH release, including:
  • Dynamic MPD types are not supported.
  • Multiple Periods are not supported in an MPD.
  • The EMSG box is not supported.
  • The codec and content limitations that apply to Smooth similarly apply to DASH.
  • Seeking is supported, but not trick play. Playback rate must be 1.
  • Multiplexed streams are not supported.
If you have feature requests, or want to provide general feedback, please use the Smooth Streaming Client 2.5 forum.

Source: Microsoft

DPP Launches Quality Control Guidelines

The DPP has released a set of standardised UK Quality Control Requirements to help producers carry out the QC checks needed to ensure they deliver broadcast-quality files that meet the required standards.

The guidelines, which are published six months out from the 1st October, when UK broadcasters will make the move to full digital delivery, have been produced in collaboration with the EBU.

The EBU’s Strategic Programme for Quality Control is currently defining all possible QC criteria as well as guidance on their implementation. The DPP has taken the EBU definitions and created a minimum set of tests and tolerance levels required for UK broadcasters.

Included in the new guidelines are AS-11 file compliance checks and automated quality control tests for video and audio; examples are loudness levels and freeze frames.

The guidelines also include a list of ‘Eyeball’ tests that a producer needs to undertake before delivering the programme. Included on the checklist are such things as audio sync, buzzing, unclear sound, clock, and visual focus.

Commenting on the new work, Kevin Burrows, DPP Technical Standards Lead and CTO Broadcast & Distribution, Channel 4, said, “The DPP’s QC guidelines offer a standardised set of checks expected prior to the digital delivery of broadcast programmes to the UK Broadcasters. They are designed to streamline the QC process and help minimise the issues that can arise in programme delivery.”

Andy Quested, Principal Technologist BBC, who has been instrumental in devising the guidelines as well as leading the EBU’s work, added, “Post houses and broadcasters are seeing a significant increase in the volume of file-based programmes they need to handle. It is really important that the QC tests give accurate and consistent results. The new guidelines don’t just explain the process and the test to be carried out, they also make it clear who is responsible for signing off the QC process.”

The guidelines will be implemented by broadcasters as they move towards digital file delivery. Production companies will be required to deliver their compliant files along with a valid QC report, as has previously been the case with the PSE report.

The QC report can be delivered using a separate PDF or XML file output from the QC tool. DPP broadcasters will accept PDF QC reports initially, but will encourage XML reports over time once they are standardised by the EBU group and the DPP.

Source: Digital Production Partnership

DASH Segmenting

Usually when creating a video, all that is needed is to encode it using a codec (for example H.264 or HEVC). However, to transmit a video using MPEG-DASH, an extra segmentation step is required. Typical encoders do not provide this step and produce content which is not compatible with DASH.

Hosted as a GitHub project, dash.encrypt is available as an open-source application written in Java. It takes encoded video and audio in an array of different formats and repackages them as valid DASH streams. It also generates the required manifest, which serves as the table of contents for the stream.

Firefly Launches FirePlay

Firefly Cinema launches FirePlay, the first free universal player able to read all digital file formats in real time, up to a resolution of 4K.

FirePlay is extremely easy to use and requires no specific knowledge. It is intended for all professionals working in digital cinema, and simplifies the work of DPs, camera assistants, or DITs.




FirePlay can play a variety of file formats in real time - such as RED, ARRIRAW, Apple QuickTime, ProRes, Canon and Sony - and allows quality control of rushes. The latest version also supports new 4K and UHD formats. In addition, leveraging the processing power of the new Apple Mac Pro, FirePlay can instantaneously read Sony RAW 4K files.

Aside from supporting native camera files, FirePlay can also read OpenEXR, DPX, TIFF files and more, making it a powerful preview tool.

Finally, FirePlay offers a unique tool that can be used on set, allowing corrections to primary and secondary colors to be applied in real time.

Click here to download FirePlay.

DPP Launches Access Services Standards

The UK's Digital Production Partnership (DPP) has agreed an industry document format standard for the exchange of subtitles for the hard of hearing and audio description.

Aligned with the EBU’s new subtitle document format (EBU-TT), produced in July 2012, the new standard has been created to support UK broadcasters’ requirements for subtitle and audio description script transfer.

The DPP’s subtitle document format extends EBU-TT with metadata required to support the workflow for delivering prepared subtitles and captured live subtitles in addition to the scripts used during the production of audio description.

All DPP subtitle documents are valid EBU-TT, itself based on the W3C TTML recommendation. The format separates the text and its associated display timing from information about that text, such as where it should be placed on the screen, the font style, size and colour, and separate metadata such as whether a given subtitle is describing dialogue, music, sound effects etc., and what language the subtitles are in.

The format is a flavour of XML that can be validated using off-the-shelf tools and extended to meet specific requirements while remaining interchangeable. In addition, it can support all Unicode characters and arbitrary fonts, should downstream platforms support them. The EBU is in the process of finalising draft guidance for converting from the legacy STL format to EBU-TT.

DPP’s subtitle document format will allow UK broadcasters and access service providers to move away from legacy formats, proprietary or otherwise, and towards an open future-facing format that can be used to provide subtitles on broadcast television and online. By agreeing this format before making the transition, DPP is able to lay the groundwork for a common UK interchange format that will benefit all businesses that need to exchange these documents.

Companies that manufacture access services authoring and processing tools have a clear target format, which is vital in a historically fragmented marketplace. Broadcasters and distributors similarly will have a lower cost of adoption of this richer format that is not encumbered by the constraints of legacy formats.

Kevin Burrows, DPP Technical Standards Lead and CTO Broadcast & Distribution, Channel 4, said, “This new subtitle standard, encompassing the existing EBU TT specification, will allow for the display of subtitles on current and future consumer platforms by the UK broadcasters. This will benefit viewers by enabling a consistent viewer experience across their services.”

Source: Digital Production Partnership

Live Streaming to the Browser Using MSE and MPEG-DASH

For the next two weeks, we’re running a trial in conjunction with Radio 3 to deliver surround sound to your browser for a series of classical music concerts. On the Radio 3 blog, Rupert Brun explains the background to the trial and how to get involved.

Here in Broadcast & Connected Systems at BBC R&D, we are always looking for new ways to apply our technology research to extend the reach of BBC content to the maximum number of users. Surround sound isn’t new, but delivering it to the home via the Internet has traditionally meant installing plugins or other applications, limiting the platforms and consumers we can target.

In this experiment, we believe we are the first broadcaster in the world to stream a live outside broadcast in discrete multichannel audio to the home using MPEG-DASH, and we're doing it using just a compliant web browser - no plugins, no separate software installation required.


Why Stream to the Browser, and Why Now?
The beauty of the browser is that it is (almost) ubiquitous. Every PC, tablet and smart phone has a browser installed when it ships. Increasingly, smart TVs, set top boxes and games consoles have some form of browser environment available, bringing HTML5, CSS and Javascript functionality to the majority of consumer electronic devices.

People expect to be able to consume BBC content on any platform. To enable this, the BBC currently has to support a number of streaming protocols and has to maintain many different applications with differing code bases and levels of functionality in order to support hundreds of different set top boxes, smart TVs, mobile devices and desktop environments.

What if we could have a single encoding and distribution workflow and a single cross-platform client application, reducing the complexity of distribution and allowing our developers to concentrate on delivering great user experiences?

From a listener’s perspective, the browser “just works”, which makes accessing our services much easier. Removing the requirement to install plugins or other software removes a significant barrier for some users. Indeed, for cross-platform compatibility, security and stability, many browser vendors have decided not to support plugins in the future, so we need to move away from them anyway.

Three particular technical standards should enable us to do this in the future: HTML5, MPEG-DASH and W3C Media Source Extensions.


HTML5 and Media Source Extensions (MSE)
In HTML5 the HTMLMediaElement, typically a video or audio tag, exposes a source element which accepts a URL of the content to be played. The browser retrieves, decodes and plays the media data automatically, providing it knows how to handle your media type. This offers simplicity for the developer and, in theory, has removed the need for plugins, but the trade-off is that there is no control over many important variables: how data is downloaded and from where, how much data is buffered, which adaptive streaming algorithms to use or what to do in case of failure.

The ability to control these variables is key to providing a world-class user experience, but by default they are hard-coded into the browser. Ideally we want to hand as much control as possible to the Javascript application, while still deferring to the browser for parsing, decoding and rendering the media data. Typically, Adobe Flash or Microsoft Silverlight applications have been used to provide all of this functionality on those platforms that support those plugins.

Most of these features can be replicated in Javascript but, until now, it has not been possible to feed media data to the HTMLMediaElement. The Media Source Extensions define a Javascript API which allows media streams to be constructed dynamically within a Javascript application.

At the heart of MSE is the MediaSource object. This object is created by the application and attached to the media element. Its purpose is to provide the media data for playback as requested by the media element.

The MediaSource object maintains a collection of SourceBuffers. These are the interface through which the application appends media data to the source, and methods are provided to insert, remove and manage media data. They are essentially an abstraction of a timeline – media data can be appended to the buffer based on media playback timestamp, or it can be appended sequentially, ignoring timestamps. The latter mode enables unrelated media to be spliced together, which allows uses such as advert insertion or even video editing in the browser.
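
As a rough illustration of this API, here is a minimal sketch – not the trial’s actual code – that attaches a MediaSource to an audio element and appends a single fetched segment. The segment URL and codec string are purely illustrative:

    // Create a MediaSource and attach it to the media element.
    var audio = document.querySelector('audio');
    var mediaSource = new MediaSource();
    audio.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
      // One SourceBuffer per stream; the codec string must match the content.
      var buffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/media/init.mp4'); // illustrative URL
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        // Append the initialisation segment; media segments follow the same pattern.
        buffer.appendBuffer(xhr.response);
      };
      xhr.send();
    });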



The application handles the requesting of media data from the server and appends the response to the SourceBuffer. Decoupling the fetching of media data from playback allows the media data to be sourced using novel transport mechanisms or from different locations.

SourceBuffers can contain audio, video or timed text and an instance is created for each stream that needs to be presented. Typically there might be one video stream, one audio stream and perhaps a subtitle stream. Since each media type is handled separately, access services such as audio description or subtitling can be selected simply by requesting a different stream.
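
Continuing the sketch above (the codec strings are again illustrative), creating one SourceBuffer per stream might look like this:

    // One SourceBuffer per stream type.
    var videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    var audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
    // To select audio description, fetch segments from that alternative
    // audio stream and append them to audioBuffer instead.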

Finally, the specification also includes extensions to the HTMLVideoElement allowing measurement of video decode and rendering performance, which could be used to help decide the most appropriate video stream to present if a number of options are available.
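
In browsers that implement it, this is exposed via getVideoPlaybackQuality(); a rough sketch of how a player might use it, assuming videoElement is the page’s video element:

    // Gauge decode performance to guide adaptive stream selection.
    var quality = videoElement.getVideoPlaybackQuality();
    if (quality.totalVideoFrames > 0) {
      var dropRatio = quality.droppedVideoFrames / quality.totalVideoFrames;
      // A persistently high drop ratio suggests switching to a
      // lower-resolution representation.
    }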

An additional benefit of not hardcoding features into the browser is that any functionality upgrades such as improved adaptive algorithms or defect fixes are simply a case of updating the Javascript application, which is freshly fetched each time the page is loaded, rather than requiring every user to upgrade their browser. Software updates to the browser itself might be fairly easy on a PC but happen infrequently on a smart TV or set top box.


Content Delivery Using MPEG-DASH
MPEG-DASH is the new standard for delivering media content over the Internet. It is designed to allow content to be delivered efficiently in a segmented form, making use of standard caching techniques for web content in order to deliver to large audiences. It supports bitrate adaptation, allowing each viewer to receive a stream in the best quality that their Internet connection can deliver.

Since even surround audio streams need only a low bitrate connection, we are not using bitrate adaptation for this trial. The audio stream is simply encoded at a constant rate of 320 kbps using AAC-LC. However, MPEG-DASH still takes care of dividing the live audio stream into short segments that the client can retrieve using HTTP. Most importantly, MPEG-DASH is a streaming standard which can be implemented for a browser using the W3C Media Source Extensions.


Building a MPEG-DASH Player Using MSE
In order to deliver the surround audio to you, a DASH player application needs to at least perform the following tasks:
  1. Create a MediaSource object and set it as the source of the media element
  2. Request and parse manifest and create SourceBuffer objects for each enabled stream
  3. Request segments for each stream and append them to the SourceBuffers
  4. Repeat step 3
Quite a lot of code is required just for those few steps. For this trial, we’ve chosen to use a modified version of dash.js, an open-source MPEG-DASH player implemented entirely in Javascript.
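
For illustration only, a skeletal version of steps 3 and 4 – the segment fetch-and-append loop at the heart of any such player – might look like the following. The $Number$ URL template is an assumption here; a real player derives segment URLs from the manifest:

    // Repeatedly fetch numbered media segments and append them to a SourceBuffer.
    // A real player also paces requests, bounds the buffer and handles errors.
    function streamSegments(sourceBuffer, urlTemplate, firstNumber) {
      var number = firstNumber;
      function fetchNext() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', urlTemplate.replace('$Number$', number++));
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
          // Wait for this append to complete before requesting the next segment.
          sourceBuffer.addEventListener('updateend', function onUpdate() {
            sourceBuffer.removeEventListener('updateend', onUpdate);
            fetchNext();
          });
          sourceBuffer.appendBuffer(xhr.response);
        };
        xhr.send();
      }
      fetchNext();
    }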


Where Next?
MSE has recently reached Candidate Recommendation stage, meaning that it should be complete enough to allow implementation, but browser support is still limited.

Right now, Chrome (33 or higher) and IE11 on Windows 8.1 are the only browsers we’ve seen that support enough features for our trial. If you’re a fan of Firefox, Safari or other browsers, these currently have incomplete support, though many of these vendors have publicly stated they are working on it.

As support for these features becomes more widespread, we expect that more and more content on the Internet will be delivered this way. Other content providers are also starting to use these techniques: Netflix has deployed an MSE-based player – this is the default player if you are using IE11 on Windows 8.1. YouTube has also deployed an MSE-based player for some content on some platforms.

Although we are actively experimenting with MSE in BBC R&D, there are no immediate plans to launch any BBC services using the technology. Nevertheless, HTML5, MPEG-DASH and MSE are a powerful set of standards that are sure to play a significant role in delivering media content on the Internet in the coming years.

By Dave Evans, BBC R&D

Dash.as Player

Dash.as runs MPEG-DASH video on any device supporting Adobe Flash. It was designed from the ground up to be lightweight, with performance in mind. Hosted as a GitHub project, it is available as an open-source video player written in Adobe ActionScript.

Source: castLabs

How to Encode Video for HLS Delivery

If you need to reach mobile devices and OTT platforms, you need to deliver HTTP Live Streaming (HLS). Apple provides plenty of advice for compressionists, but here are some tips and tricks for encoding and testing your HLS files.

By Jan Ozer, StreamingMedia.com

Video Processing at Dropbox

A description of HLS encoding workflow at Dropbox.

Key Media Industry Organizations Launch Joint Task Force on File Formats and Media Interoperability

The launch of the Joint Task Force on File Formats and Media Interoperability was announced today by its sponsors, the North American Broadcasters Association (NABA), Advanced Media Workflow Association (AMWA), Society of Motion Picture and Television Engineers (SMPTE), International Association of Broadcast Manufacturers (IABM), American Association of Advertising Agencies (4A’s), and Association of National Advertisers (ANA). The European Broadcasting Union (EBU) is participating as an observer.

The Task Force brings together manufacturers, broadcasters, advertisers, ad agencies, and industry organizations (standards bodies and trade associations) serving the professional media market, with the ultimate goal of creating greater efficiencies and cost savings in the exchange of file-based content.

The group’s initial focus will be to gather and analyze requirements for a machine-generated and readable file interchange and delivery specification — including standardized and common structured metadata — for the professional media industry. Use case examples include promo, spot, and program delivery from a provider to a broadcaster.

In one of its initial actions, the task force has published a survey designed to collect data on user requirements. Open to any member of the media industry, the survey asks participants to create a one-sentence “user story” by identifying the nature of their work, the specific function they seek, and the business value that would be provided by that function.

Other task force activities will include the collection of data on existing products for transcode, transform, and file QC, and their ability to be driven by data from UML, XML, API, script, and other machine-to-machine communication mechanisms.

In addition to analyzing and publishing this data within a formal report, the task force will analyze the data in terms of current, planned, and unplanned standards activities and publish recommendations for future activities.

Source: SMPTE

EBU Puts Subtitles Online

The EBU has published a new specification for the distribution of subtitles: EBU-TT-D (Tech 3380). The XML-based EBU-TT-D format is a low-complexity way to combine subtitle text, styling, timing information, and positioning details, allowing implementers to provide users with a subtitle experience at least as good as that on current TVs, regardless of the platform on which they are watching the content.

EBU-TT-D was developed in less than a year, by taking into account expertise from users, distribution parties, hybrid TV organizations and CE manufacturers. The work built on the EBU XML Subtitles group’s knowledge gained when creating the EBU-TT (EBU Tech 3350) subtitle format for production interchange and archiving.

The specification is derived from the base W3C TTML specification. It strongly constrains the feature set of TTML to make it easier for decoder/renderer implementers to add subtitle overlays to video without the complexity that is present in TTML to support other scenarios.

Work is in progress in HbbTV and DVB to reference EBU-TT-D within the upcoming HbbTV 2.0 and DVB DASH standards. The EBU has also published the first carriage specification document for EBU-TT-D, EBU Tech 3381 v0.9, which defines how to carry EBU-TT-D in ISO BMFF, itself a necessary step for distributing EBU-TT-D via DASH. This builds on work done by MPEG, not yet published in international standard form.

Source: EBU