UHD HEVC Data Set

As part of the 4Ever project, we have been releasing an HEVC and DASH ultra-high-definition dataset, ranging from 8-bit 720p at 30 Hz up to 10-bit 2160p at 60 Hz. The dataset is released under a CC BY-NC-ND license.

The dataset web page is here, and more information on the dataset can also be found in this article.

Source: GPAC

Reconciling Mozilla’s Mission and W3C EME

With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well, so that our users can continue to access the content they want to enjoy. Read on for some background on how we got here and details of our implementation.

Digital Rights Management (DRM) is a tricky issue. On the one hand content owners argue that they should have the technical ability to control how users share content in order to enforce copyright restrictions. On the other hand, the current generation of DRM is often overly burdensome for users and restricts users from lawful and reasonable use cases such as buying content on one device and trying to consume it on another.

DRM and the Web are no strangers. Most desktop users have plugins such as Adobe Flash and Microsoft Silverlight installed. Both have contained DRM for many years, and websites traditionally use plugins to play restricted content.

In 2013 Google and Microsoft partnered with a number of content providers including Netflix to propose a “built-in” DRM extension for the Web: the W3C Encrypted Media Extensions (EME).

The W3C EME specification defines how to play back such content using the HTML5 video element, utilizing a Content Decryption Module (CDM) that implements DRM functionality directly in the Web stack. The specification only describes the JavaScript APIs used to access the CDM; the CDM itself is proprietary and is not specified in detail, an approach that has been widely criticized, including by Mozilla.
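
To make the split concrete, here is a minimal sketch of the EME JavaScript flow a page runs before playback. The key system name and license server URL are placeholders, not a real DRM deployment, and a real service would add error handling and its own license message format.

```ts
// Minimal EME setup sketch. KEY_SYSTEM and LICENSE_SERVER are placeholders.
const KEY_SYSTEM = 'com.example.drm';
const LICENSE_SERVER = 'https://license.example.com/';

async function setupEme(video: HTMLVideoElement): Promise<void> {
  // Ask the browser whether a CDM for this key system is available.
  const access = await navigator.requestMediaKeySystemAccess(KEY_SYSTEM, [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // The 'encrypted' event fires when the stream carries protection metadata.
  video.addEventListener('encrypted', async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // The CDM emits a license request; the page relays it to the license
    // server and hands the server's response back to the CDM.
    session.addEventListener('message', async (msg: MediaKeyMessageEvent) => {
      const reply = await fetch(LICENSE_SERVER, { method: 'POST', body: msg.message });
      await session.update(await reply.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```

Everything in this flow is ordinary, auditable page script; the proprietary piece sits behind these calls, inside the CDM.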

Mozilla believes in an open Web that centers around the user and puts them in control of their online experience. Many traditional DRM schemes are challenging because they go against this principle, removing control from the user and handing it to the content industry.

Instead of DRM schemes that limit how users can access content they have purchased across devices, we have long advocated more modern approaches to managing content distribution, such as watermarking. Watermarking works by tagging the media stream with the user’s identity. This discourages copyright infringement without interfering with lawful sharing of content, for example between different devices of the same user.

Mozilla would have preferred to see the content industry move away from locking content to a specific device (so called node-locking), and worked to provide alternatives.

Instead, this approach has now been enshrined in the W3C EME specification. With Google and Microsoft shipping W3C EME, and content providers moving their content from plugins to W3C EME, Firefox users are at risk of losing access to DRM-restricted content (e.g. Netflix, Amazon Video, Hulu), which can make up more than 30% of the downstream traffic in North America.

We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.

This makes it difficult for Mozilla to ignore the ongoing changes in the DRM landscape. Firefox should help users get access to the content they want to enjoy, even if Mozilla philosophically opposes the restrictions certain content owners attach to their content.

As a result we have decided to implement the W3C EME specification in our products, starting with Firefox for Desktop. This is a difficult and uncomfortable step for us given our vision of a completely open Web, but it also gives us the opportunity to actually shape the DRM space and be an advocate for our users and their rights in this debate. The existing W3C EME systems Google and Microsoft are shipping are not open source and lack transparency for the user, two traits which we believe are essential to creating a trustworthy Web.

The W3C EME specification uses a Content Decryption Module (CDM) to facilitate the playback of restricted content. Since the purpose of the CDM is to resist scrutiny and modification by the user, the CDM cannot, by design, be open source in the EME architecture. For security, privacy and transparency reasons this is deeply concerning.

From a security perspective, it is essential to Mozilla that all code in the browser is open so that users and security researchers can see and audit it. DRM systems explicitly rely on the source code not being available. In addition, DRM systems often have unfavorable privacy properties. To lock content to a device, DRM systems commonly use “fingerprinting” (collecting identifiable information about the user’s device), and with the poor transparency of proprietary native code it is often hard to tell how much of this fingerprinting information is leaked to the server.

We have designed an implementation of the W3C EME specification that satisfies the requirements of the content industry while attempting to give users as much control and transparency as possible. Due to the architecture of the W3C EME specification we are forced to utilize a proprietary closed-source CDM as well. Mozilla selected Adobe to supply this CDM for Firefox because Adobe has contracts with major content providers that will allow Firefox to play restricted content via the Adobe CDM.

Firefox does not load this module directly. Instead, we wrap it in an open-source sandbox. In our implementation, the CDM will have no access to the user’s hard drive or the network; the sandbox will provide the CDM only with a communication channel to Firefox for receiving encrypted data and for displaying the results.

Traditionally, to implement node-locking, DRM systems collect identifiable information about the user’s device and refuse to play back the content if the content or the CDM is moved to a different device.

By contrast, in Firefox the sandbox prohibits the CDM from fingerprinting the user’s device. Instead, the CDM asks the sandbox to supply a per-device unique identifier. This sandbox-generated unique identifier allows the CDM to bind content to a single device as the content industry insists on, but it does so without revealing additional information about the user or the user’s device. In addition, we vary this unique identifier per site (each site is presented a different device identifier) to make it more difficult to track users across sites with this identifier.
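
As an illustration of how such a per-site identifier can be kept both stable and unlinkable, here is a minimal sketch in which a browser-held random secret is hashed together with the site origin. The names and the derivation scheme are assumptions made for the example, not Mozilla’s actual sandbox code.

```ts
// Illustrative sketch only; not the actual Firefox/Adobe CDM sandbox code.

// A random secret generated once per device and held by the browser,
// never exposed directly to the CDM or to websites.
const deviceSecret = crypto.getRandomValues(new Uint8Array(32));

// Derive a stable identifier that differs per site origin, so the CDM can
// node-lock content without enabling cross-site tracking of the device.
async function perSiteDeviceId(origin: string): Promise<string> {
  const originBytes = new TextEncoder().encode(origin);
  const input = new Uint8Array(deviceSecret.length + originBytes.length);
  input.set(deviceSecret);
  input.set(originBytes, deviceSecret.length);
  const digest = await crypto.subtle.digest('SHA-256', input);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

// Two different sites receive two different, unrelated identifiers.
perSiteDeviceId('https://www.netflix.com').then(console.log);
perSiteDeviceId('https://www.example.org').then(console.log);
```

Because the hash is one-way, a site that records its identifier learns nothing about the identifiers the same device presents elsewhere.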

Adobe and the content industry can audit our sandbox (as it is open source) to assure themselves that we respect the restrictions they are imposing on us and on users, which include the handling of unique identifiers, limiting the output to streaming, and preventing users from saving the content. Mozilla will distribute the sandbox alongside Firefox, and we are working on deterministic builds that will allow developers to use a sandbox compiled on their own machine with the CDM as an alternative. As with plugins today, the CDM itself will be distributed by Adobe and will not be included in Firefox. The browser will download the CDM from Adobe and activate it based on user consent.

While we would much prefer a world and a Web without DRM, our users need it to access the content they want. Our integration with the Adobe CDM will let Firefox users access this content while trying to maximize transparency and user control within the limits of the restrictions imposed by the content industry.

There is also a silver lining to the W3C EME specification becoming ubiquitous. With direct support for DRM, we are eliminating a major use case for plugins on the Web, and in the near future this should allow us to retire plugins altogether. The Web has evolved into a comprehensive and performant technology platform and no longer depends on native code extensions through plugins.

While the W3C EME-based DRM world is likely to stay with us for a while, we believe that eventually better systems such as watermarking will prevail, because they offer more convenience for users, which is in the end also good for business. Mozilla will continue to advance technology and standards to help bring about this change.

By Andreas Gal, Mozilla

Netflix’s Many-Pronged Plan to Eliminate Video Playback Problems

For all of Netflix’s complaints about Internet service providers harming video performance, one of the company’s top technology experts is confident that the streaming company can solve most of its customers’ problems.

David Fullagar, Netflix’s director of content delivery architecture, spoke about the company’s plans Monday at the Content Delivery Summit in New York. He described the hardware Netflix uses in its Open Connect content delivery network (CDN), noting that the company has a technological advantage over traditional CDNs because it’s always delivering content to devices running Netflix’s own software rather than using a hodgepodge of products built by other companies.

The best-known parts of Open Connect are probably the storage boxes that Internet service providers can take into their own networks to bring content closer to consumers. ISPs can also peer with Netflix, exchanging traffic directly without hosting Netflix equipment. But these aren’t the only ways Netflix’s Open Connect technology can deliver good quality.

Netflix used to rely on third-party CDNs such as Akamai, but it has moved most of its traffic over to Open Connect in the past couple of years. Outside the US, 100 percent of Netflix traffic is delivered using Open Connect equipment. The percentage is in the “high 90s” in the US, with plans to hit 100 percent this summer. Even when the storage boxes aren’t inside an ISP’s network, they’re not far away; they may even sit in the same data centers, at the Internet exchange points where Netflix’s transit providers connect to ISPs.

Fullagar was asked by an audience member how Netflix works with ISPs who offer competing products. “From a quality point of view we don’t need to be that close to the end user for the sort of video we serve,” Fullagar said. “Having extremely low latency is nice” because it allows videos to start playing faster. However, “what we’re most interested in is a good, uncongested link, and that doesn’t necessarily have to be very low latency.”

Netflix’s peering with ISPs has been controversial because some Internet providers have demanded payment in exchange for accepting Netflix traffic. Netflix gave in to Verizon and Comcast, agreeing to pay both companies, but it has argued that the Federal Communications Commission should force the ISPs to provide free peering. Netflix has sent its traffic through congested links when its business disputes have gone unresolved, degrading quality despite the other steps Netflix takes to improve it. (Comcast and analyst Dan Rayburn have accused Netflix of purposely sending traffic through congested links.)

When asked how much Netflix can affect streaming performance given that it controls the server end of the connection as well as the user’s software, Fullagar said, “I think we’re on the tip of the iceberg of being able to do quite a lot there.” Netflix’s access to information about each customer’s device and Internet connection will fuel some as-yet-unrevealed strategies for improving quality, he said.

“We have extra information beyond just, hey this is someone wanting this file,” he said. “At connection time we know the sort of client they are, whether it’s a Wii or a PS4 or a streaming stick. We know the network they’re on, we know a bunch of historical information about latency and quality of service we’ve had to those networks. We know whether they’re connected on a device that’s wired or wireless. There’s a bunch of hints that we have there.”

The company has started some “experiments that are working out really well, and in the future we’ll talk more about that.”

Netflix itself has equipment at about 20 Internet exchange points in North America and Europe and has “tens if not hundreds of embedded caches in ISP networks,” Fullagar said.

The Network Team

Netflix’s Open Connect division has about 40 people, Fullagar said. About 20 are software engineers who either build software for Netflix servers or work on the company’s management software, which runs on Amazon’s cloud network and performs functions such as load balancing. Another 10 Open Connect employees are network architects, and another 10 are in operations.

Netflix stores video on two types of boxes that it designed, one that’s heavy on HDDs and another that’s all SSDs. Netflix built them in part because it couldn’t find the right mix of compute and storage capabilities in products from hardware vendors.

The HDD unit is a 4U chassis that holds 216 TB on 36 drives of 6 TB each. It has 64 GB of RAM, a 10 Gigabit NIC, and some SSD capacity for frequently accessed content.

The smaller, 1U SSD-only unit contains 14 drives of 1 TB each, 256 GB of RAM, and a 40 Gigabit NIC. About 75 percent of the cost of both the HDD and SSD boxes is taken up by storage. Both units use Intel CPUs.

Netflix refreshes hardware annually to improve performance. At its biggest locations, Netflix keeps multiple copies of its entire video library in case of failure. That’s more than a petabyte of video files for its North American catalog.

The company relies heavily on open source software, including FreeBSD and the Nginx Web server, as well as several management applications the company wrote itself.

Netflix distributes multiple terabits per second and accounts for an astonishing one-third of North American Internet traffic at peak times, i.e. the traditional TV “prime time” each evening. During off-peak hours in the middle of the night, Netflix fills disks with the videos its algorithms say people are most likely to watch the next day. This dramatically reduces network utilization during peak hours.

The management software Netflix runs on Amazon Web Services handles distribution of content, analyzes network performance, and connects users to the proper video sources. Netflix wrote its own adaptive bitrate algorithms to react to changes in throughput, and a CDN selection algorithm to adapt to changing network conditions such as overloaded links, overloaded servers, and errors, the company said.
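
As a rough illustration of what a throughput-driven bitrate choice looks like, here is a minimal sketch; the bitrate ladder and safety margin below are invented for the example and are not Netflix’s actual values or algorithm.

```ts
// Illustrative bitrate selection sketch; the rungs and margin are assumptions.
const bitrateLadderKbps = [235, 560, 1050, 1750, 3000, 4300, 5800];

function selectBitrate(measuredThroughputKbps: number, safetyMargin = 0.8): number {
  // Leave headroom so a dip in throughput does not immediately cause a rebuffer.
  const budget = measuredThroughputKbps * safetyMargin;
  // Pick the highest rung that fits within the budget, falling back to the lowest.
  const affordable = bitrateLadderKbps.filter((kbps) => kbps <= budget);
  return affordable.length > 0 ? affordable[affordable.length - 1] : bitrateLadderKbps[0];
}

// Example: roughly 4 Mbit/s of measured throughput selects the 3000 kbit/s rung.
console.log(selectBitrate(4000)); // 3000
```

A production client would also smooth its throughput estimate and weigh buffer occupancy before switching rungs, and the CDN selection logic mentioned above steers the session to a different server or path when a link or server is overloaded.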

When Netflix used multiple third-party CDNs, connections would fail over from one to another in case of error. Netflix still uses the same failover technology, but with “multiple hierarchies” within Open Connect instead of multiple CDNs, Fullagar said.

Although Netflix is moving all its data onto Open Connect hardware, that doesn’t automatically reduce the controversial role its transit providers Level 3 and Cogent have played in carrying traffic. Level 3 and Cogent have warred with ISPs over whether they should have to pay in order to send Netflix traffic onto their networks. As a result, interconnections between these transit providers and ISPs have gotten congested, reducing the quality of Netflix and other Web services that travel over the links.

The role of transit providers is only reduced when Netflix signs direct interconnection agreements with ISPs, as it has done with Verizon and Comcast, a Netflix spokesperson said. In the absence of such agreements, Netflix data passes through the company’s own CDN and then through a transit provider before reaching an ISP’s network.

The payment controversies don’t necessarily affect the working relationship between the technical teams of Netflix and ISPs, though. “Engineering people at companies, whether large or small, operate independently of commercial interests,” Fullagar said. “In the UK, one of our biggest competitors is one of our best networking partners.”

Source: Ars Technica

PrestoCentre Standards Register

The PrestoCentre Standards Register gathers information on standards for content and metadata used across all communities involved in audiovisual digital preservation.

Free Loudness Meter for Windows & Mac

Orban has introduced Version 2.7 of its free Loudness Meter for Windows (Vista/7/8) and Mac (OS X 10.6 or later). The software adds two important new features: support for up to 7.1-channel surround, and the ability to analyse files in several common formats offline to measure their ITU-R BS.1770-3 Integrated Loudness and Loudness Range. This combination of new features allows any organization to qualify files, whether stereo or surround, for compliance with the CALM Act and EBU R128.

Like its predecessor, Version 2.7 measures loudness using both the Jones & Torick (CBS Technology Center) and BS.1770-3 algorithms, displaying BS.1770-3 Short-Term, Momentary and Integrated loudness in addition to the Jones & Torick loudness.
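
For orientation, all of the BS.1770 readings build on the same K-weighted power summation. The sketch below computes the loudness of a single measurement block (400 ms for Momentary, 3 s for Short-Term) for a 5.1 layout, assuming the channel samples have already been K-weighted; the gating used for Integrated loudness is omitted.

```ts
// Core BS.1770 block-loudness computation for a 5.1 layout (LFE excluded).
// Assumes the samples have already been passed through the K-weighting filter.
const CHANNEL_WEIGHTS = [1.0, 1.0, 1.0, 1.41, 1.41]; // L, R, C, Ls, Rs

function blockLoudnessLkfs(kWeightedChannels: Float32Array[]): number {
  let weightedPower = 0;
  kWeightedChannels.forEach((samples, i) => {
    // Mean square (power) of this channel over the measurement block.
    let sumOfSquares = 0;
    for (let n = 0; n < samples.length; n++) sumOfSquares += samples[n] * samples[n];
    weightedPower += CHANNEL_WEIGHTS[i] * (sumOfSquares / samples.length);
  });
  // BS.1770: L_K = -0.691 + 10 * log10( sum_i G_i * z_i )
  return -0.691 + 10 * Math.log10(weightedPower);
}
```

Momentary and Short-Term values are this figure computed over sliding 400 ms and 3 s windows; Integrated loudness additionally applies the absolute and relative gates defined in BS.1770-3.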

The meter also provides PPM and VU meters, and a “Reconstructed Peak” meter with an 8X-upsampled sidechain to predict the peak levels that will appear after digital-to-analogue conversion. This reconstructed peak meter exceeds the requirements of the “true-peak” meter described in BS.1770-3, which specifies a 4X-upsampled sidechain.

Source: TV Technology