An interesting article about MPEG-DASH.
An end-to-end demonstration of EBU-TT-D subtitles being delivered via MPEG DASH and displayed by a client.
This article describes the most important pieces of the MPD, starting from the top level (Periods) and going to the bottom (Segments).
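The hierarchy the article walks through can be sketched as a minimal MPD built with Python's `xml.etree`. The element and attribute names follow the DASH MPD schema, but the values (durations, bitrates, template names) are purely illustrative:

```python
import xml.etree.ElementTree as ET

# Build a minimal MPD: MPD -> Period -> AdaptationSet -> Representation -> SegmentTemplate.
# Attribute values are illustrative, not taken from any real stream.
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "static",
    "mediaPresentationDuration": "PT30S",
})
period = ET.SubElement(mpd, "Period", {"id": "1", "duration": "PT30S"})
aset = ET.SubElement(period, "AdaptationSet", {"mimeType": "video/mp4"})
rep = ET.SubElement(aset, "Representation", {
    "id": "video-720p", "bandwidth": "2000000", "width": "1280", "height": "720",
})
ET.SubElement(rep, "SegmentTemplate", {
    "media": "seg-$Number$.m4s", "initialization": "init.mp4",
    "duration": "4000", "timescale": "1000", "startNumber": "1",
})

print(ET.tostring(mpd, encoding="unicode"))
```

A client walks exactly this tree: it picks a Period, chooses an AdaptationSet, selects a Representation by bandwidth, and expands the SegmentTemplate into segment URLs.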
Saturday, April 11, 2015
Labels: MPEG DASH
Tuesday, March 31, 2015
An interesting article by Nicolas Weil, presenting the past, present, and future of MPEG-DASH.
Friday, March 27, 2015
Thursday, February 26, 2015
An interesting article describing how to produce MPEG-DASH content with open source tools.
Tuesday, November 04, 2014
RGB Networks has announced that it is developing an open source version of its popular TransAct Transcoder. The new, cloud-enabled software transcoder is called 'Ripcode Transcoder', after Ripcode, the company that originally developed TransAct and was acquired by RGB Networks in 2010. It will provide RGB Networks' customers with greater control, integration and flexibility in their video delivery workflows.
In a pioneering move, and harnessing the industry momentum toward developing cloud-based solutions, RGB Networks is actively welcoming operators and vendors to be part of a community of contributors to the open source project.
RGB Networks’ CloudXtream solution for nDVR and dynamic Ad Insertion for Multiscreen (AIM) environments, launched in October 2013 and built on the industry standard open source cloud operating system OpenStack, has paved the way for this latest innovation. The company intends to build on this success with Ripcode, which will be an “open core” project, where the core technology from the TransAct Transcoder is being used to create the foundations of the open source project.
Suitable for a variety of applications, the Ripcode Transcoder will include the full feature set expected of an industrial-strength, professional transcoder, leaving customers open to select and integrate the packaging solution of their choice necessary to produce their desired Adaptive Bit Rate output formats.
The intended feature set of the open source Ripcode Transcoder will include:
- Both Linear (live) and Video on Demand (VOD) transcoding
- Full cluster management, load balancing, and failover
- Linear and VOD transcoding of MPEG-2, H.264, H.265, AAC, AC3, and other industry-leading video and audio codecs
- File-to-File watch folders
- Full reporting and logging of events
- Commercial-grade GUI
- RESTful APIs
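The RESTful API bullet above suggests how such a transcoder would be driven in practice. The sketch below submits a hypothetical transcoding job over HTTP; the endpoint path, host, and JSON fields are illustrative assumptions, not RGB Networks' actual interface:

```python
import json
import urllib.request

# Hypothetical job submission to a transcoder's RESTful API. The endpoint,
# host, and field names below are illustrative, not a documented interface.
job = {
    "input": "watch/incoming/master.mxf",
    "outputs": [
        {"video_codec": "h264", "bitrate_kbps": 4000, "height": 1080},
        {"video_codec": "h264", "bitrate_kbps": 1500, "height": 720},
    ],
    "audio_codec": "aac",
}
request = urllib.request.Request(
    "http://transcoder.example.com/api/v1/jobs",  # placeholder host
    data=json.dumps(job).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would actually submit the job; it is
# omitted here because the host above is a placeholder.
print(request.get_method(), request.full_url)
```

A job description like this, producing several bitrate renditions of one input, is the typical building block for the ABR packaging step the announcement leaves to third-party tools.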
Unlike most open source projects, an open source transcoder is difficult to release because professional codecs carry licensing restrictions. RGB Networks will release the Ripcode Transcoder with only the codecs that can legally be used in open source software. Additionally, to facilitate use of the transcoder in professional environments that require licensed, third-party codecs and pre/post-processing filters, the Ripcode Transcoder will include a plug-in framework that allows use of best-of-breed codecs and filters.
A number of vendors of such components and related technologies have expressed interest in participating in the Ripcode initiative including the following:
- Video Codecs (including H.264/AVC and H.265/HEVC): eBrisk Video, Intel Media Server Studio, Ittiam Systems, MainConcept (A DivX company), Multicoreware, NGCodec (providing HW acceleration for H.264/AVC and H.265/HEVC), Squid Systems, Vanguard Video
- Audio Codecs: Dolby Laboratories, Fraunhofer IIS
- Video Optimization: Beamr
The release of the first version of Ripcode Transcoder – 1.0 – with all the appropriate licensing is targeted for Q1 2015.
Source: RGB Networks
Friday, September 26, 2014
The Society of Motion Picture and Television Engineers published a standard that codifies the Archive eXchange Format. An IT-centric file container that can encapsulate any number and type of files in a fully self-contained and self-describing package, AXF supports interoperability among disparate content storage systems and ensures content’s long-term availability, no matter how storage or file system technology evolves.
Designed for operational storage, transport, and long-term preservation, AXF was formulated as a wrapper, or container, capable of holding virtually unlimited collections of files and metadata related to one another in any combination. Known as “AXF Objects,” such containers can package, in different ways, all the specific information different kinds of systems would need in order to restore the content data. The format relies on the Extensible Markup Language to define the information in a way that can be read and recovered by any modern computer system to which the data is downloaded.
AXF Objects are essentially immune to changes in technology and formats. Thus, they can be transferred from one archive system into remote storage—geographically remote or in the cloud, for instance—and later retrieved and read by different archive systems without the loss of any essence or metadata.
AXF Objects hold files of any kind and any size. By automatically segmenting, storing on multiple media, and reassembling AXF Objects when necessary, “spanned sets” enable storage of AXF Objects on more than one medium. Consequently, AXF Objects may be considerably larger than the individual media on which they are stored. This exceptional scalability helps to ensure that AXF Objects may be stored on any type or generation of media. The use of “collected sets” permits archive operators to make changes to AXF Objects or files within them, while preserving all earlier versions, even when write-once storage is used.
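The "spanned set" idea described above can be sketched in a few lines: an object larger than any single medium is split into spans that each fit on one medium, then reassembled on retrieval. The segment sizes below are illustrative; the actual spanning rules are defined by the SMPTE AXF standard:

```python
# Sketch of AXF-style spanning: split an oversized object across media,
# then reassemble it. Sizes and payloads are stand-ins, not real AXF data.
def span(data: bytes, medium_capacity: int) -> list[bytes]:
    """Split an object into spans that each fit on one medium."""
    return [data[i:i + medium_capacity] for i in range(0, len(data), medium_capacity)]

def reassemble(spans: list[bytes]) -> bytes:
    """Concatenate spans in order to recover the original object."""
    return b"".join(spans)

payload = bytes(range(256)) * 10             # 2560-byte stand-in for a media file
spans = span(payload, medium_capacity=1000)  # three "media" of 1000 bytes each
assert reassemble(spans) == payload
print(f"{len(spans)} spans of sizes {[len(s) for s in spans]}")
```

The point of the exercise is the invariant: the reassembled object is byte-identical to the original, regardless of how many media the spans were written to.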
The nature of AXF makes it possible for equipment manufacturers and content owners to move content from their current archive systems into the AXF domain in a strategic way that does not require content owners to abandon existing hardware unless or until they are ready to do so. In enabling the recovery of archived content in the absence of the systems that created the archives, AXF also offers a valuable means of protecting users’ investment in content. By maintaining preservation information such as fixity and provenance as specified by the OAIS model, AXF further enables effective long-term archiving of content. Resilience of data is ensured through use of redundant AXF structures and cryptographic hash algorithms.
AXF already has been employed around the world to help businesses store, protect, preserve, and transport many petabytes of file-based content, and the format is proving fundamental to many of the cloud-based storage, preservation, and IP-based transport services available today.
Source: TV Technology
Thursday, September 18, 2014
This article shows you how to set up GPAC for your on-demand and live content.
Wednesday, September 03, 2014
CableLabs has launched a 4K-focused microsite that provides access to Ultra HD/4K video clips to help platform developers, vendors, network operators and other video pros conduct tests with the emerging eye-popping format.
CableLabs said it’s offering the content under the Creative Commons License, meaning it can be used freely for non-commercial testing, demonstrations and the general advancement of technology.
As vendors use content from the site to test new technology, CableLabs helps the industry get one step closer to standardizing 4K content and delivering it to the home.
As of this writing, the site hosts seven videos, all shot with a Red Epic camera. The longest of the batch is a fireman-focused clip titled “Seconds That Count” that runs 5 minutes and 22 seconds.
On the site, CableLabs has integrated an upload form for anyone who wants to share their 4K videos for the purpose of testing. Interested participants are directed to provide a lower bit-rate HD file for preview purposes along with a 4K version. CableLabs is accepting pre-transcoded versions using MPEG HEVC or AVC, or an Apple ProRes version. CableLabs will take on the task of transcoding the content into two high quality versions available for download on the website. CableLabs notes that uploaded content might be used for demos at forums, shows, and conferences.
CableLabs is launching the site as the cable industry just begins to develop plans around 4K. Among major U.S. MSOs, Comcast plans to launch an Internet-based, on-demand Xfinity TV 4K app before the end of the year that will initially be available on new Samsung UHD TVs. The MSO is also working with partners on a new generation of boxes for its X1 platform that uses HEVC and can decode native 4K signals.
On the competitive front, DirecTV president and CEO Mike White said on the company's second quarter earnings call that the satellite TV giant will be ready to deliver 4K video on an on-demand basis this year, and be set up to follow with live 4K streaming next year or by early 2016.
Source: Multichannel News
Monday, August 25, 2014
- A profile of the features defined in MPEG DASH (referred to by MPEG as an "interoperability point") largely based on the "ISOBMFF live" profile defined by MPEG.
- Constraints on the sizes or complexity of various parameters defined in the MPEG DASH specification.
- A selection of video and audio codecs from the DVB toolbox that are technically appropriate for use with MPEG DASH, along with constraints and/or requirements on their use, without mandating any particular codec.
- Using MPEG Common Encryption for content delivered according to the present document.
- Use of TTML subtitles with MPEG DASH.
- Requirements on Player behaviour needed to give interoperable presentation of services.
- Guidelines for content providers on how to use MPEG DASH.
Wednesday, July 16, 2014
HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge.
In this paper, we push the use of DASH to its limits with regard to latency, down to fragments of only one frame, and evaluate the overhead introduced by that approach and by the combination of: low-latency video coding techniques, in particular Gradual Decoding Refresh; low-latency HTTP streaming, in particular using chunked transfer encoding; and the associated ISOBMFF packaging.
We experiment with DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, with an encoding and packaging overhead on the order of 13% for HD sequences, and thus validate the feasibility of very low latency DASH live streaming in local networks.
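The HTTP chunked transfer encoding the paper relies on is what lets a server flush each one-frame fragment to the client as soon as it is encoded, instead of waiting for a full segment. A minimal sketch of that framing, with stand-in frame payloads:

```python
def chunk(data: bytes) -> bytes:
    """Frame one piece of data as an HTTP/1.1 chunked-transfer chunk:
    hex length, CRLF, payload, CRLF."""
    return f"{len(data):x}\r\n".encode() + data + b"\r\n"

# One fragment per frame: each encoded frame is flushed to the client as its
# own chunk as soon as it exists. Payloads here are stand-ins, not real video.
frames = [b"frame-0-bytes", b"frame-1-bytes", b"frame-2-bytes"]
stream = b"".join(chunk(f) for f in frames) + b"0\r\n\r\n"  # zero-length chunk ends the body
print(stream)
```

At 25 fps a one-frame fragment represents 40 ms of media, which is how the fragment granularity, rather than segment duration, becomes the floor for the delivery component of the 240 ms end-to-end latency reported above.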
By Nassima Bouzakaria, Cyril Concolato and Jean Le Feuvre
Wednesday, July 16, 2014
A significant step in the road to Ultra High Definition TV services has been taken with the approval of the DVB-UHDTV Phase 1 specification at the 77th meeting of the DVB Steering Board. The specification includes an HEVC Profile for DVB broadcasting services that draws, from the options available with HEVC, those that will match the requirements for delivery of UHDTV Phase 1 and other formats. The specification updates ETSI TS 101 154 (Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream).
The new DVB-UHDTV Phase 1 will allow images with four times the static resolution of the 1080p HDTV format, at frame rates of up to 60 images per second. Contrast will be drastically improved by increasing the bit depth to 10 bits. From the wide range of options defined in the HEVC Main 10 profile, Level 5.1 is specified for UHD content for resolutions up to 2160p. For HD content, HEVC Main profile Level 4.1 is specified for supporting resolutions up to 1080p.
The DVB-UHDTV Phase 1 specification takes into account the possibility that UHDTV Phase 2 may use higher frame rates in a compatible way, which will add further to the image quality of UHDTV Phase 1.
“HEVC is the most recently-developed compression technology and, among other uses, it is the key that will unlock UHDTV broadcasting,” said DVB Steering Board Chairman, Phil Laven. “This new DVB–UHDTV Phase 1 specification not only opens the door to the age of UHDTV delivery but also potentially sets the stage for Phase 2, the next level of UHDTV quality, which will be considered in upcoming DVB work,” he continued.
Also approved was the specification for Companion Screens and Streams, Part 2: Content Identification and Media Synchronization. Companion Devices (tablets, smart phones) enable new user experiences for broadcast service consumption. Many of these require synchronisation between the Broadcast Service at the TV Device and the Timed Content presented at the Companion Device. This specification focuses on the identification and synchronisation of a Broadcast Service on a TV Device (Connected TV or STB and screen) and Timed Content on a Companion Screen Application running on a Companion Device. Part 2 outlines the enabling factors for the identification of, and synchronisation with, broadcast content, timed content and trigger events on TV devices (for example a Connected TV or STB) and related content presented by an application running on a personal device.
Another specification to gain approval from the Steering Board was the MPEG-DASH Profile for Transport of ISO BMFF Based DVB Services over IP Based Networks. This specification defines the delivery of TV content via HTTP adaptive streaming. MPEG-DASH covers a wide range of use cases and options. Transmission of audiovisual content is based on the ISOBMFF file specification. Video and audio codecs from the DVB toolbox that are technically appropriate with MPEG-DASH have been selected. Conditional Access is based on MPEG Common Encryption and delivery of subtitles will be XML based. The DVB Profile of MPEG-DASH reduces the number of options and also the complexity for implementers. The new specification will facilitate implementation and usage of MPEG-DASH in a DVB environment.
The three new specifications will now be sent to ICT standards body ETSI for formal standardisation and the relevant BlueBooks will be published shortly.
Source: Advanced Television
The Consumer Electronics Association has announced updated core characteristics for ultra high-definition TVs, monitors and projectors for the home. As devised and approved by CEA’s Video Division Board, these characteristics build on the first-generation UHD characteristics released by CEA in October 2012.
These expanded display characteristics (CEA’s Ultra High-Definition Display Characteristics V2) – voluntary guidelines that take effect in September 2014 – are designed to address various attributes of picture quality and help move toward interoperability, while providing clarity for consumers and retailers alike.
Under CEA’s expanded characteristics, a TV, monitor or projector may be referred to as Ultra High-Definition if it meets the following minimum performance attributes:
- Display Resolution – Has at least eight million active pixels, with at least 3,840 horizontally and at least 2,160 vertically.
- Aspect Ratio – Has a width to height ratio of the display’s native resolution of 16:9 or wider.
- Upconversion – Is capable of upscaling HD video and displaying it at ultra high-definition resolution.
- Digital Input – Has one or more HDMI inputs supporting at least 3840x2160 native content resolution at 24p, 30p and 60p frames per second. At least one of the 3840x2160 HDMI inputs shall support HDCP revision 2.2 or equivalent content protection.
- Colorimetry – Processes 2160p video inputs encoded according to ITU-R BT.709 color space and may support wider colorimetry standards.
- Bit Depth – Has a minimum color bit depth of eight bits.
Because one of the first ways consumers will have access to native 4K content is via Internet streaming on connected ultra HDTVs, CEA has defined new characteristics for connected UHDTV displays. Under these new characteristics, which complement the updated core UHD attributes, a display system may be referred to as a connected ultra HD device if it meets the following minimum performance attributes:
- Ultra High-Definition Capability – Meets all of the requirements of the CEA Ultra High-Definition Display Characteristics V2 (listed above).
- Video Codec – Decodes IP-delivered video of 3840x2160 resolution that has been compressed using HEVC* and may decode video from other standard encoders.
- Audio Codec – Receives and reproduces, and/or outputs multichannel audio.
- IP and Networking – Receives IP-delivered Ultra HD video through a Wi-Fi, Ethernet or other appropriate connection.
- Application Services – Supports IP-delivered Ultra HD video through services or applications on the platform of the manufacturer’s choosing.
CEA’s expanded display characteristics also include guidance on nomenclature designed to help provide manufacturers with marketing flexibility while still providing clarity for consumers. Specifically, the guidance states, “The terms ‘Ultra High-Definition,’ ‘Ultra HD’ or ‘UHD’ may be used in conjunction with other modifiers,” for example “Ultra High-Definition TV 4K”.
*High Efficiency Video Coding Main Profile, Level 5, Main tier, as defined in ISO/IEC 23008-2 MPEG-H Part 2 or ITU-T H.265, and may support higher profiles, levels or tiers.
Source: TV Technology
Tuesday, July 01, 2014
A nice introduction to HEVC by Fabio Sonnati.
Saturday, June 21, 2014
This week Netflix is pleased to begin streaming all 62 episodes of Breaking Bad in UltraHD 4K. Breaking Bad in 4K comes from Sony Pictures Entertainment’s beautiful remastering of Breaking Bad from the original film negatives. This 4K experience is available on select 4K Smart TVs.
As pleased as I am to announce Breaking Bad in 4K, this blog post is also intended to highlight the collaboration between Sony Pictures Entertainment and Netflix to modernize the digital supply chain that transports digital media from content studios, like Sony Pictures, to streaming retailers, like Netflix.
Netflix and Sony agreed on an early subset of IMF for the transfer of the video and audio files for Breaking Bad. IMF stands for Interoperable Master Format, an emerging SMPTE specification governing file formats and metadata for digital media archiving and B2B exchange.
IMF specifies fundamental building blocks like immutable objects, checksums, globally unique identifiers, and manifests (CPLs, Composition Playlists). These building blocks hold promise for vastly improving the efficiency, accuracy, and scale of the global digital supply chain.
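Those building blocks can be sketched in a few lines: each immutable asset gets a globally unique identifier and a checksum for fixity, and a manifest lists the assets that make up a composition. The field names below are illustrative, not the actual CPL schema:

```python
import hashlib
import uuid

# Sketch of IMF-style building blocks. Field names are illustrative
# assumptions, not the real Composition Playlist schema.
def describe_asset(payload: bytes) -> dict:
    return {
        "id": str(uuid.uuid4()),                       # globally unique identifier
        "sha256": hashlib.sha256(payload).hexdigest(), # checksum for fixity
        "size": len(payload),
    }

video = describe_asset(b"stand-in video essence")
audio = describe_asset(b"stand-in audio essence")
manifest = {"composition": "example-episode", "assets": [video, audio]}
print(manifest)
```

The value for a supply chain is that a receiver can recompute each checksum against the delivered bytes and know, asset by asset, that nothing was corrupted or substituted in transit.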
At Netflix, we are excited about IMF and we are committing significant R&D efforts towards adopting IMF for content ingestion. Netflix has an early subset of IMF in production today and we will support most of the current IMF App 2 draft by the end of 2014.
We are also developing a roadmap for IMF App 2 Extended and Extended+. We are pleased that Sony Pictures is an early innovator in this space and we are looking forward to the same collaboration with additional content studio partners.
Breaking Bad is joining House of Cards season 2 and the Moving Art documentaries in our global 4K catalog. We are also adding a few more 4K movies for our USA members. We have added Smurfs 2, Ghostbusters, and Ghostbusters 2 in the United States. All of these movies were packaged in IMF by Sony Pictures.
By Kevin McEntee, VP Digital Supply Chain, Netflix
We're excited to announce that Netflix streaming in HTML5 video is now available in Safari on OS X Yosemite! We've been working closely with Apple to implement the Premium Video Extensions in Safari, which allow playback of premium video content in the browser without the use of plugins.
If you're in Apple's Mac Developer Program, or soon the OS X Beta Program, you can install the beta version of OS X Yosemite. With the OS X Yosemite Beta on a modern Mac, you can visit Netflix.com today in Safari and watch your favorite movies and TV shows using HTML5 video without the need to install any plugins.
We're especially excited that Apple implemented the Media Source Extensions (MSE) using their highly optimized video pipeline on OS X. This lets you watch Netflix in buttery smooth 1080p without hogging your CPU or draining your battery. In fact, this allows you to get up to 2 hours longer battery life on a MacBook Air streaming Netflix in 1080p - that’s enough time for one more movie!
Apple also implemented the Encrypted Media Extensions (EME) which provides the content protection needed for premium video services like Netflix.
The Premium Video Extensions do away with the need for proprietary plugin technologies for streaming video. In addition to Safari on OS X Yosemite, plugin-free playback is also available in IE 11 on Windows 8.1, and we look forward to a time when these APIs are available on all browsers.
Congratulations to the Apple team for advancing premium video on the web with Yosemite! We’re looking forward to the Yosemite launch this Fall.
By Anthony Park and Mark Watson, Netflix Tech Blog
As part of the 4Ever project, we have released an HEVC and DASH ultra high definition dataset, ranging from 8-bit 720p at 30 Hz up to 10-bit 2160p at 60 Hz. The dataset is released under CC BY-NC-ND.
The data set web page is here, and more information on the dataset can also be found in this article.
With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy. Read on for some background on how we got here, and details of our implementation.
Digital Rights Management (DRM) is a tricky issue. On the one hand content owners argue that they should have the technical ability to control how users share content in order to enforce copyright restrictions. On the other hand, the current generation of DRM is often overly burdensome for users and restricts users from lawful and reasonable use cases such as buying content on one device and trying to consume it on another.
DRM and the Web are no strangers. Most desktop users have plugins such as Adobe Flash and Microsoft Silverlight installed. Both have contained DRM for many years, and websites traditionally use plugins to play restricted content.
In 2013 Google and Microsoft partnered with a number of content providers including Netflix to propose a “built-in” DRM extension for the Web: the W3C Encrypted Media Extensions (EME).
Mozilla believes in an open Web that centers around the user and puts them in control of their online experience. Many traditional DRM schemes are challenging because they go against this principle and remove control from the user and yield it to the content industry.
Instead of DRM schemes that limit how users can access content they purchased across devices we have long advocated for more modern approaches to managing content distribution such as watermarking. Watermarking works by tagging the media stream with the user’s identity. This discourages copyright infringement without interfering with lawful sharing of content, for example between different devices of the same user.
Mozilla would have preferred to see the content industry move away from locking content to a specific device (so called node-locking), and worked to provide alternatives.
Instead, this approach has now been enshrined in the W3C EME specification. With Google and Microsoft shipping W3C EME and content providers moving their content from plugins to W3C EME, Firefox users are at risk of not being able to access DRM-restricted content (e.g. Netflix, Amazon Video, Hulu), which can make up more than 30% of the downstream traffic in North America.
We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.
This makes it difficult for Mozilla to ignore the ongoing changes in the DRM landscape. Firefox should help users get access to the content they want to enjoy, even if Mozilla philosophically opposes the restrictions certain content owners attach to their content.
As a result we have decided to implement the W3C EME specification in our products, starting with Firefox for Desktop. This is a difficult and uncomfortable step for us given our vision of a completely open Web, but it also gives us the opportunity to actually shape the DRM space and be an advocate for our users and their rights in this debate. The existing W3C EME systems Google and Microsoft are shipping are not open source and lack transparency for the user, two traits which we believe are essential to creating a trustworthy Web.
The W3C EME specification uses a Content Decryption Module (CDM) to facilitate the playback of restricted content. Since the purpose of the CDM is to defy scrutiny and modification by the user, the CDM cannot be open source by design in the EME architecture. For security, privacy and transparency reasons this is deeply concerning.
From the security perspective, for Mozilla it is essential that all code in the browser is open so that users and security researchers can see and audit the code. DRM systems explicitly rely on the source code not being available. In addition, DRM systems also often have unfavorable privacy properties. To lock content to the device DRM systems commonly use “fingerprinting” (collecting identifiable information about the user’s device) and with the poor transparency of proprietary native code it’s often hard to tell how much of this fingerprinting information is leaked to the server.
We have designed an implementation of the W3C EME specification that satisfies the requirements of the content industry while attempting to give users as much control and transparency as possible. Due to the architecture of the W3C EME specification we are forced to utilize a proprietary closed-source CDM as well. Mozilla selected Adobe to supply this CDM for Firefox because Adobe has contracts with major content providers that will allow Firefox to play restricted content via the Adobe CDM.
Firefox does not load this module directly. Instead, we wrap it into an open-source sandbox. In our implementation, the CDM will have no access to the user's hard drive or the network. Instead, the sandbox will provide the CDM only with a communication mechanism to Firefox for receiving encrypted data and for displaying the results.
Traditionally, to implement node-locking DRM systems collect identifiable information about the user’s device and will refuse to play back the content if the content or the CDM are moved to a different device.
By contrast, in Firefox the sandbox prohibits the CDM from fingerprinting the user’s device. Instead, the CDM asks the sandbox to supply a per-device unique identifier. This sandbox-generated unique identifier allows the CDM to bind content to a single device as the content industry insists on, but it does so without revealing additional information about the user or the user’s device. In addition, we vary this unique identifier per site (each site is presented a different device identifier) to make it more difficult to track users across sites with this identifier.
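The per-site identifier scheme described above can be illustrated with a short sketch: derive a different stable identifier for each site from a device secret, so the CDM can bind content to a device without one identifier linking a user across sites. The HMAC-SHA256 derivation below is an illustrative choice, not Firefox's actual algorithm:

```python
import hashlib
import hmac

# Illustrative per-site device identifier, as described in Mozilla's design.
# The derivation (HMAC-SHA256 of the site origin under a device secret) is an
# assumption for the sketch, not Firefox's real implementation.
def site_identifier(device_secret: bytes, site_origin: str) -> str:
    return hmac.new(device_secret, site_origin.encode(), hashlib.sha256).hexdigest()

secret = b"per-device secret kept inside the sandbox"
id_site_a = site_identifier(secret, "https://video-a.example")
id_site_b = site_identifier(secret, "https://video-b.example")
assert id_site_a != id_site_b                                    # no cross-site linking
assert id_site_a == site_identifier(secret, "https://video-a.example")  # stable per site
print(id_site_a[:16])
```

Each site sees an identifier that is stable enough for node-locking but useless for correlating the same user on another site, which is exactly the privacy property the sandbox is meant to enforce.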
Adobe and the content industry can audit our sandbox (as it is open source) to assure themselves that we respect the restrictions they are imposing on us and users, which includes the handling of unique identifiers, limiting the output to streaming and preventing users from saving the content. Mozilla will distribute the sandbox alongside Firefox, and we are working on deterministic builds that will allow developers to use a sandbox compiled on their own machine with the CDM as an alternative. As with plugins today, the CDM itself will be distributed by Adobe and will not be included in Firefox. The browser will download the CDM from Adobe and activate it based on user consent.
While we would much prefer a world and a Web without DRM, our users need it to access the content they want. Our integration with the Adobe CDM will let Firefox users access this content while trying to maximize transparency and user control within the limits of the restrictions imposed by the content industry.
There is also a silver lining to the W3C EME specification becoming ubiquitous. With direct support for DRM we are eliminating a major use case of plugins on the Web, and in the near future this should allow us to retire plugins altogether. The Web has evolved to a comprehensive and performant technology platform and no longer depends on native code extensions through plugins.
While the W3C EME-based DRM world is likely to stay with us for a while, we believe that eventually better systems such as watermarking will prevail, because they offer more convenience for the user, which is good for the user, but in the end also good for business. Mozilla will continue to advance technology and standards to help bring about this change.
By Andreas Gal, Mozilla
Friday, May 16, 2014