How Modern Video Players Work

An interesting post by Streamroot.

Next-Generation Video Encoding Techniques for 360 Video and VR

An interesting way to preprocess equirectangular 360-degree videos in order to reduce their file size.

In an Admission that 4K Alone is Not Enough, UHD Alliance Unveils “Ultra HD Premium”

The UHD Alliance, a group made up of leading producers, distributors, and device makers, has defined the Ultra HD Premium brand, which requires certain minimum specifications to be met for content production, streaming, and playback. The Premium logo is reserved for products and services that comply with performance metrics for resolution, High Dynamic Range (HDR), peak luminance, black levels, and wide color gamut, among others.

The specifications also make recommendations for immersive audio and other features. These advances in resolution, contrast, brightness, color, and audio will enable certified displays and content to deliver greater image quality to in-home viewers than increased resolution alone.

As the industry starts to set quality standards, camera manufacturers may be pushed towards offering higher-quality 10-bit 4K recording. Premium designation requires 10-bit capture, distribution and playback, meaning cameras must be able to record 10-bit footage to meet the standard.

Currently, many 4K cameras can capture only 8-bit files, which limits dynamic range and flexibility at the color grading stage.
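
To put the difference in concrete terms: moving from 8-bit to 10-bit quadruples the number of code values per channel, which is where the extra grading latitude comes from. A quick back-of-the-envelope calculation:

```python
# Compare 8-bit and 10-bit video signals: code values per channel
# and total representable RGB combinations.
for bits in (8, 10):
    levels = 2 ** bits        # code values per channel
    colors = levels ** 3      # combinations across R, G, B
    print(f"{bits}-bit: {levels} levels/channel, {colors:,} RGB combinations")

# 8-bit:  256 levels/channel,    16,777,216 RGB combinations
# 10-bit: 1024 levels/channel, 1,073,741,824 RGB combinations
```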

The UHD Alliance supports various display technologies and has consequently defined combinations of parameters to ensure a premium experience across a wide range of devices. To receive the UHD Alliance Premium logo, a device must meet or exceed the following specifications (a simple compliance-check sketch follows the list):

  • Image Resolution: 3840x2160
  • Color Bit Depth: 10-bit signal
  • Color Palette (Wide Color Gamut):
    • Signal Input: BT.2020 color representation
    • Display Reproduction: More than 90% of P3 colors
  • High Dynamic Range:
    • SMPTE ST2084 EOTF
    • A combination of peak brightness and black level, either:
      • More than 1000 nits peak brightness and less than 0.05 nits black level
      • More than 540 nits peak brightness and less than 0.0005 nits black level
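
As an illustration of how these criteria combine, here is a minimal, hypothetical Python sketch (not an official UHD Alliance tool; the function and parameter names are made up) that checks a display's measured characteristics against the thresholds listed above, including the two allowed brightness/black-level combinations:

```python
# Hypothetical check of display measurements against the Ultra HD Premium
# criteria summarized above; function and parameter names are illustrative.

def meets_uhd_premium(width, height, bit_depth, supports_bt2020,
                      p3_coverage, supports_st2084_eotf,
                      peak_nits, black_nits):
    resolution_ok = width >= 3840 and height >= 2160
    color_ok = (bit_depth >= 10 and supports_bt2020
                and p3_coverage > 0.90)
    # Two allowed combinations, accommodating different display technologies.
    hdr_ok = supports_st2084_eotf and (
        (peak_nits > 1000 and black_nits < 0.05)
        or (peak_nits > 540 and black_nits < 0.0005)
    )
    return resolution_ok and color_ok and hdr_ok

# Example: 1100 nits peak, 0.04 nits black, 93% P3 coverage -> True
print(meets_uhd_premium(3840, 2160, 10, True, 0.93, True, 1100, 0.04))
```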

Distribution

Any distribution channel delivering UHD Alliance content must support:
  • Image Resolution: 3840x2160
  • Color Bit Depth: Minimum 10-bit signal
  • Color: BT.2020 color representation
  • High Dynamic Range: SMPTE ST2084 EOTF

Content Master

The UHD Alliance Content Master must meet the following requirements:
  • Image Resolution: 3840x2160
  • Color Bit Depth: Minimum 10-bit signal
  • Color: BT.2020 color representation
  • High Dynamic Range: SMPTE ST2084 EOTF

The UHD Alliance recommends the following mastering display specifications:
  • Display Reproduction: Minimum 100% of P3 colors
  • Peak Brightness: More than 1000 nits
  • Black Level: Less than 0.03 nits

The UHD Alliance technical specifications prioritize image quality and recommend support for next-generation audio.

DASHInterpret

DASHInterpret converts video-on-demand Dynamic Adaptive Streaming over HTTP (MPEG-DASH) content to Apple HTTP Live Streaming (HLS). It is useful when you already have MPEG-DASH content and need to serve it to Apple users who don’t want to use a third-party player.

There are several software packages out there (e.g. Wowza or Evostream) that take in live streams in the form of RTSP, RTMP, MPEG-TS, FLV, etc., and produce many other formats from that single input. They come in the form of a server (service) listening on ports or connecting to other remote streaming servers. Their purpose is simply to bridge the gap between different streaming formats by serving multiple output formats from a single input. These servers are mainly used in three use cases, though the majority of usage is in live streaming.

Use Cases:

  • Takes a live stream (e.g. RTMP, RTSP, MPEG-TS…) as input and outputs another live stream, in either the same format or a different one
  • Takes a live stream as input and outputs a recorded or video-on-demand stream (e.g. MP4, MPEG-DASH, HLS…) stored as files on disk
  • Takes a video-on-demand stream as input and outputs either a live stream or another video-on-demand stream.

DASHInterpret is not one of these; although it solves a similar problem, it leans more toward the video-on-demand space. It is not a server running in the background, listening on ports or making connections. It is simply a utility that reads a video-on-demand MPEG-DASH presentation and converts it to another video-on-demand format. The software is portable: you can move it between computers and use it without any setup, configuration, or installation. The 1.0.0 release currently supports conversion of DASH-IF-compliant Dynamic Adaptive Streaming over HTTP (MPEG-DASH) content to HTTP Live Streaming (HLS).
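
To make the idea concrete, here is a minimal, hypothetical Python sketch (not DASHInterpret’s actual code) of the manifest-level mapping such a conversion performs: it reads the Representations from a DASH MPD and writes the corresponding variant entries of an HLS master playlist. A real conversion must also repackage the media segments and generate the per-variant media playlists, which is omitted here.

```python
# Illustrative sketch only: map DASH Representations to variant entries
# in an HLS master playlist. Segment repackaging is not shown.
import xml.etree.ElementTree as ET

MPD_NS = "{urn:mpeg:dash:schema:mpd:2011}"

def mpd_to_hls_master(mpd_path, out_path):
    root = ET.parse(mpd_path).getroot()
    lines = ["#EXTM3U"]
    for rep in root.iter(MPD_NS + "Representation"):
        attrs = f"BANDWIDTH={rep.get('bandwidth')}"
        width, height = rep.get("width"), rep.get("height")
        if width and height:
            attrs += f",RESOLUTION={width}x{height}"
        lines.append(f"#EXT-X-STREAM-INF:{attrs}")
        # Assume a media playlist per Representation is generated elsewhere.
        lines.append(f"{rep.get('id')}.m3u8")
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# mpd_to_hls_master("movie.mpd", "master.m3u8")
```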

So what is the advantage of using DASHInterpret? If you are only converting VODs, you will be able to do it far more easily and quickly than with any of these live streaming servers. Since DASHInterpret does not get its input from the network, it does not pay the performance penalty of going through the network stack, nor is it limited to the roughly 1000 Mbit/s of a typical gigabit network; instead, it operates at the speed of your local disk I/O, which is on the order of 6000 Mbit/s even on average computers. You also don’t need to spend a day or a week learning how to use it; 30 seconds is enough. You can take it anywhere and use it without any configuration or setup; you can even store it on a USB drive and run it directly from there.
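
As a rough illustration of that point, the following back-of-the-envelope calculation compares how long it takes just to read a 50 GB VOD package (an arbitrary illustrative size) at the two throughput figures mentioned above; real numbers depend heavily on hardware and protocol overhead.

```python
# Back-of-the-envelope: time to read a 50 GB VOD package at
# ~1000 Mbit/s (gigabit network) vs ~6000 Mbit/s (local disk link).
package_gb = 50
package_bits = package_gb * 8 * 1000**3   # decimal GB to bits

for label, mbit_per_s in (("gigabit network", 1000), ("local disk I/O", 6000)):
    seconds = package_bits / (mbit_per_s * 1e6)
    print(f"{label}: ~{seconds / 60:.1f} minutes")

# gigabit network: ~6.7 minutes
# local disk I/O: ~1.1 minutes
```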

Why convert only MPEG-DASH streams? Because MPEG-DASH is the current and next-generation technology for adaptive bitrate streaming over HTTP. Streaming businesses will transition to it now or in the near future, and part of that transition involves users who cannot or will not transition quickly. That’s where DASHInterpret comes in: the content provider uses it to translate its MPEG-DASH content into a format the customer already supports, such as HLS for Apple users. As of this writing, HLS is still the dominant HTTP streaming technology in operation, especially for the millions of Apple users.

Per-Title Encode Optimization

http://techblog.netflix.com/2015/12/per-title-encode-optimization.html

The Applicability of the Interoperable Master Format (IMF) to Broadcast Workflows

The broadcast industry is faced with new challenges related to advanced file-based workflows. The adoption of an increasing number of distribution channels and localized content versions points to several editorial versions and output versions being required. Furthermore, broadcasters are starting to produce UHD content, which raises even more questions in terms of file handling, workflow efficiency and compression technologies.

The Interoperable Master Format (IMF) has capabilities that might make it a suitable candidate to solve many of today’s challenges in the broadcasting industry. However, it doesn’t yet appear to be sufficient for broadcast applications.

This article suggests a way of adapting IMF to broadcasters’ requirements by giving an insight into possible extensions to the IMF structure. It will be of interest to broadcasters, distributors, and producers who need an efficient master format capable of accommodating today’s workflow challenges.

The achievements presented in this paper are part of a collaborative master’s thesis carried out at the EBU and the RheinMain University of Applied Sciences, Germany.

By Melanie Matuschak

Is Virtual Reality Streaming Ready for Primetime?

Virtual Reality is poised to revolutionize many industries, including live video streaming. Join us as we cover the technology and the possibilities it opens up for new markets.

By Mark Alamares

HbbTV 2.0: Could This Standard Become the Future of Television?

The next version of HbbTV is bringing a much more powerful toolset with it, and has the potential to change the current worldwide television landscape.

By Nicolas Weil

Standards-Based, Premium Content for the Modern Web

These days, a person is just as likely to be watching a movie on their laptop, tablet, or mobile phone as they are to be sitting in front of a television. Cable operators are eager to provide premium video content to these devices, but there are high costs involved in supporting the wide array of devices owned by their customers.

A multitude of technological obstacles stand in the way of delivering a secure, high-quality, reliable viewing experience to the small screen. This four-part blog series describes an open, standards-based approach to providing premium, adaptive-bitrate audio/video content in HTML and how open source software can assist in the evaluation and deployment of these technologies.

By Greg Rutz, Lead Architect, CableLabs

MPEG-DASH

An interesting article about MPEG-DASH.

Subtitles in an IP World

An end-to-end demonstration of EBU-TT-D subtitles being delivered via MPEG DASH and displayed by a client.

The Structure of an MPEG-DASH MPD

This article describes the most important pieces of the MPD, starting from the top level (Periods) and going to the bottom (Segments).
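
As a quick illustration of that hierarchy, here is a hypothetical Python sketch that builds a skeleton MPD with a single Period containing one video AdaptationSet, two Representations, and a SegmentTemplate. Element and attribute names follow the MPEG-DASH schema, but all the values are placeholder assumptions.

```python
# Skeleton MPD illustrating the Period > AdaptationSet > Representation >
# Segment hierarchy; durations, bitrates, and names are placeholders.
import xml.etree.ElementTree as ET

mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "static",
    "mediaPresentationDuration": "PT10M",
    "minBufferTime": "PT2S",
})
period = ET.SubElement(mpd, "Period", {"id": "1"})
aset = ET.SubElement(period, "AdaptationSet",
                     {"contentType": "video", "mimeType": "video/mp4"})
ET.SubElement(aset, "SegmentTemplate", {
    "timescale": "90000", "duration": "360000",   # 4-second segments
    "initialization": "init_$RepresentationID$.mp4",
    "media": "seg_$RepresentationID$_$Number$.m4s",
})
for rep_id, bw, w, h in (("720p", "2400000", "1280", "720"),
                         ("1080p", "4800000", "1920", "1080")):
    ET.SubElement(aset, "Representation",
                  {"id": rep_id, "bandwidth": bw, "width": w, "height": h,
                   "codecs": "avc1.64001f"})

print(ET.tostring(mpd, encoding="unicode"))
```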

Interoperable Master Format (IMF) - Application 2 and Beyond


The State of MPEG-DASH 2015

An interesting article by Nicolas Weil, presenting the past, present, and future of MPEG-DASH.

How to Encode Multi-bitrate Videos in MPEG-DASH for MSE Based Media Players

An interesting article by Streamroot in 2 parts:
Part 1
Part 2

MPEG-DASH Content Generation Using MP4Box and x264

An interesting article describing how to produce MPEG-DASH content with open source tools.
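
For a flavor of what such a pipeline looks like, here is a minimal, hypothetical Python sketch that drives x264 and MP4Box through subprocess calls. The flags shown are typical of published MPEG-DASH tutorials, but treat the exact options (input file, bitrates, key-frame interval, DASH profile) as assumptions to be checked against the tools’ documentation for your own content.

```python
# Hypothetical two-step DASH packaging sketch: encode renditions with x264,
# then segment them and generate the MPD with GPAC's MP4Box.
# Paths, bitrates, and several option values are illustrative assumptions.
import subprocess

SOURCE = "input.y4m"   # placeholder raw/YUV4MPEG source, 24 fps assumed
SEG_MS = 4000          # 4-second segments

def encode(bitrate_kbps, out_264):
    # Closed GOPs aligned with the segment duration (24 fps * 4 s = 96 frames).
    subprocess.run([
        "x264", "--preset", "slow", "--bitrate", str(bitrate_kbps),
        "--keyint", "96", "--min-keyint", "96", "--no-scenecut",
        "-o", out_264, SOURCE,
    ], check=True)

renditions = []
for kbps in (1200, 2400, 4800):
    raw, mp4 = f"video_{kbps}k.264", f"video_{kbps}k.mp4"
    encode(kbps, raw)
    subprocess.run(["MP4Box", "-add", raw, "-fps", "24", mp4], check=True)
    renditions.append(mp4)

# Segment all renditions and write a single MPD.
subprocess.run(["MP4Box", "-dash", str(SEG_MS), "-rap",
                "-profile", "dashavc264:onDemand",
                "-out", "manifest.mpd"] + renditions, check=True)
```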

RGB Networks Announces an Open Source Software Transcoder Initiative

RGB Networks has announced that it is developing an open source version of its popular TransAct Transcoder. Called ‘Ripcode Transcoder’, after the company Ripcode, which was acquired by RGB Networks in 2010 and which originally developed TransAct, the new, cloud-enabled software transcoder will provide RGB Networks’ customers with greater control, integration and flexibility in their video delivery workflows.

In a pioneering move, and harnessing the industry momentum toward developing cloud-based solutions, RGB Networks is actively welcoming operators and vendors to be part of a community of contributors to the open source project.

RGB Networks’ CloudXtream solution for nDVR and dynamic Ad Insertion for Multiscreen (AIM) environments, launched in October 2013 and built on the industry standard open source cloud operating system OpenStack, has paved the way for this latest innovation. The company intends to build on this success with Ripcode, which will be an “open core” project, where the core technology from the TransAct Transcoder is being used to create the foundations of the open source project.

Suitable for a variety of applications, the Ripcode Transcoder will include the full feature set expected of an industrial-strength, professional transcoder, leaving customers free to select and integrate the packaging solution of their choice to produce their desired adaptive bitrate output formats.

The intended feature set of the open source Ripcode Transcoder will include:

  • Both Linear (live) and Video on Demand (VOD) transcoding
  • Full cluster management, load balancing, and failover
  • Linear and VOD transcoding of MPEG2, H.264, H.265, AAC, AC3, and other industry leading video and audio codecs
  • File-to-File watch folders
  • Full reporting and logging of events
  • Commercial-grade GUI
  • RESTful APIs

Unlike many other open source projects, an open source transcoder is difficult to release because of professional codec licensing. RGB Networks will release Ripcode Transcoder with only the codecs that can be legally used with open source software. Additionally, to facilitate use of the transcoder in professional environments that require licensed, third-party codecs and pre/post-processing filters, the Ripcode Transcoder will include a plug-in framework that allows the use of best-of-breed codecs and filters.

A number of vendors of such components and related technologies have expressed interest in participating in the Ripcode initiative including the following:
  • Video Codecs (including H.264/AVC and H.265/HEVC): eBrisk Video, Intel Media Server Studio, Ittiam Systems, MainConcept (A DivX company), Multicoreware, NGCodec (providing HW acceleration for H.264/AVC and H.265/HEVC), Squid Systems, Vanguard Video
  • Audio Codecs: Dolby Laboratories, Fraunhofer IIS
  • Video Optimization: Beamr

The release of the first version of Ripcode Transcoder – 1.0 – with all the appropriate licensing is targeted for Q1 2015.

Source: RGB Networks

SMPTE Publishes Archive eXchange Format Standard

The Society of Motion Picture and Television Engineers published a standard that codifies the Archive eXchange Format. An IT-centric file container that can encapsulate any number and type of files in a fully self-contained and self-describing package, AXF supports interoperability among disparate content storage systems and ensures content’s long-term availability, no matter how storage or file system technology evolves.

Designed for operational storage, transport, and long-term preservation, AXF was formulated as a wrapper, or container, capable of holding virtually unlimited collections of files and metadata related to one another in any combination. Known as “AXF Objects,” such containers can package, in different ways, all the specific information different kinds of systems would need in order to restore the content data. The format relies on the Extensible Markup Language to define the information in a way that can be read and recovered by any modern computer system to which the data is downloaded.

AXF Objects are essentially immune to changes in technology and formats. Thus, they can be transferred from one archive system into remote storage—geographically remote or in the cloud, for instance—and later retrieved and read by different archive systems without the loss of any essence or metadata.

AXF Objects hold files of any kind and any size. By automatically segmenting, storing on multiple media, and reassembling AXF Objects when necessary, “spanned sets” enable storage of AXF Objects on more than one medium. Consequently, AXF Objects may be considerably larger than the individual media on which they are stored. This exceptional scalability helps to ensure that AXF Objects may be stored on any type or generation of media. The use of “collected sets” permits archive operators to make changes to AXF Objects or files within them, while preserving all earlier versions, even when write-once storage is used.

The nature of AXF makes it possible for equipment manufacturers and content owners to move content from their current archive systems into the AXF domain in a strategic way that does not require content owners to abandon existing hardware unless or until they are ready to do so. In enabling the recovery of archived content in the absence of the systems that created the archives, AXF also offers a valuable means of protecting users’ investment in content. By maintaining preservation information such as fixity and provenance as specified by the OAIS model, AXF further enables effective long-term archiving of content. Resilience of data is ensured through use of redundant AXF structures and cryptographic hash algorithms.

AXF already has been employed around the world to help businesses store, protect, preserve, and transport many petabytes of file-based content, and the format is proving fundamental to many of the cloud-based storage, preservation, and IP-based transport services available today.

Source: TV Technology

DASH AVC/264 Support in GPAC

This article shows you how to set up GPAC for your OnDemand and Live content.

CableLabs Boots Up 4K Video Sharing Website

CableLabs has launched a 4K-focused microsite that provides access to Ultra HD/4K video clips to help platform developers, vendors, network operators and other video pros conduct tests with the emerging eye-popping format.

CableLabs said it’s offering the content under the Creative Commons License, meaning it can be used freely for non-commercial testing, demonstrations and the general advancement of technology.

As vendors use content from the site to test new technology, CableLabs helps the industry get one step closer to standardizing 4K content and delivering it to the home.

As of this writing, the site hosts seven videos, all shot with a Red Epic camera. The longest of the batch is a fireman-focused clip titled “Seconds That Count” that runs 5 minutes and 22 seconds.

On the site, CableLabs has integrated an upload form for anyone who wants to share their 4K videos for testing purposes. Interested participants are directed to provide a lower-bitrate HD file for preview purposes along with a 4K version. CableLabs is accepting pre-transcoded versions using MPEG HEVC or AVC, or an Apple ProRes version. CableLabs will take on the task of transcoding the content into two high-quality versions available for download on the website. CableLabs notes that uploaded content might be used for demos at forums, shows, and conferences.

CableLabs is launching the site as the cable industry just begins to develop plans around 4K. Among major U.S. MSOs, Comcast plans to launch an Internet-based, on-demand Xfinity TV 4K app before the end of the year that will initially be available on new Samsung UHD TVs. The MSO is also working with partners on a new generation of boxes for its X1 platform that use HEVC and can decode native 4K signals.

On the competitive front, DirecTV president and CEO Mike White said on the company's second-quarter earnings call that the satellite TV giant will be ready to deliver 4K video on an on-demand basis this year, and will be set up to follow with live 4K streaming next year or by early 2016.

Source: Multichannel News