Facebook Open-Sources 360-Degree Camera to Jumpstart VR

Facebook debuted the Facebook Surround 360 camera for 360-degree video and VR at its F8 conference this week. The company will also freely share its hardware schematics and complex stitching software via GitHub this summer. Others share Facebook’s vision of virtual reality, including Nokia, Jaunt and Google, all of which built their own 360-degree cameras. But by open-sourcing its plans, says chief executive Mark Zuckerberg, Facebook furthers its central mission of connecting everyone in the world.

Wired describes the camera as featuring 17 lenses, constructed from about $30,000 worth of off-the-shelf hardware. Fourteen are spaced evenly around a horizontal ring; the remaining three are fisheye lenses, one on top of the camera to capture what’s above and two on the bottom to capture what’s below.

The 360-degree videos are a bridge, says Wired, “to the kind of full-fledged virtual reality Facebook plans on offering through the Oculus Rift.” Facebook chief product officer Chris Cox notes that, “We do not have ambitions of getting into the camera business. But we did observe that there wasn’t really a great reference camera that took really nice, high-resolution, 3D, fully spherical video.”

On the Facebook “Code” page, engineer Brian Cabral, who headed up the team to build the Surround 360, says the goal was to create “a camera that’s not only capable of high-resolution spherical video but also durable and easy to use.”

He describes Facebook’s thinking in designing the hardware and software, and digs into the specifications. For example, the system exports 4K, 6K and 8K for each eye, using its custom Dynamic Streaming technology for the 6K and 8K videos; the 8K “doubles industry standard output.”

Approaches to Building a VOD Service from Scratch

If you’re thinking about building a VOD service and don’t have to deal with legacy systems, this is how to do it.

VR Technology Comparison: Samsung Gear VR, Oculus Rift, HTC Vive, Sony PlayStation VR

The Age of VR has emerged from the hype and expectation into a burgeoning new economy, one that will serve a wide variety of users, purposes and needs. Here is a technological snapshot of the four top virtual reality systems in their debut incarnations.

The Netflix IMF Workflow

This post describes the Netflix IMF ingest implementation and how it fits within the scope of their content processing pipeline. While conventional essence containers (e.g., QuickTime) commonly embed essence data in the container file, the IMF CPL is designed to reference essence rather than contain it, and this has interesting architectural implications.
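
As a toy illustration of essence-by-reference (hypothetical Python data structures, not Netflix’s implementation), note how a new version of a title is just a new list of references, with no essence copied:

    # Hypothetical illustration of "essence by reference": the CPL-like
    # composition stores only identifiers and edit ranges; the essence
    # itself lives in separate track files. All names/values are made up.
    from dataclasses import dataclass

    @dataclass
    class TrackFile:              # essence stored outside the composition
        uuid: str
        path: str                 # e.g., a video or audio MXF file

    @dataclass
    class CompositionEntry:       # a reference, never embedded essence
        track_file_uuid: str
        entry_point: int          # first edit unit used
        duration: int             # number of edit units used

    video = TrackFile("urn:uuid:aaaa-1111", "title_video.mxf")
    audio_fr = TrackFile("urn:uuid:bbbb-2222", "title_audio_fr.mxf")

    # A French version is just a different list of references; no media
    # is copied or re-wrapped to create it.
    french_version = [
        CompositionEntry(video.uuid, entry_point=0, duration=143712),
        CompositionEntry(audio_fr.uuid, entry_point=0, duration=143712),
    ]
    print(french_version)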

The State of MPEG-DASH 2016

The industry is turning away from plug-ins and embracing HTML5 everywhere. Here's how the vendor-independent streaming standard is gaining momentum.

Buyer's Guide to DRM 2016

A simple guide through the complex landscape of multiple DRM technologies. Learn what DRM is, and how to choose and deploy the best solution for each platform.

2016 VR Industry Landscape

IMF: A Prescription for Versionitis

This blog post provides an introduction to the emerging IMF (Interoperable Master Format) standard from SMPTE (the Society of Motion Picture and Television Engineers), and delves into a short case study that highlights some of the operational benefits that Netflix receives from IMF today.

How Modern Video Players Work

An interesting post by Streamroot.

Next-Generation Video Encoding Techniques for 360 Video and VR

An interesting way to preprocess equirectangular 360-degree videos in order to reduce their file size.

In an Admission that 4K Alone is Not Enough, UHD Alliance Unveils “Ultra HD Premium”

The UHD Alliance, a group made up of leading producers, distributors and device makers, has defined the Ultra HD Premium brand, which requires certain minimum specifications to be met for content production, streaming and playback. The Premium logo is reserved for products and services that comply with performance metrics for resolution, High Dynamic Range (HDR), peak luminance, black levels and wide color gamut, among others.

The specifications also make recommendations for immersive audio and other features. These advances in resolution, contrast, brightness, color and audio will enable certified displays and content to deliver greater image quality to in-home viewers than resolution alone could provide.

As the industry starts to set quality standards, camera manufacturers may be pushed towards offering higher-quality 10-bit 4K recording. Premium designation requires 10-bit capture, distribution and playback, meaning cameras must be able to record 10-bit footage to meet the standard.

Currently, many 4K cameras can only capture 8-bit files (256 levels per channel, versus 1,024 at 10-bit), limiting dynamic range and flexibility at the color grading stage.

The UHD Alliance supports various display technologies and has consequently defined combinations of parameters to ensure a premium experience across a wide range of devices. In order to receive the UHD Alliance Premium logo, a device must meet or exceed the following specifications (a checker sketch follows the list):

  • Image Resolution: 3840x2160
  • Color Bit Depth: 10-bit signal
  • Color Palette (Wide Color Gamut):
    • Signal Input: BT.2020 color representation
    • Display Reproduction: More than 90% of P3 colors
  • High Dynamic Range:
    • SMPTE ST2084 EOTF
    • A combination of peak brightness and black level, either:
      • More than 1000 nits peak brightness and less than 0.05 nits black level
      • More than 540 nits peak brightness and less than 0.0005 nits black level
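
Read as code, the two certification paths look like this. This is a minimal sketch in Python; the DisplaySpec fields and the example panel are hypothetical, not part of any official UHD Alliance test procedure:

    # Minimal sketch of the display thresholds above. Field names and the
    # example values are hypothetical, not an official test procedure.
    from dataclasses import dataclass

    @dataclass
    class DisplaySpec:
        width: int
        height: int
        bit_depth: int
        p3_coverage: float    # fraction of the P3 gamut reproduced
        peak_nits: float      # peak brightness (cd/m^2)
        black_nits: float     # black level (cd/m^2)

    def meets_uhd_premium(d):
        baseline = (d.width >= 3840 and d.height >= 2160
                    and d.bit_depth >= 10
                    and d.p3_coverage > 0.90)
        # Two allowed peak-brightness/black-level combinations: one suits
        # bright LCD panels, the other deep-black OLED panels.
        path_a = d.peak_nits > 1000 and d.black_nits < 0.05
        path_b = d.peak_nits > 540 and d.black_nits < 0.0005
        return baseline and (path_a or path_b)

    # Example: a hypothetical LCD-style panel.
    print(meets_uhd_premium(DisplaySpec(3840, 2160, 10, 0.93, 1100, 0.04)))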

Distribution

Any distribution channel delivering UHD Alliance content must support:
  • Image Resolution: 3840x2160
  • Color Bit Depth: Minimum 10-bit signal
  • Color: BT.2020 color representation
  • High Dynamic Range: SMPTE ST2084 EOTF

Content Master

The UHD Alliance Content Master must meet the following requirements:
  • Image Resolution: 3840x2160
  • Color Bit Depth: Minimum 10-bit signal
  • Color: BT.2020 color representation
  • High Dynamic Range: SMPTE ST2084 EOTF

The UHD Alliance recommends the following mastering display specifications:
  • Display Reproduction: Minimum 100% of P3 colors
  • Peak Brightness: More than 1000 nits
  • Black Level: Less than 0.03 nits

The UHD Alliance technical specifications prioritize image quality and recommend support for next-generation audio.

DASHInterpret

DASHInterpret converts video-on-demand dynamic adaptive HTTP streams (MPEG-DASH) to Apple HTTP Live Streaming (HLS). This is useful when you already have MPEG-DASH content and need to serve it to Apple users who don’t want to use a third-party player.

There are several software packages out there (e.g., Wowza or Evostream) that take in live streams in the form of RTSP, RTMP, MPEG-TS, FLV and so on, and produce many other formats from that single input. They come in the form of a server (service) listening on ports or connecting to other remote streaming servers. Their purpose is simply to bridge the gap between different streaming formats by serving multiple output formats from a single input. These servers are mainly used in three use cases, but the majority of usage is in live streaming.

Use Cases:

  • Takes a live stream (e.g., RTMP, RTSP, MPEG-TS) as input, and outputs another live stream in the same or a different format
  • Takes a live stream as input, and outputs a recorded or video-on-demand stream (e.g., MP4, MPEG-DASH, HLS) stored as files on disk
  • Takes a video-on-demand stream as input, and outputs either a live stream or another video-on-demand stream

DASHInterpret is not one of these servers, although it solves a similar problem; it leans more toward the video-on-demand space. It is not a server running in the background, listening on ports or making network connections. It is simply a utility that reads a video-on-demand MPEG-DASH presentation and converts it to another video-on-demand format. The software is portable: you can move it between computers and use it without any setup, configuration or installation. The 1.0.0 release currently supports conversion of DASH-IF-compliant Dynamic Adaptive Streaming over HTTP (MPEG-DASH) content to HTTP Live Streaming (HLS). A rough sketch of the core mapping appears below.
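
The following Python fragment is a hypothetical sketch of that DASH-to-HLS mapping, not DASHInterpret’s actual code: it reads the video Representations from an on-demand MPD and emits the variant entries of an HLS master playlist. A real converter must also repackage the media segments themselves.

    # Hypothetical sketch of the DASH-to-HLS mapping; not DASHInterpret's
    # actual implementation. The per-Representation playlist naming is an
    # assumption for illustration; segment repackaging is omitted.
    import xml.etree.ElementTree as ET

    DASH_NS = "{urn:mpeg:dash:schema:mpd:2011}"

    def mpd_to_master_playlist(mpd_xml):
        """Map each video Representation in an MPD to an HLS variant entry."""
        root = ET.fromstring(mpd_xml)
        lines = ["#EXTM3U"]
        for rep in root.iter(DASH_NS + "Representation"):
            bw, w, h = rep.get("bandwidth"), rep.get("width"), rep.get("height")
            if not (bw and w and h):
                continue  # skip audio-only or incomplete entries
            lines.append("#EXT-X-STREAM-INF:BANDWIDTH=%s,RESOLUTION=%sx%s"
                         % (bw, w, h))
            lines.append("%s.m3u8" % rep.get("id"))  # assumed naming scheme
        return "\n".join(lines) + "\n"

    # Tiny inline MPD standing in for a real manifest.
    demo = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"><Period>
      <AdaptationSet mimeType="video/mp4">
        <Representation id="v0" bandwidth="1500000" width="1280" height="720"/>
        <Representation id="v1" bandwidth="4000000" width="1920" height="1080"/>
      </AdaptationSet></Period></MPD>"""
    print(mpd_to_master_playlist(demo))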

So what is the advantage of using DASHInterpret? If you are only converting VOD content, it is much easier and quicker to use than any of these live streaming servers. Since DASHInterpret does not take its input from the network, it avoids the performance penalty of the network stack and is not limited to the 1,000 Mbit/s of a typical gigabit network; instead it runs at the speed of local disk I/O, where even a commodity SATA III interface is rated at 6,000 Mbit/s. There is also next to no learning curve: 30 seconds is enough to learn how to use it. You can take it anywhere and use it without any configuration or setup; you can even store it on a USB drive and run it directly from there.

Why convert only MPEG-DASH streams? Because MPEG-DASH is the current and next-generation technology for adaptive bitrate streaming over HTTP. Streaming businesses will transition to it now or in the near future, and part of that transition is serving users who can’t or won’t transition quickly. That’s where DASHInterpret comes in: the content provider uses it to translate its MPEG-DASH into a format familiar to the customer, such as HLS for Apple users. As of this writing, HLS is still the dominant HTTP streaming technology in operation, especially for the millions of Apple users.

Per-Title Encode Optimization

http://techblog.netflix.com/2015/12/per-title-encode-optimization.html

Netflix explains how it moved from a fixed, one-size-fits-all bitrate ladder to encode settings chosen per title by analyzing each title’s complexity.

The Applicability of the Interoperable Master Format (IMF) to Broadcast Workflows

The broadcast industry is faced with new challenges related to advanced file-based workflows. The adoption of an increasing number of distribution channels and localized content versions points to several editorial versions and output versions being required. Furthermore, broadcasters are starting to produce UHD content, which raises even more questions in terms of file handling, workflow efficiency and compression technologies.

The Interoperable Master Format (IMF) has capabilities that might make it a suitable candidate to solve many of today’s challenges in the broadcasting industry. However, it doesn’t yet appear to be sufficient for broadcast applications.

This article suggests a way of adapting IMF to broadcasters’ requirements by giving an insight into possible extensions to the IMF structure. It will be of interest to broadcasters, distributors and producers who need an efficient master format capable of accommodating today’s workflow challenges.

The achievements presented in this paper are part of a collaborative master’s thesis carried out at the EBU and the RheinMain University of Applied Sciences, Germany.

By Melanie Matuschak

Is Virtual Reality Streaming Ready for Primetime?

Virtual Reality is poised to revolutionize many industries, including live video streaming. Join us as we cover the technology and its potential to open the door to new markets.

By Mark Alamares

HbbTV 2.0: Could This Standard Become the Future of Television?

The next version of HbbTV is bringing a much more powerful toolset with it, and has the potential to change the current worldwide television landscape.

By Nicolas Weil

Standards-Based, Premium Content for the Modern Web

These days, a person is just as likely to be watching a movie on a laptop, tablet or mobile phone as sitting in front of a television. Cable operators are eager to provide premium video content to these devices, but there are high costs involved in supporting the wide array of devices owned by their customers.

A multitude of technological obstacles stand in the way of delivering a secure, high-quality, reliable viewing experience to the small screen. This four-part blog series describes an open, standards-based approach to providing premium, adaptive-bitrate audio/video content in HTML, and how open source software can assist in the evaluation and deployment of these technologies.

By Greg Rutz, Lead Architect, CableLabs

MPEG-DASH

An interesting article about MPEG-DASH.

Subtitles in an IP World

An end-to-end demonstration of EBU-TT-D subtitles being delivered via MPEG-DASH and displayed by a client.

The Structure of an MPEG-DASH MPD

This article describes the most important pieces of the MPD, starting from the top level (Periods) and going down to the bottom (Segments). A short parsing sketch follows.
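
As a companion to that structure, here is a minimal sketch using Python’s standard library that walks the hierarchy from Periods down to segment addressing. The inline MPD and its SegmentTemplate-on-AdaptationSet layout are assumptions for illustration:

    # Minimal sketch: walk the MPD hierarchy described in the article.
    # The inline manifest is a stand-in; SegmentBase/SegmentList handling
    # is omitted for brevity.
    import xml.etree.ElementTree as ET

    NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

    MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
      <Period id="p0" duration="PT30S">
        <AdaptationSet mimeType="video/mp4">
          <SegmentTemplate media="video_$RepresentationID$_$Number$.m4s"/>
          <Representation id="v0" bandwidth="1500000" width="1280" height="720"/>
          <Representation id="v1" bandwidth="4000000" width="1920" height="1080"/>
        </AdaptationSet>
      </Period>
    </MPD>"""

    root = ET.fromstring(MPD)
    for period in root.findall("mpd:Period", NS):
        print("Period:", period.get("id"), period.get("duration"))
        for aset in period.findall("mpd:AdaptationSet", NS):
            print("  AdaptationSet:", aset.get("mimeType"))
            for rep in aset.findall("mpd:Representation", NS):
                print("    Representation %s: %s bps, %sx%s" % (
                    rep.get("id"), rep.get("bandwidth"),
                    rep.get("width"), rep.get("height")))
            # Segments sit at the bottom, addressed here via a
            # SegmentTemplate on the AdaptationSet (one common layout).
            tmpl = aset.find("mpd:SegmentTemplate", NS)
            if tmpl is not None:
                print("    SegmentTemplate media:", tmpl.get("media"))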