HTML5 Video in IE 11 on Windows 8.1

We've previously discussed our plans to use HTML5 video with the proposed "Premium Video Extensions" in any browser which implements them. These extensions are the future of premium video on the web, since they allow playback of premium video directly in the browser without the need to install plugins.

Today, we're excited to announce that we've been working closely with Microsoft to implement these extensions in Internet Explorer 11 on Windows 8.1. If you install the Windows 8.1 Preview from Microsoft, you can visit Netflix.com today in Internet Explorer 11 and watch your favorite movies and TV shows using HTML5!

Microsoft implemented the Media Source Extensions (MSE) using the Media Foundation APIs within Windows. Because Media Foundation supports hardware acceleration on the GPU, we can achieve high-quality 1080p video playback with minimal CPU and battery utilization. Now a single charge gets you more of your favorite movies and TV shows!
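
For readers who haven't met MSE: it lets JavaScript construct the stream a video element plays, segment by segment. A minimal sketch against today's unprefixed API (IE 11 tracked an earlier draft of the spec; the codec string and segment URL below are invented placeholders):

    // Hypothetical codec string and segment URL, purely for illustration.
    const video = document.querySelector('video') as HTMLVideoElement;
    const mime = 'video/mp4; codecs="avc1.640028"';

    if ('MediaSource' in window && MediaSource.isTypeSupported(mime)) {
      const mediaSource = new MediaSource();
      video.src = URL.createObjectURL(mediaSource);
      mediaSource.addEventListener('sourceopen', async () => {
        const buffer = mediaSource.addSourceBuffer(mime);
        // Append the initialization segment; media segments follow the same path.
        const init = await fetch('/segments/init.mp4').then(r => r.arrayBuffer());
        buffer.appendBuffer(init);
      });
    }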

Microsoft also has an implementation of the Encrypted Media Extensions (EME) using Microsoft PlayReady DRM. This provides the content protection needed for premium video services like Netflix.
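
Roughly, EME lets the page negotiate with a content decryption module such as PlayReady. A sketch using the modern unprefixed API (IE 11 shipped an earlier draft; this configuration is simplified and illustrative):

    const video = document.querySelector('video') as HTMLVideoElement;

    navigator.requestMediaKeySystemAccess('com.microsoft.playready', [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
    }])
      .then(access => access.createMediaKeys())
      // The license request/response exchange via MediaKeySession is omitted.
      .then(mediaKeys => video.setMediaKeys(mediaKeys))
      .catch(() => console.log('PlayReady is not available'));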

Finally, Microsoft implemented the Web Cryptography API (WebCrypto) in Internet Explorer, which allows us to encrypt and decrypt communication between our JavaScript application and the Netflix servers.
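
Netflix hasn't published the exact scheme it uses, but the shape of a SubtleCrypto call looks like this (AES-GCM chosen purely for illustration):

    // Encrypt a message for transport; the key would come from a prior exchange.
    async function encryptPayload(key: CryptoKey, payload: string) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per message
      const data = new TextEncoder().encode(payload);
      const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, data);
      return { iv, ciphertext };
    }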

We expect premium video on the web to continue to shift away from using proprietary plugin technologies to using these new Premium Video Extensions. We are thrilled to work so closely with the Microsoft team on advancing the HTML5 platform, which gets a big boost today with Internet Explorer’s cutting edge support for premium video. We look forward to these APIs being available on all browsers.

By Anthony Park and Mark Watson, The Netflix Tech Blog

Captioning for Streaming Video still a "Wild West"

Captioning video on demand “is a wild west,” at least in the U.S., where captions are mandated by law, making the issue a key one for content owners and their partners to get their heads around.

Delivering a measured analysis of the issues, Telestream product manager Kevin Louden captioned his own session, "Practicalities of Putting Captions on IP-Delivered Video Content," with the question: "Can anyone see your subtitles?"

Louden began by pointing out that there are legal, moral, and business reasons to make sure your content is captioned.

“The 21st Century Communications [and Video Accessibility] Act in the U.S. mandates that content previously broadcast or intended for broadcast have captions,” he explained. “This comes into effect in stages between now and 2016.

“Even if you don't do it by law [no other region of the world has quite the same legislation] some people say it's simply the right thing to do, and from a business perspective you can broaden audiences for your content by reaching out to multiple language groups.”

So how is it done? Just as there are many different video and audio formats for streaming and progressive download protocols, there are many caption file formats for video on demand, the main ones being W3C TT/DFXP and WebVTT/SRT.

The former is an open standard which contains a lot of information about position, font size, color, and so on for a rich presentation of the information, and is “potentially very complicated,” he said.

WebVTT/SRT, on the other hand, is a family of text-based formats, with WebVTT native to HTML5 video tags: “very simple in its current iteration” but with little or no control of presentation features in the file.

“This is what people cobbled together before there were any standards in place, and because of that there are a lot of entrenched workflows,” Louden said.
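
To make the contrast concrete, here is the same invented cue first in WebVTT and then in a minimal TTML document:

    WEBVTT

    00:00:05.000 --> 00:00:08.000
    Can anyone see your subtitles?

    <tt xmlns="http://www.w3.org/ns/ttml">
      <body><div>
        <p begin="00:00:05.000" end="00:00:08.000">Can anyone see your subtitles?</p>
      </div></body>
    </tt>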

To cut through this multiplicity of formats, two leading standards bodies are attempting to create a universal file interchange format, a sort of mezzanine or master level.

SMPTE 2052, being proposed in the U.S., is an XML-based timed text format which emerged from the Act, so that content owners or their partner organisations can create deliverable formats from broadcast content for IP distribution in all its streaming media end-user forms.

In Europe, EBU-TT is a similar proposition and a subset of the TTML format, for use as a universal handoff.

For organisations wanting to generate captioning information for linear video on their own websites there are several options. JW Player, for example, has built-in support for WebVTT, SRT, and DFXP, while Flow Player supports W3C TT and SRT.
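
With the native HTML5 route, a sidecar WebVTT file is attached to the video element via a track tag (file names here are invented):

    <video controls src="programme.mp4">
      <track kind="captions" src="programme.en.vtt" srclang="en" label="English" default>
    </video>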

Numerous video encoding tools, perhaps already in situ at a facility, contain subtitling and captioning capabilities for translating between formats.

Alternatively, one can employ graphical overlays or physically burn the subtitles onto the picture, a practice which is still remarkably common, reported Louden. “You don't need special players or sidecar files, but obviously there's not much flexibility.”

Engaging a third-party service provider is a useful way of delegating the problem but, said Louden, “in theory you hand over your master SMPTE TT or EBU TT safe harbour file as the interchange format, but the reality is that people are used to their own existing profiles and will request an SRT or WebVTT format, since this is the way it's always been done.”

Turning to adaptive bitrate provision, Louden noted that the main ABR formats each cater for captions differently.

The HLS specification for iOS devices provides a means of embedding CEA-608 captions in the video stream itself, while Smooth Streaming and HTTP Dynamic Streaming both support the sidecar formats DFXP and TTML (useful for repurposing linear and non-linear VoD). Where MPEG-DASH fits into this equation is still up in the air.
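
As an illustration, a master HLS playlist advertises embedded CEA-608 captions along these lines (bandwidth and URI invented):

    #EXTM3U
    #EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="English",LANGUAGE="en",INSTREAM-ID="CC1"
    #EXT-X-STREAM-INF:BANDWIDTH=1280000,CLOSED-CAPTIONS="cc"
    video/1280k.m3u8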

Louden pointed out a couple of bumps in the road for anyone looking to caption their content, including the need to take care of rights, especially when repurposing legacy broadcast content.

“If you sent the work out to a caption house then beware that many of them work on individual negotiations, so while you may have a licence to broadcast that information you may not have the web rights for it,” he advised.

“Also be careful editing content," he said. "Any retiming of the content will have a knock-on to the timecode-synced caption information. You have to be sure when you do your format translation that the captions are retimed too, perhaps manually.”
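
A constant-offset shift is easy to script; edits that change the programme's duration partway through need cue-by-cue retiming. A sketch of the simple case for WebVTT-style timestamps (illustrative, not production code):

    // Shift every HH:MM:SS.mmm timestamp in a WebVTT file by offsetMs.
    function shiftTimestamp(ts: string, offsetMs: number): string {
      const [h, m, rest] = ts.split(':');
      const [s, ms] = rest.split('.');
      let total = ((+h * 60 + +m) * 60 + +s) * 1000 + +ms + offsetMs;
      const pad = (n: number, w: number) => String(n).padStart(w, '0');
      const hh = Math.floor(total / 3600000); total %= 3600000;
      const mm = Math.floor(total / 60000);   total %= 60000;
      return `${pad(hh, 2)}:${pad(mm, 2)}:${pad(Math.floor(total / 1000), 2)}.${pad(total % 1000, 3)}`;
    }

    function shiftVtt(vtt: string, offsetMs: number): string {
      return vtt.replace(/\d{2}:\d{2}:\d{2}\.\d{3}/g, t => shiftTimestamp(t, offsetMs));
    }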

Delegates in the room agreed on the demand for a universal captioning standard, but no one really believed that a standard could be agreed or made to work in practice, given commercial pressures among competing vendors.

By way of addendum, Louden noted a small difference in definitions between the two continents.

“In the U.S. 'captions' display text and other sound information for the hearing impaired," he said. "Subtitles are translations to different languages, whereas in Europe both of these things are commonly referred to as the same thing—a subtitle.”

By Adrian Pennington, Streaming Media

The 2013 Fletcher Digital Camera Comparison Chart

The 2013 Digital Camera Comparison Chart is a great resource for anyone who wants a quick view of the specifications of a particular digital cinema camera.

DASH Industry Forum Releases Implementation Guidelines

DASH Industry Forum (DASH-IF) has released DASH-AVC/264, its recommended guidelines for deployment of the MPEG-DASH standard.

The MPEG-DASH specification provides a standard method for streaming multimedia content over the Internet, simplifying and converging the delivery of IP video. This improves the user experience while decreasing cost and simplifying workflows.
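
At its core, a DASH presentation is described by an XML manifest, the MPD; a minimal illustrative example (profile, codec string, and URL are placeholders):

    <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
         mediaPresentationDuration="PT30S"
         profiles="urn:mpeg:dash:profile:isoff-live:2011">
      <Period>
        <AdaptationSet mimeType="video/mp4" codecs="avc1.640028">
          <Representation id="720p" bandwidth="2000000" width="1280" height="720">
            <BaseURL>video_720p.mp4</BaseURL>
          </Representation>
        </AdaptationSet>
      </Period>
    </MPD>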

The DASH-AVC/264 Implementation Guidelines recommend using a subset of MPEG-DASH, with specific video and audio codecs in order to promote interoperability among deployments in the industry.

The DASH-AVC/264 Implementation Guidelines:

  • Provide an open, interoperable standard solution in the multimedia streaming market
  • Address major multimedia streaming use cases including on-demand, live and time-shift services
  • Outline the use of two specific MPEG-DASH profiles
  • Describe specific audio and video codecs and closed caption format
  • Outline the use of common encryption

The Guidelines were created by the 67 industry-leading DASH-IF member companies, including founding members Akamai, Ericsson, Microsoft, Netflix, Qualcomm and Samsung.

The DASH-AVC/264 Implementation Guidelines are also aligned with initiatives from other industry standards bodies and consortia, including HbbTV, DECE, DTG, HD-Forum, OIPF, EBU and DLNA.

Future guideline updates will address advanced services and applications, including multi-channel audio and HEVC support.

Dynamic Target Tracking Camera System Keeps its Eye on the Ball

Source: DigInfo TV

8K Ultra HD Compact Camera and H.265 Encoder Developed by NHK

Source: DigInfo TV