iPlayer3D Update

The Digital Service Development group led by Phil Layton in BBC R&D was involved in the previous trial of 3D at the Wimbledon Tennis Championships this year and also the recently broadcast Strictly Come Dancing Grand Final. In this post Dr Peter Cherriman and Paul Gorley outline the work they did to determine if it was possible to put 3D content onto the Freeview and Freesat versions of TV iPlayer.

Generally, 3D requires a high bitrate to achieve good stereoscopy. If the video bitrate is too low, depth cues are lost and the 3D becomes tiring to watch. However, because of varying Internet speeds, the higher the bitrate on iPlayer, the fewer people are able to watch it. So we had the challenging task of trying to produce high-quality 3D at as low a bitrate as possible.

Our 3D television broadcasts on Freeview, Freesat, Sky and Virgin all use a side-by-side frame-compatible format. This combines the left- and right-eye views into a single HD signal by anamorphically squashing each eye's view horizontally into half of the HD frame, so they appear side-by-side. This HD signal was compressed at a resolution of 1920x1080i25, which means 25 interlaced frames per second, each comprising 1920 pixels across and 1080 lines. Each interlaced frame consists of two fields; each field is 1920x540 pixels, and the fields are captured 1/50th of a second apart.
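The side-by-side packing can be sketched in a few lines of Python. This is only a toy illustration: averaging adjacent pixel pairs stands in for the proper resampling filter a real anamorphic squash would use, and the "frames" are tiny grayscale grids represented as lists of rows.

```python
def squash_horizontal(frame):
    """Halve the width by averaging each horizontal pixel pair
    (a crude stand-in for a real anamorphic resampling filter)."""
    return [[(row[2 * x] + row[2 * x + 1]) / 2 for x in range(len(row) // 2)]
            for row in frame]

def pack_side_by_side(left, right):
    """Squash each eye's view to half width and place the halves side by
    side, giving one frame the same size as either original view."""
    l, r = squash_horizontal(left), squash_horizontal(right)
    return [lrow + rrow for lrow, rrow in zip(l, r)]

# Tiny 2x4 grayscale "eye views" for illustration
left = [[10, 10, 20, 20], [30, 30, 40, 40]]
right = [[50, 50, 60, 60], [70, 70, 80, 80]]
packed = pack_side_by_side(left, right)
# Each packed row is [squashed left | squashed right], 4 pixels wide again
```

A receiver reverses the process by splitting the frame in half and stretching each half back to full width for its eye.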

In order to produce the best quality 3D we decided to use a recording made in Blackpool, rather than the broadcast feed received via satellite. This had a number of advantages: it meant we weren't limited to the side-by-side 3D format, and the recording had fewer compression artefacts.

We did a number of experiments with different resolutions and determined that the best compromise between bitrate and quality was to convert the recorded 1920x1080i25 interlaced signal for each eye into a non-interlaced 1280x720p50 signal using a professional cross-converter. The intermediate signal is at 50 frames per second, where each frame is 1280x720 pixels for each eye.

We then needed to convert the pair of 1280x720p50 signals into a standardised frame-compatible format. The preferred frame-compatible format for 1280x720p50 signals is to anamorphically squash each eye's signal vertically into half the HD frame, the so-called top-bottom or over-under format. This results in 1280x360 pixels per eye per frame.
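In the same toy terms (pixel averaging standing in for a real resampling filter), the top-bottom packing squashes each eye vertically and stacks the halves:

```python
def squash_vertical(frame):
    """Halve the height by averaging each vertical pixel pair
    (a crude stand-in for a real anamorphic resampling filter)."""
    return [[(a + b) / 2 for a, b in zip(frame[2 * y], frame[2 * y + 1])]
            for y in range(len(frame) // 2)]

def pack_top_bottom(left, right):
    """Squash each eye's view to half height and stack them, left eye on top."""
    return squash_vertical(left) + squash_vertical(right)

# Tiny 4x2 grayscale "eye views" for illustration
left = [[1, 1], [3, 3], [5, 5], [7, 7]]
right = [[2, 2], [4, 4], [6, 6], [8, 8]]
packed = pack_top_bottom(left, right)
# Top half holds the squashed left eye, bottom half the squashed right eye;
# at 1280x720 this yields the 1280x360-per-eye layout described above
```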

Our broadcasts use the side-by-side format; however, this requires horizontal squashing of the picture, which degrades the stereoscopic depth cues. These would be further degraded for a 1280x720 image format. The vertical squashing used in the top-bottom format preserves more of this depth information, but the rescaling required in a receiver is much harder for the interlaced broadcast format, which is why it's not used there.

We found that our broadcast video encoders were much more efficient at encoding this 1280x720p50 signal than the existing software encoders. The bespoke workflow of this experiment allowed us to trial the use of broadcast encoders for iPlayer content. We tested with a wide range of 3D material, the most challenging of which was the Strictly Come Dancing 3D footage shown in cinemas for last year's Children in Need. By using HE-AAC audio encoding we were able to minimise the audio bitrate required. This enabled us to create good-quality 3D at a constant total bitrate of less than 5Mbit/s. You should notice the improved quality of the iPlayer 3D pictures in terms of fewer compression artefacts, and smoother motion thanks to the 50 frames per second, which is twice the frame rate of standard iPlayer.

The next challenge was to modify the file to be suitable for upload to the iPlayer platform. The Freesat receivers required an MPEG Transport Stream (ISO/IEC 13818-1), which is produced directly by the broadcast video encoders.

However, FreeviewHD receivers require an MP4 file. When we used our standard software tool to create the MP4 files, we found that the audio and video were not in sync on some receivers. MP4 files contain metadata to indicate how to synchronise the video and audio; however, it seems some receivers assume the first frame of video should be synchronised with the first frame of audio and don't make use of this metadata. Using an alternative tool, we were able to create MP4 files which played in sync on all receivers available to us, including those which previously seemed to ignore this synchronisation metadata.

We don't yet know what the future of 3D will be, but these experiments have demonstrated another platform on which 3D content can be delivered to viewers.

By Ant Miller, BBC R&D

JPEG 2000

JPEG 2000 has caught the attention of the professional media world for good reason. First, it closely matches some workflows, where the production process operates on each frame of a video stream as a discrete unit. This is different from MPEG-2 and MPEG-4 AVC Long-GOP flavors, where, during the reconstruction process, algorithms reference frames before and after the frame being reconstructed.

The ability to compress each frame as a free-standing unit has made it popular in the digital intermediate space in Hollywood. JPEG 2000 is also of interest to those who want lossless compression: it can provide a bit-perfect reconstruction of the original image, although at a cost in bandwidth. Also, the wavelet compression used in JPEG 2000 provides some unique opportunities that are not available in other compression methods.

The wavelet transform separates the image into four sub-bands. The first is lowpass horizontal and lowpass vertical (LL); the LL sub-band is essentially a lower-resolution version of the original image. The other sub-bands are lowpass horizontal and highpass vertical (LH); highpass horizontal and lowpass vertical (HL); and highpass horizontal and highpass vertical (HH).
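One level of this separation can be sketched with the simple Haar filter. JPEG 2000 itself uses the 5/3 (lossless) and 9/7 (lossy) wavelet filters, so this is only an illustration of the sub-band structure, not the standard's actual transform:

```python
def split_rows(img):
    """One 1-D Haar step along each row: (lowpass, highpass) halves."""
    lo = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    hi = [[(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    return lo, hi

def split_cols(img):
    """The same step along each column, done via transposition."""
    t = [list(c) for c in zip(*img)]
    lo, hi = split_rows(t)
    return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]

def subbands(img):
    """One level of 2-D wavelet decomposition into LL, LH, HL, HH."""
    lo, hi = split_rows(img)   # horizontal filtering
    ll, lh = split_cols(lo)    # vertical filtering of the lowpass half
    hl, hh = split_cols(hi)    # vertical filtering of the highpass half
    return ll, lh, hl, hh

# A blocky 4x4 test image: all detail bands come out zero,
# and LL is a half-resolution copy of the image
img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
ll, lh, hl, hh = subbands(img)
```

Repeating the decomposition on the LL band yields the multi-resolution pyramid that makes the tricks described next possible.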

Using wavelet transforms and some clever thinking, implementers can do some interesting things. For example, they can send only the LL image if they know they are feeding a low-resolution display. Or they can send the LL sub-band in a highly protected stream, to ensure the original image arrives intact, and then send the higher-resolution sub-bands unprotected, since a momentary loss of these sub-bands is not likely to be noticed.

Given JPEG 2000's popularity, it is not surprising there have been some developments that make it particularly interesting for professional applications. First, the ITU has created an amendment (1) that outlines specific configurations for broadcast contribution applications. These configurations are intended to establish interoperability points for those implementing JPEG 2000 in professional applications. This is important because, until the amendment was released, there were so many variables in the compression tool set that interoperability was unlikely. The second important development, Amendment 5 to the MPEG-2 standard (2), provides a mapping of the JPEG 2000 Packetized Elementary Stream (PES) onto the MPEG-2 Transport Stream (TS).

Finally, some time ago, the Pro-MPEG Forum started to develop a standardized way to transport MPEG-2 TS over IP networks. The Video Services Forum picked up on this work and continued to develop it, finally submitting a draft for standardization within the SMPTE. This standard, SMPTE 2022-2 (3), describes a method for mapping MPEG-2 Transport Streams onto IP networks using RTP and UDP. The document was approved in 2007 and is the most common standard deployed today for professional video transport applications.

So these three developments — broadcast profiles; a mapping of JPEG 2000 Packetized Elementary Streams to MPEG-2 Transport Streams; and wide availability of MPEG-2 TS over IP transport equipment — mean it is now possible to transport JPEG 2000 over IP networks.


Starting with a video source, the image is compressed using a compression engine. This engine is configured to one of the Broadcast Contribution profiles in ITU-T Amendment 3. The compression engine produces a JPEG 2000 PES. This stream is then fed to an MPEG-2 encapsulator. The encapsulator uses the mapping rules established in the MPEG-2 specification, Amendment 5, to map the PES onto an MPEG-2 TS. This MPEG-2 TS is fully compliant with MPEG-2 specifications because, from the outside, it looks just like a normal MPEG-2 Transport Stream. As such, the output of the MPEG-2 encapsulator can be fed into a SMPTE 2022-2 compliant video transport device. This device encapsulates the MPEG-2 TS in standard RTP and UDP packets, and then those packets are wrapped in IP packets. These IP packets can now be fed into an IP network.
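The transport-stream step at the end of that chain is easy to visualise: an MPEG-2 TS is a sequence of fixed 188-byte packets, each starting with a 4-byte header carrying the 0x47 sync byte, a packet identifier (PID) and a 4-bit continuity counter. The sketch below is a deliberately simplified packetizer: it sets only the payload_unit_start flag and pads the final packet with 0xFF fill, where a real multiplexer would use adaptation-field stuffing:

```python
TS_PACKET_SIZE = 188
HEADER_SIZE = 4
PAYLOAD_SIZE = TS_PACKET_SIZE - HEADER_SIZE

def packetize(pes: bytes, pid: int) -> list:
    """Split one PES into 188-byte TS packets (simplified: payload-only
    packets, 0xFF tail padding instead of adaptation-field stuffing)."""
    packets = []
    cc = 0  # 4-bit continuity counter, increments per packet on this PID
    for offset in range(0, len(pes), PAYLOAD_SIZE):
        chunk = pes[offset:offset + PAYLOAD_SIZE]
        pusi = 0x40 if offset == 0 else 0x00  # payload_unit_start_indicator
        header = bytes([
            0x47,                        # sync byte
            pusi | ((pid >> 8) & 0x1F),  # flags plus the top 5 bits of the PID
            pid & 0xFF,                  # bottom 8 bits of the PID
            0x10 | cc,                   # "payload only" plus the counter
        ])
        packets.append(header + chunk + b"\xff" * (PAYLOAD_SIZE - len(chunk)))
        cc = (cc + 1) % 16
    return packets

packets = packetize(bytes(256), pid=0x101)  # a dummy 256-byte "PES"
# 256 bytes of payload span two 184-byte payload slots -> two TS packets
```

Because the receiving equipment sees only this generic packet structure, the SMPTE 2022-2 device downstream never needs to know the payload is JPEG 2000.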

You might wonder why we take a relatively new compression algorithm such as JPEG 2000 and encapsulate it in MPEG-2. There are several reasons. First and foremost, there are already specifications for encapsulating many different audio formats into MPEG-2 Transport Streams. Remember: JPEG 2000 says nothing about audio. Using MPEG-2 TS allows us to transport and present the audio alongside the JPEG 2000 compressed video using well-known audio standards. This approach also allows us to leverage the existing SMPTE 2022-2 MPEG-2 TS over IP standard. Finally, there are no technical issues in MPEG-2 TS that need to be fixed in this application space, so re-using Transport Streams rather than inventing something entirely new seems like a good solution.

So, the good news is that the time is ripe for development of an interoperable, open solution for the transport of JPEG 2000 video and audio over IP networks. The standards exist, and there is a clear path forward. But there are a few issues that need to be addressed.

JPEG 2000 has been around for quite some time. As such, some proprietary JPEG 2000 over IP transport solutions have already been created. Of course, these were developed in response to customer demand, so existing implementations may need to be changed. Another issue is that while the broadcast contribution profiles in Amendment 3 go a long way toward interoperability in the JPEG 2000 PES space, recent analysis suggests that, without further definition, implementations based on these profiles will not be interoperable. Finally, until the industry actually tries to connect devices from different manufacturers together, interoperability cannot be assured.

Fortunately, the industry is becoming aware of these issues, and steps are being taken to begin work in earnest on interoperable, open transport of professional JPEG 2000 images over IP networks. I would expect to see some developments around this in the first half of the coming year.

  1. “Profiles for Broadcast Applications,” ISO/IEC 15444-1:2004 Amd.3:2010 | Rec. ITU-T T.800 Amd.3 (06/2010), ITU, Geneva, Switzerland, 2010
  2. “Amendment 5: Transport of JPEG 2000 Part 1 (ITU-T Rec. T.800 | ISO/IEC 15444-1) Video over ITU-T Rec. H.222.0 | ISO/IEC 13818-1”
  3. SMPTE ST 2022-2:2007, “Unidirectional Transport of Constant Bit Rate MPEG-2 Transport Streams on IP Networks”

By Brad Gilmer, Broadcast Engineering

Streaming Standards for Worship

If you produce streaming video in the worship market and have your ear to the ground, you may be experiencing sensory overload right now. HTML5 is being promoted as a panacea for all plug-in-related woes; Adobe threw the mobile market into turmoil by ceasing development of the Flash Player, and there’s a new standard called DASH that supposedly will create a unified approach for adaptive streaming to all connected devices. Seems like getting that sermon out over the Internet has gotten a lot more complicated.

Well, maybe not. In this article I’ll describe what’s actually happening with HTML5, Flash, and DASH, and make some suggestions as to how to incorporate these changes into your video-related technology plans.

About HTML5
Let’s start with HTML5, which has one potential showstopper for many houses of worship: the lack of a live capability. Apple has a proprietary technology called HTTP Live Streaming that you can use to deliver to iDevices and Macs, but not Windows computers. So if live video is a requirement, HTML5 is out—at least for the time being.

If on-demand video is your sole requirement, HTML5 is a tale of two marketplaces: desktops and mobile. By way of background, HTML5-compatible browsers don’t require plug-ins like Flash or Silverlight to play web video. Instead, they rely on players that are incorporated into and shipped with the browser. Integrating video into a webpage for HTML5 playback uses a simple <video> tag rather than a complicated text string to call a plug-in.

Today, the installed base of HTML5-compatible browsers on desktop computers is only around 60 percent, which makes it an incomplete solution, particularly for houses of worship whose older parishioners may be technology laggards who don’t quickly upgrade to new browsers. However, in the link that you use to display your video, it’s simple to query the viewer’s browser to test for HTML5-playback capabilities. If the viewer’s browser is HTML5-compatible, the video will play in the HTML5 player. If not, you can code the page to “fall back” to the existing Flash Player or other plug-in, which will then load and play normally. While this sounds complicated, Flash fallback is totally transparent to the viewer and occurs in just a millisecond or two.

Why HTML5 first? Because as we’ll see in a moment, this is a very solid strategy for supporting Apple and Android devices. However, before jumping in, keep in mind that HTML5 is not as mature as Flash in several important respects. First, it lacks true streaming, or the ability to meter out video as it’s played, which is more efficient than progressive download. HTML5 also can’t adaptively stream or dynamically distribute multiple streams to your target viewers to best suit their connection speed and CPU power. It’s these two issues that the aforementioned DASH standard hopes to address.

However, the DASH standard doesn’t address HTML5’s biggest implementation hurdle: HTML5 browsers don’t all support a single compression technology, or codec. Specifically, Microsoft’s Internet Explorer 9 and Apple Safari include an HTML5 player for the H.264 codec, while Mozilla Firefox and the Opera browser support only Google’s open-source codec, WebM. Today, the Google Chrome browser includes both codecs, but Google has stated that it intends to remove the H.264 codec sometime in the future. It’s actually a bit worse than this sounds, because Firefox version 3.6, which still accounts for more than 5 percent of the installed base of desktop browsers, supports only a third codec, Ogg Theora.

To fully support the universe of HTML5-compatible browsers, you’d have to encode files in three formats and still fall back to Flash for viewers without HTML5-compatible browsers. Or you could just continue to solely support Flash and wait a year or two (or more) until the penetration rate of HTML5 browsers exceeds 95 percent, and then reevaluate.
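The codec arithmetic above can be expressed as a small coverage check. The support table below is my simplification of the late-2011 situation described in this article, not an authoritative compatibility matrix:

```python
# Simplified HTML5 codec support, per the browsers discussed above
HTML5_CODECS = {
    "Internet Explorer 9": {"h264"},
    "Safari": {"h264"},
    "Chrome": {"h264", "webm"},
    "Firefox": {"webm"},
    "Opera": {"webm"},
    "Firefox 3.6": {"theora"},
}

def covers(encodes, table):
    """True if every browser can play at least one of the chosen encodes."""
    return all(encodes & supported for supported in table.values())

two_formats = covers({"h264", "webm"}, HTML5_CODECS)              # leaves Firefox 3.6 out
three_formats = covers({"h264", "webm", "theora"}, HTML5_CODECS)  # full HTML5 coverage
```

Even the three-encode set only covers HTML5-capable browsers, which is why Flash fallback remains part of the plan for everyone else.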

If your only concern was desktop players, this might be a good strategy. Include mobile in the equation, however, and creating an HTML5 player with fallback to Flash might be a great strategy for your on-demand streams.

HTML5 on Mobile Devices
In the two key mobile markets, Apple and Android, HTML5 support is ubiquitous, as is support for the H.264 codec. So the simplest way to enable on-demand playback for Apple and Android devices is to add an HTML5 player on your website using only the H.264 codec, with fallback to Flash. Android and Apple devices would use the HTML5 player, as would desktop viewers running HTML5 browsers that support H.264 playback. All other desktop viewers would fall back to Flash.

While on the topic of mobile, let’s talk about Adobe’s recent mobile-related decision, starting with precisely what they decided to do. Here’s a quote from the Adobe blog that discusses this decision:

“Our future work with Flash on mobile devices will be focused on enabling Flash developers to package native apps with Adobe AIR for all the major app stores. We will no longer continue to develop Flash Player in the browser to work with new mobile device configurations (chipset, browser, OS version, etc.) following the upcoming release of Flash Player 11.1 for Android and BlackBerry PlayBook.”

Adobe will discontinue development of the Flash Player on mobile devices, pushing their key content producers to produce native apps for the mobile platforms. According to the blog post, Adobe will also continue development of the Flash Player on the computer desktop, focusing on markets where Flash “can have most impact for the industry, including advanced gaming and premium video.”

Why the decision to cease development for mobile? There are a number of reasons. The sheer number of Android device configurations made it very expensive to provide device-specific support. Now Adobe has passed the problem of ensuring device compatibility to its app developers. In addition, Adobe was locked out of the iOS market—which doesn’t support Flash—and Microsoft has also said that they won’t enable plug-ins like Flash in its upcoming Windows 8 tablet OS.

In contrast, Android, iOS and Windows 8 all support HTML5, making it the best multiple-platform solution for deploying browser-based content to the mobile market. Adobe saw the writing on the wall and decided to exit a market that they couldn’t affordably and adequately serve.

What’s the key takeaway? At a higher level, simple video playback of on-demand content is becoming commoditized, and it can be performed just as well in HTML5 as in Flash. In addition, HTML5 has much greater reach, allowing one player to serve the mobile and desktop markets—though Flash fallback is clearly necessary on the desktop.

Adobe is positioning Flash as a premium technology that offers many advantages that HTML5 can’t offer, including all those mentioned above. Other noteworthy features that HTML5 doesn’t offer include multicasting and peer-to-peer delivery, which are particularly important in enterprise markets. However, if these features aren’t important to your organization, it’s time to start implementing HTML5 for your on-demand streams—if only with H.264 support to serve the iOS and Android markets.

About DASH
DASH stands for Dynamic Adaptive Streaming over HTTP, and it’s a standard from the International Organization for Standardization (ISO) that one day may provide standards-based live and on-demand adaptive streaming to a range of platforms—including mobile, desktop, and over-the-top (OTT) television consoles. It’s a web producer’s dream, since by supporting a single technology, your video can play on all these platforms. The specification enjoys significant industry support, with more than 50 companies contributing to it.

Unfortunately, there are some implementation hurdles that may delay or even derail some of this promise. First, at this point it’s unclear whether DASH will be royalty free. Many companies have contributed intellectual property to the specification; some—such as Microsoft, Cisco, and Qualcomm—have waived any royalties from their contributions, though this is not yet universal. In fact, the current status of DASH in this regard is so uncertain that Mozilla has announced that it’s “unlikely to implement at this time.” Obviously, taking Firefox out of the equation limits the effectiveness of DASH in the HTML5 marketplace.

In addition, while some companies such as Microsoft have publicly announced that they will support DASH once finalized, two critical companies—Adobe and Apple—have not done the same. This isn’t unusual in its own right since neither company typically discusses unannounced products. Still, because these companies dominate the mobile and desktop browser plug-in markets, it’s tough to plot a strategy until you know their intent.

The Bottom Line?
If you’re broadcasting live, HTML5 isn’t an option in the short term. DASH may change things in early 2012, but until we’re certain which platforms will support it and when, you shouldn’t change your existing strategy. For most producers, live broadcasting means one stream (or set of streams) for Flash and another for iOS devices using Apple’s HTTP Live Streaming (HLS). In this regard, Flash should be available for Android devices for the foreseeable future, and Android 3.0 devices should be able to play HLS streams.

For your on-demand streams, it may be time to consider switching to an HTML5-first approach with H.264 support and Flash fallback. This is the most efficient mechanism for reaching iOS, Android, and other HTML5-compatible mobile devices while continuing to support legacy desktop browsers.

By Jan Ozer, Sound & Video Contractor

Netflix Sees Cost Savings in MPEG DASH Adoption

"The biggest advantage to us of a standard like MPEG DASH is that everything can be encoded one way and encapsulated one way, and stored on our CDN servers just once. That's a benefit both in terms of saving our CDN costs from a storage perspective and a benefit because you have greater cache efficiency," said Mark Watson, senior engineer for Netflix.

Watson made his comments in a red carpet interview at the recent Streaming Media West conference in Los Angeles, shortly before taking part in a panel on the MPEG DASH specification. MPEG DASH would be a great help to Netflix, he said, because then it could avoid saving several different copies of its entire movie and TV show library.

While there are several different profiles defined in MPEG DASH, Netflix will use the on-demand profile, Watson said, because all of its online content is on-demand. Between the two types of stream segments defined -- MPEG-2 Transport Streams and fragmented MP4 files -- Netflix sides with fragmented MP4. It works well for adaptive streaming and is simpler, he offered.

Netflix, Watson said, contracts with multiple CDNs and allows the client devices to determine which works best for them at any time. The company is also sensitive to the amount of traffic it's putting across networks.


By Troy Dreier, Streaming Media

The MPEG-DASH Standard for Multimedia Streaming Over the Internet

A white paper by Microsoft.

Watching Video Over the Web

Two interesting white papers by Cisco: part 1 and part 2.

Forget 3D, Here Comes the QD TV

Researchers have developed a new form of light-emitting crystals, known as quantum dots, which can be used to produce ultra-thin televisions.

The tiny crystals, which are 100,000 times smaller than the width of a human hair, can be printed onto flexible plastic sheets to produce a paper-thin display that can be easily carried around, or even onto wallpaper to create giant room-size screens.

The scientists hope the first quantum dot televisions – like current flat-screen TVs, but with improved colour and thinner displays – will be available in shops by the end of next year. A flexible version is expected to take at least three years to reach the market.

Michael Edelman, chief executive of Nanoco, a spin out company set up by the scientists behind the technology at Manchester University, said: "We are working with some major Asian electronics companies. The first products we are expecting to come to market using quantum dots will be the next generation of flat-screen televisions.

"The real advantage provided by quantum dots, however, is that they can be printed on to a plastic sheet that can rolled up. It is likely these will be small personal devices to begin with.

"Something else we are looking at is reels of wallpaper or curtains made out of a material that has quantum dots printed on it. You can imagine displaying scenes of the sun rising over a beach as you wake up in the morning."

Although Mr Edelman was unable to reveal which companies Nanoco are working with due to commercial agreements, it is believed that electronics giants Sony, Sharp, Samsung and LG are all working on quantum dot television technology.

Most televisions now produced have a liquid-crystal display (LCD) lit by light-emitting diodes (LED), with the screen two to three inches thick. Replacing the LEDs with quantum dots could reduce the thickness.

Shortages of rare earth elements needed in these displays have driven up production costs, driving electronics firms to look for new ways of making them. Quantum dots are made from cheaper semi-conducting materials that emit light when energised by electricity or ultraviolet light.

By changing the size of the crystals, the researchers found they can manipulate the colour of light they produce.
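The size-to-colour relationship the researchers exploit comes from quantum confinement, and a rough feel for it can be had from a particle-in-a-box estimate. The sketch below uses only the confinement term of the Brus model with approximate CdSe material constants; both the constants and the omission of the Coulomb term are simplifying assumptions, so the wavelengths are indicative only:

```python
import math

H_BAR = 1.0546e-34  # reduced Planck constant, J*s
M_E = 9.109e-31     # electron rest mass, kg
E_CHG = 1.602e-19   # J per eV

def emission_wavelength_nm(radius_m, bulk_gap_ev=1.74, me=0.13, mh=0.45):
    """Approximate emission wavelength of a quantum dot of the given radius,
    from the confinement term of the Brus model (Coulomb term ignored).
    Default material constants are rough CdSe values."""
    confinement_j = (H_BAR**2 * math.pi**2) / (2 * radius_m**2) * (
        1 / (me * M_E) + 1 / (mh * M_E))
    gap_ev = bulk_gap_ev + confinement_j / E_CHG
    return 1239.8 / gap_ev  # hc expressed in eV*nm

# Smaller dots confine the carriers more tightly, widening the gap
blue = emission_wavelength_nm(2.0e-9)
green = emission_wavelength_nm(2.5e-9)
red = emission_wavelength_nm(4.0e-9)
```

Shrinking the crystal raises the band gap and shifts the emission toward blue, which is the effect the "twanged ruler" analogy describes.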

Placing quantum dots on top of regular LEDs can also help to produce a more natural coloured light, and Nanoco is working to produce new types of energy-efficient light bulbs. They also hope to produce solar-powered displays using quantum dots.

Professor Paul O'Brien, an inorganic materials chemist at the University of Manchester who helped to develop the quantum dot technology, said: "By altering the size of the crystals we are able to change the colour they produce.

"It is rather like when you twang a ruler on a desk and the noise changes, the same is happening with the light produced by the quantum dots.

"As the colours are very bright and need little energy it has generated huge excitement in the electronics industry – the quality of display they can produce will be far superior to LCD televisions."

By Richard Gray, The Telegraph

Gracenote Readies its Own Second-Screen Platform

Gracenote is about to introduce its very own second-screen content recognition platform at CES. The company, which became a wholly owned subsidiary of Sony three years ago, aims to compete with similar solutions from Yahoo’s IntoNow and social check-in services like Miso and GetGlue.

Gracenote’s advanced content recognition technology makes it possible to identify both on-demand movies as well as live TV content. Gracenote President Stephen White gave me a quick demo of the technology last week in San Francisco.

Gracenote’s service is comparable to IntoNow in that it uses a tablet’s microphone to listen to the audio track of what’s playing on TV. It then checks the resulting fingerprint against a growing database of video content to deliver information to the tablet — a process that takes five seconds or less, according to White.

He told me Gracenote’s service will eventually be able to deliver rich scene-level metadata for tens of thousands of movies, going as far as offering links to buy any of the products shown on-screen. Information for hundreds of movies will be available when the service launches in earnest next March.

Contextual information for live TV isn’t quite as deep, but Gracenote uses its partnership with Tribune Media Services to cross-reference TV Guide data with what a user is currently watching. Gracenote wants to offer its advanced content recognition platform to CE manufacturers, broadcasters and developers of second screen apps, White explained.

This is technically not the first time the company has powered this kind of second-screen experience: Gracenote’s subsidiary Gravity Mobile built the technology behind the ABC second-screen apps that are powered by Nielsen. However, those apps are based on watermarks incorporated into the shows ABC builds its apps for, which obviously requires the cooperation of the broadcaster.

Gracenote’s fingerprinting approach, on the other hand, works with any kind of content. That means third parties could also use this kind of technology to sell ads against content they don’t own or control. “It’s definitely disruptive,” admitted White. However, he said broadcasters for the most part are very interested in second-screen solutions, if only to defend their own ad dollars. Said White: “The broadcasters realize that it’s gonna happen with or without them.”

By Janko Roettgers, GigaOM

1080p and 3-D Developments

Viewers and producers both seek a more realistic viewing experience from cinema and television systems. There are several ways to make television more immersive; three paths being followed are increasing the field of view, adding depth perception and improving motion rendition.

Wider Field of View
We view television as a small 2D window on the world, restricting us to the role of a voyeur rather than “being there.” The cinema has toyed with Cinerama, IMAX and Omnimax to give a very wide field of view. In the case of Omnimax, the field of view matches our peripheral vision.

Current HDTV was originally conceived to increase the field of view from the roughly 10 degrees of SD to around 30 degrees. However, binocular human vision subtends over 120 degrees. The UHDTV or Super Hi-Vision (SHV) project aims to increase the field of view to 80 to 100 degrees by raising the resolution to 8K.
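Those angles follow from simple geometry: the horizontal field of view depends on the screen's aspect ratio and the viewing distance, conventionally measured in picture heights. The design distances below (about 7H for SD, 3H for HD and 0.75H for SHV) are commonly quoted rules of thumb, assumed here rather than taken from this article:

```python
import math

def horizontal_fov_degrees(aspect_ratio, distance_in_heights):
    """Horizontal field of view of a screen whose height is 1 unit,
    viewed from a distance expressed in picture heights."""
    width = aspect_ratio  # width relative to a height of 1
    return math.degrees(2 * math.atan(width / (2 * distance_in_heights)))

sd = horizontal_fov_degrees(4 / 3, 7)       # SD at ~7 picture heights
hd = horizontal_fov_degrees(16 / 9, 3)      # HD at ~3 picture heights
shv = horizontal_fov_degrees(16 / 9, 0.75)  # SHV at ~0.75 picture heights
# sd is about 11 degrees, hd about 33, shv just under 100 -- in line
# with the 10/30/80-100 degree figures above
```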


Depth Perception
One route to increased realism and immersion is to add depth information to video, with stereoscopic 3-D (S3D) being the first implementation. S3D is a long way from true 3-D, in that it creates a planar presentation at a fixed viewpoint and reproduces depth through binocular disparity. The depth budget of the production has to be managed so as to avoid the eyestrain that results from objects being placed away from the display plane.

To achieve a true representation of depth in the scene, the television system would have to reproduce the light field. Conventional flat-panel displays only carry intensity and color information at each pixel. A light field display also carries information about the direction of light rays, which allows objects to be viewed in front of or behind the plane of the display. However, S3D is an affordable compromise, and consumer versions of light field systems are a long way off.

Scanning Rates
One side effect of increasing the field of view is that our eyes are more sensitive to flicker at the periphery of vision. Scanning rates of 24fps, 25fps or 30fps date back to pre-World War II technologies in film and television cameras. Those were the minimum rates that would work, but they have always suffered from motion artifacts.

With film, it is temporal aliasing (the car wheel rotating the wrong way), and with television the well-known artifacts of interlace. For high-resolution systems, 25fps is simply not adequate. We have already seen the way forward with 720p/60 broadcasts in the U.S. and high-frame-rate cinema — pioneered by Douglas Trumbull with the 60fps Showscan film system. It has now become much easier with digital cinema.

The current recommended EBU HDTV systems include system 4, 1080p/50, although broadcasters have not yet adopted the system for transmission. Research by the Super Hi-Vision team at NHK indicates a minimum frame rate of 120fps is needed to avoid flicker on the large screen of their system and to give motion portrayal worthy of the static resolution.

As viewers' expectations of picture quality increase, current 1080i/25 systems are showing the extent of their limitations. The biggest is interlace, a 1920s technology that is stubborn in its refusal to lie down. We have the curious situation where receiver manufacturers market sets as 1080p even though the decoders only support 1080i/25 or 720p/50. The panels themselves are indeed progressive, but that is simply how flat panels work, not a reflection of what the broadcast chain delivers.

Looking Forward
We are where we are with the crude technology of interlace and all its attendant artifacts. Even if it could be argued that viewers don't notice the artifacts, one fact is inescapable: interlaced video is more complex to encode to MPEG standards, with the result of a lower compression efficiency than progressive-scan video. Interlaced systems must also reduce the vertical resolution of graphics — anti-aliasing — to avoid interline twitter, with the result that the potential resolution of the system is halved.

Comprehensive viewing tests by the EBU have demonstrated that 1080p/50 can be transmitted at the same bit rate as existing 1080i/25 services, but with a better picture quality on large displays.
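The scale of what those viewing tests demonstrate is easy to sanity-check with a back-of-envelope comparison of raw pixel rates; the arithmetic below is our own illustration, not part of the EBU results:

```python
# Raw (uncompressed) pixel rates of the two formats.
# 1080i/25 delivers 50 fields per second, each 1920 x 540 pixels.
interlaced_px_per_s = 1920 * 540 * 50       # 51,840,000 pixels/s

# 1080p/50 delivers 50 full frames per second of 1920 x 1080.
progressive_px_per_s = 1920 * 1080 * 50     # 103,680,000 pixels/s

print(progressive_px_per_s / interlaced_px_per_s)  # 2.0
```

In other words, 1080p/50 carries twice the raw pixel rate, yet the EBU tests found it can be transmitted at the same bit rate as 1080i/25, precisely because progressive material compresses so much more efficiently.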

Most television receivers do not have the necessary performance in the decoder to support the AVC Level 4.2 that is required for 1080p/50 signals, and this remains an obstacle for migration to all-progressive services. It will change as receivers become more sophisticated and add support for DVB-T2 and for AVC level 4.2.

Producers look to maintain the value of their investments into the future. We already see SD channels commissioning HD programming with an eye on the future. New formats like S3D are gaining a niche following among viewers, but Super Hi-Vision — 4K and 8K — is going to set a new benchmark for video quality.

Many broadcasters have an infrastructure that is largely 1.5Gb/s, for 1080i/25 or 720p/50, or even 270Mb/s standard definition. New builds are now predominantly 3Gb/s, so the world is gradually moving to a position where 1080p/50 is supported by acquisition and post-production equipment. However, there remains a huge legacy of interlaced material in the program archives.

Mastering in 1080p/50 provides a file that can be transformed to 1080i/25 and 720p/50 without the artifacts inherent in crossconverting current interlaced or 720-line masters. Furthermore, much television content is consumed on inherently progressive devices like LCD TVs, PCs and tablets.

The 3Gb/s infrastructure also lends itself to the carriage of stereo signals, as a 3Gb/s link can operate as two SMPTE 292 1.5Gb/s links, one for left and one for right. These could be 720p/50, 1080p/25 or 1080i/25.

As the NHK-sponsored project to find new levels of realism progresses, every aspect of the production chain, from cameras to displays, is evolving to support this high-resolution standard. There are many obstacles yet to overcome, with the delivery of such high data rates to the viewer being perhaps the most challenging. The uncompressed SHV signal is around 48Gb/s, and using current compression techniques it would need a bandwidth of up to 400MHz, beyond current satellite transponders, FTTH systems or optical discs.
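The ~48Gb/s figure can be reproduced with rough arithmetic. The bit-depth and sampling assumptions below are ours for illustration (actual SHV uses deeper bit depths and different chroma sampling, which shifts the exact number):

```python
# Back-of-envelope uncompressed bit rate for 8K Super Hi-Vision,
# assuming 7680 x 4320 at 60fps with 24 bits per pixel
# (e.g. 8-bit RGB, or equivalently 12-bit 4:2:2).
width, height, fps = 7680, 4320, 60
bits_per_pixel = 24

bitrate_bps = width * height * fps * bits_per_pixel
print(round(bitrate_bps / 1e9, 1))  # 47.8, i.e. roughly the quoted 48Gb/s
```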

3-D Reproduction
The most basic form of 3-D television is fixed view stereoscopic. It gives the illusion of depth, but every viewer gets the same view, irrespective of his or her position relative to the display. Stereo 3D effectively delivers a single view from a pair of cameras directly to the left and right eyes via separate, respective channels. Potential advances in technology could realize a free viewpoint, where the scene changes as the viewer moves from side-to-side.

Early coding schemes have used simple delivery of the left-right streams of a stereoscopic system to a display by spatially multiplexing left and right signals into the existing television frame. The display demultiplexes and displays the two channels using temporal multiplexing and shuttered eyewear, or through passive techniques based on polarization.

Frame Compatible S3D
Two general forms of spatial multiplexing are used, called Side-by-Side (SbS) and Top-and-Bottom (TaB). This is called Frame Compatible Plano-Stereoscopic 3D-TV.

Frame-compatible S3D sacrifices horizontal resolution (SbS) or vertical resolution (TaB) in the process of spatially multiplexing the L and R image streams. The first-generation frame-compatible implementation is not compatible with a 2D service, so it requires a simulcast to serve 3-D and 2D viewers. Signaling extensions, which already exist within the AVC standards, would allow future STBs to select, say, the left channel for the 2D viewer.
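A toy sketch of the two packings, with plain Python lists standing in for video frames. Real systems low-pass filter before decimating rather than simply dropping columns or lines, as done here for clarity:

```python
def side_by_side(left, right):
    """SbS packing: each eye keeps every second column (half the
    horizontal resolution), and the halves sit side by side."""
    return [lrow[::2] + rrow[::2] for lrow, rrow in zip(left, right)]

def top_and_bottom(left, right):
    """TaB packing: each eye keeps every second line instead
    (half the vertical resolution)."""
    return left[::2] + right[::2]

# 4x4 toy frames: 'L' pixels for the left eye, 'R' for the right.
L = [["L"] * 4 for _ in range(4)]
R = [["R"] * 4 for _ in range(4)]

sbs = side_by_side(L, R)    # each row: ['L', 'L', 'R', 'R']
tab = top_and_bottom(L, R)  # top half all-'L' rows, bottom half all-'R'
```

Either way the packed frame has the same dimensions as a single 2D frame, which is what lets it travel through an unmodified HD broadcast chain.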

Service Compatible S3D
There are alternative methods of transmission that provide a service compatible with 2D viewers. One is Scalable Video Coding (SVC), an extension of AVC. This allows additional data to be carried that basic decoders can ignore.

The depth difference information could be carried as an additional channel to the base 2D channel, and suitable STBs could use that to reconstruct the left and right signals. MPEG-C Part 3 (ISO/IEC 23002-3) specifies a 2D+Depth coding scheme.

There is much redundancy between the left and right views, and this can be exploited in compression schemes much in the same way as the interframe compression in long-GOP MPEG.

The DVB has released a 3DTV specification (A154) detailing frame-compatible 3DTV service standards. Future compliant receivers could use signaling carried in the AVC Supplemental Enhancement Information (SEI) to automatically manage mixed 2D and 3-D broadcasts for both 3-D and 2D-only receivers. The specification is aligned with HDMI 1.4 and supports TaB and SbS multiplexing.

Multiview Video Coding (MVC)
Another extension to AVC, the Multiview Video Coding (MVC), allows multiple viewpoints to be encoded to a single bit stream, and then decoded to 2D, stereo or multi-view to suit the display device. Typically, this can be used with two viewpoints (stereo high profile), with true multiple viewpoint capture (multiview high profile) to be used in the future. MVC uses temporal and inter-view prediction to remove redundant information across views.

The Blu-ray Disc Association updated its specification to support 3-D using MVC encoding. The format allows existing players to decode a 2D signal from 3-D discs. Through the use of MVC, the BDA claims it can achieve the quality of separate L/R streams at 75 percent of the data rate.

As broadcasters move to a 3Gb/s infrastructure, and if and when a large proportion of receivers support H.264 level 4.2, the way is clear to move to 1080p/50.

The pace of change is accelerating, and for consumers each change (DVB-T to T2, MPEG-2 to AVC, 2D to 3-D) involves a new STB. The devices are rarely forward-compatible, and they are designed to a price where every cent counts. The days of a receiver lasting for a decade or more look set to be replaced by constant obsolescence. Issues remain as to how the data rates of an 8K system can be delivered to consumers, although many would say 4K would suffice for the foreseeable future.

By David Austerberry, Broadcast Engineering

Online Video Services and DRM Technology

Major contemporary trends in online video services:

  • Multiscreen: anytime access from any device.

  • A variety of business models: many options to access video content (online viewing, pay-per-view, download, in-stream ads).

  • Technological complexity – a variety of video platform modules and features (content load, processing, protection and delivery; video load balancing, video playback, etc.).

  • Providing customers with access to premium content, and the related need for security tools: the major movie studios impose heavy demands on security as they offer very expensive content (multi-million-dollar budgets are common for today’s movies).

A simplified design of a modern video platform for Internet broadcast involves content preparation, protection and distribution to end-user devices. Usually, the source content first enters a transcoding system (ingestion) that generates a set of videos for different end-user devices, at several bitrates per device (to enable adaptive bitrate delivery). The prepared content is then sent to the operator’s in-house video delivery network or to an external CDN provider’s infrastructure.

A simplified online video platform design
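The ingestion step described above can be sketched as follows, assuming a hypothetical four-rendition ladder and ffmpeg as the transcoder; the names, sizes and bitrates are illustrative only, not taken from any real platform:

```python
# Hypothetical rendition ladder: one entry per target device/bitrate.
ladder = [
    {"name": "mobile-low",  "size": "640x360",   "bitrate": "400k"},
    {"name": "mobile-high", "size": "960x540",   "bitrate": "800k"},
    {"name": "desktop",     "size": "1280x720",  "bitrate": "1800k"},
    {"name": "tv",          "size": "1920x1080", "bitrate": "4000k"},
]

def transcode_commands(source, ladder):
    """Build one encode command per rendition (a sketch, not a full pipeline)."""
    return [
        f"ffmpeg -i {source} -c:v libx264 -s {r['size']} "
        f"-b:v {r['bitrate']} {r['name']}.mp4"
        for r in ladder
    ]

for cmd in transcode_commands("master.mov", ladder):
    print(cmd)
```

A production ingest system would add audio handling, segmenting for adaptive streaming, and the DRM encryption step discussed below; this sketch only shows the fan-out from one master to many renditions.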

The most popular options for delivering content to the user are:
  • Downloading a video file to be viewed in a special application (HTTP Download).

  • Live streaming over dedicated protocols (RTSP, RTMP, MPEG2-TS), with the stream played back as it arrives.

  • Downloading a file in fragments, stitching the fragments together in the video player, and viewing the content while it downloads (Adaptive Streaming over HTTP).
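The fragment-based approach can be sketched as below. The rendition names, bitrates and stubbed fetch function are hypothetical, and a real player would drive this loop from a manifest (HLS, Smooth Streaming, etc.) rather than a bare segment counter:

```python
import time

def choose_rendition(measured_bps, renditions):
    """Pick the highest-bitrate rendition the measured throughput can sustain."""
    usable = [r for r in renditions if r["bps"] <= measured_bps]
    if not usable:                                   # too slow for everything
        return min(renditions, key=lambda r: r["bps"])
    return max(usable, key=lambda r: r["bps"])

def play(fetch_segment, renditions, n_segments):
    """Download numbered fragments one by one, re-estimating throughput
    after each download, and stitch them into a single byte stream."""
    stream = b""
    measured_bps = min(r["bps"] for r in renditions)  # start conservatively
    for i in range(n_segments):
        rendition = choose_rendition(measured_bps, renditions)
        t0 = time.monotonic()
        data = fetch_segment(rendition["name"], i)    # an HTTP GET in real life
        elapsed = max(time.monotonic() - t0, 1e-6)
        measured_bps = len(data) * 8 / elapsed        # update the estimate
        stream += data
    return stream

# Hypothetical two-rendition set and a stubbed fetch, for illustration only.
renditions = [{"name": "low", "bps": 400_000}, {"name": "high", "bps": 1_800_000}]
stream = play(lambda name, i: b"seg", renditions, n_segments=3)
```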

If the content needs strong protection, it has to be encrypted with a DRM solution before being transferred to the delivery network. When playback is attempted, the video player detects the encrypted data and requests authentication and a decryption key from the license server. Video analytics is often implemented at the level of the player running on the client device, covering collection of detailed viewing data, monitoring of quality parameters, and so on.

Specifics of Content Protection
Almost all DRM solutions are built on a unified architecture and consist of two parts: the server part (business logic) and the client part (player). The server part itself consists of two modules: the encoder, which prepares (encrypts) the content, and the license server, which issues content playback licenses to users (players).

In terms of content delivery networks, there is no difference between secure and open content. DRM solutions typically combine symmetric content encryption with protected key exchange: the content itself is encrypted once with a master key, while each user obtains that key through unique, session-specific keys.

In the case of live streaming, key rotation mechanisms are typically used to make content protection more robust: the key is changed at a predefined time interval. For VOD, different keys are used for different content units and, if the copyright holder allows it, license caching is also performed. That is, the key is stored on the local machine for a certain amount of time or a given number of views, so that the application does not have to contact the license server at each viewing.
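A toy illustration of time-based key rotation, using HMAC-based key derivation as an assumed mechanism; real DRM systems deliver each rotated key to the client via the license server rather than deriving it client-side like this:

```python
import hashlib
import hmac

ROTATION_PERIOD_S = 60  # rotate the content key every minute (illustrative)

def content_key(master_key: bytes, stream_id: str, now_s: float) -> bytes:
    """Derive the 128-bit content key for the current rotation window.

    All requests within one window map to the same key; the key
    changes automatically once the window rolls over.
    """
    window = int(now_s // ROTATION_PERIOD_S)
    msg = f"{stream_id}:{window}".encode()
    return hmac.new(master_key, msg, hashlib.sha256).digest()[:16]

k1 = content_key(b"master-secret", "live-1", 0.0)
k2 = content_key(b"master-secret", "live-1", 59.0)  # same window as k1
k3 = content_key(b"master-secret", "live-1", 60.0)  # next window, new key
```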

All these parameters are usually referred to as a Content Protection Policy, which can differ between major copyright holders and even between content units.

Major DRM Vendors
There are many Digital Rights Management systems on the market. Each of them has its strengths and weaknesses. The situation is exacerbated by a high degree of segmentation on the end-user device market: Windows (with multiple browsers), MacOS, iOS, Android, TV sets and set-top boxes with their relevant operating systems and technologies.

With such a variety of market players, it is quite difficult to agree on common standards and technologies. As a result, multiple groups of technical solutions emerge, delaying the emergence of a universal technology for the time being.

The most popular content protection tools used by the online video services are the following:
  • Adobe Flash Access. Thanks to the popularity of Adobe Flash (most video on the Web is Flash-based), this is a natural solution for protecting content in Web browsers, on PCs, and on Android devices.

  • Microsoft PlayReady is the successor of Windows Media DRM, one of the first content protection solutions. That is why the Microsoft PlayReady SDK comes preinstalled on most Connected TV devices (LG, Samsung, etc.) and, of course, on Windows (Media Player).

  • Widevine, now part of Google, is a developer of content protection solutions. Its popularity is due to its support for a broad range of consumer devices (TVs, game consoles, iOS devices).

  • Marlin is one of the few DRM solutions supported in Philips TVs and Sony PS3 and PSP devices. It is notable for its reliance on open standards and its SaaS model support (paying per transaction for license issuance).

Adobe Protected Streaming deserves special attention. It implements RTMPE (RTMP Encrypted) to enable secure streaming and is an alternative to DRM: security is implemented at the transport level rather than at the content level, encrypting the multimedia stream between Flash Media Server and Flash Player.

Although this is not the most robust solution, it is approved by most major copyright holders. RTMPE is accompanied by additional content protection mechanisms at the application level, such as tokenization of content links (preventing direct access to content that bypasses the video service logic) and user authentication.
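Link tokenization is commonly implemented as an HMAC-signed, expiring URL. The sketch below assumes a shared secret between the video service and the edge server; all names and parameters are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"cdn-shared-secret"  # hypothetical secret shared with the edge server

def sign_url(path: str, ttl_s: int = 300, now=None) -> str:
    """Append an expiry timestamp and an HMAC token to a content URL."""
    expires = int((now if now is not None else time.time()) + ttl_s)
    token = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&token={token}"

def check_url(path: str, expires: int, token: str, now=None) -> bool:
    """Edge-side check: reject expired links and links with a bad token."""
    if (now if now is not None else time.time()) > expires:
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

url = sign_url("/video/42.mp4", ttl_s=300, now=1_000)
# url has the form: /video/42.mp4?expires=1300&token=<hex hmac>
```

Because the token covers both the path and the expiry time, a leaked link stops working after the TTL and cannot be rewritten to point at other content.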

Compared to full-scale DRM, this approach is several times cheaper and easier to launch (minimal configuration complexity). In addition, users do not need to install any DRM component. However, as new distribution technologies spread (such as Adaptive Streaming over HTTP), operators tend to abandon this approach in favor of the more universal DRM. There are also tools capable of cracking RTMPE.

DRM Solution Architecture
Almost any DRM solution consists of three parts:
  • The content encryption service (server) pre-encrypts the content, so that it travels over open Web channels in protected form only. Most solutions use AES-128. Many vendors support HSM modules to accelerate encryption and offload the main CPU.

  • The licensing service (server) decides whether to issue a video decryption key, in accordance with the business logic (content distribution rules). This includes checking for pending payments or verifying that the viewer's balance is sufficient. In most cases, this is an application server (for instance, Java-based).

  • The SDK (a set of software libraries) handles protected content in the video player. It integrates the logic of interacting with the licensing service into the player and decrypts the video before playback. The SDK also implements additional security features: control of access to the video player memory, protection of analog and digital outputs, and detection of hacker attacks. It enforces all technical and logical restrictions set on the server for the content and a particular user, such as the ability to record to a disk or mobile device.

In short, the DRM system takes content and access-restriction business rules as its input; the protected output can then be played back on a wide range of consumer devices.

As noted above, delivery of DRM-protected content is independent of the protection method and is determined primarily by the business model. Hence, the same protected content can be made downloadable (e.g., via P2P networks, to minimize cost) or offered for online streaming or viewing. However, when selecting a DRM system, you should check which delivery models and access restriction models a given solution supports.

DRM algorithm for streaming video

DRM algorithm for downloadable video

The figures above show that, in DRM terms, there is no difference between streaming and downloadable video. As a rule, it comes down to using different authentication algorithms (identifying whether a download or just online viewing has been paid for).

DRM Vendor Selection
A critical issue in choosing a content protection solution is the range of supported consumer devices. Each online video service has its own priorities among consumer platforms. Here we would like to cover just the very basics of running content security solutions on various platforms:
  • Web browsers on Windows / MacOS / Linux
  • Desktop applications
  • Web browsers on Android
  • Android applications
  • Web browsers on iOS
  • iOS-applications
  • Connected TV

The key differences between the various content protection solutions lie in which of the above platforms they support. For instance, Adobe Flash Access offers user-friendly content protection tools that do not require installation of additional plugins, as Flash Player is already installed almost everywhere (according to Adobe, on 99% of computers and 80% of mobile devices).

The need to install additional plugins can filter out potential service users who lack administrator rights or have limited computer literacy. In addition, there is a large community of Flash developers leveraging the Adobe technology, with many years of experience in creating cross-platform applications with rich interactive interfaces (which is quite important for online video services).

Other technologies have far smaller developer communities and suffer an acute shortage of technical experts. Still, it is worth noting that Widevine-based solutions provide efficient content protection for Samsung and LG TV sets and iOS devices. Historically, solutions based on Windows Media DRM have been supported on an even greater number of Connected TVs. Also, Verimatrix is unequaled among the STB vendors.

Recently, Adobe shook the industry by announcing the discontinuation of its Flash plugin in mobile browsers (in practice, this mostly means Android), citing its focus on mobile AIR application development and HTML5. To clarify its position, Adobe has just announced that it is now working on HTML5 DRM.

It looks like we are in for something fascinating. The rapid spread of the new HTML5 video format is held back by the absence of a uniform video codec standard (H.264, WebM, Ogg Theora) across browsers, and by the lack of a single reliable DRM system that meets copyright holders' requirements. Different DRM solutions support different video containers (mp4, wmv, ogg, mkv). Most video on the Web today uses mp4, so all DRM vendors focus on supporting it; with HTML5, however, mp4 support is far from universal.

In the case of desktop applications, the situation is much simpler: the vendors offer various SDKs to integrate DRM into video applications.

Content protection for iOS devices is a separate big topic. The issue is that Apple went its own way and preferred HTML5 with H.264 video to Flash. For video service developers, only one method of content protection is available: blockwise stream encryption with AES-128. This scheme is supported, for example, in the latest version of Adobe Flash Media Server 4.5.

However, developers have to implement their own license management mechanisms or use an SDK from Widevine or another vendor. In the case of browser playback, license management has to be implemented in JavaScript, which poses a substantial vulnerability. In the case of playback from a native application, an Objective-C SDK can be used, enabling a much higher level of protection.

In the world of music, DRM solutions showed no viability, especially after Apple iTunes abandoned FairPlay DRM. In our opinion, this is due to the significantly lower cost of audio content as compared to Hollywood blockbusters. As a result, copyright holders take a substantially harder stand on video. The well-known Western movie studios present a long list of technical requirements to companies seeking to distribute their digital content. Practically all IPTV operators and the major Runet websites offering premium content deploy content protection systems.

All DRM solutions are scalable and consume roughly the same resources, as the encryption algorithms used are uniform. When choosing a DRM solution, we recommend drawing up a prioritized list of target user devices. Supported devices (as well as containers and codecs), price, user experience, business models and content delivery technologies: all these make up the applicable selection criteria.

Source: DENIVIP Media

HDMI Alternative MHL: Your Phone’s Best-Looking Secret

A technology that’s cool, useful and about to be supported by millions of handsets – but no one has ever heard about it? Welcome to Tim Wong’s world. Wong is the president of the MHL Consortium, which is trying to establish Mobile High-Definition Link (MHL) as a new standard to connect mobile phones to big-screen TVs and other display devices. Think of it as a replacement for HDMI that’s especially suited for mobile, if you will.

MHL’s Awareness Problem
Wong’s chances to compete with HDMI aren’t actually all that bad: The MHL Consortium was founded by industry heavyweights like Nokia, Toshiba and Sony. It has about 80 licensees, and the technology is embedded in more than a dozen new handsets from Samsung, HTC and LG. Walk into any AT&T store in the U.S., and you’ll see seven MHL-capable devices on display, Wong told me.

But talk to the salespeople, and you’ll find that no one has ever heard of MHL. Handset manufacturers don’t advertise the feature, and CE makers don’t even bother to label the MHL port on their TV sets.

“This is my fundamental problem,” Wong told me this week. It’s also the reason why he is practically in non-stop campaign mode, traveling to Germany, China and throughout the U.S. to get the word out. “The awareness just isn’t there today,” Wong admitted.

Gaming, Presentations, 1080p Video
Wong is trying hard to change that, and I caught up with him in San Francisco during yet another day of press briefings. He gave me a demo of MHL, and the technology is actually pretty impressive: MHL-enabled handsets basically repurpose their micro USB port to connect to big screen TVs through special MHL cables. A few select TV sets already support MHL natively through a port that doubles as an HDMI input.

Once connected, MHL doesn’t just offer mirroring, but up to 1080p video playback and the capability to control your handset through your TV’s remote control. Essentially, your phone becomes something of a mobile set-top box. Don’t have an MHL-compatible TV yet? No worries, MHL-to-HDMI adapters that connect to any HD TV are readily available for around $15. Wong told me that MHL cables could eventually be as cheap as your plain old USB cable. Oh, and MHL is powered, so your phone gets charged while connected to the TV.

So why would you need a mobile set-top box? First of all, showing off your videos and photos everywhere you go is nice, and plugging a Netflix-capable device into your hotel TV sounds like a great idea as well. Wong also ran his presentation off his phone when we met, and showed me that the technology is great for bringing games like Angry Birds to the big screen.

Click to watch a quick demo of MHL featuring the Samsung Galaxy Nexus

Great for Emerging Markets
But the really interesting use cases may be beyond simple screen mirroring. CE manufacturers and cable companies like Comcast have long tried to get people to use Skype on their TV sets, but using the big screen for videoconferencing usually requires the purchase or rental of a separate webcam. Virtually all modern smartphones, on the other hand, already have a front-facing camera. Connect one to your TV via MHL, and you’ve got yourself a video conferencing setup on the cheap.

Wong also told me that BMW is looking to use MHL in the car, where the port could be used for entertainment as well as emergency services. Phone makers are looking to extend the experience to the big screen beyond mirroring, so you’d have multiple or extended desktops, just like when you connect your laptop to a second monitor. And MHL is a very interesting proposition for emerging markets, where cell phones increasingly replace computers, and people don’t have much money to spend on additional home entertainment devices.

Wireless isn’t Really All That Wireless
Of course, MHL also has its downsides. For one, phones are very intimate devices. You use them to carry around your own media and playlists and get alerts for personal messages on them. Sharing all of that on the big screen can be a daunting proposition. Secondly, your phone is somewhat out of reach when connected to the TV, even if you have a really long cord.

Apple’s AirPlay, DLNA and various wireless display technologies don’t have that stigma, but Wong dismissed them as battery drainers, especially when watching HD video.

“Fundamentally, wireless isn’t wireless,” he said. At some point, you always have to plug it in, if only to recharge. So why not just plug them into the TV?

Of course, battery issues haven’t stopped Apple from gaining huge mind share with AirPlay. Heck, even iPad HD mirroring is getting more attention than MHL, despite being less advanced and much more expensive. Going up against that won’t be an easy feat, even though MHL seems in a very good position to reach millions of consumers in 2012.

“Getting people excited about it is easy,” Wong told me. “Getting people to know about it is hard.”

Mobile devices that support MHL, as of December 2011:
  • Samsung: Infuse, Galaxy S2, Galaxy Note, Galaxy Nexus, Epic 4G Touch
  • HTC: Sensation, Sensation XE, Sensation XL, Rezound, Vivid, EVO 3D, Amaze, Raider, Flyer, EVO View 4G, JetStream
  • LG: Optimus LTE (Nitro HD)
  • Meizu: Meizu MX
By Janko Roettgers, GigaOM

Transcoding Strategies for Adaptive Streaming

An interesting white paper by ARRIS.

Argentine TV in 3D HD over DTT Breakthrough

A team of scientists and engineers in Argentina have carried out the world's first successful experimental transmission of a 3D Full HD video signal over a digital terrestrial television (DTT) channel.

According to technology website RedUSERS, at 17:31 local time on Friday engineers Francisco Carrá and Oscar Nuncio (technical manager and technical sub-manager of Argentine public broadcaster Channel 7) were able to verify the reception of the 3D signals. These had been broadcast via one of the new antennas being erected in the country, and transmitted through a regular ISDB-T channel.

A video compression catalyser designed and developed by Mario Mastriani, head of the Images and Signals Laboratory at the National University of Tres de Febrero (UNTREF), was at the heart of the system.

The device boosts the video compression level of H.264/MPEG-4 Part 10 (or AVC) codecs by a factor of four. With the help of this component, the team was able to broadcast the 3D signals in 1080p (Full HD) quality, but critically using the same bitrate currently utilised to transmit a 2D HD channel (in 1080i) via DTT. No additional latency to that typically seen in H.264 transmissions was observed during the tests.

The images were captured with a Panasonic 1080p-3D camera, while a NEC encoder was also used. The arriving signal was decoded with an ISDB-T USB device connected to a PC fitted with Nvidia SDI and Nvidia Quadro 6000 input/output video cards. The obtained signal was routed into a 3D HDTV video activity monitor, while the output signal was routed into a standard 3D screen.

According to RedUSERS, the same receiving setup could easily be replicated in a conventional set-top box, whose only additional requirement would be the incorporation of a 3D image processing chip.

Back in January 2011, British firm Motive Television had announced the development of set-top box software able to deal with 3D TV content broadcast via DTT. The Motive technology, branded as 3VOD, was first deployed in Italy by Silvio Berlusconi's broadcaster Mediaset.

By Juan Pablo Conti, RapidTVNews

The Trials and Tribulations of HTML Video in the Post-Flash Era

Adobe reversed course on its Flash strategy after a recent round of layoffs and restructuring, concluding that HTML5 is the future of rich Internet content on mobile devices. Adobe now says it doesn’t intend to develop new mobile ports of its Flash player browser plugin, though existing implementations will continue to be maintained.

Adobe’s withdrawal from the mobile browser space means that HTML5 is now the path forward for developers who want to reach everyone and deliver an experience that works across all screens. The strengths and limitations of existing standards will now have significant implications for content creators who want to deliver video on the post-Flash Web.

Apple’s decision to block third-party browser plugins like Flash on its iOS devices played a major role in compelling Web developers to build standards-based fallbacks for their existing Flash content. This trend will be strengthened when Microsoft launches Windows 8 with a version of Internet Explorer that doesn’t support plugins in the platform’s new standard Metro environment.

Flash still has a significant presence on the Internet, but it's arguably a legacy technology that will decline in relevance as mobile experiences become increasingly important. The faster pace of development and shorter release cycles in the browser market will allow open standards to mature faster and gain critical mass more quickly than before. In an environment where standards-based technologies are competitive for providing rich experiences, proprietary vendor-specific plugins like Flash will be relegated to playing a niche role.

Our use of the phrase “post-Flash” isn’t intended to mean that Flash is dead or going to die soon. We simply mean that it’s no longer essential to experiencing the full Web. The HTML5 fallback experiences on many Flash-heavy sites still don’t provide feature parity with the Flash versions, but the gap is arguably shrinking—and will continue to shrink even more rapidly in the future.

Strengths and Weaknesses of HTML5 Video
HTML5 has much to offer for video delivery, as the HTML5 video element seamlessly meshes with the rest of the page DOM and is easy to manipulate through JavaScript. This means that HTML5 video offers significantly better native integration with page content than it has ever been possible to achieve with Flash. The open and inclusive nature of the standards process will also make it possible for additional parties to contribute to expanding the feature set.

A single company no longer dictates what can be achieved with video, and your video content is no longer isolated to a rectangle embedded in a page. HTML5 breaks down the barriers between video content and the rest of the Web, opening the door for more innovation in content presentation. There are some really compelling demonstrations out there that showcase the use of video in conjunction with WebGL and other modern Web standards. For example, the video shader demo from the 3 Dreams of Black interactive film gives you a taste of what’s possible.

Click to watch the video

Of course, transitioning video delivery in the browser from Flash to HTML5 will also pose some major challenges for content creators. The standards aren’t fully mature yet and there are still a number of features that aren’t supported or widely available across browsers.

For an illustration of how deep the problems run, you need only look at Mozilla’s Firefox Live promotional website, which touts the organization’s commitment to the open Web and shows live streaming videos of Red Panda cubs from the Knoxville Zoo. The video is streamed with Flash instead of using standards-based open Web technologies.

In an FAQ attached to the site, Mozilla says that it simply couldn’t find a high-volume live streaming solution based on open codecs and open standards. If Mozilla can’t figure out how to stream its cuddly mascot with open standards, it means there is still work to do.

Flash is required to see the Red Panda cubs on Mozilla's website

Two of the major technical issues faced by HTML5 video adopters are the lack of adequate support for adaptive streaming and the lack of consensus surrounding codecs. There is currently an impasse between backers of the popular H.264 codec and Google’s royalty-free VP8 codec. There’s no question that a royalty-free video format is ideal for the Web, but the matter of whether VP8 is truly unencumbered by patents—and also meets the rest of the industry’s technical requirements—is still in dispute.

There is another major issue that hasn’t been addressed yet by open Web standards that could prove even more challenging: content protection. The vast majority of Flash video content on the Internet doesn’t use any kind of DRM and is trivially easy to download. Flash does, however, provide DRM capabilities and there are major video sites that rely on that technology in order to protect the content they distribute.

Can DRM Be Made to Play Nice with Open Standards?
DRM is almost always bad for regular end users and its desirability is highly debatable, but browser vendors will have to support it in some capacity in order to make HTML5 video a success. Many of the content creators who license video material to companies like Netflix and Hulu contractually stipulate a certain degree of content protection.

Mozilla’s Robert O’Callahan raised the issue of HTML5 video DRM in a recent blog entry shortly after Adobe’s announcement regarding mobile Flash. He expressed some concern that browser vendors will look for a solution that is expedient rather than inclusive, to the detriment of the open Web.

“The problem is that some big content providers insist on onerous DRM that necessarily violates some of our open Web principles (such as Web content being equally usable on any platform, based on royalty-free standards, and those standards being implementable in free software),” O'Callahan wrote. “We will probably get into a situation where Web video distributors will be desperate for an in-browser strong DRM solution ASAP, and most browser vendors (who don’t care all that much about those principles) will step up to give them whatever they want, leaving Mozilla in another difficult position. I wish I could see a reasonable solution, but right now I can’t. It seems even harder than the codec problem.”

O'Callahan also pointed out in his blog entry that the upcoming release of Windows 8, which will not support browser plugins in its Metro environment, means that the lack of DRM support in standards-based Web video is no longer just a theoretical concern. Microsoft may need to furnish a solution soon, or risk frustrating users who want to watch commercial video content on the Web in Windows 8 without installing additional apps or leaving the Metro shell.

Netflix Stands Behind DASH
Flash evangelists may feel that the limitations of HTML5 video and the problems that content creators are sure to face during the transition are a vindication of the proprietary plugin model. But the advantages of a truly open, vendor-neutral, and standards-based video solution that can span every screen really dwarf the challenges. That is why major stakeholders are going to be willing to gather around the table to try to find a way to make it work.

Netflix already uses HTML5 to build the user interfaces of some of its embedded applications, including the one on the PS3. The company has praised the strengths of a standards-based Web technology stack and found that it offers many advantages. But the DRM issue and the lack of suitably robust support for adaptive streaming have prevented Netflix from dropping its Silverlight-based player in regular Web browsers.

The company has committed to participating in the effort to make HTML5 a viable choice for all video streaming. Netflix believes that the new Dynamic Adaptive Streaming over HTTP (DASH) standard being devised by the Moving Picture Experts Group (MPEG) will address many of the existing challenges and pave the way for ubiquitous adoption of HTML5 for streaming Internet video.

DASH, which is expected to be ratified as an official standard soon, has critical buy-in from many key industry players besides Netflix, including Microsoft and Apple. An early DASH playback implementation is already available as a plugin for the popular VLC video application.

The DASH standard makes video streaming practical over HTTP and addresses the many technical requirements of high-volume streaming companies like Netflix, but it doesn’t directly address the issue of DRM by itself. DASH can be implemented in a manner that is conducive to supporting DRM, however.

Ericsson Research, which is involved in the DASH standardization effort, has done some worthwhile preliminary research to evaluate the viability of DRM on DASH. Ericsson produced a proof-of-concept implementation that uses DRM based on the Marlin rights management framework.

Marlin, which was originally created by a coalition of consumer electronics vendors, is relatively open compared to alternate DRM technologies and makes use of many existing open standards. But Marlin is still fundamentally DRM and suffers from many of the same drawbacks, and adopters have to obtain a license from the Marlin Trust Management Organization, which holds the keys.

The architecture of the Marlin rights management framework

Ericsson explains in its research that it chose to experiment with Marlin for its proof-of-concept implementation because it’s available and mature—other similar DRM schemes could also easily be adopted. Existing mainstream DRM schemes would all likely pose the same challenges, however, and it’s unlikely that such solutions will be viewed as acceptable by Mozilla. More significantly, an implementation of HTML5 video that relies on this kind of DRM would undermine some of the key values and advantages of openness that are intrinsic to the open Web.

The ease with which solutions like Marlin can be implemented on top of HTML5 will create pressure for mainstream browser vendors to adopt them quickly. This could result in the same kind of fragmentation that exists today surrounding codecs. As O’Callahan said, it’s easy to see this issue becoming far more contentious and challenging to overcome than the codec issue.

What Next?
The transition to HTML5 and standards-based technology for video delivery will bring many advantages to the Web. There are some great examples that show what can be achieved when developers really capitalize on the strengths of the whole open Web stack. The inclusiveness of the standards process will also give a voice to additional contributors who want to expand the scope of what can be achieved with video on the Web.

There are still some major obstacles that must be overcome in order for the profound potential of standards-based Web video to be fully realized in the post-Flash era. Open standards still don’t deliver all of the functionality that content creators and distributors will require in order to drop their existing dependence on proprietary plugins. Supplying acceptable content protection mechanisms will prove to be a particularly bitter challenge.

Despite the barriers ahead, major video companies like Netflix recognize the significant advantages of HTML5 and are willing to collaborate with other stakeholders to make HTML5 video a success. The big question that remains unanswered is whether that goal can be achieved without compromising the critically important values of the open Web.

By Ryan Paul, Ars Technica

IMF for a Multi-Platform World

Among other things, the looming arrival of the Interoperable Master Format (IMF) shows that the digital media industry can now move "nimbly and quickly" to create technical standards for packaging, moving, and protecting precious content in the form of digital assets, in a world where both the underlying technology and the industry itself are changing at a startling rate. The phrase "nimbly and quickly" comes from Annie Chang, Disney's VP of Post-Production Technology, who also chairs the SMPTE IMF work group (TC-35PM50).

Six Hollywood studios, working through the University of Southern California Entertainment Technology Center (USC ETC), began developing IMF in 2007. In early 2011, they created an input document that the SMPTE IMF working group is now using as the basis of the IMF standardization effort. Over time, IMF has developed into a flexible, interchangeable master file format designed to allow content creators to efficiently disseminate a project's single master file to distributors and broadcasters across the globe.

Chang reports that progress has moved quickly enough for the work group to expect to finalize a basic version of the IMF standard in coming months, with draft documents possibly ready by early 2012 that focus on a core framework for IMF, and possibly a few of the potential modular applications that could plug into that framework.

Once that happens, content creators who have prepared for IMF will be in a position to start feeding all their distributors downstream far more effectively than has been the case until now in this world of seemingly unending formats. They will, according to Chang, be able to remove videotape from their production workflow, reduce file storage by eliminating the need for full linear versions of each edit or foreign language version of their content, and yet be able to take advantage of a true file-based workflow, including potentially automated transcoding, and much more.

The rollout will still need to be deliberate as various questions, unanticipated consequences, and potential new uses of IMF begin to unfold. That said, Chang emphasizes that the goal of streamlining and improving the head end of the process (creating a single, high-quality, ultimate master for all versions) is real and viable, and with a little more work and input will be happening soon enough.

"Today, we have multiple versions, different resolutions, different languages, different frame rates, different kinds of HD versions, standard-definition versions, different aspect ratios—it's an asset management nightmare," she says, explaining why getting IMF right is so important to the industry.

"Everyone creates master files on tape or DPX frames or ProRes or others, and then they have to create mezzanine files in different formats for each distribution channel. IMF is designed to fix the first problem—the issue of too many file formats to use as masters."

Therefore, IMF stands to be a major boon for content creators who repeatedly and endlessly create different language versions of their material.

"For a ProRes QuickTime, you are talking about a full res version of a movie each time you have it in a new language," Chang says. "So 42 languages would be 42 QuickTime files. IMF is a standardized file solution built on existing standards that will allow people to just add the new language or whatever other changes they need to make to the existing master and send it down the chain more efficiently."

Chang emphasizes the word "flexible" in describing IMF, and the word "interoperable" in the name itself because, at its core, IMF allows content distributors to uniformly send everybody anything that is common, while strategically transmitting the differences only to where they need to go. In that sense, IMF is based on the same architectural concept as the Digital Cinema specification—common material wrapped together, combined with a streamlined way to package and distribute supplemental material. Eventually, each delivery will include an Output Profile List (OPL) to allow those transcoding on the other end a seamless way to configure the file as they format and push it into their distribution chain.
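The single-master-plus-deltas idea Chang describes can be sketched in a few lines of Python. This is purely illustrative: the asset names and the dictionary-based "composition playlist" below are invented for the example and are not the actual IMF CPL structure.

```python
# Illustrative sketch of IMF-style versioning: one shared image master
# plus small per-language supplemental assets, combined by a
# composition playlist. All names here are hypothetical.

SHARED_ASSETS = {"image": "feature_video.mxf", "music_fx": "me_audio.mxf"}

def make_version(language, dialogue_file, subtitle_file=None):
    """Build a toy 'composition playlist' that reuses the shared
    master and adds only the language-specific pieces."""
    cpl = dict(SHARED_ASSETS)       # common material, sent to everybody
    cpl["language"] = language
    cpl["dialogue"] = dialogue_file  # the per-version delta
    if subtitle_file:
        cpl["subtitles"] = subtitle_file
    return cpl

# 42 language versions share one full-resolution image master;
# only the small audio/subtitle deltas differ per version.
french = make_version("fr", "dialogue_fr.mxf", "subs_fr.xml")
```

In this toy model, adding a 43rd language means adding one small dialogue file and subtitle file, not a 43rd full-resolution copy of the picture.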

Unlike the DCI spec, however, IMF is not built of wholly new parts. Wherever possible, the file package will consist of existing pieces combined together in an MXF-flavored wrapper. This should, Chang hopes, make it easier for businesses across the industry to adapt without huge infrastructure changes in most cases as IMF comes to fruition.

"With IMF, we are using existing standards—a form of MXF (called MXF OP1A/AS-02) to wrap the files, and parts of the Digital Cinema format and other formats that many manufacturers already use," she says. "So, hopefully, there is not much of a learning curve. We hope that most of the big companies involved in the process won't be caught unaware, and will be able to make firmware or software upgrades to their systems in order to support IMF. Hopefully, companies will not have to buy all new equipment in order to use IMF.

"And with the concept of the Output Profile List (OPL), which essentially will be global instructions on output preferences for how to take an IMF file and do something with it relative to your particular distribution platform, companies that are doing transcoding right now will have an opportunity to use that to their advantage to better automate their processes. IMF has all the pieces of an asset management system and can use them all together to create standardized ways to create packages that fit into all sorts of other profiles. It's up to the content owners to take these OPL's and transcode the files. As they do now, they could do it in-house or take it to a facility. But if transcoding facilities get smart and use IMF to its potential, they can take advantage of IMF's capabilities to streamline their processes."

Chang says major technology manufacturers have been extremely supportive of the SMPTE standardization effort. Several, such as Avid, Harmonic, DVS, Amberfin, and others have actively participated and given input on the process, which is important because changes to existing editing, transcoding, and playback hardware and software, and the eventual creation of new tools for those tasks, will eventually need to happen as IMF proliferates. After all, as Chang says, "what good is a standard unless people use it?"

She emphasizes that manufacturer support is crucial for IMF, since it is meant to be a business-to-business tool for managing and distributing content, and not a standard for how consumers will view content. Therefore, outside of the SMPTE standardization effort, there is a plan to have manufacturers across the globe join in so-called "Plugfests" in 2012 to create IMF content from the draft documents, interchange it with each other, and report on their findings.

As Chang suggests, "it's important to hit IMF from multiple directions since, after all, the first word in the name is 'interoperable.' " As a consequence of all these developments, it's reasonable to assume that within the next year IMF will officially be part of the industry's typical workflow chain, with content distributors sending material to all sorts of platforms. Some studios and networks are already overhauling their infrastructures and workflow approaches to account for IMF's insertion into the process, and encoding houses and other post-production facilities should, in most cases, have the information and infrastructure to adapt to the IMF world without any fundamental shift. But the post industry will be somewhat changed by IMF, especially if facilities or studios automate encoding at the front end of the process in ways that change their reliance on the facilities currently doing that kind of work.

However, Chang adds, the broadcast industry will probably have the most significant learning curve in terms of how best to dive into IMF. Unlike the studios, which have been discussing their needs and pondering IMF since about 2006, broadcasters were only exposed directly to IMF earlier this year when SMPTE took over the process. IMF was originally designed and intended as a higher bit-rate master (around 150-500Mb/s for HD, possibly even lossless, according to Chang), but broadcasters normally use lower bit-rate files (more like 15-50Mb/s).

"However, I feel that broadcasters would like to have that flexibility in versioning," Chang says. "But because they need different codecs and lower bit-rates, there is still discussion in SMPTE about what those codecs should be. Broadcasters are only now starting to evaluate what they need out of IMF, but there is still plenty of time for them to get involved."

Of course, as the explosion of mobile devices and web-enabled broadcasting on all sorts of platforms in a relatively short period of time illustrates, viewing platforms will inevitably change over time, and therefore, distribution of data will have to evolve, as well. As to the issue of whether IMF is relatively future-proofed, or merely the start of a continually evolving conversation, Chang is confident the standard can be in place for a long time because of its core framework—the primary focus to date. That framework contains composition playlists, general image data, audio data (unlimited tracks, up to 16 channels each), sub-titling/captioning data, any dynamic metadata needed, and so on.

Modular applications that could plug into that framework need to be further explored, Chang says, but the potential to allow IMF to accommodate new, higher compressed codecs, new or odd resolutions or frame rates, and all sorts of unique data for particular versions is quite high.

"The core framework we created with documents is something we tried to future proof," she says. "The question is the applications that might plug into that core framework (over time). We are trying to make it as flexible as possible so that if, in the future, you have some crazy new image codec that goes up to 16k or uses a new audio scheme, it will still plug into the IMF framework. So image, audio, or sub-titling could be constrained, for example, but as long as the sequence can be described by the composition playlist and the essence can be wrapped in the MXF Generic Container, the core framework should hold up for some time to come."

To connect with the SMPTE IMF effort, you can join the SMPTE 35PM Technology Committee, and then sign up as a member of 35PM50. The IMF Format Forum will have the latest news and discussions about the status of the IMF specification.


By Michael Goldman, SMPTE Newswatch

A VLC Media Player Plugin Enabling DASH

This poster describes an implementation of the emerging Dynamic Adaptive Streaming over HTTP (DASH) standard, which is currently being developed within MPEG and 3GPP. Our implementation is based on VLC and fully integrated into its structure as a plugin.

Furthermore, our implementation provides a powerful extension mechanism with respect to adaptation logics and profiles. That is, it should be very easy to implement various adaptation logics and profiles.
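As a sketch of what such an adaptation logic might look like, the following throughput-based selection is illustrative only; the function name and parameters are assumptions, not the plugin's actual extension API.

```python
# Hypothetical rate-based adaptation logic of the kind a DASH client
# extension mechanism might host: pick the highest-bitrate
# representation that fits within the measured download throughput.

def pick_representation(bitrates_bps, measured_throughput_bps, safety=0.9):
    """Return the highest bitrate that fits within a safety margin
    of the measured throughput; fall back to the lowest ladder rung."""
    fits = [b for b in sorted(bitrates_bps)
            if b <= measured_throughput_bps * safety]
    return fits[-1] if fits else min(bitrates_bps)

# With a ladder of 0.5/1/2/4 Mbit/s and 2.5 Mbit/s of measured
# throughput, the 2 Mbit/s representation is selected.
choice = pick_representation([500_000, 1_000_000, 2_000_000, 4_000_000],
                             2_500_000)
```

A real client would also smooth the throughput estimate and consider buffer occupancy, but the ladder-selection step is the core of most rate-based logics.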

Future versions of the plugin will provide an update to the latest version of the standard (a number of changes have been adopted recently, e.g., Group has changed to AdaptationSet), add support for persistent HTTP connections in order to reduce the overhead of HTTP streaming (e.g., compared to RTP), and add support for seeking within a DASH stream.

By Christopher Mueller and Christian Timmerer, Alpen-Adria-Universitaet Klagenfurt


DASHEncoder

DASHEncoder generates the desired representations (quality/bitrate levels), fragmented MP4 files, and the MPD file, based on a given config file or on command-line parameters. Given the set of parameters, the user has a wide range of possibilities for content generation, including variation of the segment size, bitrate, resolution, encoding settings, URLs, etc.
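As an illustration of the kind of per-representation plan such a tool derives from its parameters, here is a small Python sketch; the field names and file-naming pattern are invented for the example and are not DASHEncoder's actual output format.

```python
# Hypothetical sketch: expand an encoding ladder into the per-
# representation records an MPD generator would need. Names and
# file patterns are illustrative, not DASHEncoder's real ones.

def plan_representations(segment_seconds, ladder):
    """ladder: list of (bitrate_kbps, width, height) tuples."""
    reps = []
    for bitrate, w, h in ladder:
        reps.append({
            "id": f"{h}p_{bitrate}k",
            "bandwidth": bitrate * 1000,           # bits per second
            "resolution": f"{w}x{h}",
            "init": f"rep_{bitrate}k_init.mp4",    # init segment
            "media": f"rep_{bitrate}k_seg_$Number$.m4s",
            "segment_duration": segment_seconds,
        })
    return reps

# e.g. a two-rung ladder with 2-second segments
reps = plan_representations(2, [(1200, 1280, 720), (600, 640, 360)])
```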

The DASHEncoder steps are depicted here:

Current features and restrictions:
  • Generation of video only, audio only or audio+video DASH content.
  • H.264 encoding based on x264: Constant and variable bitrate encoding.
  • Supported profile: urn:mpeg:dash:profile:isoff-main:2011.
  • PSNR logging and MySQL interface for storing results in a database (only for common resolution representations).
  • There are currently problems with the playback of content containing audio with the DASH VLC plugin.

The DASHEncoder is available as open source with the aim that other developers will join this project.

Source: Alpen-Adria Universität Klagenfurt via Video Breakthroughs

Energy Efficient and Robust S3D Video Distribution Enabled with Nomad3D CODEC and 60 GHz Link

This white paper describes a 3D Video Distribution scheme using the new wireless 60 GHz standard for connectivity and Nomad3D 3D+F 3D CODEC. It will be shown that a specially dedicated Video Delivery System using 60 GHz Modems and the 3D+F CODEC is very efficient in overall system power consumption and more robust to channel impairments.

Source: Nomad3D

Getting Machines to Watch 3D for You

The advantages of automatic monitoring of multiple television channels are well known. There are just not enough eyeballs for human operators to see what is going on. With the advent of stereoscopic 3D in mainstream television production and distribution, the benefits of automatic monitoring are even greater, as 3D viewing is even less conducive to manual monitoring.

This paper, presented at IBC 2011, gives a comprehensive introduction to a wide range of automatic monitoring possibilities for 3D video. There are significant algorithmic challenges involved in some of these tasks, often involving careful high-level analysis of picture content. Further challenges arise from the need for monitoring to be robust to typical processing undergone by video signals in a broadcast chain.

By Mike Knee, Consultant Engineer, R&D Algorithms Team, Snell

MPEG Analysis and Measurement

Broadcast engineering requires a unique set of skills and talents. Some audio engineers claim the ability to hear tiny nuances, such as the difference between kinds of speaker wire. They are known as those with golden ears. Their video engineering counterparts can spot and obsess over a single deviant pixel during a Super Bowl touchdown pass or a “Leave it to Beaver” rerun in real time. They are known as eagle eyes or video experts.

Not all audio and video engineers are blessed with super-senses. Nor do we all have the talent to focus our brain’s undivided processing power to discover and discern vague, cryptic and sometimes immeasurable sound or image anomalies with our bare eyes or ears on the fly, myself included. Sometimes, the message can overpower the media. Fortunately for us, and thanks to the Internet and digital video, more objective quality measurement standards and tools have been developed.

One of those standards is Perceptual Evaluation of Video Quality (PEVQ). It is an End-to-End (E2E) measurement algorithm standard that grades picture quality of a video presentation by a five-point Mean Opinion Score (MOS), one being bad and five being excellent.

PEVQ can be used to analyze visible artifacts caused by digital video encoding/decoding or transcoding processes, RF- or IP-based transmission systems and viewer devices like set-top boxes. PEVQ is suited for next-generation networking and mobile services, including SD and HD IPTV, streaming video, mobile TV, video conferencing and video messaging.

The development of PEVQ began with still images. Evaluation models were later expanded to include motion video. PEVQ can be used to assess degradation of a decoded video stream from the network, such as that received by a TV set-top box, in comparison to the original reference picture as broadcast from the studio. This evaluation model is referred to as End-to-End (E2E) quality testing.

E2E exactly replicates how so-called average viewers would evaluate the video quality based on subjective comparison, so it addresses Quality-of-Experience (QoE) testing. PEVQ is based on modeling human visual behaviors. It is a full-reference algorithm that analyzes the picture pixel-by-pixel after a temporal alignment of corresponding frames of reference and test signal.

Besides an overall quality Mean Opinion Score figure of merit, abnormalities in the video signal are quantified by several Key Performance Indicators (KPI), such as Peak Signal-to-Noise Ratios (PSNR), distortion indicators and lip-sync delay.

PEVQ References
Depending on the data made available to the algorithm, video quality test algorithms can be divided into three categories based on available reference data.

A Full Reference (FR) algorithm has access to and makes use of the original reference sequence for a comparative difference analysis. It compares each pixel of the reference sequence to each corresponding pixel of the received sequence. FR measurements deliver the highest accuracy and repeatability but are processing intensive.
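A minimal illustration of the full-reference idea, comparing frames sample by sample and reporting PSNR (one of the KPIs mentioned above); this is a sketch of the basic comparison, not the PEVQ algorithm itself.

```python
# Toy full-reference comparison: per-sample difference between a
# reference frame and the received frame, reported as PSNR in dB.
# Frames are modelled as equal-length sequences of 8-bit luma samples.

import math

def psnr(reference, received, peak=255):
    """Return PSNR in dB; infinity means the frames are identical."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    if mse == 0:
        return float("inf")  # no degradation at all
    return 10 * math.log10(peak ** 2 / mse)
```

A real FR tool like PEVQ first performs the temporal alignment described above, so that each received frame is compared against the correct reference frame, and then layers perceptual modeling on top of this raw difference.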

A Reduced Reference (RR) algorithm uses a reduced bandwidth side channel between the sender and the receiver, which is not capable of transmitting the full reference signal. Instead, parameters are extracted at the sending side, which help predict the quality at the receiving end. RR measurements are less accurate than FR and represent a working compromise if bandwidth for the reference signal is limited.

A No Reference (NR) algorithm only uses the degraded signal for the quality estimation and has no information of the original reference sequence. NR algorithms are low accuracy estimates only, because the original quality of the source reference is unknown. A common variant at the upper end of NR algorithms analyzes the stream at the packet level, but not the decoded video at the pixel level. The measurement is consequently limited to a transport stream analysis.

Another widely used MOS algorithm is VQmon. This algorithm was recently updated to VQmon for Streaming Video. It performs real-time analysis of video streamed using the key Adobe, Apple and Microsoft streaming protocols, analyzes video quality and buffering performance, and reports detailed performance and QoE metrics. It uses a packet/frame-based zero-reference approach, with fast performance that enables real-time analysis of the impact that the loss of I, B and P frames has on the content, both encrypted and unencrypted.

The 411 on MDI
The Media Delivery Index (MDI) measurement is specifically designed to monitor networks that are sensitive to arrival time and packet loss such as MPEG-2 video streams, and is described by the Internet Engineering Task Force document RFC 4445. It measures key video network performance metrics, including jitter, nominal flow rate deviations and instant data loss events for a particular stream.

MDI provides information to detect virtually all network-related impairments for streaming video, and it enables the measurement of jitter on fixed and variable bit-rate IP streams. MDI is typically shown as the ratio of the Delay Factor (DF) to the Media Loss Rate (MLR), i.e. DF:MLR.

DF is the number of milliseconds of streaming data that buffers must handle to eliminate jitter, something like a time-base corrector once did for baseband video. It is determined by first calculating the MDI virtual buffer depth of each packet as it arrives. In video streams, this value is sometimes called the Instantaneous Flow Rate (IFR). When calculating DF, it is known as DELTA.

To determine DF, DELTA is monitored to identify maximum and minimum virtual depths over time. Usually one or two seconds is enough time. The difference between maximum and minimum DELTA divided by the stream rate reveals the DF. In video streams, the difference is sometimes called the Instantaneous Flow Rate Deviation (IFRD). DF values less than 50ms are usually considered acceptable. An excellent white paper with much more detail on MDI is available from Agilent.

Figure 1 - The Delay Factor (DF) dictates buffer size needed to eliminate jitter

Using the formula in Figure 1, let’s say a 3Mb/s MPEG video stream observed over a one-second interval fills a virtual buffer to a maximum depth of 3.005Mb and a minimum of 2.995Mb. The difference between the two is 10Kb. Dividing that difference by the stream rate reveals the DF: 10Kb divided by 3Mb/s is 3.333 milliseconds. Thus, to avoid packet loss in the presence of the known jitter, the receiver’s buffer must hold at least 10Kb, which at a 3Mb/s rate injects about 3.333 milliseconds of delay. A device with an MDI rating of 4:0.003, for example, would have a 4 millisecond DF and an MLR of 0.003 media packets per second.
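The DF arithmetic can be written out as a short sketch (the function name is illustrative): track the virtual buffer depth (DELTA) per packet over the interval, then divide the max/min spread by the nominal stream rate.

```python
# DF sketch per RFC 4445: DF = (max DELTA - min DELTA) / stream rate,
# expressed in milliseconds. delta_bits holds the virtual buffer
# depths sampled as packets arrive over the observation interval.

def delay_factor_ms(delta_bits, stream_rate_bps):
    spread = max(delta_bits) - min(delta_bits)  # IFRD, in bits
    return 1000.0 * spread / stream_rate_bps

# A 10Kb spread on a 3Mb/s stream gives a DF of about 3.33 ms,
# matching the worked example in the text.
df = delay_factor_ms([3_005_000, 2_995_000], 3_000_000)
```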

The MLR formula in Figure 2 is computed by dividing the number of lost or out-of-order media packets by the observed time in seconds. Out-of-order packets are crucial because many devices don’t reorder packets before handing them to the decoder. The best-case MLR is zero. The minimum acceptable MLR for HDTV is generally considered to be less than 0.0005. An MLR greater than zero adds the time viewing devices need to lock onto a stream with the higher MLR, which slows channel surfing and can introduce various ongoing anomalies once locked in.

Figure 2 - The Media Loss Rate (MLR) is used in the Media Delivery Index (MDI)
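The MLR computation is simple enough to state directly; this sketch assumes the counts of lost and out-of-order packets are already available from the stream monitor.

```python
# MLR sketch per RFC 4445: lost plus out-of-order media packets,
# divided by the observation interval in seconds.

def media_loss_rate(lost, out_of_order, interval_seconds):
    """Media packets per second; the best case is 0.0, and HDTV is
    generally expected to stay below 0.0005."""
    return (lost + out_of_order) / interval_seconds

# e.g. 3 lost and 0 out-of-order packets over a 10-minute window
mlr = media_loss_rate(3, 0, 600)  # 0.005 packets/second
```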

Watch That Jitter
Just as too much coffee can make you jittery, heavy traffic can make a network jittery, and jitter is a major source of video-related IP problems. Proactively monitoring jitter can alert you to impending QoE issues and help avert them before they occur.

One way to overload an MPEG-2 stream is with excessive bursts. Packet bursts can cause a network-level or set-top box buffer to overflow or under-run, resulting in lost packets or empty buffers, which cause macroblocking or black/freeze-frame conditions, respectively. An overload of metadata such as video content PIDs can contribute to this problem.

Probing a streaming media network at various nodes and under different load conditions makes it possible to isolate and identify devices or bottlenecks that introduce significant jitter or packet loss to the transport stream. Deviations from nominal jitter or data loss benchmarks are indicative of an imminent or ongoing fault condition.

QoE is one of many subjective measurements used to determine how well a broadcaster’s signal, whether on-air, online or on-demand, satisfies the viewer’s perception of the sights and sounds as they are reproduced at his or her location. I can’t help but find some humor in the idea that the ones-and-zeros of a digital video stream can be rated on a gray scale of 1-5 for quality.

Experienced broadcast engineers know the so-called quality of a digital image begins well before the light enters the lens, and with apologies to our friends in the broadcast camera lens business, the image is pre-distorted to some degree within the optical system before the photons hit the image sensors.

QoE or RST?
A scale of 1-5 is what ham radio operators have used for 100 years in the readability part of the Readability, Strength and Tone (RST) code system. While signal strength (S) could be objectively measured with an S-meter such as shown in Figure 3, readability (R) was purely subjective, and tone (T) could be subjective, objective or both.

Figure 3 - The S-meter was the first commonly used metric to objectively
read and report signal strength at an RF receive site

Engineers and hams know that as S and/or T diminish, R follows, but the minimum acceptable RST values depend almost entirely on the minimum R figure the viewer or listener is willing to accept. In analog times, the minimum acceptable R figure often varied with the value of the message.

Digital technology and transport removes the viewer or listener’s subjective reception opinion from the loop. Digital video and audio is either as perfect as the originator intended or practically useless. We don’t need a committee to tell us that. It seems to me the digital cliff falls just south of a 4x5x8 RST. Your opinion may vary.

By Ned Soseman, Broadcast Engineering