DVB-3DTV Specification Published

The DVB Steering Board has approved the DVB-3DTV specification, which has just been published as BlueBook A154, ‘Frame Compatible Plano-Stereoscopic 3DTV’. The specification will now be sent to the European Telecommunications Standards Institute (ETSI) for formal standardisation.

The specification defines the delivery system for frame compatible plano-stereoscopic 3DTV services, enabling service providers to use their existing HDTV infrastructure to deliver 3DTV services that are compatible with the 3DTV-capable displays already in the market. The system covers both principal use cases: a set-top box delivering 3DTV services to a 3DTV-capable display device via an HDMI connection, and a 3DTV-capable display device receiving 3DTV services directly via a built-in tuner and decoder.

Plano-stereoscopic imaging systems deliver two images (left and right) that are arranged to be seen simultaneously, or near simultaneously, by the left and right eyes. Viewers perceive increased depth in the picture, which becomes more like the natural binocular viewing experience. The DVB-3DTV specification also provides a mechanism that allows subtitles and other onscreen graphics to be best positioned so that they can be viewed correctly in the stereoscopic picture.

Source: DVB

Making Of 3D No Glasses by Jonathan Post

Simultaneous 2D and 3D Viewing - Cool!

At the recent Integrated Systems Europe 2011 exhibition, Toshiba debuted a new LED wall technology. The prototype display was designed for use in indoor or outdoor, stadium-like venues. The unique aspect of the new display is that the same image can be viewed simultaneously in conventional 2D or, with passive glasses, in 3D.

In a telephone interview with Charley Bocklet, Toshiba’s National Sales Manager for LED Display Systems, I was told that the technology underlying the new display is a result of a development program undertaken in conjunction with Chroma3D Systems Inc. (Cannon Falls, MN). To learn more, I spoke with Monte Ramstad of Chroma3D Systems, the inventor of the 3D technology.

In the new display, each pixel is composed of red, green, blue and yellow subpixels. A conventional 2D image is presented using the RGB subpixels, while the depth information is carried by the yellow (Y) subpixels.

When the display is viewed naturally, without special glasses, the 2D image appears almost completely normal. A slight, not overly distracting, yellow halo may be visible around portions of some objects.

When the user wears passive glasses, the image is perceived in stereoscopic 3D. One lens in the glasses transmits restricted portions of the red, green and blue spectrum and the other lens transmits only yellow.

In a display based on the new technology, the resolution of the 2D and 3D images is the same and their brightness is similar.
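The article does not disclose the Chroma3D encoding itself, but the basic idea, carrying the 2D picture on the RGB subpixels and the second eye's view on the yellow subpixel, can be sketched roughly as follows. The luminance weighting and the mapping below are assumptions for illustration only, not the actual algorithm.

```typescript
// Minimal sketch of a spectral (anaglyph-style) RGBY mapping for simultaneous
// 2D/3D viewing. This is NOT the Chroma3D Systems algorithm, which is not
// disclosed; it only illustrates carrying the 2D picture on the RGB subpixels
// and the second eye's view on the yellow subpixel.

interface RGB { r: number; g: number; b: number }        // 0..255 per channel
interface RGBY { r: number; g: number; b: number; y: number }

function encodeRgbyPixel(left: RGB, right: RGB): RGBY {
  // Viewers without glasses see essentially the left-eye image on RGB.
  // The right-eye view is reduced to luminance and carried on the yellow
  // subpixel; the weighting is a hypothetical choice for illustration.
  const rightLuma = 0.299 * right.r + 0.587 * right.g + 0.114 * right.b;
  return { r: left.r, g: left.g, b: left.b, y: Math.round(rightLuma) };
}

// Example: encode one pixel of a stereo pair.
const pixel = encodeRgbyPixel({ r: 200, g: 40, b: 40 }, { r: 180, g: 60, b: 60 });
console.log(pixel); // { r: 200, g: 40, b: 40, y: 96 }
```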



At the ISE show, it was reported that, when viewed from a distance greater than about 20 feet, the display looked perfectly fine in 2D. (The distance is a reflection of the LED resolution rather than of the 3D technology.) In fact, Ramstad mentioned that at ISE most people did not notice anything different or special about the 2D image until they were told. When viewers put on the passive glasses, the 3D was described as having "a decent pop to it."

This is not a new concept for Ramstad either. A number of years ago he was promoting a similar solution that used a projection system and an RGB and yellow 3D encoding scheme. That did not take off, but this implementation in an LED wall has more potential, we think.

The new Toshiba LED wall is not the first display technology with a claim to providing simultaneous 2D and 3D viewing. Such is the capability reported by ColorCode 3-D (Lyngby, Denmark). There are, however, significant differences between the Toshiba and ColorCode 3-D technologies.

The ColorCode 3-D technology is based on a display having conventional RGB subpixels. The viewer wears passive glasses in which one lens transmits blue and the other amber.

Ramstad stated that the Chroma3D Systems approach produces an image with a wider color gamut and a "more comfortable 3D viewing" experience.

Toshiba has not yet revealed plans for deployment of display systems utilizing the new technology.

By Arthur Berman, Display Daily

Adaptive Streaming - a Brief Tutorial

An interesting white paper by Niels Laukens (EBU).

New Data Released on the Performance of Adaptive Streaming over HTTP

Adaptive streaming over HTTP is gradually being adopted by content owners as it offers significant advantages in terms of both user quality and resource utilization for content and network service providers. To understand whether today's commercial players perform well, especially under dynamic network conditions, Cisco and the Georgia Institute of Technology have just released a technical white paper on the subject.

Their experiments covered three important operating conditions:
- First, how does an adaptive video player react to either persistent or short-term changes in the available network bandwidth? Can the player quickly converge to the maximum sustainable bitrate?

- Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner?

- And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay?

The paper identifies major differences between Microsoft's Smooth Streaming, the player used by Netflix, and one open source player (Adobe OSMF). Its findings show that the Smooth Streaming player is quite effective under unrestricted available bandwidth as well as under persistent bandwidth variations. It quickly converges to the highest sustainable bitrate while at the same time accumulating a large playback buffer, requesting new fragments sequentially at the highest possible bitrate.

On the negative side, the paper says that the Smooth Streaming player reacts to short-term available bandwidth spikes too late and for too long, causing either sudden drops in the playback buffer or unnecessary bitrate reductions. Further, the experiments with two competing Smooth Streaming players indicate that the rate-adaptation logic is not able to avoid oscillations, and it does not aim to reduce unfairness in bandwidth sharing.
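The kind of throughput-driven logic being evaluated here can be pictured with a generic adaptation loop: estimate throughput from recent fragment downloads, then pick the highest bitrate that fits under that estimate while protecting the playback buffer. This is a simplified sketch, not Microsoft's actual heuristics; the bitrate ladder, safety margin and buffer target are assumed values.

```typescript
// Generic throughput-based rate adaptation sketch (not the Smooth Streaming
// algorithm). Picks the highest bitrate below a smoothed throughput estimate
// while trying to keep the playback buffer filled.

const bitrateLadder = [350_000, 700_000, 1_500_000, 3_000_000, 6_000_000]; // bps, assumed
const SAFETY_MARGIN = 0.8;       // use only 80% of the estimated throughput
const BUFFER_TARGET_SEC = 30;    // assumed target playback buffer

let throughputEstimate = 0;      // exponentially smoothed, in bps

function updateThroughput(fragmentBits: number, downloadSeconds: number): void {
  const sample = fragmentBits / downloadSeconds;
  throughputEstimate = throughputEstimate === 0
    ? sample
    : 0.8 * throughputEstimate + 0.2 * sample;   // smoothing factor assumed
}

function selectBitrate(bufferLevelSec: number): number {
  // If the buffer is running low, drop to the lowest bitrate to avoid stalling.
  if (bufferLevelSec < BUFFER_TARGET_SEC / 4) return bitrateLadder[0];
  const budget = throughputEstimate * SAFETY_MARGIN;
  const candidates = bitrateLadder.filter(b => b <= budget);
  return candidates.length ? candidates[candidates.length - 1] : bitrateLadder[0];
}

// Example: a 2 Mbit fragment downloaded in 0.5 s, with 20 s already buffered.
updateThroughput(2_000_000, 0.5);
console.log(selectBitrate(20)); // -> 3000000 (4 Mb/s estimate * 0.8 margin)
```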

The Netflix player is similar to Smooth Streaming in that both use Silverlight for media presentation. However, the paper reports that the Netflix player shows some important differences in its rate-adaptation behavior: it is more aggressive, aiming to provide the highest possible video quality even at the expense of additional bitrate changes. Specifically, the Netflix player accumulates a very large buffer (up to a few minutes), downloads large chunks of audio in advance of the video stream, and occasionally switches to bitrates higher than the available bandwidth as long as the playback buffer is almost full. It shares, however, the shortcomings of Smooth Streaming noted above.

The paper says that the OSMF player often fails to converge to an appropriate bitrate even after the available bandwidth has stabilized. The player has been made available so that developers can customize the code, including the rate-adaptation algorithm for HTTP Dynamic Streaming, for their own use cases.

By Dan Rayburn, Business of Video

The Impact of Adaptive Rate Streaming

An interesting white paper by Benjamin Schwarz (Verimatrix and Harmonic).

Orange Ready for the Next Big Challenges

Having established itself as a major Pay TV provider and pioneered multi-screen TV, Orange provides one of the best examples of how a telco can transform itself from a voice/data provider to a media company. The France Telecom brand also demonstrates the impact that video-over-IP has had on the TV industry in the last decade. It now provides a pointer to where the rest of the industry may move next as it looks to develop a converged backend for TV services that currently run over multiple access networks, with the possibility that eventually all video services might be delivered over HTTP.

Luc Barnaud, VP, TV at Orange Technocentre, says the key differentiator for Orange in the TV market today is the way it can provide a complete TV experience. This starts by combining live and on-demand, with VOD, SVOD and catch-up TV still proving a strong consumer benefit compared to broadcast-centric network services, he notes. It also means offering services on the TV, PC, mobile, tablets and connected TVs. Finally, Orange makes its services accessible on the portable devices outside the home as well as inside. “Allowing customers to use their service on multiple screens in different places represents a compelling option,” Barnaud declares.

With the growing popularity of over-the-top video and the increasing penetration of connected TV devices, all service providers now need a strategy to both counter and embrace online video, and potentially monetize it. Orange is no exception. The company provides a content portal on LG televisions in France already, something that gives existing Orange customers and other consumers more opportunities to engage with the brand. It is also engaged with other device manufacturers.

Meanwhile, like a number of major telcos, the company will use its own CDN (Content Delivery Network) to ensure it has a stake in the delivery of OTT video (and the opportunity to improve the Quality of Experience for online services). “CDN infrastructure is a natural extension of our networks and service platforms and leverages our historical skills as an operator,” Barnaud says, pointing to QoS management, 24/7 availability and systems reliability among other core Orange competencies.

The company is also gearing itself to integrate Web-powered entertainment experiences into the IPTV offering more closely. Barnaud explains: “We have had a browser-based strategy for several years and can use our expertise to provide an enriched TV experience that can take advantage of content from the Web, like interactive TV ads connecting to third-party Web servers, and specific events that mix live and Web content.” He says Orange is currently working on a content recommendation solution that harnesses social networks, providing an example of how it can go further in exploiting Web content.

“Our approach to integrating Web services on TV remains very TV-centric,” he adds. “We aim to deliver relevant and tailored services to the customer. Today, the Web as we know it on PC is not easy to use on a TV. Our job is to select the right services and make them relevant on TV in terms of usage.”

Overall, however, Barnaud points out that it is the guarantee of a premium quality service that sets Orange apart from today’s over-the-top services. He highlights the company’s end-to-end control of the content distribution chain and the associated services, from the content platform and portals to the end user device, including network elements covering CDNs, routers, DSLAMs and domestic gateways. In short, the company has the infrastructure to compete on every level, whether via broadband or the managed network.

One of the next big challenges for Orange, and one that will confront most major Pay TV operators in the next few years, is how to rationalise service delivery that now encompasses multiple access networks, managed and unmanaged video, and an increasing array of devices. Barnaud reveals: “We are working on the convergence of our content platforms, starting with the unification of the content management and other service enablers. We are also working on the convergence of the content specific protocols that were previously different according to the screen and access used.”

With Orange’s TV service now available via FTTH and DSL, and live feeds coming over satellite and digital terrestrial as well as IPTV, the company is also migrating to a single core platform for all its fixed TV services. Barnaud acknowledges the growing interest in the industry around the possibility that all video services across all platforms could converge around HTTP adaptive streaming, including for set-top boxes. He says: “Given the success of the protocols and formats that emerged from the web and IT players, it is difficult not to consider that kind of convergence.”

He says the challenge, as a network and service operator, is to find the balance between the benefits of open protocols and software and the performance, security and quality enabled by good vertical, end-to-end integration of content, platforms, networks and devices.

By John Moulding, Videonet

European Operators Face Close Decisions over HD Contribution Encoding

The announcement in late January by Portuguese broadcaster Televisão Independente (TVI) that it is upgrading its SNG vehicles to H.264/MPEG-4-capable contribution encoding systems has set the agenda for other European broadcasters over the next two years. Many operators have to prepare for a world where most content is HD, which requires more efficient encoding than current MPEG-2 products working in an 8-bit profile.

Significantly, TVI has opted for a system from French encoding vendor ATEME that is capable of encoding in the H.264/MPEG-4 10-bit profile, even though it will be operating the product in MPEG-2 mode for now. The decision to start installing MPEG-4-capable encoders today in SNG vehicles used for HD coverage of soccer matches for worldwide satellite distribution by the EBU was made so the broadcaster would be ready when MPEG-4 10-bit contribution becomes more widely deployed in the broadcast community, according to TVI’s chief technology officer José Nabais.

This raises an interesting point because not all broadcasters and operators will migrate to MPEG-4 encoding for contribution, even though they will use it for distribution at lower bit rates around 50Mb/s. There are other options for contribution, notably JPEG 2000, which is being promoted strongly in Europe by T-VIPS of Norway and Swedish media transport vendor Net Insight. The choice made will depend on several factors, including cost of bandwidth, the application and need for interoperability with other broadcasters or operators during the contribution cycle. The principal choices are doing nothing, migrating to MPEG-4 (either 8-bit or 10-bit profile), adopting a version of JPEG 2000 or going to completely uncompressed HD video.

The last option is the least common, and it seems surprising at first sight that any broadcaster or operator would take that path given that most of the industry is going for greater compression, not less. After all, raw HD generates 1.5Gb/s, and double that for the 1080p version that will be deployed in coming years. But given the falling cost of bandwidth across regional and global IP fiber-optic networks, it is sometimes worth trading that cost for the savings that can be made by reducing the need for skilled engineers on-site at outside broadcast events, particularly soccer matches in Europe. Such remote operation requires the ability to transmit full uncompressed video over terrestrial links.
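The 1.5Gb/s figure is simply the HD-SDI link rate: a 1080i/30 signal sampled 4:2:2 at 10 bits over the full 2200 x 1125 raster. The short calculation below shows where the number comes from and why 1080p60 doubles it.

```typescript
// Where the "1.5 Gb/s for raw HD" figure comes from: the HD-SDI link rate for
// a 1080i/30 signal with 10-bit 4:2:2 sampling over the full 2200 x 1125
// raster (luma plus multiplexed chroma gives two 10-bit words per sample).

function sdiBitRate(samplesPerLine: number, linesPerFrame: number,
                    framesPerSecond: number, bitsPerWord: number): number {
  const wordsPerSample = 2; // 4:2:2: one luma word + one interleaved chroma word
  return samplesPerLine * linesPerFrame * framesPerSecond * wordsPerSample * bitsPerWord;
}

console.log(sdiBitRate(2200, 1125, 30, 10) / 1e9); // 1.485 Gb/s (HD-SDI, 1080i/30)
console.log(sdiBitRate(2200, 1125, 60, 10) / 1e9); // 2.97 Gb/s (3G-SDI, 1080p/60)
```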

European telecommunications operator TeliaSonera, headquartered in Stockholm, already provides uncompressed HD video transport from soccer stadiums and horse racing tracks across Europe over its IP fiber backbone network, arguing that the extra bandwidth cost is usually more than repaid by being able to get rid of satellite vans and centralize production from the external sites to the studio.

“It also removes a layer of complexity and takes away a point of failure by avoiding compression,” said Per Lindgren, founder and vice president for business development at Net Insight, which supplies transmission systems to TeliaSonera.

In practice though, as Lindgren agrees, modern encoders hardly ever fail, so the main argument is economic. A reasonable question then is why operators would not adopt the recently developed, mathematically lossless version of JPEG 2000. This guarantees that no picture quality is lost during compression, even in theory, and even after repeated compression/decompression cycles, while reducing the bandwidth of HD streams down from 1.5Gb/s to around 500Mb/s, varying between 400Mb/s and 600Mb/s depending on the content. There is then some variation in the degree of compression, but the main reason for sending video uncompressed is timing. The JPEG 2000 compression process imposes a delay of about 100ms, which can be a problem for broadcasters using remote-controlled cameras with centralized production, which is precisely the application promoted by TeliaSonera.

But for the great majority of cases, contribution compression will continue to be required for years to come, with the focus being on making it effectively as near lossless as possible. This has given JPEG 2000 an edge because even without the mathematically lossless option, it is virtually lossless at bit rates down to 150Mb/s. It is only at bit rates below 70Mb/s that JPEG 2000 starts to break down, making it unsuitable for distribution.

However, there are two other factors. First, while it is true that MPEG-4 compression has until recently been inferior to JPEG 2000 for contribution, this has been in large part because it has been implemented in 8-bit profile. The profile refers to the number of bits used to encode video components for compression, and 10 bits are used at the production stage to produce SDI signals. The video then has to be downscaled to 8 bits and subsampled prior to encoding to use MPEG-4 8-bit profile encoders, causing degradations and artifacts such as color bleeding and smearing. Furthermore, these artifacts increase with each compression cycle.
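The effect of that bit-depth round trip is easy to show numerically. The sketch below only models the truncation from 10 bits to 8 bits and back (not chroma subsampling or the codec itself): a shallow 10-bit gradient collapses onto fewer distinct levels, which is where banding and related artifacts begin.

```typescript
// Illustration of the precision lost when 10-bit production video is
// down-converted to 8 bits before an 8-bit-profile encoder. Chroma subsampling
// and the codec are not modelled; only the bit-depth round trip is.

function to8bit(v10: number): number {      // 0..1023 -> 0..255
  return Math.round(v10 / 4);
}
function backTo10bit(v8: number): number {  // 0..255 -> 0..1023
  return v8 * 4;
}

// A shallow 10-bit gradient (e.g. a sky) collapses onto fewer distinct levels.
const ramp10 = [500, 501, 502, 503, 504, 505, 506, 507];
const roundTripped = ramp10.map(v => backTo10bit(to8bit(v)));
console.log(roundTripped); // [500, 500, 504, 504, 504, 504, 508, 508]
```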

These problems have been largely fixed, however, with the 10-bit profile version of MPEG-4. This avoids the downscaling after production, virtually eliminating progressive degradation during repeated compression cycles and making the encoding almost lossless at bit rates comparable to JPEG 2000. In fact, MPEG-4 allows profiles in the range of 8 bits to 14 bits, but research has found that while 10 bits yield a huge improvement in encoding efficiency over 8 bits, there is relatively little further gain at higher bit depths up to 14 bits.

It may be that JPEG 2000 is still marginally superior, but the use of MPEG-4 enables broadcasters to standardize on a single compression technology across the whole contribution/compression path. As a result, Net Insight’s Lindgren predicts a close battle between JPEG 2000 and MPEG-4 over the coming year or two, even though he insists the former will yield slightly higher quality.

The second factor concerns standardization, where MPEG-4 wins out because of the long history of interoperability in the MPEG world. The JPEG 2000 world, on the other hand, is split between two camps. JPEG 2000 was actually conceived well before year 2000 and was far too complex for cost-effective deployment on the hardware of the day. But the standard proved its quality in a few military projects and was then adopted by Analog Devices, which developed a proprietary implementation on its chipset. By combining multiple chips, it became possible to encode video at much higher resolutions than with the main alternative, MPEG-2. Current JPEG 2000 products evolved from this version, but more recently, an alternative implementation emerged that takes advantage of contemporary Field-Programmable Gate Arrays (FPGAs). This gave a new generation of would-be compression vendors a potential leg up into the market.

This has led the two camps to lock horns in various workgroups, and the eventual outcome is unclear. Yet the long-term success of JPEG 2000 depends on consensus emerging on a common standard, or else the MPEG-4 10-bit profile will most likely become the favored compression mechanism, especially in Europe. Still, the lack of standardization has not deterred many operators from deploying JPEG 2000, with T-VIPS and Net Insight enjoying considerable success.

Indeed, some operators are assuming the standards issues will be resolved and are adopting JPEG 2000 in the belief that this will deliver the best results at relatively high contribution bandwidths above 300Mb/s.

“Some customers, when planning their new contribution systems, look 10 years forward,” said Johnny Dolvik, CEO of T-VIPS. “ORF in Austria selected IP network and JPEG 2000 compression to meet both current and future needs. Our JPEG solution gave them the option of SD, HD, 3G and 3D within the same unit.”

It is true that most current adopters of JPEG 2000 are large operators with their own IP networks where the lack of interoperability between different vendors’ transcoders does not matter so much. In such cases, JPEG 2000 does have one big undeniable advantage: It is only about half the price. But companies involved in the wider contribution ecosystem, backhauling video to affiliates who subsequently decode prior to distribution, are deterred from using JPEG 2000 because they cannot impose a single vendor’s codec on their partners.

The future of the contribution compression scene, therefore, depends partly on the outcome of the JPEG 2000 standards battle, but either way, the MPEG-4 10-bit profile will be a major player. There will also be the option of totally uncompressed video for centralised production, but very few operators will be able to justify the cost of 1.5Gb/s or 3Gb/s per channel for some time to come.

By Philip Hunter, BroadcastEngineering

OpenCable Encoding Specifications

The latest OpenCable encoding specifications with a dedicated chapter for 3D content (see pages 23 to 36).

Stereolabs Launches 3D Production Processor

3D firm Stereolabs has released Pure, a live 3D production system for studio and mobile 3D production. Pure features automated alignment of stereo images, correcting lens, sensor and geometric mismatches directly on set. The processor also provides real-time 3D monitoring, convergence adjustment and a range of tools to help producers and stereographers control depth during shooting.

Source: Broadcast

Google TV Templates to Tempt Developers

Google has released two sets of open source templates to help developers create web sites that will work well with Google TV products. They use standard web technologies based on HTML5, JavaScript and CSS, or optionally Flash. A user interface library has also been released to assist with the development of web sites that can be navigated with a remote control.

The templates are designed to deliver video but are equally suitable for photos or other multimedia. Two sets of templates are provided. One is based on open web technologies. Designed specifically for Google TV, it does not seem to be compatible across all browsers but interestingly it does appear to work reasonably well on an Apple iPad. The other uses Flash and requires Flash Player 10.2, which is currently a release candidate version.
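The remote-control navigation that the accompanying UI library handles can be approximated with a few lines of key handling in the page itself. The sketch below is a generic D-pad handler, assuming the remote's directional keys arrive as ordinary arrow and Enter key events; it is not taken from Google's template code.

```typescript
// Generic D-pad navigation sketch for a TV-oriented web page: arrow keys move
// focus through a flat list of focusable items and Enter activates the current
// one. The data attribute name is a placeholder, not part of Google's library.

const items = Array.from(
  document.querySelectorAll<HTMLElement>("[data-tv-focusable]")
);
let index = 0;

function focusCurrent(): void {
  items[index]?.focus();
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  switch (event.key) {
    case "ArrowRight":
    case "ArrowDown":
      index = Math.min(index + 1, items.length - 1);
      focusCurrent();
      break;
    case "ArrowLeft":
    case "ArrowUp":
      index = Math.max(index - 1, 0);
      focusCurrent();
      break;
    case "Enter":
      items[index]?.click();   // e.g. start playback of the selected video
      break;
  }
});

focusCurrent();
```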



Both sets of templates represent useful starting points for developers. They have been released as open source under the Apache 2 licence, so developers can modify and customise them to meet their needs.

When Google TV first launched with partners Sony and Logitech, it seemed that Google had done little to prepare the market, beyond releasing some guides on how to optimise web sites for display on television.

These new templates will allow developers to deliver video, with full keyboard navigation and playback controls. They may also be useful for developers wishing to target other browser-based connected television platforms. However, many may be waiting to deploy full Google TV applications through the Android Market.

As usual with Google, the company is taking a code-driven approach in an attempt to attract developers. The templates do not reflect any attention to visual design. It is telling that the placeholder media are drawn from application developer events. The video images are not in 16:9 format, although the video itself is, and Google TV assumes a high-definition display. Still, that can be easily fixed. What is worrying is that Google appears to be approaching the television screen through the lens of engineers rather than as an engaging entertainment experience.

Google wandered into the living room with little apparent appreciation for television and still seems precociously preoccupied with coding rather than building relationships. It is not clear that these templates will be enough to persuade networks like ABC, CBS, NBC and Fox to allow Google TV products to access their online video services.

With the number of projects that Google is pursuing, it is not evident that television has its full corporate attention. That said, informitv expects to see continuous enhancement and extension of the Google TV proposition and its open nature will encourage third parties to use it as a platform on which to build products and services.

Source: Informitv

Miniweb Opens Woomi with View to Connected Television

Miniweb is seeking a sweet spot in the living room with its woomi proposition. It provides the glue to stick the long tail of online video programming on connected television screens. At the launch event in London, Miniweb showed informitv the woomi service running on a Samsung smart television. Miniweb plans to roll out onto other brands of connected television devices and displays. The aim is to enable video publishers to reach viewers globally across multiple platforms. It will allow manufacturers to aggregate programming on their products without having to do deals with countless content providers, each with their own widget or application.

Ian Valentine, the founder of Miniweb, is a veteran of interactive television. In 2007, he spun the company out of Sky, which had originally acquired waptv, the company he co-founded at the dawn of the digital television era and which powered Sky’s betting, retail and red-button services for many years. With indefatigable enthusiasm he is now aiming to bring the best of online video to the connected television experience.

That does not necessarily mean bringing the web browsing experience to the television. He draws the distinction between the web and the internet. An internet-connected television is a better television, he argues, but it does not necessarily need to be able to browse the web.



It is a change of approach for Miniweb, which as the name suggests used to be all about a browser, albeit a special browser, designed for the television environment. The technology was based on the original waptv approach, developed as an open standard, WTVML, a markup language and micro-browser specifically for television devices, with an automatic testing mechanism.

In the meantime, the internet has come to the television, from Yahoo! Connected TV widgets to a full Google TV web browser. With no single standard or application market likely to dominate the display market, Miniweb aims to provide a single point of integration for both programming providers and device manufacturers. It offers media brands the prospect of global distribution and offers manufacturers added value for their devices with a potential incremental revenue stream, based on a small share of any retained revenues.

The logic is that manufacturers will always want their own branded portals. There will also be key content brands that they will want to support, like YouTube and Netflix. The major broadcast networks will find their own way to get on connected television screens. Miniweb aims to provide a point of access to other programming providers, like Sail TV, that are unlikely to be able to develop their own applications for each device and get prominent placement on product portals.



Once in the world of woomi, the user is presented with a user interface that can be navigated with up, down, left, right, select and back buttons. Multiple users can establish profiles for their particular preferences and subscriptions. These are stored in the network, so in the future it will be possible for them to be accessed through other woomi compatible devices.

Initially, woomi will be available on Samsung products that support its Internet@TV platform. Miniweb has also developed versions for MHEG, HbbTV and Flash. Miniweb sees its role as an enabler, taking responsibility for creating device-specific thin clients that integrate with its platform in the internet cloud, which provides a single point of integration for programming providers. “It removes the many-to-many relationship problem,” explains its founder. “It takes the spaghetti away.”

Unlike the approach of Google TV or YouView, there is no need to mandate the operating system for television devices and displays, he suggests. This leaves manufacturers free to innovate and differentiate in the years ahead. “It is important not to freeze dry the specification for television.” The answer instead could be based on application programming interfaces that enable an internet service architecture. That is what Miniweb aims to offer with woomi.

The woomi philosophy is that programme providers should publish to people, not devices. The platform will profile stream formats and devices, only presenting media that can be displayed on a particular device. It can even follow individuals as they use different types of device. The aim is also to target viewers based on their viewing behaviour — mediagraphics rather than demographics — creating addressable market segments for advertisers.

With woomi, Miniweb actually benefits from the fragmentation in the market, simply by creating clients for different devices to present a homogeneous addressable market for programme publishers. At the moment, the material available may be eclectic in content and variable in quality, but it offers brands an easier way to gain a presence on connected television screens. woomi will need to attract a critical mass of programming from outside the mainstream to fulfil the true promise of connected television.

The real challenge will be to promote woomi as the place to go to watch such programming. Miniweb will need the support of manufacturers to market and promote the proposition. It has a long way to go, but if Miniweb can get woomi on products from a range of manufacturers, it might just fill the gap in the market between YouTube and Netflix.

Source: Informitv

Are Plug-ins the Future of Web Video?

The video codec wars continue, with parties on both sides of the debate digging in deeper. In a long blog post, Microsoft Corporate VP Dean Hachamovitch — the man behind Microsoft’s Internet Explorer — reiterated the software giant’s full support of H.264 as the dominant format for web video.

Hachamovitch said Microsoft is releasing a plug-in for Chrome users to be able to view H.264-encoded HTML5 videos. The move is a counter to Google’s earlier announcement that it would remove support for H.264 in future versions of its Chrome web browser, relying instead on its own open-source WebM video format for playback of HTML5 video.

The rift has caused some to question the future of standards-based video on the web, as publishers are forced to choose between H.264, which is supported by Microsoft’s IE9 and Apple’s Safari web browsers, and WebM, which is backed by Mozilla’s Firefox, Opera and now Google Chrome. Alternatively, publishers can support both, which would drive up the cost of encoding and storing multiple video assets. Or they could just do what they have always done: continue delivering web video through Adobe’s Flash player on the web and encoding in H.264 for Apple iOS and other connected devices.
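Supporting both formats typically means listing multiple sources on the same HTML5 video element and letting each browser pick the one it can decode. A minimal sketch follows; the file names are placeholders and the Flash fallback is only indicated in a comment.

```typescript
// Minimal sketch of dual-format HTML5 video delivery: the browser picks the
// first <source> it can play (H.264/MP4 for IE9 and Safari, WebM for Firefox,
// Opera and Chrome). File names are placeholders, not real assets.

function addDualFormatVideo(container: HTMLElement): HTMLVideoElement {
  const video = document.createElement("video");
  video.controls = true;

  const mp4 = document.createElement("source");
  mp4.src = "clip.mp4";                         // hypothetical H.264/AAC asset
  mp4.type = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

  const webm = document.createElement("source");
  webm.src = "clip.webm";                       // hypothetical VP8/Vorbis asset
  webm.type = 'video/webm; codecs="vp8, vorbis"';

  video.append(mp4, webm);
  // Browsers that support neither format could fall back to a Flash <object> here.
  container.appendChild(video);
  return video;
}

addDualFormatVideo(document.body);
```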

None of these solutions is ideal, which is why Google and Microsoft are building browser plug-ins to guarantee the widest available support of their favored format across all browsers. When it announced it was pulling support for H.264 in Chrome, Google said it would be doubling down on support of WebM through browser plug-ins that it was making available to IE and Safari users. And Microsoft says it has already built add-ons for Firefox users who wish to display HTML5 video in H.264.

While issuing plug-ins might quell some short-term concerns about HTML5 video delivery, they do little to solve the longer-range issues surrounding the future of web video. H.264, while widely adopted for Flash-based video delivery and on connected devices, is still encumbered by the threat of licensing body MPEG LA someday demanding fees for its use. And WebM, while open source, has some issues of its own; as Hachamovitch points out, Google has not indemnified those who use WebM, a step that could protect video publishers from the threat of patent litigation.

There’s also the issue of hardware support, particularly in mobile devices where processing power is at a premium. Most devices on the market today have built-in hardware acceleration for H.264, which is one reason that publishers rely on it for delivery to those devices. But hardware designs lag behind software advances, which means it will be some time before WebM can be relied upon for HTML5 video without taxing mobile processors or draining device batteries.

Many publishers are keen on the idea of standards-based video delivery, but until these issues are solved, adoption of standards-based video by consumers and publishers will continue to lag delivery in proprietary formats like Adobe’s Flash.

By Ryan Lawler, GigaOM

Passive 3D with a Scanned Retarder - Go or No Go?

Much of the 3D buzz from CES was related to passive 3D HDTVs. And most of the action is around the film patterned retarder technology commercialized by LG Chemical and LG Display. Panels from LG Display will show up in 3D HDTVs from Vizio, Toshiba, Philips, LG Electronics and all six major Chinese TV brands. But there are two other approaches for creating passive polarized 3D HDTVs.

First, let’s clarify how we define a film patterned retarder. It is a film that is aligned and laminated to an LCD panel that provides polarization that is orthogonal on adjacent rows. That is, even rows are left circularly polarized and odd rows are right circularly polarized. 3D images are created from left and right eye image pairs by "interlacing" them into a single frame of video. Users wearing passive polarized glasses can separate the two images and see 3D. There is a loss of half the vertical resolution with this approach, however.
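The "interlacing" step amounts to taking alternate rows from the left and right views to build the single frame the panel displays. A minimal sketch, with greyscale rows used purely for brevity:

```typescript
// Minimal sketch of row-interleaving a stereo pair for a film patterned
// retarder display: even rows come from the left view, odd rows from the
// right view. Each eye therefore sees half the vertical resolution through
// the passive glasses.

type Frame = number[][];   // frame[row][column], one value per pixel

function interleaveForFPR(left: Frame, right: Frame): Frame {
  return left.map((row, r) => (r % 2 === 0 ? row : right[r]));
}

// Example: a 4-row frame alternates L, R, L, R rows.
const L: Frame = [[1, 1], [1, 1], [1, 1], [1, 1]];
const R: Frame = [[2, 2], [2, 2], [2, 2], [2, 2]];
console.log(interleaveForFPR(L, R)); // [[1,1],[2,2],[1,1],[2,2]]
```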

A second way to accomplish this is by using a glass sheet instead of a film sheet. Arisawa pioneered this technology and has been moderately successful with it in the professional market. However, the high cost has confined their approach to professional monitors.

AUO has recently developed a much less expensive way to do this on a glass substrate. It is currently supplying 65" panels to TV makers like Vizio.

The third major approach is an active polarization switch. When used with projectors, a large single-cell LCD panel rapidly switches the polarization state of the light coming out of the projector. RealD and Lightspeed Design produce polarization switching products that do this.

When the approach is applied to an LCD panel it is called a scanned or active retarder. In this case, the film or glass-based patterned retarder is replaced with an LCD panel that can change its polarization state when a voltage is applied. To make this work in practice, the retarder panel is divided into a series of segments. Each segment is activated to change its polarization state in synchrony with the scanning of the rows from top to bottom. The advantage of this approach is that there is no loss of resolution per eye in 3D mode; the downside is the cost of a second LCD panel.
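One way to picture the segmented switching: if the panel rewrites its rows top to bottom once per frame, each retarder segment flips its polarization just after the rows it covers have been refreshed with the other eye's image. The frame rate and segment count in the sketch below are assumptions for illustration only.

```typescript
// Timing sketch for a scanned (active) retarder: the panel refreshes rows top
// to bottom once per frame, and each retarder segment switches its
// polarization state once the rows it covers have been rewritten for the next
// eye. Frame rate and segment count are illustrative assumptions.

const FRAME_RATE_HZ = 120;          // left and right frames alternate
const SEGMENTS = 8;

function segmentSwitchTimesMs(): number[] {
  const frameTimeMs = 1000 / FRAME_RATE_HZ;            // ~8.33 ms per frame
  // Segment k (0-based) covers rows [k/SEGMENTS .. (k+1)/SEGMENTS) of the
  // panel and switches once the scan has passed its last row.
  return Array.from({ length: SEGMENTS },
    (_, k) => ((k + 1) / SEGMENTS) * frameTimeMs);
}

console.log(segmentSwitchTimesMs().map(t => t.toFixed(2)));
// ["1.04", "2.08", ..., "8.33"] - switch offsets within each frame, in ms
```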

Several companies are working on this approach. The most visible has been LG Display, which has been showing prototypes for over a year and a half. Initially, they’d hoped to commercialize this technology in 2011, but they have since pushed it out 1-2 years.

At CES, CPT was showing off its 15.6-inch 1366 x 768 active retarder type 3D display. Although it was shown installed in a laptop, the module size of 359.8 x 210 x 7.0mm makes it a bit thick for a modern laptop display. The display looked great - there was no ghosting present in the images shown.

During CES, we also had a chance to see a new prototype developed by RealD and Samsung LCD in a suite at the Wynn. RealD calls its approach RDZ, and it uses 8 segments to create the polarization scanning. Demonstration units with screen sizes of 15", 17", 23" and 46" were developed. We saw the 46" demo and found it was quite good, with little ghosting.

The technology is clearly interesting, but the need to have an additional panel is a serious consideration. However, it does offer the performance of a shutter glasses 3D display with inexpensive glasses. If the cost of the scanning retarder panel can be kept to the cost of a pair of shutter glasses, then the approach might be viable.

By Chris Chinnock, DisplayDaily

Mediaset Backing ‘Third Way’ for 3DTV

While the two main commercial requirements for 3DTV standards have been defined within the DVB, covering Frame Compatible (phase 1) and Service Compatible (phase 2) technologies, Italian broadcasters are proposing a third, interim approach that makes Frame Compatible 3D services compatible with 2D televisions. If applied, the result would be that broadcasters can deliver one video stream that can be used for 3D viewing today but which can also present a full frame, upscaled HD picture to non-3D televisions.

The obvious drawback to this approach is that the picture reaching the TV screen is only half-resolution HD. However, the big prize for terrestrial broadcasters from combining 3D and 2D services into the same signal is that they do not have to simulcast.

Having to simulcast SD and HD makes life difficult enough for any terrestrial broadcaster that wants to maintain its position as a prestige content provider. Having to simulcast SD, HD and 3DTV would massively limit their possibilities for 3D television on digital terrestrial, due to bandwidth limitations. Broadcasters want to avoid splitting their audience between different channels, anyway.

Mediaset is one of Europe’s biggest broadcasters with its business divided between free-to-air content and a growing Pay TV business (now accounting for approximately 15% of revenues) called Mediaset Premium. The company is backing the new proposal, which has been outlined by the broadcasting associations HD Forum Italia and DGTVi, and which is listed in the latest version of Italy’s profile specifications document, HD-Book DTT 2.0. The proposal has now been presented to the DVB.

Marco Pellegrinato, Vice Direttore Ricerca e Progettazione Tecnica VIDEOTIME at Mediaset, says the broadcaster recognizes the benefits of 3DTV for consumers but has no plans for linear broadcast services today.

“One of the reasons is that very few people could enjoy 3DTV programmes due to the lack of 3DTV aware iDTV [integrated digital TV] sets,” he says. “Commercially speaking, it is not convenient. There is no critical mass. There is not a large enough customer base to launch a service.”

Mediaset also supports the full-resolution HD/3D service compatible mode that is outlined in the DVB’s phase two commercial requirements, but the new proposal inside HD-Book DTT 2.0 could hasten the arrival of 3DTV over digital terrestrial.

“That would give broadcasters a chance to launch a 3DTV compatible channel that can be received by legacy HDTV sets. That maximises channel efficiency in terms of the potential viewer, so it represents an adequate customer base,” Pellegrinato explains.

According to HD Book DTT 2.0, annex M, the interim solution works with 1080i and 720p HDTV but cannot be used with the top-and-bottom Frame Compatible format due to the limited ability of many set-top boxes to perform vertical upscaling.

When the Frame Compatible signal is received by a set-top box connected to a 3D television, the set-top box (or integrated receiver decoder) recognises frame packing information and signals this via HDMI to the television to generate the 3D display. Frame cropping offsets and sample aspect ratio combinations needed for 2D service compatibility also form part of the signal but are ignored.

If the STB is connected to a 2D television, these frame cropping offsets and sample aspect ratio combinations are interpreted and instead it is the frame packing information that is ignored. The set-top box (or integrated receiver/decoder) outputs an upscaled, full frame 2D signal via HDMI to the television. For 2D compatibility, no additional PSI/SI signalling is needed beyond what is already defined for Frame Compatible 3DTV.

The new Frame Compatible 3D with 2D approach does require cropping and upscaling capabilities in HD set-top boxes that exceed the minimum requirements currently defined by DVB.

According to HD Book DTT 2.0, annex M: "Such service compatible modes give service providers the chance to transmit a single service that provides both Frame Compatible plano-stereoscopic 3DTV video and reduced-resolution (halved) HDTV video concurrently, whereas normally HDTV coverage with the same source content would be provided with a separate dedicated HDTV service."

Pellegrinato emphasises that this is only intended as an interim solution while a full-resolution 3DTV Service Compatible mode is developed. The full-resolution service compatible approach works on the basis of delivering a 2D service channel plus an extra Delta Channel that provides the differential between the left eye and right eye views, where this differential generates the ‘depth’. A standard HD television set (not 3D aware) can interpret and decode the 2D service element in isolation, in a standard HDTV frame (1920 x 1080) and render it in full HDTV resolution 2D. Mediaset expects this approach to be available for mass-market use by 2014/2015.

Mediaset is already delivering some 3DTV content, however, and using the digital terrestrial signal to do so, but not as a linear broadcast. As part of its Mediaset Premium service, the company is offering one 3D movie per month that is effectively ‘downloaded’ over-the-air, in what is technically termed datacasting. This is a commercial service, not a trial, and the first such movie was offered to Mediaset Premium subscribers last October.

Italian consumers can buy a Mediaset Premium set-top box (the Tele System TS7500HD) at retail to enjoy the Premium on Demand HD service, which provides a catalogue of 50 movies in any given week. 2D movies are broadcast to the hard disk as well.

The format for this non-linear 3D content is Frame Compatible (side-by-side) and demonstrates Frame Compatible and 2D compatibility in practice, albeit for non-linear content. The set-top box detects the type of television it is connected to and varies the output accordingly. If it is connected to a 2D TV, it sends out the left side of the image, horizontally upscaled to reach full HD resolution. If the STB is feeding a 3D TV set, it sends out the side-by-side image.
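That output decision can be sketched as a simple branch: pass the packed frame through (with frame-packing signalling) for a 3D set, or crop the left half and stretch it back to full width for a 2D set. The types and the HDMI call below are illustrative stand-ins, not a real set-top box API.

```typescript
// Sketch of the set-top box output decision for a side-by-side Frame
// Compatible service, as described above. Frames are reduced to their
// dimensions here; the "HDMI" call is a stand-in, not a real STB API.

interface Frame { width: number; height: number; label: string }

function cropLeftHalf(frame: Frame): Frame {
  return { width: frame.width / 2, height: frame.height, label: "left half" };
}

function upscaleHorizontally(frame: Frame, factor: number): Frame {
  return { ...frame, width: frame.width * factor, label: "upscaled 2D" };
}

function hdmiOutput(frame: Frame, framePacking3D: boolean): void {
  console.log(`HDMI out: ${frame.width}x${frame.height} (${frame.label}), ` +
              `frame packing signalled: ${framePacking3D}`);
}

function outputSideBySideFrame(decoded: Frame, displaySupports3D: boolean): void {
  if (displaySupports3D) {
    hdmiOutput(decoded, true);                                    // 3D set: pass through, signal packing
  } else {
    const fullHd = upscaleHorizontally(cropLeftHalf(decoded), 2); // 2D set: crop and stretch
    hdmiOutput(fullHd, false);
  }
}

const sideBySide: Frame = { width: 1920, height: 1080, label: "side-by-side" };
outputSideBySideFrame(sideBySide, false); // 1920x1080 (upscaled 2D), packing: false
outputSideBySideFrame(sideBySide, true);  // 1920x1080 (side-by-side), packing: true
```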

By John Moulding, Videonet

Joint Venture to Create New Digital Standards

The BBC, ITV and Channel 4 are leading a cross-broadcaster initiative to establish common digital standards. A document on HD will be the first piece of work by the Digital Production Partnership (DPP), which is funded and led by the three terrestrial broadcasters and has representation on its working groups from Channel 5 and Sky, as well as the indie and post-production sectors.

The HD and SD guidelines, which the DPP hopes to release at the end of February, will address equipment and techniques and provide common technical standards, including format, video line-up, levels and gamut, the use of non-HD material and tape delivery.

ITV director of broadcast resources Helen Stevens, who is chair of the DPP, said the guidelines would benefit the entire broadcast community.

“I think this will touch all departments, from the point of completion through to marketing, access services, compliance - all the way through the supply chain,” she said.

A set of guidelines on file-based standards will follow “hard on the heels” of the HD document, Stevens said, with the DPP also planning to address the issues of shared storage, cloud access and metadata standards.

“Metadata could be hugely complex, so we want a minimum, standardised set of metadata to be supplied with or added to completed programmes.”

Stevens said she had been surprised by the level of co-operation among broadcasters, but conceded that certain points would not be subjects for discussion among the rivals.

“We do have to be careful that areas we look at are not seen as commercially competitive. This is all about the practicalities of the supply chain.”

The HD and SD standards document will cover:
Video technical requirements:
- High definition format
- Video line-up, levels and gamut
- Blanking
- Aspect ratios
- Use of non-HD material
- Film for HD acquisition
- Photosensitive epilepsy guidelines
- Safe area for captions
- Standards conversion

Audio technical requirements:
- Stereo audio requirements
- Surround sound requirements
- Sound to vision sync

Delivery requirements:
- Programme layout/format
- Ident clock
- Tape delivery including information such as format, paperwork and timecode

By George Bevir, Broadcast

BSkyB Amends 3D Content Rules

BSkyB has changed its regulations to allow more 2D-to-3D converted content within programmes broadcast on its Sky 3D channel. It is a tacit acknowledgement of the high cost and technical difficulties associated with trying to film stereo 3D content entirely natively with 3D rigs.

Its new guidelines allow up to 25% of non-3D content to be used in any 3D programme, up from the strict 10% of converted material written into its original specifications, published last February. The new rules came into force quietly last year, but Sky has yet to update the specification in the technical section of its 3D website.

Sky said the change brought its 3D guidelines into line with its HD guidelines, which dictate that 75% of content should be in true HD. It said the change was also about taking a “pragmatic approach to supporting the growth of 3D production in the UK”.

The 2D-originated footage must be HD and in segments that do not exceed five minutes during any 15-minute period. This only applies to post-converted 2D-to-3D material, and Sky is still adamant that automated conversion of 2D HD programmes to 3D is not acceptable as “original 3D content”. However, it makes an exception for the use of live conversion tools for certain scenes or camera shots during live events.
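Read together, the guidelines combine an overall cap with a windowed one: converted material may make up at most 25% of the programme, and no more than five minutes of it may fall within any 15-minute period. The small checker below makes the two constraints concrete; it is an illustration with assumed timings, not an official Sky compliance tool.

```typescript
// Illustrative checker for the two constraints described above: converted
// (2D-originated) material limited to 25% of the programme overall, and to
// five minutes within any 15-minute period. Segment timings are assumptions.

interface Segment { startSec: number; endSec: number }   // converted-material segments

function convertedSecondsInWindow(segments: Segment[], winStart: number, winEnd: number): number {
  return segments.reduce((total, s) => {
    const overlap = Math.min(s.endSec, winEnd) - Math.max(s.startSec, winStart);
    return total + Math.max(0, overlap);
  }, 0);
}

function meetsSkyRules(programmeSec: number, segments: Segment[]): boolean {
  const totalConverted = convertedSecondsInWindow(segments, 0, programmeSec);
  if (totalConverted > 0.25 * programmeSec) return false;           // 25% overall cap

  // Slide a 15-minute window across the programme in one-minute steps;
  // no window may contain more than 5 minutes of converted material.
  for (let start = 0; start + 900 <= programmeSec; start += 60) {
    if (convertedSecondsInWindow(segments, start, start + 900) > 300) return false;
  }
  return true;
}

// Example: a 60-minute programme with two 5-minute converted segments far apart.
console.log(meetsSkyRules(3600, [
  { startSec: 0, endSec: 300 },
  { startSec: 1800, endSec: 2100 },
])); // true
```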

By Adrian Pennington, Broadcast