Harmonise European HBB or Watch Google Prevail

There is a real risk that the cost of repurposing content for multiple Connected TV platforms will exceed the original cost of producing content, with the danger that broadcast content will be priced out of the market – weakening the whole concept of Connected TV and Hybrid Broadcast Broadband services. That was the warning delivered by Peter MacAvock, Programme Manager at EBU (European Broadcasting Union) at the OTT-TV World Summit in London during December.

MacAvock also told Europe’s broadcasting industry that while there was no desire for a Europe-wide standard for Hybrid Broadcast Broadband (HBB), more harmonisation was needed, with application interoperability the Holy Grail. He pointed to the global ambitions of Connected TV platforms from Google, Yahoo! and Apple and warned broadcasters that national solutions, built for local market needs, would not prevail.

“Vendor driven Connected TV is the bane of our lives in the European broadcasting community because whatever badge you have on a TV set, you have the same problem: they are all incompatible,” he said. “You have to sign deals with each of the vendors. CE manufacturers are missing a trick by not providing interoperability. The difficulty is that all these services will be driven by content but the content producers will become tired of having to develop their services so many times for so many platforms.

“We need to be very careful that the cost of repurposing content for different platforms does not overtake the cost of producing the content in the first place. That is a real cost we are dealing with in the content community, so there is a real risk of this happening.”

MacAvock noted the three key HBB environments: vendor-driven Connected TV, broadcast-centric HBB with signalling (such as MHP- and HbbTV-based solutions), and what he called managed HBB from the likes of YouView and Google TV. He pointed out that major European public broadcasters have driven the development of HBB in their respective markets, whether using HbbTV in France, MHP in Italy or YouView in the UK, for example.

“The bad news is that broadcasters are focused on domestic markets with slightly different requirements, leading to slightly different standards. We are seeing national solutions and a belief that national solutions can prevail. That is a real weakness because national solutions cannot work,” MacAvock told delegates.

Noting that Google, Yahoo! and Apple are multinational brands with international solutions, he continued: “Other operators are not only operating in France and Germany or Italy but they are operating all over the world, so the broadcasters who are the hybrid leaders have to work together now. Working together is the way we will prevail. Individual national solutions will fail because they do not have the same multinational or global view.”

MacAvock said the EBU is well positioned to try to foster more cooperation and harmonisation, given the role of its members in the DVB Project (which spawned MHP), the IRT (which is driving HbbTV) and YouView. “We are trying to harmonise different technical elements and we are working in the content domain, so we know the best way to pave the road for hybrid. The Holy Grail has to be application interoperability, so you can develop applications and port them onto different platforms without too much difficulty.”

MacAvock pointed out that the broadcast signalling used in MHEG-5 (used for UK interactive TV) and HbbTV is the same. “The question is whether YouView adopts the same signalling,” he said.

He told the OTT-TV World Summit audience that there were a number of areas where the industry could work together to find some harmonisation. These include working with CDNs (Content Delivery Networks) and ISPs to develop a common understanding of how to deliver media over the Internet. There is scope for cooperation on DRM, and he believes the industry can probably agree a common set of standards for metadata so it is treated the same wherever content appears.

The EBU does not believe there can be one HBB standard for the whole of Europe. Different market requirements have to be respected, resulting in different platforms. But what the EBU does want is a core set of principles that the European industry can work from.

MacAvock outlined the motives driving broadcasters in different markets, dividing them into two main camps: Greenfield interactive TV markets with no significant popular interactive TV services today (e.g. France, Germany and Spain), and Brownfield markets where services already exist and must not be undermined by HBB (e.g. Italy and the UK). For the Greenfield markets, the big opportunity is next-generation teletext and access to catch-up TV services beyond the PC.

In both types of market, the aim is to deliver a rich experience that harnesses the strength of the broadband return channel and the increasing sophistication of receivers, which can call upon more processing power to render services more quickly and make them easier to use.

EBU has no doubt about the size of the HBB opportunity or the need for the European broadcast industry to make this a shared success. “The linear broadcast industry is about to confront a change that is probably bigger than the move from Black & White to Colour and it will change our paradigm,” MacAvock concluded.

By John Moulding, Videonet

3D in the Home and Cinema


Source: Sony Professional

ATIS IIF Launches Stereoscopic 3DTV Quality Initiative

The Alliance for Telecommunications Industry Solutions’ (ATIS) IPTV Interoperability Forum (IIF) has recently launched a new work program to address Quality of Service (QoS) and Quality of Experience (QoE) for Stereoscopic 3D IPTV.

Stereoscopic 3D is a rapidly developing area, but further data collection is necessary in order to best understand user perceptions of quality. To this end, the IIF will describe, analyze and recommend multiple quality-related metrics. Potential metrics include: depth maps and depth perception, loss of resolution caused by frame-compatible formats, video synchronization, 3D graphics and closed captioning, ghosting from left-right cross-talk, QoE issues such as 3D fatigue, and more.
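One of those metrics, the resolution lost to frame-compatible formats, is easy to make concrete. The short sketch below is our own illustration, not ATIS code; the array shapes are arbitrary example values. It shows how side-by-side packing squeezes two views into one frame, leaving each eye with half the horizontal detail:

```python
# Illustrative only: why frame-compatible (side-by-side) packing halves
# horizontal resolution per eye. Not ATIS code; shapes are example values.
import numpy as np

def pack_side_by_side(left, right):
    """Squeeze two full-resolution views into a single frame of the same size."""
    # Keep every other column of each eye: half the horizontal detail survives.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)

packed = pack_side_by_side(left, right)
print(packed.shape)  # (1080, 1920, 3): one combined frame
# Each eye now occupies only 960 of the 1920 columns, so a QoE metric can
# quantify the detail discarded relative to a full-resolution 3D format.
```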

Once the information gathering and analyses are complete, the IIF will produce specifications addressing the applicable areas of QoS and QoE.

Source: ATIS

Raising the Bar on 2D-to-3D Conversion

BitAnimate (Lake Oswego, OR) is a small start-up company developing 2D-to-3D conversion technology. They envision their technology being used in a variety of ways, from an online conversion service where users upload their clips and watch them in 3D, to embedded applications such as 3DTVs. They also believe their technology is at such a level that they can approach Hollywood to dramatically reduce the time and cost of making theatrical-quality conversions. Other companies are already in this space, most notably JVC and DDD. Having seen a demo recently, we think the main differentiating feature is that it actually works as advertised, with fewer artifacts. And it even works in real time.

I know, the purists will say that converted 3D content will never look as good as native 3D content. But we have seen some horrendous ‘native’ content too. The result of either approach ultimately depends on the skill of the artists involved. The challenge with automated conversion is that the machine, not a person, is making the artistic decisions, and it is doing so very rapidly in the case of real-time conversion.

We were invited to BitAnimate’s offices for an exclusive demonstration of their technology prior to their planned demonstrations in a private suite at CES in January. They provided a side-by-side demonstration of their software with the built-in 2D-to-3D conversion on the Samsung 3DTV (based on the DDD software). The source material was Jurassic Park and Troy.

We tried to look at a lot of converted content while researching our 2010 Real-Time 2D-to-3D Conversion report and developed a flow chart of how most conversion algorithms operate. There is a lot going on in a very short period of time: analyzing the scene to figure out what is in it, creating a depth map using visual and/or motion cues, constructing the 3D image while filling in as much missing detail as possible, and finally applying some post-processing to make sure the colors and gamma are the same for each eye. And you do all of this about 30 times per second.
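To make that flow chart concrete, here is a deliberately naive sketch of the per-frame loop. This is emphatically not BitAnimate’s algorithm: the depth cues (brightness plus frame difference), the fixed pixel shift and the function names are placeholder assumptions, and occlusion filling and color matching are left out.

```python
# A toy sketch of the generic per-frame 2D-to-3D pipeline described above.
# NOT BitAnimate's algorithm: the depth cues and shift model are naive placeholders.
import numpy as np

def estimate_depth(frame, prev_frame):
    # Crude cues: brighter pixels and moving pixels are assumed to be closer.
    luma = frame.mean(axis=2)
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)).mean(axis=2)
    return 0.7 * luma / 255.0 + 0.3 * np.clip(motion / 64.0, 0.0, 1.0)  # 0 = far, 1 = near

def render_views(frame, depth, max_shift=8):
    # Shift each pixel left/right in proportion to its estimated depth.
    h, w, _ = frame.shape
    shift = (depth * max_shift).astype(int)
    cols = np.arange(w)
    left = np.empty_like(frame)
    right = np.empty_like(frame)
    for y in range(h):
        left[y] = frame[y, np.clip(cols + shift[y], 0, w - 1)]
        right[y] = frame[y, np.clip(cols - shift[y], 0, w - 1)]
    return left, right  # occlusion filling and color/gamma matching omitted

# Run the stages on two random "frames"; a real converter repeats this ~30x per second.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (270, 480, 3), dtype=np.uint8)
cur = rng.integers(0, 256, (270, 480, 3), dtype=np.uint8)
left_view, right_view = render_views(cur, estimate_depth(cur, prev))
```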

BitAnimate considers their conversion process to be a trade secret, so they wouldn’t reveal too many details. When asked directly about their process, Behrooz Maleki, President of BitAnimate, used an automotive analogy: “While you could describe two cars as having an eight-cylinder engine, two doors, a transmission and four tires, you don’t know if it is a Ferrari or a Pontiac.” So perhaps our flow chart is fairly accurate, but how they do each step is the secret sauce.

Maleki’s experience in video processing goes back to his days at InFocus in the late 1990s. While there, he developed a chip to do deinterlacing and image processing to compete with the market-leading solution from Faroudja. “The Faroudja chip was $230, but our solution was a $15 chip, and it looked better in a side-by-side demo,” commented Maleki.

The same approach is being applied to 2D-to-3D conversion. Maleki says that not a lot of hardware is needed to implement his algorithms. The graphics cards on modern PCs and the video processing cores in TVs should be adequate, he says.

In addition, they are developing a new 3D web site that will potentially offer a 3D conversion service. Upload your 2D content and get back streaming 3D content. This could go to your iPhone or PC (using Silverlight). BitAnimate uses their own 3D player, which allows the user to choose the output format for the 3D platform they are viewing the content on.

It seems logical that if 2D-to-3D conversion can be done very well there should be a market for it. BitAnimate seems to be another example of a small company with a good technology having the potential to raise the bar on everyone.

By Dale Maunu, Display Daily

New Gadget Promises 3D Without the Headaches

In 1907 a Polish optical scientist named Moritz von Rohr unveiled a strange device named the Synopter, which he claimed could make two-dimensional images appear 3D. By looking through the arrangement of lenses and mirrors, visitors to art galleries would be drawn into the paintings, as if the framed canvas had become a window to a world beyond. But the Synopter – heavy and prohibitively expensive – was a commercial failure, and the device vanished almost without trace.

A century later, Rob Black is hoping to rekindle interest in von Rohr's creation. A psychologist specialising in visual perception at the University of Liverpool, UK, Black has designed and built an improved version he calls "The I" (UK Patent Application No. 1003690.3). Unlike some 3D glasses, the device uses no electronics, and works on normal 2D images or video.

Playing Tricks on Your Eyes
The device works in the opposite way to the 3D systems employed in cinemas. There, images on the screen are filtered so that each eye sees a slightly different perspective – known as binocular disparity – fooling the brain into perceiving depth. "The I" ensures that both eyes see an image or computer screen from exactly the same perspective. With none of the depth cues associated with binocular disparity, the brain assumes it must be viewing a distant 3D object instead of looking at a 2D image. As a result, the image is perceived as if it were a window the viewer is looking through, and details in the image are interpreted as objects scattered across a landscape.
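For readers who want the geometry behind that trick, a standard textbook approximation (our addition, not from the article) relates disparity to depth: it shrinks towards zero as objects recede, which is why a zero-disparity view reads as distant.

```latex
% Angular disparity \delta between two points at depths Z_1 and Z_2,
% viewed with an interocular separation b (roughly 6.5 cm):
\delta \approx b\left(\frac{1}{Z_1} - \frac{1}{Z_2}\right)
% As Z_1, Z_2 \to \infty, \delta \to 0: a scene presented with zero
% disparity (both eyes seeing the identical image) is therefore
% interpreted by the brain as a distant, three-dimensional view.
```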

The perceptual trick, called synoptic vision, is apparent on any nearby two-dimensional image, but is especially marked where other depth cues exist. For instance, the brain will naturally assume an animal in the 2D image is in the foreground if it is large, and far away if it is small.

No More Headaches
Black says that the device also avoids the headaches associated with other 3D technologies. In movie theatres, the eyes need to focus on the screen itself to see objects sharply, but the 3D effects can force the viewer to try to focus several metres in front of or behind the screen instead. “Even if you use the world’s best 3D kit, it can still present conflicting perceptual information,” Black told New Scientist.

Because his device uses no binocular disparity, the viewer isn’t forced to attempt such impossible feats of focusing; instead, they can focus naturally on any object in the image, using other cues such as size to ‘decide’ what depth the object occupies. “By turning off that conflicting information, you can enjoy the scene in the way the artist depicted.”

Currently the device is still a prototype, but Black hopes that his synoptic viewer will one day be incorporated into existing 3D systems. "I think 3D is impressive at the moment, but with this we can get significantly closer to reality simulation."

By Frank Swain, New Scientist

Technicolor Launches 3D Certification Program

Technicolor announced the launch of its 3D certification program, branded “Technicolor Certifi3D”. The certification program is geared towards broadcasters and network service providers, with the goal of delivering high-quality, comfortable 3D experiences to end consumers.

Technicolor Certifi3D was created to ensure that 3D material meets minimum quality requirements before it is delivered to consumers. As part of the service, Technicolor evaluates each shot against a set of objective criteria for stereographic reproduction, including a 15-point quality checklist to identify common errors in production which result in suboptimal 3D content. The company will also offer training programs to broadcasters and content creators to help them migrate their production and post-production techniques from traditional television to the three dimensional medium.

Behind the technology that serves as the foundation for the Technicolor Certifi3D service is an advanced 3D analysis software tool developed by Technicolor’s Research and Innovation team. Utilizing the left and right source masters, the software builds a 3D model in real time, giving an accurate pixel count for objects that sit so close to or so far from the viewer that they would cause discomfort. It also automatically detects and flags conflicts with the edges of the TV screen, another significant source of discomfort for 3D in the home.
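To give a feel for what such an analysis involves, here is a minimal sketch of a disparity comfort check. It is our own illustration, not Technicolor’s tool: the comfort thresholds, the edge-band width and the function names are all assumptions.

```python
# Illustrative only: a naive comfort check over a per-pixel disparity map.
# NOT Technicolor's Certifi3D analysis; thresholds are assumed values
# expressed as a fraction of screen width.
import numpy as np

MAX_POSITIVE = 0.02    # assumed limit: 2% of width behind the screen
MAX_NEGATIVE = -0.01   # assumed limit: 1% of width in front of the screen

def comfort_report(disparity, width):
    """disparity: horizontal offset in pixels between left and right views."""
    norm = disparity / float(width)
    too_far = int(np.count_nonzero(norm > MAX_POSITIVE))
    too_near = int(np.count_nonzero(norm < MAX_NEGATIVE))
    # An object in front of the screen that touches the frame edge creates a
    # "window violation", one of the edge conflicts mentioned above.
    edge_band = np.concatenate([norm[:, :8].ravel(), norm[:, -8:].ravel()])
    return {
        "pixels_too_far": too_far,
        "pixels_too_near": too_near,
        "edge_violation": bool((edge_band < 0).any()),
    }

disparity_map = np.random.default_rng(1).normal(0.0, 10.0, (1080, 1920))
print(comfort_report(disparity_map, width=1920))
```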

“Our 3D certification platform allows our stereo technicians to quickly and precisely diagnose many of the issues that create viewer fatigue and discomfort,” says Pierre Routhier, Technicolor’s Vice President for 3D product strategy and business development. “Our goal in launching the Certifi3D program was to take a proactive approach in support of the industry to ensure a consistent, quality 3D experience for the end consumer in the home.”

Technicolor is a leader in providing an array of 3D services to its media and entertainment customers, ranging from 3D visual effects, post-production and Blu-ray 3D services to 3D VOD encoding and mobile 3D.

Technicolor 3D Certification Poster

Source: Technicolor

Android’s Gingerbread Brings WebM to Mobile Phones

Monday’s release of Android 2.3, codenamed Gingerbread, adds support for the WebM open video format to the smartphone platform. Gingerbread users will be able to play WebM videos in their device’s browser, and Android application developers will be able to make use of WebM as well.

The first Gingerbread release will ship with libvpx 0.9.2, which is a slightly outdated version of the WebM codec. Support for the newest WebM release, 0.9.5 (code-named Aylesbury), will be pushed out with a maintenance release, according to Google’s WebM product manager John Luther. Users will be able to access every WebM video stream or file with the WebM release included in Gingerbread, but the coming update should improve the format’s playback performance and memory footprint, amongst other things.

Support for Android is an important first step for WebM to gain market share. Google open sourced the video format in May, and it has since been integrated into Firefox, Chrome and Opera. WebM has also gotten more support from video vendors and application developers.

Luther said in November that 80 percent of YouTube’s popular videos are now available in WebM, and Skype’s client started to utilize WebM for its new group video chat functionality. The next step for WebM is to get on devices, and the first chipsets supporting hardware acceleration for WebM are expected to reach the market place in early 2011.

However, it may take a while before many Android users will be able to make use of WebM. Most network operators are notoriously slow to roll out new Android versions. In fact, 56 percent of all Android handsets are still running version 2.1 or older, despite the fact that 2.2 has been available for close to six months now.

By Janko Roettgers, GigaOM

3D Briefing Document for Senior Broadcast Management

This briefing document from the EBU Study Group helps broadcast managers make sense of 3D TV.

5 Reasons Google Bought Widevine

Google announced today that it will acquire Widevine Technologies, giving it access to technology necessary to securely deliver video to a wide range of connected devices. The acquisition is more than just a technology play on Google’s part; the Widevine purchase will also bring deep Hollywood relationships and improve its chances of getting Google TV deployed on consumer electronics devices.

Terms of the deal weren’t disclosed, but you can bet Widevine pulled in a pretty penny; the startup has raised $51.8 million in funding since recapitalizing in 2003, including a $15 million strategic investment last December led by cable operator Liberty Global and Samsung Ventures. Widevine could be invaluable to Google, as it provides technology and expertise in a number of fields that could help grow Google’s overall video business. Here are the top five reasons Google had its eyes on the company:

1. Everyone Needs DRM
Widevine is a digital rights management firm, first and foremost, and DRM isn’t going away. Providing a secure way for content owners to distribute video online and to a number of connected devices will be table stakes in Google’s broader video ambitions. Whether it’s getting premium content on YouTube or securing video distributed to Google TV-powered devices, Widevine will give Google the technology and peace of mind to strike those deals.

2. Cozying Up with Hollywood
Google has a problem — a content problem, that is. The company’s efforts to get long-form premium content on YouTube have generally fallen flat, and its Google TV products were met with universal disdain from media companies that acted quickly to block their online video streams from being accessible on those devices. Widevine has one thing that Google doesn’t: the trust of Hollywood. After providing the DRM technology used by a number of movie studios as well as online distributors like Netflix, Sonic Solutions and Lovefilm to deliver videos online and on connected devices, Widevine is in a unique position to make introductions to some key players in Hollywood.

3. Connecting Google TV to More Devices
Google launched its Google TV operating system on a series of TVs and Blu-ray players from Sony as well as broadband set-top boxes from Logitech, but it clearly desires to embed the technology on other devices, and is rumored to be courting Samsung, Toshiba and other manufacturers to do so. Well, Widevine’s technology is available on products from Apple, Haier, LG Electronics, Nintendo, Panasonic, Philips, Samsung and Toshiba, as well as more than 50 different set-top boxes. Google could leverage Widevine’s relationships with those manufacturers, and maybe even connect its technology into the broader Google TV code base.

4. YouTube Everywhere
In addition to getting Google TV on more connected devices, Widevine’s embedded technology could also help Google speed up distribution of YouTube video streams on more TVs, Blu-ray players and mobile handsets. Not just that, but by providing advanced DRM for those streams, Widevine could help make potential content partners more comfortable with those streams being delivered by YouTube.

5. Android Needs Adaptive Streaming
Android mobile devices are swarming the market, and with Flash installed, they promise users the ability to watch any video stream available on the web. There’s just one problem: those videos, for the most part, aren’t optimized for mobile delivery. While Apple has built proprietary adaptive streaming technology for its mobile devices, Android phones don’t have a graceful way to deal with fluctuations in network bandwidth. Widevine, which makes video optimization technology in addition to DRM, could help solve that problem by helping Google add adaptive streaming to future Android devices.
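For context, the core idea of adaptive streaming is simple: measure how fast the last chunk of video arrived and pick the next chunk’s bitrate accordingly. The sketch below is a minimal illustration of that idea, not Widevine’s or Apple’s implementation; the bitrate ladder, safety margin and fetch interface are assumptions.

```python
# Minimal sketch of throughput-based adaptive bitrate selection.
# NOT Widevine's or Apple's implementation; ladder and margin are assumed.
import time

BITRATE_LADDER = [300_000, 700_000, 1_500_000, 3_000_000]  # bits per second
SAFETY_MARGIN = 0.8  # only commit to 80% of the measured throughput

def choose_bitrate(measured_bps):
    budget = measured_bps * SAFETY_MARGIN
    candidates = [b for b in BITRATE_LADDER if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER[0]

def play(segments, fetch):
    """segments: iterable of segment ids; fetch(seg_id, bitrate) -> bytes."""
    bitrate = BITRATE_LADDER[0]  # start conservatively
    for seg in segments:
        start = time.monotonic()
        data = fetch(seg, bitrate)
        elapsed = max(time.monotonic() - start, 1e-3)
        throughput = len(data) * 8 / elapsed  # bits per second just observed
        bitrate = choose_bitrate(throughput)  # adapt the next request
```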

By Ryan Lawler, GigaOM

Bram Cohen: BitTorrent Protocol & Live Streaming Don’t Mix

BitTorrent mastermind Bram Cohen knows the strengths and weaknesses of the P2P protocol he invented more than eight years ago, and he’s not ashamed to point out one particular downside: BitTorrent is the wrong approach for live streaming.

During an interview at NewTeeVee Live recently, he explained to me that BitTorrent just has too much latency to be viable for such applications. “Just the fact that it’s using TCP makes that completely impossible,” he said.

Cohen has been working on his own live streaming solution for the last two years, and he said it has only recently become close to releasable. His new approach to P2P live streaming is being developed as a product of BitTorrent Inc., but the company has so far kept mum about what the product will eventually look like.

However, Cohen hinted at the possibility that BitTorrent will compete with live streaming sites like Ustream and Justin.tv. Asked whether this technology is for networks like ABC or a guy in his basement, he said: “ABC can afford to pay for whatever they want to do right now.”

Users just starting out, on the other hand, often don’t have the infrastructure available to deal with possible overnight success. “Peer to peer is really a democratizing technology,” he said.

By Janko Roettgers, GigaOM

Good News: Flash Just Got Less Painful

Adobe released the first beta of its Flash Player 10.2 today. The most notable improvement is advanced hardware acceleration, which should considerably reduce, and in some cases entirely eliminate, the CPU load of playing Flash videos on most modern computers. The improved efficiency is due to the implementation of Adobe’s Stage Video API, which makes use of a computer’s GPU for close to all video-related computation.

First reports indicate the impact of the new player is most notable under Windows, with heise.de reporting that it was able to play 1080p HD video at a CPU load of zero percent. The online magazine reported CPU loads between four and five percent under Mac OS X. These loads could even be maintained while displaying overlays on an HD video, something that led to much higher CPU usage without Stage Video.

I saw slightly less efficient CPU loads when I tried the new Flash player under OS X today, but 8 percent isn’t really all that bad for 1080p, either.


However, desktop users won’t be the only ones happy about the Flash player’s new efficiency. Flash 10.2 and its underlying Stage Video technology should also improve playback on set-top boxes and other connected devices.

From Adobe’s web site:
“The performance benefits of stage video are especially pronounced for televisions, set-top boxes, and mobile devices. These devices do not have CPUs as powerful as desktop computers, but they do have very powerful video decoders capable of rendering high-quality video content with very little CPU usage.”

In fact, Adobe cooperated with Google to bring Stage Video to Google TV, where the technology is currently up and running. We should see more devices making use of the optimized hardware acceleration soon.

By Janko Roettgers, GigaOM