Why Syncing the Second Screen is Key

Thirty per cent of tablet use and 33 per cent of Internet use happens in front of the TV. And if not curated carefully, this could start to negatively impact TV’s influence. The next billion televisions will be Internet connected. That's going to turn TV viewing into a much more social experience – and provides the perfect platform for second screen integration.

In fact, a growing body of research predicts that second screen apps – and especially social integration – could have a serious impact on the TV industry. Research by the mobile and digital technology researcher Mobile Interactive Group (MIG) predicts that smartphone adoption will drive TV and mobile multi-tasking in the UK and US. If handled right, this could result in a more engaged audience, significantly increasing programme interaction.

Rather than distracting viewers, the second screen can draw engagement through personalisation and interactivity. In the future, apps could connect viewers to the social graph, aid content discovery, encourage interactivity and, most important of all, serve real-time related content based on what the TV is showing.

But of course the key to success is being able to sync the app to the TV – without this, the second screen experience can easily slip from enhancing the TV experience to pulling viewers away from the main screen.

The Problem with Syncing
To create a compelling real-time user experience, any second screen app needs to sync with the main TV – and finding out what’s playing on the TV is not easy. The most basic solution to this problem is to create a second screen app for one TV show in particular – e.g. encourage viewers to download the “Mad Men” app to use while watching the show. However, this doesn’t completely solve the problem, because showings differ by timezone, pay TV provider and region. In the USA this problem is especially important to solve because many TV programmes premiere at different times on the East and West coasts.

Approaches to Syncing
1. Listening
Many companies are looking at the problem of syncing from different angles. ABC is using audio fingerprinting on the iPad to hear what the TV is playing. Similarly, Shazam has just raised $32 million to “listen” to the TV, identify commercials, and provide related advertising content. And IntoNow, a 12-week-old startup recently acquired by Yahoo for $20 million, used the technology to provide second screen interaction that travels with you as you move between programmes.

However, audio fingerprinting or “listening” can require advanced technology and infrastructure, and it has inherent problems: lag time in identifying the show (the app clips ten seconds of audio, sends it to servers, waits for identification, then receives the response), interference from background noise, and the fact that it works only with native applications, not generic web applications. It also needs a microphone, or it won’t work at all.
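The round trip behind that lag can be illustrated with a small sketch. Everything here is an assumption for illustration only: the fingerprint is a toy energy-profile hash (production systems such as Shazam hash spectral landmarks instead), and the network and match delays are made-up numbers.

```python
import hashlib
import time

CLIP_SECONDS = 10          # the app records ~10 s of audio before identifying
SAMPLE_RATE = 8000         # low-rate mono capture (assumed) keeps uploads small

def fingerprint(samples):
    """Toy fingerprint: hash a coarse per-second energy profile of the clip."""
    profile = []
    for sec in range(CLIP_SECONDS):
        chunk = samples[sec * SAMPLE_RATE:(sec + 1) * SAMPLE_RATE]
        profile.append(sum(abs(s) for s in chunk) // max(len(chunk), 1))
    return hashlib.sha1(bytes(str(profile), "ascii")).hexdigest()

def identify(samples, network_delay_s=0.5, match_delay_s=0.3):
    """Simulate the client round trip: clip -> upload -> match -> response."""
    start = time.monotonic()
    fp = fingerprint(samples)                  # computed on the device
    time.sleep(network_delay_s)                # upload to the matching servers
    time.sleep(match_delay_s)                  # server-side database lookup
    lag = CLIP_SECONDS + (time.monotonic() - start)
    return fp, lag                             # total lag includes the 10 s clip

samples = [((i * 37) % 256) - 128 for i in range(CLIP_SECONDS * SAMPLE_RATE)]
fp, lag = identify(samples)
print(f"match {fp[:8]}..., total lag ~{lag:.1f} s behind the broadcast")
```

Even with instant matching, the viewer is at least ten seconds behind the broadcast, which is why low-latency alternatives to listening are attractive.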

2. TV Checkin
TV checkin, pioneered by the likes of GetGlue and Miso, requires minimal infrastructure to roll out. TV checkin encourages viewers to sign into a particular show, and chat about it as they watch. The major downside of this approach is that checking in adds an extra step – only the most highly engaged consumers participate.

3. Automated Systems
Anthony Rose has forecast the rise of automated systems that know the content we are watching on the main TV and match content on companion apps, with companies like Flingo coming out of stealth mode. According to Rose, the cleanest way to sync the second screen is to interface with the set-top box software. This requires intelligence in set-top boxes, but connected and smart TVs are steadily gaining a foothold in the market. The advantage of this approach is that the consumer doesn’t have to do anything – their tablet or smartphone “just knows” what they’re watching. This perfectly complements the lean-back attitude of TV, helping consumers embrace the second screen.

4. Curated Systems
Another method of triggering second screen interaction is offered up by companies such as The Application Store (TAS), Ex Machina, Screach, MIG’s mVoy and others who offer white-label customer management systems. These systems provide a framework tied to playout timelines in the studios, allowing interactivity and engagement – quizzes, voting, predictions, social media, chat and so on – to be created on second screen devices via apps and web pages, and they do not necessarily need audio fingerprinting to synchronise.

For instance, the TAS Screentoo application framework is integrated into Morpheus, the broadcast automation system from Snell, the UK-based global broadcast playout system provider, which is used by over 700 broadcasters worldwide. This gives broadcasters real-time playout data, essential to maintaining synchronicity between the primary broadcast and second screen devices.

Netherlands-based Ex Machina, a social TV veteran, works closely with a number of major production companies to create real-time engagement around reality and game shows in particular, bringing its strong background in gaming and game mechanics to clients in this emerging space with its PlayToTv platform.

As synchronising the second screen becomes easier, expect to see TV becoming more interactive, engaging and personalised. This is a trend that enables a lot of other changes happening in the TV industry, namely targeted advertising, social TV and personalisation.

By Emma Wells, Appmarket.tv

ADS Pitches Low Cost 3D Conversion

Advanced Digital Services has launched a new service offering that allows content owners and distributors to convert 2D content to 3D stereographic TV much faster and at a lower cost. The move comes as some content owners have been looking for ways to convert their content to 3D. Most notably, CBS has reportedly been pitching a 3D channel to operators that would feature prime time programming and other fare that had been converted to 3D.

ADS believes its solution would make it cost effective for producers and content owners to expand their 2D to 3D conversion efforts, which in the past had either produced lower quality product or was so expensive that it was only suitable for big budget Hollywood fare. That would also help overcome the lack of content that has limited consumer demand for new 3D TVs, notes ADS CEO Thomas Engdahl.

Currently, 2D to 3D conversion for major feature films is an extremely labor-intensive process that can cost $60,000 to $90,000 per minute, or more, Engdahl explained in an interview. While high costs have traditionally made conversion of 2D TV programming uneconomical, he contends that ADS can convert content for only $3,000 to $5,000 a minute and can complete the process very quickly, doing a 46-minute episode in as little as 4 days.

Such costs would also make the conversion of library product as well as new production practical. "We are seeing a lot of interest from the studios in conversion, and some major studios are already using the technology," he notes. Science fiction, space programing, and content with open spaces, ranches, aerial shots, car races, CGI and animation would work particularly well in 3D, Engdahl says.

But he also stresses that it is important not to overdo the 3D effects in conversion, which can produce eyestrain in viewers, and that certain types of content, such as hand held reality fare, do not work well.

"There is a market void for 3D content that this can help content owners fill," he says. "It could really give library content a new life."

By George Winslow, Broadcasting & Cable

Orange Checks In with iPhone Social TV App

With TVcheck, its new social TV app, Orange aims to “revolutionise the social TV experience”. Available free, the TVcheck community app automatically recognises programmes users are watching on the 19 French DTT channels using visual recognition technology. It is now available to all iPhone users, whatever their internet supplier, and will come soon to other mobile operating systems.

TVcheck provides a multi-screen and game experience for discussing TV content. The application allows users to share their thoughts on the show they are watching through a number of social features such as status updates, comments and programme recommendations. Users can log in with their Facebook credentials, making it easy to add friends from their Facebook account.

Each user has his or her own profile, which gives access to an activity log, messages and friends, and allows users to display their favourite programmes using the "I like" function on each programme’s page.

"We are all looking for shared digital experiences to stay connected with others. With TVcheck, there's no need to wait until the next day to talk about a good show seen the evening before. One can immediately do it when sitting in front of the TV in the living room", said Patrice Slupowski, VP Digital Innovation & Communities at Orange.


By Pascale Paoli-Lebailly, RapidTV News

Open Broadcast Encoder

Open Broadcast Encoder is a fresh implementation of an open source transcoder/transmuxer. It is designed from the ground up to be flexible and scalable enough for both 24/7 live streaming and offline operations.

Open Broadcast Encoder aims to be compliant with all relevant broadcast industry standards and practices, with a special emphasis on quality and stability. Open Broadcast Encoder is based on existing high-quality open source projects including x264, FFmpeg and libmpegts.

The project is divided into two subprojects that share core components: OBE-RT for real-time 24/7 operation and OBE-VoD when dealing with file-based workflows.

Open Broadcast Encoder VoD 0.21

  • VBR and CBR Transport Stream Muxing, full control of settings (e.g. PIDs, languages)
  • DVB/CableLabs compliant VoD content muxing
  • High quality AVC encoding powered by the x264 encoder, including support for 10-bit and 4:2:2 input with high speed colourspace conversion
  • Input support from a huge range of formats through FFmpeg
  • Support for AC-3, MP2 and AAC (LATM and ADTS) audio
  • SCC caption file input. ATSC/Echostar captions supported
  • Support for soft pulldown
  • DVB 3D and CableLabs 3D support
  • DVB AU_Information support

Open Broadcast Encoder Realtime 0.1 Alpha
  • HD/SD SDI input with Decklink cards. 10-bit to 8-bit dithering and high speed 4:2:2 to 4:2:0 colourspace conversion
  • AVC encoding using the x264 encoder
  • MP2 audio encoding
  • Full control of the TS mux parameters
  • UDP/RTP output with unicast and multicast output support
  • Experimental IP-in to IP-out transcoding with audio and PID passthrough

OBE VoD and OBE-Realtime can be downloaded from Google Code.

Invidi Preps 'SnapPing' TV Tags for Second-Screen Interactivity

Targeted-ad solutions vendor Invidi Technologies has spent two years quietly developing a tagging system, dubbed SnapPing, that promises to let TV advertisers and networks deliver interactive experiences -- without having to go through a set-top box.

Invidi's SnapPing uses an on-screen "SnapTag" to indicate that there is interactive content associated with the TV show or ad. Then, using a phone or tablet device with the SnapPing app, a user identifies the tag using audio, text, voice or image recognition to link to the desired information.

For example, during an NFL game, the network might put up SnapTags to deliver additional player stats to viewers, while a Hollywood studio could embed a tag in a 30-second spot that kicks users to a longer-form version of a movie trailer.

"We're making television interactive without using the cable infrastructure," Invidi president and CEO Dave Downey said. "We wanted to create a companion app not for a show, or a network, but for all of TV -- the whole megillah."

The system hasn't launched yet, but will eventually be accessible through iPads, iPhones, Android devices and at SnapPing.com, according to the company. The second-screen approach addresses several things programmers and marketers hate about conventional interactive TV, Downey said. A smartphone or tablet doesn't obscure the TV screen real estate and provides an environment for richer interactive experiences.

"How can you have a Motorola DCT-2000 set-top compete with what you can do on an iPad? You can't," Downey said.

Unlike other approaches, such as Shazam for TV, SnapPing lets viewers identify content using a "SnapTag" not only via audio recognition but also speech, text entry and image recognition. Downey said that distinction is crucial: "Our research proves that audio detection is not enough. It is a nonstarter for adults over 35, and it is limited to situations where audio detection can work."

According to a $250,000 focus group study Invidi commissioned, consumers strongly preferred speaking or texting the SnapTag into a device, with Shazam-style audio recognition a distant third. Invidi is refining the format for the SnapTag, but Downey said it will work with both live and recorded shows.

The Invidi system also will include a "Snap an App" feature, which will let a user install an app after accessing a SnapTag, as well as autotags, which will let a user push one button to get all the interactive content associated with a show.

Still, Invidi's SnapPing will have to contend with Shazam's TV play. The Shazam apps, which originally were designed to ID songs, can now identify the audio in a TV show or ad (which has been ingested and processed ahead of time). Shazam for TV, since launching in February, has reached more than 100 million people and served 5.5 billion impressions, according to executive vice president of advertising sales Evan Krauss. "We have a huge user base," he said.

Advertisers that have aired "Shazamable" ad campaigns include Honda, Starbucks, Paramount Pictures' Transformers 3, Procter & Gamble and Progressive Insurance. Last month, Shazam raised a $32 million round of funding to be used to support Shazam for TV.

Invidi, for its part, has received $111.5 million in funding, from investors including DirecTV, Google, WPP's GroupM, Motorola Ventures, Experian, NBC, Verizon and venture capital firms Menlo Ventures, InterWest Partners, EnerTech Capital, Westbury Equity Partners and BDC Capital.

New York-based Invidi has about 80 employees. Its addressable-advertising system is under contract for 40 million homes with DirecTV, Dish Network and Verizon FiOS.

Invidi is suing addressable-ad rival Visible World and Cablevision Systems, alleging they infringe a patent Invidi owns for delivering addressable ads.

By Todd Spangler, Multichannel News

Companions Will Totally Disrupt Broadcast TV, for Good or Bad

The companion screen, especially in the form of the tablet, is going to create a major disruption to the television market by providing a compelling space where consumers can interact around content in an environment that is no longer controlled by the broadcaster.

That is the view of Anthony Rose, the Co-Founder and CTO of tBone TV, who is the former Chief Technology Officer at BBC iPlayer and YouView. But Rose believes there is also a great opportunity for broadcasters to harness these devices for social TV and counter the rise of on-demand viewing by enabling people to engage around live content.

As an example of how the companion screen can change the TV landscape, Rose forecasts the rise of automated systems that know the content we are watching now on the main TV so content on the companion can be synchronized.

“If the system knows the advert that is playing on TV, the advertiser can buy companion adverts for the companion viewing device. There could be a Nike advert on TV but on the companion you get a clickable Nike advert that will provide click-through statistics, analytics and targeting. It will be interesting to see whether broadcasters themselves get into the business of selling interactive adverts on the second screens or whether the advertising bureau will do that, or whether someone else becomes the new Google in this space and sells the new kind of advertising screen.”

Broadcasters could also lose control over how their content is treated on the companion devices. While they have been working to ensure the integrity of their programming on connected TVs, fighting against app-based overlays that they do not control, Rose does not believe they can maintain control of what apps providers do on the companion screen.

“If I am an app maker and the broadcaster does not let me do something on TV, that is not a problem because it is an open world out there,” he declares, referring to the second screen.

“We are going to see the dynamics change massively in this space and it will be very disruptive,” he continues. “The question is how quickly this new space emerges and I think it is going to be incredibly rapid because unlike with set-top boxes, where you have long lead-times for development, you can write something today for a companion device and have it live tomorrow, limited only by your imagination and audience take-up.”

tBone TV is still in stealth mode but from what Rose has revealed, the company is looking to provide software that can run in televisions and connect the TV experience across different screens and TV platforms.

“We want to create a new platform around what people are watching right now and create a huge and vibrant range of experiences around that,” he says. “Broadly speaking there is a new platform service and everything talks to the platform and that lets you create experiences that work in the home and out of home.”

It looks like social TV is one of the key applications that will make use of this platform. This could make viewers aware of what their friends are watching and help them communicate with them, regardless of which TV service or device they are using. Rose believes that in a world of infinite content it will be our friends who push us towards content, leading eventually to the demise of the Electronic Programme Guide, which he thinks is a terrible way of finding content, albeit the method we are familiar with. A simple example of social discovery would be a message to say that your friend is watching programme X, so you can join them on the same channel, while meanwhile their avatar appears and a chat window opens.

This is another potential source of disruption for broadcasters, especially those with good EPG placement, but Rose thinks they need to embrace the opportunities on the second screen and use them to find new and perhaps even bigger audiences.

“Broadcasters are both content providers and aggregators and today the aggregation play is the channel, though it could also be a portal,” he notes. “Though some are embracing it and some resisting it, broadcasters greatly fear VOD because it breaks the way they package content. But there is a new future where they can take their audience share - and the beauty of live is that lots of people are watching at the same time - and turn that into a seamless, automated two-way interactive experience.

“Broadcasters are uniquely placed to embrace the second screen experience because the audience is there. They still own the HD drama and the great visuals but they need to give away some metadata and build audience in a new way. Smart broadcasters, who embrace this, will find massive new audiences, probably skewed to the young initially.

“Instead of sitting in front of the TV like couch potatoes, we will have a range of activities around live TV that the broadcasters participate in. And this is not about using keyboards. People like to ‘veg’ [vegetate] in front of the TV but this is veg 2.0. It is a new way of discovering and watching content but it is no more difficult than the current way. Instead of flipping through the EPG you could easily flip through friends. It will be even easier to find something fun to watch.”

By John Moulding, Videonet

MPEG-4 AVC/H.264 Video Codecs Comparison

The main goal of this report is the presentation of a comparative evaluation of the quality of new H.264 codecs using objective measures of assessment. The comparison was done using settings provided by the developers of each codec. The main task of the comparison is to analyze different H.264 encoders for the task of transcoding video—e.g., compressing video for personal use. Speed requirements are given for a sufficiently fast PC; fast presets are analogous to real-time encoding for a typical home-use PC.

Overall Conclusions
Overall, the leader in this comparison is x264, followed by DivX H.264, Elecard and MainConcept. The DiscretePhoton encoder demonstrates the worst results among all codecs tested.

Average bitrate ratio for a fixed quality for all categories and all presets (Y-SSIM)

The overall ranking of the codecs tested in this comparison is as follows:

          1. x264
          2. DivX H.264
          3. Elecard
          4. MainConcept
          5. XviD
          6. DiscretePhoton
          • MSE encoder
          • WebM encoder

WebM and Microsoft Expression encoders could not be placed in this list because of their longer encoding time compared with other encoders. The leader in this comparison is x264. Its quality difference (according to the SSIM metric) could be explained by the special encoding option ("tune-SSIM").

The difference between the Elecard and DivX H.264 encoders is negligible, and the gap between these encoders and MainConcept is not overly significant, so these encoders effectively tied for second and third in this comparison. This ranking is based only on the encoders’ quality results; encoding speed is not considered here.
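For readers unfamiliar with the metric behind these rankings, a minimal SSIM can be sketched as below. This is an illustrative simplification only: the MSU comparison computes Y-SSIM over sliding windows on the luma plane, whereas this version treats each frame as a single window.

```python
def ssim(x, y, dynamic_range=255.0):
    """Single-window SSIM between two equal-length grayscale pixel lists."""
    n = len(x)
    c1 = (0.01 * dynamic_range) ** 2   # stabilising constants from the
    c2 = (0.03 * dynamic_range) ** 2   # standard SSIM definition
    mx, my = sum(x) / n, sum(y) / n    # luminance (means)
    vx = sum((a - mx) ** 2 for a in x) / n       # contrast (variances)
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

frame = [(i * 7) % 256 for i in range(64)]   # a synthetic "frame"
noisy = [min(255, p + 4) for p in frame]     # mild distortion
print(ssim(frame, frame))   # identical frames score 1.0
print(ssim(frame, noisy))   # distortion pushes the score below 1.0
```

A score of 1.0 means a perfect match, which is why the report ranks codecs by the bitrate each needs to hold a fixed SSIM level.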


Codec Conclusions

DiscretePhoton - one of the fastest encoders in this comparison, but because of that speed its encoding quality was not very good.

DivX H.264 - one of the comparison leaders; a well-balanced encoder with a relatively small number of parameters, which may make it comfortable for users. This encoder is designed as a free sample application for DivX Plus HD compliant video encoding, and is a feature-constrained, purpose-built application.

Elecard - one of the comparison leaders; a codec with good encoding quality and very flexible settings, offering many adjustable encoding options.

Microsoft Expression Encoder - an encoder with good encoding quality, but due to its long initial loading time, its encoding time is significantly higher than that of other encoders.

MainConcept - a well-balanced encoder with many adjustable encoding settings. It placed second in the “Movie” use case, so this codec has good potential to be one of the comparison leaders.

x264 - one of the best codecs by encoding quality; has very user-friendly predefined presets, as well as many adjustable encoding settings.

XviD - an MPEG-4 ASP codec; its quality could be very close to or even higher than that of some commercial H.264 standard implementations, especially for encoding “Movie” sequences, but not for “HDTV” sequences.

WebM - a good new non-H.264 encoder; it shows good quality, but due to its low encoding speed it is not included in the quality ranking.

Source: MSU Video Group

Yes, Virginia, Toshiba Will Use the AUO Head-tracking Display

Let’s recap. At Display Taiwan last month AUO showed an extremely impressive 15.6-inch 2D/3D switchable head-tracking autostereoscopic (AS) display. (See the forthcoming issue of Mobile Display Report for a detailed description.) Senior engineer Garfy Lin told me the panel should appear in a notebook PC from a major manufacturer in Q3.

A week later in New York, Toshiba announced that in August it will introduce a notebook PC with AS-3D eye-tracking display. Product Manager Carrie Cowan said the panel is not from AUO. I was skeptical. How many 15.6-inch eye-tracking displays are likely to be introduced in a notebook from a major manufacturer in Q3?

When in doubt, be skeptical. Industry sources have now told Rebecca Kuo and Adam Hwang of Digitimes that Toshiba will be using the AUO display, and that the notebook will have an MSRP of about US $2100. The sources also said that some vendors of tablet PCs also intend to adopt head-tracking 2D/3D panels.

AS-3D eye-tracking display

While we’re on the subject of 3D, the Big 5 panel makers - Samsung, LG Display (LGD), AUO, Chimei Innolux (CMI), and Sharp - have been increasing shipments of 3D panels for television and expect market penetration to reach between 10% and 20% by the end of the year. Projections differ by manufacturer, report Digitimes’ Rebecca Kuo and Jackie Chang. CMI thinks the number is 20%; AUO 10%, and the other manufacturers spread out in between those extremes. LGD (of course) and AUO are the primary suppliers of micropolarizer panels (the kind that use passive glasses), while Samsung and CMI are producing the panels for systems that use shutter glasses (SG).

Does that mean that 10 to 20% of TV-watching hours will be spent on 3D content? Of course not. Panel-makers and set-makers now recognize that 3D viewing will be mostly for special events such as ball games and movies. Is anybody going to put on their 3D spectacles in the morning to watch Matt Lauer while they dress for work?

Frankly, I’m far more interested in what AUO and Toshiba are doing with the AS-3D head-tracking notebook PC. It will be very entertaining to see what uses software developers and users come up with when they have a really good AS-3D platform to play with.

By Ken Werner, Display Daily

Lytro Announces Light Field Camera

Lytro has announced a point-and-shoot light field camera targeting the consumer market. No details, such as price, availability, resolution or camera size, were available for the camera, however. The press release from Lytro said vaguely that the camera would be available "later this year."

"This is the next big evolution of the camera," said CEO and Founder Dr. Ren Ng. "The move from film to digital was extraordinary and opened up picture taking to a much larger audience. Lytro is introducing Camera 3.0, a breakthrough that lets you nail your shot every time and never miss a moment. Now you can snap once and focus later to get the perfect picture."
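The "snap once and focus later" claim rests on a well-known light field operation, shift-and-add refocusing: each sub-aperture view sees the scene from a slightly different position, and averaging the views after shifting each one in proportion to its viewpoint offset brings a chosen depth plane into focus. The toy 1-D sketch below illustrates the principle with made-up data; it is not Lytro's implementation.

```python
def refocus(views, offsets, alpha):
    """Average 1-D sub-aperture views, shifting each by alpha * its offset.

    views   : equal-length pixel rows, one per sub-aperture viewpoint
    offsets : viewpoint offset of each view, in pixels at alpha = 1
    alpha   : refocus parameter selecting the depth plane
    """
    width = len(views[0])
    out = []
    for x in range(width):
        acc = 0.0
        for view, off in zip(views, offsets):
            src = int(round(x + alpha * off)) % width  # wrap at the border
            acc += view[src]
        out.append(acc / len(views))
    return out

# A point at the depth plane matching alpha = 1 appears shifted by +offset
# pixels in each sub-aperture view; refocusing with alpha = 1 realigns it.
base = [0, 0, 0, 100, 0, 0, 0, 0]   # a single bright point at x = 3
offsets = [-1, 0, 1]                # three viewpoints
views = [[base[(x - off) % len(base)] for x in range(len(base))]
         for off in offsets]

sharp = refocus(views, offsets, alpha=1.0)    # realigned: point stays sharp
blurred = refocus(views, offsets, alpha=0.0)  # misaligned: point smears out
print(max(sharp), max(blurred))
```

Re-running refocus with a different alpha selects a different depth plane from the same captured data, which is exactly the "focus later" promise.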

Light field science was the subject of Dr. Ng’s 2006 Ph.D. dissertation in computer science at Stanford, which was awarded the internationally-recognized ACM Dissertation Award in 2007. Dr. Ng’s research focused on miniaturizing a roomful of a hundred cameras plugged into a supercomputer in a lab. In 2011, the Lytro team will complete the job of taking light fields out of the lab and making them available in the form of a consumer light field camera.

Computational photography using light field reconstruction has been a research topic for a number of years. One of the problems with computational photography is the very large amount of data associated with a high-resolution image. To get the image quality a consumer associates with a normal 1MByte snapshot from an 8Mpixel camera, it may be necessary to store as much as 100Mbytes of data and use an imager with 800Mpixels. Obviously, this would not be practical in a point-and-shoot camera, so Insight Media is looking forward to seeing how Lytro solves this problem. Even at a professional level, 800Mpixel sensors aren’t really practical which is why researchers into computational photography have used a room full of a hundred individual cameras in the past.
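Those figures can be sanity-checked with simple arithmetic (all numbers are taken from the paragraph above, not from Lytro):

```python
TARGET_PIXELS = 8_000_000      # the 8-Mpixel image the consumer expects
SENSOR_PIXELS = 800_000_000    # pixels needed to also sample ray direction
JPEG_BYTES = 1_000_000         # a typical ~1 MB snapshot
RAW_BYTES = 100_000_000        # ~100 MB of light field data per shot

pixel_overhead = SENSOR_PIXELS // TARGET_PIXELS
data_overhead = RAW_BYTES // JPEG_BYTES
print(f"{pixel_overhead}x the pixels, {data_overhead}x the data per shot")
# 100 directional samples per output pixel would correspond to roughly a
# 10 x 10 grid of viewpoints under each microlens.
```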

Typically, both the sensor and the "lens" in computational photography are large in area but can, at least theoretically, be very thin. Since no specifications on the camera are available, it is not clear if the proposed point-and-shoot camera is also a pocket camera. Size isn’t necessarily a catastrophic barrier: people have accepted the 10" size of the iPad, for example, to get features and a display not available in a 4" smartphone. A 4" - 10" diagonal would be a reasonable size for a computational photography camera that promises to generate 3D images, as Lytro does.

Another problem with computational photography is that it doesn’t produce a viewable image until after the "computational" part. Presumably, any handheld camera from Lytro would include the basic software needed to produce an image visible on the camera display. Typically, computational photography involves post processing and image editing. Again, from a consumer point of view, is this what they want? Taking a photo but being unable to view it in its full glory except after a half hour or so of optimizing it on your computer is not really what point-and-shoot photography is all about.

Lytro has an on-line picture gallery of Adobe Flash photos that can be manipulated over the web to simulate what a consumer can do with his own computational photography images. While it is not stated, presumably these photos were generated with the Lytro camera, either a laboratory model or a prototype of the consumer version.

Computational photography is normally based on multi-aperture imaging, as was discussed recently in Display Daily. An expanded version of this story with the available details on Lytro’s business plans will appear in the upcoming edition of Mobile Display Report.

By Matt Brennesholtz, Display Daily

Japanese Firms Push 3D Overlay Materials

Two Japan-based firms, Globalwave and Newsight Japan Ltd., are both offering new AS-3D overlay film solutions to turn the popular iPad or iPhone into a 3D display. At Display Taiwan, Kiyoto Kanda of Newsight Japan was showing off iPads creating an AS-3D (no-glasses) image. Later, at the 3D and Virtual Reality Exhibition, Tadahiro Kawamura, CTO of Globalwave, showed a product called Pic3D.

Pic3D uses a lenticular lens overlay, while Kanda told us his technology is a parallax barrier overlay sheet, to create the AS-3D image on the iPad. On the big-screen TV side, Kanda showed an eight-zone 42-inch AS-3DTV, also using parallax barrier technology. But he is not locked into parallax barriers for 3D images: Kanda also had an 18.5-inch 3D screen using a lenticular lens, and added that he could provide lenticulars for displays up to 80-plus inches in diagonal.

The Pic3D lenticular lens helps the company get "a smoother, much more consistent picture." Globalwave claims several advantages over the conventional parallax barrier method, including 90% light transmission (versus only 30% from parallax barrier films), a 120-degree viewing angle (versus 30 to 60 degrees) and support for displays up to 23 inches using the new lenticular film. Pic3D offers products in 23-, 21.5-, 15- and 12-inch, iPad, and iPhone 4 sizes.

In a YouTube video, Kawamura said about the Globalwave approach: "Basically it will work with video files which are in the side by side format, and if you input URLs for side by side formatted content on sites such as YouTube, it will work with them too. Right now we plan to begin sales in early August, and at first we plan to sell it through our own direct sales website." The product is sold on-line at the Pic3d web site with prices that range from $185 (23-inch size less VAT and other taxes) to about $25 for the iPhone overlay screen.

Kanda believes in a total ecosystem solution for 3D. His company, Newsight Japan, includes development of hardware, applications and content. His hardware solution extends from the largest 80-inch-plus LCDs from Sharp to mobile displays with his film overlay. Kanda also said they will develop 3D apps to help show-off the technology on the Apple (and other mobile) products. Newsight also does 2D to 3D content conversion, with several signs around Display Taiwan indicating 3D content shown (in the CPT booth for instance) which came from Newsight technology.

We think Kanda is right, and content will be the driving factor in adoption of these overlay films, particularly in the mobile space. Of course, this content is display agnostic: properly formatted 3D content will show correctly on either lenticular or parallax barrier systems. It won’t just be professional 3D content either. 3D images from new digital still cameras and other emerging CE devices will drive interest in displaying AS-3D on all display sizes. Gaming will also help drive mobile 3D adoption, which could include bundling content and film overlays with game software makers.

But the ultimate test will be in the image quality, and whether the AS-3D experience is good enough to warrant all the trouble. The engineers have built it; now will the sales come?

By Steve Sechrist, Display Daily

France Unleashes TNT 2.0 to Heal Fragmented World of Connected TV

France is pushing ahead with its connected TV platform, called TNT 2.0, and first full deployments are expected late 2011 or early 2012 from both free-to-air broadcasters and pay TV operators. While for the latter, the main motivation is to reach new customers and combat cord cutting by ensuring existing subscribers can access services from all their devices, free-to-air broadcasters are interested in the potential for monetizing content that is currently paid for by advertising, if at all.

The key security components enabling monetization are now in place, with the TNT 2.0 consortium having elected to offer implementers a choice of two DRM platforms, the open Marlin standard and Microsoft's PlayReady. This highlights the reality of Microsoft's continuing force in the PC arena, but also the rising status of Marlin as a universal DRM. Marlin has been adopted by some other European connected TV platform initiatives, including the UK's YouView, promoted by the BBC, BT and others, which is otherwise out of line with the rest of Europe in not being based on HbbTV.

Marlin owes its growing status partly to having a flexible rights management engine called Octopus that allows operators to implement different business rules to suit their model, with the system components or entities such as users and devices represented as nodes in a graph, joined by lines denoting relationships among them. This graph defines who can access what content when, where and how. It runs on a variety of platforms, including smart cards, handsets and servers, and will support a variety of cryptographic systems.
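
As a rough illustration of the graph model described above, access can be decided by checking whether a chain of links connects a device to the node a license targets. The entity names and API here are hypothetical sketches, not Octopus's actual interface:

```python
# Hypothetical sketch of a Marlin/Octopus-style rights graph: entities
# (users, devices, subscriptions) are nodes; directed links denote
# relationships. A device may access content if a chain of links
# connects it to the node that the content's license targets.
from collections import defaultdict

class RightsGraph:
    def __init__(self):
        self.links = defaultdict(set)  # node -> set of parent nodes

    def link(self, child, parent):
        """Record a relationship, e.g. device -> user -> subscription."""
        self.links[child].add(parent)

    def can_reach(self, node, target):
        """True if a chain of links connects node to target."""
        stack, seen = [node], set()
        while stack:
            current = stack.pop()
            if current == target:
                return True
            if current in seen:
                continue
            seen.add(current)
            stack.extend(self.links[current])
        return False

g = RightsGraph()
g.link("tablet-1", "alice")          # device belongs to user
g.link("alice", "sports-package")    # user holds a subscription
allowed = g.can_reach("tablet-1", "sports-package")   # True
denied = g.can_reach("rogue-box", "sports-package")   # False
```

Because the business rules live in the shape of the graph rather than in code, an operator can model households, domains or subscription tiers simply by adding nodes and links.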

Another key point is that Marlin is strongly promoted by the TV manufacturers partly responsible for the current fragmented state of the connected TV field, each having proprietary connected home technology. Marlin was founded by Panasonic, Philips, Samsung and Sony, and DRM technology vendor Intertrust. Not surprisingly, Marlin has been adopted by UltraViolet, the cloud-based digital locker system promoted by DECE (Digital Entertainment Content Ecosystem). DECE was largely instigated by Sony, although it now enjoys cross-industry support with members including operators, security vendors and a few big content houses such as Warner Brothers. Of the major content houses, only Disney has so far stayed firmly outside the UltraViolet camp, developing its own rival digital locker called KeyChest, but with no release date yet set it remains to be seen whether this ever sees the light of day.

The TNT 2.0 group in France is bullish about prospects for connected TV in the country, convinced that it has made the right technology choices and that interest will grow quickly. HbbTV has already been demonstrated at the recent French Open tennis championships by France Télévisions, the country's national public service broadcaster, in conjunction with IBM, which operated the tournament's website. Users required a Panasonic Viera TV costing about €2000 to tune in to this demonstration, which is rather an irony given that the idea of HbbTV is to allow access from multiple TVs and encourage competition to bring the price down. No doubt that will come.

TNT 2.0 insiders meanwhile are well aware there is a lot more work to do. They view their platform as an enabling step for tackling the inevitable security-related teething troubles that will have to be overcome to make broadband access to premium HD content a worldwide reality. They compare the current situation facing operators on the brink of connected TV with the one prevailing in the early years of digital pay TV, which arrived in Europe in 1996. The first five years was almost a prolonged honeymoon period in security terms with little piracy, creating a false sense that the smart card conditional access then in place was robust against attack. But as the prize gained in value with growing subscriber numbers, so the pirates moved in, bringing some operators to their knees in the early 2000s. While never resolved entirely, the situation was brought under control with new platforms enabling regular replacement of smart cards.

Broadband TV is in a similar situation now, with platforms such as TNT 2.0 getting ready to roll, although with no expectation this time of a long honeymoon period and no illusion that the platform will be secure from attack. This is because connected TV in its broadest sense involves delivery of content in an uncontrolled environment, which will therefore require collaboration among providers of networking infrastructure and above all the end devices. It remains to be demonstrated that a broadband-based delivery platform, often not under the operator's control and delivering directly to retail devices from anywhere, will be able to support the highest-value premium content. But the industry is at least tackling these issues, with several European initiatives now running to add more security to HbbTV, for example.

By Philip Hunter, Broadcast Engineering


Videola is an enterprise-level video management system and video delivery platform. It allows you to create paid-access or free-access video websites which can serve video to the desktop, mobile, or television-based devices.

Create your own Netflix On-Demand style (subscription), Hulu style (ad supported), or Blockbuster / Amazon style (rental) streaming video websites with your own video content.

Built on Drupal, Videola is a highly flexible, easily expandable, feature-rich open source solution for organizing and managing video content, users, and ecommerce.

How Social is Your Favorite TV Show?

TV viewers are increasingly engaging in social media while watching their favorite shows. But which shows are drawing the most commenters and the largest share of the conversation? Cambridge, Mass.-based startup Bluefin Labs is using social media to not only measure engagement during TV shows, but also to find connections between shows that those viewers enjoy watching.

The company’s Response Level and Response Share metrics are essentially new ways to think about how a TV show stacks up against the competition. The new metrics are designed to measure how audiences talk about shows, both in terms of total number of commenters on a show, and the percentage of that show’s share of the conversation during the time in which it aired. And it’s rolling out its Bluefin Signals dashboard as a way for broadcasters and advertisers to see the connections between these shows and conversations.


Rather than relying on the more traditional approach of scanning for hashtags or keywords during a given show, the platform uses video fingerprinting technology to determine what’s happening onscreen, and then matches the TV action with the social media response. Deb Roy, co-founder and CEO of Bluefin Labs, refers to this matching up as “mapping the TV Genome.”
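
Conceptually, once fingerprinting has identified which show aired in which time window, the matching step reduces to bucketing timestamped comments against those windows. A minimal sketch with invented data — this is not Bluefin's actual pipeline:

```python
# Illustrative matching of timestamped social comments to airing windows
# identified by fingerprinting. Shows, times and comments are invented.
airings = [
    # (show, start, end) in minutes since midnight
    ("One Tree Hill", 1200, 1260),
    ("90210", 1260, 1320),
]

comments = [
    (1210, "omg this episode"),
    (1290, "love this"),
    (1350, "bed time"),   # falls outside both windows, so it is ignored
]

def match_comments(airings, comments):
    """Count comments whose timestamp falls inside each airing window."""
    counts = {show: 0 for show, _, _ in airings}
    for ts, _ in comments:
        for show, start, end in airings:
            if start <= ts < end:
                counts[show] += 1
    return counts

counts = match_comments(airings, comments)
# counts == {"One Tree Hill": 1, "90210": 1}
```

Real systems would also have to resolve time zones and regional feeds, but the core idea — fingerprint the airing, then join comments on time — is the same.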

The platform measures social media responses to more than 3,000 TV shows and more than 100,000 individual airings of those shows. Each month, it processes about 3 billion social media comments, matching them up against 2 million minutes of live TV during that time. The company hopes to have full coverage of all live TV shows next year.

While knowing how many viewers are commenting about a particular show might be of some value to broadcasters and advertisers who use social media sentiment during a show as a proxy for engagement, the more valuable tool might be in Bluefin Labs’ ability to draw connections between shows. This data on “cross-show engagement” is useful in determining affinity between shows — for example, recognizing that fans of One Tree Hill are also likely to watch Hellcats and 90210.


That data can be used by broadcasters to help increase viewership — for example, running small ads on one show to promote another that fans might also like. Or it can be used by advertisers: If ad inventory on one show is in high demand, but its viewers have an affinity toward a show with a lower profile, agencies and brands can reach the same group of people with a less costly ad buy by betting on the second show.

Bluefin Labs isn’t the only company striving to measure social engagement with TV: Wiredset’s Trendrr recently introduced a social media TV chart, and SocialGuide aims to track social media mentions of shows in real-time.

By Ryan Lawler, GigaOM

Flingo is About to Make Your Smart TV Even Smarter

Seemingly everyone watches TV with some sort of second screen in front of them — be it a laptop, mobile phone or tablet device. But the tools for engaging with those viewers have largely been lacking, due to the absence of a feedback loop between the TV and second screen. San Francisco–based startup Flingo hopes to change that, with a bit of technology embedded into TVs that will let web and mobile applications know what’s being watched on the big screen.

Last year Flingo founder Ashwin Navin demonstrated for us how the technology can bring online video to the TV through white-label TV apps and an open API that any publisher can download and use itself. With that technology consumers can “fling” supported content to a queue of web videos on their TVs.

But the Flingo technology has a lot more to offer than just sending online video to the TV: It will also enable broadcasters and advertisers to build mobile and web applications that are aware of what television content you are watching. By doing so, broadcasters can provide additional relevant content to on-air shows in those applications, increasing engagement with users on the second screen.

Meanwhile, advertisers will be able to create interactive advertising using the technology. Consumers won’t have to go to a related website to find out more about a product being advertised on-screen: Clickable display ads can be surfaced in whatever browser or app they’re using while watching TV. This enables advertisers to extend offers to the second screen, and it adds a direct response mechanism to TV advertising, which has typically been just about branding and reach.

In addition to web and mobile apps, Flingo technology can be used to bring contextually relevant content directly to the TV through what it calls Hovercraft apps that take up a small portion of the screen. That will enable broadcasters to embed floating Twitter clients through the TV interface, for instance, or create interactive flash polls that can be answered on a mobile device or web browser.

Flingo isn’t the only company looking to offer this type of interactivity between TV and web and mobile apps. Yahoo offers something similar, which it calls Broadcast Interactivity, through its Yahoo Connected TV platform. Audio fingerprinting app maker Shazam is betting big that consumers will use its mobile application to identify what they’re watching and that in the process they will unlock additional content from publishers or deals from advertisers. IntoNow, which had similar technology, pitched it as a way for users to “check in” to shows with the intent of later unlocking similar content and interactive ads. IntoNow was bought by Yahoo just a few months after launch.

The beauty of what Flingo is doing is that it’s already embedded in TV chipsets, so there’s no audio fingerprinting or other workaround necessary to determine what viewers are watching. Flingo has licensed its technology to consumer electronics manufacturers like LG, Samsung, Vizio, Insignia, Sanyo and Western Digital. As a result, it’s already available on more than 5.7 million TVs, Blu-ray players and IP set-top boxes, with more being sold every day.

Most consumers will never know what Flingo is, as it will operate largely behind the scenes. But its technology could enable a whole new level of interactivity and engagement that was previously unavailable to broadcasters and advertisers.

By Ryan Lawler, GigaOM

H.264 is Still Winning the Codec War

H.264 remains the dominant force in online video, as the video codec now accounts for more than two-thirds of online video, according to a blog post by MeFeedia Thursday. Meanwhile, Google’s WebM format has yet to gain any significant traction after being released a year ago.

H.264's lead over competing video formats continues to widen, as it now accounts for nearly 70 percent of videos indexed by MeFeedia. That’s a huge increase in a very short amount of time: just last May, only about 25 percent of videos were available in the H.264 format. And while the percentage growth has slowed in recent quarters, it remains the dominant format for streaming video delivery.

The growth in H.264 encodes is being driven by the adoption of video on tablet devices like the iPad, as well as connected TVs, Blu-ray players and other broadband-enabled video devices. Due to hardware acceleration built into many existing connected device chipsets, H.264 is by far the dominant format for smart TVs and related products.

It’s also an acknowledgement of the strength of the iPad for mobile viewing. There are more than 200 million iOS devices on the market, including 25 million iPads, and H.264 video is the best way to reach those devices. According to MeFeedia, the iPad has the highest engagement among devices, with 40 percent more videos viewed per user than Android, iPhone and desktop users.

While H.264 continues to dominate, the latest numbers on Google’s open-source WebM video format show that it has yet to catch on with publishers. More than a year after its launch, WebM accounts for less than 2 percent of videos indexed, according to MeFeedia. While that is expected to grow — particularly as YouTube continues its process of transcoding all its videos into the WebM format — it’s still a pretty small number for a codec that boasts fairly broad browser adoption and growing support from consumer device manufacturers.

WebM is supported by Firefox, Opera and Google’s Chrome web browsers. In the latter case, in fact, WebM is the default video codec for HTML5 video playback, as Google removed support for H.264 in the latest version of Chrome. It’s also gained some hardware backing from consumer electronics manufacturers like Samsung, LG Electronics and Cisco, and has been available on Android devices since the release of Gingerbread.

Despite growing support, it may still take some time before WebM gains the type of hardware acceleration required for broad publisher usage. The good news for Google — and for WebM advocates — is that things can change quickly in the online video market. One need only look at the massive increase in H.264 adoption to see that.

By Ryan Lawler, GigaOM

Monitoring MPEG in an IP Network

The cost and ease-of-use advantages of moving video using IP have been welcomed in cable, IPTV and satellite applications around the world. For the most part, however, IP remains a rarity in today's television studios. Nevertheless, these same benefits will eventually make the approach more prevalent in the broadcast market. Signal monitoring throughout the video delivery chain is essential to ensuring the viewer's quality of experience (QoE), and for this reason broadcasters will benefit from a thorough understanding of what is required to effectively deploy IP and accurately monitor IP content.

IP's Usefulness in Broadcast
While a broadcast facility may never have a fully deployed IP network backbone in the studio, IP links are set to replace ASI interconnects between equipment in many applications.

Encoders, multiplexers and other equipment located at the broadcaster's headend are already IP-capable devices, though likely linked by ASI in most environments. Moving forward, IP networks with high-end switches or routers at the center will feed multiple pieces of equipment within the broadcast operation and become increasingly common as organizations begin recognizing the benefits of this technology. These benefits include relatively low cost, easy transmission of signals to multiple destinations (multicasting) and ease of signal monitoring.

The use of IP in a variety of broadcast applications is becoming more common.

Another likely location for the imminent incursion of IP into the broadcaster's world is the connection between the studio and the transmitter. A transmitter is typically linked to the studio by microwave; however, some broadcasters are already considering replacing this link via an IP connection. Leasing fiber eliminates both the expense and uncertainty of microwave systems, which are subject to weather and other interference.

How Video Over IP Works
Digital video is packetized data — ones and zeros moving 188 bytes at a time in a transport stream. IP transmits data from one point to another, and because digital video is essentially data, it can also be arranged in Ethernet frames and transmitted. However, IP does pose fundamental problems as a video transfer scheme, most of them stemming from the nature of IP. IP was developed some 30 years ago as a way to move data quickly and efficiently from Point A to Point B. In traditional data transfers, timing and sequence do not matter very much. For example, when sending or viewing a Web page or e-mail, the order in which the data components arrive is unimportant as long as the content appears correctly once loading is complete. Should data be lost or corrupted in transit, the content can easily be retransmitted and loaded without the end user ever knowing the difference.

Transmitting live video is entirely different. The frames containing the data — typically MPEG — must arrive synchronized, on time and in sequence if the footage is going to appear as intended to the viewer. In a video application, retransmission of lost or corrupt data is nearly impossible because the appropriate moment for display has passed.

When MPEG-compressed video travels on an IP network, the content is typically arranged in groups of seven data packets, each group wrapped in an Ethernet frame. These frames each contain source and destination addresses so that the data is routed appropriately within the network. As the frame arrives at the receiver-decoder (or other device), the MPEG packets are extracted and treated the same as they would have been had they arrived via ASI.
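
The grouping described above can be sketched in a few lines: a receiver splits each frame payload back into 188-byte transport stream packets, each of which should begin with the MPEG sync byte 0x47. This is an illustrative sketch with a synthetic payload, not production code:

```python
# Illustrative unpacking of an IP frame payload that carries seven
# 188-byte MPEG transport stream packets. Real TS packets begin with
# the sync byte 0x47; the payload below is synthetic.
TS_PACKET_SIZE = 188
PACKETS_PER_FRAME = 7

def extract_ts_packets(payload: bytes):
    """Split a payload into 188-byte TS packets, checking sync bytes."""
    packets = []
    for i in range(0, len(payload), TS_PACKET_SIZE):
        packet = payload[i:i + TS_PACKET_SIZE]
        if len(packet) == TS_PACKET_SIZE and packet[0] == 0x47:
            packets.append(packet)
    return packets

# Build a synthetic payload of seven valid TS packets.
payload = (b"\x47" + b"\x00" * 187) * PACKETS_PER_FRAME
packets = extract_ts_packets(payload)  # len(packets) == 7
```

From this point on, the extracted packets can be handed to exactly the same MPEG-layer analysis that an ASI input would feed.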

Ethernet frames carry the compressed MPEG video information.

In an IP network that has been properly designed to carry video, the switches and routers are configured to prioritize video data. If the integrity of the video is to be maintained at a high level, then IP infrastructure must also be maintained and a monitoring solution put in place that addresses the problems unique to IP infrastructure — specifically jitter and dropped, lost or out-of-order data packets.

How Video Over IP is Monitored
In the cable and satellite industries where video delivery over IP is common, effective monitoring techniques have been tried and proven. Typically, a monitoring device performs multiple tests continuously on all inputs. Because almost all of these tests are based on the timing within the data transmission, the monitor's most fundamental job is to keep track of packet arrival times with a high degree of accuracy. Within the MPEG signal, the monitoring device assesses the timing to ensure that it meets a predetermined standard, such as ETSI TR 101 290 in Europe. The timing standard — designed to ensure QoE for the viewer — codifies acceptable arrival timing for the component parts of the stream and the Program Clock Reference (PCR), which contributes to timing accuracy. The component parts are audio and video data, as well as the tables that enable the consumer's television to perform decoding, display the image on the screen, and properly represent auxiliary items like a program guide and subtitles.

Effective monitoring of an IP-based system requires timestamping every Ethernet frame so that the rate and sequence can be tested. The first test that must be performed is for jitter, which is a measure of the cadence of the packets in the line. The packets should arrive at regular intervals, without bursts or prolonged gaps. To some extent, the buffer in the receiver-decoder can compensate for these issues, but if they become extreme, the buffer is overwhelmed and viewer experience suffers.
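
As a simple illustration of the jitter test, the inter-arrival gaps can be compared against their mean; a large deviation flags a burst or a prolonged gap. This is a sketch with invented timestamps, not a real monitoring implementation:

```python
# Illustrative inter-arrival jitter check: packets should arrive at
# regular intervals, so the deviation of each gap from the mean gap
# is a simple measure of cadence disruption.
def max_jitter(arrival_times):
    """Largest deviation of an inter-arrival gap from the mean gap."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    mean = sum(gaps) / len(gaps)
    return max(abs(g - mean) for g in gaps)

# Timestamps in milliseconds: a steady 10 ms cadence with one burst.
times = [0, 10, 20, 35, 40, 50]
jitter = max_jitter(times)  # gaps are 10,10,15,5,10; mean 10 -> 5.0
```

A monitor would compare this figure against the receiver-decoder's buffering headroom and raise an alarm when the cadence disruption exceeds it.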

The second test is for dropped or out-of-order packets. These can be hard to recognize because the IP stream typically lacks both indicators of packet order and a means of notifying the network that a packet has been lost. To get around this, a monitoring device can penetrate the packet, scrutinize the MPEG packets within and use continuity counters to determine whether all the packets are present. This is the same kind of MPEG test that is conducted for an ASI or other traditional broadcast signal. With IP transport, the Ethernet frame adds another layer of complexity that monitoring must address.
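
The continuity-counter test can be sketched as follows: the 4-bit counter in each TS packet header should increment modulo 16 for a given PID, so any other jump indicates dropped or out-of-order packets. Illustrative only:

```python
# Sketch of a continuity-counter check on the 4-bit counters observed
# for a single PID: each value should be the previous value plus one,
# modulo 16. Any other jump signals lost or reordered packets.
def find_discontinuities(counters):
    """Return indices where the counter fails to increment mod 16."""
    errors = []
    for i in range(1, len(counters)):
        if counters[i] != (counters[i - 1] + 1) % 16:
            errors.append(i)
    return errors

# Counters wrap 15 -> 0 cleanly; the jump from 2 to 5 means loss.
stream = [13, 14, 15, 0, 1, 2, 5, 6]
errors = find_discontinuities(stream)  # [6]
```

This is the same check an ASI monitor performs; the extra work in the IP case is peeling back the Ethernet framing first.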

Monitoring devices typically incorporate slots for one or two cards that can perform either IP or ASI monitoring, depending on the needs of the system. In a traditional broadcast station, a monitor with four ASI inputs might be implemented to cover all the necessary streams at the headend or studio. One of IP's advantages over ASI is that a single IP input can simultaneously monitor all the traffic on the network — hundreds of IP multicasts — rendering multiple monitor inputs unnecessary and ultimately saving the broadcaster money. In fact, the number of IP transport streams is limited only by overall network capability.

Once the Ethernet frame is stripped away from the MPEG layer, the monitoring process is the same as for transport streams carried over any other physical medium. The monitor assesses the timing of elements such as PAT and PMT tables to ensure their rates meet the predetermined standard (i.e. ETSI TR 101 290). The time-checks also reveal gaps between packets that contain video and audio. Beyond that, the monitor scrutinizes the accuracy of the PCR.

Because timing really is everything for optimal video delivery, the accuracy of the monitoring device is also important; even a small degree of inaccuracy distorts the information gained from the monitoring process. Some monitors unintentionally introduce delay and inaccuracy because they rely on an off-the-shelf network interface card to input the video streams and disregard the specialized requirements of delicate IP-based transport streams. Because these cards must subsequently pass the data through the operating system's software IP stack, much of the timing information's granularity is inevitably lost to processing delays.

A more effective technique is for a proprietary network interface card to timestamp the Ethernet frames at the time of input — without injecting processing delays. This can be accomplished by using specialized hardware on the input card to separate Ethernet frames carrying transport stream packets from those containing general IP traffic. The general traffic data can be passed through the operating system's normal IP stack while the frames containing transport stream packets are timestamped and passed directly to the analysis software, bypassing the IP stack.

The Ethernet frame timestamping is done using a highly accurate clock reference (such as an oven-controlled crystal oscillator), and then timestamps for the transport packets inside each frame can be inferred from the frame timestamp by referencing the physical link speed on the Ethernet interface. This methodology yields more accurate data for subsequent reference and analysis by the specialized software. In fact, timestamping techniques like this have been proven effective and are common with single stream ASI inputs. In an IP network, where there may be hundreds of IP multicasts, it is less common but even more useful and necessary.
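
As an illustration of that inference step, a frame timestamp can be spread across the packets inside the frame using the wire time each 188-byte packet occupies at the link speed. This is a simplified sketch with assumed parameters, not a vendor implementation:

```python
# Illustrative inference of per-packet timestamps from one hardware
# frame timestamp, using the physical link speed. Parameters (seven
# packets per frame, 1 Gbps link) are assumptions for the example.
def packet_timestamps(frame_ts, n_packets=7, packet_bytes=188,
                      link_bps=1_000_000_000):
    """Offset each packet's arrival time by its serialization delay."""
    seconds_per_packet = packet_bytes * 8 / link_bps
    return [frame_ts + i * seconds_per_packet for i in range(n_packets)]

ts = packet_timestamps(0.0)
# Each 188-byte packet occupies about 1.5 microseconds on a 1 Gbps link,
# so the seven packets in a frame span roughly 9 microseconds.
```

Because the offsets come from the link's serialization delay rather than from software processing, the inferred packet times retain the accuracy of the hardware clock reference.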

As digital television becomes the global standard, delivering video from source to home becomes an increasingly complex process that relies on multiple transport techniques. Each of these techniques — and each combination of them — is accompanied by a potential for error that can diminish the quality of the video signal being delivered. If signal quality deteriorates or is interrupted, viewers may change channels, switch providers, or even turn off the television set altogether. IP-based signal transmission schemes are not yet as familiar to broadcast engineers as their more traditional ASI and RF counterparts. However, as the use of IP-based signals increases, their effective and accurate monitoring becomes commensurately important to the viewer's quality of experience.

By Seth Vermulm, Broadcast Engineering

WiO Connects TV to Mobile, Makes TV Ads Interactive

A new mobile app platform called WiO is set to revolutionize the TV watching experience by allowing customers to immediately get information about the products and services they see advertised on screen, both in TV commercials and within the shows themselves.

Through a mobile app running on customers’ phones, marketers can offer a variety of follow-up actions to the TV viewer, including coupons, reminders, contact info and more. In total, there are 10 follow-up actions offered. And the consumer is in complete control of which ones, if any, they respond to.

The company behind WiO, WiOffer, is the creation of Andrew Pakula and Matthew Greene, both of whom have experience working in and with major media, tech and digital marketing firms, including DoubleClick, Yahoo and Ogilvy. This experience, Pakula explains, has allowed them to learn a lot about what customers respond to and how.

How the WiO-Enabled TV Commercials Work
Before a customer can use the WiO app, the commercial or TV show has to first be WiO-enabled. To do this, the advertiser sends WiOffer their asset – that is, their commercial or the portion of the show where the product placement is visible and/or mentioned by the characters within the program. Using the clip as a digital ID, the mobile WiO app running on customers’ phones can then “hear” when the commercial plays and pop up a screen offering more information.

There are 10 different options a customer can choose from, some of which are subject to what the advertiser is providing. These include access to coupons, PDF brochures, app downloads, website addresses, retail locators, contact information, calendar events, reminders and even one option which will auto-dial the advertiser directly, in the case of TV commercials where a phone number is displayed. The unique thing about connecting TV to the mobile platform in this way is how many of these tasks are automated. For example, choosing the reminders, calendar event or contact info options will instantly save that information to a customer’s phone, with no manual effort required on the customer’s part.

In addition, when coupons are provided, the customer can save these “WiOffers” on their phone, where they will be accessible until the expiration date. To use a coupon, the customer just has to show their phone to the retailer.

Meanwhile, on the advertisers’ side, metrics surrounding customer response can be tracked in real-time, allowing them to adjust their advertising and offers on the fly to boost engagement, as need be.

Competition in the New TV Landscape
WiO is not the only company with this same idea. The music identification app Shazam recently raised funds to push into television. Some TV shows and ads now tell viewers to “Shazam” them in order to receive bonus content and discounts. Another company, IntoNow, uses audio recognition to encourage users to “check in” to what they’re watching on TV. GetGlue offers something similar. Even Microsoft is getting into the action with its NUads advertising platform, which uses the voice and gesture control in its Kinect for Xbox 360 to create interactive TV ads.

But unlike with Xbox, WiO is device agnostic, Greene says. He insists that WiO is different than the so-called “social apps,” too. Even though WiO allows for sharing to Facebook, Twitter, SMS and email, its goal is not to socialize the TV-watching experience. “Check-ins are a bit of distraction,” says Greene, “if not an enormous distraction.” And, referring to Shazam, he claims the idea of connecting a music app to actual transactions is a bit complicated. The Shazam ads point you to a mobile landing page, he notes. WiO aims to connect the brand directly to the customer so they can start talking immediately.

The WiO app will launch in a few weeks, first on iPhone. It will arrive on Android 60 days later. The company says it can’t comment on its advertising partners at this time, but from what we saw, there are some well-known brands in talks with the company now.

By Sarah Perez, 365Online

The Technology Behind Google+ Hangouts

Ever since Google started to roll out its Google+ project on Tuesday, many of its users have been particularly excited about its group video chat service Hangouts. I agree, but not just because it’s fun and easy to use. The real kicker is the technology that powers the service. Even in its infancy, Hangouts is an interesting cloud service. But in the not-so-distant future, it could evolve into a standards-based video conferencing solution that runs natively in many browsers and on a whole range of devices.

Google has been quiet about its plans for Hangouts, and hasn’t revealed all that much about some of the components powering the service either. However, there have been some key developments in recent months that indicate what makes Hangouts work and where things are going:

The Cloud
Making video chat work at scale can require a lot of resources, which is why there has been a movement towards peer-to-peer (P2P) solutions to offload video and signaling traffic between the clients involved. Skype makes use of P2P for that very reason, as does Chatroulette. However, P2P can introduce latency, which can be especially bothersome if you chat with 10 people at a time. That’s why Google went down a different route for Hangouts.

“To support Hangouts, we built an all-new standards-based cloud video conferencing platform,” explained Google Real-time Communications Tech Lead Justin Uberti in a blog post on Tuesday. He added that Hangouts uses a client-server model which “leverages the power of Google’s infrastructure.”

Click to watch the video

Browser Integration
Hangouts currently requires you to download the same plugin that also powers video chat within Google Talk. However, Google is working on making both Hangouts and Google Talk itself work in the browser, without the need for any plugins. This will be done in part through a new framework for realtime communications (read: text, voice and video chat) dubbed WebRTC that the company open-sourced in May. WebRTC is supported by Mozilla and Opera, and Google started to integrate the framework into its Chrome browser earlier this month. “Work has started to move Google Talk completely to WebRTC,” it says on the project’s web site.

At that point, users won’t need a plugin anymore to use Google Talk, and the same should eventually be true for Hangouts. Here’s what a Google spokesperson told me via email about the connection between the Google+ video chat service and the framework: “A lot of the technology in Hangouts feeds into the WebRTC, and we contribute a lot of feedback to help shape the WebRTC interface. At this point though, our plug-in and the protocol are different efforts.” He refused to reveal any future plans, but trust me, the writing is on the wall…

Open Codecs
Google Talk and Hangouts currently use technology Google is licensing from Vidyo to facilitate video chats. Video is transmitted in H.264/SVC, with H.264/AVC and H.263 being used as fallback solutions. However, there are strong signs Google will eventually switch to open codecs.

Google open-sourced its VP8 video codec last year as part of the new WebM video format, and real-time communications were one of the big issues that VP8’s programmers wanted to improve with the codec from the outset. In fact, VP8 is already being used by Skype for its group video calling feature, and Google’s WebM project manager John Luther wrote in February that VP8 is an “exceptionally good codec for real-time applications like videoconferencing.”

So when will Hangouts be switching from H.264 to WebM? Google+ Project Lead Bradley Horowitz indicated on This Week in Google on Wednesday that his team is already testing alternatives to the current codec. A Google spokesperson didn’t want to discuss any future plans for Hangouts when I asked about the codec issue, but here’s a clue: WebRTC is based on the VP8 codec, which means that H.264 could get displaced as the default codec for Hangouts as soon as the video chat service rolls out its native browser integration.

Device Integration
This is where things get really interesting: Hangouts’ cloud-based architecture and its upcoming browser integration will eventually make it possible to deliver an optimized group video chat experience to a whole range of devices. Desktop users will get to view full HD video, while users on mobile devices will receive streams optimized for their bandwidth constraints. And Google TV users could see Hangouts appear on their TV sets sooner than they think, because Google TV comes with a full-blown Chrome browser.
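A server-side architecture makes that kind of per-device optimization straightforward, since the server can transcode or select a stream to match each endpoint. A hypothetical sketch of the idea (the profile names and bitrates are invented for illustration, not Google’s actual figures):

```python
# Hypothetical delivery profiles; resolutions and bitrates are illustrative.
PROFILES = {
    "desktop": {"resolution": (1920, 1080), "bitrate_kbps": 2500},
    "tv":      {"resolution": (1280, 720),  "bitrate_kbps": 1500},
    "mobile":  {"resolution": (640, 360),   "bitrate_kbps": 400},
}

def select_profile(device: str, available_kbps: int) -> dict:
    """Pick the device's profile, downgrading if bandwidth can't sustain it."""
    profile = PROFILES[device]
    if profile["bitrate_kbps"] > available_kbps:
        # Fall back to the best profile that still fits the available bandwidth.
        fits = [p for p in PROFILES.values()
                if p["bitrate_kbps"] <= available_kbps]
        if fits:
            profile = max(fits, key=lambda p: p["bitrate_kbps"])
    return profile

# A desktop on a congested link gets the low-bitrate stream instead of HD.
print(select_profile("desktop", 500)["resolution"])  # → (640, 360)
```

In a P2P mesh, by contrast, every sender would have to produce multiple renditions itself, which is exactly the per-client burden the cloud model avoids.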

A few companies have started to bring multi-person video chat to mobile devices, but cross-device video conferencing is still in its infancy, and Google has a good chance to capture the market early on. Of course, the company didn’t want to comment on the specifics of bringing Hangouts to mobile devices, but what Google’s spokesperson told me wasn’t exactly a denial either:

“Again, we can’t comment on future product plans. However, Google Plus heavily invests in mobile products as we believe you should be able to share and communicate, whether you are on the web, tablet, or phone.”

By Janko Roettgers, GigaOM