The Digital Production Partnership Announces Two Major Initiatives to Accelerate the Move to Digital Production in Television

The Digital Production Partnership (DPP) – a partnership between ITV, Channel 4 and the BBC – has today made two major interventions in digital production in television. The first is the release of a report, The Reluctant Revolution – Breaking Down Barriers to Digital Production in TV. The second is the announcement of common Technical & Metadata Standards for file-based delivery of TV programmes to all major UK broadcasters.

Painting a picture of a technical and creative revolution that is struggling to ignite, the report argues the reason is not the indifference or ignorance of producers, but rather the failure of broadcasters, suppliers and manufacturers to understand the practical realities and frustrations of the production community.

Gathering the views and experiences of a broad range of production companies across the UK, the report, commissioned by the DPP from industry analysts MediaSmiths International, concludes that, for all the new technology of recent years, there is no easily workable and affordable model for end-to-end digital production available to independent producers.

“The move to end-to-end digital production is inevitable,” says the report, “but the pace of change is limited by the lack of clear signposts, or standard ways of working, and therefore a reluctance in the production community to set off on the journey… The key to ignition for this slow-moving revolution is the acceptance by all concerned of the day to day realities faced by production communities, and an understanding of where and how the benefits can be identified and achieved.”

The report identifies a number of opportunities and interventions that could bring about revolutionary change in digital production. These include pay-as-you-go models for web and cloud based tools and services, a new role for existing trusted providers such as facility houses, and a more pro-active role for the Broadcasters.

Mark Harrison, Controller of Production, BBC North and BBC lead for the DPP, said of the report and its outcomes, “Those of us who have been evangelists for the creative and business benefits of fully digital production have been mystified by the slow pace of change. This report explains that slowness, and offers practical suggestions for how change can be accelerated – not least by recognising that Broadcasters must get more involved.”

The second announcement made today reflects the commitment on the part of Broadcasters to get more involved: the DPP has unveiled the key features of its Technical & Metadata Standards for File-based Delivery, which will be published in full at the end of this year.

In response to producers citing ‘unnecessary complexity and lack of standardisation’ as one of the key barriers to digital production, the DPP has, as a first step towards greater simplicity, clarified UK broadcasters’ technical expectations around file-based delivery.

Through the DPP, six broadcasters have agreed the UK’s first common file format, structure and wrapper to enable TV programme delivery by file. These new guidelines will complement the common standards already published by the DPP for tape delivery of HD and SD TV programmes.

By agreeing one set of pan-industry technical standards for the UK, the DPP aims to minimise confusion and expense for programme-makers, and avoid a situation where a number of different file types and specifications proliferate.

The DPP has also worked closely with the US-based Advanced Media Workflow Association (AMWA) on a new standard for HD files. AMWA plans to publish AS-11 by the end of the year, and the DPP guidelines will require files delivered to UK broadcasters to be compliant with a specified subset of this internationally recognised file structure.

Key Features of the File Standard
• Designed for completed programme deliveries
• Based on the MXF file format, AVC Intra compression at 100 Mb/s for HD, and IMX at 50 Mb/s for SD
• Founded on a new AMWA international standard, AS-11
• Includes a minimum set of requirements for Programme Editorial and Technical Metadata

The common metadata standards, which form an important part of the new file-based delivery guidelines, have been developed with reference to the European Broadcasting Union’s EBU Core.

The new standards aim to remove any ambiguity during the production and delivery process. A key aspect is the inclusion of editorial and technical metadata, which will ensure a consistent set of information for the processing, review and scheduling of programmes. As part of this requirement, the DPP is planning to provide an application to enable production companies to enter this metadata easily.

Kevin Burrows, CTO Broadcast and Distribution at Channel 4 and DPP Technical Standards Chair, said, “Having one set of standards for file-based delivery across the industry is of huge benefit in ensuring ease of exchange. It will also reduce costs for independent producers as well as minimising confusion amongst programme makers.”

The agreement of these new file-based technical standards does not signal an immediate move to file-based delivery. Instead, the DPP provides clarity now around which file formats, structures and wrappers will become the expected standards for file-based delivery as it is phased in.

From 2012, the BBC, ITV and Channel 4 will begin to take delivery of programmes on file on a selective basis. File-based delivery will be the preferred delivery format for these broadcasters by 2014. This announcement gives the industry a long lead-time, enabling production and post-production companies to ready themselves for the transition.

Source: Digital Production Partnership

Getting Machines to Watch 3D for You

How can 3D television signals be analysed automatically to assure broadcast quality of service? Mike Knee, consultant engineer, research & development at Snell, is working on the answer, and provides a short overview of his work here.

Running a multi-channel TV installation brings new headaches when 3D is involved. For live monitoring of 2D TV channels, manufacturers have developed automated solutions covering many tasks, such as lip-sync measurement or compression quality estimation. 3D brings a new dimension to monitoring, because we additionally have to check the relationship between the left and right-eye signals.

Manual monitoring of 3D is more difficult than 2D because operators need to wear glasses or accept limitations of autostereoscopic displays. So there is a burgeoning interest in automatic monitoring of 3DTV. In this article we look at how various aspects of 3D television signals can be analysed.

Format Detection
Left and right signals may be packed into a single video channel in many ways. Some formats, such as left/right juxtaposition, are ‘loose packed’ because the two pictures are physically separate. Other formats, such as line interleaving, are ‘close packed’ because corresponding left and right pixels are close together.

One way to detect the packing format is to perform a trial unpacking with an assumed format and then detect whether the resulting images appear to be a stereoscopic pair. For loose packed formats, we look for relative similarity between the left and right images when compared with unrelated parts of the picture. For close packed formats, we look for relative differences between the left and right images when compared with adjacent pixels or lines.
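The trial-unpacking idea for the loose-packed case can be sketched in a few lines of Python. This is an illustrative toy, not Snell's implementation: it trial-unpacks a luma frame under side-by-side and top-bottom hypotheses and scores each by the normalised correlation of the two trial halves.

```python
import numpy as np

def _corr(a, b):
    # Normalised correlation between two equal-shaped images.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def detect_packing(frame, threshold=0.8):
    # Trial-unpack under each loose-packed hypothesis; an unpacked
    # stereo pair should correlate far better than unrelated parts
    # of an ordinary 2D picture.
    h, w = frame.shape
    scores = {
        'side-by-side': _corr(frame[:, :w // 2], frame[:, w - w // 2:]),
        'top-bottom': _corr(frame[:h // 2, :], frame[h - h // 2:, :]),
    }
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else '2d'
```

A real detector would also test close-packed hypotheses (line or pixel interleaving) by comparing the unpacked views against adjacent lines or pixels, as described above.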

Depth or Disparity Analysis
An important 3D analysis task is to measure the perceived depth of objects in the scene, which depends on disparity (the horizontal distance between left and right representations of the object). In 3D monitoring, we measure disparity and relate it to perceived depth for different display configurations.

The most important use of disparity measurement is to provide a warning if the viewer is likely to suffer eye strain. It can also be used to verify that the sequence really is 3D, to detect and correct for geometric distortions between the two channels, and to assist in the insertion of captions or subtitles at suitable depths.

One class of disparity measurement methods involves correlating the left and right images to generate a sparse disparity map. This approach is ideal for looking at the behaviour of different objects in the scene and for determining whether limits have been exceeded. Other methods generate a dense disparity map – a disparity value for every pixel. This approach would be necessary if the measurement were being used to drive post-processing, for example to change the effective camera spacing.
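A sparse disparity map of the first kind can be sketched with simple block matching: for each block in the left image, search horizontally in the right image for the offset with the smallest sum of absolute differences. A toy illustration of the principle, not a production correlator:

```python
import numpy as np

def sparse_disparity(left, right, block=16, max_d=24):
    # For each block in the left image, slide horizontally over the
    # right image and keep the offset with the smallest sum of
    # absolute differences (SAD). Returns {(row, col): disparity_px}.
    h, w = left.shape
    out = {}
    for y in range(0, h - block + 1, block):
        for x in range(max_d, w - block - max_d + 1, block):
            patch = left[y:y + block, x:x + block]
            best, best_err = 0, np.inf
            for d in range(-max_d, max_d + 1):
                cand = right[y:y + block, x + d:x + d + block]
                err = np.abs(patch - cand).sum()
                if err < best_err:
                    best, best_err = d, err
            out[(y, x)] = best
    return out
```

Checking the map's extremes against comfort limits would then provide the eye-strain warning mentioned above; a dense per-pixel map needs more sophisticated (and far more expensive) estimation.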

Left-Right Swap Detection
If the left and right images are inadvertently swapped, the result is disturbing, though it is not always obvious what is wrong. It would be useful to detect the swap automatically. A disparity map is a good starting point, but a 3D pair will often exhibit both negative and positive disparity values. So a simple disparity histogram analysis, for example, would not be enough.

One approach is based on the spatial distribution of disparity values. Objects at the centre and bottom of the screen are generally nearer than objects at the top and sides. A left-right swap detector could correlate measured disparity with a template of expected values to see which way round gives the better match.
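A hypothetical version of that template test follows. Because swapping the eyes negates every disparity, it is enough to check which sign of the measured map correlates better with an expected "nearer at the centre and bottom" pattern. The template shape and the sign convention (negative disparity = in front of the screen) are assumptions for illustration:

```python
import numpy as np

def swap_detected(disparity_map):
    # Build a template of expected disparity: content at the bottom
    # centre is assumed nearer (more negative) than at the top and
    # sides. A left/right swap negates the map, so an anti-correlated
    # map suggests the eyes are the wrong way round.
    h, w = disparity_map.shape
    yy = np.linspace(0.0, 1.0, h)[:, None]           # 0 at top, 1 at bottom
    xx = np.abs(np.linspace(-1.0, 1.0, w))[None, :]  # 0 at centre
    template = -(yy * (1.0 - xx))
    t = template - template.mean()
    d = disparity_map - disparity_map.mean()
    score = float((t * d).sum())
    return score < 0.0  # anti-correlation with expectation -> swapped
```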

A better method is based on the observation that closer objects occlude more distant objects. Occluded regions extend to the left of transitions in the left-eye view and to the right in the right-eye view. This observation enables us to determine statistically which view is which.

2D to 3D Conversion Detection
In the rush to deliver 3D content, it is tempting to use 2D to 3D conversion. Some automatic conversion is impressive, but concern remains that over-use of simple conversion algorithms may undermine the appeal of 3DTV. So it would be desirable when monitoring 3D content to detect the possible use of a converter.

One simple 2D to 3D conversion technique is to apply a fixed spatial disparity profile. Another technique is to introduce delay between two versions of the same moving sequence to give an impression of depth depending on motion. The use of these techniques can be detected using a combination of fingerprint comparison, temporal alignment and disparity estimation.
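The delay technique in particular is cheap to detect in principle: if the right eye is just a time-shifted copy of the left, per-frame fingerprints line up almost perfectly at a nonzero lag. A toy sketch in which both the fingerprint and the detector are hypothetical stand-ins for the real tools:

```python
import numpy as np

def frame_signature(frame):
    # Cheap per-frame fingerprint: mean luma over a coarse 4x4 grid.
    h, w = frame.shape
    trimmed = frame[:h - h % 4, :w - w % 4]
    return trimmed.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()

def delay_conversion_lag(left_seq, right_seq, max_lag=5):
    # Align the two eye signatures at every lag; a near-perfect match
    # at a nonzero lag suggests crude delay-based 2D-to-3D conversion.
    sigs_l = [frame_signature(f) for f in left_seq]
    sigs_r = [frame_signature(f) for f in right_seq]
    n = len(sigs_l)
    best_lag, best_err = None, np.inf
    for lag in range(-max_lag, max_lag + 1):
        errs = [np.abs(sigs_l[i] - sigs_r[i + lag]).mean()
                for i in range(n) if 0 <= i + lag < n]
        err = float(np.mean(errs))
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag if best_lag != 0 else None  # None = looks genuine
```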

One can envisage a game of ‘cat and mouse’ whereby detection algorithms become ever more sophisticated in order to keep up with the increasing complexity of automatic 2D to 3D converters.

Source: TVB Europe

Plenoptic Lens Arrays Signal Future?

While producers and manufacturers are wrestling with the practicalities of existing 3D production, research and development is being put into next generation 3D capture. Plenoptic lenses and computational cinematography are two possible future means of capturing light in three dimensions.

Starting from the premise that current 3D production equipment is stone age - cumbersome, expensive and inaccurate - speakers at a session on the future of 3D at IBC gazed into their crystal balls. Sony’s Senior Vice President of engineering and SMPTE President, Peter Lude, gave his version of the future in five steps.

“Step one is the clunky, cabled and complex approach we have used to date. We are now into step two which is about greater automation and computer analysis which should make it easier to use rigs, correct errors and reduce manual convergence.

“It should be possible for a computer system to network together multiple cameras arrayed around a stadium, for example, and to toe-in those cameras at the same time to keep the object at the same convergence point, so that when cutting between cameras there is no discomfort from viewers’ eyes having to readjust.”

Step 3 is to use advanced image processing tools. One idea is to use a synthetic or virtual camera. For example, a 35mm camera can be used as the source for texture, colour and framing while subsidiary cameras, either to the side or in other parts of the set, capture additional information. This information can be used to create a ‘virtual camera’ in post, or to derive information which can compensate for occlusion.

Beyond that Lude suggested the industry should look to new image sensing technologies such as plenoptic and lightfield systems.

A plenoptic camera, such as the stills camera available from German firm Raytrix, permits the counter-intuitive ability to select focus points in post-processing, after the shot has been taken. It also permits capture of 3D images with a single sensor.
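Post-capture refocusing rests on a simple principle: shift each micro-lens sub-view in proportion to its offset on the lens plane, then average; the depth plane whose disparity matches the applied shift comes out sharp while everything else blurs. A toy shift-and-add sketch (integer shifts only, and the data layout is hypothetical):

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    # Synthetic refocus: shift each sub-aperture image by alpha times
    # its viewpoint offset, then average. Objects whose inter-view
    # disparity equals alpha * offset are reinforced (sharp); others
    # are averaged over mismatched positions (blurred).
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (dy, dx) in zip(subviews, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(subviews)
```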

Another idea is to use infra-red systems, as used by Microsoft’s Kinect, or LIDAR (Light Detection And Ranging) devices to scan a field of view and extract depth patterns which can be used to reconstruct scenes. Holographic technologies are perhaps the step after that.

Walt Disney Studios' Vice President of production technology, Howard Lukk, also has his eye on plenoptics. While a plenoptic lens comprises multiple micro-lenses, each capturing a slightly different area of a picture, he speculated on what a rig fitted with up to 100 camera lenses might capture. “What if we could come up with a new camera system that comprises more than one single camera?” he asked.

Stanford University is leading research into this area and has indeed stacked 100 cameras into a single rig for one demonstration.

“It is computationally intensive but the idea that you can refocus an image after it is shot, readjusting focal length, is extremely powerful,” said Lukk. “From all these viewpoints it should be possible - given enough processing power and mathematical juggling - to extrapolate a detailed disparity map and create a good 3D model which we can manipulate in post any way we like. In effect we create stereo in a very controlled environment.”

If 3D camera rigs are not the long-term future of the industry, Lukk suggests that a hybrid approach will develop, combining the capture of volumetric space on set with producing the 3D in a post-production environment at the back end.

“This will give you much more versatility in manipulating the images. This idea feeds on the idea of computational cinematography conceived by Marc Levoy (a computer graphics researcher at Stanford University) a few years ago. Basically this says that if we capture things in a certain way, we can compute the things we really need at the back end.

“You can be less accurate on the front end. Adobe has been doing a lot of work in this area, where you can refocus the image after the event. You can apply this concept to high dynamic range and higher frame rates.”

Disney is currently researching this method at Disney Research in Zurich, Lukk added. In addition Lukk says that research is also being conducted at the Fraunhofer Institute in Germany.

“I think eventually we’ll get back to capturing the volumetric space and allowing cinematographers and directors to do what they do best - that is, capturing the performance,” he said.

Source: TVB Europe

DLNA Premium Video Support Big Step for Connected Home

The DLNA’s support for streaming of premium content within the home, announced at IBC and initially for Europe only, is a major milestone in the evolution of connected home services. The move might have come sooner, but DLNA was waiting for maturation of the key underlying technology, DTCP-IP (Digital Transmission Content Protection over IP), to ensure safe and yet transparent delivery of content over IP networks. DTCP itself dates from the late 1990s, developed by chip maker Intel in conjunction with four major CE (consumer electronics) companies: Hitachi, Panasonic, Sony and Toshiba.

Collectively known as 5C, these five companies appreciated early on the impending need for a protocol capable of protecting audio/video entertainment content from illegal copying, interception and tampering as it traverses digital interfaces, such as USB ports.

The extension to IP came later, as did the crucial support for mechanisms designed to prevent content from being transmitted from a device inside the home to another device somewhere else. This was a vital step in persuading major rights holders such as Hollywood studios that they can trust home networks for delivery of their valuable premium content.

Apart from satisfying content owners, there is another important reason for DLNA’s adoption of DTCP-IP for copy protection around the home, which is to enable consumers to exercise their digital rights to the full. To do this, DTCP-IP enables operators to enforce multi-level rights dependent on the content.

Until now consumer devices have displayed video via either analogue interfaces, or the digital HDMI (High Definition Multimedia Interface), which incorporates a copy protection mechanism developed by Intel called HDCP (High-bandwidth Digital Content Protection) precisely to prevent consumers copying or recording. This allows no rights beyond immediate display of the content.

DTCP-IP on the other hand allows consumers to copy and record content subject to permission from the operator or rights holder. The rights are specified in a licence issued by the Digital Transmission Licensing Administrator (DTLA), details of which can be obtained from its website. The idea is that the licence encodes rules into the content, so that, for example, free-to-air terrestrial broadcasts could be recorded and copied without any restriction. In the case of subscription channels, consumers may be allowed to record content for their own subsequent viewing, but not copy it for sending to friends. Then premium content such as movies or live sports purchased on demand or pay-per-view via a specific transaction would probably be fully protected against recording and copying.

This raises the question of how these rights can be robustly enforced, given that DTCP-IP is a software-only mechanism: CE makers decided it would be impractical and too expensive to build hardware protection into the whole constellation of CE devices. The idea, then, is that hardware protection is confined to the set-top box or gateway linking the home network to the external delivery network via the operator’s Conditional Access (CA) system. Such boxes may well have SoCs (Systems on Chip) incorporating security hardware blocks such as Cryptography Research’s CryptoFirewall.

But then, from the gateway or set-top box onwards through the home network, the DLNA platform takes over with the software-based DTCP-IP, although this will operate in cooperation with the CA and DRM systems, enforcing whatever rights they specify. In order to provide robust security without hardware, the developers of DTCP-IP, spearheaded by Intel, have developed some clever tricks.

First, and most obviously, DTCP-IP requires the devices at each end of a link to validate each other’s DTLA licenses via a joint authentication procedure comprising a sequence of data swaps and key calculations. This is designed to prevent pirates from inserting a circumvention device that would record a copy protection data exchange or strip out the protection, since such a device would first have to be authenticated before any communication with it could take place.

Secondly, the 5C consortium has implemented specific measures that act in combination to prevent a device in a home transmitting content, whether previously recorded or streamed, to any other device outside the home, even next door, unless the rights allow this. One of these measures limits how many routers an IP packet carrying the protected video can traverse before being discarded, which by itself is effective in preventing transmission outside the home. Another measures the round-trip delay between source and destination before allowing video to be transmitted.
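In outline, the localisation test reduces to two limits. The figures widely cited for DTCP-IP are a hop limit of 3 and a round-trip time of 7 ms, though the DTLA specification is the authoritative source. A minimal sketch of the decision logic only, not the real authentication exchange:

```python
# Illustrative constants; confirm against the DTLA DTCP-IP specification.
MAX_HOPS = 3      # IP TTL cap: packets die after this many routers
MAX_RTT_MS = 7.0  # round-trip limit measured during authentication

def link_is_local(observed_hops, measured_rtt_ms):
    # Both limits must hold before protected content may be streamed:
    # the hop cap stops delivery beyond a few routers, and the RTT
    # check catches remote destinations that happen to be few hops away.
    return observed_hops <= MAX_HOPS and measured_rtt_ms <= MAX_RTT_MS
```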

Between them, these measures enable the operator to determine whether a DTCP device is trying to communicate with another device in the same home, with a near neighbour’s, or a more remote one, and act accordingly depending on the rights. This is a significant advance in software-only security and is the reason DTCP-IP has been embraced by DLNA and welcomed so warmly by pay-TV operators, including BSkyB and Orange in Europe. However, the full response of major rights holders has yet to come, and will ultimately determine whether the industry has hit on the right content protection solution for the future digital home.

The DLNA has emerged as the universal standards body defining the overall platform for the digital home, embracing standards developed by others, including Universal Plug and Play (UPnP) for device discovery and the lower level physical networking standards such as MoCA, as well as DTCP-IP.

By Philip Hunter, Broadcast Engineering

Startup Umami Serves Side of iPad Content For TV

New York-based startup Umami will jump into the "second screen" fray with the expected release in the next few weeks of an iPad app, free to consumers, that will serve up contextually relevant content for shows on 40 broadcast and cable networks.

Umami fingerprints the audio in TV content across the 40 networks using a large-scale digital video recorder system. When a user fires up the app, it "listens" for which channel is currently on by comparing it to the Umami fingerprint database, then pulls up news, cast pages, episode guides and social media feeds from various sources in a flipbook-like format. The system works on DVR recordings, too.
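Channel identification of this general kind works by fingerprinting the microphone capture and finding the nearest match in a database of per-channel fingerprints. The sketch below is a deliberately crude stand-in; Umami's actual fingerprinting method is not public, and real systems use robust spectral hashing rather than raw energy envelopes.

```python
import numpy as np

def fingerprint(samples, bands=32):
    # Crude fingerprint: the sign of chunk-to-chunk energy change
    # across coarse chunks of the signal, as a bit string.
    chunks = np.array_split(np.asarray(samples, float) ** 2, bands)
    energy = np.array([c.mean() for c in chunks])
    return (np.diff(energy) > 0).astype(np.uint8)

def identify_channel(mic_samples, database):
    # database: {channel_name: reference_fingerprint}. Return the
    # channel whose fingerprint differs in the fewest bits.
    fp = fingerprint(mic_samples)
    return min(database, key=lambda ch: int(np.sum(fp != database[ch])))
```

Matching against fingerprints of recent broadcast history, rather than only the live feed, is what lets such a system also recognise DVR playback.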

The business model for the year-old firm is to land deals with TV networks and producers, to deliver ads and show-related material to avid fans.

"We wanted to make publishing to our platform dirt simple," Umami CEO Scott Rosenberg said. He claimed Umami already has several media partners lined up, though he declined to identify them.

Umami (pronounced "ooh-MA-mee") is a Japanese word that refers to a fifth taste of "savoriness." The idea: the app is like a flavor-enhancer for TV.

The company faces a slew of competitors, ranging from Nielsen -- whose MediaSync product uses audio watermarks for second-screen apps with live TV -- to Shazam Entertainment, Yahoo's IntoNow, Invidi Technologies and Spot411 Technologies. There's even another New York-based startup called SecondScreen Networks.

Rosenberg sees his primary competitor as networks trying to build apps themselves, a proposition he notes is time-consuming and results in a show- or network-specific app that's a narrow slice of the entire TV viewing experience.

On the other hand, an app like MTV's WatchWith, which provides content synchronized with the networks' top primetime shows, could coexist with Umami. "We don't think those activities are mutually exclusive," he said.

The founders tout their experience in TV and new media. Rosenberg most recently served as vice president of advanced advertising at Rovi, and has worked at BlackArrow, Intel and ReplayTV. Umami chief technology officer Bryan Slavin has worked at broadband video ad firm Lightningcast (acquired by AOL), Leap Wireless and BroadSoft.

The startup formed in the summer of 2010 and this spring raised $1.65 million in seed funding from Battery Ventures, New Enterprise Associates and independent investors.

By Todd Spangler, Multichannel News

Next-Generation Compression: The Market for HEVC and DASH

Despite increases in network capacity over the last decade, the thirst for more and higher quality video means compression is still a critical element in TV delivery. In this IBC2011 discussion, Dr Paul Stallard and Matthew Goldman from the CTO Group at Ericsson's TV Business consider next-generation compression technologies including HEVC and MPEG DASH.

HEVC could support full-resolution 3DTV and ultra high-def, and provide a leapfrog option for HD service providers currently using MPEG-2. It could also ensure longer battery life for mobiles and tablets decoding video. Meanwhile, DASH could bring some much-needed rationalisation to the delivery of adaptive bit rate services.


By John Moulding, Videonet

adRise Brings Interactive Ads to Your Connected TV

For decades, TV advertising has suffered from a lack of interactivity and a lack of true measurability. New connected devices like TVs, Blu-ray players and streaming set-top boxes are changing all that by providing the same type of granular reporting and targetability available for web-based video ads directly on the TV. San Francisco-based startup adRise wants to be the platform to enable those ads to be delivered across multiple devices.

adRise offers up a real-time bidding exchange for video advertising on connected devices, enabling advertisers to serve into a number of different connected devices without having to do the hard work of integrating with the various device platforms themselves. The startup has already integrated with Roku, Google TV, Boxee, Samsung, Yahoo Connected TV and other device platforms, with more on the way. adRise does all the device detection, transcoding and insertion of video assets on its end, so advertisers need only feed their existing Flash creative into its system once to have those assets reach a number of different devices.

On the publisher front, adRise provides a software development kit that publishers can use so that videos delivered to the TV through a Roku, Samsung TV or other device can serve up the same advertising. The startup has already teamed up with ad networks Tremor Video and Brightroll, and plans to announce partnerships with other major publishing and ad partners later this fall. According to CEO Farhad Massoudi, adRise has seen its ad inventory double every month over the last three months, and he expects that trend to continue at least through the end of the year.

The startup solves a problem not just on the delivery side of things, but on the reporting side as well. While Nielsen ratings are nice, there’s no substitute for the type of granular, deep-dive analytics that advertisers can get from IP-based delivery. adRise measures a number of unique characteristics, allowing advertisers to drill down into viewership by device, to see how long ads were watched, and whether viewers clicked through to learn more about a product.

As more viewership happens on connected devices, advertisers will need a way to reach those viewers and publishers will need a way to monetize those views. Platforms like adRise seem well positioned to offer a solution to both.

By Ryan Lawler, GigaOM

Tablet TV: This is Just the Beginning

The tablet boom has already transformed the TV Anywhere and OTT strategies of some Pay TV operators, but the real disruption is yet to come. The tablet market has so far been dominated by Apple with the iPad, but attention is now switching to Amazon, whose product has been factored into forecasts by some analysts even before its launch. Forrester Research predicts it will give Apple a run for its money and notch up sales of up to 5 million units worldwide during the last quarter of this year. Apple, by contrast, will sell anywhere between 10 million and 22 million iPad 2s in the same period, depending on whose forecast you believe in a highly volatile and wildly fluctuating market.

For Pay TV operators the point is that the Amazon device, assuming analysts’ predictions are correct and that its launch is imminent, will retail for around $250, or perhaps under €200 in Europe, about half the price of the iPad 2, and be designed with video in mind. It could turn the tablet into the second TV of choice for many homes during 2012, making it imperative that Pay TV operators act immediately to ensure that this is an opportunity rather than a threat to their business.

Some operators have already done this, with the consensus being that tablets should be embraced as companion devices acting as remote controls and programme guides or for associated activities such as voting in games or reality TV shows, as well as alternative TVs themselves. And without question there should be no extra charge for delivering content to tablets or any other device. This point was made before IBC by US satellite operator DISH Network, whose VP of Consumer Technology Vivek Khemka argued that extending TV services to tablets should not be viewed as an immediate revenue opportunity but as a competitive measure.

“I think the revenues will flow from the customer becoming stickier, and maybe upgrading to premium packages,” said Khemka.

The same line has been taken by Liberty Global with its multimedia gateway called Horizon, which was unveiled at IBC. This includes Wi-Fi ports to deliver TV services to tablet devices at no extra charge. Liberty Global is conducting field trials with Horizon in the Netherlands with commercial launch planned for Q1 2012 by its UPC operation there, followed by its operations in Switzerland and Germany soon after. The aim is to attract developers of Apps to enrich the service both on tablet devices and primary TVs.

“We have ripped up the set-top and made it into a platform so that users can seamlessly navigate content and integrate it onto many devices in multiple places,” said Mike Fries, President and CEO of Liberty Global.

For operators such as Liberty Global, the challenge is to tie tablets into their Pay TV package to prevent consumers from defecting to emerging services that may provide some of the same content free over-the-top. One point in their favour is that at present tablets will consume most TV content over Wi-Fi within the home, rather than over cellular 3G or 4G services that are as yet incapable of delivering premium video services through lack of consistent bandwidth. This means that operators who supply the broadband connection are well placed to provide content to tablets within the home, especially if they can integrate them with the service as companion devices.

Tablets in companion mode can also create the indirect revenue opportunities alluded to by Khemka at DISH Network by increasing engagement with the content being watched on the big screen, drawing more viewers in. During the IBC conference several speakers referred to the ability of companion devices, which admittedly could be smartphones or laptops as well as tablets, to boost audiences for less popular niche content by providing an added element of entertainment or enlightenment.

Such entertainment can involve integration with social media, and this has been exploited very effectively by the UK Eden channel, which re-broadcasts natural history and action programmes made by the BBC and others. The channel allows viewers to vote via companion devices on aspects of programmes, such as their favourite wildlife attraction, with prizes. It also features question and answer sessions via Facebook with major wildlife presenters such as David Attenborough.

“This has led to a 112% increase in viewing within the key 16 to 34 age group,” said Steve Plunkett, Director of Innovation and Technology at Red Bee Media UK, which worked with the Eden channel on the project. Speaking at an IBC conference panel, Plunkett described this as a striking result given that the Eden channel broadcasts content that is quite specialised rather than having mass-market appeal.

To be successful with tablets, operators and broadcasters must, as the Eden channel has done, play to their strengths rather than just treating tablets as second TV sets. The failure of mobile TV so far can be attributed partly to an inability to exploit smaller screens properly, according to Sefy Ariely, VP for Sales and Marketing at IPTV middleware and content discovery specialist Orca Interactive.

“At the end of the day the reason I believe mobile TV was not successful is because it was trying to take the experience from one context to another,” said Ariely, speaking to Videonet at IBC.

This led to a temporary loss of interest in convergence between fixed and mobile TV, according to Ariely, but now the tablet is fast bringing it back. “We have watched how the iPad and tablet have sown the seeds for a tsunami of multi-screen and TV Anywhere discussions, and seen everyone scrambling while for us it was already built-in. We see this as another step in the trend towards personal TV.”

Indeed it is the potential of tablets to personalise the whole TV experience that holds the keys to success for operators, according to Neale Foster, VP of Global Sales at ACCESS, a provider of software for portable and wireless devices. “Apps can seriously enhance that experience and that is the point of companion and multi-screen devices,” said Foster, speaking on a panel hosted during IBC by CA and Pay TV software provider NDS. “They must be enjoyable and fun.”

Over time apps on tablets as companion devices also have potential to create those elusive new revenue opportunities that Pay TV operators are craving, by enabling adverts that play across both screens with scope for interactivity. “I think when the advertisers and media buyers get hold of this it is going to go stellar,” said Steve Godman, Sales Director at London-based digital media agency Skinkers, speaking on the same NDS sponsored panel. “The minute you get a really robust advertisement platform plugged into this stuff you will start generating revenue from it”.

The tablet boom has opened this door for advertising related apps, Godman added. “Tablets change the dynamic – you don’t have to fire up a laptop.” But for advertising, as with the associated programme, the context must be right for the device. “The opportunity lies in providing the right content for the device.”

The potential of tablets is not just as companion devices, or even as second TVs within the home, but also to usher in the era of full TV Anywhere that is not confined to locations where wired connectivity or Wi-Fi access is available. This full tablet potential will emerge gradually over the next few years, according to Andrew Baron, Chief Operating Officer at the UK cable operator Virgin Media.

“We are starting to see the ecosystem emerging, with video and the mobile getting ever-closer,” said Baron at an IBC conference. “I confidently predict the major theme here (at IBC) in the next three or four years will be mobile.”

This will require either further improvement in the ability of mobile 4G networks to provide the required bandwidth and Quality of Service for HD video, or else carpeting almost the whole country with Wi-Fi. With the arrival of the tablet, the end device is now driving mobile video forward rather than holding it back as before.

By Philip Hunter, Videonet

Improving QoE for IP Video Services

The boom in OTT and TV Anywhere services is underlined by rapid growth in IP video transmission at all stages of the content lifecycle, and this is expanding greatly the scope and demand for Quality Assurance (QA) products. Even leading proponents of OTT services still admit there is some way to go to provide acceptable Quality of Experience (QoE) for high-definition premium content over unmanaged networks in particular.

“One of the main obstacles to OTT is the lack of a great user experience,” says Helge Høibraaten, CEO of Vimond Media Solutions, a spin-off of Norwegian commercial TV station TV 2, which is commercialising its OTT broadcast platform internationally.

Speaking at a conference during the recent IBC exhibition in Amsterdam, Høibraaten indicated that an OTT platform was defined by the quality it delivers and must meet the needs of all devices including tablets, PCs and smartphones. Vimond itself has only just extended its applications suite to Apple iOS devices (iPad and iPhone), Android and Windows phones, in addition to Windows desktop PCs which it already supported. The message for vendors of OTT platforms, and for the services that run on them, is that they should only embrace new device types when acceptable quality can be guaranteed.

The definition of acceptable quality is admittedly rather subjective. It is certain, though, that IP networks are creating new challenges for providers of QA video products. These vendors have been extending their portfolios to tackle video delivery over both managed and unmanaged IP networks, with various announcements made at IBC.

While unmanaged networks including the Internet pose the greatest challenge, even managed IP networks require careful handling to avoid packet loss and latency resulting from congestion within the infrastructure. This can happen because unlike traditional broadcast networks, IP infrastructures do not have fixed end-to-end paths and have no pre-determined transmission times for each IP packet. It is possible for more packets to enter the network than can be delivered within an acceptable time frame, leading to congestion and either dropped packets, delays, or both. Either of these can cause loss of quality on receiving devices.

The remedy is to apply traffic shaping, which involves holding up IP packets that are less critical or which can afford a little delay in order to preserve capacity for the most important packets. This can be performed at the point of entry to the network or within the network by routers themselves or other dedicated devices, and the key with managed networks is that operators can control the traffic shaping process better. Potentially, packet loss can be eliminated and latency kept within acceptable limits, according to Per Lindgren, VP Business Development and Co-Founder of Net Insight, the Swedish-owned vendor of the Nimbra IP media transport platform. Net Insight tackles the managed IP quality issue by breaking the network down into separate segments and applying QoE mechanisms including traffic shaping to each.
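The core of traffic shaping is usually some variant of a token bucket: packets spend the link's credit, and bursts beyond the accumulated credit are held back rather than dropped. The following Python sketch is a generic illustration of that idea, not Net Insight's implementation; the class name and parameters are invented for the example.

```python
import time
from collections import deque

class TokenBucketShaper:
    """Minimal token-bucket shaper sketch: a packet may only leave once
    enough byte credit has accumulated, smoothing bursts so the downstream
    link is never offered more than rate_bps on average."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # bytes of credit earned per second
        self.capacity = burst_bytes     # maximum credit (largest allowed burst)
        self.tokens = burst_bytes       # start with a full bucket
        self.last = time.monotonic()
        self.queue = deque()            # packets waiting for credit

    def _refill(self):
        # Add credit for the time elapsed since the last check, capped
        # at the bucket capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def send(self, packet: bytes):
        """Queue a packet, then return whatever the credit allows out now."""
        self.queue.append(packet)
        return self.release()

    def release(self):
        """Release queued packets in order while credit remains."""
        self._refill()
        sent = []
        while self.queue and self.tokens >= len(self.queue[0]):
            pkt = self.queue.popleft()
            self.tokens -= len(pkt)   # transmitting spends credit
            sent.append(pkt)
        return sent
```

A shaper configured with a 1,000-byte burst will pass a 600-byte packet immediately but hold a second one until credit accrues at the configured rate.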

The first step is to ensure that the routers themselves do not create problems under congestion by dropping packets as they pass through, so Net Insight has applied traffic shaping at this level to ensure this does not happen. “By traffic shaping even inside our MSRs (Media Switch Routers), we can traffic shape down until we ensure we do not lose any packets there,” says Lindgren.

The next step is to address the links through the core network between the routers and ensure that the QoS needs of each individual service are met. “Traditionally telcos have not been treating media traffic as a special service,” says Lindgren. “So we propose building service aware media networks. MSRs aggregate traffic so that the core network (provided by a telco) only handles aggregated flows rather than individual services. Our MSRs then handle the different protection needs of each service, and can add QoS enhanced links inside a media service network rather than just at the edges.”

In this way, by addressing both the routers and links between them separately as part of a coordinated traffic management approach, the network can achieve much higher levels of quality. Even then, though, the possibility of packet loss or delay cannot be discounted, and so the third element of Net Insight’s QA strategy is to monitor every link. “We can do continuous real-time monitoring of traffic between MSRs and see any packet loss sent between one MSR and another,” Lindgren explains. “That makes it much easier to troubleshoot.”
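Per-link loss monitoring of this kind typically comes down to watching for gaps in packet sequence numbers. The sketch below assumes RTP-style 16-bit sequence numbers and is a simplified illustration, not Net Insight's actual mechanism.

```python
class LinkLossMonitor:
    """Count lost packets on a link by detecting gaps in 16-bit
    RTP-style sequence numbers (a simplified monitoring sketch)."""

    def __init__(self):
        self.expected = None   # next sequence number we expect to see
        self.received = 0
        self.lost = 0

    def on_packet(self, seq: int):
        if self.expected is not None:
            # Wrap-safe distance between what arrived and what we expected;
            # any skipped numbers are counted as losses.
            gap = (seq - self.expected) % 65536
            self.lost += gap
        self.received += 1
        self.expected = (seq + 1) % 65536

    def loss_ratio(self) -> float:
        total = self.received + self.lost
        return self.lost / total if total else 0.0
```

Feeding it the sequence 0, 1, 2, 5, 6 reports two lost packets (numbers 3 and 4), which is the kind of per-hop figure that makes troubleshooting "much easier", as Lindgren puts it.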

Within unmanaged IP networks, on the other hand, it is impossible for broadcasters or operators to do either traffic shaping or performance monitoring since they do not own the infrastructure. This is an increasing issue with the growth of cloud-based services where the infrastructure is normally owned and managed by a third-party with video delivered over some Content Distribution Network (CDN). In that case there is an apparent black hole between the cloud and the end user, making it difficult for a content provider to know what quality the customer is getting.

Another Swedish vendor specialising in distributed video delivery, Edgeware, has tackled this problem with its Convoy VDN, which is software operating within the company’s Distributed Video Delivery Network (D-VDN) platform. Announced at IBC, this operates by combining the receiving device’s capability with the QoS known to be provided by the delivery infrastructure, according to Edgeware’s Chief Marketing Officer Duncan Potter.

The point is that CDNs usually operate via adaptive streaming protocols to improve network efficiency and performance, breaking video up into multiple small file chunks that can take different routes before being reassembled at the destination. The player continuously measures each user’s CPU capacity and available bandwidth and adjusts the quality of the stream in real-time to ensure that QoE is always as good as it can be at that point in time. But breaking up video into chunks does make it hard to monitor what is going on within the CDN, and this is the problem Edgeware has addressed with Convoy VDN. “As we are a network device we can see what is going through,” said Potter. “We work out what is sent, collect statistics via a central reporting engine, and that is integrated with the higher level CDN management system.”
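The rendition-selection step at the heart of adaptive streaming can be sketched roughly as follows; the bitrate ladder and safety margin are illustrative assumptions, not any particular CDN's or player's logic.

```python
def pick_rendition(ladder_kbps, measured_kbps, safety=0.8):
    """Choose the highest bitrate rendition that fits within a safety
    margin of the measured throughput; fall back to the lowest rung
    when even that does not fit. A simplified sketch of adaptive
    bitrate selection -- real players also weigh buffer levels."""
    budget = measured_kbps * safety
    candidates = [r for r in sorted(ladder_kbps) if r <= budget]
    return candidates[-1] if candidates else min(ladder_kbps)

# Example encoding ladder (kbps); values are invented for illustration.
ladder = [400, 800, 1500, 3000, 6000]
```

With 5 Mbps of measured throughput the player would request the 3 Mbps rendition, leaving headroom for fluctuation; on a 300 kbps connection it drops to the lowest rung rather than stalling.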

Such measures may help ensure optimum quality when a service is working normally but do not cater for major outages within the infrastructure. While IP networks are becoming more reliable, there is rising dependence in an increasingly global content market on external communication links that may be unreliable. This is a particular problem for the growing number of niche and ethnic services that have a global audience distributed across numerous, often small, communities around the world.

Such ethnic services can be lucrative, with high profit margins for operators because consumers are prepared to pay a premium or a separate subscription to receive them, but the total revenue in a given region is usually relatively small. This means operators cannot afford to spend too much capital on protecting against failure of the service in a region beyond their control, according to Danny Wilson, CEO of TV performance monitoring vendor Pixelmetrix. “Typically if an operator imports content from, say, India, they are vulnerable to loss of signal from Delhi,” he points out.

Pixelmetrix is tackling this with software announced at IBC that enables its DVStor recording and playback platform to perform disaster recovery and start playing out the content in the event of an outage. “We are recording what is going on at a downlink coming in from overseas and have integrated this with our test and measurement devices,” says Wilson. “Then if there is any interruption, the sensor detects that input signal is lost, and this DVStor solution can then provide back-up recovery on a real-time basis.”

This, in effect, is a cloud-based disaster recovery service and could be incorporated within IP-based delivery infrastructures. It highlights the growing scope of Quality Assurance, bringing together elements of disaster recovery, troubleshooting and performance monitoring within an overall QoE package.

By Philip Hunter, Videonet

DLNA Now Supports Premium Video

The announcement by the Digital Living Network Alliance (DLNA) of support for premium video during IBC2011 heralds the coming of age for home networks. The extension of DLNA’s Interoperability Guidelines to include premium video including HD content plugs an important gap in the standard, which previously was confined to streaming user generated content (UGC) between connected devices within the home.

DLNA has emerged as the ‘standard of standards’ for the connected home, providing the framework for interoperability, communication and automated discovery among devices. It makes use of existing standards such as Universal Plug and Play (UPnP) for device discovery, as well as underlying physical connectivity technologies such as Wi-Fi and MoCA. DLNA has been closing on its vision of enabling any device to access any form of digital content no matter where it resides within the home network, without requiring any complex set-up or configuration by the user. DLNA was started by CE (Consumer Electronics) vendors but has since been joined by service providers who have been pushing hard for the premium video support.

One such service provider is Orange of France, which has already deployed DLNA on its Livebox residential gateway, its IPTV and media centre set-top box and the home library NAS (Network Attached Storage). The new support for premium video will boost Orange’s connected home offering and provide a full multi-screen experience, according to Paul-François Fournier, Executive Vice President at Orange Technocentre. “It will enable us to deliver TV services over numerous screens inside the home, such as tablets, Web phones and connected TVs,” he says.

The key enabling technology for DLNA’s premium video support is Digital Transmission Content Protection over IP (DTCP/IP), which was developed by five companies: chip maker Intel and CE giants Hitachi, Panasonic, Sony and Toshiba. This group, referred to collectively as 5C, formed an entity called the Digital Transmission Licensing Administrator to license the DTCP technology.

Designed specifically for the home network, DTCP encrypts content between devices within the home after checking that they both support the standard. Crucially, the content itself can carry information defining its rights and indicating whether it can be copied and to what extent, with the hope this will satisfy the big rights holders such as Hollywood studios. For example, some content may only be played, while other content might be recordable but still protected from copying.
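As a rough illustration, DTCP carries this rights information as a two-bit copy-control field. The state names below follow the published scheme, but the helper functions are a simplified sketch rather than the licensed implementation.

```python
# DTCP two-bit copy-control states carried with the content.
CCI_COPY_FREE = 0b00            # may be copied without restriction
CCI_NO_MORE_COPIES = 0b01       # a copy of copy-once content; no further copies
CCI_COPY_ONE_GENERATION = 0b10  # one generation of copies allowed
CCI_COPY_NEVER = 0b11           # playback only, never recordable

def may_record(cci: int) -> bool:
    """A sink device may record only copy-free or copy-one-generation
    content; copy-never and no-more-copies content is play-only."""
    return cci in (CCI_COPY_FREE, CCI_COPY_ONE_GENERATION)

def remark_after_copy(cci: int) -> int:
    """CCI value to stamp onto a newly made recording: copying
    copy-one-generation content exhausts the permitted generation."""
    if cci == CCI_COPY_ONE_GENERATION:
        return CCI_NO_MORE_COPIES
    return cci
```

So a recorder in the home network would accept a "copy one generation" stream but mark its recording "no more copies", which is exactly the generational control the studios are looking for.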

In this way DTCP/IP is a mechanism for enforcing rights defined by whatever DRM is used by the service provider, but it is not a DRM itself. “It is assumed the content is DRM protected, and DLNA is agnostic to DRM schemes,” says Nidhish Parikh, Chairman and President of DLNA. “When streaming that content, DLNA takes the trouble to make sure it is protected using DTCP-IP. Premium video is building on that protected stream layer.”

More than 12,000 products are now DLNA certified, including TVs and PCs but also other devices such as cameras, tablets and even appliances like fridges, which, according to Parikh, will increasingly feature inset monitors for watching TV in the kitchen. The total number of devices now certified is 500 million, and will top 2 billion cumulatively by 2014, according to Parikh.

By Philip Hunter, Videonet

Camargus Launches Live Picture Stitching

Belgian start-up Camargus has devised a picture stitching application for live sports which it claims is superior to other products under development at Sony and Fraunhofer HHI.

Still in prototype but due for release in mid-2012, the system pairs a single rig of 16 HD 2/3-inch CCDs with software that stitches the pictures into a panoramic image. An operator can zoom into any part of that panorama using a virtual camera to track players or freeze frame elements for analysis. It features instant replay with zoom control and playback.


“Sony’s application used three HD cameras. We cover the field of play with 16 HD cameras to provide far greater resolution. You can clearly see shirt numbers and sponsor logos,” said CEO Tom Mertens.

“Fraunhofer’s system is a lot bigger. Ours is more compact and requires very little to set up. You only need to focus the lenses and you are all set.”

The new system builds on two previous prototypes built by Camargus with Fletcher Chicago and trialled by ESPN in the US. The Maxx Zoom was used for replay shots during the 2010 NFL season for ESPN's Monday Night Football, while the All22 system linked a wide-angle multi-camera system to Camargus' video stitching technology.

Camargus is a spin-off from Hasselt University in Belgium.

By Adrian Pennington, TVB Europe

Will a Proprietary Media Container Proliferate or Pigeonhole Content?

It’s been more than a year since a consortium of companies banded together to form the Digital Entertainment Content Ecosystem, or DECE, an effort to provide a unified set of standards for the digital distribution of premium content. The purpose of the initiative, now branded UltraViolet, is to allow consumers to purchase content from multiple sources, store it in a digital online locker, and view it on any compatible device. The DECE consortium includes manufacturers Sony, Intel, Cisco, and HP, software providers Microsoft and Adobe Systems, and content providers Comcast, Fox, NBC Universal, Netflix and Warner Bros.

Earlier this year, the Advanced Television Systems Committee released a specification of the proposed ATSC NRT (Non-real time) Standard that provides support for delivery of content in advance of use (i.e., files, as opposed to live content), to both fixed and mobile broadcast receivers. One of the provisions of ATSC NRT is that receivers can be built that support different codecs, compression formats, and container file formats, including AVC, MP3 and DTS-HD audio, as well as the multimedia container format profiled in DECE Media Format Specification.

A container format is a specification that defines how video, audio and subtitle content, intended for synchronous playback, may be stored within a compliant file. (A container can function as a file entity or as an encapsulation method for a live stream.) Examples of container formats are the MPEG Transport Stream, Microsoft Audio Video Interleave (AVI), Apple QuickTime, and now the UltraViolet Common File Format (CFF), which was derived from the ISO Base Media File Format.
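For a sense of what a container format actually specifies, the ISO Base Media File Format that CFF derives from arranges content in nested "boxes", each prefixed by a 4-byte big-endian size and a four-character type code. A minimal top-level box walker, offered here as an illustrative sketch of the format rather than a CFF-compliant parser:

```python
import struct

def iter_boxes(data: bytes, offset=0, end=None):
    """Yield (type, offset, size) for each top-level box in an ISO Base
    Media File Format byte string. Each box begins with a 4-byte
    big-endian size followed by a 4-byte ASCII type code."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        if size == 1:
            # A size of 1 means a 64-bit extended size follows the type.
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:
            # A size of 0 means the box extends to the end of the file.
            size = end - offset
        if size < 8:
            break  # malformed box; stop rather than loop forever
        yield box_type, offset, size
        offset += size
```

Running it over a toy file containing a 16-byte `ftyp` box followed by an 8-byte `moov` box yields those two entries in order, which is essentially how players locate the movie metadata and media data within any BMFF-derived container.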

An important element contained in DECE CFF is an encryption scheme and key mapping that can be used with multiple DRM (Digital Rights Management) systems capable of providing key management and protection, content usage control, and device authentication and authorization.

This summer, DECE LLC launched its licensing program for content, technology and service providers, and anticipates that, beginning this fall, consumers in the United States will be able to purchase select movies and TV shows with UltraViolet rights. According to the consortium, UltraViolet "will combine the benefits of cloud access with the power of an open, industry standard - empowering consumers to use multiple content services and device brands interchangeably, at home and on-the-go."

Some would argue that DECE CFF is proprietary and therefore not a standard, saying such status can only be conveyed by a regulatory body or sanctioned standard-developing organization. Nevertheless, there are enough content companies behind the specification that it could have the force of an "official" or de facto standard. And DECE-compatible content development tools are becoming available, from companies like DTS, the audio format developer best known for their multi-channel cinema system, and Digital Rapids, the television, movie and Web content hardware and software developer.

But several companies have resisted joining the consortium, most notably Apple and Disney, no doubt because Apple already has a stronghold on content distribution and Disney has developed its own DRM system, called KeyChest. (Don’t forget that Steve Jobs is Disney’s largest individual shareholder, too.) And DECE promoters will have to be extremely careful in how they market the system. While not actually a video format (i.e., compression codec), the system could potentially compete with hard media such as DVD and Blu-ray, and co-branding on those media (for simultaneous distribution) could irreparably confuse consumers about UltraViolet.

One can speculate about how the name UltraViolet was chosen; ultraviolet light, after all, cannot be seen by humans. Perhaps the DECE wants to keep the system as "invisible" as possible, so as not to burden users with a technical or business complication. The consortium’s greatest challenge could indeed be the right level of visibility.

By Aldo Cugnini, Display Daily

TV Anywhere Will Need Hardware Security

The debate over whether pay TV security is best done in hardware or software may now finally be resolved with the answer being that both are needed, especially if premium content is on offer. This is the thrust of some recent industry developments, such as the announcement by Verimatrix, vendor of the software-based VCAS system, that it is licensing Cryptography Research's CryptoFirewall security core technology to help build advanced solutions protecting video delivery revenue streams.

This seems to be recognition that some form of hardware security is needed to augment the Verimatrix public key-based approach to content delivery and provide an extra layer of defence. In fact Cryptography Research itself acknowledges that even this may not ultimately be enough to enable premium pay TV services to work over the Internet and mobile networks, with other ingredients becoming necessary, including digital watermarking and also snooping agents that prowl around the network on the lookout for security attacks and breaches.

Pay TV security has two objectives, to buy time and create confidence among content providers that the service can be trusted. No system is immune from attack forever, and to an extent Cryptography Research has already shown it can meet these two fundamental requirements in the different field of Blu-ray discs. The company developed the digital rights management system called BD+ for Blu-ray based on the idea of self-protection where the content itself contains embedded codes needed for it to play. Because the Blu-ray players participate in executing those codes, rights owners could change the DRM security in the event of a breach by changing the codes, without having to make any alterations to the players themselves. Eventually BD+ was cracked, but it took 10 years, so the technology succeeded on the first criterion of buying time. It also succeeded on the second count of creating confidence, since several movie studios cited Blu-ray Disc's adoption of BD+ as the reason they supported the Blu-ray Disc rather than the alternative HD-DVD. Cryptography Research is now hoping to generate similar confidence in the pay TV world around CryptoFirewall.

For OTT services it is also vital that any solution can work across multiple platforms, and on this front Cryptography Research has been signing up the major vendors of system-on-chip (SoC) silicon for pay TV devices including STBs, with STMicroelectronics, Broadcom, MStar, and ViXS on board so far. This will ensure the firewall is compatible across most leading boxes, which is valuable for pay TV in general because it enlarges the target market and reduces costs. For OTT it will be essential that a given service reaches all devices irrespective of whose chip is in them.

The function of the CryptoFirewall is to protect session encryption keys themselves from attack by shielding them within the SoC, while also providing a layer of security that enhances the CA system it is working with. The point here is that CryptoFirewall is not a complete security system but designed to interoperate with a CA system from Verimatrix or some other CA vendor. In doing so, it provides a kind of double-layered security, in that compromise of either a given CA system or the CryptoFirewall on its own is insufficient to crack a pay TV service that uses both in combination. CryptoFirewall was designed on the assumption that the associated CA is insecure. Similarly, CA systems such as Verimatrix VCAS are stand-alone and do not intrinsically rely on any other component. Now Verimatrix's licensing of CryptoFirewall reinforces the security offered.

It remains to be seen whether systems such as the CryptoFirewall/CA combination will hold up against piracy, although there are signs that the studios and other content houses are gaining confidence for now. One thing is becoming clear though: Any security system must be transparent to consumers if it is to be successful. On this front, Cryptography Research's CTO and VP of engineering makes an important point when he argues that two-factor security, where the user has a separate token to generate one-time passwords in synchronisation with the service, will not work in pay TV, even though it is being used in some cases for online banking.

In the case of banking, the interests of both parties coincide, since neither the customer nor the bank want money to be stolen from an account. But in pay TV, a registered user may also be stealing the service by transmitting content on to friends. Two-factor security would still protect the user from having the service stolen, but not the pay TV provider. For example, the user could point a webcam at the token to transmit the temporary passkey immediately to friends.

The second reason two-factor will not work for pay TV follows from this. Since users do not care as much if the service is compromised as in the case of online banking, they will be less willing to endure the inconvenience of having to sign on via a separate device — i.e. the second factor. Any pay TV service that tries to impose two-factor security is therefore likely to have to withdraw it pretty quickly.

Instead, other remedies will be used to strengthen single-factor security. One will be to deploy supervisory processes within the network to provide defence in depth by monitoring for any signs of security breaches. The use of fingerprinting or watermarking in various ways to mark content can also help. Verimatrix, for example, uses on-screen display (OSD) fingerprinting in its recently launched VCAS for Internet, forcing periodic display of a device identifier overlaid on streamed content to deter its unauthorized retransmission.

It can be seen, then, that the battle lines between the content security industry and pirates are being redrawn in the era of TV Anywhere and OTT.

By Philip Hunter, Broadcast Engineering

NDS Surfaces: the Next Revolution in TV

NDS provided the ‘blow you away’ demonstration for IBC2011 with its Surfaces concept, which takes the best of the big screen and companion screen experiences and throws them onto a single wall-sized display (or multiple walls) to create a feast of visual and interactive entertainment that still manages to maintain the lean-back characteristics of TV.

Surfaces is designed to exploit revolutionary advances in video display technology. NDS believes that wall-sized video displays, including video-capable ‘wallpaper’, will be available at reasonable prices within five years and has decided that there is no longer any reason to limit the TV experience to a 50 inch rectangular box. Surfaces will give platform operators the display real-estate to provide more immersive TV experiences when we want to fully relax, or a combination of entertainment, diaries, information, social media and connected home applications in a television-centric user interface at other times in the day.

In the demonstration, we were greeted by an ‘ambient’ display on the wall-sized screen showing large framed photos of family members and Facebook ‘speech bubbles’ with our latest social interactions. The first person to come down to breakfast is Mum, and as she is alone she selects ‘Mum’ on the controlling tablet and the display reorganises itself so that the equivalent of the BBC Radio 2 website appears in the centre of the wall, with music and details about the current show and the music playlist. To the right is a clock, the latest weather and diary items for the day. To the left are newspaper headlines that can be pursued for more information via the tablet.

Mum decides that she wants to watch the breakfast TV news so the screen reorganises itself so that the news bulletin appears in a 50 inch widescreen format at the centre-top of the wall. Radio 2 moves to the left and is muted as the audio switches to TV. But the radio playlist is displayed so Mum can switch back to a song she likes at any moment. Under the news are headline links, which can be clicked via the companion to learn more about the news stories. After the national news comes the regional news and the headline links change to local stories.

Then we return to the TV in the evening for some family entertainment. We choose the family profile on the tablet and The X Factor appears as an 80 inch widescreen TV display. Down the left-hand side are Twitter feeds relating to the show and below this is a live voting app where you can see viewer predictions about how each judge will vote, and you can cast your own vote via the tablet. The app is updated live so that as each judge makes their decision, a red cross or a green tick appears next to their photo.

For the purposes of the demo, NDS provides an ‘Immersive’ bar on the tablet that you manually adjust depending on how immersive you want your TV experience to be. At this point, we are watching X Factor at about half way on the immersive gauge, so we still have the social interaction on the left and on the right there are promotions for Amazon where you can buy the song that is being sung currently on the show. By sliding the immersive scale higher, these interactive and social elements disappear from the screen and the video content alone fills the entire wall. For good measure, the lights also dim to create a cinema ambience.

NDS then demonstrated what a 4k (ultra high-definition) movie looks like to confirm how the wall-sized screen can also act as your home cinema. Before we could relax too much, a video feed appeared as a picture-in-picture showing a baby crying in its cot upstairs, reminding us that not everyone gets to watch a movie uninterrupted! Mum and Dad can decide whether to keep an eye on that situation (the picture-in-picture can be reduced into the corner) or dismiss the babycam feed as one of them goes to settle the youngster. The demo clearly illustrates how the TV service provider can provide connected home applications in a way that makes them much more useful and compelling.

Surfaces illustrates some exciting concepts. First, it expands the boundaries of TV in anticipation of advanced screen technologies that a few years ago seemed like science-fiction. Just as the television experience has already spilled out of the 40 inch widescreen and onto tablets and smartphones, it can now encompass an entire wall. Surfaces shows how you can make use of that real-estate to completely revolutionise the user experience and potentially introduce new services, from newspaper apps to home automation and videoconferencing, that will have additional value as part of an aggregated service provider user interface.

Surfaces takes all the richness of the convergence experience, like content and contextual apps and information, and gives viewers the option to have all that in one place without overlaying anything on the video itself. Then it allows consumers to decide how immersed they want to be in the video entertainment, so they can have fewer or more contextual apps and information to suit their mood and the time of day.

This is a stunning demo; the best I have seen personally in my 13 trips to IBC. Surfaces is revolutionary because if NDS is right, there will be no physical boundary to the television service in future. It was pointed out that you do not even have to produce TV for a rectangular display in this new world, so producers could experiment with new shapes and effects. And subtitles do not have to be contained within the screen frame, for example. The Surfaces concept represents a fabulous opportunity for Pay TV operators to cement their position as the gateway to the home, building on what they are already doing in multi-screen and companion screen offers.

The implications for a Surfaces-enabled world (and of course we will expect other middleware/UI companies to embrace this concept) are dramatic. This looks like the easiest ‘sell’ to consumers of any TV innovation since colour. The public will be blown away by it and it will require little or no explanation. It will be an early adopter must-have with great ‘wow’ factor to impress neighbours and friends. This looks like something every service provider will end up offering once the technologies are priced for the mass-market.

This could also be the market-maker for ultra high-definition. When Steve Jobs introduced the Apple iPhone he gave us the reason to need mobile broadband. This will be the reason why millions of homes, rather than just a few palaces, will want ultra high-definition TV. It is worth noting that NDS upscaled HD automatically as the viewer slid the immersion scale upwards, and this still looked good across a 3.5 metre (approx) screen. And the company stresses that you can still use standard-definition TV too, since sometimes you will be viewing content in a 32 inch or 40 inch frame size. Nonetheless, cinema style displays, which is what you get when you slide the immersion scale to ‘full’, deserve more.

There will be homes that struggle to find a wall large enough, and without obstructions, to make this work. And as many of us eat breakfast in one room (e.g. the kitchen/diner) and relax in another (e.g. the living room), the full potential of Surfaces relies on screens being affordable enough to be present in more than one room. But this is so compelling you can imagine people wanting a wall display of some kind, however their home is configured, and this could also prompt a revolution in interior design so that rooms have one clear end for the screen. What does seem certain is that this UI will make traditional remote controls obsolete, as this is an experience that needs and deserves full tablet control.

Simon Parnall, UK Vice President of technology at NDS, said during the demo: "I have a 46 inch screen in the corner of my home and normally it is black. And whether it is news, sports or movies, I see all content in 46 inches. It is our fundamental belief that actually, the size of the image needs to change according to the kind of content I am watching to match my attitude towards the content and my degree of interest, or what we are calling my level of immersion, in it."

Nigel Smith, VP and Chief Marketing Officer at NDS, believes the rate of innovation in display technology means this will be realistic within five years, with pricing of $2,000 or less for full-wall displays that might even be a plastic film capable of displaying video. He noted that husbands often want bigger screens today and are limited by what their wives will tolerate! With Surfaces, there is no screen sitting in the room as furniture, so this problem is removed.

NDS has based the Surfaces concept demo on its existing unified multi-screen headend (which provides common intelligence in the backoffice for video management and delivery) and its Service Delivery Platform, which provides an open API that acts as an interface between apps on devices, a TV platform and social networks or other Internet content, and opens the way to third-party development work in multi-screen and companion screen services. These are the foundation technologies for Surfaces.

As Smith points out: "We are not waiting for the screen technology to become available. We are working on getting the technology ready prior to what we think will happen anyway. We are waiting for the hardware technologies to catch up with the software." He adds that the CE vendors are looking beyond 3DTV for what will sell screens next and points out that 3D uptake has been slowed by lack of content. "We are helping them out because as soon as they launch these new screens, this will work."

Smith adds that everyone who saw the demo said they wanted this solution at home. We are not surprised. Like the screens they will harness, Surfaces and concepts like it will be the next big thing in TV.

By John Moulding, Videonet

IBC: Multi-screen Dominates, but Another Revolution is Brewing

There was a positive mood at IBC this year, based on our conversations with vendors and the amount of business they were doing at the show, and not surprisingly, multi-screen TV was the big theme again. It is becoming clear now how this is a transition almost as big and dramatic as digital TV itself, which is why it keeps rolling on as a subject.

Multi-screen is evolving and the discussion this year was about how platform operators can achieve scale cost-effectively as they move beyond tens of channels to hundreds of channels, and how the early movers can differentiate their services once everyone has content to all screens.

The answer to this second question seems to be an integrated and holistic multi-screen experience, which means companion apps like remote control from the smartphone, and pause-resume between devices. The bottom line is that duplicating content everywhere is not enough; the whole experience has to be enriched so that two plus two equals five.

Multi-screen should keep us all busy for some years yet, but the even better news from IBC is that there is another revolution on the way, eloquently demonstrated by NDS with its ‘Surfaces’ concept. This is the evolution of TV from a rectangular box in our home, and a piece of furniture, to wall-sized display surfaces, which means that all the contextual interactivity you can achieve across TV, smartphones and tablets can actually be replicated in one place, providing you get the balance between lean-back and lean-forward correct.

This demo made it very clear where the future of TV is heading in the home and it was stunning. In a way, convergence has given us such multimedia riches that we can no longer contain them on a 42 inch or 60 inch screen, thus the drive for companion experiences. But it appears that advanced display technology and an accompanying revolution in the TV user interface will give us the option to converge the post-convergence TV experience! That will not remove the need for companion apps but consumers will have more choice about where they have contextual apps and information.

NDS Surfaces

It often happens that the right technologies all come along at the same time, and that is not a coincidence, of course. Thus MPEG-4 AVC, DVB-S2, a new generation of decoders and lower priced flat-screen TVs arrived simultaneously to make the market for HDTV. So with wall-sized screens expected to become affordable, and a user interface that illustrates their potential for video and much more, it is probably time to start looking at ultra high-definition TV in more detail because NDS Surfaces looked to us like the reason mass-market consumers will want ultra high-def.

The HEVC (High Efficiency Video Coding) next-generation compression standard is progressing well and is expected to halve bit rates compared to MPEG-4 AVC. We are told the CE industry is looking for something to sell after 3DTV, and video ‘surfaces’ is where it is focused. So we may be set for the next big thing after multi-screen and connected/hybrid TV (including hybrid broadcast broadband).
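As a back-of-the-envelope illustration of why halving the bit rate matters (the capacity and channel rates below are our own ballpark assumptions, not figures from the standardisation work):

```python
# Illustrative arithmetic only: how many HD channels fit in one delivery
# channel at MPEG-4 AVC rates versus HEVC rates. All numbers are assumed
# ballpark figures, not quoted from the article or any standard.
MUX_CAPACITY_MBPS = 40.0          # assumed capacity of one delivery channel
HD_AVC_MBPS = 8.0                 # assumed MPEG-4 AVC rate for one HD channel
HD_HEVC_MBPS = HD_AVC_MBPS / 2    # HEVC targets roughly half the AVC rate

channels_avc = int(MUX_CAPACITY_MBPS // HD_AVC_MBPS)
channels_hevc = int(MUX_CAPACITY_MBPS // HD_HEVC_MBPS)
print(channels_avc, channels_hevc)  # 5 10
```

Doubling the channel count per multiplex (or, equivalently, doubling per-channel quality at the same capacity) is what makes HEVC interesting for bandwidth-hungry ideas like ultra high-definition and wall-sized displays.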

To sum up a few of the other interesting things we saw and learned:

The connected home: Service providers can exploit new opportunities beyond video including home automation. Enabling whole-home TV and multi-screen is still the big driver today. There is a growing interest in IP thin clients around the home including set-top boxes that support adaptive bit rate streaming for OTT and even service provider STBs that only support adaptive streaming. It is also becoming clear that a major challenge for multi-screen TV at home is getting the cost of customer support down, since operators are going to be held responsible when the tablet stops streaming, whether it is their fault or not.

Social TV: More focus on integrating social media into the TV experience. TV Genius had a nice demo showing how you can populate the EPG with pictures of your Facebook friends who like the programmes. Liberty Global and Virgin Media both outlined the importance of content recommendation, and Think Analytics announced a major deal with Liberty Global to provide the recommendations on the Horizon platform across multiple UPC territories, demonstrating that we are moving into mass roll-out phase for this technology.

Multi-screen video processing: When it comes to video delivery, it is all about scale now and providing a common headend for classic broadcast and multi-screen delivery. There is a trend towards hardware-based transcoding to enable more channels per rack unit. Encoding vendors with a heritage in ‘classic’ broadcast are strengthening their multi-screen offers and vendors who targeted IPTV and multi-screen are looking for opportunities in traditional TV over cable and satellite. The bottom line is that everyone wants an end-to-end solution so they can take care of all video delivery for their customers.

Content security: Pay TV OTT content protection was another important theme for the show. There is a feeling that multi-screen has reached a tipping point where all channels are expected on all screens, and platform operators will expect the same levels of security on smartphones and tablets as they have on the STB. They also want a managed security service and not just a DRM, and that is a role the CA vendors are only too happy to fulfil.

Tablets: Where do you start? They are everywhere in this industry and will eventually be everywhere in homes, and they are already having a notable impact on TV. There is a growing feeling that they will displace the PC and even the second TV in the home because they are so easy to use, the picture quality is so good and they boot up instantly. Tablets are encouraging more linear viewing. There are big opportunities and disruptions ahead because of synchronisation between the tablet and the main TV, with the tablet providing greenfield advertising inventory. Broadcasters will have to fight third parties to control the interactive advertising (or engagement) opportunities on synchronised tablets.

By John Moulding, Videonet

DisplayMate: ‘Human Vision’ Delivers Full HD Using Passive 3D Glasses

Adding more fuel to the "active vs. passive" debate, DisplayMate went public today with its Active vs. Passive 3D Glasses Shoot-Out. The study finds that passive 3DTVs, which use an alternating raster scan approach, deliver a full-HD resolution 3D experience thanks to image fusion in human visual perception. The findings are significant because they elevate human perception of image quality as a measure of the 3D experience, since specs alone seem inadequate. Will it end the active vs. passive debate? Probably not, but it adds credence to the claims of the passive camp.

The sharpness and resolution delivered with passive glasses is characterized by DisplayMate president Raymond Soneira as, "By far the most controversial and misunderstood issue in 3DTV." Maintaining full-HD resolution in each eye is often cited as the most significant reason for using active shutter glasses to create the 3D image. As the theory goes, a full-HD image is delivered by the TV at 120Hz. The active glasses shutter between the left and right images at 60Hz, delivering a 1080p image to each eye. Passive glasses based on film pattern retarder (FPR) technology use micropolarized film applied to alternating rows of the display, which (in theory) halves the resolution by dividing the image into left and right views at the same 60Hz.
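As a rough illustration of the mechanism in question (a toy sketch, not part of the DisplayMate study), the function below interleaves left- and right-eye frames row by row the way a film-pattern-retarder display does. Each eye receives only half the rows of any one frame; DisplayMate's argument is that the visual system fuses the two half-row fields back into a full-resolution 3D image.

```python
# Toy model of a film-pattern-retarder (passive 3D) display: even rows carry
# the left-eye image, odd rows the right-eye image (0-based row indices).
# Rows here are just labels standing in for lines of pixels.

def interleave_passive_3d(left_rows, right_rows):
    """Build the displayed frame from alternating rows of the two eye images."""
    assert len(left_rows) == len(right_rows)
    frame = []
    for i in range(len(left_rows)):
        frame.append(left_rows[i] if i % 2 == 0 else right_rows[i])
    return frame

left = [f"L{i}" for i in range(4)]
right = [f"R{i}" for i in range(4)]
print(interleave_passive_3d(left, right))  # ['L0', 'R1', 'L2', 'R3']
```

The "half resolution" claim counts only the rows each eye receives; Soneira's counter-argument is that what matters is the fused image the brain constructs from both fields together.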

Not so, says Soneira. Human visual perception takes place in the brain. "Because the 3D images are created in the brain, instruments cannot be used to measure how sharp or muffled they appear on a given 3DTV - that can only be done with human vision by actually viewing 3D content - but this can be done in a very precise and analytical manner. What matters here is the actual 3D visual performance, NOT an analysis of the display hardware diagnostic performance the way it is normally done for 2D displays," Soneira said.

To get there, Soneira created a "reverse vision test" to determine display sharpness, "…by how small a text can be read on a given 3DTV at a given distance when viewing regular Blu-ray movie content." The test measured the clarity of displayed text in 3D images from an IMAX film (Space Station 3D) using both active and passive technology. "In all cases, the small text (6 to 10 pixels in height) was readable on the FPR passive glasses 3DTV, which definitively establishes that there is excellent 3D image fusion and the passive glasses deliver full 1080p resolution in 3D… If the passive glasses only delivered half the resolution, as some claim, then it would have been impossible to read the small text on the FPR TVs. So those half resolution claims are manifestly wrong - no ifs, ands or buts!"

DisplayMate used quantitative analysis and a unique measuring approach created by the company to settle more than just the 3D sharpness debate. Surprisingly, DisplayMate found that "the measurements showed passive glasses 3DTVs perform much better than the active glasses 3DTVs across the board."

Soneira concludes with high optimism for 3DTV in general. "The magic of providing a comfortable, convincing, and realistic 3rd dimension to TV viewing is what will make this 3D technology catch on and become successful in the future. 3DTV has finally come of age and arrived as a fun and pleasant enhancement to watching traditional 2D movies and TV content… It’s all backed up with solid evidence…" and that’s a comfort to know.

Note: See our extended article on this 3D technology shoot-out that includes brightness, flicker, crosstalk and ghosting, plus recommendations in the upcoming Large Display Report subscription newsletter from Insight Media.

By Steve Sechrist, Display Daily

The DVB Project Considers New 3DTV Standard

The DVB project finalised the first 3DTV broadcasting specification at the beginning of 2011. It’s now an ETSI standard - the world’s first 3DTV broadcasting standard. This is a ‘Frame Compatible’ format that can work with existing HDTV set top boxes – though they may need a download upgrade to take advantage of the capability to position subtitles at different scene depths, to match the positions of different characters in the scene.

Some terrestrial broadcasters argued that, if they were to use a 'Frame Compatible' system, they needed a way to serve 2D viewers at the same time, without taking up another delivery channel. One way, though not always possible depending on the set top boxes sold nationally, is to use MPEG signalling to instruct the set top box to stretch and display the left image as a 2D image. Another is to use an interactive application (such as MHEG-5) to pull out, stretch and display the left image as a 2D image as part of an interactive ‘red button’ service. Each approach has pluses and minuses.
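Both methods rely on the same extraction step, which can be sketched in a few lines (a toy illustration on nested lists of "pixels", not a description of any real receiver implementation): take the left-eye half of a side-by-side frame-compatible frame and stretch it back to full width.

```python
# Toy sketch: recovering a 2D picture from a side-by-side frame-compatible
# 3D frame. Each frame is a list of rows; the left half of every row is the
# left-eye image, the right half the right-eye image. The left half is
# stretched back to full width by pixel doubling (nearest neighbour).

def side_by_side_to_2d(frame):
    """Extract and horizontally stretch the left-eye half of each row."""
    out = []
    for row in frame:
        left_half = row[: len(row) // 2]
        stretched = []
        for px in left_half:
            stretched.extend([px, px])  # double each pixel horizontally
        out.append(stretched)
    return out

frame = [["a", "b", "A", "B"],
         ["c", "d", "C", "D"]]  # lower-case = left eye, upper-case = right eye
print(side_by_side_to_2d(frame))
# [['a', 'a', 'b', 'b'], ['c', 'c', 'd', 'd']]
```

The stretched picture has only half the horizontal resolution of a true HD frame, which is exactly the trade-off the 'Phase 2' discussion later in the article is about.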

Thousands of hours of 3DTV programming have been made; and, across the world, programme makers understand the supreme care that is needed to make 3DTV programmes to minimise eye discomfort. We understand more about what window violations are tolerable, how to handle scene cuts, converged and parallel shooting, and about maximum positive and negative disparities for home size screens. 3DTV production grammar is being tamed.

There is always a horizon. For 3DTV, in the longer term, this may be a multiview broadcast system, where autostereoscopic displays provide more than two images. Beyond that it may be an ‘object wave’ system, of which the hologram is a simple form. But these are probably not for the near future – more research and development is needed before we see large screen consumer autostereoscopic displays which are not head-position sensitive.

The issue now for DVB is whether an ‘improved’ plano-stereoscopic system will be needed in the next five years. The DVB Project has been discussing this and hopes to reach a conclusion in the autumn of 2011. This would be 3DTV ‘Phase 2’. True, it would need a new set top box (or equivalent), but the pay-off would be extra quality or features.

One of the open issues is picture quality. The current 'Frame Compatible' (or Phase 1) delivery system is obliged to share the resolution capability available from an HDTV delivery channel between the left and right images. Would there be a seriously perceptible improvement in quality if the Phase 2 system delivered all the resolution capability of an HDTV channel in each image?

Would it be enough to justify a new delivery system? Blu-ray delivers full HD quality to each eye (well, at 24Hz anyway). Will broadcasters need to match this to be competitive? There is a terrestrial HD 3DTV system on air in Korea that transmits the left and right eye images separately, and it is setting the world an example here.

A second issue is compatibility. Under what circumstances does it need to work? Will the new system need to be compatible with ‘Frame Compatible’ (FC) reception (i.e. a Phase 1 signal with top-up)? Will the new system need to provide an HD 2D image for viewers with 2D-only displays, together with something extra that can add up to two HDTV L and R images? This is called ‘2D Service Compatibility’. It would be nice if broadcasters had the option of providing either Service or FC compatible signals.

A third issue is viewer ‘depth range adjustment’. It is easy to move the whole 3D image - lock, stock, and barrel - backwards or forwards by shifting the left and right images together or apart. It is difficult to actually change the depth range in the scene. This needs sophisticated processing in the display and helper signals (L and R depth maps) with the broadcast. Will it be a sufficiently attractive new feature to justify the extra things needed? If we could adjust the ‘depth range’ of the 3D picture to suit our taste, our age (oldies prefer less depth), and our viewing distance from the screen, it would certainly be a ‘plus’. But, how much of one?
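The "easy" part described above, moving the whole scene backwards or forwards by sliding the two eye images together or apart, can be sketched in a few lines (a toy illustration on single rows of pixels, not any proposed DVB mechanism):

```python
# Toy sketch: uniform depth shifting of a stereo pair. Shifting the left and
# right images apart (positive shift) or together (negative shift) changes
# the overall disparity, moving the whole scene forwards or backwards.
# Note what it cannot do: compress or expand the depth *range* within the
# scene, which is the hard problem requiring depth maps and display-side
# processing, as the article explains.

def shift_depth(left_row, right_row, shift, pad=0):
    """Shift the left row by -shift and the right row by +shift pixels."""
    def hshift(row, s):
        if s >= 0:  # shift right, padding on the left
            return [pad] * s + row[: len(row) - s]
        return row[-s:] + [pad] * (-s)  # shift left, padding on the right
    return hshift(left_row, -shift), hshift(right_row, shift)

left, right = shift_depth([1, 2, 3, 4], [5, 6, 7, 8], 1)
print(left, right)  # [2, 3, 4, 0] [0, 5, 6, 7]
```

Every pixel pair gains the same extra disparity, which is why this simple trick relocates the scene as a whole ("lock, stock, and barrel") rather than adjusting how deep it looks internally.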

Furthermore, I find myself wondering: if we included in DVB Phase 2, as an option, the Service Compatible system of separate L and R images used in Korea today, would that be welcomed? Would we have the world covered in this case?

There are also thoughts about whether, beyond Phase 2, there could be a ‘Phase 3’, when the full majesty of MPEG HEVC compression technology is ready for the market. This might save as much as 50% in bit rate compared to MPEG-4 AVC. If so, we could work on the requirements for this Phase 3 in a few years’ time.

But the $64,000 question of what Phase 2 will be remains open as this is written. Listen to some of the debates at IBC this year, and you should find some clues.

By David Wood, TVB Europe

Multi-screen Video Processing

Multi-screen TV is approaching a tipping point now as the Pay TV pioneers look to expand their offers to cover more channels as well as more devices, and more service providers launch TV Everywhere packages. One of the important tasks for many operators walking around IBC this year is to work out how they can scale their multi-screen services beyond a sub-set of the channels they offer on the set-top box. Ultimately consumers will expect all their channels on all screens, of course.

Ericsson Harnesses Hardware and Software to Support More Channels
Ericsson is using IBC to highlight the scalability issue and has two new products that it believes will help operators expand their offers. These are the Ericsson SPR1200 Multiscreen Stream Processor, a true hardware approach to multi-screen compression, and the Ericsson NPR1200 Multiscreen Network Processor, a dense software-based adaptive streaming segmentation and encryption processor, designed to track dynamic updates in adaptive streaming formats and DRM systems associated with the needs of delivery to different types of devices.

The combined solution enables high quality and cost-effective processing of hundreds of channels into thousands of adaptive streaming profiles, the company says. It claims the SPR1200 and NPR1200 represent the most powerful and flexible solution for the growing multi-screen market.

Ericsson’s ConsumerLab research shows that 93% of consumers still watch linear TV and will continue to do so. “The expectation by consumers for multi-screen TV is that all of their content choices available in the home on the large screen will also be available on every screen,” it adds.

RGB has Multi-platform Headend for Large and Mid-sized Deployments
Meanwhile, RGB Networks claims that the combination of its Video Multiprocessing Gateway (VMG) (a carrier-class platform for multi-screen video delivery) and its adaptive streaming solution, the TransAct Packager, provides the most scalable solution available for deployment of advanced IP video services to any device, enabling operators to go straight from trial to deployment.

The company recently added a new member to the VMG product family, in the form of the VMG-8, which it says is ideal for small to medium-sized deployments or deployments at the edge. This product inherits the field-proven transcoding, transrating, ad insertion and other advanced video processing capabilities of the VMG family and packages them in a new 7RU high carrier-grade chassis. The VMG-8 holds up to eight modules and provides a compact alternative to RGB’s larger VMG-14.

In its fully redundant configuration the VMG-8 can be configured with three video transcoder modules, one audio transcoder module and a single controller module for transcoding programmes to over 140 streams for delivery to any IP-enabled device. In this redundant configuration, each module type has a back-up which can take over operation should the primary fail. Complementing its module redundancy, the VMG-8’s reliability is further enhanced with back-up power supplies and cooling fans which automatically take over if a primary unit fails.

Like the company’s VMG-14, the VMG-8 also benefits from recent enhancements to the TCM transcoder module, enabling transcoding of up to 60 SD or HD inputs and 240 adaptive bitrate outputs per VMG-8 chassis. The VMG-14 can now support up to 132 SD or HD inputs and 528 outputs per chassis.
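Those input/output figures correspond to an adaptive bit rate "ladder" of four output profiles per input channel. A toy sketch of that fan-out (the resolutions and bit rates below are illustrative assumptions, not RGB's actual presets):

```python
# Hypothetical ABR profile ladder: four output renditions per input channel,
# matching the 60-in / 240-out ratio quoted for the VMG-8 chassis.
# Resolutions and bit rates are illustrative assumptions only.
PROFILES = [
    ("1080p", 5000),  # kbps
    ("720p", 2500),
    ("480p", 1200),
    ("360p", 600),
]

def ladder_for(channels):
    """One (channel, resolution, kbps) output stream per channel per profile."""
    return [(ch, res, kbps) for ch in channels for res, kbps in PROFILES]

outputs = ladder_for([f"ch{i}" for i in range(60)])
print(len(outputs))  # 240 streams from 60 inputs
```

This fan-out is why multi-screen scalability dominates the vendor pitches in this section: every channel added to a TV Everywhere line-up multiplies into several transcoded, packaged and encrypted streams.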

Harmonic Supports Live and File-based Multi-screen Delivery
Harmonic is also focusing on the needs of content distributors and creators as they deliver more of their content to more screens. The company recently announced the ProMedia family of software solutions for optimizing live and file-based multi-screen video production and processing. The ProMedia family performs a broad range of functions, including transcoding, packaging and origination to enable high-quality video creation and delivery of live streaming, live-to-VOD, and VOD services to TVs, PCs, tablets, smartphones and other IP-connected devices. ProMedia is also considered an ideal solution for content creation in file-based workflows such as tapeless production environments.

The ProMedia family provides a suite of software products that can be deployed individually or as an end-to-end video processing solution, offering great flexibility. This solution is also integrated with leading DRM systems, asset management systems and content distribution networks, in addition to other Harmonic products including encoders, receivers, playout servers, and storage.

The ProMedia family leverages Harmonic’s strong H.264 video codec expertise and is based on the same intellectual property behind Harmonic's Electra encoders. The family includes ProMedia Live for real-time video processing and transcoding, featuring enhanced H.264 video codec technology developed by Harmonic and optimized for creating high-quality Internet video streams.

Another important product in the family is ProMedia Package, a carrier-grade adaptive streaming preparation system for secure, high-value Internet video services. ProMedia Package supports numerous HTTP streaming protocol standards and is capable of packaging in multiple output formats from a single video source, enabling a more scalable, distributed architecture.

Envivio Helps Move Content Package and Delivery to the Edge
Envivio has introduced a number of notable new products for multi-screen TV. These are the Halo Network Media Processor (NMP), 4Caster C4 Gen III multi-screen encoder, and the Envivio Genesis universal mezzanine output format.

Halo NMP enables operators to shift their content packaging and delivery processing to the edges of their existing video distribution infrastructure. “Moving these operations makes it possible to add support for delivering high quality, protected video to new devices without altering the headend,” Envivio declares. “Halo NMP complements existing broadcast infrastructure and simplifies distribution to the latest smartphones, tablets, connected TVs and PCs.”

Halo lets operators take advantage of the Genesis universal output format to control the bandwidth demands multi-screen TV makes on backbone networks. Genesis merges the bitrates and resolutions needed to deliver adaptive streams for major standards and technologies into a single, efficient output format. Envivio claims the result is a reduction of as much as 50% in the bandwidth demands multi-screen TV makes on backbone resources.

Video headends powered by the Envivio 4Caster C4 family of encoders provide support for the full spectrum of IPTV, Internet TV, mobile TV, cable, satellite and terrestrial applications. They enable operators to support the growing variety of formats needed to deliver video to any device at any time, including simultaneous video delivery from a single encoder to digital set-top boxes, connected TVs, PCs and Macs, as well as tablets and mobile screens.

Imagine Communications Supports 1,000 Multi-profile Transcoders
Imagine Communications will showcase its ICE Streaming System for streaming live multi-format video to multiple tablets. ICE is a new network-side transcoding platform that allows multi-screen service providers to deliver what it claims is uncompromised video quality across multiple devices with unmatched compression efficiency. The ICE Streaming System supports up to 1,000 stream-aligned, multi-profile transcodes from a single carrier-class blade system platform.

The ICE Streaming System is based on Imagine's widely deployed ICE Video Platform and combines picture quality, scalability and full support for integrated fragmentation, encryption and HTTP streaming.

ISILON and ATEME Partner to Boost Media Processing Performance
On the eve of IBC, ATEME announced that it has partnered with ISILON to support high performance content processing for video delivery to multiple screens. This results from the combination of the ISILON IQ Series NAS storage and ATEME’s TITAN File transcoding platform. The TITAN video processing speed is enhanced by ultra-fast storage. Meanwhile more content titles can be stored thanks to the superior compression efficiency of TITAN.

The companies say the partnership dramatically simplifies the operational challenges of multi-screen transcoding workflows. “Installed in a matter of hours, the solution scales out linearly with the expansion of the content catalogue, the migration to HD, or as new output formats are added to support more viewing devices. It takes only minutes to add transcoding blades or storage capacity: there is no need for re-design and no downtime.”

The combination of ISILON IQ NAS Storage and the ATEME TITAN transcoder is proven and delivers content for more than 40 million pay TV subscribers worldwide already. The partnership, announced in late August, will make it easy for many more service operators to access the solution as they move from tape to file based workflows or enhance their VOD offerings.

By John Moulding, Videonet