Harmonise European HBB or Watch Google Prevail

There is a real risk that the cost of repurposing content for multiple Connected TV platforms will exceed the original cost of producing content, with the danger that broadcast content will be priced out of the market – weakening the whole concept of Connected TV and Hybrid Broadcast Broadband services. That was the warning delivered by Peter MacAvock, Programme Manager at EBU (European Broadcasting Union) at the OTT-TV World Summit in London during December.

MacAvock also told Europe’s broadcasting industry that while there was no desire for a Europe-wide standard for Hybrid Broadcast Broadband (HBB), more harmonisation is needed. Application interoperability is the Holy Grail. He pointed to the global ambitions of Connected TV platforms from Google, Yahoo! and Apple and warned broadcasters that national solutions, built for local market needs, would not prevail.

“Vendor driven Connected TV is the bane of our lives in the European broadcasting community because whatever badge you have on a TV set, you have the same problem: they are all incompatible,” he said. “You have to sign deals with each of the vendors. CE manufacturers are missing a trick by not providing interoperability. The difficulty is that all these services will be driven by content but the content producers will become tired of having to develop their services so many times for so many platforms.

“We need to be very careful that the cost of repurposing content for different platforms does not overtake the cost of producing the content in the first place. That is a real cost we are dealing with in the content community, so there is a real risk of this happening.”

MacAvock noted the three key HBB environments: vendor-driven Connected TV, broadcast-centric HBB with signalling (like MHP- and HbbTV-based solutions) and what he called managed HBB from the likes of YouView and Google TV. He pointed out that major European public broadcasters have driven the development of HBB in their respective markets, whether it is using HbbTV in France, MHP in Italy or YouView in the UK, for example.

“The bad news is that broadcasters are focused on domestic markets with slightly different requirements, leading to slightly different standards. We are seeing national solutions and a belief that national solutions can prevail. That is a real weakness because national solutions cannot work,” MacAvock told delegates.

Noting that Google, Yahoo! and Apple are multinational brands with international solutions, he continued: “Other operators are not only operating in France and Germany or Italy but they are operating all over the world, so the broadcasters who are the hybrid leaders have to work together now. Working together is the way we will prevail. Individual national solutions will fail because they do not have the same multinational or global view.”

MacAvock said EBU is well positioned to try to foster more cooperation and harmonization, given the role of its members in the DVB Project (which spawned MHP), the IRT (which is driving HbbTV) and YouView. “We are trying to harmonize different technical elements and we are working in the content domain so we know the best way to pave the road for hybrid. The Holy Grail has to be application interoperability so you can develop applications and port them onto different platforms without too much difficulty.”

MacAvock pointed out that the broadcast signalling used in MHEG-5 (used for UK interactive TV) and HbbTV is the same. “The question is whether YouView adopts the same signalling,” he said.

He told the OTT-TV World Summit audience that there were a number of areas where the industry can work together to find some harmonization. This includes working with CDNs (Content Delivery Networks) and ISPs to develop a common understanding of how to deliver media over the Internet. There is scope for cooperation on DRM and he believes the industry can probably agree a common set of standards for metadata so it is treated the same wherever content appears.

The EBU does not believe there can be one HBB standard for the whole of Europe. Different market requirements have to be respected, resulting in different platforms. But what the EBU does want is a core set of principles that the European industry can work from.

MacAvock outlined the motives that are driving broadcasters in different markets, dividing them into two main camps: Greenfield interactive TV markets with no significant popular interactive TV services today (e.g. France, Germany and Spain), and Brownfield markets where services already exist and must not be undermined by HBB (e.g. Italy and the UK). For the Greenfield markets, the big opportunity is a next-generation teletext and access to catch-up TV services beyond the PC.

In both markets, the aim is to deliver a rich experience that harnesses the strength of the broadband return channel and the increasing sophistication of receivers, which can call upon more processing power to render services more quickly and make them easier to use.

EBU has no doubt about the size of the HBB opportunity or the need for the European broadcast industry to make this a shared success. “The linear broadcast industry is about to confront a change that is probably bigger than the move from Black & White to Colour and it will change our paradigm,” MacAvock concluded.

By John Moulding, Videonet

3D in the Home and Cinema


Source: Sony Professional

ATIS IIF Launches Stereoscopic 3DTV Quality Initiative

The Alliance for Telecommunications Industry Solutions’ (ATIS) IPTV Interoperability Forum (IIF) has recently launched a new work program to address Quality of Service (QoS) and Quality of Experience (QoE) for Stereoscopic 3D IPTV.

Stereoscopic 3D is a rapidly developing area, but further data collection is necessary in order to best understand user perceptions of quality. To this end, the IIF will describe, analyze and recommend multiple quality-related metrics. Potential metrics include: depth maps and depth perception, loss of resolution caused by frame-compatible formats, video synchronization, 3D graphics and closed captioning, ghosting from left-right cross-talk, QoE issues such as 3D fatigue, and more.

Once the information gathering and analyses are complete, the IIF will produce specifications which address areas of applicable QoS and QoE.

Source: ATIS

Raising the Bar on 2D-to-3D Conversion

BitAnimate (Lake Oswego, OR) is a small start-up company developing 2D-to-3D conversion technology. They envision their technology being used in a variety of ways, from an online conversion service that lets users upload their clips and watch them in 3D, to embedded applications such as 3DTVs. They also believe their technology is at such a level that they can approach Hollywood to dramatically reduce the time and cost of making theatrical-quality conversions. Other companies are already in this space, most notably JVC and DDD. After seeing a demo recently, we found that the main differentiating feature is that it actually works as advertised, with fewer artifacts. And, it even works in real time.

I know, the purists will say that converted 3D content will never look as good as native 3D content. But, we have seen some horrendous ‘native’ content too. The result of either approach ultimately depends on the skill of the artists involved. The challenge with automated conversion is that the machine is making artistic decisions, not a person, and is doing so very rapidly in the case of real-time.

We were invited to BitAnimate’s offices for an exclusive demonstration of their technology prior to their planned demonstrations in a private suite at CES in January. They provided a side-by-side demonstration of their software with the built-in 2D-to-3D conversion on the Samsung 3DTV (based on the DDD software). The source material was Jurassic Park and Troy.

We tried to look at a lot of converted content while researching our 2010 Real-Time 2D-to-3D Conversion report and developed a flow chart of how most conversion algorithms operate. There is a lot going on in a very short period of time: analyzing the scene to figure out what is in it, creating a depth map using visual and/or motion cues, constructing the 3D image while filling in as much missing detail as possible and, finally, some post-processing to make sure the colors and gamma are the same for each eye. And you do this about 30 times per second — whew.
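
For readers who want to see those stages in one place, here is a minimal, illustrative Python sketch of a generic real-time conversion loop. It follows the flow chart described above, not BitAnimate's undisclosed algorithm; the motion-based depth cue, the pixel-shifting view synthesis and all function names are assumptions made purely for illustration, and this naive version is far too slow for actual real-time use.

```python
# Illustrative sketch only: a generic 2D-to-3D pipeline, not BitAnimate's method.
import numpy as np

def estimate_depth(frame, prev_frame):
    """Crude depth cue: treat frame-to-frame change (motion) as nearness."""
    motion = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32)).mean(axis=2)
    return motion / (motion.max() + 1e-6)          # 0 = far, 1 = near

def synthesize_views(frame, depth, max_disparity=12):
    """Shift pixels horizontally by a depth-dependent disparity to form L/R views."""
    h, w, _ = frame.shape
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    shift = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            d = shift[y, x]
            left[y, min(w - 1, x + d)] = frame[y, x]
            right[y, max(0, x - d)] = frame[y, x]
    for view in (left, right):                      # naive hole filling
        holes = view.sum(axis=2) == 0
        for y in range(h):
            for x in range(1, w):
                if holes[y, x]:
                    view[y, x] = view[y, x - 1]
    return left, right

def match_color_and_gamma(left, right):
    """Post-process so both eyes see the same overall brightness."""
    gain = (left.mean() + 1e-6) / (right.mean() + 1e-6)
    return left, np.clip(right.astype(np.float32) * gain, 0, 255).astype(left.dtype)

def convert_stream(frames):
    prev = frames[0]
    for frame in frames[1:]:                        # roughly 30 iterations per second
        depth = estimate_depth(frame, prev)
        left, right = synthesize_views(frame, depth)
        yield match_color_and_gamma(left, right)
        prev = frame
```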

BitAnimate considers their conversion process to be a trade secret, so they wouldn't reveal too many details. When asked directly about their process, Behrooz Maleki, President of BitAnimate, used an automotive analogy: "While you could describe two cars as having an 8-cylinder engine, two doors, a transmission and four tires, you don’t know if it is a Ferrari or a Pontiac." So, perhaps our flow chart is fairly accurate, but how they do each step is the secret sauce.

Maleki’s experience in video processing goes back to his days at InFocus in the late 1990s. While there, he developed a chip to do deinterlacing and image processing to compete with the market-leading solution from Faroudja. "The Faroudja chip was $230, but our solution was a $15 chip — and it looked better in a side-by-side demo," commented Maleki.

The same approach is being applied to 2D-to-3D conversion. Maleki says that not a lot of hardware is needed to implement his algorithms. The graphics cards on modern PCs and the video processing cores in TVs should be adequate, he says.

In addition, they are developing a new 3D web site that will potentially offer a 3D conversion service. Upload your 2D content and get back streaming 3D content. This could go to your iPhone or PC (using Silverlight). BitAnimate uses their own 3D player, which allows the user to choose the output format for the 3D platform they are viewing the content on.

It seems logical that if 2D-to-3D conversion can be done very well there should be a market for it. BitAnimate seems to be another example of a small company with a good technology having the potential to raise the bar on everyone.

By Dale Maunu, Display Daily

New Gadget Promises 3D Without the Headaches

In 1907 a Polish optical scientist named Moritz von Rohr unveiled a strange device named the Synopter, which he claimed could make two-dimensional images appear 3D. By looking through the arrangement of lenses and mirrors, visitors to art galleries would be drawn into the paintings, as if the framed canvas had become a window to a world beyond. But the Synopter – heavy and prohibitively expensive – was a commercial failure, and the device vanished almost without trace.

A century later, Rob Black is hoping to rekindle interest in von Rohr's creation. A psychologist specialising in visual perception at the University of Liverpool, UK, Black has designed and built an improved version he calls "The I" (UK Patent Application No. 1003690.3). Unlike some 3D glasses, the device uses no electronics, and works on normal 2D images or video.

Playing Tricks on Your Eyes
The device works in the opposite way to the 3D systems employed in cinemas. There, images on the screen are filtered so that each eye sees a slightly different perspective – known as binocular disparity – fooling the brain into perceiving depth. "The I" ensures that both eyes see an image or computer screen from exactly the same perspective. With none of the depth cues associated with binocular disparity, the brain assumes it must be viewing a distant 3D object instead of looking at a 2D image. As a result, the image is perceived as if it were a window the viewer is looking through, and details in the image are interpreted as objects scattered across a landscape.

The perceptual trick, called synoptic vision, is apparent on any nearby two-dimensional image, but is especially marked where other depth cues exist. For instance, the brain will naturally assume an animal in the 2D image is in the foreground if it is large, and far away if it is small.

No More Headaches
Black says that the device also avoids the headaches associated with other 3D technologies. In movie theatres, the eyes need to focus on the screen itself to see objects in focus, but the 3D effects can force the viewer to try to focus several metres in front of or behind the screen instead. "Even if you use the world's best 3D kit, it can still present conflicting perceptual information," Black told New Scientist.

Because his device uses no binocular disparity the viewer isn't forced to attempt such impossible feats of focusing – instead, they can focus naturally on any object in the image, using other cues such as size to 'decide' what depth the object occupies. "By turning off that conflicting information, you can enjoy the scene in the way the artist depicted."

Currently the device is still a prototype, but Black hopes that his synoptic viewer will one day be incorporated into existing 3D systems. "I think 3D is impressive at the moment, but with this we can get significantly closer to reality simulation."

By Frank Swain, New Scientist

Technicolor Launches 3D Certification Program

Technicolor announced the launch of its 3D Certification program branded “Technicolor Certifi3D”. The certification program is geared towards broadcasters and network service providers with the goal of delivering quality and comfortable 3D experiences to end consumers.

Technicolor Certifi3D was created to ensure that 3D material meets minimum quality requirements before it is delivered to consumers. As part of the service, Technicolor evaluates each shot against a set of objective criteria for stereographic reproduction, including a 15-point quality checklist to identify common errors in production which result in suboptimal 3D content. The company will also offer training programs to broadcasters and content creators to help them migrate their production and post-production techniques from traditional television to the three dimensional medium.

Behind the technology that serves as the foundation for the Technicolor Certifi3D service is an advanced 3D analysis software tool that was developed by Technicolor’s Research and Innovation team. Utilizing the left and right source masters, the software builds a 3D model in real time, giving an accurate pixel count for objects that are so close to, or so far from, the viewer that they would cause discomfort. It also automatically detects and flags conflicts with the edges of the TV screen, another significant source of discomfort for 3D in the home.
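
Technicolor has not published how its analysis tool works, but the kind of per-shot check it describes can be sketched in simplified form. The Python fragment below assumes a per-pixel disparity map is already available and uses invented comfort thresholds; it only illustrates the idea of counting "too close" and "too far" pixels and flagging screen-edge conflicts.

```python
# Simplified illustration of a per-shot stereo comfort analysis; thresholds,
# function names and the disparity-map input are assumptions, not Certifi3D.
import numpy as np

def comfort_report(disparity, near_limit=-30, far_limit=40, edge_margin=16):
    """disparity: per-pixel horizontal offset in pixels between the left and
    right masters (negative = in front of the screen, positive = behind it)."""
    too_near = disparity < near_limit            # objects popping out too far
    too_far = disparity > far_limit              # excessive background divergence
    # Edge-of-screen conflict: something in front of the screen plane that
    # touches the left or right border of the frame.
    edges = np.zeros_like(disparity, dtype=bool)
    edges[:, :edge_margin] = True
    edges[:, -edge_margin:] = True
    edge_conflict = (disparity < 0) & edges
    total = disparity.size
    return {
        "pct_too_near": 100.0 * too_near.sum() / total,
        "pct_too_far": 100.0 * too_far.sum() / total,
        "edge_conflict": bool(edge_conflict.any()),
    }

# Example: a shot whose left third sits well in front of the screen plane.
disp = np.zeros((1080, 1920))
disp[:, :640] = -45
print(comfort_report(disp))   # flags ~33% "too near" pixels and an edge conflict
```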

“Our 3D certification platform allows our stereo technicians to quickly and precisely diagnose many of the issues that create viewer fatigue and discomfort” says Pierre Routhier, Technicolor’s Vice President for 3D product strategy and business development. “Our goal in launching the Certifi3D program was to take a proactive approach in support of the industry to ensure a consistent and quality end consumer 3D experience in the home.”

Technicolor is a leader in providing an array of 3D services to its media and entertainment customers, ranging from 3D visual effects and post production to Blu-ray 3D services, 3D VOD encoding and mobile 3D.

Technicolor 3D Certification Poster

Source: Technicolor

Android’s Gingerbread Brings WebM to Mobile Phones

Monday’s release of Android 2.3, code named Gingerbread, is adding support for the WebM open video format to the smart phone platform. Gingerbread users will be able to play WebM videos in their device’s Chrome browser, and Android application developers will be able to make use of WebM as well.

The first Gingerbread builds will ship with libvpx 0.9.2, which is a slightly outdated WebM release. Support for the newest WebM release, 0.9.5, code-named Aylesbury, will be pushed out with a maintenance release, according to Google’s WebM product manager John Luther. Users will be able to access every WebM video stream or file with the WebM release included in Gingerbread, but the coming update should help with the format’s playback performance and memory footprint, amongst other things.

Support for Android is an important first step for WebM to gain market share. Google open sourced the video format in May, and it has since been integrated into Firefox, Chrome and Opera. WebM has also gotten more support from video vendors and application developers.

Luther said in November that 80 percent of YouTube’s popular videos are now available in WebM, and Skype’s client started to utilize WebM for its new group video chat functionality. The next step for WebM is to get on devices, and the first chipsets supporting hardware acceleration for WebM are expected to reach the market place in early 2011.

However, it may take a while before many Android users will be able to make use of WebM. Most network operators are notoriously slow to roll out new Android versions. In fact, 56 percent of all Android handsets are still running version 2.1 or older, despite the fact that 2.2 has been available for close to six months now.

By Janko Roettgers, GigaOM

3D Briefing Document for Senior Broadcast Management

This briefing document from the EBU Study Group helps broadcast managers make sense of 3D TV.

5 Reasons Google Bought Widevine

Google announced today that it will acquire Widevine Technologies, giving it access to technology necessary to securely deliver video to a wide range of connected devices. The acquisition is more than just a technology play on Google’s part; the Widevine purchase will also bring deep Hollywood relationships and improve its chances of getting Google TV deployed on consumer electronics devices.

Terms of the deal weren’t disclosed, but you can bet Widevine pulled in a pretty penny; the startup has raised $51.8 million in funding since recapitalizing in 2003, including a $15 million strategic investment last December led by cable operator Liberty Global and Samsung Ventures. Widevine could be invaluable to Google, as it provides technology and expertise in a number of fields that could help grow Google’s overall video business. Here are the top five reasons Google had its eyes on the company:

1. Everyone Needs DRM
Widevine is a digital rights management firm, first and foremost, and DRM isn’t going away. Providing a secure way for content owners to distribute video online and to a number of connected devices will be table stakes in Google’s broader video ambitions. Whether it’s getting premium content on YouTube or securing video distributed to Google TV-powered devices, Widevine will give Google the technology and peace of mind to strike those deals.

2. Cozying Up with Hollywood
Google has a problem — a content problem, that is. The company’s efforts to get long-form premium content on YouTube have generally fallen flat, and its Google TV products were met with universal disdain from media companies that acted quickly to block their online video streams from being accessible on those devices. Widevine has one thing that Google doesn’t: the trust of Hollywood. As the provider of the DRM technology used by a number of movie studios as well as online distributors like Netflix, Sonic Solutions and Lovefilm to deliver videos online and on connected devices, Widevine is in a unique position to make introductions to some key players in Hollywood.

3. Connecting Google TV to More Devices
Google launched its Google TV operating system on a series of TVs and Blu-ray players from Sony as well as broadband set-top boxes from Logitech, but it clearly desires to embed the technology on other devices, and is rumored to be courting Samsung, Toshiba and other manufacturers to do so. Well, Widevine’s technology is available on products from Apple, Haier, LG Electronics, Nintendo, Panasonic, Philips, Samsung and Toshiba, as well as more than 50 different set-top boxes. Google could leverage Widevine’s relationships with those manufacturers, and maybe even connect its technology into the broader Google TV code base.

4. YouTube Everywhere
In addition to getting Google TV on more connected devices, Widevine’s embedded technology could also help Google speed up distribution of YouTube video streams on more TVs, Blu-ray players and mobile handsets. Not just that, but by providing advanced DRM for those streams, Widevine could help make potential content partners more comfortable with those streams being delivered by YouTube.

5. Android Needs Adaptive Streaming
Android mobile devices are swarming the market, and with Flash installed, they promise users the ability to watch any video stream available on the web. There’s just one problem: Those videos, for the most part, aren’t optimized for mobile delivery. While Apple has built proprietary adaptive streaming technology for its mobile devices, Android phones don’t have a graceful way to deal with fluctuations in network bandwidth. Widevine, which makes video optimization technology in addition to DRM, could help solve that problem by helping Google add adaptive streaming to future Android devices.
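
The core of adaptive streaming is simple to illustrate: the player measures its recent throughput and picks the highest-bitrate rendition that fits with some headroom. The sketch below is a generic illustration of that idea in Python, not Widevine's or Apple's implementation; the rendition ladder and safety margin are assumed values.

```python
# Generic illustration of throughput-driven rendition switching; the bitrate
# ladder and 0.8 safety margin are assumptions, not any vendor's algorithm.
RENDITIONS_KBPS = [350, 700, 1500, 3000]   # available encodings, low to high

def pick_rendition(measured_kbps, safety=0.8):
    """Choose the highest bitrate that fits comfortably within current bandwidth."""
    usable = measured_kbps * safety
    best = RENDITIONS_KBPS[0]
    for rate in RENDITIONS_KBPS:
        if rate <= usable:
            best = rate
    return best

# Simulated playback: bandwidth fluctuates, the player follows it segment by segment.
for bw in [2800, 2600, 900, 400, 1200, 3500]:
    print(f"bandwidth {bw:>4} kbps -> fetch {pick_rendition(bw)} kbps segment")
```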

By Ryan Lawler, GigaOM

Bram Cohen: BitTorrent Protocol & Live Streaming Don’t Mix

BitTorrent mastermind Bram Cohen knows the strengths and weaknesses of the P2P protocol he invented more than eight years ago, and he’s not ashamed to point out one particular downside: BitTorrent is the wrong approach for live streaming.

During an interview at NewTeeVee Live recently, he explained to me that BitTorrent just has too much latency to be viable for such applications. “Just the fact that it’s using TCP makes that completely impossible,” he said.

Cohen has been working on his own live streaming solution for the last two years, and he said it has only recently become close to releasable. His new approach to P2P live streaming is being developed as a product of BitTorrent Inc., but the company has so far kept mum about what the product will eventually look like.

However, Cohen hinted at the possibility that BitTorrent will compete with live streaming sites like Ustream and Justin.tv. Asked whether this technology is for networks like ABC or a guy in his basement, he said: “ABC can afford to pay for whatever they want to do right now.”

Users just starting out, on the other hand, often don’t have the infrastructure available to deal with a possible overnight success. “Peer to peer is really a democratizing technology,” he said.

By Janko Roettgers, GigaOM

Good News: Flash Just Got Less Painful

Adobe released the first beta of its Flash player 10.2 today. The most notable improvement is advanced hardware acceleration, which should considerably reduce — and in some cases entirely eliminate — the CPU load of playing Flash videos on most modern computers. The improved efficiency is due to the implementation of Adobe’s Stage Video API, which makes use of a computer’s GPU for close to all video-related computation.

First reports indicate the impact of the new player is most notable under Windows, with heise.de reporting it was able to play 1080p HD video with a CPU load of zero percent. The online magazine reported CPU loads between four and five percent under Mac OS X. These loads could even be maintained while displaying overlays on an HD video — something that led to much higher CPU usage without Stage Video.

I saw slightly less efficient CPU loads when I tried the new Flash player under OS X today, but 8 percent isn’t really all that bad for 1080p, either.


However, desktop users won’t be the only ones happy about the Flash player’s new efficiency. Flash 10.2 and its underlying Stage Video technology should also improve playback on set-top boxes and other connected devices.

From Adobe’s web site:
“The performance benefits of stage video are especially pronounced for televisions, set-top boxes, and mobile devices. These devices do not have CPUs as powerful as desktop computers, but they do have very powerful video decoders capable of rendering high-quality video content with very little CPU usage.”

In fact, Adobe cooperated with Google to bring Stage Video to Google TV, where the technology is currently up and running. We should see more devices making use of the optimized hardware acceleration soon.

By Janko Roettgers, GigaOM

MPEG Dynamic Adaptive Streaming over HTTP

As HTTP became one of the most important protocols for the delivery of content over the Internet, MPEG launched an effort to use this standard for the delivery of multimedia data in an optimal way. At its 94th meeting, MPEG’s Dynamic Adaptive Streaming over HTTP (DASH) has reached the Committee Draft stage.

The DASH Committee Draft is based on the 3GPP Adaptive HTTP Streaming specification and improves on it by adding several new features and extensions, such as support for live streaming, additional annotation capabilities, flexibility in combining multiple pieces of content, enhanced trick modes and random access, support for multiple content management and protection schemes, and delivery of MPEG-2 Transport Streams, Scalable Video Coding (SVC) and Multiview Video Coding (MVC).
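
At its heart, DASH describes the available representations of a piece of content in a manifest (the media presentation description, or MPD) and lets the client fetch numbered segments over plain HTTP. The Python sketch below illustrates that model with a deliberately simplified manifest; the XML does not follow the exact DASH schema and is only meant to show the idea.

```python
import xml.etree.ElementTree as ET

# Deliberately simplified, illustrative manifest; not the exact DASH schema.
MPD = """
<MPD>
  <Period>
    <Representation id="low"  bandwidth="700000"  media="video_low_$Number$.m4s"/>
    <Representation id="high" bandwidth="3000000" media="video_high_$Number$.m4s"/>
  </Period>
</MPD>
"""

root = ET.fromstring(MPD)
representations = [
    (rep.get("id"), int(rep.get("bandwidth")), rep.get("media"))
    for rep in root.iter("Representation")
]

# Pick the best representation that fits the measured throughput, then fetch
# numbered segments over plain HTTP (requests are only printed here).
throughput_bps = 2_000_000
candidates = [r for r in representations if r[1] <= throughput_bps]
rep_id, bandwidth, template = max(candidates, key=lambda r: r[1])
for number in range(1, 4):
    url = template.replace("$Number$", str(number))
    print(f"GET {url}  ({bandwidth // 1000} kbps, representation '{rep_id}')")
```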

The new standard is expected to achieve Final Draft International Standard status in July 2011.

More information:
ISO/IEC 23001-6: Dynamic adaptive streaming over HTTP (DASH)

Source: MPEG

Shift to 3D Reignites TV Image Quality Competition

Three-dimensional (3D) TV is finally coming to the home, with sales making a strong start and expected to keep rising sharply. The trend has reignited the competition for better image quality, though, with manufacturers striving to slash crosstalk and boost screen brightness as a result. Japanese manufacturers are banking on their experience in creating beautiful imagery to put them back in the game.

"We've only just released it to the market, but it's selling a lot better than we hoped," says Shiro Nishiguchi, Executive Officer of Digital AVC Products Marketing Division of Panasonic Corp. of Japan.

The first 3D televisions, capable of displaying 3D imagery, hit the shelves six months ago, carrying with them manufacturer hopes for a new hot product to help them escape from a hopeless price competition (Fig. 1). All the involved manufacturers agree that sales are off to a great start.


Fig 1 - New 3D TVs Hit the Streets in Force
The major TV manufacturers began releasing a flood of 3D sets to the market in 2010, all claiming superior image quality. From late 2010 through early 2011, new faces in the industry are also expected to begin announcing 3D TVs.


Samsung Electronics Co., Ltd. of Korea was the first off the blocks, selling 600,000 units as of the end of June 2010 and raising its annual sales target for 2010 as a result. The company simultaneously released fifteen models, from LCD to plasma display panel (PDP), in the key US market. In LCD TVs, the 240Hz-drive system that is their core product can be switched to support 3D, but the price tag has only been boosted by about US$300 to support 3D imagery. This pricing strategy seems to have paid off big.

Companies like Panasonic, Sony Corp. of Japan and LG Electronics, Inc. of Korea, on the other hand, have not disclosed concrete sales numbers, but are just as optimistic about the market. Panasonic's Nishiguchi reveals, "In the second quarter of 2010, 3D accounted for about forty percent of sales of 50-inch and larger TVs, among sets that are available in 3D models."

The situation is similar at Sony, according to Satoru Kuge, Senior Manager of the Consumer AV Marketing Div. of Sony Marketing (Japan), who says, "Since we began sales in June 2010, we have continued to hold the largest share in the domestic market by volume." Minimum 2010 sales targets are 2.5 million units for Sony, 2.0 million for Samsung, and 1.0 million apiece for Panasonic and LG.


A New Era of Intense Competition
Determined not to lag behind in the 3D boom, TV manufacturers both in Japan and overseas are joining in.

Sharp Corp. of Japan, which holds the top share of the domestic market by volume, announced 3D-capable sets in May 2010, followed by third-ranked Toshiba Corp. of Japan in July of the same year. Mitsubishi Electric Corp. of Japan is involved, as is Hitachi Consumer Electronics Co., Ltd. of Japan. From the 2010 Christmas shopping season, a host of new TV manufacturers will join the competition, such as VIZIO, Inc. of the US.

Survey firm DisplaySearch of the US, according to Hisakazu Torii, Vice President of TV Market Research of the firm's Japan office, predicts that demand for 3D TVs in 2010 will easily top 3.4 million units (Fig. 2). The forecast as of May 2010 was about 2.5 million units, so this DisplaySearch figure is based on surging demand. DisplaySearch's Torii adds "The market will continue to grow at a healthy pace as 3D functionality gradually becomes a standard feature."

Panasonic, in fact, already offers 3D display functions standard on all 42-inch and larger sets, except for some low-price models, according to the firm's Nishiguchi. DisplaySearch predicts that total global demand for 3D TVs will reach about 43 million units in 2014, with about 37% of all sets 40-inch and larger (thought to be the best size for 3D viewing) supporting 3D display.


Fig 2 - Demand Rising Steadily
Global market demand for 3D TVs in 2010 was about 3.4 million sets, but this is expected to rise to about 43 million in 2014. Only 5% of 40-inch and larger TVs were 3D-capable in 2010, rising to an estimated 37% in 2014.



Getting Set with Content
The major TV manufacturers are clearly pinning their hopes on 3D, but there's no denying a certain degree of worry that it may all be just a flash in the pan. It is clear that the technology will spread to some areas, such as movie theaters, but the environment is simply not ready for regular viewing in the home.

The 3D TVs sold by Samsung are called "3D ready," with no 3D viewing glasses packaged in the carton with the set. It is possible that consumers may be snapping them up just to be ready for the future in style, even though they are not watching any 3D content. Any widespread adoption of 3D television will require a much broader array of 3D content, and technological improvement (Fig. 3).


Fig 3 - Prerequisites for Widespread Adoption
Further improvements in content selection and display technology will be needed to prevent the 3D TV boom from petering out. TV manufacturers are especially interested in improving 3D image quality.


It will take some time before content is up to snuff, but progress is being made. 3D broadcasting, viewed by many as the biggest key to success, is already planned for rollout by Panasonic, Sony and others, in cooperation primarily with American broadcasters. Panasonic and DIRECTV, Inc. of the US launched a suite of 3D channels on July 1, 2010. The firm says that 3D viewing is now possible for about 11 million households in the US.

Sony plans to begin broadcasting in 2011 in the United States, establishing a 3D-only broadcasting company together with Discovery Communications, Inc. of the US and IMAX Corp. of Canada, among others.

Manufacturers are also pushing ahead with cameras and camcorders capable of shooting 3D images, and 3D-capable game software, as well as building 2D-3D conversion functions into 3D sets to automatically convert 2D images into 3D.


Key Issues are Crosstalk and Brightness
With the emergence of 3D as a new axis of competition, TV manufacturers are once again forced into competition to deliver higher image quality, this time in 3D.

At present, they all use the frame sequential method of displaying 3D images (Fig. 4). Full high-definition (HD) images, each 1920 pixels x 1080 pixels, are alternated between left and right eyes each frame, using glasses with synched liquid crystal shutters to alternately block left and right eye vision, simulating 3D. In principle, anyone can make a "3D" television with just a display running at 120Hz or higher, glasses with liquid crystal shutters, and a sensor to keep them in synch.
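
The timing behind that principle is easy to sketch. The toy Python schedule below simply alternates left- and right-eye frames at 120Hz and notes which shutter is closed; the numbers are illustrative and not taken from any particular product.

```python
# Toy timeline of the frame sequential principle: at a 120Hz panel rate,
# left- and right-eye frames alternate and the IR-synchronised shutter
# glasses black out the opposite eye. Timing values are illustrative.
FRAME_RATE_HZ = 120
FRAME_MS = 1000.0 / FRAME_RATE_HZ        # ~8.3 ms per displayed frame

def schedule(n_frames=6):
    t = 0.0
    for i in range(n_frames):
        eye = "LEFT" if i % 2 == 0 else "RIGHT"
        blocked = "right" if eye == "LEFT" else "left"
        print(f"t={t:5.1f} ms  display {eye:>5} 1920x1080 frame, "
              f"shutter closes {blocked} eye")
        t += FRAME_MS

schedule()
```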


Fig 4 - 3D Images Simple to Display
3D TV principle of operation for frame sequential design. Anyone can make one with a 120Hz-drive display, glasses with liquid crystal shutters, and IR sensor to synch the two.


There is still considerable room for improvement in 3D image quality, but technological expertise will be needed to make the imagery good enough for consumers to enjoy without stress. The biggest problems of the frame sequential method are crosstalk and reduced brightness (Fig. 5).


Fig 5 - Two Major Technical Issues
Many firms are developing technology to resolve problems with frame sequential 3D TV, namely crosstalk and reduced brightness.


Crosstalk refers to the overlap between the left and right eye images. Crosstalk not only degrades 3D image quality, it can also cause discomfort through eye fatigue and even motion sickness. Many manufacturers have made reducing crosstalk their top priority.

The other problem is image brightness. Shigeaki Mizushima, Executive Managing Officer of Sharp, comments, "Brightness with PDPs and LCD panels can drop to as low as 10% of regular 2D screen brightness." This is because, in addition to dimming caused by the limited transparency of the polarized viewing glasses, light loss is also increased by measures designed to reduce crosstalk, as detailed below. The difference in image quality from these two causes is clearly visible to consumers.

Until recently, TV manufacturers have been competing in technologies to boost image quality far beyond the point that most consumers even notice, but with 3D TV these technologies may directly earn consumer appreciation. And as a source at Sharp explains, manufacturers newly entering the business are unable to easily duplicate image quality technologies even when concrete methods are disclosed.

Many 3D TV "test drive" rooms have been set up, mostly at mass merchandisers, and as consumers crowd to experience the new technology, Japanese manufacturers see an opportunity to regain market share through their expertise in creating beautiful imagery.

Already television manufacturers are beginning to claim the superiority of their own 3D TV imagery, splitting into PDP and LCD groups, just as happened when flatscreens were first released. The competition between PDPs and LCDs is back, as intense as ever.


PDPs Boast of High-Speed Response
The biggest problem mentioned by the TV manufacturers is crosstalk, and resolving it demands a very fast response by the device itself.

Panasonic, defending the PDP TV all by itself, strongly pushes the high-speed response of the technology. Plasma display panels are impulse displays, with very short emission and afterglow times. As Panasonic's Nishiguchi explains, "Response is much faster than LCD TVs, and that alone means crosstalk is much less of a problem."

Panasonic has developed ways to minimize crosstalk in its 3D TVs (Fig. 6). The firm developed a new phosphor with an afterglow time only one-third that of the company's own 2009 product. Panasonic also improved the way PDP gradations are controlled by overlaying light pulses.

Previously they emitted the shortest pulses first, overlaying pulses in increasing duration, but the new models have reversed this order. Between them, the two improvements significantly reduce residual image duration, thereby reducing superimposition of left and right eye images.
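
A toy model helps show why the ordering matters. In the sketch below (our own illustration, not Panasonic's published data), afterglow is modelled as a simple exponential decay, so light emitted late in the frame leaves more residue at the frame boundary; putting the longest pulses first leaves noticeably less residual light when the other eye's frame begins. All numbers are assumed.

```python
# Toy model of sub-field pulse ordering and phosphor afterglow. The decay
# constant, pulse durations and back-to-back placement are assumptions made
# purely to illustrate the direction of the effect.
import math

FRAME_MS = 1000.0 / 120          # one eye's frame at 120Hz, ~8.3 ms
TAU_MS = 1.0                     # assumed phosphor afterglow time constant
PULSES_MS = [0.25, 0.5, 1.0, 2.0, 4.0]   # sub-field pulse durations (weights)

def residual_at_frame_end(pulse_order):
    """Sum of afterglow left from each pulse when the next (other-eye) frame starts."""
    t = 0.0
    residual = 0.0
    for duration in pulse_order:
        t += duration                          # pulse ends at time t
        residual += duration * math.exp(-(FRAME_MS - t) / TAU_MS)
    return residual

short_first = residual_at_frame_end(sorted(PULSES_MS))               # old ordering
long_first = residual_at_frame_end(sorted(PULSES_MS, reverse=True))  # new ordering
print(f"shortest-first residual: {short_first:.3f}")
print(f"longest-first residual:  {long_first:.3f}")   # noticeably lower
```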


Fig 6 - Improvements in Phosphors and Emission Control

Panasonic has modified its PDP to minimize crosstalk, developing a phosphor with a shorter afterglow time and changing the emission control method.



LCDs Require 240Hz Drive
The LCD TV manufacturers, however, also claim their crosstalk countermeasures are up to snuff. LCD panels are hold displays, with relatively long emission and afterglow times. In general, they are more susceptible to crosstalk than PDPs. Manufacturers have boosted the LCD panel drive frequency from 120Hz to 240Hz, as well as incorporating other measures including improvements to the LED backlight emission method.

Concrete crosstalk suppression implementations vary by company. Sony, for example, displays the same frame twice in a row, for left and right eyes (Fig. 7a). The LCD backlight is turned on only for regions where there is no overlap between left and right eye images within the frame, and the liquid crystal shutter in the glasses opened accordingly.


Fig 7 - Sony and Samsung Reduce Crosstalk
Crosstalk can be suppressed in LCDs by driving the panel at 240Hz and using scan technology in the LED backlight. Sony addresses the problem by displaying the same frame twice in a row (a), while Samsung inserts a black screen (b).


Sharp uses the same approach, but with finer control of LED backlight emission. The image is split into five or six regions vertically, and emission controlled independently for each to minimize crosstalk.


Samsung and Toshiba Use Black
Samsung, on the other hand, interlaces a black image into both left and right eye image streams (Fig. 7b). The LED backlight emits in synch with image write. While the LED backlight is not using area emission control at present, the firm plans to split it into about eight regions vertically for individual control.

Toshiba also inserts black images in both left and right eye imagery. "This approach reduces crosstalk more effectively than showing the same frame twice," claims Yuji Motomura, Chief Specialist of the Visual Products Company at the firm. The LED backlight is controlled in sixteen vertical regions for the direct illumination type mounted on the rear of the panel, or two regions in the edge-light type.


Sharp Offers the Brightest Imagery in the Industry
Many manufacturers have declined to discuss concrete measures being taken to address the problem of reduced brightness, though some details have emerged. Sony, for example, boosts the emission efficiency of the backlight's LED light source for 3D imagery compared with 2D. The number of polarizers in the glasses' LCD panels has also been cut from two to one to increase brightness, but no specific measurements have been disclosed.

The only company to disclose luminance data for 3D TVs is Sharp, which claims a brightness of 100cd/m2 or better, the highest in the industry, through viewing glasses. "Compared to 2D imagery, the surrounding luminance is cut to 18% through glasses with polarizers," says the firm's Mizushima. "Luminance would have to be increased to at least 90cd/m2 in order to achieve the same apparent brightness as a 2D display, which is 500cd/m2. Even the brightest 3D TVs announced by our competitors only achieve about 60cd/m2."

Sharp has developed four proprietary technologies for its 3D TVs: ultraviolet induced multi-domain vertical alignment (UV2A), the use of four primary colors, frame-rate enhanced driving (FRED) and scanning LED backlights. Together, they not only minimize crosstalk, but also increase luminance 1.8 times.


Adopting Four Key Technologies
Of these, UV2A and the switch to four primary colors have already been tapped for use in LCD TVs handling conventional 2D imagery. UV2A is a type of photo-alignment technology utilizing UV light to control the orientation of the liquid crystal molecules. When the polymer thin-film orientation film is irradiated with UV light, the main chains of its polymers tilt accordingly. This approach makes it possible to eliminate the slits and ribs necessary for alignment of liquid crystal molecules in the conventional vertical alignment (VA) method, which increases the panel's aperture ratio 1.2 times. Response is also improved to no more than 4ms, which is about half the standard time, helping suppress crosstalk.

The fourth primary color, added via an LCD panel color filter, is Y (yellow), added to the existing red, green and blue (RGB) colors. The Y wavelengths of the LED backlight are utilized, increasing optical utilization 1.2 times.

FRED and the scanning LED backlight are new technologies. FRED makes it possible to drive the display at 240Hz with one source line per pixel, which improves the aperture ratio 1.1 times. Until now, says a source at Sharp, 240Hz-drive LCD panels required two source lines per pixel. The scanning LED backlight primarily reduces crosstalk, using LEDs 1.1 times brighter than prior designs for improved screen brightness.
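
Sharp has not broken the 1.8x figure down publicly, but if the individual gains quoted above are assumed to combine multiplicatively, they roughly account for it, and the same back-of-the-envelope arithmetic reproduces the 90cd/m2 threshold Mizushima cites. Both calculations are our own checks, not Sharp's published derivation.

```python
# Back-of-the-envelope check, assuming the quoted gains multiply independently.
uv2a_aperture = 1.2   # UV2A removes ribs/slits: aperture ratio x1.2
yellow_filter = 1.2   # fourth (yellow) primary: optical utilisation x1.2
fred_aperture = 1.1   # FRED needs one source line per pixel: aperture x1.1
scanning_leds = 1.1   # brighter LEDs in the scanning backlight: x1.1

total = uv2a_aperture * yellow_filter * fred_aperture * scanning_leds
print(f"combined gain: {total:.2f}x")   # ~1.74x, in line with the claimed 1.8x

# Mizushima's brightness target follows the same arithmetic: 18% transmission
# through the polarised glasses of a 500 cd/m2 2D picture leaves 0.18 * 500.
print(f"through-glasses luminance of a 500 cd/m2 2D image: {0.18 * 500:.0f} cd/m2")
```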


Continuing Improvements to Vertical Alignment
With the surging popularity of 3D TVs, LCD panel manufacturers in Korea and Taiwan are accelerating the development of technologies to improve response and aperture ratio for TVs using vertical alignment (VA), which is the most common approach. VA designs have been somewhat inferior to in-plane switching (IPS) designs in terms of display performance, but today they are at least as good, if not better.

Panel manufacturers are especially interested in developing and volume producing a display technology called polymer sustained alignment (PSA). Volume production is already under way at AU Optronics Corp. (AUO) of Taiwan and Samsung, and being considered by Chimei Innolux Corp. (CMI) of Taiwan.

Like Sharp's technology, PSA also uses UV light to control liquid crystal polymer orientation, eliminating the need for ribs and slits. It utilizes a liquid crystal material that has been mixed with a UV-setting plastic. The UV-setting monomers are injected into the panel together with the liquid crystal molecules, and the panel then irradiated with UV light while voltage is input, controlling molecule alignment.

According to a source at AUO, display performance is comparable with photo-alignment, achieving a response time of 4ms and a panel aperture ratio improvement of 20% over existing technology. The cost of modifying an existing LCD panel manufacturing line to handle the new technology is less than that needed for photo-alignment technology.

The problem is that residual monomers degrade reliability. If the monomers do not set fully, the LCD panel will exhibit uneven image quality.


Higher Reliability than PSA
Sony is developing a proprietary display technology called field-induced photo-reactive alignment (FPA). Like PSA, it irradiates the panel with UV light while voltage is applied, controlling liquid crystal molecule alignment. The main chain of the orientation film has two parts, one with a strong affinity for liquid crystal molecules and one that sets under UV light. The orientation film was developed in-house, and evaluation of prototype cells showed characteristics as good as or better than PSA.

One of the advantages of FPA is that it can be manufactured with the same process as PSA, but there is little worry of reliability deterioration due to residual monomers. As Shunichi Suwa, Device Engineer of the Core Device Development Group of Sony, explains, "It can be manufactured by any panel fab already using PSA. In the future, we hope to provide it to panel manufacturers under license."


Improving Glasses Performance with OCB Liquid Crystal
Improving 3D TV image quality with frame sequential technology will require improving the performance of the glasses as well as that of the displays. The LCD panels used in glasses packaged with 3D TVs are mostly the same super-twisted nematic (STN) designs used in monochromatic displays, such as in calculators. They are inexpensive, but require a drive voltage of about 20V to achieve a response time of a few ms. In addition, they have a narrow viewing angle, with degraded display performance except when viewed straight-on.

In an effort to resolve these problems, Toshiba Mobile Display Co., Ltd. (TMD) of Japan has developed a new LCD panel specifically for glasses, using the optically compensated bend (OCB) method for fast response. Response is 0.1ms from shutter open to closed, and 1.8ms in the other direction, for minimal crosstalk.

"Even STN liquid crystal can deliver fast response if the drive voltage is boosted, but boosting it to 20V is just not practical. OCB liquid crystal is driven at about 6V, which is practical even in relatively low-power designs," says Tatsuya Miyazaki, Group Manager of LCD Divisionof TMD.

Panel transparency is a high 33%, which helps improve 3D imagery brightness. The contrast ratio is 5000:1 from the front, and 1000:1 at the maximum viewing angle of ±30°.

TMD has been volume producing OCB LCD panels since 2004, but prices are still high because they have been targeting commercial applications. TMD's Miyazaki believes that for the sizes used in 3D glasses, costs will drop when volume production hits 10 million units.


Xpol Support for Full HD Resolution
In addition to the frame sequential method, there are also display technologies designed for use in home 3D TVs, one of which is the Xpol method using a circular polarizing film.

Xpol technology offers low crosstalk thanks to a special polarizing film called Xpol, developed by Arisawa Manufacturing Co., Ltd. of Japan, on the front of the LCD panel. Odd pixel lines (running horizontally) are rotated clockwise, and even pixel lines counter-clockwise, using circular polarization. Viewing glasses make it possible for the right eye to see only the odd lines, and the left only the even lines, again using polarizing films, producing the 3D image. The technology is already in use in 3D TVs from LG and Hyundai IT Corp. of Korea, and in commercial 3D displays manufactured by Victor Co. of Japan, Ltd., Panasonic and Sony.

Because both left and right eye images are present in the same frame in Xpol technology, there is minimal crosstalk. There is no time multiplexing of left and right eye images, unlike the frame sequential method, which is said to reduce the load on the human brain as it tries to synthesize the 3D image. Viewing glasses weigh only about 20g, no more than half as much as those used with the frame sequential method.

The problem is that resolution in the vertical direction is halved. It is possible to use a “4K x 2K” display (a display about 4000 pixels x 2000 pixels in size) to minimize the loss of definition, but this is impractical given the poor manufacturing yields currently possible.

Arisawa Manufacturing and Victor Co. of Japan have jointly developed the HR-Xpol technology capable of displaying 3D imagery at Full HD resolution, without a 4K x 2K display. The technology was announced in May 2010 (Fig. 8), and makes use of a liquid crystal layer that can be switched between left and right rotation, again with circular polarization.


Fig 8 - Displaying Full HD Imagery with Circular Polarization
Arisawa Manufacturing and Victor Co. of Japan have jointly developed a way of viewing 3D imagery at Full HD 1080i resolution, using circular polarizing film. A liquid crystal shutter alternates the polarization between left and right eye images one frame at a time.


In HR-Xpol the first frame is allocated in the same way as in the existing technology, with odd lines for the right eye and even lines for the left. The second frame uses the reverse, and the polarization state of the glasses is switched in synch. The two frames are synthesized to show Full HD 3D imagery at 1080i resolution.
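
The allocation scheme is easy to illustrate. The Python sketch below interleaves left- and right-eye images line by line and, for HR-Xpol, flips the allocation on alternate frames so each eye is shown every line over two frames; the odd/even conventions and array layout are assumptions for illustration only.

```python
# Sketch of the line interleaving described above: standard Xpol gives each
# eye every other line of a single frame, while HR-Xpol swaps the allocation
# on alternate frames. Conventions and layout are illustrative assumptions.
import numpy as np

def xpol_frame(left, right, frame_index=0):
    """Interleave left/right images line by line; HR-Xpol flips the parity
    of the allocation on every other frame (frame_index)."""
    out = np.empty_like(left)
    flip = frame_index % 2
    out[flip::2] = right[flip::2]          # lines circularly polarised for the right eye
    out[1 - flip::2] = left[1 - flip::2]   # lines polarised for the left eye
    return out

left = np.full((1080, 1920), 100, dtype=np.uint8)    # stand-in left-eye image
right = np.full((1080, 1920), 200, dtype=np.uint8)   # stand-in right-eye image

f0 = xpol_frame(left, right, 0)
f1 = xpol_frame(left, right, 1)
# In frame 0 the right eye sees lines 0, 2, 4, ...; in frame 1 it sees 1, 3, 5, ...
# so across the two frames it is shown all 1080 lines.
print(f0[0, 0], f0[1, 0], f1[0, 0], f1[1, 0])   # 200 100 100 200
```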

By Shinya Saeki, Nikkei Business Publications

Lack of 3-D Captioning Standard Stymies Development

As more content is being produced in 3-D, the need for captioning, now mandated by the U.S. government, has been brought to the forefront. While all of the vendors in this category are aware of the need to do it, very few customers have asked for it, which holds back development.

“We certainly have the capability to produce captions in 3-D space, but we’re not investing a lot in R&D until there is customer demand and a standard specification for how to do it,” said José M. Salgado, president and CEO of Los Angeles-based SoftNI, a veteran captioning and subtitling software provider.

To be clear, the issue has to do with closed-captioning, not necessarily “subtitling.” 3-D subtitling is typically predetermined by the content producer and is inserted into a plane (below, on the side or on top of the screen) that’s most aesthetically pleasing to the eye. Because subtitles are simply a part of the picture, there is no need for new technology to transmit or display them.

Closed-captioning, on the other hand, serves a greater need and must be done uniformly. This data is sent as text with timing information by a broadcaster or program provider and turned on or off at the TV set by the consumer. There is a method for doing this in 2-D (called CEA-708) that’s standardized by the Consumer Electronics Association. Every TV set sold in the United States must be able to recognize this code and display it when required. In that code, you can still control the positioning of the captioning but not the 3-D depth. The result is that the 3-D experience is often not the best it could be.

“Captions are still transmitted in 2-D, even for 3-D content, but there is no way yet to make use of 3-D depth in the captions,” said Jason Livingston, product manager at Computer Prompting & Captioning (CPC). “All of the 3-D TV sets sold today can only decode 2-D caption information. It’s a problem that people are starting to be aware of, but we’re a long ways from having an industrywide agreement.”

Currently, captioning material is sent to 3-D TV sets in the same manner as 2-D HDTV. Captioners can control the 2-D placement of the captions, just like they do now. As long as the captioner does a good job, it will not obscure anything important, and the portion that is obscured will be the same regardless of whether the video is 2-D or 3-D.

This has frustrated some viewers because all of the work that goes into framing a 3-D scene and the depth perception is lost when a caption box covers it. With no standard way of accommodating multiple layers within a scene, there’s no control over space and depth.
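
To see what is missing, it helps to sketch what depth-aware captioning would involve. The fragment below is purely hypothetical; it is not CEA-708 or any proposed standard. It simply composites the same caption bitmap into the left- and right-eye views with a horizontal offset, which is how a renderer could float a caption in front of the scene instead of leaving it at the screen plane.

```python
# Hypothetical illustration only: placing a caption at an apparent depth by
# shifting it horizontally between the eye views. Not part of CEA-708 or any
# published 3-D captioning standard; values and function are invented.
import numpy as np

def composite_caption(left_view, right_view, caption, x, y, disparity_px):
    """Draw the same caption bitmap into both eye views, offset horizontally.
    With a positive disparity the left-eye copy shifts right and the right-eye
    copy shifts left (crossed disparity), so the caption floats in front of
    the screen plane rather than appearing behind foreground objects."""
    h, w = caption.shape
    half = disparity_px // 2
    left_view[y:y + h, x + half:x + half + w] = caption
    right_view[y:y + h, x - half:x - half + w] = caption
    return left_view, right_view

left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.zeros((1080, 1920), dtype=np.uint8)
caption = np.full((60, 600), 255, dtype=np.uint8)   # stand-in caption bitmap
composite_caption(left, right, caption, x=660, y=950, disparity_px=20)
```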

“You’ll still see closed-captions appearing where people are used to seeing them, but there’s no code for ensuring a pleasing 3-D viewer experience,” Livingston said. “As soon as the CEA and FCC establish a technical standard for how to transmit 3-D closed-captions, then it will be in our software. From a manufacturer’s perspective, it does not make sense to do it until the technical standards have been decided and published.”

For the time being, Livingston recommends that his customers produce closed-captioning in the same way they currently do for 2-D content.

“Captioning vendors don’t have any say in how the industry will ultimately decide how to handle 3-D closed-captions, and we can’t tell the TV set manufacturers, ‘This is the code we want you to use,’ because they won’t implement it until the CEA establishes an industry standard,” Livingston said. “So, we’re all waiting and doing a few tests until then.”

Unlike closed-captioning, which is transmitted separately from the picture and can be turned on and off, subtitles are burned into the picture and cannot be turned off by the viewer. But, captioning companies such as CPC and SoftNI are ready to offer 3-D subtitles in their software today.

“We’re just waiting for demand,” SoftNI’s Salgado said.

SoftNI offers its Subtitler Suite and Digital Suite software products for subtitle burn-in and metadata insertion into HD and SD digital files. CPC’s MacCaption (for Apple computers) and Caption Maker (for PCs) have the ability to encode closed-captioning and burn in subtitles as well. Other vendors include Cheetah International and service providers National Captioning Institute and Boston public TV station WGBH.

“We’ve had some viewers ask about 3-D subtitling, but we have not seen much interest from content creators yet,” CPC’s Livingston said. “The basic structure is there in our products for doing it, but we’re still working on the user interface and trying to understand what tools content creators want.”

The fact that captioning is done in software bodes well for these captioning companies, because as soon as a standard is announced, the software can be easily upgraded via a free download to accommodate 3-D captions — and it can be done in a matter of weeks.

Adding to the issue, the Twenty-First Century Communications and Video Accessibility Act (S. 3304) was recently signed into law; it mandates captioning for Web-delivered content that has also appeared on traditional broadcast TV. The legislation also states that all CE receiving devices large enough for video must be equipped to support captioning functionality. So, a new set of concerns will become apparent in 2011, because a number of Web display formats don’t support closed-captioning at all, and those that do use a number of incompatible standards.

However, a number of vendors, including the ones mentioned here, support the Described and Captioned Media Program (DCMP) and can help add captions to Web-based video. For a list, visit this link.

By Michael Grotticelli, Broadcast Engineering

Over-The-Top Video

Over-the-top (OTT) video — the delivery of video via the Internet from a source other than the network service provider — has arrived. Several factors are fueling the development of OTT video initiatives. The first is that viewers are demanding more customized access to their content. Consumers want their content anywhere, on any device, at anytime and at their convenience.

A key second component of OTT’s fast rise is an array of both streaming content providers and new receivers. These free or inexpensive components have combined to create a disruptive marketplace for cablecasters while meeting consumer needs.

In-Stat principal analyst Keith Nissen says the industry is struggling with how to maintain revenue as consumers shift to on-demand viewing. Broadcast TV ad revenue is declining, the pay-TV market no longer has much new subscriber growth, and consumers will not, or cannot, continue to pay for 200 TV channels when they watch just a handful of them.

A New Playing Field
The industry most affected by OTT technology is cable. Is there a threat to basic cable services from OTT?

Consider these recent statistics:

  • 21.4 billion online videos are viewed each month.
  • 82 percent (158 million) of the U.S. Internet audience watches online videos.
  • 500 minutes (more than eight hours) of online video is watched per month by the average viewer.

As video content becomes more diverse and younger viewers take command of remote controls, pay-TV operators will need to adapt to these viewers’ demands and expectations. In addition, the FCC is going to promote a replacement for the failed CableCARD, so viewers may have new options in how they access content.

With the OTT model, consumers would rely on a broadband connection for the delivery of content. That content could consist of OTA broadcast signals; cable network programming like Disney, Turner and others; and VOD signals. This content would be accessed via devices from some new players, including Apple TV, Boxee, Google TV, Hulu and VUDU. All of these new devices make content easier than ever to find and the viewing process, perhaps, more customer friendly.

A J.D. Power and Associates survey released in October reports that consumers are more upset than ever with the high cost of pay-TV bills. In addition, cable viewers are more likely to feel ripped off than IPTV or satellite customers. Consumers prefer an à la carte solution. OTT can provide that option.

Some OTT Providers
The growing demand for OTT video is driving a litany of new players to enter the market space. In the United States, Netflix is dominant. In the first quarter of 2010, Netflix had 14 million subscribers. By the end of the year, Nissen predicts it will have 17 million subscribers. Sixty-six percent of Netflix subscribers are already using the company’s streaming service. More than half of those subscribers are streaming movies or TV episodes to their homes through devices such as Roku set-top boxes, Xbox 360 game boxes and Blu-ray players.

Also going over-the-top is DISH Network, which offers more than 180 international channels in more than 28 languages. The network announced early this year a multiyear partnership with NeuLion, an end-to-end IPTV service provider of live and on-demand international, sports and variety programming delivered via broadband. Under the agreement, certain DISH Network international channels will be distributed, using NeuLion’s IPTV service, to consumers without access to satellite TV.

Next year, look for Wal-Mart and Best Buy to promote their own online video services. Wal-Mart purchased VUDU, an on-demand video service that sells and rents movies and TV shows over the Internet. And, Best Buy and Blockbuster have teamed with online movie service Roxio CinemaNow.

The most talked about streaming provider, Hulu, launched Hulu Plus this year. This ad-supported premium subscription service costs $9.99 per month. It works across a variety of platforms, such as PCs, the iPhone, iPad, PlayStation 3 and Samsung Blu-ray players. The service boasts thousands of subscribers. And, in only six days after being released, the Hulu application for the iPhone and iPad was the most downloaded service in Apple’s App Store. In July, the U.K.’s Financial Times reported that Hulu had been working on plans for an international launch of Hulu Plus, with the UK and Japan as target markets.

Another player, called ivi, is less familiar. The online video service expects to charge customers $4.99 per month for a package of shows from all major American networks, plus some superstations most Americans haven’t seen since the early 1990s. The startup claims it offers more content than Hulu by providing online access to every network and syndicated show seen on New York and Seattle TV screens.

Broadcast Engineering readers may recall another company’s attempt to deliver OTA programming via the Internet. In 1999, a company called iCraveTV initially delivered 17 channels of programming from both Canadian and Buffalo, NY, TV stations. It took maybe a week for the lawsuits to begin. Within weeks, iCraveTV bit the dust. Both Hulu and ivi will likely find that without some form of payments to the content owners, the legal challenges will be endless.

Delivery
There are many ways to get packetized data to consumers, and, for the most part, these will be transparent to them. Adaptive streaming, caching and torrent technology are delivery methods, and Nissen expects that all will be used. After all, consumers don’t care how the content gets to the TV.

Expect MPEG-4 AVC and other advanced encoding technologies to be used to reduce bandwidth needs. Nissen also doesn’t think carriers will go to measured pricing, but he does believe that content producers ultimately will partner with pay-TV service providers to deliver both pay-TV and OTT video to consumers using hybrid set-top boxes. This allows content producers to market OTT content directly to consumers while delivering it over a secure, managed pay-TV access network. This will appeal to pay-TV operators because they are also content producers, they will get paid to carry the on-demand content, and they want to remain the gatekeeper for all paid digital entertainment.

As a result, consumers will be paying for a combination of pay-TV services and paid OTT video services. The shift in spending to paid OTT video services could permit content producers to eliminate low-viewership pay-TV channels from pay-TV packages. Under this model, consumer spending would not decline, but the value of pay-TV services would rise. This would lower the dissatisfaction that consumers currently have with pay-TV services.

A contrarian viewpoint was noted in a recent article from TechCrunch. The article quoted writer and entrepreneur Paul Kedrosky, “Many people are coming to the correct conclusion that in the age of Hulu, Boxee, BitTorrent, etc., that cable TV is an overpriced relic of another entertainment age.”

Maybe so, but if you have any cable stock, keep it. The cable industry still feeds television programming to 62 million homes. And, the industry has almost 42 million broadband customers.

By Susan Anderson, Broadcast Engineering

Video Compression Technology

The MPEG-2 and MPEG-4 standards are now at a relatively mature stage. At the same time, new implementations of MPEG-4 are still on the rise, especially using H.264/AVC. Both ATSC and DVB-T support this more efficient compression standard (with newer receiving devices, such as mobile displays), and newer codecs are emerging in a growing number of video applications.

While MPEG-2 and AVC are now ubiquitous in broadcast, cable and satellite distribution, other codecs have found an equally widespread home for the distribution of video over the Internet. Because we are seeing more applications that cross the various media, it is useful to understand the makeup of these various codecs.

Most Compression Systems Have Similarities
All compression systems function by removing redundancy from the coded information, and the highest amount of compression is almost always achieved by lossy coding, i.e., the decoded information, while presenting a faithful version of the original, does not reproduce an identical set of data. Essentially, most video codecs today function by reducing the information content of video in three ways: spatially, temporally and logically.

Spatial video content (in the horizontal/vertical image dimensions) is compressed by means of mathematical transforms and quantization. The former remaps the video pixels into arrays that separate out detail information; the latter reduces the number of bits required for each transformed pixel.
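
To make the transform-and-quantization step concrete, the short Python sketch below applies an orthonormal 2-D DCT to a single 8x8 block and then quantizes the coefficients with a flat step size. The block values and the step size of 16 are illustrative assumptions, not parameters of any particular codec.

    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis matrix (rows = frequencies)."""
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        m[0, :] *= 1 / np.sqrt(2)
        return m * np.sqrt(2 / n)

    D = dct_matrix()
    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # centred pixel block
    coeffs = D @ block @ D.T           # transform: separate out detail by frequency
    quantized = np.round(coeffs / 16)  # quantization: fewer bits per coefficient (lossy)
    reconstructed = D.T @ (quantized * 16) @ D + 128  # decoder's faithful-but-not-identical block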

Temporal video content (in the time dimension) is compressed by means of residuals and motion estimation, and in some codecs, by quantization as well. Residuals reduce information by coding differences between frames of video, and motion estimation provides data reduction by accounting for the movement of pixel “blocks” (and groups of blocks, i.e., macroblocks) over time.
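
The sketch below illustrates the residual-plus-motion-estimation idea in its simplest form: an exhaustive block-matching search over a small window in the previous frame, returning the motion vector and the residual that would then be transform coded. The block size, search radius and SAD cost are illustrative assumptions rather than the tools of any specific standard.

    import numpy as np

    def motion_search(prev, cur_block, top, left, radius=4):
        """Exhaustive block matching: return best motion vector and residual."""
        h, w = cur_block.shape
        best = None
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > prev.shape[0] or x + w > prev.shape[1]:
                    continue
                candidate = prev[y:y + h, x:x + w]
                sad = np.abs(cur_block - candidate).sum()  # sum of absolute differences
                if best is None or sad < best[0]:
                    best = (sad, (dy, dx), cur_block - candidate)
        return best[1], best[2]  # only the vector and residual need to be coded

    prev = np.random.rand(64, 64)
    cur = np.roll(prev, (2, 3), axis=(0, 1))                 # simulate motion between frames
    mv, residual = motion_search(prev, cur[16:32, 16:32], 16, 16)
    print(mv)                                                # (-2, -3): block found in previous frame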

Logical content (i.e., strings of codewords representing spatial and temporal content) is further compressed by using various forms of entropy coding and/or arithmetic coding, which remove information by efficiently coding the strings in terms of their statistical likelihood of occurrence.
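
The following sketch illustrates the entropy-coding principle with a small Huffman code builder: symbols that occur most often (for example the quantized coefficient value zero) receive the shortest codewords. The symbol list is invented purely for illustration; real codecs use carefully designed variable-length or arithmetic coders.

    import heapq
    from collections import Counter

    def huffman_codes(symbols):
        """Assign shorter bit strings to more frequent symbols."""
        counts = Counter(symbols)
        heap = [[n, i, [s, ""]] for i, (s, n) in enumerate(counts.items())]
        heapq.heapify(heap)
        uid = len(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[2:]:
                pair[1] = "0" + pair[1]
            for pair in hi[2:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0], uid] + lo[2:] + hi[2:])
            uid += 1
        return dict(heap[0][2:])

    print(huffman_codes([0, 0, 0, 0, 1, 1, -1, 3]))  # the frequent symbol 0 gets the shortest codeword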

Each MPEG standard is actually a collection of different tools and operating parameters, grouped into levels and profiles. The level typically defines the horsepower needed for decoding the bit stream, as defined in macroblocks per second (or per frame) and the overall video bit rate. Profiles are used to group the different tools used during encoding. For example, MPEG-2 Main Profile @ Main Level is sufficient to encode SD digital TV broadcasts, while MPEG-2 Main Profile @ High Level is needed to encode HD video.
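
As a rough back-of-the-envelope illustration of why the level matters, decoder workload can be expressed in macroblocks per second. The frame sizes and rates below are assumptions chosen only to contrast an SD and an HD service.

    def macroblocks_per_second(width, height, fps, mb=16):
        """Decoder workload: 16x16 macroblocks decoded per second."""
        return (width // mb) * (height // mb) * fps

    print(macroblocks_per_second(720, 480, 30))    # 40,500 for an SD service (Main Level territory)
    print(macroblocks_per_second(1920, 1088, 30))  # 244,800 for an HD service (needs High Level)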

A huge amount of content on the Internet, however, does not use MPEG-2 or AVC coding. YouTube, for instance, almost exclusively uses Flash to deliver video. Flash does not use one unique codec, but rather defines a format for FLV files. These files, in turn, encapsulate content usually encoded with either the On2 VP6 or Sorenson Spark video compression algorithms.

VP6, now owned by Google (which also owns YouTube), uses several standard compression techniques: a DCT block transform for spatial redundancy, motion compensation, a loop filter and entropy coding. (The loop filter is used to lower the appearance of block-edge artifacts.) While all of these are present in AVC compression, the loop filtering used in VP6 operates in what can be called a “predictive” manner. Instead of filtering blocks over an entire reconstructed frame, the VP6 codec only filters the edges of blocks that have been constructed by means of motion vectors that cross a block boundary. VP6 also uses different types of reference frames, motion estimation and entropy coding, compared with MPEG.

According to various sources, Sorenson Spark appears to be a tweaked version of the earlier H.263 codec, while the related Sorenson Video 3 (SVQ3) codec has similarities to H.264/AVC. While VP6 and Spark are essentially incompatible with non-Flash decoders, the most recent releases of Flash Player do support H.264/AVC video and HE-AAC audio.

VP6 and Spark (as well as AVC) are defined by various patents, with differing licensing terms for encoding, distribution and decoding. HTML5 video is another approach that has been defined for Internet use; it attempts to simplify (or remove) licensing fees. (The use of HTML5 has recently come to light regarding various video players, with the announcement that Apple would support it, and not Flash video, in its products.) Supporters of HTML5 want a codec that does not require per-unit or per-distributor licensing, that is compatible with the “open source” development model, that is of sufficient quality, and that does not present a patent risk for large companies.

Nonetheless, while HTML5 developers formerly recommended support for playback of video compressed in the Theora format, there is currently no specific video codec defined for it. In May, the WebM Project was launched to push for the use of VP8, a descendant of VP6, as the codec for HTML5. The project features contributions from more than 40 supporters, including Mozilla, Opera, Google, and various software and hardware vendors. Perhaps not coincidentally, in August, the licensor of H.264, MPEG LA, announced that it will not charge royalties for H.264-encoded Internet video that is free to viewers.

New Versions of Codecs
Current codecs are also being improved by means of new and emerging extensions, which have applications for storage and content management. A number of extensions to H.264/AVC support high-fidelity professional applications; scalability and multiview video have also been defined. MPEG collectively refers to the “High” profiles as the “fidelity range extensions” (FRExt), which include the High 10 profile (10 bits per sample) and the High 4:2:2 and High 4:4:4 profiles.

AVC has generally been viewed as providing a doubling of coding efficiency over MPEG-2, but the quest for more efficiency goes on. The ISO/IEC and ITU-T standardization committees have now embarked on the specification of a new video encoding standard that targets improved encoding efficiency for HD video sources.

Again, the goal is to cut the bit rate in half relative to existing codecs, e.g., AVC. This new specification is being referred to as the High-Efficiency Video Coding (HEVC) standard, and the target applications are broadcast, digital cinema, low-delay interactive communication, mobile entertainment, storage and streaming. Depending on the proposed technology, a final standard could be developed by July 2012.

Standards for multiview video coding based on MPEG-2 and H.264/AVC currently exist, but support is generally limited to a single stereo view that requires glasses to view the 3-D content. MPEG is now planning to standardize a new format for 3-D that supplements stereo video with depth/disparity information and could be used more effectively with glasses-free displays.

By Aldo Cugnini, Broadcast Engineering

EBU Publishes New MXF Timecode Recommendation

The EBU Strategic Programme on the harmonisation and interoperability in file-based HDTV production (SP-HIPS) has updated the EBU Recommendation on how to use timecode in Material Exchange Format (MXF) files. It is the first public deliverable from the Group.

MXF is the most important standard for file exchange between professional media organisations. To improve the interoperability of MXF products, the EBU HIPS-MXF Group was set up at the end of 2009, led by Mr Christoph Nufer (IRT). Mr Nufer's team started by updating the existing EBU R 122 "Material Exchange Format Timecode Implementation" recommendation, which was created in 2007. The new document includes, among other things, information on handling 50/60 Hz timecode and timecode with new HDTV essence types.


[Figure: MXF Timecode Carriage Mechanisms]

The work of the HIPS-MXF Group now continues with specifying the recommended ways of carrying subtitling in MXF. The draft for this EBU Recommendation is well advanced and already available to participants in the MXF Group for review.

Source: EBU

SBJ SMT: The Business of 3D

It’s no secret in the industry that 3D production is expensive, but, at Sports Business Journal’s Sports Media Technology Conference this week in New York, attendees found out just how much it costs.

“It was about six times the cost of a normal game,” Ray Hopkins, COO of the YES Network, reported. “And we do 14 cameras for a normal Yankees game. From a cost perspective, it’s somewhat prohibitive. You need a separate truck, separate technical personnel, announcers, cameras.”

Added Jerry Passaro, SVP of network operations and distribution for MSG Media, “The biggest deterrent is the cost.”

Alternative Production Methods
To alleviate some of those costs, ESPN is working toward being able to use one production team to produce both a 2D and 3D broadcast of a game, and the NBA has done some experiments with single-camera 3D.

“At that point, our intention is to give you a courtside seat, with everything that comes with that,” explained Steve Hellmuth, EVP of operations and technology for NBA Entertainment. “You’ll see the referees walking in front of you, the play going down to the other end of the court and coming back to you. I’ve watched entire NBA games from a single 3D camera, and it’s a great experience. That’s one way people can experience 3D without the huge fees required for multiple-camera events.”

Such a solution also works for events where the main cover camera — say, on the 50-yard line in football — does not offer much depth because it must be placed so far away from the field of play. Another option to bring costs down is to incorporate selected 2D cameras into a 3D broadcast. For CBS’s production of the Final Four, the production team used a 2D aerial camera that gave a big-game look to the broadcast and was upconverted to a makeshift 3D using the switcher.

“From overhead, you don’t really get that 3D effect anyway,” explained Ken Aagaard, EVP of engineering, operations, and production services for CBS Sports. “I mixed that into the 3D feed because you want to make the event look important. The dilemma that we all face is, you have to be able to show the three dimensions but also show the ball going into the hole. There’s an interesting dynamic there that we all struggle with.”

Is 3D Like SD to HD?
Mark Hess, SVP of advanced business and technology development for Comcast Cable, said the good news is that, this time around, the distributors are 3D-ready; with HD, they were not. However, without turning 3D into a business, Passaro said, the medium has a limited future.

“Affiliate revenue will make it a business,” Hopkins added. “We’ve seen this movie before, in HD. When the parent companies call and say we want games in 3D, that’s when we’ll get it done.”

Although some are quick to liken the SD-to-HD transition to today’s HD-to-3D jump, others are not sure that the parallel is quite apt.

“I don’t see 3D being like SD to HD,” Aagaard said. “I see 3D being more of a niche. Hopefully, it will be a big niche. For us, the manufacturers have been paying for the party as it relates to production costs. When that well goes dry, where is that revenue really going to come from?”

Varied Demand for 3D Content
The other important question to ask is: where does the demand for 3D come from?

“When I grew up, I was my dad’s remote control,” Hess said. “We all sat there as a family and watched TV. Now my wife’s doing something on her laptop, my daughter’s on her phone — it’s difficult to get a group of people together and really enjoy a 3D experience.”

Internationally, however, there may be additional opportunities for 3D production.

“We’re starting to engage with our international licensees about delivering 3D,” Hellmuth said. “Canal Plus did a six-camera 3D telecast of FC Barcelona versus the Lakers, and it looked really good. There will be some interest in cinemas for the NBA, especially in Asia, some distribution via computer, and then potential for the league for compilation DVDs. There’s no single wow factor here, but it’s just beginning.”

Added Chuck Pagano, EVP of technology for ESPN, “once you get more of a complement of programming together, you start to create business models that make sense.”

By Carolyn Braff, Sports Video Group

SENSIO Puts SENSIO 3D Technology and Expertise to Work for Videotron

SENSIO Technologies is pleased to announce that it has signed an agreement with Videotron, the leading integrated communications company in Quebec. As part of its broadcasting activities, Videotron will launch a 3D content offering this coming December.

The cable-TV distributor has selected SENSIO for its SENSIO 3D technology as well as its expertise, acquired through over ten years in the industry, in order to deliver the most immersive experience to the consumer, whatever the type of television set owned.

For 3DTVs, programming will be offered in the SENSIO 3D format. SENSIO 3D technology enables broadcasting of 3D content over the conventional 2D infrastructure, via cable, satellite or IPTV, while providing superior quality to common frame-compatible formats. SENSIO 3D delivers ‘visually lossless’ 3D images, which are so faithful to the originally captured images that the difference is imperceptible to the eye of the viewer.

For non-3D-enabled TVs, SENSIO’s know-how enables the best-quality anaglyph (viewed through red-and-cyan glasses), which provides an unrivalled level of viewing comfort. Through these technologies, Videotron’s customers will be able to view diversified 3D content including movies via video-on-demand, sporting events and concerts.

Source: SENSIO Technologies

3DTV Analysis Year One

The 3DTV bandwagon shows no signs of slowing down, but crunch time could come in a year’s time, when broadcasters will review just how well 3D has gone down with consumers.

“If we do bad 3D then this will fail, but if we do good 3D it will become something incredible and set the stage for where we want to get to, which is glasses-free,” says Sky Director of Product Development Brian Lenz. “That could be 10 years away and right now we have to prove demand and prove the concept.”

Screen Digest has identified 21 dedicated channels already launched or planned for 2010, 13 of which are in Europe. These include the Sky DTH family of BSkyB, Sky Italia and Sky Deutschland, which launched simultaneously carrying the Ryder Cup golf, and dedicated cable broadcasts in Finland, Estonia and the Czech Republic. Add in major trials and planned commercial launches worldwide and the figure tops 50.

Around half of the launches are for a permanent standalone channel or segments within pre-existing channels rather than one-off events, with most channel plans including sports content. Most one-off event activity this year was aligned with the French Open Tennis and FIFA World Cup and it could be the London Games 2012 when the next round of major launch activity takes place.

Pay-TV operators are leading the charge, using a frame-compatible format that allows existing HD set-top boxes to receive a 3D signal over existing HD infrastructure. However, this still requires customers to purchase a 3D-ready set, and while manufacturers are planning to retail more product with 3D as standard, by 2014 just 10% of all installed TV sets in the UK will be 3D-capable, with France and Germany at around 8% (according to Screen Digest).

As HD gathers mainstream momentum, pay-TV operators spy in 3D a new premium format with which to differentiate themselves in the market, but gaining a return on the investment in additional bandwidth and production costs will be a long-term project.

According to Screen Digest senior TV analyst Tom Morrod, 3D is an expensive addition to a pay-TV operator’s bouquet: it is going to be difficult for most pay-TV operators, even the largest established ones, to justify the costs of 3D production. Only customers who pay for Sky’s top-tier subscription package and have Sky+ HD set-top boxes, paying approximately £60 per month, will get Sky 3D free of charge.

“It’s an upsell, a retention benefit, a market differentiator, a premium content proposition,” says Lenz. “All those things turn into monetisation opportunities for us. 3D doesn’t have to be a standalone price point for us to derive direct financial benefit.”

Whereas the SD to HD transition could be readily made by down-converting the HD signal to cover SD and HD reception, originating a 3D signal requires another set of cameras, another production workflow and another uplink, which, says Morrod, “doubles and in some cases more than doubles the HD bandwidth and costs.”

A simulcast of a 2D and a 3D broadcast is not remotely close to the right way to go with 3D, agrees Lenz. “This is especially true for live sport which requires two totally different production and editorial processes.

“When you have the luxury in post to craft the image and generate a 2D cut from the 3D, then same time production is fine and I don’t feel I am sacrificing either way. But I don’t believe 2D and 3D simulcast on the same signal is ever a good idea — even if that means a separate 3D production is more expensive.”

The FTA Conundrum
Futuresource Consulting confirms that no DTT operators have launched or are as yet even trialling 3D in Europe. Although most free-to-air (FTA) broadcasters, the BBC included, have voiced interest in 3D programming, with penetration of 3DTV sets remaining low in the short term, there is no clear economic rationale or market demand for FTA broadcasters to rush to launch 3D services.

Moreover, now that the first flush of one-off events has passed, operators are declaring difficulty in sourcing 3D content without entering into original production. Studios and content owners are also placing a higher premium on 3D programming. BSkyB recognised this and is funding a substantial portfolio of original 3D production to complement its movies and sports package, but few other operators have the pockets to follow suit.

“The lack of true 3D content in the marketplace has created a sellers’ environment and on occasion has resulted in broadcasters being expected to pay unrealistic premiums for content, particularly when they themselves are currently unable to charge the customers a premium,” says Futuresource Research Consultant David Watkins.

A premium of between 30% and 50%, depending on audience size, has been cited as realistic for a broadcaster to pay for quality 3D content. “To help fill the 3D content gap at a lower cost, many broadcasters, particularly in Asia, have sought out high quality short-form content as a cheaper alternative,” says Watkins.

The cost of content is likely to dip as the market for 3D develops. By 2014, for example, 40% of all digital cinema screens in the UK (1,285 screens) will be 3D, while in France (1,534) and Spain (901) the proportion tops 45% of installed digital screens over the same period (Screen Digest). Producers could tap into the growing market (currently 10% of all 3D cinema box office receipts) for live event and alternative content screenings in movie theatres.

Also driving 3D to the home will be the computer games market and packaged media. Futuresource predicts 30 million 3D Blu-ray players will enter the European market in 2014; that’s 93% of total Blu-ray shipments. As 3DTV rolls out, some significant barriers remain: market uncertainty, content availability and questions over consumers’ willingness to pay.

Even Sky’s Lenz is forced to admit that the jury is still out. “3DTV will not be for everyone. We are confident that 3D will be a huge success but we need to be careful. It’s not just up to us but to technology partners, content suppliers and CE manufacturers to make this work. Right now we are all in a fortunate position because the destiny of 3D is in the industry’s hands.”

3D Activity in Select European Markets
Germany: Sky Deutschland’s 3D service launched with the Ryder Cup last month, covering a potential 1 million Sky HD box owners in Germany and Austria. Key 3D sport events will include UEFA Champions League, DFB-Cup, German Ice Hockey League, and when the DFL offers 3D production, the Bundesliga. The service launched on satellite, and through the KBW cable service. Since September Deutsche Telekom’s IPTV service Entertain has begun offering on-demand 3D movies, sports and documentaries.

UK: Sky 3D launched October 1 including Premiership football, the documentary Flying Monsters 3D, sections of content from National Geographic and movies like Monsters vs Aliens and Alice in Wonderland in the run-up to Christmas. Discovery Communications has a UK 3D license and is likely to launch on Sky’s platform next year. Virgin has launched a 3D package on its FilmFlex on demand service including BBC Films’ StreetDance 3D and animation Despicable Me 3D. The BBC has conducted trials of natural history, live sport and other genres and has signalled plans to capture elements of the 2012 London Games in 3D.

France: Public broadcaster France Télévisions produced some 3D coverage of the French Open in May and commercial rival TF1 offered 3D coverage of several World Cup matches. IPTV operator Orange launched its 3D service on the back of the Roland Garros tournament in May and airs a limited amount of sports, special events and factual programming. Canal+ aired select 3D World Cup matches, and plans a full-scale channel launch before year-end. Broadband operators SFR and Free also carry TF1 and Canal+ 3D programming. The country’s largest cable operator Numéricable launched a 3D demo channel, in cooperation with Panasonic, ahead of a VoD launch later this year.

Spain: Canal+ Spain, branded Canal+ en 3D, demoed a concert in 3D in April and also aired select matches of the summer’s World Cup in the format. Transmissions are carried on Digital+, the Sogecable-owned satellite pay-TV operator, to subscribers with iPlus boxes.

Italy: Sky Italia launched a dedicated 3D sport channel coinciding with the Ryder Cup in October. Pay-TV rival Mediaset has a premium on-demand 3D service featuring The Legend of Beowulf and Journey to the Centre of the Earth 3D among 50 TV/film titles. Public broadcaster RAI has conducted 3D tests and is providing 3D programming for SES Astra’s pan-European 3D channel.

The Netherlands: Cable company Ziggo is to carry Net 5 3D, a new channel owned by SBS. The plan is to upgrade Net 5’s regular programming to 3D, while during the night native 3D programming will be scheduled. Ziggo previously aired a political debate in 3D. Fellow Dutch cable operator UPC transmitted the US Masters Golf in 3D, played out by Netherlands-based provider Digital Media Centre.

3DTV Europe Forecasts 2014

  • 1m 3DTVs expected to ship in Europe in 2010
  • 4.5m in 2011 (8% of total television shipments)
  • Over 20m in 2014 (36% of total television shipments)

3D Blu-ray Europe Forecasts 2014

  • More than 500,000 3D Blu-ray devices expected to ship in 2010 in Europe
  • Nearly 3m in 2011 (26% of total Blu-ray shipments)
  • Almost 30m in 2014 (93% of total Blu-ray shipments)
[Source: Futuresource]

By Adrian Pennington, TVB Europe

3D Monitor Red-Lines Depth Budget

JVC Professional’s new 24-inch 2D/3D professional-grade production monitor can warn in real time if you try to overspend your depth budget, making it easier to shoot good 3D.

The DT-3D24G1 measures the depth and parallax in the picture, and allows users to set depth limits. If you exceed them, it shows by how much. You can set a negative depth budget of up to 4% and a positive depth budget of up to 20% (although those extremes would be ill-advised).
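
As a rough illustration of how such a depth budget translates into on-screen numbers, the sketch below converts a parallax value in pixels into a percentage of an assumed 1920-pixel-wide image and reports how far it exceeds the limit; the image width and the example value are assumptions, not specifications of the JVC monitor.

    WIDTH_PX = 1920  # assumed image width

    def parallax_check(parallax_px, negative_limit=0.04, positive_limit=0.20):
        """Return parallax as % of screen width and how far it exceeds the budget (in %)."""
        pct = parallax_px / WIDTH_PX * 100
        limit = negative_limit * 100 if parallax_px < 0 else positive_limit * 100
        over = max(0.0, abs(pct) - limit)
        return pct, over

    print(parallax_check(-100))  # about -5.2% parallax, roughly 1.2% over a 4% negative budget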

“The line changes colour if you go over it and will show how many pixels you are out and how much percentage,” said Gustav Emrich, European product manager at IBC.

“It also has two waveform monitors and two vectorscopes and can also check the stereo alignment of the cameras and show timecode one and two and any difference information.”

It uses an Xpol Circular Polarising system compatible with the RealD system, so users can view it through inexpensive polarised glasses. It accepts and processes signals from dual camera systems, stereo-rigs and coded Side-by-Side and Line-by-Line 3D signals. It is 3Gbps ready, has 10-bit processing, and can be used as a field monitor, but requires 24-volt power.

It should be available by the end of December for €8,200.

By David Fox, TVB Europe

Eurosport to Pursue 3D Live Events but No Channel

Eurosport has confirmed plans to launch a 3D service in 2011 but states that this will be event-led and unlikely to be a full linear 3D channel. The sportscaster, owned by French media group TF1, could produce its next 3D experience around the Australian Open Tennis in January but believes there is not yet enough content, or demand, to justify a 24/7 channel.

Speaking to TVBEurope, Eurosport’s Francois Schmitt, deputy managing director of Broadcast and New Media, said: “Over the next few months we aim to propose, if not a complete 3D channel, then a service based around key live events of the properties we own.”

These include the Grand Slam tennis tournaments Australian Open, French Open — during which Eurosport made its first foray into 3D transmissions earlier this year — and the US Open as well as the World Touring Car Championship.

“We are discussing how to develop different market offers with our rights holders,” he said.

“For every sport we have to test the concept of 3D because there won’t be the same recommendations to production for each one. 3D will have a different effect on each sport.”

The broadcaster would like to replicate the success it has had rolling out HD channels across Europe and has no intention of launching 3D as a loss leader. Last year it launched Eurosport 2, which is already available in over 890,000 households. Eurosport HD has 5.2 million households.

“When we launched into HD this was extra value for Eurosport in terms of distribution on different platforms and this has been a successful business. If there is no revenue associated with the cost of new technology then Eurosport has no interest in developing a 3D channel.”

Neither is 2D-converted back-catalogue content an option for Eurosport. “Even though we know 3D conversion works and is acceptable, you cannot consider a 3D channel launch if you only use upconversion,” said Schmitt. “If you want to create a real 3D channel you need native 3D. Eurosport is in any case built on live event sport.

“Since 3D productions require a separate production path, the cost is double that of HD right now — but we will work with others to obtain a high level of maturity for the technology and with that costs will come down.”

By Adrian Pennington, TVB Europe

Super Hi-Vision Advances Over IP

With live Super Hi-Vision pictures now successfully sent over the internet, it looks increasingly likely that it will become a practical broadcasting system by the end of the decade.

The world’s first international Super Hi-Vision (SHV) transmission test over IP networks was conducted September 29 at BBC TVC by BBC Research & Development in collaboration with Japanese broadcaster NHK.

The project, spread over two days, featured a live 30-minute performance of The Charlatans recorded for digital radio station BBC 6Music, and a live 30-minute transmission back to Tokyo of Taekwondo by the British Olympic team. SHV is the ultra-high-definition video format being developed by NHK for next-generation TV broadcasting, with a resolution of 7680 x 4320 pixels, sixteen times the pixel count of existing full HD.



It’s not the first time the two broadcasters have collaborated on SHV tests. In 2008 the first public live SHV transmission was made between London and Amsterdam (to IBC). The BBC’s main contribution has been on the encoding side, first with Dirac and subsequently in conjunction with NHK on the MPEG and ITU-T standardisation process for HEVC (High Efficiency Video Coding).

Two years ago MPEG-2 was used for compression, delivering data rates of 650Mbps. The rate has now been more than halved thanks to MPEG-4 AVC/H.264. HEVC, though not exclusively focussed on SHV, has a target rate of 150Mbps, which would be at the outer limit for delivering compressed SHV over fibre-optic broadband to the home by NHK’s target broadcasting date of 2020.

“Since the SHV frame rate is 60fps we have also started to look at generating higher frame rates,” explained John Zubrzycki, BBC R&D principal technologist. “With movement in the scene, at higher resolutions the picture will look blurred -- but if you increase the frame rate you increase sharpness.”

Having tested high definition decades before it was ever a practical proposition, BBC R&D regards investigation into futuristic formats as normal. “With HDTV up and running our remit is to look at what may be the next generation visual technology a decade or even 25 years hence,” he said. “It could be 4K digital cinema or 3D, both of which we are exploring, as well as SHV.”

The recent tests were shot using one of only three prototype SHV cameras sporting three (RGB) 1.25-inch 33 megapixel (8K) resolution sensors. Previously the SHV camera featured two 4K green sensors with half-pixel offset (plus 4K red and blue sensors) which achieved full resolution for vertical and horizontal luminance detail only.

Since existing lenses are simply not sharp enough to capture all the information the SHV sensors can receive, NHK has had high precision lenses custom made -- each with a basic 10:1 zoom and built larger than existing lenses for the 1.25in sensor.

From the camera the video and audio signals were fed to a Fujitsu-developed SHV encoder composed of eight 1080/60p MPEG-4 AVC/H.264 encoding units. The SHV image is divided into eight overlapping tiles by a format converter, each of which is fed to an encoder unit. The output streams of these eight encoders are in MPEG TS format and are multiplexed together using a TS multiplexer.
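
The sketch below illustrates the tiling idea: an SHV luma plane is split into eight overlapping sub-pictures, each of which would be handed to its own encoder. The 4x2 grid layout and the 16-pixel overlap margin are assumptions for illustration; the article does not specify the exact tile geometry.

    import numpy as np

    def split_into_tiles(frame, cols=4, rows=2, overlap=16):
        """Cut one frame into rows x cols overlapping tiles covering the full picture."""
        h, w = frame.shape[:2]
        th, tw = h // rows, w // cols
        tiles = []
        for r in range(rows):
            for c in range(cols):
                y0, x0 = max(0, r * th - overlap), max(0, c * tw - overlap)
                y1, x1 = min(h, (r + 1) * th + overlap), min(w, (c + 1) * tw + overlap)
                tiles.append(frame[y0:y1, x0:x1])
        return tiles

    frame = np.zeros((4320, 7680), dtype=np.uint8)  # one luma plane of a 7680x4320 SHV frame
    tiles = split_into_tiles(frame)                 # eight overlapping sub-pictures, one per encoder
    print(len(tiles), tiles[0].shape)               # 8 tiles, each roughly 2160x1920 plus overlap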

“The principle that NHK are deploying to handle the data is to treat it as 16 HD SDI signals in parallel in production. A new development is that these signals are able to be muxed uncompressed into a single optical stream, carried over a single triax cable from the camera,” said Zubrzycki.

In the control gallery the feed was demultiplexed from the optical stream, displayed around the studio on 4K monitors and locally stored on an array of 16 Panasonic P2 solid state recorders, each encoding and storing a sixteenth tile of the SHV picture using AVC-Intra (100Mbps) -- giving a total recorded bit rate of 1.6Gbps.

“It’s quite a challenge to manage 16 HDTV signals in parallel around a studio although in theory it can be done today with a big enough router and storage devices,” said Zubrzycki. “We added nothing fancy to the broadcast, no caption generators or vision mixing -- things which will be added to a mature SHV system.

“The real challenge, and the real reason for the test, was to prove that we could transmit the signal live half way round the world and if we could do that, we could do it anywhere,” he added.



Equally important it demonstrated that SHV could be carried over the internet as a contribution signal from an event to the broadcaster’s studio, avoiding the expense of a satellite transponder and with an eye to IP as the way images will be transmitted in future.

The eight 45Mbps encoded streams combined to create a stream with a bitrate that varied between 320 and 380Mbps. These were IP-encapsulated and packet-streamed with error protection, then sent over a broadband IP network to Japan.
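
A quick arithmetic check of those figures, together with a rough packet-rate estimate, is sketched below; the assumption of 1,316 bytes of transport-stream payload per IP packet (seven 188-byte TS packets) is illustrative and not taken from the test.

    streams = 8
    per_stream_mbps = 45
    aggregate_mbps = streams * per_stream_mbps            # 360 Mbps, inside the 320-380 Mbps range
    payload_bits = 7 * 188 * 8                            # assumed TS payload per IP packet
    packets_per_second = aggregate_mbps * 1_000_000 / payload_bits
    print(aggregate_mbps, round(packets_per_second))      # 360 Mbps, roughly 34,000 packets per second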

The test was routed over various research IP networks including JANET (the UK's 40Gbps education and research network), GEANT in Europe, Internet2 in the US and Gemnet in Japan (coordinated by NTT). On reception at NHK’s Science & Technology Research Labs, the reverse operations were conducted to reproduce SHV video and audio.

“The packets were sent in defined routes through the networks since we felt that with that amount of content we needed to monitor and verify what was going on,” said Zubrzycki. “It was an effective demonstration but there was some packet loss and our investigation now is to find out what is happening to those packets. This was the real reason we did the tests – to find out how we can send SHV via IP.”

The camera’s sensitivity is the key hurdle to overcome at the acquisition end. Although the camera is now half the weight and size of the 40kg unit used in 2008, the goal has to be to make it more manoeuvrable. “For the trial we used more light than today’s normal HDTV but with similar lighting levels to HDTV at a similar stage of its development,” noted Zubrzycki.

NHK believes it can start experimental satellite broadcasting using the Ka band (21 GHz) in 2020. Before that, however, it has to solve the problem of Ka-band signal attenuation caused by the heavy rainfall Japan receives.

“We have a plan for experiments using the Ku band (12 GHz) prior to the Ka-band experimental broadcasting,” explained Keiichi Kubota, director general of NHK R&D. “Research is also ongoing for SHV broadcasting over terrestrial broadcasting systems, although we don’t think it will be practical until after satellite broadcasting starts.”

Kubota likens the situation to the early stage of HD, when there was no market demand as NHK started R&D on the format. “It is our job to create new demand, even before anyone notices the need,” he said. “The most important thing for us is to make people recognise that SHV will bring such a wonderful future. The Kyushu National Museum, Fukuoka, Japan has already introduced an SHV system. Our final goal is still SHV broadcasting to the public; but before that, we have to put SHV to practical use in theatres.”

By Adrian Pennington, TVB Europe