AVC-I

Broadcast contribution applications like newsgathering, event broadcasting or content exchange currently benefit from the wide availability of high-speed networks. These high-bandwidth links open the way to higher video quality and to distinctive operational requirements such as lower end-to-end delays or the ability to store the content for later editing.

Because only lighter video compression is needed, the complexity of common long-GOP codecs can be avoided, and simpler methods like intra-only compression can be considered. These techniques compress pictures independently, which is highly desirable when low latency and error robustness are of major importance. Several intra-only codecs, like JPEG 2000 or MPEG-2 Intra, are available today, but they might not meet all broadcasters' needs.

AVC-I, which is simply an intra-only version of H.264/AVC compression, offers a significant bit-rate reduction over MPEG-2 Intra, while keeping the same advantages in terms of interoperability. AVC-I was standardized in 2005, but broadcast contribution products supporting it were not launched until 2011. Therefore, it may be seen as a brand new technology, and studies have to be performed to evaluate whether it matches currently available technologies in operational use cases.

Why Intra Compression?
Video compression uses spatial and temporal redundancies to reduce the bit rate needed to transmit or store video content. When exploiting temporal redundancies, predicted pixels are found in already decoded adjacent pictures, while spatial prediction is built with pixels found in the same picture. Long-GOP compression makes use of both methods, and intra-only compression is restricted to spatial prediction.

Long-GOP approaches are more efficient than intra-only compression, but they also have distinct disadvantages:

  • Handling picture dependencies when seeking in a file makes editing a long-GOP file a complex task.

  • Any decoding error might spread from a picture to the following ones and span a full GOP. This means that a single transmission error can affect decoding for several hundred milliseconds of video and, therefore, be very noticeable.

  • Encoding and decoding delays are typically higher with long-GOP techniques than with intra-only coding because of the complexity of the compression tools.

Another problem inherent to long-GOP compression is that video quality varies significantly from picture to picture. For example, the figure below depicts the PSNR along the sequence ParkJoy when encoding it in long-GOP and in intra-only mode. While the quality of the long-GOP pictures is always higher than that of their intra-only counterparts, it varies considerably. On the other hand, the quality of consecutive intra-only coded pictures is much more stable.
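
For reference (an illustrative note, not from the original article), the PSNR values plotted in such comparisons are computed per picture from the mean squared error between the source and the decoded picture:

    \mathrm{PSNR} = 10 \, \log_{10}\!\left( \frac{\mathit{MAX}_I^{\,2}}{\mathit{MSE}} \right) \ \text{dB}, \qquad \mathit{MAX}_I = 255 \ \text{for 8-bit video}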


Long-GOP versus intra-only compression.


Therefore, intra-only compression might be a better choice than long-GOP when:
  • Enough bandwidth is available on the network;
  • Low end-to-end latency is a decisive requirement;
  • Streams have to be edited; and
  • The application is sensitive to transmission errors.

Several intra-only codecs are currently available to broadcasters to serve the needs of contribution applications:
  • MPEG-2 Intra — This version of MPEG-2 compression is restricted to the use of I-frames, removing P-frames and B-frames.

  • JPEG 2000 — This codec is a significantly more efficient successor to JPEG that was standardized in 2000.

  • VC-2 — Also known as Dirac-Pro, this codec has been designed by BBC Research and was standardized by SMPTE in 2009. Like JPEG 2000, it uses wavelet compression.

Older codecs like MPEG-2 Intra benefit from a large base of interoperable equipment but lack coding efficiency. On the other hand, more recent formats like JPEG 2000 are more efficient but lack interoperability. Consequently, there is a need for a codec that is both efficient and able to ensure interoperability between equipment from various vendors.

What is AVC-I?
AVC-I designates a fully compliant variant of the H.264/AVC video codec restricted to the intra toolset. In other words, it is just plain H.264/AVC using only I-frames. But, some form of uniformity is needed in order to ensure interoperability between equipment provided by various vendors. Therefore, ISO/ITU introduced a precise definition in the form of profiles (compression toolsets) in the H.264/AVC standard.

H.264/AVC Intra Profiles
Provision for using only I-frame coding was introduced in the second edition of the H.264/AVC standard with the inclusion of four specific profiles: High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra and CAVLC 4:4:4 Intra. They can be described as simple sets of constraints over profiles dedicated to professional applications. The table below gives an overview of the main limitations introduced by these profiles:



Because the intra profiles are defined as reduced toolsets of commonly used H.264/AVC profiles, they don't introduce new features, technologies or even stream syntax. Therefore, AVC-I video streams can be used within systems that already support standard H.264/AVC video streams. This enables the use of containers like MPEG files or MXF, transports like MPEG-2 TS or RTP, audio codecs like MPEG Audio or Dolby Digital, and many metadata standards.
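
Because the intra profiles reuse the ordinary H.264/AVC syntax, they are signalled through the usual profile_idc and constraint_set3_flag fields of the sequence parameter set. The sketch below (an illustration, not from the article) shows one way to map those values to the profile names listed above, based on a reading of Annex A of the standard; treat the exact values as an assumption to verify against the specification.

    INTRA_PROFILES = {
        (44, 0): "CAVLC 4:4:4 Intra",   # profile_idc 44 is intra-only by definition
        (44, 1): "CAVLC 4:4:4 Intra",
        (110, 1): "High 10 Intra",      # High 10 with constraint_set3_flag set
        (122, 1): "High 4:2:2 Intra",   # High 4:2:2 with constraint_set3_flag set
        (244, 1): "High 4:4:4 Intra",   # High 4:4:4 with constraint_set3_flag set
    }

    def intra_profile_name(profile_idc, constraint_set3_flag):
        """Return the AVC-I profile name, or None if the SPS does not signal one."""
        return INTRA_PROFILES.get((profile_idc, constraint_set3_flag))

    print(intra_profile_name(122, 1))   # -> High 4:2:2 Intra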

AVC-I and JPEG 2000 Artifacts
Below 100Mb/s, a problematic defect was observed with both codecs: pictures can exhibit an annoying flicker. This issue is caused by temporal instability in the coding decisions, amplified by noise. It seems to appear below 85Mb/s with JPEG 2000 and below 75Mb/s with AVC-I, and it worsens as the bit rate decreases. At 50Mb/s and below, the flicker is extremely problematic, and it was felt that the video quality was too low for high-quality broadcast contribution applications, even when the source is downscaled to 1440 × 1080 or 960 × 720.

Around 100Mb/s, both codecs perform well, even on challenging content. Pictures are flicker-free, and coding artifacts are difficult to notice. However, noise or film-grain looks low-pass filtered, and its structure sometimes seems slightly modified. Even so, it wasn't felt this was an important issue.

All those defects become less visible as the bit rate is increased. But, while AVC-I picture quality rises uniformly, some JPEG 2000 products may still exhibit blurriness artifacts, even at 180Mb/s. Using available JPEG 2000 contribution pairs, a bit rate at which compression is visually lossless on all high-definition broadcast content was not found. On the other hand, some AVC-I encoders appeared visually lossless at 150Mb/s, even when encoding grainy content like movies.

Bit Rates in Contribution
The subjective analysis of an actual AVC-I implementation on various broadcast contribution content allows us to categorize its usage according to the available transmission bandwidth. The table below presents findings on 1080i25 and 720p50 high-definition formats:



Because AVC-I does not make use of temporal redundancies, 30Hz content (1080i30 or 720p60) is more difficult to encode than 25Hz material: to achieve the same perceived video quality, bit rates have to be raised by about 20 percent.

Conclusion
The availability of high-speed networks for contribution applications enables broadcasters to use intra-only video compression codecs instead of the more traditional long-GOP formats. This allows them to benefit from distinctive advantages like low encoding and decoding delays, more constant video quality, easier editing when the content is stored, and lower sensitivity to transmission errors. However, currently available intra-only video codecs require one to choose between interoperability and coding efficiency.

AVC-I, being just the restriction of standard H.264/AVC to intra-only coding, avoids making difficult compromises. It is more efficient than other available intra-only codecs, but, more importantly, it benefits from the strong standardization efforts that permitted H.264/AVC to replace MPEG-2 in many broadcast applications.

Finally, a subjective study across a range of products from multiple vendors identified specific coding artifacts that may occur and confirmed the visual superiority of AVC-I versus MPEG-2 and JPEG 2000, when measured at high bit rates.

Pierre Larbier, CTO for ATEME, Broadcast Engineering

EBUCore: the Dublin Core for Media

EBUCore was first published in 2000. It was originally a set of definitions for audio archives, applied to the Dublin Core, which is itself a generic set of descriptive terminology that can be applied to any content. XML was then in its infancy but its use would grow dramatically, demanding more structured information to describe audiovisual content. Since then, other semantic languages have greatly influenced the way this information is modelled. EBUCore followed this evolution to become what it is today: the Dublin Core for media, a framework that can be used to describe just about any media content imaginable.

EBUCore is the fruit of well-defined requirements and an understanding of user and developer habits. User friendliness, flexibility, adaptability and scalability are more important than richness and comprehensiveness allied to impossible compliance rules. The richer the metadata, the higher the likelihood that implementers will reinvent their own. History is full of such examples. The golden rule for EBUCore was and remains "keep it simple and tailor it for media".

EBUCore covers 90% of users’ needs and its use is no longer restricted to audio or archives. Based on the simple and flexible EBU Class Conceptual Data Model (CCDM), EBUCore's ontology (categories and structure), which is expressed in RDF/OWL (Resource Description Framework/Web Ontology Language), can be used right through to the delivery of content to the end user. It responds to the need for more effective querying. It also paves the way for effective metadata enrichment using Linked Open Data (LOD).
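
As a small illustration of that kind of querying (an illustrative sketch, not EBU code), the snippet below loads an RDF description with the rdflib Python library and asks for Dublin Core titles; the file name and the choice of dc:title are assumptions for the example, and real EBUCore instances expose many more properties.

    from rdflib import Graph

    g = Graph()
    g.parse("programme_metadata.rdf")   # hypothetical EBUCore RDF instance document

    # Dublin Core title, reflecting EBUCore's Dublin Core heritage
    for resource, title in g.query("""
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?resource ?title WHERE { ?resource dc:title ?title }
    """):
        print(resource, title)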

EBUCore was designed to be a metadata specification for “users with different needs” and duly serves this goal. Delegates at the EBU’s Production Technology Seminar last January heard a wealth of evidence pointing to the key role that EBUCore is now playing. Several speakers explained how they have deliberately chosen and benefited from EBUCore.

The EBU-AMWA FIMS project, creating a vendor-neutral specification to interconnect production equipment, has adopted EBUCore. The FIMS 1.0 specification uses EBUCore as its core descriptive and technical metadata. FIMS is a vital project for the future of file-based production and feedback received from participants has influenced the most recent version of EBUCore. Early adopters of FIMS, such as Bloomberg, are using this metadata.

The UK’s Digital Production Partnership (DPP), which recently published its new specification for file-based programme delivery, is mapping its metadata to EBUCore and TV-Anytime. (TV-Anytime was co-founded by the EBU, who chaired the metadata activities and now actively maintains the specification on behalf of ETSI).

The work on EBUCore and the EBU's CCDM greatly influenced the development of the W3C Ontology for Media Resources, and vice versa. MA-ONT, as it is known, is a subset of the EBUCore ontology and the RDF/OWL representation rules are common to both. This work is also being used to propose extensions to schema.org in order to describe TV and radio programmes and associated services and schedules.

EBUCore is also used as the solution for metadata aggregation in EUScreen, the European audiovisual archives portal and now a key contributor to Europeana, the European digital library. Two forms of EBUCore are used in this context, the EBUCore XML metadata schema and also the EBUCore RDF ontology.

Other on-going or planned activities using EBUCore include:
• EBUCore will be listed as a formal metadata type by SMPTE. The EBU is arranging for software to be available to embed EBUCore metadata in formats such as XML or JSON.

• The NoTube project has combined egtaMeta (an EBU specification extending EBUCore for the exchange of commercials) and TV-Anytime to develop innovative solutions in targeted advertising.

• EBUCore is also used in combination with MPEG-7 in the VISION Cloud project exploring technologies for storage in the cloud. The EBU is directly involved in the definition and promotion of the new MPEG-7 AVDP profile.

• Singapore’s national broadcaster, MediaCorp, has implemented and adapted EBUCore/SMMCore into its internal company metadata framework.

• The EBU is engaged with several broadcasters for the adaptation of EBUCore in different contexts such as a common metadata format for file exchange.

The above is just a small selection of developments. For example, EBUCore is also republished by the Audio Engineering Society (AES) as AES60, and is available in XML, SMPTE KLV, JSON and RDF/OWL.

Watch this space as the EBU will soon publish a user-friendly EBUCore mapping tool on its website.

By Jean-Pierre Evain, EBU Technical Magazine

Let’s DASH!

In the last century, access to video delivered over networks was almost exclusively dominated by scheduled consumption on dedicated devices – broadcasters distributed premium content at a specific time to TV sets. Broadband internet, both fixed and mobile, as well as highly capable devices such as smartphones and tablets have changed video consumption patterns dramatically in recent years. Video is now consumed on-demand on a multiplicity of devices according to the schedule of the user.

Recent studies conclude that mobile data traffic will grow by a factor of 26 between 2011 and 2016 and that by 2016 video traffic will account for at least two-thirds of the total. The popularity of video also leads to dramatic data needs on the fixed internet. In North America, real-time entertainment traffic (excluding p2p video) today contributes more than 50% of the downstream traffic at peak periods, with notably 30% from Netflix and 11% from YouTube.

HTTP Delivers
The astonishing thing is that these data needs are not driven by traditional broadcast, IP multicast or managed walled-garden services, but by over-the-top video providers. One of the cornerstones of this success is the use of HTTP as the delivery protocol. HTTP enables reach, universal access, connectivity to any device, fixed-mobile convergence, reliability, robustness, and the reuse of existing delivery infrastructure for scalable distribution.

One of the few downsides of HTTP-based delivery is the lack of bitrate guarantees. This can be addressed by enabling the video client to dynamically switch between different quality/bitrate versions of the same content and therefore to adapt to changing network conditions. The provider offers the same media content in different versions and the client can itself select and switch to the appropriate version to ensure continuous playback. The figure below shows a typical distribution architecture for dynamic adaptive streaming over HTTP. HTTP-based Content Delivery Networks (CDNs) have been proven to provide an easy, cost-efficient and scalable means for large-scale video streaming services.
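
A minimal sketch of that client-side adaptation logic (an illustration, not part of any particular protocol): pick the highest advertised bitrate that the currently measured throughput can sustain, with a safety margin.

    def select_bitrate(available_bitrates_bps, measured_throughput_bps, safety=0.8):
        """Return the highest advertised bitrate below a fraction of the throughput."""
        usable = sorted(b for b in available_bitrates_bps
                        if b <= measured_throughput_bps * safety)
        return usable[-1] if usable else min(available_bitrates_bps)

    # e.g. versions at 1, 2.5 and 5 Mb/s with roughly 4 Mb/s currently measured
    print(select_bitrate([1_000_000, 2_500_000, 5_000_000], 4_000_000))  # -> 2500000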


Setting a Standard
MPEG has taken the lead on defining a unified format for enabling Dynamic Adaptive Streaming over HTTP (DASH). MPEG-DASH was ratified in 2011 and published as a standard (ISO/IEC 23009-1) in April 2012. It is an evolution of existing proprietary technologies that also addresses new requirements and use cases. DASH enables convergence by addressing mobile, wireless and fixed access networks, different devices such as smartphones, tablets, PCs, laptops, gaming consoles and televisions, as well as different content sources such as on-demand providers, broadcasters, or user-generated content offerings.

The standard defines two basic formats: the Media Presentation Description (MPD) uses XML to provide a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and Segments, which contain the actual media streams in the form of chunks, in single or multiple files. In the context of part 1 of MPEG-DASH the focus is on media formats based on the ISO Base Media File Format and the MPEG-2 Transport Stream.
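
As an illustration of what a client reads from the MPD, the sketch below (an illustration, not from the standard text) lists the Representation identifiers and bandwidths using Python's standard XML library; the manifest file name is hypothetical and real MPDs carry many more attributes.

    import xml.etree.ElementTree as ET

    NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

    tree = ET.parse("presentation.mpd")   # hypothetical manifest file
    for period in tree.getroot().findall("mpd:Period", NS):
        for adaptation_set in period.findall("mpd:AdaptationSet", NS):
            for rep in adaptation_set.findall("mpd:Representation", NS):
                # id and bandwidth identify each alternative version of the content
                print(rep.get("id"), rep.get("bandwidth"))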

With these basic formats MPEG-DASH provides for a very wide range of use cases and features, including support for server and client-side component synchronization and for efficient trick modes, as well as simple splicing, (targeted) ad insertion and session metrics. DASH can also support multiple Digital Rights Management systems, content metadata, and advanced codecs including 3D video and multi-channel audio.

Towards Deployment
With the completion of the standard the focus has shifted towards deployment and commercialization of DASH. In this context MPEG will later this year publish Conformance Software and Implementation Guidelines and continues to work on client implementations and optimizations. This is especially relevant for a stable and consistent user experience under varying network conditions.

On the distribution side, coming optimizations include DASH for CDNs – to improve efficiency, scalability and user experience – along with integration into mobile networks and transition between unicast and multicast distribution.

The creation of the DASH Promoters’ Group will help to address interoperability and promotional activities. The EBU is among the 50 major industry players that make up this group. Support is also provided for other standards planning to include MPEG-DASH to enable over-the-top video, including HbbTV, DLNA, the Open IPTV Forum and 3GPP. Furthermore, the W3C consortium is considering extensions to the HTML5 video tag that would aid the integration of DASH into web browsers.

The significant efforts currently under way to deploy DASH in a wide range of contexts raise the expectation that MPEG-DASH will become the format for dynamic adaptive streaming over HTTP.

By Thomas Stockhammer, EBU Technical Magazine

What is AS02 and Why Do You Need It?

A nice introduction to AS02 by Bruce Devlin.

HTTP ABR Streaming

Cisco Systems Visual Networking Index (VNI) predicts that more than 50 percent of all global Internet traffic will be attributed to video by the end of 2012. It also confirms that, in addition to television screens, video delivery to cell phone and computer screens will be increasingly common. Globally, Internet video traffic is projected to be 58 percent of all consumer Internet traffic in 2015, up from 40 percent in 2010. By then, three trillion minutes of video content are projected to cross the Internet each month, up from 664 billion in 2010, and 16 percent of consumer Internet video traffic will be TV video. There is no doubt that if you are in the business of transmitting video, you will likely be using IP in the near future.

Delivering acceptable video quality over IP to TV viewers and other devices has led to a still-evolving delivery infrastructure. Networks at the required scale have higher packet loss and error rates than smaller managed networks. Adaptive Bit Rate (ABR) delivery protocols like Apple's HLS and Microsoft's Smooth Streaming, among others, help address these issues. These protocols use HTTP over TCP to avoid data loss and dynamically adapt bit rates to networks that can provide only unpredictable instantaneous bandwidth.

Using a CDN to distribute the content to a range of servers located close to the viewers is another key feature of successful deployments, avoiding the congestion and bottlenecks of centralized servers. Yet, despite more complex protocols to handle a range of transport issues, high-quality performance is not guaranteed. Cost-effective operations and a good viewer experience depend on good monitoring observability and targeted performance metrics for rapid problem identification, location and resolution.

ABR Protocols
ABR video delivery mechanisms over IP that enable this rapidly growing Internet video market are effective, but complex. Not only do they require the usual video compression encoders to achieve practical bit rates, but they also require a host of other devices and infrastructures, including segmenting servers, origin servers, a CDN and a last-mile delivery network.

ABR protocols help deliver a quality video experience to viewers by overcoming common IP data network performance issues such as packet arrival jitter, high loss rates, unpredictable bandwidth and security firewall issues. HTTP delivery solves most firewall issues as it is almost universally unblocked since it is also used for web browsing. HTTP, which uses TCP, assures loss-free payload delivery as well. While predictable instantaneous bandwidth levels are a challenge in unmanaged networks, by using variable encoding rates and these protocols, the viewer's client device can dynamically select the best stream bit rate for the instantaneously available bandwidth.

Apple's HTTP Live Streaming (HLS) is an example of a protocol that successfully navigates the challenges of unmanaged networks to transfer multimedia streams using HTTP. To play a stream, an HLS client first obtains the playlist file, which contains an ordered URI list of media files to be played. It then successively obtains each of the media files in the playlist. Each media file is, typically, a 10-second segment of the desired multimedia stream. A playlist file is simply a plain text file containing the locations of one or more media files that together make up the desired program.

The media file is a segment, or “chunk,” of the overall presentation. For HLS, it is always formatted as an ISO 13818 MPEG-2 TS or an MPEG-2 audio elementary stream. The content server divides the media stream into media files of approximately equal durations at packet and key frame boundaries to support effective decoding of individual media files. The server creates a URI for each media file that allows clients to obtain the file and creates the playlist file that lists the URIs in play order.
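
A simplified sketch of the client behaviour just described (an illustration, not Apple's reference code): fetch the media playlist, collect the segment URIs it lists, and download them in order. A real player also honours tags such as #EXT-X-TARGETDURATION and feeds segments to a decoder rather than printing them.

    from urllib.parse import urljoin
    from urllib.request import urlopen

    def segment_uris(playlist_url):
        text = urlopen(playlist_url).read().decode("utf-8")
        # Lines that are not tags or comments (i.e. do not start with '#') are media URIs
        return [urljoin(playlist_url, line.strip())
                for line in text.splitlines()
                if line.strip() and not line.startswith("#")]

    def download_segments(playlist_url):
        for uri in segment_uris(playlist_url):
            data = urlopen(uri).read()   # one ~10-second chunk of the stream
            print(uri, len(data), "bytes")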


On the transmitting end, the adaptive encoder creates segments with fixed duration at different bit rates and an index file that acts as a playlist to indicate the sequence of the segments. On the receiving end, the adaptive protocol buffers video segments in the correct sequence, selecting the best quality possible for the bit rate available at each interval before playing them seamlessly.


Multiple playlist files are used to provide different encodings of the same presentation. A variant playlist file that lists each variant stream allows clients to dynamically switch between encodings. Each variant stream presents the same content, and each variant playlist file has the same target duration. If the playlist file obtained by the client is a variant playlist, the client can choose the media files from the variants as needed based on its own criteria, such as how much network bandwidth is currently available. The client will attempt to load media files in advance of when they will be required for uninterrupted playback to compensate for temporary variations in latency. The client must periodically reload the playlist file to get the newest available media file list, unless it receives a tag marking the end of the available media.
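
The variant mechanism can be sketched in a few lines (an illustration, under the assumption that each #EXT-X-STREAM-INF tag carries a BANDWIDTH attribute and is immediately followed by the URI of that variant's playlist):

    import re

    def parse_variants(master_playlist_text):
        """Return (bandwidth, playlist URI) pairs, lowest bandwidth first."""
        variants, lines = [], master_playlist_text.splitlines()
        for i, line in enumerate(lines):
            if line.startswith("#EXT-X-STREAM-INF"):
                match = re.search(r"BANDWIDTH=(\d+)", line)
                if match and i + 1 < len(lines):
                    variants.append((int(match.group(1)), lines[i + 1].strip()))
        return sorted(variants)

    def choose_variant(variants, available_bps):
        """Pick the highest-bandwidth variant that fits the available bandwidth."""
        fitting = [v for v in variants if v[0] <= available_bps]
        return (fitting[-1] if fitting else variants[0])[1]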

CDN Operation
Using HTTP client-driven streaming protocols like HLS effectively supports adaptive bit rates, handles high network error rates and firewall issues, and supports both on-demand and live streaming. However, with millions of clients establishing individual protocol sessions to receive video, scalability must be considered. Further challenging the system design are sudden spikes in requests from “flash crowds” or “SlashDot effects” that may be caused by current events where a sudden, unexpected demand overwhelms servers, and content becomes temporarily unavailable.

The CDN is a collection of network elements that replicates content to multiple servers to transparently deliver content to users. The elements are designed to maximize the use of bandwidth and network resources to provide scalable accessibility and maintain acceptable QoS. Particular content can be replicated as users request it or can be copied before requests are made by pushing the content to distributed servers closer to where it is anticipated users will be requesting it.

In either case, the viewer receives the content from a local server, relieving congestion on the origin server and minimizing the transmission bandwidth required across wide areas. Caching and/or replica servers located close to the viewer are also known as edge servers or surrogates. To realize the desired efficiencies, client requests must be transparently redirected to the optimal nearby edge server.

Content distribution and management strategies are critical in a CDN for efficient delivery and high-quality performance. The type and frequency of viewer requests and their location must dynamically drive the directory services that transparently steer the viewer to the optimum edge server, as well as the replication service, to assure that the requested content is available at that edge server for a timely response to the viewer. A simplistic approach is to replicate all content from the origin server to all surrogate servers, but this solution is not efficient or reasonable given the increase in the size of available content. Even though the cost of storage is decreasing, available edge server storage space is not assured. Updating this scale of copies is also unmanageable.

Practically, a combination of predicted and measured content popularity and on-demand algorithms are used for replication. Organizing and locating edge server clusters to maintain optimum content availability relies on policies and protocols to load balance the servers. Random, round robin or various weighted server selection policies, along with selections based on number of current connections, number of packets served, and/or server CPU utilization, health and capacity are all utilized and varied based on load persistence considerations.
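
Two of the selection policies mentioned above can be sketched as follows (an illustration; the server records and field names are assumptions for the example):

    import random

    edge_servers = [
        {"name": "edge-a", "weight": 3, "connections": 120},
        {"name": "edge-b", "weight": 1, "connections": 45},
        {"name": "edge-c", "weight": 2, "connections": 200},
    ]

    def weighted_random(servers):
        """Weighted random selection: higher weight, more traffic."""
        return random.choices(servers, weights=[s["weight"] for s in servers])[0]

    def least_connections(servers):
        """Send the request to the server with the fewest active connections."""
        return min(servers, key=lambda s: s["connections"])

    print(weighted_random(edge_servers)["name"])
    print(least_connections(edge_servers)["name"])   # -> edge-b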

Quality Assurance
Cost-effective operations rely on good monitoring observability and performance metrics for rapid problem identification and resolution. QoS performance monitoring metrics provide needed information about stream delivery quality, key information about the types of impairments and their causes, as well as warnings of impending impairments for ABR streaming networks. Combined with end-to-end monitoring, QoS monitoring in the production network tracks the delivery quality of the flows and also supports other applications such as system commissioning and tuning.

In adaptive streaming environments, QoS should be monitored after the caching server and at the client. The scale below shows how the VeriStream metric characterizes instantaneous network delivery quality on a 1-5 scale (a rough coded reading of the scale follows the list):

1 - Severe underrun: Interval between segments and the file transfer time are slower than the drain rate.
2 - Underrun: Segment interval is slower than the drain rate, but file transfer time is faster than the drain rate.
3 - Warning: Interval between segments and the file transfer time are marginal.
4 - Growing buffer: Interval between segments and the file transfer are faster than the drain rate.
5 - Balanced system: Interval between segments is balanced, and the file transfer is faster than the drain rate.
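
One possible reading of that scale, sketched in code (an interpretation of the descriptions above, not the vendor's definition). Here "drain" is taken to be the segment's media duration: a segment must arrive, and transfer, in less than its duration for playback to keep up.

    def delivery_score(segment_interval_s, transfer_time_s, drain_s, margin=0.1):
        """Score instantaneous delivery quality on the 1-5 scale described above."""
        hi, lo = drain_s * (1 + margin), drain_s * (1 - margin)
        interval_slow = segment_interval_s > hi
        interval_fast = segment_interval_s < lo
        transfer_slow = transfer_time_s > hi
        if interval_slow and transfer_slow:
            return 1   # severe underrun
        if interval_slow:
            return 2   # underrun, but each file still transfers quickly enough
        if not interval_fast and transfer_time_s > lo:
            return 3   # warning: both values are marginal
        if interval_fast and not transfer_slow:
            return 4   # growing buffer
        return 5       # balanced interval, transfer comfortably faster than drain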

Such metrics are intended to analyze streams susceptible to IP network device and client/server impairments. For adaptive streaming environments, it is also important to monitor QoS at the client end point, which can be used to assess the dynamic performance of network and system delivery. QoS metrics for ABR must continuously analyze the dynamic delivery of stream segments.


Comprehensive monitoring in real time at strategic network locations for rapid problem detection and fault isolation can be combined with control plane and content quality monitoring for optimum system management.


Summary
Leaving the well-managed network domain of provider IP networks requires new adaptive bit-rate protocols that are rapidly proving their effectiveness. A comprehensive, end-to-end monitoring strategy gives content and service providers the stream observability and fault-isolation capabilities needed for timely and efficient adaptive bit-rate network delivery deployments.

By James Welch, Broadcast Engineering

World's First Ultra High Definition Shoulder-Mount Camera

This is the world's first compact shoulder-mount Ultra High Definition camera. Developed by NHK, it uses a single-chip color imaging sensor to produce 33MP video.

Reducing the size and weight of the camera has improved its portability, making it more maneuverable than previous prototypes, so it can be used in a wide variety of shooting situations. The compact head can also be used with commercially available still camera lenses.

As the single-chip sensor uses a Bayer color filter array, where only one color component is acquired per pixel, researchers at NHK have also developed a high-quality up-converter, which estimates the other two color components to convert the output into full-resolution video.

Next, NHK will develop a camera control unit to perform signal processing specifically for this head. This will improve the picture quality and functionality of the camera.


By Don Kennedy, DigInfo TV

HbbTV on Brink of Global Expansion

HbbTV is now odds on to emerge from the cluster of interactive and hybrid TV standards to become the dominant platform uniting broadcast and connected TV.

Currently sweeping through continental Europe, and under trial in a number of other major countries including China, Japan and the U.S., it looks like the winning hybrid TV platform is emerging, partly through not being too ambitious and sticking clearly within defined boundaries. It leaves plenty of scope for apps vendors, broadcasters and pay TV operators to innovate around the platform and stamp their own distinctive flavor on their products or standards.

The closest to an actual HbbTV deployment outside Europe is in South Korea, where national broadcaster KBS is launching services based on the country’s OHTV (Open Hybrid Television), which is a separate development but almost identical technically and now likely to be aligned completely with HbbTV given its momentum in other countries. In Korea, the service is hybrid digital terrestrial and broadband, as is the case with many early HbbTV deployments in Europe.

HbbTV evolved in 2009 as a joint project between France and Germany, and it is those two countries along with Spain and the Netherlands that have made the early running with HbbTV deployments. In Germany, at least eight broadcasters, including commercial broadcaster RTL and public service broadcaster ZDF, are now offering HbbTV apps over terrestrial or satellite networks.

Such apps can combine multiple delivery channels into one coherent service, or provide additional viewing options within a single channel. An example of the first of these use cases is German home shopping channel QVC, which is exploiting HbbTV to unite its various existing distribution channels including TV, Internet and mobile networks, within one coherent service allowing customers to search for products. An example of the second is German private broadcaster Vox, part of the RTL Group, which is using HbbTV for a cooking channel, allowing viewers to access different recipes for demonstration within a show by pressing the “red button.”

In France, public broadcaster France Télévisions has been leading the way and is using HbbTV to expand coverage of the French Open tennis tournament over the next two weeks. It will allow viewers to choose from a number of matches at a given time, again using red button functionality.

Spain, meanwhile, has agreed to adopt HbbTV as its system for connected TV, with pilots completed by broadcaster Mediaset España and telco Telefónica. These involved Mediaset’s Telecinco importing content from Telefónica’s services, including Movistar Imagenio, Movistar Videoclub and Terra TV.

Similarly, Dutch broadcasters, including SBS, NPO and RTL, have agreed to use HbbTV as their standard for hybrid connectivity, while launches have just occurred or are imminent in Switzerland, Austria, the Czech Republic and Poland.

HbbTV has also won over the Nordic region, which originally was planning to build hybrid broadcast around the alternative Multimedia Home Platform (MHP) developed by the DVB, with the main original difference being that HbbTV is based on HTML while MHP is written in Java. Germany originally went with MHP, but it failed miserably, partly because there was then little demand for interactive TV in the country.

Now, the Nordic region comprising Denmark, Finland, Iceland, Norway and Sweden, oddly also including Ireland, has replaced DVB-MHP with HbbTV as the common API for hybrid digital receivers within its NorDig digital TV specification. The stated reason is that HbbTV now has much wider market acceptance, with a range of TV applications and new hybrid services, and crucially HbbTV-compatible receivers from a number of manufacturers such as Humax of South Korea. Also significant is that HbbTV software stacks are now incorporated in leading hybrid chip sets from Broadcom and Sigma Designs. Most major players in the connected TV arena are now members of the HbbTV Forum.

The success of HbbTV can be put down to three factors: its flexibility; its foundation on existing standards that are being implemented anyway as part of OTT and IPTV deployments; and support from industry groups, notably the European Broadcasting Union (EBU), whose influence extends outside the continent.

The EBU is hoping that the Olympics will give HbbTV a big lift, and has laid the ground by offering “white-label” HbbTV applications free of charge to its members. These apps are currently being customized and rebranded for deployment just ahead of the games, when they will deliver interactive services to peak audiences.

Equally crucial is the approval of the Open IPTV Forum (OIPF), which has emerged as the major body forging standards for OTT and general online video service delivery over unmanaged infrastructures, as well as IPTV within closed ‘walled garden’ networks.

The key component for OTT that has been adopted by HbbTV is the OIPF’s Open Internet Profile, based on its Declarative Application Environment (DAE), which is a browser for TVs with support for various presentation mechanisms including HTML4, HTML5, SVG, and CE-HTML.

“Its key component is the set of JavaScript objects which permit the manipulation of media for both content on demand as well as live streaming, interactions with local and remote storage, and control of adaptive streaming,” said Nilo Mitra, OIPF president. “This specification is reused by the HbbTV Consortium for providing catch-up services via broadcaster portals, and is now implemented in retail TVs sold by major manufacturers in several EU regions,” Mitra added, arguing that this was the only fully open specification for content delivery over unmanaged networks available today.

The OIPF specification includes a mechanism for adaptive delivery of MPEG-2 transport streams over HTTP, and this has been incorporated into one of the profiles of the newly published ISO DASH specification, according to Mitra. DASH is likely to become the standard mechanism for adaptive streaming, taking over from existing proprietary systems such as Microsoft Smooth Streaming, and possibly Apple’s HLS (HTTP Live Streaming).

Support for DASH streaming was an addition to HbbTV. But, at the outset, it had the right foundation by being built on two relevant and mature technology sets, one being web standards already included in web browsers for embedded devices, and the second being the Digital Storage Media Command and Control (DSM CC) specification already part of the MHEG-5 interactive platform adopted in several countries, including the UK and Australia.

Use of existing web standards provided the basis for broadband access, while DSM CC specified a common approach for interactive services uniting two way online delivery with one way broadcast. DSM CC, therefore, delivered the hybrid component, and although it is a complex set of technologies the basic principle is simple. The aim is to facilitate control over transport of audio and video streams within interactive services in both a bi-directional environment such as a cable TV or VOD system, and also a uni-directional service such as satellite or digital terrestrial.

The challenge was how to simulate interactivity in a one-way environment when there was no return path. The simple answer was to adopt a carousel approach — hence the name. As the receiver in a traditional broadcast environment has no return path and so cannot request specific files from a server, DSM CC periodically transmits every file, and the receiver then grabs the ones it wants as they pass by on the carousel. Of course, this is not particularly efficient, since if a receiver misses a file it has to wait for it to come round again. But, techniques have been developed and embodied in HbbTV to improve the interactive performance for one-way broadcast services.
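
As a rough illustration of that inefficiency (an arithmetic note, not from the article): if a carousel carries S bits of data at a bitrate R, its cycle period is T = S/R, so a receiver tuning in at a random moment waits on average about half a cycle, and in the worst case a full cycle, for a given file:

    T = \frac{S}{R}, \qquad \bar{t}_{\text{wait}} \approx \frac{T}{2}, \qquad t_{\text{max}} \approx T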

It will become clear during the rest of 2012 how well HbbTV does perform in a variety of hybrid environments including those involving one-way broadcast, with the Olympics providing useful feedback. But, HbbTV has still not convinced all its doubters, even in Europe. Italy is the notable outsider, having gone its own way by deploying MHP for hybrid services.

The UK is the other major odd one out, since its much-delayed connected TV platform, YouView, took a different approach. However, the UK is itself a bit of a hybrid, since the Digital TV Group (DTG), responsible for digital TV and particularly terrestrial standards in the UK, developed an extension to MHEG-5 called the MHEG-5 Interaction Channel (MHEG-IC), allowing broadcast interactive services to be delivered via an IP connection. This made it more like HbbTV, and, subsequently, the DTG has fully endorsed HbbTV for Freeview DTT.

By Philip Hunter, Broadcast Engineering

NHK 33 Megapixel 120fps Ultra High Definition Imaging System

NHK, in conjunction with Shizuoka University, has developed an Ultra High Definition imaging system that outputs 33MP video at 120fps.

As Ultra High Definition broadcasts at full resolution are designed for large, wall-sized displays, there is a possibility that fast-moving subjects may not be clear when shot at 60fps, so the option of 120fps has been standardized for these situations.

To handle the sensor output of approximately 4 billion pixels per second with a data rate as high as 51.2Gbps, a faster analog-to-digital converter has been developed to process the data from the pixels, and then a high-speed output circuit distributes the resulting digital signals into 96 parallel channels.
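
These figures are mutually consistent (an arithmetic check, not from the article): 33 megapixels at 120 frames per second is roughly four billion pixels per second, and the 51.2Gb/s output split over 96 channels is a little over 500Mb/s per channel:

    33\times10^{6} \ \text{px} \times 120 \ \text{fps} \approx 4\times10^{9} \ \text{px/s}, \qquad \frac{51.2 \ \text{Gb/s}}{96} \approx 533 \ \text{Mb/s per channel}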

This 1.5-inch CMOS sensor is smaller and uses less power when compared to conventional Ultra High Definition sensors, and it is also the world's first to support the full specifications of the Ultra High Definition standard.

NHK now plans to increase the light sensitivity of this Ultra High Definition sensor.


by Don Kennedy, DigInfo

NHK Hybridcast: Making Broadcast TV Interactive

Hybridcast is an infrastructure system being developed by NHK, with a view to commercial use in 2013. This system combines broadcasting with the Internet, to enable a variety of TV-centered services.

At this year's NHK Science & Technology Research Laboratories Open Day, NHK exhibited prototype receivers, developed together with manufacturers, as well as the service concept.


Source: DigInfo

DPP Unveils Digital Workflow Guide

The Digital Production Partnership (DPP) has unveiled a major industry report, The Bloodless Revolution: A Guide to Smoother Digital Workflows in Television. The report is the first published guidance on digital workflows to be issued on behalf of ITV, Channel 4 and the BBC. It seeks to help producers and suppliers achieve a smoother transition to fully digital production.

The new guide follows the publication of the DPP’s report on breaking down the barriers to digital production, The Reluctant Revolution (September 2011). One of the claims made in the first report was that the pace of change in the industry was held back by a lack of commonly agreed ways of working. It observed that greater guidance is needed if the industry is to complete its move from tape-based to file-based production.

The DPP’s new report now provides such guidance. It sets out to identify the smoothest, most efficient digital workflows for use with currently available technology, while providing sufficient background information to help maintain a view of the wider production landscape. It also identifies opportunities for collaboration, cost saving, and better creative outcomes.

The guide sets out a clear high-level workflow as a framework for providing information, guidance and direction to digital production workflows. The overall process has been broken into four steps:
• Planning — covers the process up to the point of shooting, including the different conventions and practices that need to be adopted right at the outset;
• Rushes management — looks at the capture and handling of content on location or in studio up to the point of rushes archive and management;
• Post production — goes from the ingest of material for editing through to completion of the master; and
• Delivery — the production of masters for delivery to broadcasters, clients or the audience.

The report was commissioned by the DPP from industry analysts MediaSmiths International. Its starting point was the views and experiences offered by dozens of attendees from all over the UK at the DPP’s regular industry forums.

From the outset ‘The Bloodless Revolution’ acknowledges that programme makers have no desire to see their world reduced to a series of workflows. Many may feel that by over-describing the process, the magic of television production will be driven out.

But the report goes on to offer a user-friendly map by which to navigate the potentially complex processes of file-based production – and in so doing offers a guide that, while first appearing analytical, is actually liberating.

Source: Digital Production Partnership

ITU Sets Standards for Ultra HD

The London Olympics will provide the first live trials of Ultra High Definition TV (UHDTV) based on standards finally agreed last week by the International Telecommunication Union (ITU) after a decade of research and development in which the EBU was heavily involved.

The ITU has defined the UHDTV standards, 4K and 8K, as multiples of the existing 1080p1920 format defined in the ITU-R Rec. 709 standard. HD 1080p, at present often referred to as full HD, displays at a resolution of 1920 pixels wide by 1080 high in progressive scan, corresponding to a widescreen aspect ratio of 16:9. Various frame rates are supported including 24, 50 and 60.

4K is defined simply by doubling 1080p1920 in each direction to yield pictures with four times the spatial resolution, at 3840 pixels wide by 2160 high, or about 8 megapixels. 8K then doubles up again to a resolution of 7680 wide by 4320 high, spatially 16 times 1080p, or about 33 megapixels. But, the ITU has also added support for a higher frame rate option of 120, which experiments have shown may be necessary for accurate portrayal of motion at these very high resolutions on large, wall-sized displays.
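
A quick check of those pixel counts (an arithmetic note, not from the ITU text):

    3840 \times 2160 = 8\,294\,400 \approx 8.3 \ \text{Mpixel}, \qquad 7680 \times 4320 = 33\,177\,600 \approx 33 \ \text{Mpixel}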

Without the corresponding increase in frame rate, there is a danger that UHDTV will display brilliant images of slow-moving action, but then exhibit slight jerkiness for high-speed shots, in some sporting events for example.

The bandwidth implications of these standards will alarm some operators and broadcasters, for an 8K programme running at the full 120 frames per second would require 320 times the bit rate of current HD transmissions given that these are often not yet even 1080p, but usually either 720p or interlaced 1080i. But, as the EBU pointed out, these new standards are unlikely to start working their way into mainstream transmissions for the best part of a decade.

That will coincide with the introduction of new frameless displays in which picture size can vary, and that blend into the background when not in use. Some vendors of pay-TV software are already developing platforms in anticipation of Ultra HD delivery to such large screens. For example, UK-based conditional access and middleware vendor NDS has a platform called Surfaces that was first demonstrated delivering 4K UHDTV to large displays at IBC 2011 in Amsterdam.

Most broadcasters will first upgrade to full HD at 1080p, which will for now meet all quality expectations, certainly for screens up to 60 inches diagonal. For example, in the UK, the Freeview HD platform used by the BBC and ITV has been specified to provide full HD capability.

But although UHDTV may be some years away for TV, the 4K version has already been adopted for digital cinematography and computer graphics, using slightly different resolutions than the new ITU standard. 4K is also supported by YouTube, the only video hosting service to do so, in a different version again, allowing uploading of 4K videos at a resolution of 4096 x 3072 pixels, or 12.6 megapixels.

By Philip Hunter, Broadcast Engineering