ISO Vote on 3D Safety Guidelines Closes Nov. 7

At the 3D @ Home User Experience Technical Conference (UETC) in September, we had a chance to hear more about standards development activities based on initiatives from Korea and Japan. In subsequent email dialogs with the Japan group, we have now learned that ISO (International Organization for Standardization) is closing a ballot on drafting 3D safety guidelines on November 7. Information on these guidelines can be downloaded here.

This is just the first phase of the process, however. Following the vote, a period of discussion on the voting results and comments will follow, leading to the development of a working draft document by April 2012. This will be circulated again, followed by a series of votes.

Japan issued its first draft 3D safety guidelines document back in 2002 (published by JEITA) and began working with ISO in 2004 via an international workshop. Through those discussions, the group concluded that guidelines are both necessary and important for 3D viewing comfort and safety.

Current activity is underway in a new working group called WG 12 on Image Safety, which is under ISO/TC 159 (Ergonomics) / SC 4 (Ergonomics of human-system interaction). The purpose of these discussions is, "To provide requirements and recommendation from a viewpoint of ergonomics for reducing the potential for visual discomfort, asthenopia and visual fatigue when viewing stereoscopic images."



The group’s efforts regarding stereoscopic images are focused on the ways to mitigate the effects of Visually Induced Motion Sickness (VIMS) and Visual Fatigue caused by Stereoscopic Images (VFSI). The latest version of the full safety guidelines document, developed by the 3D Consortium (3DC), JEITA and AIST, was released last year and covers hardware, content and viewer responses. This is a Japanese-only document now, but English and Chinese versions should be available shortly.

Hiroyasu Ujike, a researcher at the National Institute of Advanced Industrial Science and Technology (AIST), is leading the efforts with the ISO. In an email exchange, he noted, "Based on discussions in the WG, I would like to find out and build a common framework for 3D Image Safety, and share it as 'international guidelines.'"

Potential areas of discussion include:
  • Interocular difference of images, as optical stimuli, in terms of geometrical distortions, luminance, etc.
  • Binocular parallax and disparity
  • Enhancement of 2D problems by the stereoscopic presentation
  • Temporal changes in the above items
  • Viewing environment and viewing conditions

By Chris Chinnock, Display Daily

Understanding AVCHD

When DVCAM, DVCPRO and DVCPRO50 were introduced, manufacturers positioned these proprietary formats as “professional” compared to the “consumer” DV format. After working with all four formats, it became clear that differences were confined to their tape recording system. DV, DVCAM, DVCPRO and DVCPRO50 all use the same video codec. (DVCPRO50 employs dual 25Mb/s DV codecs.)

AVCHD, developed jointly by Panasonic and Sony, is a proprietary version of H.264/AVC. Specifically, AVCHD employs both the H.264 Main Profile (MP) and High Profile (HP). The HP codec provides important image quality advantages over the MP codec. Thus, although AVCHD is marketed as a single codec, it uses a pair of codec profiles. (The HP codec is downward compatible with the MP codec.) Moreover, although AVCCAM and NXCAM are marketed as professional formats, both use the same AVCHD HP codec. As you can see, understanding AVCHD, AVCCAM and NXCAM is more complex than understanding DVCAM, DVCPRO and DVCPRO50.

Figure 1 - HD H.264/AVC profiles and levels


Baseline Profile
The lowest profile used by an HD camera is BP. BP supports only the less efficient context-adaptive variable-length coding (CAVLC). Level 3.1 supports 720p30 at up to 14Mb/s, while Level 3.2 and Level 4.0 support 720p60 at up to 20Mb/s — although at such a low data rate, only 720p30 would be visually acceptable. Level 4.1 supports 720p60 at up to 50Mb/s.

Main Profile
MP offers the next performance level. MP supports both CAVLC and the more efficient context-adaptive binary-arithmetic coding (CABAC). MP also supports B-slices in addition to I- and P-slices. Because B data packets provide H.264 with its greatest encoding efficiency, MP decreases the probability of compression artifacts during rapid motion. AVCHD uses MP and higher profiles.

A B-reference is generated when two motion vectors are defined from the displacement between the Current Block and Reference Blocks. With H.264, “bi” means two vectors — not two directions as it does for MPEG-2.

Several levels may be used with MP. Level 4.0 supports 720p59.94 and 1080i59.94 up to 20Mb/s (17Mb/s), while Level 4.1 supports data rates up to 50Mb/s (22Mb/s to 24Mb/s). The ability of Levels 4.0 and 4.1 to support 1080i59.94 means that 23.976fps can be recorded after applying 2:3 pulldown. This capability also means that 1080p29.97 can be recorded as 1080i59.94/29.97PsF because its frame rate is equal to the 29.97fps used by 1080i59.94.

High Profile
HP offers all the capabilities of MP (CABAC coding and B-slices) plus an optional capability that greatly improves codec efficiency — the ability to dynamically switch between 8 × 8 and 4 × 4 submacroblocks during compression. Image areas with high detail are compressed using 4 × 4 pixel blocks, while areas with low detail are compressed using 8 × 8 pixel blocks. The latter generates less data; therefore, more bandwidth is available for data from areas with fine detail.

During encoding, each 16 × 16 pixel macroblock is partitioned into four 8 × 8 submacroblocks and 16 4 × 4 submacroblocks. The encoder can switch among working with 16 × 16 blocks, 8 × 8 blocks and 4 × 4 blocks.
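As a minimal sketch of that partitioning, assuming a macroblock held as a 16 × 16 NumPy array of luma samples, the helper below splits it into 8 × 8 or 4 × 4 submacroblocks. The function name and the stand-in data are purely illustrative; this is not encoder code.

```python
# Illustrative partitioning sketch only; a real encoder works on actual luma data.
import numpy as np

def partition(macroblock: np.ndarray, size: int):
    """Split a 16 x 16 macroblock into size x size submacroblocks (size = 8 or 4)."""
    blocks = []
    for y in range(0, 16, size):
        for x in range(0, 16, size):
            blocks.append(macroblock[y:y + size, x:x + size])
    return blocks

mb = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in luma macroblock
print(len(partition(mb, 8)))   # 4 submacroblocks of 8 x 8
print(len(partition(mb, 4)))   # 16 submacroblocks of 4 x 4
```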

Figure 2


When predictions are made for 16 × 16 macroblocks, four modes are used:

Figure 3


When predictions are made for 8 × 8 submacroblocks, nine modes are used:

Figure 4


Canon AVCHD camcorders were the first to use HP H.264. Shooters quickly found MP software decoders were unable to decode Canon recordings.

An HP encoder supports 720p59.94 and 1080i59.94 using multiple levels. Level 4.0 supports data rates up to 20Mb/s (17Mb/s). Level 4.1, used by AVCHD, AVCCAM and NXCAM, supports data rates up to 50Mb/s (22Mb/s to 24Mb/s). Blu-ray employs Level 4.1 using a video data rate up to 40Mb/s.

Level 4.2, available in camcorders using AVCHD 2.0, supports a data rate up to 50Mb/s (28Mb/s) for 1080p59.94. When AVCHD is recorded on a DVD, the disc's maximum spin speed limits the data rate to 17Mb/s. Therefore, when you shoot either MP or HP Level 4.1, or HP Level 4.2, you will not be able to archive to a DVD.
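To make the various ceilings above easier to compare, the sketch below collects the data rates quoted in this article into a small Python lookup table, together with the 17Mb/s DVD limit. The table and the fits() helper are illustrative only; the figures are the ones given here, not a reading of the full H.264/AVC level definitions.

```python
# Illustrative lookup of the profile/level data-rate ceilings quoted in this
# article (not an exhaustive reading of the H.264/AVC specification).
MAX_RATE_MBPS = {
    ("BP", "3.1"): 14, ("BP", "3.2"): 20, ("BP", "4.0"): 20, ("BP", "4.1"): 50,
    ("MP", "4.0"): 20, ("MP", "4.1"): 50,
    ("HP", "4.0"): 20, ("HP", "4.1"): 50, ("HP", "4.2"): 50,
}
DVD_LIMIT_MBPS = 17   # ceiling imposed by a DVD's maximum spin speed, per the article

def fits(profile: str, level: str, rate_mbps: float) -> bool:
    """True if the requested rate is within the ceiling quoted for that profile/level."""
    return rate_mbps <= MAX_RATE_MBPS[(profile, level)]

print(fits("HP", "4.1", 24))      # True: a typical 24Mb/s AVCHD recording
print(24 <= DVD_LIMIT_MBPS)       # False: too fast to archive to DVD
```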

GOP Structure
Each frame is encoded as one or more I-, P- and B-slices. Typically, every half-second, an H.264 encoder outputs an I-frame — a picture with all intra-encoded slices.

Audio Encoding
H.264/AVC encodes stereo audio using AAC or LPCM audio. AVCHD audio is restricted to AC-3 Dolby Digital 2.0 stereo or 5.1 surround. (NXCAM camcorders record uncompressed audio using PCM sampled at 48kHz.)

H.264/AVC I- and P-slice Encoding
One of the many characteristics of H.264/AVC that makes it difficult to understand is its use of terms similar to those used when discussing MPEG-2 — for example, “I,” “P” and “B.” An H.264 I-slice is a portion of a picture composed of macroblocks, all of which are based upon macroblocks within the same picture.

Thus, H.264 introduces a new concept called slices — segments of a picture bigger than macroblocks but smaller than a frame. Just as there are I-slices, there are P- and B-slices. P- and B-slices are portions of a picture composed of macroblocks that are not dependent on macroblocks in the same picture.

H.264 encoding begins by chroma downsampling to 4:2:0. Next, each incoming picture is divided into macroblocks. (When interlaced video is encoded, both fields are compressed together.) Many of the same techniques used to compress an MPEG-2 I-frame are used to compress macroblocks making up an I-slice. Each 16 × 16 pixel macroblock is further partitioned into four 8 × 8 submacroblocks. (See Figure 2.) The encoder can switch between working with 16 × 16 blocks and 8 × 8 blocks.

Blocks, of course, are located next to other blocks. For example, the Current Block (yellow) in the Figure 1 frame to be encoded has a block to the left (green) and a block above (blue). The latter two blocks are Previous Blocks. Reference Pixels are located at the left (dark green) and lower (dark blue) boundaries between Previous Blocks and the Current Block. Four different types of prediction methods (modes) are used with 16 × 16 macroblocks. (See Figure 3.)

When predictions are made for 8 × 8 submacroblocks, nine modes are used. (See Figure 4.)

In all cases, the mode that best predicts the content of the Current Block is selected as the Current Prediction Mode. The Current Prediction Mode is linked to the Current Block. Each Predicted Block (from the column and row of Reference Pixels) is “subtracted” from the Current block, thereby generating a Residual (difference) Block. Each Residual Block is compressed, linked to the Current Block, and during decoding used as a picture “correction” block.
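A toy illustration of that prediction-and-residual step, assuming small NumPy arrays stand in for the Current Block and a few candidate Predicted Blocks. The mode names and sample values are invented for the example; a real encoder's mode decision is far more involved.

```python
# Toy illustration only: values and mode names are invented for the example.
import numpy as np

def residual(current: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Residual (difference) block: what the decoder later adds back as a 'correction'."""
    return current.astype(np.int16) - predicted.astype(np.int16)

def best_mode(current: np.ndarray, candidates: dict) -> str:
    """Pick the prediction mode whose predicted block leaves the smallest residual."""
    return min(candidates, key=lambda m: np.abs(residual(current, candidates[m])).sum())

cur = np.full((4, 4), 120, dtype=np.uint8)
modes = {
    "vertical":   np.full((4, 4), 118, dtype=np.uint8),   # toy predicted blocks
    "horizontal": np.full((4, 4), 90,  dtype=np.uint8),
    "dc":         np.full((4, 4), 121, dtype=np.uint8),
}
print(best_mode(cur, modes))   # 'dc' leaves the smallest residual here
```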

Once an I-slice has been encoded, P-slices are encoded. Motion estimation is methodically performed, and macroblocks in other frames are searched for the contents of the Current Block. H.264 supports searching within up to five pictures before or after the current picture. (AVCHD supports searching within four pictures.) Obviously, the greater the number of reference pictures used, the greater the memory that must be in an encoder. For this reason, AVCHD cameras typically only support one or two reference frames.

The block with the best measured content match becomes a Reference Block. A P-reference is generated when only a single motion vector is defined by the displacement between Current and Reference Blocks. Each motion vector and each P-slice compressed Residual Block are linked to a P-slice.
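The sketch below illustrates the basic block-matching idea behind a P-reference, assuming a toy full search over a small window with a sum-of-absolute-differences (SAD) cost. Real encoders use far more sophisticated, often proprietary, search strategies; the function name and frame data are illustrative.

```python
# Toy full-search block matching; real motion estimation is far more elaborate.
import numpy as np

def motion_search(current_block, reference_frame, block_xy, search=4):
    """Return the single motion vector (dx, dy) with the lowest SAD, plus that SAD."""
    bx, by = block_xy                      # block position: x = column, y = row
    h, w = current_block.shape
    best_vec, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + h > reference_frame.shape[0] or x + w > reference_frame.shape[1]:
                continue                   # candidate falls outside the reference frame
            candidate = reference_frame[y:y + h, x:x + w]
            sad = np.abs(current_block.astype(int) - candidate.astype(int)).sum()
            if sad < best_sad:
                best_vec, best_sad = (dx, dy), sad
    return best_vec, best_sad

ref = np.random.default_rng(0).integers(0, 255, (32, 32), dtype=np.uint8)
cur = ref[10:18, 12:20]                    # a block that exists in the reference, shifted
print(motion_search(cur, ref, (8, 10)))    # recovers the displacement (4, 0) with SAD 0
```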

By Steve Mullen, Broadcast Engineering

AMWA Draws Up Delivery Spec

AS-02, which has been in development since 2007, is designed for use by post-production facilities, broadcasters and distributors that face the challenge of distributing programmes to a variety of platforms.

“A particular episode of a programme may need to be rendered to tens of different versions for linear broadcast, on-demand platforms, use by airlines and so on,” AMWA executive director Brad Gilmer told Broadcast. “Together with audio tracks and subtitle files, that might result in hundreds or thousands of elements that need to be held together.”

AS-02 will be a specific way of using MXF (Material Exchange Format) with video codecs, supporting MPEG-2, H.264 and JPEG 2000. It will also wrap multiple mono, stereo and surround audio tracks, and carry subtitles.

The application specification is currently undergoing AMWA’s intellectual property rights (IPR) review process. The review closes on 11 November, with ratification of the spec expected to take place at AMWA’s board meeting on 14 November.

A statement from Avid described the spec as “a cornerstone of the future content creation paradigm”. It said: “With migration of physical to file-based workflows largely complete, the coming phase of media enterprise consolidation will be predicated upon technologies like AS-02.”

Gilmer added: “There is a lot of interest in this from companies facing the challenges of professional media distribution. It used to be easy to distribute content when you sent a tape to playout and everyone watched the programme at the same time, but now people watch on myriad devices and the goal is to get content to as many platforms as possible.”

By George Bevir, Broadcast

BXF Explained

For years, broadcast automation systems and business systems have needed manual access and conversion interface applications to convert metadata to and from their respective systems. The multitude of proprietary interfaces is difficult to keep up with, especially as system upgrades and enhancements are added to either side.

The new SMPTE 2021 BXF 1.0 schema standard is one of the biggest advances in broadcast automation in this decade. The holy grail of automation has always been to provide a system that uses a central database for metadata between traffic and master control. Since centralizing a database between business systems and master control/operations is easier said than done, the next best thing is to standardize on a communication schema for the exchange of mission-critical data.

Technology standards are needed to organize varying systems and technologies. While manufacturers offer the promise of tight integration between varying systems, they still offer varied proprietary systems. BXF changes that. The new open SMPTE schema standard levels the playing field for manufacturers. By enabling their systems to work within the protocol's framework, manufacturers can assure broadcasters of getting nonproprietary full-feature metadata conversions and messaging systems.

History and Stats
In 2008, SMPTE developed and published a schema standard called BXF (Broadcast Exchange Format) 1.0 or SMPTE 2021. In a nutshell, BXF was developed to replace the various archaic text conversion schemas that have been developed over the years to interface, access and transfer schedules, playlists, dubs lists, record lists, delete lists, etc., from business systems to automation systems.

Today, SMPTE representatives note there are dozens of manufacturers that have developed applications and workflow systems using the BXF schema. There have been more than 150 national and international SMPTE members, including industry-leading manufacturers, involved in the development and enhancement of BXF. In this new digital world of broadcasting where multichannel, multimedia operations are the norm, the BXF schema standard helps manufacturers build applications for automating processes and procedures to next-generation enterprise levels.

The current BXF 1.0 includes an XML Schema Definition (XSD) collection for schedules, as-run, content, content transfers, etc. The BXF schema helps manufacturers simplify and automate the communication and workflow between a broadcaster's diverse business and transmission systems such as traffic, program management, content delivery and automation. The master control and traffic departments are the most common broadcast users. When properly implemented, BXF-based applications automate the workflow process, streamline operations, maximize the value of content and inventory, and increase flexibility for sales and advertisers.

As an XML-based communication schema, BXF allows for near-real-time messaging and updating between disparate systems. The XML-based messages include instructions about program or interstitial changes, allowing an automated approach to as-run reporting and schedule changes. Other BXF capabilities include near-real-time dub orders, missing spots reports and content management.
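To give a feel for what near-real-time XML messaging between traffic and automation can look like, here is a hypothetical Python sketch. The element names and the function are invented for illustration and are not taken from the actual SMPTE 2021 (BXF) schema.

```python
# A hypothetical sketch of near-real-time XML messaging between traffic and
# automation. The element names below are invented for illustration and are
# NOT taken from the actual SMPTE 2021 (BXF) schema.
import xml.etree.ElementTree as ET

def build_schedule_change(event_id: str, house_id: str, air_time: str) -> bytes:
    msg = ET.Element("ScheduleChangeMessage")          # hypothetical element names
    event = ET.SubElement(msg, "Event", id=event_id)
    ET.SubElement(event, "HouseId").text = house_id
    ET.SubElement(event, "AirTime").text = air_time
    return ET.tostring(msg, encoding="utf-8")

print(build_schedule_change("EVT-1001", "SPOT1234", "2011-11-07T18:30:00").decode())
```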

In the past, a phone call to or from traffic was the norm, and seeing a traffic department representative in master control to make changes to the paper schedule was usually a daily event. In today's world, business departments need to know exactly when a program or interstitial has aired and whether it aired correctly, and they need to know it as soon as possible.

Revenue Optimization
One of the most important factors about BXF-based applications is that they allow the decision-making aspects of master control schedule changes to be made in the traffic department. Traffic personnel can maximize revenue opportunities by providing lucrative replacements to any missing spot scenario. Or, when lucrative missing “copy” finally arrives and is ingested into playout video servers, traffic can make decisions on which interstitials/programs to drop and replace. Traffic has advertiser contract information giving them the ability to switch programs and interstitials to more lucrative advertisers.

The sales department also benefits from BXF-based applications. Because of the automated near-real-time fashion of the BXF messaging schema, the sales department can make last-minute, higher-revenue interstitial or program additions to the on-air schedule. So while BXF schemas lower costs through standardizing, streamlined processes and minimized manual changes/inputting, they also generate more revenue through revenue optimization.

Comprehensive Event Structure
In creating playout schedules, the goal is to create a schedule with the minimum, most efficient amount of effort. BXF-based applications simplify the creation of complex multiline event situations by automating the creation of multiple event lines within a playout schedule. In the most efficient configuration, traffic does little to activate a complex playout scenario such as a live news break. For traffic personnel, it may be as simple as creating a one-line traffic schedule with a predefined identification number. A BXF-based application and the master control automation system take that one-line traffic schedule and convert it into a complex multiline playout schedule with all the needed secondary events. If BXF-based applications are properly configured with predefined conversion rules, master control personnel are not saddled with creating or fixing complex multiline event structures.

Latest Applications
News production automation is the latest craze in broadcast automation. A handful of manufacturers have developed systems to automate live newscast productions. The more advanced news production automation systems repurpose content for distribution via Internet, mobile devices, VOD and syndication. A key aspect of these systems is the ability to monetize content assets. Interfacing with traffic and billing systems, via BXF-based applications, helps to maximize advertising avails to other platforms. BXF-based applications automate the heavy lifting of scheduling, changing and verifying ads in live on-air and live streaming productions.

Content Metadata Management
Beyond schedules and as-runs, access and distribution of database metadata is another of BXF's benefits. Business systems such as sales, programming and rights management use BXF-based schemas and applications to automatically populate centralized data warehouses with cost and scheduling data. The master control automation database can be populated with extensive and accurate metadata from traffic systems. Media Asset Management (MAM) and Digital Asset Management (DAM) systems use database information from business systems also. News production systems use BXF-based applications to automate schedule changes and verify information for on-air, VOD, mobile and IPTV schedules. BXF-based applications and features can allow for the exchange of metadata among systems that may not have direct access to content.

Content Movement Instructions
As rich media content moves from place to place, the metadata associated with this content moves also. This usually is a manual process or one with error-prone work-arounds such as hot-folders. Today, there are BXF-based applications that can automate the transfer of metadata that originates from advertising agencies and business systems to master control, nearline and archive MAM/DAM systems.

For example, let's say traffic makes a change request via a BXF schema message to master control, and a new interstitial is added to the master control playout schedule. Once the message is accepted by master control and the event is added to the schedule, the master control system will begin searching for that rich media within its automation database. If the rich media is located on a nearline and/or archive system, the master control automation or MAM/DAM system will activate a transfer request for that rich media. Metadata from the business systems will populate the master control and media asset management systems database. BXF-based applications can create move-instruction messages to activate a system's physical transfer of content from source to destination.

The Spotlight Moves to Business Systems
As BXF-based applications become more popular, we can see business systems playing a larger role in the control and monitoring of broadcast production systems such as master control automation, MAM, DAM, etc. It's clear that improving and advancing operations, procedures and workflows that are upstream of master control is now more important than ever for broadcasters. The spotlight will shift to the traffic, programming, sales and rights management systems. For example, it makes sense for traffic to be responsible for master control metadata and schedule changes. With advertiser contracts in hand, the traffic department has the information to make the best possible decisions.

Cost Versus Benefit
We've mentioned many times during this report that BXF-based applications and their open-standard schema save on costs. To work out how much, first assign a cost and a value to each aspect of the workflow and operation: multiply personnel and wage costs by the hours spent transferring files, manually updating databases, correcting schedules, and entering and correcting data, plus the e-mails, phone calls and meetings. In short, define how much time and effort is being expended by operating in a manual mode.

Value is the next factor. What is the average value of your interstitials and programs? How much revenue would be lost if an interstitial or program did not air, or aired incorrectly and required a make-good? Value can also mean potential revenue: with automated processes, last-minute changes can generate additional revenue, and near-real-time updating constantly shows commercial avails. These benefits have value. Value can also be assigned to your on-air look. How do you compare to the competition? Automated systems by definition give you a higher up-time percentage and a better on-air look than stations without automation.

Implementation
Implementing BXF-based applications involves hardware, software and a good amount of workflow changes. The majority of a BXF implementation is reorganizing and revamping your workflow process. In fact, you'll spend more time on redefining duties and tasks than you will with the physical implementation of hardware and software. In physical terms, the BXF-based applications and their schemas run best on server-class hardware with modern network accessibility to all parties involved.

To implement BXF in your facility, you must first understand the needs. Then, understand how BXF will benefit your system. You must also understand the manufacturer and its integration of BXF schema standards in its products. Once you've pinpointed the areas where BXF-based applications can be used, devise a plan. Creating a diagram and documenting is always a good first step.

Even though automating simplifies an operation, it's only smart to have accurate documentation. The main reasons for documentation include training new staff, troubleshooting issues and supporting future configuration changes or enhancements. Test offline and verify the results. Train staff on how the new processes and procedures will work, and then activate your BXF-based applications.

BXF 2.0
The SMPTE BXF standard and schema are alive and constantly evolving. SMPTE representatives note there are big advances coming in the next version of BXF. SMPTE balloting and voting are still required, but there are a few new advances worth noting. If voting passes, the next version of the BXF standard will provide support for simultaneous program events in master control.

A simultaneous program event scenario occurs when a closing-credits DVE squeeze runs while the next program starts. BXF will properly report timestamps and durations for programming and interstitials. Previously, secondary automation events such as DVEs, logos, crawls, animation keys, etc., were considered nonprogram events. In BXF 2.0, the plan is that secondary events can be identified as program events for proper automation as-run reporting.

Multilanguage support is also planned for BXF Version 2.0. If committee voting passes, the BXF schema will be enhanced to allow for multiline, noncontrol program titles that can be placed on the schedule in multiple languages. The noncontrol information lines are used by program managers to properly schedule and verify, via as-run, multilanguage programming. Master control operators will also benefit by knowing whether a program will run on other output channels in another language or whether the program has multilanguage audio channels.

The Future
There are many enhancements coming in future releases of the BXF schema standard. Most notable is how the BXF schema will be used in applications to interface with rich-media MXF files. BXF-based applications will someday have the ability to map and extract metadata from MXF files. For example, if a station or network receives an MXF file from a distributor, a BXF schema-based application can extract the metadata from the MXF file without having to wait for a hard copy sent separately via paper timesheet or e-mail.

Combining metadata with rich media is a common operation in many applications for European broadcasters. For example, extracted metadata is automatically entered into the master control automation system for playout. Databases in master control and traffic for spot or programming metadata are not as common there as they are in the U.S.

The EBU, the Advanced Media Workflow Association (AMWA) and their Framework for Interoperable Media Services (FIMS) initiative are working to improve how metadata and rich media are managed in a Service-Oriented Architecture (SOA) environment. It is hoped that the output of this initiative will soon be brought to SMPTE for due-process standardization.

We can also expect more rights management support in the future. As our industry is quickly moving from multichannel to multichannel/multimedia operations, rights management is more important than ever. Both broadcasters and content owners will benefit by accessing near-real-time information regarding their content. BXF schema-based application manufacturers are working to make these options and features a reality.

Thus far, advertising agencies have not used BXF. SMPTE representatives hope that one day ad agencies will also be able to benefit from BXF. National advertising and content metadata begin with advertisers and ad agencies. By adding ad agencies to the broadcasting workflow, metadata accuracy can be improved and operations can be streamlined. For example, today's interstitials carry a unique agency identification code. If agencies used BXF-based schemas and applications, this identification code would stay with the metadata throughout the entire end-to-end workflow. The metadata would begin at content creation, then stay through advertising buys, content distribution, playout, as-run, business reconciliation and finally verification, affidavit creation and billing.

Why BXF?
Many manufacturers think the adoption of the BXF schema standard shows a commitment to and support of a broadcaster's right to choose the best systems available. Inventory and revenue optimization work extremely well with the BXF standard in the mix. Competition is a good thing for the industry, and it raises the bar of functionality. Manufacturers are eager to compete to ensure broadcasters remain competitive in a fast-changing multichannel, multimedia digital world. Standards such as BXF are the best way to ensure that happens.

BXF schema-based applications are becoming an essential component of highly automated broadcast operations. The notion of both eliminating cumbersome manual file exchange and having a near-real-time exchange of data between production and business systems is a good example of how today's broadcast technology provides more functionality and requires less time to manage.

By Sid Guel, Broadcast Engineering

200-inch Full HD Glasses-Free 3D Display is World's Largest



Source: DigInfo TV

Jobs on Apple TV: “I Finally Cracked It”

Apple may be developing a full-fledged television, according to the new biography of Steve Jobs, which is published today in the US. “I finally cracked it,” Jobs told author Walter Isaacson. According to the Washington Post, the new book, “Steve Jobs”, contains a major product reveal: a proper connected TV set from Apple.

Rumours about a TV set from Apple have been circulating for months. It seems logical that after cracking the music market, Jobs had set his sights on also cracking movies and television. But so far, Apple has not succeeded in producing a similar solution for the TV screen.

At the moment, the company sells a set-top box that company officials have called a “hobby.” “He very much wanted to do for television sets what he had done for computers, music players, and phones: make them simple and elegant,” Isaacson wrote.

Isaacson continued: “‘I’d like to create an integrated television set that is completely easy to use,’ he told me. ‘It would be seamlessly synced with all of your devices and with iCloud.’ No longer would users have to fiddle with complex remotes for DVD players and cable channels. ‘It will have the simplest user interface you could imagine. I finally cracked it.’”

So far, other connected TV ventures from software companies, such as those from Microsoft and Google, have failed to take off. However, Apple has a very powerful tool in its hands with the iTunes ecosystem, which could be adapted for television.

During the past few months, Apple has registered a number of patents related to its TV product. The Patently Apple blog from watcher Jack Purcher has opened an archive on all posts about Apple patents related to the Apple TV.

By Robert Briel, Broadband TV News

3D Broadcasting Dominates IBC

3DTV was arguably the hot topic at IBC, thanks largely to the presence of James Cameron, who made at least six public appearances; the theme is explored further in this IBC round-up.

The central message from the show was that if 3D is to go mass market then a hardboiled business model needs cementing. The emphasis was on the production aspect of that model in acknowledgement that without practical and inexpensive technology and workflows the content gap needed for 3D channels will not be filled.

“The current phase of 3D began with the clunky, cabled and complex approach,” noted Sony’s senior VP of engineering and SMPTE President, Peter Lude, in a conference session examining 3D’s future. “We are now into phase two which is about greater automation and computer analysis with the aim of making it easier to use rigs, correct errors and reduce manual convergence.”

That is something that Cameron Pace Group (CPG), 3ality Technica and others are working toward in the outside broadcast environment although the substitution of all convergence ops and stereographers by machines, which appears to be CPG’s line in the interests of economic efficiency, is a bone of contention.

“Just as you wouldn’t replace the creative skill of a camera operator who is framing a scene in accordance with the context of the action in front of them, so a convergence puller’s critical judgement can’t be easily replaced,” insists stereographer Richard Hingley.

Greater automation and more agile kit is needed and there is an argument, acknowledged even by their manufacturers, that stereo rigs are a rung on an evolutionary ladder.

“Rigs are large and cumbersome and heavy and a greater degree of electronics will help streamline the systems but quality stereo work can only be achieved using top of the range imagers and mirror systems which give a wide range of interaxial distance and control,” observed Florian Schaefer, product specialist at P+S Technik.

The counter argument can be heard from companies like Meduza Sales which has begun taking orders for its dual lens 4K-capable imager although cinematographers have yet to provide public feedback.

“It’s clear that rigs have a limited lifespan,” claims CEO Chris Cary. “Today’s S3D rigs are great cousins of the ones invented in 1905. The industry needs to move on and find the next generation which in our view is portable, high resolution, truly flexible systems.”

Panasonic says it has no interest in allying with a rig developer and is also strategising for a day when rigs become obsolete.

“Our starting point is to make 3D acquisition easy and mobile,” European Product Manager, Rob Tarrant stated. “That’s what we are doing with our first generation of integrated 3D cameras. With our range you get easier operation, mobile operation and truer 3D because the interaxial distance mirrors what we naturally see.”

Panasonic now has three integrated camcorders on the market, the latest of which, the HDC-Z10000 prosumer unit, includes a ‘black box’ technology that makes it seem as if the interaxial distance between the fixed lenses is adjusted during the shoot.

“One of the issues with twin lens cameras is the fixed interaxial which this macro convergence function helps overcome,” Tarrant added. “With it we can shoot objects as close as 45cm from the lens.”

Sony’s first professional integrated camcorder, the shoulder-mounted PMW-TD300, is also shipping, priced around €25,000, with an optional wireless link permitting remote control by the MPE-200 processor. Devised by Broadcast RF, the link will make steadicam action more feasible, and is something that Panasonic does not yet offer.

Presteigne Charter made the first purchase in Europe of this unit and will put the RF functionality to the test. According to Sony's 3D sports expert, Mark Grinyer: "By using the link the camcorder effectively looks to a convergence operator in an OB truck as if it were a 3D rig. This is something that live 3D sports productions in particular have been crying out for."

3D Rig Tweaks
Rig manufacturers continue to tinker with design to aid ease of use. P+S Technik was showing an adjustable riser as an accessory which enables Freestyle rigs to be tilted up and down. For live OBs the rig can now be integrated with Sony’s MPE-200 and HDFA-200 fibre multiplexer so that all the rig parameters including power, sync and genlock can be managed by a single cable.

Fellow European rig vendors were also showing expanded ranges, usually with lightweight versions for steadicam and sturdier ones for mounting heavier camera configurations. As one IBC visitor put it, the EU manufacturers “are finally starting to pull themselves out of the hobby market, to build a few rigs and rent them into a proper business world.”

For example, the Production Rig from Screen Plane is now being manufactured and sold by lens control specialists Cmotion with a compact Stead-Flex rig available from year end. Similar to the P+S riser, the tilt angle of the Production Rig can be adjusted from the rig’s centre of gravity in increments of 2.5 degrees. It can also be side-mounted on an accessory devised by Italian firm Cartoni for even greater tilt range.

Binocle’s Brigger I and its bigger brother Brigger II are well respected but still largely confined to the French feature and TV market. They are being put to use on an ambitious semi-fictional feature in the Amazon.

After two years R&D, Bournemouth’s Teletest launched the Binorig, at €10,000 claimed to be the world’s most affordable. “We designed a complete package contained in two flight cases for stereographers or cameramen with little experience shooting S3D,” says MD Nick Rose. “It produces images which are as good as those produced by rigs costing ten times the price.”

CPG Allies with GVG
The star wattage of Cameron and Pace’s IBC presence masked the fact that CPG wasn’t actually exhibiting. It also inadvertently overshadowed what could have been IBC’s biggest 3D news: the merger of 3Ality Digital with Element Technica, announced just a week before.

The benefits of the marriage, which unites ET machining with 3Ality software engineering, were demonstrated with Sony F3s mounted on an ET Pulsar connected to a SIP and showing a wide range of data. “It’s great for our customers, who have been mixing the two technologies anyway, but they had to integrate them themselves. Now we can offer them fully integrated technology,” said 3ality Technica CEO Steve Schklair.

In the show’s biggest news impacting outside broadcasters (signed literally a day before IBC), CPG flagged a partnership with Grass Valley which will see them jointly develop equipment and equip new scanners. CPG runs three dedicated mobile units and has a stock of 100 3D camera systems in the US but its GV pairing will enable it to export its Shadow systems and business model into Europe.

“Our message is that you can use our equipment for 3D just the same as for 2D - there are no special bolt-ons,” said Grass Valley 3D specialist, Lyle Van Horn. “For example, we have internal flipping of the image, standard on our LDK 8000 series cameras, so there is no external processing needed to flip the image and put another kink in the chain when mounting on a beam splitter rig.

“The Kayenne and Karrera switchers process each eye as two separate 2D streams paired, so an operator is using the same standard set of buttons for both 2D and 3D. 3D is complex enough without adding a separate set of equipment to do the same job, which is why we have the only Super Slo-Motion system, which again handles both 3D & 2D sources with the same hardware (a combination of the Summit server and Dyno replay control system).”

The view that 3D rigs have a limited lifespan for non-live projects at least was given heavyweight support by Walt Disney Studios' VP, Production Technology, Howard Lukk. “There are enough things for the DOP, director and camera operators to try to track on the set as it is, without having to track interaxial and convergence,” he argued. “We are making it more complicated on the set, where I think it needs to be less complicated.”

Lukk suggested a hybrid approach which would supplement a 2D camera with smaller ‘witness’ cameras to pick up the 3D volumes, then apply algorithms at a VFX or a conversion house to create the 3D and free the filmmaker from cumbersome on-set equipment. It is something that Disney is researching.

Source: TVB Europe

Red Teases with 4K Laser Powered Projector

Red Digital Cinema has trailed the launch of a laser-illuminated passive 3D projection system capable of 4K resolution for home and cinema use. The announcement has been mischievously timed to disrupt the launch of Sony’s new 4K home theatre projector, which is claimed to be a world first.

Red founder Jim Jannard posted comments to a Red user forum from industry pundits to whom he had demonstrated the Red Ray projector. These pundits included 3Ality Technica senior vice president Stephen Pizzo, who said its image quality was “so clean and so vibrant", comparing it only to Cibachrome (otherwise known as the Ilfochrome print film once made by Ilford).

Details are sketchy and speculation is filling in the gaps. Among these are suggestions that Red is working on a range of 4K-capable displays and projection systems, with the projector believed to cost anywhere from $30,000 to $50,000. No release date has been given other than that product will be available within the next 12 months.

That it features laser illumination is interesting. One of the main criticisms of 3D projected features is that they are so dark. Lasers rather than LEDs could provide a more powerful light source.

Red and Sony have 4K acquisition sewn up so it makes sense that if content is to be shot in the format, there is a means of displaying it. Over 14,000 of Sony’s SXRD 4K theatre projection systems have been sold to date, mostly in the US.

Its VPL-VW1000ES 4K home theatre projector, due later this winter, features a ‘Super Resolution 4K’ upscaler that is claimed to ‘dramatically enhance 1080p content, allowing viewers to get the most from their existing Blu-ray Disc libraries’. For greater versatility, the release continues, it can also display Full HD 3D and 4K-upscaled 3D movies, as well as 2D and 3D anamorphic film.

Interestingly, the Blu-ray specification can only handle 1920 x 1080, not 4K resolution or 48fps content, making a Blu-ray release of The Hobbit as director Peter Jackson intended the feature to be seen somewhat problematic.

Source: TVB Europe

Sony Puts Record Straight on High Frame Rates

Sony has hit back at perceptions that competitor Christie is leading the charge to introduce high frame rate projection systems into cinema exhibition, claiming that its systems are already advanced for HFR display today.

Christie made headlines when it received the backing of James Cameron in demonstrations of HFR at Cinemacon and IBC earlier this year. It also signed a five year pact with Cameron’s Lightstorm Entertainment to develop and market high frame rate systems.

The series II projectors of Christie and its fellow DLP-based TI licensees, Barco and NEC, require a software upgrade and the installation of an Integrated Media Block (IMB) which overcomes the bandwidth limitations in the connection from server to projector.

All the major projection systems vendors are likely to offer the relatively simple HFR upgrades so that exhibitors avoid having to fund a full system replacement. One estimate puts the cost of an IMB plus firmware upgrade at US $3,000.

Sony also requires a firmware upgrade but has an IMB already incorporated in the design of its SXRD 4K projectors. Peter Jackson’s The Hobbit could be projected at 2x48 fps (96 fps) today with the addition of that firmware, Sony claims. All that is missing is the revision of the DCI specifications, which currently only support 48 fps mainly for the purpose of 2x24 fps playback for 3D.

HFR is claimed to improve image quality, particularly over standard 24fps, on fast-moving scenes and camera pans by reducing visual artefacts such as motion blur, judder and flicker. The effect is even more apparent on 3D features, since the human eye and brain are more sensitive to such artefacts when separate left and right images are projected, “in particular with systems using triple flash,” said Oliver Pasch, head of European digital cinema sales at Sony Professional.

In fact, the brunt of the cost of switching to higher frame rates will be borne by post production, which is required to increase disk storage space and allocate more time to render special effects. This increases again if 4K projection of 3D content (96fps 4K) is desired.

The DCI specifies that films must be compressed using JPEG 2000 with a maximum bit-rate of 250Mbps. That would ramp up to 400-480Mbps for 48fps and 500-600Mbps for 60 fps. The size of the DCP will increase in similar proportions.

Matt Cuson, senior director of cinema, Dolby Laboratories, notes that some industry studies suggest that an 8K resolution really starts to have a dramatic effect, but HFR will be the most dramatic visual improvement the industry will see in the short term.

Source: TVB Europe

ATSC Issues Reports on Next-Generation Broadcast TV, 3-D TV Broadcast

The Advanced Television Systems Committee (ATSC) has published final reports of two critical industry planning committees that have been investigating likely methods of enhancing broadcast TV with next-generation video compression, transmission and Internet Protocol technologies and developing scenarios for the transmission of 3-D programs via local broadcast TV stations.

The final reports of the ATSC Planning Teams on 3D-TV (PT-1) and on ATSC 3.0 (PT-2) are available now for download from the ATSC website.

PT-1, the 3-D TV planning team, reviewed the visual sciences, existing technology, and the development of content for three-dimensional presentation. While 3-D television broadcasts offer the potential for significant enhancement of the viewer's experience, the team found that how the content is created and how it is presented are both important to a positive viewing experience.

Substantial sections of the report deal with human visual issues sometimes associated with 3-D viewing, with the planning team noting that many of the contributing factors are described, explained and accommodated by ensuring proper viewing distances from the screen.

While recognizing limitations of depicting 3-D objects on a 2-D display, the PT-1 report also details various options for transmission of 3-D material, including use of both high-definition and mobile DTV channels and non-real-time caching of 3-D content for future viewing.

PT-2, working on next-generation broadcast technologies, was charged with exploring options for what's been dubbed ATSC 3.0, including candidate technologies, potential services and likely timeframes, and without a requirement that the new system be backwards compatible with existing broadcasts.

Several potential technology components were identified by PT-2, including improved audio and video codecs and more-efficient modulation approaches. The Planning Team also looked into ways that TV broadcasts could seamlessly converge with a hybrid device that might get content from the Internet or other methods. Among the subjects probed by the team were content personalization and targeting, more immersive presentation forms, and advanced non-real-time content downloading services.

Source: Broadcast Engineering

The Lytro Camera



Source: CNET

Transport Streams 101

A Transport Stream (TS) comprises one or more packetized and multiplexed compressed video signals and their associated audio, along with program descriptors and other data. In broadcast DTV, there are two essential parts to the final Transport Stream.

First is the actual packetized compressed audio and video data as well as the tables and other data required to locate and extract them. The second major part is called Program and System Information Protocol (PSIP), which is required by the DTV receiver.

The PSIP data is what enables the receiver to know what channel is being received as well as what analog channel is associated with it and the names of the major and minor channels. PSIP also carries viewer information such as program guides, time of day and ratings for the programs.

Packetized Elementary Stream
To combine or multiplex the various data streams that make up the TS, they need to be packetized. The Elementary Streams (ES), such as MPEG-2 video or AAC audio, are encapsulated into defined serialized data bytes called Packetized Elementary Streams (PES). Once the elementary streams are packetized, they can be combined and multiplexed into a single data stream known as a TS. Each packet is 188 bytes (1504 bits) long.
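A minimal sketch of that packetization step, assuming the PES bytes are simply sliced into 184-byte payloads behind a 4-byte header. Real multiplexers also handle adaptation fields, PCR insertion and stuffing, which are omitted here, and the function name is illustrative.

```python
# Simplified sketch: slice a PES into fixed 188-byte TS packets. Adaptation
# fields, PCR insertion and proper stuffing are deliberately omitted.
SYNC_BYTE = 0x47                            # standard MPEG-2 TS sync byte value
PACKET_SIZE = 188
PAYLOAD_SIZE = PACKET_SIZE - 4              # 184 bytes behind the 4-byte header

def packetize(pes: bytes, pid: int):
    packets = []
    for offset in range(0, len(pes), PAYLOAD_SIZE):
        chunk = pes[offset:offset + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\xff")
        pusi = 1 if offset == 0 else 0      # payload_unit_start_indicator
        header = bytes([SYNC_BYTE,
                        (pusi << 6) | ((pid >> 8) & 0x1F),
                        pid & 0xFF,
                        0x10 | (len(packets) & 0x0F)])   # payload only + continuity
        packets.append(header + chunk)
    return packets

pkts = packetize(b"\x00\x00\x01\xe0" + bytes(500), pid=65)
print(len(pkts), len(pkts[0]))              # 3 packets, each 188 bytes long
```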

Packet Identifiers
With several different packets used for each program (e.g. MPEG-2 video, one or more AAC audio channels, metadata, etc.) and one or more programs, the number of packets can increase quickly. Because the individual streams are no longer separate but combined as packets, this requires a system to identify and sort through the packets to extract the correct program.

Packet Identifiers (PIDs) are used for this function and are attached to each packet in the TS. PID numbering is arranged to keep associated packets grouped together. For example, the MPEG-2 video would be PID 65, while the associated AC-3 audio is PID 68; another program has its MPEG-2 at PID 81 and AC-3 at PID 84.

Program Map Table
The Program Map Table (PMT) is a list of the PIDs used for each program and what they are; there is one PMT for each program within a TS. The PMT also has a PID and it is always the first (lowest number) PID for the program. In the above example, the program with PIDs 65 and 68 has a PMT with a PID 64, and the program with PIDs 81 and 84 has a PMT with a PID 80. The information contained within the PMT lists the PIDs for all the packets and a description of what the packet is (e.g. MPEG-2, AAC, AC-3, data, etc.).

Program Association Table
The Program Association Table (PAT) is a list of all programs contained within the TS. This is where the PIDs for the PMTs are found and is the first step in extracting the desired program from the stream.
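Using the example PID numbers above, the relationship between the PAT, the PMTs and the elementary-stream PIDs can be sketched as nested lookup tables. This only illustrates the data relationships; it is not a parser for the actual binary table syntax.

```python
# Illustration of the PAT -> PMT -> elementary-stream relationships only;
# not a parser for the real binary table syntax.
PAT = {1: 64, 2: 80}                       # program_number -> PMT PID
PMT = {
    64: {65: "MPEG-2 video", 68: "AC-3 audio"},
    80: {81: "MPEG-2 video", 84: "AC-3 audio"},
}

def pids_for_program(program_number: int) -> dict:
    """Walk PAT -> PMT to find the elementary-stream PIDs for one program."""
    return PMT[PAT[program_number]]

print(pids_for_program(2))    # {81: 'MPEG-2 video', 84: 'AC-3 audio'}
```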

Program Clock Reference
Program Clock Reference (PCR) is used to lock the local 27MHz clock to the one used to create the encoded stream. There is a PCR for each program within the TS, and it can share the same PID as one of the PES. The PCR is a time stamp and its value is derived from a counter, running at the encoder, taken at the moment the packet leaves the multiplexer.

Differences between the received time stamps and the local clock are seen as errors, and the local clock is adjusted or reset. If there is a fault with the PCR, then a number of errors can occur, such as loss of lip sync, picture freeze and dropped frames. The PCR is developed at the encoder from either the horizontal sync of an analog signal or the bit rate of an SDI input.

The PCR is a component of the TS and is used to keep the local 27MHz clock locked to the originating encoder’s clock. The encoder’s 27MHz clock is derived from the input video, either SDI or analog video. This means that the stability of the input video will determine the stability of the 27MHz clock and, therefore, the accuracy of the PCR for that program.

The Presentation Time Stamp (PTS) is part of the coded audio and video streams; this tells the decoder when this particular video or audio must be presented to the viewer. The time stamps are derived and compared to the 27MHz clock. The PTS is what keeps and locks the audio and video together and maintains lip sync.

Errors in the PCR can be introduced by the encoder/multiplexer or in the transmission path, including remultiplexers and network transmission errors. The tolerance for PCR is 500ns.
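The sketch below shows the basic comparison a receiver makes between a received PCR and its free-running 27MHz counter, with the PCR expressed here simply as a count of 27MHz ticks. A real receiver disciplines a voltage-controlled oscillator with a loop filter rather than resetting the counter outright; the function name and values are illustrative.

```python
# Illustrative comparison only; real receivers use a phase-locked loop.
def pcr_error(received_pcr: int, local_counter: int) -> float:
    """Difference between the received PCR and the local 27MHz counter, in seconds."""
    return (received_pcr - local_counter) / 27_000_000

# Toy example: the local clock is 135 ticks ahead of the encoder's clock.
err = pcr_error(received_pcr=1_000_000, local_counter=1_000_135)
print(f"{err * 1e9:.0f} ns")     # -5000 ns: well outside the 500ns tolerance,
                                 # so the local clock would be adjusted
```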


Program and System Information Protocol
Program and System Information Protocol (PSIP) is the last item added to the TS. It provides much of the glue that holds the disparate elements of the stream together. PSIP contains the Terrestrial Virtual Channel Table (TVCT), Master Guide Table (MGT), Rating Region Table (RRT), System Time Table (STT) and Event Information Tables (EIT).

All of these provide for an easier user interface as well as tuning and channel branding:
  • The TVCT indicates which DTV channels are associated with which analog TV channels and what frequencies and modulation modes are used. It also provides channel names and tuning information.

  • The MGT lists all the other tables available in PSIP.

  • RRT is where various types of program ratings are located for all the programs in the TS.

  • STT provides time of day information referenced to UTC.

  • EIT contains lists of TV programs and their start times contained in the TS.

Once the decoder is locked onto the data stream and the packets sorted, PSIP can begin to populate the receiver with its information. This information for the viewer consists of the System Time Table (STT), which supplies the current date and time from the station (the program guide is based on this clock time, so any offset or error can cause viewers to miss programs); the Rating Region Table (RRT), which supplies ratings of the programs within the TS so different types of program ratings can be transmitted (e.g. MPA, FCC, etc.); and the Event Information Tables (EIT) 0-3, which list the next 12 hours of programming.

The Terrestrial Virtual Channel Table (TVCT) contains a list of all the channels that are or will be online, plus their attributes. This includes the major and minor channel numbers and their names. For example, an analog Channel 6 may also have a digital Channel 35; the major channel is 6 for both analog and digital, and the minor channels are 0 for analog, 1 for the first digital channel, and so on. The minor channels do not have to be sequential and can be any number from 1 to 999. Stations might do this to denote different programming sources. Major and minor channel names also come from the PSIP, such as WREY for the major channel, and each minor channel has its own name as long as it fits in seven characters.
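A toy sketch of the major/minor channel mapping the TVCT provides, using the article's Channel 6/Channel 35 example. The minor-channel names shown are hypothetical stand-ins, and the dictionary only illustrates the relationships, not the real table syntax.

```python
# Toy virtual channel table following the article's example: analog Channel 6
# with a digital companion on RF Channel 35. Minor-channel names are hypothetical.
TVCT = {
    (6, 0): {"name": "WREY",    "rf_channel": 6,  "service": "analog"},
    (6, 1): {"name": "WREY-DT", "rf_channel": 35, "service": "digital"},
    (6, 3): {"name": "Sports",  "rf_channel": 35, "service": "digital"},
}

for (major, minor), info in sorted(TVCT.items()):
    print(f"{major}.{minor}  {info['name']:7}  RF {info['rf_channel']:2}  {info['service']}")
```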

A major component of PSIP is the Master Guide Table (MGT) that lists all the other tables within the PSIP as well as sizes and version numbers, so tables can be updated.

Demultiplexing the Stream
The multiplexer combines all the PES and the associated data packets into one continuous serial bit stream. To extract the PES and view it, we need to first demultiplex the stream to separate out the individual PES and convert them into Elementary Streams (ES) of compressed video and audio. From there, they can be decompressed, converted to analog and monitored.

To begin the demultiplexing process, the decoder’s 90kHz clock must be synchronized with the multiplexer’s, and to do that, the sync byte must be found in the TS. Every packet contains a sync byte at its start. The sync byte comprises 8 bits, and because all packets are 188 bytes long, the next sync word comes around in another 188 bytes. This repetition makes it easier to find the sync byte and lock to it.

Once the demultiplexer has seen the sync byte at least five times, it then knows it has a good lock on the clock and can examine the rest of the stream. Then the individual packets can be clocked in and their Packet Identifiers (PIDs) can be read and sorted.
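A simplified sync-lock routine along those lines, assuming a raw byte buffer and the standard 0x47 sync byte value (the article does not give the value; 0x47 is the MPEG-2 constant): scan for an offset at which the sync byte repeats every 188 bytes at least five times. The function name and toy stream are illustrative.

```python
SYNC_BYTE = 0x47          # standard MPEG-2 sync byte value (not stated in the article)
PACKET_SIZE = 188

def find_sync(buffer: bytes, confirmations: int = 5) -> int:
    """Return the offset of the first sync byte confirmed 'confirmations' times, or -1."""
    for start in range(min(PACKET_SIZE, len(buffer))):
        if all(start + i * PACKET_SIZE < len(buffer)
               and buffer[start + i * PACKET_SIZE] == SYNC_BYTE
               for i in range(confirmations)):
            return start
    return -1

# Toy stream: three junk bytes, then well-formed 188-byte packets.
stream = b"\x00\x01\x02" + (bytes([SYNC_BYTE]) + bytes(187)) * 6
print(find_sync(stream))   # 3
```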


Once the decoder is locked to the TS, the Program Association Table (PAT) is used to find all the data elements within it. The PAT holds a list, or table, of all the PIDs and what they are for in the TS. When the viewer selects minor Channel 3, which is listed as “sports” (all this data comes from the TVCT), the PAT directs the decoder to PID 80, which is where the Program Map Table (PMT) is located, and lists the PIDs for the PES — in this case, they are PID 81 for MPEG-2 video, PID 84 for AC-3 audio (English) and PID 90 for AC-3 audio (Spanish).

These packets are then converted (this is where the PCR comes in) into a Program Stream (PS), and from there into their original Elementary Streams (ES) with the help of the system clock reference (i.e. individual serial data for the MPEG-2 video and the AC-3 audio). They are decompressed then supplied to the outputs for display and monitoring.


Source: Broadcast Engineering

MPEG-2 Basic Training

The MPEG-2 standard is defined by ISO/IEC 13818 as "the generic coding of moving pictures and associated audio information." It combines lossy video compression and lossy audio compression to fulfill bandwidth requirements. All MPEG compression systems are fundamentally asymmetric: the encoder is more sophisticated than the decoder.

MPEG encoders are always algorithmic. Some are also adaptive, using a feedback path. MPEG decoders are not adaptive and perform a fixed function. This works well for applications like broadcasting, where the number of expensive complex encoders is few and the number of simple inexpensive decoders is huge.

The MPEG standards provide little information about encoder process and operation. Rather, they specifically define how a decoder interprets metadata in a bit stream. MPEG metadata tells the decoder the rate at which the video was encoded, and it defines the audio coding, channels and other vital stream information.

A decoder that successfully deciphers MPEG streams is called compliant. The genius of MPEG is that it allows different encoder designs to evolve simultaneously. Generic low-cost and proprietary high-performance encoders and encoding schemes all work because they are all designed to talk to compliant decoders.

Before SDI
Asynchronous Serial Interface (ASI) is a serial interface signal in which a start bit is sent before each byte and a stop signal is sent after each byte. This type of start-stop communication, without the use of synchronized fixed time intervals, was patented in 1916 and was the key technology that made teletype machines possible. Today, an ASI signal is often the final product of MPEG video compression, ready for transmission to a transmitter, microwave or fiber. Unlike uncompressed SDI, an ASI signal can carry one or multiple compressed SD, HD or audio streams. ASI transmission speeds are variable and depend on the user's requirements.

There are two transmission formats used by the ASI interface, a 188-byte format and a 204-byte format. The 188-byte format is the more common. If Reed-Solomon error correction data is included, the packet can grow an extra 16 bytes to 204 bytes total.
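One way to illustrate the difference between the two formats is to probe a captured byte stream for the spacing at which the sync byte (0x47, the standard MPEG-2 value) repeats. This is a sketch under the assumption that the capture starts on a packet boundary; the function name is illustrative.

```python
SYNC_BYTE = 0x47   # standard MPEG-2 sync byte value

def detect_packet_size(buffer: bytes, confirmations: int = 5) -> int:
    """Guess 188- or 204-byte packets by testing which spacing the sync byte repeats at."""
    for size in (188, 204):
        if all(len(buffer) > i * size and buffer[i * size] == SYNC_BYTE
               for i in range(confirmations)):
            return size
    return 0   # no lock

# Toy capture: 204-byte packets (188 bytes plus 16 bytes of Reed-Solomon parity).
capture = (bytes([SYNC_BYTE]) + bytes(187) + bytes(16)) * 6
print(detect_packet_size(capture))   # 204
```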

Making MPEG-2
An MPEG-2 stream can be either an Elementary Stream (ES), a Packetized Elementary Stream (PES) or a Transport Stream (TS). The ES and PES are files.

Starting with analog video and audio content, individual ESs are created by applying MPEG-2 compression algorithms to the source content in the MPEG-2 encoder. This process is typically called ingest. The encoder creates an individual compressed ES for each audio and video stream. An optimally functioning encoder will produce output that looks transparent when decoded in a set-top box and displayed on a professional video monitor for technical inspection.

A good ES depends on several factors, such as the quality of the original source material, and the care used in monitoring and controlling audio and video variables upon ingest. The better the baseband signal, the better the quality of the digital file. Also influencing ES quality is the encoded stream bit rate, and how well the encoder applies its MPEG-2 compression algorithms within the allowable bit rate.

MPEG-2 has two main compression components: intraframe spatial compression and interframe motion compression. Encoders use various techniques, some proprietary, to maintain the maximum allowed bit rate while at the same time allocating bits to both compression components. This balancing act can sometimes be unsuccessful. It is a tradeoff between allocating bits for detail in a single frame and bits to represent the changes (motion) from frame to frame.

Researchers are currently investigating what constitutes a good picture. Presently, there is no direct correlation between the data in the ES and subjective picture quality. For now, the only way of checking encoding quality is with the human eye, after decoding.

The Packetized Elementary Stream
Individual ESs are essentially endless; an ES runs the full length of the program itself. Each ES is broken into variable-length packets to create a PES, and each packet contains a header and payload bytes.

The PES header carries the data about the encoding process that the MPEG decoder needs to successfully decompress the ES. Each individual ES results in an individual PES. At this point, audio and video information still reside in separate PESs. The PES is primarily a logical construct and is not really intended to be used for interchange, transport and interoperability. The PES also serves as a common conversion point between TSs and PSs.
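The sketch below shows, in simplified form, how a piece of an ES is wrapped into a PES packet. Real PES headers usually carry PTS/DTS timestamps and other optional fields, which are omitted here, so treat this strictly as an illustration of the header-plus-payload structure.

    def make_pes_packet(stream_id, es_payload):
        """Wrap an ES chunk in a minimal PES packet (no timestamps)."""
        start = bytes([0x00, 0x00, 0x01, stream_id])   # packet_start_code_prefix + stream_id
        optional = bytes([0x80, 0x00, 0x00])           # marker bits, no flags, no extra header data
        length = len(optional) + len(es_payload)       # counts the bytes following the length field
        return start + length.to_bytes(2, "big") + optional + es_payload

    # Example: stream_id 0xE0 is the first video stream, 0xC0 the first audio stream.
    video_pes = make_pes_packet(0xE0, b"\x00" * 1024)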

Transport Streams
Both the TS and PS are formed by packetizing PES files. During the formation of the TS, additional packets containing tables needed to demultiplex the TS are inserted. These tables are collectively called Program Specific Information (PSI). Null packets, containing a dummy payload, may also be inserted to fill the intervals between information-bearing packets. Some packets contain timing information for their associated program, called the Program Clock Reference (PCR). The PCR is inserted into one of the optional header fields of the TS packet. Recovery of the PCR allows the decoder to synchronize its clock to the rate of the original encoder clock.
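The following sketch shows where the PCR lives in a TS packet and how a decoder or analyzer might recover it. It assumes a 188-byte packet aligned on the sync byte and follows the ISO/IEC 13818-1 adaptation-field layout; error handling is minimal by design.

    def extract_pcr(packet):
        """Return the PCR in 27 MHz clock ticks, or None if this packet has no PCR."""
        if packet[0] != 0x47:
            return None                               # not aligned on a sync byte
        adaptation_field_control = (packet[3] >> 4) & 0x03
        if not (adaptation_field_control & 0x02):
            return None                               # no adaptation field present
        if packet[4] == 0 or not (packet[5] & 0x10):
            return None                               # adaptation field carries no PCR
        pcr = packet[6:12]
        base = (pcr[0] << 25) | (pcr[1] << 17) | (pcr[2] << 9) | (pcr[3] << 1) | (pcr[4] >> 7)
        extension = ((pcr[4] & 0x01) << 8) | pcr[5]
        return base * 300 + extension                 # 90 kHz base scaled to 27 MHz, plus 27 MHz extension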


The Transport Stream is defined by the syntax and structure of the TS header


TS packets are fixed in length at 188 bytes with a minimum 4-byte header and a maximum 184-byte payload. The key fields in the minimum 4-byte header are the sync byte and the Packet ID (PID). The sync byte's function is indicated by its name: it is a fixed value (0x47) used to delineate the beginning of each TS packet.
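A short sketch of reading those key header fields, assuming a 188-byte packet that starts on the sync byte; the dictionary keys are descriptive names only.

    def parse_ts_header(packet):
        """Pull the key fields out of the fixed 4-byte TS packet header."""
        return {
            "sync_ok": packet[0] == 0x47,                      # sync byte
            "payload_unit_start": bool(packet[1] & 0x40),      # start of a PES packet or PSI section
            "pid": ((packet[1] & 0x1F) << 8) | packet[2],      # 13-bit Packet ID
            "continuity_counter": packet[3] & 0x0F,            # increments per PID; gaps reveal lost packets
        }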

The PID is a unique address identifier. Every video and audio stream, as well as each PSI table, needs to have a unique PID. The PID value is provisioned in the MPEG multiplexing equipment. Certain PID values are reserved and specified by organizations such as the Digital Video Broadcasting Group (DVB) and the Advanced Television Systems Committee (ATSC) for electronic program guides and other tables.

In order to reconstruct a program from all its video, audio and table components, it is necessary to ensure that the PID assignment is done correctly and that there is consistency between PSI table contents and the associated video and audio streams.

Program Specific Information
Program Specific Information (PSI) is part of the Transport Stream (TS). PSI is a set of tables needed to demultiplex and sort out PIDs that are tagged to programs. A Program Map Table (PMT) must be decoded to find the audio and video PIDs that identify the content of a particular program. Each program requires its own PMT with a unique PID value.

The master PSI table is the Program Association Table (PAT). If the PAT can’t be found and decoded in the Transport Stream, no programs can be found, decompressed or viewed.

PSI tables must be sent periodically and with a fast repetition rate so channel-surfers don’t feel that program selection takes too long. A critical aspect of MPEG testing is to check and verify the PSI tables for correct syntax and repetition rate.
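A rough sketch of that repetition-rate check appears below: it measures how many packets elapse between PAT packets (always PID 0) and converts the spacing to milliseconds. It assumes the buffer is aligned on 188-byte packet boundaries and a constant transport rate; the 19.39 Mb/s default is simply the ATSC rate used as an example.

    def pat_intervals_ms(ts_bytes, mux_rate_bps=19_392_658):
        """Return the time gaps (in ms) between successive PAT packets."""
        packet_time_ms = (188 * 8 / mux_rate_bps) * 1000
        last, intervals = None, []
        for i in range(0, len(ts_bytes) - 187, 188):
            pid = ((ts_bytes[i + 1] & 0x1F) << 8) | ts_bytes[i + 2]
            if pid == 0x0000:                         # the PAT always lives on PID 0
                if last is not None:
                    intervals.append((i - last) // 188 * packet_time_ms)
                last = i
        return intervals                              # flag anything well above the expected repetition interval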

Another PSI testing scenario is to determine the accuracy and consistency of PSI contents. As programs change or multiplexer provisioning is modified, errors may appear. One is an “Unreferenced PID,” where packets are present in the TS with a PID value that is not referred to in any table. Another is a “Missing PID,” where no packets exist with a PID value that the Transport Stream PSI tables reference.
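That consistency check can be expressed as a simple set comparison, sketched below. The referenced PIDs would come from parsing the PAT and PMTs (as in the earlier sketches), the buffer is assumed to be packet-aligned, and the reserved-PID list is deliberately incomplete.

    RESERVED_PIDS = {0x0000, 0x0001, 0x1FFF}          # PAT, CAT and null packets (partial list)

    def pid_report(ts_bytes, referenced_pids):
        """Return (unreferenced PIDs seen in the stream, referenced PIDs never seen)."""
        seen = set()
        for i in range(0, len(ts_bytes) - 187, 188):
            seen.add(((ts_bytes[i + 1] & 0x1F) << 8) | ts_bytes[i + 2])
        unreferenced = seen - referenced_pids - RESERVED_PIDS
        missing = referenced_pids - seen
        return unreferenced, missing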

Good broadcast engineers never forget common sense. Just because there aren’t any unreferenced or missing PIDs doesn’t guarantee the viewer is receiving the correct program. The audio content from one program could be delivered with the video content from another.

Because MPEG-2 allows for multiple audio and video channels, a real-world “air check” is the most common-sense test to ensure that viewers are receiving the correct language and video. It’s possible to use a set-top box with a TV set to do the air check, but it’s preferable to use dedicated MPEG test gear that allows PSI table checks. It’s also handy if the test set includes a built-in decoder with picture and audio displays.

By Ned Soseman, Broadcast Engineering

What is HLS (HTTP Live Streaming)?

HTTP Live Streaming (or HLS) is an adaptive streaming protocol created by Apple to communicate with iOS and Apple TV devices and Macs running OS X Snow Leopard or later. HLS can distribute both live and on-demand files and is the sole technology available for adaptively streaming to Apple devices, an increasingly important target segment for streaming publishers.

HLS is widely supported in streaming servers from vendors like Adobe, Microsoft, RealNetworks, and Wowza, as well as in real-time transmuxing functions in distribution platforms like those from Akamai. The popularity of iOS devices and this distribution-related technology support has also led to increased support on the player side, most notably from Google in Android 3.0.

In the Apple App Store, if you produce an app that delivers video longer than ten minutes or greater than 5MB of data, you must use HTTP Live Streaming, and provide at least one stream at 64Kbps or lower bandwidth. Any streaming publisher targeting iOS devices via a website or app should know the basics of HLS and how it’s implemented.

How HLS Works
At a high level, HLS works like all adaptive streaming technologies; you create multiple files for distribution to the player, which can adaptively change streams to optimize the playback experience. Because HLS is HTTP-based, no streaming server is required, and all the switching logic resides in the player.

To distribute to HLS clients, you encode the source into multiple files at different data rates and divide them into short chunks, usually 5 to 10 seconds long. These are loaded onto an HTTP server along with a text-based manifest file with a .M3U8 extension that directs the player to additional manifest files for each of the encoded streams.
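The master manifest itself is a short text file. The sketch below generates one in Python; the bandwidth figures and file paths are invented for illustration and would come from your actual encoding ladder.

    # Write a master .M3U8 playlist pointing at each encoded stream.
    variants = [
        (200_000, "low/prog_index.m3u8"),
        (600_000, "mid/prog_index.m3u8"),
        (1_200_000, "high/prog_index.m3u8"),
    ]

    lines = ["#EXTM3U"]
    for bandwidth, uri in variants:
        lines.append(f"#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH={bandwidth}")
        lines.append(uri)

    with open("master.m3u8", "w") as f:
        f.write("\n".join(lines) + "\n")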

The player monitors changing bandwidth conditions. If these dictate a stream change, the player checks the original manifest file for the location of additional streams, and then the stream-specific manifest file for the URL of the next chunk of video data. Stream switching is generally seamless to the viewer.
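A toy version of that switching decision is sketched below: given the bandwidth the player has measured, pick the highest-bitrate variant that still fits. Real players also weigh buffer depth, screen size and recent throughput history, so this is only a conceptual illustration.

    def choose_variant(measured_bps, variants):
        """variants: list of (bandwidth_bps, uri) tuples taken from the master playlist."""
        playable = [v for v in variants if v[0] <= measured_bps]
        return max(playable) if playable else min(variants)

    # With the example master playlist above:
    # choose_variant(800_000, variants) -> (600000, "mid/prog_index.m3u8")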


HLS uses multiple encoded files with index files directing the player to different streams and chunks
of audio/video data within those streams.


HLS File Preparation
HLS currently supports H.264 video using the Baseline profile up to Level 3.0 for iPhone and iPod Touch clients and the Main profile Level 3.1 for the iPad 1 and 2. Audio can be HE-AAC or AAC-LC up to 48 kHz, stereo. The individual manifest files detail the profile used during encoding so the player will only select and retrieve compatible streams. This allows producers to create a single set of HLS files that will serve iPhone/iPod touch devices with Baseline streams and iPads with streams encoded using the Main profile.

Though encoded using the H.264 video codec and AAC audio codec, audio/video streams must be segmented into chunks in an MPEG-2 Transport Stream with a .ts extension. All files are then uploaded to an HTTP server for deployment. In a live scenario, the .ts chunks are continuously added and the .M3U8 manifest files continually updated with the locations of alternative streams and file chunks.
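Each encoded stream gets its own stream-specific (media) playlist listing the .ts chunks in order. The sketch below writes a simple one; the segment names and the 10-second target duration are illustrative, and a live playlist would be rewritten continuously as new chunks are added, with the ENDLIST tag omitted.

    segments = ["fileSequence0.ts", "fileSequence1.ts", "fileSequence2.ts"]

    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:10",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in segments:
        lines += ["#EXTINF:10.0,", name]
    lines.append("#EXT-X-ENDLIST")                    # present for on-demand, omitted while live

    with open("prog_index.m3u8", "w") as f:
        f.write("\n".join(lines) + "\n")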

Before producing files for HLS, you should read through Apple’s Tech Note TN2224 which contains detailed recommended configurations (resolution, data rate, keyframe interval) for distributing both 4:3 and 16:9 video to all compatible iDevice and Apple TV players.

Content Protection and Closed Captions in HLS
HLS doesn’t natively support digital rights management (DRM), though you can encrypt the data and provide key access using HTTPS authentication. Several third-party DRM solutions are becoming available, including offerings from AuthenTec, SecureMedia, and WideVine.

HLS can support closed captions included in the MPEG-2 Transport Stream.

Deploying HLS Streams
Delivery via HTTP has several advantages: no streaming server is required, and the audio/video chunks should leverage HTTP caching servers located on the premises of internet service providers, cellular providers, and other organizations, which should improve video quality for viewers served from these caches. HTTP content should also pass through most firewalls.

Apple recommends using the HTML5 video tag for deploying HLS video on a website.

On the Playback Side
On computers and iPad devices, the Safari browser can play HLS streams within a web page, with Safari launching a full-screen media player on iPhones and iPod touch devices. Starting with version 2, all Apple TV devices include an HTTP Live Streaming client.

Producing HLS
As discussed, the HLS experience has two components: a set of chunked files in .ts format and the required manifest files. In an on-demand environment, you can encode the alternative files using any standalone H.264 encoding tool, with the latest version of Sorenson Squeeze offering a multiple-file HLS encoding template. More recently, Telestream updated Episode to include command-line creation of multiple HLS files. Cloud encoding services like those provided by Encoding.com can also typically produce HLS-compatible files.

Once you have the encoded streams, you can use Apple tools to create the chunked files and playlists:
  • Media Stream Segmenter - Inputs an MPEG-2 Transport Stream and produces chunked .ts files and index files. It can also encrypt the media and produce encryption keys.

  • Media File Segmenter - Inputs H.264 files and produces chunked .ts files and index files. It can also encrypt the media and produce encryption keys.

  • Variant Playlist Creator - Compiles the individual index files created by the Media Stream or Media File Segmenter into a master .M3U8 file that identifies the alternate streams.

  • Metadata Tag Generator - Creates ID3 metadata tags that can either be written to a file or inserted into outgoing stream segments.

  • Media Stream Validator - Examines index files, stream alternates, and chunked .ts files to validate HLS compatibility.

For live HLS distribution, you need an encoding tool that can encode the files into H.264 format, create the MPEG-2 Transport Stream chunks and create and update the manifest files. When Apple first announced HLS in 2009, only two live encoders were available; one each from Inlet (now Cisco) and Envivio. Now most vendors of encoding hardware also offer live HLS-compatible products, including Digital Rapids, Elemental Technologies, Haivision, Seawell Networks, and ViewCast.

Real Time Transmuxing
The other approach to live or on-demand streaming to HLS-compatible players is via transmuxing, which is offered by multiple streaming server vendors and CDNs. Specifically, these servers input an H.264 stream originally compatible with Flash or Silverlight (or other formats) and then dynamically re-wrap the file into the required MPEG-2 Transport Stream chunks and create the required manifest files.

Server-based implementations include:
  • Adobe Flash Media Server 4.5
  • Wowza Media Server
  • Microsoft IIS Media Services
  • RealNetworks Helix Universal Server

Akamai also offers “in the network” repackaging of H.264 input files for HLS deployment.

In these applications, any live encoding tool that can deliver multiple streams of input to the server, like the Adobe Flash Media Live Encoder, Haivision, Microsoft Expression Encoder Pro, or Telestream Wirecast, can serve as the encoding front end for multiple-platform adaptive distribution including HLS.

Not surprisingly given the level of technology support, many of the larger online video platforms are now starting to support HLS distribution, including Brightcove, Kaltura, and Ooyala.

Conclusion
The iOS platform is a critical target for virtually all streaming publishers, and HLS can deliver the best possible experience to that platform and to others that support HLS playback. Fortunately, the streaming industry has embraced HLS with tools and technologies that make this very simple and affordable.

HLS Resources
One of the reasons that HLS has been so successful is that Apple has created multiple documents that comprehensively address the creation and deployment of HLS files.

You can also watch this video tutorial.

By Jan Ozer, StreamingMedia

DVB Steering Board Approves Next Step for 3DTV

The DVB Steering Board has approved the Commercial Requirements for a second 3DTV delivery system. Termed 'Service Compatible', the second system meets the needs of content deliverers by enabling the 2D and 3D versions of a programme to be broadcast within the same video signal, so that new 3D televisions and next-generation STBs can receive 3D programmes, while consumers with existing 2D HDTV receivers and set-top boxes can watch the 2D version. This 2D picture will probably be either the left or right image of the 'stereo pair'.

In February 2011, the DVB Steering Board approved the specification for a first phase 3DTV delivery system. This system was developed for broadcasters and content deliverers needing a system that works with existing HDTV receivers, provided they are used with a 3D display. This approach, termed 'Frame Compatible', is now a principal system in use for 3DTV delivery throughout the world.

For convenience, this second approach is termed DVB-3DTV 'Phase 2a'. The Commercial Requirements will shortly be available as a 'BlueBook' on the DVB website. The DVB Technical Module has been asked to complete the preparation of the specification for Phase 2a before the end of summer 2012.

Phase 2a will provide additional opportunities for 3DTV services, complementing the first specification, which is referred to now for convenience as 3DTV Phase 1.

The DVB is also taking into account the requirements of content deliverers who want to continue using a Phase 1 signal but wish to provide additional information to improve the image quality for those with 'new' receivers. This may result in a Phase 2b specification in due time.

Source: DVB

The Next Big Video Squeeze

Digital video is in the process of getting another major haircut — a development that promises to provide tremendous relief for bandwidth-constrained mobile networks, as well as for the delivery of ultra-high-definition TV.

The High Efficiency Video Coding specification, also referred to as H.265, will be even more efficient than H.264/MPEG-4 Advanced Video Coding. HEVC-based commercial products could arrive starting in 2013.

According to industry experts, HEVC could shave off 25% to 50% of the bits needed to deliver video that looks as good as H.264.

“It seems like every decade we come out with a better compression standard,” said Sam Blackman, CEO of video-processing systems vendor Elemental Technologies.

HEVC is being designed to take advantage of increases in processing power in video encoders and devices. The developers of H.264 had elements they wanted to include, “but the computational costs were considered too high 10 years ago,” Blackman said. “You’ve also had research over that time to improve the standard for the next time.”

HEVC is being developed by the Joint Collaborative Team on Video Coding (JCT-VC), which brings together working groups from the International Telecommunication Union and Moving Picture Experts Group (MPEG). More than 130 different companies and organizations have participated in the development of HEVC to date, according to Microsoft video architect Gary Sullivan, who is one of the co-chairs of the JCT-VC project.

The next milestone for the spec: In February 2012, a draft of HEVC is expected to be circulated for comments, and the first edition of the standard should be finished in January 2013.

Initially, the clear winners for HEVC will be mobile network operators. “If you look at any of the market data, 70% of the traffic will be video in the next year,” Blackman said. HEVC will also help broadcasters and cable ops deliver Ultra HD formats, which provide four to 16 times the resolution of current 1080p HDTV.

Elemental, whose customers include Comcast and Avail-TVN, expects eventually to incorporate HEVC into its software-based encoding solution, which is based on off-the-shelf graphics processing units.

Other video-processing equipment vendors also are tracking HEVC. Andy Salo, director of product marketing at RGB Networks, said the company’s engineering team is working closely with industry engineers that are active contributors to HEVC.

HEVC adoption won’t happen overnight. An entire ecosystem of devices needs to incorporate new decoder chips that support H.265.

Another caveat: New technologies often look better on paper than in practice. Historically, it has taken time for a new video standard to achieve its theoretical efficiencies, according to Joe Ambeault, Verizon Telecom’s director of product management for media and entertainment.

“It’s only so good until the engineers get into the development,” Ambeault said. “You look at the PowerPoints and say, ‘Well, maybe I’ll get that kind of efficiency a couple years from now.’”

By Todd Spangler, Multichannel News

BitTorrent Expands Live Streaming Tests

BitTorrent has quietly been testing its upcoming live streaming platform, and now the company is ready to take the next step with a new round of scalability tests that could include the live streaming of indie concerts. A new BitTorrent Live website built with these kinds of tests in mind launched a few days ago, but a company spokesperson cautioned that “a broad beta is still a couple of months away.”

BitTorrent has been testing its live streaming platform with a limited number of users at a no-frills website that hasn’t been publicized but has nonetheless been publicly accessible for some time at live.bittorrent.com. At the end of last week, the site suddenly received a significant face-lift, complete with installation instructions for the BitTorrent Live software and a brief explanation that reads:

"BitTorrent Live is a whole new P2P protocol to distribute live streamed data across the internet without the need for infrastructure, and with a minimum of latency."

Users can download BitTorrent Live clients for Windows, Mac OS and Linux. The client simply works in the background to facilitate data transfer and doesn’t allow any configuration. Video streams display in the browser via Flash, and a Facebook plug-in allows users to chat with one another while watching a stream. An additional tab offers access to the audio and video bitrate and other debug data.

Speaking of video: BitTorrent’s spokesperson told me that the tests have so far been restricted to “simple pre-recorded content loops to test latency and audio/visual sync.” At one point I was able to tap into a prerecorded stream of a winter sports event, but at other times I simply didn’t see anything.


However, those P2P live streaming tests could get a lot more exciting soon: “One of the ideas is to invite a few of our favorite indie artists into our office to broadcast content and help us kick the tires with their fans,” said BitTorrent’s spokesperson. Still, don’t expect BitTorrent to stream Coachella anytime soon. I was told that “the redesign isn’t intended to suggest we’re out of the R&D stage of designing, building and testing the product.”

BitTorrent inventor Bram Cohen has worked on the live streaming platform for close to three years, and he told me at NewTeeVee Live last year that his efforts included writing a completely new P2P protocol from scratch. The BitTorrent protocol itself, he said, simply introduced too much latency to be a viable live streaming solution.

By Janko Roettgers, GigaOM