Cine3D Stereographer

If you are a director, DOP, photographer, stereographer or student, the Cine3D team has released the app you have been waiting for. Stereographer provides a genuinely innovative feature: real-time depth simulation. This tool allows you to see the consequences of your choices for the viewer. It is also a very efficient learning tool.

Stereographer is the most intuitive stereo 3D calculator available for iPhone and iPod Touch. The app was designed by the Cine3D team after one year of research. A must-have!


Key Features:
  • Many cameras, 3D-Rigs and screen presets (customizable)
  • Suitable for Motion Picture and Stills work, Film or Electronic cameras
  • Different shooting modes available with detailed explanations
  • Real-time depth simulation (theatre, TV, laptop, IMAX, mobile phone…)
  • Real-time warnings about the consequences for the viewer (pleasant3D, painful3D,...)
  • Customizable target audience (adults, pre-school,…)
  • Binocular disparity threshold warning (diplopia)
  • On-screen offset in % or cm/in
  • Manual adjustment of the calculated values
  • Database of saved parameters / e-mail tool to send them for future use or post-production
  • Operates in imperial or metric
  • Customizable units
  • Easy-to-use interface
  • Night or day display mode
  • Help button on each page
  • Link to the Cine3D website (glossary, theory, news,...)
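
The depth-simulation and warning features above boil down to parallax geometry. A minimal sketch of the kind of check such a calculator performs; the function names, the 6.5 cm eye separation and the thresholds are illustrative assumptions, not Cine3D's actual implementation:

```python
# Hypothetical sketch of the parallax/divergence check a stereo 3D
# calculator performs; names and thresholds are assumptions, not
# Cine3D's actual implementation.

EYE_SEPARATION_CM = 6.5  # average adult interocular distance

def screen_parallax_cm(parallax_percent, screen_width_cm):
    """Convert an on-screen offset in % of screen width to centimetres."""
    return parallax_percent / 100.0 * screen_width_cm

def viewer_warning(parallax_percent, screen_width_cm,
                   eye_sep_cm=EYE_SEPARATION_CM):
    """Flag positive parallax that forces the eyes to diverge.

    Background parallax wider than the viewer's eye separation cannot
    be fused and risks diplopia ('painful 3D').
    """
    p_cm = screen_parallax_cm(parallax_percent, screen_width_cm)
    if p_cm > eye_sep_cm:
        return "painful3D: divergence, diplopia risk"
    return "pleasant3D"

# The same 2% offset is harmless on a TV but diverges on a cinema screen:
print(viewer_warning(2.0, 100.0))   # 2 cm on a 1 m TV -> pleasant3D
print(viewer_warning(2.0, 1000.0))  # 20 cm on a 10 m screen -> painful3D
```

This is why the app simulates the same shot on a phone, a TV and an IMAX screen: identical footage produces very different viewer experiences at each size.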

Source: Cine3D

Safe Graphic Insertion for Stereographic Material

This white paper describes a proposed format and methodology to ensure that overlays applied to stereographic material match the consistent quality currently enjoyed and expected by the viewing public in 2D.

By Jonathan Jenkyn and Simon Hailes, Screen Subtitling Systems

Subtitling for Stereographic Media

This white paper describes and summarises the stereographic delivery mechanisms being considered and their implications for subtitling. It also summarises some proposed recommendations and solutions with regard to these mechanisms and implications.

Source: Screen Subtitling Systems

BBC to Include 3D in Super Hi-Vision Tests

BBC R&D is to carry out stereo 3D tests as part of the live Super Hi-Vision (SHV) transmission trials it is conducting with NHK this week. The corporation’s research team will carry out the experiment as part of the collaborative i3DLive project, which aims to develop tools for the multi-camera capture of live action, allowing a virtual camera to synthesise views from any angle, or to generate stereoscopic data.

The test transmission of the 7680 x 4320 pixel Super Hi-Vision signal is being done to ensure that the corporation can get signals from London to Japan.

Lead technologist Oliver Grau, the man carrying out the 3D experiments, said: “During the Super Hi-Vision test transmissions BBC Research & Development will carry out two experiments. The first aims to generate 3D and special effects such as crane shots and Steadicam moves from the footage generated by a static Super Hi-Vision camera.

“The second will attempt to add stereo to Super Hi-Vision using one Super Hi-Vision camera and 10 High Definition cameras placed strategically around the studio.”

3D footage will be for capture only and will not be transmitted.

i3DLive builds on the work of the earlier ORIGAMI and iview projects, which set out to develop new tools for the creation of high-quality scenes incorporating both real and virtual objects.

The BBC hopes to show SHV footage during the 2012 Olympic Games.

Roger Mosey, the BBC’s director of London 2012, said earlier this year: “The Super High Vision screen in your home is many years away. But BBC R&D will be carrying out tests this September with a view to showing SHV footage during the Games at one or two cinema-style locations.”

By Will Strauss, Broadcast

Decoding the 3D Standards Debate

Having agreed the first specifications for 3DTV broadcasts, the DVB has begun thinking about further phases of standardisation more applicable to the needs of public service broadcasters and those of multiview autostereoscopic displays.

As more broadcasters gear up to launch 3DTV channels the DVB Project has responded by agreeing the commercial requirements for the format. The group is also preparing the technical specification itself and according to DVB 3DTV Chair David Wood, “this is on the home straight.”

While the group has opted for the Frame Compatible Plano-Stereoscopic System, already in operation with Sky, Orange, ESPN and DirecTV (and with Canal+ when it launches), there could be room for another set of standards within the DVB Project to meet other requirements, notably those of PSBs.

“Other DVB members have expressed the potential need for a set of standards that are appropriate to a different set of commercial requirements,” states its document. “These commercial requirements are in the process of being discussed and agreed.”

The Frame Compatible system arrays left and right images in a ‘spatial multiplex’ (variously side-by-side, checkerboard or top/bottom), compressed into one HD picture for broadcast (MPEG-2 in the US, MPEG-4 AVC in Europe), and requires only that the customer invest in a 3D-ready TV to decode the images on reception.
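
A side-by-side spatial multiplex can be sketched in a few lines. Plain-Python lists stand in for video frames, and the helper names are mine, not part of any DVB specification; a real encoder would low-pass filter before decimating the columns:

```python
# Sketch of side-by-side frame-compatible packing: each eye's image
# is subsampled to half horizontal resolution and the two halves
# share one frame of the original width.

def pack_side_by_side(left, right):
    """Pack two full-width eye views into one frame of the same width,
    keeping every other column of each (half horizontal resolution)."""
    return [lrow[::2] + rrow[::2] for lrow, rrow in zip(left, right)]

def unpack_side_by_side(packed):
    """Split a side-by-side frame back into its half-resolution eyes."""
    half = len(packed[0]) // 2
    left = [row[:half] for row in packed]
    right = [row[half:] for row in packed]
    return left, right

# Two 2x4 'eye' frames pack into a single 2x4 frame; each eye keeps
# only half its columns, which is the resolution penalty critics cite.
L = [[1, 2, 3, 4], [5, 6, 7, 8]]
R = [[9, 10, 11, 12], [13, 14, 15, 16]]
print(pack_side_by_side(L, R))  # [[1, 3, 9, 11], [5, 7, 13, 15]]
```

Because the packed frame is an ordinary HD picture, it passes through existing encoders, muxes and set-top boxes untouched; only the display needs to know how to unpack it.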

In the longer term the industry would prefer to move to a Service Compatible format based on MPEG MVC in which the left eye is broadcast as a standard MPEG AVC picture and the data for the right eye is derived using the left as reference (or 2D+Delta).

“Frame compatible can address existing HD STBs and TVs and is a very quick means for broadcasters to deliver 3D to the consumer,” says Simon Gauntlett, DTG technology director. “The downside is that because each image is compressed by half of the horizontal resolution you are getting a lower resolution of 3D.”

The side-by-side version favoured by Sky cannot be viewed on a 2D set and requires broadcasters to operate two separate services (for 2D and 3D), which for Sky necessitates two satellite transponders. A Service Compatible system, on the other hand, would require new decoders within the STB but would mean that the right-eye data is simply ignored by a 2D set.

“Colour television would not have got off the ground if the picture wasn’t able to be viewed on legacy black and white sets, and it’s a similar situation here,” observes Bill Foster, senior technology consultant at Futuresource Consulting.

The MPEG AVC delta signal still incurs an overhead, currently anything from 30% to 70%, with the variation largely dependent on the type of content.

“With Frame Compatible methods, if you have a 3D and an HD service you would need 8Mbps for each of the services whereas with the 2D+Delta model you’d require 8Mbps plus 3-4Mbps in extra capacity so it is more spectrally efficient,” says Gauntlett.

“Broadcasters need to service new 3D and legacy 2D viewers, therefore, saving costs by reducing the bandwidth needed is one of the key factors,” agrees Manuel Gutierrez Novelo, CEO of TD Vision, which claims to have invented and patented (in 2003) the 2D+Delta method and has its IP incorporated into the MVC standard. “Instead of using 200% (100% to service 2D legacy and 100% to send a frame-compatible 3D viewed by a small part of the population) we optimise the 2D+Delta to service both with around 130%-140% bandwidth.”
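
The percentages in these two quotes are easy to check. A small sketch using the article's 8 Mbps HD figure and the quoted 30-70% delta overhead range; the helper names are assumptions, not anyone's published model:

```python
# Back-of-envelope check of the bandwidth figures quoted above.
# All rates in Mbps; the 30-70% delta overhead range comes from the
# article, the function names are illustrative.

BASE_2D = 8.0  # one HD service, as quoted by Gauntlett

def simulcast_total(base=BASE_2D):
    """Separate 2D and frame-compatible 3D services: 100% + 100%."""
    return base + base

def service_compatible_total(delta_overhead, base=BASE_2D):
    """2D+Delta: the 2D stream plus a fractional delta stream."""
    return base * (1.0 + delta_overhead)

print(simulcast_total())              # 16.0 Mbps, i.e. 200%
print(service_compatible_total(0.4))  # 11.2 Mbps, i.e. 140% of one service
```

With a 40% delta the service-compatible route carries both audiences in 11.2 Mbps against 16 Mbps for simulcast, which is exactly TD Vision's "around 130%-140%" claim.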

The Service Compatible 3D (2D+Delta) can be implemented in many ways (Source: TD Vision)

Infrastructure Upgrades
Broadcasters are betting, not without reason, that the amount of overhead will reduce significantly over time as compression technologies improve. MPEG-2 SD for instance was introduced at 15Mbps, but can be below 2Mbps these days.

“MPEG MVC is the standard used by Blu-ray and is a strong contender for broadcasters because it’s apolitical,” says Foster. “There would still be a license fee but it won’t be a proprietary system with which broadcasters are uncomfortable.”

However there are still a number of questions as to how widely adopted 2D+Delta will become. According to Matthew Goldman, Head of Compression Technology, Solution Area TV, Ericsson, “it is favoured by bandwidth-restricted networks such as DTT to overcome the inability of these networks to support separate 2D and 3D versions of the same service but there is currently no industry-wide agreement on how to provide the ‘Delta’. This is a further obstacle in DTT where public broadcasters cannot fully control the STB population.”

To optimise 2D+Delta for transmission it would be useful to have MPEG MVC adopted. However, what is less clear is how large the demand for 2D plus difference is going to be.

“In both contribution and transmission, the jury is still out,” says Mark Cronin, Technology Director, Arqiva Satellite & Media. “It may be that the developments in L+R mean that 2D plus difference has less advantage than was originally hoped for.”

While the DVB recognises the main MPEG candidate for a 2D Service Compatible signal to be MVC, there is an MPEG call for proposals for a more efficient system. “One of the issues for DVB members could be whether to run with MVC, like Blu-ray, or wait, however many years, for a more efficient system,” reports Wood.

However the move to full 3D channels rather than isolated events based on MPEG MVC would require a massive upgrade to infrastructure beginning with contribution circuits but eventually encompassing the broadcast centre. “It will be a challenge to manage production of a whole channel including continuity and commercials remotely from an OB truck where most 3D events are currently produced,” notes Foster. “You’d want the incoming left and right feeds to be ingested within a stereo-capable broadcast HQ rather than created side by side at the point of capture.”

The extension of this is that MVC – or multiview video coding – can be ramped up to accommodate more than today’s stereoscopic 3DTV’s two views. This will come in particularly handy when auto-stereoscopic TVs go mainstream and MVC could potentially be required to handle fifteen or more simultaneous views – something it is in theory capable of (the first iteration will use the stereo profile).

Realtime MVC Encoders
“Quite how content will be created to cover multiple views is something no-one’s really started to plan for,” Foster observes. A more immediate concern for broadcasters considering a 3DTV launch based on MPEG MVC is that there are, to the best of TVBEurope’s knowledge, no commercially available realtime MVC encoders.

TD Vision is perhaps closest, debuting an nVidia PC-based realtime encoding platform at NAB2010 and launching the TDV 3D Quantum Licensing Early Adoption Programme along with Magnum Semiconductors last January to provide “a reference design for a broadcast quality realtime stereoscopic encoder,” says Novelo. “It delivers an amazing 1920x1080@60fps per eye using 2D+Delta in around 12-16Mbps, a format that is not even currently supported by 3D Blu-ray discs.”

Harmonic says it already has multichannel HD capability within the Electra 8000 which could potentially cater for dual channel or MVC encoded 3D content and that it is listening to customer requirements to understand how it should develop this capability.

“VoD delivery can extend the prospects for frame compatible mode through the use of 1080p 50/60, provided an HDMI connection can support the increased clock rate,” says Ian Trow, the company’s director of broadcast solutions. “This overcomes the realtime bandwidth overhead associated with broadcast services. However, whilst it’s relatively simple to incorporate 3D into VOD, which works well for 3D movies, it doesn’t address live sports.

“When we look beyond the frame compatible mode this raises the issue of broadcasters’ desire to improve quality and address 2D/3D compatibility, and MVC can address these needs,” says Trow. “But this requires significant standardisation work and broadcaster investment at both the head-end and STB to make it a reality.”

Another crucial issue is the acceptance of 3D shot material in a 2D environment since the two production techniques are very different. The prospect of achieving greater compression efficiency through combining services has previously cropped up in SVC encoding within MPEG-2, allowing a base layer to be built on through additional enhancement layers.

“A typical scenario would be a SD base layer that is enhanced to produce an HD channel,” explains Trow. “While such systems can achieve modest compression gains, the increased system complexity and the restrictions in terms of frame rate and compression standards between the base and enhancement layer have deterred broadcasters from adopting such a strategy to date.”

Standards in the Works
Dovetailing with the work of the DVB is SMPTE’s development of a file format needed for producing and transporting 3DTV in the studio. This will probably need to include all Frame Compatible and other formats.

“Meanwhile the HDMI consortium has agreed formats for inputs to 3D displays with the introduction of HDMI v1.4a, and the 3D@Home consortium is seeking to get display manufacturers to agree to an interoperable shutter glass signalling system,” says Wood.

An alternative and probably faster route to market is ‘universal’ glasses, like universal remotes, that work with any brand of TV; XpanD is about to launch these. The telecommunications standardisation body ITU-T has also been busy. Its classification system contains several ‘Generations’ of 3D, and four levels in the first generation, on which it will initially focus.

“The four ‘Generations’ are four new technologies, which may emerge as a result of research at intervals of possibly ten years,” explains Wood. “Needless to say, this is a guess, but you have to start somewhere. The first generation will be the two channel world of left and right signals, used with a display and glasses. The second generation will be the multiview environment with auto-stereoscopic displays. The third may involve multiview in both the horizontal and vertical directions. The fourth may involve recording a continuous object wave passing through a given area.”

With that we’re into the realm of volumetric or holographic displays—and that’s a whole different ball game.

By Adrian Pennington, TVB Europe

AMWA/EBU FIMS Reports First Results and Roadmap

During IBC, the EBU and AMWA organised a round-table session on SOA and the Framework for Interoperable Media Services (FIMS) project, jointly coordinated by the two organisations. During the round table, which followed a successful meeting with Amberfin, BBC, Cinegy, IBM and Sony, the respondents to its request for technology, FIMS announced the following roadmap for developing specifications.

Phase 1 will consist of a common service definition format, to be produced once the high-level architecture and framework described in the request for technology has been refined. This framework will cover all system and management requirements (service management, awareness and communication, content and time awareness, security, framework extension). It will be built upon the IBM (SOA-based Media Services Framework) and Sony (Media SOA Framework) proposals, with the valued experience of service developers and users from Amberfin, BBC and Cinegy. The project will also address container issues, seeking maximum compatibility with AAF and MXF.

Phase 2 will subsequently investigate the possibility of defining common services using the framework developed in phase 1. All respondents have already suggested key services. It is very encouraging to see that these contributions have been received in the framework of the AMWA licensing policy, which is compensation free. This is essential to the successful standardisation of the core technology on which the project is working.

The work will continue in a public manner via the FIMS wiki and open meetings that will allow third parties to also actively contribute to this important work. Direct participation will be subject to the signature of a ‘participation agreement’ to safeguard the favourable licensing conditions under which FIMS operates.

Source: EBU

CRC Provides Depth Estimation from Stereoscopic Video

Depth estimation is indispensable for a variety of tasks associated with the processing, editing, and displaying of stereoscopic 3D video. Stereoscopic video consists of two image sequences: one for the left eye and the other for the right eye. The pixel-by-pixel displacement between the left- and right-eye images is directly related to the depth of objects that the human visual system perceives.

Communications Research Centre Canada (CRC) has developed a fast algorithm for automatic depth estimation from stereoscopic video, which generates accurate and dense depth maps. The disparity map generator calculates disparity values per block, with blocks as small as 2x2 pixels. It provides a dense disparity map that includes the maximum and minimum disparity values of the scene, and can also provide vertical disparity if necessary.
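
CRC has not published its algorithm, but the block-based idea it describes, finding for each small block of the left image the horizontal shift that best matches the right image, can be sketched as follows; block size, search range and the SAD matching cost are illustrative choices, not CRC's:

```python
# Minimal block-matching sketch of disparity estimation on one
# scanline pair. Not CRC's algorithm: block size, search range and
# the sum-of-absolute-differences cost are illustrative choices.

def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def block_disparity(left_row, right_row, block=2, max_disp=4):
    """For each block of the left row, find the horizontal shift into
    the right row that minimises SAD; returns one disparity per block."""
    disparities = []
    for start in range(0, len(left_row) - block + 1, block):
        ref = left_row[start:start + block]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if start - d < 0:
                break  # no room to search past the left border
            cand = right_row[start - d:start - d + block]
            cost = sad(ref, cand)
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# The right row is the left row shifted by 2 pixels; interior blocks
# recover that shift, while the border block has no room to search.
left = [0, 0, 10, 20, 30, 40, 50, 60]
right = [10, 20, 30, 40, 50, 60, 0, 0]
print(block_disparity(left, right))  # [0, 2, 2, 2]
```

The per-block disparities are what a dense depth map is built from: larger horizontal shifts correspond to objects perceived closer to (or further from) the viewer.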

The disparity map generator is implemented in C/C++ and is in the process of being implemented in VHDL. Its applications include text/graphic integration and editing in stereoscopic video, adjustment of displayed depth, and stereoscopic to multi-view video conversion.

3D TV Bombshell: Future Tech Revealed at 3D Summit

At the 3rd Annual 3D Entertainment Summit, a two-day conference of key 3D industry players that ended last week, Josh Greer, president and co-founder of 3D technology company RealD, revealed potentially game-changing information about his company’s new 3D home display technology.

During “The Future Drivers in the 3D Ecosystem” panel, Greer announced that RealD technology licensees will be able to offer the first “Full HD” passive 3D HDTVs in 2011, allowing the use of inexpensive, lightweight glasses (like the ones you’re provided when visiting a 3D movie theater).

Current consumer 3D TVs require battery-powered active shutter 3D glasses that retail for $130 to $200 each. All 2010-model 3D TVs must have a built-in or add-on infrared emitter to sync the TV to the shutter-type active glasses.

The RealD system uses patented ZScreen technology, an electro-optical system built into the front of a flat panel that very rapidly changes the light from clockwise circular polarization to counterclockwise and back again.

The RealD circular polarized passive glasses act like shutters: the left image reaches the left eye while the right eye is blacked out, then the polarization flips so the right-eye view reaches the right eye while the left eye is blacked out. Images are displayed sequentially on the flat panel, just like on current 3D TVs.

The major advantages of passive 3D eyewear are: very light weight, no battery to recharge or replace, and low cost (less than one dollar apiece). Eliminating the need for an infrared sync emitter within or attached to the TV has a major benefit: no longer will the 3D effect be lost if the beam is blocked by someone walking in front of the set or if a viewer turns his or her head away from the emitter.

Eyeglass maker Luxottica will offer its Oakley brand of 3D-compatible passive glasses in uncorrected and prescription versions later this year.

To date, the only passive large-screen flat panels available in the US are expensive (>$6000) 46″ commercial monitors from JVC and Hyundai, and these systems are only capable of one-half HD resolution at 1920x540 (versus 1920x1080 for Full HD).

In other remarks Greer said he expects that active shutter 3D TVs will continue to be sold alongside the new 3D “passive glasses” sets for the next 4-5 years.

Passive “Full HD” using the RealD technology transfers the bulk of the cost of the 3D feature from the active glasses to the TV. No word yet on the cost premium for these new TVs or which of the RealD licensed TV makers (Sony, JVC, Samsung, Toshiba and Panasonic) will be the first to offer this new class of 3D flat panel HDTVs.

We expect a US introduction at the 2011 International Consumer Electronics Show in January, with availability next spring.

By Michael Fremer, HD Guru

Is 3D Over the Internet the Next Big Opportunity?

Distribution of 3D content over the broadcast infrastructure and via Blu-ray packaged media has begun. However, anyone wanting over-the-top streamed (or even downloaded) 3D content would be hard pressed to find a source. That could change if a new company called General 3D (New York, NY) is successful.

In an exclusive meeting with Insight Media, General 3D CEO Keith Fredericks sat down with us to explain the company’s technology, strategy and near-term plans. To start, Fredericks is a long-time hand in the 3D world. He was most recently CTO of Newsight, but had lived through several previous incarnations of the company, too. Newsight focused on autostereoscopic 3D, but General 3D will focus primarily on glasses-based solutions.

Newsight finally folded last April and Fredericks began to look for the next opportunity in 3D. He had a strong relationship with the former Newsight group based in Germany, so together they decided to focus on streaming 3D over the web. With some seed capital, the company has developed some very interesting technology, which the company will publicly debut on Oct. 10, 2010 as a live 3D streaming demo. The company has nine employees.

The new service is called 3DFEED (the Es are supposed to be reversed, but my keyboard does not like to do that). Fredericks hopes to grow this into a 3D channel built upon aggregated 3D content and delivered to consumers via its proprietary 3D player, to be built into next-generation web browsers.

In fact, focusing on these next-generation browsers, which will be based on OpenGL and HTML5, is key to their strategy. The Firefox, Chrome and Safari next-generation browsers are now in alpha- or beta-level development, which is good enough for General 3D’s purposes. The big unknown is Microsoft: what will it do with its Silverlight streaming technology and its support for 3D?

Currently, videos that play in a browser use a plug-in, like a Flash player. Next generation browsers will allow integration of the video player into the browser so there is no plug-in to download. The 3D player that General 3D is developing will be integrated into these browsers, allowing for the streaming of 3D videos and even 3D graphics.

The horsepower needed to decode, transcode and play back 3D to various output devices is not insignificant, which is why General 3D will focus on PC-based solutions to start; migrating this to a set-top box, Blu-ray player or, eventually, the 3DTV is not unreasonable.

Fredericks explained that the player can be used to view the web page in 3D, too. That allows users to keep their glasses on as they scroll around the net looking at web sites in 3D and viewing thumbnails of 3D content in 3D. He also wants the content to be scalable, which he defines as being able to display on multiple platforms with various 3D formats and resolutions.

Fredericks also gave us a demo of the streaming capability in our offices. He simply connected to the Internet and began to stream 3D content from his web site to his 3D laptop. We have a relatively fast download capability in the office (up to 10 Mbps), and it was not clear at what bit rate and resolution the content was being delivered, but the end result was impressive. Click on an icon of a 3D movie and it almost immediately starts playing in 3D, with no caching or waiting. And the image quality seemed fine.

General 3D will initially build up its site and service for early adopters and developers, who can help build out the capabilities. Then, advertisers may start to become interested if enough content is aggregated and as the next-generation web browsers get installed on millions of PCs. The company has time to build out this capability and it will be fun to watch.

By Chris Chinnock, Display Daily

Sony, Imax Tout Lasers in Cinema

Cinema companies, including Imax and Sony, are creating an industry group to ease the regulatory climate facing next-generation projection systems that are based on laser-light sources. Laser is the latest digital-cinema buzzword, offering the industry several advantages over traditional bulb technology, including higher light output, which is a critical element for 3D and large-format projection.

Imax was the first to bet on lasers and is currently finalizing an equity investment in start-up Laser Light Engines to co-develop a laser-based system that could illuminate its largest screens and retrofit its existing network. Other movie-projector manufacturers are expected to shift to lasers as well if lab tests prove successful in the real world.

However, lasers add a regulatory wrinkle that the new organization, the Laser Illuminated Projection Association, or LIPA, will try to iron out. Light shows and other displays that use lasers are regulated by the U.S. Food and Drug Administration's Center for Devices and Radiological Health, which is in charge of ensuring laser equipment safety.

When handled "improperly," lasers can cause eye injury, skin burns and fire, and can distract pilots and drivers. "While the lasers themselves can cause injuries, laser light shows that are produced in accordance with FDA regulations keep hazardous lasers away from the audience," FDA said.

The manufacturers of lasers and the operators of laser-light equipment require variances from the FDA, which can take several months to get approved.

Bill Beck is co-founder of Laser Light Engines and, along with Imax, Sony and other cinema-industry players, is a driving force behind LIPA's creation.

"What we're doing is not a laser-light show even though it's technically classified as a laser-light show," Mr. Beck said. "Our approach was to develop a new category called laser-illuminated projection where the science and the application are completely different than if you're just aiming a laser beam out into a crowd at a rock concert." Mr. Beck acknowledged that high-power lasers need regulating, but the fact is xenon lamps can be dangerous, too.

Peter Lude, senior vice-president of solutions engineering at Sony, said the company has been studying various laser technologies for a couple of years, and while it hasn't committed to making a laser-illuminated projector yet, "we think it's more a matter of 'when' than a matter of 'if.'" He said the current regulations aren't inappropriate for what they were originally intended to cover, but they didn't anticipate the application Sony and others have in mind now.

"It's really no longer laser light coming out of the projector system," he said. The goal of the group is to keep the public safe with rules that aren't much more burdensome than those already in place to protect consumers in today's movie houses. An FDA official said the agency is aware of developments in laser-projection cinema systems and that it is in discussions with the industry trade group and is "taking its input into account as we consider various options."

By Andy Georgiades, The Wall Street Journal

Kodak Announces Revolutionary 3D Digital Movie Projection Technology

Eastman Kodak Company announced that it has developed revolutionary laser projection technology that delivers both 2D and bright 3D images within today's stringent technical standards for digital motion picture presentation. Kodak is currently demonstrating a prototype projector incorporating the KODAK Laser Projection Technology to key industry leaders.

According to Les Moore, Kodak's chief operating officer for Digital Cinema, the KODAK Laser Projection Technology is a key ingredient in potential improvements in digital cinema picture quality for future movie-goers. The technology offers the potential for a reduction in total cost of ownership through a cost-conscious design, combined with the efficiencies gained from laser illumination systems, including lower energy consumption and the anticipated long lifetime of lasers.

Kodak is currently in discussions with potential licensing partners to commercialize a projector product using the KODAK Laser Projection Technology.

Kodak has invited industry participants to a series of demonstrations of the projector at the company's in-house theater, Theater on the Ridge, located in Rochester, NY. For more information about Kodak's 3D laser projector and technology demonstrations, please visit

Source: Kodak

Teranex Introduces 3D Applications

Teranex is introducing a suite of 3D applications for the VC100 product family. Engineered as a dual channel architecture, the VC100 product family is uniquely suited for stereoscopic processing.

The programmable platform can evolve with the transition to 3D to provide a feature set that meets the needs of the industry. Existing VC100 product family customers are able to take advantage of the new 3D applications without the need to install new equipment, making the switch to 3D very simple.

The three applications available are VC1-3DSP for stereoscopic processing allowing 2D to 3D conversion, VC1-3D-CCP for capture and correction processing of 3D content, and VC1-3DE for encoding and decoding of various 3D formats.

Advanced stereoscopic processing is available with the VC1-3DSP software application, used to transition from 2D to 3D, enabling broadcasters to take standard 2D programming and simulate 3D programmes. VC1-3DSP gives clients the ability to mix 2D content with 3D and create seamless 3D content, including the addition of 3D logos.

Capture and correction processing is available with the VC1-3D-CCP application which includes format and frame rate conversion, positional and axial rotation adjustments to compensate for mechanical, optical and electronic stereoscopic camera misalignments, and includes 3D logo insertion.

The VC1-3DE application provides encoding and decoding of 3D streams, and supports all popular 3D formats as well as some optional proprietary transmission formats.

Source: Teranex

Cel-Soft to Introduce Stereoscopic Analyser

Compatible with all current versions of Microsoft Windows, Cel-Soft's Cel-Scope3D allows stereoscopic camera alignment to be performed quickly and confidently so that the 3D is accurate from the moment of capture. Running on a suitably powerful PC platform, Cel-Scope3D can display left and right channels simultaneously plus actual depth dynamics.

Its display windows can each be set to show the usual waveform, vectorscope and histogram graphics, as well as differences in video parameters between the two channels. Geometry issues can be easily identified using built-in real-time image manipulation. Quality-control tests can be performed on live stereoscopic video sources in any SD, HD or 2K format from industry-standard capture cards or Firewire inputs, or alternatively from file playback.

Screenshot of Cel-Scope3D

Cel-Scope3D is designed for use both on set with live inputs and in post-production, reviewing and playing back 3D media files. Captured footage or edits in a wide range of file formats can be viewed and assessed in real time. Disparities are analysed and displayed as clear and intelligible graphics on 2D or 3D monitors. Anaglyph display, touch-screen control and auto-alarm are all supported. Displays can be scaled and arranged as six or eight windows on one or two PC monitors and also on a 3D monitor.

The most important setting for the stereographer or operator is the target depth budget for the production, expressed as a percentage of screen width or in pixels. The live 3D analysis displays then include depth analysis with depth budget markers, vertical disparity, a depth histogram and a vertical-disparities histogram. These can be colour-coded to correspond to the false colour used on the depth map, for easy problem-area recognition.
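
The depth-budget setting described above amounts to a unit conversion and a comparison. A sketch with illustrative names, not Cel-Scope3D's actual interface:

```python
# Sketch of a depth-budget check: the operator assigns a budget as a
# percentage of screen width (or pixels), and measured disparities
# are tested against it. Names are illustrative, not Cel-Soft's API.

def budget_pixels(budget_percent, frame_width_px):
    """Convert a depth budget in % of screen width to pixels."""
    return budget_percent / 100.0 * frame_width_px

def out_of_budget(disparities_px, budget_percent, frame_width_px=1920):
    """Return the measured disparities that exceed the depth budget,
    in either direction (behind or in front of the screen)."""
    limit = budget_pixels(budget_percent, frame_width_px)
    return [d for d in disparities_px if abs(d) > limit]

# A 2% budget on a 1920-wide frame allows roughly +/-38 px of
# disparity; anything beyond it would be flagged on the display.
print(out_of_budget([10, -20, 45, -60], 2.0))  # [45, -60]
```

Flagged values are the ones an analyser would paint in warning colours on the depth map, telling the operator which shots push depth beyond the comfortable range chosen for the production.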

Left/right focus-difference and colour-balance-difference displays allow camera matching to be checked easily. Each display mode is configured via a simple menu. Up to 20 configurations can be stored on preset buttons for fast recall. Embedded audio, stereo or multi-channel surround-sound can also be extracted, displayed, monitored and checked alongside the video. Logging and GPI options enable Cel-Scope3D to monitor content at any part of the 3D distribution chain.

An optional 3D recording facility allows dual-stream 3D to be captured direct to hard disk in a number of alternative formats.

Source: Live Production

TestVid's 3D Stereoscopic Test Sequences

TestVid will be launching the world's first comprehensive set of 3D stereoscopic video test sequences designed specifically for testing the quality of video codecs: T3D003 Europe.

There is very little 3D content available - and most of what exists comprises commercial movies, which are only available compressed at relatively modest bit-rates and cannot be freely used for demos and tests. T3D003 Europe solves this problem: it comprises more than 60 pairs of uncompressed Left + Right video sequences, in HD and 2K D-Cinema formats, with usage rights for tests, trade shows and public demos - even use on websites.

T3D003 Europe is intended both for broadcasters and for broadcast equipment manufacturers (such as server or encoder companies), to give full test coverage of just about any type of 3D video feature that an encoder is likely to encounter.

As well as the video itself, the 3D Tvids are fully documented, so it is easy to find video that will stress a 3D codec in many ways, with a wide range of subjects including 'difficult' video such as fast scene changes, reflections, high detail, night-time highlights and hand-held camera - plus issues specific to 3D such as interocular distance and convergence, alignment, matching camera parameters and lenses.

Source: TestVid

Binocle Starts Commercialization of the Disparity Tagger

At IBC Amsterdam, held from 9 to 14 September 2010, French company Binocle will start the commercialization of its real-time high-definition stereoscopic correction unit, the DisparityTagger. A prototype shown at NAB in April 2010 was a major success among visitors.

One of the challenges in mastering 3D cinema and television is the difficulty of transmitting deformation-free images, despite the extreme care taken during 3D shooting. The DisparityTagger allows 3D TV viewers to experience corrected 3D video, stripped of vertical disparities, when watching stereoscopic 3D live broadcasts. Vertical disparities, which arise from geometric deformations intrinsic to 3D shooting, cause eyestrain and visual discomfort when viewing stereoscopic images.

The DisparityTagger is a universal tool for monitoring stereoscopic shots, automatically detecting in real time every issue that can arise while shooting. Moreover - with the new SDI out capability - the DisparityTagger can automatically correct the stereoscopic streams on the fly, delivering a shot free of vertical disparities.
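Binocle's detection algorithm is proprietary, but the underlying measurement is straightforward: at matched feature points, any vertical offset between the left and right views is disparity the viewer's eyes cannot fuse. A minimal sketch, assuming matched point pairs are already available and using an illustrative 2-pixel tolerance (not a Binocle parameter):

```python
def vertical_disparities(matches):
    """Given (left_point, right_point) pairs as ((xl, yl), (xr, yr)),
    return the vertical offset yr - yl for each matched feature."""
    return [yr - yl for (_, yl), (_, yr) in matches]

def flag_vertical_disparity(matches, threshold_px=2.0):
    """Flag a shot when the mean absolute vertical disparity exceeds
    the tolerance. In a real corrector the per-point offsets would
    instead drive a geometric (rotation/shift/keystone) correction."""
    dv = vertical_disparities(matches)
    mean_abs = sum(abs(d) for d in dv) / len(dv)
    return mean_abs > threshold_px
```

In practice the feature matching itself (and the model fitted to the offsets) is where the real engineering lies; this sketch only shows what "vertical disparity" quantifies.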

An off-line version (DisparityKiller) is also available for post-production.

Screenshot of the DisparityTagger

The DisparityTagger is the result of 12 years of stereoscopic shooting experience at Binocle, and 4 years of research by the INRIA research institute in Grenoble, the “French Silicon Valley”. High-definition real-time processing is made possible by the extraordinary computational power of NVIDIA Quadro SDI solutions.

At IBC 2009, Binocle received the SVG Sports Award for its coverage of the French Open at Roland Garros in 3D. Binocle is also responsible for the stereoscopy of the first French stereoscopic feature film, Derriere les murs. Binocle is part of the 3DLive research consortium, funded by the National Research Agency (ANR) in France.

Binocle today offers the tool that was missing from the 3D image production line, one that will help foster a new art form for television and cinema.

IBC visitors will be able to see demonstrations and order the DisparityTagger at IBC booth (11.C50b).

Binocle will also be at the following exhibitions:
CINEC Munich - 18-20 September - Stand 2-E34
SATIS Paris - 19-21 October

Sisvel Technology Shows New 3D Tile Format

At IFA 2010 Sisvel Technology is demonstrating the 3D Tile Format, a new solution to improve 3D broadcasting through a creative way of formatting two stereoscopic images. This new technology enables delivery of HD 3D content with a higher quality than current solutions and allows broadcasters to transmit a single video service to 2D and 3D audiences. Backwards compatibility of the 3D Tile Format guarantees that consumers with a traditional 2D TV set can still enjoy the service in 2D.

Sisvel Technology's 3D Tile Format

Current Issues in 3D Broadcasting
The idea behind 3D video is the transmission of two separate images (left and right), packed in a single stream, to reproduce human stereoscopic vision. Current systems squeeze the left and right images into a single high-definition frame so that the service provider can reuse part of the existing production infrastructure and the entire distribution infrastructure. This approach not only causes a loss in video quality, halving the vertical or horizontal resolution of the source image, but also makes the 3D transmission unsuitable for viewing on 2D TV receivers. The challenge was to avoid the drawbacks of these current frame-packing techniques.

Sisvel Technology's Solution to Improve 3D Broadcasting
To overcome these drawbacks in 3D broadcasting, Sisvel Technology has created the 3D Tile Format, a solution allowing the integrated storage of two stereoscopic 720p frames in a single 1080p frame. The reconstructed left and right pictures keep their original resolution and are not affected by an imbalance between vertical and horizontal resolution. The layout of the images within the Tile Format, together with standards-compliant technologies developed by Sisvel Technology, seamlessly allows the decoding of 2D video from the 3D transmission.
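The pixel arithmetic explains why this works: two 720p views occupy less area than one 1080p frame, so both can be carried at full resolution if the second view is cut into tiles. The layout sketched in the comments is an illustrative assumption, not Sisvel's exact specification:

```python
# Two 720p frames inside one 1080p frame: the capacity arithmetic.
V_W, V_H = 1280, 720   # each 720p stereoscopic view
F_W, F_H = 1920, 1080  # 1080p container frame

pixels_needed = 2 * V_W * V_H   # both views at full resolution
pixels_available = F_W * F_H

# One possible layout (an assumption for illustration): the left view
# fills the top-left 1280x720 region; the right view is cut into tiles
# that fill the remaining L-shaped area (the 640-pixel-wide column on
# the right plus the 360-pixel-high strip along the bottom).
remaining = pixels_available - V_W * V_H
assert remaining >= V_W * V_H   # the right view fits with no downscaling
```

A 2D decoder can simply crop the top-left 1280×720 region, which is how the format stays backwards compatible with 2D receivers.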

Source: Sisvel Technology

AS-03 MXF Program Delivery Specification

The AMWA has released the AS-03 MXF Program Delivery Specification. It was developed to meet a real need from the PBS network for a single file format to deliver ready-to-air programs to its member stations.

The many stations have a mix of video server makes and models, and PBS did not want a different format for each server. Although the servers could accept MXF files, that specification is very loose; OP1a, for example, is insufficiently constrained to be considered a file delivery specification.

AS-03 describes a vendor-neutral subset of the MXF file format to be used for delivery of finished programming from program producers and program distributors to broadcast stations. AS-03 Files are intended to be delivered in their entirety to be cached before playout.

AS-03 constrains the video essence to frame-by-frame interleaving, MPEG-2 or H.264, and the audio to PCM pairs, AC-3 or Dolby E. AS-03 files contain defined sets of metadata for identification of content and for verification of content against program traffic metadata that is delivered separately.

The specification can be further constrained by a “shim”. Each shim provides a set of constraints that reduce the range of variability that may be needed in well-defined categories of applications. These categories may address particular types of programming or programming genres, or the requirements of particular broadcast station groups, for example defining bit rate, aspect ratio and sound essence schemes. In the case of PBS, its shim defines coding rates of 25 Mb/s for MPEG-2 and 18 Mb/s for H.264, among many other parameters.

The development of AS-03 acknowledges the reality that the great flexibility allowed by the MXF standards impedes interoperability between equipment from different vendors. By defining a subset of MXF, AS-03 simplifies the issues around file interchange between equipment.

SES ASTRA Takes 3D Initiative

SES ASTRA announced a new initiative to support the introduction of 3D television in Europe. As part of the initiative, the industry reached a common understanding of the minimum technical specifications for the introduction of 3D television and broadcasting. The initiative is backed by SES ASTRA and major European broadcasters including public and private TV channels as well as representatives from the consumer electronics industry.

Initial satellite 3D transmissions will use either the side-by-side (for 1080i resolution) or top-bottom (for 720p resolution) format, making them compatible with existing High Definition (HD) set-top boxes. Side-by-side and top-bottom are two transmission formats for 3D satellite broadcasts in which the two pictures are arranged either horizontally or vertically in one frame.

The delivery of two different images for the left and right eye is necessary in order to create a 3D stereoscopic experience for the viewer. Free-to-air 3D services will be signalled using mechanisms defined under an updated Digital Video Broadcasting (DVB) standard which will allow automatic switching of the display from 2D to 3D and from 3D to 2D broadcasts.
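Both frame-compatible formats fit two views into one HD frame by discarding half the samples of each view. A minimal sketch with frames as lists of pixel rows (a simplification: real broadcast chains low-pass filter before subsampling rather than simply decimating):

```python
def pack_side_by_side(left, right):
    """Pack two frames into one of the same size by keeping every other
    column: each view retains full vertical resolution but loses half
    its horizontal resolution (the 1080i practice described above)."""
    return [l[::2] + r[::2] for l, r in zip(left, right)]

def pack_top_bottom(left, right):
    """Pack two frames by keeping every other row: each view retains
    full horizontal resolution but halves vertically (the 720p practice)."""
    return left[::2] + right[::2]
```

Because the packed result is an ordinary HD frame, it passes through existing HD set-top boxes unchanged; only the display needs to know (via signalling) how to unpack and re-stretch the two halves.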


Discovery Warns Against Conversions

Discovery Communications has taken a strong stance against 2D to 3D conversion for programming intended for its 3D channel launching in the US next January.

“We have played around with conversion and the technology is shaky,” said John Honeycutt, Discovery's EVP & Head of International Business Operations. “Conversion is a concern because some consumers may have an adverse physical reaction when viewing it. The effect is like that of reading a book in a moving car.”

Until April this year Honeycutt was CTO of the business responsible for Discovery’s global strategic media technology planning and instrumental in bringing together Sony Pictures Entertainment and IMAX to launch a joint venture 3D channel with Discovery. Now based in London, Honeycutt is responsible for guiding the operational structure of Discovery's international operations in more than 180 markets including the rollout of its 3D channel.

Discovery has commissioned a range of 3D content for launch which will be shot in native 3D. Sony and Imax will also supply content to the channel.

“My philosophy is this: you have one opportunity to impress consumers and you better not waste that chance,” he said. “When we launched HD Theater (in 2002) the channel was materially a 100% HD network. For us to put out anything substandard goes against the DNA of the brand.

“We’re going to spend an appropriate amount on 3D content without being foolish about it. We are commissioning now to a high standard shooting native 3D. We are not going to put out low quality content. That said, if there is a single shot that just cannot be achieved natively and you can spend the time and effort to generate a good conversion in post then we’ll consider it.

“We have been pushing manufacturers for a 2D to 3D conversion tool which features the ability, like a telecine, to be able to control light and control the image itself but we are not going to take our library or any individual show and pass it through any realtime 3D processing.

“It’s all about the consumer,” he emphasised. “If they turn on the TV and see poor quality they will turn off. On the other side if they experience amazing quality they will want to watch.”

The US launch date of Discovery’s 3D channel was confirmed by Honeycutt as January 2011. It will screen a mix of natural history, space, exploration, engineering, science and technology programmes.

Earlier this month Discovery Communications Europe secured a UK broadcast licence for the Discovery 3D channel. It will most likely secure a berth on Sky's platform, making it the first third-party 3D channel to join Sky 3D, which launches to the home on October 1.

Discovery is pursuing a similar licence in India, one of the world's fastest-growing markets for pay-TV.

“As a business we want to put our content in as many places as possible,” Honeycutt says. “I am looking at appropriate platforms and timing. We have always been a technologically innovative company but we also need to time a 3D launch with when the market is right.”

He added: “It will be a very, very long time before we create every piece of content in 3D. Certain things just don't work in 3D for us. Many of our shows which are filmed in remote and hostile places are hard enough for a cameraman to film in HD let alone in 3D.”

By Adrian Pennington, TVB Europe

CableLabs Publishes First 3D Content Encoding Specification

CableLabs announced that it has published a new specification as a guide for producers, programmers and aggregators of stereoscopic 3D programming. This new specification, called the Content Encoding Profiles 3.0 Specification, details exact requirements for formatting or “panelizing” 3D content into a frame-compatible format for use by cable television systems.

“This spec release marks a great step in the commercialization of 3D TV because it is the first public specification that fully describes the coding and signaling for these top-and-bottom and side-by-side 3D video formats,” said Tony Werner, CTO of Comcast.

“A key part of this specification includes the definitions for signaling 3D content over existing digital video infrastructure that uses either MPEG-2 or MPEG-4 (AVC/H.264) coding,” said Jim Occhiuto, Vice President of Technology and Engineering, Showtime Networks. “This signaling is critical for the receiver/decoder to enable automatic format detection and simplified user experiences when going between 2D and 3D programs,” he added.

The new CEP specification replaces the previous VOD-Content Encoding Profile 2.0 specification that was widely used within the industry. It builds upon the existing 2D coding framework defined by the previous version of the document and will be used as the reference for both 2D and 3D video coding going forward. It represents the first step in a continuing process to define 3D formats for cable television that work with existing equipment and infrastructure. Work continues at CableLabs on standards for future 3D delivery systems that will expand resolution and quality as new equipment becomes available.

Source: CableLabs

ACMA Lets Nine's 3D Trial Go Ahead

Australia's media regulator, the ACMA, will allow a new 3D TV trial to go ahead, after receiving reports from the broadcasters involved in an earlier project. But the trial is likely to be the last in Australia for a while.

The ACMA had initially refused the Nine Network permission for 3D coverage of the National Rugby League (NRL) final, saying it had not received Nine’s conclusions on its rugby State of Origin 3D TV trial. But the regulator confirmed today that reports from both the Nine Network and SBS, which offered the FIFA World Cup football in 3D, had now been received.

As well as Nine’s coverage of the NRL final, the ACMA also announced it had licensed another 3D TV trial, for Seven Network’s coverage of the AFL (Australian Rules) football final. Finals will be played on September 25 (AFL) and October 3 (NRL).

But there will be no more trials for some time after these, while the regulator conducts a review of “certain spectrum, licensing and consumer policy issues associated with 3D TV.” The ACMA has released a discussion paper on 3D TV and other emerging technologies to assist in the review.

“The world-first, free-to-air trials conducted by the Nine Network and SBS demonstrated some of the challenges of 3D TV technology,” said ACMA Chairman Chris Chapman. “There is still much for the industry in Australia and internationally to learn about 3D TV production, transmission and reception. Accordingly, the ACMA is pleased to facilitate these additional trials by the Seven Network and the Nine Network.”

And the ACMA has reiterated that the trials will involve lower-power transmissions than are used for regular free-to-air broadcasts, so not all consumers will be able to receive the transmissions “even if they have a 3D television”.

“The ACMA moved quickly to facilitate and then approve the first round of applications for 3D TV trials and considers it appropriate to facilitate these further trials of this evolving technology in September and October. However, the ACMA is suspending authorisation of any additional trials beyond that, until these policy issues have been considered,” said Chapman.

“Vacant spectrum is only available on a temporary basis and technical standards are still evolving. The discussion paper released by the ACMA today is designed to assist interested parties to address these policy issues and to offer views on the future of 3D TV, as well as other emerging technologies.”

Download reports:
Temporary trials of 3D TV and other emerging technologies
Nine Network Australia

By Rose Major, RapidTV News