This report addresses the market opportunity for 3DTV, reviews 3DTV activities to date, considers the opportunity for on-demand 3D and looks at why service providers are launching services with Frame Compatible technology. It also considers the arguments for and against Service Compatible 3D as a second generation technology.
By John Moulding, Videonet
All 64 matches of the 2014 World Cup in Brazil will be covered in stereo 3D, provided rights holders can profit from the production. While the technology needed to produce successive live 3D broadcasts was proven by FIFA in South Africa, the commercial model remains untested.
“I would love to do all 64 matches in 3D but right now there is no clear model as to how live 3D sports broadcasts are going to get monetised,” Peter Angell, director of production and programming at HBS, told TVB Europe. “The decision will depend on whether broadcasters and cinema operators believe there is a sustainable revenue stream in order to bid for and acquire the rights.”
Budgets for live 3D sports will remain high while a separate 3D production of an event is needed. Angell believes some sports, like tennis, could happily accommodate the same camera angle and editorial coverage for both 2D and 3D, but that this is not applicable to soccer.
“The overwhelming feedback we received was that 3D football production needs to be vastly different to 2D,” he says. “With 3D football, take everything you know about 2D coverage and throw it away. You need cameras in different positions and they are used in a completely different way.”
Angell also stated that dual stream production in which both signals are kept separate rather than compressed side by side throughout the chain was, for HBS at least, the way forward.
“I believe there’s no risk to operating dual stream and the benefit is you keep quality at the highest resolution,” he says. “We’ll definitely be taking a dual stream approach through to transmission going forward.”
By Adrian Pennington, TVB Europe
Thursday, August 26, 2010
Little 3DTV activity is expected from the broadcasters that make up the EBU (European Broadcasting Union) membership over the next three years but that does not stop the organisation planning for the evolution of this format. There is considerable interest in the format among its members, who responded to an EBU survey about their 3D needs with unprecedented speed. One of the reasons for that could be that European broadcasters realize they have an opportunity to shape 3D standards from the very beginning.
“With HDTV there was a well established HD market and standards, and that market was happening with or without Europe,” Andy Quested, Chair of the EBU 3D Group, recalls. “This time around everyone is at the same level and in some cases European service providers are ahead of others. So there is an opportunity to make sure we take a leading role in standards setting and production techniques.”
In a survey of its members, the EBU asked whether current standards meet their requirements for 3DTV, and half answered yes. That figure also means half the EBU membership believes it needs something else to meet its current needs.
What those future needs are has yet to be defined, however. Does it include Service Compatible 3DTV (where the 2D and 3D pictures are broadcast in the same video stream)? “We don’t know yet,” Quested admits. “Service Compatible is not out or in.”
The key to success, in Quested’s opinion, is that the broadcast industry must completely disengage emission (transmission) technology from production technology. This points to the use of discrete left eye and right eye views and low levels of compression for content that is being created and contributed for storage in archives. Quested warns that basing production standards on the current limitations of emission standards would be “completely wrong.”
“This is about making good 3D content and the requirement at the moment is how to create and preserve it longer term. That is something that really has to be established,” he states.
Quested does not believe that 3DTV is a ‘make or break’ service for any platform operator today. “Nobody is worried that they are going to fall behind and lose subscribers and lose audience if they do not have 3D. If that were the case people would be looking at what they want to do next but I don’t think it is. At the moment 3DTV is very interesting but it is a small market and it will be for some time.”
This means the industry has time to get the standards right, while Frame Compatible means that broadcasters like BSkyB can launch services as the work of developing the long-term technologies for 3DTV goes ahead – assuming we do eventually need something beyond Frame Compatible.
Quested notes that while standardisation is never pain-free, the main arguments usually centre around issues of legacy and future proofing and one of the major advantages of the 3D market is that there is no legacy today. As long as Frame Compatible continues to dominate while the standards are being developed, that will remain the case.
Quested believes 2012 is the right time frame for public service broadcasters to think about 3DTV. And he points out that the market for 3DTV does not only cover domestic displays but also big screens and the public venue market, including pubs and clubs, which represents an exciting possibility for broadcasters.
“There is a nice, established market out there for big screens and shared experiences for 3D,” he points out. He thinks big screen experiences will be an important feature of 3DTV in the early days, with people more likely to watch a major event for a few hours in a bar than sit down all evening to watch 3DTV. He believes 3DTV will be an ‘appointment to view’ market for the next 6-9 months.
EBU’s role in the development of 3DTV includes helping to define what public broadcasters want, inputting into the standards creation process and eventually making recommendations on the implementation of 3DTV (based on standards) in Europe. One of the organisation’s goals is to encourage a viable, home-grown 3DTV market in Europe so that the television industry is not reliant on imported content. Another is to ensure a market for programme sales by making sure content can be exchanged between regions, working closely with the key industry suppliers. The organisation is cooperating with SMPTE, DVB and ITU-R on 3D standardisation issues.
By John Moulding, Videonet
Early 3DTV broadcast services are dominated by Frame Compatible last-mile delivery, but a number of alternatives are being talked about. With standards still to be developed and business models to be proven, these are very early days for the new format. In the run-up to IBC, Harmonic, which provides a range of headend solutions including HD encoders suited to Frame Compatible 3DTV, has prepared this FAQ document to offer more insight on the subject.
Why has 3D suddenly become such a hot topic?
The CE industry wants to trigger a TV set replacement cycle, which generates higher margins at the beginning of product life cycles. At the same time, major 3D theatrical releases are playing in theatres internationally, stirring consumer interest. Finally, the rapid adoption of digital cinema technologies by theatres permits conversion of theatres to 3D with comparative ease, versus the film-based technologies of the past.
Why have the movie studios suddenly focused so heavily on 3D movies? After all, they have tried this several times before, each attempt proving only a short fad.
The difference this time is that digital cinema technologies are rapidly replacing film-based delivery systems worldwide. Beyond this, the studios spend a huge amount of money making, shipping, and maintaining film copies of each movie. The necessary standards have been published by SMPTE, the manufacturers of digital cinema equipment have ramped up production, and the studios are assisting in financing the conversion. The time, to use an old expression, is right. Once a theatre has been converted to digital cinema technology, the addition of 3D capabilities is fairly trivial and quite low cost.
What are the types of glasses technologies being used in the home?
There are three basic types of glasses currently in use in the home. The first, and most primitive, are the red/cyan anaglyph glasses. There is a slightly enhanced version that compensates for the way the eye treats red and cyan differently (called diopter glasses).
The second, with much more satisfactory performance, are circular polarized glasses. What is currently missing is standardization as to which eye gets which polarization; not a crucial issue - until you have to buy replacement glasses.
The third type are active shutter glasses, in which the display mounts a transmitter which signals the glasses (via IR or RF) to block one eye at a time in sync with the display. These glasses are totally immune to the head movement artifacts which affect polarized glasses. They are somewhat more expensive and require batteries.
How are two streams of HD pictures carried through a single transport?
Pairs of images may be “multiplexed” together in several different ways. Each method has its pros and cons. First, spatially multiplexed image formats include “Side by Side,” “Top and Bottom,” “Checkerboard,” “Line Interleave,” and “Column Interleave.” These result in an up to 50% reduction in picture resolution, but if well done are perfectly acceptable to viewers. These techniques are properly termed “Frame Compatible.”
Second, temporally multiplexed image formats termed “Time Interleave” or “Page Flip” result in only one half of the temporal rate being delivered. With movie content at 24 frames a second and 48 Hz systems being feasible, this method can be used for movies. For full motion video, however, especially sports, it would result in lowering the 60 frames per second image to 30 frames per eye. This is generally not acceptable to viewers.
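To make the spatial methods concrete, here is a minimal NumPy sketch of “Side by Side” and “Top and Bottom” packing. It is purely illustrative: the function names are invented for this example, and a real encoder low-pass filters before decimating rather than simply dropping every other sample.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Halve each view horizontally and place them next to each other.

    Illustrative only: real encoders filter before decimating instead of
    simply dropping every other column as done here.
    """
    return np.hstack([left[:, ::2], right[:, ::2]])

def pack_top_and_bottom(left, right):
    """Halve each view vertically and stack them."""
    return np.vstack([left[::2, :], right[::2, :]])

# Two dummy 1080x1920 luma "views"
left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.full((1080, 1920), 255, dtype=np.uint8)

sbs = pack_side_by_side(left, right)
tab = pack_top_and_bottom(left, right)

# Both packed frames are ordinary 1080x1920 HD rasters, which is exactly
# why they pass through existing 2D infrastructure untouched.
print(sbs.shape, tab.shape)
```

The halved resolution per eye, and the “Frame Compatible” property itself, both fall directly out of the fact that the packed frame has the dimensions of a single 2D picture.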
What additional bit-rates do these frame-compatible methods require?
That is content dependent. For computer generated images, such as are used in animated movies, the extra bit-rate requirements are minimal. For live 3D images, however, studies have shown a bit-rate increase of up to about 35%, depending upon the number of hard edges in the picture. This is because the process of creating the frame-compatible single image increases the amount of high frequencies in the picture. High frequency data takes more bits to faithfully reproduce.
What are some of the many competing forms of image multiplexing?
One takes the left and right eye signals and transmits only one eye plus the difference between the left and right eye views. This is the basis of patents pending with TD Vision. The signal for the second eye is then reconstructed at the viewer’s equipment (either in the set-top box or in the TV itself).
Advocates for this claim this is the most efficient method to enable a simulcast of 2D and 3D services. Opponents claim that this attribute is not valuable as content has to be inherently different due to the different viewing experience. Moreover, they claim that this process does not work well for occlusion (which is where you are trying to simulate an object that is only visible with one eye and being blocked from view by another near object for the other eye).
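The one-eye-plus-difference idea can be illustrated in a few lines of NumPy. This is a toy, lossless sketch of the principle only, not TD Vision’s actual patent-pending method, which works inside the compression loop and quantises the difference signal.

```python
import numpy as np

# Two dummy 8-bit views; in practice these are full HD frames.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (4, 6), dtype=np.uint8)
right = rng.integers(0, 256, (4, 6), dtype=np.uint8)

# Encoder side: transmit the left eye plus the inter-view difference.
# A wider signed type stops the subtraction from wrapping around.
diff = right.astype(np.int16) - left.astype(np.int16)

# Decoder side (set-top box or TV): rebuild the right eye.
right_rebuilt = (left.astype(np.int16) + diff).astype(np.uint8)

assert np.array_equal(right_rebuilt, right)  # lossless in this toy case
```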
Another method subsamples both the left eye and right eye signals either vertically or horizontally to allow the combination of left and right signals to be transmitted in the same channel bandwidth as the original 2D signal. Another variant creates a checkerboard segmentation of the information. Like the horizontal or vertical subsampling above (often referred to as “side by side” and “top and bottom”) this does cause some loss of fidelity but allows the 3D signal to fit into a 2D channel bandwidth. Other systems transmit tightly synchronized but independent left-eye and right-eye views.
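The checkerboard (quincunx) variant can be sketched the same way, again assuming simple sample dropping rather than the filtered subsampling a real encoder would use:

```python
import numpy as np

def pack_checkerboard(left, right):
    """Quincunx multiplex: alternate left/right samples pixel by pixel,
    so each view keeps half its samples in a checker pattern."""
    h, w = left.shape
    odd = (np.indices((h, w)).sum(axis=0) % 2).astype(bool)
    return np.where(odd, right, left)

left = np.zeros((4, 4), dtype=np.uint8)
right = np.full((4, 4), 255, dtype=np.uint8)
print(pack_checkerboard(left, right))
```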
I have heard about the new MPEG “MVC” and “SVC” standards. Can you tell me more?
First, the acronyms: MVC stands for Multi-View Coding and is capable of compressing a potentially large number of related images (termed “views”) into one stream. It is intended for multi-camera angle services, but MPEG has retrofitted a new “Stereo Profile” aimed at 3D into the document.
SVC stands for Scalable Video Coding, and provides a way to send various “layers” of images which permit lower bit-rate, lower resolution, or lower quality images to be recovered in situations where the full quality/resolution images cannot.
The bad news related to MVC is that its efficiency may not meet the targets in all situations, especially with live video. This is to say that in some cases there will be no efficiency gain beyond “simulcasting” of the various images via AVC. The good news is that both MVC and SVC are additions to the AVC standard and documented in it as new Annexes.
Why are there two new coding schemes?
MVC and SVC are intended for different uses. MVC is an extension of AVC in a fully compatible manner. SVC extends AVC in a sometimes incompatible manner. There are sound technical reasons for this. The drawback to SVC is that a Main Profile AVC decoder may under some circumstances not necessarily be able to decode the “base layer.”
What’s all this “base” and “enhancement layer” stuff?
Both MVC and SVC are “layered” coding schemes. This means that both will generate a “base layer” and one or more “enhancement layers.” In the case of MVC Stereo Profile, the base layer will typically be the left eye view, while the enhancement layer carries the right eye view. Some have proposed carrying a frame-compatible image in the base layer and having the enhancement layer carry the portions of the picture pair which were dropped in the multiplexing process.
SVC is intended for applications in which (for example) the base layer is 720p and the enhancement carries the additional picture data to construct a 1080p image. See the question regarding “2D compatibility” below.
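The base/enhancement split can be sketched as follows. This toy example stands in a “720p” base for a “1080p” picture by simple decimation and stores a raw pixel residual as the enhancement; actual SVC operates on compressed data, not raw residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
full = rng.integers(0, 256, (8, 8)).astype(np.int16)  # toy "1080p" frame

# Base layer: a lower-resolution version (decimate by 2 -> "720p" stand-in).
base = full[::2, ::2]

# Enhancement layer: what the base is missing. Upsample the base by pixel
# repetition and store the residual against the full-resolution picture.
upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
enhancement = full - upsampled

# A full-capability decoder adds the layers back together;
# a base-only decoder simply displays `base`.
reconstructed = upsampled + enhancement
assert np.array_equal(reconstructed, full)
```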
Now that MVC has been adopted by the Blu-ray association, will that become the dominant technology base?
What is important to the Blu-ray community is the need to supply both 2D and 3D in the same package, and they have no real constraints in bandwidth compared to operators. Operators will be delivering 3D content quite independently from any 2D counterpart and thus MVC has less to recommend it.
Is 2D compatibility really important?
What we have heard from those who actually have produced 3D content, and they have found this especially true for sports, is that the production grammar is very different between 2D and 3D productions. Camera angles which are compelling for 3D are confusing for 2D, while those widely used for 2D do not render much 3D effect. This translates into needing dual production, which adds to the cost. This factor, plus the small number of 3D viewers initially, translates into pay-per-view being the content provider’s method of choice for 3D, especially sports.
Will glasses of some form always be needed to create 3DTV?
A number of manufacturers continue to try to create auto-stereoscopic sets, with little success so far. Physics is not very favourable to this method, and thus far a big disadvantage of these sets has been the limited number of “sweet spots” for viewing. In years to come it is possible that new display technology (such as holographic projection) could overcome this, but that is purely speculative at this time. Glasses are not a big hindrance to widespread consumer adoption, however, as many see them as “cool.”
By John Moulding, Videonet
SDI Media announced that Sony Creative Software has adopted and integrated SDI's XML schema into their Z-Depth 3D subtitle editing application. Z-Depth creates the information needed for proper placement of subtitles or menus in the 3D space of a 3D title presentation.
The purpose of this document is to propose a standardized XML based file format for storing Z-axis data for 3D subtitles applicable to Theatrical (Digital Cinema, Digital Intermediates), Blu-ray and Broadcast venues. Recording only the changes in Z values over the course of the picture should be sufficient as it eliminates the redundancy of recording individual Z values for each frame of the picture.
Source: SDI Media
A joint effort of the 3D@Home Consortium and the MPEG Industry Forum 3DTV Working Group
Friday, August 20, 2010
Thursday, August 19, 2010
Although most 3DTV broadcasts today are using Frame Compatible mode, where two separate HD pictures are fused into a single, 3D-compatible HDTV video stream, compression vendor Envivio is looking ahead to alternatives that could provide a higher quality viewing experience. At NAB the company was demonstrating full-resolution, 1080p, stereoscopic 3D content using its current-generation 4Caster C4 Three Screens video encoder and outlining the suitability of its flexible software architecture for accommodating changes brought about by new standards such as the MVC (Multiview Video Coding) extension to the MPEG-4 AVC standard.
That demo provided two 1080p HD pictures, with one view running at 8Mbps and the ‘Delta’, MVC-specific stream running at around 4Mbps – meaning the total bandwidth requirement for what apparently amounts to a full 3DTV experience was 12Mbps. Although Frame Compatible is widely viewed as good enough for today’s requirements, discussions about ‘second generation’ 3D delivery technology are well underway.
According to Envivio, a stereoscopic 1080p 3DTV broadcast in MVC takes 1.5 times more bandwidth than a single stream of HDTV in 1080p. So that means there is a considerable efficiency saving if MVC delivers the same quality ‘full 3DTV’ experience as you would get from two completely separate left and right streams of 1080p HD.
When you compare MVC to Frame Compatible, the benefits are even more obvious, it seems. Boris Felts, VP of Marketing at Envivio, explains: “MVC carries both the 3D and the 2D compatible view and MVC has twice the resolution. If live video was to be broadcast in both 2D and also 3D Frame Compatible, we would have approximately 8 Mbps for the 2D stream and 12 Mbps for the 3D stream, but with only half resolution, making a total of 20 Mbps. On the other hand, MVC would carry the 2D and 3D versions in a single stream at 12 Mbps, in full resolution.”
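The comparison is easy to check with the figures quoted above:

```python
# Bit-rate figures quoted by Envivio (all Mbps).
hd_2d = 8    # single full-resolution 1080p 2D stream
delta = 4    # MVC 'Delta' (inter-view) stream
fc_3d = 12   # half-resolution Frame Compatible 3D stream

simulcast = hd_2d + fc_3d   # separate 2D + Frame Compatible 3D services
mvc = hd_2d + delta         # one MVC stream serving both 2D and 3D

print(simulcast)            # 20 Mbps, with the 3D at half resolution
print(mvc)                  # 12 Mbps, with the 3D at full resolution
print(mvc / hd_2d)          # 1.5x plain 2D HD, matching the article
```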
MVC combines a 2D view with additional ‘Delta’ information that can be used to create the 3D effect and this means the same stream can be used to deliver 2D television (where set-top boxes ignore the additional information) and 3D television. And the set-top boxes capable of handling MVC decoding are on their way, according to Envivio. Felts comments: “Chipsets are available. We expect to obtain an STB reference design shortly for our lab and we anticipate that you will see the first [MVC capable] STB available before the end of the year.”
From an encoding perspective, Envivio says you need more processing power (CPU cycles) for MVC than is necessary for single HD - between 1.5 to two times the power, in fact. “However, we were able to do our MVC demonstration using our current generation encoder,” Felts confirms, adding that the latest, field deployed versions of the 4Caster C4 video encoder can support MVC.
“Even though MVC relies on AVC, you still have to develop a new algorithm, or more precisely extend the current AVC algorithm,” he adds. “The benefit of creating a new algorithm is that there are some additional compression improvements that can be realized by taking advantage of the high redundancy between the left and right views.”
According to Envivio, there is no quality benefit from delivering 3DTV as two separate streams of 1080p (one for each eye), even if there is plentiful bandwidth – which might be the case on FTTH. “With MVC we are delivering full 1080p to each eye already. All you are doing [by delivering two 1080p streams] is eating bandwidth using two discrete streams – there is no quality gained,” Felts says. “One important issue is that in order to do that, you would need a set-top box with dual HD decoding. MVC can deliver the same thing using a single decoder.”
Envivio believes that Frame Compatible can be good for launching 3DTV because it keeps the initial costs down and will enable more content providers and service providers to deliver more content - which is critical to generating consumer interest. How long Frame Compatible dominates remains to be seen. “It will really depend on the viewer adoption of 3D services,” says Boris Felts. “If the pick-up is low, this might stay for a long time. If the pick-up is high, it’s likely to be replaced by ‘service compatible’ MVC much more quickly as providers seek to win the quality battle.”
The company expects an eventual migration towards MVC. “Long-term, once that initial momentum is created, the shift to MVC will ensure the quality of experience required for long-term success,” it says. “With chipsets capable of MVC decoding available in Blu-ray DVD players, we would see the industry converging towards the same approach and using MVC. Remember the chipset manufacturers are the same for Blu-ray players and STBs.”
By John Moulding, Videonet
As 3D has moved out of theme parks and into story-driven movies like the game-changing Avatar, its presence in theatres has grown exponentially. Early arrivals like 2005’s Chicken Little and 2007’s Beowulf grew into around five 3D films in 2008, ten in 2009, and over 20 scheduled for 2010. 2011 already has over 30 films planned for 3D release—and those are only the ones that have been announced.
Helping to drive this exponential growth is the process of 2D-to-3D conversion. Instead of capturing the 3D on-set using a stereoscopic camera with two distinct lenses, a number of companies use proprietary processes to convert an image from 2D to 3D. After Avatar made waves with its 3D in late 2009, many studios selected projects for conversion, quickly increasing the number of features planned for 3D release.
“Most of the 2D-to-3D process is a visual effect, it’s just a specific application of different [FX] things,” explains Matt DeJohn of In-Three, which converted part of Alice in Wonderland and G-Force. “We’re creating mattes like you would on a green screen, we’re keying out characters, we’re doing paint like you would in rig [lines used in stunt work] removal, and we’re modeling, like you would in CG.”
While every conversion company goes about the process in a different way, each with its relative strengths, conversion generally requires three steps: separating out the different elements in a shot, adding depth, and painting in the gaps. In the first step, effects artists define each element and separate out characters, objects and background elements. Here, the biggest challenge for many companies is elements like transparencies, smoky areas and wispy hair, which are difficult to separate from the background. In-Three uses rotoscoping for this step, a technique in which artists and/or computer programs trace over live-action sequences. “In Alice in Wonderland, she has a lot of wispy hair,” DeJohn notes. “A bad version of the roto for her would make it look like she had a haircut that took all the wispy hair off her head, or conversely, it could be falling to the background and stuck to the background.”
Legend3D, which also had a hand in Alice in Wonderland, doesn’t use rotoscoping for this step, instead applying a pixel-specific process developed from the company’s original focus, colorizing black-and-white films. As Barry Sandrew, the founder and president/COO, describes it, the creative team will decide which pixels in an object to separate in a couple of representative frames in a shot. At the end of the day, they send their work to the company’s office in India, which carries it through the rest of the shot, a labor-intensive process in which those two frames may grow to ten seconds’ worth of frames. Because the Indian office is twelve and a half hours ahead of San Diego, work goes on continuously, passed off to the next office at the close of each business day.
In the second step, artists gauge the depth in a scene, inserting depth cues that pop out elements and draw others farther back. The work involves both skill and the opportunity to make artistic decisions. DeJohn, who started out as an artist, speaks highly of In-Three’s artists. “We’ve been doing it for five years and our best artists have been there for that long. We know how to look at a 2D image and recreate the depth realistically. That’s definitely a learned skill.”
Prime Focus, which converted Clash of the Titans, emphasizes its ability to change depth in real time. “What’s cool about our process is it’s iterative,” Rob Hummel, CEO of post-production in North America, enthuses. “Meaning that if you don’t like the depth cue, if we made it look like he’s five feet in front of the tree and you want it to look like he’s ten feet in front of the tree, we can stop and make those adjustments. In fact, we can stop on a frame and show you these adjustments in real time, interactively, and kick off a render and show you the new rendered version with new depth cues to you 15 minutes later. There’s not a negative cost impact if you change your mind late in the game.”
Besides creating space between elements, artists also need to mold out the objects. “You want to convert a beach ball so it doesn’t look like a disc, it looks like a sphere,” Hummel offers as an example. Overly flat objects can make the 3D objects look like cardboard cutouts. To round out an object, some companies create a CG model of the object, on which they overlay the image. Sandrew feels that process is “primitive.” Legend3D “separates out all the different parts of every single object, and then within our program we mold it. We don’t create CG elements, we actually mold this into the actual object in 3D space.”
In the final step of the process, artists fill in gaps that appear when you create more extreme depth. Part of the reason why Prime Focus’ “iterative” depth process works is because it avoids the paint step. In the over 2,000 shots in Clash of the Titans, Hummel says they used the paint step just 12 times, and in another feature film they recently participated in, only 30 adjustments were painted out of over 600 shots.
However, not everyone in the conversion business accepts this kind of work. DeJohn is skeptical of a process that avoids the paint step. “If they actually created distinct separation between their objects, they would have to paint because there’s just no way to create the effect otherwise. So what they’re doing is compromising the quality of the depth in order to avoid the detailed paint and matte work.” DeJohn describes the Prime Focus approach as creating more of a “rubber sheet” effect. “If you imagine the movie screen like a rubber sheet and you push and pull to get a 3D effect, it’s popping off the screen and pushing into the screen. You essentially have no distinct separation,” he declares.
In response, Hummel says, “I think it’s quaint to say that you can only do it with paint, but that’s the same kind of person that, if they were going to manipulate a photograph, would do it in a darkroom, rather than with Photoshop. We live in a digital world with digital manipulation tools where we’re able to manipulate the images in ways that don’t require that kind of burden. It’s not like we never paint, but I would say it’s less than one percent of the shots we do.”
Legend 3D also doesn’t do extensive paint work, using a technique that Prime Focus also employs. “We can create two eyes, which is a more natural way of handling it,” Sandrew explains. While most companies keep the original flat image as the left-eye image, for example, and create a new right-eye image, Legend3D will usually create two new images, which minimizes the size of the gaps. “If you do one eye, the gaps are quite large, but it’s only in one eye. So anyone with compositing skill can clean that up. If you create two eyes, then the gaps are separated into both eyes, so they’re about half as wide. We have automatic gap-filling that’s algorithmic—a lot of people don’t have that—provided the gap is not too wide.”
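The gap-halving argument can be demonstrated with a toy one-dimensional “scene”. The helper below is invented for this sketch: it shifts only a foreground object, and the background pixels it uncovers have no known value in the source image, so they become gaps that must be painted or algorithmically filled.

```python
import numpy as np

BG, FG, GAP = 0, 9, -1  # background, foreground object, unknown pixel

def render_view(row, fg_mask, shift):
    """Shift only the foreground object by `shift` pixels; the background
    it uncovers becomes a gap with no known pixel values."""
    out = row.copy()
    out[fg_mask] = GAP                       # object vacates its old spot...
    out[np.where(fg_mask)[0] + shift] = FG   # ...and lands at the new one
    return out

row = np.array([BG] * 4 + [FG] * 5 + [BG] * 6)
fg = row == FG

# One-eye approach: keep the original as the left eye and push the full
# 4-pixel disparity into a new right eye -> one 4-pixel-wide gap.
one_eye = render_view(row, fg, 4)

# Two-eye approach: split the disparity across two new eyes ->
# two gaps, each only 2 pixels wide and easier to fill.
left_new = render_view(row, fg, -2)
right_new = render_view(row, fg, +2)

print((one_eye == GAP).sum(), (left_new == GAP).sum(), (right_new == GAP).sum())
```

Splitting the disparity between two synthesized eyes halves the width of each gap, which is exactly why Sandrew describes it as easier to clean up.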
2D to 3D conversion has a wide range of applications and offers an alternative to many of the challenges of shooting “natively” in 3D. Even filmmakers shooting with a 3D camera may need to use 2D-to-3D conversion for select shots or to fix technical problems. Because each technique has strengths and weaknesses, the public will be seeing more 3D films that use both cameras and conversion to achieve their effect.
“I think the hybrid approach is probably the smartest approach for a lot of the studios, a lot of the producers and directors,” Sandrew observes. “One of the projects we are about to get into was originally supposed to be a 2D-to-3D conversion completely, and it turned out to be about 50% captured natively and 50% 3D conversion. For most of the heavy effects scenes, it’s much more efficient and effective to use 2D-to-3D conversion rather than capturing in a camera.”
“3D conversion is just another tool in a filmmaker’s arsenal,” Toni Pace, a senior visual effects producer at Legend3D, explains. “Even if they are committed to shooting in stereo on-set, the camera can be misaligned, it could be too big to take to a location, it could be a hostile environment like water.” The technique also gives filmmakers a chance to “revise their decision” in post if they want to change the depth in a scene, for example.
2D-to-3D conversion was even used in that pinnacle of 3D cinema, Avatar. Prime Focus did several shots for the movie, a fact not widely known. “A couple times the best way to incorporate the composite was to convert part of the image,” Hummel reveals. “On an occasion or two with the original photography, there was a problem, like one lens was closer than the other, so rather than resizing the image, it was easier to convert one of the images to match the stereoscopic imagery. However, it was always done in service of the visual effects and under the guidance of the Avatar creative team.”
The conversion of library titles into 3D is also anticipated to be a huge market. Currently, conversions are underway for 2D films with upcoming 3D sequels. “I think it will take one really big film to test the waters to see if the public will go back to the theatre to see it in 3D,” DeJohn speculates.
The home market will provide yet another venue for 3D viewing. The newest line of HD televisions is going to be bundled with 3D capability for no extra charge, making the technology easy to embrace. Studios hope 3D titles will ramp up Blu-ray sales. Currently, theatres have an edge on the 3D viewing experience. Many home 3D sets use expensive active shutter glasses, and “a little bit of depth on a smaller screen is pretty unimpactful,” DeJohn observes, though he notes that a film like Avatar will still look good on a small screen.
So far, 2D-to-3D conversions have generated some negative comments in the marketplace, but this hasn’t stopped their growth. With so much business coming in for more conversions, no one seems worried. “I don’t think 3D is going to disappear at all because there’s too much momentum right now from the consumer-electronics industry, the studios, the exhibitors. It’s huge. The momentum that’s been built up is so significant, it’s unstoppable,” Sandrew argues.
The whipping boy of 2D-to-3D conversion was this April’s Clash of the Titans, which drew the ire of critics, moviegoers, and people who didn’t see the movie. “Even though it was trashed worse than any movie I’ve ever seen, it still broke a record for an Easter film and made over $200 million,” Sandrew says of the movie, which was converted by Prime Focus, a competitor. “The only problem they had was taking on a project that didn’t [give] enough time to do it right, that’s the only thing that they were responsible for. The audience is demanding 3D, and no matter how badly critics trash a movie, people still go to it. The Last Airbender also got trashed, it was done by StereoD, and that did well too.”
Those working in visual effects acknowledge that bad 2D-to-3D conversion exists, but see it more as a growing pain that market forces will faithfully correct. “The quality is there, it’s a question of whether the studios will pay the money to get that quality product, and give them enough time to do it to the full degree,” DeJohn observes.
Companies also expect that audiences will become more discerning consumers. “I think what’s important is for the audience to become more sophisticated observers—once that happens, the quality is going to go up significantly,” Sandrew predicts. Pace, a recent hire who worked on Avatar, thinks the unique emotional effect of 3D films will sway audiences. “I think that people really want immersive experiences, and 3D stereo projection in theatres is the next generation of immersive experiences. I think it’s only going to get better from here.”
Exhibiting 3D films properly requires adjustments that not every cinema has mastered. “3D exhibition is kind of like the Wild West right now,” Hummel states. After seeing Clash vilified, “I believe that something happened in the rollout of that film into theatres. People saw what they saw, but a lot of the descriptions that I read on the Internet were of eyes being flopped—meaning the projectors got out of phase and the left eye was seeing the right-eye information and the right eye was seeing the left-eye information.”
Another potential problem in 3D viewing is “ghosting.” In some theatres, you can see a double image in high-contrast areas. Dialing down the contrast can help fix the problem, but that also reduces the quality of the image.
One of the most noted problems is the loss of light that occurs when audiences put on 3D glasses, which filmmakers such as Christopher Nolan have pointed out as a major defect of the 3D experience. Most projectors throw 14 foot-lamberts, but theatres will still show a movie if their projector is measuring 12 foot-lamberts, Hummel explains, since that is just a 14% loss of light. The problem comes when you add 3D to the equation. 3D glasses are equivalent to a two-stop loss of light: going down one stop halves the light, so going down two stops leaves you with 75% less. If a projector starts out at 14 foot-lamberts, the glasses bring it down to a dim 3.5, but if it started out at 12 foot-lamberts, it’s down to just two. What would have been a 14% loss of light in 2D becomes an almost 50% loss in 3D.
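The stop arithmetic Hummel describes is easy to verify: each photographic stop halves the light. A minimal Python sketch, purely illustrative and not tied to any real projector:

```python
def after_stops(foot_lamberts, stops):
    """Light remaining after losing the given number of stops.
    Each stop halves the light, so n stops divide it by 2**n."""
    return foot_lamberts / (2 ** stops)

# A projector at the 14 foot-lambert standard, seen through glasses
# that cost two stops of light:
print(after_stops(14, 2))   # 3.5
# Fraction of light lost over two stops:
print(1 - 1 / 2 ** 2)       # 0.75, i.e. 75% less light
```

On this simple model, the 14 fL standard drops to the dim 3.5 fL Hummel cites once the glasses go on.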
“A two foot-lambert swing on normal digital cinema is not that bad, it doesn’t damage the picture that much,” Hummel observes. “A two-foot-lambert swing on 3D is death. It will absolutely devastate the image and ruin the experience for the audience.”
Some films, like Avatar, created different DCPs (Digital Cinema Packages) for exhibitors based on the amount of light they could get from their projectors. “No one other than James Cameron and Jon Landau did that; they made sure the audience was seeing the best representation of the movie,” Hummel enthuses. “If they knew the theatre could show it at six foot-lamberts, they delivered a file to that theatre that looks good at six foot-lamberts.”
As 3D exhibition becomes more commonplace, conditions are expected to improve. Sandrew thinks the “demands of the industry” as well as technological improvements will drive change. “The professionals that are producing the creative product don’t want their product to deteriorate in any way, and certainly when it’s being exhibited they want it shown the same way that they intended.”
While 3D films seek consistent exhibition quality, ironically every audience member experiences 3D in a unique way. “I’m startled,” Hummel says. “I worked many years at Technicolor ensuring that every single print looked exactly the same, and then suddenly people make 3D movies and it doesn’t seem to matter anymore that the people sitting in row five are having a different experience than the people sitting in row twenty-five?”
When audiences watch a 3D film, their seat in the theatre will determine where their eyes converge to see the stereoscopic effect. Looking for that big 3D effect? Viewers seated farther back will experience amplified depth, with images appearing both deeper and popping out more. “If you walk up to the screen, all the things that seem deep behind the screen won’t seem very deep at all,” Hummel explains. “But then as you walk to the back of the theatre, everything will look incredibly deep. If you walk laterally left to right, you’ll notice that the behind-screen images tend to rotate on an axis at the screen plane.”
Perhaps because each audience member views the image differently, 3D movies engender an individual experience—and not just because it’s easier to hide those Pixar tears behind your 3D glasses. “The nice thing about 3D is that it becomes a personal experience. Every audience member is taking it in in a personal way, whereas in a 2D film it’s more about group dynamics,” Sandrew reflects.
3D movies also have the potential to bring viewers outside of the film by throwing an image out into the audience, breaking the fourth wall. “There are some feature films where the director really wants to come out and scare the audience, which is something that most of us hope goes away,” Sandrew says. “There are places for that, there are some movies that are intended to be cheesy like that, and it works. But for the most part, what we like to see is 3D being used as a way of telling the story in a more immersive way. That doesn’t involve scaring the audience by having something fall in their lap, so we try to avoid that.”
One of the “wow” scenes in Avatar, for example, involved floating seeds from the Tree of Souls. “Those seeds were actually significantly out into the audience,” DeJohn points out. “While in some scenes there is just as much depth as if they were pointing a spear at the audience, they’re not breaking the fourth wall in terms of the story, so people don’t notice the effect as much because it’s not screaming, ‘Look at me!’” While a movie like G-Force will break the fourth wall—“It’s a kids’ movie and that’s the right genre to do that,” DeJohn believes—movies like Alice in Wonderland create extreme depth without distracting the audience from the story.
“A lot of the shots in Alice in Wonderland were actually outside of the screen, near the audience,” Sandrew reveals. “But you didn’t realize it, because it wasn’t there for shock value, it was there for aesthetics. If you look at a scene that we did for Alice, you’ll see that maybe half of the interior of a shot is outside of the screen.”
Alice in Wonderland was also originally planned to use the extra dimension for a Wizard of Oz-type effect. The real world would be shown in 2D, and Wonderland in 3D. “From what I understand,” Sandrew remarks, “Tim [Burton] was worried that people would just take off their glasses, think it wasn’t working, or think it’s bad 3D, so what they did was create a shallow representation of stereo [3D] in the bookends.”
In G-Force, “we played a lot with the sense of scale,” DeJohn recalls. “When we had a hamster’s point-of-view shot of a human, we would accentuate the depth of the human, so the human would have more or less the same depth as a skyscraper. We used 3D as a metaphor. I think we’re going to start seeing more of that. Just as composing a shot brings meaning to the shot if you shoot it from a low angle, or if you stage your characters in such a way to indicate who’s more powerful in the scene,” depth can bring meaning to a scene, he expounds.
For 3D to be more than a gimmick, this kind of storytelling will need to become a recognizable part of the 3D experience. Hollywood has undergone great formal changes before, from the lightning-speed transition to sound in just a few years, to the gradual predominance of color films in the marketplace, which took decades. Early entrants couldn’t resist flaunting their new tricks with vivid, saturated color and song-and-dance routines that delighted in the novelty of the experience. Primitive 3D movies, in fact, predate sound, but throughout 3D history the medium has been used for spectacle, not story. 3D faces the bizarrely contradictory goal of making the 3D effect part of the story and thus more invisible, while convincing audiences to spend five extra bucks for a more immersive experience. Compared to 3D’s brief heyday in the 1950s, this generation of 3D has better technology, more exhibitors lined up, and the ability to convert a 2D image into 3D. Audiences have already voted with their glasses, turning out in waves, setting box-office records, and propelling one movie, Avatar, to Titanic levels. This time, 3D may be here to stay.
The Science of 3D
What makes 2D-to-3D conversion so promising is that it’s all an illusion anyway. Whether you shoot natively in 3D or convert afterwards, no one would argue that the depth they see in a 3D movie matches what they see in real life, though they might not be able to explain why.
The 3D effect is created through a stereoscopic illusion, which forces your eyes to go against nature by focusing in one place while converging in another. In real life, your eyes focus and converge at the same point. Hold a pencil in front of your nose: each eye focuses on the pencil, your eyes converge on it, and the brain fuses the two views into a composite image, gauging depth by interpreting the difference between what your left eye sees and what your right eye sees. If you alternate between shutting your left eye and your right eye, for example, the pencil will appear to jump left and right, because your understanding of an object in space is determined by both eyes. Each eye has a slightly different perspective on an object.
Stereoscopic illusions work by separating the left-eye image from the right-eye image (the job of those polarized or active-shutter glasses) and giving each eye slightly different pictures of the pencil, forcing the eyes to meet or converge in front of or behind the screen. Because your eyes are focusing on the screen plane but converging in front of or behind it, the illusion of depth is created. When you take off your glasses, you can see double images on the screen. The farther apart they are, the more depth there is when you put on your glasses.
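The geometry behind those double images can be captured in one formula: for eyes a distance e apart viewing a screen at distance D, an object meant to appear at distance Z needs an on-screen parallax of e × (1 − D/Z). A short Python sketch with illustrative numbers:

```python
def screen_parallax(eye_sep_cm, viewer_dist_cm, apparent_dist_cm):
    """On-screen separation between the left- and right-eye images needed
    to make an object appear at apparent_dist_cm from the viewer.
    Positive = behind the screen, negative = out into the audience."""
    return eye_sep_cm * (1 - viewer_dist_cm / apparent_dist_cm)

e, d = 6.5, 1000   # ~6.5 cm between the eyes, seated 10 m from the screen
print(screen_parallax(e, d, d))       # 0.0  -> object sits at the screen plane
print(screen_parallax(e, d, 2000))    # 3.25 -> appears behind the screen
print(screen_parallax(e, d, 500))     # -6.5 -> appears in front of it
```

The same formula shows why seat position matters so much: change the viewing distance and the apparent depth changes with it.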
If people are forced to maintain a huge difference between their convergence and focus points for a long time, a headache may ensue. According to Rob Hummel, Prime Focus’ CEO of post-production in North America, that’s also the reason traditional 3D theme-park rides with in-your-face 3D top out at less than 20 minutes: Eyes can’t take it much longer.
The stereoscopic illusion also has to battle with “primal” monocular cues. An object jumping out in front of you initiates the fight-or-flight reaction, a technique well-suited to horror movies like Final Destination 3D but less so to other genres.
The stereoscopic illusion is also at risk as an object moves to the edges of the screen. If a 3D object gets clipped by the screen border, the illusion can be destroyed. For that reason, many movies use both floating windows, which mask the left and right sides of the frame, as well as top-and-bottom masking. For most of the film, these areas will appear black, but occasionally an object will break the edge of the screen, 3D effect intact. The technique has been around for decades, and many studios, such as Disney, employ it on their 3D films, including G-Force and Toy Story 3.
Matt DeJohn, whose company In-Three converted G-Force, explains: “Certain individual objects are allowed to go outside the border. Because they’re already playing out in audience space, you’re not perceiving it as an aspect-ratio change, you’re just perceiving it as being farther out of screen and in the theatre with me.” Besides avoiding depth conflicts, use of these borders can also create intense 3D effects.
The stereoscopic illusion is also at risk when filmmakers add too much depth too quickly. “For off-screen effects to work really well, you need to bring the objects off slowly, because you’re asking your eyes to do something they don’t normally do, they cross in a way. Your eyes need to be able to track with it,” Hummel explains.
Aware of this effect, companies try to keep depth and eye placement consistent from shot to shot. “What we often do is make sure that where you’re looking in the scene in one shot is fairly well-matched to the next shot, so the focal elements are matched in depth,” DeJohn elaborates. “We’ve been able to play with pretty aggressive depth through entire films because of the way the depth was controlled and transitioned from shot to shot. It was still a comfortable viewing experience.”
So why doesn’t 3D look like real life? “In real life, you have infinite planes of depth that exist instantaneously wherever you are focused,” Hummel reveals. “When you shoot with a camera, you can’t escape the fact that when viewing the images you will be forced into disconnecting your natural focus/convergence connection. The images have a type of depth illusion, but never quite what we see with our natural binocular vision”—a fact many watching a horror movie appreciate.
By Sarah Sluis, Film Journal International
German and Swiss researchers on a EUREKA project have come up with technology that they think could soon affordably deliver the thrills and immediacy of 3D into our homes, as well as into some unexpected places like operating rooms, with a level of quality never reached before.
"The seed of this project was just three friends chatting on the web," recalls Arnold Simon, Chief Technical Officer at the German company Infitec. At the time, Simon was working as a consultant for Infitec, and one of the other friends was Helmut Jorke, Chief Executive of Infitec, which had developed some of the best 3D technology for cinemas.
The friends chatted about the next challenge in 3D: how to develop a 3D LCD flat-screen monitor capable of displaying the full resolution of the new high-definition television formats. On that online chat, Jorke decided his company should create that screen. "The consumer market is the biggest and most interesting focus," says Simon. Last year, in the UK alone, prior to the country's switchover from analogue to digital, 10 million television sets were sold.
Infitec had made its name in the 3D world by developing more sophisticated technology based on the principle of the old red-and-green glasses. The company's glasses use narrow colour bands to improve the quality of the image, passing specific wavelengths of red, green and blue to the right eye and different wavelengths of the same colours to the left. By filtering out all other wavelengths, the glasses give the spectator the illusion of a 3D image. Backed by EUREKA, Infitec partnered up with Optics Balzers, a Swiss company it knew that specialised in 3D filters, and the pair secured funding to start developing the 3D LCD screen – a mission they called Dualplex Display.
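The wavelength-multiplex idea can be illustrated in a few lines of Python. The band values below are invented for illustration and are not Infitec's actual filter specifications; the point is only that the two eyes' passbands must never overlap:

```python
# Each eye's filter passes narrow red/green/blue bands (wavelengths in nm).
LEFT  = [(629, 634), (532, 537), (446, 451)]   # illustrative, not real specs
RIGHT = [(615, 620), (518, 523), (432, 437)]

def bands_overlap(a, b):
    """True if two (low, high) wavelength bands share any wavelengths."""
    return a[0] < b[1] and b[0] < a[1]

# As long as no left band overlaps a right band, each eye sees only
# the image intended for it:
crosstalk = any(bands_overlap(l, r) for l in LEFT for r in RIGHT)
print(crosstalk)   # False -> the two images stay separate
```

Because each eye still receives its own red, green and blue, both images stay in full colour, unlike the old red-and-green anaglyph approach.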
While Infitec researched the best signal and lighting to use in the monitor, plus software for it, Optics Balzers developed special filters for the lighting unit and the glasses. The project was not an easy one. Obtaining sample backlighting units from suppliers proved difficult for two relatively small companies. Then the first demonstrator did not work, and the partners decided they needed to create a brand-new optic design for the monitor. They finally combined four light-emitting diode (LED) lamps – two green, one red and one blue – to create the colour range they needed.
After two years of hard work, the partners have a demonstrator 23-inch monitor that they are proud to say pushes the boundaries of 3D technology. The quality of the image causes less strain on the eyes than other 3D technologies, the glasses do not darken the ambient light and the screen can be viewed from all angles without distorting the 3D images.
"Viewers will be able to lie down on the sofa to watch the screen, they can turn their heads in any direction and the image won't change," explains Simon.
The partners have applied to patent the screen in Germany and are in the process of submitting patents for other countries. They have presented the screen at conferences around the world and potential customers have been impressed with their demonstrator. However, the Dualplex Display team wants to further improve their screen and has secured funding for a follow-up project to brighten its images.
The Dualplex team's final goal is to sell its 3D LCD screen for HD to ordinary consumers, but initially the partners think they will find it easier to target niche professional markets such as medical professionals. Using 3D imaging could help surgeons during operations, for instance. Until now, 3D imaging has been too poor to interest them, says Simon.
With fully managed delivery via satellite, DSL and fiber-optics, Paris, France-headquartered SmartJog offers “reduced transportation costs” and “simplified logistics” that avoid the use of film cans and other physical media such as drives, DVDs and USB keys. “SmartJog facilitates the secure storage, digital asset management and digital delivery of trailers and feature films,” according to the company’s d-cinema offer. In essence, DCPs (Digital Cinema Packages) “are sent from post houses and distributors to theatres where SmartJog’s library server receives and stores all digital-cinema content.”
In June, SmartJog delivered the 100th film to theatres in its pan-European network: The A-Team, which landed at 45 locations in France and later followed into three more theatres in Belgium and Spain, marking Fox’s sixth title with the provider. At the same time, SmartJog itself recorded 2,630 hours of DCP traffic sent in less than six months. That’s an amazing accomplishment by any measure, but even more so in view of the relatively short company timeline. Considering that the inaugural 2K file was first delivered to Pinewood Studios in February 2005 and that phase one of a circuit-wide installation (at CGR in France) was completed in June 2008, SmartJog has been jogging at a steady pace indeed.
Nicolas Dussert, the company’s European theatrical sales director, confirms the ratio between cinema-related services and all others, such as broadcast, new media and home entertainment, to be already 30/70. While d-cinema has opened additional doors when it comes to delivering the completed picture, he confirms that studios have already been using SmartJog services during production for the exchange of “time-critical digital dailies” and “fast-turnaround” transfers of visual-effects shots, for instance. “We already have all the necessary connections with the labs,” he says. “The distributor can deliver and retrieve the DCP to and from anywhere in the world.”
Closer to release, audio and reference picture too are delivered to post houses for local-language versions and local master creation. In preparation of a film’s launch, SmartJog then takes on “global servicing” of trailers, electronic press kits and television spots to local distribution offices and broadcast stations.
Under the headline of “Smart D-Cinema,” marketing materials summarize this “all-in-one solution” as providing “a centralized place for digital-cinema needs.” Even more, when it comes to theatrical solutions, “SmartJog is the vetted and approved digital delivery system used by the entertainment industry for the safe transfer of their most valuable assets.”
So what do theatre owners need to get access to those assets? “Exhibitors purchase the satellite dish, which can be had for about 1,000 euros [US$1,300],” Dussert estimates, “and one of our two server options. They will also need an Internet connection,” he says of the validation process for sending and receiving. “Via a secure web interface, both distributors and exhibitors can manage, track and receive e-mail notification of their file transfers.” The 3 TB “Gateway Server” is recommended for the delivery of DCPs only and is not used as a storage device, whereas the “Central Library Server” comes with 12 TB, translating into up to 85 movies at 150 GB each, or 20 hours of JPEG-2000 encoded content.
“All of our servers come fully integrated with the satellite receiver hardware,” Dussert assures, “and contain all the necessary software for automatic and time-saving uploading of content to playout servers.” Speaking of the latter, SmartJog is “seamlessly integrated and totally interoperable” with the cinema servers and theatre management systems (TMS) from ADDE, Datasat Digital, Dolby, Doremi, Dvidea, Qube and XDC.
Although the goal is to deliver everything electronically, SmartJog’s servers feature the option to load content locally from physical media or via both internal and external networks as well. This way, Dussert reasons, exhibitors can cover all ways in which distributors may be delivering. “Also, in case of any problem such as file corruption, we have partnerships with the labs for every country in order for the distributor to be able to deliver a hard drive if need be. If we have a bad satellite receiver,” he elaborates, “we can call the lab to arrange for a back-up. But we don’t deal with their actual distribution and shipping of any physical media.”
Consequently, when Dussert calls the SmartJog network a hybrid, he is not referring to the physical media options but to the combination of fiber, DSL connections and satellite dishes on the cinema roof. “Because of a very good infrastructure in France, we can deliver a full feature one of three ways. Most theatres receive content via satellite, as it gives theatres a dedicated capacity. Distributors and post houses can benefit from multicast, sending one-to-many. For example, when delivering the same DCP/file to hundreds or thousands of theatres, the delivery time is cut down.” The receiver allows live broadcasts with one and the same equipment set-up too. Speaking technically, Dussert explains, “In Europe for satellite we currently have 72 Mbps of bandwidth capacity, which will allow us to deliver content for live events in the future as well as our store-and-forward service.”
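Those satellite numbers put the store-and-forward model into perspective. A rough Python estimate, assuming for illustration that the full 72 Mbps capacity is available to a single transfer:

```python
def delivery_hours(size_gb, mbps):
    """Hours to move size_gb (decimal gigabytes) over a link of
    mbps megabits per second."""
    bits = size_gb * 8 * 1000**3       # gigabytes -> bits
    return bits / (mbps * 1_000_000) / 3600

# A 150 GB feature over the 72 Mbps satellite capacity:
print(round(delivery_hours(150, 72), 1))   # 4.6 hours
```

This is where multicast earns its keep: that one transmission of a few hours reaches every connected dish simultaneously, instead of being repeated per theatre as a unicast delivery would be.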
Regardless of the bandwidth, SmartJog also covers delivery of pre-show and cinema advertising. The experience with SmartJog in that particular area prompted pan-European exhibitor Kinepolis Group to expand their relationship in Belgium (13 locations), France and Spain (seven and three, respectively) in November 2009. “Kinepolis had been using the SmartJog system and network for a few years already to receive advertising content from Médiavision, as well as cinema trailers to our cinemas in France,” recalls Bob Claeys, research and development director for the Group. “Since we were very actively rolling out digital equipment in all of our cinemas in France, Belgium and Spain, we needed to find a solution capable of scalability in terms of storage and digital delivery. SmartJog’s all-in-one-solution offers just that and is already compatible with our equipment such as our Dolby servers. Adding their ‘Central Library’ allows us to store content received in our multiplexes without the need to add multiple reception and storage equipment. The flexibility of the SmartJog solution also lets us manage the delivery of our own promotional content to all of our cinemas in Europe seamlessly.”
With exhibitor partners that also include CGR (34 sites connected), Europalaces (67 sites of Gaumont and Pathé), independent and other theatre chains (82) at the home base in France, Benelux and Spain, SmartJog’s connected d-cinema network “counts 196 sites with over 850 screens in eight countries and growing,” Dussert assures. “After France and Benelux we’re now rolling out in Germany with help from fellow TDF subsidiary Media Broadcast, and deploying in Switzerland, Italy, Spain and Portugal. In addition to these key markets, we will certainly go to other countries, of course, like the United Kingdom and Ireland.” For now, “including those in cinemas, SmartJog has a total of 922 servers deployed in 65 countries. Working with every studio, there are 92 distributors and vendors in 25 countries that utilize our service to send their DCPs. We work with all distributors and currently deliver an average of four feature films each week.”
Another “really important” aspect of those deliveries is that SmartJog checks DCI compliance of the packages, Dussert continues. “If you make a hard drive, you don’t verify the copy.” With SmartJog, however, “when labs put the DCPs on their server, there is a verification of the conformity of all files to guarantee that the DCP is complete, there are no missing files, and all is in compliance with d-cinema standards. This integrity check happens in real time, automatically. Otherwise, you could have a corruption.” Again, e-mail notifications will confirm, “You received a DCP and it is complete.”
Completing the picture further, “we have a dedicated data center where distributors can store their DCPs,” Dussert affirms. “Via USB key, they can go in locally, be it Belgium or Switzerland, and access their full catalog to select any and all versions to send to a certain theatre. Also, the French market can share a DCP with another country, for example. Our system gives clients the ability to share elements with vendors, local offices. If one cinema gets the original version and the one dubbed into French, we send the first DCP completely. For the second DCP, the system automatically detects which version has already been sent from the lab and only adds those parts that differ from and/or are supplemental to the one originally delivered.” As a case in point, that could be the dubbed language track or Swiss and German subtitles. “We only transmit the percentage that is different, cutting down on time and bandwidth—and on cost, of course.”
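The version-aware delivery Dussert describes amounts to shipping only the assets a site does not already hold. A toy Python sketch of the idea; the file names and hash values are made up for illustration:

```python
# A DCP is a package of asset files; two versions of the same film often
# share the picture and differ only in audio and subtitle assets.
v1 = {"picture.mxf": "hash_a", "audio_en.mxf": "hash_b"}
v2 = {"picture.mxf": "hash_a", "audio_fr.mxf": "hash_d", "subs_de.mxf": "hash_e"}

def assets_to_send(already_delivered, new_version):
    """Only ship assets the theatre does not already hold,
    matched by content hash rather than by file name."""
    held = set(already_delivered.values())
    return sorted(name for name, h in new_version.items() if h not in held)

# The shared picture asset is skipped; only the new language material goes out:
print(assets_to_send(v1, v2))   # ['audio_fr.mxf', 'subs_de.mxf']
```

Since the picture track dominates the 150 GB package, sending only the differing audio or subtitle assets is what cuts the time, bandwidth and cost Dussert mentions.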
A Quick Jog Through Time
Developed only ten years ago by France Telecom, SmartJog became an independent company in early 2002 and was subsequently purchased by TDF, a leading provider of audiovisual, new media and broadband services, in September 2006. Today, SmartJog counts more than 4,600 clients worldwide that cover everything from new media, broadcasting and digital dailies to video-on-demand, mobile TV and in-flight entertainment. On the theatrical front, SmartJog works with distribution, post-production, dubbing facilities, advertising agencies, print labs, and digitally equipped movie theatres, of course.
Some milestones for d-cinema include:
February 2005: SmartJog delivers first 2K files to Pinewood Studios in the U.K.
December 2005: Launch of French initiative ISA for DCI-compliant film distribution to digital theatres in Europe.
September 2006: SmartJog demonstrates the electronic distribution of Paris Je t’Aime, the first 4K DCP ever done in France (Éclair Laboratories) at the Entertainment Technology Center at the Warner Pacific in Hollywood.
April 2007: SmartJog connects its first commercial theatre in France: Cineplex Paris Forbach.
May 2007: SmartJog transmits U23D DCP from Los Angeles to the Cannes Film Festival.
December 2007: SmartJog unveils its first Central Library server with 3 or 6 TB of storage and signs reselling agreements with Cinemeccanica, Cine Digital Services and Decipro, three leading installers in France.
February 2008: SmartJog sends a feature film DCP to Kinepolis Nancy (France) over a metropolitan fiber-optic network.
June 2008: Phase 1 completed of SmartJog rollout to CGR theatres in France.
July 2008: Screenvision France transmits pre-show content and trailers in DCI-compliant format to five CGR theaters via SmartJog.
August 2008: Pathé Distribution is first distributor to use SmartJog for the electronic distribution of Faubourg 36 trailer DCPs to 14 connected theatres in France (ten CGR theatres, two at Europalaces, Le Paris Forbach and Le Stella Janze).
August 2009: SmartJog reaches milestone of 500 digital screens connected and 100 theatres connected.
June 2010: SmartJog delivers 100th movie to theatres in Europe, Twentieth Century Fox's The A-Team.
July 2010: SmartJog delivers over 22,000 GB of traffic for broadcasters, both domestically and internationally, for the 2010 FIFA World Cup. That is the equivalent of 6,000 ten-minute playouts.
By Andreas Fuchs, Film Journal International
Frame Compatible 3DTV is the technology of choice for delivering 3D content to customer homes, whether over satellite, cable or IPTV, because it is effectively an HDTV signal and can therefore reuse existing HD encoders and set-top boxes. It looks as if Frame Compatible will be around for some time for direct-to-home broadcasting, so attempts to advance 3D technology will happen first in the contribution segment.
We reported previously how GlobeCast handled contribution of live 3D games from the FIFA World Cup in South Africa recently. Contribution from the stadia to the International Broadcast Centre (IBC) was performed as separate left eye and right eye streams using JPEG2000 compression and feeds out of the country (via satellite) were in side-by-side Frame Compatible format using MPEG-2 HD.
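Side-by-side Frame Compatible packing squeezes each eye's picture to half its horizontal resolution so the pair fits into one ordinary HD frame. A toy Python sketch, assuming frames represented as nested lists of pixels; real encoders filter before decimating, whereas this simply drops every other pixel:

```python
def pack_side_by_side(left, right):
    """left/right: rows of pixels with identical dimensions. Returns one
    frame of the same dimensions holding both eyes at half the
    horizontal resolution each."""
    packed = []
    for lrow, rrow in zip(left, right):
        half_l = lrow[::2]   # crude horizontal decimation by 2
        half_r = rrow[::2]
        packed.append(half_l + half_r)
    return packed

left  = [["L"] * 8 for _ in range(2)]   # toy 8x2 "frames"
right = [["R"] * 8 for _ in range(2)]
frame = pack_side_by_side(left, right)
print(len(frame[0]))   # 8 -> the packed frame keeps the original width
print(frame[0])        # ['L', 'L', 'L', 'L', 'R', 'R', 'R', 'R']
```

Because the packed frame has ordinary HD dimensions, the rest of the chain can treat it as any other HD signal, which is exactly why Frame Compatible slots into existing encoders and set-top boxes.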
Simon Farnsworth, Global Head of Contribution at GlobeCast, expects the broadcast contribution market to evolve so that it uses simultaneous left eye and right eye contribution signals, synchronized for live content. And that is exactly what ESPN, the US sports broadcast network, is already doing today with its ESPN 3D channel, which launched in June and is available in the US via DIRECTV (satellite), Comcast (cable) and AT&T (IPTV).
According to Jon Pannaman, Senior Director Technology at ESPN, the company is using discrete left and right eye HD signals from its event locations to its main broadcast facility in Bristol, Connecticut. This requires the use of specialist encoders and decoders with left/right eye feed synchronisation. The company is then transforming the dual HD feed into a Frame Compatible signal for transport to its distribution partners.
Pannaman points out that delivering Frame Compatible 3D content to a distribution partner (like a platform operator) means they can simply pass the signal through without adjustments. However, ESPN wants to move another step forward and reach the point where it is delivering discrete left and right HD feeds all the way to its partners.
Pannaman told Videonet recently: “We want to deliver discrete left and right feeds to our affiliates as soon as it is practical. Obviously some technology is needed for that so it is not imminent but it is our goal. That way the different delivery methods affiliates could use will be supported. For business reasons people will want to do different things. We cannot supply every possible variant so our goal is to get them the best quality [discrete feeds] that they can then use.”
Effectively, discrete HD left and right feeds are viewed as the raw materials from which distribution partners can create their own Frame Compatible output or move towards any of the ‘full HD 3DTV’ compression solutions currently being talked about. The key point is that the format supplied by ESPN will not force anybody to compromise what they are doing further down the chain.
ESPN 3D offers 3D sports programming and went on air with the opening game of the FIFA World Cup finals, then broadcast 25 3D matches from South Africa including the final. A key principle for the channel is that it shows 3D content or nothing at all (except a 3D ‘slate’). There are no 2D fillers. ESPN 3D is effectively a live events network today.
By John Moulding, Videonet
Samsung said it would start streaming some 3D movie trailers over the Internet, establishing a new way for consumers to view 3D content on TV sets. The consumer electronics company will provide access to 3D movie trailers through its online application store, and will expand the lineup of 3D content in the future, said Olivier Manuel, director of content at Samsung's consumer electronics division, during a press event in New York.
"In a couple of years, 3D streaming will be ubiquitous. This is just the first step to make that happen," Manuel said. "The potential is pretty exciting."
Manuel declined to comment on the type of content that the company will deliver through its applications store. However, he acknowledged full-length 3D movies or 3D games could be possibilities. A company representative said Samsung is striking deals with film studios for the trailers.
3D content requires considerably more bandwidth than regular video feeds, said Roger Kay, president of Endpoint Technologies Associates. With the public Internet already under strain, quality-of-service and buffering could become issues in streaming 3D movies and online 3D games to homes.
Manuel acknowledged bandwidth is an issue that must be solved before 3D streaming gains a foothold. He said 3D streaming would require 5Mbps to 7.5Mbps broadband connections, which are already available in about 20 percent of U.S. broadband households. By comparison, streaming 720p high-definition images requires a 2.5Mbps connection, Manuel said.
But as bandwidth issues are mitigated, more users will access 3D content, like they access high-definition movies, Kay said. As more 3D content is developed, and hardware like gaming systems and TVs populate homes, the Internet will inevitably become a key content delivery mechanism, Kay said.
Another issue is whether users are interested in 3D at all, with some finding it uncomfortable to wear 3D glasses, Kay said.
By Agam Shah, IDG News
Telecom giant Verizon will produce and broadcast next month what it says will be the first NFL game in 3D on TV. It will show the Sept. 2 preseason game between the New York Giants and New England Patriots for its FiOS TV subscribers in parts of New York, New Jersey, Massachusetts and Rhode Island. Customers must have 3D TV sets, 3D glasses and a high-definition set-top box to view the broadcast.
"This is the next major step in our development of 3D experiences for our FiOS TV customers," said Terry Denson, vp of content strategy and acquisition for Verizon. "Broadcasting the first 3D NFL game delivers on our promise to FiOS customers to provide a superior TV offering, including 3D, HD and VOD programming, as well as interactivity that cable can't match."
Verizon is also working with two sports bars in New York and Providence, R.I., to give fans a firsthand look at the 3D experience by setting up 3D TVs for promotional events.
In July, the company broadcast Major League Baseball games between the New York Yankees and the Seattle Mariners in 3D.
By Georg Szalai, The Hollywood Reporter
The Material eXchange Format (MXF) is arguably the most successful file format in the broadcasting industry. Six years after its ratification as a SMPTE standard, this article describes its roots, its main goals and its current adoption by the TV broadcasting industry.
By Pedro Ferreira, MOG Solutions
The emergence of stereoscopic 3-D TV as a practical option for broadcasters has opened up a whole new set of issues in relation to best practices for channel branding graphics. Key issues include where to place the 3-D graphics in terms of the perspective, or Z-depth, for optimal viewing and how to control the Z-depth of graphics during playout to compensate for changes to the program perspective.
Getting the Z-Depth Right
To illustrate this Z-depth issue and some of the associated challenges, it’s worth considering a simple case of logo insertion for a stereoscopic 3-D program with a changing 3-D perspective.
When the program has a near (negative) depth, with the 3-D effect appearing to come out of the screen toward the viewer, there is a requirement to have the logo positioned in front of the action to maintain a natural perspective (Case 1).
During a sequence with a flat perspective, or a far depth (with the perspective effect going into the distance), the branding graphics need to be just in front of the action (Cases 2 and 3).
If the Z-depth positioning of the graphic is incorrect at any point, the channel branding may lose its 3-D effect or, worse, the presence of the logo or graphic may interfere with the 3-D effect of the program itself.
Adjusting the Graphics Z-Depth
The Z-depth of a channel branding graphic can be changed to suit a program sequence by adjusting the horizontal separation of the left and right branding graphics, which are required to create a stereoscopic 3-D logo. This method of controlling the Z-depth of elements is often called Horizontal Image Translation (HIT). By separating the right and left images of the graphics in one direction, the graphics will appear to come out of the screen. Conversely, when the left and right branding graphics are moved horizontally relative to each other in the other direction, they will appear to move into the screen.
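As a minimal sketch of the Horizontal Image Translation idea described above (the sign convention and function name are illustrative assumptions, not from any vendor API), the target disparity is simply split symmetrically between the left-eye and right-eye graphics:

```python
def hit_offsets(disparity_px: float) -> tuple[float, float]:
    """Split a target disparity between the two eye views (HIT).

    Positive disparity (right-eye graphic shifted right of the
    left-eye graphic) pushes the logo behind the screen plane;
    negative (crossed) disparity pulls it out toward the viewer.
    Returns the horizontal offsets to apply to the left-eye and
    right-eye branding graphics, respectively.
    """
    half = disparity_px / 2
    return -half, +half


# Push a logo 10 px behind the screen plane: left eye moves -5 px,
# right eye moves +5 px, giving a net disparity of +10 px.
left, right = hit_offsets(10)
print(left, right)
```

The actual playout hardware would apply these offsets when compositing the pair of branding images over the left and right program feeds.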
Hence, one way to address a logo’s Z-depth problem is to have multiple versions of the logo, each with a different left and right horizontal separation. This is not difficult to accomplish, as stereoscopic 3-D graphics can be created using standard graphics tools such as Adobe’s Premiere and After Effects, which are much easier to use than traditional (nonstereoscopic) 3-D graphics tools.
However, an obvious problem with this approach is that it demands more complex media management to cope with all the different depth positions required for a logo. Therefore, this approach has not been widely adopted, and the focus has been toward controlling the Z-depth by dynamically changing the separation of a single pair of left and right branding images. Moving the Z-position of the graphics is relatively simple; it’s more difficult to decide when to move them.
Controlling the Z-Depth for Live Content
There are multiple options for automated and manual control of the Z-depth of channel branding graphics. One factor that influences the approach is the type of content to be played out, in terms of whether it is live or prerecorded.
A significant number of the initial applications for stereoscopic 3-D TV are likely to be live events, such as sports. With this type of programming, playout automation can be used for driving graphics, such as bugs, on and off for different segments. However, it can’t easily be used for Z-depth control because of the unpredictable nature of live programming.
In this case, it’s often best to supplement the automated control of branding with manual Z-depth control, using a branding control panel with depth presets. By using presets, the operator can quickly and easily rectify graphics Z-depth issues using smooth depth adjustment transitions.
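The "smooth depth adjustment transitions" mentioned above can be sketched as a simple linear ramp between the current disparity and the recalled preset, spread over a number of video frames (the function and its parameters are a hypothetical illustration, not a description of any particular branding processor):

```python
def depth_transition(current_px: float, preset_px: float, frames: int):
    """Yield one disparity value per frame, ramping linearly from
    the current graphic depth to the recalled preset so the logo
    glides to its new Z-position instead of jumping."""
    step = (preset_px - current_px) / frames
    for f in range(1, frames + 1):
        yield current_px + step * f


# Ramp a logo from screen depth (0 px) to a preset of -10 px
# (in front of the screen) over 5 frames.
for disparity in depth_transition(0, -10, 5):
    print(disparity)
```

In practice the ramp might use an ease-in/ease-out curve rather than a linear one, but the principle of per-frame disparity interpolation is the same.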
In the future, it’s anticipated that advances in 3-D metadata playout will enable more sophisticated automated control. The channel branding processors will be able to read Z-depth metadata, probably in a similar manner to reading AFD metadata, and automatically adjust the position of the channel branding to optimize the presentation.
Another related automated control option is dynamic measurement of the Z-depth by the channel branding processor, or an associated signal processor, and performing on-the-fly adjustment of the branding graphics according to the depth data. This may represent a good back-up solution in the absence of Z-depth metadata. However, both of these advanced automated control techniques are still in the formative stages and are not fully proven to date.
Z-Depth Control of Recorded Content
Broadcasters that play out channels of recorded content, such as 3-D movies, have additional options for Z-depth control. One simple approach is prerecording of the channel branding with the content ahead of playout on-air. However, this is not very practical because the show content includes the branding graphics, which limits the reuse of the same copy of the content on different channels or with different branding.
Another alternative is to have the automation system control the Z-position of the logo inserter by recalling the position presets using either a serial command or general purpose input. This is more flexible but requires an extra piece of information that has to be entered in the traffic or automation system, which is not possible or practical for many operators.
A third option is to have the content creator specify the position as the content is edited or reviewed, and to enter this information as metadata in the program. This is broadly similar to the approach currently used for presenting closed captions in 3-D Blu-ray productions. An operator manually adjusts the branding Z-depth using a fader while rerecording the content. This is a time-consuming process, but it offers the advantages of simplicity and consistent quality control. Standards committees are working diligently on Z-depth metadata standards to enable this simplified playout model.
In summary, we’re still very much in the early adoption phase of 3-D TV, with the playout equipment still being developed and the associated workflow processes still being refined. There are some obvious parallels between the current state of channel branding for 3-D TV and the early phase of HDTV with its associated aspect ratio control issues. However, equipment vendors, broadcasters and organizations such as SMPTE are now working together closely to overcome the current obstacles, and more elegant and more efficient solutions to 3-D TV branding are already on the horizon.
By Michel Proulx, CTO of Miranda Technologies, BroadcastEngineering
The purpose of this white paper is to describe some of the more common issues that can exist in 3D material. The terminology and examples provided throughout the document will assist in finding and avoiding these issues. Although many of the issues in 3D content are inherent to the source, it is important that they are understood and communicated to the appropriate groups or individuals, who can then determine whether adjustments need to be made.
By Juan Reyes and Paulette Pantoja, BluFocus and 3D@Home
Thursday, August 05, 2010
Discovery Channel has received a licence from the UK communications regulator Ofcom for a potential 3D channel. The award of the Digital Television Programme Service (DTPS) licence to Discovery Communications Europe Limited is made in the July list of newly licensed television services.
A spokeswoman for Discovery Communications Europe confirmed that the broadcaster had been issued with a licence, but said there was nothing further to announce. Earlier this year, Discovery Communications, Sony and Imax announced plans to launch a 3D television network in the United States. A launch is scheduled for early 2011. The 24-hour network headed by Discovery executive VP and chief operating officer Tom Cosgrove will screen a mix of natural history, space, exploration, engineering, science and technology programmes.
Should a European launch be forthcoming, it would be the first third-party channel to join the Sky platform after its own Sky 3D, which was recently given a consumer launch date of October 1. In March, Sky reopened its launch queue to 3D and HD broadcasters. 3D channels will be required to broadcast no fewer than six hours of non-repeating programmes during a seven-day period, compared with 12 hours for other TV and radio channels listed in its EPG. Sky says the minimum may be increased as additional 3D content becomes available.
Growth of 10% in its US networks and 15% internationally helped Discovery Communications grow revenues by $98 million (€74m) to $963 million. Operating Income Before Depreciation and Amortization (OIBDA) grew 18% to $455 million.
By Julian Clover, Broadband TV News
Live 3D sports production will have a new player come September. 3D-4U is looking to transform the consumer experience of live 3D with its new patent-pending camera/hardware/software system. A camera platform with 3D cameras mounted to provide either a 180- or a 360-degree view of the action gives viewers at home the opportunity to select their favorite camera angle.
“It’s as big as a bushel basket and weighs as much as a briefcase, plus it only takes 90 minutes to set up,” says Jason Coyle, COO of Silver Chalice New Media, the Chicago-based company that represents 3D-4U in the sports, entertainment, and new-media industries and provides strategic advisory and other business services. “It provides an immersive experience where we are basically [placing the viewer] in a courtside or front-row seat.”
The 360-degree system has 10 small, proprietary 3D camera systems mounted around it, all pointing outwards. Signals from them are pumped through a server located in or near the TV-production truck.
“The secret sauce,” says Coyle, “is in the camera hardware and its ability to tie the images together with software.”
Dr. Sankar Jayaram, founding partner/chief technology officer of 3D-4U, says the resolution is better than HD and the convergence is automated via the software. Signals are transmitted at 8 Mbps per camera.
“One camera system can capture the whole scene,” he says. For basketball, three systems would provide the best experience, with one mounted behind each goal or basket and another at the scorer’s table. Football, he adds, would be best experienced with the system mounted on the sideline cart. The more camera systems deployed the more immersive the 360-degree 3D experience becomes.
“There is also directional audio and full surround-sound audio,” says Jayaram. “So, when the viewer hears a sound, they can use the joystick to look at where the sound is coming from.”
In terms of getting the signals to homes, early distribution is expected to be via the Internet. In September, a Web browser will be available that uses a graphical user interface or keyboard commands for viewer control. The system’s signals are compatible with standard 3DTV screens, 3D computer monitors, 3D headsets, and even 2D sets (with those signals not being viewable in 3D).
Plans include the ability to deliver content directly to consumers’ 3DTV sets. That will require distribution deals with cable, satellite, and IPTV operators, as well as software that will reside on the cable set-top box. Delivery to the home would require about the same amount of bandwidth as three linear HD channels.
The sports market is just one important segment the company is addressing. Nature programming, live events like concerts and theater, and even security also hold potential.
“This gives the viewer an opportunity to experience a live event or the planet in an immersive way,” says Coyle. “It can stand alone as a supplemental second-screen experience or be available via traditional broadcast.”
By Ken Kerschbaumer, Sports Video Group
Cinesite has announced the launch of stereoscopic 3D visual effects services catering for films shot in stereo, as well as conversion of 2D films into 3D. Cinesite’s first stereo 3D project will be Pirates of The Caribbean: On Stranger Tides, for which it has been awarded a significant volume of stereoscopic visual effects work.
To accommodate its new stereo 3D services, Cinesite is undergoing a major expansion which includes growing its production staff by 40% and taking on a new floor in its building to accommodate 85–100 additional visual effects artists — adding another 7,000 square feet to its 30,000 square feet of custom-designed post production facilities. Having already added 250 terabytes of storage from BlueArc, Cinesite anticipates that it will have added almost half a petabyte of disc space and around 1,000 cores by the end of 2010, with an additional 1,000 cores in 2011.
As well as investing in a site licence of The Foundry’s Nuke compositing software, providing an additional 500 visual effects seats, Cinesite has purchased a Dolby stereoscopic projection system for its 36-seat screening room, a stereo 3D-capable Scratch viewing system and stereo 3D editing suites.
Telenet has signed a three-year research contract with the Laboratory for Neurophysiology and Psychophysiology at the KU Leuven Medical School. The research project led by Professor Guy Orban will focus on the neural processing of 3D images in the human brain using functional imaging.
The cooperation between the Flemish cableco and the Katholieke Universiteit Leuven was established as part of the “Digital Wave 2015” plan, an innovation programme launched by Telenet to help Flanders maintain its pioneering role in an environment that is very quickly turning digital.
“For Telenet, it is not only important to know how the customer reacts to new television products in commercial terms,” said Jan Vorstermans, executive vice president, technology and solutions, Telenet. “The way in which viewers process 3D images, and the influence of 3D on the general viewing experience are equally relevant when outlining new digital plans”. Vorstermans said it was essential to clearly assess how sustainable and how responsible new products are.
Professor Orban and his team are world leaders in research on the neural mechanisms of visual 3D shape processing. “We have discovered the gradient neurons that extract this 3D shape from the flow of visual information reaching the brain from both eyes”, says Professor Orban. “We have been able to identify the areas where these neurons are situated in the human brain using functional imaging.”
As part of the current project, Professor Orban will investigate how information reaching the brain is used to perceive actions in 3D, and how these processes differ from real-life perception. “In concrete terms we can, for example, compare the experience of being present at a football match to watching the same football match on the new digital 3D TV”.
The co-operation between cableco and university will last for an initial three years with both parties able to use any findings that emerge.
By Julian Clover, Broadband TV News
TDVision Systems is pitching the cable industry and its key video suppliers on a patented technology that can produce full-resolution 3DTV signals without blowing out MSO bandwidth budgets. Comcast, DirecTV, Cablevision and others are starting off with "frame-compatible" formats that knit together two half-resolution, high-definition TV signals that can be fed to today's digital boxes. Comcast kicked off its stereoscopic 3DTV efforts in April with coverage of The Masters golf tournament using MPEG-2, but is now requiring consumers to get MPEG-4-capable boxes to access ESPN's new part-time 3D channel.
Offering frame-compatible, half-resolution 3DTV signals also keeps bandwidth requirements in check, since they take up roughly the same space as a linear 2D-HD channel, plus an overhead encoding premium in the neighborhood of 10 percent. Delivering the 3D signal in MPEG-4 format offers even more savings.
But in the future, full-res 3DTV is expected to cost a much heavier premium. Estimates vary, but TDVision chief marketing officer Ethan Schur says a linear, full-res 3DTV channel, without using any special techniques, could more than double the bandwidth requirement if operators intend to deliver HD to each eye. On top of that, the operator may still need to set aside another channel if it intends to simulcast any 3DTV programming that can still be displayed on older 2D-only TVs.
The Bridge to Full-res 3DTV
TDVision is starting to come on the cable radar as the industry continues along an "interim" path with frame-compatible 3DTV. The first phase is already underway with half-resolution signals. The next phase is expected to help take some of the manual settings out of the equation and make sure that TVs and set-tops can toggle between 3D and 2D programming on the fly.
TDVision's interest in cable (and perhaps vice-versa) will grow as the industry starts to pursue full-resolution 3DTV and the use of Multiview Video Coding (MVC), a bandwidth-saving compression technique that's already been applied to H.264/MPEG-4 AVC, and has been adopted by Blu-ray for its 3D spec.
Schur claims TDVision's technology, when linked to the MVC process, can result in big bandwidth savings. He says operators will be capable of delivering a full-resolution 3DTV signal (plus the 2D-compatible version) on one stream and require a bandwidth penalty of about 35 percent -- versus the 215 percent required for a full-res 3D and a simulcasted 2D version.
TDVision claims it can offer this sort of encode-once/deploy-everywhere process using what it calls the "2D+Delta System." The company plans to demonstrate it at next month's CableLabs Summer Conference in Keystone, Colo.
As described, the patented system saves on bandwidth during the encoding process by combining the left-eye and right-eye view and throwing out the similarities between the two. The anticipated 35 percent bandwidth penalty represents the 3D "delta" information that's compressed and stored in the video stream alongside one of the views as the 2D version.
In 2D mode, the TV set would only decode the delta-free left-eye version and ignore the 3D info. A 3D-capable TV would also decode the 3D info to create the full-resolution 3D image. The good news for operators, then, is that they would only have to send a single broadcast stream to all customers, rather than simulcasting 2D and 3D versions of the same content.
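As a back-of-the-envelope illustration of Schur's figures (the 8 Mbps base rate for a linear 2D HD channel is an assumption chosen purely for the arithmetic):

```python
BASE_HD_MBPS = 8.0  # hypothetical bitrate for one linear 2D HD channel

# Simulcast approach: the 2D channel plus a discrete full-resolution
# 3D version, which per Schur carries a ~215% penalty over the base.
simulcast_mbps = BASE_HD_MBPS * (1 + 2.15)

# 2D+Delta approach: a single stream carrying the 2D view plus the
# compressed 3D delta, at roughly a 35% premium over the base channel.
two_d_plus_delta_mbps = BASE_HD_MBPS * (1 + 0.35)

print(round(simulcast_mbps, 1))         # total for 2D + full-res 3D simulcast
print(round(two_d_plus_delta_mbps, 1))  # total for the single 2D+Delta stream
```

Under these illustrative numbers, the single 2D+Delta stream would use well under half the capacity of simulcasting separate 2D and full-resolution 3D services.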
Schur hopes the process will help 3DTV gain mass adoption among consumers and give studios a way to enter the third dimension without having to produce versions for the 2D and 3D audience. One likely exception to that is sports programming. Different angles are needed to enhance the 3D effect, which is one reason why ESPN is producing its 3DTV programming separately.
Blu-ray is already using some of TDVision's intellectual property in its 3D specs, but it's also trying to fit into cable's encoding and decoding ecosystem as it prepares to join the full-res 3DTV era.
According to Schur, Magnum Semiconductor is making a headend encoder that integrates TDVision's 2D+Delta System. But the deal's not exclusive, so anyone can license TDVision's technology. He says the chips being used by major TV manufacturers will be able to decode streams that use the vendor's technology, and hopes the same will be true for next-gen set-tops that will be capable of decoding full-res 3DTV signals.
But how soon will cable's move into this phase of 3DTV begin? Schur thinks it will start happening within the next six months, noting that some will be "surprised" at how quickly cable moves in this respect.
"But the first thing that's important is getting the encoders in there. The next step is to include the support in set-tops coming into the market," he says.
But TDVision, a privately held firm with under 20 employees, won't be the only one charging into that market. Dolby Laboratories and RealD have both started off with frame-compatible solutions and are looking at ways to make the jump to full-res 3DTV.
By Jeff Baumgartner, Light Reading CABLE
In 2008, cooperation between the Walt Disney Company and the Swiss Federal Institute of Technology Zürich (ETH) was announced as part of the Disney Research group. This unusual combination of no-nonsense Swiss thinking and nonsense-centric Disney creativity raised some eyebrows, and many thought it would not go anywhere. Well, in March of 2010 "Disney Research Zürich" was officially opened with a group of 20 scientists and is poised to grow to 40 by 2011.
While nobody really knew what the goal of this research group was at first, its targets are now becoming clearer. An ETH publication described two main focus areas: modeling human faces (a holy grail for animated movies) and 3D movies. The latter became clearer with a recent announcement. 2D editing methods for movies don't work well with 3D and don't address any 3D-specific issues. A new algorithm allows editors to correct the depth of individual objects. This can be done in a post-processing step or even on the fly for live events. It may also allow converting 2D content into 3D content.
The attached 3D anaglyph pictures (from ETH Life - "Ein Algorithmus für mehr 3D-Sehgenuss", "An algorithm for more 3D viewing pleasure") show one of the key issues that exist with today's 3D imaging.
If an object comes out of the screen (top picture), it can create eye strain when the out-of-screen effect is too strong. It can also lead to an uncomfortable viewing experience when the out-of-screen object overlaps the edge of the display (as it does here). This breaks the 3D illusion by creating contradictory depth cues, which the brain has to resolve through extra processing. If a computer did this, the processor would heat up and the fan would spin faster to keep it cool; in the human case, we just get a headache. This is a typical case where new technology is required to make 3D ready for the consumer, not only in the movie theater but also in the home.
Disney Research is addressing this issue by making it possible to alter the depth structure at the pixel level. This allows varying the depth structure of single objects or planes of the complete scene. In the lower picture, the car is pushed back into the picture and the edge of the car is closer to screen depth so as not to have a conflict with the border edge.
Nonlinear Disparity Mapping for Stereoscopic 3D
By Norbert Hildebrand, Display Daily