ARRIRAW T-Link Certificate Verifies Recorders for D-21

The ARRIRAW T-Link (Transport Link) certificate has been developed to allow certified data recorder manufacturers to market their ability to record the raw data from the ARRIFLEX D-21 film-style camera.

The ARRIRAW T-Link is a simple and standardized method of transporting raw data from the ARRIFLEX D-21 to a recorder: the 12-bit ARRIRAW data is packed into the RGBA HD file format and transmitted over a standard dual-link HD-SDI connection per SMPTE 372M. At this point, certified manufacturers working in partnership with ARRI include Codex Digital and S.two. Data recorder manufacturers can receive the T-Link certificate if their product fulfills the following conditions: it can record the ARRIRAW T-Link stream, show a live preview while recording, play back the ARRIRAW images, de-squeeze an anamorphic image for preview and playback, and output raw data in a format that can be read by the ARRI Image Booster software.
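
To make the packing idea concrete, the sketch below shows, in Python, how a stream of 12-bit samples can be repacked losslessly into 10-bit container words, 10 bits being the word size of the RGBA channels in a SMPTE 372M dual-link signal. It is illustrative only; the actual T-Link bit mapping is defined by ARRI and is not reproduced here.

```python
# Illustrative sketch only: ARRI's real T-Link bit mapping is defined in the
# ARRIRAW/T-Link specification. This simply shows that 12-bit raw samples can
# be repacked losslessly into 10-bit container words and recovered again.

def pack_12bit_to_10bit_words(samples):
    """Concatenate 12-bit samples into a bitstream and slice it into 10-bit words."""
    bits, nbits, words = 0, 0, []
    for s in samples:
        assert 0 <= s < 4096            # each raw sample is 12 bits
        bits = (bits << 12) | s
        nbits += 12
        while nbits >= 10:              # emit full 10-bit container words
            nbits -= 10
            words.append((bits >> nbits) & 0x3FF)
            bits &= (1 << nbits) - 1
    if nbits:                           # pad the final partial word with zeros
        words.append((bits << (10 - nbits)) & 0x3FF)
    return words

def unpack_10bit_words_to_12bit(words, count):
    """Reverse the packing to recover the original 12-bit samples."""
    bits, nbits, samples = 0, 0, []
    for w in words:
        bits = (bits << 10) | (w & 0x3FF)
        nbits += 10
        while nbits >= 12 and len(samples) < count:
            nbits -= 12
            samples.append((bits >> nbits) & 0xFFF)
            bits &= (1 << nbits) - 1
    return samples

raw = [0, 4095, 1234, 2048, 77]
assert unpack_10bit_words_to_12bit(pack_12bit_to_10bit_words(raw), len(raw)) == raw
```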

Source: BroadcastBuyer

Studios Sign Digital Deployment Deals

More studios will be signing digital cinema deployment deals in the next few weeks, with 2,500-3,000 screens ready by the time DreamWorks Animation debuts its first digital 3-D pic Monsters vs. Aliens in March, Jeffrey Katzenberg said Tuesday. The prediction, which was significantly more optimistic than the toon studio topper's comments in April, came during a call with analysts following a solid earnings report driven by the strong bow of Kung Fu Panda.

Katzenberg, who said in April that the pace of movie screens transitioning to digital had been "pretty disappointing," said he was encouraged by Fox's recent agreement with Digital Cinema Implementation Partners, the org formed by AMC, Cinemark and Regal to fund the installation of digital projectors.

"It has taken a good 60 days longer than we or anyone else anticipated," the exec granted. "Now that the first one is done, I think we're going to see very rapidly, in the next two to three weeks, at least two or three additional studios come onboard."

If most of the 2,500-3,000 screens Katzenberg predicts will be digital by March are devoted to Monsters vs. Aliens, DreamWorks should be able to project the film in 3-D at about half of its playdates.

In a follow-up interview, DreamWorks Animation prexy Lew Coleman said that the level of digital deployment should allow the toon company to at least break even on the $15 million extra it spends to produce its pics in 3-D, with profits rising as more screens make the transition. Extra revenue will come from higher ticket prices. Katzenberg said he is expecting the average price increase to be $5 for digital 3-D films.

DreamWorks Animation reported a $27.5 million profit for the quarter ended June 30 on $140.8 million in revenue. That's down substantially from last year, when the studio was raking in cash from the megahit Shrek the Third. But Wall Street seemed pleased since it was fueled by Kung Fu Panda, which is on track to be the studio's biggest nonsequel. Jack Black starrer has already grossed more than $510 million worldwide. A sequel is likely, though Katzenberg said no decisions have been made yet.

Film has already recouped distributor Paramount's print and marketing costs and brought in $46.4 million to DreamWorks this quarter. Slightly less than half came from what the studio described as the "completion of a strategic relationship," which is believed to be its deal with Toshiba to distribute in the now defunct HD DVD format. Majority of the rest came from consumer products.

Kung Fu Panda is expected to bring in substantially more revenue this quarter and next from international box office and then homevideo. Company also earned $29.9 million from Shrek the Third, primarily in the pay TV window, and $25.5 million from last fall's disappointing performer Bee Movie via international DVD. Pics have sold 20.3 million and 7.1 million DVDs worldwide, respectively.

In November, DreamWorks releases sequel Madagascar: Escape 2 Africa.

Studio also said Tuesday that it is spending $85 million to expand and upgrade its Glendale HQ in the next two years. In addition, Coleman has extended his contract through 2011, and executive chairman Roger Enrico is transitioning to non-exec chairman, meaning he won't play a day-to-day role in running the company.

By Ben Fritz, Variety

Thomson Infinity Camcorder Enhanced By New CineForm I-J2K Suite With Adobe CS3 & Apple FCP

Thomson has joined with CineForm to develop the I-J2K Suite that enables Adobe CS3 and Apple Final Cut Pro (FCP) users to have a complete, real-time, 10-bit, 4:2:2 workflow for JPEG 2000 files captured with the Thomson Grass Valley Infinity Digital Media Camcorder (DMC).

The new I-J2K Suite, which is part of CineForm’s HD Link technology, can be found in the company’s Prospect HD and NEO HD products. These products will include a plug-in decoder module that seamlessly converts JPEG 2000 files generated with the Infinity DMC to a file format that is easily recognized by PC and Mac users of the Adobe CS3 Production Suite, and Apple Final Cut Pro Studio. In addition to making the JPEG 2000 files available for high-quality, file-based, non-linear editing, it creates AVI or QuickTime files for preview and playback on both Windows and Mac systems, allowing JPEG 2000 files to be as easily edited and viewed as other popular media file types.

In addition, CineForm has developed separate DirectShow (Windows OS) and QuickTime (Mac OS X) components that allow the company to OEM its new (Infinity) JPEG 2000 decode plug-in to other third-party companies, thus enabling them to build the capability to convert Infinity JPEG 2000 files into their respective transcoding products for use with Avid Technology and other systems.

The Windows version is now available for purchase from CineForm’s website, and the Mac version will be available soon. Licenses for Windows and Mac versions of CineForm’s I-J2K Suite will cost $99 per seat.

Source: BroadcastBuyer

One Monster Piranha, Lots of Plug-Ins & 123 3D VFX Shots

Award-winning VFX studio Frantic Films VFX, a division of Prime Focus Group, created all that and more as lead visual effects provider for the recently released film, Journey to the Center of the Earth, the first fully live-action VFX-driven feature released in 3D. Frantic’s secret? The not-so-easy but infinitely rewarding process of developing and deploying a set of custom plug-ins for eyeon’s Digital Fusion 5, called Awake, to guide the 3D VFX workflow during what could have been a hellacious ride. The software arm of the studio has also made those plugs available to everyone else starting down the 3D road.

Sean Konrad, Frantic’s Pipeline Designer/Technical Director, “hovers between the studio and software division” of the company, giving him a very long and deep view of the plug-in development process. “A lot of these new plug-ins evolved out of tools we’d had before,” he says, “but the intensity of the Journey production forced us to put more into the existing tools, as well as create brand new ones to manage the complexity of the stereoscopic workflow. The grim reality is that native tools just can’t do an adequate job. We developed these out of necessity.” For example, Awake’s Lens Distort and Digital Camera Noise tools, created and used for Superman Returns, were only modified for the new release and weren’t used heavily during Journey VFX post. But Frantic relied heavily on other tools it has put into the pack, including Depth Blur and Stereo Image Stacking and Unstacking.

With a fully 3D pipeline, then, where do you start? “About two months before we started to work on the project, we got a few test shots and started looking at them,” says Konrad. “We thought about creating separate project workspaces for each of the eyes, so the idea was to work on the right eye and then copy everything that had been done for the right eye over into a new workspace for the left eye. But it became very clear early on that this setup would result in a lot of investment in time and hardware.” He and the team also discovered early on that the two cameras had exposed slightly differently when shooting the VFX sequence. “We were trying to determine whether or not the difference between the two could be reduced or removed, or if the threshold was large enough that a well rounded keyer could take care of it,” he says. “Our VFX Supervisor, Mike Shand, was trying this one day and found the workflow of copying and pasting tools too time consuming, and decided to combine the two images into a single frame by increasing the canvas size and putting the left image on top and the right image on the bottom.”

The result is Awake’s Stereo Image Stack plug-in, which combines stereo images, getting rid of the clutter of stereo flows. It works in tandem with Awake’s Stereo Image Unstack plug. "The idea was to be able to stack images top-bottom, side-by-side, or interlace them in the output image," says Konrad. "Scripting let us view only the left or only the right flows at any given time, meaning you need less time to generate previews. Prior to this, I'd been working on a system that would create a linked but separate workspace for each eye, such that the artist would get near final with work done on one eye, and then run a script that would spawn the second eye. We would use the right eye as a sort of layout, and have scripts to mark certain tools as being left-eye specific (such as roto masks), so that future updates to the right eye could be ported to the left eye with another script. It became obvious early on that this was going to be fraught with problems." For example, he says, mundane things, such as making sure mask connections updated to new tools correctly, or more critical problems, such as not being able to look at stereo issues "in an abstracted way in the same comp window. The stacking method that Mike Shand came up with provided an easy out. It made the process of compositing two eyes both intuitive and well managed."
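
For readers who want to picture the stacked-canvas idea, here is a minimal NumPy sketch of the same top/bottom stacking and unstacking. It is illustrative only and is not the Awake plug-in's implementation:

```python
# A minimal sketch of the stacking concept: the left eye goes on top, the right
# eye on the bottom, and a single double-height canvas flows through the comp.
import numpy as np

def stereo_stack(left, right):
    """Stack a left/right eye pair top/bottom into one double-height frame."""
    assert left.shape == right.shape, "both eyes must share resolution"
    return np.concatenate([left, right], axis=0)   # axis 0 = image rows

def stereo_unstack(stacked):
    """Split a stacked frame back into its left (top) and right (bottom) eyes."""
    h = stacked.shape[0] // 2
    return stacked[:h], stacked[h:]

# Usage: two synthetic 1080p RGB frames in and out of the stacked representation.
left  = np.zeros((1080, 1920, 3), dtype=np.float32)
right = np.ones((1080, 1920, 3), dtype=np.float32)
frame = stereo_stack(left, right)                  # shape (2160, 1920, 3)
l2, r2 = stereo_unstack(frame)
assert np.array_equal(l2, left) and np.array_equal(r2, right)
```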

After the concept was mapped out, Konrad and Kert Gartner, Frantic’s 2D Technical Lead, went to work analyzing the workflow. "We identified early on what we would need," he says, including these primary requirements: "To be able to work on a single eye; to be able to create masks independently for each eye; to be able to separate the images for processes that do image sampling such as blurs, as the act of combining the two images into a single frame would cause blurs to bleed through the median point; a way to view the images in rudimentary stereo in-comp; and a visible way to tell the images apart. The results are a few tools that allow you to select the eye the tool should be outputting and the eye that the comp is currently active with, so that you can work in stereo and in mono in the same comp."

Beyond developing three digital characters end-to-end and the entirely CG water and mist (using Frantic’s well-known Krakatoa volumetric point renderer), Frantic also created a CG sail for the raft itself. “The nice thing was that the director, Eric Brevig, really considered it to be a character as well,” Konrad says. That echoes what Brendan Fraser, the film’s star and also an executive producer, said on the record when Journey came out: The Piranha sequence is his favorite in the entire film.

Konrad says that working on the film opened his eyes to the breadth of artistic and technical choices available to 3D filmmakers. “What’s going to happen when more of these 3D movies get made,” he says, “is the language of 2D film will have to change significantly. As the 3D medium matures, there will be a lot more interesting uses of 3D that will result. I assume people like James Cameron [with films-in-progress like Avatar] are doing with 3D what Hitchcock did when he analyzed his films. Hitchcock found that moving the camera right created a sense of forward momentum, and moving the camera left created a sense of foreboding. Unfortunately, a lot of that language is diluted when it’s in a 3D environment. Journey did a lot to clarify that new language, and it will be a wild ride as we all map it out together.”

The Awake plug-in pack has a list price of $299 and is compatible with Fusion 5 and higher.

By Beth Marchant, StudioDaily

Fox Reaches Digital Cinema Upgrade Deal

News Corp's Twentieth Century Fox has reached an agreement in principle with a group of theater chains, paving the way for a long-delayed $1.1 billion digital cinema upgrade that Hollywood hopes will boost attendance and cut costs, sources familiar with the deal said on Thursday. One of the sources said the deal was contingent upon other studios also agreeing to help co-finance the digital roll-out for the movie chain consortium.

Fox is the first of six studios engaged in year-long talks with the Digital Cinema Implementation Partners (DCIP) -- formed by Regal Entertainment Group, Cinemark Holdings Inc and AMC Entertainment Inc, who operate 14,000 screens -- to reach a deal to help finance the theater upgrades.

Walt Disney Co and Viacom Inc, which owns Paramount, are expected to soon reach a DCIP deal, other sources familiar with the deal said. Officials from the studios were either unavailable or declined comment on the negotiations. Travis Reid, chief executive of DCIP, confirmed a studio had reached a deal but declined to disclose which one. "A party has signed a deal and we think it won't be long until we have multiple studios," he said.

DCIP was formed over a year ago and first hoped to reach a deal by the fourth quarter of 2007. Discussions also involve General Electric Co's Universal Pictures, Time Warner Inc's Warner Bros, Sony Corp and third party financiers. Negotiations had been slowed by disputes over terms requiring studios, exhibitors and content providers to pay usage and other fees to help pay off loans provided by financial institutions such as JPMorgan Chase to buy and install new digital equipment. Market volatility and issues involving standards, equipment procurement and performance criteria also delayed the talks.

Hollywood has a lot riding on the industry's digital conversion, as studios like DreamWorks Animation SKG Inc, whose films are distributed via Paramount, are planning to make all future movies in 3-D and need enough 3-D enabled screens to support their slates.

Various theater chains have been upgrading individually but the DCIP deal is being watched closely because it accounts for so large a portion of the nation's box office.

"When the DCIP deal drops, then digital cinema is really on its way," said Michael Lewis, chief executive of RealD, a provider of 3-D systems for the cinema market.

About 5,000 of the 37,000 cinema screens in the United States are digitally equipped and the ultimate aim is to transform all 125,000 screens worldwide.

The upgrades will enable studios to beam film via hard drives or satellite dishes to theaters, saving them billions of dollars in print and delivery costs. Once outfitted with digital projectors, theaters can add 3-D capabilities. There are close to 1,300 3-D screens in the U.S., primarily provided by RealD, said Lewis, who has commitments for another 4,250 3-D screens, many of which are dependent on clinching the DCIP deal.

News of a DCIP studio deal was first disclosed by Regal Entertainment Group Chief Executive Mike Campbell on a conference call on Thursday. "We can't disclose which studio, but we consider it to be a major milestone. It is always difficult in getting someone to be willing to be the first," he said, adding he expected other studios to soon follow suit.

Studios look to the success of the 3-D concert movie, Hannah Montana/Miley Cyrus: Best of Both Worlds Concert Tour, which commanded substantially higher ticket prices, as a template for the future.

The latest 3-D release, Journey to the Center of the Earth, reaffirmed the trend, with 954 RealD screens generating over 55 percent of the opening weekend box office receipts, yielding a per-screen average of $12,118, compared with the $4,116 average for standard 2-D screens, Lewis said.

Studios are collectively preparing about 40 3-D films over the next three to five years.

"I believe 3-D will enhance the moviegoing experience. It's the hook that will bring people back for the experience," said Chuck Viane, president of Disney domestic distribution.

By Sue Zeidler, Yahoo News

Beijing Theater Installs Laser Projector

A movie theater in Beijing has installed what it claims is the world's first projector that uses a laser rather than conventional light, breaking ground on projection technology that U.S. theaters are barred from exploring because of safety regulations. The laser projection system has been jointly developed by Phoebus Vision Co. and the CAS Academy of Opto-electronics.


The Beijing Hua Xing Ultimate Movie Experience opened Thursday with a screening of John Woo's epic Red Cliff using a projector made by a local company.

"Beijing Phoebus Vision Co. provided us with the world's first set of laser-screening instruments" Han Jie, spokeswoman with Beijing UME said Monday. The projector was installed in an existing 120-seat hall in the Chinese capital at a cost of about 1.2 million yuan ($176,000).

"It is the first laser-screening set in the world," a Beijing Phoebus Vision spokesman said. Han said that UME's normal cinema projectors cost about 700,000 yuan ($102,000).

Several companies, including Mitsubishi, have demonstrated laser projection systems, said industry analyst Matt Brennensholtz of Norwalk, Conn.-based research firm Insight Media. These systems are usually very costly, he added.

"I'm not aware of anybody that's used a laser projector in a movie theater before," Brennensholtz said. "There were a number of tests, but I've never head of a public theater where you pay your ticket and go in and see one of these."

According to Michael Karagosian, president of Los Angeles-based MKPE Consulting, laser projection systems offer broader color range and use less power than traditional systems. In addition, the bulbs -- which cost several thousand dollars each -- can last 10 years instead of six months. However, because of safety concerns the laser systems are not approved for public use in the U.S. and in many other countries, said Karagosian, a technology consultant to the L.A.-based National Association of Theatre Owners.

"What you don't want is a little kid standing up looking back at the projector and suddenly have a ray in his eye," he said.

UME's Han said ticket prices for films projected with the new laser technology would remain the same for now "in order to let more people experience and accept the new technology."

UME plans to install another such laser projector at its Shuang Jing cinema in Beijing by the end of 2008. The chain does not plan to install the model in the other cities where it operates: Shanghai, Chongqing, Guangzhou and Hangzhou.

By Alex S. Dai, Maria Trombly and Alicia Yang, The Hollywood Reporter

XDT Introduces Catapult

XDT, a developer of advanced software-based solutions for digital film, has announced the immediate availability of catapult. Catapult provides what the company describes as the world’s fastest point-to-point frame transfers, and is capable of delivering the same local throughput performance as its storage arrays out to connected systems over standard networking infrastructure.

Catapult is an integrated hardware and software solution. The hardware consists of a 3RU, enterprise grade server that delivers a maximum data rate of more than 1,000 Megabytes per second read or write via a local RAID5 protected array (up to 11TB capacity). This hardware platform is then exposed onto the LAN or Switched Fabric with XDT’s revolutionary catapult software component.

XDT’s cross-platform slingshot client provides an intuitive interface for transferring frame data to and from catapult. Point-to-point data rates of 600 Megabytes per second are achievable over six Gigabit Ethernet interfaces. XDT’s flipstream application, also cross-platform, enables customers with uncompressed film frames to review them from any standard workstation accessing a network-attached catapult system. This is in contrast to the currently used method, which requires frames to be transferred to locally attached storage on a dedicated review system.

Saving countless hours of network transfers, catapult systems are capable of playing back two concurrent, uncompressed 2K (10-bit dpx log 1.85:1) streams over only four Gigabit Ethernet connections, allowing for a highly cost-effective way to review stereoscopic material.
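
A back-of-the-envelope check of that playback claim, under assumed parameters the article does not state (a 2048 x 1107 active 1.85:1 raster, 10-bit RGB DPX packed at 4 bytes per pixel, 24 frames per second per eye):

```python
# Rough bandwidth check for two concurrent uncompressed 2K streams.
# All frame parameters below are assumptions for illustration only.
width, height   = 2048, 1107        # assumed 1.85:1 2K raster
bytes_per_pixel = 4                 # 3 x 10-bit samples packed into one 32-bit word
fps, streams    = 24, 2             # stereo pair = two concurrent streams

bytes_per_frame = width * height * bytes_per_pixel
total_rate = bytes_per_frame * fps * streams          # bytes per second
gige_raw   = 4 * 125_000_000                          # four GigE links, raw line rate

print(f"per-frame size : {bytes_per_frame / 1e6:.1f} MB")
print(f"two 2K streams : {total_rate / 1e6:.0f} MB/s")
print(f"4 x GigE (raw) : {gige_raw / 1e6:.0f} MB/s")
# ~435 MB/s of payload against ~500 MB/s of raw link capacity, which is why
# four aggregated Gigabit Ethernet connections can just carry the stereo pair.
```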

Source: BroadcastBuyer

SMPTE to Establish 3-D Home Entertainment Task Force

The Society of Motion Picture and Television Engineers (SMPTE) is establishing a task force to define the parameters of a stereoscopic 3-D mastering standard for content viewed in the home.

Called "3-D Home Display Formats Task Force", the project promises to propel the 3-D home entertainment industry forward by setting the stage for a standard that will enable 3-D feature films and other programming to be played on all fixed devices in the home, no matter the delivery channel. The inaugural meeting of the Task Force is open to entertainment technology professionals interested in participating in the effort, subject to available space (SMPTE membership not required). It takes place on August 19, 2008 and will be hosted by the Entertainment Technology Center (ETC) at the University of Southern California, near downtown Los Angeles.

“Digital technologies have not only paved the way for high quality 3-D in the theaters, they have also opened the door to 3-D in the home,” explained SMPTE Engineering Vice President Wendy Aylsworth. “In order to take advantage of this new opportunity, we need to guarantee consumers that they will be able to view the 3-D content they purchase and provide them with 3-D home solutions for all pocketbooks.”

The 3-D Home Display Formats Task Force will explore the standards that need to be set for 3-D content distributed via broadcast, cable, satellite, packaged media and the Internet and played out on televisions, computer screens and other tethered displays. After six months, the committee will produce a report that defines the issues and challenges, minimum standards, evaluation criteria and more, which will serve as a working document for SMPTE 3-D standards efforts to follow.

The first 3-D Home Display Formats Task Force gathering will feature demonstrations of 3-D technologies. All technology professionals in content creation and distribution, consumer electronics and entertainment tools and services who are considering joining the group are welcome to attend. Non-members will be asked to pay a small fee for the initial meeting, and ongoing participation in the work requires membership in the SMPTE Standards Community.

Source: BroadcastBuyer

New "Telescopic Pixel" Displays Could Outperform LCD and Plasma

Liquid crystal displays (LCDs) have become the overwhelming choice for both desktop and mobile computing because they offer the best combination of image quality, price, and power efficiency of the current display technologies. But LCDs still have a lot of room for improvement, as they only transmit 5 to 10 percent of the total backlight to the user, and can account for up to 30 percent of the total power consumption of a laptop. In this week's Nature Photonics, researchers from Microsoft and the University of Washington report a new display technology called "telescopic pixel" that transmits 36 percent of backlight radiation.

The new pixel design is based on a tried-and-true technology: the optical telescope. Each pixel consists of two opposing mirrors where the primary mirror can change shape under an applied voltage. When the pixel is off, the primary and secondary mirrors are parallel and reflect all of the incoming light back into the light source. When the pixel is on, the primary mirror deforms into a parabolic shape that focuses light onto the secondary mirror. The secondary mirror then reflects the light through a hole in the primary mirror and onto the display screen.

Schematic of a telescopic pixel in the off position

Each pixel is produced in two halves by standard photolithography and etch techniques. The secondary mirror is simply a lithographically patterned array of aluminum islands on glass, but making the primary mirror is a bit more complicated. First, an indium tin oxide (ITO) electrode is deposited on a glass substrate and coated with polyimide. The polyimide acts as a support and electrical insulator for the primary mirror. Aluminum is then sputtered onto the polyimide and photolithography is used to pattern 20 µm diameter holes, forming a two-dimensional array that will eventually line up with the secondary mirrors.

The last step in the primary mirror fabrication is a dry etch, which preferentially removes polyimide from under the holes in the aluminum layer, resulting in sections of aluminum that are suspended in free space. These free-hanging sections of aluminum can be deformed by applying a voltage between the metal and the ITO layer. Once assembled, each pixel is 100 µm in diameter. This fabrication method is low-cost and compatible with the infrastructure currently used for LCDs.

Internal structure of a telescopic pixel display

Performance tests on arrays of telescopic pixels suggest they hold substantial promise for future displays. As mentioned above, backlight transmission was measured at 36 percent, and simulations indicated that this could reach 56 percent with design improvements. In a modern laptop with a five-hour battery life, this increase in efficiency could lead to almost 45 extra minutes of battery time without reducing screen brightness. Pixel response time was 0.625 ms—a dramatic improvement over LCDs, which have 2 to 10 ms response times. These response times may also be fast enough to allow sequential color processing where colors are displayed as rapid pulses of red, blue, and green from each pixel, streamlining fabrication and device design. Also, the intensity of each pixel can be smoothly varied from zero to one hundred percent for more realistic gray scales and color shades.

The worst and possibly crippling property of the displays was contrast. Experimental measurements conducted with non-collimated light showed a contrast of 20:1. Simulations indicate that ratios of up to 800:1 may be possible, which would put these displays on par with LCDs. Several easily correctable experimental factors led to the low contrast numbers, and I would expect to see much better numbers in future prototypes.

Given the substantial performance gains, amenability to current fabrication methods, and Microsoft's involvement, this report could signal the beginning of a new display technology. These displays have the potential to be faster than LCDs, more scalable than plasma, and cheaper and more energy efficient than both. It's a long, arduous path from the university lab bench to the laptop on your desk, but I wouldn't wager too much against seeing telescopic pixels in some capacity in the future.

By Adam Stevenson, Ars Technica

Upcoming 3D Releases

Boxoffice Magazine and Dolby are providing a comprehensive list of upcoming 3D releases. This list will be continually updated as new 3D movies are announced.

Filmlight Struts its Stuff

On Tuesday night, at an event aimed strictly at the press, I saw a demo at Filmlight’s Los Angeles offices. The main point of the evening was to see Baselight 8’s 4K color grading, but while they were readying that suite (apparently, it had crashed after an entire day’s worth of demos), we got another treat.

TrueLight product manager Peter Postma demonstrated Baselight’s new 3D grading capabilities. With 3D filmmaking on the upswing, all kinds of manufacturers, from cameras to color correction, are eager to provide solutions. Postma showed the demo with two HD streams (left eye/right eye) playing simultaneously. The user can grade either both eyes at the same time, or grade one eye and then apply the grades to the second (all in real-time, although complex grades may need to cache material before playback). The 3D version of Baselight has all the same tools as the 2D version, said Postma. Also provided are tools to tweak the convergence of the eyes, to increase or decrease the amount of depth perception. According to Postma, Filmlight plans additional tools for keystone correction (for adjusting the two lenses if they’re not properly aligned) and a tool for offset of HD cameras. He pointed out that users can grade in 3D with the Truelight tool. The software release will be available shortly after IBC. It’s available for use now to any Baselight customers and any prospective customers. No facility is beta-testing at the moment. For Baselight users, the 3D capabilities will be a standard software upgrade.

The main course was Filmlight’s 4K DI, which got a real-life application with the indie film “Reach for Me,” which was shot in 4K with the Dalsa Origin camera, recorded to Codex Digital media, and offlined in FCP. The demo was done by Jacqui Loran, head of European support, Mike Grieve, worldwide sales director, Michael La Fuente, Baselight manager and Avid application editor Steve Hollyhead.

The demonstration first showed the latest results of Filmlight’s working relationship with Avid, which was bi-directional metadata content exchange between Media Composer and Baselight via Unity. Post Logic Studios colorist Doug Delaney (who was not the colorist on “Reach for Me” but knew how it was done) showed how robust the system is. He “pushed and pulled” the color correction on several sequences. “There’s a lot of horsepower here,” he said. He then shared the material with the Avid editor, in real-time. “As a colorist, being quick and visceral is a critical part of keeping the session going,” said Delaney. “You can’t say, give me a few minutes to render because it’s 4K. You have to keep things moving which, with the Baselight, is more do-able.” The current version was Baselight 8 with 18 CPUs. “The idea is that you can grade and do changes in editorial simultaneously,” said Grieve. “It’s parallelization.”

(As an aside, Loran showed an improvement to the conform tool that enables the user to filter for UID, timecode or keycode to find the correct images, nearly instantaneously.)

Last but definitely not least, the Filmlight crew demonstrated the company’s own GPU technology, which will go into beta in August and be released in October with Baselight 8. In a word: impressive. Grieve started off by playing 17 layers in real-time at 4K; but he later threw up 43 layers and there was no slow-down in performance. Grieve said that the company is still deciding where to beta-test the GPU. The next point release will provide pre-determined looks. “We’re showing the limits with what you can do in real-time,” he said. “The more the DoP and director can do, the more time they’ll spend in the suite making something that’s beautiful.” Now those are sweet words to post house execs.

By Debra Kaufman, StudioDaily

800MB/s+ Dulce Systems ProRX RAID Solutions for Final Cut Pro

Today I profile storage supplier Dulce Systems, specifically their new ProRX RAID solution for Final Cut Pro and other high data-rate creative applications. Dulce has a reputation as one of the best and most reliable vendors in the storage marketplace, and as this profile demonstrates, they have long and extensive knowledge of mass storage products and their customers' needs. I sat down with their DoTS (Director of Technical Stuff) Robert Leong, who gave me the details on the ProRX.

Click here to download the video

By Larry Jordan, HDFilmtools

HP DreamColor LP2480zx

It’s not often you encounter a product that can shake up the marketplace. The press release may promise something that’s "revolutionary," but that’s rarely the case. With the HP DreamColor LP2480zx 24-inch LCD display, however, the product really could live up to the hype. Based on what I’ve seen, the DreamColor is just what the industry has been looking for. Or nearly so, as nothing is perfect.

This is a first-look review, because at the time of this writing, the monitor wasn’t shipping and wasn’t available for an in-house review. I was, however, able to have some quality hands-on time with it at the DreamWorks studios in Glendale, California. It wasn’t quite the same as loading up your own applications and eyeballing your usual video and graphics files, but I came away impressed with the display’s wide color gamut, image detail in areas that would normally be too dark to evaluate and intelligently designed features.

It was designed by HP in collaboration with DreamWorks Animation. In a nutshell, that’s why this monitor is particularly well suited for post-production. DreamWorks has been in the same boat as everyone else in the industry. If accurate color reproduction is critical to your work, you could either hang on to your old CRT monitor or buy a new color-critical LCD monitor that could cost as much as $23,000.


HP DreamColor LP2480zx


Over One Billion Served
The DreamColor display has a list price of $3,499. Given the specs, that’s a real bargain. We’re talking about a 30-bit display (10 bits per primary color) that can reproduce more than one billion colors when operating in its native mode. As you might guess, it uses an LED backlight system to achieve this broad color gamut. Significantly, it can handle 97 percent of the DCI-P3 specification, which means you could use this monitor to edit video destined for a digital cinema theater. I spoke with one of the animators at DreamWorks Animation, who said that 97 percent was sufficient for their work (if necessary, you could run the video on a DCI-P3 compliant projector to check that last three percent). It has a 1920 x 1200 native resolution and an impressive 178-degree viewing angle.
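
The billion-color figure is straightforward arithmetic: 10 bits per primary gives 1,024 levels per channel, and three channels multiply out to 2^30 shades, as the quick check below shows.

```python
# Sanity check of the "more than one billion colors" figure for a 30-bit panel.
levels_per_channel = 2 ** 10                # 1,024 levels for each of R, G, B
total_colors = levels_per_channel ** 3      # 2**30
print(total_colors)                         # 1,073,741,824 distinct colors
```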

Another key issue for color-critical work is the monitor’s contrast range. CRTs have been able to provide blacker blacks than LCDs can offer. The DreamColor monitor has a 1000:1 contrast ratio with the black reaching all the way down to 0.05 cd/m2 (versus a maximum white luminance rating of 250 cd/m2). The DreamWorks animators were much more concerned with the dark end of the contrast range because they wanted to emulate a low-light theatrical experience when editing.

DreamWorks will also benefit from the DreamColor’s calibration stability. Currently, they have to recalibrate their CRT monitors after 75 to 100 hours of use, which translates in practical terms to about twice a week. The DreamColor can retain its calibration for 1,000 hours of use, which translates to about twice a year. According to HP, after that 1,000 hours, the luminance level could vary by about one nit. The color and white point parameters would continue to be stable.

The DreamColor has a generous array of video inputs: DVI-I (dual), DisplayPort 1.1, HDMI 1.3, Component (YPbPr), S-Video and Composite. Keep in mind that only the DisplayPort and HDMI interfaces are capable of handling 30-bit color. You’ll also need a graphics card, graphics card driver and software that are 30-bit compatible. Of course, that would be true for any 30-bit monitor you purchase.

Are there any limitations to this monitor? It doesn’t have 120-Hz scanning, which can significantly improve fast-moving video on an LCD display. Both the Sony BVM-L230 ($23,000) and Panasonic BT-LH1760 ($4,500) have 120-Hz scanning. In addition, the DreamColor’s pixel response rate is 6ms (gray to gray), which is good, but not great. That said, I saw no evidence of motion blurring or monitor-induced artifacts when viewing video files. If you work with fast-moving video, such as sports coverage or street documentaries, you should test the monitor with typical video sequences.

The Final Frontier
DreamColor’s onscreen menus also reflect the particular needs of video professionals. The color space presets include sRGB, Adobe RGB, Rec. 601, Rec. 709, DCI-P3 emulation (97 percent) and full gamut. An optional HP DreamColor Advanced Profiling Solution lets you customize the monitor’s color space by defining the parameters for primaries, gamma, white point and luminance. The kit includes a colorimeter and associated software. It should be available in July for less than $500.

The HP DreamColor LP2480zx fills an important price gap for color-critical content creation, especially animation work. Even DreamWorks can’t provide a $23,000 monitor for everyone who works on a project. Given the relatively low price, the DreamColor could also appeal to digital photographers, fashion designers and even magazine publishers. It’s a prime example of the benefits that can follow when you involve advanced users in the design of a highly technical product.

Specifications
   - 24-inch wide-aspect-ratio LCD
   - 1920 x 1200 native resolution
   - LED backlight system
   - Over one billion colors in native mode
   - 1000:1 contrast ratio
   - 178-degree viewing angle
   - 250 cd/m2 maximum white luminance level
   - 40 cd/m2 minimum white luminance level
   - 0.05 cd/m2 black luminance level
   - Video Inputs: DVI-I (dual), DisplayPort 1.1, HDMI 1.3, Component (YPbPr), S-Video and Composite
   - Six Factory Programmed Color Space Presets: sRGB, Rec. 709, Rec. 601, Adobe RGB, DCI-P3 emulation (97 percent) and full gamut; one user-programmable color space preset
   - Three-year parts, labor and on-site service

By David English, StudioDaily


3-D Film

The term 3-D (or 3D) is used to describe any visual presentation system that attempts to maintain or recreate moving images with a third dimension: the illusion of depth as seen by the viewer. The technique usually involves filming two images simultaneously, with two cameras either positioned side by side or mounted at a 90-degree angle to each other and shooting via a beam-splitting mirror, in perfect synchronization and with identical technical characteristics. When viewed in such a way that each eye sees its photographed counterpart, the viewer's visual cortex will interpret the pair of images as a single three-dimensional image. Modern computer technology also allows for the production of 3D films without dual cameras.

Anaglyph images
Anaglyph images are used to provide a stereoscopic 3D effect when viewed with two-color glasses (each lens a chromatically opposite color, usually red and cyan). Images are made up of two color layers, superimposed but offset with respect to each other to produce a depth effect. Usually the main subject is in the center, while the foreground and background are shifted laterally in opposite directions. The picture contains two differently filtered colored images, one for each eye. When viewed through the color-coded anaglyph glasses, they reveal an integrated stereoscopic image. The visual cortex of the brain fuses this into the perception of a three-dimensional scene or composition.

Anaglyph images have seen a recent resurgence due to the presentation of images and video on the Internet, Blu-ray HD disks, CDs, and even in print. Low-cost paper frames or plastic-framed glasses hold accurate color filters, which, typically after 2002, make use of all three primary colors. The current norm is red for one channel (usually the left) and a combination of both blue and green in the other filter. That equal combination is called cyan in technical circles, or blue-green. The cheaper filter material used in the monochromatic past dictated red and blue for convenience and cost. There is a material improvement in full-color images with the cyan filter, especially for accurate skin tones.

Video games, theatrical films, and DVDs can be shown in the anaglyph 3D process. Practical images, for science or design, where depth perception is useful, include the presentation of full-scale and microscopic stereographic images. Examples from NASA include Mars Rover imaging and the solar investigation called STEREO, which uses two orbital vehicles to obtain 3D images of the sun. Other applications include geological illustrations by the USGS and various online museum objects. A recent application is stereo imaging of the heart using costly 3D ultrasound with plastic red/cyan glasses.

Anaglyph images are much easier to view than either parallel (diverging) or crossed-view stereogram pairs. However, these side-by-side types offer bright and accurate color rendering, not easily achieved with anaglyphs. Recently, cross-view prismatic glasses with adjustable masking have appeared that offer a wider image on the new HD video and computer monitors. It is possible to convert stereo pairs from any source into anaglyph images.
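
As a rough illustration, the sketch below builds a red/cyan anaglyph from a stereo pair using the simplest channel-mixing recipe (red from the left eye, green and blue from the right). Production converters typically apply more careful color matrices, so treat this as a starting point rather than a finished tool.

```python
# Minimal red/cyan anaglyph: red channel from the left eye, green and blue
# channels from the right eye. Arrays are H x W x 3 RGB images.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a single red/cyan anaglyph frame."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]      # red channel from the left eye
    anaglyph[..., 1] = right_rgb[..., 1]     # green channel from the right eye
    anaglyph[..., 2] = right_rgb[..., 2]     # blue channel from the right eye
    return anaglyph

# Usage with two synthetic frames; the result is viewed through red/cyan glasses.
left  = np.random.rand(1080, 1920, 3).astype(np.float32)
right = np.random.rand(1080, 1920, 3).astype(np.float32)
out = make_anaglyph(left, right)
```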

Polarized 3D glasses
Polarized 3D glasses create the illusion of three-dimensional images by restricting the light that reaches each eye, an example of stereoscopy. To present a stereoscopic motion picture, two images are projected superimposed onto the same screen through orthogonal polarizing filters. The viewer wears low-cost eyeglasses which also contain a pair of orthogonal polarizing filters. As each filter only passes light which is similarly polarized and blocks the orthogonally polarized light, each eye only sees one of the images, and the effect is achieved.

The difficulty arises because light reflected from a motion picture screen tends to lose a bit of its polarization. However, this problem is eliminated if a "silver" or "aluminized" screen is used. This means that a pair of aligned DLP projectors, some polarizing filters, a silver screen, and a computer with a dual-head graphics card can be used to form a relatively low-cost system for displaying stereoscopic 3d data simultaneously to tens of people wearing polarized glasses. Such a system, called a GeoWall, has been used for several years now in the Earth Sciences thanks to the GeoWall Consortium, with several open source and commercial packages available.

When stereo images are to be presented to a single user, it is practical to construct an image combiner, using partially silvered mirrors and two image screens at right angles to one another. One image is seen directly through the angled mirror whilst the other is seen as a reflection. Polarized filters are attached to the image screens and appropriately angled filters are worn as glasses. A similar technique uses a single screen with an inverted upper image, viewed in a horizontal partial reflector, with an upright image presented below the reflector, again with appropriate polarizers. Polarizing techniques are most simply used with cathode ray technology; because polarizers are used within ordinary LCD screens to control pixel presentation, LCDs can interfere with these techniques. In 2003 Keigo Iizuka discovered an inexpensive implementation of this principle on laptop computer displays using cellophane sheets.

Polarized stereoscopic pictures have been around since 1936, when Edwin H. Land first applied it to motion pictures. The so-called "3-D movie craze" in the years 1952 through 1955 was almost entirely offered in theaters using polarizing projection and glasses. Only a small fraction of the 3D films shown in the period used the anaglyph color filter method. What is new is the use of digital projection, and also the use of sophisticated IMAX 70mm film projectors with very reliable mechanisms. Whole new generations of 3D animation films are beginning to show up in the theaters, all using some form of polarization. Polarization is not easily applied to home 3-D broadcast or DVD presentation; at this point only anaglyph glasses can be used to view the new HD 3-D shows, which are beginning to be aired occasionally by NBC and the Discovery Channel.

Alternate-frame sequencing
Alternate-frame sequencing (sometimes called Alternate Image, or AI) is a method of showing 3-D film that is used in some venues. It is also used on PC systems to render 3-D games in true 3-D.

Applications in film
The movie is filmed with two cameras like most other 3-D films. Then the images are placed into a single strip of film in alternating order. In other words, there is the first left-eye image, then the corresponding right-eye image, then the next left-eye image, followed by the corresponding right-eye image and so on.

The film is then run at 48 frames per second instead of the traditional 24 frames per second. The audience wears specialized LCD shutter glasses with lenses that can open and close in rapid succession. The glasses also contain special radio receivers. The projection system has a transmitter that tells the glasses which eye to have open. The glasses switch eyes as the different frames come on the screen.
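
The ordering itself is easy to express in code. The sketch below, with frames represented by simple labels, interleaves two 24 fps eye sequences into the left-then-right 48 fps alternate-frame strip described above:

```python
# A minimal sketch of alternate-frame ordering: each left-eye frame is followed
# immediately by the matching right-eye frame, doubling the playback frame rate.
def interleave_eyes(left_frames, right_frames):
    """Interleave per-eye sequences into a single alternate-frame sequence."""
    assert len(left_frames) == len(right_frames)
    out = []
    for l, r in zip(left_frames, right_frames):
        out.append(l)   # left-eye frame shown while the right shutter is closed
        out.append(r)   # matching right-eye frame follows immediately
    return out

left  = [f"L{i}" for i in range(3)]
right = [f"R{i}" for i in range(3)]
print(interleave_eyes(left, right))   # ['L0', 'R0', 'L1', 'R1', 'L2', 'R2']
```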

This system is not generally used in venues anymore, having been supplanted by polarization. It is used, however, in home 3-D movie systems.

Applications in Gaming
The same method of alternating frames can be used to render modern 3-D games in true 3-D, although it has been used to give a 3D illusion on consoles as old as the Sega Master System and Nintendo Famicom. Here, special software or hardware is used to generate two channels of images, offset from each other to create the stereoscopic effect. High frame rates (typically ~100fps) are required to produce seamless graphics, as the perceived frame rate will be half the actual rate (each eye sees only half the frames). Again, LCD shutter glasses synchronised with the graphics card complete the effect.

Autostereoscopy
Autostereoscopy is a method of displaying three-dimensional images that can be viewed without the use of special headgear or glasses on the part of the user. These methods produce depth perception in the viewer even though the image is produced by a flat device.

Several technologies exist for autostereoscopic 3D displays. Currently, most such flat-panel solutions use lenticular lenses or a parallax barrier. If the viewer positions their head in certain viewing positions, they will perceive a different image with each eye, giving a stereo image. Consequently, eye strain and headaches are usual side effects of long viewing exposure to autostereoscopic displays that use lenticular lenses or parallax barriers. These displays can have multiple viewing zones, allowing multiple users to view the image at the same time. Other displays use eye-tracking systems to automatically adjust the two displayed images to follow the viewer's eyes as they move their head.

A wide range of organisations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercially available displays. Examples include: Alioscopy, Apple, Dimension Technologies, Fraunhofer HHI, Holografika, i-Art, NewSight, Philips, SeeFront, SeeReal Technologies, Spatial View, and Tridelity. Sharp also claim to have the technology, although not for commercial sale at the moment.

Stereogram
A stereogram is an optical illusion of depth created from a flat, two-dimensional image. Originally, stereogram referred to a pair of stereo images that could be viewed using a stereoscope. Other types of stereograms include anaglyphs and autostereograms.

The stereogram was discovered by Charles Wheatstone in 1838. He found an explanation of binocular vision which led him to construct a stereoscope based on a combination of prisms and mirrors to allow a person to see 3D images from two 2D pictures. Stereograms were re-popularized by the creation of the autostereogram on computers, where a 3D image is hidden in a single 2D image until the viewer focuses the eyes correctly. The Magic Eye series is a popular example of this. Magic Eye books refer to autostereograms as stereograms, leading most people to believe that the word stereogram is synonymous with autostereogram. Salvador Dalí created some impressive stereograms in his exploration of a variety of optical illusions.

Pulfrich effect
The Pulfrich effect is a psycho-optical phenomenon wherein lateral motion by an object in the field of view is interpreted by the brain as having a depth component, due to differences in processing speed between images from the two eyes. The effect is generally induced by placing a dark filter over one eye. The phenomenon is named for German physicist Carl Pulfrich who first described it in 1922.

In the classic Pulfrich effect experiment a subject views a pendulum swinging in a plane perpendicular to the observer’s line of sight. When a neutral density filter (a darkened lens – typically grey) is placed in front of, say, the right eye the pendulum seems to take on an elliptical orbit, appearing closer as it swings toward the right and farther as it swings toward the left.

The widely accepted explanation of the apparent depth is that a reduction in retinal illumination (relative to the fellow eye) yields a corresponding delay in signal transmission, imparting instantaneous spatial disparity in moving objects. This seems to occur because visual system latencies are generally shorter for (the visual system responds more quickly to) bright targets compared to dim targets. This motion with depth is the visual system’s solution to a moving target when a difference in retinal illuminance, and hence a difference in signal latencies, exists between the two eyes.

The Pulfrich effect has typically been measured under full field conditions with dark targets on a bright background, and yields about a 15ms delay for a factor of ten difference in average retinal illuminance. These delays increase monotonically with decreased luminance over a wide (> 6 log-units) range of luminance. The effect is also seen with bright targets on a black background and exhibits the same luminance-to-latency relationship. The effect can occur spontaneously in several eye diseases such as cataract, optic neuritis, or multiple sclerosis. In such cases, symptoms such as difficulties judging the paths of oncoming cars have been reported.
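
A small worked example makes the geometry concrete: the instantaneous disparity induced by the delay is simply the target's lateral angular velocity multiplied by the interocular delay. The velocity below is an assumed figure for illustration; the 15 ms delay is the value quoted above.

```python
# Why a fixed interocular delay turns lateral motion into binocular disparity:
# the filtered eye sees the target where it was delay seconds ago, so the two
# eyes report positions that differ by (angular velocity x delay).
def pulfrich_disparity_deg(angular_velocity_deg_per_s, delay_ms):
    """Instantaneous disparity (degrees) induced by an interocular delay."""
    return angular_velocity_deg_per_s * (delay_ms / 1000.0)

# Usage: a pendulum bob crossing the midline at an assumed 20 deg/s, behind a
# filter adding roughly 15 ms of delay, yields ~0.3 deg of apparent disparity.
print(pulfrich_disparity_deg(20.0, 15.0))   # 0.3
```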

The Pulfrich effect has been utilized to enable a type of stereoscopy, or 3-D visual effect, in visual media such as film and TV. As in other kinds of stereoscopy, glasses are used to create the illusion of a three-dimensional image. By placing a neutral filter (e.g., the darkened lens from a pair of sunglasses) over one eye, an image, as it moves right to left (or left to right, but not up and down), will appear to move in depth, either toward or away from the viewer.

Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique; for example it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. It can, however, be effective as a novelty effect in contrived visual scenarios. One advantage of material produced to take advantage of the Pulfrich effect is that it is fully compatible with "regular" viewing without the need for "special" glasses.

The effect achieved a small degree of popularity in television in the 1990s. For example, it was used in a "3D" motion television commercial in the 1990s, where objects moving in one direction appeared to be nearer to the viewer (actually in front of the television screen) and when they moved in the other direction, appeared to be farther from the viewer (behind the television screen). To allow viewers to see the effect, the advertiser provided a large number of viewers with a pair of filters in a paper frame. One eye's filter was a rather dark neutral gray while the other was transparent. The commercial was in this case restricted to objects (such as refrigerators and skateboarders) moving down a steep hill from left to right across the screen, a directional dependency determined by which eye was covered by the darker filter.

The effect was also used in the 1993 Doctor Who charity special Dimensions in Time and a 1997 special TV episode of 3rd Rock from the Sun. In many countries in Europe, a series of short 3D films, produced in the Netherlands, were shown on television. Glasses were sold at a chain of gas stations. These short films were mainly travelogues of Dutch localities. A Power Rangers episode sold through McDonalds used "Circlescan 4D" technology which is based on the Pulfrich effect. Animated programs that employed the Pulfrich effect in specific segments of its programs include The Bots Master and Space Strikers; they typically achieved the effect through the use of constantly-moving background and foreground layers. The videogame Orb-3D for the Nintendo Entertainment System used the effect (by having the player's ship always moving) and came packed with a pair of glasses. So did Jim Power: The Lost Dimension in 3-D for the Super Nintendo, using constantly-scrolling backgrounds to cause the effect. In the United States and Canada, six million 3D Pulfrich glasses were distributed to viewers for an episode of Discovery Channel's Shark Week in 2000.

ChromaDepth
ChromaDepth is a patented system from the company Chromatek that produces a stereoscopic effect based upon differences in the diffraction of color through a special prism-like holographic film fitted into glasses. ChromaDepth glasses give the illusion of colors taking up different positions in space, with red being in front and blue being in back. Any media piece can be given a 3D effect as long as the color spectrum is put into use with the foreground being in red and the background in blue. From front to back the scheme follows the visible light spectrum, from red to orange, yellow, green and blue.

The system was used variously for comic books, educational books for children, light show displays at such planetariums as the Hayden Planetarium in New York City, and other printed and projected applications. This method was also used for I Love the '80s 3-D, and glasses could be bought at Best Buy. ChromaDepth has recently been put into use in Crayola's 3D Chalk. This technique was also used by the production company Quirky Motion for the music video promoting New Cassettes' 2007 single "recover/retreat" as part of the BBC Electric Proms.

By Srivenkat Bulemoni, Filmmaking Techniques

Movies Coming to PlayStation 3, Xbox 360

Bypassing the set-top box and the PC, movies are coming to a gaming console near you, according to announcements by Sony, Microsoft and Netflix. Sony Computer Entertainment America this week launched a new video delivery service on the PlayStation Store for the PlayStation 3 and PlayStation Portable systems in the United States. The company said consumers will have the ability to download full-length movies, television shows, and original programming. The service will offer nearly 300 movies and more than 1,200 TV episodes, many available in both standard-definition and HD.

Sony said the arrangement includes content from 20th Century Fox, Lionsgate Entertainment, MGM Studios, Paramount Pictures, Sony Pictures Entertainment, Warner Bros. Entertainment, plus titles for rent from The Walt Disney Studios and several television partners.

PS3’s progressive downloading means users can view content shortly after the downloading process begins, or they can download in the background and continue with other features, including gaming, Sony said. Consumers will have 14 days to start watching the content. Once playback is started, they’ll have access to the content for 24 hours. Prices range from $2.99 to $5.99 for rentals and $9.99 to $14.99 for purchased movies. Sony has also adopted Marlin Digital Rights Management technology, developed by Sony, Matsushita (Panasonic), Samsung and Philips.

The Sony announcement comes on the heels of one by Microsoft and Netflix to offer consumers the ability to instantly stream movies and TV episodes from Netflix to the television via the Xbox 360 video game and entertainment system. The service will be available to Xbox Live Gold members who are also Netflix subscribers at no additional cost.

The companies promised a growing library of more than 10,000 movies and TV episodes when it launches on Xbox Live in late fall. The companies say movies will be available for viewing as little as 30 seconds after being selected. Xbox Live members can already download movies and purchase TV shows from the Xbox Live Marketplace Video Store, which offers more than 6,000 hours of TV shows and movies.

Source: TV Technology

Digital Cinematography

Digital cinematography is the process of capturing motion pictures as digital images, rather than on film. Digital capture may occur on tape, hard disks, flash memory, or other media which can record digital data. As digital technology has improved, this practice has become increasingly common. Several mainstream Hollywood movies have now been shot digitally, and many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, new vendors like RED and Silicon Imaging, and companies which have traditionally focused on consumer and broadcast video equipment, like Sony and Panasonic. The benefits and drawbacks of digital vs. film acquisition are still hotly debated, but sales of digital cinematography cameras have surpassed those of mechanical cameras in the classic 35mm format.

Technology
Digital cinematography captures motion pictures digitally, in a process analogous to digital photography. While there is no clear technical distinction that separates the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is not generally applied when digital acquisition is substituted for analog video acquisition, as with live broadcast television programs.

Sensors
Digital cinematography cameras capture images using CMOS or CCD sensors, usually in one of two arrangements. High-end cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar to a 35mm film frame or even (as with Vision Research's Phantom 65) a 65mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses available. Their large sensors also let these cameras achieve the same shallow depth of field as 35mm or 65mm motion picture film cameras, which is important because many cinematographers consider selective focus an essential visual tool.

Prosumer and broadcast television cameras typically use three 1/3" or 2/3" sensors in conjunction with a prism, with each sensor capturing a different color. Camera vendors like Sony and Panasonic, which have their roots in the broadcast and consumer camera markets, have leveraged their experience with these designs into three-chip products targeted specifically at the digital cinematography market. The Thomson Viper also uses a three-chip design. These designs offer benefits in terms of color reproduction, but are incompatible with traditional cinematography lenses (though new lines of high-end lenses have been developed with these cameras in mind), and incapable of achieving 35mm depth of field unless used with depth-of-field adaptors, which can lower image sharpness and result in a loss of light.

CMOS Sensor
Complementary metal–oxide–semiconductor (CMOS) is a major class of integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM, and other digital logic circuits. It is also used for a wide variety of analog circuits such as image sensors, data converters, and highly integrated transceivers for many types of communication. Frank Wanlass was granted a patent on CMOS in 1967 (US Patent 3,356,858).

CMOS is also sometimes referred to as complementary-symmetry metal–oxide–semiconductor. The words "complementary-symmetry" refer to the fact that the typical digital design style with CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field effect transistors (MOSFETs) for logic functions.

Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Significant power is only drawn when the transistors in the CMOS device are switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, for example transistor-transistor logic (TTL) or NMOS logic, which uses all n-channel devices without p-channel devices. CMOS also allows a high density of logic functions on a chip.

The phrase "metal–oxide–semiconductor" is a reference to the physical structure of certain field-effect transistors, having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. Instead of metal (usually aluminum in the very old days), current gate electrodes (including those up to the 65 nanometer technology node) are almost always made from a different material, polysilicon, but the terms MOS and CMOS nevertheless continue to be used for the modern descendants of the original process. Metal gates have made a comeback with the advent of high-k dielectric materials in the CMOS process, as announced by IBM and Intel for the 45 nanometer node and beyond.

CCD Sensor or Charge-coupled device
A charge-coupled device (CCD) is an analog shift register, enabling analog signals (electric charges) to be transported through successive stages (capacitors) controlled by a clock signal. Charge coupled devices can be used as a form of memory or for delaying analog, sampled signals. Today, they are most widely used for serializing parallel analog signals, namely in arrays of photoelectric light sensors. This use is so predominant that in common parlance, "CCD" is (erroneously) used as a synonym for a type of image sensor even though, strictly speaking, "CCD" refers solely to the way that the image signal is read out from the chip.

The capacitor perspective reflects the history of the CCD's development and describes how the device is read out, but current efforts to optimize CCD designs and structures tend to treat the photodiode as the fundamental light-collecting unit of the CCD. Under the control of an external circuit, each capacitor can transfer its electric charge to one or the other of its neighbors. CCDs are used in digital photography and astronomy (particularly in photometry), sensors, electron microscopy, medical fluoroscopy, optical and UV spectroscopy, and high-speed techniques such as lucky imaging.

Acquisition Formats
While many people make movies with MiniDV camcorders and other consumer and prosumer products that have lower resolutions or shoot interlaced video, cameras marketed as digital cinematography cameras typically shoot in progressive HDTV formats such as 720p and 1080p, or in higher-end formats created specifically for the digital cinematography market, such as 2K and 4K. To date, 1080p has been the most common format for digitally acquired major motion pictures. In the summer of 2007, director Steven Soderbergh began shooting The Argentine and Guerrilla, due for release in 2008, with a prototype Red One 4K camera, making these the first two major motion pictures shot in the 4K format.

Data Storage: Tape vs. Data-Centric
Broadly, there are two paradigms used for data storage in the digital cinematography world. Many people, particularly those coming from a background in broadcast television, are most comfortable with video tape based workflows. Data is captured to video tape on set. This data is then ingested into a computer running non-linear editing software, using a deck. Once on the computer, the footage is edited, and then output in its final format, possibly to a film recorder for theatrical exhibition, or back to video tape for broadcast use. Original video tapes are kept as an archival medium. The files generated by the non-linear editing application contain the information necessary to retrieve footage from the proper tapes, should the footage stored on the computer's hard disk be lost.

Increasingly, however, digital cinematography is shifting toward "tapeless" workflow, where instead of thinking about digital images as something that exists on a physical medium like video tape, digital video is conceived of as data in files. In tapeless workflow, digital images are usually recorded directly to files on hard disk or flash memory based "digital magazines." At the end of a shooting day (or sometimes even during the day), the digital files contained on these digital magazines are downloaded, typically to a large RAID connected to an editing system. Once data is copied from the digital magazines, they are erased and returned to the set for more shooting. Archiving is accomplished by backing up the digital files from the RAID, using standard practices and equipment for data backup from the Information Technology industry, often to data tape.
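As a rough illustration of that offload step, the sketch below (in Python) copies each clip from a digital magazine to a RAID and verifies the copy with a checksum before the magazine would be cleared. The directory paths, the .dng file pattern and the MD5 verification are illustrative assumptions, not any vendor's actual procedure.

# Minimal sketch of a verified "digital magazine" offload (illustrative only;
# paths, file pattern and MD5 verification are assumptions, not a vendor workflow).
import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk_size=8 * 1024 * 1024):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def offload(magazine_dir, raid_dir):
    raid = Path(raid_dir)
    raid.mkdir(parents=True, exist_ok=True)
    for clip in sorted(Path(magazine_dir).glob("*.dng")):
        dest = raid / clip.name
        shutil.copy2(clip, dest)                  # copy the clip to the RAID
        if md5sum(clip) != md5sum(dest):          # verify before trusting the copy
            raise IOError(f"Checksum mismatch for {clip.name}")

# offload("/mnt/MAG_A001", "/mnt/raid/day12")    # example call with hypothetical paths

Only once every clip verifies would the magazine be erased and returned to the set; the archival backup to data tape would then be made from the RAID copy.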

Compression
Digital cinema cameras are capable of generating extremely large amounts of data, often hundreds of megabytes per second. To help manage this huge data flow, many cameras, or recording devices designed to be used in conjunction with them, offer compression. Prosumer cameras typically use high compression ratios in conjunction with chroma subsampling. While this allows footage to be comfortably handled even on fairly modest personal computers, the convenience comes at the expense of image quality.
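To see where figures like "hundreds of megabytes per second" come from, here is a back-of-the-envelope calculation for one uncompressed stream; the 1080p, 10-bit, 4:4:4, 24 fps combination is just an illustrative choice, not a specific camera's output.

# Back-of-the-envelope uncompressed data rate (illustrative parameters).
width, height = 1920, 1080        # 1080p frame
samples_per_pixel = 3             # R, G and B (no chroma subsampling)
bits_per_sample = 10              # 10-bit samples
fps = 24                          # cinema frame rate

bits_per_frame = width * height * samples_per_pixel * bits_per_sample
megabytes_per_second = bits_per_frame * fps / 8 / 1e6
print(round(megabytes_per_second))   # about 187 MB/s before any compression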

High-end digital cinematography cameras or recording devices typically support recording at much lower compression ratios, or in uncompressed formats. Additionally, digital cinematography camera vendors are not constrained by the standards of the consumer or broadcast video industries, and often develop proprietary compression technologies that are optimized for use with their specific sensor designs or recording technologies.

Lossless vs. lossy compression
A lossless compression system reduces the size of digital data in a fully reversible way: the original data can be completely restored, byte for byte. This is done by removing redundant information from the signal. Digital cinema cameras rarely use only lossless compression methods, because much higher compression ratios (lower data rates) can be achieved with lossy compression. With a lossy compression scheme, information is discarded to create a simpler signal. Because of limitations in human visual perception, it is possible to design algorithms which do this with little visual impact.
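As a toy illustration of the difference, the sketch below run-length encodes a short byte string losslessly, so decoding restores it byte for byte, and then quantises the same samples lossily, so the discarded detail cannot be brought back. It is a conceptual example only, not a codec used by any camera.

# Toy illustration of lossless vs. lossy coding (not a real camera codec).
from itertools import groupby

def rle_encode(data):
    # Lossless: store (value, run length) pairs; fully reversible.
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    return bytes(value for value, count in pairs for _ in range(count))

def quantize(data, step=16):
    # Lossy: throw away the low bits of every sample; not reversible.
    return bytes((b // step) * step for b in data)

samples = bytes([200, 200, 200, 201, 15, 15, 16, 16])
assert rle_decode(rle_encode(samples)) == samples   # restored byte for byte
print(list(quantize(samples)))                      # detail below the step size is gone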

Chroma subsampling
Chroma subsampling is the practice of encoding images by implementing more resolution for luma information than for chroma information. It is used in many video encoding schemes—both analog and digital—and also in JPEG encoding.

Rationale
Because of storage and transmission limitations, there is always a desire to reduce (or compress) the signal. Since the human visual system is much more sensitive to variations in brightness than color, a video system can be optimized by devoting more bandwidth to the luma component (usually denoted Y'), than to the color difference components Cb and Cr. The 4:2:2 Y'CbCr scheme for example requires two-thirds the bandwidth of (4:4:4) R'G'B'. This reduction results in almost no visual difference as perceived by the viewer.
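A quick counting argument shows where the two-thirds figure comes from. The sketch below tallies the samples stored for a 2x2 block of pixels under each scheme; the 4:2:0 line is included only for comparison.

# Samples stored per 2x2 block of pixels under common subsampling schemes.
def samples_per_block(scheme):
    luma = 4                                                  # Y' kept for all 4 pixels
    chroma = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}[scheme]     # Cb + Cr samples kept
    return luma + chroma

full = samples_per_block("4:4:4")                 # 12 samples
print(samples_per_block("4:2:2") / full)          # 0.666... -> two-thirds the bandwidth
print(samples_per_block("4:2:0") / full)          # 0.5 -> half the bandwidth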

How subsampling works
Because the human visual system is less sensitive to the position and motion of color than luminance, bandwidth can be optimized by storing more luminance detail than color detail. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate. In video systems, this is achieved through the use of color difference components. The signal is divided into a luma (Y') component and two color difference components (chroma).

Chroma subsampling deviates from color science in that the luma and chroma components are formed as a weighted sum of gamma-corrected (tristimulus) R'G'B' components instead of linear (tristimulus) RGB components. As a result, luminance and color detail are not completely independent of one another; there is some "bleeding" of luminance and color information between the luma and chroma components. The error is greatest for highly saturated colors and can be somewhat noticeable between the magenta and green bars of a color-bars test pattern that has chroma subsampling applied. This engineering approximation (reversing the order of operations between gamma correction and forming the weighted sum) allows color subsampling to be more easily implemented.
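The sketch below shows the engineering approximation described above: luma is formed as a weighted sum of the gamma-corrected R'G'B' values (the Rec. 601 coefficients are used here purely as an illustrative choice), and the chroma samples are then averaged in horizontal pairs to give 4:2:2.

# Minimal sketch of forming Y'CbCr from gamma-corrected R'G'B' and subsampling to 4:2:2.
# The Rec. 601 coefficients below are an illustrative choice, not the only option.

def to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b     # luma: weighted sum of R'G'B'
    cb = 0.564 * (b - y)                      # blue-difference chroma
    cr = 0.713 * (r - y)                      # red-difference chroma
    return y, cb, cr

def subsample_422(row_rgb):
    """Keep full-rate luma; average each horizontal pair of chroma samples."""
    ycbcr = [to_ycbcr(*pixel) for pixel in row_rgb]
    luma = [y for y, _, _ in ycbcr]
    cb = [(ycbcr[i][1] + ycbcr[i + 1][1]) / 2 for i in range(0, len(ycbcr) - 1, 2)]
    cr = [(ycbcr[i][2] + ycbcr[i + 1][2]) / 2 for i in range(0, len(ycbcr) - 1, 2)]
    return luma, cb, cr

row = [(0.9, 0.1, 0.8), (0.1, 0.9, 0.1), (0.5, 0.5, 0.5), (0.0, 0.0, 1.0)]
print(subsample_422(row))   # 4 luma samples but only 2 Cb and 2 Cr samples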

Bitrate
Video and audio compression systems are often characterized by their bitrates. Bitrate describes how much data is required to represent one second of media. One cannot directly use bitrate as a measure of quality, because different compression algorithms perform differently. A more advanced compression algorithm at a lower bitrate may deliver the same quality as a less advanced algorithm at a higher bitrate.
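The arithmetic itself is simple: bitrate times duration gives storage, which is part of why the number alone says nothing about quality. The two bitrates below are hypothetical, chosen only to show the spread.

# Storage needed for one hour of footage at two hypothetical bitrates.
def gigabytes_per_hour(megabits_per_second):
    return megabits_per_second * 1e6 * 3600 / 8 / 1e9

print(gigabytes_per_hour(25))    # ~11 GB/hour (an HDV-class bitrate)
print(gigabytes_per_hour(220))   # ~99 GB/hour (a far less compressed stream)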

Intra- vs. Inter-frame compression
Most compression systems used for acquisition in the digital cinematography world compress footage one frame at a time, as if a video stream is a series of still images. Inter-frame compression systems can further compress data by examining and eliminating redundancy between frames. This leads to higher compression ratios, but displaying a single frame will usually require the playback system to decompress a number of frames that precede it. In normal playback this is not a problem, as each successive frame is played in order, so the preceding frames have already been decompressed. In editing, however, it is common to jump around to specific frames and to play footage backwards or at different speeds. Because of the need to decompress extra frames in these situations, inter-frame compression can cause performance problems for editing systems. Inter-frame compression is also disadvantageous because the loss of a single frame (say, due to a flaw writing data to a tape) will typically ruin all the frames until the next keyframe occurs. In the case of the HDV format, for instance, this may result in as many as 6 frames being lost with 720p recording, or 15 with 1080i recording.
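The 6- and 15-frame figures quoted for HDV follow from its group-of-pictures (GOP) lengths. The toy model below assumes a simple chain of forward-predicted frames with no B-frames, which is a simplification of how HDV actually works, but it shows why a single corrupted frame can ruin everything up to the next keyframe.

# Toy model of error propagation inside a GOP (simple I-P chain assumed, no B-frames):
# a corrupted frame ruins itself and every later frame until the next keyframe.
def frames_ruined(gop_length, lost_index):
    return gop_length - lost_index

print(frames_ruined(6, 0))    # worst case with a 6-frame GOP (HDV 720p): 6 frames lost
print(frames_ruined(15, 0))   # worst case with a 15-frame GOP (HDV 1080i): 15 frames lost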

Digital vs. film cinematography: Predictability
When shooting on film, response to light is determined by the film stock chosen. A cinematographer can choose a stock he or she is familiar with and expose it on set with a high degree of confidence about how it will turn out. Because the film stock is the main determining factor, results will be substantially similar regardless of which camera model is used. However, with mechanical cameras the final result cannot be seen until the negative has been processed at a laboratory, so damage to the negative, such as scratches caused by faulty camera mechanics, cannot be caught until then.

In contrast, when shooting digitally, response to light is determined by the CMOS or CCD sensor(s) in the camera, and the image is recorded and "developed" directly. This means a cinematographer can measure and predict exactly how the final image will look, either by eye (if familiar with the specific model of camera being used) or by reading vectorscope and waveform displays.

On-set monitoring allows the cinematographer to see the actual images that are captured, immediately on the set, which is impossible with film. With a properly calibrated high-definition display, on-set monitoring, in conjunction with data displays such as histograms, waveforms, RGB parades, and various types of focus assist, can give the cinematographer a far more accurate picture of what is being captured than is possible with film. However, all of this equipment may impose costs in terms of time and money, and may not be possible to utilize in difficult shooting situations.

Film cameras do often have a video assist that captures video through the camera to allow for on-set playback, but its usefulness is largely restricted to judging action and framing. Because this video is not derived from the image that is actually captured to film, it is not very useful for judging lighting, and because it is typically only NTSC resolution, it is often useless for judging focus.

Portability
Ultra-lightweight and extremely compact digital cinematography cameras, such as the SI-2K Mini, are much smaller and lighter than mechanical film cameras. Other high-end digital cinema cameras can be quite large, and some models require bulky external recording mechanisms (though in some cases only a small strand of optical fiber is necessary to connect the camera and the recording mechanism).

Compact 35mm film cameras that produce the full 35mm film resolution and accept standard 35mm lenses cannot be reduced below a certain size and weight, as they require at least space for the film negative and basic transport mechanics.

Smaller form-factor digital cinema cameras such as the Red One and SI-2K have made digital more competitive in this respect. The SI-2K in particular, with its detachable camera head, allows high-quality images to be captured by a camera/lens package far smaller than is practically achievable with a 35mm film camera, and it is used in many scenarios to replace film, especially for stereoscopic productions.

Dynamic Range
The sensors in most high-end digital video cameras have less exposure latitude (dynamic range) than modern motion picture film stocks. In particular, they tend to 'blow out' highlights, losing detail in very bright parts of the image. If highlight detail is lost, it is impossible to recapture in post-production. Cinematographers can learn how to adjust for this type of response using techniques similar to those used when shooting on reversal film, which has a similar lack of latitude in the highlights. They can also use on-set monitoring and image analysis to ensure proper exposure. In some cases it may be necessary to 'flatten' a shot, or reduce the total contrast that appears in the shot, which may require more lighting to be used.
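A tiny numeric illustration of why blown highlights cannot be recovered: once scene values above the sensor's maximum are clipped to the same code, no later processing can tell them apart. The figures below are arbitrary.

# Highlight clipping: values above the sensor maximum collapse to the same code,
# so their original differences cannot be recovered in post-production.
scene_luminance = [0.2, 0.8, 1.3, 2.5]                      # arbitrary scene values
clipped = [min(value, 1.0) for value in scene_luminance]    # sensor clips at full scale
print(clipped)                                              # [0.2, 0.8, 1.0, 1.0]: detail gone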

Many people also believe that highlights are less visually pleasing with digital acquisition, because digital sensors tend to 'clip' them very sharply, whereas film produces a 'softer' roll-off effect with over-bright regions of the image. Some more recent digital cinema cameras attempt to more closely emulate the way film handles highlights, though how well they achieve this is a matter of some dispute. A few cinematographers have started deliberately using the 'harsh' look of digital highlights for aesthetic purposes. One notable example of such use is Battlestar Galactica.

Digital acquisition typically offers better performance than film in low-light conditions, allowing less lighting and in some cases completely natural or practical lighting to be used for shooting, even indoors. This low-light sensitivity also tends to bring out shadow detail. Some directors have tried a "best for the job" approach, using digital acquisition for indoor or night shoots, and traditional film for daylight exteriors.

Resolution
Substantive debate over the subject of film resolution vs. digital image resolution is clouded by the fact that it is difficult to meaningfully and objectively determine the resolution of either.

Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels. Rather, it has an irregular pattern of differently sized grains. As a film frame is scanned at higher and higher resolutions, image detail is increasingly masked by grain, but it is difficult to determine at what point there is no more useful detail to extract. Moreover, different film stocks have widely varying ability to resolve detail.

Determining resolution in digital acquisition seems straightforward, but is significantly complicated by the way digital camera sensors work in the real world. This is particularly true in the case of high-end digital cinematography cameras that use a single large Bayer-pattern CMOS sensor. A Bayer-pattern sensor does not sample full RGB data at every point; each pixel is biased toward red, green or blue, and a full color image is assembled from this checkerboard of color by processing the image through a demosaicing algorithm. Generally with a Bayer-pattern sensor, actual resolution will fall somewhere between the "native" value and half this figure, with different demosaicing algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing. Such filters reduce resolution.
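A minimal sketch of what a demosaicing step does is shown below: for each photosite it keeps the one colour that was actually measured and estimates the two missing colours by averaging nearby photosites of those colours. Real cameras use far more sophisticated, often proprietary, algorithms; this neighbourhood-averaging version only illustrates why effective resolution lands somewhere below the sensor's native photosite count.

# Minimal neighbourhood-averaging demosaic for an RGGB Bayer mosaic (illustrative only).
import numpy as np

def bayer_color(y, x):
    # RGGB tiling: even rows run R G R G ..., odd rows run G B G B ...
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic(raw):
    """Keep the measured colour at each photosite; interpolate the two missing colours."""
    h, w = raw.shape
    planes = {"R": 0, "G": 1, "B": 2}
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            neighbours = {"R": [], "G": [], "B": []}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        neighbours[bayer_color(ny, nx)].append(raw[ny, nx])
            measured = bayer_color(y, x)
            rgb[y, x, planes[measured]] = raw[y, x]
            for colour in "RGB":
                if colour != measured:
                    rgb[y, x, planes[colour]] = np.mean(neighbours[colour])
    return rgb

raw = np.random.rand(8, 8)       # tiny mosaic: one colour sample per photosite
print(demosaic(raw).shape)       # (8, 8, 3): full RGB estimated at every photosite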

In general, it is widely accepted that film exceeds the resolution of HDTV formats and the 2K digital cinema format, but there is still significant debate about whether 4K digital acquisition can match the results achieved by scanning 35mm film at 4K, as well as whether 4K scanning actually extracts all the useful detail from 35mm film in the first place. However, as of 2007 the majority of films that use a digital intermediate are done at 2K because of the costs associated with working at higher resolutions. Additionally, 2K projection is chosen for almost all permanent digital cinema installations, often even when 4K projection is available.

It is worth noting that the process of optical duplication, used to produce theatrical release prints for movies that originate both on film and digitally, causes a significant loss of resolution. If a 35mm negative does capture more detail than 4K digital acquisition, ironically this may only be visible when a 35mm movie is scanned and projected on a 4K digital projector.

Grain & Noise
Film has a characteristic grain structure, which many people view positively, either for aesthetic reasons or because it has become associated with the look of 'real' movies. Different film stocks have different grain, and cinematographers may use this for artistic effect.

Digitally acquired footage lacks this grain structure. Electronic noise is sometimes visible in digitally acquired footage, particularly in dark areas of an image or when footage was shot in low lighting conditions and gain was used. Some people believe such noise is a workable aesthetic substitute for film grain, while others believe it has a harsher look that detracts from the image.

Well-shot, well-lit images from high-end digital cinematography cameras can look almost eerily clean. Some people believe this makes them look "plasticky" or computer generated, while others find it to be an interesting new look, and argue that film grain can be emulated in post-production if desired. Since most theatrical exhibition still occurs via film prints, the super-clean look of digital acquisition is often lost before moviegoers get to see it, because of the grain in the film stock of the release print.

Digital Intermediate Workflow
The process of using digital intermediate workflow, where movies are color graded digitally instead of via traditional photochemical finishing techniques, has become common, largely because of the greater artistic control it provides to filmmakers. In 2007, all of the 10 most successful movies released used the digital intermediate process.

In order to utilize digital intermediate workflow with film, the camera negative must be processed and then scanned. High quality film scanning is time consuming and expensive. With digital acquisition, this step can be skipped, and footage can go directly into a digital intermediate pipeline as digital data.

Some filmmakers have years of experience achieving their artistic vision using the techniques available in a traditional photochemical workflow, and prefer that finishing process. While it would be theoretically possible to use such a process with digital acquisition by creating a film negative on a film recorder, in general digital acquisition is not a suitable choice if a traditional finishing process is desired.

Sound
Films are traditionally shot with dual-system recording, where picture is recorded on camera, and sync sound is recorded to a separate sound recording device. In post-production, picture and sound are synced up.

Many cameras used for digital cinematography can record sound internally, already in sync with picture. This eliminates the need for syncing in post, which can lead to faster workflows. However, most sound recording is done by specialist operators, and the sound will likely be separated and further processed in post-production anyway. Also, recording sound to the camera may require running additional cables to the camera, which may be problematic in some shooting situations, particularly in shots where the camera is moving. Wireless transmission systems can eliminate this problem, but are not suitable for use in all circumstances.

Archiving
Many people feel there is significant value in having a film negative master for archival purposes. As long as the negative does not physically degrade, it will be possible to recover the image from it in the future, regardless of changes in technology. In contrast, even if digital data is stored on a medium that will preserve its integrity, changes in technology may render the format unreadable or expensive to recover over time. For this reason, film studios distributing digitally-originated films often make film-based separation masters of them for archival purposes.

Economics: Low-budget / Independent Filmmaking
For the last 25 years, many respected filmmakers like George Lucas have predicted that electronic or digital cinematography would bring about a revolution in filmmaking, by dramatically lowering costs.

For low-budget and so-called "no-budget" productions, digital cinematography on prosumer cameras clearly has cost benefits over shooting on 35mm or even 16mm film. The cost of film stock, processing, telecine, negative cutting, and titling for a feature film can run to tens of thousands of dollars according to From Reel to Deal, a book on independent feature film production by Dov S-S Simens. Costs directly attributable to shooting a low-budget feature on 35mm film could be $50,000 on the low side, and over twice that on the high side. In contrast, obtaining a high-definition prosumer camera and sufficient tape stock to shoot a feature can easily be done for under $10,000, or significantly less if, as is typically the case with 35mm shoots, the camera is rented.

If a 35mm print of the film is required, an April 2003 article in American Cinematographer found the costs of shooting film and video to be roughly the same. The benefit of shooting video is that the cost of a film-out is incurred only if the film finds a distributor to pick up that cost; when shooting film, the costs are upfront and cannot be deferred in that way. On the other hand, the same article found 16mm film to deliver better image quality in terms of resolution and dynamic range. Given the progress digital acquisition, film recording, and related technologies have made in the last few years, it is unclear how relevant this article is today.

Most extremely low-budget movies never receive wide distribution, so the impact of low-budget video acquisition on the industry remains to be seen. It is possible that as a result of new distribution methods and the long tail effects they may bring into play, more such productions may find profitable distribution in the future. Traditional distributors may also begin to acquire more low-budget movies as better affordable digital acquisition eliminates the liability of low picture quality, and as they look for a means to escape the increasingly drastic "boom and bust" financial situation created by spending huge amounts of money on a relatively small number of very large movies, not all of which succeed.

Hollywood
On higher budget productions, the cost advantages of digital cinematography are not as significant, primarily because the costs imposed by working with film are simply not major expenses for such productions. Two recent films, Sin City and Superman Returns, both shot on digital tape, had budgets of $40 million and close to $200 million respectively. The cost savings, though probably in the range of several hundred thousand to over a million dollars, were negligible as a percentage of the total production budgets in these cases.

Rick McCallum, a producer on Attack of the Clones, has commented that the production spent $16,000 for 220 hours of digital tape, where a comparable amount of film would have cost $1.8 million. However, this does not necessarily indicate the actual cost savings. The low incremental cost of shooting additional footage may encourage filmmakers to use far higher shooting ratios with digital. The lower shooting ratios typical with film may save time in editing, lowering post-production costs somewhat.

Shooting in digital requires a digital intermediate workflow, which is more expensive than a photochemical finish. However, a digital intermediate may be desirable even with film acquisition because of the creative possibilities it provides, or a film may have a large number of effects shots which would require digital processing anyway. Digital intermediate workflow is coming down in price, and is quickly becoming standard procedure for high-budget Hollywood movies.

Digital cinematography cameras
Professional cameras include the Sony HDCAM series, RED One, Panavision's Genesis, SI-2K, Thomson Viper, Vision Research Phantom, Weisscam, GS Vitec noX, and the Fusion Camera System. Independent filmmakers have also pressed low-cost consumer and prosumer cameras into service for digital filmmaking.

Industry acceptance of digital cinematography
For over a century, virtually all movies have been shot on film, and nearly every film student learns how to handle 16mm and 35mm film. Today, digital acquisition accounts for the vast majority of moving-image capture, as most content for broadcast is shot on digital formats. Most movies destined for theatrical release are still shot on film, however, as are many dramatic TV series and some high-budget commercials. High-end digital cinematography cameras suitable for acquiring footage intended for theatrical release have been on the market since 1999/2000 and have since gained widespread adoption.

By Srivenkat Bulemoni, Filmmaking Techniques

Is 3D TV Ready for the Big Time?

Pay a visit to Steven Spielberg's favourite cinema this summer and it won't be the latest blockbuster that catches your eye. Walk through the foyer at the Bridge Theater in Los Angeles and you'll be awe-struck at the sight of wall-mounted displays throwing out images in glorious 3D – and you won't need to wear googly glasses to see them.

The professional displays are the work of Philips 3D Solutions, which is currently rolling them out worldwide, appearing in airports, shopping malls, casinos and, of course, cinemas. The best thing is that the technology Philips uses is fairly straightforward, promising great things for the TVs we're used to having at home.

The Philips WOWvx (the wow-effect, plus Visual eXperience) uses a series of tiny lenticular lenses mounted in front of a regular high-definition display. If lenticular sounds familiar, that's because it is: Philips WOWvx uses a technique similar to the animated 3D postcards you can find at tourist traps the world over. And you don't need to wear silly glasses to view those either.

2D-plus-Depth
Of course, having a lenticular lens in itself isn't enough; you also have to create a stereoscopic image to give the illusion of depth. Philips has two different formats for content creation: a standard version called 2D-plus-Depth, plus an extended version it dubs Declipse (as in the opposite of eclipse).

2D-plus-Depth does exactly what it says on the tin. It takes a 2D image and then adds depth to give you a 3D representation. If a normal display has pixel information as 'x' and 'y' co-ordinates, then 2D-plus-Depth adds 'z' to describe how deep the image portrayed by that pixel should be.
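In data terms, 2D-plus-Depth simply pairs every pixel of the ordinary picture with a depth value. The sketch below models a frame that way; the 8-bit depth range, and treating larger values as closer to the viewer, are assumptions made for illustration rather than details of the WOWvx format.

# Minimal model of a 2D-plus-Depth frame: a colour image plus a per-pixel depth map.
# The 8-bit range and the "larger value = closer" convention are illustrative assumptions.
import numpy as np

width, height = 1920, 1080
colour = np.zeros((height, width, 3), dtype=np.uint8)   # the ordinary 2D picture (x, y)
depth = np.zeros((height, width), dtype=np.uint8)       # the added 'z' for every pixel

depth[:, :] = 128                   # start everything at the screen plane
depth[400:700, 800:1100] = 220      # push one region forward so it appears to pop out

print(colour.shape, depth.shape)    # (1080, 1920, 3) (1080, 1920)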

The exciting thing for all of us, and for Hollywood, production studios and broadcasters, is that 2D-plus-Depth is backwards-compatible. Any movie, TV programme or music video you've ever watched can now be presented in 3D, adding a whole new dimension, literally, to your viewing experience.

Games may drive 3D
Bjorn Teuwsen, marketing and communications manager at Philips 3D Solutions, says the depth information can be added in post-production using the company's own BlueBox video production suite, which creates the 2D-plus-Depth format that you'll eventually see as a single 3D entity on a compatible display.

You can avoid this step, of course, by using a stereoscopic camera to film the different angles you need automatically. That's something broadcasters and film studios are doing increasingly. You only have to look at forthcoming features like Toy Story 3 and Ice Age: Dawn of the Dinosaurs, both of which are due in cinemas in 2009.

Declipse takes our perception of depth one stage further. It effectively splits the picture up into separate background and foreground layers that enable you to add true depth to an image – Philips calls it the 'look around' view. And it can again be added to existing 2D material.

3D graphics and games
One of the obvious drawbacks when it comes to creating 3D images from 2D originals is one of occlusion, which is when one object is situated in front of another, obscuring your view of the whole scene. Bjorn Teuwsen cites the example of a girl standing in a field with a forest behind her.

Looking at the 2D image you obviously won't know what was behind the girl, making a 3D image difficult to create, so Declipse steps in, depicts the background layer, and creates the missing picture information using specially created algorithms. It works for a stereoscopic image because you don't have to create all of the missing information; you're not showing a 360-degree view here.

It is, of course, easier if the 3D image has been created in full already, as is the case with computer-generated material such as animation, graphics and video games. In that case, Declipse simply takes the existing 3D data, via plugins for programs like Autodesk Maya and 3D Studio Max, and uses it to show 3D on the display. Just imagine what your favourite video games would be like if you could really play them in three dimensions.

A 3D future
The obvious drawback with any of this right now is that you need content that's been specially created, either during the production process or in post-production. That obviously means you can't watch regular TV in 3D, no matter how much you like the idea of a three-dimensional Fiona Bruce sitting in your lounge reading the news. However, Teuwsen does hold out some hope: since creating a 3D image from 2D is essentially a data-processing task, it's only a matter of time before the technology can be packaged inside the video processors that make up ordinary TVs.

We may not have to wait too long. In January, at CES 2008, some of the biggest names in tech showed off prototype 3D TVs that could go on sale as early as 2009 or 2010. Business users, of course, can already pick up a range of WOWvx displays from Philips, with the 42-inch 3D display costing around €7,000 (£5,539).

"It's just one way to stand out, Bjorn Teuwsen says, "from the 3,000 advertising messages we receive every day."

Is 3D a vision too far?
Stereoscopic images have been around for 150 years or so, but the first moving 3D images appeared in cinemas during the 1950s as a way for Hollywood studios to compete against the rising threat of television. Of course then, as now, displaying a 3D image has required viewers to wear glasses (either stereoscopic, or those with red and green lenses), which are not only inconvenient, but can be uncomfortable too.

Part of the reason for that is that 3D images can induce dizziness or nausea. This is due to cone transitions: part of a stereoscopic image intended for one eye crossing over and appearing in the other, causing blurriness and eye strain.

Teuwsen says Philips has largely been able to minimise these effects with its 3D displays, to the extent that they're now barely noticeable. After all, nausea is the last thing advertisers want you to feel when faced with a 3D representation of their products.

By Rob Mead, Techradar