Plenoptic Lens Arrays Signal Future?
While producers and manufacturers are wrestling with the practicalities of existing 3D production, research and development is already under way on next-generation 3D capture. Plenoptic lenses and computational cinematography are two possible future means of capturing light in three dimensions.
Starting from the premise that current 3D production equipment is stone age - cumbersome, expensive and inaccurate - speakers at a session on the future of 3D at IBC gazed into their crystal balls. Sony’s Senior Vice President of engineering and SMPTE President, Peter Lude, gave his version of the future in five steps.
“Step one is the clunky, cabled and complex approach we have used to date. We are now into step two, which is about greater automation and computer analysis; this should make it easier to use rigs, correct errors and reduce manual convergence.
“It should be possible for a computer system to network together multiple cameras arrayed around a stadium, for example, and to toe in those cameras simultaneously to keep the object at the same convergence point, so that when cutting between cameras there is no discomfort from viewers’ eyes having to readjust.”
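To make the idea concrete, here is a minimal Python sketch of the geometry involved: each rig is toed in by the same half-angle so its optical axes converge on the subject. The function name, rig distances and 65mm interaxial below are illustrative assumptions, not a description of any actual Sony system.

```python
# Illustrative sketch only: how a controller might compute matching toe-in
# (convergence) angles for several stereo rigs aimed at one subject, so that
# cuts between cameras keep the same convergence point.
import math

def toe_in_angle(interaxial_m: float, subject_distance_m: float) -> float:
    """Half-angle (degrees) each lens must rotate inward so the optical
    axes cross at the subject distance."""
    return math.degrees(math.atan2(interaxial_m / 2.0, subject_distance_m))

# Hypothetical networked rigs at different distances from the same subject.
rigs = {"camera_1": 12.0, "camera_2": 35.0, "camera_3": 60.0}  # metres
for name, distance in rigs.items():
    angle = toe_in_angle(interaxial_m=0.065, subject_distance_m=distance)
    print(f"{name}: toe in each lens by {angle:.2f} degrees")
```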
Step three is to use advanced image-processing tools. One idea is to use a synthetic or virtual camera: a 35mm camera can be used as the source for texture, colour and framing, while subsidiary cameras to the side or elsewhere on the set capture additional information. This information can be used to create a ‘virtual camera’ in post, or to derive data which can compensate for occlusion.
Beyond that, Lude suggested the industry should look to new image-sensing technologies such as plenoptic and light-field systems.
A plenoptic camera, such as the stills camera available from German firm Raytrix, permits the counter-intuitive ability to select focus points in post-processing, after the shot has been taken. It also permits capture of 3D images with a single sensor.
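The refocusing trick is usually explained as “shift and add”: each micro-lens sees the scene from a slightly different position, and summing the sub-aperture views with different relative shifts brings different depth planes into focus. The sketch below illustrates that idea in Python, assuming the raw sensor data has already been decoded into a grid of sub-aperture views; the array shapes and the alpha parameter are illustrative and do not reflect Raytrix’s actual software.

```python
# Minimal sketch of light-field "shift-and-add" refocusing on a grid of
# pre-decoded sub-aperture views. Shapes and parameters are illustrative.
import numpy as np

def refocus(subaperture_views: np.ndarray, alpha: float) -> np.ndarray:
    """subaperture_views: (U, V, H, W) grid of greyscale views.
    alpha scales how far each view is shifted before averaging;
    different alphas bring different depth planes into focus."""
    U, V, H, W = subaperture_views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the array centre.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(subaperture_views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Example: refocus a synthetic 5x5 light field at two different depth planes.
lf = np.random.rand(5, 5, 120, 160)
near_plane = refocus(lf, alpha=1.5)
far_plane = refocus(lf, alpha=-1.5)
```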
Another idea is to use infra-red systems, as used by Microsoft’s Kinect, or LIDAR (Light Detection and Ranging) devices to scan a field of view and extract depth patterns which can be used to reconstruct scenes. Holographic technologies are perhaps the next step.
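As a rough illustration of how such a depth sensor feeds scene reconstruction, the sketch below back-projects a depth map into a 3D point cloud using a simple pinhole model. The intrinsics are invented for the example; in practice they come from the device’s own calibration.

```python
# Hedged sketch: back-projecting a Kinect/LIDAR-style depth map into a 3D
# point cloud with a pinhole camera model.
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """depth_m: (H, W) array of metric depths. Returns (H*W, 3) XYZ points."""
    H, W = depth_m.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Example with a fake 480x640 depth frame and plausible intrinsics.
depth = np.full((480, 640), 2.5)          # everything 2.5 m away
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                        # (307200, 3)
```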
Walt Disney Studios' Vice President of Production Technology, Howard Lukk, also has his eye on plenoptics. While a plenoptic lens comprises multiple micro-lenses, each capturing a slightly different area of the picture, he speculated about what a rig fitted with up to 100 camera lenses might capture. “What if we could come up with a new camera system that comprises more than one single camera?” he asked.
Stanford University is leading research into this area and has indeed stacked 100 cameras into a single rig for one demonstration.
“It is computationally intensive but the idea that you can refocus an image after it is shot, readjusting focal length, is extremely powerful,” said Lukk. “From all these viewpoints it should be possible - given enough processing power and mathematical juggling - to extrapolate a detailed disparity map and create a good 3D model which we can manipulate in post any way we like. In effect we create stereo in a very controlled environment.”
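The sketch below illustrates, in simplified form, why a disparity map is so powerful: depth follows directly from disparity (Z = f·B/d), and a new synthetic eye can then be rendered with whatever interaxial a shot needs. This is an illustrative toy, not Disney’s pipeline; real systems handle occlusions and hole-filling far more carefully.

```python
# Toy sketch: from disparity to depth, and a crude synthetic-eye render.
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a stereo disparity map (pixels) into metric depth via Z = f*B/d."""
    safe = np.maximum(disparity_px, 1e-6)   # avoid division by zero
    return focal_px * baseline_m / safe

def synthesize_view(image: np.ndarray, disparity_px: np.ndarray,
                    interaxial_scale: float) -> np.ndarray:
    """Crude forward warp: shift each pixel by a scaled disparity to fake a
    new eye position. Real systems fill holes and resolve occlusions."""
    H, W = disparity_px.shape
    out = np.zeros_like(image)
    xs = np.clip(np.arange(W) +
                 (interaxial_scale * disparity_px).round().astype(int), 0, W - 1)
    for y in range(H):
        out[y, xs[y]] = image[y]
    return out

# Example: a uniform 10-pixel disparity, 1500 px focal length, 6.5 cm baseline.
disp = np.full((480, 640), 10.0)
depth = disparity_to_depth(disp, focal_px=1500.0, baseline_m=0.065)  # ~9.75 m
right_eye = synthesize_view(np.random.rand(480, 640), disp, interaxial_scale=0.5)
```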
If 3D camera rigs are not the long-term future of the industry, Lukk suggests that a hybrid approach will develop: a combination of capturing volumetric space on set and producing the 3D in a post-production environment at the back end.
“This will give you much more versatility in manipulating the images. This feeds on the idea of computational cinematography conceived by Marc Levoy (a computer graphics researcher at Stanford University) a few years ago. Basically this says that if we capture things in a certain way, we can compute the things that we really need in the back end.
“You can be less accurate on the front end. Adobe has been doing a lot of work in this area, where you can refocus the image after the event. You can apply this concept to high dynamic range and higher frame rates.”
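Applied to high dynamic range, the same “capture loosely, compute later” idea amounts to merging bracketed exposures into one radiance estimate after the fact. The following is a minimal sketch of that step with invented weights and exposure times; it is not Adobe’s algorithm.

```python
# Minimal sketch of exposure merging for HDR: combine bracketed frames into a
# relative radiance map, weighting well-exposed pixels most heavily.
import numpy as np

def merge_exposures(frames, exposure_s):
    """frames: list of linear-light images in [0, 1], shot at different
    exposure times (seconds). Returns a relative radiance map."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_s):
        w = 1.0 - np.abs(frame - 0.5) * 2.0   # favour mid-tones
        num += w * frame / t                  # scale back toward radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Example: three synthetic brackets one stop apart.
base = np.random.rand(480, 640)
brackets = [np.clip(base * t, 0, 1) for t in (0.5, 1.0, 2.0)]
hdr = merge_exposures(brackets, exposure_s=[0.5, 1.0, 2.0])
```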
Disney is currently researching these methods at Disney Research in Zurich, Lukk added, and similar research is being conducted at the Fraunhofer Institute in Germany.
“I think eventually we’ll get back to capturing the volumetric space and allowing cinematographers and directors to do what they do best - that is, capturing the performance,” he said.
Source: TVB Europe