Behind the Scenes at Sony's Broadcast Lab
One reason is that making 3D camera rigs is mechanically fiddly: two cameras working in stereo are needed to make a 3D image, and the distance between them must vary as they zoom. Sony's labs in Japan revealed a prototype dedicated 3D camera just yesterday. But its UK engineers are hoping to sidestep the complications of stereo filming by making 3D footage from normal, "flat" video.
Rob Porter at Sony's UK labs demonstrated how prototype software creates a slightly modified version of a video feed to simulate a second stereo viewpoint. To create the 3D effect, the virtual viewpoint needs a perspective slightly different from the real camera's; the software simulates this by mapping the real camera's feed onto a crude virtual 3D map of the scene, built beforehand.
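In outline, this resembles what graphics researchers call depth-image-based rendering: once every pixel has an estimated depth, a second viewpoint can be synthesised by shifting pixels sideways in proportion to how close they are, with nearby objects displaced further than distant ones. The Python sketch below illustrates only that general idea; the depth map, virtual baseline and focal length are assumptions for illustration, not details of Sony's prototype software.

```python
import numpy as np

def synthesise_right_view(frame, depth, baseline=0.06, focal_px=1200.0):
    """Approximate a second stereo view from one camera feed plus depth.

    frame:    (H, W, 3) uint8 image from the real camera
    depth:    (H, W) float array of scene depth in metres, taken here to
              come from the pre-built 3D map of the scene (assumed)
    baseline: assumed separation of the virtual stereo pair, in metres
    focal_px: assumed focal length of the camera, in pixels
    """
    h, w = depth.shape
    right = np.zeros_like(frame)

    # Disparity in pixels is inversely proportional to depth, so close
    # objects are shifted more than the far-away stands or crowd.
    disparity = (focal_px * baseline / np.clip(depth, 0.1, None)).astype(int)

    xs = np.arange(w)
    for y in range(h):
        new_x = xs - disparity[y]          # shift left to fake the right eye
        valid = (new_x >= 0) & (new_x < w)
        right[y, new_x[valid]] = frame[y, xs[valid]]

    # Occluded regions leave holes that a real system would in-paint.
    return right
```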
At present, this technique requires the camera to be static. But Porter used soccer stadium footage to show that this doesn't have to be a problem. The output of several static cameras can be stitched together like a panoramic photo and then transformed into a super-high-resolution 3D video feed. It's possible to digitally pan across or zoom into the resulting 3D feed, giving the viewer the impression of a real 3D camera rig moving around, without actually having to use one. The idea is that a software upgrade can quickly give an extra dimension to the sports coverage of a stadium that already has a video camera setup.
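Conceptually, the digital pan and zoom is a moving crop: the virtual camera is a window cut out of the stitched super-high-resolution panorama and resampled down to broadcast resolution. The sketch below assumes the panorama has already been stitched; the output resolution, window arithmetic and nearest-neighbour resampling are illustrative choices rather than anything Sony has described.

```python
import numpy as np

def virtual_pan(panorama, centre_x, centre_y, zoom, out_w=1920, out_h=1080):
    """Simulate panning/zooming a camera by cropping a stitched panorama.

    panorama:           (H, W, 3) array stitched from the fixed cameras
    centre_x, centre_y: where the virtual camera points, in panorama pixels
    zoom:               values above 1 narrow the field of view (zoom in)
    """
    crop_w, crop_h = int(out_w / zoom), int(out_h / zoom)
    x0 = int(np.clip(centre_x - crop_w // 2, 0, panorama.shape[1] - crop_w))
    y0 = int(np.clip(centre_y - crop_h // 2, 0, panorama.shape[0] - crop_h))
    crop = panorama[y0:y0 + crop_h, x0:x0 + crop_w]

    # Nearest-neighbour resample up to broadcast resolution; a real
    # pipeline would use a proper reconstruction filter.
    ys = np.linspace(0, crop_h - 1, out_h).astype(int)
    xs = np.linspace(0, crop_w - 1, out_w).astype(int)
    return crop[ys][:, xs]
```

Sweeping centre_x across the panorama frame by frame would then give the impression of a smooth pan, with no rig physically moving.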
"Each camera films a third of the pitch. Because those three fixed cameras are set up at the same focal point, they can be stitched together. And because we have the depth information for every shot we can synthesise a 3D impression be effectively positioning the pixel to different depth positions in the 3D composition," explained John Stone, general manager of research and development at Sony Professional.
By Colin Barras, New Scientist