Microsoft Releases Kinect SDK 1.7 Enabling 3D Scanning Capability

On March 18, 2013, Microsoft launched the Kinect for Windows SDK v1.7. The new SDK includes the Kinect Fusion tool, which enables the Kinect for Windows sensor to scan and create accurate 3D models. A 3D model produced by a Kinect sensor and the new software is illustrated below.


  Kinect Fusion enables developers to create accurate 3D renderings in real time.

In announcing the new Kinect SDK, Bob Heddle, Microsoft's Director of Kinect for Windows, described Kinect Fusion as one of the most affordable 3D scanning tools available today for creating 3D renderings of people and objects. Heddle goes on to say, “Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3D models.

“Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3D image of the person or thing in real time. These 3D images can then be used to enhance countless real-world scenarios, including augmented reality, 3D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics.”

Reporting from Microsoft Research’s annual TechFest event (March 5-7, 2013, Microsoft Conference Center, Redmond, Washington, USA), IEEE Spectrum posted a video in which Microsoft researchers describe and demonstrate 3D scanning using Kinect Fusion.

In the video, researcher Toby Sharp of Microsoft’s research group in Cambridge, UK, describes the operation of the commercial Kinect sensor with the new Kinect Fusion software as it scans and is able to “reconstruct the world in 3D.” The video also describes the challenge of processing, in near real time, the large amount of data this task generates at 30 frames per second as the Kinect camera moves around the object being scanned.
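To get a rough sense of the data volume involved, and assuming the Kinect depth stream delivers 640 × 480 depth samples per frame at 30 frames per second with 16-bit values (figures not stated in the video, so treat them as an assumption), a quick back-of-the-envelope calculation gives roughly nine million depth readings, or around 18 MB of raw depth data, every second:

```python
# Back-of-the-envelope estimate of the raw depth data rate a fusion
# pipeline must keep up with. Resolution, frame rate, and sample size
# are assumed values for the Kinect for Windows depth stream.
width, height = 640, 480      # depth image resolution (assumed)
fps = 30                      # frames per second (assumed)
bytes_per_sample = 2          # 16-bit depth value per pixel (assumed)

samples_per_second = width * height * fps
megabytes_per_second = samples_per_second * bytes_per_sample / 1e6

print(f"{samples_per_second / 1e6:.1f} million depth samples per second")
print(f"~{megabytes_per_second:.0f} MB/s of raw depth data to process")
```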

The video illustrates that detailed 3D scans and solid models of objects and people can be captured with the inexpensive Kinect sensor when it is combined with substantial processing power, presumably supplied by a high-performance graphics processing unit (GPU).

In a prior entry on Microsoft’s Kinect for Windows blog, the operation of Kinect Fusion is further described as “…taking the incoming depth data from the Kinect for Windows sensor and using the sequence of frames to build a highly detailed 3D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single view point. Among other things, it enables 3D object model reconstruction, 3D augmented reality, and 3D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3D printing, industrial design, body scanning, augmented reality, and gaming.”
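As a deliberately simplified sketch of the averaging idea in that quote, the Python below accumulates per-pixel depth readings from a fixed viewpoint and averages them to suppress sensor noise. The frame size, depth units, and the static-camera setup are illustrative assumptions; this is not the Kinect Fusion API or Microsoft's actual algorithm, which handles a moving camera and integrates frames into a volumetric 3D model on the GPU.

```python
import numpy as np

# Simplified illustration only: average many noisy depth frames from a
# fixed viewpoint so that per-pixel sensor noise cancels out. Kinect
# Fusion performs the full 3D version of this with a moving camera,
# which this sketch does not attempt.

HEIGHT, WIDTH = 480, 640          # Kinect depth image size (assumed)

depth_sum = np.zeros((HEIGHT, WIDTH), dtype=np.float64)   # running sums (mm)
valid_count = np.zeros((HEIGHT, WIDTH), dtype=np.uint32)  # readings per pixel

def integrate_depth_frame(depth_mm: np.ndarray) -> None:
    """Accumulate one depth frame; zero pixels mean 'no reading'."""
    valid = depth_mm > 0
    depth_sum[valid] += depth_mm[valid]
    valid_count[valid] += 1

def averaged_depth() -> np.ndarray:
    """Per-pixel average depth over all frames integrated so far (mm)."""
    out = np.zeros_like(depth_sum)
    seen = valid_count > 0
    out[seen] = depth_sum[seen] / valid_count[seen]
    return out

# Example: fuse a few hundred simulated noisy frames of a flat surface.
rng = np.random.default_rng(0)
true_depth = np.full((HEIGHT, WIDTH), 1500.0)   # surface 1.5 m away
for _ in range(300):
    noisy = true_depth + rng.normal(0, 10, size=true_depth.shape)
    integrate_depth_frame(noisy)
print("residual noise (mm):", np.abs(averaged_depth() - true_depth).mean())
```

Averaging roughly 300 readings per pixel reduces the simulated 10 mm sensor noise to well under 1 mm, which is the intuition behind the blog's claim that fusing hundreds or thousands of frames yields more detail than any single reading.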

The business scenarios envisioned by Microsoft fit well with the current interest of businesses and consumers in 3D printing. Moreover, the ability to quickly create accurate 3D solid models of objects, people, and environments using relatively inexpensive equipment should open up a wide range of market applications. In releasing the new SDK, Microsoft has taken a large step toward enabling new opportunities for developers and end users.

By Phil Wright, Display Central