Human interaction with computers could improve with the new Kinect for Windows sensor, which will be better at recognizing gestures, motion and voice.
The sensor, announced Thursday by Microsoft, will let developers write applications that bring voice, gesture and other forms of natural interaction to computers. It follows the announcement earlier this week of the Kinect sensor for the Xbox One gaming console. That sensor and the console are both due out later this year, while the Kinect for Windows sensor will become available next year.
Kinect for Windows. (Photo: Microsoft)
The Kinect sensors will "revolutionize computing experiences," said Bob Heddle, director of Kinect for Windows, in a blog post Thursday.
Microsoft has already implemented touch in Windows 8 for PCs and tablets. More precise tracking and a wider field of view could help improve motion recognition, while a sophisticated microphone could boost voice interaction.
More Kinect for Windows details will be revealed at the Build conference in June. Developers will also get an SDK (software development kit) with which to write human-computer interaction programs.
Kinect for Windows will have a high-definition camera and a noise-isolating microphone that picks out relevant sounds in a room. The sensor also uses "Time-of-Flight" technology, which Heddle said "measures the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision."
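The arithmetic behind that idea is simple: light travels at a known speed, so the distance to a surface is half the round-trip travel time multiplied by the speed of light. The short Python sketch below illustrates only that principle; it is not Microsoft's implementation, and production time-of-flight sensors typically infer travel time from phase measurements over many photons rather than by timing individual ones.

    # Illustrative time-of-flight depth math, not Kinect's actual processing.
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def depth_from_round_trip(round_trip_seconds: float) -> float:
        # Light covers the sensor-to-surface path twice, so halve the product.
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # A photon returning after roughly 13.3 nanoseconds implies a surface
    # about 2 meters away.
    print(depth_from_round_trip(13.34e-9))  # ~2.0 meters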
A feature called "skeletal tracking" follows more points on the human body, making it better at tracking the movements of multiple users. The sensor will be able to create more accurate avatars, capturing detail down to the wrinkles on a person's body, Heddle said.
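To a developer, skeletal tracking typically surfaces as a per-user set of named joint positions delivered every frame. The Python sketch below is purely hypothetical; the SDK for the new sensor had not been published at the time of the announcement, so every name here (Joint, Body, process_frame) is invented for illustration of the idea, not taken from Microsoft's API.

    # Hypothetical sketch of consuming skeletal-tracking data; the real SDK's
    # types and calls were not public at announcement time.
    from dataclasses import dataclass

    @dataclass
    class Joint:
        name: str   # e.g. "hand_right"
        x: float    # position in meters, sensor coordinate space
        y: float
        z: float

    @dataclass
    class Body:
        user_id: int
        joints: dict  # joint name -> Joint

    def wave_detected(body: Body) -> bool:
        # Toy gesture check: right hand raised above the head.
        return body.joints["hand_right"].y > body.joints["head"].y

    def process_frame(bodies: list) -> None:
        # Each frame carries one skeleton per tracked user, so gestures
        # can be evaluated for several people at once.
        for body in bodies:
            if wave_detected(body):
                print(f"user {body.user_id} waved")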
Gesture and image recognition will also get a boost from a new "active IR" capability, which helps the sensor recognize facial features and work under varied lighting conditions.
"The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers," Heddle wrote.