Software engineers are using Intel tools to explore new ways people can use their voices, hand gestures and head-and-eye movements to operate computers.
In the coming years, their research is expected to yield tools that could help developers build computer games, help doctors control computers used in surgery or guide firefighters as they enter burning buildings.
"We don't really know what this work will become, but it's going to be fascinating to watch it play out," said Craig Hurst, Intel's director of visual computing product management, in an interview at the Mobile World Congress trade show. "So far, what we've seen has gone beyond what we thought of originally."
Last fall, Hurst's group released several software development kits that third-party programmers can use to create new applications.
One of the tool kits, the Perceptual Computing SDK, was distributed to outside developers building applications that will be judged by Intel engineers. Intel plans to award $1 million in prizes to developers this year for the most original application prototypes, not only in game design but also in productivity and other areas.
Barry Solomon, a member of Intel's visual computing product group, demonstrated how Windows developers are using the tool kits. With a depth-sensing camera clipped to the top of a laptop's lid and connected to the computer over USB, Solomon showed how software built with an Intel SDK rendered his facial expressions and hand gestures on the screen, overlaying lines and dots to mark the precise positions of his eyes and fingers.
With that tracking information, a developer can quickly insert a person's face and hands into an augmented-reality scenario.
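To give a sense of what such tracking code looks like, here is a minimal illustrative sketch. It does not use Intel's Perceptual Computing SDK or its depth camera; instead it assumes a standard webcam and the open-source OpenCV and MediaPipe libraries, which can produce the same kind of lines-and-dots landmark overlay Solomon demonstrated.

```python
# Illustrative sketch only: uses OpenCV + MediaPipe with a regular webcam,
# not Intel's Perceptual Computing SDK or depth camera.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_face = mp.solutions.face_mesh
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # 0 = default webcam; adjust as needed

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands, \
     mp_face.FaceMesh(max_num_faces=1, min_detection_confidence=0.5) as face:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MediaPipe expects RGB input; OpenCV captures BGR frames
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        hand_results = hands.process(rgb)
        face_results = face.process(rgb)

        # Draw lines and dots over each detected hand
        if hand_results.multi_hand_landmarks:
            for hand_landmarks in hand_results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand_landmarks,
                                       mp_hands.HAND_CONNECTIONS)

        # Draw dots over the face (eyes, brows, mouth, contour)
        if face_results.multi_face_landmarks:
            for face_landmarks in face_results.multi_face_landmarks:
                mp_draw.draw_landmarks(frame, face_landmarks)

        cv2.imshow("Landmark overlay", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```

The per-frame landmark coordinates returned by such a pipeline are the raw material for the augmented-reality compositing described above, with Intel's SDK adding depth data from its camera on top of the 2-D positions.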
A company called Touchcast is building a green-screen application that will be available later this year.