
He has strapped a depth camera that can interpret 3D gestures to the inside of my wrist. It uses the data it gathers to build a personalised model of how I perform certain movements – the finger lift, the fist, the open palm. It then matches each one to a function, such as typing the letter "R".
Typing using gestures isn't Way's goal, partly because speech recognition software is already good enough to replace most keyboard functions. Instead, he has built the typing app as a way to test which gestures are easy for people to learn and perform. Awkward motions will be discarded in favour of those that people can easily make over and over again. Ultimately, Way wants to design an interface that learns the personal preferences of each user and adapts accordingly. For now, his work is a step towards creating a gestural interface for Glass that is polished and intuitive. Google itself seems to be working on this too.
Other researchers are exploring alternative ways to bring gesture technology to heads-up displays. For example, start-up firm 3dim, whose prototype also began at MIT as a hack of Google Glass, is using infrared light.
Instead of a wrist-worn depth camera, company founders Andrea Colaco and Ahmed Kirmani have added an infrared LED and photodiode sensors across the brow of the Glass headset. The sensors pick up light that is reflected from the user's hand as they wave it.
The key advantages of the system, say the researchers, are that the components are cheap, use far less battery power than a depth camera, and can be integrated directly into Glass.
This kind of gesture control is not precise enough to type; instead it is designed to pick up larger hand and arm motions. This could be useful for swiping away notifications or navigating through the Glass menu.
"You could look through Google Glass and draw a letter," says Colaco. Drawing an "M" in the air could prompt Glass to search for items that begin with M, like the maps app.
Shahzad Malik, who works on hand-tracking at Intel in Toronto, Canada, says such instrumentation will be key to enabling a host of augmented reality applications envisioned for Google Glass.
"With a head-mounted display, you want some subtle but expressive way to interact with the content being presented in your view, so being able to easily detect finger movements makes the most sense," he says. "You could be standing at a bus stop, and just by making small movements with your fingers while your hand is at your side, you could be typing up an email or searching for something on Google with a virtual keyboard that only you can see."