Up until now, wearable computing has been confined to a few odd, bulky wristwatches. Most people are probably aware of the augmented reality glasses, commonly referred to as Google Glass, that Google has been working on for quite a while. Google Glass is still in limited release and not available to everyone. So, in the meantime, you can use your Raspberry Pi (RBPi) to fill the gap. The project has everything you could want: it is small, lightweight, and frugal with power, and a cheap lithium-ion battery keeps it running for hours.
The project uses two RBPi single-board computers to get as close as possible to the universal translator of Star Trek fame. The displays are a pair of off-the-shelf digital glasses. Other standard equipment includes a Jawbone Bluetooth microphone and a Vuzix 1200 Star wearable display. When fully functional, the system uses Microsoft's publicly accessible Application Programming Interface (API) to perform voice recognition and translation on the fly.
For example, Will Powell, the originator of the project, uses the glasses to hold a conversation with Elizabeth, who speaks Spanish. Although Will has never learned Spanish, he is able to converse meaningfully, answering her in English. Powell's blog shows a video of the system in action along with the details of the build.
This Project Glass-inspired translating unit works in real time and displays the conversation as subtitles on your glasses. Both RBPis run the Debian Squeeze operating system. To use the system, each user wears a pair of Vuzix 1200 Star glasses connected to the S-Video connector on their RBPi. For a clean, noise-cancelled audio feed, Will uses a Jawbone Bluetooth microphone connected to either a smartphone or a tablet.
The Bluetooth microphone picks up the speaker's voice and streams it across the network, where it passes through Microsoft's translation API service. For regularly used statements, a caching layer improves performance. The subtitles face their longest delay while passing through this API service. The RBPi picks up the translated text the server passes back and displays it on the glasses.
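Powell has not published the code for this part of the build, but the caching idea is easy to illustrate. The sketch below is a minimal, hypothetical Python version: `call_translation_service` stands in for the network round trip to Microsoft's translation API (stubbed with canned phrases so the example is self-contained), and a memoizing cache lets repeated phrases skip that slow round trip entirely.

```python
from functools import lru_cache

def call_translation_service(text: str, target_lang: str) -> str:
    """Hypothetical stand-in for the translation API call.

    In the real project this would be an HTTP request to Microsoft's
    translation service -- the slowest step in the subtitle pipeline.
    Here it is stubbed with a few canned phrases.
    """
    canned = {("hola", "en"): "hello", ("gracias", "en"): "thank you"}
    return canned.get((text.lower(), target_lang), text)

@lru_cache(maxsize=1024)
def translate(text: str, target_lang: str = "en") -> str:
    """Cache results so regularly used statements are translated once;
    repeats are served from memory instead of the network."""
    return call_translation_service(text, target_lang)

if __name__ == "__main__":
    print(translate("Hola"))       # first call goes to the (stub) service
    print(translate("Hola"))       # repeat is answered from the cache
    print(translate.cache_info())  # hit/miss counters confirm the caching
```

A production version would key the cache the same way (phrase plus target language) but would also need an expiry policy, since a bounded in-memory cache is what keeps a long-running wearable responsive without exhausting the RBPi's RAM.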
Once a person has spoken, there is a delay of a few seconds before the translation pops up on the other person's glass display. Moreover, the translations are not always fluid or coherent. However, that has nothing to do with the technology used here; it stems from the inaccuracies of the translation API itself. It is remarkable that such a relatively simple setup can offer speech recognition and translation at very near real time.
At this rate, augmented reality glasses could become popular very soon, and Google has suggested it will commercialize its Glass project before long. Mobile communication stands on the brink of a revolutionary technology that Google's Glass seems sure to bring about. However, Powell's work shows there is still plenty of room to experiment and explore different functions and applications in this field.
The project also shows that very soon it may not matter what language you speak; anyone will be able to understand you, provided everyone is wearing the right glasses.