Dapper Vision has put their Open Glass project on GitHub for all to play with and improve on. The premise of the project is to allow real-time annotation of the images Glass is capturing and to feed that augmented view back to the user on the Glass prism.
The annotation and augmenting are done server-side, not on Glass itself, though a future version of Glass may have the horsepower to perform such tasks for certain scenarios. For now, the team at Dapper relies on Amazon's Mechanical Turk, manual annotations, and Twitter to do the heavy lifting for what is streamed back to the user.
While there are plenty of long-term applications that would benefit from information-rich overlays of the user's field of view, this could be useful even in the near term if the lag can be mitigated. We have talked in the past about telemedicine and other telepresence efforts where Glass could be useful, and this project starts moving toward that capability.
We have already seen doctors in the operating room use Glass to consult with doctors in remote locations; with Open Glass, those consulting doctors could overlay procedural advice onto the OR view to aid in the surgery. A firefighter team could be fed an annotated route through a building to trapped survivors, or a SWAT team about to breach could have mobile command on site providing the annotations. Even if it is the command center manually marking up the blueprints in real time, that would still be useful.
While the paint-like overlays may seem crude compared to the movies, this is a first step toward an augmented reality use case for Glass, and "big things have small beginnings."