We normally gather information through our senses. We see, we touch, we feel. Though tradition describes five senses, we actually have closer to 20, depending on how you define a sense. We are continuously gathering information about the world around us, as well as the world inside us, and much of that information can now be captured and represented digitally.
Computers are gaining the ability to sense the world, and to sense our own bodily functions, in ways similar to how we do. Twenty years ago it was safe to assume that computers perceived the real world as just another set of code, and not a particularly coherent set at that. Even the very AI-centric movie The Matrix insisted that the world inside the Matrix was just rapidly streaming green code.
We use digital devices to enhance our senses in various ways, absorbing more information from our environment as we go through our regular routines. Some of this was predicted, like most technology, by Star Trek. The Geordi La Forge character was born blind. Instead of curing his blindness, scientists bolted on a device that let him scan the rest of the electromagnetic spectrum. Called the VISOR, the device was painful both within the show, for narrative reasons (special powers require sacrifice), and for the actor who had to wear it.
It was never clear why the scientists didn't just isolate the visible spectrum and grant him the same vision everyone else shares, but that's television. Later in the series, the VISOR was swapped for ocular implants, making things easier on both character and actor.
We are already getting closer to converting light into digital information and transferring it to the brain, effectively giving blind people the power of sight. The signaling pathway between the retina and the brain has been partially decoded as well, and multiple research efforts are making progress on this front.
As Google evolves the ability to think in terms of objects instead of just code, we keep adding sensory devices to ourselves. We have devices that report our location, activity levels, and heart rates, and some can even run blood tests as we move around. Some people take this to extremes, constantly tracking every biometric they can and reporting the results online.
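As a concrete illustration of what that tracking looks like in data terms, here is a minimal sketch, in Python, of the kind of record a wearable might log and publish. The BiometricSample type, its field names, and the reporting format are hypothetical, invented for illustration; real devices and services each use their own schemas.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BiometricSample:
    """One reading from a wearable sensor (hypothetical schema)."""
    device_id: str
    kind: str         # e.g. "heart_rate", "steps", "location"
    value: float
    unit: str
    recorded_at: str  # ISO 8601 timestamp

def to_report(samples):
    """Serialize a batch of samples as JSON, ready to upload
    to a (hypothetical) tracking service."""
    return json.dumps([asdict(s) for s in samples], indent=2)

now = datetime.now(timezone.utc).isoformat()
batch = [
    BiometricSample("wrist-01", "heart_rate", 72.0, "bpm", now),
    BiometricSample("wrist-01", "steps", 8412.0, "count", now),
]
print(to_report(batch))
```

The point of the sketch is how mundane the data is: a stream of small timestamped records, easy to collect continuously and just as easy to share, which is exactly what makes the privacy questions below worth asking.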
Meanwhile, we constantly take pictures, tag the people and objects in them, and add descriptions. Image search tools pull together related images, and photo-management tools use algorithms to sort those images into stories. We are slowly teaching our devices to interpret the real world visually and biometrically.
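To make the "stories" idea concrete, here is a minimal sketch, in Python, of one simple way a photo-management tool might cluster a camera roll: sort by timestamp and start a new story whenever the gap between shots exceeds a threshold. The Photo type, the four-hour gap, and the sample filenames are assumptions for illustration; the Google feature linked below presumably uses far richer signals (location, faces, image content).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Photo:
    filename: str
    taken_at: datetime  # e.g. pulled from EXIF data

def group_into_stories(photos, max_gap=timedelta(hours=4)):
    """Group photos into 'stories': runs of shots separated by
    less than max_gap. A long pause starts a new story."""
    ordered = sorted(photos, key=lambda p: p.taken_at)
    stories = []
    for photo in ordered:
        if stories and photo.taken_at - stories[-1][-1].taken_at <= max_gap:
            stories[-1].append(photo)  # continue the current story
        else:
            stories.append([photo])    # start a new story
    return stories

# Example: two bursts of photos a day apart become two stories.
roll = [
    Photo("beach1.jpg", datetime(2014, 6, 1, 10, 0)),
    Photo("beach2.jpg", datetime(2014, 6, 1, 10, 30)),
    Photo("dinner.jpg", datetime(2014, 6, 2, 19, 0)),
]
for i, story in enumerate(group_into_stories(roll), start=1):
    print(f"Story {i}: {[p.filename for p in story]}")
```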
Is this good? Bad? Are we creating a digital universe that knows too much about us? Or a system that will help us track our health problems and potentially live longer? In any case, it may be inevitable—but we should at least consider what we’re doing and how best to do it.
Google arranging our pictures into stories: http://www.wired.com/2014/05/google-photo-stories/
Facial recognition and the right to lie: http://www.theatlantic.com/technology/archive/2014/06/bad-news-computers-are-getting-better-than-we-are-at-facial-recognition/372377/
Consumer-ready brain scanners: http://www.npr.org/blogs/alltechconsidered/2014/05/29/317037186/think-internet-data-mining-goes-too-far-then-you-wont-like-this
Cuff links that link: http://www.technewsworld.com/story/Cuff-Gives-Link-New-Meaning-79993.html
Smart jacket gives feedback on activity: http://www.industrytap.com/new-smart-jacket-gives-visual-feedback-training-running/18849
Report on the future of eHealth: http://www.psfk.com/2014/06/creating-ideal-vision-health-using-wearable-tech-future-health.html
Eating habit tracking: http://www.nbcnews.com/tech/innovation/chew-wearable-tech-tracks-eating-habits-n136036