I’ve been thinking about the hubbub Color has generated by raising $41 million in venture capital for an iOS and Android app that was either just released or not yet released. Some people say it heralds a new bubble; others say it does not. As an engineer, a budding usability and product aficionado, and a contributor to the EFF, I have a slightly different take on it.
First, let’s define what Color is. People are calling it a photo-sharing app. That isn’t it at all; photo sharing is a placeholder concept that gets the app past the censors and answers, simply, the question of what you would use it for. What Color really is, is a mechanism for harnessing the entirety of the sensors in your smartphone. For those who don’t know, a modern smartphone has a camera (and video camera), an accelerometer, a gyroscope, aGPS (GPS assisted by cellular triangulation), a microphone, and a capacitive touch sensor built in. Most mobile applications use one or a few of these sensors for specific features.
The team at Color seems to have approached this from a slightly different perspective. Instead of asking “What can we build with all of these sensors?” they asked “What if all of these sensors were live-streaming their data directly to our servers?” From a purely technical perspective this is absolutely brilliant, and they have come up with some very clever techniques to use this data to determine who you are currently around as well as where you are. This application heralds a paradigm shift in thinking about geo data: instead of answering where you are, which is marginally important, it tries to solve the problem of who you are around, which is much more important.
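Color has never published how its matching actually works, but the core idea of grouping users by simultaneous sensor readings can be sketched as a toy illustration. Everything below is hypothetical (the `Reading` type, the 50-meter radius, and the 60-second window are my own assumptions, not Color’s): each user’s device periodically reports a timestamped position, and users whose readings fall within a small distance and time window of yours are inferred to be “around” you.

```python
import math
from dataclasses import dataclass

@dataclass
class Reading:
    """One hypothetical sensor report from a user's phone."""
    user: str
    timestamp: float  # seconds since epoch
    lat: float        # degrees
    lon: float        # degrees

def distance_m(a: Reading, b: Reading) -> float:
    """Rough equirectangular distance in meters; adequate at city scale."""
    mean_lat = math.radians((a.lat + b.lat) / 2)
    dx = math.radians(b.lon - a.lon) * math.cos(mean_lat)
    dy = math.radians(b.lat - a.lat)
    return 6_371_000 * math.hypot(dx, dy)

def nearby_users(readings, me: Reading,
                 radius_m: float = 50.0, window_s: float = 60.0):
    """Other users with a reading within radius_m and window_s of `me`."""
    return sorted({
        r.user for r in readings
        if r.user != me.user
        and abs(r.timestamp - me.timestamp) <= window_s
        and distance_m(r, me) <= radius_m
    })
```

A stream of such readings, joined on time and place, is exactly the “who are you around” signal; the privacy problem in the next paragraph falls straight out of the fact that the server sees everyone’s stream.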
Therein lies the rub, however. I don’t want this application anywhere near me. I don’t want my friends using it around me, and I don’t want them taking pictures of me with it. The potential for misuse of this data is too great. It makes Facebook look like someone peering at your house with a telephoto lens, relative to an intruder actually inside your home. From a privacy perspective it is a disaster. What is it doing with the audio? Does it send it to the servers, or does everything stay local? Does it run facial recognition on the pictures? That would let the company, and people I barely know or don’t know at all, learn who I am and who I am around. There is a perfectly good reason why Google, when building its augmented-reality product, backed away from facial recognition: it is too creepy and too much of a risk. I tend to agree.
Color’s value as a startup, however, has nothing to do with this. Often, having a good idea means doing what your competitors are either unable or unwilling to do, and Color has certainly done a lot that most are unable and/or unwilling to do. What that means for the company’s future is interesting, but I will need to see a few more privacy features, and policy statements from the company, before I use it. Whether the broader audience will use it is unclear, and also irrelevant: someone will find a purpose for this product if it can reach 10 million users. I think it can, but the company will need to tighten up its security and privacy policies first. There are times when I don’t care if everyone knows where I am (WWDC, Google I/O, or SXSW, for example), and there are other times when I may not want people knowing where I am and who I am with, at a school play with my kids, for example. I don’t want the entire internet to know where my kids go to school, what they look like, who they go to school with, or what their names are. This thing gives up the user’s entire pattern of life; it is a stalker’s or predator’s dream come true, not to mention a rich target for social-engineering attacks.
Bottom line: it’s amazing tech, and I think they are worth the money, but the policy aspect will hold them back until they establish viable privacy controls.