Google Glass: just because you can…

Paul Bernal weighs up the privacy implications of Google's new Glass headset.

Image: google by hellabella CC BY-NC-SA 2.0

As a bit of a geek, and a some-time game player, it’s hard not to like the look of Google Glass. Sure, it makes you look a little dorky in its current incarnation, but people like me are used to looking dorky, and don’t really care that much about it. What it does, however, is cool, and cool in a big way. We get heads-up displays that would have been unimaginable even a few years ago, a chance to feel like Arnie in the Terminator, with the information about everything we can see immediately available. It’s cool – in a dorky, sci-fi kind of way, and for those of us brought up on a diet of SF it’s close to irresistible.

And yet, there’s something in the back of my mind – well, OK, pretty close to the front of my mind now – that says we should be thinking twice about pushing forward with developments like this. Just because we can make something as cool as Google Glass doesn’t mean that we should make it. There are implications to developments like this, and risks attached to them, both direct and indirect.

Risks to the wearer’s privacy

First we need to be clear about what Google Glass does – and how it’s intended to be used. The idea is that the little camera on the headset essentially ‘sees’ what you see. It then analyses what it can see and provides information about what you’re looking at – or information related to it. In one of the promotional videos, for example, as the wearer looks at a subway station, the Glass alerts him to a delay on the subway, so he’d better walk. Then he looks at a poster for a concert – it analyses the poster, then links directly to a ticket agency that lets him buy a ticket for the show.

Cool? Sure, but think about what’s going on in the background – because there’s a lot. First of all, and almost without needing to be said, the Google Glass headset is tracking the wearer: what we call ‘geolocation’. It knows exactly where you are, whenever you’re using it. There are implications to that – I’ve written about them before – and this is yet another step towards making geolocation the ‘norm’. The idea is that Google (and others) want to know exactly where you are at all times – and of course that means others could find out too, whether for good purposes or bad.

Secondly, it means that Google are able to analyse what you are looking at – and profile you, with huge accuracy, in the real world, the way they already do, to a certain extent, in the online world. And, again, if Google can profile you, others can get access to that profile – through legal means or illegal ones. You might have consented to giving others access in one of those long Terms and Conditions documents you scrolled through without reading and clicked ‘OK’ to. The government might ask Google for access to your feed in the course of some investigation or other. A hacker might even hack into your system to take a look…

…and this last risk, the risk of hacking, is a very real one. Weaknesses in Google Glass have already surfaced. As the Guardian reported a few days ago:

“Augmented reality glasses could be compromised by a hacker who would be able to see and hear everything the wearer does”

This particular weakness may or may not turn out to be a real risk – but the potential is there. Where data and systems exist, they are hackable – and Google Glass, by its nature, could be a clear target. What a hacker could get as a result could be seriously dangerous and damaging.

Risks to others’ privacy

Equally worrying are the risks to those the wearer looks at. There are specific risks – anyone who knows about the concept of ‘creepshots’ – surreptitiously taken photographs, usually of young women and girls, up skirts, down blouses and so on, posted on the internet – should see the possibilities immediately. As Gizmodo put it:

“Once these things stop being a rich-guy novelty and start actually hitting the streets, the rise in creepshots is going to be worse than any we’ve ever seen before”

They’re right – and the makers of Google Glass should be aware of the possibilities. Some people are even working on an app that would let you take a picture with Google Glass just by winking, which would take creepshots one creepy step further – at the moment, at least, voice commands are needed to take shots, alerting the victim, but with winking or other surreptitious commands even that protection would be gone.

Creepshots are just one extreme – the other opportunities for invasions of privacy are huge. In mitigation, some say ‘Oh, at least you can see that people are wearing Google Glass, so you know they’re filming you’. Well, yes, but there are lots of problems with that. Firstly, should we really have to check the glasses of everyone who can see us? Secondly, this is just the first generation of Google Glass. What will the next one look like? Cooler, less like something out of Star Trek? And the technology could be used in ways that are much less obvious – hacking a Google Glass headset and disguising it as a pair of ordinary sunglasses wouldn’t be hard for a hacker, and disguised versions will be available on the net within a pretty short time.

Normalising surveillance

All these, however, are just details. The real risk is at a much higher level – but it may be a danger that’s already been discounted. It’s the risk that our society goes down a route where surveillance is the norm. Where we expect to be filmed, to have our every movement, our every action, our every word followed, analysed, compiled, and aggregated for the service of companies that want to make money out of us and governments that want to control us. Sure, Google Glass is cool, and sure it does some really cool stuff, but is it really worth that?

Now there may be ways to mitigate all these risks, and there may be ways we can find to help overcome some of the issues. I’d like it to be so, because I love the coolness of the technology. Right now, though, I’m not convinced that we have found them – or even that we necessarily will be able to. For me, it means we need to remember that just because we can do things like this, it doesn’t mean that we should.


By Paul Bernal on May 14, 2013
