Trolls, Plato & The Invisible Man

Patrick Ireland explores whether Plato or H.G. Wells might be able to help us understand the psychology of 'trolling' and online abuse.

In recent weeks social media has been ablaze with controversy after an article entitled '24 Signs She's A Slut' began appearing across Facebook and Twitter. The article was published on the shamelessly sexist blog 'Return of Kings', which proudly describes itself as being "for masculine men". Other articles from the site include 'How Game of Thrones Depicts the Ultimate Feminist' and 'The Farce of Rape Culture in the Workplace'. Real masculine stuff.

As Bertie Brandes rightly states in an amusing yet important response to the article on vice.com: these bullies aren't worth worrying about. But let us pretend for a moment that these people are worth our time and discussion. What if we did take the so-called 'trolls' seriously? Or as seriously as you can take a group of people who claim "fat people don't deserve love" and that Game of Thrones somehow promotes a matriarchal feminist agenda. The question is: why do we consistently see individuals using the web as a means of either expressing twisted worldviews or attacking unsuspecting innocents?

Perhaps the answer can be found in philosophy (although admittedly, answers are very rarely found in philosophy). In Book II of Plato's Republic, reference is made to a mythical artefact named the 'Ring of Gyges', which has the power to render its wearer invisible. Yes, I think we now know where J.R.R. Tolkien drew his inspiration from...

In the passage, Plato considers whether an individual could remain moral or virtuous given the ring's power, since they would no longer have to fear being punished for their actions. It is thus suggested that morality is simply a social construct: "for all men believe in their hearts that injustice is far more profitable to the individual than justice". This is a theme which is also explored heavily in H.G. Wells's classic novel 'The Invisible Man'. Once again, invisibility allows the central character, a brilliant scientist by the name of Griffin, to seemingly escape the restrictive moral norms of society because he cannot be identified or punished for the crimes he commits. He robs various people (including his father, who later commits suicide) and also attempts to murder his former best friend.

Is this essentially what's happening with online bullies? Are they suffering from a kind of 'invisible man' syndrome?

Recent thought seems to suggest that this could actually be the case. The online disinhibition effect - a term coined by Professor John Suler in 2004 - is described as a "loosening (or complete abandonment) of social restrictions and inhibitions that would otherwise be present in normal [social] interactions". According to Suler, it can take several distinct forms:

A Loss of Inhibition: people on the web become less guarded about their emotions and are thus more prone to extremity.

You Don't Know Me: anonymity provides a sense of protection.

You Can't See Me: the Ring of Gyges and the Invisible Man.

See You Later: someone can simply express their opinions (however offensive or extreme) and not contend with the emotional consequences of their actions.

It's All In Your Head: allows fantasies to be played out in the mind - certain stereotypes or emotions can be placed onto the 'faceless user' when engaging in discussion.

It's Just A Game: cyberspace is perceived as being like a game and therefore not 'real'.

No Authority: there are no authority figures on the web and as a result individuals feel like they can act freely.

It is clear that the web significantly alters how we normally interact with other people. Speaking to someone online is radically different from a face-to-face encounter. As Graham Jones, a psychologist specialising in this area, explains, "in the real world people subconsciously monitor the behaviour of others around them and adapt their own behaviours accordingly". Online, however, we do not have such feedback mechanisms - most obviously body language, facial expressions and eye contact.

But as our lives become more integrated with the digital world, should we expect to see a rise in people suffering from the disinhibition effect? Skype? Facebook? Twitter? Nobody can deny that more and more of our social interaction is taking place over the internet. And if we're not socialising over the internet, most of us are browsing it anyway.

Olivia Solon makes an interesting point in a similar article ('The Psychology of Trolling') for wired.co.uk. She claims that there is a potential parallel between online abuse - a 'digital version of the poison pen letter' - and Broken Windows Theory. The theory, first developed in the 1980s, posits that areas hit hard by vandalism will actually encourage more vandalism because it comes to be seen as a kind of social norm. Solon goes on to suggest that the same could happen with trolling unless serious action is taken, whether that be stronger social condemnation or outright censorship.

Indeed, I am still amazed at how quickly - and the extent to which - our lives have gone digital. After all, it's not just chatting to your friends that has spilled over onto the web. Politics. Shopping. Entertainment. Art. Sexuality. Sometimes it's as if we're all trying to replace ourselves with a digital double. Regardless, maybe it's about time we started to take these 'trolls' more seriously.

Patrick Ireland is currently doing an internship at the Open Rights Group. He is also a Correspondent for the digital newspaper Shout Out UK and hopes to attend the London Film School in January 2014 to pursue MA Filmmaking. 

Image: Plato's Republic by Quinn Dombrowski via Flickr (CC BY-SA 2.0)

Surveillance by Consent

Wendy M. Grossman takes a look at the use of CCTV cameras after a recent survey found that 76% of people feel safer knowing that CCTV is in operation.

Only 20 years ago the UK had barely a surveillance camera in sight. Wikipedia tells me that the change to today's 2 million-odd was precipitated by a 1994 Home Office report praising the results of a few trials. And off we went. That's not counting private cameras, which the British Security Industry Association estimates outnumber public ones 70 to one. Coverage is, of course, uneven.

The good news, such as it is, is that the conversation that should have taken place years ago to assess the effectiveness of the cameras for their stated purpose - cutting crime, improving public safety, combating terrorism - seems to be starting now. This may be simple economics: central government money paid for cameras in the 1990s, but now having to foot the bill themselves is making councils take a harder look. Plus, there is greater recognition of the potential for abuse. In June, as required by the 2012 Protection of Freedoms Act, the UK government published a new code of practice covering surveillance cameras (PDF) and, in September, appointed Andrew Rennison, the forensic science regulator, to the post of surveillance camera commissioner.

Rennison was in action on Wednesday leading the first Surveillance Camera Conference ("sounds like the Photocopier User Group", a Twitter follower quipped). Throughout the day, he called for several things that privacy advocates have long wanted: better evidence regarding the efficacy of the cameras, and greater transparency and accountability.

The bad news for those who would like to see Britain let go of its camera fetish is that it seems clear they're popular. One quoted survey conducted earlier this year by ICM for Synectics found that 76 percent of people feel safer in public areas knowing that CCTV is in operation, 62 percent would like to see more cameras in their area, and 72 percent would be worried if their council decided to reduce the number of cameras to save money.

I would love to see how that survey was constructed. Would the numbers be the same if every public space didn't have signs and constant public announcements propagandizing that the cameras are there "for your safety and security"? Would people feel the same if they knew that, as the representative from the Local Government Association estimated, it costs £200,000 a year for a medium-sized council to run a surveillance camera system? Or would they think again about the trade-offs and the foregone opportunities for that money?

However it was achieved - catching and prosecuting rioters in the summer of 2011 seems to have done a lot - the groundswell of public opinion in favor seems to be real. Neil Harvey, the operations and control officer in charge of the Nottingham police camera systems, for example, said he gets five or six requests a week for new cameras to be added to his current network of 204. Each camera is capable of nine positions; they are monitored by five operators.

"I don't track people," he objected after a carelessly constructed question. "I watch areas". Fair enough.

A bigger source of unhappiness for many is that the rules only apply to camera systems installed by public bodies, which several argued were more transparent already; the privately owned ones are exempt. Some attendees theorized that trying to regulate privately owned systems would have been politically impossible. At any rate, Rennison commented that the legislation is having an impact: private operators are adopting the code of practice even though it's not legally required of them.

Several other interesting trends emerged. For one thing, several commenters noted that TV shows like 24 and many others (I'll name Bones, Las Vegas, and various editions of CSI as particularly egregious "magic technology" offenders) have left the public with a completely unrealistic idea of what the cameras can do. A camera pinned up on a lamp post can't easily see faces if the person is looking down. Similarly, many cameras are old (grainy pictures, low resolution). Even so, there's general recognition that these systems are improving all the time both in image quality and in portability; the current vogue is for cameras that can be easily redeployed to trouble spots as they emerge. Several, therefore, said they welcomed public visits to their control rooms so they can impart a more realistic understanding of what the technology can do.

Several speakers actually said that cameras shouldn't be installed just because they're popular. The demand for better evidence about how the cameras can best be used seems genuine - particularly in the case of Rennison, whose background in forensic science makes him particularly aware of the issues surrounding error rates.

But here's my favorite story of the day, again from Neil Harvey. It seems that the protection of the cameras is welcomed by a sector of the population that you might not expect: drug dealers. They prefer to conduct deals in sight of the cameras. Apparently they don't like being robbed either.

Wendy M. Grossman’s Website has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard or follow on Twitter.

Image: Surveillance in NYC's financial district by Jonathan McIntosh via Flickr (CC BY-SA 2.0)

Just Google It!

Milena Popova looks at the new Google Terms of Service and its broader implications as corporate powers continue to mine the web for personal information.

The fact that the company name has come to mean "search on the internet" can often make us forget about the range of other services Google provides and, in particular, how they make their money. While Google may have started as a search engine - a market where it still holds a 70% global share - its empire has spread well beyond that.

It makes the operating system running on nearly 30% of mobile devices in use and nearly 80% of devices currently shipping; it runs YouTube, the service where 63% of us choose to watch our friends' embarrassing antics; it provides Gmail, the world's largest email service; its mapping application is slightly less likely to kill you than some of its major rivals; and it runs a social network which, by any formal measures, continues to be tiny compared to behemoths like Facebook and Twitter, but which is nonetheless a key building block in the company's strategy. There's also the browser, the store, the translator, the blogging platform and a whole bunch of other bits of the Google empire, all of which know things about you.

In March 2012, Google merged the privacy policies across its services and, with that, also consolidated its user data into a single database. As of next month, Google is making another change to its terms and conditions: it will now use your name and photo, as well as information about things you like, in its advertising. It calls this feature "shared endorsements" - and you can be forgiven if you're a little confused about it. You see, Google already uses your +1s in its search results and advertising. What is new as of November is that other activity - such as comments and reviews - will now also be displayed to your unsuspecting friends in their search results.

Of course, Google tells us that we are in control here: 

So your friends, family and others may see your Profile name and photo, and content like the reviews you share or the ads you +1’d. This only happens when you take an action (things like +1’ing, commenting or following) – and the only people who see it are the people you’ve chosen to share that content with. On Google, you’re in control of what you share.

It is that last sentence that really caught my attention. It sounded awfully familiar. It sounded a bit like "we give you control of your information" (Facebook privacy policy, circa 2006). Of course we all know how that went, and if you don't, the Electronic Frontier Foundation has an excellent chronology of the evolution of Facebook's privacy policy up to 2010. To get a good idea of the current state, I highly recommend the Us vs Th3m Facebook security simulator.

So take this as your semi-regular reminder that on Google (and many other places on the internet), it is you who's the product. Do read the changes to the Terms of Service (they are mercifully short and reasonably well written), and at least think about following the link to the shared endorsements preferences page to tell Google not to use your data in this way. If, like me, you use Adblock Plus and are wondering what all the fuss is about, do remember that ABP only treats the symptom. Just because you can't see the ads doesn't mean companies like Google aren't collecting masses of data on you. Think about adjusting the cookie settings in your browser and using Ghostery to minimise the amount of tracking you're subjected to. Make an informed decision about using some non-Google services: on the one hand, you are giving your data to more companies; on the other you're not concentrating quite so much sensitive information about yourself in the hands of a single provider. Which is the lesser evil? You decide.

Milena Popova is an economics & politics graduate, an IT manager and active campaigner for digital rights, electoral reform and women's rights. She is also a member of ORG's board and continues to write for the ORGzine in a personal capacity.  

Image: Google Logo in Building43 by Robert Scoble via Flickr (CC BY 2.0)

Trouble With the Short Term

Robert Seddon looks to the future and examines the risk of short-term thinking in digital policy.

The future, allegedly, is here already. Or if it isn’t quite here yet, it’s hurtling towards us in Internet time, causing future shock as we struggle to keep up with ever-accelerating change.

It’s an exciting myth, especially since it’s partly true to experience. Yet it is a myth, or a misleading oracle: one that leads to what Owain Bennallack calls 'future lag', the sober realisation that not everyone does buy into relentless change through relentlessly newer technology, and so the future is always further away than exciting and excitable predictions make it seem. (The ‘death of the high street’ thanks to online shopping may be real, but it’s a rather drawn-out kind of death.) Between the promised potential of technological advancement, the actual and uneven pace of technological adoption, and the actual pace of social change, there is ample space for recklessly short-termist policy when the future seems closer than it really is.

It’s hard to think about the long term when there’s a constant churn of news today. It’s hard to talk about planning for gradual benefits and eventual social goods when politicians’ practical incentives are geared towards the next election. It’s hard to leave room to think about future generations when they have no present vote or voice. Long-termism is something politics is short of at the best of times: attention favours ‘known, existing problems [over] emerging problems that are not defined yet’. The giddy challenges of digital policy are a catalyst: the apparent speed of technological change encourages us to look for urgent solutions to what may be emerging problems.

We’re much better at seeing children as the children they are than as the adults of tomorrow. Correspondingly, it’s much easier to see the Internet as a place where a child might be imperilled today than as a persistent realm within which much of that child’s future life will be led. It’s therefore easier to grasp the immediate ramifications of policies to preserve children’s innocence (addressing an urgent problem) than to assess whether children are being allowed to mature fast and freely enough (an emerging problem). Similarly, it’s easier to articulate the immediate dangers of terrorism or organised crime than to explain the more distant risks that come with pervasive surveillance.

The Internet in particular really has changed considerably in its young existence. Predicting what it could be like in a few more decades, let alone when our great-grandchildren are its server administrators, is conceptually hard, and therefore often left to futurologists and authors of speculative fiction. Demanding changes to its infrastructure in response to present sources of alarm, such as terrorist communications or unfiltered pornography, is conceptually easy (though the implementation isn’t). So that’s what happens.

This doesn’t mean that digital policy is simply where thinking about the longer term goes to die. An obvious counterexample to such a claim is this area’s overlap with copyright policy, not necessarily for the happiest of reasons. Another is the mantra that the Internet never forgets, and cannot effectively be commanded to, so you should prudently tailor your online presence to suit the expected prejudices of job interviewers over the next half-century or so.

The danger is not that people will outright ignore the long-term viability of digital culture. It’s that well-intentioned people will make policies for an eternal present (or past) instead of a gradually emerging, open-ended future. An emerging problem, such as working out what it means to grow up safely and healthily in a globally interconnected era, gets compressed into a series of concerns delineated by technological novelties: for example, concern about ‘sexting’, which combines a popular communications protocol with one of the oldest fascinations of our species. Urgency is not always misplaced; these are not always mere moral panics. However, emerging problems are hardly guaranteed to have short-term fixes.

What is tricky to convey urgently is that short-termism is a problem. (The future is already here; anything still not here is therefore a secondary priority. The joke goes that commercial fusion power is always twenty years away, and presumably it’s to be hoped that a similar rule applies to Orwellian panopticons.) The difficulty is not that people can’t reason abstractly about the eventual implications of present policy, but that it seems almost paradoxical to call something both distant and dangerous. In contrast, impending threats make imposing headlines.

Making policy for an eternal present isn’t always a bad thing; it’s pretty much what we often do with historic buildings, under the name of conservation or preservation. However, while it can be a sensible and noble thing to do to cultural artefacts, it’s a problematic way to treat cultural spaces in which people develop and act and interact: such spaces as the Internet. Nevertheless, I think there may be something to learn from the ‘heritage sector’, and more specifically from the language of stewardship, which has become influential in fields linked to archaeological and museum ethics.

Archaeologists deal professionally with large timescales, and also with the aftermath of destructive practices by other people, including previous generations of archaeologists whose techniques were less refined. This (along with increasingly contentious questions about who properly ‘owns’ excavated objects which might have belonged to the ancestors of living people) has given rise to the idea that archaeologists and museum curators properly act, not as appropriators, but as stewards of sites and objects. This stewardship is construed as part of multiple generations’ shared project of uncovering and preserving information about the human past. It evokes the theological language of ‘stewards of the Earth’ by way of the ethics of ecological conservation, and applies it to the handling of human cultural goods.

The language of digital stewardship has been employed in archival contexts, but it might prove to be more broadly useful. When those in power seek to meddle with people’s use of technology, it might be interesting to learn what they think makes a responsible steward of digital culture.

Robert Seddon’s doctoral research was on the ethics of cultural heritage. He is currently a member of the Alliance for Future Generations.

Image: Pen to Paper by Dwayne Bent via Flickr (CC BY-SA 2.0)

Cameron's Porn Filter - The Real Winners

Matt Baxter-Reynolds explains that the only ones who will benefit from Cameron's proposed porn filter are the companies providing the actual filters themselves.

There are myriad problems with David Cameron's porn filter -- as an idea it's doomed to failure and does nothing to address the underlying problems. The only ones who will benefit from such a scheme are the vendors that build the software that makes it all "work". And, oh my, what a gravy train it is. 

MARKETS: Overnight, the government's proposal creates an enormous market for providers of such systems. The ONS this month reported that 21 million households in the UK have internet access. This arrangement is all upside to the vendors. Here's why:

* If you've ever sold computer systems, you'll know that a "21 million seat license" is beyond vast. As I'll come on to, it favours slow-moving, lacking-in-innovation companies with lacklustre products.

* The requirement for these systems will be written into law. Everyone has to buy something, so if you're one of the two or three in the running this is looking like a very safe business indeed.

* Only already established vendors will make it in. Remember, the ISPs won't care about this system -- it's not profitable, nor does it create distinction in the market. They'll be looking for the safest bet, which means the one with the best case studies and references.

* Because it's written into law, once you're in, you're in, unless they change the law.

* There will be no incentive to the customer (the ISP) to change you once you're in. Again, the ISPs don't care about this. Unless you're actually hurting their customers, no one is going to swap you out.

RULES: As technologists we know that internet filtering of this type is impossible to pull off with any finesse. The best that we can hope for is a blacklisting approach that doesn't create too many false positives, and that doesn't cut off genuinely valuable sites mislabelled by filters that fail to take into account the entire remit of the human condition. (A classic example here is the unavailability of LGBT support sites to young children working to understand themselves.)

This system won't be a profit centre for the ISPs either -- they'll be looking to pass on the smallest amount possible to their end customer. The negotiations will go something like this, with the ISPs saying: "we have to implement this thing, we don't really care about it, what do you have that just about works?" So vendors will be pressured into providing the most basic system that they can, while maximising the ongoing revenue from that system. What's a good price for this? If I were selling it, I'd be looking at £6 per household per year, which gives rise to a £126 million-per-year market.

If we assume £10-£15 per household for broadband, that adds 50p per month. Yes, that might get absorbed at the start, but it'll certainly get passed on as prices organically rise. The market will certainly stand that sort of spend, even if it seems astronomical. But that's the point -- for vendors of this sort of system, Cameron's porn filter is like winning the lottery every year.

Large-scale, competitive markets follow the 'Rule of Three'. Simply, this states that there tends to be three, large, incumbent competitors in any market. This means we can expect three companies to enter the UK market and carve up the ISPs between them. The largest one will most likely snaffle 50 percent of the ISPs. As it'll likely be the most successful competitor, they'll get the biggest ISPs and have the lion's share of BT, TalkTalk, Sky, Virgin Media, and EE. That one competitor will - by definition - have a significant share of the actual households.

Let's assume the best competitor nets 80% of the households, based on the 80:20 rule (the Pareto principle). That suggests the big winner will be looking at a £100 million-per-year contract - all for providing the most basic system possible, and one that won't really work. Does that seem high? Perhaps. But then, it'll be the households that are paying for it, not the ISPs.
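As a sanity check on those figures, here is a minimal sketch reproducing the back-of-the-envelope arithmetic above. The household count is the ONS figure quoted earlier; the £6-per-household price and the 80% share are the article's assumptions, not published numbers.

```python
# Back-of-the-envelope check of the figures quoted above.
# All inputs are the article's assumptions, not published prices.

households = 21_000_000            # ONS: UK households with internet access
price_per_household_year = 6.00    # assumed vendor price, in pounds per year

market_per_year = households * price_per_household_year
print(f"Total market: £{market_per_year / 1e6:.0f}m per year")                 # £126m per year

# The same price expressed as a monthly cost per household
print(f"Per household: {price_per_household_year / 12 * 100:.0f}p per month")  # 50p per month

# Pareto-style assumption: the largest vendor captures ~80% of households
top_vendor_share = 0.8
print(f"Top vendor: ~£{market_per_year * top_vendor_share / 1e6:.0f}m per year")  # ~£101m, i.e. roughly the £100m quoted
```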

Matt Baxter-Reynolds is a mobile software development consultant, mobile technology industry analyst, author, blogger, and technology sociologist with 20 years' experience in server-side and mobile client software development. Read his blogs at ZDNet and The Platform. Or you can talk to him on Twitter.

Image: Cameron Portrait by Thierry Ehrmann via Flickr (CC BY 2.0)

Bad People

Can recent advances in technology solve some of our social problems? Wendy M. Grossman investigates - from airport facial recognition software to the so-called next generation of "Old Bill" policeman.

The MAGICAL airport of 2020 doesn't exist, and already I feel trapped in it like the nettlesome fly in the printer in the 1985 movie "Brazil". The vision, presented by Joe Flynn, the director of border and identity management services for Accenture Ireland, at this year's Biometrics Conference, probably won't sound bad to today's frustrated travelers. He explained MAGICAL - mobility, analytics, gamification, intelligence, collaboration, automation, low-touch - but we know the acronym wagged this dog.

MAGICAL goes like this. The data you enter into visa and passport systems, immigration data, flight manifests, advance passenger information, all are merged and matched by various analytics. At the airport, security and immigration are a single "automated space": you move through the scanner into a low-touch, constantly surveilling environment ("massive retail space") that knows who you are and what you're carrying and ensures no one is present who shouldn't be. Your boarding pass may be a biometric.

Later, when you step off the plane, you are assessed against expected arrivals and risk profiles. As you sprint down the people movers elbowing slowpokes out of your way (doesn't everyone do this?), accelerated face-on-the-move recognition systems identify you. You cross an indicator line so you know you've entered another country, but as a known traveler you flow through seamlessly. Only 5 to 10 percent of travelers - the unwashed unknowns - are stopped to go through gates or be checked by an officer with a mobile device. Intervention is the exception, although you will still have to choose a customs channel.

My question was: what happens when it goes wrong?

"We don't replace the border guards and people," Flynn said reassuringly. Rasa Karbaukaite, a research officer from Frontex, the Polish-based organization that coordinates and develops integrated European border management, noted that "automated" is not "automatic": there will be human supervision at all times.

But I was worrying about the back end. What happens when some database makes a mistake and you get labeled bad news? In "Brazil" that meant the goon squad invaded your home and carted you off. In a modern-day airport, well…what?

This concern re-emerged when Simon Gordon and Brian Lovell, respectively the founder and research leader of Facewatch, outlined "cloud-based crime reporting", a marriage of social networking with advanced facial recognition systems to help businesses eliminate low-level crime. Say a handbag is stolen in a pub. The staff can upload the relevant still and moving CCTV images in a few minutes along with a simple witness statement. Police can review the footage, immediately send back the reference number needed to claim on insurance, and perhaps identify the suspect from previous crimes.

Speeding up crime reporting and improving detection aren't contentious, nor are, in and of themselves, the technical advances that can perform facial recognition on the grainy, blurred footage from old CCTV cameras. The many proprietary systems behind CCTV cameras pose an expensive challenge to police; Facewatch overcomes this by scraping the screens so that all uploaded images are delivered in a single readable format.

But then Gordon: "[The system] overcomes privacy issues by sharing within corporate and local groups." It's not illegal for Sainsbury's to share information across all its branches that would effectively blacklist someone. Shopwatch and Pubwatch groups can do the same - and already are. Do we want petty criminals to be systematically banned? For how long? What happens when the inevitable abuse of the system creeps in and small businesses start banning people who don't do anything illegal but annoy other customers or just aren't lucrative enough? Where does due process fit in?

A presentation from Mark Crego, global lead for border and identity management, Accenture Ireland, imagined "Biometrics Bill" - the next-generation "Old Bill" policeman. Here, passively collected multi-modal biometrics and ubiquitous wireless links allow an annoyingly stereotyped old lady who's been mugged to pick her attacker out of a line-up assembled at speed on an iPad from her description and local video feeds (ignoring the many problems with eyewitness testimony), and instantly submit a witness statement. On-the-fly facial recognition allows the perpetrator to be spotted on a bus and picked up, shown the watertight case against him on screen, and be jailed within a couple of hours. Case file integration alerts staff to his drug addiction problems and he gets help to go straight. Call me cynical, but it's all too perfect. Technology does not automatically solve social problems.

The systems may be fantasy, but the technology is not. As Joseph Atick, director of the International Biometrics and Identity Association, said, "The challenge is shifting from scalability and algorithm accuracy to responsible management of identity data." Like other systems in this era of "big data", identity systems are beginning to draw on external sources of data to flesh out individual profiles. We must think about how we want these technologies deployed.

"We don't want to let the bad people into our country," said Karbauskaite in explaining Frontex's work on creating automated border controls. Well, fair enough, it's your country, your shop, your neighborhood. But where are we going to put them? They have to go somewhere - and unfortunately we've run out of empty continents.

Wendy M. Grossman’s Website has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard or follow on Twitter.

Image: Birmingham Central Police Station by West Midlands Police via Flickr (CC BY-SA 2.0)

In Name Only

Wendy M. Grossman criticises Nominet's proposed policies on how we should deal with seemingly "offensive" domain names in a provocative and timely piece.

Registering domain names sounds like a simple, straightforward job. In fact, until 1998, the entire domain name system – the thing that lets the wordlike string pelicancrossing.net (which I preferred to the then-unregistered wendygrossman.com) lead you to my Web site rather than the four to 12-digit numbers the computers actually use behind the scenes – was managed by one person: Jon Postel. Plans had already begun to shift the DNS into a more professional structure when he died. The result is global management by the Internet Corporation for Assigned Names and Numbers, under which sits a registry for each top-level domain (.net, .com, or a country code like .uk), which in turn has registrars that do the actual work of selling and setting up registrations. Even as ICANN was being set up, many foresaw that it could become a central point of censorship. That's taken longer to happen than was feared, but as Monica Horten highlights in her new book "A Copyright Masquerade", seizure of registered domains has become a weapon of copyright enforcement actions.
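To make that name-to-number mapping concrete, here is a minimal sketch of the lookup the DNS performs behind the scenes; pelicancrossing.net is simply the domain mentioned above, and the address returned will vary depending on where the site is hosted.

```python
# A quick illustration of what the DNS does: translating a wordlike
# name into the numeric address that computers actually use.
import socket

domain = "pelicancrossing.net"   # the domain mentioned above; any registered name works

try:
    ip_address = socket.gethostbyname(domain)
    print(f"{domain} -> {ip_address}")   # e.g. a dotted-quad IPv4 address
except socket.gaierror as err:
    print(f"Could not resolve {domain}: {err}")
```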

In the UK, the country code authority is Nominet. Nominet has been asking for comments on whether and how it should deal with "offensive" domain name registrations. These are due November 4.

First question: what's offensive? I happen to find those repeated announcements in train stations and elsewhere that the ubiquitous CCTV cameras are "to enhance safety" offensive propaganda, but my chances of getting them banned are approximately nil.

In the case of domain names, the driver behind this push seems to be John Carr, whose career as a campaigner for online child safety has grown up alongside the Internet since at least the mid 1990s. His targets, therefore, as explicated in Nominet's outline of the situation, seem to be domain names that hint at incest, rape, and pornography. Nominet's documentation gives a short list of sites that have been complained about. A couple weren't registered, a couple were referred to the Internet Watch Foundation. The most significant were the two in which the offensive part of the name was actually at the second level – beyond the scope of Nominet's control. This suggests that as a practical matter avoiding offensive domain name registrations is pretty much impossible: there are too many workarounds.

The more important point is that what matters is the content of a Web site. Only rarely – for example, in cases of bullying – is the name important as a signifier.

Pause for some Internet history to illustrate the pitfalls of judging by name only. Back in the early 1990s, when the big online party was Usenet, there was a game you could play. It relied on two things. One: that anyone could start a newsgroup in the alt hierarchy. Two: that university users (who were most of the Usenet population before 1994) would see the list of new newsgroups available when they turned on their reader software in the morning. So the game was to make up silly and provocative names. Things ending in "die.die.die" were in vogue for a while; also things ending in ".sucks". And there were some deliberately intended to shock and outrage. Many of those newsgroups remained empty shells, the point of their existence fulfilled by their creation.

Names also played a significant role in a very early – 1991 or so – tabloid attack on the UK's CIX conferencing system (still going in 2013). At that time, online participants were considered weird, so some intrepid reporter accordingly joined the service and went on a pornography hunt. He found the "adult" conference and discovered its list of files that people had uploaded. When he saw one labeled "Japanese schoolgirls????" he assumed the worst and didn't check. I'm told the "schoolgirl" bit was ludicrous.

The resulting sensationalist article led to some unpleasantness. The conference moderator, a prominent technology journalist, had stones thrown at his house, breaking a window and frightening his children. The actual contents of the adult conference got lost in a disk crash.

I'm going to bet that given that child abuse images are illegal in almost all countries, no one is going to advertise them by registering a domain name that invites police scrutiny. What's more likely is that an outrageous domain name will be used to attract custom for extreme – but legal – material, just as the MPAA's X rating generated ads for "XXX-rated" films.

But even that is giving the domain name system too much importance. Carr's complaint is so last-century, when people found things by typing in domain names, often having to guess. Today the circumstances under which people guess domain names are relatively few. Of the 200 signals that determine a site's Google ranking, hardly any use the domain name itself, and only a few more use data about the domain name. On mobile phones domain names are even less important; apps are obviously where the action is.

In sum: there's clearly a need for dispute resolution when two people or organizations claim rights to the same domain name. There's clearly a need to remove criminal sites that fleece people or engage in other illegal activities (including bullying or libeling someone). But patrolling for "offensive" domain names just isn't a sensible priority.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.

Image: Naked Keyboard by Freddy via Flickr (CC BY-NC-SA 2.0)

Why Digital Rights are a Human Rights Issue – Blog Action Day 2013

A personal look (or an outsider's perspective) at the relationship between digital rights and human rights.

This is ORGZine's offering for Blog Action Day. This year’s theme is Human Rights. Patrick looks at the assumptions we make about what are and are not human rights issues, giving a personal angle on discovering digital rights issues.

 

Digital rights = a human rights issue. Digital rights, and issues surrounding Internet freedom and privacy, have been at the very forefront of international news stories in recent months. The latest documents released by Edward Snowden seem to indicate that the NSA secretly accessed computers within the Indian Embassy in Washington as a part of its clandestine effort to mine electronic data held by the South Asian superpower.

Indeed, the sheer extent to which the mass surveillance programme PRISM has been apparently monitoring both individuals and states from across the globe has sent shockwaves throughout the international community. Very few appear to be unaffected.

Dr Agnes Callamard, executive director of the human rights organisation Article 19, claims Edward Snowden's disclosures have triggered “a necessary and long-delayed public debate about mass surveillance online”. She adds that “rather than investigate and prosecute those that have ordered and conducted one of the most unprecedented global violations of our rights” in recent history, governments from across Europe have instead chosen to support the United States and thus, in the process, shoot the proverbial 'messenger'.

In light of recent events, it would seem strange for a person not to consider digital rights a human rights issue. Until recently joining the Open Rights Group – combined with the ongoing media coverage of PRISM and Snowden – I may have been one of those people. I feel somewhat embarrassed, and possibly even a little ashamed, in admitting this. My conception of human rights was, unknowingly, limited to stereotypical third world problems: massacres in Burma or the right to access clean water and medicines in various impoverished parts of the world.

It wasn't that I didn't think digital rights counted as human rights; it was that I failed to make the connection at the get-go. In my mind, they appeared to be completely separate issues. And I suspect that I wasn't the only person who didn't immediately see a fundamental relationship between digital rights and human rights. The Internet, at the outset, appeared to be just a commodity – another lifeless object to be sold and consumed on the market. However, it soon became clear to me that the Internet is more than a mere commodity. It is interactive, dynamic, seemingly infinite and constantly changing. Moreover, it is essentially owned by nobody (although I am sure that if certain large corporations get their way it soon might be).

There are, of course, a number of human rights that have been identified as relevant in relation to the Internet. Data protection and privacy have featured prominently in international news stories covering PRISM, WikiLeaks, Snowden and TEMPORA. But there is also the equally important freedom of expression and freedom of association. These seem to be the staples – built into the very framework – of what constitutes a fair and equal society. Without them, where would we be?

Human rights are commonly defined as a set of fundamental rights to which a person is inherently entitled simply because he or she is a human being. They are universal. All 'men' created equal, if you want to get American. So why shouldn't these rights extend into the digital world? Technology and the Internet have become such an integral part of our daily lives, and we shouldn't be exploited by paranoid states or wealthy international corporations looking to get wealthier. I don't want Google or Tesco peering into my bathroom window because they're desperate to know what kind of toilet roll I use.

And that's what this is essentially all about. It is another example of normal people struggling with the powerful for what should be basic human freedoms. I think it is important to understand that digital rights and human rights are the same issue.

Patrick Ireland is currently doing an internship at the Open Rights Group. He is also a Correspondent for the digital newspaper Shout Out UK and hopes to attend the London Film School in January 2014 to pursue MA Filmmaking. 

Image: CC-BY 2.0

Personal Data and Disclosure

Paul Morris looks at data protection laws, and how companies use them as an excuse to implement their own safeguards.

It never ceases to amaze me how many data controllers (those people or organisations in charge of your personal data) refuse to let you have copies on the grounds they are protecting your data – protecting them from you! The media are full of stories about organisations both public and private which are nevertheless quite happy to give or sell your data to others or even leave it in skips or on the train.

In the Sunday Times (25 August), we heard of yet another example of the police refusing to provide the name and address (personal data) of the registered keeper of a vehicle which had demolished a wall, on the grounds that "it was against data protection rules". The owner of the wall – or perhaps I should now say, the pile of rubble – wanted to sue the driver for the cost of repair. Of course the police have no right to refuse such a request if the requestor is considering legal action (which he was); Section 35 of the Data Protection Act refers. It was a 'cop out'; the police must 'cough'.

There is a huge misunderstanding about data protection laws. You may wish to make a change to a holiday booking, but the call centre won’t talk to you if you didn’t make the booking, even though you are one of the party going on holiday and just want to change your own details. It is quite understandable that they don’t want to make amendments which are not approved by the ‘lead name’, but they can talk to anyone provided that the lead name has given permission. In addition, they wouldn’t get into trouble if they talked to someone without permission, provided that it was reasonable to do so. In reality, they are just using ‘data protection’ as an excuse for implementing their own safeguards, which is a bit naughty.

The main two provisions in the Data Protection Act 1998 which are useful for data subjects (ordinary people) are firstly to obtain your personal data – to see what organisations are holding about you – and secondly, to rectify any inaccuracies.

Providing you send proof of who you are (to make sure it’s not someone else trying to take a peek at your data) and you specify what you want, then data controllers have 40 days in which to reply. If you discover something that is inaccurate, you can ask the data controller to rectify it. If you have proof that the data are inaccurate, then data controllers will make the corrections. If they don’t, then you can ask the Information Commissioner to send an enforcement notice or you can even issue legal proceedings. If the inaccuracies are trivial such as spelling errors, then let them go. Only if the inaccuracies are affecting your life should you pursue rectification – a major blot on your credit record for example (that shouldn’t be there).

Bear in mind that responding to a request for personal data is very tedious for the data controller, and so the task is usually delegated to a junior member of staff who is given a set of rules to follow, such as redacting (putting a black marker pen through) the names of other people (because their name is their personal data). This may sound reasonable but it can lead to a moronic response. One of our correspondents asked for his personal data from the Home Office. He was much amused to find that the name ‘Jack Straw’ had been redacted. How would he know that this name had been redacted? Because the title ‘Home Secretary’ was next to the redaction! Of course this was when Jack Straw was Home Secretary, so most people would have known the name of the Home Secretary. It was hardly a state secret!

Both the Freedom of Information Act 2000 and the Data Protection Act 1998 are very powerful tools for the individual against an ever-encroaching State. The former revealed the scandal of MPs expenses; the latter is used by individuals to discover what information companies and the Government keep about them and to rectify those data if they are inaccurate and damaging.

It is surprising how often organisations assume powers they don’t have. It’s usually worth checking what legislation they rely on - ask them. We have found that often it is a fiction.

Paul Morris, The Data Protection Society.

Image: SentrySafe H2300 by Pig Monkey (CC BY-NC-SA 2.0)

Face-recognition-book

Stephen McLeod Blythe looks at Facebook's new proposed features.

Facebook have hit the news again with the latest proposed amendments to two of their key operating documents: the ‘Data Use’ and Privacy Policies. There are a few different issues under scrutiny this time around, including the use of users’ pictures and other content for advertising purposes, but perhaps what has been most contentious of all is the wording that potentially allows Facebook to deploy facial recognition software across its entire network. This is something that is already in place in a few jurisdictions, but not currently active for European users after it got battered by Germany the first time it reared its head.

There’s no doubt that Facebook have a less than perfect record when it comes to issues of privacy. Time and again there seems to be little respect shown for the rights of multi-faceted individuals when it comes to implementing Mark Zuckerberg’s vision of a ‘single identity’; a concept that I take serious issue with. However, I have to confess that this latest row was not something that I immediately felt a great deal of consternation about. The level of outrage that has been expressed by many of the outlets giving coverage seemed to have a predictably reactive tone. This was compounded further by the repetition of only a few sources (almost verbatim in many cases), and the lack of explanation as to why it actually matters. Going in with an open mind, I decided to explore what the actual implications of such a thing might be.

So what’s the issue?

As far as I can see, there are a few main concerns about the possible implementation of facial recognition software across Facebook:

  1. The system will be enabled by default, with users having to explicitly ‘opt out’ from inclusion
  2. It’s ‘creepy’
  3. The NSA and other Government agencies may have access to this data, and use it for more serious purposes than simply enabling speedier tagging on Facebook
  4. Strangers will be able to work out who you are by seeing you identified in other people’s pictures

Frankly, quite a lot of what happens on Facebook falls under number 2, so with that exception, let’s explore these a bit:

A closer look

The system will be enabled by default, with users having to explicitly ‘opt out’ from inclusion

As a result of years of working as a nightclub photographer, for a long time I have restricted those I am ‘friends’ with on Facebook to a select few whom I rarely get the chance to see in the flesh. To facilitate this quasi-secret online life, I make substantial use of the privacy controls that the platform now affords. This hasn’t been a particularly straightforward task at the best of times, but it has allowed me to keep a relatively low profile.

Let us leave aside the navigational difficulties of adopting a privacy-centric configuration that exist even for the more technically savvy of users, as well as the general ideological argument that advocates an opt-in approach to issues of consent. In this specific instance, a wholesale implementation of this technology will render users’ existing privacy settings completely ineffective. What use is it requiring prior review of every ‘tag request’ that comes in from a friend, if you are going to be automatically identified in every picture uploaded to the platform? The answer to this should be obvious, and it highlights only the tip of a very large iceberg. For the purposes of the rest of the considerations, let’s assume that this element is taken out of the equation completely, and focus on them in isolation.

The NSA and other Government agencies may have access to this data, and use it for more serious purposes than simply enabling speedier tagging on Facebook

Given the proximity of these privacy changes to the revelations of mass surveillance by Government bodies, it’s natural to question the extent of information that tech companies who have been named as complicit are gathering. One of the most pertinent questions that is raised as a result is what exactly happens should you choose to opt out of the facial recognition database. Does Facebook erase your data completely, or simply stop the associated notifications to the front-end? This remains unclear, and given the company’s track record in this area (try and permanently delete your account for an example), I doubt the answer will be either forthcoming, or what we would hope for.

Having said this, whilst it is important that the wider policy considerations are taken into account, we should be wary of blurring the lines too much between data collected by services that we actively choose to participate in, and the unauthorised access of that data by the State. There are issues inherent with any platform that encourages us to share information about every facet of our lives, and they should be challenged, but the real focus of dissent with regards to the NSA should be those in the political sphere who have allowed it to take place. Whether they make use of the data available in Facebook or not, they’re going to do it anyway. This is a bigger question than social networks alone.

Strangers will be able to work out who you are by seeing you identified in other people’s pictures

At first I genuinely couldn’t see a problem with this.

If you are tagged in a picture with a group of friends, then by design you will be exposed to their entire list of contacts - at least some of whom will not know your identity before seeing the picture and associated profile link. This is true no matter what your privacy settings, and if your concern relates to strangers working out your identity, then the only real way to avoid this is to not be identifiable in any pictures at all.

Except, that’s not how it’ll work.

Instead of a scenario where you are identified as one of the main subjects in the foreground of a photograph owned or taken by someone that you have interacted with in person to some degree, this technology has the capability to identify you across the entire network. At the thin end of the wedge, this means that you could end up being identified by Facebook in other people’s pictures as you walk past in the background, but at the other end of the spectrum there is a far more chilling possibility.

What if somebody deliberately wants to find out who you are?

Sure, we’ve all taken an interest in finding out more about someone that we didn’t know much about... or whom we’ve had a fleeting interaction with at some point, but that usually comes along with a shared connection whether that be through another person, an event, or social group. The danger is that this technology could conceivably be used by anybody to snap a picture and identify exactly who you are, wherever you are, be that at work; in a shop; at school... in the playground?

At present it can take a whole lot to jump from a face in the street to somebody’s name, but there isn’t a whole lot of distance between knowing somebody’s name and face online and finding out far more personal details such as home addresses and telephone numbers. In fact, it is staggeringly easy.

So it appears that there is something to be alarmed about. Facial recognition technology attached to the largest database of people in the world could have consequences that don’t even bear considering. That said, with the advancements in this area of technology that already exist, is it really Facebook itself that we need to be concerned with, and is it too late?

You can get the ‘red line’ PDF that highlights the changes that are being proposed to the Facebook ‘Data Use’ Policy here, and the Privacy Policy here. To see the company’s own explanation for the changes, see this blogpost. Further reading on privacy and facial recognition in general can be found in this Harvard Law article by Yana Welinder.

Stephen McLeod Blythe is an Internet law geek with an LLM on the way, and Digital Marketing Manager for tech firm Amor Group. You can find him on Twitter, Google+, and his own blog - iamsteve.in

Image: Gezichtsherkenning / Face recognition by mooste (CC BY-NC-ND 2.0)

Featured Article

Schmidt Happens

Wendy M. Grossman responds to "loopy" statements made by Google Executive Chairman Eric Schmidt regarding censorship and encryption.
