Banned Books Week

A musing on the different attitudes we have to censorship of books and censorship of the internet.

This week (September 30 – October 6, 2012) is Banned Books Week, a national week in America “celebrating the freedom to read”.

There isn’t really a UK equivalent and, although the lists of ‘most challenged books’ released at the start of the week come from American libraries and retailers, the week rightly gains discussion and attention in the UK too.

Last year I sat down and read the entirety of the American Library Association’s list of challenged books. There was a certain mockery in my reading, mixed with outrage. “Who are these ridiculous people?” I thought to myself. “In the UK,” I crowed, “we are above such behaviour.”

Because we don’t record challenges to our libraries as thoroughly as the Americans do, I assumed these challenges don’t happen here. But some quick questions and googling this week revealed that people frequently ask for books to be removed from our council libraries, so no matter where we are based, issues of censorship are always relevant.

 Banned Books Week gains excellent coverage partly because most people can say:

“Banning To Kill A Mockingbird or Brave New World is idiotic! Don’t they recognise literary excellence?”

and thus parents who try to ban Huxley can be made to seem silly and inconsequential. Even where they are effective, there is an element of outraged snobbery in the discourse surrounding that kind of decision.

That attitude becomes a bit more difficult to sustain when you read about Fifty Shades of Grey being challenged. After all, isn’t that a kind of pornography? I don’t see porn in the DVD section of the library, so I can at least understand the sentiment. And whilst some may claim that this strain of erotica leads to feminine empowerment and the embracing of kink, the BDSM community isn’t embracing the books, and the weak-willed Anastasia isn’t giving female self-determination a good name. However, it is this train of thought that results in placing your own morals (in my case strident feminism) above someone else’s reading or learning. We all have something that we find offensive.

I read a great deal of sites that discuss these issues and have found that it is sex and illustrations that provoke the most heated debates. Eight of the books in the ALA’s list of ‘The top 10 most frequently challenged books of 2011’ were challenged for being ‘sexually explicit’ or for ‘nudity’. It is why the Comic Book Legal Defense Fund has a harder time defending manga and Lost Girls than libraries do explaining why the violence in I Know Why the Caged Bird Sings is permissible.

Where words alone are concerned we believe in protection, education and literary value, but with the introduction of images we can become far more aggressive.

The National Coalition Against Censorship, writing in the Guardian, describes challenges to The Dirty Cowboy, a children’s book, because its namesake is depicted having a bath. The logic that says a nude image is obscene and might lead to pornography is the same logic that sees advocacy of online censorship prevail.

Some people believe in ensuring that content is always ‘age-appropriate’. However, Joan Bertin argues, “Ratings obscure the value of literature and inevitably lead to censorship.”

I agree. Everyone develops at a different rate; I for one read Jaws at age nine and was only freaked out by the sex when I read it again at twelve. My parents held the same double standard that so many of us do about books. We weren’t allowed to watch 18-rated films until that birthday (apart from a long, sustained campaign by my sister and me to watch Quadrophenia because of the motorbikes), but they never kept track of the books we read.

The Government readily discusses putting an 18-rating on content across the internet, yet we don’t see bills being passed on the content of libraries (see the recent Department for Education consultation on parental controls). The Open Rights Group researched the kind of content that gets banned online by ‘adult content’ filters on mobile phones and found blocked:

  • Bars and pub websites (because you can’t enter a pub until you are 18)
  • Parenting and breastfeeding sites (because you have to be over 18 to have a child)
  • Sexual education sites (because you have to be over 18 to have sex)
  • Forums and chatrooms (because it’s not safe for people under 18 to talk to each other)

 One of the arguments defending darkness in children’s books is that they deal with the real world. There are young people who have had to deal with death or suicide or family problems or sex or abuse or violence or racism. We can’t pretend it doesn’t happen. This makes some people uncomfortable, but there is a great tradition of defending this content in young adult books for its relevance.  

I think we should remember that these same young people can find advice and support on the internet for these same issues. I spent some time working on a support forum for young people, giving advice on relationship issues. This is the kind of forum – dealing with self-harm, suicide, anorexia and sex – that some people would want banned. But, as with the books discussed this week, hiding children from the real world doesn’t make them any safer.

 

Ruth Coustick is ORGzine Editor and likes to talk/blog in various places about books, feminism and human rights.

 


Image: CC-BY 2.0 Flickr: thejester100

Clean IT: a symptom of the pinata politics of privatised online enforcement

Joe McNamee, Executive Director of European Digital Rights, explains the European Commission's ‘Clean IT’ project, one of many subtle threats to regulate the internet.

Over recent years, the European Commission has been both running and outsourcing various projects that all have one core aim – to encourage Internet companies to police, prevent and/or prosecute potential breaches of the law. The suggestion is usually that the companies change their terms of service to give themselves the power to undertake such activities. These projects often come with more or less subtle threats to regulate if industry does not “voluntarily” take action. One of these projects is called “Clean IT”.

There has been a lot of attention paid to the “Clean IT” project since EDRi published a leaked draft document last week, on 21 September 2012. Since then, the project organisers have said that the statement on the front page that “this document contains detailed recommendations” was incorrect and that it also contained (unidentified) other mistakes.

Project coordinator But Klaasen explained on Twitter that the leak was little more than a “discussion document.” According to the Clean IT website, the document is the output of a series of two-day meetings held across Europe from 2011 to 2012, which have so far produced 23 pages of bullet-pointed policy suggestions. There will be just one more meeting (Vienna, November 2012) before a final presentation is made in February 2013. Mr Klaasen also explained on Twitter that all suggestions received thus far are only “food for discussion”, because the organisers do not censor the ideas they receive.

Clean IT is therefore part of a wider problem – a conveyor belt of projects whereby industry is expected to do “something” to solve ill-defined – or even undefined – problems on the Internet. For example, it takes an almost impressive amount of fragmentation for the European Commission to be simultaneously funding two different and uncoordinated projects (Clean IT and the CEO Coalition on a Safer Internet for Kids) developing “voluntary” industry standards on “notice and takedown”, “upload filters” and “reporting buttons”; all with little or no analysis of the specific problems that need to be solved.

Worse still, Clean IT was born out of a failed “voluntary” project organised directly by the European Commission on “illegal online content”. That project failed because it did not define the problem. Without knowing what problems it was trying to solve, it ended up going round in ever smaller circles before finally disappearing down the proverbial drain.

Sadly, no lessons were learned before the Commission committed to funding Clean IT, which is currently making the same mistakes all over again.

Even bigger mistakes have been made with this approach. In the Commission-organised “dialogue on illegal uploading and downloading”, a proposal was made for widespread “voluntary” filtering of peer-to-peer networks. This was resisted by the Internet access provider industry and ultimately ruled by the European Court of Justice (Scarlet/Sabam case C70/10) to be in breach of fundamental rights.

All of this experience meant that EDRi could not possibly participate in Clean IT without seeking to ensure that the project did not make the same mistakes that we have seen over and over again. In 2011, as a precondition of participation, we therefore set very reasonable demands:

1. Identify the specific problems to be solved.

At different moments, Clean IT was meant to address “Al Quaida influenced” networks, “terrorist and extremist 'use' of the Internet” and “discrimination” or “illegal software”.

2. Identify the scope of the industry involvement. Listing every single type of online intermediary is neither credible nor effective.

3. Actively seek to identify and avoid the possibility of unintended consequences, both for fundamental rights and for addressing illegal content.

The project leader rejected all of these preconditions, regrettably leaving us no option but to stay outside the process. As a result, they have a project that seeks to use unspecified industry participants to solve unidentified problems in ways which may or may not be in breach of the European Union and international law.

It would be unconscionable for EDRi to participate in these circumstances.

We have also been contacted via Twitter by Commissioner Kroes' spokesperson. Mr Heath's comments suggest that CleanIT is only a “brainstorming” session and that the Commission has spent hundreds of thousands of euros just for lists of possible policies.

It is very important to stress that absolutely nothing in the document that we released last week has been officially approved as European Commission policy. The recommendations, insofar as they are recommendations, are the sole responsibility of the CleanIT project. Commissioner Malmström has acted to distance herself from the project and has made this very clear via Twitter messages. There are, however, serious questions still to be asked regarding the budget processes that led to such projects being approved for public funding.

The law is quite clear – indeed the Charter of Fundamental Rights, the Convention on Human Rights and the International Covenant on Civil and Political Rights are quite clear – restrictions on fundamental rights must be foreseen by law and not introduced as unpredictable, ad hoc projects by industry. The rule of law cannot be defended by abandoning the rule of law and EDRi will continue to defend this principle.

Links

EDRi: Clean IT – Leak shows plans for large-scale, undemocratic surveillance of all communications (21.09.2012)
http://edri.org/cleanIT

Clean IT rebuttal of our comments
http://www.cleanitproject.eu/edri-publishes-clean-it-discussion-docume...

Mr Heath's comments
https://twitter.com/EDRi_org/status/250524464499023872

Mr Klaasen's tweet
https://twitter.com/ButKlaasen/status/249145735453487105

Commissioner Malmström's tweets
https://twitter.com/MalmstromEU/status/250573911471845376
https://twitter.com/MalmstromEU/status/250574119991660545
https://twitter.com/MalmstromEU/status/250641266038173696

 

 

This article was originally published on the EDRi’s website as ENDitorial: Clean IT is just a symptom of the piñata politics of privatised online enforcement

Joe McNamee is Executive Director of European Digital Rights, an association of 32 digital civil rights organisations (including ORG) from 20 European countries. He has an undergraduate degree in Modern Languages and Master's degrees in International Law and in European Politics.

Twitter: @edri_org Website: http://edri.org

 

Image: CC-BY-NC-SA think. responsible design & ideas

Social Media Prosecutions: ‘Grossly Offensive’ to some

A nineteen-year-old was arrested for posting an aggressive status update to his Facebook account in March 2012. Matt Bradley writes a clear timeline of the Azhar Ahmed case and compares it to other cases of social media prosecutions.

On Thursday 20th September the Director of Public Prosecutions, Keir Starmer, issued a statement on the subject of social media prosecutions. His comments were in reference to a specific CPS decision not to prosecute Daniel Thomas for a homophobic tweet about Tom Daley and his fellow diver Peter Waterfield:

“There is no doubt that the message posted by Mr Thomas was offensive and would be regarded as such by reasonable members of society. But the question for the CPS is not whether it was offensive, but whether it was so grossly offensive that criminal charges should be brought. The distinction is an important one and not easily made.”

Starmer explained that the decision not to prosecute had been taken in light of a number of factors, including:

“However naïve, Mr Thomas did not intend the message to go beyond his followers, who were mainly friends and family”

Starmer’s statement continued with more generalised remarks about freedom of speech online, expressing a desire for a debate about the limits of that freedom, and noting that

“the CPS has the task of balancing the fundamental right of free speech and the need to prosecute wrongdoing” adding that in his view,

“the time has come for an informed debate about the boundaries of free speech in an age of social media.”

Right now there is one specific social media prosecution which fills me with a creeping dread. This is a case which I have been following with mounting dismay ever since I first became aware of the arrest in March this year.

Nineteen-year old Azhar Ahmed of West Yorkshire was arrested on 10th March 2012 after posting the following status update to his Facebook account from his mobile phone:

“People gassin about the deaths of Soldiers! What about the innocent familys who have been brutally killed.. The women who have.been raped.. The children who have been sliced up..! Your enemy’s were the Taliban not innocent harmful familys. All soldiers should DIE & go to HELL! THE LOWLIFE FOKKIN SCUM! gotta problem go cry at your soldiers grave & wish him hell because thats where he is going..”

Ahmed had posted the update at around 7pm on 8th March, after seeing updates on his Facebook timeline expressing condolences for six British soldiers who had been killed in Afghanistan on the 6th. It was the biggest single loss of life sustained by British forces since the conflict in Afghanistan started, and was very public news at the time.

To paraphrase the words of DPP Keir Starmer, there is no doubt that the message posted by Mr Ahmed was offensive, but what happened next is really quite terrifying.

Azhar Ahmed’s Facebook status was picked up by various people, who began passing it around online, either in the form of Facebook shares or screengrabs. A large number of negative comments appeared in reply to Ahmed’s status update. Eventually, Ahmed deleted his post, and even sent a few direct messages of apology to people who had expressed upset.

By about 10pm the same night, an address and telephone number which purported to be that of Ahmed had been posted online. It was in fact the address and telephone number of a friend of his. The telephone number in question began receiving abusive and racist telephone calls until the early hours of the morning. At approximately midnight, cars began pulling up outside the house, and torches were shone in through the windows.

The following morning, at 9am, the company which Ahmed had listed as his employer in his Facebook profile began receiving telephone calls accusing them of being a “racist company”.

By the time Ahmed agreed to meet with West Yorkshire Police two days after his post, he must have been seriously rattled. He had been threatened and abused online, and there had been attempts to get to him at his home address. During his meeting with police, Ahmed was arrested and subsequently charged with a “racially aggravated public order offence”.

Skip forward to March 20th and Ahmed’s first case hearing at Dewsbury Magistrate’s Court. Even a cursory search online would reveal that the case was already the subject of much attention from the right wing press, far right and nationalist groups. Police were expecting trouble. When I arrived at court that day there was a substantial police presence.

During that first hearing, we heard that the CPS had elected to drop their prosecution for a “racially aggravated public order offence” and instead charge Ahmed with sending a “grossly offensive” message via a public telecommunications network, contrary to Section 127 of the Communications Act 2003. I’m sorry to report that this didn’t come as a great surprise to me; there was little prospect of the CPS securing a conviction under the previous charge. Section 127, on the other hand, is open to very wide interpretation in that it applies highly subjective terms such as “grossly offensive” and “indecent”. It has in the past proved to be a very effective way of securing a conviction against social media users.

Leaving court, I was dragged painfully out of my internal contemplation of the legal abstract and back into the savage realities of what had brought this case to court. Since the hearing began, a sizeable mob of nationalists and far-right activists had gathered outside the courthouse. The scene was ugly. There was an undeniable threat of violence and police were struggling to contain it. Within 15 minutes of the hearing finishing, the mob had split into numerous small groups running through the narrow side streets of Dewsbury with hoodies and scarves wrapped around their faces. What began as the angry political outburst of a young British Asian Muslim had, some weeks later, ballooned into a near riot.

The full trial took place on 14th September at Huddersfield Magistrates’ Court, in front of a public gallery filled almost entirely with far-right supporters. Throughout the case there was no dispute that Ahmed posted the status: in legal parlance, the “actus reus” of Ahmed making the post is accepted. What was in discussion was the “mens rea”: did he post his status with the intention, or the reasonable expectation, that it would be found “grossly offensive” by those who saw it?

After a six-hour hearing, the sitting District Judge Jane Goodwin found Azhar Ahmed guilty under s127 of the Communications Act 2003. Whilst acknowledging Ahmed’s right to freedom of expression, she said that he must have been aware of the soldiers who had died in Afghanistan on 6th March, and that she was therefore satisfied that he intended his Facebook status to be “grossly offensive” to those who read it.

A sentencing hearing for Azhar Ahmed will take place in October and I would be well advised not to speculate on the outcome of that hearing. From my point of view, whether Ahmed receives the full maximum 6 months’ imprisonment which his conviction allows, or a fine of only £1, it will be too much. This young man (now 20 years old) will be given a criminal conviction for expressing an unwelcome political opinion. This to my mind is completely intolerable in a free and just society.

I’m criminally over my word limit here, but I’d like to leave you with one more thing. After a hard-fought campaign going all the way to the High Court in front of the Lord Chief Justice himself, Paul Chambers successfully had his conviction under s127 overturned. In his judgment, the Lord Chief Justice said of the Communications Act 2003:

“The 2003 Act did not create some newly minted interference with the first of President Roosevelt’s essential freedoms – freedom of speech and expression. Satirical, or iconoclastic, or rude comment, the expression of unpopular or unfashionable opinion about serious or trivial matters, banter or humour, even if distasteful to some or painful to those subjected to it should and no doubt will continue at their customary level, quite undiminished by this legislation.”

 

Matt Bradley has been working on the Internet for ten years and is director of Invent Partners Ltd (http://www.inventpartners.com), a website and software development firm. He also writes a blog on social media prosecutions and internet freedom of speech at http://www.pitkanary.com

Image: CC-BY-NC s_Falkow

Don't take ballots from smiling strangers

Wendy Grossman received a series of amateurish emails suggesting that she sign up for electronic ballots – spam or anti-voting fraud? She explains how both problems are prevalent for voters.

Friends, I thought it was spam, and when I explain I think you'll see why.

Some background. Overseas Americans typically vote in the district of their last US residence. In my case, that's a county in the fine state of New York, which for much of my adult life, like clockwork, has sent me paper ballots by postal mail. Since overseas residents do not live in any state, however, you are eligible to vote only in federal elections (US Congress, US Senate, and President). I have voted in every election I have ever been eligible for back to 1972.

So last weekend three emails arrived, all beginning, "Dear voter".

The first one, from nysupport@secureballotusa.com, subject line "Electronic Ballot Access for Military/Overseas Voters":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

The second, from "NYS Board of Elections", move@elections-ny.gov, subject "Your Ballot is Now Available":

An electronic ballot has been made available to you for the November 6, 2012 General Election. Please access https://www.secureballotusa.com/NY to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

If you have any questions or experience any problems, please email NYsupport@secureballotusa.com or visit the NYS Board of Elections' website at http://www.elections.ny.gov for additional information.

The third, from nysupport@secureballot.com, subject, "Ballot Available Notification":

An electronic ballot has been made available to you for the GE 11/6/12 (Federal) by your local County Board of Elections. Please access www.secureballotusa.com/diaspora_ny-1.5/NY_login.action to download your ballot.

Due to recent upgrades, all voters will need to go through the "First Time Access" process on the site in order to gain access to the electronic ballot delivery system.

In all my years as a voter, I've never had anything to do with the NY Board of Elections. I had not received any notification from the county board of elections telling me to expect an email, confirming the source, or giving the web site address I would eventually use. And the county board of elections website had no information indicating they were providing electronic ballots for overseas voters. So I ask you: what would you think?
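The first thing a sceptical recipient can do with such a message is ask whether the link even points at the official domain. A minimal sketch of that check in Python – the allow-list here is purely illustrative, not an actual list of official election domains, and a real voter would have to know the official domain in advance, which is exactly the problem:

```python
from urllib.parse import urlparse

# Hypothetical allow-list, for illustration only.
OFFICIAL_DOMAINS = {"elections.ny.gov"}

def looks_official(url: str) -> bool:
    """True only if the link's host is, or is a subdomain of, a listed domain."""
    # urlparse needs a scheme (or at least '//') to recognise the host part.
    host = urlparse(url if "//" in url else "//" + url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://www.elections.ny.gov/"))      # True
print(looks_official("https://www.secureballotusa.com/NY"))  # False
```

By this measure, every link in the three emails fails: each resolves to a third-party host the voter was never told to expect. Note that the check also rejects look-alikes such as `elections-ny.gov.evil.example`, because matching is on whole domain labels, not substrings.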

What I thought was that the most likely possibilities were both evil. One was that it was just ordinary, garden-variety spam intended to facilitate a more than usually complete phishing job. That possibility made me very reluctant to check out the URL in the message, even by typing it in. The security expert Rebecca Mercuri, whose PhD dissertation in 2000 was the first to really study the technical difficulties of electronic voting, was more intrepid. She examined the secureballotusa.com site and noted errors, such as the request for the registrant's Alabama driver's license number on this supposedly New York state registration page. Plus, the amount of information requested for verification is unnerving; I don't know these people, even though secureballotusa.com checks out as belonging to the Spanish company Scytl, which provides election software to a variety of places, including New York state.

The second possibility was that these messages were the latest outbreak of longstanding deceptive election practices which include disseminating misinformation with the goal of disenfranchising particular groups of voters. All I know about this comes from the 2008 Computers, Freedom, and Privacy conference, a panel organized by EPIC's Lillie Coney. And it's nasty stuff: leaflets, phone calls, mailings, saying stuff like Republicans vote on Tuesday (the real US election day), Democrats on Wednesday. Or that you can't vote if you've ever been found guilty of anything. Or if you voted in an earlier election this year. Or the polling location has changed. Or you'll be deported if you try to vote and you're an illegal immigrant. Typically, these efforts have been targeted at minorities and the poor. But the panel fully expected them to move increasingly online and to target a wider variety of groups, particularly through spam email. So that was my second thought. Is this it? Someone wants me not to vote?

This election year, of course, the efforts to disenfranchise groups of voters are far more sophisticated. Why send out leaflets when you can push for voter identification laws on the basis that voter fraud is a serious problem? This issue is being discussed at length by the New York Times, the Atlantic and elsewhere. Deceptive email seems amateurish by comparison.

I packed up the first two emails and forwarded them to an email address at my county's board of elections from which I had previously received a mailing. On Monday, there came a prompt response. No, the messages were genuine. At some point that I don't remember, I had ticked a box saying "OR email", and hence I was being notified that an electronic ballot was available. I wrote back, horrified: paper ballot, by postal mail, please. And get a security expert to review how you've done this. Because seriously: the whole setup is just dangerously wrong. Voting security matters. Think like a bank.

 

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of the earlier columns in this series.

 

Image: CC-BY-NC Flickr: Keith Ivey

Health Information: For Adults Only

Katherine Norman discusses her experience of finding maternity, midwifery and parenting sites blocked by Three and how her research led her to conclude that as a parent she will not make use of ISP parental controls.

As a parent I will not be making use of the parental controls offered by ISPs. This may seem counter-intuitive. Why would I not do everything to protect my children online? But my recent experiences with adult-content filtering on one of the big mobile phone networks, Three, have highlighted for me the issues involved with filtering content: the unreliability of the filters; the problems with letting someone else decide what is suitable for your child; the difficulty of complaining to a company about blocked sites; and the potential for adults to lose access to essential information if parental controls are turned on at ISP level or on shared devices.

Whilst on holiday I found that a highly respected, evidence-based breastfeeding support site, Kellymom.com, is blocked by Three. I complained to Three, sure that this was over-blocking, since most breastfeeding sites aren't blocked. But over the coming weeks I found lots of similar blocked sites. Eventually the pattern emerged – Three was blocking maternity sites, including many national organisations and (based on a small sample) about half of independent midwives. I found over 50 blocked sites in a couple of hours of surfing.

Now, as it turns out, I don't have any of the IDs required to verify my age, so although I am well over 18 I couldn't turn off the adult-content filtering on my Pay As You Go phone. By then I didn't want it turned off anyway – I wanted to know what was being blocked!

After five weeks, and several hours on the phone to Three, I discovered that most of these sites were intentionally blocked as ‘adult-only’ pregnancy and sexual health sites. This has now changed and the sites are no longer blocked. Three also said that they would be changing the verification for adult-only content and that they needed to look into procedures for dealing with complaints about blocked sites. But it shouldn't take an almost obsessive effort by one parent to find out why something is being blocked and have it dealt with.

 

What is adult-only content?

Every parent is going to have different ideas about what is suitable for their children, and suitability for a five-year-old is vastly different to that for a seventeen-year-old. Someone at Three at some point decided that pregnancy is an adult-only topic. For me maternity is far from adult content – there are children's picture books on the subject! Birth is an essential part of the human experience, and these sites have immense educational value in a world where most of us reach adulthood with a poor understanding of pregnancy, birth or looking after a baby. The blocking wasn't about potentially rude words or the images used, but purely the subject of the sites.

Many adults will assume that parental controls are only going to be used to block things that are widely accepted as adult content – sexual content and violence. But the consultation on parental controls, and the application of filters in this case, suggests otherwise. Some parents will want nudity blocked; others will welcome non-sexual nude pictures as artistic, cultural and educational – think Michelangelo's David. Sexual health and sexual education sites raise further issues of access to materials, not only for personal wellbeing but for completing homework.

It is essential that any company providing parental controls details what content is blocked and why, and that parents can choose what is suitable for their children. Otherwise parental controls do just the opposite – they remove a parent's control.

 

Filters aren't reliable

Suppose I did agree that pregnancy and sexual health sites should be adult-only. My experience with the filters used by Three has clearly shown that you can't rely on them to block content. Most breastfeeding sites, and an apparently random half of independent midwives' sites, weren't blocked. So there is a real danger of parents relying on filters that won't block all the sites they want them to. Filtering is no substitute for parental supervision.
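This over- and under-blocking pattern is unsurprising if, as is common, the filter works from keyword or URL lists. A toy substring filter – purely illustrative, since Three's actual implementation is not public – shows both failure modes at once:

```python
# Toy keyword-based filter: blocks a URL if it contains any listed term.
BLOCKED_TERMS = ["sex", "porn"]

def is_blocked(url: str) -> bool:
    """Naive substring match against a keyword blocklist."""
    return any(term in url.lower() for term in BLOCKED_TERMS)

# Over-blocking: an innocent place name happens to contain a listed term.
print(is_blocked("http://www.middlesex-midwives.example"))  # True (false positive)

# Under-blocking: a site the filter was presumably meant to catch lists none of them.
print(is_blocked("http://www.explicit-stories.example"))    # False (false negative)
```

Both domain names here are made up for the example, but the mechanism is real: any list-based filter will simultaneously block sites it shouldn't and miss sites it was meant to catch, which is exactly the pattern I found.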

Apart from these fundamental problems with filtering, I came across others that have the potential to cause harm if procedures aren't in place to prevent them.

There is a risk that the need to verify age with opt-out parental controls may result in adults being unable to access content, with the potential to marginalise those unable to get a credit card, drive or afford a passport. Or, by choosing parental controls at ISP level, parents may inadvertently find themselves unable to access information they need at short notice – a particular problem with maternity and health information. When parents have to turn off parental controls to function as parents, it brings into question the whole concept of parental controls.

We need information on what is blocked and why, procedures for reviewing the blocking of sites in a transparent and timely way, and the ability to find out which sites are blocked. Without these, there is a risk that blocking of sites, whether by mistake or due to differences of opinion, may affect not only businesses but everyone trying to access everyday information. Rather than empowering parents, badly implemented parental controls at ISP level have the potential to frustrate and disempower.

 

 

Image: CC-BY-NC-ND Flickr: Robert Carlos Pecino

The lesson we must never forget

Jon Fuller, anti-censorship campaigner with the Consenting Adult Action Network, gives his argument on what is wrong - in terms of privacy and civil liberties - with extreme pornography laws, and how cases like Simon Walsh's play out like a 21st Century ducking stool.

Following a brutal killing in 2003, in which the murderer (Graham Coutts) was found to have regularly accessed violent pornography online, a campaign was launched to make the possession of extremely violent imagery a criminal offence.

The last Labour government conducted a public consultation, in which most respondents explained the dangers of proceeding with legislation of the nature proposed. But the government pressed on; it was warned again of the dangers in the House of Lords, but the legislation (the Criminal Justice and Immigration Act 2008) was passed, criminalising possession of four categories of extreme pornography.

Two categories are easy to understand and, provided you know about them, easy to comply with – you must not possess any images that depict sex with an animal (bestiality) or a dead person (necrophilia). The other two categories were of particular concern to civil liberties campaigners: they criminalise possession of images depicting sexual violence that is life-threatening, and/or images depicting acts that could cause serious injury to breasts, anus or genitals.

Many will be tempted to say “that sounds reasonable”, but the Government categorically refused to spell out precisely what these clauses mean, saying that interpretation would be decided by the courts, on “a case by case basis”, over the years to come (the legislation came into force in 2009). Those of us concerned by this asked some simple questions - for example, does “serious injury” to genitals include a “Prince Albert” style of body piercing? We still do not know whether a nipple or genital piercing, or a tattoo, constitutes serious injury.

Since the legislation came into force, over 2,600 people have been charged, thousands more cautioned by police and over 150 convicted in court. This data indicates that large numbers of cases have yet to reach full trial:

Crown Prosecution Service data on people charged and reaching a first hearing in magistrates' court:

                                                          2008/9   2009/10   2010/11   2011/12
  S63(7)(a): life threatening                                  0         5        38        40
  S63(7)(b): serious injury to breasts, anus or genitals       0        52       132       102
  S63(7)(c): necrophilia                                       0         0         0         6
  S63(7)(d): bestiality                                        2       213       995      1171
  Total                                                        2       270      1165      1319

Ministry of Justice data on convictions:

                                                            2010   2011
  S63(7)(a): life threatening                                  0      3
  S63(7)(b): serious injury to breasts, anus or genitals       9     11
  S63(7)(c): necrophilia                                       0      0
  S63(7)(d): bestiality                                       48     67
  Total                                                       57     81

In August a barrister, Simon Walsh, ended up in court charged with possession of various material. He had apparently been sent an e-mail containing images of activities in which he had been a participant. The police were initially concerned that some could feature an underage male, and that fetish wear (a gas mask) could constitute something “life threatening” because it could restrict breathing. Other images included male-on-male “fisting” and “urethral sounding” (in which a rod is inserted into the penis to heighten sexual pleasure).

Now, this might all sound a tad unconventional - some might find it downright kinky, others might be horrified - but we in CAAN believed there was only one issue: did the images portray anyone under the age of consent? We could not understand why the other “extreme pornography” elements were prosecuted. Thankfully the utterly stupid suggestion that a gas mask - a device intended to save life - was “life threatening” was dropped before the trial, but the other two elements were put to a jury.

Not only did the jury accept evidence that the male in question was an adult but it also determined that the images depicting “fisting” and “urethral sounding” did not constitute serious injury to anus or genitals (they are not illegal to perform so how can imagery that depicts legal activities be criminalised?). During the process of proving his innocence every private, intimate detail of Simon’s sex life was aired in public. This being the UK, his career, his ambitions and his life plans were utterly destroyed. He seriously contemplated suicide, while weighing the implications of the options before him.

There is a lesson from this that we must never forget. When government shows arrogance and ignores intelligent, considerate and humane representations on new legislation, we must expect the worst. What we see in CJIA 2008 is the 21st-century equivalent of the ducking stool, where you must be utterly destroyed if you are to prove your innocence. Plead guilty and you will avoid publicity and humiliation for your family, and receive a lighter sentence.

You might think that CJIA 2008 represents the most extreme abuse of natural justice but it gets worse. Most will be appalled that images of bestiality even exist but poorly conceived and drafted legislation threw up another travesty of justice.

Last year a man forwarded a joke e-mail in what became known as the infamous “Tiger Porn” case. It featured a man in a tiger suit in simulated sex with a tiger, with the punch-line “it beats eating Frosties”. He too ended up in court, but was duly acquitted. Yet, despite these acquittals, the Crown Prosecution Service has categorically refused to update its guidance on what adults can and cannot choose to watch. So tomorrow someone you know might have their life plans destroyed if they send a viral joke or retain images relating to intimate body piercings or tattoos.

People into BDSM - and, post Fifty Shades of Grey, you might be forgiven for thinking that's most of us - are in particular danger if they view anything that depicts the goings-on between Anastasia and Christian. And remember one more point: if you get any of this wrong, if you make a simple error of judgement, you stand to face up to three years in prison, a £5,000 fine and being placed on the sex offenders register.

We must never forget the lesson that this teaches. We must never trust government to act sensitively, in the best interests of all; we must never trust the House of Lords to stop bad legislation; and we can never trust the enforcement agencies to protect us where Parliament fails. We have to stop bad legislation before it is enacted.

 

Jon Fuller is an anti-censorship campaigner with the Consenting Adult Action Network (www.caan.org.uk). He can be contacted at info@caan.org.uk

Image: freely available via Cornell University Library

Sock Puppets: A Necessary Evil

The publishing industry has faced a series of revelations about sock puppet reviews, authors praising themselves and slating ‘rivals’ under pseudonyms. Ian Clark looks at what this behaviour means for the rights of anonymity online.

The revelations regarding authors using sock puppets to post fake reviews and praise their own books have sent ripples throughout the publishing industry.  Ostensibly designed either to enhance their own sales or to damage a rival's, the practice inflicts serious damage on the symbiotic relationship between reader and author.

In response to the fake reviews, a number of authors (including Ian Rankin, Lee Child and Charlie Higson) signed a strongly worded letter condemning the practice and committing never to use such tactics.  Fellow author Brad Thor argued that:

“In addition to fake reviews being morally wrong, they’re incredibly harmful to our industry.”

There is no denying that this controversy has damaged perceptions of the publishing world.  However, whilst these revelations have led to much hand-wringing amongst the literati, the use of dubious tactics to get ahead of rivals is not particularly unusual.

In 2002, David Vise (a Pulitzer Prize winning correspondent for the Washington Post) appeared to have devised a neat trick to manipulate the bestsellers list - apparently buying his way onto the list.  Purchasing 20,000 copies of his book at a discount rate, Vise claimed he was doing so to sell signed copies via his website. This was somewhat undermined by the subsequent return of 17,500 copies for a refund.

It is also worth remembering that publishers are in the business of selling books and are not reluctant to employ underhand tactics to increase their profits.  Indeed, the big publishers purchase space in book store chains to ensure their products are located in prime sites throughout the store.  Just because the publishing world is supposedly filled with liberal, artistic types does not mean it is above gaming the system.

The onus, however, is on Amazon to manage interactions more effectively and thus restore confidence.  There is nothing to prevent them from requiring that only those who have purchased an item from Amazon can review it on the site.  Restricting reviews to items bought via the Amazon website would make it easier to link reviewers to individual accounts, with the added advantage of preventing the glut of irrelevant reviews of items before they are even released.  Amazon has the tools to manage the system more effectively; it is a question of whether it chooses to use them.
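The verified-purchase gate described above is simple in principle. As a hypothetical sketch only - the function and data names below are invented for illustration, and Amazon's real system is far more involved - it amounts to checking a review request against a purchase record:

```python
# Hypothetical sketch of a "verified purchase" review gate.
# Names and data structures are invented; this is NOT Amazon's
# actual implementation, just the principle the article describes.

def can_review(purchases, account_id, product_id):
    """Allow a review only if this account has bought this product."""
    return (account_id, product_id) in purchases

# Toy purchase record: a set of (account, product) pairs.
purchases = {("alice", "book-123"), ("bob", "book-456")}

print(can_review(purchases, "alice", "book-123"))       # genuine buyer: True
print(can_review(purchases, "sockpuppet", "book-123"))  # sock puppet: False
```

Because every allowed review is tied to a purchase, each one can be traced back to a paying account, which is exactly what makes anonymous sock-puppet reviews harder to post at scale.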

However, whilst limiting sock puppets and anonymous accounts is a relatively straightforward issue in terms of the purchase and review of goods from an online retailer, it is significantly more complicated when applied across the internet.  Complicated and illiberal.

Whilst the growth of social media has seen the topic raised with increasing regularity, debates around anonymity online date back long before the emergence of Facebook et al.  In 1995, Karina Rigby examined the issues surrounding anonymity and firmly concluded that “anonymity on the internet must be protected”.  Rigby points out that anonymity is a part of society and is therefore ‘unavoidable’.  As she goes on to argue, it is not as if the use of anonymity is a recent phenomenon: before the internet, anonymous phone calls and letters were hardly unusual.  But there are bigger, more significant reasons why we should resist the calls to remove anonymity online.

In her paper, Rigby also underlines that the societies in which we live can be:

“...extremely conservative, often making it dangerous to make certain statements, have certain opinions, or adopt a certain lifestyle.”

Anonymity enables people to confront issues that would otherwise result in their persecution (whether at the hands of society in general or the state) if their identities were known.  It is a particularly vital tool for people living under repressive regimes, as the ability to hide behind anonymous accounts can literally save lives and change societies.

The revolution in Egypt last year certainly underlines the extent to which anonymity can protect those fighting against repressive and corrupt regimes.  There is no doubt that without the ability to do so anonymously, Wael Ghonim would never have created the Facebook Page that played an important role in facilitating the overthrow of the Mubarak regime.  Whilst Facebook did initially pull the page for breaches of its terms (it was restored after alternative admins offered to take it on), the fact that it was possible to create it in the first place was crucial.  Anonymity provides the protection the dispossessed require to take on their oppressors.  Without it, they have no chance of challenging the status quo.

Despite the events in Egypt, Facebook appears determined to eradicate “fake accounts” from the network. Paul Bernal recently highlighted efforts by Facebook to encourage users to ‘snitch’ on their friends, asking users to confirm the identities of friends who appear to have fake accounts (it has since stopped doing so).  For those living under repressive regimes, the consequences of encouraging people to snitch on one another over their use of social networks are obvious (as Alber Saber's case in Egypt underlines). Of course, some might argue that by encouraging ‘snitching’ Facebook is itself behaving like a repressive regime, encouraging its ‘citizens’ to report on one another for failing to adhere to the rules.

The protection offered by anonymity clearly enables a certain degree of abuse - from authors faking reviews to those seeking to offend and distress through hate speech.  However, the advantages of protecting online anonymity far outweigh the disadvantages.  It allows the corrupt to be held to account, it ensures individuals are able to speak without fear of persecution from the state or their peers, and it enables the powerless to challenge repressive regimes. Rigby's conclusion is as relevant in 2012 as it was in 1995: anonymity on the internet must be protected.

 

Ian Clark tweets at @ijclark and blogs at infoism.co.uk/blog

Image: CC-BY-NC Flickr: Alex Brown

This is not (just) about Google

In February this year Google overrode ‘Do Not Track' preferences in Safari. Privacy International founder, Simon Davies, led a meeting at LSE last week on the impact of this escapade; Wendy Grossman attended and discusses the conclusions.

 We had previously glossed over the news, in February, that Google had overridden the "Do Not Track" settings in Apple's Safari Web browser, used on both its desktop and mobile machines. For various reasons, Do Not Track is itself a divisive issue, pitting those who favour user control over privacy issues against those who ask exactly how people plan to pay for all that free content if not through advertising. But there was little disagreement about this: Google goofed badly in overriding users' clearly expressed preferences. Google promptly disabled the code, but the public damage was done - and probably made worse by the company's initial response.

In August, the US Federal Trade Commission fined Google $22.5 million for that little escapade. Pocket change, you might say, and compared to Google's $43.6 billion in 2011 revenues you'd be right. As the LSE's Edgar Whitley pointed out on Monday, a sufficiently large company can also view such a fine strategically: paying might be cheaper than fixing the problem. I'm less sure: fines have a way of going up a lot if national regulators believe a company is deliberately and repeatedly flouting their authority. And to any of the humans reviewing the fine - neither Page nor Brin grew up particularly wealthy, and I doubt Google pays its lawyers more than six figures - I'd bet $22.5 million still seems pretty much like real money.

On Monday, Simon Davies, the founder and former director of Privacy International, convened a meeting at the LSE to discuss this incident and its eventual impact. This was when it became clear that whatever you think about Google in particular, or online behavioural advertising in general, the questions it raises will apply widely to the increasing numbers of highly complex computer systems in all sectors. How does an organization manage complex code? What systems need to be in place to ensure that code does what it's supposed to do, no less - and no more? How do we make these systems accountable? And to whom?

The story in brief: Stanford PhD student Jonathan Mayer studies the intersection of technology and privacy, not by writing thoughtful papers studying the law but empirically, by studying what companies do and how they do it and to how many millions of people.

"This space can inherently be measured," he said on Monday. "There are wide-open policy questions that can be significantly informed by empirical measurements." So, for example, he'll look at things like what opt-out cookies actually do (not much of benefit to users, sadly), what kinds of tracking mechanisms are actually in use and by whom, and how information is being shared between various parties. As part of this, Mayer got interested in identifying the companies placing cookies in Safari; his research methodology involved buying ads that included code enabling him to measure the cookies in place. It was this work that uncovered Google's bypassing of Safari's Do Not Track protections, which have been enabled by default since 2004. Mayer found cookies from four companies: two he puts down to copied-and-pasted circumvention code, and two - Google and Vibrant - he believes were deliberate. He believes the likely purpose of the bypass was to enable social synchronizing features (such as Google+'s "+1" button); fixing one bit of coded policy broke another.

This wasn't much consolation to Whitley, however: where are the quality controls? "It's scary when they don't really tell you that's exactly what they have chosen to do as explicitly corporate policy. Or you have a bunch of uncontrolled programmers running around in a large corporation providing software for millions of users. That's also scary."

And this is where, for me, the issue at hand jumped from the parochial to the global. In the early days of the personal computer or of the Internet, it didn't matter so much if there were software bugs and insecurities, because everything based on them was new and understood to be experimental enough that there were always backup systems. Now we're in the computing equivalent of the intermediate period in a pilot's career, which is said to be the more dangerous time: that between having flown enough to think you know it all, and having flown enough to know you never will. (John F. Kennedy, Jr, was in that window when he crashed.)

Programmers are rarely brought into these kinds of discussions, yet are the people at the coalface who must transpose human language laws, regulations, and policies into the logical precision of computer code. As Danielle Citron explains in a long and important 2007 paper, Technological Due Process, that process inevitably generates many errors. Her paper focuses primarily on several large, automated benefits systems (two of them built by EDS) where the consequences of the errors may be denying the most needy and vulnerable members of society the benefits the law intends them to receive.

As the LSE's Chrisanthi Avgerou said, these issues apply across the board, in major corporations like Google, but also in government, financial services, and so on. "It's extremely important to be able to understand how they make these decisions." Just saying, "Trust us" - especially in an industry full of as many software holes as we've seen in the last 30 years - really isn't enough.

Wendy M. Grossman’s Web site net.wars has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-BY-NC Flickr: Dystopos

Digital Classrooms in Wales

Owen Hathway, Policy Officer for National Union of Teachers Cymru, discusses the Welsh Government’s ambitions for digital classrooms and the problems with such plans as ‘Bring Your Own Device.’

A concern that is becoming ever more prominent for teachers in Wales is how the electronic dream of a digital classroom can be delivered in reality.  The majority of teachers in Wales will be able to draw up a list of lessons based on innovative plans delivered through laptops, notebooks, iPads, video cameras and other uses of the latest digital technologies.  The idea of a new generation of children falling in love with the written word through new media such as Kindles and iBooks, or children interacting with the world wide web in their own safe classroom space, cutting videos for YouTube or using social media to bring to life their passion for subjects, is something we would all wish to foster and encourage.

Back in March of this year, the Welsh Government-commissioned Digital Classroom Teaching Task and Finish Group published its review of digital teaching in Wales, Find it, make it, use it, share it: learning in digital Wales.

The NUT shares the ambition that is expressed in the report to ensure that teaching in Wales is at the forefront of innovation and use of new technologies.  Technology plays an ever more important role in society and the more learning in schools reflects the skills needed to empower individuals as they go into the workplace, the better.  Certainly we are supportive of the overarching objectives of the report that the processes of learning and teaching can, and must, take advantage of what digital technologies offer.  The report’s commitment to prioritising training and commissioning new resources are particularly welcomed by the profession.  Providing the opportunity for practitioners to develop new skills and increase their competence and confidence in using modern tools is a key component for the success of this strategy.

Unfortunately, the reality we face in Wales is the ambition to succeed without the resources to support it.  Last year the Welsh Government, in conjunction with local authorities, announced £1.4bn in funding for its school building programme, ‘21st Century Schools’.  There is no doubt this is a significant amount of investment.  However, it was scaled back from the £4bn originally outlined. It is becoming increasingly difficult for Welsh schools to focus on the need to use modern technologies in the classroom when those very same classrooms are in desperate need of repair.

There are anecdotal examples of schools that are already struggling to provide basic materials - pens, paper, sports facilities, crafts and even an adequate number of teachers to maintain reasonable class sizes, to name just a few financial constraints.  Often these provisions are available only because teachers are spending their own wages to provide them for their pupils.

There is little doubt that the use of technologies in Welsh classrooms is something that should be part of the norm, not an exception.  We know the major benefits that digital learning brings to education.  It offers a freedom to teachers that is otherwise hard to achieve.  No longer are they stuck in a central point as dispensers of information.  Instead, they are liberated to act as facilitators, working hand in hand with technology, to empower students to strive and achieve an important skill of working under their own initiatives. 

Teachers often report that working with new digital technologies to deliver lessons also ensures greater motivation amongst students, higher levels of self-esteem, greater use of outside resources, development of technical skills and greater levels of collaboration between peer groups.  What is more, we have some fantastic examples of schools in Wales leading the way on modern teaching techniques rooted in the digital classroom.  In many cases it is not the expertise, and certainly not the enthusiasm, that is missing, but simply the finance.

Welsh students, as the last available figures show, are underfunded by £604 per head in comparison to their counterparts in England.  That underfunding naturally has an immediate and recognisable impact on education provision in Wales.  While the Welsh Government continues to assert the somewhat flawed view that funding levels do not affect standards, it is harder to argue that they have no impact on the availability of up-to-date, fit-for-purpose technology.  The choices faced by some schools in Wales' most deprived communities are not between iPods and iPads but between pens and pencils.  We don't have teachers who struggle to work with Facebook or Twitter, but teachers who struggle with overcrowding and dilapidated buildings.

While the task and finish group's report suggested ideas such as a ‘bring your own device’ approach to technology in the classroom, many members fear this will lead to the alienation of children who do not have access to expensive devices. This is likely to become a more evident issue for families hit by low levels of employment and rising living costs.  The more deprived communities in Wales will inevitably have greater difficulty in developing a digitally focused learning environment if personal equipment is the basis of implementation.  Both within individual schools and across schools, the differing levels of equipment available could detract from the success of the scheme, and in some cases lead to stigmatisation of individual children and their communities.

The view stated in the report that a digital learning environment helps improve education standards, prepares learners for life and careers, and supports the Welsh economy is one NUT Cymru fully supports.  The Welsh Government’s Digital Wales Delivery Plan states its ambition is to ensure that “everyone is able to enjoy the benefits of digital technologies”. This ambition has to be matched by funding.  Wales doesn’t suffer from a lack of innovation, inspiration or dedication to delivering learning hubs for the modern generation, but we must provide the physical ability to make use of that good will.  Failure to do so will mean we will see yet another well-meaning strategy that becomes no more than empty rhetoric.

 

Owen Hathway is the Wales Policy Officer at the National Union of Teachers.  Having previously worked in a variety of communication and research roles for Plaid Cymru he took up his current position with the union in August 2011.  A graduate of the Aberystwyth University politics department, Owen has also studied towards qualifications with the London School of Journalism, Chartered Institute of Marketing and Chartered Institute of Public Relations.

Image: CC-BY-SA Flickr: BarbaraLN

Looking for a Job goes Orwellian

The Department for Work and Pensions are introducing changes to job seeking called ‘Universal Jobmatch’ and ‘Universal Credit’. Consent.me.uk explains the massive privacy impact of these new measures.

By late autumn the Government's Department for Work and Pensions plans to introduce a new Universal Jobmatch website for anyone seeking a job. The US company commissioned to deliver this service is Monster Worldwide, an online recruitment and technology services company infamous for numerous major personal data losses through hacking, including on the US equivalent of Universal Jobmatch, usajobs.gov.

Millions of part-time workers and unemployed people receiving welfare benefits will be mandated to register, bypassing all semblance of consent[i] and letting Jobcentre staff and external contractors have full access to all of their user activities: reading correspondence with employers, viewing the full content of CVs, pending and submitted job applications, job searches done and saved, feedback from employers, interviews offered and personal profiles. Jobcentre staff will also be able to attach job vacancy details to a user’s account, which the user must then apply for.

All of this is being driven by the plan to make all welfare benefits and related services digital by default, mandated under the Welfare Reform Act (2012), coupled with the introduction of Universal Credit. These welfare reforms come with unprecedented coercive powers, such as “claimant commitments”, including the core mandatory requirement to give evidence of spending a whopping 35 hours per week[ii] on jobsearch activities, with non-compliance leading to loss of welfare payments through sanctions of up to 3 years[iii] and/or the imposition of mandatory unpaid work(fare)[iv] for showing a lack of effort. Based on the evidence, we believe that the unemployed and part-time workers[v] receiving welfare benefits will be mandated to register.

This snoopers’ charter of a service will be integrated with many central government, local government and private sector databases, covering wages paid, hours worked, credit ratings, the electoral roll and tax liability, to name a few.

What about privacy and consent?

Under current Jobcentre rules and practice, welfare benefit recipients - specifically those on Jobseeker's Allowance - are only required to show (not hand over) a record of jobseeking activity, which can be just a short diary of jobs applied for and activities undertaken to find employment.

Currently, the collection and retention of actual correspondence with employers, job applications, emails, personal email addresses and telephone numbers requires informed consent, and as such can be declined without consequence. However, there is evidence that many of the Government's external contractors are subverting informed consent and forcing the sharing of such personal data through unlawful threats of benefit sanctions:

“Claimants [were] bullied into signing an agreement to supply prospective employers’ details, for the provider to claim a job outcome payment.”

Source: National Audit Office May 2012 report ‘Preventing fraud in contracted employment programmes’

Privacy invasion

Whilst some may say this level of privacy invasion is proportionate given the conditionality of welfare benefits, that does not address the free, no-cost access to initially anonymised personal data granted purely on the pretext of being an employer. Even though candidates can choose whether to release their full CV or complete a standard job application form, people claiming Jobseeker's Allowance or Universal Credit will have a mandatory, tracked, sanctionable obligation to apply to any employer who invites them to. The contract documents indicate the only verification needed to open an employer’s account will be a landline or mobile phone number and a verified postcode.

Privacy impact?

As no dedicated privacy impact assessment has been undertaken for Universal Jobmatch, it is difficult not to conclude that it will contravene the Data Protection Act in terms of consent and engage the Human Rights Act (ECHR Article 8) principle that “Everyone has the right to respect for his private and family life, his home and his correspondence.” Nor has anything been published about how the technology will be used, by the Government and external contractors, to view the activities of people not claiming any welfare benefits.

A quick insight into the likely privacy impact for all users of Universal Jobmatch can be found on the Monster-commissioned website usajobs.gov. Under its Terms and Conditions - and curiously not its Privacy Policy - it says:

“All access or use of this system constitutes user understanding and acceptance of these terms and constitutes unconditional consent to review, monitoring and action by all authorized government and law enforcement personnel. While using this system your use may be monitored, recorded and subject to audit. ”

 

consent.me.uk was developed to address reported exploitative bullying practices of external DWP Welfare to Work(fare) Contractors/Providers and inform people of their Data Protection rights of consent and more general welfare rights. The author wished to remain anonymous.



[i]     Currently the European Commission is consulting on introducing a tougher, unified EU-wide Data Protection regime, including strengthening the definition of consent.

[ii]    The Jobseeker’s Allowance Regulations 2012:   "...we propose that claimants are expected to have spent 35 hours a week...looking or preparing for work."

[iii]   Higher-level sanctions. Draft The Jobseeker’s Allowance (Sanctions) (Amendment) Regulations 2012 : "156 weeks, where there have been two or more previous sanctionable failures by the claimant"

[iv]   Mandatory Work Activity aka Workfare, is a month’s full time unpaid work, whilst Jobseekers Allowance is paid.
"It helps them re-engage with the system, refocus their job search and gain work-related disciplines. Failure to complete a Mandatory Work Activity placement without good cause will result in the sanction of Jobseeker's Allowance for three months. This will rise to six months for a second breach. From later in 2012 a three year fixed sanction will apply for a third violation."

[v]   Powers to make Universal Jobmatch mandatory are: Section 17 of the Welfare Reform Act (2012) Work search requirements

 Jobseeker’s directions

 Jobseeker's Agreement

Via external DWP contractors working under its 'Framework for the Provision of Employment Related Support Services'

The Jobseeker’s Allowance (Employment, Skills and Enterprise Scheme) Regulations 2011 give contractors delegated powers to make work search activities mandatory.

Image: ‘Reflected’ CC-BY Flickr: boliston


ORGZine: the Digital Rights magazine written for and by Open Rights Group supporters and engaged experts expressing their personal views

People who have written us are: campaigners, inventors, legal professionals , artists, writers, curators and publishers, technology experts, volunteers, think tanks, MPs, journalists and ORG supporters.
