The Halifax Regional C@P Association: The importance of youth, community and collaboration in the information age.
Laura Conrad looks at the importance of computer literacy education amongst students, with particular focus on the Halifax Regional C@P Association in Canada.
Most of them are young, but some are much younger than others. They work alone or in teams, from larger cities to the smallest rural corners of the region. Some of them will never meet each other in person, but they work together every day, towards achievements that are shared by communities across the nation and the world.
These are the youth interns of the Halifax Regional C@P Association (HRC@P), a non-profit organization based in Nova Scotia, Canada, that takes a grassroots approach towards eliminating digital divides. Because the majority of interns are high school or first-year college students, the importance of their positions is often underestimated; however, anecdotal and statistical evidence shows young adults to be the most computer-literate demographic of today.
Despite the many ups and downs the organization has faced over the years, Halifax’s C@P program highlights an issue of global importance: the need for community collaboration in addressing the complexities of the information age.
Officially founded in the year 2000, HRC@P’s original intention was to create a network of community sites providing internet access, harnessing information technology for the social and economic benefit of individuals and communities alike. Despite the many twists and turns that unfolded over its 12 years of existence, the program continues to run successfully, to the great benefit of many social groups and individuals.
The program first intended to fund up to 240 sites at various libraries, schools, recreation centres and other popular community areas. Some sites offer special services tailored to particular groups, such as seniors, youth at risk, individuals at risk of homelessness, and those with low literacy skills or learning disabilities.
Over the years, HRC@P has repeatedly faced the threat of funding cuts. Each C@P site has typically received between $2,500 and $4,000 in funding, from both the federal and provincial governments. Each site determines where the bulk of the funds will be spent, depending on the particular community’s needs. This is what makes each site unique, and allows the success of the program to be shaped differently by each community.
In 2012, the federal government withdrew its contribution to HRC@P, a $650,000 grant from Industry Canada. The provincial government, however, continued to commit its portion of the grant (approximately $348,000), allowing HRC@P to continue to employ approximately 200 students over the summer.
The role of the youth intern
Despite the funding cuts, some C@P sites have continued to flourish. Much of this depends on a site’s distribution of resources: the most successful sites were those with the financial resources to hire students or extra staff, as opposed to relying on support from volunteers.
This highlights the importance of the role of the youth intern. The youth internship program is a fundamental component of HRC@P as it serves more than one purpose for communities; it ensures there is a staff person to administer the C@P program and to cater to the computer literacy needs of patrons, while also providing an opportunity for a young person to gain and leverage computer literacy skills before entering the workforce. Young people use the internship as a means to connect to their communities in different ways. Some have developed innovative ideas to help the organizations they’re partnered with, such as by creating social media strategies or by holding tutorial sessions at their sites to teach basic web and software skills.
The interns work with minimal supervision, with most reporting done via digital networks: they connect with their site supervisors during their shifts while also submitting weekly reports to those in management positions with HRC@P. Interns also network with each other, sharing stories over Facebook Groups, Skype chats and blogs. By the end of each internship, students have gained a great deal of experience facilitating communication both internally and externally for a non-profit organization, using online tools to leverage their organization’s digital presence and soliciting additional funding on its behalf.
Community collaboration vs. digital divides
C@P has been a successful program for several reasons, one of them being that its foundation is built upon community collaboration. The desire to learn is leveraged by the opportunity to connect with another individual, to share an experience from which both parties can grow. Many members of any given community are motivated to improve their computer literacy skills because doing so allows them to become an integral part of a community operation whose success depends on their participation.
Another reason why C@P has been so successful as a program is that its goals are focused on the growth of computer literacy and digital education. The nature of technological growth in western society is now such that the younger generation is in a position to thrive; this is a result of the wide adoption of technology in public education systems. According to a report recently released by Pew, 92 per cent of teachers have said the internet impacts their teaching abilities. In the same report, 73 per cent of teachers said their students use mobile devices to complete assignments, while 45 per cent use digital e-readers and tablets.
With this kind of education, young people are more equipped to address digital literacy issues within the community than ever before, allowing collaborative programs like C@P to thrive. Despite the harshness of the federal funding cuts last year, the program is still reaching its full potential because of the efforts and achievements of its youth interns. Like so many new developments in the digital world, the future potential of HRC@P is largely dependent on the younger generation.
When examining the benefits the program has had in Nova Scotian communities, it becomes clear why governments should continue to fund initiatives like HRC@P; it not only guarantees a culture of technology-literate, empowered youth, but also allows all members of a community (even its most vulnerable) equal opportunities for progression in the workforce.
Wendy Grossman looks at the issue of privacy policies, and suggests that the system must be 'fixed' in order for users to completely understand what information they are signing away.
The modern usability movement as it applies to computer software and hardware design began in 1988 when Donald Norman published The Design of Everyday Things. Norman, as he's patiently retold many times since, was inspired to write that book by six frustrating months in England, where he was constantly maddened because nothing, not even light switches, worked logically. His most recent book, Living with Complexity, looked at the design of complex systems, trying to pinpoint how to make the services we navigate every day less frustrating.
I was thinking of this recently, when the Open Rights Group hosted a meeting on the mid-May Sunday Times story that mobile network operator and ISP EE was sharing detailed customer data with the market survey company Ipsos Mori. EE and Ipsos Mori sent representatives, as did the Information Commissioner's Office. Essentially, they said a small pilot project had been misunderstood.
Privacy is a complicated issue because even experts do not have good answers to questions like how big a risk over what period of time is posed by the disclosure of a particular set of data. We know this much: today's "anonymized" data is tomorrow's reidentified data as more and more datasets come online to help triangulate it, much the way today's strong cryptography will be weaker tomorrow as computational power continues to grow. The ability to make accurate assessments is complicated by unknown externalities. How many users remember what they posted under which terms and conditions five years ago? And users themselves have varying understanding of what they think is happening.
We were deep into privacy policies and user consent when I began to imagine what these might look like under a more stringent data protection law. Will it be like today's omnipresent cookie authorization requests? Click OK to post this data. Click OK to share this data with our partner who just wants to sell you stuff. Click OK to let us reuse this data to personalize the video on the billboard you're about to pass. Click OK to…you mean, you didn't want to send your personal data to the US National Security Agency?
I am not suggesting we fix the users. The users aren't broken. Fix the *systems*.
The problem, someone pointed out to me afterwards, is that a lot of people think that their government knows everything about everyone anyway. But there's a big difference between that casual cynicism and seeing proof. Right on cue came the next day's newspaper headlines. The Guardian and the Washington Post say that under a previously unknown program called PRISM the NSA has direct access to the systems of US-based companies: Facebook, Google, Apple, AOL, Skype, PalTalk, and YouTube. (A number of these companies are quoted denying they have given such access.) Direct access as in, walk right in and pick the data they want. Also: the NSA is collecting the phone records of millions of customers of Verizon, one of the biggest US telcos. And: the UK's GCHQ has had access since 2010.
Worse, US government politicians are defending it: Democratic senators Harry Reid (Nevada) and Dianne Feinstein (California) in the Wall Street Journal, President Obama in the Guardian. Charles Arthur has a helpful and rational decoding of all this and Nick Hopkins explains the UK's legal situation with respect to phone records.
At Computers, Privacy, and Data Protection earlier this year, the long-time privacy activist Caspar Bowden discussed the legal and technical framework for surveillance-as-a-service and the risks for EU users of cloud computing (which includes social media sites). Essentially, if there is a back door installed in these systems, "interception" is no longer a useful concept, and encryption is no longer a useful defense. Inside those data centers, data is perforce decrypted, and legally authorized direct access to stored uploaded data under the Foreign Intelligence Amendments Act (since the Fourth Amendment does not protect non-US persons) is not interception of communications.
Before the Internet, it was pretty simple to avoid being surveilled by a foreign country: you just didn't go there. So the first thing we need to make explicit in users' mental models is that uploading photographs and personal data to sites like Google and Facebook is digitally entering the US. We could start maybe by requiring large pictures of the services' national flag.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted irregularly to the net.wars Pinboard - or follow on Twitter.
Aaron Stein looks at the importance of social media during the protests in Turkey.
After weeks of protests, Turkish Prime Minister Recep Tayyip Erdogan met with members of the Taksim Solidarity Group – an umbrella group for the demonstrators in Gezi Park – at his official residence in Ankara last night, in order to come to an arrangement to resolve the current crisis. The meeting lasted for several hours and the two sides appeared to have struck a tentative agreement to end the protests.
The meeting came on the heels of more tough talk from the Prime Minister, as well as an aggressive government-led information campaign to relay the Justice and Development Party’s (AKP) version of events to the Turkish and international audience. The AKP has relied heavily on closely coordinated talking points that attribute the protests to a plot by “foreign circles” uncomfortable with Turkey’s economic and political progress. The government relayed its version of events via Erdogan’s frequent speeches, Turkey’s state-run Anadolu news agency, interviews with friendly journalists, and social media.
The counter-information offensive was designed to combat the protest movement’s successful use of social media and to try to cast the protesters as marginal. To be fair, some of the flags flying in Gezi and Taksim suggest the penetration of a slew of radical leftist organizations, whose causes belie the portrayal of the protesters as apolitical youth solely intent on carving out a more democratic future. However, those groups are a small minority of what is largely a leaderless spasm of anger at the AKP’s rule.
The protests began as a small sit-in style movement to prevent the razing of Gezi Park – a small park adjacent to bustling Taksim Square. Protesters and some journalists chronicled the protests daily on Twitter, posting pictures of excessive tear gas use under the hashtag #dailygasreport. The Twitter campaign gradually gained traction, anger swelled, and eventually boiled over after Reuters photographer Osman Orsal captured the ghoulishly iconic image of the elegant woman in the red dress being sprayed at close range with excessive amounts of tear gas.
After the police did manage to clear Gezi Park, the subsequent brutal put-down of the peaceful sit-in at Taksim Square, which touched off two days of intense street clashes, was once again captured and broadcast to the world via social media. As the clashes unfolded, Turkish citizens had little choice but to follow the events live via Twitter or Facebook. Turkish media outlets opted not to broadcast the events, choosing instead to air their regularly scheduled programming. CNN Turk, for example, opted to air a documentary on penguins on Saturday evening, even though intense clashes continued in cities throughout Turkey.
Media outlets in Turkey are owned by large business conglomerates, dependent on government contracts for the financial well-being of their numerous subsidiaries. While the tight relationship between the media and the party in power is not new in Turkey, the emergence of social media has allowed people to circumvent the government’s attempts to stifle news coverage.
The AKP, while having embraced Twitter to spread its own version of events, has not reacted well to the use of social media. In 2012, Ankara Mayor Melih Gokcek sued 600 people on Twitter for insulting him. Moreover, after the tragic bombing of Reyhanli, a town near the border with Syria, in May 2013, a local court banned coverage of the events and the publishing of images.
Nevertheless, images were quickly uploaded to social media sites and spread via numerous retweets, independent blogs, and Facebook accounts. For example, Elliot Higgins, the blogger behind the excellent Brown Moses Blog, created two databases of photographs and videos of the tragedy, in spite of the ban in Turkey and the local media’s inability to publish its own images. The Reyhanli coverage, therefore, foreshadowed the power of social media to circumvent the AKP’s media bans.
During the protests, Erdogan labelled social media a “menace” and a device to “spread lies.” Turkish police have arrested 25 people in the coastal town of Izmir for allegedly using social media to incite violence. In tandem, the Transport Ministry is now investigating Twitter, claiming that “[it] doesn’t have a legal basis in Turkey. They take ads but they do not pay tax in Turkey. It should establish a company compliant with the Turkish Commercial Code, like Facebook and YouTube.”
Despite these efforts, the number of social media users in Turkey continues to increase. However, rather than embrace the use of this new medium as a tangible expression of freedom of speech – which is guaranteed under Article 26 of the Constitution – the government appears intent on finding legal justification to prevent the use of Twitter. The AKP’s current legal effort to stifle Twitter undermines its campaign rhetoric and its carefully cultivated image as the party responsible for the deepening of Turkish democracy. Thus, while the AKP may have a point about a small sliver of the protesters not being real democrats, its handling of the crisis, as well as its current effort to further curb freedom of speech, is hardly representative of a party intent on deepening personal freedoms.
The AKP has a responsibility to protect the right to freedom of expression and should not shy away from embracing the growing use of social media. The party is the first in Turkey to have to govern in an environment where an increasing number of journalists, academics, and interested citizens are using social media to relay their thoughts about current events in real time. Thus, the party has a choice: It can either embrace freedom of expression and protect the use of social media, or it can opt to invent legal justifications to curb its citizens’ rights to freedom of speech.
If the government continues to pursue the latter of these two options, the AKP will have failed to embrace and embody its numerous campaign pledges to strengthen Turkish democracy. Moreover, it will have established a worrying precedent for future parties to follow, should they be faced with a similar protest movement.
Aaron Stein is a doctoral candidate at King's College London and a researcher specializing in Turkish politics at the Istanbul-based Centre for Economics and Foreign Policy Studies. He blogs at Turkey Wonk. Follow him on Twitter: @aaronstein1.
Milena Popova looks at how Kindle and Amazon are attempting to revive the world of fanfiction
So Amazon has decided to boldly go where… quite a few people have tried to go before actually, in its recent move to try to monetise the creative talent (or otherwise) of the fanfiction community. If you hang around fandom long enough, you realise that roughly every seven years someone pops up who thinks there’s a pot of gold at the end of the fandom rainbow, with this most recent effort very likely prompted by the success of the Fifty Shades of Grey trilogy which started life as a piece of Twilight fanfiction.
What differentiates Amazon from its predecessors in this field is that it has actually acquired the rights to – so far – three pieces of creative real estate. US-based fans of “The Vampire Diaries”, “Pretty Little Liars” and “Gossip Girl” will soon be able to write certain types of fanfiction for these shows and books and try to flog them to fellow fans with Kindles.
I say certain kinds because Amazon places quite a few restrictions on what you can and can’t publish as part of its Kindle Worlds initiative. Crossovers (works building on two or more existing universes, like Doctor Whooves – a mashup of Doctor Who and My Little Pony) are a big no-no, probably because of the rights headaches they would entail. Pornography and “offensive descriptions of graphic sexual acts” - a definition of which will presumably be expanded upon in the detailed content guidelines provided by each licensor - are also banned from Kindle Worlds. Given the prevalence of both crossovers and explicit erotic content in fanfiction, these choices on Amazon’s part are both understandable and likely to get the project dismissed outright by a large proportion of the fanfiction community. Though according to an Amazon spokesperson “Fifty Shades of Grey” wouldn’t count as pornography under their definition as it depicts “consensual sex between adults”. Amazon may be in need of a dictionary. Offensive content, including but not limited to “racial slurs” and “excessive use of foul language”, is not permitted either. That’s right, Amazon wouldn’t publish the works of Mark Twain or Irvine Welsh.
Other mechanics of the scheme are also interesting. While Amazon state in their content guidelines that they do not accept poorly formatted books, quality control in this area is unlikely to include the services of a professional editor. Amazon Publishing is of course already full of poorly written, badly spelled crimes against literature. A little purple prose set in an existing universe, written by a thirteen-year-old figuring out the mechanics of writing, is hardly going to make a difference here. Except when it comes to fanfiction, it will: there are already countless sites all over the internet where fans can get such stories – as well as some excellent work that could easily compete with published authors – for free. From a reader’s point of view, Kindle Worlds hardly provides any added value in exchange for your $3.99.
And what’s in it for writers? Well, for anything longer than 10,000 words, Kindle Worlds will be offering royalties of 35% of the customer sales price. Shorter works of 5,000 – 10,000 words get you a royalty rate of 20%. What’s notable here is the copyright deal: in theory you keep the copyright to any original, copyrightable elements of your work. In practice, the minute you submit your story to Kindle Worlds, Amazon gets an exclusive license to it, and can then grant a license to your copyrightable elements (e.g. a new character) to the original licensor (i.e. the rightsholder of the universe you’re writing in).
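To make those tiers concrete, here is a minimal sketch of the royalty arithmetic as described above. The function name and the sample word counts and prices are illustrative assumptions, not Amazon’s actual terms; in particular, treating works under 5,000 words as earning nothing is an assumption based on the tiers quoted.

```python
# Sketch of the Kindle Worlds royalty tiers described above:
# 35% of the customer sales price for works over 10,000 words,
# 20% for works of 5,000-10,000 words.

def kindle_worlds_royalty(word_count: int, sale_price: float) -> float:
    """Return the author's royalty on a single sale, assuming works
    under 5,000 words fall outside the scheme entirely."""
    if word_count > 10_000:
        rate = 0.35
    elif word_count >= 5_000:
        rate = 0.20
    else:
        rate = 0.0  # assumed: too short to qualify
    return round(sale_price * rate, 2)

# A 15,000-word story at the $3.99 price point mentioned above
# earns the author $1.40 per sale.
print(kindle_worlds_royalty(15_000, 3.99))
```

Compare that with the 70 per cent of the sales price a self-published Kindle author can earn on the same $3.99 book, which is the arithmetic behind the “file the serial numbers off” option discussed below.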
The only way this can look attractive is if you haven’t been in fandom long enough to know your way around it; to know, for instance, that much like EL James and Cassandra Cla[i]re you can easily file the serial numbers off your fanfic and suddenly make 70% of the customer sales price rather than 35%; to know that you’d be competing with work available for free; to know that what you’re doing counts as transformative under US law, and thus the fact that Amazon holds the rights to the universe you’re writing in probably isn’t a huge amount of added legal value to you either.
Which brings us to the choice of properties Amazon has decided to partner with for Kindle Worlds. As has been pointed out elsewhere, one of the motivations behind the three particular worlds Amazon has licensed is that Alloy, the company behind all three, is a book packager creating cookie-cutter content, the rights management for which is probably easier than for works created by individual writers and then sold to media companies.
Yet, taking this from a different angle, they look like strange choices. “The Vampire Diaries” is perhaps the most obvious one, jumping on the bandwagon that brought us “Twilight” with its 200,000+ fanworks across the two major fanfiction archives (Fanfiction.net/FFN and Archive of Our Own/AO3). Yet “The Vampire Diaries” barely has 30,000 fanworks associated with it. “Gossip Girl” has a grand total of 483 on AO3 and zero on FFN, and “Pretty Little Liars” has just under 5,000 across both archives. There are more fics about the hung (ahem) Westminster Parliament of 2010 than in some of these fandoms. Thriving communities these are not.
What the three “Worlds” do have going for them is that they are all aimed at teenage girls and young women – precisely the kind of people who are likely to not have been in fandom for long enough to know their way around. This is where Kindle Worlds does potentially pose a threat to the fanfiction community. By convincing kids that the “right” and “legal” way to publish and read fanworks is through a paid-for, restrictive service, Amazon has the opportunity to shape the idea of what fandom is and how it works for a whole new generation of fans.
This is where the fanfiction community needs to step up. Yes, it’s easy to dismiss Kindle Worlds as the latest in a series of poorly thought-out attempts to cash in on fanfiction, and yes, we’ve collectively seen off plenty of the predecessors. However, what Amazon may actually be doing is deliberately sidestepping the existing community in favour of changing the game for the next generation. With any luck, the next generation will know how to use Google, and will find the free, unrestricted, and often deeply strange world of the fanfiction community before they find Kindle Worlds; but it never hurts to reach out and put up some signs pointing in the right direction. The Organisation for Transformative Works is already doing a fantastic job here, with projects ranging from the AO3 archive to legal advocacy, academic study of fandom, and the preservation of fandom history and at-risk fannish works. Now would be a good time for the community to further rally behind them.
Wendy Grossman looks at the EU's plans to update the data protection directive.
It's very difficult to gauge the progress of the EU's attempt to reform the data protection directive, whose text is due to be agreed by the end of this year. Basically, it comes down to the difficulty of understanding what is going on in EU government at any given time. There seem to be more than 4,000 amendments (not exaggerating), an endless succession of committee votes, and little way to understand their order of precedence. Couple that general confusion over the EU's legislative process with the fact that a Mad Man trying his hardest could not have come up with a term that sounded less engaging, and you have a subject that fights to get mainstream press attention.
At the beginning of the process, which will take until 2014 to complete, it hardly seemed to matter. A bunch of European regulators put forward plans to update the existing directive. The claim that reform was necessary seemed logical enough, since the directive was passed in 1995, when the Internet had only just been opened to commercial traffic, the Web was still a bunch of text pages listing links to other text pages, and the founder of Facebook was 11 years old. Yet what's opened up in the months since is the possibility that instead of a few tweaks and updates we will get the substantial weakening of a law that offers European citizens some redress of the balance of power between themselves and the large organizations they transact with, often perforce.
The 1995 data protection principles have held up remarkably well, in large part because they *are* principles and not restrictions on specific technologies. Talk about robots and algorithm-driven decision making, for example, to a data protection expert and they're likely to see little difficulty in applying the principles to constrain potential damage to consumers and allocate liability. In that sense, the big change since 1995 isn't the advent of large, data-driven companies but global interconnection. In a world in which a public company the size of Netflix is built on Amazon's cloud services and, as Frances Cairncross predicted in 1997, distance is dead, the data you entrust to your local solicitor may be stored just about anywhere. How and where data may flow is one of the most contentious issues in the debates over reform, along with requirements for data breach notification.
Member states were required to transpose the directive into national law by October 1998 (the year Google was founded). By early 1999, as I see from my February 1999 piece for Scientific American (TXT), Simon Davies, then the executive director of Privacy International, went so far as to predict a trade war when US companies found themselves blocked.
“They fail to understand that what has happened in Europe is a legal, constitutional thing, and they can no more cut a deal with the Europeans than the Europeans can cut a deal with your First Amendment," he told me at the time.
Ah, yes, well, that was then. The EU and the US went on to negotiate a safe harbour agreement, and when the US wanted Passenger Name Record data the EU caved. Critical reports, such as this one from 2008, pop up in a search, and despite EU law, the US's big data companies are demonstrating accelerating growth in the EU as elsewhere.
The EU law has been widely emulated. In 2000, Canada passed its equivalent law, PIPEDA. Meanwhile, the 2000s trend toward outsourcing gave countries like India and the Philippines powerful motivation to copy the EU's data protection principles so that they could sell call centers and other services to the EU. The US remains the outlier, stuck on its 15-year-old insistence on a free market approach - only now it has much bigger companies to finance lobbying efforts.
And there has been plenty of lobbying, both traditional and copy and paste. The latest, as the European Digital Rights Initiative documents, is questionable evidence built on assumptions that have no quantifiable basis.
It's a curious dissonance, one I wish someone would study in a PhD dissertation, that data protection law has spread alongside increasing surveillance. Last week, Slate, under the influence of former Microsoft European privacy chief Caspar Bowden, argued that some amendments to the data protection directive have been written with US surveillance powers specifically in mind. Slate cites a report Bowden co-authored in January (PDF) studying the issues relating to cloud computing in the EU. Among the concerns raised by the report is the potential for the loss of control over the data stored in the cloud, as well as the fact that US companies offering cloud services are subject to the PATRIOT (2001) and the Foreign Intelligence Surveillance Amendments (2008) Acts. In other words, the US claims surveillance rights over EU citizens.
In other words: this dull-sounding labyrinthine process could cost EU citizens rights currently thought to be indelible. We'd better pay attention.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Stories about the border wars between cyberspace and real life are posted throughout the week at the net.wars Pinboard - or follow on Twitter.
Loz Kaye, Leader of the Pirate Party UK, looks at why the Snoopers' Charter has crawled its way back into the political agenda, and why it may not solve the problems that politicians are hoping it will.
Looking at the news, Britain feels quite a grim place at the moment. From Woolwich to Wales, high profile murders have left us wondering who we are as a nation - and how we stop such atrocities in the future. Over the years I can't count the times I have looked away from the tabloids in the local Co-op, with their haunting repeated images of the latest missing or murdered child, wishing it would just stop.
The responsibility for trying to make it stop falls, to a great degree, on our politicians. It seems many have the same wish: that there would be one easy thing to do that could make it go away. As so often, technology is given the blame: it's the Internet's fault, and Google's "deadly web of poison and hatred", as the Daily Mail would have it. From casual use of computers and phones, it is tempting to think there is an easy off switch. Basically the only concrete thing the government and opposition have offered as a response to recent events is surveillance, web blocking and filters.
In the immediate aftermath of the tragic events in Woolwich the likes of Theresa May and Labour's Lord Reid were quick to call for the return of the Snoopers' Charter. The implication was that its blanket surveillance powers would somehow have prevented a thuggish street attack, with the Home Secretary claiming that it was part of giving law enforcement "the tools that they need".
However, no one has demonstrated in a concrete way how the Communications Data Bill would have prevented this particular incident, with security service sources saying in fact it would not have helped. Lord Reid seems to suggest that increased blanket powers would deal in a general way with the great nebulous threat he paints. This is despite, for example, Danish police recently reporting that data retention has not helped them; in fact the information is described as "unusable".
This boils down to political opportunism, a chance to take a pot shot at all of us who have fought to protect civil liberties. Even MI5 sources characterised this as cheap.
The other side of the Internet blame game since Woolwich has been laying the responsibility for radicalisation at the door of "Sheikh Google", with impressionable young people creating their own cut-and-paste ideology. This is hardly a new approach; the 2011 Prevent strategy review to deal with countering domestic extremism declared that "Internet filtering across the public estate is essential".
Even so, the very same document concedes that there is not the evidence to back this up nor is there the capacity to do it: "We do not yet have a filtering product which has been rolled out comprehensively across Government Departments, agencies and statutory organisations and we are unable to determine the extent to which effective filtering is in place..." The situation on the ground is fundamentally the same as two years ago. No new powers over the Internet were called for when Theresa May said it was "a strategy that will serve us well for many years to come". What has changed since then, other than a desire for sound bites about hate preachers?
Similar calls have come in the wake of the most distressing crimes most of us can conceive of: the death of young children. Once again, the speed with which some commentators have used specific cases to push a particular agenda is profoundly unsettling. This has also come from quarters which should frankly know better. The Guardian editorial that wilfully mixed together legal images and records of attacks on children to make a case for "banning Internet pornography" (however they think that might be achieved) was deeply irresponsible.
Let's be clear. Images of attacks on children are evidence of crimes. I, and society at large, expect these crimes to be investigated; that has always been the case, and nothing has changed about that.
However, there is no way to get from that to calls like John Carr's for Google to change its default settings, or the more diffuse thundering from the tabloids that 'something should be done'. Agreeing that you are 18 plus is hardly a high barrier, and it is not even likely that this will happen in the way Carr describes. Nor is there evidence that further restrictions would be productive. Much is made, in relation both to extremism and to pornography, of the increasing ubiquity of the Internet and the availability of material. But there is no demonstrable surge in sexual assaults attributable to this factor.
Moreover, where blocking has been tried it has been found to be ineffective: in the Netherlands, for example, the Internet Safety task force found filters did not "contribute to the jointly formulated goal and therefore cannot be employed effectively". While there is no evidence that blocks benefit the social good, what we do know is the collateral damage such attempts cause. ORG has clearly demonstrated the effects of over-blocking on mobile networks.
David Cameron has been promising "good, clean WiFi" in public spaces to give parents peace of mind. But he is not in a position to offer any such thing. We should not be in the business of outsourcing moral choice, nor encouraging parents to think it is possible, let alone desirable. Nor should we focus on just one part of culture and society, however fashionable it is to hold forth about the web and social media at the moment. I haven't seen calls on the Publishers' Association to somehow make bookshops default 18 plus following the 50 shades of profit their members have made from erotica. Even if we wish it otherwise, the uncomfortable truth is this. Humanity does not have factory settings. There is no button to push to make evil stop.
There are always a huge number of complex factors that feed into complex social problems. The Prevent strategy highlights a range of settings which are important in addressing radicalisation - the criminal justice system, schools, universities, health delivery, faith institutions and organisations, prisons and probation. One can say the same about attacks on children, and of course many institutions have a role to play in combating them, perhaps not least the Catholic Church.
To make the Internet the key factor is wrong-headed. Two major elements identified in Prevent as to why people are attracted to extremism are being in a lower socio-economic group, and that extremist views are "significantly associated ... with experience of racial or religious harassment".
It is vital that, as a digital rights movement, we do not just protect our own interests while taking no wider interest in the society in which we take part. That would rightly lay us open to the charge of being short-sighted and anti-social. But of course poverty, abuse and racism are difficult to deal with. It is far more attractive for politicians to blame the Internet when they are under pressure from the tabloids. It's simpler to hold out promises of magic technological solutions even if they have no basis in reality. In austerity Britain it's cheaper to make social policy Google or BT's problem, and at their expense. But this is lazy thinking, and worst of all it will do nothing to address the full range of causes of some of our most worrying problems. We may yet come to pay dearly for current politicians' lack of imagination.
Professor Douwe Korff gives his thoughts on the ICO's letter to the Ministry of Justice on the 'Data Protection Regulation'.
The "Data Protection Regulation" is currently being discussed by European policy makers. We think it could offer better privacy protection and give people more control over their data, which is much needed. The Ministry of Justice and the Information Commissioner's Office have both expressed concerns about the proposals, however, suggesting the new law could be too burdensome. The Information Commissioner recently wrote to the Secretary of State Chris Grayling at the Ministry of Justice setting out his overarching concerns.
Here, Professor Douwe Korff gives a quick, off-the-cuff commentary on the letter.
(You can read the ICO's letter here)
After paying some lip service to the importance of data protection, this is a typically negative attitude by the ICO to any worthwhile data protection regime. Here are my specific comments on his main points of criticism:
"...too much emphasis on punishment instead of awareness raising and education"
Read: the ICO wants to continue with its useless "let's sort this out between friends" approach to big business (just as HMRC deals with big corporations). It is not totally toothless, but it basically refuses to bite (other than in one or two show cases against local authorities that repeatedly lose millions of records).
"only data breaches that pose 'significant risk' should have to be reported to the ICO, otherwise it would cause too much work"
Comment: This would leave it to businesses themselves to assess if there is such a serious risk that they should report their own failures. It would result in most security breaches still going unreported and undetected. How much work is it for the ICO to quickly sift through reports of minor breaches, to fish out the more serious ones?
The ICO is against "prior authorisation" for some international data transfers.
Comment: this is a crucial safeguard that should remain in the regulation. It again goes to show the ICO doesn't really care about our data protection rights and interests.
The Information Commissioner doesn't want to be forced to impose administrative sanctions for mere "process failures" which did not lead to real privacy risks.
Comment: He basically doesn't like enforcing the law, but he ought to! What is he there for?!
He doesn't like having to take part in the "consistency mechanism"; it is "insufficiently risk-based" and "contains unrealistic time-limits".
Comment: The consistency mechanism is essential to ensure that the regulation is applied in the same way throughout the EU, and interpreted strictly (rather than arbitrarily and loosely, as is the case with the ICO's approach to the UK Data Protection Act and the current DP Directive)! It again goes to show that he really wants to keep the UK a country where data protection is not seriously enforced, even by the national DPA.
Oh, and of course he isn't asking for serious money to uphold our fundamental rights:
"... given the state of public finances across the EU and the more obviously higher priority causes competing for public funding, it is surely questionable that there will be more money available for DPAs than there is now."
Comment: At the moment, the ICO costs only £16 million a year, which is about 25p per citizen ...
To learn more about the Data Protection Regulation and how to contact your MEP, see the campaign website Naked Citizens.
Does the #FBrape campaign challenge our freedom of speech? Are feminists censoring the internet? Soraya Chemaly, one of the founders of the campaign, gives her insight into the issue.
Last week, I, along with Jaclyn Friedman of Women, Action and the Media and Laura Bates of Everyday Sexism, led a movement challenging Facebook's policies on content moderation. Facebook responded by saying it had failed to deal adequately with misogynistic content depicting violence against women, and outlined the steps it would take to change a cultural tolerance for violence against women. The social activism, which involved raising awareness and asking advertisers to boycott the company until it acted in accordance with its own terms and guidelines, is notable because it is a rare public acknowledgement that misogyny and sexism are real, and that they are harmful. Corporations, like Facebook, have a responsibility to treat hate based on gender in the same manner that they do other forms of hate speech.
Many people are saying this is a case of feminists censoring the Internet. I'd like to address this head on to explain why this is not the case. As a feminist and a writer, I understand free speech and hold it dear, but there are two issues being conflated in the concern that #FBrape, the name of the campaign in social media, will reduce speech. One is: how does Facebook regulate speech in its service? The second is: SHOULD Facebook be regulating speech?
Our initiative dealt with number one, how is Facebook regulating speech? Facebook is clearly regulating speech - they have a moderation policy and a detailed reporting and review process. The issue is that they were not interpreting these processes in a way that treated girls and women fairly and equally. That was the issue addressed in our program.
Pages with names like "Raping Your Girlfriend," and text and images of popular rape memes depicting about-to-be-raped, incapacitated girls were easily found. Pictures and videos of girls and women frightened, humiliated, bruised, beaten, raped, gang raped, bathed in blood, and, in a recently publicized case, beheaded were "liked" by tens of thousands. In a milder example that went viral through our campaign, Facebook declined to remove an image of a woman, mouth covered in tape, in which the caption read, "Don't tap her and rap her. Tape her and rape her." Facebook's response to readers who reported it read, "We reviewed the photo you submitted, but found it did not violate our community standards."
Content like this defied reason and Facebook's own terms, which prohibit posts that "attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition."
Facebook "censors" content every day already. The company had in place the formal language of a reasonable content policy geared toward ensuring users' safety, but it was not implementing it effectively. This failure disproportionately affected girls and women. That is why we demanded that the company reassess its definition and interpretation of "hate speech" and train moderators to recognize why violence against women is a real problem and, when graphically represented in the ways we found, hateful and threatening.
This content isn't "offensive". The photographs and videos we found depict gross human rights violations for the cruel use, entertainment and profit of others. The offense is that these depictions are considered funny or controversial.
Facebook is not "the internet." We chose it because it had content and community guidelines. The company, with more than a billion users, is an influential force. It is both a mirror and a microcosm of a global culture. As such, it is no more or less sexist or misogynistic than any other company or aspect of media. However, by creating a review process it became an arbiter of norms and provided a way to challenge those that encourage and perpetuate gross and easily demonstrable prejudices against girls and women. We are hopeful that this is a first step in making safer spaces both online and off.
The question of WHETHER Facebook should have a content moderation and review process is an entirely separate one.
Soraya Chemaly is a cultural critic and feminist activist. Her work and writing focus on the role of gender in culture, media, politics, religion and more, with an emphasis on the role that sexualized violence plays in sex-based prejudices and gender inequality.
Will open data support values of democracy, openness, transparency, and social justice? Wendy Grossman explores the question.
At the recent OpenTech, perennial grain-of-sand-in-the-Internet-oyster Bill Thompson, in a session on open data, asked an interesting question. In a nod to NTK's old slogan, "They stole our revolution – now we're stealing it back", he asked: how can we ensure that open data supports values of democracy, openness, transparency, and social justice? The Internet pioneers did their best to embed these things into their designs, and the open architecture, software, and licensing they pioneered can be taken without paying by any oppressive government or large company that cares to. Is this what we want for open data, too?
Thompson writes (and, if I remember correctly, actually said, more or less):
…destruction seems like a real danger, not least because the principles on which the Internet is founded leave us open to exploitation and appropriation by those who see openness as an opportunity to take without paying – the venture capitalists, startups and big tech companies who have built their empires in the commons and argue that their right to build fences and walls is just another aspect of ‘openness’.
Constraining the ability to take what's been freely developed and exploited has certainly been attempted, most famously by Richard Stallman's efforts to use copyright law to create software licenses that would bar companies from taking free software and locking it up into proprietary software. It's part of what Creative Commons is about, too: giving people the ability to easily specify how their work may be used. Barring commercial exploitation without payment is a popular option: most people want a cut when they see others making a profit from their work.
The problem, unfortunately, is that it isn't really possible to create an open system that can *only* be used by the "good guys" in "good" ways. The "free speech, not free beer" analogy Stallman used to explain "free software" applies. You can make licensing terms that bar Microsoft from taking GNU/Linux, adding a new user interface, and claiming copyright in the whole thing. But you can't make licensing terms that bar people using Linux from using it to build wiretapping boxes for governments to install in ISPs to collect everyone's email. If you did, either the terms wouldn't hold up in a court of law or it would no longer be free software but instead proprietary software controlled by a well-meaning elite.
One of the fascinating things about the early days of the Internet is the way everyone viewed it as an unbroken field of snow they could mold into the image they wanted. What makes the Internet special is that any of those models really can apply: it's as reasonable to be the entertainment industry and see it as a platform that just needs some locks and laws to improve its effectiveness as a distribution channel, as to be Bill Thompson and view it as a platform for social justice that's in danger of being subverted.
One could view the legal history of The Pirate Bay as a worked example, at least as it's shown in the documentary TPB-AFK: The Pirate Bay – Away From Keyboard, released in February and freely downloadable under a Creative Commons license from a torrent site near you (like The Pirate Bay). The documentary has had the best possible publicity this week when the movie studios issued DMCA takedown notices to a batch of sites.
I'm not sure what leg their DMCA claims could stand on, so the most likely explanation is the one TorrentFreak came up with: that the notices are collateral damage. The only remotely likely thing in the documentary to have set them off – other than simple false positives – is the four movie studio logos that appear in it.
There are many lessons to take away from the movie, most notably how much more nuanced the TPB founders' views are than they came across at the time. My favorite moment is probably when Fredrik Tiamo discusses the opposing counsels' inability to understand how TPB actually worked: "We tried to get organized, but we failed every single time." Instead, no boss, no contracts, no company. "We're just a couple of guys in a chat room." My other favorite is probably the moment when Monique Wadsted, Hollywood's lawyer on the case, explains that the notion that young people are disaffected with copyright law is a myth.
"We prefer AFK to IRL," says one of the founders, "because we think the Internet is real."
Given its impact on their business, I'm sure the entertainment industry thinks the Internet is real, too. They're just one of many groups who would like to close down the Internet so it can't be exploited by the "bad guys": security people, governments, child protection campaigners, and so on. Open data will be no different. So, sadly, my answer to Bill Thompson is no, there probably isn't a way to do what he has in mind. Closed in the name of social justice is still closed. Open systems can be exploited by both good and bad guys (for your value of "good" and "bad"); the group exploiting a closed system is always *someone's* bad guy.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted irregularly during the week at the net.wars Pinboard - or follow on Twitter.
Nick Pearson explains the functions of a VPN, and how to best choose one that will ensure your privacy is protected.
As the global debate over online government surveillance rages on, it's reasonable to assume the use of privacy tools to foil state-spying efforts will only increase. The protection of online privacy is already a booming industry online, with a number of Virtual Private Networks (VPNs) claiming to protect your data from government intrusion. VPNs can do a lot of things, such as allowing you to get around regional YouTube restrictions, or helping you escape the online parameters of whatever censorious regime you may be living under. But can they really stop governments from accessing your data, and what will happen if a government asks a VPN for information on a customer?
What is a VPN?
A VPN, to quote Wikipedia, “enables a host computer to send and receive data across shared or public networks as if they were an integral part of the private network with all the functionality, security and management policies of the private network.” A VPN, in the context of a privacy platform, is a network that ensures all the data you’re sending and receiving is encrypted and never logged, thus preventing spying. But while the acronym “VPN” has become a byword for online privacy, not all VPNs are actually privacy services – and even the ones that are may not be serious about protecting privacy.
The key issue concerns the storing of data. The European Data Retention Directive mandates that all ISPs must store user data, which includes logs of who you've emailed and logs of what websites you've visited, for at least one year after the user leaves the ISP's service. In the US, there is no data retention law – although that may change – but ISPs are free to store data for as long as they like, and many happily do so in order to better assist law enforcement. Whether or not a VPN can protect your privacy revolves around the integrity of its own data retention policy.
As a study from TorrentFreak shows, many VPNs retain user data in exactly the same way as an Internet service provider (ISP), which renders them pretty much useless as a privacy service. VPNs have to abide by the laws in their jurisdiction. If law enforcement demands a VPN hand over its data on a customer, then it must comply. But if there's no data to hand over, then a user's privacy is always protected. Sure, law enforcement could demand a VPN start logging data on a particular user (which is probably what happened in the case of HideMyAss and LulzSec), but any VPN serious about privacy would shut down the service before complying with such an order.
Some VPNs retain data because it essentially makes their lives easier and is used to troubleshoot problems with the network. Others retain data because they believe it's necessary to comply with the law – even though that may not be the case. If they were honest, such VPNs would not market themselves as privacy services. But not all are honest; some downright lie, and others simply hide behind the conflation of the words 'VPN' and 'privacy'.
How to choose a VPN
So if you want to use a VPN for privacy purposes, what should you do? Firstly, examine the VPN's terms and conditions closely. Make sure it's very clear about how long it stores data; if in doubt, ask. Most genuine privacy services will retain data for a few hours at most. Secondly, find out what the VPN will do if the laws in its jurisdiction concerning data retention change. Any privacy service worth its salt should be prepared to move jurisdiction if changing laws compromise user privacy (admittedly there are some grey areas here, but a commitment to moving jurisdiction is a good sign the VPN takes privacy seriously). Finally, ask the VPN how far it is willing to go to protect the privacy of its users in the face of demands from law enforcement. You may not get a straightforward answer to this question, but if a VPN has built its business on privacy commitments then it's more likely to put up as much resistance as possible to protect its business' reputation.
Nick Pearson is the founder of IVPN. IVPN is a VPN privacy service and Electronic Frontier Foundation member aimed at journalists, people living in areas of online censorship, and privacy-conscious individuals.