While the geeks get rich, I thought I'd go the other way
"The whole idea of what a homeless service is, is a soup kitchen," one of the representatives for The Connection at St Martin-in-the-Fields said yesterday. But does it have to be?
It was in the middle of "Teacamp", a monthly series of meetings that sport the same mix of geeks, government, and do-gooders as the annual UK Govcamp we covered a couple of weeks back. Meetings like this seem to be going on all the time all over the place, trying to figure out ways to use technology to help people. Hardly anyone has any budget, yet that seems not to matter: the optimism is contagious. This week's Teacamp also featured Westminster in Touch, an effort to support local residents and charities; the organization runs a biannual IT Support Forum to brainstorm (the next is March 28).
I have to admit: when I first read about Martha Lane Fox's Digital Inclusion initiative my worst rebellious instincts were triggered: why should anyone be bullied into going online if they didn't want to go there? Maybe at least some of those 9 million people in Britain who have never used the Internet would like to be left in peace to read books and listen to - rather than use - the wireless.
But the "digital divide" predicted even in the earliest days of the Net is real: those 9 million are those in the most vulnerable sectors of society. According to research published on the RaceOnline site, the percentage of people who have never used the Net correlates closely with income. This isn't really much of a surprise, although you would expect to see a slight tick upwards again at the very top economic levels, where not so long ago people were too grand, too successful, and too set in their ways to feel the need to go online. But they have proxies: their assistants can answer their email and do their Web shopping.
When Internet access was tied to computers, the homeless in particular were at an extreme disadvantage. You can't keep a desktop computer if you have nowhere - or only a very tiny, insecure space - to put it or power it, and you can't afford broadband or a landline. A laptop presents only slightly fewer problems. Even assuming you can find free wifi to use somewhere, how do you keep the laptop from being stolen or damaged? Where and how do you keep it charged? And so The Connection, like libraries and other places, runs a day center with a computing area and resources to help, including computer training.
But even that, they said, hasn't been reaching the most excluded, the under-25s that The Connection sees. When you think about it, it's logical, but I had to be reminded to think about it. Having missed out on - or been failed by - school education, this group doesn't see the Net as the opportunity the rest of us imagine it to be for them.
"They have no idea of creating anything to help their involvement."
So rather than being "digital natives", their position might be comparable to people who have grown up without language or perhaps autistic children whose intelligence and ability to learn have been disrupted by their brain wiring and development so much that the gap between them and their normally wired peers keeps increasing. Today's elderly who lack the motivation, the cognitive functioning, or the physical ability to go online will be catered to, even if only by proxy, until they die out. But imagine being 20 today and having no digital life beyond the completely passive experience of watching a few clips on YouTube or glancing at a Facebook page and thinking they have nothing to do with you. You will go through your entire life at a progressively greater disadvantage. Just as we assume that today's 80-year-olds grew up with movies, radio, and postal mail, when *you* are 80 (if the planet hasn't run out of energy and water and been forced to turn off all the computers by then), in devising systems to help you society will assume you grew up with television, email, and ecommerce. Whatever is put in place to help you navigate whatever that complex future will be like will be completely outside your grasp.
So The Connection is helping them to do some simple things: upload interviews about their lives, annotate YouTube clips, create comic strips - anything to break this passive lack of interest. Beyond that, there's a big opportunity in smart phones, which don't need charging so often and are easier to protect - and can take advantage of free wifi just as a laptop can. The Connection is working on things like an SMS service that goes out twice a day and provides weather reports, maps of food runs, and information about free things to do. Should you be technically skilled and willing, they're looking for geeky types to help them put these ideas together and automate them. There are still issues around getting people phones, of course - and around the street value of a phone - but once you have a phone where you can be contacted by friends, family, and agencies, it's a whole different life. As it is again if you can be convinced that the Net belongs to you, too, not just all those other people.
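The plumbing for a service like that is simple enough for a volunteer to sketch in an afternoon. Purely as an illustration - the data values, phone numbers, and the stand-in for an SMS gateway below are hypothetical, not The Connection's actual system - a twice-daily digest might look something like this:

```python
# Hypothetical sketch of a twice-daily SMS digest for a homeless charity.
# The data sources, numbers, and gateway hook are illustrative assumptions,
# not The Connection's real implementation.

def compose_digest(weather, food_runs, free_events, limit=160):
    """Build one SMS-sized message; 160 characters is a single SMS segment,
    which matters when every extra segment costs money."""
    parts = ["Weather: " + weather,
             "Food: " + "; ".join(food_runs),
             "Free: " + "; ".join(free_events)]
    msg = " | ".join(parts)
    return msg[:limit]  # truncate rather than pay for extra segments

def send_digest(numbers, msg, send=print):
    """Send msg to each subscriber; `send` stands in for a real SMS gateway
    API call (which would need credentials and an account)."""
    for number in numbers:
        send(f"{number}: {msg}")

if __name__ == "__main__":
    msg = compose_digest("6C, rain later",
                         ["Strand 13:00", "Embankment 18:30"],
                         ["Museum late opening"])
    send_digest(["+447700900001"], msg)
```

Run twice a day from a scheduler, with the weather and food-run data pulled from whatever feeds the charity can get, this is close to the whole service; the hard parts are the data sources and keeping the subscriber list current, not the code.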
The Government squabbles over fiscal sovereignty while signing up to a treaty that will suppress our rights as consumers
Earlier this month, the Internet went on strike. Thousands of websites, including high-profile ones like Wikipedia, blacked out for 12 hours on January 18th to protest US Congress attempts to severely limit civil liberties on the Internet in order to protect the vested interests of the content industry. Two days later, Congress shelved SOPA and PIPA, the two contentious bills. Yet barely two weeks later, we face a bigger challenge.
One of the more striking things about ACTA - the Anti-Counterfeiting Trade Agreement signed last week by the European Union and 22 of its member states - is the spectacularly undemocratic way in which it has come about and continues to be pushed through.
Those following the history of the treaty since its inception in 2008 will know that transparency has been a persistent issue throughout the negotiations. Its designation as a trade agreement essentially gave the governments involved carte blanche to conduct negotiations behind closed doors. Drafts of the treaty were leaked repeatedly, each worse than the last. While the final draft has been considerably defanged, there are still significant areas of concern, both around the process and the content. La Quadrature du Net provides a detailed analysis of what is still wrong with the final version of the treaty. Most worrying is the provision which enshrines the undemocratic process in which ACTA has been negotiated for any future treaty amendments.
Yet reading the European Commission's Anti-Counterfeiting information page, all appears to be well: ACTA will not stop poor countries accessing cheap medicines, it will not get your iPod searched at the border or cut you off from the Internet. Nor, according to the Commission, have there been any issues of transparency and democratic process in the ACTA negotiations.
A slightly different picture emerges from Kader Arif, the European Parliament's rapporteur on ACTA. In a statement on his blog explaining his resignation as rapporteur last week, Mr Arif acknowledges that ACTA is still extremely problematic and claims that "everything is being done to prevent the European Parliament from having its say in this matter."
Given that certain factions of the Conservative Party are utterly obsessed with the EU's alleged "democratic deficit", one would have thought they would be all over this. After all, not only must we now have a referendum every time someone corrects a typo in the Lisbon Treaty, but parts of the party want an immediate "in or out" referendum too. So when the only directly elected institution of the European Union complains that it has been consistently prevented from having a say on an international treaty which affects EU citizens, what did the UK government do? It signed ACTA, along with 21 other EU member states and the European Commission on January 26th. Not only that, but Parliament has abdicated any responsibility for scrutinising ACTA, pushing it back up to the European Parliament.
So where do we go from here? As the EU has admitted that ACTA is a legally binding international treaty, it cannot come into force without approval from the European Parliament. It is ironic that for all the Tories' clamouring on the "democratic deficit", we are left with only the EU between us and ACTA, with our democratically elected government having washed its hands. It is time to start lobbying your MEPs now. The Open Rights Group has some great suggestions on where to start.
The right to access, correct, and delete personal information held about you and the right to bar data collected for one purpose from being reused for another are basic principles of the data protection laws that have been the norm in Europe since the EU adopted the Privacy Directive in 1995. This is the Privacy Directive that is currently being updated; the European Commission's proposals seem, inevitably, to please no one. Businesses are already complaining compliance will be unworkable or too expensive (hey, fines of up to 2 percent of global income!). I'm not sure consumers should be all that happy either; I'd rather have the right to be anonymous than to be forgotten (which I believe will prove technically unworkable), and the jurisdiction for legal disputes with a company to be set to my country rather than theirs. Much debate lies ahead.
But the furore isn't about that, it's about the single pool of data. People do not use Google Docs in order to improve their search results; they don't put up Google+ pages and join circles in order to improve the targeting of ads on YouTube. This is everything privacy advocates worried about when Gmail was launched.
Australian privacy campaigner Roger Clarke's discussion document sets out the principles that the decision violates: no consultation; retroactive application; no opt-out.
Are we evil yet?
In his 2011 book, In the Plex, Steven Levy traces the beginnings of a shift in Google's views on how and when it implements advertising to the company's controversial purchase of the DoubleClick advertising network, which relied on cookies and tracking to create targeted ads based on Net users' browsing history. This $3.1 billion purchase was huge enough to set off anti-trust alarms. Rightly so. Levy writes, "...sometime after the process began, people at the company realized that they were going to wind up with the Internet-tracking equivalent of the Hope Diamond: an omniscient cookie that no other company could match." Between DoubleClick's dominance in display advertising on large, commercial Web sites and Google AdSense's presence on millions of smaller sites, the company could track pretty much all Web users. "No law prevented it from combining all that information into one file," Levy writes, adding that Google imposed limits, in that it didn't use blog postings, email, or search behavior in building those cookies.
Levy notes that Google spends a lot of time thinking about privacy, but quotes founder Larry Page as saying that the particular issues the public chooses to get upset about seem randomly chosen, the reaction determined most often by the first published headline about a particular product. This could well be true - or it may also be a sign that Page and Brin, like Facebook's Mark Zuckerberg and some other Silicon Valley technology company leaders, are simply out of step with the public. Maybe the reactions only seem random because Page and Brin can't identify the underlying principles.
In blending its services, the issue isn't solely privacy, but also the long-simmering complaint that Google is increasingly favoring its own services in its search results - which would be a clear anti-trust violation. There, the traditional principle is that dominance in one market (search engines) should not be leveraged to achieve dominance in another (social networking, video watching, cloud services, email).
SearchEngineLand has a great analysis of why Google's Search Plus is such a departure for the company and what it could have done had it chosen to be consistent with its historical approach to search results. Building on the "Don't Be Evil" tool built by Twitter, Facebook, and MySpace, among others, SEL demonstrates the gaps that result from Google's choices here, and also how the company could have vastly improved its service to its search customers.
What really strikes me in all this is that the answer to both the EU issues and the Google problem may be the same: the personal data store that William Heath has been proposing for three years. Data portability and interoperability, check; user control, check. But that is as far from the Web 2.0 business model as file-sharing is from that of the entertainment industry.
Wendy Grossman reports on this year's UK GovCamp and the hope for real change in attitudes towards technology
"Why hasn't the marvelous happened yet?" The speaker - at one of today's "unconference" sessions at this year's UK Govcamp - was complaining that with 13,000-odd data sets up on his organization's site there ought to be, you know, results.
At first glance, GovCamp seems peculiarly British: an incongruous mish-mash of government folks, coders, and activists, all brought together by the idea that technology makes it possible to remake government to serve us better. But the Web tells me that events like this are happening in various locations around Europe. James Hendler, who likes to collect government data sets from around the world (700,000 and counting now!), tells me that events like this are happening all over the US, too - except that there, an event of this size - a couple of hundred people - is New York City alone.
That's both good and bad: a local area in the US can find many more people to throw at more discrete problems - but on the other hand the federal level is almost impossible to connect with. And, as Hendler points out, the state charters mean that there are conversations the US federal government simply cannot have with its smaller, local counterparts. In the UK, if central government wants a local authority to do something, it can just issue an order.
This year's GovCamp is a two-day affair. Today was an "unConference": dozens of sessions organized by participants to talk about...stuff. Tomorrow will be hands-on, doing things in the limited time available. By the end of the day, the Twitter feed was filling up with eagerness to get on with things.
A veteran camper - I'm not sure how to count how many there have been - tells me that everyone leaves the event full of energy, convinced that they can change the world on Monday. By later next week, they'll have come down from this exhilarated high to find they're working with the same people and the same attitudes. Wonders do not happen overnight.
Along those lines, Mike Bracken, the guy who launched the Guardian's open data platform, now at the Cabinet Office, acknowledges this when he thanks the crowd for the ten years of persistence and pain that created his job. The user, his colleague Mark O'Neill said recently, is at the center of everything they're working on. Are we, yet, past proving the concept?
"What should we do first?" someone I couldn't identify (never knowing who's speaking is a pitfall of unConferences) asked in the same session as the marvel-seeker. One offered answer was one any open-source programmer would recognize: ask yourself, in your daily life, what do you want to fix? The problem you want to solve - or the story you want to tell - determines the priorities and what gets published. That's if you're inside government; if you're outside, based on last summer's experience following the Osmosoft teams during Young Rewired State, often the limiting factor is what data is available and in what form.
With luck and perseverance, this should be a temporary situation. As time goes on, and open data gets built into everything, publishing it should become a natural part of everything government does. But getting there means eliminating a whole tranche of traditional culture and overcoming a lot of fear. If I open this data and others can review my decisions will I get fired? If I open this data and something goes wrong will it be my fault?
In a session on creative councils, I heard the suggestion that in the interests of getting rid of gatekeepers who obstruct change, organizational structures should be transformed into networks with alternate routes to getting things done until the hierarchy is no longer needed. It sounds like a malcontent's dream for getting the desired technological change past a recalcitrant manager - but also like the kind of solution that solves one problem by breaking many other things. In such a set-up, who is accountable to taxpayers? Isn't some form of hierarchy inevitable given that someone has to do the hiring and firing?
It was in a session on engagement that it became apparent that, as much as this event seems to be focused on technological fixes, the real goal is far broader. The discussion veered into consultations and how to build persistent networks of people engaged with particular topics.
"Work on a good democratic experience," advised the session's leader. Make the process more transparent, make people feel part of the process even if they don't get what they want, create the connection that makes for a truly representative democracy. In her view, what goes wrong with the consultation process now - where, for example, advocates of copyright reform find themselves writing the same ignored advice over and over again in response to the same questions - is that it's trying to compensate for the poor connections to their representatives that most people have. Building those persistent networks and relationships is only a partial answer.
"You can't activate the networks and not at the same time change how you make decisions," she said. "Without that parallel change you'll wind up disappointing people."
Marvels tomorrow, we hope.
Milena Popova explains what happens when you play by the copyright rules
I recently gave a talk on the economics of copyright, and why content is a public good. I'm not big on seven-level nested bullet points but I did want some visual aids for my audience so I set out to create a slide deck. Let me be perfectly, crystal clear here: I'm talking about a one-hour talk on a topic that I was intimately familiar with, not writing new material from scratch; I'm talking about 16 slides, 13 of which had any actual content. This should not have taken more than two hours. 12 working hours later...
The reason I took nearly an hour for every slide was that - since I was talking about copyright - I thought I should at least try and do this by the book. That meant that every image I used had to fulfil one of three conditions:
- I had to own the copyright;
- the image had to be either in the public domain or licensed under an appropriate Creative Commons license;
- or I had to get permission from the rights holder.
Additionally, every image had to be properly credited.
So what effect did "copyright by the book" have on my output and productivity? I've already stated the obvious: the whole exercise took about six times as long as it had any right to. Finding appropriate images to support what I was saying was suddenly not a simple matter of a Google search - I had to restrict my sources to those I could be certain would meet the above criteria. Flickr's CC search functionality helped, and so did the Creative Commons website's search; the Wikimedia Commons was an invaluable resource. But even with those, just appropriately crediting the 16 or so images took hours.
Next were the images I had to get permission for. I cheated slightly here, in that I only approached people I was reasonably certain would grant permission in the first place, and whom I could approach easily (generally through the magic of Twitter). Even with that slight workaround it wasn't until the day of the talk that I had confirmed permission for all images, so I had to line up back-ups or risk not having an image or using one I didn't have the rights to. A final cheat was used when I declared a screen capture from an anti-piracy video to be under the "fair dealing exception for criticism/review purposes" as I was criticising the video in question.
Perhaps the most unpleasant impact of doing copyright by the book was that I felt restricted in what I could and couldn't say or display. When referring to popular television shows, for instance, I couldn't use an image from the show and had to find a workaround. This, of course, had an impact on the quality of my work, so that I found myself faced with a choice between breaking the law or not producing the best possible work I could.
Personally, I was mildly inconvenienced by my "copyright by the book" adventures. There are, however, wider implications here. Go into any office in the UK, sit in any meeting, go to any industry conference, and you will find hundreds of thousands of media files (images, music, videos) used without permission. They're the soundtrack to your motivational video, the image you use to illustrate a point in a presentation, or the Dilbert cartoon you email to your team on a Friday afternoon. They are the things that make death by PowerPoint slightly less... well, deadly. And yet, did you know that to license a single Dilbert strip for a one-time use in a presentation (and that means no sneakily making the slides available to your colleagues afterwards!), you'd have to fork out US$85 at a minimum? That includes a $10 "handling charge" by Universal Uclick Reprints and discounts the fact that navigating Scott Adams' licensing system will take you a good half hour. If you don't believe me that this sort of thing happens in every single office, you only need to browse BoingBoing to find that even copyright trolls use unlicensed content for fun and profit.
If tomorrow we all turned up for work and suddenly started doing copyright by the book, the economy would grind to a halt. The output of pretty much anyone with a desk job would be halved, the rest of their time spent trying to work out who owned the copyright on something they wanted to use, trying to license content or trying to work out what they should use instead.
The Intellectual Property Office is currently running a consultation on the changes to UK copyright law proposed in the Hargreaves Review. If you believe that the current copyright system is neither effective nor sustainable, you should think about responding. If you need a starting point, Glyn Moody's responses will do as good a job as any.
Wendy Grossman reflects on an incident-packed 2012 - only two weeks into the year
You have to think that 2012 so far has been orchestrated by someone with a truly strange sense of humor. To wit:
- EMI Records is suing the Irish government for failing to pass laws to block "pirate sites". The way PC Pro tells it, Ireland ought to have implemented site blocking laws to harmonize with European law and one of its own judges has agreed it failed to do so. I'm not surprised, personally: Ireland has a lot of other things on its mind, like the collapse of the Catholic church that dominated Irish politics, education, and health for so long, and the economic situation post-tech boom.
- The US House of Representatives and Senate are, respectively, about to vote on SOPA (Stop Online Piracy Act) and PIPA (Protect IP Act), laws to give the US site blocking, search engine de-listing, and other goodies. (Who names these things? SOPA and PIPA sound like they escaped from Anna Russell's La Cantatrice Squelante.) Senator Ron Wyden (D-OR) and Representative Darrell Issa (R-CA) have proposed an alternative, the OPEN Act (PDF), which aims to treat copyright violations as a trade issue rather than a criminal one.
- Issa and Representative Carolyn Maloney (D-NY) have introduced the Research Works Act to give science journal publishers exclusive rights over the taxpayer-funded research they publish. The primary beneficiary would be Elsevier (which also publishes Infosecurity, which I write for), whose campaign contributions have been funding Maloney.
- Google is mixing Google+ with its search engine results because, see, when you're looking up impetigo, as previously noted, what you really want is to know which of your friends has it.
- Privacy International has accused Facebook of destroying someone's life through its automated targeted advertising, an accusation the company disputes.
- And finally, a British judge has ruled that Sheffield student Richard O'Dwyer can be extradited to the US to face charges of copyright infringement; he owned the now-removed TVShack.net site, which hosted links to unauthorized copies of US movies and TV shows.
So many net.wars, so little time...
The eek!-Facebook-knows-I'm-gay story seems overblown. I'm sure the situation is utterly horrible for the young man in question, whom PI's now-removed blog posting said was instantly banished from his parents' home, but I still would like to observe that the ads were placed on his page by a robot (one without the Asimov Three Laws programmed into it). On this occasion the robot apparently guessed right but that's not always true. Remember 2002, when several TiVos thought their owners were gay? These are emotive issues and, as Forbes concludes in the article linked above, the better targeting gets and the more online behavioral advertising spreads, the more you have to think about what someone looking over your shoulder will see. Perhaps that's a new-economy job for 2012: the digital image consultant who knows how to game the system so the ads appearing on your personalized pages will send the "right" messages about you. Except...
It was predicted - I forget by whom - that search generally would need to incorporate social networking to make its search results more "relevant" and "personal". I can see the appeal if I'm looking for a movie to see, a book to read, or a place to travel to: why wouldn't I want to see first the recommendations of my friends, whom I trust and who likely have tastes similar to mine? But if I'm looking to understand what campaigners are saying about American hate radio (PDF), I'm more interested in the National Hispanic Media Coalition's new report than in collectively condemning Rush Limbaugh. Google Plus Search makes sense in terms of competing with Facebook and Twitter, but mix it up with the story above, and you have a bigger mess in sight. By their search results shall ye know their innermost secrets.
Besides proving Larry Lessig's point about the way campaign funding destroys our trust in our elected representatives, the Research Works Act is a terrible violation of principle. It's taken years of campaigning - by the Guardian as well as individuals pushing open standards - to get the UK government to open up its data coffers. And just at the moment when they finally do it, the US, which until now has been the model of taxpayers-paid-for-it-they-own-the-data, is thinking about going all protectionist and proprietary?
The copyright wars were always kind of ridiculous (and, says Cory Doctorow, only an opening skirmish), but there's something that's just wrong - lopsided, disproportionate, arrogant, take your pick - about a company suing a national government over it. Similarly, there's something that seems disproportionate about extraditing a British student for running a Web site on the basis that it was registered in .net, which is controlled by a US-based registry (and has now been removed from same). Granted, I'm no expert on extradition law, and must wait for either Lilian Edwards or David Allen Green to explain the details of the 2003 law. That law was and remains controversial, that much I know.
And this is only the second week. Happy new year, indeed.
Quiet Riot Girl cautions against a five-digit victory salute over the outcome of the recent obscenity trial
In the UK, the year 2012 has begun with a trial that could have come straight out of the 1960s – and even has some resonance with 19th century sexual morals and laws. R v Peacock, which already has its own Wikipedia page, has been described as the obscenity trial of the decade.
The defendant in the case, a male escort called Michael Peacock, was cleared of all charges of 'depraving and corrupting' the people who watched the DVDs he sold, featuring men involved in sadomasochistic acts. Writing in the Guardian after Peacock's acquittal, Nichi Hodgson asked: 'Why is [the verdict] so important? For one, Peacock… challenged the notion of obscenity in law, a law that was last updated in 1964, and has stood since. A law that is expressly designed to tell us what is “deprave and corrupt” – defined by Justice Byrne in 1960 as “to render morally unsound or rotten, to destroy the moral purity or chastity; to pervert or ruin a good quality.”'
Chris Ashford, an academic with specific knowledge in the field of law and sexuality, also commented on the outcome of the trial, saying: 'The case brings some much needed clarity to this area of complex criminal law. I understand that the Metropolitan Police will be sitting down with the CPS and the BBFC and this is a welcome step'.
The overwhelming verdict from those outside the courtroom seemed to agree with both the jury and the 'liberal' press. As Hodgson put it in the Guardian, with a cheeky reference to the four-finger rule employed by many pornographers featuring 'fisting' in their work: 'For gay rights campaigners and for everyone of us that believes in social and sexual liberty, it's a day to make a five-digit victory sign.'
I too welcome the verdict but I am not quite so jubilant as many seem to be about it. Nor do I like the tone and possible 'agendas' appearing in some of the media discourse around the case.
My first problem is with the fact this case was brought to the courts at all, in the digital 21st century. Shouldn't we be up in arms about this puritanical and oppressive legislation, before celebrating that someone has avoided being criminalised by it?
As Michel Foucault put it more eloquently than I could: “But the guilty person is only one of the targets of punishment. For punishment is directed above all at others, at all the potentially guilty.”
It is not just the archaic and anachronistic Obscenity Laws that are directed at 'potentially guilty' actors in the sexual sphere. Contemporary legislation exists that continues to execute the 'Law of Sex' both in the courts and out. In 2009 for example, extreme pornography legislation was included in the Criminal Justice and Immigration Act. This makes it illegal to possess and even view pornography that shows injury to the breasts, anus or genitals, or that suggests a potential threat to life. This has potentially criminalised whole sections of society, including myself, who express sadomasochistic desire.
As Jane Fae has indicated, maybe we should keep the champagne on ice. On her blog she wrote: 'what comes next is likely to be a thoroughgoing review of obscenity and, in the current climate, my expectation is that that will see a widening and toughening of existing restrictive laws such as the Criminal Justice Act (2008) – more colloquially known as the 'extreme porn law'.
On the politics.co.uk website Fae also pointed out the difference in numbers between prosecutions under the OPA and the 'Extreme Porn' law.
'This once proud piece of legislation [OPA], intended to be the last word in moral high ground, was down to 71 prosecutions last year – as against just shy of 1,000 for "extreme porn" and several thousand each for various forms of malicious communication and indecent images of children.'
The prosecution attempted to use the 'extreme porn' law in R v Peacock, as it also did in the Vincent Tabak (murder) trial. Both attempts failed, but they show how this law is very much at the forefront of lawyers' minds, and their legal artillery, when it comes to cases of sexuality and (sexual) violence.
The verdict also leaves untouched some deep-seated cultural attitudes towards sex. One of these attitudes is the idea that some people are 'normal' sexually, and others are abnormal, or perverts.
Again as Michel Foucault has said (and as he was partially quoted in the Peacock case): “...if you are not like everybody else, then you are abnormal, if you are abnormal, then you are sick. These three categories, not being like everybody else, not being normal and being sick are in fact very different but have been reduced to the same thing”.
Who are the 'perverts' and the 'sick' and 'abnormal' people in this 'permissive' age? Well, apart from the obvious 'paedophiles', judging by this and previous obscenity cases, people who commit 'violent' acts in a consensual sexual context are still considered perverse to some degree – especially men who do so. It is a rarely quoted fact that the 'dominatrix' trade continues to boom without much regulation (apart from isolated incidents, e.g. the Max Mosley case) or criticism, because there it is women doling out the 'violence', usually to men. In our culture, women dommes 'punishing' willing male 'victims' seems to many to represent some kind of 'justice' or 'payback' for all the apparent crimes of 'patriarchal' men against women.
And when it comes to heterosexual men, feminism demonises them so successfully that often they do not have to be brought to trial in courtrooms at all. Men are 'the potentially guilty' in the Foucauldian sense. Think of the discourse of rape culture that presents all men (all heterosexual men) as potential rapists (of women) and we can begin to see how this 'law of sex' works. In other words, as Mark Simpson has observed, 'The feminist is Ms Whiplash'.
I also think that the media emphasis surrounding this trial on the 'gay' identity of the defendant and of the people who watch his porn positions other men who have sex with men, but who do not identify as gay, as 'abnormal'.
Hodgson in the Guardian emphasised the significance of the defendant here being 'gay' and called this a victory for 'gay rights campaigners'. I disagree. Though Peacock himself identifies as 'gay', there is no evidence that the actors in the DVDs he sold or the people who bought and watched them are 'gay'. As Mark Simpson has written, straight men enjoy watching men's cocks in pornography. Also, many women watch 'gay' pornography. Again, as Simpson has told us, Manlove for the Ladies is a big market and getting bigger. And many men who act in 'gay' porn are only 'gay for pay'. So this divide between 'gay' and 'straight' porn is false and limiting. During the trial I didn't see any 'gay rights campaigners' speaking up for Peacock (with the exception of Chris Ashford). Maybe this was because 'gay rights' activists are often puritanical themselves: they separate the 'gay' identity from 'homosexual' sex, making it respectable and almost 'heterosexual'. If the men had been heterosexual, fisting and urinating on women, how would the feminist Guardian have presented the case?
Currently people involved in S&M activities, if they commit 'serious' assault on each other as part of their consensual sexual acts, for example by drawing blood, are breaking the law.
Myles Jackson, an obscenity lawyer, wrote: 'I urge legislators and the Law Commission to reconsider the law surrounding consent to sexual assault.' But as yet he has not had a commitment from the Commission that it will do so.
Whilst very few people have been convicted of 'assaulting' their partners during avowedly consensual sexual activity, the fact that the law exists matters. It has ramifications for domestic violence and sexual assault cases: if someone is accused of either of these crimes, and violence has definitely occurred, the defence cannot argue that 'consent' is a significant factor in the case.
Again, this situation is highly gendered. Men were only counted amongst potential rape victims in the UK in 1994, and in the United States in 2012! And in UK law women technically cannot 'rape' men, as a penis is required for that specific crime. This enables feminists to continue their assault on 'rape culture' and to portray men as predators of women.
I welcome this 'not guilty' verdict. I hope it leads to the end of the obscenity law in the UK. But I do not think it necessarily signifies the end of 'puritanical' or 'oppressive' law in the realm of sexuality in the UK. I believe the 'discourse' of sexuality is where most of the power occurs. And, the discourse around this case has not been 'liberating' so much as business as usual for those such as feminists who invest in continuing sexual repression, and in particular the demonisation of men's sexualities.
'The problem with Facebook's privacy controls... is that they exist'.
Yesterday's news that the Ramnit worm has harvested the login credentials of 45,000 British and French Facebook users seems to me a watershed moment for Facebook. If I were an investor, I'd wish I had already cashed out. Indications are, however, that founding CEO Mark Zuckerberg is in it for the long haul, in which case he's going to have to find a solution to a particularly intractable problem: how to protect a very large mass of users from identity fraud when his entire business is based on getting them to disclose as much information about themselves as possible.
I have long complained about Facebook's repeatedly changing privacy controls. This week, while working on a piece on identity fraud for Infosecurity, I've concluded that the fundamental problem with Facebook's privacy controls is not that they're complicated, confusing, and time-consuming to configure. The problem with Facebook's privacy controls is that they exist.
In May 2010, Zuckerberg enraged a lot of people, including me, by opining that privacy is no longer a social norm. As Judith Rauhofer has observed, the world's social norms don't change just because some rich geeks in California say so. But the 800 million people on Facebook would arguably be much safer if the service didn't promise privacy - like Twitter. Because then people wouldn't post all those intimate details about themselves: their kids' pictures, their drunken sex exploits, their incitements to protest, their porn star names, their birth dates... Or if they did, they'd know they were public.
Facebook's core privacy problem is a new twist on the problem Microsoft has: legacy users. Apple was willing to make earlier generations of its software non-functional in the shift to OS X. Microsoft's attention to supporting legacy users allows me to continue to run, on Windows 7, software that was last updated in 1997. Similarly, Facebook is trying to accommodate a wide variety of privacy expectations, from those of people who joined back when membership was limited to a few relatively constrained categories to those of people joining today, when the system is open to all.
Facebook can't reinvent itself wholesale: it is wholly and completely wrong to betray users who post information about themselves into what they are told is a semi-private space by making that space irredeemably public. The storm every time Facebook makes a privacy-related change makes that clear. What the company has done exceptionally well is to foster the illusion of a private space despite the fact that, as the Australian privacy advocate Roger Clarke observed in 2003, collecting and abusing user data is social networks' only business model.
Ramnit takes this game to a whole new level. Malware these days isn't aimed at doing cute little things like making hard drive failure noises or sending all the letters on your screen tumbling into a heap at the bottom. No, it's aimed at draining your bank account and hijacking your identity for other types of financial exploitation.
To do this, it needs to find a way inside the circle of trust. On a computer network, that means looking for an unpatched hole in software to leverage. On the individual level, it means the malware equivalent of viral marketing: get one innocent bystander to mistakenly tell all their friends. We've watched this particular type of action move through a string of vectors as the human action moves to get away from spam: from email to instant messaging to, now, social networks. The bigger Facebook gets, the bigger a target it becomes. The more information people post on Facebook - and the more their friends and friends of friends friend promiscuously - the greater the risk to each individual.
The whole situation is exacerbated by endemic poor security practices. Asking people to provide the same few bits of information for back-up questions in case they need a password reset. Imposing password rules that practically guarantee people will use and reuse the same few choices on all their sites. Putting all the eggs in services that are free at point of use and that you pay for in unobtainable customer service (not to mention behavioral targeting and marketing) when something goes wrong. If everything is locked to one email account on a server you do not control, if your security questions could be answered by a quick glance at your Facebook Timeline and a Google search, if you bank online and use the same passwords throughout... you have a catastrophe in waiting.
I realize not everyone can run their own mail server. But you can use multiple, distinct email addresses and passwords, you can create unique answers on the reset forms, and you can limit your exposure by presuming that everything you post *is* public, whether the service admits it or not. Your goal should be to ensure that when - it's no longer safe to say "if" - some part of your online life is hacked the damage can be contained to that one, hopefully small, piece. Relying on the privacy consciousness of friends means you can't eliminate the risk; but you can limit the consequences.
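The advice above - unique reset answers, nothing guessable from your public profile - can be sketched in a few lines. This is my own illustrative helper, not any site's actual API; the idea is simply that a "security answer" works best when it is a random secret stored in a password manager, not a fact about you:

```python
import secrets
import string

def random_reset_answer(length: int = 20) -> str:
    """Generate a random string to use as a security-question answer,
    so it cannot be recovered from a Facebook Timeline or a Google search."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique answer per service, never reused across sites.
# The site names here are placeholders for this sketch.
answers = {site: random_reset_answer() for site in ["bank", "email", "social"]}
```

The same principle - one random secret per site - limits the blast radius when any single account is compromised, which is exactly the containment the column argues for.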
Facebook is facing an entirely different risk: that people, alarmed at the thought of being mugged, will flee elsewhere. It's happened before.
Ignoring the ignorant will only make matters worse in the technology debate
My father was not a patient man. He could summon up some compassion for those unfortunates who were stupider than himself. What he couldn't stand was ignorance, particularly wilful ignorance. The kind of thing where someone boasts about how little they know.
That said, he also couldn't abide computers. "What can you do with a computer that you can't do with a paper and pencil?" he demanded to know when I told him I was buying a friend's TRS-80 Model III in 1981. He was not impressed when I suggested that it would enable me to make changes on page 3 of a 78-page manuscript without retyping the whole thing.
My father had a valid excuse for that particular bit of ignorance or lack of imagination. It was 1981, when most people had no clue about the future of the embryonic technology they were beginning to read about. And he was 75. But I bet if he'd made it past 1984 he'd have put some effort into understanding this technology that would soon begin changing the printing industry he worked in all his life.
While computers were new on the block, and their devotees were a relatively small cult of people who could be relatively easily spotted as "other", you could see the boast "I know nothing about computers" as a replay of high school. In American movies and TV shows that would be jocks and the in-crowd on one side, a small band of miserable, bullied nerds on the other. In the UK, where for reasons I've never understood it's considered more admirable to achieve excellence without ever being seen to work hard for it, the sociology plays out a little differently. I guess here the deterrent is less being "uncool" and more being seen as having done some work to understand these machines.
Here's the problem: the people who by and large populate the ranks of politicians and the civil service are the *other* people. Recent events such as the UK's Government Digital Service launch suggest that this is changing. Perhaps computers have gained respectability at the top level from the presence of MPs who can boast that they misspent their youth playing video games rather than, like the last generation's Ian Taylor, getting their knowledge the hard way, by sweating for it in the industry.
There are several consequences of all this. The most obvious and longstanding one is that too many politicians don't "get" the Net, which is how we get legislation like the DEA, SOPA, PIPA, and so on. The less obvious and bigger one is that we – the technology-minded, the early adopters, the educated users – write them off as too stupid to talk to. We call them "congresscritters" and deride their ignorance and venality in listening to lobbyists and special interest groups.
The problem, as Emily Badger writes for Miller-McCune as part of a review of Clay Johnson's latest book, is that if we don't talk to them how can we expect them to learn anything?
This sentiment is echoed in a lecture given recently at Rutgers by the distinguished computer scientist David Farber on the technical and political evolution of the Internet (MP3) (the slides are here [PDF]). Farber's done his time in Washington, DC, as chief technical advisor to the Federal Communications Commission and as a member of the Presidential Advisory Board on Information Technology. In that talk, Farber makes a number of interesting points about what comes next technically – it's unlikely, he says, that today's Internet Protocols will be able to cope with the terabyte networks on the horizon, and reengineering is going to be a very, very hard problem because of the way humans resist change - but the more relevant stuff for this column has to do with what he learned from his time in DC.
Very few people inside the Beltway understand technology, he says, citing the Congressman who asked him in all seriousness, "What is the Internet?" (Well, see, it's this series of tubes...) And so we get bad – that is, poorly grounded – decisions on technology issues.
Early in the Net's history, the libertarian fantasy was that we could get on just fine without their input, thank you very much. But as Farber says, politicians are not going to stop trying to govern the Internet. And, as he doesn't quite say, it's not like we can show them that we can run a perfect world without them. Look at the problems techies have invented: spam, the flaky software infrastructure on which critical services are based, and so on. "It's hard to be at the edge in DC," Farber concludes.
So, going back to Badger's review of Johnson: the point is it's up to us. Set aside your contempt and distrust. Whether we like politicians or not, they will always be with us. For 2012, adopt your MP, your Congressman, your Senator, your local councillor. Make it your job to help them understand the bills they're voting on. Show them that even if they don't understand the technology, there are votes in those who do. It's time to stop thinking of their ignorance as solely *their* fault.
Wendy Grossman explains why she has ditched the Google search engine for 'disruptive' young upstart DuckDuckGo
Back in about 1998, a couple of guys looking for funding for their start-up were asked: how could anyone compete with Yahoo! or AltaVista?
"Ten years ago, we thought we'd love Google forever," a friend said recently. Yes, we did, and now we don't.
It's a year and a bit since I began divorcing Google. Ducking the habit is harder than those "they have no lock-in" financial analysts thought when Google went public: as if habit and adaptation were small things. It's easy to switch Ctrl-K in Firefox to DuckDuckGo; it's significantly harder to unlearn ten years of Google's "voice".
When I tell this to Gabriel Weinberg, the guy behind DDG – his recent round of funding lets him add a few people to experiment with different user interfaces and redo DDG's mobile application – he seems to understand. He started DDG, he told The Rise to the Top last year, because of the increasing amount of spam in Google's results. Frustration made him think: for many queries, wouldn't searching just del.icio.us and Wikipedia produce better results? Since his first weekend mashing that up, DuckDuckGo has evolved to include over 50 sources.
"When you type in a query there's generally a vertical search engine or data source out there that would best serve your query," he says, "and the hard problem is matching them up based on the limited words you type in." When DDG can make a good guess at identifying such a source – such as, say, the National Institutes of Health – it puts that result at the top. This is a significant hint: now, in DDG searches, I put the site name first, where on Google I put it last. Immediate improvement.
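Weinberg's matching problem can be illustrated with a toy router – purely my sketch, not DDG's actual code, with invented sources and keywords – that scores candidate vertical sources by how many of the query's words they cover:

```python
# Toy illustration of routing a query to a vertical search source.
# The sources and their keyword sets are invented for this sketch.
SOURCES = {
    "nih.gov": {"health", "disease", "symptoms", "drug"},
    "wikipedia.org": {"history", "definition", "biography"},
    "stackoverflow.com": {"python", "error", "compile", "code"},
}

def best_source(query: str):
    """Return the source whose keywords best overlap the query, or None."""
    words = set(query.lower().split())
    scored = {src: len(words & kws) for src, kws in SOURCES.items()}
    top = max(scored, key=scored.get)
    return top if scored[top] > 0 else None

print(best_source("python compile error"))  # stackoverflow.com
```

The hard part, as Weinberg says, is that real queries are short and ambiguous: a few words rarely point unambiguously at one source, which is why the fallback to general results matters.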
This approach gives Weinberg a new problem, a higher-order version of the Web's broken links: as companies reorganize, change, or go out of business, the APIs he relies on vanish.
Identifying the right source is harder than it sounds, because the long tail of queries requires DDG to make assumptions about what's wanted.
"The first 80 percent is easy to capture," Weinberg says. "But the long tail is pretty long."
As Ken Auletta tells it in Googled, the venture capitalist Ram Shriram advised Sergey Brin and Larry Page to sell their technology to Yahoo! or maybe Infoseek. But those companies were not interested: the thinking then was portals and keeping site visitors stuck as long as possible on the pages advertisers were paying for, while Brin and Page wanted to speed visitors away to their desired results. It was only when Shriram heard that, Auletta writes, that he realized that baby Google was disruptive technology. So I ask Weinberg: can he make a similar case for DDG?
"It's disruptive to take people more directly to the source that matters," he says. "We want to get rid of the traditional user interface for specific tasks, such as exploring topics. When you're just researching and wanting to find out about a topic there are some different approaches – kind of like clicking around Wikipedia."
Following one thing to another, without going back to a search engine…sounds like my first view of the Web in 1991. But it also sounds like some friends' notion of after-dinner entertainment, where they start with one word in the dictionary and let it lead them serendipitously from word to word and book to book. Can that strategy lead to new knowledge?
"In the last five to ten years," says Weinberg, "people have made these silos of really good information that didn't exist when the Web first started, so now there's an opportunity to take people through that information." If it's accessible, that is. "Getting access is a challenge," he admits.
There is also the frontier of unstructured data: Google searches the semi-structured Web by imposing a structure on it – its indexes. By contrast, Mike Lynch's Autonomy, which Hewlett-Packard has just bought for around $11 billion, uses Bayesian logic to search unstructured data, which is what most companies have.
"We do both," says Weinberg. "We like to use structured data when possible, but a lot of stuff we process is unstructured."
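To give a flavour of what "Bayesian logic over unstructured data" means in the simplest terms – this is a minimal sketch of the general idea, emphatically not Autonomy's or DDG's technology – a document can be ranked by the probability it assigns to the query's words, with smoothing so unseen words don't zero the score out:

```python
import math
from collections import Counter

def relevance(query: str, document: str) -> float:
    """Score a document against a query: sum of log-probabilities of each
    query word under the document's word distribution, with add-one
    smoothing so words absent from the document don't zero the score."""
    doc_words = Counter(document.lower().split())
    total = sum(doc_words.values())
    vocab = len(doc_words) + 1
    score = 0.0
    for w in query.lower().split():
        p = (doc_words[w] + 1) / (total + vocab)
        score += math.log(p)
    return score

# Rank two toy "unstructured" documents against a query.
docs = ["the cat sat on the mat", "stock prices fell sharply today"]
best = max(docs, key=lambda d: relevance("cat mat", d))
```

No index or markup is required: the probabilities come straight from the raw text, which is the appeal of the approach for the unstructured data most companies hold.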
Google is, of course, a moving target. For me, its algorithms and interface are moving in two distinct directions, both frustrating. The first is Wal-Mart: stuff most people want. The second is the personalized filter bubble. I neither want nor trust either. I am more like the scientists Linguamatics serves: its analytic software scans hundreds of journals to find hidden links suggesting new avenues of research.
Anyone entering a category as thoroughly dominated by a single company as search is now is constantly asked: how can you possibly compete with [name]? Weinberg must be sick of being asked about competing with Google – and rightly so, because it's the wrong question. The right question is: how can he build a sustainable business? He's had some sponsorship while his user numbers are relatively low (currently 7 million searches a month), and he's talked about eventually adding context-based advertising – yet he's also promising little spam and privacy, with no tracking. Now, that really would be disruptive.
So here's my bet. I bet that DuckDuckGo outlasts Groupon as a going concern. Merry Christmas.