Hackgate - why did they do it?

Wendy M. Grossman on the culture of the press in the UK, and how to change it

The late, great Molly Ivins warns (in Molly Ivins Can't Say That, Can She?) about the risk to journalists of becoming "power groupies" who identify more with the people they cover than with their readers. In the culture being exposed by the escalating phone hacking scandals, the opposite happened: politicians and police became "publicity groupies" who feared tabloid wrath to such an extent that they identified with the interests of press barons more than with those of the constituents they are sworn to protect.

I put the apparent inconsistency between politicians' former acquiescence and their current baying for blood down to Stockholm syndrome: this is what happens when you hold people hostage through fear and intimidation for a few decades. When they can break free, oh, do they want revenge.

The consequences are many and varied, and won't be entirely clear for a decade or two. But surely one casualty must have been the balanced view of copyright frequently argued for in this column. Murdoch's media interests are broad-ranging. What kind of copyright regime do you suppose he'd like? But the desire for revenge is a really bad way to plan the future, as I said (briefly) on Monday at the Westminster Skeptics.

For one thing, it's clearly wrong to focus on News International as if Rupert Murdoch and his hired help were the only bad apples. In the 2006 report What price privacy now? the Information Commissioner listed 30 publications caught in the illegal trade in confidential information. The News of the World was only fifth; number one, by a considerable way, was the Daily Mail (the Observer was number nine). The ICO wanted jail sentences for those convicted of trading in data illegally, and called on private investigators' professional bodies to revoke or refuse licenses to PIs who breach the rules. Five years later, these are still good proposals.

Changing the culture of the press is another matter. When I first began visiting Britain in the late 1970s, I found the tabloid press absolutely staggering. I began asking the people I met how the papers could do it.

"That's because *we* have a free press," I was told in multiple locations around the country. "Unlike the US." This was only a few years after The Washington Post backed Bob Woodward and Carl Bernstein's investigation of Watergate, so it was doubly baffling. Tom Stoppard's 1978 play Night and Day explained a lot. It dropped competing British journalists into an escalating conflict in a fictitious African country. Over the course of the play, Stoppard's characters both attack and defend the tabloid culture.

"Junk journalism is the evidence of a society that has got at least one thing right, that there should be nobody with power to dictate where responsible journalism begins," says the naïve and idealistic new journalist on the block.

"The populace and the popular press. What a grubby symbiosis it is," complains the play's only female character, whose second marriage – "sex, money, and a title, and the parrots didn't harm it, either" – had been tabloid fodder.

The standards of that time now seem almost quaint. In the 2009 movie Starsuckers, filmmaker Chris Atkins fed fabricated celebrity stories to a range of tabloids; all were published. The documentary also showed illegal methods of obtaining information in action – right around the time the Press Complaints Commission was publishing a report concluding that "there is no evidence that the practice of phone message tapping is ongoing".

Someone on Monday asked why US newspapers are better behaved despite the protection of the First Amendment and the absence of onerous libel laws. My best guess is fear of lawsuits. Conversely, Time magazine argues that Britain's libel laws have encouraged illegal information gathering: publication requires indisputable evidence. I'm not completely convinced: the libel laws are not new, and economics and new media are forcing change on press culture.

A lot of dangers lurk in the calls for greater press regulation. Phone hacking is illegal. Breaking into other people's computers is illegal. Enforce those laws. Send those responsible to jail. That is likely to be a better deterrent than any regulator could manage.

It is extremely hard to devise press regulations that don't enable cover-ups. For example, on Wednesday's Newsnight, Louise Mensch, an MP on the DCMS committee conducting the hearings, called for a requirement that politicians disclose all meetings with the press. I get it: expose too-cosy relationships. But whistleblowers depend on confidentiality, and the last thing we want is for politicians to become as difficult to access as tennis stars, with their contact with the press limited to formal press conferences.

Two other lessons can be derived from the last couple of weeks. The first is that you cannot assume that confidential data can be protected simply by access rules. The second is the importance of alternatives to commercial, corporate journalism. Tom Watson has criticized the BBC for not taking the phone hacking allegations seriously. But it's no accident that the trust-owned Guardian was the organization willing to take on the tabloids. There's a lesson there for the US, as the FBI and others prepare to investigate Murdoch and News Corp: keep funding PBS.  

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-AT-SA Flickr: World Economic Forum

The Grey Hour

There is a fundamental conundrum that goes like this. Users want free information services on the Web. Advertisers will support those services if users will pay in personal data rather than money. Are privacy advocates spoiling a happy agreement or expressing a widely held concern that just hasn't found expression yet? Is it paternalistic and patronizing to say that the man on the Clapham omnibus doesn't understand the value of what he's giving up? Is it an expression of faith in human nature to say that on the contrary, people on the street are smart, and should be trusted to make informed choices in an area where even the experts aren't sure what the choices mean? Or does allowing advertisers free rein mean the Internet will become a highly distorted, discriminatory, immersive space where the most valuable people get the best offers in everything from health to politics?

None of those questions is a straw man. The middle two are the extreme end of the industry point of view as presented at the Online Behavioral Advertising Workshop sponsored by the University of Edinburgh this week. That extreme shouldn't be ignored; Kimon Zorbas from the Internet Advertising Bureau, who voiced those views, also genuinely believes that regulating behavioral advertising is a threat to European industry. Can you prove him wrong? If you're a politician intent on reelection, hear that pitch, and can't document harm, do you dare to risk it?

At the other extreme are the views of Jeff Chester, from the Center for Digital Democracy, who laid out his view of the future both here and at CFP a few weeks ago. If you read the reports the advertising industry produces for its prospective customers, they're full of neuroscience and eyeball tracking. Eventually, these practices will lead, he argues, to a highly discriminatory society: the most "valuable" people will get the best offers – not just free tickets to sporting events but also the best access to financial and health services. Online advertising contributed to the subprime loan crisis and the obesity crisis, he said. You want harm?

It's hard to assess the reality of Chester's argument. I trust his research into the documents advertising companies produce for their customers. What isn't clear is whether the neuroscience these companies claim actually works. Certainly, one participant here said real neuroscientists heap scorn on the whole idea – and I am old enough to remember the mythology surrounding subliminal advertising.

Accordingly, the discussion here seems to me less a single spectrum than a triangle, with the defenders of online behavioral advertising at one point, Chester and his neuroscience at another, and perhaps Judith Rauhofer, the workshop's organizer, at a third, with a lot of messy confusion in the middle. Upcoming laws, such as the revision of the EU ePrivacy Directive and various other regulatory efforts, will have to create some consensual order out of this triangular chaos.

The fourth episode of Joss Whedon's TV series Dollhouse, "The Gray Hour", had that week's characters enclosed inside a vault with an hour to accomplish their mission of theft – the time it takes for the security system to reboot. Is this online behavioral advertising's grey hour, its opportunity to get ahead before we realize what's going on?

A persistent issue is technology design.

One of Rauhofer's main points is that the latest mantra is, "This data exists; it would be silly not to take advantage of it." Her answer to one of those middle positions – that we should regulate not the collection of data but only its use – makes sense to me: no one can abuse data that has not been collected. And what does a privacy policy mean when the company actually collecting the data and compiling the profiles is completely hidden? One help would be teaching computer science students ethics and responsible data practices. The science fiction writer Charlie Stross noted the other day that the average age of entrepreneurs in the US is roughly ten years younger than in the EU. The reason: health insurance. Isn't it possible that starting up at a more mature age leads to a different approach to the social impact of what you're selling?

No one approach will solve this problem within the time we have to solve it. On the technology side, defaults matter. As researcher Chris Soghoian has noted, the "software choice architect" is rarely the software developer, more usually the legal or marketing department. The three biggest browser manufacturers most funded by advertising not-so-mysteriously have the least privacy-friendly default settings. Advertising is becoming an arms race: first cookies, then Flash cookies, now online behavioral advertising, browser fingerprinting, geolocation, comprehensive profiling.

The law also matters. Peter Hustinx, lecturing last night, argued that existing principles are right; they just need stronger enforcement and better application.

Consumer education would help – but for that to be effective we need far greater transparency from all these – largely American – companies.

"What harm can you show has happened?" Zorbas challenged. Rauhofer's reply: you do not have to prove harm when your house is bugged and constantly wiretapped. "That it's happening is the harm."

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-BY Flickr: Martin Pettitt

Free Speech, not data

Congress shall make no law…abridging the freedom of speech…

Is data mining speech? This week, in issuing its ruling in Sorrell v IMS Health, the Supreme Court of the United States took the view that it can be. The majority (6-3) opinion struck down a Vermont law that prohibited drug companies from mining physicians' prescription data for marketing purposes. While the ruling of course has no legal effect outside the US, the primary issue in the case – the use of aggregated patient data – is being considered in many countries, including the UK, and the key technical debate is relevant everywhere.

IMS Health is a new species of medical organization: it collects aggregated medical data and mines it for client pharmaceutical companies, who use the results to determine their strategies for marketing to doctors. Vermont's goal was to save money by encouraging doctors to prescribe lower-cost generic medications. The pharmaceutical companies know, however, that marketing to doctors is effective. IMS Health accordingly sued to get the law struck down, claiming that it abridged the company's free speech rights. NGOs from the digital (EFF and EPIC) to the not-so-digital (AARP), along with a host of medical organizations, filed amicus briefs arguing that patient information is confidential data that has never before been considered to fall within "free speech". The medical groups were concerned about the threat to trust between doctors and patients; EPIC and EFF added the more technical objection that the deidentification measures taken by IMS Health are inadequate.

At first glance, the SCOTUS ruling is pretty shocking. Why can't a state protect its population's privacy by limiting access to prescription data? How do marketers have free speech?

The court's objection – or rather, the majority opinion – was that the Vermont law is selective: it prohibits the particular use of this data for marketing but not other uses. That, to the six-justice majority, made the law censorship. The three remaining justices dissented, partly on privacy grounds, but mostly on the well-established basis that commercial speech typically enjoys a lower level of First Amendment protection than non-commercial speech.

When you are talking about traditional speech, censorship means selectively banning a type or source of content. Take Usenet in the early 1990s as an example. When spam became a problem, a group of community-minded volunteers devised cancellation practices that took note of this principle and defined spam according to the behavior involved in posting it. Deciding a particular posting was spam required no subjective judgments about who posted the message or whether it was a commercial ad. Instead, postings were scored against a set of published, objective criteria: x number of copies, posted to y number of newsgroups, over z amount of time, or off-topic for that particular newsgroup, or a binary file posted to a text-only newsgroup. In the Vermont case, if you can accept the argument that data mining is speech, as SCOTUS did, then the various uses of the data are content, and therefore a law that bans only one of many possible uses, or bans use by specified parties, is censorship.
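For the technically curious, the canonical such measure was the Breidbart Index, which scored a posting purely on its behavior. A minimal sketch in Python, with an invented spam run:

    import math

    def breidbart_index(copies):
        # Each element of `copies` is the list of newsgroups to which
        # one copy of the message was crossposted. The index sums the
        # square roots of those crosspost counts; a total of 20 or more
        # (within a 45-day window) made a posting cancelable as spam,
        # regardless of who posted it or what it said.
        return sum(math.sqrt(len(groups)) for groups in copies)

    # Hypothetical spam run: nine copies, each crossposted to nine groups.
    spam_run = [[f"alt.example{i}" for i in range(9)] for _ in range(9)]
    print(breidbart_index(spam_run))  # 27.0 -- well over the threshold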

The decision still seems intuitively wrong to me, as it apparently also did to the three dissenting justices, who viewed the Vermont law instead as an attempt to regulate commercial activity, something that has never been covered by the First Amendment.

But note this: the concern for patient privacy that animated much of the interest in this case was only a bystander (which must surely have pleased the plaintiffs).

Obscured by this case, however, is the technical question that should be at the heart of such disputes (several other states have passed Vermont-style laws): how effectively can data be deidentified? If it can be easily reidentified and linked to specific patients, making it available for data mining ends medical privacy. If it can be effectively anonymized, then the objections go away.

At this year's Computers, Freedom, and Privacy there was some discussion of this issue; an IMS Health representative and several of the experts EPIC cited in its brief were present and disagreeing. Khaled El Emam, from the University of Ottawa, filed a brief (PDF) opposing EPIC's analysis; Latanya Sweeney, who did the seminal work in this area in the early 2000s, followed with a rebuttal. From these, my non-expert conclusion is that just as you cannot trust today's secure cryptographic systems to remain unbreakable as computing power continues to increase in speed and decrease in price, you cannot trust today's deidentification to remain robust against the increasing masses of data available for matching to it.
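To see why the skepticism is warranted, consider the linkage attack Sweeney made famous: records stripped of names can often be re-identified by joining them to a public dataset on shared "quasi-identifiers" such as ZIP code, birth date, and sex. A minimal sketch, with invented data:

    deidentified = [
        {"zip": "13850", "dob": "1954-07-31", "sex": "F", "diagnosis": "..."},
    ]
    voter_roll = [
        {"name": "Jane Doe", "zip": "13850", "dob": "1954-07-31", "sex": "F"},
    ]

    QUASI_IDS = ("zip", "dob", "sex")

    def reidentify(records, public):
        key = lambda r: tuple(r[q] for q in QUASI_IDS)
        index = {}
        for person in public:
            index.setdefault(key(person), []).append(person)
        # A record is re-identified when its quasi-identifiers match
        # exactly one person in the public dataset.
        return [(r, matches[0]) for r in records
                for matches in [index.get(key(r), [])] if len(matches) == 1]

    for record, person in reidentify(deidentified, voter_roll):
        print(person["name"], "->", record["diagnosis"])

The more auxiliary datasets there are to join against, the more of those matches become unique – which is exactly why deidentification degrades over time.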

But it seems the technical and privacy issues raised by the Vermont case are yet to be decided. Vermont is free to try again to frame a law that has the effect the state wants but takes a different approach. As for the future of free speech, it seems clear that it will encompass many technological artefacts still being invented – and that it will be quite a fight to keep it protecting individuals instead of, increasingly, commercial enterprises.

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.


Bits of the realm

Wendy M. Grossman on Bitcoins and whether they can succeed where others have failed

Money is a collective hallucination. Or, more correctly, money is an abstraction that allows us to exchange – for example – writing words for food, heat, or a place to live. Money means the owner of the local grocery store doesn't have to decide how many pounds of flour and Serrano ham 1,000 words are worth, and I don't have to argue copyright terms while paying my mortgage.

But, as I was reading lately in The Coming Collapse of the Dollar and How to Profit From It by James Turk, the owner of GoldMoney, that's all today's currencies are: abstractions. Fiat currencies. The real thing disappeared when the US left the gold standard in 1971. Accordingly, none of the currencies I regularly deal with – pounds, dollars, euros – are backed by anything more than their respective governments' "full faith and credit". Is this like Tinker Bell? If I stop believing, will they cease to exist? Certainly some people think so, and that's why, as James Surowiecki wrote in The New Yorker in 2004, some people believe that gold is the One True Currency.

"I've never bought gold," my father said in the late 1970s. "When it's low, it's too expensive. When it's high, I wish I'd bought it when it was low." Gold was then working its way up to its 1980 high of $850 an ounce. Until 2004 it did nothing but decline. Yesterday, it closed at $1518.

That's if you view the world from the vantage point of the dollar. If gold is your sun and other currencies revolve around it like imaginary moths, nothing's happened. An ounce just buys a lot more dollars now than it did and someday will be tradable for wagonloads of massively devalued fiat currencies. You don't buy gold; you convert your worthless promises into real stored value.

Personally, I've never seen the point of gold. It has relatively few real-world uses. You can't eat it, wear it, or burn it for heat and light. But it does have the useful quality of being a real thing, and when you could swap dollars for gold held in the US government's vault, dollars, too, were real things.

The difficulty with Bitcoins is that they have neither physical reality nor a long history (even if that history is one of increasing abstraction). Using them requires people to make the jump from the national currency they know straight into bits of code backed by a bunch of mathematics they don't understand.
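For what it's worth, the mathematics is less mysterious than it sounds. At Bitcoin's heart is a hashcash-style proof-of-work: miners search for a number that drives a cryptographic hash below a target, which is expensive to find and trivial for anyone else to verify. A toy sketch (real Bitcoin hashes a structured block header and adjusts the difficulty automatically):

    import hashlib

    def mine(block_data, difficulty_bits=16):
        # Try nonces until the SHA-256 hash of the data falls below
        # the target: costly to find, instant to verify.
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    print("proof-of-work nonce:", mine("toy-block"))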

Alternative currencies have been growing for some time now – probably the first was Ithaca Hours, which are accepted by many downtown merchants in my old home town of Ithaca, NY. What gives Ithaca Hours their value is that you trade them with people you know and can trust to support the local economy. Bitcoins up-end that: you trade them with strangers who can't find out who you are. The big advantage, as Bitcoin Consultancy co-founder Amir Taaki explains on Slashdot, is that their transaction costs are very, very low.

The idea of cryptographic cash is not new, though the peer-to-peer implementation is. Anonymous digital cash was first mooted by David Chaum in the 1980s; his company, DigiCash, began life in 1990 and by 1993 had launched ecash. At the time, it was widely believed that electronic money was an inevitable development. And so it likely is, especially if you believe e-money specialist Dave Birch, who would like nothing more than to see physical cash die a painful death.

But the successful electronic transaction systems are those that build on existing currencies and structures. PayPal, founded in 1998, achieved its success by enabling online use of existing bank accounts and credit cards. M-Pesa and other world-changing mobile phone schemes are enabling safe and instant transactions in the developing world. Meanwhile, DigiCash went bankrupt in 1998, and every other digital cash attempt of the 1990s also failed.

For comparison, ten-year-old GoldMoney's latest report says it's holding $1.9 billion in precious metals and currencies for its customers – still tiny by global standards. The most interesting thing about GoldMoney, however, is not the gold bug aspect but its reinvention of gold as electronic currency: you can pay other GoldMoney customers in electronic shavings of gold (minimum one-tenth of a gram) at a fraction of international banking costs. "Humans will trade anything," writes Danny O'Brien in his excellent discussion of Bitcoins. Sure: we trade favors, baseball cards, frequent flyer miles, and information. But Birch is not optimistic about Bitcoin's long-term chances, and neither am I, though for different reasons.

I believe that people are very conservative about what they will take in trade for the money they've worked hard to earn. Warren Buffett, like his mentor Benjamin Graham, typically offers this advice about investing: don't buy things you don't understand. By that rule, Bitcoins fail. Geeks are falling on them as they would on any exciting new start-up, but I'll guess that most people would rather bet on horses than take Bitcoins. There's a limit to how abstract we like our money to be.

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-AT Flickr: covilha

Prevent - incompetence

'Unnecessary, inefficient, and above all, counter-productive.' Oliver Keyes on the government's counter-terrorism review

The British government has released a new counter-terrorism review, named Prevent, covering both how its current strategies are working and what strategies it plans to implement in the future. At first glance it might not be clear why this is of interest to open rights campaigners – after all, most of it covers working with faith-based groups, disseminating counter-ideologies, so on, so forth. But then you get to Chapter 10, and the implications of the review become worrying.

Currently, Sections 1-3 of the Terrorism Act 2006 make it illegal to engage in terrorism, disseminate terrorist literature, or fail to remove such literature after legal notice has been given. That's been applied not only to physical publications but also to the net, although Prevent notes that no notices have actually been served – UK-based ISPs have generally been responsive enough to remove any problematic literature before the courts have had to get involved. Evidently, however, this isn't enough; after all, a lot of websites are hosted overseas, where UK criminal law can't apply.

The solution, according to Prevent, is to ensure a "filtering product" is rolled out equally across government agencies, schools, libraries, colleges, and other public buildings; while they have already tried to do this, there are evidently concerns that it hasn't been adopted widely enough (apparently nobody implementing it thought to include a mechanism to check uptake rates).

This "filtering product" for public sector facilities would be sent, along with a list of "violent and unlawful" URLs, to voluntary filtering organisations such as the Internet Watch Foundation (IWF), which are partnered with ISPs to provide filtering for their customers.

At first glance, it's difficult to see how these ideas can be anything but unnecessary, inefficient, and above all, counter-productive.

Let's take the voluntary filtering organisations, for example. The key word there is "voluntary"; some ISPs use them, some ISPs do not. The result is that if you base your counter-terrorism strategy on distributing blacklists to these bodies, there will inevitably be some ISPs – and their customers – not covered.

This means the government will have to run two distinct processes in parallel: one for those ISPs who sign up to such services, in which lists of "violent and unlawful" websites are provided, and one for everyone else, in which the security services have to go around providing URLs individually and hoping the ISPs are nice enough to block them before the courts get involved.

In any case, this filtering strategy doesn't seem necessary – after all, the same report which claims ISPs need additional filtering is the one which says ISPs are so responsive to blacklists and takedown notices that the government hasn't even had to apply the law yet. It's also the same report which admits in Chapter 8 that "comparatively few texts circulate on the internet" in the first place.

It also has to be pointed out that internet filtering isn't perfect; as with anything that involves humans, it's subject to errors. The report's use of the IWF as an example might have been because it's the best-known filtering organisation, but in fact it has that status largely due to a series of widely-publicised errors that ended with the body blocking Wikipedia for hosting child porn – specifically, the cover of a Scorpions album.

Now, I hate 80s hair-metal as much as the next man, but even I think that's a bit harsh. Regardless, the point remains: internet filtering already leads to the collateral damage of blocking educational and legitimate content, and that collateral damage can only increase as the amount of filtering does.

The other edge of the sword, of course, is that just as internet filtering inevitably over-filters educational content, it will inevitably fail to filter precisely the sort of extremist literature which Prevent is attempting to suppress.

The lists simply cannot contain every "violent and unlawful" website, and even if they could, this sort of suppression cannot cover IRC conversations, usenet or FTP transfers, or any of a thousand other ways of distributing problematic content via the internet. All this filtering strategy will achieve is driving extremist literature underground, making it harder to find and track while also providing an utterly false sense of security.
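To make the point concrete, here is a minimal sketch, with invented entries and URLs, of how such a blacklist filter works – and how trivially exact-match blocking is evaded or outrun:

    BLACKLIST = {"badsite.example/page", "forum.example/thread/123"}

    def is_blocked(url):
        # Strip the scheme before matching; real filters normalize
        # more aggressively, but the principle is the same.
        return url.split("://", 1)[-1] in BLACKLIST

    print(is_blocked("http://badsite.example/page"))      # True: listed
    print(is_blocked("http://badsite.example/page?x=1"))  # False: trivial evasion
    print(is_blocked("http://mirror.example/page"))       # False: content moved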

Mostly, of course, this strategy is simply counter-productive. Forget, for a second, the inefficiencies of implementing this, or whether or not doing so is even justified. Even if it is justified, and even if it can be implemented in some perfect format in which no educational content is blocked and all extremist work is, it's not worth doing.

Why? Because if you want to create an environment in which the "the western world is out to get us" mantra of extremism has some credence, I can think of no better way than to dedicate significant time and effort to suppressing anyone who expresses a contrary opinion, forcing them underground to places where no counter-argument exists.

As it happens, an official comment on this subject supports this argument, saying that "radicalisation tends to occur in places where terrorist ideologies, and those that promote them, go uncontested and are not exposed to free, open and balanced debate and challenge". Unfortunately, that comment is from... oh. The Prevent report itself.

Inefficient, unnecessary, counter-productive – and now not even consistent. Welcome to the British government.


Oliver Keyes is a blogger and writer on the subjects of politics, constitutional law and civil liberties

 

Image: CC-AT-SA Flickr: Steve Cadman

The Wild West web

Jag Bahra on the rule of law online and the importance of due process

As the internet continues to become a more significant part of everyone's lives, it presents new legal challenges. Thanks to the slow legislative process, the law has continued to stumble several steps behind as the web has matured and flourished in the 21st century.

The decentralised nature of the internet also means that the enforcement of laws such as copyright, defamation and privacy has become increasingly difficult. Individual culprits are difficult or impossible to track down – and often when they are found it is pointless taking them to court as the aggrieved party is unlikely to obtain any real damages from them.

Consequently we are beginning to see legislators, law enforcement and litigants turn their attention to intermediaries – such as ISPs, domain registrars, and online services such as Twitter and Facebook – in order to enforce the law online. Of course it is pointless to argue that enforcement measures should not be available on the internet - the law needs to be upheld just as it would in the real world. 

However bullying intermediaries into acting at the drop of a hat is not the way to do it. We need to think carefully about where we should draw the line, and adequate safeguards must be put in place to ensure that innocent users are not affected.

Intermediaries – whether ISPs or services such as Twitter – are the gatekeepers of the online world. They play an increasingly important role in the flow of ideas and information. As regulation inevitably becomes more commonplace it is of paramount importance that due process is followed. If it is ignored then the internet will become shaped by threats of litigation and scary letters from expensive lawyers.

An ISP may be liable for secondary copyright infringement under s.23 of the Copyright, Designs and Patents Act 1988; however, some protection is afforded by the Copyright in the Information Society Directive 2001 and the E-Commerce Directive Regulations 2002. These essentially exclude liability where the ISP makes a copy of a protected work as an intermediary and as part of a technical process (such as caching and hosting).

The protection for hosting is subject to the requirement that the ISP acts expeditiously to remove the content on obtaining knowledge that that content is unlawful. American law offers ISPs a similar protection thanks to the 'safe harbour' provisions of the DMCA.

The anti-piracy measures of the Digital Economy Act create significant obligations for ISPs, who will be forced to play an important role in enforcement. When the Act eventually comes into force ISPs will have to notify users of infringement reports, provide rights-holders with infringement lists whenever required, and may be compelled to impose the dreaded 'technical measures' such as limitation of bandwidth and account suspension. Failure to adhere to these obligations could result in the imposition of a fine of up to £250,000.

Over the last few years some ISPs have entered into voluntary 'self-regulation' schemes in which they will comply with requests to remove content in order to be absolved from liability. This may be as a measure against copyright infringement (as in the case of Irish ISP Eircom). But more commonly, self-regulation schemes are put in place in order to block access to child pornography. The UK's major ISPs operate under such a system in conjunction with the Internet Watch Foundation (IWF). Finnish ISPs were bullied into voluntary blocking by politicians who threatened to legislate if they did not comply. (For more details see my previous article on web blocking.)

There has been recent debate over the position of Nominet, the registry for .uk domain names. To date 2,667 domains have been suspended, mostly for the purposes of protecting consumers from counterfeit goods, phishing, and fraud. Nominet is currently in a state of ad-hoc self-regulation, taking instructions from the Police Central e-Crime Unit and the Serious Organised Crime Agency.

There is no formal procedure in place and the police have no specific powers of suspension. Nominet is asked – not ordered – to take material down, but the police have threatened that by failing to comply Nominet may leave itself open to civil or even criminal liability as an accessory to the offence. No such case has yet been brought, as Nominet always co-operates.

ISPs can be held liable for defamatory postings made by their users, but again may find protection in the E-Commerce Directive Regulations 2002, as in the case of Bunt v Tilley (2006). AOL, Tiscali and BT were sued for defamatory remarks that were made by internet users in chat rooms. It was ruled that "an ISP which performs no more than a passive role in facilitating postings on the internet cannot be deemed to be a publisher at common law." All three ISPs were found to be not liable for the defamatory comments made.

However the Court went on to say that "if a person knowingly permits another to communicate information which is defamatory, when there would be an opportunity to prevent the publication, there would seem to be no reason in principle why liability should not accrue." The ruling therefore follows the 2002 Regulations, indicating that an ISP will face liability if it receives notification of the defamatory statements and takes no action to remove them.

Twitter has been making headlines after many of its users breached a privacy injunction taken out by Ryan Giggs to stop the media reporting his indiscretions. This seems to have sparked a panic among politicians, furthering the idea that the internet is currently some sort of lawless 'Wild West' that needs to be tamed. Last week the Attorney General warned that Twitter users who continue to breach privacy injunctions could be held in contempt of court, stating that he would intervene in proceedings if necessary.

Twitter has also recently been the subject of much controversy for disclosing the personal details of its users in connection with libel proceedings. South Tyneside Council went all the way to a Californian court to obtain an order requiring Twitter to release the details (IP addresses, email addresses, names, telephone numbers) of the user accounts which had tweeted links to a blog containing allegations that councillors had been involved in various forms of misconduct.

Despite receiving criticism for its action, Twitter actually acted as fairly as it could in the circumstances. The court order could not be ignored. Instead, Twitter contested it so that it could be 'unsealed' – allowing the relevant users to be notified of the impending action and to mount a defence should they choose to do so (users connected with Wikileaks are currently defending their own case in the Californian court). By contrast, Google and Facebook, who regularly receive and comply with such requests in volume, do not bother to do this.

Twitter made their position clear at the e-G8 Summit in France, where Tony Wang stated that "Platforms have a responsibility, not to defend that user but to protect that user's right to defend him or herself". He went on to announce that Twitter would generally hand over details when required by law, but would continue to notify concerned parties of any disclosure of details.

So why should we be concerned with all of this? Online services are subject to the rule of law just like everybody else. But as the facilitators of communication it is essential that they are not regulated in a way which will impede lawful use. An unbalanced, overzealous approach could have dangerous implications for freedom of expression.

For example, as far as copyright infringement and libel are concerned, ISPs clearly do not have the time or resources to consult a lawyer every time they receive a takedown request. There is little incentive to thoroughly investigate whether or not an allegation of infringement or libel has any merit. If faced with liability, they will seek to block or remove content quickly, and as a result some content will inevitably be taken down whether or not it is in fact unlawful.

We have already seen examples of innocent websites suffering in this way. In 2007 a libel dispute arose between Uzbek millionaire Alisher Usmanov and human rights activist Craig Murray. Murray made supposedly defamatory comments on his blog, claiming Usmanov had a criminal past. The administrator of the blog refused to remove the content, so the hosting company Fasthosts terminated the administrator's account, taking down all of the administrator's websites. These happened to include the website of Boris Johnson, who was reportedly furious and declared "We live in a world where internet communication is increasingly vital, and this is a serious erosion of free speech."

Many websites that suffered obviously had nothing to do with the dispute, making clear the potential for collateral damage in a system in which takedowns are effected without any real consideration. With regard to libel, it is also clear that the system may be abused by parties who wish to protect their own reputations at the expense of free speech or honest reporting by others.

Individuals or corporations could potentially 'cry defamation' whenever any legitimate criticisms are made. ISPs would remove the posting for fear of leaving themselves open to liability, and all criticism would be silenced. This presents serious potential for a chilling effect on freedom of speech.

It is clear that proper procedures with judicial oversight must be put in place. Nominet's current system of informal self-regulation allows the police to suspend websites without even proving that a crime has been committed. See, for example, the suspension of the Fitwatch website after the student protests in November 2010.

Do we really want to create a situation in which intermediaries are forced to remove content on the basis of accusation, regardless of whether any offence has actually been committed? This would certainly undermine the fundamental right to a fair trial. The need for a court order would afford at least a minimum standard of protection to internet users, and give accused parties the chance to fight their corner.

Regulation of what happens on the internet will always be tricky, but this should not be an excuse to disregard basic rights and principles. If we get this wrong, the internet could cease to be the bastion of free speech that it currently is - and this would be a disaster for all of us.

 

Jag Bahra is a law graduate, civil liberties & copyleft enthusiast

Image: CC-AT-NC Flickr: tarotastic (Taro Taylor)

The democracy divide

Wendy M. Grossman reports from CFP 2011 day two, where the TSA & internet filtering were the main topics being discussed

Good news: the Transportation Security Administration audited itself and found it was doing pretty well. At least, so said Kimberly Walton, special counsellor to the administrator for the TSA.

It's always tough when you're the raw meat served up to the Computers, Freedom, and Privacy crowd, and Walton was appropriately complimented for her courage in appearing. But still: we learned little that was new, other than that the TSA wants to move to a system of identifying people who need to be scrutinized more closely.

"Like CAPPS-II?" asked the ACLU's Daniel Mach. "It was a terrible idea."

No. It's different. Exactly how, Walton couldn't say. Yet.

Americans spent the latter portion of last year protesting the TSA's policies – so why has so little happened? It's arguable that much of the reason is that those protests were online complaints rather than massed ranks of rebellious passengers at airport terminals. And a lot has to do with the fact that FOIA requests and lawsuits move slowly. EPIC, said Ginger McCall, has been unable to get any answers from the TSA except by lawsuit.

Apparently it's easier to topple a government.

"Instead of the reign of terror, the reign of terrified," said Deborah Hurley (CFP2001 chair) during the panel considering the question of social media's role in the upheavals in Egypt and Tunisia. Those on the ground – Jillian York, Nasser Weddady, Mona Eltawy – say instead that social media enabled little pockets of protest, sometimes as small as just one individual, to find each other and coalesce like the pooling blobs reforming into the liquid metal man in Terminator 2. But what appeared to be sudden reversals of rulers' fortunes to outsiders who weren't paying attention were instead the culmination of years of small rebellions.

The biggest contributor may have been video, providing non-repudiable evidence of human rights abuses. When Tunisia's President Zine al-Abidine Ben Ali blocked video sharing sites, Tunisians turned to Facebook. "Facebook has a lot of problems with freedom of expression," said York, "but it became the platform of choice because it was accessible, and Tunisia never managed to block it for more than a couple of weeks because when they did there were street protests."

Technology may or may not be neutral, but its context never is. In the US for many years, Section 230 of the Communications Decency Act has granted somewhat greater protection to online speech than to that in traditional media. The EU long ago settled these questions by creating the framework of notice-and-takedown rules and generally refusing to award online speech any special treatment. (You may like to check out EDRI's response to the ecommerce directive (PDF).)

Paul Levy, a lawyer with Public Citizen and organizer of the S230 discussion, didn't like the sound of this. It would be, he argued, too easy for the unhappily criticized to contact site owners and threaten to sue: the heckler's veto can trump any technology, neutral or not.

What, Hurley asked Google's policy director, Bob Boorstin, to close the day, would be the one thing he would do to improve individuals' right to self-determination? Give them more secure mobile devices, he replied. "The future is all about what you hold in your hand." Across town, a little earlier, Senators Franken and Blumenthal introduced the Location Privacy Protection Act 2011.

Certainly, mobile devices – especially Speak to Tweet – gave Africa's dissidents a direct way to get their messages out. But at the same time, the tools used by dictators to censor and suppress internet speech are those created by (almost entirely) US companies.

Said Weddady in some frustration, "Weapons are highly regulated. If you're trading in fighter jets there are very stringent frames of regulations that prevent these things from falling into the wrong hands. What is there for the internet? Not much." Worse, he said, no one seems to be putting political support behind enforcing the rules that do exist. In the West we argue about filtering as a philosophical issue. Elsewhere, he said, it's life or death. "What am I worth if my ideas remain locked in my head?"

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-AT-NC-SA Flickr: GodzillaRockit (Ionan Lumis)

Public private lives

Wendy M. Grossman, from CFP 2011, on Do Not Track and privacy in the US and EU

A bookshop assistant followed me home the other day, wrote down my street address, took a photograph of my house. Ever since, every morning I find an advertising banner draped over my car windshield that I have to remove before I can drive to work.

That is, of course, a fantasy scenario. But it's an attempt to describe what some of today's web site practices would look like if transferred into the physical world. That shops do not follow you home is why the analogy between web tracking and walking on a public street or going into a shop doesn't work.

The analogy was raised by Jim Harper, the director of information policy studies at the Cato Institute, on the first day of ACM Computers, Freedom, and Privacy, in his panel on the US's Do Not Track legislation. Casual observers on the street are not watching you in a systematic way; you can visit a shop anonymously, and, depending on its size and the number of staff, you may or may not be recognized the next time you visit.

This is not how the web works. Web sites can fingerprint your browser by the ecology of add-ins that are peculiar to you and use technologies such as cookies and Flash cookies to track you across the web and serve up behaviorally targeted ads. The key element – and why this is different from, say, using Gmail, which also analyzes content to post contextual ads – is that all of this is invisible to the consumer.
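A minimal sketch of the idea, in the spirit of the EFF's Panopticlick experiment, with made-up attribute values: individually unremarkable properties combine into a near-unique identifier, no cookie required.

    import hashlib

    # Invented values; real trackers also use fonts, screen size,
    # time zone, canvas rendering, and much more.
    attributes = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
        "plugins": "Flash 10.3; QuickTime; Java 1.6",
        "fonts": "Arial, Comic Sans MS, Ubuntu",
        "timezone": "-60",
    }

    fingerprint = hashlib.sha256(
        "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
    ).hexdigest()

    # Stable across visits -- unlike a cookie, there is nothing to delete.
    print(fingerprint[:16])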

As Harlan Yu, a PhD student in computer science at Princeton, said, advertisers and consumers are in an arms race. How wrong is this?

Clearly, enough consumers find behavioral targeting creepy enough that there is a small but real ecology of ad-blocking technologies – the balking consumer side of the arms race – including everything from Flashblock and Adblock for Mozilla to the do-not-track setting in the latest version of Internet Explorer. (Though there are more reasons to turn off ads than privacy concerns: I block them because anything moving or blinking on a page I'm trying to read is unbearably distracting.)
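By contrast, the do-not-track setting is the politest weapon in that arms race: it simply adds one HTTP header to each request, and honoring it is entirely voluntary on the site's part. A minimal sketch:

    import urllib.request

    # The browser's do-not-track setting amounts to this one header;
    # whether a server respects it is up to the server.
    request = urllib.request.Request("http://example.com/", headers={"DNT": "1"})
    print(request.header_items())  # [('Dnt', '1')]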

Harper addressed his warring panellists by asking the legislation's opponents, "Why do you think the internet should be allowed to prey on the entrails of the hapless consumer?" And of the legislation's sympathizers, "What did the internet ever do to you that you want to drown it in the bathtub?"

Much of the ensuing, very lively discussion centered on the issue of trade-offs, something that's been discussed here many times: if users all opt out of receiving ads, what will fund free content? Nah, said Ed Felten, on leave from Princeton for a stint at the FTC: what's at stake is behaviorally targeted ads, not *all* ads.

The good news is that although it's the older generation who are most concerned about issues like behavioral targeting, teens have their own privacy concerns. My own belief for years has been that gloomy prognostications that teens do not care about privacy are all wrong. Teens certainly do value their privacy; it's just that their threat model is their parents.

To a large extent Danah Boyd provided evidence for this view. Teens, she said, faced with the constant surveillance of well-meaning but intrusive teachers and parents, develop all sorts of strategies to live their private lives in public. One teen deactivates her Facebook profile every morning and reactivates it to use at night, when she knows her parents won't be looking. Another works hard to separate his friends list into groups so he can talk to each in the manner they expect. A third practices a sort of steganography, hiding her meaning in plain sight by encoding it in cultural references she knows her friends will understand but her mother will misinterpret.

Meantime, the FTC is gearing up to come down hard on mobile privacy violations. Commissioner Edith Ramirez of course favors consumer education, but she noted that the FTC will be taking a hard line with the handful of large companies who act as gatekeepers to the mobile world. Google, which violated Gmail users' privacy by integrating the social networking facility Buzz without first asking consent, will have to submit to privacy audits for the next 20 years. Twitter, whose private messaging was broken into by hackers, will be audited for the next ten years – twice as long as the company has been in existence.

"No company wants to be the subject of an FTC enforcement action," she said. "What happens next is largely in industry's hands." Engineers and developers, she said, should provide voluntary, workable solutions.

Europeans like to think the EU manages privacy somewhat better, but one of the key lessons to emerge from the first panel of the day, a compare-and-contrast discussion of data-sharing between the EU and the US, was that there's greater parity than you might think. What matters, said Edward Hasbrouck, is not data protection as such but how the use of data affects fundamental rights – such as the ability to fly or to transfer money.

In that discussion, the Department of Homeland Security representative, Mary Ellen Callahan, argued that the US is much more protective of privacy than a simple comparison of data protection laws might suggest. (There is a slew of US privacy legislation in progress.) The US operates fewer wiretaps by a factor of thousands, she argued, and is far more transparent.

Ah, yes, said Frank Schmiedel, answering questions to supplement the videotaped appearance of European Commission vice-president Viviane Reding, but if the US is going to persist in its demand that the EU transfer passenger name record, financial, and other data, one of these days, Alice, one of these days…the EU may come knocking, expecting reciprocity. Won't that be fun?  

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-AT-NC-SA Flickr: [RAWRZ!] (Rosalyn Davis)

Untrusted systems

Wendy M. Grossman is at CFP 2011, where health care privacy was the first talking point

Why does no one trust patients?

On the TV series House, the eponymous sort-of-hero has a simple answer: "Everybody lies." Because he believes this, and because no one appears able to stop him, he sends his minions to search his patients' homes hoping they will find clues to the obscure ailments he's trying to diagnose.

Today's Health Privacy Summit in Washington, DC, the zeroth day of this year's Computers, Freedom, and Privacy conference, pulled together, in the best CFP tradition, speakers from all aspects of health care privacy. Yet many of them agreed on one thing: health data is complex, decisions about health data are complex, and it's demanding too much of patients to expect them to be able to navigate these complex waters. And this is in the US, where to a much larger extent than in Europe the patient is the customer. In the UK, by contrast, the customer is really the GP, and the patient has far less direct control. (Just try looking up a specialist in the phone book.)

The reality is, however, as several speakers pointed out, that doctors are not going to surrender control of their data either. Both physicians and patients have an interest in medical records. Patients need to know about their care; doctors need records both for patient care and for billing and administrative purposes. But beyond these two parties are many other interests who would like access to the intimate information doctors and patients originate: insurers, researchers, marketers, governments, epidemiologists.

Yet no one really trusts patients to agree to hand over their data; if they did, these decisions would be a lot simpler. But if patients can't trust their doctor's confidentiality, they will avoid seeking health care until they're in a crisis. In some situations – say, cancer – that can end their lives much sooner than is necessary.

The loss of trust, said lawyer Jim Pyles, could bring on an insurance crisis: the potential cost of electronic privacy breaches is unlimited, but insurers' capacity to cover them is not. "If you cannot get insurance for these systems you cannot use them."

If this all (except for the insurance concerns) sounds familiar to UK folk, it's not surprising. As Ross Anderson pointed out, greatly to the Americans' surprise, the UK is way ahead on this particular debate. Nationalized medicine meant that discussions began in the UK as long ago as 1992.

One of Anderson's repeated points is that the notion of the electronic patient record has little to do with the day-to-day reality of patient care. Clinicians, particularly in emergency situations, want to look at the patient. As you want them to do: they might have the wrong record, but you know they haven't got the wrong patient.

"The record is not the patient," said Westley Clarke, and he was so right that this statement was repeated by several subsequent speakers.

One thing that apparently hasn't helped much is the Health Insurance Portability and Accountability Act (HIPAA), which one of the breakout sessions considered scrapping. Is HIPAA a failure or, as long-time Canadian privacy activist Stephanie Perrin would prefer to see it, a first step? The distinction is important: if HIPAA is seen as an expensive failure it might be scrapped and not replaced. First steps can be succeeded by further, better steps.

Perhaps the first of those should be another of Perrin's suggestions: a map of where your data goes, much as Barbara Garson's book Money Makes the World Go Around followed her bank deposit as it was loaned out across the world. Most of us would like to believe that what we tell our doctors remains cosily tucked away in their files. These days, not so much.

 

For more detail see Andy Oram's blog

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

 

Image: CC-AT-NC Flickr: jenrock

The creepiness effect

Wendy M. Grossman on Facebook's facial recognition feature, its business model, and why it never seems to learn from its mistakes

"Facebook is creepy," said the person next to me in the pub on Tuesday night.

The woman across from us nodded in agreement and launched into an account of her latest foray onto the service. She had, she said, uploaded a batch of 15 photographs of herself and a friend. The system immediately tagged all of the photographs of the friend correctly. It then grouped the images of her and demanded to know, "Who is this?"

What was interesting about this particular conversation was that these people were not privacy advocates or techies; they were ordinary people just discovering their discomfort level. The sad thing is that Facebook will likely continue to get away with this sort of thing: it will say it's sorry, modify some privacy settings, and people will gradually get used to the convenience of having the system save them the work of tagging photographs.

In launching its facial recognition system, Facebook has done what many would have thought impossible: it has rolled out technology that just a few weeks ago *Google* thought was too creepy for prime time.

Wired UK has a set of instructions for turning tagging off. But underneath, the system will, I imagine, still recognize you. What records are kept of this underlying data and what mining the company may be able to do on them is, of course, not something we're told about.

Facebook has had to rein in new elements of its service so many times now – the Beacon advertising platform, the many revamps to its privacy settings – that the company's behavior is beginning to seem like a marketing strategy rather than a series of bungling missteps. The company can't be entirely privacy-deaf; it numbers among its staff the open rights advocate and former MP Richard Allan. Is it listening to its own people?

If it's a strategy, it's not without antecedents. Google, for example, built its entire business without TV or print ads. Instead, every so often it would launch something so cool that everyone wanted to use it – and that would get it more free coverage than it could ever have afforded to buy. Is Facebook inverting this strategy by releasing projects it knows will cause widely covered controversy and then reining them back in only as far as the boundary of user complaints?

These are smart people, and normally smart people learn from their own mistakes. But Zuckerberg, whose comments on online privacy have approached arrogance, has apparently been vindicated: no matter what mistakes the company has made, its user base continues to grow. As long as business success is your metric, until masses of people resign in protest, he's golden – especially when the IPO moment arrives, expected to be before April 2012.

The creepiness factor has so far done nothing to hurt its IPO prospects – which, in the absence of an actual IPO, seem to be rubbing off on the other social media companies going public. Pandora (net loss last quarter: $6.8 million) has even increased the number of shares on offer.

One thing that seems to be getting lost in the rush to buy shares – LinkedIn popped to over $100 on its first day, and has now settled back to $72 and change (a price/earnings ratio of 1,076) – is that buying first-day shares isn't what it used to be.

Even during the millennial technology bubble, buying shares at the launch of an IPO was approximately like joining a queue at midnight to buy the new Apple whizmo on the first day, even though you know you'll be able to get it cheaper and debugged in a couple of months. Anyone could have gotten much better prices on Amazon shares for some months after that first-day bonanza, for example (and either way, in the long term, you'd have profited handsomely).

Since then, however, a new game has arrived in town: private exchanges, where people who meet a few basic criteria for being able to afford to take risks, trade pre-IPO shares. The upshot is that even more of the best deals have already gone by the time a company goes public.

In no case is this clearer than the Groupon IPO, about which hardly anyone has anything good to say. Investors buying in would be the greater fools; a co-founder's past raises questions, and its business model is not sustainable.

Years ago, Roger Clarke predicted that the then brand-new concept of social networks would inevitably become data abusers simply because they had no other viable business model. As powerful as the temptation to do this has been while these companies have been growing, it seems clear the temptation can only become greater when they have public markets and shareholders to answer to.

New technologies are going to exacerbate this: performing accurate facial recognition on user-uploaded photographs wasn't possible when the first pictures were being uploaded. What capabilities will these networks be able to deploy in the future to mine and match our data? And how much will they need to do it to keep their profits coming?

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

 

Image: CC-AT Andrew Feinberg
