Dr Paul Bernal provides clarification on what the Right to be Forgotten means and what the issues are.
A lot has been said and written about the ‘right to be forgotten’ over the last year or two, and yet it doesn’t seem as though many people really know what they’re talking or writing about. So what is the right to be forgotten? How could it work – if it could work? Is it something to be supported or something to be feared? Why is it such a bone of contention? This article will try to start answering some of these questions.
What is the right to be forgotten?
Technically, the right to be forgotten is part of the proposed ‘data protection regulation’ – the revision of the data protection regime currently under discussion in Europe. The relevant section of the proposed regulation puts it like this:
“The data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention of further dissemination of such data.”
The right is primarily intended to be used in two situations: when the data is no longer ‘needed’, and when the data subject ‘withdraws consent’ for the data to be used. It is a little more complex than that, but the essence is simple: if you don’t want people to hold your data, or the reason they held it is no longer valid, you should have the right to have that data deleted.
That much is relatively simple – and does not, on the surface, seem to have much to do with being ‘forgotten’. It is much more about the deletion of data held about you – which is why a number of people (including the author) have been suggesting for some time that it would be better to call it the ‘right to delete’ than the right to be forgotten.
Deleting data held about you
The idea that we should be able to delete data that others hold about us is one that seems, at least on the surface, to make a lot of sense. Indeed, it is important to understand where the original drive for this right came from: the amount of data being held about us, particularly by commercial operators, and the trouble that we have in deleting it. One specific trigger was the difficulty that people have had until recently in deleting their Facebook accounts. Shouldn’t we have the right to do that?
So where are the problems?
The first problem is the name – even mentioning a ‘right to be forgotten’ starts people thinking about the rewriting of history, of Stalin deleting people’s faces from photographs, of censorship. As noted above, that’s not really what the right is about – and indeed the right has a clumsily written exemption for data processed ‘solely for journalistic purposes or the purpose of artistic or literary expression’ – but it is easy to see how the right might be viewed in this way. Moreover, as legal rights often are, there may be the potential for it to be misused or manipulated in this way – or even simply misunderstood so that unwinnable lawsuits are entered into. According to Google’s Global Privacy Counsel, Peter Fleischer, speaking at a recent symposium on the subject, the company is already facing in excess of a hundred suits in Spain alone, on the basis of what people ‘think’ is their right to be forgotten. The term itself raises expectations that could be impossible to meet.
The second is a practical issue: can it be made to work at all? The right not only asks a data holder to delete data, but if they’ve made it public, to ‘take all reasonable steps’ to track down and delete copies and links to the data. Could this work? What kind of a burden would this place on intermediaries like Google and Facebook? Would it have some kind of chilling effect?
The EU, the US and the UK…
The third problem is perhaps more fundamental: the cultural differences in attitudes to privacy and free speech in the EU and the US. In the EU, and particularly in Germany, privacy is taken very seriously, and the rights that people have over data are considered crucial. In the US, privacy very much takes second place to free speech – anything that even slightly infringes on free speech is likely to be given short shrift. The right to be forgotten has been very actively opposed in the US on those grounds, with Jeffrey Rosen, writing in the Stanford Law Review, calling it the ‘biggest threat to free speech on the internet in the coming decade’.
Who is right? Neither, really. The right is not what its more active opponents in the US think it is – but neither has it been written tightly enough and carefully enough to provide the kind of practical, realisable right to delete personal data that the EU would like to see.
There IS a problem that needs addressing
This is the first thing that needs to be recognised. For individual autonomy – and individual rights – there needs to be something to rein in the data gathering, and cut down the amount of data held. With more careful writing, the right to be forgotten could play a part in bringing this about, particularly if the big businesses of the internet – the Googles and Facebooks – come on board.
Shaping their business models is the key – if they make things work, the law won’t need to be so harsh, and won’t represent any real kind of threat. The Commission is taking the heavy-handed approach mostly because businesses haven’t shown much sign of addressing the issue by themselves. There is some sign of movement, prompted perhaps by this approach. Facebook is making it easier to delete profiles. Google seem to be beginning to understand privacy a little better. Has the approach of the Commission played a part in that? I’m sure they think so. Can the right to be forgotten help? I’m sure they think that too. They may even be right.
Dr Paul Bernal is a lecturer in IT, IP and Media Law at the UEA. He researches into privacy and human rights, particularly on the internet. He blogs at http://paulbernal.wordpress.com/ and tweets as @paulbernalUK.
"Governments find it easy to act on the efficiency side of open data while neglecting transparency" - Tom Slee discusses issues with Open Government Data
Advocates for Open Government Data can point to a large number of useful data sets being released by governments of several countries, but the net effect of this data explosion may not be an improvement in the transparency and accountability of governments.
Open Government Data is promoted as a means to two goals: improved efficiency and increased transparency. These goals are a bit like chocolate and cheese: both are good, but it is not clear that they have very much to do with each other. The dual-target nature of the movement creates a serious set of problems for open data advocates.
One problem is that governments find it easy to act on the efficiency side of open data while neglecting transparency. As the Open Rights Group has pointed out (link), the UK government is an enthusiastic open data supporter, and yet its programme “is an unhealthy conflation of transparency, data on public services and personal data, all of which converge towards the "Open for Business" principle.” In Brasilia last month the conservative Canadian government (a government which just eradicated an essential part of the Canadian census, which routinely prevents government scientists from talking to the public without ministerial approval, and which has developed a reputation for secrecy and centralised control) was welcomed into the Open Government Partnership on the basis of a plan built around “expanding access to Open Data” (link).
A second problem is that there are significant commercial interests under the open data umbrella who are steering open data initiatives in self-interested ways. Who do you think is making this idealistic call to action?
“Do you embrace the idea of a transparent, participatory and collaborative government? Do you believe technological advancements such as web 2.0, social computing and cloud computing lie at the core of making these ideas a reality? Do you wish to be actively involved in this transformation? If so, the Open Government Data Initiative is for you!”
Yes, it’s Microsoft (link).
From a business point of view, open and closed are not opposites. Instead, they are complements: things that go together, like fish and chips. When the price of one goes down, demand for the other increases: lowering the price of fish means that more people eat fish & chips, which raises demand for chips and makes potatoes more valuable.
So who stands to make money from open data sets? The complements to data are things like other data sets, a large base of skilled programmers and computer hardware and software platforms. While much of the open data rhetoric focuses on making data available to “the public”, these complements are things that established technology companies, and Silicon Valley’s aggressive and wealthy venture capital sector, are best poised to exploit. Many open data advocates see themselves as being opposed to “vested business interests” (to use the ORG’s phrase), but they may be unwittingly promoting a whole new set of business interests instead.
What’s more, network effects and increasing returns to scale make many digital technology markets winner-take-all in nature, as the success of titans such as Amazon, Apple, Google and Facebook demonstrates. The new industries created by open data may be more concentrated than those they replace and that means that the companies at their head will be more powerful. Hackathons of civic-minded programmers building community tools sound great, but it is also possible that in the long run open data may “further empower and enrich the already-empowered” to quote community informatics leading light Michael Gurstein. For those interested in a more transparent and accountable society, the creation and promotion of new oligopolistic industries is not something to be welcomed.
Given the range of conflicting interests and the ease with which an agenda based on “data” can be rephrased and repositioned, it seems to me that there is nothing holding together the various proponents of “open data” beyond a commitment to more public bytes, and that’s not enough for a cohesive movement. Advocates committed to improving civil liberties need to ask themselves if their open data partners in government and business are really walking along the same road.
Tom Slee is the author of 'No One Makes You Shop at Wal-Mart: The Surprising Deceptions of Individual Choice' (Between the Lines, 2006). Once a theoretical chemist, he now works in the computer software industry and also blogs on technology and economics at http://whimsley.typepad.com. He grew up in the UK and lives in Canada.
As Facebook has its trading debut Wendy Grossman discusses the uncertain economic future for the social networking site.
"But what if the numbers ever start going down?"
The speaker was the managing director of CompuServe UK in approximately 1991 and I had just suggested putting a banner on the CompuServe login page announcing the number of users, at the time growing rapidly. Well, you know the rest.
I'm not alone in comparing Facebook, which went public yesterday, to past online social venues, and I won't review my reasons here. As I wrote in January when Ramnit surfaced, were I an early investor in Facebook I'd be wanting to cash out now.
As I write this, the market hasn't opened yet in the US, so this is just fun-with-numbers. Facebook's 421.2 million shares at $38 raised $16 billion, a nice war chest to go shopping with. The company's new $104 billion market cap, the Wall Street Journal tweeted yesterday, tops Yahoo!, Groupon, LinkedIn, Netflix, IAC/InterActiveCorp, Zynga, and Pandora Media - combined. It's slightly above Amazon, more than double eBay, about half of Microsoft, almost double Cisco or Boeing. More soberly, it is 50 times CompuServe acquirer AOL's market cap today - but less than half of AOL's $222 billion in December 1999.
I'll blame some of the Facebook IPO madness on the 2010 release of The Social Network: a mainstream movie is fabulous publicity for a five-year-old company. Throw in widespread familiarity and then all those stories about the Arab Spring, and you have a company of apparently vast social significance - but not necessarily good business. The size of the hype has lots of financial pundits trying to sober people up. With reason: the buying frenzy suggests that expectations are so high that a disappointment is almost inevitable.
Warren Buffett, and his legendary mentor, Benjamin Graham, espouse(d) two enduring principles: a "moat" protecting a business from copycats and competitors, and a "margin of safety". The latter is simple enough: buy undervalued companies. If the company makes a misstep, you are somewhat protected against losing your investment. So the statistic that gives real pause is this one: if Apple were valued (price to sales) the way Facebook is at $38 a share, Apple's market cap would be six times what it is now: $3 trillion.
A quick round-up of other issues, beginning with the huge size of the float coupled with a lower - and slower - growth rate than Apple, Google, or LinkedIn. Seeking Alpha directly compares Facebook and Google at IPO time; The Motley Fool compares Facebook to giant Internet IPOs of the past; its graphs show just how hard it will be for Facebook to live up to the track records of Google, Amazon, and Apple. Minyanville has ten more negative perspectives, even including the guy who can't get his teenaged daughter to stop spending all her time on the system. The New York Times's Dealbook has plenty more. Disappointed investors can, however, lie back and think of California, which desperately needs the tax money from those thousand new millionaires.
With respect to Facebook's ongoing business - because guessing at the company's future earnings is really what this is all about - others have cast doubt on its mobile prospects, its advertising model, especially in the light of GM's very public withdrawal, its prospects for diversifying its revenue streams, its trust issues, and Mark Zuckerberg's stated claim that the company's goal is not really to make money. With respect to the latter, the argument goes that either he means it and may shaft his shareholders from time to time in service of his vision, or he will be gradually weaned away from it under the pressure of running a public company. And Zuckerberg, more than most, has retained single-handed control over what the company does. Arguing about the hoodie is just silly; it's branding; get over it.
That's the valuation, but what about that moat? Sheer user numbers and market dominance were a very big asset for eBay because it is clear that the bigger the pool of buyers to whom you can offer "long tail" goods for sale, the more likely you are to find a match. In hindsight, for eBay and Amazon, first-mover advantage really was key. But whereas people go to Google to fulfill the essential need of finding things online, Facebook usage is discretionary. People don't join Facebook for itself; the real appeal is the presence of their friends, even if once there they get obsessed with Farmville. And Facebook will never be the only way - or even the most fun way - to hang out with your friends.
In 2000, when the dot-com bust hit, the one thing almost everyone knew was that in five years the Internet would be much, much bigger than it was then. Many had bet too much, too soon on the wrong companies, but the premise was right. That's less clear now: Facebook's 900 million (or whatever) is a staggering number, but the company's growth has been driven by increasing user numbers. How long can it keep on doing that? Its biggest challenge will be keeping those users interested enough to stay on the site.
Image: CC-AT-NC-SA Matthew Knott
The self-drive car sparks further thoughts about the way we connect with the physical world
When I first saw that Google had obtained a license for its self-driving car in the state of Nevada I assumed that the license it had been issued was a driver's license. It's disappointing to find out that what they meant was that the car had been issued with license plates so it can operate on public roads. Bah: all operational cars have license plates, but none have driver's licenses. Yet.
The Guardian has been running a poll, asking readers if they'd ride in the car or not. So far, 84 percent say yes. I would, too, I think. With a manual override and a human prepared to step in for, oh, the first ten years or so.
I'm sure that Google, being a large company in a highly litigious society, has put the self-driving car through far more rigorous tests than any a human learner undergoes. Nonetheless, I think it ought to be required to get a driver's license, not just license plates. It should have to pass the driving test like everyone else. And then buy insurance, which is where we'll find out what the experts think. Will the rates for a self-driving car be more or less than for a newly licensed male aged 18 to 25?
To be fair, I've actually been to Nevada, and I know how empty most of those roads are. Even without that, I'd certainly rather ride in Google's car than on a roller coaster. I'd rather share the road with Google's car than with a drunk driver. I'd rather ride in Google's car than trust the next Presidential election to electronic voting machines.
That last may seem illogical. After all, riding in a poorly driven car can kill you. A gamed electronic voting machine can only steal your votes. The same problems with debugging software and checking its integrity apply to both. Yet many of us have taken quite long flights on fly-by-wire planes and ridden on driverless trains without giving it much thought.
But a car is *personal*. So much so that we tolerate 1.2 million deaths annually worldwide from road traffic; in 2011 alone, more than ten times as many people died on American roads as were killed in the 9/11 World Trade Center attack. Yet everyone thinks they're an above-average driver and feels safest when they're controlling their own car. Will a self-driving car be that delusional?
The timing was interesting because this week I have also been reading a 2009 book I missed, The Case for Working With Your Hands: Or Why Office Work is Bad for Us and Fixing Things Feels Good. The author, Matthew Crawford, argues that manual labour, which so many middle class people have been brought up to despise, is more satisfying - and has better protection against outsourcing - than anything today's white collar workers learn in college. I've been saying for years that if I had teenagers I'd be telling them to learn a trade like auto mechanics, plumbing, electrical work, nursing, or even playing live music - anything requiring skill and knowledge that can't easily be outsourced to another country in the global economy. I'd say teaching, but see last week's.
Dumb down plumbing all you want with screw-together PVC pipes and joints, but someone still has to come to your house to work on it. Even today's modern cars, with their sealed subsystems and electronic read-outs, need hands-on care once in a while. I suppose Google's car arrives back at home base and sends in a list of fix-me demands for its human minders to take care of.
When Crawford talks about the satisfaction of achieving something in the physical world, he's right, up to a point. In an interview for the Guardian in 1995 (TXT), John Perry Barlow commented to me that, "The more time I spend in cyberspace, the more I love the physical world, and any kind of direct, hard-linked interaction with it. I never appreciated the physical world anything like this much before." Now, Barlow, more than most people, knows a lot about fixing things: he spent 17 years running a debt-laden Wyoming ranch and, as he says in that piece, he spent most of it fixing things that couldn't be fixed. But I'm going to argue that it's the contrast and the choice that makes physical work seem so attractive.
Yes, it feels enormously different to know that I have personally driven across the US many times, the most notable of which was a three-and-a-half-day sprint from Connecticut to Los Angeles in the fall of 1981 (pre-GPS, I might add, without needing to look at a map). I imagine being driven across would be more like taking the train even though you can stop anywhere you like: you see the same scenery, more or less, but the feeling of personal connection would be lost. Very much like the difference between knowing the map and using GPS. Nonetheless, how do I travel across the US these days? Air. How does Barlow make his living? Being a "cognitive dissident". And Crawford writes books. At some point, we all seem to want to expand our reach beyond the purely local, physical world. Finding that balance - and employment for 9 billion people - will be one of this century's challenges.
Kickstarter: medieval-meets-modern and a return to patronage
“We are the media”. This is the catch phrase of Amanda Palmer's massively successful Kickstarter campaign to fund her new album. Cutting out the middleman and sourcing the artists directly: Kickstarter is creating a new generation of art funded through the crowd-sourcing method. It offers a sense of hope that we can take up the slack and hit back at the recession with medieval-meets-modern patronage.
Kings, Popes and nobility providing money to the penniless author is an image we are all familiar with. The old patron dictated what was created, using the artist for prestige and propaganda. Snatching Michelangelo from his sculpting studio and suggesting he paint your ceiling instead wasn't an option for everyone. This system dropped out of favour when capitalism let us all participate in mass consumption of art and artists wanted more independence and new subject matters. Our return to patronage is a democratic flip-side to the autocratic control of the Renaissance.
The process involved in taking a creative project from idea to completion via Kickstarter is all about direct funding by individuals. The artist first works out how much a project is going to cost. For example, a goal of $10,000 might be needed in order to successfully produce, promote and distribute a stop-motion short. People can then 'pledge' any given amount towards this budget. There is frequently a system of rewards associated with the pledges. Creators offer hand-made packages at different levels of giving to spur us on. These rewards are specific to the project and the fans: digital downloads, painted ukuleles and comical Christmas decorations. Although credit and payment details are entered at 'point of pledge', no money is removed from your bank account. Only if the goal is reached within the allotted period of time are the cards and accounts charged. This system protects artists from being left with half their goal and a bunch of expectant folk waiting for a half-made project.
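For the programmatically minded, the all-or-nothing model described above can be sketched in a few lines of Python. This is purely an illustration of the funding logic, not Kickstarter's actual implementation; the function name and data shapes are invented for the example:

```python
# Illustrative sketch of all-or-nothing crowdfunding settlement.
# Backers' cards are charged only if total pledges meet the goal
# by the deadline; otherwise nobody pays and the creator gets nothing.

def settle_campaign(goal, pledges):
    """pledges: list of (backer, amount) pairs made before the deadline.
    Returns the list of charges to apply, which is empty if the goal
    was not reached."""
    total = sum(amount for _, amount in pledges)
    if total >= goal:
        # Goal met: every backer is charged exactly what they pledged.
        return list(pledges)
    # Goal missed: no cards are charged at all.
    return []

# The $10,000 stop-motion short from the example above:
charges = settle_campaign(10_000, [("ann", 6_000), ("bob", 5_000)])
```

In this sketch the two pledges total $11,000, so the goal is met and both backers are charged; had they pledged only $9,000 between them, the function would return an empty list and no money would change hands.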
However, I do not believe that crowd funding is the future for all artists. The massive successes of Amanda Palmer and Rich Burlew were dependent on their already strong and dedicated fan-bases, and the model will not work for everyone. We can't all be successful artists, but we can all be supportive patrons. I am therefore less excited about what this means for the artist (much has already been written on this subject) than about what it means for the individual backers and for society.
We are accustomed to giving to charity for many motivations: a desire to do good or belief in the cause, a spontaneous reaction to a specific event, a sense of guilt or a sense of pride, prestige or pity, self-worth or personal meaning. However, as a rule monetary giving is a reaction to a need.
Kickstarter giving is active.
It is not about solutions. It is not about the survival of humanity. It's just making life a little bit better. This is exciting because people are willing to take an action, not based on a need, but out of a simple appreciation of creativity. As an English literature graduate I find this particularly exciting. My degree is disparaged by mainstream media and the government, with STEM (Science, Technology, Engineering and Maths) subjects often perceived as the only academic study worth funding.
In the face of all that, the success of Kickstarter means that art is valued by non-artists. The engineer pledging $1 to a new comic-book Kickstarter fund is not just saying 'I want to read this' but 'I believe that it is worth funding the Arts.'
This is big.
Let me put this in the perspective of a period of recession. People are not waving goodbye to wads of cash they would otherwise have been using to make origami wallets. The million-plus backers have their own economic struggles, but we should be proud that in a time of difficulties, with the Arts particularly suffering (Arts Council England funding was cut by 30% in 2010), everyone has an opportunity to take up that gap - and they are doing so.
The future of creativity is being decided on the internet. Is this the e-Renaissance?
Shame that, for all my enthusiasm, it is only the democratic process in America that we are participating in! My hope is that with enough attention Kickstarter will open its doors and process to creative projects across the world.
Image: Renaissance Fair, CC-BY-2.0, Flickr: battcreekcvb
Education on the shareware model... the future of online university courses?
What matters about a university degree? Is it the credential, the interaction with peers and professors, the chance to play a little while longer before turning adult, or the stuff you actually learn? Given how much a degree costs, these are pressing questions for the college-bound and their parents.
This is particularly true in the US, where today's tuition fees at Cornell University's College of Arts and Sciences are 14 times what they were when I started there as a freshman in 1971. This week, CNBC highlighted the costs of liberal arts colleges such as California's Pepperdine, where tuition, housing, and meals add up to $54,000 a year. Hah, said friends: it's $56,000 at Haverford, where their son is a sophomore.
These are crazy numbers even if you pursue a "sensible" degree, like engineering, mathematics, or a science. In fact, it's beginning to approach the level at which a top-class private university degree no longer makes even bare economic sense. A Reuters study announced this week found that the difference between a two-year "associate" degree and a four-year BA or BSc over the course of a 30-year career is $500,000 to $600,000 (enough to pay for your child's college degree, maybe). Over a career a college degree adds about $1 million over a high school diploma, depending on the major you pick and the field you go into. An accountant could argue that there's still some room for additional tuition increases - but then, even if that accountant has teenaged kids, his earnings are likely well above average.
Anthony Carnevale, the director of the center that conducted this research, tells Reuters this is a commercialization of education. Yes, of course - but if college costs as much per child as the family home, inevitably commercial considerations will apply, even if you don't accept PayPal founder Peter Thiel's argument about a higher education bubble.
All this provides context for this week's announcement that Harvard and MIT are funding a $60 million initiative, edX, to provide online courses for all and sundry. Given that Britain's relatively venerable Open University was set up in 1969 to bring university-level education to a wide range of non-traditional students, remote learning is nothing new. Still, edX is one of a number of new online education initiatives.
Experimentation with using the Internet as a delivery medium for higher education began in the mid 1990s (TXT). The Open University augmented the ability for students to interact with each other by adding online conferencing to its media mix, and many other institutions began offering online degrees. Almost the only dissenting voice at the time was that of David F. Noble, a professor at Canada's York University. In a series of essays written from 1997 to 2001, Digital Diploma Mills, he criticized the commercialization of higher education and the move toward online instruction. Coursework that formerly belonged to professors and teachers, he argued, would become a product sold by the university itself; copyright ownership would be crucial. By 2001, he was writing about the failure of many of the online ventures to return the additional revenues their institutions had hoped for.
When I wrote about these various concerns in 1999 for Scientific American (TXT), reader email accused me of being an entitled elitist and gleefully threatened me with a wave of highly motivated, previously locked-out students who would sweep the world. The main thing I hoped I highlighted, however, was the comparatively high drop-out rate of online students. This is a pattern that has continued through to today with little change. It seems to me a significant problem for the industry - but it explains why MIT and Harvard, like some other recent newcomers, are talking about charging for exams or completion certificates rather than the courses themselves. Education on the shareware model: certainly fairer for students hoping for career advancement, and great for people who just want to learn from the best brands. (Not, thankfully, the future envisaged by one of the interviewees in those articles, who feared online education would be dominated by Microsoft and Disney.)
In an economic context, the US's endemic credentialism means it's the certificate that has economic value, not necessarily the learning itself. But across the wider world, it's easy to imagine local authorities taking advantage of the courses that are available and setting their own exams and certification systems. For Harvard and MIT, the courses may also provide a way of spotting far-flung talent to scoop up and educate more traditionally.
Of course, economics are not the only reason to go to college: it may make other kinds of sense. Today's college-educated parents often want their kids to go to college for more complex reasons to do with quality of life, adaptability to a changing future, and the kind of person they would like their kids to be. In my own case, the education I had gave me choices and the confidence that I could learn anything if I needed to. That sort of motivation, sadly, is being priced out of the middle class. Soon it will be open only to the very talented and poor who qualify for scholarships, and the very wealthy who can afford the luxury. No wonder the market sees an opportunity.
The problem with robots isn't robots. The problem is us
Is a robot more like a hammer, a monkey, or the Harley-Davidson on which he rode into town? Or try this one: what if the police program your really cute, funny robot butler (Tony Danza? Scarlett Johansson?) to ask you a question whose answer will incriminate you (and which it then relays)? Is that a violation of the Fourth Amendment (protection against search and seizure) or the Fifth Amendment (you cannot be required to incriminate yourself)? Is it more like flipping a drug dealer or tampering with property? Forget science fiction, philosophy, and your inner biological supremacist; this is the sort of legal question that will be defined in the coming decade.
Making a start on this was the goal of last weekend's We Robot conference at the University of Miami Law School, organized by respected cyberlaw thinker Michael Froomkin. Robots are set to be a transformative technology, he argued to open proceedings, and cyberlaw began too late. Perhaps robotlaw is still a green enough field that we can get it right from the beginning. Engineers! Lawyers! Cross the streams!
What's the difference between a robot and a disembodied artificial intelligence? William Smart (Washington University, St Louis) summed it up nicely: "My iPad can't stab me in my bed." No: and as intimate as you may become with your iPad you're unlikely to feel the same anthropomorphic betrayal you likely would if the knife is being brandished by that robot butler above, which runs your life while behaving impeccably like it's your best friend. Smart sounds unsusceptible. "They're always going to be tools," he said. "Even if they are sophisticated and autonomous, they are always going to be toasters. I'm wary of thinking in any terms other than a really, really fancy hammer."
Traditionally, we think of machines as predictable because they respond the same way to the same input, time after time. But Smart, working with Neil Richards (Washington University, St Louis), points out that sensors are sensitive to distinctions analog humans can't make: a half-degree difference in temperature or a tiny change in lighting is a different condition to a robot. To us, their behaviour will just look capricious, fostering that anthropomorphic response and leading us wrongly to attribute to them the moral agency necessary for guilt under the law: the "Android Fallacy".
Smart and I may be outliers. The recent Big Bang Theory episode in which the can't-talk-to-women Rajesh, entranced with Siri, dates his iPhone is hilarious because in Raj's confusion we recognize our own ability to have "relationships" with almost anything by projecting human capacities such as cognition, intent, and emotions. You could call it a design flaw (if humans had a designer), and a powerful one: people send real wedding presents to TV characters, name Liquid Robotics' Wave Gliders, and characterize sending a six-legged land mine-defusing robot that's lost a leg or two to continue work as "cruel" (Kate Darling, MIT Media Lab).
What if our rampant affection for these really fancy hammers leads us to want to give them rights? Darling asked. Or, asked Sinziana Gutiu (University of Ottawa), will sex robots like Roxxxy teach us the wrong expectations of humans? (When the discussion briefly compared sex robots to pets, a Twitterer quipped, "If robots are pets, is sex with them bestiality?")
Few are likely to fall in love with the avatars in the automated immigration kiosks proposed at the University of Arizona (Kristen Thomasen, University of Ottawa), with two screens, one with a robointerrogator and the other flashing images and measuring responses. Automated law enforcement, already with us in nascent form, raises a different set of issues (Lisa Shay). Historically, enforcement has never been perfect; laws only have to be "good enough" to achieve their objective, whether that's slowing traffic or preventing murder. These systems pose the same problem as electronic voting: how do we audit their decisions? In military applications, disclosure may tip off the enemy, as Woodrow Hartzog (Samford University) noted. Yet here - and especially in medicine, where liability will be a huge issue - our traditional legal structures decide whom to punish by retracing the reasoning that led to the eventual decision. But even today's systems are already too complex.
When Hartzog asks if anyone really knows how Google or a smartphone tracks us, it reminds me of a recent conversation with Ross Anderson, the Cambridge University security engineer. In 50 years, he said, we have gone from a world whose machines could all be understood by a bright ten-year-old with access to a good library to a world with far greater access to information but full of machines whose inner workings are beyond a single person's understanding. And so: what does due process look like when only seven people understand algorithms that have consequences for the fates of millions of people? Bad enough to have the equivalent of a portable airport scanner looking for guns in New York City; what about house arrest because your butler caught you admiring Timothy Olyphant's gun on Justified?
"We got privacy wrong the last 15 years," Froomkin exclaimed, putting that together. "Without a strong 'home as a fortress' right we risk a privacy future with an interrogator-avatar-kiosk from hell in every home."
The problem with robots isn't robots. The problem is us. As usual, Pogo had it right.
Tashalaw reviews this year's Scrambling for Safety Conference
Yesterday I attended the Scrambling for Safety conference at the LSE. I was really impressed by the wide range of speakers, from Shami Chakrabarti, the leader of the human rights group Liberty, to the Conservative MP David Davis. Finally, a real discussion featuring voices from a wide range of political and ideological affiliations, though I noted the absence of any Labour politicians. It was a bit less exciting when I realised they all seemed to agree…
The members of the panel talked about the importance of privacy and the negative implications of its invasion. They also agreed, particularly Julian Huppert and David Davis MP, that politicians generally do not have a clue and often take the advice of civil servants and security officials as fact. They often seem to rely on information without attempting to validate or research it themselves, probably due to the sheer volume of information surrounding many issues. But what I really wanted to know was: how much would it cost? Is it even viable or productive? Would it benefit the general public in a way that would justify such an invasion of privacy? If not, why on earth was this happening? If stronger policies on surveillance are indeed necessary, what are the alternatives?
Some light was shed on these questions by the second panel, who could not identify any real benefit in swamping the police with massive amounts of data, given the chaos which would ensue as a result. Data gathering on such a large scale does not seem to make sense at all. If the government plans to combat serious crime and terrorism with these measures, this shows a clear lack of foresight. The panel emphasized that gathering data in this way would only catch very basic internet users, as there are so many ways to hide the data being picked up, such as using encrypted pages. Monitoring basic internet use, when more advanced users (you don't need to be very advanced to use Dropbox, for example) know a way around it, would encourage a culture of "underground internet use". This would only complicate things for the police and security services. It is also a waste of money and resources to invest in a policy that is fundamentally ineffective and extremely invasive of basic rights and freedoms. The costs are therefore certainly not outweighed by the benefits, which seem negligible if there are any at all.
Whitfield Diffie, somewhat of a celebrity in the tech world, so I was told, addressed the privacy of search terms such as "divorce lawyers" or "cancer clinics", which may be highly indicative of your private life. By accessing an individual's search data the government would not only be committing a huge invasion of privacy; doing so would also be largely counterproductive and irrelevant to the police regarding surveillance matters.
A retired police officer in the audience stated that the government appears to be seeking to implement preemptive measures. Such data is often unusable and overwhelming to the police force because of its sheer volume. Analysing it is complex and time consuming, and thus provides little value in many investigations, which need to be carried out swiftly. He gave the example of the recent shootings in Toulouse, France, where police had access to 500 intercepted email messages which provided a lead to the suspect. However, the manpower to analyse such data and identify suspects will not always be available in pressing operations; the police may not have enough people assigned to a particular case to be able to go through all the available data.
The conference was wrapped up by Nick Pickles from Big Brother Watch, who ended the day with a quote from David Cameron criticising Labour's policies on surveillance. He pointed out that the coalition government must keep their word, something I fully agree with. He then added that he was standing for the Conservatives in the next election, making his short speech seem a little more like a self-interested campaign. However, the fact that the organisers were from a wide variety of civil society organisations allowed a broad discussion on an issue that affects everyone regardless of their political affiliation.
The main arguments made at the conference appeared to be that the proposal is not only a gross invasion of fundamental rights and freedoms under both national and European law, but that it is also costly and ineffective. The plan seems to be based on a misinformed and misguided security policy, which doesn't appear to provide any benefit to government, the police or national security. While this issue could have huge negative consequences, it is still barely understood by either government or the public.
Shami Chakrabarti used the quote "they say the innocent have nothing to hide – but they do have something to protect". At present there is little legislation relating to privacy law in the UK, and Article 8 of the European Convention on Human Rights (ECHR) – the right to a private and family life – is often loosely applied. The Scrambling for Safety conference highlighted the fact that even current law on communications remains highly contentious. The proposed casual and constant invasion of privacy is not in the public interest, in terms of either security or cost. The proposal is unrealistic and inappropriate; what is really needed in the UK are stronger privacy laws and rights to protect our freedom and to fully integrate and apply Article 8 of the ECHR in national law.
Tashalaw writes on legal issues in her Weekly Law Blog. You can also follow her on Twitter @tashalaws.
Image: One Nation Under CCTV
Bad policies are like counterfeit money: they never quite go away and they ensnare the innocent
It's about a month since the coalition government admitted its plans for the Communications Capabilities Development Programme, and while most details are still unknown, it's pretty clear that it's the Interception Modernisation Programme Redux. The goal is the same: to collect and monitor the nation's communications data. What's changed since 2009 is the scope, which takes in vast quantities of data that have never been kept before and the conditions of storage, which site the data at ISPs rather than GCHQ.
The security policy analyst Susan Landau has written in various places about the fundamental threat to security created by opening a hole (aka, a back door) for law enforcement. A hole is a hole, no matter who it's for, and once it's there it can be used by people it's not intended for. You'd think this would be blatantly obvious. It's like requiring everyone to give the police a copy of the key to their house and/or car.
The justification for all this is modernization: restoring to law enforcement the abilities it had before the Internet came along and made a mess of everything. (How soon they forget the world of anonymous telephone booths, cash payments, postal mail, and open-air meetings.) The obvious next stage under this logic - if all our online communications are recorded for prospective future study in case we become criminals - would be to demand the same powers over our offline lives.
Yesterday, at the ninth Scrambling for Safety event, Liberty doyenne Shami Chakrabarti raised just this point, asking if the Home Office's goal is to eliminate all unwatchable spaces, online and off. (I note an amusing side effect: such a policy could effectively nationalize the pornography industry.)
CCDP, the MP David Davis said yesterday, would turn us all into "a nation of suspects". Quite so. Though how long anyone will be free to point this out is another question. In 2002, the activist John Gilmore was tossed off a British Airways plane for wearing a small button that said "Suspected Terrorist".
There are all sorts of things wrong with building a system to implement surveillance as standard; Ross Anderson liveblogged the many that were made yesterday. Paul Bernal also has three good blog pieces on the politics of privacy, the infeasibility of securing the data, and why the government keeps getting these policies so spectacularly wrong.
The net.wars back catalogue adds that dataveillance doesn't work, and notes the chilling effect of the amounts of data already available about all of us.
Here, I'd like to pursue an analogy to drug testing in sports. The University of Aberystwyth's Mark Burnley did a good presentation last week for London Skeptics in the Pub; you can hear it at The Pod Delusion.
When it comes to drug testing, athletes are presumed guilty. They must repeatedly prove their innocence by passing drug tests that examine their urine and blood for any of a lengthy list of substances and methods that are banned under rules set down by the World Anti-Doping Agency and that cover all Olympic sports. The result is an arms race between athletes (or more properly, Burnley argues, their coaches and doctors) and the testing authorities. When steroids became detectable they were replaced with new substances and techniques: EPO, insulin, designer steroids, HGH, latterly microdosing. It's the stupid athletes who get caught.
All, doping or not, submit to substantial invasions of privacy. Under the "whereabouts" rule, they must identify an hour every day where they can be found; they are held responsible for every substance found in their bodies; missed tests add up to failed tests. You may lack sympathy on the basis that a) they choose to be professional athletes, b) they are rewarded with lots of money and glamor, and c) they're all cheaters anyway.
To counter: for many, their choice of specialization was made very young; many athletes who live under these rules are largely invisible and struggling to break even; and the system is failing to catch the cheaters who matter. (That said, what *is* interesting is the exercise of keeping athletes' submitted samples and testing them again years later in the light of improved technical knowledge.) Meanwhile, the huge sums of money in the sports business make it worthwhile to fund research into new, less detectable techniques, and the morality plays that surround athletes who do get caught could hardly be bettered as a method for convincing kids that doping is what you need to do to become a winner.
In any event, the big point Burnley made is that the big doping cases that have been broken have been primarily through traditional policing methods. In tennis, Wayne Odesnik did not get caught by doping tests; he got caught by Australian Customs, who found syringes and eight vials of HGH in his suitcase. Similarly, despite what understandably risk-averse politicians would like to believe under the influence of the security services, unfettered data collection will make plenty of ordinary people's lives miserable - but crime will route around it.
Milena Popova analyses the messages behind the keynote speeches at this year's ORGCon
The overwhelming feeling I left this year's ORGCon with was that digital rights in the UK had grown up. The depth and complexity of debate has come a long way since the last conference in 2010. Nowhere is this better demonstrated than in the two keynotes: Cory Doctorow's "The coming war on general-purpose computing" and Larry Lessig's "Recognising the fight we're in". Both painted, in broad brush strokes, a picture much bigger than the current digital rights space.
The copyright wars, Cory said, were just the opening battle in what will soon be a war on computing as we know it. As general-purpose computers continue to tread on the toes of various vested interests (like they did with the entertainment industry), we will see more and more demands for devices which can do all the great things that computers do, except this one thing which really annoys someone. We have seen how badly this has gone, technologically with DRM and legally with the likes of SOPA, PIPA and ACTA. The problem here, Cory argues, is that due to the nature of the technology any attempts to prevent people from sharing digital content converge on surveillance and censorship. The technology that the content industry proposes to use to enforce copyright is the same technology that states like Syria and China use to censor free speech and keep tabs on dissidents. We are likely to see more such attempts in future, from stakeholders other than the content industry, not fewer. The copyright wars, Cory argues, have taught us valuable lessons about what works and doesn't work in fighting for our digital rights.
Larry Lessig, too, painted a big picture of digital rights. He started with three case studies of policy areas gone horribly wrong: allocating access to spectrum, the abysmal quality of broadband in the United States, and of course copyright law and enforcement. Here, both he and Cory made a similar point - that lawmakers shouldn't have to be specialists in a field in order to be able to make good laws. To an extent, this goes against some of the digital rights community's received wisdom - including mine. Many of us have often argued that we simply need to educate politicians better on the issues and the technology aspects of such laws. And yet, we don't expect our MPs to be GPs in order to legislate on health issues, or environmental scientists in order to legislate on climate change, or engineers in order to legislate on car safety. Why should we expect them to be technologists to be able to make good laws on technology?
The root cause of the problem, Lessig argues, is not a lack of expertise - it is the pervasive corruption of our democracy. More Americans were in favour of British rule before the American Revolution than Americans today who do not believe that money buys results in Congress. 0.0000063% of Americans - less than 200 individuals - have contributed 80% of the money spent so far in this year's presidential election campaign. We, the rest of us, are not so much the "99 per cent" - we are the 99.9999937%. While these are US numbers, there was some poetic justice in the Cash for Cameron scandal breaking the very day after ORGCon. It is not so much that our politicians don't have the knowledge to make good laws on digital rights, it's that those laws are bought by vested interests with the money to back them.
We are now seeing, Lessig concluded, the beginnings of a global movement to take back our democracies. From the steps of St. Paul's to Zuccotti Park, from the Arab Spring to Indian anti-corruption demonstrations, we the people are waking up and demanding back our governments. He ended his talk with a plea to digital rights activists: not to abandon digital rights, our first love, but to add a second cause to our campaigns - the cause of taking back our democracies.
To say all this is daunting is a bit of an understatement. To go to a conference on the complex but fairly compact subject of digital rights and to have the curtain lifted to reveal that what we're actually fighting for is regaining control of our governments, and that the copyright wars weren't so much the big boss fight as the bit at the start of the game that's meant to teach you how to use the controls (Cory's analogy, not mine), can be rather overwhelming. Yet there is hope. Knowing what we're fighting is half the battle. It enables us to regroup, look around, stop hacking at the dragon's toes and start going for its eyes instead.
ORGZine: the Digital Rights magazine written for and by Open Rights Group supporters and engaged experts expressing their personal views
People who have written for us include: campaigners, inventors, legal professionals, artists, writers, curators and publishers, technology experts, volunteers, think tanks, MPs, journalists and ORG supporters.