Crypto: The Revenge
Wendy Grossman looks at how public key cryptography has developed in the last two decades.
I recently had occasion to try out Gnu Privacy Guard, the Free Software Foundation's version of PGP, Phil Zimmermann's legendary Pretty Good Privacy software. It was the first time I'd encrypted an email message since about 1995, and I was both pleasantly surprised and dismayed.
First, the good. Public key cryptography is now implemented exactly the way it should have been all along: once you've installed it and generated a keypair, encrypting a message is ticking a box or picking a menu item inside your email software. Even key management is handled by a comprehensible, well-designed graphical interface. Several generations of hard work have created this and also ensured that the various versions of PGP, OpenPGP, and GPG are interoperable, so you don't have to worry about who's using what. Installation was straightforward and the documentation is good.
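For readers who prefer the command line, the underlying operation is a single call to the gpg binary. Here is a minimal sketch, driving it from Python's standard subprocess module; the recipient address and filename are hypothetical, and the recipient's public key is assumed to already be on your keyring:

```python
import subprocess

# Encrypt message.txt for a recipient whose public key is already on the
# local keyring; --armor produces ASCII output safe to paste into email.
# The address and filename here are hypothetical.
subprocess.run(
    ["gpg", "--encrypt", "--armor",
     "--recipient", "alice@example.org",
     "message.txt"],
    check=True,
)
# Writes message.txt.asc. The same command works for any attachment you
# want to encrypt before bundling it into a message.
```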
Now, the bad. That's where the usability stops. There are so many details you can get wrong, and so many ways to mess the whole thing up, that if this stuff were a form of contraception desperate parents would be giving babies away on street corners.
Item: the subject line doesn't get encrypted. There is nothing you can do about this except put a lot of thought into devising a subject line that will compel people to read the message but that simultaneously does not reveal anything of value to anyone monitoring your email. That's a neat trick.
Item: watch out for attachments, which are easily accidentally sent in the clear; you need to encrypt them separately before bundling them into the message.
Item: while there is a nifty GPG plug-in for Thunderbird – Enigmail – Outlook, being commercial software, is less easily supported. GPG's GpgOL module works only with 2003 (SP2 and above) and 2007, and not on 64-bit Windows. The problem is that it's hard enough to get people to change *one* habit, let alone several.
Item: lacking appropriate browser plug-ins, you also have to tell them to stop using Webmail if the service they're used to won't support IMAP or POP3, because they won't be able to send encrypted mail or read what others send them over the Web.
Let's say you're running a field station in a hostile area. You can likely get users to persevere despite these points by telling them that this is their work system, for use in the field. Most people will put up with some inconvenience if they're being paid to do so and/or it's temporary and/or you scare them sufficiently. But that strategy violates one of the basic principles of crypto-culture, which is that everyone should be encrypting everything so that sensitive traffic doesn't stand out. The advocates of that principle are of course completely right, just as they were in 1993, when the big political battles over crypto were being fought.
Item: when you connect to a public keyserver to check or download someone's key, that connection is in the clear, so anyone surveilling you can see who you intend to communicate with.
Item: you're still at risk with regard to traffic data. This is what RIPA and data retention are all about. What's more significant? Being able to read a message that says, "Can you buy milk?" or the information that the sender and receiver of that message correspond 20 times a day? Traffic data reveals the pattern of personal relationships; that's why law enforcement agencies want it. PGP/GPG won't hide that for you; instead, you'll need to set up a proxy or use Tor to mix up your traffic and also protect your Web browsing, instant messaging, and other online activities. As Tor's own people admit, it slows performance, although they're working on it.
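By way of illustration, here is a minimal sketch of routing a Web request through a local Tor client, assuming Tor is listening on its default SOCKS port 9050 and that the Python requests library is installed with SOCKS support:

```python
import requests

# Send the request through a local Tor client (default SOCKS port 9050).
# The socks5h scheme makes DNS resolution happen inside Tor as well, so
# even the lookup doesn't reveal who you intend to contact.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
response = requests.get("https://example.org/", proxies=proxies)
print(response.status_code)
```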
All this says we're still a long way from a system that the mass market will use. And that's a damn shame, because we genuinely need secure communications. Like a lot of people in the mid-1990s, I'd have thought that by now encrypted communications would be the norm. And yet not only is SSL, which protects personal details in transit to ecommerce and financial services sites, the only really mass-market use, but it's in trouble. Partly, this is because of the technical issues raised in the linked article – too many certification authorities, too many points of failure – but it's also partly because hardly anyone understands how to check that a certificate is valid or knows what to do when warnings pop up that it's expired or issued for a different name. The underlying problem is that many of the people who like crypto see it as both a cool technology and a cause. For most of us, it's just more fussy software. The big advance since the mid 1990s is that at least now the *developers* will use it.
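To make that last point concrete: here, sketched with Python's standard ssl module and an illustrative hostname, is roughly what "checking that a certificate is valid" involves when software does it for you:

```python
import socket
import ssl

hostname = "example.org"  # illustrative target

# Verify the server's certificate chain against the system's trusted
# certificate authorities; the handshake raises SSLCertVerificationError
# if the certificate is expired, untrusted, or issued for another name.
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issuer: ", cert["issuer"])
        print("Expires:", cert["notAfter"])
```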
Maybe mobile phones will be the thing that makes crypto work the way it should. See, for example, Dave Birch's current thinking on the future of identity. We've been arguing about how to build an identity infrastructure for 20 years now. Crypto is clearly the mechanism. But we still haven't solved the how.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.
Image: Enigma Machine @davidsickmiller.com: CC BY-NC-SA 2.0 licence
Printers on fire
Wendy Grossman highlights the vulnerability of unprotected consumer hardware to attack.
It used to be that if you thought things were spying on you, you were mentally disturbed. But you're not paranoid if they're really out to get you, and new research at Columbia University, with funding from DARPA's Crash program, exposes how vulnerable today's devices are. Routers, printers, scanners – anything with an embedded system and an IP address.
Usually what's dangerous is monoculture: Windows is a huge target. So, argue Columbia computer science professor Sal Stolfo and PhD student Ang Cui, device manufacturers rely on security by diversity: every device has its own specific firmware. Cui estimates, for example, that there are 300,000 different firmware images for Cisco routers, varying by feature set, model, operating system version, hardware, and so on. Sure, an attacker could crack any one of them – but what's the payback? Especially compared to that nice, juicy Windows server over there?
"In every LAN there are enormous numbers of embedded systems in every machine that can be penetrated for various purposes," says Cui.
The payback is access to that nice, juicy server and, indeed, the whole network. Few of us update – or even check – firmware. So once inside, an attacker can lurk unnoticed until the device is replaced.
Cui started by asking: "Are embedded systems difficult to hack? Or are they just not low-hanging fruit?" There isn't, notes Stolfo, an industry providing protection for routers, printers, the smart electrical meters rolling out across the UK, or the control interfaces that manage conference rooms.
If there is, after seeing their demonstrations, I want it.
Their work is two-pronged: first demonstrate the need, then propose a solution.
Cui began by developing a rootkit for Cisco routers. Despite the diversity of firmware and each image's memory layout, routers are a monoculture in that they all perform the same functions. Cui used this insight to find the invariant elements and fingerprint them, making them identifiable in the memory space. From that, he can determine which image is in place and deduce its layout.
"It takes a millisecond."
Once in, Cui sets up a control channel over ping packets (ICMP) to load microcode, reroute traffic, and modify the router's behaviour. "And there's no host-based defense, so you can't tell it's been compromised." The amount of data sent over the control channel is too small to notice – perhaps a packet per second.
"You can stay stealthy if you want to."
You could even kill the router entirely by modifying the EEPROM on the motherboard. How much fun to be the army or a major ISP and have to physically connect to 10,000 dead routers to restore their firmware from backup?
They presented this work at WOOT, and then felt they needed something more dramatic: printers.
"We turned off the motor and turned up the fuser to maximum." Result: browned paper and…smoke.
How? By embedding a firmware update in an apparently innocuous print job. This approach is familiar: embedding programs where they're not expected is a vector for viruses in Word and PDFs.
"We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there's a firmware update." It hasn't been done before now, Cui thinks, because there isn't a direct financial pay-off and it requires reverse-engineering proprietary firmware. But think of the possibilities.
"In a super-secure environment where there's a firewall and no access – the government, Wall Street – you could send a resume to print out." There's no password. The injected firmware connects to a listening outbound IP address, which responds by asking for the printer's IP address to punch a hole inside the firewall.
"Everyone always whitelists printers," Cui says – so the attacker can access any computer. From there, monitor the network, watch traffic, check for regular expressions like names, bank account numbers, and social security numbers, sending them back out as part of ping messages.
"The purpose is not to compromise the printer but to gain a foothold in the network, and it can stay for years – and then go after PCs and servers behind the firewall." Or propagate the first printer worm.
Stolfo and Cui call their answer a "symbiote", after biological symbiosis, in which two organisms attach to each other to mutual benefit.
The goal is code that works on an arbitrarily chosen executable about which you have very little knowledge. Emulating a biological symbiote, which finds places to attach to the host and extract resources, Cui's symbiote first calculates a secure checksum across all the static regions of the code, then finds random places where its code can be injected.
"We choose a large number of these interception points – and each time we choose different ones, so it's not vulnerable to a signature attack and it's very diverse." At each device access, the symbiote steals a little bit of the CPU cycle (like an RFID chip being read) and automatically verifies the checksum.
"We’re not exploiting a vulnerability in the code," says Cui, "but a logical fallacy in the way a printer works." Adds Stolfo, "Every application inherently has malware. You just have to know how to use it."
Never mind all that. I'm still back at that printer smoking. I'll give up my bank account number and SSN if you just won't burn my house down.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.
Think of the children
Wendy Grossman asks how useful filters are in preventing children from accessing x-rated sites.
'Give me smut and nothing but!' - Tom Lehrer
Sex always sells, which is presumably why this week's British headlines have been dominated by the news that the UK's ISPs are to operate an opt-in system for porn. The imaginary sales conversations alone are worth any amount of flawed reporting:
ISP Customer service: Would you like porn with that?
Customer: Supersize me!
Sadly, the reporting was indeed flawed. Cameron, it turns out, was merely saying that new customers signing up with the four major consumer ISPs would be asked if they want parental filtering. So much less embarrassing. So much less fun.
Even so, it gave reporters such as Violet Blue, at ZDNet UK, a chance to complain about the lack of transparency and accountability of filtering systems.
Still, the fact that so many people could imagine that it's technically possible to turn "Internet porn" on and off as if by a switch is alarming. If it were that easy, someone would have a nice business by now selling strap-on subscriptions the way cable operators do for "adult" TV channels. Instead, filtering is just one of several options for which ISPs, Web sites, and mobile phone operators do not charge.
One of the great myths of our time is that it's easy to stumble accidentally upon porn on the Internet. That, again, is television, where idly changing channels on a set-top box can indeed land you on the kind of smut that pleased Tom Lehrer. On the Internet, even with safe search turned off, it's relatively difficult to find porn accidentally – though very easy to find on purpose (especially since the advent of the .xxx top-level domain).
It is, however, very easy for filtering systems to remove non-porn sites from view, which is why I generally turn off filters like "Safe search" or anything else that will interfere with my unfettered access to the Internet. I need to know that legitimate sources of information aren't being hidden by overactive filters. Plus, if it's easy to stumble over pornography accidentally, then as a journalist writing about the Net and in general opposing censorship I think I should know that. I am better than average at constraining my searches so that they will retrieve only the information I really want, which is a definite bias in this minuscule sample of one. But I can safely say that the only time I encounter unwanted anything-like-porn is in display ads on some sites that assume their primary audience is young men.
Eli Pariser, whose The Filter Bubble: What the Internet is Hiding From You I reviewed recently for ZDNet UK, does not talk in his book about filtering systems intended to block "inappropriate" material. But surely porn filtering is a broad-brush subcase of exactly what he's talking about: automated systems that personalize the Net based on your known preferences by displaying content they already "think" you like at the expense of content they think you don't want. If the technology companies were as good at this as the filtering people would like us to think, this weekend's Singularity Summit would be celebrating the success of artificial intelligence instead of still looking 20 to 40 years out.
If I had kids now, would I want "parental controls"? No, for a variety of reasons. For one thing, I don't really believe the controls keep them safe. What keeps them safe is knowing they can ask their parents about material and people's behavior that upsets them so they can learn how to deal with it. The real world they will inhabit someday will not obligingly hide everything that might disturb their equanimity.
But more important, our children's survival in the future will depend on being able to find the choices and information that are hidden from view. Just as the children of 25 years ago should have been taught touch typing, today's children should be learning the intricacies of using search to find the unknown. If today's filters have any usefulness at all, it's as a way of testing kids' ability to think ingeniously about how to bypass them.
Because: although it's very hard to filter out only *exactly* the material that matches your individual definition of "inappropriate", it's very easy to block indiscriminately according to an agenda that cares only about what doesn't appear. Pariser worries about the control that can be exercised over us as consumers, citizens, voters, and taxpayers if the Internet is the main source of news and personalization removes the less popular but more important stories of the day from view. I worry that as people read and access only the material they already agree with our societies will grow more and more polarized with little agreement even on basic facts. Northern Ireland, where for a long time children went to Catholic or Protestant-owned schools and were taught that the other group was inevitably going to Hell, is a good example of the consequences of this kind of intellectual segregation. Or, sadly, today's American political debates, where the right and left have so little common basis for reasoning that the nation seems too polarized to solve any of its very real problems.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.
Image: CC BY 2.0 Flickr: Dalbera
A Pledge for Digital Rights
Why and how you can tell which sites ISPs are blocking.
Surfing the World Wide Web gives users the feeling that any information available on the Web is accessible. Experts have shown that this simply is not the case. Some websites are blocked by Internet Service Providers or a corporate firewall. Many companies block websites from certain users because of copyright issues. When consumers suspect that certain websites may be blocked from their computer, they have several options to resolve the potential problem.
How to Know if Sites are Blocked
Try Local Coffee Shops or Free WiFi Locations. Since different Internet service providers block different websites, consumers should try their local coffee shops or other free WiFi locations that use a different ISP from the one at home or in the office. If the website is not blocked on other ISPs, there are steps to take to circumvent the blocking.
Check Online for Site Unblocking Software. There are some software companies whose products will tell you which sites are being blocked from your view. Unfortunately, some anti-virus programs and firewalls will not allow consumers to access the websites where that software is hosted. Try some of the techniques highlighted in the next section for more information.
Options to Circumvent the Blocking
1. Use a Proxy Server to Access the Website: Find the website’s IP address by typing “tracert www.websitename.com” into a Command Prompt window, replacing “websitename” with the name of the website that you are attempting to reach. The first IP address displayed will be the website’s address. (A scripted version of this step appears in the sketch after this list.)
Type the website’s IP address into the browser instead of the domain name. If the website appears, the block is probably being applied to the domain name rather than to the IP address. Next, search online to find a free proxy. Once a proxy is located, open the Proxy Settings in your browser's “Tools” or “Options” menu and enter the IP address and port of the proxy. The website will then be loaded via the proxy, bypassing your ISP's block.
Websites such as “hidemyass.com”, “can’t bust me” or “anonr.com” each facilitate access to sites that a person may want to view but which are blocked. Some proxy services allow customers to enter the URL of the site they want and will grab the content for them. Some of these websites also allow anonymous web surfing by disguising the user's IP address and location information. Note that corporate firewalls and Internet Service Providers may block proxy server websites as well.
2. Web2Mail: This free service converts the desired URL into readable HTML that can be viewed in your email client. To follow any link on that page, send another email to the Web2Mail service with the new URL in the subject line. This is a temporary fix until you can view the websites another way.
3. Use Virtual Private Networking (VPN): Many people working in a corporate office have restricted Internet access. By connecting to a machine at home over a VPN or remote-desktop link, they can browse the Web without restriction. OpenVPN is a free VPN option, while TightVNC, LogMeIn Free and UltraVNC are free remote-desktop tools that achieve much the same effect. Most people do not need any technical expertise to set up a VPN at home.
4. Use an Internet Anonymizer: The Tor onion router masks the user’s IP address by sending traffic through a maze of relay servers. Vidalia and Torbutton are companion tools that make Tor easier to use with browsers such as Firefox, and JAP is another anonymizing proxy that may prevent blocking.
5. Use Google: Viewing Google's cached copy of a page is an option for anyone who needs to see a blocked site. The cached version may not be up to date, but the basic information that you accessed previously will be present. Google Reader can also extract RSS feeds from a blocked site if you are looking for updates: simply enter the site’s URL and Reader will retrieve the feeds for you.
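As promised in step 1, here is what that lookup-and-proxy routine looks like when scripted – a minimal sketch in Python; the hostname is the article's own placeholder, the proxy address is illustrative, and the requests library is assumed to be installed:

```python
import socket

import requests

# Resolve the site's IP address (the scripted equivalent of tracert's
# first line). The hostname here is the article's placeholder.
hostname = "www.websitename.com"
ip_address = socket.gethostbyname(hostname)
print(hostname, "->", ip_address)

# Fetch the page through a proxy instead of a direct connection.
# The proxy address below is illustrative only.
proxies = {"http": "http://203.0.113.10:8080"}
response = requests.get("http://" + ip_address + "/",
                        headers={"Host": hostname},
                        proxies=proxies)
print(response.status_code)
```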
Approaches to Technical Censorship
Internet Protocol (IP) Address Blocking: If access to a particular IP address is blocked, then all websites hosted at that address will be blocked. Proxies are helpful in this situation.
Domain Name System Filtering and Redirection: If the domain name is blocked, use an alternative DNS server that will still resolve the blocked name (a scripted example follows this list), or bypass DNS entirely if the IP address is not blocked and is obtainable from another source.
Uniform Resource Locator (URL) Filtering: To circumvent blocking of URL strings, users may use escaped characters in the URL or encrypted protocols such as VPN and TLS/SSL.
Packet Filtering: Certain TCP packet transmissions may be terminated when controversial keywords are detected. Users may utilize VPN or TLS/SSL to circumvent the problem.
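To illustrate the DNS case, here is a minimal sketch of querying an alternative DNS server directly, assuming the third-party dnspython package; the domain is the article's placeholder:

```python
import dns.resolver  # third-party dnspython package

# Ask a public resolver (here Google's 8.8.8.8) instead of the ISP's
# filtered one. If only DNS is being filtered, the address returned
# can be used to reach the site directly.
resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]
for record in resolver.resolve("www.websitename.com", "A"):
    print(record.address)
```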
Summary
When ISPs or companies block websites, the practice can be inconvenient – especially if the user is not aware of which sites are being blocked. Consumers should follow the steps above to determine which methods will alleviate the problem.
This is a guest article by Ruben Corbo, a writer for the website Broadband Expert where you can find high speed internet providers in your area and compare prices on different deals for your wireless internet necessities.
Image: CC BY-SA 2.0 Flickr: RIUM+
Why we should all care about digital rights
Digital rights for all? Milena Popova calls for more diverse representation within the IT community.
I recently gave a talk at Skeptics in the Pub on digital rights. While the audience were lively, engaged, well-informed and provided lots of food for thought in the post-talk discussion, it didn’t escape my attention that only about 10% were female.
The technology sector in general has a reputation for being dominated by white men, and that extends to related campaigning organisations. The Pirate Party for instance, didn’t manage to field a single female candidate in the 2010 general election (though they did better in the Scottish Parliament elections this year). Last year’s ORGCon - a truly inspiring event - wasn’t even close to being gender-balanced, either in terms of speakers or attendees. There are many structural reasons why the sector struggles with diversity: lack of role models, high-profile cases of vicious bullying and harassment of women technologists, and the booth babes phenomenon all play a part and all need to be addressed. However, the fact that a female speaker at an event run by an organisation which is generally inclusive and tends to have more balanced audiences attracted a 90% white male audience points at an element of self-selection too. Do women and minorities simply not care about digital rights?
Personally, I have a number of female and minority friends who do care and who are both interested and active in digital rights: artists, copyright experts, writers. Among the general population, however, digital rights are often seen as the domain of geeky kids who download too much music off the internet. Technology like the internet, mobile phones and mobile data has become so pervasive that we all take it for granted. We organise our social lives through Twitter and Facebook, we access information about education, employment and government services online, we use the internet to express our thoughts and ideas and to reach thousands more people than we could in our local communities. We network, meet new people, get advice, we even meet our partners through online dating.
Mothers get advice and support on all kinds of issues on Mumsnet. Feminists organise through websites like The F Word which often translate into real world action. Disabled people find new ways of reaching out to the world and fighting for their rights through The Broken of Britain Campaign. Bullied lesbian, gay, bisexual or transgender teens can find new hope through the It Gets Better videos. The Finding Ada campaign aims to highlight the successes of women in science and technology and attract more women to the sector. Men, women, black, white, straight, gay, Muslim or humanist, able-bodied or not, the internet brings us together and empowers us all.
Yet our position is precarious. Technology empowers us, but it also gives both businesses and the state new tools to control us: from face recognition to data retention, web blocking and the threatened disconnection from the internet for alleged copyright infringement. Imagine one day you couldn’t access the Internet anymore because your daughter had downloaded the latest Beyoncé song - or even worse, simply because someone at your ISP had mixed up IP addresses when they’d passed on data to rights holders! Imagine your bullied, gay teenage son couldn’t access the It Gets Better videos anymore because his school had deemed them “inappropriate content”. Imagine your data storage devices - laptop, portable hard drive, iPod - being routinely searched by customs agents every time you cross a border, to check for “infringing material”. Our fluffy, liberal, democratic governments already do, or are proposing to do, these things. Imagine what a BNP government 20 years from now would do to sites like The Broken of Britain or the F Word!
Just as the internet has become an inextricable part of the fabric our lives, so have digital rights. Without one, we can never guarantee the neutrality and freedom of the other - and so we can never guarantee our own personal freedom. Digital rights matter - to us all.
Image: CC BY-NC 2.5 (Xkcd Comics)
Parody or Satire - who’s the fairest of them all?
Neeti Jain looks at the law surrounding parody and satire
It is important, when looking to create legal exceptions to copyright infringement, to examine the contours of such exceptions very carefully. The fair use status granted by U.S. Courts to parodies on the one hand, and taken away from satires on the other, serves as an important example of what not to do.
Unlike copyright law in the UK, where fair dealing protects only specific categories of work, in the United States of America (U.S.) Section 107 of Title 17 of the U.S. Code provides a defense of fair use to those accused of copyright infringement, setting down a four-factor test to judge the accused work:
(1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use on the potential market for or value of the copyrighted work.
Therefore, if a new work is sufficiently transformative in its purpose – such as using the original for commentary, criticism, review or parody – it may be defended as a fair use, even when infringing the original.
On the face of it, the U.S. provisions on fair use may lead to a large variety of new, creative works being protected against allegations of infringement- as long as they pass the test. However, looking at the way courts have interpreted fair use provisions in the case of parody versus satire, it is clear that the ambit of the fair use defense is quite narrow and may be hard to predict, leading to hesitation in creating new works that draw from another.
Parody
The meaning of “parody” according to the Merriam-Webster dictionary is as follows:
1: a literary or musical work in which the style of an author or work is closely imitated for comic effect or in ridicule
2: a feeble or ridiculous imitation
— pa·rod·ic adjective
— par·o·dis·tic adjective
“Big Hairy Woman” held to be fair use
One of the most prominent decisions relating to fair use is the U.S. Supreme Court’s decision in Campbell v. Acuff-Rose Music. The Court examined the alleged copy under each of the factors of the four-factor fair use test provided by §107 of the U.S. Copyright Act and finally overturned the decision of the Court of Appeals. It held that 2 Live Crew’s song “Big Hairy Woman”, based on Roy Orbison’s original copyrighted song “Pretty Woman”, was sufficiently transformative of the original and was protected as a fair use. It is worth noting that 2 Live Crew had requested permission from Acuff-Rose to write their parody but had been refused. 2 Live Crew also did not seek the compulsory license that US law grants to composers to make cover versions of existing copyrighted song recordings, because they thought their parody was not a cover of the original – being different in melody and lyrics.
One of the more important factors in the four-factor fair use test is the effect the use has on the potential market for or value of the copyrighted work. In Acuff-Rose, the Supreme Court held that parodies would rarely substitute for the original work that they are parodying. The court also noted with annoyance that artists “ask for criticism but only want praise” for their work.
Why protect parody?
The First Amendment to the U.S. Constitution protects free speech. Fair use protects commentary and criticism in furtherance of that right to free speech. A parody that comments on, and criticizes the subject thereof, is protected by fair use – even though it may be infringing the copyright in the work being critiqued. Therefore, through the Acuff decision, the U.S. Supreme Court seems to have elevated parodies to a desirable fair use and demoted satire to something that is not worthy because the satirist has taken too much, without adding enough, to “avoid the drudgery in working up something fresh.”
Too bad, Satire. You just ain’t fair enough.
The definition of “satire” according to the Merriam-Webster dictionary is as follows:
1: a literary work holding up human vices and follies to ridicule or scorn
2: trenchant wit, irony, or sarcasm used to expose and discredit vice or folly
In Dr. Seuss Enterprises v Penguin Books, the United States Court of Appeals for the Ninth Circuit gave its judgment on whether copyright and trade mark rights in the Cat in the Hat character – created by Theodor S. Geisel, the author and illustrator of the famous children’s educational books written under the pseudonym “Dr. Seuss” – were infringed by “The Cat NOT in the Hat”, a poetic account of the O.J. Simpson trial by “Dr. Juice”. The disputed work narrates the trial from O.J. Simpson’s perspective, using Dr. Seuss’s style of rhyme, for example:
“A man this famous/Never hires/Lawyers like/Jacoby Meyers/When you’re accused of a killing scheme/You need to build a real Dream Team” and “One knife?/Two knife?/Red knife/Dead wife.”
The court held that the Cat NOT in the Hat infringed Dr. Seuss’s copyright. Therefore, the question was only whether there was any fair use defense that could be claimed by Penguin and Dove, the publisher and distributor of the infringing work, respectively.
It was held that the Cat NOT in the Hat was not protected by a fair use defense because fair use only protects those transformative works that borrow from the original to comment or ridicule the original work itself. Such criticism, review or commentary would be considered as enhancing freedom of speech. Therefore, The Cat NOT in the Hat, which uses Dr. Seuss’s copyrighted works – such as the drawing of the cat in the hat, the similar title (protected by trade mark) and the style of his rhyming – to ridicule or satirize the O.J. Simpson trial, but not Dr. Seuss or his work, is not protected by fair use.
An example of what may not be fair use according to the U.S. Courts is this video by JibJab:
It is a Flash movie, created during the 2004 U.S. presidential election, that uses the popular song by Woody Guthrie, “This Land Is Your Land”, and has an animated George Bush and John Kerry singing the tune with lyrics that are different from the original (except for the title line). The song’s copyright owner, the Richmond Organization, through its Ludlow Music unit, threatened legal action. JibJab responded by saying it was a parody and, therefore, permitted as fair use.
The parties ultimately settled the case and therefore, it is unknown what its outcome would have been. It could be that the case settled because no one could be sure which way the courts would decide – given the thin line created between satire and parody by the decisions that came before. Given the Dr. Seuss case’s decision, it is likely that this song would have been held to be infringing and not covered by fair use since it is not parodying the actual song or its author but is satirizing the Republicans and their campaign.
Looking at the four factors of the fair use test in the U.S. Copyright Act, it is difficult to imagine why a court would not allow a satire of the sort in the JibJab case. JibJab admittedly use the “heart” of the original - but add plenty of originality and humour. The character and purpose of the use is transformative, a political commentary, criticism and primarily non-commercial (any commercial success coming their way was probably incidental and unintentional).
There is no way that someone who wants to listen to the Guthrie original for its musical qualities would substitute it with the JibJab version. Therefore, it is unlikely to have any effect on the potential market for the original. However, despite all this, JibJab’s video would probably not be accorded fair use protection in the eyes of the court, following the precedent of Acuff-Rose. The reason to not allow a satire that uses a copyrighted work to make its point is probably to prevent plain-old-copying, rather than to prevent any potential or actual monetary loss to the copyright owner.
The questions then are: if copyrighted works could be freely used to create satires on unrelated subjects, would this freedom act as a disincentive to the creativity of authors? Or would it enhance creative expression and free speech? Why is it that writing a review or comment on the original is a desirable “free speech” effect but creating a satire on something else is not? Can anyone say that a satire should not receive free speech protection? If it borrows from a popular catchy tune or poem in order to create something new and socially valuable – a new work that will gain a wider audience because of that popularity or catchiness – why should that not be fair use? Creating an intellectual commons from which people can derive more creativity is, after all, part of the original purpose of copyright.
By limiting the applicability of the fair use defense to parody, but not providing it to satire, the law would create a limitation on future innovation and creativity. There may often be cases that have elements of satire and parody. In such cases, it may be counterproductive to have the courts opine on each and every case and pronounce it a parody or satire, fair use or not. Artists, commentators, reviewers, humourists would need to seek legal advice before they can even begin to create new works that are inspired by an existing work.
One would only be able to parody Jay Z himself, but never the government or a politician, using Jay Z’s popular song. Using Jay Z’s song as a background may have a much bigger impact by reaching a wider audience – but fair dealing provisions that are limited to parodies only may not allow such free speech and intellectual exchange. Therefore, it is important, when undertaking a review of the UK’s copyright laws, to include fair dealing exceptions that allow all kinds of humour, criticism, review and commentary to be protected – including satires that use a copyrighted work, as long as it is not made with the intention of stealing the market of the original and does not have that direct effect.
More stuff to read on this issue:
- Give fair use a wide berth and prevent casualties like “Newport State of Mind”.
- Read more about parody exceptions in different jurisdictions in Jagdeep Bahra’s article.
Data protection: What will the new Directive look like?
Ryan Jendoubi argues that the new Data Protection Directive details will flow from European Policy
For the past two years the European Commission's Justice Directorate has been reviewing the 1995 Data Protection Directive and other legislation affecting EU citizens' right to have their personal data protected. An examination of the Justice Commission's Communication on the review, and of the speeches of Commissioner Viviane Reding, reveals a number of recurring themes, but it's difficult at this stage to predict accurately how they will be implemented in practice.
There have been strong hints that the activities of justice and security functions will fall under the new Directive. The likely impact of this is particularly hard to gauge. Reding has made positive reference to the current UK government's commitment to stop "storage of internet and email records without good reason". On the other hand, an attempt in 2008 to reach a similar goal at the European level was highly contested, with the final protections leaving domestically held police data completely unregulated. As ever, when it comes to security, it will be incumbent upon civil society to be vigilant against "emergency" or "exceptional" powers or exemptions which may over time creep into casual usage.
There have been mixed messages on the subject of notification to data breach victims that their privacy has been compromised. In one speech, Reding declared that, "I will introduce a mandatory data breach notification requirement – the same as I did for telecoms and Internet access when I was Telecoms Commissioner, but this time for all sectors: banking data, data collected by social networks or by providers of online video games." The Commission Communication on the review pledged that, "The Commission will examine the modalities for the introduction in the general legal framework of a general personal data breach notification, including the addressees of such notifications and the criteria for triggering the obligation to notify."
There is of course a danger that the Commission will not set a high enough standard. In another speech to a different audience, Reding made the following statement: "I understand that some in the banking sector are concerned that a mandatory notification requirement would be an additional administrative burden. However, I do believe that an obligation to notify incidents of serious data security breach is entirely proportionate" (emphasis mine). Care is due here for three reasons. First is the basic moral argument that people should be told whenever their privacy has been compromised by the party responsible for that breach. Second is the practical issue that the individual may be in a much better position than the data controller (the person in charge of managing their data) to assess the potential seriousness of a breach given their personal circumstances – who decides what is a "serious" breach and what is not? Finally, in answer to the "administrative burden" argument, we should remember that the burden is one of the purposes of any legislation, since it will incentivise data controllers to prevent breaches.
The definition of personal data is an area of ongoing dispute which will receive attention in the new Directive. The limited approach says that data is only 'personal' where it contains enough information in and of itself, or combined only with other information which the data controller already has, to be linked to a particular person. In practice, this leads some to argue that IP addresses are not personal data because the owners of websites visited cannot link an IP address to a person without more information.
On the other hand, a more realistic approach acknowledges that part of the purpose of data protection is to guard the privacy of individuals against the misuse of their data by people other than the data controller, who may well have malicious intent. It follows then that any data which may be linked back to an individual through cross-referencing with other data should also be regarded as personal. Indeed, the wording used in the Data Protection Directive is oriented towards this interpretation. Article 29 (a working group made up of the data authorities from all EU countries) have been clear that in their view this latter approach is the correct interpretation of the law. The UK however differs from many other European countries, having a lower standard not explicitly including the idea of 'indirect' identification (Munir and Teh, 2008). Any additional clarification in the new Directive requiring the UK to raise its standards is likely to be met with political opposition, fuelled by the complaints of data controllers likely to have their workloads increased.
Consent is another area wherein subtle but significant disagreement is found. Ambiguity arises where data controllers claim 'implicit' consent, which is for example the source of the controversy surrounding the new 'cookie law'. The Regulation changes the standard of consent from opt-out ("opportunity to refuse") to opt-in ("given his or her consent"), but goes on to allow consent to be "signified by a subscriber who amends or sets controls on the internet browser [...] or by using another application or programme". Some see this as a get-out clause, whereby a user who fails to modify their browser settings will be interpreted as giving blanket consent to website owners. This would clearly circumvent the kind of user control the regulation exists to institute. Article 29 have already adopted an opinion on the proper definition of consent, calling for greater clarification of the meaning of "unambiguous" consent in the new Directive. Again, getting this basic point wrong would deprive the law of much of its protective purpose, but data controllers continue to want consent to remain as 'passive' as possible. Some justify this by talking about preserving a 'smooth user experience', while others are quite frank about the risk of informed users refusing to be tracked, which would put a serious dent in many sites' advertising-based business models.
The right to be forgotten is a tricky concept, but less because it is contentious than because it is prone to being misunderstood. First, since people obviously cannot be forced to forget information, 'forgotten' must necessarily be understood to mean 'forgotten by organisations'. Secondly, being 'forgotten' in this way is actually already the result of other data protection principles, most notably the principle of data minimisation (controllers should collect only the bare minimum information they need) and the principle of keeping data no longer than necessary.
It is difficult to see how the idea can extend beyond these existing requirements, yet the "right to be forgotten" was explicitly mentioned in the Justice Commission's Communication on the review of data protection, as well as several speeches by Commissioner Reding. As the phrase becomes popularised it's very important that we guard against getting carried away with the concept. This is because there are legal, ethical and technical limitations to any erasure of data. For that reason, talking about a "right" in this context is misleading, bringing the risk of raising unrealistic expectations, an argument that can then be used against a legitimate and necessary tightening of data minimisation and timely deletion rules (Justice Secretary Ken Clarke has already criticised the concept). Another danger is that, if misunderstood and taken literally, a "right to be forgotten" could have massive ripple effects on the internet, with people trying to enforce their 'right' against indexing and caching services: one could imagine the extreme difficulties which could be created for the likes of the Internet Archive. Thus, though it might make a good soundbite, the "right to be forgotten" is a misleading and potentially dangerous broken metaphor which should be excluded from the ongoing debate.
Everyone's keen to talk about how new technological developments require the law to be brought up to date, and that privacy enhancing technologies should be included in new regulations. Predictably though, outside of academic papers there's little detail on what any of that means or what to do about it. There is concern both about the increasing generation of new types of very personal data (location, biological, medical) and about how that data, and the ability to process it, is becoming more accessible (cloud storage, cheaper processors). It's good that things like the treatment of NHS patient records have made more people tune in to data protection as an issue, but making different provisions for specific types of data could create trouble, as all personal data should be accorded the same degree of respect and protection.
On the positive side, it seems that 'privacy enhancing technologies' might be more than a buzzword. For example, the work of the Kantara Initiative makes it possible to conceive of, e.g., user A logging on to service B, who receives A's identity information via identity provider C; the beauty being that C will not know that the identity request related to A, and B will not be able to request any information which A has not authorised. These "serious" privacy protocols do not yet seem to be on the radar of policy makers in the Justice Commission, but the possibilities are exciting. See this episode of Security Now for more information (on the audio file skip to 42:40).
Lastly come a bunch of interrelated issues regarding the management of personal data by large companies. For starters there is discussion of streamlining the 'registration requirement' for data controllers, which would reduce or remove the requirement for them to register their activities with the national data authority. Commissioner Viviane Reding has stated that the current notification regime has proved to be 'unnecessary and ineffective', and the streamlining policy is likely to be a hit with the business community. The UK's Ministry of Justice has expressed concern that the registration fees paid by businesses are 'an important part of the arrangements for the ICO's independence from the UK Government'. However in ORG's experience the independence of the ICO is already somewhat strained, and given that its present funding levels are far below those enjoyed by similar regulatory bodies, an opportunity to rethink its funding may not go amiss.
Another possible way of streamlining business procedures would be EU-wide registration, whereby a company would only have to register its activities once within the EU. There is a related proposal that approval of contractual undertakings and binding corporate rules (two methods of guaranteeing citizens' data rights in data transfers outside the European Economic Area) should likewise be valid across the whole of Europe, instead of requiring approval in every country. While this will certainly cut the administrative burden on companies, there is an obvious danger where the data authority of one state might take a less stringent view of what is an appropriate level of protection, in which case pan-European approvals might be seen to lower protection standards across the board. In order to avoid this a greater level of prescription and tight definitions from the Commission would be necessary, yet in the European policy process tighter centralised regulation is just what many governments, including the UK, seek to avoid.
While it's possible to have a pretty good idea of which parts of the new Directive will be most interesting, for practical protection of individuals' data and privacy the devil will be in the detail. I heard it said recently that "It's easy to write a patch, but it's not so easy to write a law." The seeming remoteness and obscurity of the European legislative process is frustrating to many, but many of the rights, freedoms and protections which we enjoy in Britain flow to us from the continent. It's important that as individuals and organisations we continue to engage with European policy formation at every opportunity.
Rights precede laws
Crosbie Fitch argues that the concept of copyright is now out-dated
In order to understand the conflict between the publishing industry’s 18th century privilege of copyright and the emancipating cultural liberty of the information age, we need to understand copyright’s history. But, more important than the history of copyright or the law that created it, we need to understand rights.
What is the most important thing to know about rights?
Rights precede law. Our rights are not created by law. Our rights are imbued in us by nature. We, the people, create law to recognise our rights, and create and empower a government to secure them.
What are our rights?
Rights are the vital powers of all human beings. We have rights to life, privacy, truth, and liberty. We have a right to life, to protect the health and integrity of our minds and bodies. We have a right to privacy, to exclude others from the objects we possess and spaces we inhabit. We have a right to truth, to guard against deceit. We have a right to liberty, to move and communicate freely.
How then did government create a ‘right’ to prohibit copies?
No people creates a government to abridge, annul, or derogate from their rights in the interests of a few - or in Orwellian NewSpeak, the greater good. However, a government is in a position to assume power beyond that provided to it by the people. It can assume power to derogate from the people's rights in order to privilege a minority. Indeed, these privileges, so called 'legal rights', are now so pervasive in society that we must qualify the rights we were born with as natural rights.
So, what is copyright?
What we call ‘copyright’ is an 18th century privilege. It was granted by Queen Anne in her statute of 1709 for the ulterior benefit of the crown and its Stationers' Company, so that the de facto printing monopolies established by the guild during its control of the press could become law. The Stationers’ Company resumed enjoyment of its lucrative monopolies and effective control of the press. The crown resumed its ability to quell sedition via indirect control of the consequently beholden press.
Why was this Statute of Anne wrong?
Privileges are unconstitutional, inegalitarian, and unjust. Paraphrasing from Thomas Paine's 'Rights of Man', the liberty and right to copy is, by nature, inherently in all the inhabitants, but the Statute of Anne, by annulling the right to copy in the majority, leaves the right, by exclusion, in the hands of a few - or, as we term them today, 'copyright holders'.
Consequently, copyright, as any privilege, is an instrument of injustice.
What is the consequence of granting copyright?
Copyright is now a cultural pollutant and has effectively created cultural gridlock. Today, individuals face jeopardy in any significant engagement with their own culture. Moreover, copyright fools the very same people into believing they have a natural right to control the use of their work. Although we have privacy, the natural exclusive right to prevent others copying our work whilst it is in our possession, this does not provide us with the power to prevent others making further copies of what we give to them.
Such unnatural power is only provided by copyright, because that annuls everyone’s liberty and right to copy, leaving it in the hands of the copyright holder to restore by license. Even so, to prosecute the privilege, to detect and sue infringers, can be very expensive, and tends to require the wealth and economies of scale of a large copyright exploiting publisher.
But then why has copyright lasted so long?
In the 18th century the press could be controlled.
In the last couple of centuries, when printing presses were relatively few and far between, the state and publishers, via their crown-granted privilege, could expect to police and control the press.
Why can’t copyright work today?
Today, the press is us, the people. Today, we are all authors, all publishers, all printers. We, the people, are the press. To control the press is to control the people – a people supposedly at liberty.
What is the current approach to making copyright work?
The people are being ‘educated’ to respect copyright through draconian enforcement – severe punishments of a few as a deterrent to the many.
2005: Jammie Thomas-Rasset, 28, mother of 4, shared 24 files. Found liable for damages of $1.9m.
2005: Joel Tenenbaum, 22, shared 31 files. Found liable for damages of $675,000.
2010: Emmanuel Nimley, 22, recorded 4 movies in the cinema on his iPhone and shared them. Sentenced to 6 months’ jail.
2011: Anne Muir, 58, shared her music collection. Sentenced to 3 years’ probation.
Not only are publishing corporations trying to subjugate the people through extortion, intimidation, and fear, but the state is complicit, interested, as ever, both in pleasing its sponsors and in quelling sedition.
Will we ever learn to respect copyright?
Mankind’s cultural liberty is primordial. Our liberty, our natural right, our power and need to copy has never left us. Our right to copy may have been annulled by Queen Anne, but youngsters are finding out every day that they innately possess the ability and the instinctive need to share and build upon their own culture. We will never learn not to copy, because to learn is to copy, and we will never stop learning.
Copyright is a historical accident, a legislative error made in a less principled era. It is time to rectify that error, not the people.
Is that my mission then, to abolish copyright?
No. Copyright should be abolished, and the people should have their liberty restored, but my mission is not to abolish copyright. My mission is, and has always been, to answer this question: “How can artists sell their work when copies are instantaneously diffused upon publication?” Or, putting it slightly differently: “How can artists exchange their work for money in the presence of file-sharing, which effectively renders the reproduction monopoly of copyright unenforceable?” The solution lies in the question. Artists must exchange their work for the money of their fans directly - in a free market. Artists can no longer sell their work to printers in exchange for a royalty on profits from monopoly-protected prices. The monopoly of copyright is no longer effective. Its artificial market of copies has ended.
So, what is copyright’s future?
Copyright is an unethical anachronism. It still works as a weapon with which to threaten or punish infringers (with or without evidence), but even with draconian enforcement, the monopoly has ended. When privileged immortal corporations collide with a population naturally at liberty, the latter will prevail, however draconian their ‘education’ by the former. Nevertheless, without copyright, natural rights remain, e.g. an author’s exclusive right to their writings, truth in authorship, and so on.
Moreover, the market for intellectual work can continue quite happily without a reproduction monopoly. Indeed, it will thrive.
Crosbie Fitch is a cultural libertarian, natural lawyer, copyright and patent abolitionist, and software engineer
Image: CC-AT-SA Flickr: bobbigmac
Face to face
We saw Facebook roll out facial recognition software, but what role does the technology have beyond the social media site, asks Wendy M. Grossman
When, six weeks or so back, Facebook implemented facial recognition without asking anyone much in advance, Tim O'Reilly expressed the opinion that it is impossible to turn back the clock and pretend that facial recognition doesn't exist or can be stopped. We need, he said, to stop trying to control the existence of these technologies and instead concentrate on controlling the uses to which collected data might be put.
Unless we're prepared to ban face recognition technology outright, having it available in consumer-facing services is a good way to get society to face up to the way we live now. Then the real work begins, to ask what new social norms we need to establish for the world as it is, rather than as it used to be.
This reminds me of the argument that we should be teaching creationism in schools in order to teach kids critical thinking: it's not the only, or even best, way to achieve the object. If the goal is public debate about technology and privacy, Facebook isn't a good choice to conduct it.
The problem with facial recognition, unlike a lot of other technologies, is that it's retroactive, like a compromised private cryptography key. Once the key is known you haven't just unlocked the few messages you're interested in but everything ever encrypted with that key. Suddenly deployed accurate facial recognition means the passers-by in holiday photographs, CCTV images, and old TV footage of demonstrations are all much more easily matched to today's tagged, identified social media sources. It's a step change, and it's happening very quickly after a long period of doesn't-work-as-hyped. So what was a low-to-moderate privacy risk five years ago is suddenly much higher risk – and one that can't be withdrawn with any confidence by deleting your account.
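To make the key analogy concrete, here is a minimal sketch in Python (using the third-party cryptography package; the scenario and messages are invented for illustration, and symmetric encryption stands in for a private key): once a single long-lived key leaks, every message ever encrypted under it is exposed at once.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # One long-lived key, used for years of traffic (hypothetical scenario).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Messages encrypted at different times; an archive accumulates.
    archive = [cipher.encrypt(m) for m in (b"holiday plans, 2006",
                                           b"demo meeting point, 2009",
                                           b"medical question, 2011")]

    # The key leaks once, today...
    stolen = Fernet(key)

    # ...and the whole archive is readable, not just today's message.
    for token in archive:
        print(stolen.decrypt(token).decode())

Forward secrecy schemes rotate keys precisely to limit this kind of retroactive damage; a face, unlike a key, cannot be rotated, which is what makes retroactive recognition so hard to undo.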
There's a second analogy here between what's happening with personal data and what's happening to small businesses with respect to hacking and financial crime. "That's where the money is," the bank robber Willie Sutton explained when asked why he robbed banks. But banks are well defended by large security departments. Much simpler to target weaker links, the small businesses whose money is actually being stolen. These folks do not have security departments and have not yet assimilated Benjamin Woolley's 1990s observation that cyberspace is where your money is. The democratization of financial crime has a more direct personal impact because the targets are closer to home: municipalities, local shops, churches, all more geared to protecting cash registers and collection plates than to securing computers, routers, and point-of-sale systems.
The analogy to personal data is that until relatively recently most discussions of privacy invasion similarly focused on celebrities. Today, most people can be studied as easily as famous, well-documented people if something happens to make them interesting: the democratization of celebrity. And there are real consequences. Canada, for example, is doing much more digging at the border, banning entry based on long-ago misdemeanors. We can warn today's teens that raiding a nearby school may someday limit their freedom to travel; but today's 40-somethings can't make an informed choice retroactively.
Changing this would require the US to decide at a national level to delete such data; we would have to trust them to do it; and other nations would have to agree to do the same. But the motivation is not there. Judith Rauhofer, at the online behavioral advertising workshop she organised a couple of weeks ago, addressed exactly this point when she noted that increasingly the mantra of governments bent on surveillance is, "This data exists. It would be silly not to use it."
The corollary, and the reason O'Reilly is not entirely wrong, is that governments will also say, "This *technology* exists. It would be silly not to use it." We can ban social networks from deploying new technologies, but we will still be stuck with them when it comes to governments and law enforcement. In this, government and business interests are aligned.
So what, then? Do we stop posting anything online on the basis of the old spy motto "Never volunteer information", thereby ending our social participation? Do we ban the technology (which does nothing to stop the collection of the data)? Do we ban collecting the data (which does nothing to stop the technology)? Do we ban both and hope that all the actors are honest brokers rather than shifty folks trading our data behind our backs? What happens if thieves figure out how to use online photographs to break into systems protected by facial recognition?
One common suggestion is that social norms should change in the direction of greater tolerance. That may happen in some aspects, although Anders Sandberg has an interesting argument that transparency may in fact make people more judgmental. But if the problem of making people perfect were so easily solved we wouldn't have spent thousands of years on it with very little progress.
I don't like the answer "It's here, deal with it." I'm sure we can do better than that. But these are genuinely tough questions. The start, I think, has to be building as much user control into technology design (and its defaults) as we can. That's going to require a lot of education, especially in Silicon Valley.
Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series
What to make of Google+?
Is your privacy really better protected on Google Plus, asks Milena Popova
Last Friday I found myself faced with the Google+ sign-up page. I stared at it for a few minutes, feeling ever so slightly violated. Then I hit the sign-up button anyway.
Google’s marketing gimmick for its new service is well summarised by xkcd. It’s not Facebook but it’s like Facebook. The main way in which Google+ is (allegedly) entirely unlike Facebook is that it allows you much better control of your privacy. You don’t want your boss to see those embarrassing pictures from last weekend? You’d rather your Mum didn’t put two and two together from the fact that you’re “interested in” both women and men and that your relationship status is “It’s complicated”? Tough luck on Facebook.
With its Circles concept, though, Google+ lets you control precisely what bits of information you share with whom (if, of course, you can be bothered). Google has also learned from the monumental disaster that was Google Buzz: the Circles set-up is right at the heart of Google+:
Every time someone adds you as a contact, or you add them, the option to add them to a circle is right there; every time you post content or update a section of your profile, you can choose who can see that content. So far, so good: your boss will continue to see you as the consummate professional, your Mum will be happy in the knowledge of your domestic bliss, and you can still giggle at those naughty pictures from the weekend with your boyfriend and your girlfriend. Your secrets are safe with Google+.
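For the technically curious, the Circles model boils down to a per-post audience check. The toy sketch below, in Python, is an illustrative assumption about the general shape of such a system, not Google's actual code:

    # Illustrative model of circle-based sharing; all names are hypothetical.
    my_circles = {
        "friends":    {"alice", "bob"},
        "family":     {"mum"},
        "colleagues": {"boss", "carol"},
    }

    posts = []

    def share(content, circles):
        """Publish a post visible only to the named circles."""
        posts.append({"content": content, "audience": set(circles)})

    def visible_to(viewer):
        """Return the posts a given viewer is allowed to see."""
        return [p["content"] for p in posts
                if any(viewer in my_circles[c] for c in p["audience"])]

    share("Weekend photos", circles=["friends"])
    share("Quarterly report done", circles=["colleagues"])

    print(visible_to("bob"))   # ['Weekend photos']
    print(visible_to("boss"))  # ['Quarterly report done']

The design point is that the audience is chosen per post rather than per account, which is exactly the compartmentalisation a single flat friends list lacks.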
Or are they? The question, of course, is safe from whom? Google+ may enable you to compartmentalise your social networking life just like you do your real life, but there is one party here who probably knows more about you than you yourself do: Google.
I use Google Maps - so Google knows with a fair degree of accuracy both where I live and where I work, even without the GPS or other location data from my Android phone. It just needs to look at the two postcodes I use when I ask Maps for directions.
I use Gmail as my main email account. My other account is pretty much only used by a few close friends. Everything else - mailing lists for organisations I’m involved with, pitches and drafts for articles, my vaguely amusing attempts to play off my local branch of the LibDems against the local Labour guys, the head hunters and estate agents from the South West who still haven’t gotten around to removing me from their distribution lists three years after I stopped looking for a job and house down there - all of that goes through Gmail.
I use Google calendar. Google knows I’ll be in Scotland every weekend in August, and that last week I had to buy cakes for work, and that next Friday I’ll be in Bath all day.
I use Google Reader for my eclectic mix of technology and politics RSS feeds. I use Google Documents to share work with organisations I’m involved with: from my ORGZine articles to the contact lists for the Yes to Fairer Votes campaign and the paperwork for a small volunteer-run cinema. I have an Android phone. When I first signed into Gmail with it, it managed to match some of my Gmail contacts to my phone book. I don’t use Picasa, but Google+ did have several disclaimers in the sign-up process with regard to what it will do to your Picasa content.
Most tellingly, perhaps, I use Google search, and a lot of the time when I use Google search I’m logged into my Google account. So Google knows a fair chunk of what I’m looking at on the web, be it references for an ORGZine article on Google+ or information about the strangely female-dominated Star Trek convention that my friend is running. When I perform the same search on a different computer, Google helpfully highlights the search results I’ve already looked at. Oh, and equally helpfully, for the last few months Google search has been trying to get me to admit that I’m @elmyra on Twitter. It can give me better search results if I only admit to being @elmyra, it says, helpfully.
I wonder how many people will be signing up for Google+ thinking, isn’t it great, I can finally stop worrying about my friends uploading pictures of me in PVC dresses and New Rocks where my boss’ll see them, while never even sparing a thought for the massive amounts of information Google already holds about them. How do you compartmentalise your life when it comes to Google?
Milena is an economics & politics graduate, an IT manager, and a campaigner for digital rights, electoral reform and women's rights
Image: CC-AT-NC Flickr: Thomas Hawk