Your data, Your decision

Runa A. Sandvik, developer for the Tor Project, responds to our series on protecting privacy, explaining how and why the Tor Browser Bundle lets you control how much information you share online.

In response to Dr Paul Bernal's piece on The Right to be Forgotten, we are featuring a series of digital businesses who have created tools which they believe solve issues related to online privacy.

Social media sites use web- and mobile-based technology to create interactive dialog among organizations, communities, and individuals. As a member of a social media website, you are encouraged to share more about yourself: what you do, what your likes and dislikes are, what you bought online. You are encouraged to take more photographs, connect with more people, organize more events, and so on. You may think that you are in full control of what, and how much, you share on these sites. The reality is that you are only in control of a fraction of the information these sites collect about you.

The majority of the sites you visit use cookies to distinguish you from all other users. This small piece of data can also be used to link together sequential page visits you make, and to connect them with visits you make to other websites. Cookies alone do not identify people by name, but they are tied to a particular combination of computer, user account, and browser. It is the use of cookies that allows websites to learn more about who you are and what you do online: what you searched for on Google, what you bought on Amazon, which videos you viewed on YouTube.
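
To make the mechanism concrete, here is a minimal sketch in Python (illustrative only, not any particular site's code; the visitor_id name is my own) of all a server has to do to make your browser recognizable on every later visit:

    from http.cookies import SimpleCookie
    import uuid

    # On a visitor's first request, the server mints a random ID and sends
    # it back in a Set-Cookie header. The browser then returns that ID with
    # every subsequent request, letting the site tie those visits together.
    cookie = SimpleCookie()
    cookie["visitor_id"] = uuid.uuid4().hex
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year

    print(cookie.output())
    # e.g. Set-Cookie: visitor_id=9f3c2a...; Max-Age=31536000

The browser dutifully sends that visitor_id back with every later request to the same site, which is what lets the site stitch your visits together.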

A significant portion of your online experience is based on the use of free web services. After all, you are not paying any money to Google to use Gmail or to watch a video on YouTube, you are not paying any money to Twitter to follow other users, and you are not paying any money to Facebook to be able to connect with people you know. The truth is that you are paying indirectly, through the monetization of your personal information. In exchange for the use of Google's collection of free services, for example, you are helping Google build an enormous database filled with billions of searches and personal information. As the saying goes: if you are not paying for it, you are the product.

Some companies do not disclose what data they collect about you, what they do with it, or how long they store the data for. In some cases, you will even find that a company's privacy policy does not cover all the situations where your data is being collected, stored, and used. You have the right to ask to see the data that companies have on you, but not all companies are legally required to comply with your request. Finding out just how much information is stored about you can be an uphill battle, as the Europe versus Facebook campaign and Privacy International's campaign to encourage people to request their data from Twitter have demonstrated.

A different approach

In The Right to be Forgotten, Dr. Paul Bernal writes that "if you don’t want people to hold your data, or the reason they held it is no longer valid, you should have the right to have that data deleted". He argues that you as the consumer should be able to have control over your data, even when the data is held by a commercial operator. But what if you, instead of requesting to have data deleted, could control how much data you share in the first place? What if you could reduce your digital footprint with one piece of software? That is just one of the many challenges we are trying to address at the Tor Project.

The Tor Project develops and maintains the Tor Browser Bundle, a pre-configured software package that allows users to browse the web anonymously and securely. Tor was originally developed for the purpose of protecting government communications. Today, it is used by a wide variety of people for different purposes. An estimated 500,000 people use Tor on a daily basis; some use Tor to keep websites from tracking them and their family members, some use Tor to research sensitive topics, and some use Tor to connect to news sites and instant messaging services when these are blocked by their local Internet providers.

At the cost of some speed, the ability to play Flash videos, and a somewhat different user experience, you can reduce your digital footprint and prevent websites from tracking you online. By using the Tor Browser Bundle, you can control how much information you share with the websites you visit. By default, the sites you visit will not know your real IP address, know what you searched for on the Internet last week, learn which other websites you have visited or are currently visiting, or link your most recent online purchase with purchases you have previously made on either the same site or different sites.

Given the popularity of social media sites, and the number of new sites and services being introduced, there is an urgent need to address the problem that users cannot learn how much data a company holds about them, let alone request that this data be deleted.

Conclusion

You have several choices when it comes to controlling data about you that is held by third parties. Legislation is one possibility, but not letting the data get beyond your control in the first place is another. The Tor Browser Bundle is no silver bullet; while it will not allow you to delete data you have previously given out, it will allow you to prevent your data from being collected in the future. You rarely see a company say that it will collect as little information as possible when you use its service, and so it is up to you to decide how much data you want to give away.

 

Runa A. Sandvik is a developer for the Tor Project. She analyses blocking events, tests new releases of Tor, helps people use Tor safely, and gives talks all over the world. She tweets as @runasand.

Image: Red Onions CC-BY-ND 2.0 Flickr: Clay Irving

Insecure at Any Speed

Wendy Grossman looks at the problems with online password security and human error in the aftermath of the LinkedIn password fiasco.

"I have always depended on the kindness of strangers," Blanche says graciously to the doctor hauling her off to the nuthouse at the end of Tennessee Williams' play A Streetcar Named Desire. And while she's quite, quite mad in her genteel Old Southern delusional way she is still nailing her present and future situation, which is that she's going to be living in a place where the only people who care about her are being paid to do so (and given her personality, that may not be enough).

Of course it's obvious to anyone who's lying in a hospital bed connected to a heart monitor that they are at the mercy of the competence of the indigenous personnel. But every discussion of computer passwords tends to go as though the problem is us. Humans choose bad passwords: short, guessable, obvious, crackable. Or we use the same ones everywhere, or we keep cycling the same two or three when we're told to change them frequently. We are the weakest link.

And then you read this week's stories that major sites for whom our trust is of business-critical importance - LinkedIn, eHarmony, and Last.fm - have been storing these passwords in such a way that they were vulnerable not only to hacking attacks but also to decoding once they had been copied. My (now old) password, I see by typing it into LeakedIn for checking, was leaked but not cracked (or not until I typed it in, who knows?).

This is not new stuff. Salting passwords before storing them - the practice of adding random characters to each password before hashing it, making the result much harder to crack - has been with us for more than 30 years. And if every site salts a little differently, the differences help mitigate the risk we users bring upon ourselves by using the same passwords all over the place. It boggles the mind that these companies could be so stupid as to ignore what has been best practice for a very long time.
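
For the curious, here is a minimal sketch of what doing it properly looks like (in Python, and using the salted, deliberately slow PBKDF2 construction rather than the bare hashes the leaked sites reportedly used):

    import hashlib
    import hmac
    import os

    # Store a random per-user salt alongside a slow, salted hash.
    # Identical passwords then produce different stored values, so a
    # precomputed rainbow table cannot crack the whole database at once.
    def store(password):
        salt = os.urandom(16)  # random per user; stored, but not secret
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = store("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))  # True
    print(verify("password", salt, digest))                      # False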

The leak of these passwords is probably not immediately critical. For one thing, although millions of passwords leaked out, they weren't attached to user names. As long as the sites limit the number of times you can guess your password before they start asking you more questions or lock you out, the odds that someone can match one of those 6.5 million passwords to your particular account are…well, they're not 6.5 million to one if you've used a password like "password" or "123456", but they're small. Although: better than your chances of winning the top lottery prize.

The longer term may be the bigger issue. As Ars Technica notes, the decoded passwords from these leaks and their cryptographically hashed forms will get added to the rainbow tables used in cracking these things. That will shrink the space of good, hard-to-crack passwords.

Most of the solutions to "the password problem" aim to fix the user in one way or another. Our memories have limits - so things like Password Safe will remember them for us. Or those impossible strings of letters and numbers are turned into a visual pattern by something like GridSure, which folded a couple of years ago but whose software and patents have been picked up by CryptoCard.

An interesting approach I came across late last year is sCrib, a USB stick that you plug into your computer and that generates a batch of complex passwords it will type in for you. You can pincode-protect the device and it can also generate one-time passwords and plug into a keyboard to protect against keyloggers. All very nice and a good idea except that the device itself is so *complicated* to use: four tiny buttons storing 12 possible passwords it generates for you.

There's also the small point that Web sites often set rules such that any effort to standardize on some pattern of tough password is thwarted. I've had sites reject passwords for being too long, or for including a space or a "special character". (Seriously? What's so special about a hyphen?) Human factors simply escape the people who set these policies, as XKCD long ago pointed out.

But the key issue is that we have no way of making an informed choice when we sign up for anything. We have simply no idea what precautions a site like Facebook or Gmail takes to protect the passwords that guard our personal data - and if we called to ask we'd run into someone in a call center whose job very likely was to get us to go away. That's the price, you might say, of a free service.

In every other aspect of our lives, we handle this sort of thing by having third-party auditors who certify quality and/or safety. Doctors have to pass licensing exams and answer to medical associations. Electricians have their work inspected to ensure it's up to code. Sites don't want to have to explain their security practices to every Sheldon and Leonard? Fine. But shouldn't they have to show *someone* that they're doing the right things?

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: Offline Password CC-BY-NC-SA Flickr: BinaryCoco

Dragging Academic Publishing into the 21st Century

Milena Popova looks at the continued difficulty of accessing publicly funded research and why a movement towards open access would help academics, businesses and the public.

It’s been nearly ten years since I left academia, but I have enough friends who are academic researchers in various fields to know that academic publishing continues to be stuck in the 19th century. Every so often I need access to a research paper, and I have to beg friends at universities with the right subscriptions to get it for me; occasionally, I act as a broker for such requests from other people. Requests for access to papers are a fairly regular occurrence on my Twitter feed too. Everybody seems to be having a hard time getting hold of the latest research, regardless of whether it’s on nuclear physics, economics or philosophy. Which is all incredibly ironic, given that science is meant to be based on the open dissemination, exchange and discussion of ideas, never mind the fact that the vast majority of it is funded by taxpayers’ money.

The Internet has done very little for opening up access to scientific research. If you happen to work at a university, it will give you the convenience of being able to get a PDF copy of a paper from your own office rather than having to rummage through dusty bound volumes of journals in the library, but that’s about it. The rest of us continue to be left in the dark. This is not for a lack of will on the part of scientists and researchers. In fact, they have often gone out of their way to open up access to their research as much as possible, for instance through projects such as ArXiv. Rather, it is publishers who stand to lose a lot of money if access to research is opened up further and who are therefore lobbying very hard against any moves towards open access.

The Access2Research initiative is attempting to change the status quo - at least in the US. Last month saw the launch of a petition to the White House to make publicly funded research openly available over the Internet, and on June 3rd the petition hit the required 25,000 signatures. As Cameron Neylon points out, this puts open access firmly on the US government’s agenda, and is likely to be something we benefit from even outside the US.

Open access to scientific research is hugely important. For a start, it is likely to reduce costs for universities, allowing them to reroute funds they are currently spending on being able to access, among other things, their own research into new projects. In the UK this effect would potentially be even bigger, as subscriptions to electronic journals (like all electronic publications) are currently subject to VAT, adding another 20% to the costs. (Just in case you don’t want to wait for open access to solve the issue, there is a government petition on the VAT issue. A back-of-the-envelope calculation shows that a small to medium-sized university could afford up to an extra 14 lecturer posts if it did not have to pay VAT on electronic subscriptions.)

Being able to access other scientists’ results more cheaply and easily is likely to also improve the quality of ongoing academic research, particularly if opening access results in the kind of innovation that changes what is being published. One big issue in the way publishing works currently is that there is very little incentive to publish the results of failed experiments or experiments which did not yield the expected outcome. This means that in many cases scientists don’t know that a certain piece of work has been done already - and failed, thus potentially duplicating and wasting effort. Open access in and of itself is not going to fix this, but it may provide the right platforms to help address the problem. It would also help scientists find related work more easily, again allowing them to build on past experience rather than repeat work which has already been done.

Fields of research with a strong practical application are likely to benefit disproportionately from open access. In these areas - anything from Computing Science and Engineering to Business - there is currently very little interaction between academia and the real world. Coders and designers of computer chips rarely read the latest research due to lack of access, while university researchers have relatively little interaction with the real-world applications of their subject. I have worked in business for the last ten years and not once have I or my colleagues reached out to academia to understand if there is something we could or should be doing differently. At the same time, Business Schools are desperate to get real-world experience in, both for research and teaching purposes. Opening up access to publications would benefit both sides hugely.

While there is good news in the US, there is also hope for Europe and the UK in the area of open access. The UK government currently has a Working Group on Expanding Access looking at the issue. It had its final meeting in May and is expected to produce a report over the next month or so (“Spring 2012”). At a European level, the EU’s Horizon 2020 research and innovation programme is being finalised right now, due to launch in 2014. With 80 billion Euros behind it, it is highly likely to have a strong influence on the direction European research and innovation takes, and the good news is that open access appears to be firmly on the EU agenda. Overall then, it’s been a good few weeks for those of us passionate about dragging scientific publishing - kicking and screaming - into the 21st century.

 

Milena is an economics & politics graduate, an IT manager, and a campaigner for digital rights, electoral reform and women's rights. She tweets as @elmyra.

Image: CC-BY 2.0 NASA Goddard Space Flight Center

The Pet Rock Manifesto

Wendy Grossman reports from the Westminster eForum on the future of security and looks at who is feeding into the Communications Capabilities Development Programme.

I understand why government doesn't listen to security experts on topics where their advice conflicts with the policies it likes. For example: the Communications Capabilities Development Programme, where experts like Susan Landau, Bruce Schneier, and Ross Anderson have all argued persuasively that a hole is a hole and creating a vulnerability to enable law enforcement surveillance is creating a vulnerability that can be exploited by...well, anyone who can come up with a way to use it.

All of that is of a piece with recent UK and US governments' approach to scientific advice in general, as laid out in The Geek Manifesto, the distillation of Mark Henderson's years of frustration serving as science correspondent at The Times (he's now head of communications for the Wellcome Trust). Policy-based evidence instead of evidence-based policy, science cherry-picked to support whatever case a minister has decided to make, the role of well-financed industry lobbyists - it's all there in that book, along with case studies of the consequences.

What I don't understand is why government rejects experts' advice when there's no loss of face involved, and where the only effect on policy would be to make it better, more relevant, and more accurately targeted at the problem it's trying to solve. Especially *this* government, which in other areas has come such a long way.

Yet this is my impression from Wednesday's Westminster eForum on the UK's Cybersecurity strategy (PDF). Much was said - for example, by James Quinault, the director of the Office of Cybersecurity and Information Assurance - about information and intelligence sharing and about working collaboratively to mitigate the undeniably large cybersecurity threat (even if it's not quite as large as BAe Systems Detica's seemingly-pulled-out-of-the-air £27 billion would suggest; Detica's technical director, Henry Harrison, didn't exactly defend that number, but said no one has come up with a better estimate for the £17 billion that report attributed to cyberespionage).

It was John Colley, the managing director EMEA for (ISC)2, who said it: in a meeting he attended late last year with, among others, the MP James Brokenshire, Minister for Crime and Security at the Home Office, shortly before the publication of the UK's four-year cybersecurity strategy (PDF), he asked who the document's formulators had talked to among practitioners, "the professionals involved at the coal face". The answer: well, none. GCHQ wrote a lot of it (no surprise, given the frequent, admittedly valid, references to its expertise and capabilities), and some of the major vendors were consulted. But the actual coal face guys? No influence. "It's worrying and distressing," Colley concluded.

Well, it is. As was Quinault's response when I caught him to ask whether he saw any conflict between the government's policies on CCDP and surveillance back doors built into communications equipment versus the government's goal of making Britain "one of the most secure places in the world to do business". That response was, more or less precisely: No.

I'm not saying the objectives are bad; but besides the issues raised when the document was published, others were highlighted Wednesday. Colley, for example, noted that for information sharing to work it needs two characteristics: it has to go both ways, and it has to take place inside a network of trust; GCHQ doesn't usually share much. In addition, it's more effective, according to both Colley and Stephen Wolthusen, a reader in mathematics at Royal Holloway's Information Security Group, to share successes rather than problems - which means that you need to be able to phone the person who's had your problem to get details. And really, still so much is down to human factors and very basic things, like changing the default passwords on Internet-facing devices. This is the stuff the coalface guys see every day.

Recently, I interviewed nearly a dozen experts of varying backgrounds about the future of infosecurity; the piece is due to run in Infosecurity Magazine sometime around now. What seemed clear from that exercise is that in the long run we would all be a lot more secure a lot more cheaply if we planned ahead based on what we have learned over the past 50 years. For example: before rolling out wireless smart meters all over the UK, don't implement remote disconnection. Don't link to the Internet legacy systems such as SCADA that were never designed with remote access in mind and whose security until now has depended on securing physical access. Don't plant medical devices in people's chests without studying the security risks. Stop, in other words, making the same mistakes over and over again.

The big, upcoming issue, Steve Bellovin writes in Privacy and Cybersecurity: the Next 100 Years (PDF), a multi-expert document drafted for the IEEE, is burgeoning complexity. Soon, we will be surrounded by sensors, self-driving cars, and the 2012 version of pet rocks. Bellovin's summation, "In 20 years, *everything* will be connected...The security implications of this are frightening." And, "There are two predictions we can be quite certain about: there will still be dishonest people, and our software will still have some bugs." Sounds like a place to start, to me.

 

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-BY 2.0 The U S Army

The Web Balancing Act – Privacy vs Personalization

Peter Cranstone discusses the issue of maintaining privacy online whilst still receiving high quality services - and his proposed solution.

In response to Dr Paul Bernal's piece on The Right to be Forgotten, we will be featuring a series of digital businesses who have created tools which they believe solve issues of online privacy.

The Web user is yearning for a better experience in exchange for a secure & trusted exchange of personal data. Unfortunately, these days, these two concepts are at odds. At 3PMobile® our vision is to make the mobile Web experience faster, more private and more personal. We strive to balance the need for user data privacy and control with the need for enterprise commerce and convenience. Our software helps consumers regain control over their data and sharing preferences without sacrificing convenience, and gives businesses a tool to improve the experience and build greater trust and loyalty with their customers.

The Choice™ browser simplifies your ability to choose what data you want to share with whom, and enables content providers to programmatically use that data, in real-time, to deliver a personal, optimized mobile Web experience that does not require tracking. Why this is so important should come as no surprise.

Our privacy is continuously being eroded, and we feel so exposed that we simply lie in order to keep using the convenience features built into many Web services. Unfortunately, those lies perpetuate the delivery of irrelevant content, eroding the experience we desire. Is there a solution? Dare we dream to make a change that brings some respite from those who constantly seek to profile, categorize and predict our online behavior without our permission? Can we find a respectful balance?

Well, being one of those optimistic, problem-solving people, I think there is. However, before we head down that road we need to stop for a second and look at the big picture. The Internet has been around for over 30 years. It literally connects billions of us via all manner of devices. Any solution involving Privacy must consider the current Internet design (the Web plumbing, so to speak) and ensure that any considered change will work now - and in the future. It also has to support existing business models and user expectations of what the Web “is” and “should be.”

That’s a pretty tall order. Currently there are over 650 million Web servers out there. Throw in several billion mobile devices, another billion or two desktop devices, and you start to see the size of the problem you have to solve.

In summary: lots of devices, and lots of people, most of whom are resistant to change. So before we start, we had better poll the ecosystem constituents and see what they want. This part is easy. There are only two to poll - the person who uses a browser (the client) and the person who owns the content (the Web server). It just so happens we did this six years ago. We spoke with hundreds of smartphone owners and dozens of Web content and service providers. It all boiled down to them wanting the following things from their mobile Web interactions:

 

 

     The Customer Wants…          The Enterprise Wants…
 1   Convenience                  Make Money
 2   Privacy                      Simplify Integration
 3   Control                      Control

 

Since we like to solve big problems at 3PMobile, we realized that in order for the mobile Web to thrive, we would need to develop the tools to align these competing interests. At the core of our tools is the Internet standard called HTTP. It is the specification sheet for the Web, ensuring that information can get from point A to point B to point Z and be understood by every browser and Web server connected to the Internet – today and tomorrow.

Well, actually, here’s our first bit of good news. I commonly refer to this as the “secret that is hidden in plain sight”. The Web design standard (HTTP) is “extensible”. That means it was designed with the foresight that someday someone might need to add something to it, using what is called an X-header.

This is actually a very, very simple and elegant way to add what would be considered non-standard data (like your name) to the standard communication between a browser and a website or service. All you have to do is create something like this:

X-First-Name: Peter

and you have a header. Wow, that’s really simple. Sure, there were some technical challenges to overcome, but that’s our problem, not yours.  Okay – that’s as technical as I’m going to get.  Back to solving the problem.

So now you can securely send information about yourself, your device’s capabilities and your location (in real-time) to a web service using X-headers. You’re probably asking yourself: how does sending more data increase my privacy on the Internet? Good question! Let’s return for a minute to the opening line – “The Web user is yearning for a better experience in exchange for a secure & trusted exchange of personal data”.

What I want (the consumer) is a convenient and secure way to share data with a trusted entity. In exchange for that data I want a better experience – in other words, to know me is to love me, market to me, but NOT take advantage of me. By storing my data on my device and controlling with which sites I share that data, the web service has what it needs to deliver a great experience. With my permission, it now has real-time information about me, my device and my location. I don’t need to be tracked. When I leave that website my data leaves with me.
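
To make that concrete, here is a minimal sketch (in Python, using the third-party requests library; the header names and URL are illustrative, not 3PMobile's actual protocol) of a client attaching its owner's chosen data to a single request:

    import requests  # third-party library: pip install requests

    # The client decides, site by site, which X-headers to attach. The
    # data travels with this one request instead of accumulating in a
    # tracking profile on somebody else's server.
    shared_data = {
        "X-First-Name": "Peter",
        "X-Screen-Size": "320x480",
        "X-Geo-Position": "51.5074,-0.1278",  # shared only with trusted sites
    }
    response = requests.get("https://shop.example.com/", headers=shared_data)
    print(response.status_code)

A server that understands these headers can personalize the page it returns; one that does not simply ignores them, which is exactly the graceful behavior HTTP's extensibility allows.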

Think of the analogy of paying for dinner at your favorite restaurant – I walk in, they recognize me. I’m seated at my favorite table, with my favorite drink waiting for me, and my order has already been sent to the kitchen. Oh! And I get all this great service without having to leave my credit card information on file. They process it on the spot (literally at my table in most EU countries) and they give it right back. My card leaves with me, not to be seen again until I return to my favorite restaurant.

Trust and transparency drive the relationship. The more transparency, the more I trust, the more I share, the better the experience. We’ve now aligned the constituents. I have a convenient way to share and control my privacy. The content provider has a simple way to target meaningful content and ads and optimize what I see, in hopes of driving greater use and more revenue. It becomes a win-win for everybody.

In closing, I think it’s vital that we update our vocabulary to include a more relevant and precise definition of Privacy. Privacy is my ability to control the “collection, use and flow” of my personal data. Using this definition, you will see that it perfectly aligns with the improvements to the Internet plumbing I’ve described above. I control the collection, flow and use of my data; in return, you give me a less intrusive, optimized experience that encourages me to engage in commerce.

When you ask my permission, I’m more likely to share something honest and of value. When I do that, both personal and commercial objectives are balanced, which leads to a thriving Web community.

 

About the Author:  Peter J. Cranstone, CEO – 3PMobile

Image: CC-BY-NC-SA Auntie P

Would the Real Search Neutrality Please Stand Up

The co-founders of foundem.co.uk explain the need for search neutrality and how and why they are fighting Google on this issue.

As the gateway to the Internet for the vast majority of users, Google has unparalleled influence over which content and services people discover, read, and use. Before Google’s need for growth compelled it to look beyond horizontal search, this unfettered market power wasn't necessarily a problem. Google tended to focus its efforts on providing the best possible search results for its users, even though that usually meant steering them to other people’s websites as quickly as possible. Starting around 2005, however, Google began to develop a significant conflicting interest—to steer users, not to other people’s services, but to its own growing stable of competing services, in price comparison, travel search, social networking, and so on.

By manipulating its search results in ways that systematically promote its own services while demoting or excluding those of its competitors, Google can exploit its gatekeeper advantage to commandeer a substantial proportion of the traffic and revenues of almost any website or industry sector it chooses. As a result, there is now a growing chasm between the enduring public perception of Google as comprehensive and impartial and the reality that it has become increasingly neither.

The debate about net neutrality has tended to focus exclusively on the issues of equal access to the physical infrastructure of the Internet (the network), while ignoring the issues of equal access to its navigational infrastructure (the search engines). If we are to protect equal access to the Internet for users, established businesses, and the innovative start-ups that will power the next wave of growth of the digital economy, we must broaden our horizons beyond network neutrality to include the equally important principle of search neutrality.  

In October 2009, we defined search neutrality as the principle that search engine results should be driven by the pursuit of relevance and not skewed for commercial gain.  Search neutrality is particularly pressing, because Google’s 85% share of the global search market (90% in the UK and 95% in much of Europe) places so much market power in the hands of a single US corporation. And there is ample evidence that Google is already abusing this power. Our European Competition Complaint against Google, submitted in November 2009, describes how Google leverages its overwhelming dominance of horizontal search to unprecedented and virtually unassailable advantage in adjacent sectors.

Despite being one of network neutrality’s most enthusiastic advocates, Google is fighting against the growing calls for search neutrality. In December 2009, we posed a question to Google: how can discriminatory market power be dangerous in the hands of a network provider, but somehow harmless in the hands of an overwhelmingly dominant search engine? So far, Google’s response has been evasive. Because it is difficult for Google to argue against the actual principles of search neutrality—the same principles it has long advocated for network providers—it has contrived an imaginary and fundamentally distorted version to argue against instead.

Clearly, no two search engines will produce exactly the same search results; nor should they. In many cases there is no “right” answer, and no two search engines will agree on the optimum set of search results for a given query. But any genuine pursuit of the most relevant results must, by definition, preclude any form of arbitrary discrimination. The problem for Google is that its Universal Search mechanism, which systematically promotes Google's own services, and its increasingly heavy-handed penalty algorithms, which systematically demote or exclude Google's rivals, are both clear examples of financially motivated arbitrary discrimination.

Despite Google’s concerted efforts to derail the search neutrality debate, by arguing vehemently against a form of search neutrality that no one is advocating, the real search neutrality has become an increasingly important focal point for those concerned about the insidious power of search engine bias. Most recently the EPP Group, Europe’s largest coalition of MEPs, declared search neutrality a core component of its Internet Strategy, and BEUC, the European Consumer Organisation, wrote an open letter calling on the European Commission to protect the principle of search neutrality.

In the traditional bricks-and-mortar world, Google’s anti-competitive practices would be obvious to all. In the seemingly impenetrable world of Internet search, however, Google’s ability to get away with these practices has often depended on its ability to bamboozle people: our video deconstructing Google’s recent testimony to the US Senate Antitrust Subcommittee provides the first public glimpse of the extent to which this strategy unravels in the face of informed scrutiny.

 

Google’s standard reply to the observation that it has a monopoly in search is to point out that “competition is just a click away”. But Google operates in a two-sided market, with users on one side and websites on the other. While it is true that users have a choice of alternative search engines, the key point is that websites do not. As long as nearly all users continue to choose Google—as they have consistently done for the last decade—then businesses and websites have no alternative search engine by which to reach them.

The competitors Google is referring to when it says “competition is just a click away” are rival horizontal search engines like Yahoo and Bing, but the businesses being harmed by the anti-competitive practices described in our Complaint are not these rival horizontal search engines; they are the thousands of businesses that compete with Google’s other services—in price comparison, online video, digital mapping, news aggregation, local search, travel search, financial search, job search, property search, social networking, and so on.

The unique role that search plays in steering traffic and revenues through the global digital economy means that Google is not just a monopoly; it is probably the most powerful monopoly in history. Given the absence of healthy competition among search engines, and Google’s growing conflict of interest as it continues to expand into new services, there is an urgent need to address the principles of search neutrality through thoughtful debate, rigorous anti-trust enforcement, and perhaps very careful regulation.

Adam and Shivaun Raff,
Co-Founders of www.foundem.co.uk and www.SearchNeutrality.org

Image: CC-NC-ND Flickr: Steve Rhodes

DNSChanger Shutting Down Internet Service

Jon Norwood gives a practical guide to the DNSChanger malware and what will happen when its servers are shut down on July 9th.

If you are a Windows or Mac user then it is important that you thoroughly check your computer for malware before July 9, 2012. The FBI claims that a particular form of malware called DNSChanger is infecting millions of computers in hundreds of countries. This particular form of malware allowed a group of hackers to control the advertising that appeared in browsers on infected computers. Although it is oftentimes impossible to assess the true extent of a particular type of malware's penetration in any given Internet market segment, DNSChanger is much easier to track. Why this is so will become clear.

DNSChanger specifically targets Mac and Windows systems by manipulating the Domain Name Server settings on infected computers. So what is a Domain Name Server? These Internet servers are often referred to as DNS servers, and their purpose is to translate domain names into IP addresses. For example, if you type in www.bing.com it would appear as if you are going directly to the search engine. In reality, the only reason you can find bing.com is that your Internet service provider has DNS servers that see your request, retrieve the actual IP address for the domain you've typed, and then point you in the right direction. The Internet is based on an architecture that's referred to as TCP/IP, or Transmission Control Protocol/Internet Protocol. In fact, you don't really need domain name servers if you can remember the IP address for whatever site you want to go to. But the Internet certainly wouldn't have the appeal that it does now if you had to type in 2.22.50.33 instead of www.bing.com. It is important to add that e-mail also depends on DNS servers.
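
You can watch this translation happen yourself; a minimal sketch in Python (the address returned will vary):

    import socket

    # Ask the operating system's configured DNS servers - the very
    # setting DNSChanger tampers with - to turn a name into an address.
    print(socket.gethostbyname("www.bing.com"))  # e.g. 204.79.197.200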

A computer infected with DNSChanger is directed to use a specific group of DNS servers that were under the control of hackers. These fraudulent servers could manipulate users' DNS requests to send them anywhere. This group was sophisticated enough to use sleight-of-hand as opposed to sending users to obviously erroneous areas. Web advertisements were fed to users carefully, and this led to millions of dollars of revenue for the criminals.

As mentioned above, the servers were under the control of criminals; however, the FBI has since seized control of them. With the help of Estonian law enforcement, the FBI tracked down the six Estonian nationals who were perpetrating the crime. After a thorough investigation, the FBI chose to leave the fraudulent DNS servers in use, due to the fact that so many computers were already infected with DNSChanger. If these fraudulent servers were turned off today, anyone infected with DNSChanger would no longer be able to reach a webpage using a domain name, or use email. Of course, the FBI shut down the erroneous advertisements, so the domain name servers that infected computers are using are actually doing the right thing for now. It is hoped that the continued management of the servers will give users sufficient time to clean up their systems. Due to the costs associated with maintaining the servers, they will be shut down on July 9, 2012.

So if you are infected with DNSChanger, your access to the Internet will continue as it is now, and your connection through your Internet provider will certainly be uninterrupted. However, any service you use that depends on DNS servers, meaning web browsing or e-mail, will no longer function beyond July 9, 2012. Even though you will stay connected to the Internet, you will be severely limited in what you can do.

So the big question becomes: how can you tell if you are infected with this malware? If you have access to an inexpensive computer professional, that's always the first choice; of course, if you need to check it yourself, it can be done. For the Windows operating systems, do the following:

1.  Open the start menu and do a program search for cmd.exe. This will open the command prompt.

2.  From the command prompt type ipconfig /all

Look specifically for the entry that reads "DNS servers". There should be two lines of numbers listed that look something like 209.18.47.61. Please understand that this number is most likely not your DNS number unless you are a Time Warner user; it is used only as an example to show you what the numbers look like. Once you find your DNS server IPs, write them down. Check to see if your numbers match any of the following:

• 85.255.112.0 through 85.255.127.255

• 67.210.0.0 through 67.210.15.255

• 93.188.160.0 through 93.188.167.255

• 77.67.83.0 through 77.67.83.255

• 213.109.64.0 through 213.109.79.255

• 64.28.176.0 through 64.28.191.255

If your computer is currently using any of the above DNS servers then it is likely you are infected with DNSChanger. For more information on how to remove DNSChanger please visit https://www.us-cert.gov/reading_room/trojan-recovery.pdf. It must be stressed: if you do not feel comfortable doing the work of a computer technician, it is always a good idea to get a pro to do it for you.
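
(On a Mac, you can see your DNS servers by opening Terminal and running scutil --dns.) If you would rather have a program do the comparison, here is a minimal sketch in Python 3 using the ranges listed above:

    import ipaddress

    # The rogue DNS ranges used by the DNSChanger operators, as listed above.
    ROGUE_RANGES = [
        ("85.255.112.0", "85.255.127.255"),
        ("67.210.0.0", "67.210.15.255"),
        ("93.188.160.0", "93.188.167.255"),
        ("77.67.83.0", "77.67.83.255"),
        ("213.109.64.0", "213.109.79.255"),
        ("64.28.176.0", "64.28.191.255"),
    ]

    def is_rogue(dns_ip):
        """Return True if dns_ip falls inside any known DNSChanger range."""
        ip = ipaddress.ip_address(dns_ip)
        return any(ipaddress.ip_address(lo) <= ip <= ipaddress.ip_address(hi)
                   for lo, hi in ROGUE_RANGES)

    print(is_rogue("93.188.161.5"))  # True - inside a rogue range
    print(is_rogue("8.8.8.8"))       # False - Google's public DNS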

 

Jon Norwood is a regular contributor at: http://www.webexordium.com

Image: CC-AT-NC-SA Flickr: username (Full Name)

Camera Obscura

Wendy Grossman reports from Digital Shoreditch Festival on the speakers and what they tell us about the current debates on freedom, computers and privacy.

There was a smoke machine running in the corner when I arrived at today's Digital Shoreditch, an afternoon considering digital identity, part of a much larger, multi-week festival. Briefly, I wondered if the organizers were making a point about privacy. Apparently not; they shut it off when the talks started.

The range of speakers served as a useful reminder that the debates we in what I think of as the Computers, Freedom, and Privacy sector engage in are rather narrowly framed around what we can practically build into software and services to protect privacy (and why so few people seem to care). We wrangle over what people post on Facebook (and what they shouldn't) or how much Google (or the NHS) knows about us and shares with other organizations.

But we don't get into matters of what kinds of lies we tell to protect our public image. Lindsey Clay, the managing director of Thinkbox, the marketing body for UK commercial TV, who kicked off an array of people talking about brands and marketing (though some of them in good causes), did a good, if unconscious, job of showing what privacy activists are up against: the entire mainstream of business is going the other way.

Sounding like Dr Gregory House, she explained that people lie in focus groups, showing a slide comparing actual TV viewer data from Sky to what those people said about what they watched. They claim to fast-forward; really, they watch ads and think about them. They claim to time-shift almost everything; really, they watch live. They claim to watch very little TV; really, they need to sign up for the SPOGO program Richard Pearey explained a little while later. (A tsk-tsk to Pearey: Tim Berners-Lee is a fine and eminent scientist, but he did not invent the Internet. He invented the *Web*.) For me, Clay is confusing "identity" with "image". My image claims to read widely instead of watching TV shows; my identity buys DVDs from Amazon.

Of course I find Clay's view of the Net dismaying - "TV provides the content for us to broadcast on our public identity channels," she said. This is very much the view of the world the Open Rights Group campaigns to up-end: consumers are creators, too, and surely we (consumers) have a lot more to talk about than just what was on TV last night.

Tony Fish, author of My Digital Footprint, following up shortly afterwards, presented a much more cogent view and some sound practical advice. Instead of trying to unravel the enduring conundrum of trust, identity, and privacy - which he claims dates back to before Aristotle - start by working out your own personal attitude to how you'd like your data treated.

I had a plan to talk about something similar, but Fish summed up the problem of digital identity rather nicely. No one model of privacy fits all people or all cases. The models and expectations we have take various forms - which he displayed as a nice set of Venn diagrams. Underlying that is the real model, in which we have no rights. Today, privacy is a setting and trust is the challenger. The gap between our expectations and reality is the creepiness factor.

Combine that with reading a book of William Gibson's non-fiction, and you get the reflection that the future we're living in is not at all like the one we - for some value of "we" that begins with those guys who did the actual building instead of just writing commentary about it - thought we might be building 20 years ago. At the time, we imagined that the future of digital identity would look something like mathematics, where the widespread use of crypto meant that authentication would proceed by a series of discrete transactions tailored to each role we wanted to play. A library subscriber would disclose different data from a driver stopped by a policeman, who would show a different set to the border guard checking passports. We - or more precisely, Phil Zimmermann and Carl Ellison - imagined a Web of trust, a peer-to-peer world in which we could all authenticate the people we know to each other.

Instead, partly because all the privacy stuff is so hard to use, even though it didn't have to be, we have a world where at any one time there are a handful of gatekeepers who are fighting for control of consumers and their computers in whatever the current paradigm is. In 1992, it was the desktop: Microsoft, Lotus, and Borland. In 1997, it was portals: AOL, Yahoo!, and Microsoft. In 2002, it was search: Google, Microsoft, and, well, probably still Yahoo!. Today, it's social media and the cloud: Google, Apple, and Facebook. In 2017, it will be - I don't know, something in the mobile world, presumably.

Around the time I began to sound like an anti-Facebook obsessive, an audience questioner made the smartest comment of the day: "In ten years Facebook may not exist." That's true. But most likely someone will have the data, probably the third-party brokers behind the scenes. In the fantasy future of 1992, we were our own brokers. If William Heath succeeds with personal data stores, maybe we still can be.

 

Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series.

Image: CC-BY-NC 2.0 Flickr: Uncle Bucko

The Culture Paradox - the challenge of Digital Rights and Museums

Nick Poole discusses how the puzzles of copyright are impacting on museums and the successful creation of a digital cultural heritage.

When did you last visit a museum? If it was in the last 12 months, you aren’t alone. According to Government figures (PDF), 47.9% of the UK population visited a museum during 2011. And it’s not just the UK - 57% of respondents to the 2009 Nation Brands Index cited culture as the strongest influence on their choice of holiday destination.

There are many reasons for the current boom in museum visits. Some ascribe it to the ‘staycation’, others suggest that people are looking for meaning in a turbulent world. But whatever the cause of this rise in real-world interest, it is matched by an equally significant surge in engagement with the online offer of museums.

Some 26.1% of the adult population of the UK visited a museum website in 2011. Consumers increasingly expect to engage with museums not just directly, but through online, social and distributed channels.

The mission to share knowledge is encoded in the DNA of museums. The industry-standard definition of a ‘museum’ from the Museums Association says: “Museums enable people to explore collections for inspiration, learning and enjoyment. They are institutions that collect, safeguard and make accessible artefacts and specimens, which they hold in trust for society.”

If the purpose of museums is to enable people to enjoy, learn from and use the collections in their care, then the expectation is increasingly that this will happen online. For UK museums to fulfil their civic duty in a connected world, we have to find a way of translating at least 200m objects from atoms into electrons, describing them and then storing them in perpetuity.

The scale of this task is bewildering. The recent Collections Trust report The Cost of Digitising Europe’s Heritage put the total bill across the EU at nearly EUR10bn a year for the next decade. In the UK, an investment of almost £150m has enabled us to digitise an estimated 5-10% of our total holdings.

‘But wait’, I hear you cry, ‘I have a camera, I can help you out without spending the equivalent of the GDP of Spain’. And crowdsourcing can certainly help. But the digitisation of museum collections is about selection and preparation and producing images that support scientific analysis – examples include determining the sex of an insect by counting the hairs on its back leg and dating a Chinese plate by the cracks in its glaze.

Golden Insect by Peter Halasz CC-BY-SA 2.5

So the transition from physical culture to digital culture is similar in scale to other public works like building roads & railways – in an ideal world, it would be supported by the public because it inspires and educates us all and helps us to imagine a better future.

But in reality 2008-2012 has seen the sharpest decline in public funding for museums in several generations. Larger museums have seen effective cuts of 5-6% on core budgets, while many smaller museums have experienced a real-terms loss of income of up to 25-30%. These cuts have forced many to introduce admission charging for the first time in years.

So, if the Government isn’t investing in the digitisation of culture, if Google isn’t around to pick up the tab and if there is barely enough money to run the venues, how can we make the most of the once-in-a-generation opportunity of digitisation?

Like every other industry, museums have made the transition from the early days of the Internet, through the Social Web and towards the Internet of Things. At each stage, the expectation has increased that digital surrogates of publicly-owned collections will be made freely available, not just to see, but to share, mash up and reuse via open API (Application Programming Interface).

And museums have worked hard to embrace openness – the British Museum, for example, recently published its collections as linked data through a SPARQL endpoint. The Collections Trust’s own Culture Grid provides an API to almost 2m collections records. But our efforts in this direction are complicated by the nature of the material we are dealing with.
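
For a flavour of what ‘linked data through a SPARQL endpoint’ means in practice, here is a minimal sketch in Python (the endpoint URL is a placeholder, not the British Museum's real address, and query parameter names vary by endpoint; consult the institution's developer documentation):

    import urllib.parse
    import urllib.request

    # Fetch ten object labels from a (hypothetical) collections endpoint.
    ENDPOINT = "https://collection.example.org/sparql"  # placeholder URL
    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?object ?label WHERE { ?object rdfs:label ?label } LIMIT 10
    """

    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
    with urllib.request.urlopen(url) as response:
        print(response.read()[:500])  # raw SPARQL results, typically XML or JSON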

Museum collections include every type of material from prehistoric fossil remains to the artefacts of contemporary culture. We are responsible for managing a huge amount of in-copyright material and material for which we don’t have any information about the attribution of rights. We are creating a vast quantity of new information – ranging from metadata to rich, narrative description and research. In a real sense, the management and use of a collection is about the management of rights, and on a scale that we are barely equipped to deal with.

In managing these rights, museums find themselves balancing four priorities:

  • Ensuring that the interests (both moral and economic) of the rights holders are respected;
  • An increasing expectation that they will generate revenue through the sale and licensing of images;
  • The public expectation that digital versions of heritage collections will be freely accessible online; and
  • The lack of resources to support the costs of digitisation and metadata.

This, then, is the paradox. Failing to digitise is failing to deliver our public mission, but there isn’t enough public money to pay for it. Making ‘orphan works’ freely available online risks undermining the creators who depend on income from licensing their works, yet nobody is in a position to pay for collective licensing on the scale required.

In the absence of solutions to this paradox, museums have learnt to coexist with risk – the risk of becoming irrelevant to online audiences offset against the risk of contravening someone’s rights. And if we cannot find creative solutions which reconcile the interests of creators and audiences, it is likely to impact on our children’s experience of their heritage for many generations to come.

It is in search of solutions that the Collections Trust and ORG have brought together cultural organisations and open communities including the Open Knowledge Foundation, Wikipedia and Creative Commons. We will be exploring the feasibility of an Open Digitisation Project, through which we hope to address some of these fundamental challenges. In the meantime, I welcome comments, ideas and questions from the ORGzine community!

 

 Nick Poole is Chief Executive of the Collections Trust, a not-for-profit social enterprise working with museums, libraries and archives in the UK and Europe. He is the Chair of the UK Committee of the International Council of Museums and of the Europeana Network, a cross-industry body representing cultural organisations, broadcasters and publishers. Nick is the UK representative at the European Commission on culture and technology. Follow Nick on twitter at @NickPoole1 and on the Collections Link website at http://www.collectionslink.org.uk/collaborate/my-profile/nickpoole.

Image: CC-BY-NC 2.0 Capitu

Feature: Interview with Hanna Sköld

"I think for me, basically, Creative Commons is a peace movement": ORG interviews the director of 'Nasty Old People' about Creative Commons, crowd-sourcing and her Kickstarter.

Earlier this week I met Hanna Sköld, the fascinating director of Nasty Old People to talk about her new Kickstarter project Granny's Dancing on the Table. Hanna is incredibly enthusiastic about the use of Creative Commons and crowd-sourcing and spoke with insight and humour about why stories told this way matter.

Ruth: Could you tell us a little bit about Nasty Old People?

 Hanna: Nasty Old People is my first feature. I made the movie with a bank loan of 10,000.

When it was finished I released it with a Creative Commons license on the front page of The Pirate Bay. People could download it for free and remix the movie. Then people started to interact: translating it into 18 different languages, donating money to pay back the bank loan, and organising live screenings in many places - cinemas, film clubs and things like that. So it got a really good online digital distribution with a lot of interaction.

Nasty Old People is a film about a young girl who works in the home help service. At the beginning of the movie she is part of a neo-Nazi group; then she meets those 'nasty old people' [laughs] - because she is somehow outside of society, and here she meets another group who is also outside of society, in a different way - and when they meet she develops true relationships. In the end this is what makes her leave the neo-Nazi group. And she begins a new life.

 This is a theme I have been interested in: how to include each other in society, how to create possibilities for people who are in a way outside, to make them feel connected. I think that when we feel connected to the world we take more responsibility for our surroundings and for the world.

Ruth: Is that part of the reason you use a Creative Commons license and crowd-sourcing methods?

Hanna: Yes. It is for many reasons. I mean, the part that has to do with inclusion, that is very important. With Creative Commons, when we release something so that people can remix it and create new stuff, I think that is also, in a way, some kind of peace movement, you know.

 I think for me, basically, Creative Commons is a peace movement.

When you find a way to not have just a few people who own the stories and decide who gets to tell the stories and who gets to listen to them, then everyone who wants to be part of making a story can do that. I think that is how we can understand each other's worlds. You don’t make war if you know each other's stories, you know, and that is why I think it is a peace movement.

 Ruth: What was the reception to Nasty Old People like?

Hanna: I don’t know how many downloads we got. I know we got at least 50,000, but I know it is more than that; I could not count, because the film ended up on many different torrent sites and we couldn't keep track of how widely it spread.

People from 113 countries contacted me wanting to do something: to help, to organise a live screening, things like that. It is not just that there were many people, but that they came from so many places; for me that is a huge success. It was my first movie, and to have it spread like that -even though I can't tell the exact number of people who saw it- is, for me, a huge success.

Ita Zbroniec-Zajt CC BY-NC-SA

 Ruth: Given that you were so successful, why did you choose Kickstarter rather than using a traditional route for Granny's Dancing on the Table?

Hanna: I am working with a producer right now. We have had a lot of conversations about it and I don’t want to lose it. I think that for me this is important: this is something I care for. I wouldn't like to just go down the normal producing route; I would lose something that means something for real -the thing that has to do with the interaction with people, the crowd-sourcing. After the Kickstarter campaign I will also need more money to finish it. I think that when I have the support from people, when I can say:

 "Okay, this is something that people want. People want to have Creative Commons movies, people want to take part in making stories."

 then I have a stronger case. Then I can also apply for more money, but also keep the values. If I didn't do that I would be in the hands of the financiers and the producers, but now I can say "I have all these people and they want the same thing as I do" and my voice is stronger.

Ruth: Could you explain a little bit about your new project, Granny’s Dancing on the Table? On your Kickstarter you also refer to the 'Granny-verse'; what is that?

Hanna: It is a feature film from the beginning; I have a script and so on, but I want to expand the story so it is not only a film. We are also creating an iPad game and International Granny Day.

It is the story of a young girl who grows up isolated from other people. She really has no connection with anyone except her father, and when she is 17 she runs away from home and comes to a world she doesn’t know anything about, where she is kind of lost. All the time she has her strong imagination and her longing for her granny. She has never met her granny, but her granny is always there like some kind of spirit, watching over her.

On International Granny Day I want people to reflect on their own relationships with their grannies -whether they are good or bad, or whether their story has done something to them- and to share them with everyone. These stories will be shown in different kinds of exhibitions. This is a way to take some part of the film's stories and put it into our lives, into many people's lives, and to know something of Eini's journey.

The reason I do this is that I want more possibilities for interaction and more ways to explore the story. Film is film, you know; when it is finished and screened in the cinema it is what it is. But when we have stories we can keep on expanding the story and exploring the themes even after the movie is finished.

The reason I am telling this story is very much based on my own experiences. I grew up outside of society, and I ran away from home and lived on the streets for a year, so I have a lot of experience of feeling excluded. This is why I know it is so important for people to feel included, and I know that when I talk to people, many, many people have some experience of not belonging: to a place, to circumstances, to contacts, whatever. This is a very universal feeling. With the 'Granny-verse' I want to explore how we can include each other, to create a world where we connect with each other.

 For me it is both the way I tell the stories with crowd-sourcing and the story itself.

Ruth: Why grannies in particular? Do you have a personal story of your granny that influenced you?

Hanna: For me personally, I never really had any relationship with my grannies. My granny on my father's side died when I was 11 and I only met her a few times. The ones on my mother's side lived far away, and because I was isolated I never really met them either. That is something I really miss in my life -having those relationships with older relatives.

Connection is not only about the people in your life but also about connection over time. How am I connected to my past, to my older relatives? Letting people tell their stories about their grannies is also letting people share the past with each other and borrow each other's older relatives.

At the same time, why grannies? Older women are not shown much in culture, and if they are, they are shown as this extraordinary person. It's either/or: they are either this crazy person who never married, or some ordinary old woman with no real personality -just 'an old lady'. I want to show that there is a whole range of women who lived, now and before us, who have done everything from being a housewife and caring for their children to travelling round the world and doing a lot of stuff. I mean, if I can see them, if I can make them visible, and say "they walked before me, they dared to do things, and they explored life", I may be encouraged to do the same things in my own life.

Because they are kind of invisible in society today, if they suddenly become visible, if we tell their stories, they will never be forgotten. What will happen? Will something change? For me as a young woman, or for other young women and young men, will something be different? They are a missing part that connects us and shows us something new -so that's why I chose grannies.

Ruth: You released Nasty Old People as a free download on the front page of The Pirate Bay. How do you feel about the court orders across Europe for ISPs to block the site?

Hanna: I think that's very sad. In Sweden we also had the court case where the founders were convicted. I think, since I have met those people and I know them, that the reasons behind The Pirate Bay are really ideological. It is the ideology of sharing things for free; it is making a more transparent, equal world. The people who are against it are not only working against new developments, they are also working against the people who can see the future and see what will happen. If you want to fight the people who want to watch your movies, you will never have an audience.

You use your energy to fight the audience and the audience uses it to fight you. As a film-maker -with all that creative energy- I want to have those guys on my side.

 Ruth: You mention free culture on Kickstarter and in your blog. What does free culture mean?

 Hanna: For me free culture means culture that is available for everyone first of all.

In the same way as school and education are free in many places, and should be free because free education is a matter of democracy, culture should also be free, because it has such an impact on society. I think 'free culture' is culture that is available to everyone and then, of course, available to remix and to create new stuff from. When you can edit and change a story and make it your own, when we can add thoughts and reflect upon it -I think that is the next step.

 Ruth: Kind of like oral storytelling?

Hanna: I've been thinking about that. At one point in our history we decided that you can own a story. Every story comes from our common... collective unconscious, but we try to burn it onto CDs.

In the past someone told a story and then someone else told it again; nobody knew who told the story in the beginning. That is like free culture for me [laughs]. The way we work with Creative Commons is, in a way, going back to this way of telling stories -even though it is not going back, it is moving forward, of course, to a new level.

 Ruth: Thank you very much. It's been wonderful to talk to you.

 

Emma Blomberg CC BY-NC-SA

To find out more about Hanna Sköld and 'Granny's Dancing on the Table' visit her Kickstarter campaign, and download her feature film 'Nasty Old People' (available on a number of torrent sites and YouTube). 


Image: Daniel Svensson CC BY-NC-SA

Featured Article

Schmidt Happens

Wendy M. Grossman responds to "loopy" statements made by Google Executive Chairman Eric Schmidt regarding censorship and encryption.
