Verdict on new cloud-based music lockers

What do the new cloud-based music locker services offered by Google & Amazon mean for fans and artists?

Google announced this week that it will join Amazon in offering consumers a cloud-based music locker service. Google's news, which had been rumored for some time, presents an opportunity to both answer and ask some questions about the future of the music industry.

Those questions make clear that while services like these do improve fans' ability to access their music, they still get us only a little closer to the larger goal: making sure artists get paid and fans are happy.

Do music locker services violate current copyright laws?

Unlike streaming services, such as Rhapsody or Mog, Google's and Amazon's music lockers allow users to upload their own music files and then access those files from either the internet or devices equipped to run those companies' programs (in Google's case, that means a phone running the Android operating system).

For this kind of personal locker, Amazon and Google believe they don't need licenses from record labels that own copyrights in the songs – and we agree. Essentially, all these services do is allow you to upload a song you already own and access that file from different browsers and devices, not much different from transferring a song you bought to your iPod.

Apparently to avoid any potential copyright liability, neither Amazon nor Google "de-duplicates" users' files, which means that users access the very files they themselves uploaded, even when those files are identical to others on the system. As a result, millions of identical files may exist in the same cloud.
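To make that distinction concrete, here is a minimal sketch in Python (ours, not Amazon's or Google's actual architecture; the class and method names are invented for illustration) contrasting a locker that stores every upload verbatim with one that de-duplicates by content hash:

```python
import hashlib

class VerbatimLocker:
    """What Amazon and Google reportedly do: every upload is kept,
    even when the bytes are identical to files already stored."""
    def __init__(self):
        self.files = {}  # (user, name) -> bytes

    def upload(self, user, name, data):
        self.files[(user, name)] = data  # a million identical MP3s = a million copies

class DedupLocker:
    """What MP3Tunes-style de-duplication does: identical bytes are
    stored once and shared between users via their SHA-256 digest."""
    def __init__(self):
        self.blobs = {}   # digest -> bytes (one copy per unique file)
        self.index = {}   # (user, name) -> digest

    def upload(self, user, name, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # store only if unseen
        self.index[(user, name)] = digest

# Two users upload byte-identical rips of the same track:
naive, dedup = VerbatimLocker(), DedupLocker()
for user in ("alice", "bob"):
    naive.upload(user, "song.mp3", b"...same bytes...")
    dedup.upload(user, "song.mp3", b"...same bytes...")
print(len(naive.files), "stored copies vs", len(dedup.blobs), "unique blob")  # 2 vs 1
```

The trade-off is plain: de-duplication saves enormous storage, but users then stream from a shared copy rather than the very file they uploaded – which is precisely the distinction at issue in the EMI and MP3Tunes litigation discussed below.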

Whether or not Amazon's and Google's copyright fears are well founded is still an open question and may depend, at least in part, on the ongoing litigation between EMI and MP3Tunes. MP3Tunes is a music locker, but – unlike Amazon and Google – it does de-duplicate its files and it provides a search engine on the side allowing users to search for more music. EMI claims that MP3Tunes should be held responsible for infringing content stored in the lockers of some of its users, while MP3Tunes contends that it is immune from liability under the "safe harbor" provisions in the Digital Millennium Copyright Act (DMCA).

While we wait for a ruling in the MP3Tunes case, it appears that Amazon and Google have chosen to play it safe by storing potentially millions of identical files (just another example of how copyright laws fail to address modern uses of copyrighted works).

Are music locker services the answer to an ailing recording industry?

Personal music locker services are certainly an improvement on current industry services. By allowing music fans increased access to the content they already own, services like Amazon's and Google's improve consumers' music experiences by making it easier to listen to that music where and whenever they want.

But we're a long way from realizing the potential of cloud-based music services, which could increase consumers' access to music they already own while offering new ways to find and purchase music and other add-on content from artists.

It has been widely reported that Google attempted to secure licensing deals that would have allowed it to offer more to consumers – for example, the ability to access music one owns without having to take the time to upload it to the cloud (which can be a slow process). But it seems the record labels are still blocking efforts to give music fans more or easier access to new music, whether by compulsory licenses or other non-traditional revenue streams. Not only does less access hurt fans, but it hurts artists who might otherwise benefit from innovative business models. So while we applaud Amazon's and Google's attempts to better their customers' music experience, there's no question that their services fall far short of what fans and artists really need and want.

 

This article originally appeared here and is licensed under CC BY 3.0

Julie Samuels is a Staff Attorney at EFF

 

Image: CC-AT-SA Flickr: pmsyyz (Phillip Stewart)

Radical media fail

Activists protest Radical Media's offices after receiving cease and desist letter

Protestors staged a demonstration outside Radical Media's London office after the conference organisers received a 'cease and desist' letter. Radical Media - a PR agency - claimed that the Radical Media Conference, to be held in late 2011 by various activist groups such as Red Pepper, infringed its trademark.

For more information on the Radical Media Conference click here.

Image: Jim Killock

Double exposure

Wendy Grossman looks at the recent WikiLeaks cables that expose US interference in other countries' intellectual property laws & enforcement, and asks why the US needs to change, for its own sake too

So finally we know. Ever since WikiLeaks began releasing diplomatic cables, copyright activists have been waiting to see if the trove would expose undue influence on national laws. And this week there it was: a 2005 cable from the US Embassy in New Zealand requesting $386,158 to fund start-up costs and the first year of an industry-backed intellectual property enforcement unit, and a 2009 cable offering "help" when New Zealand was considering a "three-strikes" law. Much, much more on this story has been presented and analyzed by the excellent Michael Geist, who also notes similar US lobbying pressure on Canada to "improve" its "lax" copyright laws. My favorite is this bit, excerpted from the cable recounting an April 2007 meeting between Embassy officials and Geist himself:

 

His acknowledgement that Canada is a net importer of copyrighted materials helps explain the advantage he would like to hold on to with a weaker Canadian IPR protection regime. His unvoiced bias against the (primarily U.S. based) entertainment industry also reflects deeply ingrained Canadian preferences to protect and nurture homegrown artists.

In other words, Geist's disagreement with US copyright laws is due to nationalist bias, rather than deeply held principles. I wonder how they explain to themselves the very similar views of such diverse Americans as MacArthur award winner Pamela Samuelson, John Perry Barlow, and Lawrence Lessig. The last of these in fact got so angry over the US's legislative expansion of copyright that he founded a movement for Congressional reform, which later expanded into a Harvard Law School center researching broader questions of ethics.

It's often said that a significant flaw in the US Constitution is that it didn't – couldn't, because they didn't exist yet – take account of the development of multinational corporations. They have, of course, to answer to financial regulations, legal obligations covering health and safety, and public opinion, but in many areas concerning the practice of democracy there is very little to rein them in. They can limit their employees' freedom of speech, for example, without ever falling afoul of the First Amendment, which, contrary to often-expressed popular belief, limits only the power of Congress in this area.

There is also, as Lessig pointed out in his first book, Code: and Other Laws of Cyberspace, no way to stop private companies from making and implementing technological decisions that may have anti-democratic effects. Lessig's example at the time was AOL, which hard-coded a limit of 23 participants per chat channel; try staging a mass protest under those limits. Today's better example might be Facebook, which last week was accused of unfairly deleting the profiles of 51 anti-cuts groups and activists. (My personal guess is that Facebook's claim to have simply followed its own rules is legitimate; the better question might be who supplied Facebook with the list of profiles and why.)

Whether or not Facebook is blameless on this occasion, there remains a legitimate question: at what point does a social network become so vital a part of public life that the rules it implements and the technological decisions it makes become matters of public policy rather than questions for it to consider on its own? Facebook, like almost all of the biggest internet companies, is a US corporation, with its mores and internal culture largely shaped by its home country.

We have often accused large corporate rights holders of being the reason why we see the same proposals for tightening and extending copyright popping up all over the world in countries whose values differ greatly and whose own national interests are not necessarily best served by passing such laws. More recently written constitutions could consider such influences. To the best of my knowledge they haven't, although arguably this is less of an issue in places that aren't headquarters to so many of them and where they are therefore less likely to spend large amounts backing governments likely to be sympathetic to their interests.

What WikiLeaks has exposed instead is the unpleasant specter of the US, which likes to think of itself as spreading democracy around the world, behaving internationally in a profoundly anti-democratic way. I suppose we can only be grateful they haven't sent Geist and other non-US copyright reform campaigners exploding cigars. Change Congress, indeed: what about changing the State Department?

It's my personal belief that the US is being short-sighted in pursuing these copyright policies. Yes, the US is currently the world's biggest exporter of intellectual property, especially in, but not limited to, the area of entertainment. But that doesn't mean it always will be. It is foolish to think that down the echoing corridors of time (to borrow a phrase from Jean Kerr) the US will never become a net importer of intellectual property. It is sheer fantasy – even racism – to imagine that other countries cannot write innovative software that Americans want to use or produce entertainment that Americans want to enjoy. Even if you dispute the arguments made by campaigning organizations such as the Electronic Frontier Foundation and the Open Rights Group that laws like "three strikes" unfairly damage the general public, it seems profoundly stupid to assume that the US will always enjoy the intellectual property hegemony it has now. One of these days, the US policies exposed in these cables are going to bite it in the ass.

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

Image: CC-AT Flickr: espenmoe (Espen Moe)

Searching for reality

Wendy M. Grossman on supercomputers and how much more we can expect from them in the future

They say that every architect has, stuck in his desk drawer, a plan for the world's tallest skyscraper; probably every computer company similarly has a plan for the world's fastest supercomputer. At one time, that particular contest was always won by Seymour Cray. Currently, the world's fastest computer is Tianhe-1A, in China. But one day soon, it's going to be Blue Waters, an IBM-built machine filling 9,000 square feet at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

It's easy to forget - partly because Champaign-Urbana is not a place you visit by accident - how mainstream-famous NCSA and its host, UIUC, used to be. NCSA is the place from which Mosaic emerged in 1993. UIUC was where Arthur C. Clarke's HAL was turned on, on January 12, 1997. Clarke's choice was not accidental: my host, researcher Robert McGrath, tells me that Clarke visited here and saw the seminal work going on in networking and artificial intelligence. And somewhere he saw the first singing computer, an IBM 7094 haltingly rendering "Daisy Bell." (Good news for IBM: at that time they wouldn't have had to pay copyright clearance fees on a song that was, in 1961, 69 years old.)

So much was invented here: Telnet, for example.

"But what have they done for us lately?" a friend in London wondered.

NCSA's involvement with supercomputing began when Larry Smarr, having worked in Europe and admired the access non-military scientists had to high-performance computers, wrote a letter to the National Science Foundation proposing that the NSF should fund a supercomputing center for use by civilian scientists. They agreed, and the first version of NCSA was built in 1986. Typically, a supercomputer is commissioned for five years; after that it's replaced with the next fastest thing. Blue Waters will have more than 300,000 cores on 8-core processors and be capable of a sustained rate of 1 petaflop and a peak rate of 10 petaflops. The transformer room underneath can provide 24 megawatts of power – as energy-efficiently as possible. Right now, the space where Blue Waters will go is a large empty white space broken up by black plug towers. It looks like a set from a 1950s science fiction film.
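As a rough sanity check on those numbers, here's a back-of-envelope peak-rate calculation in Python. The per-core figures are our assumptions (POWER7-class chips doing 8 floating-point operations per cycle at roughly 4 GHz), not published Blue Waters specifications:

```python
# Back-of-envelope peak-FLOPS estimate. The clock rate and
# flops-per-cycle are assumed POWER7-class figures, not official specs.
cores = 300_000           # "more than 300,000 cores"
flops_per_cycle = 8       # assumed per-core floating-point throughput
clock_hz = 4.0e9          # assumed ~4 GHz clock

peak = cores * flops_per_cycle * clock_hz
print(f"peak ~ {peak / 1e15:.1f} petaflops")       # ~9.6, close to the quoted 10
print(f"sustained/peak ~ {1e15 / peak:.0%}")       # ~10% if sustained is 1 petaflop
```

The gap between the two figures is the point: peak is what the silicon could theoretically do, while sustaining even a tenth of that across a real scientific workload is what makes machines like this hard to build.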

On the consumer end, we're at the point now where a five-year-old computer pretty much answers most normal needs. Unless you're a gamer or a home software developer, the pressure to upgrade is largely off. But this is nowhere near true at the high end of supercomputing.

"People are never satisfied for long," says Tricia Barker, who showed us around the facility. "Scientists and engineers are always thinking of new problems they want to solve, new details they want to see, and new variables they want to include." Planned applications for Blue Waters include studying storms to understand why some produce tornadoes and some don't. In the 1980s, she says, the data points were kilometers apart; Blue Waters will take the mesh down to 10 meters.

"It's why warnings systems are so hit and miss," she explains. Also on the list are more complete simulations to study climate change.

Every generation of supercomputers gets closer to simulating reality and increases the size of the systems we can simulate in a reasonable amount of time. How much further can it go?

They speculate, she said, about how, when, and whether exaflops can be reached: 2018? 2020? At all? Will the power requirements outstrip what can reasonably be supplied? How big would it have to be? And could anyone afford it?

In the end, of course, it's all about the data. The 500 petabytes of storage Blue Waters will have is only a small piece of the gigantic data sets that science is now producing. Across campus, also part of NCSA, senior research scientist Ray Plante is part of the Large Synoptic Survey Telescope project which, when it gets going, will capture a third of the sky every night on a 3-gigapixel camera with a wide field of view. The project will allow astronomers to see changes over a period of days, allowing them to look more closely at phenomena such as bursters and supernovae, and study dark energy.

Astronomers have led the way in understanding the importance of archiving and sharing data, partly because the telescopes are so expensive that scientists have no choice about sharing them. More than half the Hubble telescope papers, Plante says, are based on archival research, which means research conducted on the data after a short period in which research is restricted to those who proposed (and paid for) the project. In the case of LSST, he says, there will be no proprietary period: the data will be available to the whole community from Day One. There's a lesson here for data hogs if they care to listen.

Listening to Plante – and his nearby colleague Joe Futrelle – talk about the issues involved in storing, studying, and archiving these giant masses of data shows some of the issues that lie ahead for all of us. Many of today's astronomical studies rely on statistics, which in turn require matching data sets that have been built into catalogues without necessarily considering who might in future need to use them: opening the data is only the first step.

So in answer to my friend: lots. I saw only about 0.1 percent of it.

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

Image: All rights reserved - Wendy Grossman

Happy Birthday! Linux turns 20 this month

Graham Armstrong celebrates Linux's 20th birthday

Party hats on guys! Linux, the operating system kernel created by Linus Torvalds, is turning 20 this month. This momentous occasion has been marked by the Linux Foundation setting up a website with information about events taking place to celebrate and party hard. Perhaps more peculiarly, Jim Zemlin, Executive Director of the Linux Foundation, has marked the occasion by claiming that the long-standing battle between Microsoft Windows and Linux has been won. Speaking to Network World, he said this:

"I think we just don't care that much [about Microsoft] anymore," Zemlin said. "They used to be our big rival, but now it's kind of like kicking a puppy."

Linux has been growing slowly over the years; whilst many people outside of techydom may still not be familiar with the name, they will no doubt have run into many of the products which use Linux software. Devices from subtle ones such as coffee machines to techno-whizz ones such as Android smartphones are all examples of Linux in the modern world. Not to mention that Linux is gaining traction in both the tablet PC and netbook markets, perhaps thanks to its versatile scalability and to optimisations that allow the OS to run on less memory than my car keys.

Branching out to smaller devices has helped Linux to develop and thrive; however, Linux continues to struggle on the desktop. According to several usage polls taken this year, over 80% of all desktop PCs are still running Microsoft Windows operating systems. Only a small 1.8% chunk of the desktop market uses Linux, and a 9% chunk uses Apple’s operating systems. So if Linux can be successful out and about, why can’t it be successful on the desktop?

Some might criticise Linux for being too difficult to use. My first contact with Linux involved trying to install Gentoo on my PC about ten years ago, having only just realised what a hard drive partition was, which inevitably led to me apt-get-ing gravity by hoying it out of my /dev/window.

The desktop experience has been fought for by projects such as KDE and Gnome, and the massively popular Ubuntu owes its success in part to their efforts to make the desktop easier. KDE 4 was a rather embarrassing Vista knock-off, whilst Gnome 3, released earlier this month, created a faint breeze of excitement in the community; it does feel that the team at Gnome have cracked it and are now progressing in the right direction.

The difficulty of using Linux jeopardises the most important aspect that separates Linux from its Apple and Microsoft counterparts: the philosophy of open software. A year after its creation, the Free Software Foundation took Linux under its wing, and Torvalds released Linux under the GPL to ensure the freedom of the software. Unfortunately, non-technical folk are typically looking for software that works rather than software that cleanses the soul.

Wendy M. Grossman touched upon the topic in her recent article, as did the BBC documentary series The Virtual Revolution (The Cost of Free). Linux has been a massive help to the free software community; however, eyebrows were raised at Google’s announcement that the Android Honeycomb source code won’t be released in the near future. Linux has been a success for businesses such as Red Hat and Google entirely because of its open source approach.

If Linux, as open software, can continue to spread then that will be good for the fundamental ideas of software freedom, but spreading that software will depend on users' ability and willingness to use it. Happy Birthday, Linux!

 

Graham Armstrong is a computing student with a strong interest in free software, and the use of social media technology to aid transparency and democracy. He tweets as @LupusSLE

 

Image: CC-AT-NC-SA Flickr: The Lazy Canadian

Education, education, education

Confiscating students' mobile phones & deleting 'inappropriate material' is just another misplaced step in Michael Gove's plans to improve school discipline, argues Milena Popova

In an attempt to take education back to the 19th century – sorry, enhance discipline in schools – Education Secretary Michael Gove is proposing a variety of new measures and powers for teachers, including the power to confiscate pupils’ mobile phones, search for objectionable content on them and erase it. This is the latest in a series of education policies designed to make today’s children as ill-prepared for the future as possible. Other proposals include the move to “fact-based teaching” and the rewriting of history to fit into a particular, ideologically sound world view.

What’s striking about the mobile phone proposal is that the last time teachers thought this was a good idea was in 1998. The world has moved on a bit since then, with even teaching unions calling the proposed powers disproportionate and inappropriate.

From an education point of view, the idea is positively counter-productive. New technology will not go away just because we ban it from the classroom. Deleting a video of a bullying incident from a phone will not stop the distribution of that video; it will merely erase any evidence of the incident that teachers might ever have had access to. Children will either learn how to safeguard their privacy on their phones (which in all fairness would be a good thing) or they will leave school unprepared for the technological challenges of the real world.

The truly sad thing is that mobile phones have huge potential as an educational tool. This article barely scratches the surface of the possible applications of mobile phones in the classroom: use them for accurate timing in science experiments, for looking up information or pictures on the internet (in 40 seconds, rather than the 12 minutes it takes to boot up your bloat-ware-burdened, school-issued laptop); or you could use them in ICT classes to discuss the impact they have had on society, the impact on privacy, and responsible and appropriate use of technology.

There is, however, no room for the future in Michael Gove’s vision of our children’s education, and this is rapidly becoming a problem.

 

Milena is an economics & politics graduate, an IT manager, and a campaigner for digital rights, electoral reform and women's rights. She tweets as @elmyra

 

Image: CC-AT-NC-SA Flickr: Jay Wood

Copyright infringement - just like terrorism

Milena Popova looks at the US proposals to tackle copyright infringement, and urges caution in trusting the state with such powers

How much do you trust the state? In particular, how much do you trust the state to only use powers it has by law for the purpose they were intended for? One of my favourite science fiction stories is by Charles Stross in an anthology edited by Farah Mendlesohn titled “Glorifying Terrorism”.

The story is called “Minutes of the Labour Party Conference 2016” and in it the Labour Party, who after all were responsible for the Terrorism Act 2006, find themselves outlawed and persecuted as terrorists by the BNP government. I am convinced that “Would you trust the BNP with this legislation?” should be a question brought up in every debate on every Bill in Parliament.

What reminded me of this is the US government’s recent history of using the Department of Homeland Security (DHS) to seize domain names suspected of copyright infringement. I am sure that US citizens feel a lot safer knowing that the people supposed to be keeping the terrorists out are busy acting as bailiffs for the entertainment industry.

Now the newly-established Intellectual Property Enforcement Coordinator is proposing to go even further. In a 20-page document titled Administration’s White Paper on Intellectual Property Enforcement Legislative Recommendations, the Coordinator proposes a number of legislative changes to strengthen IP enforcement. Highlights include:

  • Clearly defining streaming as “distribution” rather than “performance” of a copyrighted work, thus placing it firmly on the wrong side of the law. The worrying words in that paragraph are “other similar new technologies”. Legislators’ attempts to be future-proof often leave legislation vague and open to abuse.
  • Increasing sentencing guidelines for intellectual property offences, particularly when there is a repeat offence. Examples of precedents cited to support this idea include the sentencing guidelines for repeat immigration, sex, drugs, and firearms offences. Because guns don’t kill people, BitTorrents do.
  • Enabling DHS agents to collaborate with rightsholders, both “pre- and post-seizure”, by for instance asking them what kind of things they should be looking out for to seize, and providing samples back to the rightsholder to assist them in “bringing civil action”. Did I mention that the DHS is acting as a bailiff for the entertainment industry? Yeah...
  • Adding copyright and trademark infringement to the list of cases federal agents are allowed to use wiretaps in. This list is currently reserved for serious crimes, including material support of terrorism and use of weapons of mass destruction. Because copyright infringement is just like setting off a dirty bomb in Manhattan, right?

Would you trust the BNP with this legislation? “She used to BitTorrent ‘Lost’. This is perfectly adequate justification for a wiretap, right?”

 

Milena is an economics & politics graduate, an IT manager, and a campaigner for digital rights, electoral reform and women's rights. She tweets as @elmyra

Image: CC-AT Flickr: Rande Archer

Student battles huge fine

Outcome up in the air for student who is appealing $67,500 fine in court

On Monday I attended oral arguments here in Boston before the First Circuit Court of Appeals in Sony BMG Music Entertainment v. Tenenbaum (appellate briefs here).  To summarize, several record labels sued Joel Tenenbaum for sharing music files on a peer-to-peer service, and Tenenbaum lost at trial.  However, trial court Judge Nancy Gertner reduced the jury verdict of $675,000 against Mr. Tenenbaum down to $67,500.

Both sides appealed.  The labels framed the sole issue on appeal as:

 

Whether the district court erred by holding that the jury's award of $22,500 per work for willful infringement of 30 copyrighted works violated the Due Process Clause, even though that award is well within the range of statutorily prescribed damages awards for willful copyright infringement and even within the statutory range for non-willful infringement.

In contrast, defendant Tenenbaum framed the issues as:

 

1. Is the award of damages against the defendant unconstitutionally excessive?

2. Was the jury properly guided by the trial judge's instructions?

3. Does the statute under which the defendant was prosecuted apply to individual noncommercial consumers?

4. Does 17 U.S.C. § 504(c) remain operative in the wake of Feltner v. Columbia Pictures Television, Inc., 523 U.S. 340 (1998)?

Monday's hearing took place before a three-judge panel consisting of Chief Judge Sandra L. Lynch, Judge Juan R. Torruella, and Judge O. Rogeriee Thompson.  In addition to the plaintiffs and defendants, the United States (as intervenor) and the Electronic Frontier Foundation (as amicus curiae) presented oral arguments.

Based on the judges' questions and demeanor at oral argument, my impression is that Joel Tenenbaum faces an uphill battle and is likely to lose his appeal.  I don't have a transcript of the proceedings, but the following stands out from my notes and memory.

Chief Judge Lynch clearly had no tolerance for the defense's contention that "no one thought" the statutory penalties for copyright infringement would ever apply to "consumers". She pointed out that the statute appeared to apply to consumers, eliciting a concession from Tenenbaum's counsel that statutory copyright penalties were not facially unconstitutional. This left the defense with little more than a half-hearted argument that the jury verdict was improper here because the copyright statute originally contemplated damage calculations by judges.

Judges Torruella and Thompson seemed somewhat more suspicious of the record labels' arguments, but it was unclear whether these suspicions would help Tenenbaum win his case. Judge Torruella asked the labels' lawyer whether "lost sales" would provide a useful measure of damages, to which he replied that damages should be commensurate with the "loss of value of the copyright".

He argued that file-sharing in the aggregate caused enormous economic losses to the labels because it essentially put the music "in the public domain."  (Why Joel Tenenbaum should be personally responsible for the actions of thousands or millions of other file-sharers remained the obvious question he never managed to answer.)

For her part, Judge Thompson questioned whether appellate courts could ever find a jury award of statutory damages in a copyright infringement action to be excessive if it fell within the statutory range ($750 to $150,000 per work infringed). The labels' counsel did concede that copyright damage awards were "not immune from Williams [Philip Morris USA v. Williams, 549 U.S. 346 (2007)] review" but maintained that such a problem would be "rare" and that this was not that case.

We likely won't have the First Circuit's decision for several months, so there's still plenty of time to speculate about what the outcome will be...

 

This article was originally published here by Joel Sage. Legally Sociable (http://legallysociable.com/) / CC BY-NC 3.0 

Image: CC-AT-NC-SA Flickr: xsix

European Copyright: Collusion for the control of the net

Fundamental online rights at risk in Europe

Later today, a college meeting of the European Commissioners will take place to decide the future of European copyright policy. This revision takes place in conditions that raise severe concerns from a democratic perspective and put fundamental rights at risk, especially when it comes to the internet.

The "Internal Market" General Directorate, under the responsibility of French Commissioner Michel Barnier, just completed a public consultation process. This consultation took the form of comments on a report purporting to be an "impact assessment" of the European copyright enforcement policy, and of the 2004/48/CE "IPRED" directive, also known as the Fourtou directive.

In reality, this document recycles arguments and proposals directly fed by the entertainment industry: that culture is on the verge of demise due to online piracy, and that the only solution lies in more repressive measures specifically targeting the internet.

This obsession with repression is clear when one also considers that the Commission secretly negotiated the ACTA anti-counterfeiting agreement over a three-year period with 12 other countries. Disguised as a basic trade agreement, ACTA actually compels its signatories to create criminal sanctions for copyright and patent infringements, again with a particular focus on activities taking place on the internet.

After the failure of mass-repression against online file-sharers, these same interest groups are now attempting to put repressive policies at the core of the network. By turning technical intermediaries (access providers, online service providers) into a private copyright police, these intermediaries would then be compelled to censor their networks and services by filtering their users' communications to prevent potential infringements.

Such a reversal of the legal framework would inevitably cause severe harm to fundamental freedoms, and in particular the right to privacy and to freedom of expression. By encouraging the circumvention of judicial authorities in order to set up direct blocking and filtering of the internet and its services, European decision-makers would be laying the ground for a censorship infrastructure similar to that used for political purposes in authoritarian regimes.

Such a policy would run decisively contrary to our democratic values and the rule of law. It can only be explained by the blindness – if not the laziness – of European policy-makers listening solely to those segments of the entertainment industry whose economic models are still based on controlling copies. The Commission continues, for instance, to relay industry-originated figures that the U.S. Government Accountability Office has described in a recent report as mere fantasy.

Any consideration of the fact that file-sharing could be beneficial for culture, its diversity or its economy, is systematically set aside. A growing number of independent studies nonetheless show that the largest file-sharers are also the largest consumers of commercial offerings – in the same way that lending library users are avid book buyers.

Non-market use and commercial use are not mutually exclusive, but rather complementary. In much the same way, innovative models for financing creation based on the legalisation of sharing, such as "Kulturflatrate" or "Creative Contribution" supported in France by the Création-Public-Internet coalition, are systematically ignored by European decision-makers.

The toxic influence of the entertainment industry on the European law-making process is now reaching new extremes with the appointment of Maria Martin-Prat, previously in charge of legal and institutional matters with the musical majors lobby IFPI, as head of the copyright unit in the Internal Market DG of the European Commission.

European citizens and their representatives must adamantly oppose this unhealthy collusion threatening fundamental freedoms and the internet's very infrastructure. It is unforgivable that the Commission has chosen to encourage the implementation of an internet control and censorship infrastructure, rather than initiate the long overdue reform of copyright laws unadapted to new uses and technology.

 

This article originally appeared here and is licensed under Creative Commons AT-SA

Written by Jérémie Zimmermann and Philippe Aigrain, co-founders, La Quadrature du Net

 

Image: CC-AT-NC-SA Flickr: Michel Barnier

Equal Access

Wendy M. Grossman looks at web blocking and asks what it really solves

It is very, very difficult to understand the reasoning behind the not-so-secret plan to institute web blocking. In a letter to the Open Rights Group, Ed Vaizey, the Minister for Culture, Communications and Creative Industries, confirmed that such a proposal emerged from a workshop to discuss "developing new ways for people to access content online". (Orwell would be so proud.)

We fire up Yes, Minister once again to remind everyone of the four characteristics of proposals ministers like: quick, simple, popular, cheap. Providing the underpinnings of web site blocking is not likely to be very quick, and it's debatable whether it will be cheap. But it certainly sounds simple, and although it's almost certainly not going to be popular among the 7 million people the government claims engage in illegal file-sharing – a number PC Pro has done a nice job of dissecting – it's likely to be popular with the people Vaizey seems to care most about: rights holders.

The four opposing kiss-of-death words are: lengthy, complicated, expensive, and either courageous or controversial, depending on how soon the election is. How to convince Vaizey that it's these four words that apply, and not the other four?

Well, for one thing, it's not going to be simple; it's going to be complicated. Web site blocking is essentially a security measure. You have decided that you don't want people to have access to a particular source of data, and so you block their access. Security is, as we know, not easy to implement and not easy to maintain. Security, as Bruce Schneier keeps saying, is a process, not a product. It takes a whole organization to implement even the much more narrowly defined IWF system.

What kind of infrastructure will be required to support the maintenance and implementation of a block list to cover copyright infringement? Self-regulatory, you say? Where will the block list, currently thought to be about 100 sites, come from? Who will maintain it? Who will oversee it to ensure that it doesn't include "innocent" sites? ISPs have other things to do, and other than limiting or charging for the bandwidth consumption of their heaviest users (who are not all file sharers by any stretch) they don't have a dog in this race. Who bears the legal liability for mistakes?

The list is most likely to originate with rights holders, whom no one trusts to be accurate because they have shown over most of the last 20 years that they care relatively little if they scoop innocent users and sites into the net alongside infringing ones. Don't the courts have better things to do than adjudicate what percentage of a given site's traffic is copyright-infringing and whether it should be on a block list? Is this what we should be spending money on in a time of austerity? Mightn't it be…expensive?

Making the whole thing even more complicated is the obvious (to anyone who knows the internet) fact that such a block list will start a new arms race – indeed, according to TorrentFreak, it already has.

And yet another wrinkle: among the blocking targets are cyberlockers. Yet this is a service that, like search, is going mainstream: Amazon.com has just launched such a service, which it calls Cloud Drive and for which it retains the right to police rather thoroughly. Encrypted files, here we come.

At least one ISP has already called the whole idea expensive, ineffective, and rife with unintended consequences.

There are other obvious arguments, of course. It opens the way to censorship. It penalizes innocent uses of technology as well as infringing ones; torrent search sites typically have a mass of varied material and there are legitimate reasons to use torrenting technology to distribute large files. It will tend to add to calls to spy on internet users in more intrusive ways (as web blocking fails to stop the next generation of file-sharing technologies).

It will tend to favor large (often American) services and companies over smaller ones. Google, as IsoHunt told the US Court of Appeals two weeks ago, is the largest torrent search engine. (And, of course, Google has other copyright troubles of its own; last week the court rejected the Google Books settlement.)

But the sad fact is that although these arguments are important, they're not a good fit if the main push behind web blocking is an entrenched belief that the only way to secure economic growth is to extend and tighten copyright while restricting access to technologies and sites that might be used for infringement. Instead, we need to show that this entrenched belief is wrong.

We do not block the roads leading to car boot sales just because sometimes people sell things at them whose provenance is cloudy (at best). We do not place levies on the purchase of musical instruments because someone might play copyrighted music on them.

We should not remake the internet – a medium to benefit all of society – to serve the interests of one industrial group. It would make more sense to put the same energy and financial resources into supporting the games industry which, as Tom Watson MP (Lab, West Bromwich East) has pointed out, has great potential to lift the British economy.

 

Wendy M. Grossman’s Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series

Image: CC-AT-NC-SA Flickr: Hot Grill
