Google Settlement 2.0 Changes Little

Defendant Google and two of the plaintiffs, the Authors Guild and the Association of American Publishers, filed their revised settlement proposal at the end of the day on Friday, November 13th. Facing vehement objections from European rightsholders this year, not to mention the threat of more litigation, the parties decided to limit the settlement to books published in the U.S., the U.K., Canada and Australia. But little has changed for writers and publishers of American books.

Orphan Works.

Under the old settlement, income collected by Google for the exploitation of orphan works (those whose rightsholders cannot be located) would have been held for 5 years, after which it would have been redistributed to known rightsholders and the Book Registry (and Google, of course). Settlement 2.0 will allow 25% of the monies collected for any orphan work to be used to locate rightsholders if the income is still unclaimed after 5 years. If the income is unclaimed after 10 years, it will be distributed to charity — not-for-profit organizations which may be designated by the Registry, the courts, or the newly-created office of the Unclaimed Works Fiduciary. No one yet knows how this will work.

In fact, Settlement 2.0 establishes Google’s total dominion over orphan (unclaimed) works. It gives Google and only Google the right to exploit them. Neither the Book Registry, which will represent the authors and publishers, nor the newly-created office of the Unclaimed Works Fiduciary will have the power to license these works to third parties (e.g., bookstores and online merchants).

Here is how the Settlement (both 1.0 and 2.0) cleverly hides this fact: the Registry, it says, may license copyrights to third parties “to the extent permitted by law.” But the law HAS no provisions to allow anyone but the actual rightsholder to issue licenses. Google knows this. As Danny Sullivan at Search Engine Land points out, “the parties have represented to the United States that they believe the Registry would lack the power and ability to license copyrighted books without the consent of the copyright owner – which consent cannot be obtained from the owners of orphan works.” The Settlement will not change the law. It will simply empower Google to skirt it.

Will Congress change the law to permit third parties to compete with Google in exploiting orphan works? Perhaps. In a conference call late Friday night, Richard Sarnoff, chairman of the Association of American Publishers, said that he hopes the Settlement “unlocks a positive outcome on the legislative process on orphan works as now there’s a way to actually implement any legislation that Congress decides on orphan works.” Translated into English: the industry is hoping (or is it?) that Congress modifies copyright law to allow the exploitation of orphan works without permission from rightsholders. This wouldn’t be an outrageous outcome at all. However, unless and until Congress ‘waves its magic wand’ (to paraphrase James Grimmelmann at The Laboratorium), no one but Google and the industry (via its Registry) will lawfully be able to exploit orphan works. At its very essence, Settlement 2.0 remains anti-competitive.

Re-Sellers of In-Copyright (but NOT Orphan) Books.

In September, when it appeared that Settlement 1.0 wouldn’t be approved, Google suddenly announced that approved book retailers would be able to sell consumers online access to the out-of-print books covered by the Settlement, including orphan works. Rightsholders, Google said, would still receive the 63% specified in Settlement 1.0 as the income split with Google. But the remaining 37% would be split between Google and competing retailers. Whether that will be commercially viable for retailers is another question. Note the tricky language in Settlement 2.0:

Google will permit the reseller of a Book to retain a majority of Google’s share of Net Purchase Revenues from Consumer Purchases through such reseller.

The “majority” of 37% is anything over 18.5%, and nothing in Settlement 2.0 limits Google’s discretion in setting the actual percentage. Caveat lector.
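The reseller arithmetic can be made concrete with a short sketch. The 63% rightsholder share is taken from the Settlement text; the sale price and the assumption that Google would grant only the bare-minimum “majority” are hypothetical, since Settlement 2.0 leaves the actual percentage to Google’s discretion.

```python
# Sketch of the Settlement 2.0 reseller split. The 63% rightsholder share
# is fixed by the Settlement; the reseller's cut of the remaining 37% need
# only be "a majority of Google's share" -- anything over half of it.
# Sale price and reseller percentage here are illustrative assumptions.

def reseller_split(sale_price, reseller_share_of_37=0.51):
    """Return (rightsholder, reseller, google) dollar amounts."""
    rightsholder = sale_price * 0.63           # fixed by the Settlement
    remainder = sale_price * 0.37              # split: Google vs. reseller
    reseller = remainder * reseller_share_of_37
    google = remainder - reseller
    return rightsholder, reseller, google

# A $10.00 sale with a bare-minimum "majority" (51% of the 37% share):
r, s, g = reseller_split(10.00)
# The rightsholder gets $6.30; the reseller and Google split the
# remaining $3.70, with the reseller's cut just over half of it.
```

Under these assumptions the reseller clears barely $1.89 on a $10 sale, before its own costs, which is why the commercial viability question above is a real one.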

With the Reseller program, Google will have two ways for third parties to participate in the exploitation of books to Google’s benefit. (The other is the “Affiliate Program,” already established under Settlement 1.0, in which approved websites which link to Google products will be given a small referral fee.) In both cases, Google alone will control hosting and serving of Digital Copies, and will profit from it.

Other changes in Settlement 2.0 include:

1.  Google’s Exploitation Rights Less Open-Ended. Under Settlement 2.0, income may be earned only via print-on-demand, downloads and consumer subscriptions.

2. Terminals at Public Libraries. The Registry will be permitted to increase the number of terminals at public libraries (from the mandated ONE per building). However, can doesn’t mean will. The Registry has a conflict of interest here: it would collect royalties from public libraries only for photocopies and downloads, while royalties from college, university and corporate libraries will include yearly licensing fees. There may be little incentive to expand royalty-free access.

3. FREE Books. Rightsholders will be able to tell Google that access to their books must be given away for free under Creative Commons or other licenses. (That’s for the handful of writers who don’t want income or who begrudge Google even a penny.)

Google’s claim that it is principally motivated by philanthropic concerns — the preservation, says Sergey Brin on the Google Public Policy Blog, of “our cultural heritage” — is belied by negotiations which are aimed at making Google even richer and more powerful than it already is. The Open Book Alliance, which is funded by Microsoft and Amazon, among others, got it right in its public statement shortly after Settlement 2.0 was filed:

Today, Google, the Authors Guild, and the Association of American Publishers released their revised book settlement proposal in an attempt to fix the deeply flawed legal agreement. … None of the proposed changes appear to address the fundamental flaws illuminated by the Department of Justice and other critics that impact public interest.  [Instead], Google, the AAP, and the AG are attempting to distract people from their continued efforts to establish a monopoly over digital content access and distribution; usurp Congress’s role in setting copyright policy; lock writers into their unsought registry, stripping them of their individual contract rights; put library budgets and patron privacy at risk; and establish a dangerous precedent by abusing the class action process.

In 2010, the Justice Department will weigh in once again.


Google Grows a Monopoly, Book Industry Cashes In

In the past week, Google has announced both its acquisition of reCAPTCHA, the main company providing “captcha codes” to websites around the world, and a new deal with On Demand Books, the maker of the Espresso Book Machine, which prints books on demand. These moves are meant to make the proposed Google Settlement more “exciting,” but they only raise further concerns over Google’s monopoly power in the online book market.

The acquisition of reCAPTCHA will allow Google to use reCAPTCHA’s technology to improve its book scanning, something which reCAPTCHA was already doing for the not-for-profit Internet Archive. reCAPTCHA started as a project of the School of Computer Science at Carnegie Mellon University. Back in May of 2007, the ReadWriteWeb blog explained how the use of captcha codes works to improve book scanning:

There are many projects underway to scan old books and other texts into digital format, but Optical Character Recognition software often falls short, especially with oddly stylized text or old, faded works. When the computer can’t figure out a word, a human has to step in and enter it manually. This means reading thousands of digital images of words and deciphering them — or essentially what you do when you solve a CAPTCHA image. The Internet Archive project scans 12,000 books per month and sends the team at Carnegie Mellon hundreds of thousands of images of words the computer can’t figure out, according to the Washington Post. These images are turned into CAPTCHAs for the reCAPTCHA program.

But if the computer doesn’t know the word, how will it know if the human entered it properly? The reCAPTCHA program gives users two words to decipher: one which it already knows, and one which is a mystery. Employing a certain level of trust, the computer assumes that if the user correctly identifies the word it knows, then he probably figured out the one it doesn’t correctly as well.
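The trust mechanism in the quoted passage can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not reCAPTCHA’s actual API: the function names, the vote counter, and the sample words are all my own invention.

```python
# A much-simplified sketch of the reCAPTCHA idea described above.
# One word (the "control") has a known answer; the other (the "unknown")
# is a word the OCR software could not read. If a user gets the control
# word right, their reading of the unknown word counts as one vote
# toward its transcription.

from collections import Counter

unknown_votes = Counter()  # accumulated guesses for the mystery word

def check_captcha(control_answer, user_control, user_unknown):
    """Verify the user on the known word; harvest their guess on the other."""
    if user_control.strip().lower() != control_answer.lower():
        return False                     # failed the known word: reject
    unknown_votes[user_unknown.strip().lower()] += 1
    return True                          # trusted: record their transcription

# Three hypothetical users solve the same pair of words:
check_captcha("morning", "morning", "tavern")
check_captcha("upon", "upon", "tavern")
check_captcha("river", "rivor", "cavern")   # failed control: vote discarded

best, count = unknown_votes.most_common(1)[0]
# best == "tavern" with two votes; the failed attempt cast no vote.
```

Once enough trusted users agree, the most common reading is accepted as the transcription of the word the OCR software could not decipher.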

In the announcement of reCAPTCHA’s acquisition on Google’s blog (posted by a co-founder of reCAPTCHA and a Google representative), no mention is made of the Internet Archive’s project. Will Google pull the plug on the Archive’s use of Google’s newly-acquired intellectual property? Since reCAPTCHA’s software is proprietary rather than fully open source, the acquisition may have the effect of damaging an important, competing book-scanning project.

The On Demand Books deal will make Google the supplier of some 2 million out-of-print and out-of-copyright books for use in On Demand’s machine. While everyone has the right to do what s/he wants with out-of-copyright books, the effect of this deal may be to choke off competition from other book scanning projects.

The Espresso Book Machine is a high-speed printer. According to the company, it can turn out a 300-page paperback in under five minutes and at low cost, about a penny a page. There are only five Espressos currently in operation in the U.S.,[1] but the deal will give book stores and libraries ample incentive to lease the $100,000 machine — and to use Google exclusively as their source for texts. This will be irresistible, in fact, if the Google Settlement is approved: the Settlement grants Google, and only Google, publishing-on-demand rights for out-of-print but in-copyright texts.

Books supplied by Google and printed by the Espresso will supposedly sell for around $8, of which $1 will go to On Demand (in addition to leasing fees) and $1 will go to Google. Google says it will donate its commission to not-for-profit causes, but that highlights precisely the problem with monopolies: they can do what they want, make exclusive deals, give money away and price their competitors right out of the market.

Remember, however, that the Google Settlement isn’t just about Google. Equally culpable is the publishing industry, whose main interest in the Settlement is to monetize the library system. As I stated in my earlier posting, no longer will publishers be limited to selling books to libraries across the U.S. Rather, the industry will provide time-limited licenses for which libraries will have to pay and pay again. Access at free public libraries will carry no connection charge, but should any patron wish to print out pages from in-copyright books, a per-page royalty will be charged. Contrast this with the current situation: when you go to the library and want to photocopy something, you pay only a photocopy charge, not a royalty to the copyright holder on top of it.

Moreover, the publishing industry will avail itself of the royalties accrued from Google’s exploitation of “orphan” books. Although the number of titles is “relatively” small, clearly Google and the publishing industry would like to divvy up these potential earnings for their own enrichment. The Google Settlement is a power grab that should be stopped.


[1] The locations are The Internet Archive (San Francisco), the University of Michigan Shapiro Library (Ann Arbor), the New Orleans Public Library, The Brigham Young University Bookstore (Provo, UT) and the Northshire Bookstore (Manchester Center, VT). Another seven Espressos will go into operation in the Fall of 2009. For a complete worldwide list, see

Google Settlement: Against the Public Interest

Once the elation wears off over the prospect of being able to search nearly every book ever published in the United States — whether in- or out-of-print — or being able to buy books for which one previously had to scour rare and used book shops, reality sets in: the proposed Google Settlement does not offer many of the public benefits that its advocates claim. Mainly, the proposed agreement rewards the case defendant, Google, with a near-monopoly over digital books, and the plaintiffs — actually, the Authors Guild and the Association of American Publishers — with far more power than they imagined possible under the copyright law.

Here’s what the lawsuit is about: U.S. copyright law reserves to the copyright holder the right to reproduce copyrighted works. A digital copy is a reproduction. Despite this, Google audaciously embarked on a project to scan entire libraries in order to create a massive searchable database. The search code is Google’s intellectual property, but the database itself is comprised of the copyright-protected property of each and every publisher and author whose works were digitally copied. Google reasonably has argued “no harm, no foul:” no one would have access to the scanned books. Rather, it would merely permit the viewing of brief snippets — fragments of sentences — in response to search terms designated by the user.

“Snippet view” is pretty useless, actually: not enough to constitute research and often less useful than even a library card catalog entry. It has no commercial value, except insofar as Google can sell advertising on its search return pages, and no scholarly value, except to point the searchee in the direction of topical books s/he would have to acquire by other means: at a bookstore or library. The commercial and informational limitations of Snippet view provided ample motivation for Google to settle the case and secure the rights to display and exploit much, much more. (For those unfamiliar with Snippet view, click on the image, which shows a Snippet view search return for the phrase “copyright infringement.”)

Many people thought Google should litigate, not settle, in order to obtain a ruling that its “book search” project was “fair use” and thus required no permission from, or compensation to, the copyright holders of the scanned books. That defense, however, is far from guaranteed and even a little doubtful. In litigation Google runs a substantial risk of its project going the way of Napster, Grokster, Morpheus and Pirate Bay, but without the ability to make it a commercial venture. For their part, the Authors Guild and the Association of American Publishers didn’t want to lose on their claim of copyright infringement, but more important, they saw money to be made — and in particular a way to get royalty money out of libraries. As the Authors Guild President, Roy Blount Jr., wrote on the Guild’s site in October 2008, “[f]ar more interesting for most of us — and the ambitious part of our proposal — is the prospect for future revenues.”[1]

The Google settlement is a brilliant collusion between plaintiffs and defendant to create unprecedented riches for themselves in the digital book market. What began as a mere Google Book Search project has transmogrified into a Google Book Industry. Google will be handed a range of opportunities, most of them brand new, to “monetize” its database, while the plaintiffs will be given royalties and control over their distribution.

Spokespersons for both Google and the authors and publishers groups have painted rosy pictures of near-universal access to the information in the database, of sharing with others its newly created “rights,” and of the great educational benefits to students and scholars. Here is a sampling of those comments:

Richard Sarnoff, former chairman of the Association of American Publishers and co-chairman of the U.S. affiliate of Bertelsmann A.G., which owns Random House:

We have never said that the same kinds of outcomes would not be available to Microsoft or Amazon or anyone else who is willing to make the same investments. We have a road map to do it now.[2]

Roy Blount Jr., President of the Authors Guild:

Readers wanting to view books online in their entirety for free need only reacquaint themselves with their participating local public library: every public library building is entitled to a free, view-only license to the collection. College students working on term papers will be able to point their computers to resources other than Wikipedia, if they’re so inclined: students at subscribing institutions will be able to read and print out any books in the collection.[3]

David Drummond, Senior V.P. of Corporate Development and Chief Legal Officer, Google:

We believe strongly in an open and competitive market for digital books. As part of that commitment, today we announced that for the out-of-print books being made available through the Google Books settlement, we will let any book retailer sell access to those books. Google will host the digital books online, and retailers such as Amazon, Barnes & Noble or your local bookstore will be able to sell access to users on any Internet-connected device they choose. Retailers can also pursue their own digitization efforts of out-of-print books in parallel.[4]

Yet a closer look at the Settlement Agreement as it now stands reveals stingier control, self-dealing contrary to the public interest and promises which may not even be legal to make.

1. For a one-time payment of $45 million, Google will be released from all past liability for the books it has already scanned. This money will be distributed to copyright holders (at around $60 per title). In addition, Google will pay the plaintiffs’ legal fees and an additional $34.5 million to fund and launch a “Book Registry.” When you hear about a “$125 million settlement,” keep in mind that that figure includes a hefty amount for the plaintiffs’ lawyers.

For future copying, Google gets a free pass — that is, it will not have to pay anything for the scans it makes until money begins rolling in on database licensing, advertising (e.g., on banner ads that accompany certain types of search results and book-display pages) and other revenue streams. Google will charge a 10% fee off the top for “operating expenses” and split the remaining revenue, paying 30% to itself and 70% to the Book Registry on behalf of authors and publishers. (For convenience, Google calls its share 37%: 10% off the top, and then 30% of 90%.)
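The split described above is easy to verify step by step. The percentages come from the Settlement terms as summarized here; the $100 gross figure is a hypothetical round number for illustration.

```python
# The 63/37 split computed as the Settlement describes it: a 10%
# "operating expenses" fee off the top, then 30% of the remaining 90%
# to Google and 70% to the Book Registry. Gross figure is hypothetical.

def split_revenue(gross):
    operating_fee = gross * 0.10         # 10% off the top, kept by Google
    remainder = gross - operating_fee    # 90% left to split
    google_share = remainder * 0.30      # 30% of the remainder to Google
    registry_share = remainder * 0.70    # 70% to the Book Registry
    return operating_fee + google_share, registry_share

g, r = split_revenue(100.00)
# g is about 37.00 and r about 63.00: the "37%" Google quotes is
# exactly 10% plus 30% of 90%.
```

So of every dollar in licensing, advertising or sales revenue, Google keeps 37 cents and the Registry distributes 63 cents to authors and publishers.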

Contrary to statements made by the parties to the lawsuit, these terms — the one-time fee and the free pass — cannot be granted to anyone other than Google, for the simple reason that neither the New York District Court which has jurisdiction over the case, nor Google nor the plaintiffs have any legal power or authority to do so. The settlement resolves no issue of law regarding whether digital copying for purposes of database searching constitutes copyright infringement or “fair use.” Furthermore, the plaintiffs are not even legally bound to accept the same or similar projects or offers thereof by anyone else. (A “roadmap” doesn’t create a legal right.) In fact, should any party wish to scan books on its own, it must run the same risk that Google ran of being sued. And without the luck of a new class action settled in an identical manner as Google’s, even a single copyright holder could shut down such a project.[5]

2. The Book Registry will keep a database of copyright holders and their works and pay out royalties to authors and publishers. It will also have the power to negotiate prices and conditions of display and sale by Google, and to decide what constitutes an out-of-print book (based on availability and other issues) and what is an “orphan,” i.e., a book whose copyright ownership cannot be ascertained or whose owner cannot reasonably (according to the Book Registry) be found.

Google will license access to, and otherwise commercially exploit, orphan titles, while the Book Registry will collect the income on such titles, but there is no provision in the current copyright law that permits the use of orphans just because their owners aren’t around to object. Nor is there any precedent for the parties to distribute the royalties to these works among themselves and the owners of other copyrighted works. Although Google insists that orphan titles, which probably number under 600,000,[6] constitute a tiny proportion of the more than 10 million books it has scanned so far, if the potential for income were economically insignificant, the parties would never have included orphans in their negotiations.

3. Google will have the right to display the complete contents of out-of-print (but in-copyright) books unless their copyright owners opt out. Owners of orphan works will, of course, not be able to opt out. For in-print non-fiction books, Google will have the right to display up to 20% of a book’s content, but no more than 5 consecutive pages, after which at least two pages must be blocked. For fiction, the formula is different: up to 5% or 15 pages, whichever is less, adjacent to where a user lands on a given page from a search. To avoid spoilers, the final 5% of such books (or 15 pages, whichever is greater) will be blocked. That said, rightsholders will have several options for previews. One of them, for example, allows Google to display up to 10% of the book without an adjacent-page limitation.
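The preview formulas are fiddly enough that a worked example helps. The percentages and page counts below are the ones described above; the function shapes and the 400-page example are my own illustration, and they ignore the rightsholder options and the consecutive-page mechanics of non-fiction previews.

```python
# A sketch of the in-print preview caps described above. The percentages
# are from the Settlement as summarized here; everything else (function
# names, the example book lengths) is illustrative.

def nonfiction_preview_cap(total_pages):
    """Non-fiction: up to 20% of the book, shown in runs of at most
    5 consecutive pages separated by at least 2 blocked pages."""
    return int(total_pages * 0.20)

def fiction_preview_cap(total_pages):
    """Fiction: up to 5% or 15 pages, whichever is LESS, around the
    landing page; the final 5% or 15 pages, whichever is GREATER,
    is always blocked to avoid spoilers."""
    visible = min(int(total_pages * 0.05), 15)
    blocked_at_end = max(int(total_pages * 0.05), 15)
    return visible, blocked_at_end

# For a 400-page novel: at most 15 viewable pages near the search hit,
# with the last 20 pages (5% of 400) blocked.
# For a 400-page non-fiction book: up to 80 pages viewable in total.
```

Note how the less/greater asymmetry works against longer novels: the viewable window is capped at 15 pages regardless of length, while the blocked ending grows with the book.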

4. Google will sell time-limited subscriptions for database access to institutions (universities, colleges, corporations, think tanks, etc.), the price for which will be set by Google and the Book Registry. In theory, prices will be set according to the type of institution, but most important here is the creation of a massive revenue stream that never existed before. Until now, authors and publishers derived income from libraries only by selling them books.

Although paid subscription users will have the right to print out books whose contents are displayed in full, piracy concerns inconvenience the user with busywork that will do nothing to dissuade willful infringers:

With respect to copy/paste, the user will not be able to select, copy and paste more than four (4) pages of the content of a Display Book with a single copy/paste command. Printing will be on a page-by-page basis or a page range basis, but the user will not be able to select a page range that is greater than twenty (20) pages with one print command for printing.

For not-for-profit colleges and universities and public libraries, on the other hand, there are no guarantees. The Settlement Agreement provides that Google may provide public access service “free,” but there is no obligation. Not-for-profit colleges and universities which offer bachelor’s degrees will get no more than one computer terminal for every 4,000 full-time students. Other not-for-profit education institutions will get no more than one computer terminal for every 10,000 full-time students. For public libraries, the agreement specifies “no more than one terminal” per library building.

It will probably be more efficient to wait your turn at a terminal than requisition a book through inter-library loan, but copying any part of the book is going to cost you at these “free” locations: users at libraries and not-for-profit institutions of higher learning will have to pay a fee — set by the Registry but collected by Google — to print out any pages from books. This represents a further victory by the publishing industry over public libraries, which collect no royalties on behalf of authors or publishers when library users photocopy pages from hard copies. In essence, the Google Settlement transforms the “public good” of free public and not-for-profit libraries into a profit center for Google, authors and publishers.

5. Google plans a full range of commercial exploitation: Printing on Demand (POD), Custom publishing (e.g. coursepacks), PDF downloads, consumer subscription models, terminals in copy shops that charge a per-hour or per-user fee, and the like. Whether Google may authorize third parties to do any of this is not only legally questionable, but such offers may turn out to be economically uninteresting. Although Google has not said what percentages it will offer, presumably it will not be forgoing its 10% administrative share. Even if it offers third parties two-thirds of its income share — 20% — there may be few takers outside of Amazon and a few large bookstore chains. Even then, the chains may find that such meager profits don’t justify the use of a store’s time and resources. Google’s monopoly lies in its 10% administrative fee plus 30% share of revenues. No other bookseller, virtual or otherwise, will be able to come close to that unless they scan the books and enter into similar settlements themselves.

6. The privacy issues raised by the Amazon Kindle incident loom even larger under the proposed Google Settlement. Google will collect data not only on which search terms readers use and which books they look at, but also on how long a reader spends on each page. Readers will “store” their virtual libraries with Google. There is no limit to the kind and extent of data which Google may collect on users’ reading habits, how long it may retain it or what it may do with it, except as Google unilaterally determines. It is only a matter of time before Google begins responding to subpoenas for such information, whether it be in criminal prosecutions, matters of so-called national security or even cases of libel or divorce.

7. Google will have the right not to include books in its database for “editorial” reasons and “non-editorial” reasons. The former is not at all defined and the latter is ambiguously defined as “reasonable quality, legal, or technical concerns that are not solely editorial-based concerns.” For books excluded on solely editorial grounds, Google will advise the Book Registry, but there is no provision to make this information publicly available. Partial or non-editorial exclusions remain Google’s secret.

8. Will the Google Book Industry be a boon to students? Most students seem to do their research these days on the Internet rather than in the library, so anything to add to the possible sources of information could make some difference. However, the content display limitations for in-print works, or those out-of-print works whose owners opt out, are not research-friendly. They will often, if not usually, obscure information needed for a full understanding of the topic being researched. It may be good for quick quotations, but such limited displays are worthless for scholarship.

Furthermore, as Geoffrey Nunberg has pointed out in The Chronicle of Higher Education,[7] the metadata Google has provided for the books scanned thus far can be wildly inaccurate. Mr. Nunberg refers to the metadata Google provides as a “train wreck: a mishmash wrapped in a muddle wrapped in a mess.” For example,

Start with publication dates. To take Google’s word for it, 1899 was a literary annus mirabilis, which saw the publication of Raymond Chandler’s Killer in the Rain, The Portable Dorothy Parker, André Malraux’s La Condition Humaine, Stephen King’s Christine, The Complete Shorter Fiction of Virginia Woolf, Raymond Williams’s Culture and Society 1780-1950, and Robert Shelton’s biography of Bob Dylan, to name just a few. And while there may be particular reasons why 1899 comes up so often, such misdatings are spread out across the centuries. A book on Peter F. Drucker is dated 1905, four years before the management consultant was even born; a book of Virginia Woolf’s letters is dated 1900, when she would have been 8 years old. Tom Wolfe’s Bonfire of the Vanities is dated 1888, and an edition of Henry James’s What Maisie Knew is dated 1848.

There are also classification and other errors (including missing and illegible pages), and some books may inadvertently be hidden from any direct search at all. For example, Google has scanned three volumes of the Victorian title, My Secret Life, but a search for it turns up only the third volume, designated as such not on the search result page, but only in display view. The “Other Editions” link on Volume 3’s display view does turn up volumes 1 and 2, but the user has to click on them to know, since the links don’t indicate that they are for different volumes and not different editions with identical content. Moreover, nowhere to be found is the fact that the work consists of 8 volumes. Only the length is specified — “2359 pages” — and you need to add up the pages in Volumes 1, 2 and 3 to know you have less than half the work. Finally, Google has (laughably) included as a subject category for the work “Literary Criticism,” which this book definitely is not.

Mr. Nunberg is optimistic that most of the metadata, informational and scanning errors will eventually be corrected, but currently they are built into the system, and Google has no substantial financial interest in doing better. That is one of the many disadvantages of a monopoly.

Whether the monopoly created by the Settlement Agreement is the fault of Google or the authors’ and publishers’ representatives, it is now everyone’s problem. The Agreement is wholly aimed at benefiting Google to the near-exclusion of competitors. At the same time, it affords minimal provisions for the public good, which provisions must be balanced against a de facto expansion of control by copyright owners on what libraries can offer the public without incurring royalty obligations. Many hoped this case would be settled in the public interest, but the proposed Agreement carries the ball in the opposite direction.




[2] Helft, Miguel, “11th-Hour Filings Oppose Google’s Book Settlement,” New York Times, September 8, 2009,



[5] For further legal failings of the Google Settlement, see James Grimmelman, “How to Fix the Google Book Search Settlement,” Journal of Internet Law, April 2009,


[7] Geoffrey Nunberg, “Google’s Book Search: A Disaster for Scholars,” The Chronicle Review, August 31, 2009.

The proposed Google Settlement agreement can be found at

Mea Culpa: Amazon’s Error

Amazon’s chief, Jeffrey P. Bezos, has apologized to customers for deleting their Orwell titles in July and offered restitution: restoration of the digital copies, an Amazon gift certificate or a check for $30. Mr. Bezos was contrite: “This is an apology for the way we previously handled illegally sold copies of ‘1984’ and other novels on Kindle,” he said, characterizing the deletions as “stupid, thoughtless and painfully out of line with our principles.”

The self-criticism, however, doesn’t resolve the basic issues — first, that when you “buy” an e-book, you acquire only the right to have it reside on your Kindle under conditions determined unilaterally by Amazon; and second, the Kindle may compromise your privacy. As long as Amazon has the power to delete and replace e-books on its customers’ Kindles at will, there is the likelihood that it will be turned against readers, whether by governments or private parties in litigation.

The Performance Rights Act of 2009: What’s the Fuss?

As Congress holds hearings to consider passage of the Performance Rights Act of 2009, the National Association of Broadcasters (NAB) mounts a nativist opposition. According to the NAB’s vice-president Dennis Wharton, the law is un-American – not only a “tax,” but an “effort to line the coffers of foreign record labels at the expense of America’s free and local radio stations.”[1] Given this level of hyperbole, it may be helpful to consider what the bill proposes as well as the history of the sound performance copyright, its related royalties and the role radio plays in the Internet age.

The bill proposes to amend the copyright law to grant copyright owners of sound recordings, and producers and recording artists whose performances are embodied thereon, the right to collect a royalty when their records are played on conventional, so-called “terrestrial,” radio. (The right to collect such royalties is generally known as “neighboring rights.”) While the big terrestrial broadcasting companies like Clear Channel, Cumulus Media, Citadel Broadcasting, CBS Radio and the ABC Radio Network (a subsidiary of Disney) will have to negotiate away percentages of their gross advertising receipts, the proposed law allows individual stations with gross revenues of less than $1.25 million to pay a flat fee of $5,000 per year. Non-commercial stations will pay a flat fee of $1,000 per year and “religious” broadcasters, bars, clubs and concert venues will pay nothing at all.

Webcasters, including those FCC-licensed terrestrial stations which simulcast on the Internet, are already paying royalties for neighboring rights. Under U.S. copyright law, these royalties are split 50% to sound recording owners (usually, but not always, record companies); 45% to featured artists (which can be an individual or a band); 2.5% to an escrow account managed by an independent administrator jointly appointed by copyright owners of sound recordings and the American Federation of Musicians (AFofM) for distribution to non-featured musicians (whether or not they are members of the AFofM); and 2.5% to a similar escrow account related to the American Federation of Television and Radio Artists (AFTRA) for distribution to nonfeatured vocalists (whether or not they are members of AFTRA).
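For concreteness, the statutory split described above can be expressed as a short sketch. The percentages are those in the statute; the function name, dictionary keys and the example pool amount are assumptions for illustration:

```python
def split_webcast_royalty(pool):
    """Allocate a webcasting royalty pool per the statutory shares
    described above (hypothetical labels for each recipient)."""
    return {
        'sound_recording_owner': pool * 0.50,     # usually the record company
        'featured_artist': pool * 0.45,           # individual or band
        'afm_escrow_nonfeatured_musicians': pool * 0.025,
        'aftra_escrow_nonfeatured_vocalists': pool * 0.025,
    }
```

For a $1,000 royalty pool, the copyright owner would receive $500, the featured artist $450, and each escrow account $25.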

Moreover, both terrestrial broadcasters and webcasters pay royalties to be distributed to copyright owners of musical compositions (50%) and songwriters (50%) each time a song is broad- or webcast. This raises the question why terrestrial broadcasters should be treated differently from webcasters, or musical compositions differently from sound recordings.

A Little Background.

Copyright protection in musical compositions began in 1831, but it wasn’t until 1972 that a federal copyright in sound recordings was recognized. Initially, music publishers and Congress viewed sound recordings as a species of infringement on copyrights in musical compositions. In White-Smith Music Publishing Company v. Apollo Company, a case which began to wend its way through the courts in 1905, a publishing company sued the maker of perforated player-piano rolls, claiming that the rolls copied its musical compositions in violation of its exclusive rights to publish, copy, and sell reproductions of its works. In 1908 the Supreme Court threw the case out, ruling that the plaintiffs were “entitled to copyright in three sheets of music” but not to “the production of the sounds indicated by or on those sheets of music; … nor to any mechanism for the production of such sounds or music.” If Congress wanted to accord relief, the Court said, it could do so by amending the copyright laws.[2]

Congress did that in 1909, giving music publishers the right over first-time “mechanical reproductions” of their musical compositions but requiring publishers to accept a statutorily-specified royalty for any subsequent mechanical reproductions. Consistent with the Supreme Court’s view in the White-Smith case, the law failed to recognize in sound recordings any type of protectible intellectual expression. They were, like a player-piano roll, merely “mechanical” copies of underlying creative works. Unauthorized copying of sound recordings, including player-piano rolls, could be dealt with under state and common law provisions prohibiting the piracy of goods and unfair competition.

When, by act of Congress on October 15, 1971, sound recordings were finally recognized as works subject to copyright protection, they were still viewed as not quite products of the intellect. The new law did not grant public performance rights for sound recordings, but limited the scope of rights to reproduction and distribution “in a tangible form that directly or indirectly recaptures the actual sounds fixed in the recording.” The protectability of sound recordings, in other words, did not lie in any intellectual aspect of their creation – for example, how the recorded song was interpreted or conceived in the minds of the recording artists. To the contrary, the law protected only the particular fixation of the song. Anyone could (and still can) make another sound recording faithfully reproducing any other artist’s interpretation, but as long as the sounds are newly recorded, not only is there no infringement on the referent recording, but the new recording is separately copyrightable.

Alongside this legal history, publishers and broadcasters have successfully argued for years that radio play gave sound recordings a benefit that musical compositions did not earn: promotion resulting in sales. This was an odd argument, since publishers and their songwriters did (and do) earn a benefit by way of the mechanical royalty that record companies paid them every time a record was distributed or sold. The fact is that until now publishers and songwriters have been paid both on radio play and record sales, while sound recording owners and recording artists have been paid only on record sales. (Furthermore, unlike recording artists, publishers and songwriters are not subject to recoupment of advances and recording costs.) Music publishers, however, had the upper hand historically. They wanted no encroachment on their earnings from record companies,[3] just as broadcasters had no desire to pay two types of performance royalties on the same record. It was also true that, at least until recently, record companies and recording artists were dependent on broadcasters to promote their records and concerts.

The Digital Era

The presumption against sound recording performance rights began to erode, at least in the minds of lawmakers, once digital technology pointed the way to new distribution methods via the Internet. The copying of sound recordings over the air was always imperfect at best. Over the Internet, however, the “broadcast” of a song was in fact the delivery of a perfect reproduction, indistinguishable from its digital “original,” straight to the listener’s computer. This changed the notion of what a broadcast was and record companies feared, correctly to some extent, that it might supplant traditional record distribution.

Consequently, in 1995 Congress passed the “Digital Performance Rights in Sound Recordings Act,” which recognized a limited public performance right in sound recordings with respect to webcasting. Sound recording copyright owners could refuse to license interactive broadcasts outright, but for simulcasting and subscription services (with certain limitations), the law forced record companies and webcasters to negotiate mutually acceptable royalty rates.[4] Congress exempted terrestrial radio (with the exception of their digital simulcasts) to avoid upsetting what it viewed as a decades-long symbiosis between broadcasters, record companies and recording artists. The truth, however, is that the business of radio and the rise of the Internet were already destroying that symbiosis.

Today it is fair to ask to what degree radio promotes record sales. In “Don’t Play It Again Sam: Radio Play, Record Sales and Property Rights,” Stan J. Liebowitz (School of Management, University of Texas at Dallas) found that assumption to be unjustified. Overall listening, in fact, is today a substitute for the purchase of sound recordings, according to the study.[5] Prior to the advent of digital communications, radio was one of the few places you could hear new music. But today the new music is on the Internet, while the majority of music played on terrestrial radio is at least two years old.

This conclusion is supported by The Future of Music Coalition’s study, “Same Old Song: An Analysis of Radio Playlists in a Post-FCC Consent Decree World,” which analyzes terrestrial broadcasting overall and on a format by format basis.[6]

Here are a few of the study’s findings for the major formats in 2008:

CHR/Pop: New releases accounted for 39% of the playlists, while songs that were at least two years old accounted for 30%.

Country: 20% of the playlist was dedicated to new releases, while 59% was dedicated to releases before 2007 (38% for 2000-2006 releases and 21% for pre-2000 releases).

Triple A Commercial: Nearly 50% of this format’s 2008 playlist was dedicated to songs released prior to 1999. New releases accounted for only 19%, while releases that were two or more years old accounted for nearly 70% of the total playlist.

Urban AC (Adult Contemporary): New releases accounted for a mere 12% of the playlist, while music that was more than NINE years old accounted for 56%. Overall, music that was two or more years old accounted for over 70% of the playlist.

Since the top 5000 songs on each of the format charts account for 77% of the spins on these radio stations, one can see, by extrapolation, how little music is being “promoted.” In Urban AC, we’re talking around 60 new songs a year. In Country and Triple A Commercial, around a hundred. In CHR/Pop, where competition is fiercest, as many as a few thousand, assuming many albums that are only a year old are still in the promotion phase. It would thus appear that terrestrial radio stations are not so much promoting new records as attracting listeners and earning advertising dollars by playing songs and artists that have already stood the test of time. This is the position that Nancy Sinatra argued in her recent New York Times guest editorial. The FMC characterizes the programming practices of terrestrial radio as “risk averse” and they’re right.

In light of the FMC study, the NAB’s defense against the Performance Rights Act of 2009 based on localism also appears rather fraught. “By 2002,” the study concluded,

virtually every geographic market was dominated by four firms controlling 70 percent of market share or greater. In addition, nearly every music format was controlled by an oligopoly. In 28 of the 30 major music formats nationwide, four companies or fewer controlled more than 50 percent of listeners. As a result, an increasingly small number of companies determined what music was played on specific formats. In addition, radio station group owners introduced cost-cutting measures that reduced local staff and centralized programming decisions at the regional, or cluster, level. With individual station autonomy drastically limited and a broad trend toward shorter playlists, musicians had far fewer opportunities to receive airplay.

Save local radio? If it’s local radio the NAB is concerned about, the proposed law already takes them into account.

The Bigger Picture

If the United States doesn’t yet recognize neighboring rights, this isn’t true for most of the rest of the world. As early as the 1961 “Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations,” countries have allowed sound recording copyright owners and recording artists to collect royalties on public performances of their works. The original signatories to the treaty – France, Germany, the Netherlands, Belgium, Luxembourg and Italy – have since been joined by nearly seventy “Qualifying Countries” whose national laws recognize neighboring rights as a condition for reciprocal treatment.[7]

The United States is not among those countries which qualify. Consequently, American record companies, producers and recording artists are generally unable to collect the royalties that would otherwise accrue to them throughout the world. On its website, the RIAA puts the figure at “tens of millions of dollars each year.”[8] The uncollected money is used in France, according to attorney Nancy Prager, to subsidize music education in French schools.[9] One could imagine that in other countries the royalties might simply be redistributed to rights holders from qualifying countries, to the loss of American-based rights holders and artists.

The existence of uncollected foreign neighboring rights royalties turns the NAB’s argument on its head, making the specter of windfalls by “foreigners” even more laughable than it already is. Each of the “foreign-owned” majors has substantial assets in the United States, employs thousands of people and records thousands of American artists and performers who stand to gain under the new law. Moreover, Warner Music Group (which is still American-owned) and independents which are American-owned or have distribution deals with American companies account for well over 30% of the market. Does the NAB think they aren’t worthy of consideration? Or do their broadcasters simply not play their music?

Putting aside the issue of uncollected foreign royalties, the case for a sound performance royalty on radio play is easy to make. The sound recording – the actual performance of a musical work captured on record – is at least as important as the song it embodies. People don’t want to listen to just any old songs, but ones recorded by recording artists they like. This is what attracts listeners and thus advertising dollars. Publishers and songwriters already share in radio’s revenues. Why shouldn’t record companies and recording artists?




[3] The Performance Rights Act of 2009 specifically provides that neighboring rights royalties will not diminish public performance royalties paid to publishers and songwriters.

[4] News of recent settlements by webcasters, including Pandora, can be found at . For a good, short article on webcasting royalties without the hyperbole of those who think they should be able to earn money from advertising but pay nothing for the music which brings it, see

[5] Mr. Liebowitz’s study may be downloaded at

[6] The complete report is at




Big Brother Amazon?

By now most everyone online knows that last week Amazon deleted copies of George Orwell’s “Animal Farm” and “1984” from customers’ Kindles. Amazon says it did so because it was fooled by the supplier of the e-books, which, it turns out, didn’t have the rights.

There is nothing explicitly stated in Amazon’s licensing agreement with customers, however, to allow it (effectively) to enter a customer’s Kindle and delete what the customer has paid for.

The Kindle License Agreement and Terms of Use states as follows:

Use of Digital Content. Upon your payment of the applicable fees set by Amazon, Amazon grants you the non-exclusive right to keep a permanent copy of the applicable Digital Content and to view, use, and display such Digital Content an unlimited number of times, solely on the Device or as authorized by Amazon as part of the Service and solely for your personal, non-commercial use.

There are no restrictions, no caveats that would hinder the conclusion that Amazon breached this clause.

That said, it should be clear that while Amazon may sell books, it does not sell e-books. It licenses software. As the Terms of Use state:

Digital Content will be deemed licensed to you by Amazon under this Agreement unless otherwise expressly provided by Amazon.

All of the Software is licensed, not sold, and such license is non-exclusive.

You acknowledge that the sale of the Device to you does not transfer to you title to or ownership of any intellectual property rights of Amazon or its suppliers.

While the latter provision is really meant to apply to the underlying work and not the particular copy on your Kindle, it does drive the point home that the End User is nobody, at least in the opinion of Amazon’s lawyers.

The current uproar should make Amazon wary of repeating such an episode, but more importantly it raises the issues of both the value and risk of paying for digital content which a company has only licensed and to which the company has ongoing access via the electronic device on which it is stored. Think of it like going to a book store and having the clerk follow you home to stick his foot in your front door.

Numerous kinds of disputes can result in a company (or an individual) losing the right to distribute a written text or any other copyrightable property, including sound recordings, musical compositions, films and videos. Such disputes include copyright infringement, libel, fraud, misrepresentation (apparently as in the case of the Orwell e-books), negligence and contractual differences.

Due to unauthorized sampling, Warner Bros. lost the right to distribute the original version of Biz Markie’s album, “I Need A Haircut.” As part of the verdict, the court ordered Warner to delete the album from its catalog and recall all extant copies from stores and distributors, as far as practicable. (Similarly, Arista was ordered to yank the original release of the Notorious B.I.G.’s “Ready To Die.”) How much better it would have been for the plaintiffs if the court could have ordered the defendants to pull every copy out of the hands of consumers!

The capability (and evident willingness) of Amazon to remove intellectual property not just from its website, but from its consumers’ devices, is a welcome gift to plaintiffs in disputes over intellectual property, since it is usually the desire of such plaintiffs to have the offending work obliterated from the face of the earth. Those who value privacy or have a less Orwellian view of how information should be controlled will understand why this is not good.

Perhaps even more chilling is the possibility that Amazon’s capabilities and policies could be exploited by the government. Amazon sells plenty of “adult” books, any number of which could be found “obscene” in some locale. It is an axiom of federal obscenity law that no one has the right to receive obscenity even for private consumption. (One needn’t even know the item is “obscene.” Under the law, one only need know that the material is “sexually oriented.”) Now suppose an e-book is found on someone’s computer, say, in the Middle District of Georgia, or the Western District of Pennsylvania, and is declared to be “obscene” by a jury. Amazon might or might not stop offering it for sale, but it is certain that Amazon has the data of everyone who has downloaded the e-book. What happens when the government approaches Amazon and asks it to turn over information on those customers? What if the government takes an interest in who’s reading certain other kinds of books – for example, those that pertain to terrorism or radical environmentalism or animal rights – and wants to know what readers are thinking about them?

The policies are already in place for Amazon’s cooperation:

No Illegal Use and Reservation of Rights. You may not use the Device, the Service or the Digital Content for any illegal purpose.

Under federal law, it is illegal to receive obscene matter via “interstate commerce,” e.g., Amazon’s Whispersync network. Furthermore, someone who applied knowledge s/he learned in a book to committing any illegal act would be in breach of this provision.

The Device Software will provide Amazon with data about your Device and its interaction with the Service (such as available memory, up-time, log files and signal strength) and information related to the content on your Device and your use of it (such as automatic bookmarking of the last page read and content deletions from the Device). Annotations, bookmarks, notes, highlights, or similar markings you make in your Device are backed up through the Service.

Marking passages and highlights could be used against someone in a prosecution, at the very least to impeach his or her character.

Protection of Amazon.com and Others: We release account and other personal information when we believe release is appropriate to comply with the law; … or protect the rights, property, or safety of Amazon.com, our users, or others.

This overbroad provision speaks for itself. Amazon has reserved the right to do anything it believes is “appropriate,” whatever that means.

This article is not an attack on Amazon, but a suggestion that it take a good look at how it formulates policy, respects customers and defends privacy. In many ways, intellectual property and criminal laws lag far behind developments in digital commerce and without big and powerful companies like Amazon taking principled stands, consumers can expect more incursions into their privacy and freedom. Orwell would not be amused.


(For more information on the Biz Markie and Notorious B.I.G. disputes, see “A Short History of Sample Clearing” on Clearance 13′-8″, Inc.’s website.)

File-sharing down in the UK: It’s all in the business model

According to U.K.-based Music Ally, which describes itself as “the leading digital music business information and strategy company…since 2001,” a survey of 1000 music fans[1] shows that “regular music filesharing amongst UK teenagers” has dropped by a third, and that a higher percentage are “obtaining their downloads via purchase.”

“We think the positive figures represent both greater takeup of legal streaming services among teens – in particular YouTube – and other competing ways of finding music for free such as CD burning and Bluetooth,” Music Ally’s site reports. (Currently YouTube’s offerings of majors’ songs are restricted due to a dispute between YouTube and PRS, Britain’s public performance rights society.)

The Guardian‘s report on the survey added this:

The research revealed that many teenagers (65%) are streaming music regularly, with more 14 to 18 year olds (31%) listening to streamed music on their computer every day compared with music fans overall (18%).

The picture may be more complex than a simple shift from filesharing to streaming, with people sharing music in new ways such as via bluetooth technology, on blogs, and through copying, also known as ripping content from friends’ MP3 devices.

But if these changes have occurred, it is easy to see why. The major record companies have spent most of their energy persecuting and prosecuting up- and downloaders. (This policy has been – temporarily? – suspended in the United States, but the majors are leaning on regulators across Europe to cut or slow down internet connections of people who download files.) Little effort was given to the task of understanding listening habits and bringing music online to reach the broadest number of consumers and listeners. A trickle of music to this or that service, and under onerous conditions to both online retailers and consumers (remember digital rights management?), was the best they could muster.

It was this complete anti-marketing strategy which facilitated the growth of file-sharing.

In fact, what is happening is that people are discovering that streaming and buying from sites like iTunes is actually less time-consuming than illegal downloading. Even if you have a lot of time on your hands, when there is something you want, you don’t necessarily want to spend a lot of time getting it. It might seem to record executives that “everything” is readily available on sites like Pirate Bay, but this is far from the case.

Then there is the question of quality. A rip might come from a scratched CD or vinyl record (Radiohead’s Itch is an example), or be at an embarrassingly low bitrate only suitable for listening on a cheap pair of earphones (128 kbps is enough to strip out the production quality of most music), or be missing songs, or offer the songs out of order or without proper track titles so the downloader is left surfing to Amazon or allmusic to get the right information. Downloading from blogs has its own hazards, like that of malware being installed on your computer. This is all more time wasted.

And speaking of Radiohead’s Itch, why can I download this release from Pirate Bay, but not buy it from Amazon?

As the Dead Kennedys know, nothing beats convenience. The more quality downloads and streaming at competitive prices there are at the greatest number of online outlets, the more people will pay for downloads and streaming. It’s a pretty simple business model. The majors would be wise to follow it.


[1] Unfortunately neither Music Ally nor The Leading Question, its research arm, provided details as to whether responders to the survey were chosen randomly, how many declined to participate or whether participants were self-selected. Nor were any other details of the survey or its degree of reliability released on Music Ally’s or The Leading Question’s respective websites. Caveat emptor as to any conclusions.