Copyright/Intellectual Property

Hoopla Digital and HarperCollins Disrupt Library E-Lending

25 June 2017

From Copyright and Technology:

An announcement this week by hoopla digital and HarperCollins augurs big changes in the ways that public libraries make e-books available. It sets the stage for realignment of the relationships between publishers and libraries, and it could have longer-term ripple effects on the entire e-book market.

For more than a decade, public libraries have been able to “lend” e-books using a certain model: the library “acquires” a title through an e-lending platform such as OverDrive; the library then has one “copy” that it can make available to patrons at a time. The platform sends each patron a DRM-protected file that allows reading for up to the library’s lending period. If one patron has the e-book “checked out” then another patron can’t read it until the period expires or the first patron “returns” it.

The library technologist Eric Hellman calls this model “Pretend It’s Print (PIP),” while the industry term is “one copy, one user.” PIP is an apt term because the library pays a fixed price for the title, just as it would do if it were acquiring a print book, and the publisher can account for it much as if it were a sale. If a library wants to enable more than one patron to read the title at a time, it has to “acquire” multiple “copies.”

. . . .

Hoopla digital, an OverDrive competitor, is a digital “lending” platform for libraries run by Midwest Tape, a leading supplier of library-ready physical media products (such as CDs, DVDs, and Blu-ray discs). It had been licensing digital audiobooks from HarperCollins since last year (as well as from many indie publishers). The new deal expands the relationship into e-books. HarperCollins will make over 15,000 titles available, though apparently not including frontlist titles.

What’s new here? Instead of libraries paying a fixed upfront price, as in the PIP model, they pay per “loan,” and there are no longer any limits on how many people can read an e-book at the same time. This is a boon to library patrons: it means that all titles can be available at any time, with no waiting lists. Otherwise the reader experience is much the same as with PIP, as is the technology used to deliver and display e-books.

The innovation is really on the financial and licensing side. At a basic level, this arrangement is risky for libraries. With the hoopla digital model, libraries can’t budget for “acquisitions” as they have done with print books for centuries; instead they have to bear less predictable costs that rise and fall with demand for titles. At the same time, it enables libraries to license long-tail titles that they wouldn’t normally “acquire” because they don’t expect sufficient demand to make the one-time price worthwhile. That’s another benefit to users.
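The budgeting difference described above can be made concrete with a toy model. All prices and demand figures below are purely hypothetical illustrations, not actual hoopla or HarperCollins terms:

```python
# Toy comparison of the two library e-lending cost models.
# All numbers are hypothetical illustrations, not actual pricing.

PIP_PRICE_PER_COPY = 55.00   # one-time "pretend it's print" price per copy
COST_PER_LOAN = 2.00         # hypothetical per-circulation fee

def pip_cost(copies):
    """Fixed, predictable cost: pay once per simultaneous-use copy."""
    return copies * PIP_PRICE_PER_COPY

def per_loan_cost(loans):
    """Variable cost: rises and falls with actual patron demand."""
    return loans * COST_PER_LOAN

# A popular title: 2 PIP copies vs. 150 actual loans in a year.
popular_pip = pip_cost(2)            # 110.00, known in advance
popular_rental = per_loan_cost(150)  # 300.00, only known after the fact

# A long-tail title: under PIP the library skips it entirely; under
# the per-loan model it costs nothing unless someone borrows it.
long_tail_rental = per_loan_cost(3)  # 6.00

print(popular_pip, popular_rental, long_tail_rental)
```

The sketch shows both halves of the trade-off: demand-driven costs are harder to budget for, but long-tail titles become essentially free to carry.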

Publishers, meanwhile, get paid based on actual readership, not on a per-title basis. That’s good for them conceptually, because it makes libraries more like channel partners. So why are publishers slow to embrace this model — which hoopla digital has offered for e-books since 2014?

One reason is author contracts. Many author contracts don’t address royalties for much beyond simple book purchases. From the perspectives of rights management and royalty processing, both e-book retail and PIP library e-book revenues can be treated similarly to print book sales. Things get difficult for publishers when they license into access models that fall short of outright purchases: sometimes they have to pay authors for each access as if it were a purchase, even if a user only reads a few pages, because the author contracts won’t allow otherwise.

. . . .

Trade book authors typically own copyrights in their works and only license publishers for specific purposes, such as print book sales; so publishers have little flexibility to enter into innovative license agreements without having to renegotiate author contracts. And renegotiating author contracts is, to put it mildly, not scalable.

. . . .

CPC is tantamount to an e-book rental model. (When I asked Eric Hellman for a name for this model along the lines of PIP, he replied “it’s just rental.”) Some early e-book services in the U.S. offered rentals, which were met with indignant backlash from people who insisted that “ownership” was the only acceptable model — never mind that whether you actually own a downloaded e-book is debatable — and that rentals were some sort of evil plot to deprive people of their rights (although e-book rental has been successful outside the U.S., such as in Japan and South Korea). Now libraries are eagerly embracing rentals, albeit ones users only pay for indirectly through taxes; Hoopla digital has already signed up public library systems in Boston, Philadelphia, Chicago, San Francisco, and Los Angeles.

Link to the rest at Copyright and Technology 

PG says this is one more problem for traditional publishers who have used publishing contracts that provide different (usually higher) royalties for the licensing of rights than for the sales of physical books or the “sales” of ebooks (which Amazon and others say are “licensed, not sold” to Amazon customers by publishers).

So far, most publishers have ignored this problem (by pretending ebooks are sold and paying the lower royalty rate, of course) instead of facing it by amending their publishing contracts for ebooks to clarify royalties (and hopefully raise them for authors).

PG has written extensively about this problem, for example, here and here.

Supreme Court’s Lexmark Decision Expands Scope of Patent Exhaustion Defense

3 June 2017

From Fenwick & West LLP:

For the fifth time this session, and following fast on the heels of its landmark decision in TC Heartland v. Kraft Foods earlier in May, the Supreme Court again reversed the Federal Circuit. The case, Impression Products, Inc. v. Lexmark International, Inc., significantly expands the scope of the patent exhaustion doctrine. The doctrine of patent exhaustion limits the rights that remain available to a patentee following the initial authorized sale of a patented item. In a 7-1 opinion issued on May 30, the Supreme Court reversed the Federal Circuit analysis concerning both domestic and foreign sales, overturning more than two decades of precedent at the lower courts. It held that “a patentee’s decision to sell a product exhausts all of its patent rights in that item, regardless of any restrictions the patentee purports to impose or the location of the sale.”

This case arises from a dispute between Lexmark, a manufacturer of printer cartridges, and resellers of its cartridges. Lexmark makes proprietary toner cartridges for printers, which it markets and sells both internationally and domestically. The Lexmark cartridges are sold either at full price, or at a discounted rate under its return program. Each return program cartridge carries a contractual single-use/no-resale obligation on the purchaser not to refill the cartridge with toner and reuse it. Other companies known as “re-manufacturers” acquire empty Lexmark cartridges (including ones sold under the return program) from purchasers in the United States and abroad, refill them with toner, and then resell them at lower prices.

Lexmark brought a patent infringement suit against several of these resellers. The litigation proceeded until only a single count of infringement remained against a single defendant, Impression Products. Impression Products did not contest the enforceability of Lexmark’s patents, or that the patents covered the cartridges that Impression Products imported and sold. Rather, Impression Products contested liability based solely on the defense of patent exhaustion and moved to dismiss Lexmark’s claim of infringement with respect to both cartridges sold domestically and those sold abroad.

With respect to cartridges that Lexmark sold domestically, the district court found that the doctrine of patent exhaustion barred Lexmark’s claims, even for cartridges subject to the post-sale use restrictions of Lexmark’s return program.

. . . .

Sitting en banc, the Federal Circuit ruled in favor of Lexmark on both the domestic and international exhaustion issues, holding that neither Quanta nor Kirtsaeng overruled the limits on patent exhaustion under prior Federal Circuit case law.

. . . .

The Lexmark Court first considered the question of whether a patentee that sells a patented article domestically subject to express restrictions on a purchaser’s right to reuse or resell the product may then enforce those restrictions by bringing a lawsuit for patent infringement. In examining this question, the Lexmark Court drew heavily from its prior patent exhaustion decisions in Quanta and United States v. Univis Lens Co., 316 U. S. 241 (1942). These cases uniformly held that the first authorized sale in the U.S. of a material object terminates patent rights associated with that object and leaves a patentee without the ability, under patent law, to control the use or disposition of the product after the initial sale. These cases, however, left open the possibility that a patentee may still be able to place contractual restrictions on the use of the items it sold.

With Lexmark, the Supreme Court slammed that door shut. Indeed, all eight Justices agreed that—under the patent exhaustion doctrine—Lexmark’s sale of the cartridges extinguished the asserted patent rights, notwithstanding the contractual restrictions on reuse Lexmark attempted to place on the articles prior to sale. The Court based its decision not only on its prior patent exhaustion cases, but also on its copyright ruling in Kirtsaeng, which addressed the first sale doctrine codified at Section 109(a) of the Copyright Act. It explained its view that: “This well-established exhaustion rule marks the point where patent rights yield to the common law principle against restraints on alienation.”

. . . .

The Court noted that, while “[i]t is true that a patented method may not be sold in the same way as an article or device, [m]ethods nonetheless may be ‘embodied’ in a product, the sale of which exhausts patent rights.” Quanta also held that the patent exhaustion doctrine applied if the item sold is only a component of a device but “the incomplete article substantially embodies the patent because the only step necessary to practice the patent is the application of common processes or the addition of standard parts.” In other words, if an item “embodies essential features of the patented invention,” including method claims, and “their only reasonable and intended use was to practice the patent,” the sale of the item will exhaust the claim.

The Lexmark decision does nothing to disturb the Quanta framework. Accordingly, under the combination of Lexmark and Quanta, patent exhaustion applies where critical components of a claimed apparatus or method are sold by the patentee either domestically or internationally.

. . . .

The Lexmark Court suggested two situations where patent exhaustion may not apply.

First, because the doctrine depends on an initial sale, it may not apply where a patentee distributes a patented article pursuant to license, as opposed to in an outright sale. As the Court noted, “[a] patentee can impose restrictions on licensees because a license does not implicate the same concerns about restraints on alienation as a sale.” After all, “a license is not about passing title to a product, it is about changing the contours of the patentee’s monopoly.” By contrast, “[p]atent exhaustion reflects the principle that, when an item passes into commerce, it should not be shaded by a legal cloud on title as it moves through the marketplace.” It is, of course, common to distribute software, firmware, and other technology via license rather than sale, and thus patent exhaustion may be inapplicable for such distributions.

Second, patent exhaustion may also not apply where the unauthorized sale of a patented article occurs.

Link to the rest at Fenwick & West LLP and thanks to Colleen for reminding me to post on this topic.

Colleen wondered if the Lexmark decision might have an impact on ebooks and the first sale doctrine that permits the resale of printed books by the purchasers thereof without restriction.

PG could hold forth on this topic at great length, but, in a reversal of his usual practice, he will restrain himself on this occasion.

The Lexmark decision is of interest to traditionally-published authors because it clearly distinguishes between the rights of the patent holder if a product embodying the patented apparatus is sold or if it is licensed.

If the product is sold, the patent is exhausted and the patent owner has no further rights to prevent anybody from doing almost anything with the product, including refilling it. If the product is licensed, but not sold, the patent holder may be able to control what happens to the product later on.

A US Circuit Court of Appeals has held that there is a distinction between licensing and sales under copyright law. PG has previously posted about this decision, FBT Productions LLC v. Aftermath Records, 621 F.3d 958 (9th Cir. 2010).

The FBT case involved the rapper Eminem. For iTunes downloads, Eminem’s publisher was paying the same royalties as would have been due upon the sale of CD versions of the songs. Eminem contended that the relationship between the publisher and iTunes was a license of a subsidiary right, for which a much higher royalty was due under the singer’s publishing contract.

Ultimately, the court held that downloaded songs were licensed, not sold. Key elements of the court’s decision were that only a single master copy of each song was provided to iTunes, and that Apple then made copies for downloading by customers, as opposed to selling a separate CD to each purchaser.

The impact on authors comes with ebooks.

The Terms of Use for ebooks on the websites of Amazon, Barnes & Noble, Kobo, etc., say that ebooks are licensed to the purchaser, not sold to the purchaser.

For a long time prior to the Eminem case and, unaccountably, after it, a great many publishers have included boilerplate royalty provisions that pay a percentage of the net income from each ebook sold by the publisher. Typically, this percentage is 25%. Quite often, a separate subsidiary rights section of the contract provides a much higher percentage royalty for the licensing of the author’s books.
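The stakes of the sale-versus-license characterization can be sketched with a toy calculation. The 25% “sale” rate is the typical figure quoted above; the 50% licensing rate and the per-ebook net income are hypothetical stand-ins, since actual subsidiary-rights percentages vary by contract:

```python
# Sketch of the royalty gap between "sale" and "license" treatment.
# The 25% rate is the typical figure cited above; the 50% rate and
# the net-income figure are hypothetical illustrations.

SALE_ROYALTY = 0.25      # typical ebook "sale" royalty on net income
LICENSE_ROYALTY = 0.50   # hypothetical subsidiary-rights royalty

def author_royalty(net_income, treated_as_license):
    """Author's share of one ebook transaction under each theory."""
    rate = LICENSE_ROYALTY if treated_as_license else SALE_ROYALTY
    return net_income * rate

net = 7.00  # hypothetical publisher net income on one ebook
as_sale = author_royalty(net, treated_as_license=False)    # 1.75
as_license = author_royalty(net, treated_as_license=True)  # 3.50
print(as_sale, as_license)
```

Under these illustrative numbers, treating the transaction as a license would double the author’s per-copy royalty, which is why the characterization matters.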

The Lexmark case addresses a point of great concern to publishers – pirated copies of ebooks.

If a court were to apply the Lexmark reasoning to copyrights, the first sale of an ebook would exhaust all of the rights the publishers hold via their contracts with authors and ebooks could be freely resold on the used books market just like printed books are. If ebooks are licensed, resale of ebooks can be restricted. But higher royalty rates would seem to apply.

Finally, a bit of background: the Federal Circuit is an appeals court that only handles appeals from decisions of US District Courts on patents (plus a bunch of even more obscure items), regardless of the location of the original action.

US Circuit Courts of Appeal handle appeals of decisions within a particular geographical area, e.g. the Third Circuit Court of Appeals handles appeals of cases tried in Pennsylvania, New Jersey, Delaware, and the Virgin Islands.

Sometimes the various circuit courts of appeal issue conflicting decisions. That requires the US Supreme Court to straighten out the conflicts.

The theory behind the establishment of the Federal Circuit is that patent law is its own weird little area of the law, sometimes with a lot of technology and math thrown in, and that judges who specialize in hearing cases of that sort will usually be able to handle those appeals more efficiently.

As with the Circuit Courts of Appeal, decisions of the Federal Circuit can be appealed to the Supreme Court. The Supreme Court declines to hear most appeals from any of the lower appellate courts, however.

Lately, the Federal Circuit has gone off on a few frolics of its own and the Supreme Court has accepted more appeals in order to straighten the law out.

Google Is Not Going the Way of Kleenex, Cellophane, or Aspirin

20 May 2017

From Fortune:

“Google” is not generic—at least not according to a federal appeals court.

In an important trademark ruling, the 9th Circuit Court of Appeals found that “googled” may have become a synonym for Internet searching, but that doesn’t mean the company can’t protect its name.

The ruling, handed down in San Francisco this week, involved a man who registered hundreds of website names such as “GoogleDisney.com” and “GoogleBarackObama.com.” After Google persuaded Internet regulators to hand over the names, the man sued to strip Google of its trademark, arguing the word “google” had become generic.

. . . .

This has happened to other brands in the past, including Kleenex. Under federal law, anyone can wipe out a trademark—and the legal protection it offers—if they can show most of the public thinks of the mark as a common word for a good or service. This phenomenon is known as “genericide.”

But the appeals court says this isn’t the case when it comes to Google. The court reviewed evidence, including lyrics from the rapper T-Pain that say “Google my name,” acknowledging that people use the word “googled” as a way to say “search the Internet.”

This isn’t enough, though, to invalidate Google’s trademark. As the court explains, people still regard Google as a brand in its own right, even if its name is also a verb.

To make the point, the appeals court cited an earlier decision involving a restaurant that replaced customers’ orders for “a coke” with a non-Coca-Cola beverage. In that case, Coke won because the restaurant failed to show people thought of cola and Coke interchangeably.

Link to the rest at Fortune

AAP Files Amicus Brief in Important Copyright Case

13 May 2017

From the Association of American Publishers:

The Association of American Publishers (AAP) today filed an amicus curiae brief in Capitol Records, LLC v. ReDigi, Inc., an important copyright case concerning whether “used” creative digital content can be resold in the online environment. AAP has urged the U.S. Court of Appeals for the Second Circuit to affirm the district court’s decision, which found no plausible legal interpretation under the Copyright Act that would permit a company to reproduce and resell digital music files without a license. In addition to rejecting application of the “first sale” doctrine under Section 109 of the Act, the district court found that all four factors of fair use under Section 107 weighed against the defendant ReDigi, which has appealed the decision.

In its brief, the AAP notes that the rapid growth of digital publications (including eBooks, professional and scholarly publications, and adaptive educational content) makes the threat of ReDigi’s business activities to publishers and their markets “not hypothetical.” AAP explains the critical interest of publishers in ensuring that federal courts apply the first sale doctrine as a defense against infringement pursuant to the plain meaning of the statute, which limits application of the defense to situations where the owner of a lawful copy of the copyrighted work embodied in a tangible “material object” chooses to distribute that particular copy. The owner of the lawful copy cannot assert the defense if distribution of the work is achieved by reproducing the copy.

AAP cites to the clear reports of both the U.S. Copyright Office and the U.S. Commerce Department, in 2001 and 2016 respectively, which concluded that the ability of a consumer to resell a purchased copy or lend it without restriction applies only to tangible property, and that extending the first sale doctrine to digital transmissions would carry risks to copyright owners’ primary markets.

Link to the rest at the Association of American Publishers

Torching the Modern-Day Library of Alexandria

27 April 2017

From The Atlantic:

You were going to get one-click access to the full text of nearly every book that’s ever been published. Books still in print you’d have to pay for, but everything else—a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe—would have been available for free at terminals that were going to be placed in every local library that wanted one.

At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You’d be able to highlight passages and make annotations and share them; for the first time, you’d be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable—as alive in the digital world—as web pages.

It was to be the realization of a long-held dream. “The universal library has been talked about for millennia,” Richard Ovenden, the head of Oxford’s Bodleian Libraries, has said. “It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution.” In the spring of 2011, it seemed we’d amassed it in a terminal small enough to fit on a desk.

“This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life,” one eager observer wrote at the time.

On March 22 of that year, however, the legal agreement that would have unlocked a century’s worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York.

When the library at Alexandria burned it was said to be an “international catastrophe.” When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who’d had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.

. . . .

Google’s secret effort to scan every book in the world, codenamed “Project Ocean,” began in earnest in 2002 when Larry Page and Marissa Mayer sat down in the office together with a 300-page book and a metronome. Page wanted to know how long it would take to scan more than a hundred-million books, so he started with one that was lying around. Using the metronome to keep a steady pace, he and Mayer paged through the book cover-to-cover. It took them 40 minutes.

Page had always wanted to digitize books. Way back in 1996, the student project that eventually became Google—a “crawler” that would ingest documents and rank them for relevance against a user’s query—was actually conceived as part of an effort “to develop the enabling technologies for a single, integrated and universal digital library.” The idea was that in the future, once all books were digitized, you’d be able to map the citations among them, see which books got cited the most, and use that data to give better search results to library patrons. But books still lived mostly on paper. Page and his research partner, Sergey Brin, developed their popularity-contest-by-citation idea using pages from the World Wide Web.
By 2002, it seemed to Page like the time might be ripe to come back to books. With that 40-minute number in mind, he approached the University of Michigan, his alma mater and a world leader in book scanning, to find out what the state of the art in mass digitization looked like. Michigan told Page that at the current pace, digitizing their entire collection—7 million volumes—was going to take about a thousand years. Page, who’d by now given the problem some thought, replied that he thought Google could do it in six.

. . . .

He offered the library a deal: You let us borrow all your books, he said, and we’ll scan them for you. You’ll end up with a digital copy of every volume in your collection, and Google will end up with access to one of the great untapped troves of data left in the world. Brin put Google’s lust for library books this way: “You have thousands of years of human knowledge, and probably the highest-quality knowledge is captured in books.” What if you could feed all the knowledge that’s locked up on paper to a search engine?

By 2004, Google had started scanning. In just over a decade, after making deals with Michigan, Harvard, Stanford, Oxford, the New York Public Library, and dozens of other library systems, the company, outpacing Page’s prediction, had scanned about 25 million books. It cost them an estimated $400 million. It was a feat not just of technology but of logistics.

. . . .

The stations—which didn’t so much scan as photograph books—had been custom-built by Google from the sheet metal up. Each one could digitize books at a rate of 1,000 pages per hour. The book would lie in a specially designed motorized cradle that would adjust to the spine, locking it in place. Above, there was an array of lights and at least $1,000 worth of optics, including four cameras, two pointed at each half of the book, and a range-finding LIDAR that overlaid a three-dimensional laser grid on the book’s surface to capture the curvature of the paper. The human operator would turn pages by hand—no machine could be as quick and gentle—and fire the cameras by pressing a foot pedal, as though playing at a strange piano.
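The article’s throughput numbers can be sanity-checked with a back-of-envelope calculation. The 300-page average volume and round-the-clock operation are illustrative assumptions of mine, not figures from the article:

```python
# Back-of-envelope check: how many 1,000-pages/hour stations would it
# take to scan Michigan's 7 million volumes in roughly six years?
# The 300-page average and 24/7 operation are illustrative assumptions.

PAGES_PER_HOUR_PER_STATION = 1_000
AVG_PAGES_PER_VOLUME = 300
VOLUMES = 7_000_000
YEARS = 6

total_pages = VOLUMES * AVG_PAGES_PER_VOLUME          # 2.1 billion pages
hours_available = YEARS * 365 * 24                    # ~52,560 hours
stations_needed = total_pages / (PAGES_PER_HOUR_PER_STATION * hours_available)

print(round(stations_needed))  # on the order of 40 stations running nonstop
```

Under these assumptions, Page’s six-year claim implies only a few dozen stations running continuously, which makes the claim look like logistics rather than fantasy.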

What made the system so efficient is that it left so much of the work to software. Rather than make sure that each page was aligned perfectly, and flattened, before taking a photo, which was a major source of delays in traditional book-scanning systems, cruder images of curved pages were fed to de-warping algorithms, which used the LIDAR data along with some clever mathematics to artificially bend the text back into straight lines.

. . . .

In August 2010, Google put out a blog post announcing that there were 129,864,880 books in the world. The company said they were going to scan them all.

Of course, it didn’t quite turn out that way. This particular moonshot fell about a hundred-million books short of the moon. What happened was complicated but how it started was simple: Google did that thing where you ask for forgiveness rather than permission, and forgiveness was not forthcoming. Upon hearing that Google was taking millions of books out of libraries, scanning them, and returning them as if nothing had happened, authors and publishers filed suit against the company, alleging, as the authors put it simply in their initial complaint, “massive copyright infringement.”

. . . .

As Tim Wu pointed out in a 2003 law review article, what usually becomes of these battles—what happened with piano rolls, with records, with radio, and with cable—isn’t that copyright holders squash the new technology. Instead, they cut a deal and start making money from it. Often this takes the form of a “compulsory license” in which, for example, musicians are required to license their work to the piano-roll maker, but in exchange, the piano-roll maker has to pay a fixed fee, say two cents per song, for every roll they produce. Musicians get a new stream of income, and the public gets to hear their favorite songs on the player piano. “History has shown that time and market forces often provide equilibrium in balancing interests,” Wu writes.

But even if everyone typically ends up ahead, each new cycle starts with rightsholders fearful they’re being displaced by the new technology. When the VCR came out, film executives lashed out. “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone,” Jack Valenti, then the president of the MPAA, testified before Congress. The major studios sued Sony, arguing that with the VCR, the company was trying to build an entire business on intellectual property theft. But Sony Corp. of America v. Universal City Studios, Inc. became famous for its holding that as long as a copying device was capable of “substantial noninfringing uses”—like someone watching home movies—its makers couldn’t be held liable for copyright infringement.

The Sony case forced the movie industry to accept the existence of VCRs. Not long after, they began to see the device as an opportunity. “The VCR turned out to be one of the most lucrative inventions—for movie producers as well as hardware manufacturers—since movie projectors,” one commentator put it in 2000.
It only took a couple of years for the authors and publishers who sued Google to realize that there was enough middle ground to make everyone happy. This was especially true when you focused on the back catalog, on out-of-print works, instead of books still on store shelves. Once you made that distinction, it was possible to see the whole project in a different light. Maybe Google wasn’t plundering anyone’s work. Maybe they were giving it a new life. Google Books could turn out to be for out-of-print books what the VCR had been for movies out of the theater.

If that was true, you wouldn’t actually want to stop Google from scanning out-of-print books—you’d want to encourage it. In fact, you’d want them to go beyond just showing snippets to actually selling those books as digital downloads.

. . . .

Those who had been at the table crafting the agreement had expected some resistance, but not the “parade of horribles,” as Sarnoff described it, that they eventually saw. The objections came in many flavors, but they all started with the sense that the settlement was handing to Google, and Google alone, an awesome power. “Did we want the greatest library that would ever exist to be in the hands of one giant corporation, which could really charge almost anything it wanted for access to it?” Robert Darnton, then president of Harvard’s library, has said.

Darnton had initially been supportive of Google’s scanning project, but the settlement made him wary. The scenario he and many others feared was that the same thing that had happened to the academic journal market would happen to the Google Books database. The price would be fair at first, but once libraries and universities became dependent on the subscription, the price would rise and rise until it began to rival the usurious rates that journals were charging, where for instance by 2011 a yearly subscription to the Journal of Comparative Neurology could cost as much as $25,910.

Although academics and library enthusiasts like Darnton were thrilled by the prospect of opening up out-of-print books, they saw the settlement as a kind of deal with the devil. Yes, it would create the greatest library there’s ever been—but at the expense of creating perhaps the largest bookstore, too, run by what they saw as a powerful monopolist. In their view, there had to be a better way to unlock all those books. “Indeed, most elements of the GBS settlement would seem to be in the public interest, except for the fact that the settlement restricts the benefits of the deal to Google,” the Berkeley law professor Pamela Samuelson wrote.

Link to the rest at The Atlantic and thanks to Valerie for the tip.

Andy Warhol Estate Sues Photog Over Prince Photo Copyright Fight

16 April 2017

From PetaPixel:

The estate of legendary artist Andy Warhol has filed a lawsuit against New York City photographer Lynn Goldsmith. The reason? Goldsmith believes Warhol violated her copyright by turning one of her portraits of Prince into a painting.

The New York Daily News reports that the lawsuit is a “preemptive strike” against the photographer before she gets a chance to file a copyright infringement lawsuit first.

The photo at the center of the battle is a publicity portrait of the late musician Prince, captured by Goldsmith back in 1981. Three years after the photo was made, Warhol decided to create his “Prince” series of paintings and “drew inspiration” from the picture to create his works.

Goldsmith contends that the Warhol paintings infringe her photo, but the Warhol estate argues that the paintings were transformative enough to be considered new works.

“Although Warhol often used photographs taken by others as inspiration for his portraits, Warhol’s works were entirely new creations,” writes Warhol estate lawyer Luke Nikas in the lawsuit. “As would be plain to any reasonable observer, each portrait in Warhol’s Prince Series fundamentally transformed the visual aesthetic and meaning of the Prince Publicity Photograph.”

. . . .

“It is a crime that so many ‘artists’ can get away with taking photographers’ images and painting on them or doing whatever to them without asking permission of the ‘artist’ who created the image in the first place,” the photographer wrote.

Link to the rest at PetaPixel

In Music, DRM Is Back While Ownership Is Going Away

13 April 2017

From Copyright and Technology:

The RIAA’s annual revenue figures for recorded music are a goldmine of information about the state, health, and direction of the music industry. The 2016 figures that the RIAA published at the end of March generated a few common headlines in the trade and business press:

  • Recorded music revenue in the United States is finally growing again, up 11% over last year after five years of flat-to-slight-decline;
  • Subscription streaming revenue growth accelerated, more than doubling since 2015.  It is now the majority source (51%) of recorded music revenue, even counting CDs and other physical products;
  • The vinyl renaissance is hitting its limits, as growth has slowed and vinyl looks headed for a new peak of 6% of total industry revenue.

But beneath those headlines lie a few more developments which indicate fundamental tipping points in music’s digital transformation.

First, and most relevant to us here, is something I’ve predicted but the RIAA numbers make it official: encrypted digital music accounts for more revenue than DRM-free music (digital or analog). In other words — if you define DRM as any encrypted means of content delivery — DRM for music is back.

. . . .

[O]nly 30% of digital music comes from DRM-free sources while 70% is encrypted in some way.

. . . .

The only modes of digital music delivery that don’t use encryption today are CDs, downloads, and many simulcast streams of AM/FM radio signals. CDs have fallen so far from their 1999 peak that they now account for only 15% of total revenue. Download revenue is also in free-fall and now accounts for 24% of total revenue.

. . . .

An even more important indication of change is the shift in consumer preference from “ownership” to “access” models. This won’t come as a shock to those who have been watching the music industry evolve for a while, but it’s reality now: people have found that when music is available everywhere on any device, it’s not so important to own it anymore.

Link to the rest at Copyright and Technology

Australia’s copyright reform could bring millions of books and other reads to the blind

4 April 2017

From The Conversation:

Proposed changes to Australia’s copyright law should make it easier for people to create and distribute versions of copyrighted works that are accessible to people with disabilities.

The Copyright Amendment (Disability Access and other Measures) Bill was introduced to Parliament on Wednesday.

If passed, it would enable people with disabilities to access and enjoy books and other material in formats they can use, such as braille, large print or DAISY audio.

The Australian Human Rights Commission has long been calling for action to end the “world book famine” – only 5% of books produced in Australia are available in accessible formats. This means that people with vision impairment and other reading disabilities are excluded from a massive proportion of the world’s knowledge and culture.

Under the current law, educational institutions and other organisations can produce accessible copies of books, but the system is slow and expensive. Only a small number of popular books are available, and technical books that people need for work are often out of reach.

Technology should make accessibility much easier, but publishers have been slow to enable assistive technologies.

. . . .

Amazon’s Kindle, for example, used to allow text-to-speech to help blind people read books, but Amazon gave in to publishers’ fears and allowed them to disable the feature. Apple’s electronic books are much better, but there are still major gaps.

Link to the rest at The Conversation

Stephen King Sued Over The Dark Tower

3 April 2017

From TMZ:

Stephen King stole the idea for his main man in “The Dark Tower” series from a famous comic book character also known as a gunslinger … according to a new suit.

The creator of “The Rook” comics claims King’s protagonist, Roland Deschain, is based on his main character, Restin Dane. He says Deschain has striking similarities to Dane beyond just their initials — both are “time-traveling, monster-fighting, quasi-immortal, romantic adventure heroes.”

“The Rook” creator also points out King’s Deschain dresses like a cowboy despite not being from the Old West — just like Restin Dane — and the towers in both books look the same.

. . . .

According to the docs … the Restin Dane character was in more than 5 million comic magazines from 1977-1983 and King admits he read those stories. The first book in King’s ‘Dark Tower’ series was released in 1982.

Link to the rest at TMZ and thanks to Michael for the tip.

PG says TMZ doesn’t do a very good job of covering legal matters.

How this Texas woman changed the lives of the blind and impaired with creation of audiobook studio

28 March 2017

From The Houston Chronicle:

Carolyn Randall is enthralled by words. She has been for as long as her 90-year-old memory can recall.

Decades before she’d create the Texas State Library’s audiobook recording studio, a project that has helped thousands of blind and impaired people, Randall was a bookworm growing up in Champaign, Illinois. She read historical fiction and scripts by Fyodor Dostoevsky.

“I was a slow reader,” said Randall, now a Houston resident. “I paid attention to each word.”

. . . .

Shortly after, Randall heard that the University of Houston needed help to record audiobooks. She began volunteering weekly.

In the late 1960s, Robert Levy founded what was then Taping for the Blind, a Houston audiobook and radio program now called Sight into Sound. The news made its way to Randall, who, upon hearing it, remembered an uncle who had once said he needed audiobooks while recovering from cataract surgery. She had an idea.

“I thought, ‘I can do this in an even better way than at the University of Houston,'” Randall said. “That’s how I really got started.”

She stayed with the program for about 10 years before moving with Howard to Austin.

Living in the capital meant an opportunity to volunteer at the state library.

Randall couldn’t pass it up. She began with small tasks, “filing whatever they needed,” she said. But she quickly cultivated relationships. She also noticed there was no state-sponsored studio to record audiobooks. The library’s Talking Book Program had for decades used an audiobooks archive provided by the National Library Service for the Blind and Physically Handicapped. But no state resource existed for audiobooks and authors specific to Texas.

Randall lobbied for funding to outfit a room with recording booths. Volunteers were recruited, and the studio was born in 1978, with Randall as its director.

. . . .

Almost 40 years later, more than 5,000 titles (books, magazines, etc.) have been recorded at the studio, whose total collection exceeds 10,000 titles in multiple languages. The studio has about 100 volunteers and serves roughly 18,000 blind and impaired people statewide. It also offers some books in braille.

Link to the rest at The Houston Chronicle

Of course, PG was reminded of 17 U.S. Code § 121, which provides, in part:

Notwithstanding the provisions of section 106, it is not an infringement of copyright for an authorized entity to reproduce or to distribute copies or phonorecords of a previously published, nondramatic literary work if such copies or phonorecords are reproduced or distributed in specialized formats exclusively for use by blind or other persons with disabilities.

. . . .

“authorized entity” means a nonprofit organization or a governmental agency that has a primary mission to provide specialized services relating to training, education, or adaptive reading or information access needs of blind or other persons with disabilities;
