Imitation Is The Best Form Of Flattery. Flattery Is Not A Defense To Copyright Infringement.

From Above the Law:

Unless you’ve been living under a law library, it would be hard not to take note of the rapid influx of AI art. Face-modifying apps, extended shots of events and people that never happened and are so uncanny that “weird” only begins to describe them, you name it. The figure of AI as artist has arrived, but is any of it legal? A small group of artists aims to find out. From Reuters:

A group of visual artists has sued artificial intelligence companies for copyright infringement, adding to a fast-emerging line of intellectual property disputes over AI-generated work.

Stability AI’s Stable Diffusion software copies billions of copyrighted images to enable Midjourney and DeviantArt’s AI to create images in those artists’ styles without permission, according to the proposed class-action lawsuit filed Friday in San Francisco federal court.

The artists’ lawyers, the Joseph Saveri Law Firm and Matthew Butterick, filed a separate proposed class action lawsuit in November against Microsoft’s GitHub Inc and its business partner OpenAI Inc for allegedly scraping copyrighted source code without permission to train AI systems.

. . . .

I’m gonna flag it for you in case your eyes glossed over it. The word there is billions. Billions. With a B. Even if the individual damages are pennies on the dollar, the aggregate of those alleged copyright infringements would be… well, I’m not that good at math, but it would put a sizeable dent in my student loan principal.

. . . .

For those not in the know: if you’ve ever seen a stock image, odds are it came from Getty.

Link to the rest at Above the Law

7 thoughts on “Imitation Is The Best Form Of Flattery. Flattery Is Not A Defense To Copyright Infringement.”

  1. So PG, I’m curious. If I were artistically inclined, I could look at one or more copyrighted images on some service and then independently create some work of my own that was influenced by them, provided it was not a direct copy. However, if an AI does exactly the same thing, it’s wrong. Help me out here?

    • Davemich, to “look at” one or more copyrighted images as a natural person (or even an unnatural person, like a lawyer) does not require making a reproductive/reproducible copy of that image. If you don’t make a copy, we’re truly into the difficult distinction between “influence” and “unauthorized derivative work,” and the key point is that the line is not clear and obvious. (Law-journal-article fodder: how about visual-assist systems for the blind?)

      When an AI or other data-processing system analyzes an image, it makes a reproduced/reproducible copy of that image as part of the process. Once that copy is made without authorization, the infringement is complete… especially since, under either fair dealing (most of the world) or fair use (the US), making that copy obviates any need for the AI or anyone downstream to purchase an authorized copy.

      The complaint that “this is insensitive to the way that technology ‘really’ works” runs into “the law doesn’t care, and can be amended or changed… but it’s the law until it is changed or amended, and there Will Be Consequences for violating the law as it really is even when that’s not what it should be.”

      • IIRC the “temporary copy” aspect of scanning was dealt with in the Google/HathiTrust case and outweighed by the other tests. If the software is not storing a humanly recognizable image, the product of the scan is not necessarily a copy of the original work. A lot is going to depend on the mechanics of the scan/analysis. The software might look at the image and record just brightness, contrast, and saturation but not the distribution of the pixels, as one example (a sketch of that kind of analysis appears below).

        If anything, the viewing=copying argument was stronger on text than on graphics.

        It’s one thing if the software’s training database contains recognizable replicas of the scanned images, but if the contents are more analytical (Vermeer’s art has ABC traits while Rembrandt’s has XYZ), things are going to get a tad more difficult to deal with, absent an absolute prohibition on machine vision or automated analysis.
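
        To make that concrete, here is a minimal sketch, assuming the Pillow imaging library and a hypothetical summarize() helper, of the purely analytical scan described above: it keeps three aggregate numbers per image and discards the pixel data, so nothing resembling the original survives.

        ```python
        from PIL import Image, ImageStat

        def summarize(path):
            """Record brightness, contrast, and saturation of an image, nothing more."""
            with Image.open(path) as img:
                rgb = img.convert("RGB")
                lum = ImageStat.Stat(rgb.convert("L"))                    # luminance band
                sat = ImageStat.Stat(rgb.convert("HSV").getchannel("S"))  # saturation band
            # Only three numbers leave this function; the original image cannot be
            # reconstructed from them.
            return {
                "brightness": lum.mean[0],   # average luminance
                "contrast": lum.stddev[0],   # spread of luminance values
                "saturation": sat.mean[0],   # average saturation
            }
        ```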

        • IIRC the “temporary copy” aspect of scanning was dealt with in the Google/HathiTrust case and outweighed by the other tests.

          No. Not even close. Even if it had been, it was dictum (discussion that is not the holding necessary to resolve the matter, even if not labelled that way), and it would be limited to its facts (the specific use: creating an index/finding tool, not creating a new original work that is possibly a derivative of the source work) and applicable only in the Second Circuit. The Seventh and Ninth Circuits long ago held that the “temporary copy” was indeed an infringement, but that an infringement might have a fact-specific defense that excuses it in a particular instance, though notably that defense does not and cannot apply on a “general practice” basis.

          The fundamental problem with GBS is that it was misstructured so that purely procedural considerations under the law of standing and the rules regarding class actions made it essentially impossible to reach the merits. I blame the Guild and its lawyers for this, and those goofups made the law more unclear and harmed authors’ (and other creators’) rights. Bad trust-fund kids, no cookie.

          The last paragraph is another aspect of the point I’m making: that until the law is changed to reflect new uses and methods, people are stuck with the existing law, however messy and however illogical. It’s why we’re still stuck with so, so many bad copyright decisions on “sampling”: the courts with authority to change the law have not been squarely presented with the opportunity to do so, because the context has always been sufficiently snarled that “overruled as a result of intervening precedent, statutory change, and judges learning what ‘sampling’ actually is” has not been necessary to decide the cases before them. The Sixth Circuit and Second Circuit are both remarkably biased in this respect; some music-related law out of the Sixth Circuit still descends from the 1870 Act!

          Last, and far from least: Be very, very wary of applying concepts drawn from disputes about pure text to any other copyrightable materials. Those concepts are at best the starting point. The Supreme Court recognized this in the nineteenth century at the constitutional level, so just changing the Copyright Act wouldn’t be a solution anyway…

          • Mostly I was thinking of thermographic imagers: they scan and extract useful data without making a copy or anything resembling an image. Algorithms can do the same. Any “sticky” argument is going to have to start with *what* is collected and how, not where.

            Without that info the only answer that will stick is: It depends. Case by case. Absent IdiotPolitician™ intervention, each case will be a story unto itself if it tries to use rules meant for one regime in another.

            The big hangup for now is style. And style isn’t among the data that can be collected. In fact, it is more of an emergent property, so good luck trying to quantify it and protect it.

            It’ll likely take decades of fighting and every outcome will only lead to another workaround. Techies are good at whack-a-mole.

  2. Two things come to mind:

    1- Betamax decision.
    2- My mother’s lawyer once told us “Paper will accept whatever ink you put on it. Doesn’t mean the courts will.”

    New tech means new revenue streams. Lots of folks want a piece of the pie, earned or not.
    More billable hours for everybody.
