Business Musings: AI, Copyright, And Writers


From Kristine Kathryn Rusch:

Here we are—the mess of the mess of the mess. Right now, we’re in one of those technologically befuddling moments, where the technology is ahead of the law.

What that means, exactly, is this: We’re not sure what the technology can do, so we don’t know if what it’s doing is legal, in a whole variety of ways.

The law is both a scalpel and a cudgel. If we use the law one way, it becomes a cudgel that smashes behavior and does its best to prevent the behavior from ever occurring again. Look at the laws against homicide in your state. Those laws are not scalpels. Those laws are cudgels, deliberately. As civilized humans, we don’t want other humans to commit murder for any reason. End of story.

(Please don’t write to me about exceptions. I know. I write entire novels about them.)

There are many times, however, that we need the law to be a scalpel. We need it to delicately carve good behavior from bad. We also don’t want it to accidentally smash something good to smithereens.

Just today, Dean and I were walking home in a wind tunnel created by the buildings near ours. The wind was bad anyway, but in that little area, it was extreme, as usual. Dean mentioned that there are entire computer programs that could explain why.

Those programs are often used now to examine how the wind works around bridges and tall buildings in relation to other tall buildings. In the past, those calculations were done by engineers, often by hand. One mathematical error, and even brand-new bridges and buildings could collapse.

. . . .

Now, though, tech allows us to prevent all kinds of wide-ranging disasters because of computer modeling.
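As a toy illustration of the kind of calculation those engineers once did by hand (every number, dimension, and function name below is invented for this sketch; real structural work uses full computational fluid dynamics models, not a single formula), the simplest starting point is dynamic wind pressure, q = 0.5 × air density × wind speed squared:

```python
# Toy sketch only; not a real engineering tool. All values are made up.

AIR_DENSITY = 1.225  # kg/m^3, standard sea-level air

def dynamic_pressure(wind_speed: float) -> float:
    """Dynamic wind pressure q = 0.5 * rho * v^2, in pascals."""
    return 0.5 * AIR_DENSITY * wind_speed ** 2

def facade_load(wind_speed: float, width: float, height: float) -> float:
    """Rough total wind load on a flat facade, in newtons."""
    return dynamic_pressure(wind_speed) * width * height

# A 20 m/s gust against a hypothetical 30 m x 100 m facade:
print(f"{facade_load(20.0, 30.0, 100.0) / 1000:.0f} kN")  # ~735 kN
```

One transposed digit in a hand calculation like this, repeated across thousands of structural members, is exactly the kind of error the modeling software now catches.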

In some ways, generative artificial intelligence in art, audio, and writing is nothing more than computer modeling. The artificial intelligence isn’t intelligence at all, at least as we know it. It’s an algorithm trained to respond in a particular way to a variety of inputs.

The inputs make the AI program reactive, not creative. My post last week titled “AI And Mediocre Work” dealt with a lot of this, but a comment by Matt Weber capsulized it with a quote from Oliver Sacks, in his book, An Anthropologist on Mars:

Creativity, as usually understood, entails not only a “what,” a talent, but a “who” — strong personal characteristics, a strong identity, personal sensibility, a personal style, which flow into the talent, interfuse it, give it personal body and form. Creativity in this sense involves the power to originate, to break away from the existing way of looking at things, to move freely in the realm of the imagination, to create and recreate worlds fully in one’s mind — while supervising all this with a critical inner eye.

These generative AI programs are useful for a variety of things, some of them mentioned in the comments on the last post, others mentioned in analysis about the programs that you can find most anywhere. What they are not is creative.

Let’s set that aside, though. We will all end up using these programs for one task or another.

What started this little miniseries of blogs was, in fact, my desire to start using AI audio. It has gotten to a level where I feel comfortable putting not only the blog posts into audio, but some of the nonfiction books as well. If you want to find out what I’m thinking about the various audio opportunities for my own work, please look at this post.

Up until that point, a lot of my readers thought I was opposed to using generative AI. I’m not. I have already used several different programs for minor things, and I’m going to use others for relatively major things.

I’m just as interested in the AI art programs as I am in the AI audio programs. I’ve used some mapping programs to help artists visualize the layout of my various worlds. I’m using the free programs, so the tools are often wrong in a variety of ways. I have to use words and bad maps to get my point across. But that’s okay.

I like some of the art I’m seeing from the various programs, and that art would be good enough to use on, say, short story ebook covers, where we don’t want to spend a lot of money. (If any.)

We’re not doing that yet, though, and there’s a really good reason.

Copyright.

The copyright issues around much of this AI usage are a complete mess, and that, in my opinion, makes the output dangerous to use in any commercial manner.

I don’t use the word “dangerous” lightly. Copyright issues could mean anything from something as simple as removing an item from sale to hundreds of thousands of dollars paid in statutory damages.

The problem is that we don’t know what’s happening yet, and because we don’t know, we have to be really careful.

Some of the copyright issues can be resolved with a contract. The Terms of Service on these sites are contracts that you agree to, either by affirmatively clicking “I accept,” by using the site, or by paying money for the service.

The problem with Terms of Service is that they can change on a whim. In its paper on artificial intelligence and copyright published in February, the Congressional Research Service made a passing comment about OpenAI, the developer of ChatGPT and DALL-E.

OpenAI’s current Terms of Use, for example, appear to assign any copyright to the user: “OpenAI hereby assigns to you all its right, title and interest in and to Output.” A previous version of these terms, by contrast, purported to give OpenAI such rights.

As I said, these terms can change drastically. It’s up to the user to check the terms constantly.

Contracts can supersede copyright if done properly, but doing the contracts properly means understanding the law.

And the law is just plain unclear. The article that I quoted above, from the Congressional Research Service, has a good overview of where the law stands right now in the U.S., and provides links.

Link to the rest at Kristine Kathryn Rusch

Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.

PG says AI is going to continue developing very quickly with or without changes in the copyright laws.

Yes, there undoubtedly will be changes in copyright laws, but legislators move at a snail’s pace compared with software engineers and designers. AI is a huge breakthrough and it will take some time for humans to coalesce around where lines are to be drawn between permitted and not permitted uses of AI.

There are certainly going to be some copyright infringement lawsuits, and judges (who are anything but technically oriented, but generally possess a respectable level of general intelligence) will make different and sometimes conflicting decisions for a while.

Legislatures gonna legislate. Some will do better than others, but the first laws are going to be rough around the edges.

Wherever there are meaningful copyright laws, copyright attorneys are already thinking hard about AI and there will certainly be some lawsuits. That said, on the internet, there are plenty of places that are effectively beyond the reach of western copyright legislation. (China, Russia and a variety of island kingdoms come to mind.)

It’s going to be a legal Wild West for a while. PG has already read articles about the various ways attorneys can use AI in litigation and contract drafting. He expects to read a lot more.

PG PS:

You should check out the comments to this post. Two valued and prolific TPV commenters elaborate on their forecasts and expectations regarding AI and courteously disagree with some of the thoughts the other has posted.

15 thoughts on “Business Musings: AI, Copyright, And Writers”

  1. Salon has a long piece detailing the history of chatbots and humans’ reactions to them:

    https://www.msn.com/en-us/news/technology/what-a-precursor-to-chatgpt-taught-us-about-ai-in-1966/ar-AA19FYw1?pc=EMMX01&cvid=361183e5947243239e299a7df01d71d3&ei=20

    Basically, humans are social creatures that tend to anthropomorphize creatures and devices and seek to find patterns, agency, and order everywhere: clouds, potatoes, whatever. Especially where none exists. No shock that interactive tools have been perceived as “people” as far back as ELIZA in the ’60s and more recently with the likes of Siri, Alexa, and Cortana.

    Can’t be helped.
    Humans will do what humans do.

    One thing to bear in mind is that whatever a given tool or technology is intended for, humans will use it *their* way to meet *their* needs. Examples to remember include how, in the early days of PC computing, people who learned one of the major programs of the pre-GUI era used it for everything, like doing document creation in Lotus 1-2-3 or complex calculations in WordPerfect. It took years and years for developers to figure out how to properly meet user needs with software suites of tools optimized for the various tasks. It was a feedback loop where users, by their actions, told developers what to do. Those that noticed thrived; those that didn’t ended up as roadkill on the “information superhighway”. 😉

    Consumer GPT tools are in their infancy and people are using them for everything that comes to mind to see how *they* can use them. In time, the developers will figure out how to optimize the tools for the most profitable applications and which features are missing.

    “This has happened before. It will happen again.”

    For those fretting about GPT tools displacing creatives, I offer up: “How reproducible are the results?” For all the talk of “prompt engineering”, can the current generation of tools consistently generate the same output from the same inputs? I haven’t seen it mentioned anywhere. The very methodology that produces natural-seeming results works against the consistency required for production “creative” uses.

    For now.
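    A minimal sketch of why that inconsistency is baked in. This is generic temperature sampling over invented numbers, not any vendor’s actual decoding code: at temperature 0 the top-scoring token always wins, so output repeats exactly, while at the temperatures chat tools normally run at, the same prompt can pick a different token on every call.

    ```python
    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float,
                     rng: np.random.Generator) -> int:
        """Pick a next-token index from raw model scores (logits)."""
        if temperature == 0.0:
            # Greedy decoding: always the top-scoring token -> reproducible.
            return int(np.argmax(logits))
        # Temperature sampling: rescale scores, then draw at random.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    logits = np.array([2.0, 1.8, 0.5, 0.1])  # invented scores, 4 candidates
    rng = np.random.default_rng()
    print([sample_token(logits, 0.0, rng) for _ in range(5)])  # [0, 0, 0, 0, 0]
    print([sample_token(logits, 1.0, rng) for _ in range(5)])  # varies per run
    ```

    Even pinning a random seed only helps until the provider updates the model, which is why “prompt engineering” alone can’t deliver production-grade repeatability.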

    Future versions will no doubt be able to help humans displace other humans from the various production loops. But that will take time. Years, more likely than months, so there will be some time to figure out a proper balance and fit the tool to the task. No need to panic just yet. Current attempts merely show the possibilities of what is to come and indicate what needs to be done on both sides of the divide.

    But yeah, humans will figure out how to maximize their gains at the expense of others.
    They always do.

  2. I don’t mind AI research; in fact, I depend upon it. But using AI to supplant creatives, often poorly compensated to begin with, seems particularly crass. I have a friend who narrated audiobooks years ago, under contracts that had no clause excluding the narrator’s voice from being used to train AI voice software. They have had to write to those clients, begging them to send opt-out notices to keep their voice out of the publisher’s AI voice training data.

  3. OpenAI’s terms are unlikely to change (they correctly assign all output rights to the user) because that is standard in the software industry. Neither AutoCAD, nor Adobe, nor Microsoft, nor any other honorable tool maker makes any irrational claim to copyright. Anybody who attempts it (there used to be some in the ebook creation business) gets ignored away. A form of self-regulation. By the time the bureaucrats get involved there should be no problem left to police. (Think of ebook creation and distribution: is there any credible player left demanding part copyright as pay? Haven’t heard of any in years.)

    Legit software developers enjoy comfortable margins (30-60% and even higher), so they have no need to scam their customers. And OpenAI is a de facto Microsoft subsidiary; whatever their initial boilerplate might’ve been, they now must and will conform to legit terms. Or else. And as pacesetters in their segment, the me2 crowd will have to follow their example or, as above, be ignored to death.

    Now, in gold rush days there is no shortage of bad actors from other industries looking to muscle in for a quick buck but they’ll be gone soon enough as the tools get mainstreamed.

    In the meantime: yes, read the terms of service and ignore anybody demanding anything but (reasonable) cash pay.

    Now, a caveat about stories that *will* show up: OpenAI and its competitors will get a significant, if not majority, part of their revenue from licensing their model creation tools to other corporations for specialty inhouse or commercial products. Those can and likely will demand more than just one-time cash. (This is also industry standard. If you look at the usually ignored launch sequence credits for video games, they list tools like HAVOK, UNREAL ENGINE, etc. Those can demand a slice of revenues for a period of time. But not IP rights.)

    At some point some pearl clutcher will conflate the two tool markets for clickbait.

    This too is standard for *their* business.

    • Felix, the problem will not be with the credible players. It will be with the non-credible players who come in with platforms, or senses of entitlement, or boatloads of excess capital obtained in another scheme (or via trust fund), who nonetheless have Really Expensive Lawyers to litigate people to death. Something like this blast from the past. That… individual was, or should have been, non-credible sitting on the publisher’s side of the desk.

      • Don’t forget, the tech world has its own lawyers and pockets so deep that all the publishing businesses combined amount to pocket change. US publishing moved what, $15B a year? MS moves $160B and is spending $70B in acquiring Activision, global legal battles included. The actual cost is $67B but the trade press rounds it to $70B. What other business considers $3B a rounding error?

        1995 is a long time ago, and the entire tech world, from Seattle to SiliValley to Texas, learned that the only viable solution to activists and politically connected roadkill was to own the IdiotPoliticians™. So MS evolved from 1 part time lawyer in DC to an entire army of techs, lawyers and activists. (The entirety of Congress, politicians and staff runs on state of the art PCs, with dedicated support 24/7/365. Both parties. Gratis.) The rest of the tech world followed suit. The pols get their vig, tech gets a free hand.

        Microsoft’s reach is on display right now as part of their campaign to secure Activision. They have legal teams fighting (and apparently winning) in every major market from the EU, UK, and Japan, to Brazil, the middle east and points in between. (In the UK the bureaucrats polled major gaming publishers about the purchase. Seven replied. Only one didn’t see it as good for customers, developers, and competitors: Sony, of course. MS lawyers convinced the courts that Sony needed to open their books to prove their lies, er, “contentions”. Sony was so ordered. Afterwards, the bureaucrats have grudgingly accepted the MS position. The EU has quietly signaled they won’t require any special action to sign off, some time this month.)

        And Japan? Well, Sony’s whole gripe is over console gaming, not PC or mobile (the big win for MS) so “curiously” two weeks ago during trade talks, the senator from Boeing pointedly asked the Japanese trade rep why they allowed Sony to pay “independent” game developers not to develop for XBOX, resulting in a 2% share for MS. Last week, Sony announced they have no issues with MS getting Activision. It ain’t pretty but it works.

        When Microsoft’s last investment in OpenAI was revealed to be $10B, a financial report I saw claimed that if they put GPT in all their major software the return could be $600B over the next decade. They might’ve been conservative: pre-GPT, Bing had an 8% search engine market share that brought in $11B in 2022. By March, GPT had boosted that to 12%. They don’t need to actually dominate search to earn back the OpenAI investment.

        Against all that, do you really think the Gettys of the world stand a chance against MS, Google, Facebook, IBM, and NVIDIA, for starters? And that is just the influence game. The tech game is even more fun: when MS claims ChatGPT can only run on Azure’s massive collection of NVIDIA “graphics” cards, they ain’t lying. It’s why OpenAI agreed to the MS deal, they have the algorithms, but MS has the hardware, data, and coders. There aren’t many comparable cloud operators: Amazon AWS, GOOGLE, IBM, Oracle. Any chance they’ll tolerate scammers on their systems?

        See, the reason ChatGPT is scaring the bejeezus out of everybody is that it is years ahead of industry expectations. And one of the reasons is that it runs on enormous datacenters. Not a product line for the classic “two guys in a garage”, much less publishing industry refugees.

        But anybody who thinks otherwise is welcome to try. It’ll be fun to watch.
        MS (et al) have small armies devoted to finding commercial pirates, black hat hackers, and counterfeiters. And lots of experience. Plus direct ties to international law enforcement.

        Any scams are likely coming from China or Russia but both have forbidden consumer “AI”, especially GPT. Their regimes survive on information control and they can’t afford GPT class software inside their national networks. (Military “AI” is a different creature.)

        The danger isn’t the software but in the uses of the output. And that’s a consumer issue.

        Publishing might pretend that “books are different” but big tech really *is* different. That’s why they’re worth trillions.

    • Good thoughts, as usual, F.

      There is bound to be some litigation to spice things up, but an intelligent creator can make it very difficult for an over-reaching online AI website to effectively claim a copyright to anything the creator develops using output from an online AI generator.

      For one thing, I haven’t seen anything straight from an online AI image generator that I would be inclined to use that wouldn’t require a lot of post-processing by me to be used.

      For a second thing, unless I told the world that I had used a particular AI generator to provide the basis for something I created, how would the AI provider identify its output as the basis of my creation?

      I expect a watermark, visible or invisible, could be inserted into each image a given AI generator produces, but if you search Google for “how to remove a watermark from an image,” you’ll find a zillion ways to do it. If I believed I was using the original AI image as inspiration to create a new work, and that I would be the copyright owner of the resulting creation, I wouldn’t have an ethical issue with removing a watermark on the AI image before transforming it.
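      As a rough sketch of how an invisible watermark can be embedded (a deliberately naive least-significant-bit scheme for illustration only; actual AI image generators, if they watermark at all, use more robust methods, and every name and value here is invented), the mark hides in pixel bits the eye can’t see:

      ```python
      from PIL import Image

      def embed_watermark(img: Image.Image, mark: str) -> Image.Image:
          """Hide an ASCII string in the lowest bit of the red channel."""
          bits = [int(b) for ch in mark.encode("ascii") for b in f"{ch:08b}"]
          out = img.convert("RGB").copy()
          px = out.load()
          w, _ = out.size
          for i, bit in enumerate(bits):
              x, y = i % w, i // w
              r, g, b = px[x, y]
              px[x, y] = ((r & ~1) | bit, g, b)  # overwrite the lowest red bit
          return out

      def read_watermark(img: Image.Image, length: int) -> str:
          """Recover `length` ASCII characters hidden by embed_watermark."""
          px = img.convert("RGB").load()
          w, _ = img.size
          bits = [px[i % w, i // w][0] & 1 for i in range(length * 8)]
          return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                         for i in range(0, len(bits), 8))

      marked = embed_watermark(Image.new("RGB", (64, 64), "gray"), "hypothetical-id")
      print(read_watermark(marked, len("hypothetical-id")))  # hypothetical-id
      ```

      Resizing, recompressing to JPEG, or simply brightening such an image scrambles those low bits, which is why a naive mark like this wouldn’t survive the kind of post-processing described above; more robust watermarks exist, but they are an arms race, not a lock.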

      Artists have created images, sculptures, books, stories, and more based on the inspiration of another creator’s work for centuries. Going to an art gallery or museum looking for inspiration is one of the oldest ways new art gets made. Ditto for reading a book or newspaper, hearing a story, and so on.

      That said, I’m sure there is going to be litigation on this type of issue, semi-justified and unjustified.

      • “For one thing, I haven’t seen anything straight from an online AI image generator that I would be inclined to use that wouldn’t require a lot of post-processing by me to be used.”

        Exactly.
        If you notice how the big boys use GPT, it is as added *features* not as standalone products. The idea is the tech makes the paid product better. Either as input or output.

        Anybody looking to extort money first needs to demonstrate a specific infringement.
        Good luck with that.

        I too expect lots of attempts but few successes, except maybe in Europe under “moral rights”. And even there they’ll have a hard time because data mining web crawlers are legal. That much is settled even in the EU.

        https://discoverdigitallaw.com/is-web-scraping-legal-short-guide-on-scraping-under-the-eu-jurisdiction/

        The most likely target will be claiming derivatives, but for all the angst over “in the style of”, showing liability will be extremely hard unless the output is used for counterfeiting. And that is a different story.

        For the most part the underlying practices in model training have already been litigated over the past decades. Any successful lawsuits will have to make new law.

        • Data mining web crawlers are not legal in the EU; they merely haven’t been targeted yet for regulation under the sui generis database integrity right. That’s not scheduled for consideration until 2025. (Yes, it really is scheduled.) Even the article linked doesn’t claim they’re “legal,” merely “not per se prohibited.”

          That said, there’s a problem in Europe that we don’t have: Cultural artifacts. Thanks to the First Amendment and the (IMNSHO superior) fair use paradigm, the US has only one set of images that it’s actually inherently illegal to use as AI source: Legal tender (and even then it’s only size and resolution restrictions). Just go to France and see if you can use anything at the Louvre (even from its official catalog of holdings) as an AI source without being invited to assist the ministry with certain inquiries — and yes, in most of Europe, infringing a cultural artifact (which has no expiration on its copyright!) is treated at least initially as a criminal matter.

          All of which is getting a bit far afield from the OP, but whatever.

          • I would worry more about “moral rights”. Fuzzier.

            So far the euros are only invoking privacy.

            https://www.theverge.com/2023/3/31/23664451/italy-bans-chatgpt-over-data-privacy-laws

            This is an area where their paranoia is irrelevant. If they want to “protect” their residents from new tech it’s fine. They’re not needed.

            All the “artificial intelligence” hype is blinding people to what the software is and how it works. Somehow they seem to think that in a model of 175 billion parameters their “precious” is important.

            https://medium.com/@simranjeetsingh1497/gpt4-everything-you-need-to-know-71c6d0a34ae2#:~:text=This%20model%20is%20trained%20on%20an%20enormous%20corpus,parameters%2C%20as%20opposed%20to%20GPT-3’s%20175%20billion%20parameters.

            They don’t get that what the model training does is extract usage trends and relationships between words and sounds or image elements. No single work matters, just the patterns drawn from its components. Which means copyright is far and away the wrong thing to worry about. Any use it might’ve made of any single item would be minimal and transformative.
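            A minimal sketch of what “extracting relationships between words” means at its very simplest: counting which word tends to follow which. The two-clause corpus below is invented; the point is that the trained artifact is a table of tendencies, not a stored copy of any source text.

            ```python
            from collections import Counter, defaultdict

            corpus = "the cat sat on the mat and the cat slept on the mat"

            # "Training": tally which word follows each word (bigram counts).
            follows = defaultdict(Counter)
            words = corpus.split()
            for prev, nxt in zip(words, words[1:]):
                follows[prev][nxt] += 1

            # Only aggregate relationships survive, not the sentence itself.
            print(follows["the"].most_common())  # [('cat', 2), ('mat', 2)]
            print(follows["cat"].most_common())  # [('sat', 1), ('slept', 1)]
            ```

            Real models learn vastly richer statistics than this bigram table, but the asymmetry is the same: the source text is consumed, and what is kept is the pattern.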

            Anybody looking for…action…is going to have to look in a different direction.

              • I think those usage trends and relationships are how we learn to navigate the world. If that’s all we do, it might be a bit disconcerting for some.

                • True, but humans are more complicated, internally motivated, and less rational/mechanistic. Emotional and rationalizing. The last two are *not* things you want to simulate in production software.

                I don’t fret GPT in productivity apps because the focus is on specific data processing functions. There’s no added value ($$$) without specificity.

                The (minuscule) danger of quasi-sentient True AI arising lies in academia and blue-sky R&D working to replicate human processes, which by nature are generalist and unfocused. That is what needs monitoring and regulating. But for commercial software the profit motive is all the regulation necessary.

                One thing the handwringers need to internalize is that intelligence (real or simulated) does not equate to sentience. Humans have both, yes, but many animals are sentient but not terribly intelligent. (Applies to many humans.) While robots and basic lifeforms can show *unprompted* problem solving skills without sentience.

                Intelligence is focused whereas sentience is unbounded.
                Current software has neither.

                As always in tech, the problem is the humans.

                • I suspect there is a bit too much focus on sentient machines and replicating human processes.

                  First, I don’t think we know what sentient is. Without self-reflection, how do we determine something is sentient? I only presume the rest of you experience a similar mental state as I do. I can’t demonstrate you do.

                  But, we can dispense with the philosophy and consider that AI may become capable of replicating human results. We see some of that now. Maybe it isn’t using sentient or human processes. It would use something else.

                  Then we would have two systems that produce the same results. One human, and one AI. Humans would use the “No True Scotsman” to contend AI isn’t a true human. AI would shrug and point to objective results.

                  Was Asimov’s positronic brain human? Sentient? How would we know?

                • R. Daneel evolved agency, but in service of an implied Zeroth Law not known or intentionally embedded in the brain’s matrix by his creators.

                  In Microsoft/Halo terms, he was rampant. No different from the more recent games’ Cortana. He thought he was doing good, but was he?

                  As to sentience, there is a lot to be said for solipsism. 😀

            • This is also an important point. It leads to one of those deep epistemological questions that always work out better in fiction than they do in the real world (let alone law, which is its own special fictional world):

              How does a “moral right,” or any other consideration of a “morality” or “personal right,” predictably and replicably interface with an artificial intelligence supposedly developed using pure logical methods?

              It’s also worth remembering that “moral right” is a mistranslation from French into English — and I mean English, not Anglo-American — legal language by French-is-the-language-of-all-cultured-persons artistes (pejorative sense intended) a couple of centuries back. “Moral” in the Romance languages has a distinctly different etymology and connotation than it does in English or the Teutonic languages. And that seeming tangent further exposes the problems with the interplay between “moral rights” and “product of generative 2023-reference-point AIs.”

              I’d also sarcastically ask if the concept of moral rights and products of generative AIs has any implications for the ownership of those moral rights (particularly under the Thirteenth Amendment), but I think I hear Gibson’s Turing Police knocking at the door…

              Felix’s note demonstrates the tendency of “unconsidered cases” to make our nice, neat, ideologically determined answers go kerblooey.

  4. For individual writers, this is already an “as for me and my house” issue. I already include a statement in the backmatter of every fiction, long or short:

    “This fiction is a Creation, the result of a partnership between a human writer and the character(s) he accessed with his creative subconscious. This is in no part the block-by-block, artificial Construction of any sort of AI or of any conscious, critical, human mind. What you read here is what actually happened there.”
