From Kristine Kathryn Rusch:
Here we are—the mess of the mess of the mess. Right now, we’re in one of those technologically befuddling moments, where the technology is ahead of the law.
What that means, exactly, is this: We’re not sure what the technology can do, so we don’t know if what it’s doing is legal, in a whole variety of ways.
The law is both a scalpel and a cudgel. If we use the law one way, it becomes a cudgel that smashes behavior and does its best to prevent the behavior from ever occurring again. Look at the laws against homicide in your state. Those laws are not scalpels. Those laws are cudgels, deliberately. As civilized humans, we don’t want other humans to commit murder for any reason. End of story.
(Please don’t write to me about exceptions. I know. I write entire novels about them.)
There are many times, however, that we need the law to be a scalpel. We need it to delicately carve good behavior from bad. We also don’t want it to accidentally smash something good to smithereens.
Just today, Dean and I were walking home in a wind tunnel created by the buildings near ours. The wind was bad anyway, but in that little area, it was extreme, as usual. Dean mentioned that there are entire computer programs that could explain why.
Those programs are often used now to examine how the wind works around bridges and tall buildings in relation to other tall buildings. In the past, those calculations were done by engineers and often by hand. One mathematical error, and even a brand-new bridge or building could collapse.
. . . .
Now, though, tech allows us to prevent all kinds of wide-ranging disasters because of computer modeling.
In some ways, generative artificial intelligence in art, audio, and writing is nothing more than computer modeling. The artificial intelligence isn’t intelligence at all, at least as we know it. It’s an algorithm trained to respond in a particular way to a variety of inputs.
The inputs make the AI program reactive, not creative. My post last week titled “AI And Mediocre Work” dealt with a lot of this, but a comment by Matt Weber capsulized it with a quote from Oliver Sacks, in his book, An Anthropologist on Mars:
Creativity, as usually understood, entails not only a “what,” a talent, but a “who” — strong personal characteristics, a strong identity, personal sensibility, a personal style, which flow into the talent, interfuse it, give it personal body and form. Creativity in this sense involves the power to originate, to break away from the existing way of looking at things, to move freely in the realm of the imagination, to create and recreate worlds fully in one’s mind — while supervising all this with a critical inner eye.
These generative AI programs are useful for a variety of things, some of them mentioned in the comments on the last post, others mentioned in analysis about the programs that you can find most anywhere. What they are not is creative.
Let’s set that aside, though. We will all end up using these programs for one task or another.
What started this little miniseries of blogs was, in fact, my desire to start using AI audio. It has gotten to a level where I feel comfortable putting not only the blog posts into audio, but some of the nonfiction books as well. If you want to find out what I’m thinking about the various audio opportunities for my own work, please look at this post.
Up until that point, a lot of my readers thought I was opposed to using generative AI. I’m not. I have already used several different programs for minor things, and I’m going to use others for relatively major things.
I’m just as interested in the AI art programs as I am in the AI audio programs. I’ve used some mapping programs to help artists visualize the layout of my various worlds. I’m using the free programs, so the tools are often wrong in a variety of ways. I have to use words and bad maps to get my point across. But that’s okay.
I like some of the art I’m seeing from the various programs, and that art would be good enough to use on, say, short story ebook covers, where we don’t want to spend a lot of money. (If any.)
We’re not doing that yet, though, and there’s a really good reason.
The copyright issues on much of the AI usage are a complete mess and that, in my opinion, makes them dangerous to use in any commercial manner.
I don’t use the word “dangerous” lightly. Copyright issues could mean anything from something as simple as removing the item from sale to hundreds of thousands of dollars paid in statutory damages.
The problem is that we don’t know what’s happening yet, and because we don’t know, we have to be really careful.
Some of the copyright issues can be resolved with a contract. The Terms of Service on these sites are contracts that you agree to, either by affirmatively clicking “I accept,” by using the site, or by paying money for the service.
The problem with Terms of Service is that they can change on a whim. In its paper on artificial intelligence and copyright published in February, the Congressional Research Service made a passing comment about OpenAI, the developer of ChatGPT and DALL-E.
As I said, these terms can change drastically. It’s up to the user to check the terms constantly.
Contracts can supersede copyright if done properly, but doing the contracts properly means understanding the law.
And the law is just plain unclear. The article that I quoted above, from the Congressional Research Service, has a good overview of where the law stands right now in the U.S., and provides links.
Link to the rest at Kristine Kathryn Rusch
Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.
PG says AI is going to continue developing very quickly with or without changes in the copyright laws.
Yes, there undoubtedly will be changes in copyright laws, but legislators move at a snail’s pace compared with software engineers and designers. AI is a huge breakthrough and it will take some time for humans to coalesce around where lines are to be drawn between permitted and not permitted uses of AI.
There are certainly going to be some copyright infringement lawsuits, and judges (who are anything but technically oriented, but generally possess a respectable level of general intelligence) will make different and sometimes conflicting decisions for a while.
Legislatures gonna legislate. Some will do better than others, but the first laws are going to be rough around the edges.
Wherever there are meaningful copyright laws, copyright attorneys are already thinking hard about AI and there will certainly be some lawsuits. That said, on the internet, there are plenty of places that are effectively beyond the reach of western copyright legislation. (China, Russia and a variety of island kingdoms come to mind.)
It’s going to be a legal Wild West for a while. PG has already read articles about the various ways attorneys can use AI in litigation and contract drafting. He expects to read a lot more.
You should check out the comments to this post. Two valued and prolific TPV commenters elaborate on their forecasts and expectations regarding AI and courteously disagree with some of the thoughts the other has posted.