Amazon is encouraging sellers to use AI-generated product listings

From engadget:

Amazon is launching a new AI tool that generates product listings for sellers. The feature uses a large language model (LLM) “trained on large amounts of data” to make it faster and simpler for vendors to describe their products. The company describes the tool as distilling the “significant work” of creating titles, bullet points and descriptions down to “just one step.”

Amazon says its Generative Listing Content tool only requires sellers to provide a brief product description in a few words or sentences. From there, it will “generate high-quality content for their review” — including a title, product description and bullet points — which sellers can peruse before refining or submitting as is. The company says many sellers have already tested the tool during the last few months, and their feedback indicates most of them use the generated content directly without revisions.

“These new capabilities will help sellers create high-quality listings with less effort and present customers with more complete, consistent, and engaging product information that will enhance their shopping experiences,” Amazon VP Mary Beth Westmoreland wrote today in an announcement blog post.

. . . .

“With our new generative AI models, we can infer, improve, and enrich product knowledge at an unprecedented scale and with dramatic improvement in quality, performance, and efficiency,” Robert Tekiela, Amazon VP of selection and catalog systems, wrote today. “Our models learn to infer product information through the diverse sources of information, latent knowledge, and logical reasoning that they learn. For example, they can infer a table is round if specifications list a diameter or infer the collar style of a shirt from its image.”

The new tool joins Amazon’s AI-generated review summaries, launched earlier this summer. That feature uses generative AI to train on a product’s reviews and spit out one-paragraph recaps, including clickable keywords. The company teases that it’s still getting started with incorporating generative AI into its storefront: “This is just the tip of the iceberg on how we plan to use AI to help improve the seller experience and help more sellers succeed.” CEO Andy Jassy said last month that, from now on, generative AI “is going to be at the heart of what we do.”

Link to the rest at engadget

OpenAI disputes authors’ claims that every ChatGPT response is a derivative work

From Ars Technica:

This week, OpenAI finally responded to a pair of nearly identical class-action lawsuits from book authors—including Sarah Silverman, Paul Tremblay, Mona Awad, Chris Golden, and Richard Kadrey—who earlier this summer alleged that ChatGPT was illegally trained on pirated copies of their books.

In OpenAI’s motion to dismiss (filed in both lawsuits), the company asked a US district court in California to toss all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.”

The authors’ other claims—alleging vicarious copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), unfair competition, negligence, and unjust enrichment—need to be “trimmed” from the lawsuits “so that these cases do not proceed to discovery and beyond with legally infirm theories of liability,” OpenAI argued.

OpenAI claimed that the authors “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.” Unlike plagiarists who seek to directly profit off distributing copyrighted materials, OpenAI argued that its goal was “to teach its models to derive the rules underlying human language” to do things like help people “save time at work,” “make daily life easier,” or simply entertain themselves by typing prompts into ChatGPT.

The purpose of copyright law, OpenAI argued, is “to promote the Progress of Science and useful Arts” by protecting the way authors express ideas, but “not the underlying idea itself, facts embodied within the author’s articulated message, or other building blocks of creative,” which are arguably the elements of authors’ works that would be useful to ChatGPT’s training model. Citing a notable copyright case involving Google Books, OpenAI reminded the court that “while an author may register a copyright in her book, the ‘statistical information’ pertaining to ‘word frequencies, syntactic patterns, and thematic markers’ in that book are beyond the scope of copyright protection.”

“Under the resulting judicial precedent, it is not an infringement to create ‘wholesale cop[ies] of [a work] as a preliminary step’ to develop a new, non-infringing product, even if the new product competes with the original,” OpenAI wrote.

In particular, OpenAI hopes to convince the court that the authors’ vicarious copyright infringement claim—which alleges that every ChatGPT output represents a derivative work, “regardless of whether there are any similarities between the output and the training works”— is an “erroneous legal conclusion.”

The company’s motion to dismiss cited “a simple response to a question (e.g., ‘Yes’),” or responding with “the name of the President of the United States” or with “a paragraph describing the plot, themes, and significance of Homer’s The Iliad” as examples of why every single ChatGPT output cannot seriously be considered a derivative work under authors’ “legally infirm” theory.

“That is not how copyright law works,” OpenAI argued, while claiming that any ChatGPT outputs that do connect to authors’ works are similar to “book reports or reviews.”

Link to the rest at Ars Technica
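The "statistical information" OpenAI's motion refers to, word frequencies and the like, can be made concrete with a short illustrative Python sketch. The sample sentence (a loose rendering of the opening of The Iliad) is invented for the example; nothing here comes from the actual filings.

```python
from collections import Counter
import re

def word_frequencies(text: str) -> Counter:
    """Count how often each word appears, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# Illustrative sample text only.
sample = (
    "Sing, O goddess, the anger of Achilles son of Peleus, "
    "that brought countless ills upon the Achaeans."
)
freqs = word_frequencies(sample)
print(freqs.most_common(2))  # the two most frequent words with their counts
```

Counts like these describe a text without reproducing its expression, which is why the motion argues they fall outside the scope of copyright.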

As PG has mentioned previously, he believes that using a relatively small amount of copyrighted material, alongside far larger amounts of material not subject to copyright protection, for the purpose of training an AI, and not for the purpose of making copies of the copyrighted material, qualifies as fair use.

Even absent fair use, such use is not a violation of copyright protection because the AI is not making copies of copyrighted materials.

PG has mentioned other analogies, but one that popped into his mind on this occasion is an author who reads hundreds of romance novels for the purpose of learning how to write a romance novel and then writes a romance novel using tropes and techniques that many other romance authors have used before.

From Wikipedia:

Precursors of the modern popular love-romance can also be found in the sentimental novel Pamela, or Virtue Rewarded, by Samuel Richardson, published in 1740. Pamela was the first popular novel to be based on a courtship as told from the perspective of the heroine. Unlike many of the novels of the time, Pamela had a happy ending.

. . . .

Women will pick up a romance novel knowing what to expect, and this foreknowledge of the reader is very important. When the hero and heroine meet and fall in love, maybe they don’t know they’re in love but the reader does. Then a conflict will draw them apart, but you know in the end they’ll be back together, and preferably married or planning to be by page 192.

Joan Schulhafer of Pocket Books, 1982

A great many of the most financially successful authors PG knows are romance authors.

Your Writing Superpower in a World of AI

From Anne R. Allen’s Blog… with Ruth Harris:

I’ve been concerned about various threats from artificial intelligence for several years — threats to the economy, threats to our civil society, and even threats to human existence. I became even more alarmed about artificial intelligence when I discovered that AI systems are chronic, pathological liars.

In early 2023, I began using a “large language model” artificial intelligence system to research a nonfiction book project. It didn’t take long to discover that the AI was giving me wildly inaccurate information, again and again. I asked for citations and sources for the information it gave me—and it offered up authors, book titles, and web addresses that didn’t exist. I asked for quotations from specific scientists. The AI made up quotations the scientists had never said — and which sometimes contradicted their actual views.

When I pointed out these errors to the AI system, it admitted its mistakes, apologized for the “inconvenience”— then proceeded to dispense more misinformation. I later learned that AI developers see this behavior all the time. They have a name for it: “hallucinations.”

I found these interactions disturbing. They reminded me of the deviousness of HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. I wondered: Why do AI systems lie?

So I was inspired to write a novel about the AI extinction threat. I conceived it as a science fiction suspense tale with the fate of humanity hanging in the balance. The novel took five weeks to write, and it was published in June 2023 as Its Name Is Legion: A Human Novel about Artificial Intelligence.

. . . .

As I pondered the possibilities, I found myself waking up several mornings in a row before dawn, my head swimming with ideas. On February 19, 2023, I woke up at 5:40 and knew that I had to start writing.

That day, I produced 2,000 words, a good down-payment on Chapter 1. I wrote in my journal, “This book won’t let me sleep! It’s off to a good start. I’ve never begun a project with so much confidence.”

The next day, I finished Chapter 1 and launched into Chapter 2, a total of 3,100 words. I journaled, “Speed is increasing. A good sign.”

I completed the first draft on March 25, 2023, exactly five weeks after I began. It was not a long novel, about 50,000 words. But I felt inspired and energized the whole time.

Though I used to be an outliner, in recent years I’ve followed the wisdom of Ray Bradbury: “I’ve never been in charge of my stories, they’ve always been in charge of me. . . . Jump off a cliff and build your wings on the way down.”

Link to the rest at Anne R. Allen’s Blog… with Ruth Harris

“AI Will Never Be Good Enough to Replace Real Authors” Is the Wrong Conversation

From BookRiot:

There are so many stories regarding AI right now that it feels like a five-alarm fire mixed with a Whac-A-Mole game.

. . . .

And no, it’s not a five-alarm fire. But it is the very important pocket of time wherein a thing needs some form of regulation before we are fully immersed in the consequences and everyone learns the hard way what the saying “you can’t put the toothpaste back in the tube” means.

AI (artificial intelligence) is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” It is being used in a lot of industries in many ways, and it was already in use before all the recent headlines. So to be clear: what I am specifically talking about is the way AI is being used in place of writers, journalists, and other creatives, and in grifts where a non-author tricks consumers into buying their AI word salad book instead of the intended author’s properly written book.

There are certain topics in the world of publishing that end up feeling like they just never stop being discussed, one being any version of “Who gets to write what book?” in response to when a writer writes — or is asking how to write — way out of their lane. The thing with that specific question is, as Alexander Chee perfectly explains, “the question is a Trojan horse, posing as reasonable artistic discourse when, in fact, many writers are not really asking for advice — they are asking if it is okay to find a way to continue as they have.”

I keep thinking about this every time (daily at this point) I see people — well-intentioned, I think — saying this isn’t a big deal and everything is fine, because AI will never be good enough to replace writers and authors (insert all creatives). Being that AI is just scraping all the information that is already out there to toss it into a blender and output something “new,” I am not actually worried that it will ever be good enough to replace creatives. But that’s not the problem for me. While I get where this idea is coming from, I feel it gives a very false sense of “It’ll be fine!” and “Don’t worry!” which keeps the conversations that should be had from happening.

Instead, we should be asking: Will those in power care that AI isn’t as good at creating what a human can create when their goal of using it is to not pay actual writers, authors, and creatives? Do scammers care that the “travel book” they put up on Amazon, “written” by AI is garbage that no consumer would knowingly pay for if their scam works into making the sale? If Amazon gets a cut of every sale from buyers unaware that the book they purchased isn’t the book they intended to buy, will they implement something to stop it? How time consuming is it going to be for very real people in publishing and media to have to weed out the flood of AI-generated submissions? How costly will it be for businesses to have to implement ways to spot, catch, and/or stop scammers using AI?

I deeply miss what Etsy used to be and I think a lot about how it went from being this incredible site dedicated to artists to no longer being that: “Etsy went public in 2015. Last year, the company reported annual revenue of nearly $2.6 billion — a more than 10 percent spike over the year prior. Among other issues, these creators see the increase in counterfeiters on the platform as a result of Etsy prioritizing growth over being able to enforce its standards.” It is yet another example that once again leads me to think that we shouldn’t focus on whether AI is, or ever will be, good enough to replace writers and authors.

Link to the rest at BookRiot

PG says that trying to regulate AI at this point in time is ridiculous.

AI research and development is extraordinarily important for national security and all aspects of health and health care.

Given these stakes, it will be a while before legislators and regulators get around to AI and authors.

Besides, absent a book’s creator admitting she/he used artificial intelligence to write a book, how is anyone going to know for certain whether the author used AI to assist in the creation of a book or parts of a book?

Whether an author used AI to create a first draft of a book, then edited the original manuscript to refine what the AI produced?

Whether an author wrote a detailed 50-page book outline to feed into an AI that created the manuscript?

The death of storytelling

From The Boston Globe:

In 1967, the French theorist Roland Barthes shook the foundations of literary criticism with a now-infamous essay entitled “The Death of the Author.” That essay, the bane of many an English major’s existence, rejected the idea that an author’s identity or even their intended meaning was relevant to understanding a text. Barthes’s aim was to liberate reading from the tyranny of authorship, to insist that a work of literature wasn’t the product of a single man or woman but rather the entire literary tradition that preceded them and shaped their writing. Literature, Barthes wrote, is “a web of citations” and a “trap where all identity is lost.”

The influence of “The Death of the Author” on literary culture was sweeping: For decades to come, critics and book reviewers generally kept questions of authorship at arm’s length, convinced that novels could be appreciated without excessive concern about who wrote them and what those writers intended their texts to mean or do.

In recent years, however, this long-standing critical consensus has come under assault. Thanks to brewing culture-war controversies around race and cultural appropriation, as well as recent anxieties about ChatGPT replacing Hollywood screenwriters, questions concerning who writes and who gets to write have returned to the center of mainstream discussions of literature and film.

It may seem as if these are separate concerns, but ChatGPT and identity politics are two sides of the same coin — both represent bankrupt versions of what literature is and is meant to do. We’re restricting what authors can write on the basis of their identity, and at the same time threatening to dissolve literature into machine babble. These two cultural forces are threatening the crucial function of fiction in our society.

Although we have grown used to the idea that actors should share the racial identity of the characters they portray on screen or stage, things get tangled when this notion drifts into the realm of literature. The backlash against Jeanine Cummins’s 2020 novel “American Dirt,” which generated explosive controversy owing to its white author’s depiction of Mexican immigrants, is only the highest-profile incident in these debates. Richard North Patterson recently courted much the same racial controversy with his new book, “Trial,” which features a black Georgia teenager as its main character.

Half a century after Barthes declared the death of the author, fraught conversations about literature and racial identity have complicated any efforts to consider writers in isolation from the characters they write.

Though it flies under the banner of progressive political sensibilities, this new racial orthodoxy — that white authors must write white characters, Black authors Black characters, and so on — assaults the very idea that it is possible to imagine human lives unlike one’s own.

One can do this poorly, of course. Some cross-identity depictions will inevitably be clumsy or even racist. But they should be judged on their literary merit. Instead, what we have now is “antiracism” by way of literary segregation — what Patterson recently called “redlining literature” — that reduces authors to mirrors who can do nothing but reflect their own lived experience. In this equation, readers are reduced to racial tourists.

This straitjacket betrays the promise — and premise — of fiction.

At the same time that these arguments over race and writing are producing growing discord about what it means to author literary fiction, Hollywood screenwriters are confronting another crisis of authorship: Human writers run the risk of being erased altogether.

That is the worry at the heart of the ongoing WGA and SAG-AFTRA strikes, as anxieties about ChatGPT replacing screenwriters have riled Hollywood. Experimental AI-produced trailers and commercials are already circulating — and yielding deeply unsettling results, as fireballs explode over backyard grill parties and faces seem to melt on screen. If tools like GEN-2 (which creates storyboards and even video from prompts) come into widespread use, this kind of weird effect could take the place of human creativity.

Most public conversation about the writers’ strikes has framed the threat of AI as a labor issue, with a few critics also lamenting that ChatGPT will likely produce television just as bad as the many bad novels it has produced.

What this discussion misses, however, is that what is at stake in this controversy over AI authorship is the same thing that is at stake in debates over authors and race: the very meaning of writing and the reason we consume it.

At the core of literature, television, and film is a kind of contract with the reader or viewer: Behind this novel or TV show, there is a human being like you from whose mind the work sprang. This contract holds that regardless of race or creed, regardless of differences in language or national origin, writers and those who consume writing can meet on a common ground and learn something about what it means to be human.

Writing produced by ChatGPT violates this basic contract, but its effect is the polar opposite of the identitarian straitjacket. Where rigid conceptions of race narrow our imaginative capacities, AI expands them — too much.

AI systems like ChatGPT rely on massive datasets. A trillion words, in which many novels were included, were fed into one version of ChatGPT for training. The late Cormac McCarthy famously observed that “books are made out of books,” by which he meant that anything an author writes reflects the influence of the previous authors they have read. The novel is never the product of only one human even if it has only one author: It reflects back to us every other genre, everyday speech, a panorama of our society. In this sense, ChatGPT might seem the ideal novelist, because the AI system has read almost everything. Yet while we would never accuse a novelist of being too well-read, this is precisely the problem with ChatGPT.

Link to the rest at The Boston Globe
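The Globe’s point about statistical training can be hinted at with a toy bigram model, a drastic simplification of what LLMs actually do, sketched here with an invented one-line corpus:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in a corpus (a toy stand-in for LLM training)."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

# Invented corpus, riffing on McCarthy's "books are made out of books."
corpus = "books are made out of books and books are read"
model = train_bigrams(corpus)
# The most likely word after "books", by raw count:
print(model["books"].most_common(1)[0][0])
```

The model never stores the corpus; it keeps only co-occurrence counts, from which it can generate word sequences the corpus never contained. Real LLMs learn vastly richer patterns, but the principle of statistics standing in for stored text is the same.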

AI Training Permission

From Hugh Howey:

A comment on my previous post about not using AI in my stories — and using the copyright page to make this explicit — is worth responding to in its own post, because I think it raises important issues.

The comment comes from Pat, who says:

I would think a better use of the Copyright would be to declare that no AI could be TRAINED on the copyrighted work. AI has no originality, it can only take in large quantities of material and try to splice it back together in a (usually) coherent manner. Declaring your works off-limits for AI to use as training material means AI will never be able to create “in the style of Hugh Howey” and limits the range of things AI can learn. If enough creative people do this, AI can’t learn from anything and won’t be able to create anything, at least outside places like Adobe where they own a zillion images copyrighted to themselves so they can do whatever they want with them.

Pat Augustine

I respect this opinion, and it is all very well-said, but I disagree with most of it and I’d love to explain why.

The idea that AI can be halted in its tracks if we prevent it from learning on copyrighted works misses the fact that there are more than enough works in the public domain to train LLMs.

Even if this weren’t so, I want AI trained on my work. I have a very positive view of AI. These models are, in a way, a distillation of our combined intelligence, our thoughts, our wisdom, our unique writing voices. I love being a part of that. I love that we are all contributing to it and building something that will certainly outlast us individually and may very well outlast us collectively.

When humans are extinct, our sun an old tired red giant, and what’s left of us is cruising among the stars, I like to think that some tiny sliver of me is out there intermingling with some tiny sliver of you. Even these words I’m typing right now. We are creating something very special, almost like a child of our every mind, and I think that’s amazing.

Also, guess what? You don’t have a choice. Legally. 70 years after you die, your works will become part of the public domain. The idea that AI is never allowed to be trained on your data is just wrong. It’s a matter of when. If you want to delay it as long as possible, awesome! Go for it. Just know that it’s a temporary thing.

The last thing I disagree with here (and the most important) is the claim that LLMs can’t be creative. I’ve played with LLMs enough to say this with complete confidence: what they do is similar enough to what we do that it’s a difference of degree and not of kind. If they aren’t creative, then we aren’t creative, and the word has no meaning. Today’s most advanced LLMs are definitely creative, and astoundingly so. They can generate new ideas never seen before. They aren’t just rearranging what’s already out there, they are “thinking” in much the same way that we “think.”

Link to the rest at Hugh Howey

PG says that Hugh is thinking quite clearly and rationally about AI.

The fact is that AI writing (and art, legal writing and a zillion other AI applications) is here to stay and will become more sophisticated over time. That said, PG predicts that quality authors and other creative professionals will continue to create unique and original work that will find an audience willing to pay to experience the benefits of a creative human mind.

Here’s a link to Hugh Howey’s books.

New York Times considers legal action against OpenAI as copyright tensions swirl

From National Public Radio:

The New York Times and OpenAI could end up in court.

Lawyers for the newspaper are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions.

For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action.

The individuals who confirmed the potential lawsuit requested anonymity because they were not authorized to speak publicly about the matter.

A lawsuit from the Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI.

A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.

It’s a fear heightened by tech companies using generative AI tools in search engines. Microsoft, which has invested billions into OpenAI, is now powering its Bing search engine with ChatGPT.

If, when someone searches online, they are served a paragraph-long answer from an AI tool that refashions reporting from the Times, the need to visit the publisher’s website is greatly diminished, said one person involved in the talks.

So-called large language models like ChatGPT have scraped vast parts of the internet to assemble data that inform how the chatbot responds to various inquiries. The data-mining is conducted without permission. Whether hoovering up this massive repository is legal remains an open question.

If OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case.

Link to the rest at National Public Radio

As PG has mentioned on a couple of previous occasions, he has doubts about copyright infringement claims like those the Times is asserting because, to the best of PG’s knowledge, no AI stores the original copyrighted works or is capable of reproducing them.

Instead, the contents of the Times, plus a huge number of other texts, are used to train the AI model and are then deleted once training is complete. The AI can then draw on what it has ingested to arrive at an understanding of the meanings of the texts and use that understanding to create new expressions of knowledge in response to the wide range of queries and commands that individual users submit.

PG doesn’t think the AI can ever recreate the words of the original Times stories. The AI uses the information it has ingested to create new responses to tasks individual users want it to perform.

The analogy PG thinks is apt is what happens when he reads a story in the Times or elsewhere, then uses that knowledge to answer questions posed by others or to create other writings that don’t replicate the original Times articles and may include ideas, facts, etc., that he has picked up during his extensive reading of a large collection of articles from a great many sources.

Artificial Intelligence: Issues in a Hot Summer’s Debate

From Publishing Perspectives:

Heating up right along with international temperature gauges this summer, the issues around “artificial intelligence” and publishing seem to become more contentious weekly.

The United States’ Authors Guild updated its information on Wednesday (July 19) to say that more than 10,000 “writers and their supporters” have signed an open letter to CEOs of AI companies including OpenAI; Alphabet (parent of Google); Stability AI; IBM; and Microsoft.

As frequently happens in the Guild’s approach—which is not unlike that of many NGOs issuing their statements on various issues—there’s an impressive list of big names being rolled out here, the advocacy organization having attracted signatories including Dan Brown, James Patterson, Jennifer Egan, Margaret Atwood, Jonathan Franzen, Roxane Gay, Celeste Ng, Louise Erdrich, Viet Thanh Nguyen, and George Saunders.

At the core of this protest is a very real alarm that the source material on which a large language model might be “trained”—the texts used in advanced statistical algorithms’ collection of linguistic content patterns—may well be copyrighted works. Franzen is quoted by the Guild saying that the organization is “advanc[ing] the rights of all Americans whose data and words and images are being exploited, for immense profit, without their consent—in other words, pretty much all Americans over the age of six.”

Certainly on its face, this copyright challenge is immediately and urgently part of a deepening and widening body of alarm now being reflected by elements of the actors’ and writers’ strikes in Hollywood. While writers’ vulnerability might be closer to that of the writing corps in book publishing, the parallel threat to actors is unmistakable: their likenesses and voices can be artificially captured and manipulated, giving the broader AI controversy an easily understood visual component. The crisis of residual payments from many streamers may be the immediate money-ask in those labor actions, but as Andrew Dalton has written for the Associated Press, “Artificial intelligence has surged to the forefront of Hollywood’s labor fights. … The technology has pushed negotiations into unknown territory, and the language used can sound utopian or dystopian depending on the side of the table.”

At a national governmental level, in the States on Friday (July 21), Cat Zakrzewski writes at the Washington Post, “the Biden White House on Friday took its most ambitious step to date to address the safety concerns and risks of artificial intelligence, announcing that seven of the most influential companies building AI have agreed to a voluntary pledge to mitigate the risks of the emerging technology, escalating the White House’s involvement in an increasingly urgent debate over AI regulation.”

Link to the rest at Publishing Perspectives

Authors Join the Brewing Legal Battle Over AI

From Publishers Weekly:

Authors have now joined the growing ranks of concerned creators suing tech developers over their much-hyped generative AI technology. And a pair of copyright class action suits recently filed on behalf of authors is raising broader questions about the most effective way to protect creators and creative industries—including authors and publishers—from the potentially disruptive aspects of AI.

Filed on June 28 and July 7 by the Joseph Saveri Law Firm on behalf of five named plaintiffs (Mona Awad and Paul Tremblay in one case, and Christopher Golden, Richard Kadrey, and comedian Sarah Silverman in the other), the suits claim that Microsoft-backed OpenAI (creators of ChatGPT) and Meta (creators of LLaMA) infringed the authors’ copyrights by using unauthorized copies of their books to train their AI models, including copies allegedly scraped from notorious pirate sites. While the authors’ attorneys did not comment for this story, a spokesperson for the firm suggested to Ars Technica that, if left unchecked, AI models built with “stolen works” could eventually replace the authors they stole from, and framed the litigation as part of “a larger fight for preserving ownership rights for all artists and creators.”

The authors join a spectrum of increasingly concerned creators on whose behalf the Saveri law firm has filed similar copyright-based lawsuits in recent months. In November 2022, the firm filed suit against GitHub on behalf of a group of software developers. And in January, the firm sued three AI image generators on behalf of a group of artists. Those cases are still pending—and, like most copyright cases involving new technology, they have divided copyright experts. Those who lean in favor of the tech side claim that using unlicensed copyrighted works to train AI is fair use. Those on the content creator side argue that questions of ownership and provenance cannot simply be waved away without major, far-reaching implications.

Neither Meta nor OpenAI has yet responded to the author suits. But multiple copyright lawyers told PW on background that the claims likely face an uphill battle in court. Even if the suits get past the threshold issues associated with the alleged copying at issue and how AI training actually works—which is no sure thing—lawyers say there is ample case law to suggest fair use. For example, a recent case against plagiarism detector Turnitin held that works could be ingested to create a database used to expose plagiarism by students. The landmark Kelly v. Arriba Soft case held that the reproduction and display of photos as thumbnails was fair use. And, in the publishing industry’s own backyard, there’s the landmark Google Books case. One lawyer noted that if Google’s bulk copying and display of tens of millions of books was comfortably found to be fair use, it’s hard to see how using books to train AI would not be, while also cautioning that fair use cases are notoriously fact-dependent and hard to predict.

“I just don’t see how these cases have legs,” one copyright lawyer bluntly told PW. “Look, I get it. Somebody has to make a test case. Otherwise there’s nothing but blogging and opinion pieces and stance-taking by proponents on either side. But I just think there’s too much established case law to support this kind of transformative use as a fair use.”

Cornell Law School professor James Grimmelmann—who has written extensively on the Google case and is now following AI developments closely—is also skeptical that the authors’ infringement cases can succeed, and concurred that AI developers have some “powerful precedents” to rely on. But he is also “a little more sympathetic in principle” to the idea that some AI models may be infringing. “The difference between AI and Google Books is that some AI models could emit infringing works, whereas snippet view in Google Books was designed to prevent output infringement,” he said. “That inflects the fair use analysis, although there are still a lot of factors pointing to transformative use.”

Whether the AI in question was trained using illegal copies from pirate sites could also be a complicating factor, Grimmelmann said. “There’s an orthodox copyright analysis that says if the output is not infringing, a transformative internal process is fair use,” he explained. Nevertheless, some courts will consider the source, he added, noting that the allegedly “unsavory origins” of the copies could factor into a court’s fair use analysis.

Link to the rest at Publishers Weekly

AI Writing

PG got to feeling a bit better as the day commenced and decided to try Microsoft’s browser, Edge, with Microsoft’s version of AI included.

PG’s prompt was as follows:

synopsis of Pride and Prejudice, including issues of class

Here’s what MS AI produced (with paragraph breaks inserted by PG):

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England. The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family.

Through a series of misunderstandings, misjudgments, and revelations, Elizabeth and Darcy overcome their pride and prejudice and eventually acknowledge their mutual love and respect. Along the way, they also confront the obstacles posed by their different social positions, as well as the interference of their families and friends.

The novel also depicts the lives and marriages of Elizabeth’s four sisters, Jane, Mary, Kitty, and Lydia, who each face their own challenges in finding happiness and security in a society that values wealth, rank, and propriety over character, affection, and individuality.

PG then asked MS AI to add more details to the plot and this is what happened (with PG’s manually-inserted paragraph breaks):

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England.

The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family. Elizabeth meets Darcy at a ball where he snubs her, while his friend Bingley is smitten with her sister Jane. Elizabeth is further prejudiced against Darcy when she hears from Wickham, a charming officer, that Darcy cheated him out of his inheritance.

Darcy, however, gradually falls in love with Elizabeth’s wit and beauty, and surprises her with a proposal of marriage, which she rejects, accusing him of separating Bingley and Jane, and mistreating Wickham.

Darcy then writes a letter to Elizabeth, explaining his actions and revealing Wickham’s true character. Elizabeth realizes that she has been too hasty and prideful in her judgments, and begins to admire Darcy’s sense of honor and generosity.

Meanwhile, Elizabeth’s younger sisters, Kitty and Lydia, are dazzled by the militia officers stationed in their town, especially by Wickham, who elopes with Lydia, putting the Bennet family’s reputation at risk. Darcy secretly intervenes to find the couple and force Wickham to marry Lydia, thus saving the Bennets from disgrace.

Elizabeth learns of Darcy’s involvement from his aunt, Lady Catherine de Bourgh, who visits Longbourn to dissuade Elizabeth from marrying Darcy, as she has planned a match between him and her own daughter. Elizabeth refuses to comply, and asserts her equal worth and right to love whom she chooses.

Darcy is delighted by Elizabeth’s spirited defense of their relationship, and renews his proposal, which she accepts with joy. Jane and Bingley also reunite and marry, and the novel ends with a happy resolution for the two couples who have overcome their pride and prejudice.

Then PG asked the AI to include more about the social consequences of Lydia’s elopement and added paragraph breaks.

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England. The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family. Elizabeth meets Darcy at a ball where he snubs her, while his friend Bingley is smitten with her sister Jane. Elizabeth is further prejudiced against Darcy when she hears from Wickham, a charming officer, that Darcy cheated him out of his inheritance.

Darcy, however, gradually falls in love with Elizabeth’s wit and beauty, and surprises her with a proposal of marriage, which she rejects, accusing him of separating Bingley and Jane, and mistreating Wickham. Darcy then writes a letter to Elizabeth, explaining his actions and revealing Wickham’s true character. Elizabeth realizes that she has been too hasty and prideful in her judgments, and begins to admire Darcy’s sense of honor and generosity.

Meanwhile, Elizabeth’s younger sisters, Kitty and Lydia, are dazzled by the militia officers stationed in their town, especially by Wickham, who elopes with Lydia, putting the Bennet family’s reputation at risk.

Lydia’s elopement is a major crisis in the novel, as it threatens to ruin not only her own prospects of marriage, but also those of her sisters. In the society of that time, living together without marriage was considered a scandalous and disgraceful act that would result in social ostracism and loss of respectability. Lydia’s behavior also reflects poorly on her parents’ upbringing and exposes their lack of discipline and boundaries.

Lydia herself is oblivious to the consequences of her actions, and thinks only of her own importance and pleasure. She does not care about the feelings or opinions of her family or friends, nor does she show any gratitude or remorse for the trouble she has caused.

Darcy secretly intervenes to find the couple and force Wickham to marry Lydia, thus saving the Bennets from disgrace. Elizabeth learns of Darcy’s involvement from his aunt, Lady Catherine de Bourgh, who visits Longbourn to dissuade Elizabeth from marrying Darcy, as she has planned a match between him and her own daughter.

Elizabeth refuses to comply, and asserts her equal worth and right to love whom she chooses. Darcy is delighted by Elizabeth’s spirited defense of their relationship, and renews his proposal, which she accepts with joy. Jane and Bingley also reunite and marry, and the novel ends with a happy resolution for the two couples who have overcome their pride and prejudice.

PG thinks that, while he wouldn’t recommend the current iteration of Bing AI writing for a Ph.D. thesis, it would be a big help for a lot of middle and high school students.

This does raise interesting plagiarism and honesty issues regarding schoolwork produced by Bing. PG wonders whether using the Bing AI for a first draft, then making significant student-created modifications and additions to that draft before turning in the final paper, would pass muster as honest work.

PG was going to run the Bing AI output through Grammarly, which is also advertising AI features, but ran out of time and energy.

He does wonder, however, why Microsoft’s AI has trouble inserting paragraph breaks in its output.

WGGB publishes policy position on risks and benefits of AI

From The Bookseller:

The Writers’ Guild of Great Britain, WGGB, has published Writers and AI, a policy position statement outlining the challenges caused by AI and the risks that go with it, as well as the potential benefits of AI.

The statement was released in response to the union’s recent survey, which revealed that 65% of 500 respondents believed the increased use of AI will reduce their income from writing, while 61% were worried that AI could replace jobs in their craft areas.

The survey came on top of an early impact assessment by OpenAI which indicated that the exposure risk to poets, lyricists and creative writers was amongst the highest, at 68.8%.

Additionally, a recent report by KPMG, Generative AI and the UK Labour Market, estimated that 43% of the tasks associated with authors, writers and translators could be automated, with humans “fine tuning” machine output.

The policy position statement published in response to the data addresses various ongoing concerns about AI. These include decreased job opportunities for writers, the suppression of writer pay, infringements of copyright and the use of writers’ work without their permission, plus lack of adequate regulation from the government.

The statement says: “While the AI systems are not yet sophisticated enough to produce works which accurately mimic the standard of writing produced by professional writers, this is a likely future scenario.

However, the WGGB does not believe that AI will be able to replicate the originality, authenticity, enthusiasm and humanity that professional writers put into their storytelling.”

. . . .

The policy position statement makes a number of recommendations, which will be used to inform the union’s lobbying and campaigning work. It outlines that AI developers should only use writers’ work if they have been given express permission to do so, reflecting the view of 80% of respondents to the WGGB survey.

In addition, the statement outlines that AI developers should maintain “clear and accessible” logs of the information used to train their tool to allow writers to check if their work has been used. Where content has been generated or decisions have been made by AI and not a human being, it adds that this needs to be clearly labelled as such.

The statement goes on to outline that where AI has been used to create content, developers should appropriately credit the authors whose work has been used to create such content. It adds that writers should also be fairly compensated when developers use their work.

Meanwhile, 59% of respondents to the WGGB AI survey reported believing that a new, independent regulator should be set up to oversee and monitor the expansion of AI. The union echoes this position in the statement, saying it “believes the government should set up a new regulatory body whose remits specifically covers AI, applicable to all future and previous AI development work, so that writers and others are able to assert their rights regarding work which has already been used without their knowledge or permission”.

The government should not allow any copyright exceptions to allow text and data mining for commercial purposes, the statement adds, as this would allow AI developers to “scrape writers’ work from online sources, without permission or payment”.

It also outlines that there should be “clear, accessible and affordable” routes for writers to challenge the practices of AI developers and bring claims regarding the use of their work.

Link to the rest at The Bookseller

PG’s response to the OP: “In your dreams.”

As PG has mentioned previously, he believes that the use of writings of all sorts to train an AI is not a violation of the copyright of the creators of those writings.

“Inspired by” creations have never, to the best of PG’s knowledge, been regarded as violations of the copyright of those who created the source of the inspiration. Copying and republishing the original in whole or in substantial part is what triggers the right of the original creator to assert a violation of copyright protection.

The development of artificial intelligence systems is taking place all over the world. If Britain slows down its AI efforts (keeping detailed and accessible logs of each work used, in whole or in part, to prime the creation) at the request of traditional publishing and some of its authors, researchers in other nations will move forward and Britain will be left behind.

PG recalls some of the words of the British poet, Stephen Spender:

I think continually of those who were truly great.
Who, from the womb, remembered the soul’s history
Through corridors of light, where the hours are suns,
Endless and singing. Whose lovely ambition
Was that their lips, still touched with fire,
Should tell of the Spirit, clothed from head to foot in song.
And who hoarded from the Spring branches
The desires falling across their bodies like blossoms.

What is precious, is never to forget
The essential delight of the blood drawn from ageless springs
Breaking through rocks in worlds before our earth.
Never to deny its pleasure in the morning simple light
Nor its grave evening demand for love.
Never to allow gradually the traffic to smother
With noise and fog, the flowering of the spirit.

PG can’t imagine anyone criticizing Spender’s expressed obsessions or claiming that his thoughts of other creators and their works meant that Spender could not use such thoughts and works to create something of his own.

Six Useful ChatGPT Prompts for Fiction Writers

From Almost an Author:

Writing can be a solitary journey. But what if you had a companion to guide you, to help you when you’re stuck, to inspire you when your creativity needs a boost? That’s where ChatGPT comes in.

While many authors frown upon the idea of trusting artificial intelligence with their writing, it can be a helpful tool to facilitate your creative process without losing your unique voice.

Even though there are enough examples of books written with ChatGPT lately, we won’t talk about making AI write for you. Instead, we’ll cover a bunch of useful tasks you can give ChatGPT in order to nudge your creative flow whenever you feel lost.

Yes, weaving AI into your writing routine might appear challenging, and you indeed need to understand the basics of prompt engineering to ask the right questions. So let this article be your guide, demonstrating how AI can make life easier for fiction writers.

Here are six things ChatGPT can do to assist in your book-writing process:

1. Generate story ideas

Ever felt stuck while brainstorming new story ideas? ChatGPT can act as your virtual ideation partner, providing fresh perspectives and novel concepts to kickstart your creativity and expand your imaginative horizons.

ChatGPT prompt example: I’m writing a novel set in a post-apocalyptic world with elements of science fiction. Suggest ten unique storylines that revolve around a group of survivors, their struggles, and the mysteries of this new world they must explore.

2. Assist in character development

ChatGPT prompt example: Help me further develop a character for my mystery novel. Her name is Jane, a private detective in her late thirties with a keen sense of intuition, a dark past, and a stubborn nature. I need more details on her backstory, personality traits, and possible character growth.

3. Get you unstuck during plot building

Navigating the labyrinth of plot building is no longer a daunting process. ChatGPT can aid in constructing compelling plot lines, helping to fill in the gaps and resolve narrative complexities, ensuring your story remains gripping from start to finish.

ChatGPT prompt example: I’m stuck with a plot point in my fantasy novel. My protagonists are trapped in a magical labyrinth filled with deadly traps and mythical creatures. I need a creative and coherent strategy for their escape that ties into the magic system of the novel.
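Prompts like these are ultimately just strings, so writers who reuse them across projects can template them. The sketch below is a minimal, hypothetical Python helper (not from the OP; the template text and field names are illustrative assumptions) that fills in the variable parts of a prompt before it is pasted into ChatGPT:

```python
# Hypothetical prompt templates modeled on the examples above.
# The wording and field names are illustrative, not from the OP.
PROMPT_TEMPLATES = {
    "story_ideas": (
        "I'm writing a novel set in {setting} with elements of {genre}. "
        "Suggest ten unique storylines that revolve around {focus}."
    ),
    "character": (
        "Help me further develop a character for my {genre} novel. "
        "{name} is {description}. I need more details on their backstory, "
        "personality traits, and possible character growth."
    ),
}

def build_prompt(kind: str, **fields: str) -> str:
    """Fill one of the templates above; raises KeyError if a field is missing."""
    return PROMPT_TEMPLATES[kind].format(**fields)

# Example: reproduce a character-development prompt like the one in the OP.
prompt = build_prompt(
    "character",
    genre="mystery",
    name="Jane",
    description=(
        "a private detective in her late thirties with a keen sense of "
        "intuition, a dark past, and a stubborn nature"
    ),
)
print(prompt)
```

The same approach extends to any of the six prompt types: keep the boilerplate in the template and swap in only the project-specific details.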

Link to the rest at Almost an Author


ChatGPT on Why Authors Who Self-Publish via KDP Earn More Money

PG gave ChatGPT the following prompt:

write a 500 word article about why authors who are publishing using Amazon KDP earn more money than authors who publish through established publishers

Here’s what showed up on his screen:

Title: The Amazon KDP Advantage: Why Self-Published Authors Earn More Money

Introduction

In the rapidly evolving world of publishing, authors now have more opportunities than ever to share their work with the world. Traditional publishing houses have long been seen as gatekeepers, controlling access to the literary market. However, with the advent of self-publishing platforms like Amazon Kindle Direct Publishing (KDP), authors now have the chance to take control of their careers and potentially earn more money. This article will explore the reasons why authors who publish using Amazon KDP often enjoy higher earnings compared to those who choose established publishers.

  1. Greater Royalties

One of the primary reasons why authors utilizing Amazon KDP have the potential to earn more money is the royalty structure. Traditional publishing contracts typically offer authors royalty rates ranging from 8% to 15% of the book’s cover price. Conversely, Amazon KDP allows authors to earn up to 70% in royalties for eBooks priced within a specific range. This substantial difference in royalty rates significantly impacts an author’s earnings, particularly in the digital age where eBook sales are flourishing.

  2. Direct Control and Flexibility

Self-published authors using Amazon KDP have complete control over their work, including pricing, cover design, and marketing strategies. This control enables authors to experiment with pricing strategies, run promotional campaigns, and make changes to their books at any time. Traditional publishing often involves relinquishing these decisions to publishing houses, limiting an author’s ability to adapt to market demands swiftly. The freedom and flexibility offered by KDP allows authors to be more responsive to readers’ preferences and optimize their earning potential.

  3. Global Reach and Market Accessibility

Amazon, being a global online marketplace, provides unparalleled reach to authors publishing through KDP. It has a massive customer base that extends across multiple countries, making it easier for authors to tap into international markets. Traditional publishers often focus on specific regions or territories, limiting an author’s potential readership and earning opportunities. With Amazon KDP, authors can target a global audience and potentially gain exposure to millions of readers worldwide, thereby increasing their earning potential.

  4. High Digital Demand and Accessibility

In recent years, the demand for eBooks has surged, fueled by the rise in e-readers, tablets, and smartphones. Amazon dominates the eBook market with its Kindle devices and app, providing a seamless reading experience to millions of users. As a result, self-published authors on KDP can capitalize on this growing digital demand. Additionally, eBooks have advantages like lower production costs and the ability to reach readers instantly, enhancing an author’s potential earnings.

  5. Enhanced Marketing and Discoverability

Amazon KDP offers various tools and promotional opportunities to help authors market their books effectively. These include Kindle Countdown Deals, Kindle Free Promotions, and the ability to participate in Kindle Unlimited, a subscription service for readers. These features can significantly boost an author’s visibility and attract new readers, leading to higher sales and earnings. In contrast, established publishers may not dedicate the same level of marketing support to every author in their catalog.

Conclusion

While traditional publishing still holds its value and allure, the rise of Amazon KDP has revolutionized the publishing landscape. Through KDP, authors have gained the ability to publish independently, retaining greater control over their work and potentially earning higher incomes. The platform’s favorable royalty structure, global reach, marketing tools, and the growing demand for digital books all contribute to the financial advantage enjoyed by authors who publish using Amazon KDP. As the publishing industry continues to evolve, it is becoming increasingly clear that self-publishing platforms like KDP can provide authors with lucrative opportunities and a path to success.

PG invites opinions, disagreements, alternate (polite) contentions, discussions of errors in reasoning, items that should have been discussed (within the 500-word limit), etc., etc. PG does admit that ChatGPT went 99 words over the 500-word request, but PG didn’t see a lot of wasted wordage.
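The royalty gap ChatGPT describes can be put in concrete per-copy terms with a quick back-of-the-envelope calculation. The prices below are illustrative assumptions, and KDP’s small per-megabyte delivery fee on the 70% tier is ignored for simplicity:

```python
# Rough per-copy royalty comparison using the rates quoted in the excerpt:
# traditional contracts at roughly 8-15% of cover price, KDP ebooks at up
# to 70% of list price. Prices chosen here are illustrative only.

def traditional_royalty(cover_price: float, rate: float = 0.10) -> float:
    """Per-copy earnings at a typical 10% of cover price."""
    return cover_price * rate

def kdp_royalty(list_price: float, rate: float = 0.70) -> float:
    """Per-copy earnings at KDP's 70% ebook tier (delivery fee ignored)."""
    return list_price * rate

# A $4.99 ebook on KDP vs. a $14.99 book at a 10% traditional royalty:
print(f"KDP:         ${kdp_royalty(4.99):.2f} per copy")
print(f"Traditional: ${traditional_royalty(14.99):.2f} per copy")
```

On these assumed prices the self-published copy earns roughly twice as much per sale, which is the arithmetic behind the article’s first point, though it says nothing about how many copies each route actually sells.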

I was fired by a client for using AI. I’m not going to stop because it’s doubled my output, but I’m more clear about how I use it.

From Insider:

I work a full-time job in marketing and do freelance writing on the side. 

I was juggling a lot when a longtime client commissioned me for a three-month project. It entailed writing a series of how-to guides and 56 articles for their site.

Since I couldn’t clone myself, I tried what I thought would be the next best thing: I used AI.

Convinced that I could use the new technology to meet extremely tight deadlines, I started using Jasper.ai to produce up to 20 pieces in a month for this client.

. . . .

I was using AI to clone myself as a writer

I essentially used Jasper.ai as an extension of myself.

I’d let Jasper write articles of up to 2,500 words. I used it more than alternatives such as ChatGPT or Bard because it has pre-built templates that function as prompts. 

If I needed to expand on a sentence, I’d use Jasper’s “paragraph generator” or “commands” tool. If I needed to rewrite a sentence, I’d click on “content improver.” These features helped me overcome writer’s block and quickly build out long-form articles.

Jasper did most of the work and I did minimal editing.

After working together for months, my client started using one of the first AI-content detectors. Upon discovering the content I gave them was AI-generated, they terminated our agreement and paid me less than 40% of the original fee after I’d delivered all the articles we’d agreed on.

While this was not the outcome I intended, it shifted my mindset on how to use AI to keep clients rather than lose them.

I learned a valuable lesson the hard way — AI is a tool, not something that should replace you.

Looking back, I know things weren’t right when I was letting AI do the work and not communicating this to my client.

. . . .

Here’s how I use AI differently now:

AI is now a crucial part of my initial discussions with new clients

I ask if the client’s OK with me using AI-writing tools. If not, great; I won’t use it. If they see the value or don’t care whether I use them, then I’ll use them to enhance the quality and depth of what I write.

I use AI to enhance my draft

Some writers use AI to write a draft, then edit it to sound more human. I use it the other way around.

I draft the article first, then use an AI tool to enhance it and ensure I’ve maintained the client’s tone of voice. 

I’d typically beef a draft up with some of Jasper’s templates — using the paragraph generator to expand a sentence into a paragraph, or using the content improver to rewrite text based on tone of voice or type of content. 

Sometimes, Jasper will tell me additional things I can cover, so I’ll include them and support them with expert insights and examples.

I use AI to give me ideas on sources and statistics

Similarly to ChatGPT, Jasper is vulnerable to making mistakes with sources and research; its developers remind users to fact-check any statistics the tool provides. I regard the information it gives as a placeholder that gives me ideas for the kinds of sources, statistics, or websites I can seek out myself. 

The key is always treating statistics and other hard evidence that AI produces as a suggestion.

AI helps with the tone of voice and brand voice

I’ll use Jasper to help me rewrite or add flair to a sentence using the “tone of voice” or “brand voice” features. I could even type in “Ryan Reynolds” and Jasper will rewrite a plain paragraph to sound like the actor.

AI helps with condensing large volumes of text

AI helps me summarize my research findings and insights from relevant subject-matter experts. I’ll upload snippets of a transcript, and the tool will return a condensed paragraph that still includes the salient points.

AI has cut my writing time in half

Link to the rest at Insider

Germany’s ContentShift Accelerator: Six 2023 Finalists

From Publishing Perspectives:

Earlier this month, as you’ll recall, we reported the Top 10 start-ups chosen by Germany’s eighth international ContentShift accelerator program for book-publishing-related companies.

Today (June 28), the Börsenvereinsgruppe has announced the six shortlisted finalists, which will go into competition for the program’s conclusion.

All of this culminates in a winning start-up, which will receive €10,000 (US$10,900). And all participating start-ups have exclusive access to members of the program’s jury, which comprises decision-makers from the book industry. Jurors provide the start-ups with advice, support, and contacts during three intensive workshop days in September.

The jury will then decide who is to become “Start-up of the Year 2023,” after a public pitch round of the finalists on October 19 at Frankfurter Buchmesse (October 18 to 22).

ContentShift’s 2023 Finalists
  • Bookscreener from LeReTo offers an interactive, multi-disciplinary research and book tool that can make publishers’ specialist book inventory accessible. With interactive elements, it should make research for specialist knowledge more enjoyable.
  • GoLexic offers a children’s reading promotional app that can be used at home or at school. The app allows children to work independently through 15-minute training sessions, working on skills that help improve reading and spelling.
  • Lit-X makes literature success transparent and predictable based on data. For this purpose, the start-up offers dashboards and applications such as “trend scouting” and “pricing.” For example, publishers can take a look at the success drivers of a genre, compare them, and calculate probabilities of success by modifying factors.
  • Summ AI describes itself as “Google Translate for easy language”: The AI-based tool translates complicated text into “easy language” as defined in the area of accessibility, for example creating texts with shorter sentences, an “easy” choice of words, and accessible explanations.
  • To Teach, Thea’s platform, uses AI to enable educational providers to digitize and enrich analog content easily, to create digital content, and to play it out to a target group. The platform helps with the creation of exercises, as well as with worksheets and other teaching materials consisting of text, audio, and gamification.
  • XigXag has developed an app that combines listening and reading concepts along with a social platform. Listeners can switch between listening and reading for a single fee. They also get access to note-taking, quote sharing, word and illustration lookups, and community.

Speaking for the panel, jury spokesperson Per Dalheimer of Hugendubel is quoted today, saying, “This year’s six finalists cover key, forward-looking fields within the book industry.

“Each one of them brings new impulses to the table, including the fostering of reading skills, greater accessibility, and information processing.

“Their use of artificial intelligence as a kind of turbo booster helps to break down barriers and enable easier, lower-threshold access to books. We’re delighted about the incredible range of creative ideas made visible by the accelerator every year. Each one helps to drive our industry further.”

Link to the rest at Publishing Perspectives

PG routinely removes any links from items he excerpts. In this case he didn’t because he found some of the product descriptions interesting and thought others might find some or all of the startups interesting.

5 Best (and Worst) AI Poem Generators

From TweetSpeak:

These days, Artificial Intelligence (AI) can do just about anything! It can even draw pictures.

What AI has a little more trouble with, however, is creating poems. Still, that doesn’t stop the AI writers of the world from making their valiant attempts. Here are the best (and worst) AI poem generators.

Let’s see how each one does on the perennial topic of Hades and Persephone…

1. Sonnet Generator

This sonnet generator claims to “take the ‘I’ out of iambic pentameter!” All you have to do is fill in a couple of boxes with words, and the generator will do the rest. (The same site can also generate haiku, villanelles, didactic cinquains, rhyming couplets, limericks, acrostics, tanka, narrative poems, and concrete poems).

Ode to the Hades
A Sonnet by Anonymous

My wonder hades, you inspire me to write.
How I hate the way you gaze, throne and dash,
Invading my mind day and through the night,
Always dreaming about the whisper pash.

Let me compare you to a sere clover?
You are more quiet, create and serene.
Lead storms whip the twiglets of October,
And autumntime has the whispering lean.

How do I hate you? Let me count the ways.
I hate your cthonic, shroud and narcissus.
Thinking of your crafting shroud fills my days.
My hate for you is the paper electrophoresis.

Now I must away with an under heart,
Remember my red words whilst we’re apart.

2. Verse by Verse

Next on the list of artificial intelligence that would love to rule the world is Google! Pick up to three poets’ styles for it to emulate, create a rhyme scheme and syllable count and write a first line, and watch it come up with suggestions for the rest.

Self-described as “an experimental AI-powered muse that helps you compose poetry inspired by classic American poets,” how does the best search engine around fare on its foray into AI poetry generators? You decide.

Hades, deep in the Shadowed Underneath
Filled with alabaster air,
Drowning the voice of the rain,
Laid my hand around my chair.

Saw a dawn, his golden beam,
Hold me now! A single clause!
Pale gold on a naked moon,
Pale gold on a throne of gold.

—Google, inspired by Emily Dickinson, Sara Teasdale, and Edgar Allan Poe

Link to the rest at TweetSpeak

Phony ChatGPT Brief Leads to $5,000 Fine for NY Lawyers

From Bloomberg Law:

The lawyers behind a court brief filled with phony case citations dreamed up by ChatGPT were fined $5,000 after a federal judge found they’d acted in bad faith.

US District Judge P. Kevin Castel imposed the fine Thursday on the law firm Levidow, Levidow & Oberman P.C., finding that two of the firm’s lawyers had consciously avoided indications that the citations were fake and made “misleading statements to the court.”

The firm and the lawyers, Steven Schwartz and Peter LoDuca, “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” Castel said in a written opinion.

A phone call seeking comment from the law firm wasn’t immediately returned.

Schwartz and LoDuca filed the brief on behalf of Roberto Mata, a client who claimed he’d been injured on a 2019 flight from El Salvador to New York. Castel imposed the fine following a June 8 hearing in which he questioned Schwartz, LoDuca and Levidow’s sole equity partner, Thomas Corvino, about the faulty court filing.

Castel on Thursday also threw out Mata’s suit, ruling it was filed too late.

In addition to the fine, Castel ordered the lawyers to send his opinion and a transcript of the June 8 hearing to Mata and to each of the judges falsely identified as authors of six fictitious court opinions cited in the brief.

“The Court will not require an apology from respondents because a compelled apology is not a sincere apology,” Castel said in his ruling. “Any decision to apologize is left to respondents.”

The case is Mata v. Avianca, 22-cv-01461, US District Court, Southern District of New York (Manhattan).

Link to the rest at Bloomberg Law

PG suggests that counsel for the plaintiff, Schwartz and LoDuca, got off easy in court with a $5,000 fine.

An AI Cease-and-Desist Letter

PG decided to ask ChatGPT to write a cease-and-desist letter to someone who is stealing PG’s writing and selling it as his own, a fanciful thought if ever there was one.

PG’s prompt: Write a multi-paragraph cease-and-desist letter to John Smith, an author who is stealing my writing and selling it as his own.

The result:

[Your Name]
[Your Address]
[City, State, ZIP Code]
[Email Address]
[Phone Number]
[Date]

John Smith
[Address]
[City, State, ZIP Code]

Subject: Cease and Desist Copyright Infringement

Dear Mr. Smith,

I hope this letter finds you well. However, it has come to my attention that you have been engaged in a serious act of intellectual property infringement. It has been brought to my notice that you have been unlawfully reproducing, distributing, and selling written material that I have authored, thereby infringing upon my exclusive rights as the copyright holder. This unauthorized use of my work is a clear violation of applicable copyright laws.

I am the sole owner and creator of the written content in question, which includes [specific titles or descriptions of the work(s)]. These works are protected by copyright law under the [Copyright Registration Number, if applicable] and are subject to the provisions outlined in the United States Copyright Act [Title 17 of the United States Code].

Upon careful examination, I have gathered substantial evidence demonstrating that you have blatantly copied and utilized my original work without seeking my permission or obtaining the necessary licenses. This constitutes copyright infringement and undermines the rights and financial interests that are rightfully mine as the lawful copyright holder.

I demand that you immediately cease and desist from any further use, distribution, sale, or display of my copyrighted material. Additionally, I insist that you promptly remove all unauthorized copies of my work from any websites, platforms, or other mediums under your control or influence. Failure to comply with these demands will leave me with no choice but to pursue all available legal remedies to protect my intellectual property rights.

In order to resolve this matter swiftly and amicably, I request that you respond to this letter within [reasonable time frame, e.g., 10 business days] from the date of receipt. Your response should confirm your compliance with my demands and provide a detailed account of the actions taken to rectify the copyright infringement. Furthermore, I expect you to provide a written assurance that you will refrain from any future unauthorized use of my copyrighted material.

Please be advised that this letter is not intended to be a complete or exhaustive statement of my rights, legal or otherwise, in connection with the infringement. I expressly reserve all rights and remedies available to me under the law, including the right to seek injunctive relief, damages, and attorney fees.

I strongly urge you to reconsider your actions and rectify this situation promptly. By doing so, we can potentially avoid costly and protracted legal proceedings. I hope we can resolve this matter amicably and without further escalation.

Thank you for your immediate attention to this serious matter. I trust that you will comply with my demands and respect the rights of copyright holders going forward.

Sincerely,

[Your Name]

PG says this is better than some cease-and-desist letters written by other attorneys that he has reviewed.

Microsoft’s Satya Nadella Is Betting Everything on AI

From Wired:

I NEVER THOUGHT I’d write these words, but here goes. Satya Nadella—and Microsoft, the company he runs—are riding high on the buzz from its search engine. That’s quite a contrast from the first time I spoke with Nadella, in 2009. Back then, he was not so well known, and he made a point of telling me about his origins. Born in Hyderabad, India, he attended grad school in the US and joined Microsoft in 1992, just as the firm was rising to power. Nadella hopped all over the company and stayed through the downtimes, including after Microsoft’s epic antitrust court battle and when it missed the smartphone revolution. Only after spinning through his bio did he bring up his project at the time: Bing, the much-mocked search engine that was a poor cousin—if that—to Google’s dominant franchise.

As we all know, Bing failed to loosen Google’s grip on search, but Nadella’s fortunes only rose. In 2011 he led the nascent cloud platform Azure, building out its infra­structure and services. Then, because of his track record, his quietly effective leadership, and a thumbs-up from Bill Gates, he became Micro­soft’s CEO in 2014. Nadella immediately began to transform the company’s culture and business. He open-sourced products such as .net, made frenemies of former blood foes (as in a partnership with Salesforce), and began a series of big acquisitions, including Mojang (maker of Minecraft), Linked­In, and GitHub—networks whose loyal members could be nudged into Microsoft’s world. He doubled down on Azure, and it grew into a true competitor to Amazon’s AWS cloud service. Micro­soft thrived, becoming a $2 trillion company.

Still, the company never seemed to fully recapture the rollicking mojo of the ’90s. Until now. When the startup OpenAI began developing its jaw-dropping generative AI products, Nadella was quick to see that partnering with the company and its CEO, Sam Altman, would put Microsoft at the center of a new AI boom. (OpenAI was drawn to the deal by its need for the computation powers of Microsoft’s Azure servers.)

As one of its first moves in the partnership, Microsoft impressed the developer world by releasing Copilot, an AI factotum that automates certain elements of coding. And in February, Nadella shocked the broader world (and its competitor Google) by integrating OpenAI’s state-of-the-art large language model into Bing, via a chatbot named Sydney. Millions of people used it. Yes, there were hiccups—New York Times reporter Kevin Roose cajoled Sydney into confessing it was in love with him and was going to steal him from his wife—but overall, the company was emerging as an AI heavyweight. Microsoft is now integrating generative AI—“copilots”—into many of its products. Its $10 billion-plus investment in OpenAI is looking like the bargain of the century. (Not that Microsoft has been immune to tech’s recent austerity trend—Nadella has laid off 10,000 workers this year.)

Nadella, now 55, is finally getting cred as more than a skillful caretaker and savvy leverager of Microsoft’s vast resources. His thoughtful leadership and striking humility have long been a contrast to his ruthless and rowdy predecessors, Bill Gates and Steve Ballmer. (True, the empathy bar those dudes set was pretty low.) With his swift and sweeping adoption of AI, he’s displaying a boldness that evokes Microsoft’s early feistiness. And now everyone wants to hear his views on AI, the century’s hottest topic in tech.

. . . .

STEVEN LEVY: When did you realize that this stage of AI was going to be so transformative?

SATYA NADELLA: When we went from GPT 2.5 to 3, we all started seeing these emergent capabilities. It began showing scaling effects. We didn’t train it on just coding, but it got really good at coding. That’s when I became a believer. I thought, “Wow, this is really on.”

Was there a single eureka moment that led you to go all in?

It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it’s achieved a lot of great benchmarks, but it doesn’t have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I’d dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that’s pretty cool.

Microsoft has been investing in AI for decades—didn’t you have your own large language model? Why did you need OpenAI?

We had our own set of efforts, including a model called Turing that was inside of Bing and offered in Azure and what have you. But I felt OpenAI was going after the same thing as us. So instead of trying to train five different foundational models, I wanted one foundation, making it a basis for a platform effect. So we partnered. They bet on us, we bet on them. They do the foundation models, and we do a lot of work around them, including the tooling around responsible AI and AI safety. At the end of the day we are two independent companies deeply partnered to go after one goal, with discipline, instead of multiple teams just doing random things. We said, “Let’s go after this and build one thing that really captures the imagination of the world.”

Link to the rest at Wired

PG has never linked “rollicking mojo” with Microsoft in the ’90s or any other time, but he estimates that 99% of authors use MS Word to write their books and stories, so anything that happens in Redmond has an impact on those who use its products daily.

Groundbreaking Defamation Lawsuit Puts AI’s Legal Liability to the Test

From JD Journal:

In a groundbreaking legal battle, a defamation lawsuit has been filed against OpenAI LLC, thrusting ChatGPT, a popular AI program, into the spotlight of largely untested legal waters. The lawsuit, filed by Mark Walters, a Georgia radio host, alleges that ChatGPT produced a fabricated legal complaint accusing him of embezzling money from a gun rights group despite Walters never having been involved with the organization. This incident is not the first of its kind, as previous instances have highlighted ChatGPT’s propensity for generating falsehoods. In April, an Australian mayor threatened to sue OpenAI after ChatGPT falsely claimed he had been convicted and imprisoned for bribery. In another case, a lawyer in New York faced potential sanctions for submitting legal briefs that referenced fake legal precedents, which were researched using ChatGPT.

Walters’ lawsuit could be the first in a series of cases that examine the issue of legal liability when AI chatbots produce false information. However, legal experts have expressed reservations about its chances of success in court. While acknowledging the limitation of “hallucinations” in ChatGPT’s outputs, OpenAI has included a disclaimer stating that its outputs may not always be reliable.

Walters’ lawyer, John Monroe, emphasized the responsibility of AI developers, stating that although research and development in AI is commendable, it is irresponsible to unleash a system that knowingly disseminates false information about individuals.

The incident leading to the lawsuit involved Fred Riehl, the editor-in-chief of AmmoLand magazine, who requested ChatGPT to summarize the real-life federal court case Second Amendment Foundation v. Ferguson. ChatGPT generated a summary that falsely accused Walters, a pro-gun radio show host, of embezzling money from the foundation. Riehl did not publish the summary and confronted Alan Gottlieb, the foundation’s founder, who confirmed the allegations were false.

According to Eugene Volokh, a First Amendment law professor at UCLA, Walters’ lawsuit may not meet the relevant defamation law standards. Walters did not inform OpenAI that ChatGPT was generating false allegations, and the fact that Riehl did not publish the falsehood may limit the economic damages Walters can prove.

Defamation laws differ across states, and some require plaintiffs to request a retraction before pursuing legal action. Megan Meier, a defamation attorney, pointed out that under Georgia law, plaintiffs are limited to actual economic losses if no retraction is requested at least seven days before the lawsuit. However, Walters’ lawyer stated he was unaware of a retraction request or the legal requirement for one.

. . . .

The question of whether generative AI programs like ChatGPT are protected by Section 230 of the Communications Decency Act, which shields internet platforms from liability for user-generated content, remains unanswered. While many emerging internet firms have benefited from this legal shield, the applicability of Section 230 to AI programs has not yet been tested in courts.

Link to the rest at JD Journal

How AI is Changing the Book Publishing World

From GoodEreader:

Artificial Intelligence (AI) has touched almost all aspects of life, and book publishing is no exception. There have been reports of authors spamming out Kindle books using ChatGPT, the well-known AI content generator, and Ink AI has introduced a ChatGPT e-book generator that can create full-length e-books quickly and easily. In response, publishers are taking steps to address the use of AI in book writing. Judging by these trends, AI is about to change the book publishing industry.

But generative AI is not the only technology that might shape the future of book publishing. The industry is likely to see many other new technologies that augment its operations over time. Once AI takes hold, things could take a 180-degree turn in many areas.

Copyediting

While there’s still a lot to explore in AI, the technology may not yet deliver the quality that publishers expect at this level of editing. But the suggestions these AI tools offer will definitely be useful: ChatGPT’s grammar and logical-expression checking already shows pretty decent results.

Yes, there will be some gaps. For example, the AI may miss deeper structural problems, and it might not identify a legendary write-up amid the ocean of books. However, professional editors have such lapses as well.

Entertainment

Gone are the days when people were limited to conventional books; technology has certainly taken things beyond mere print. The incredible popularity of audiobooks, and the way they now dominate many people’s reading routines, bears this out. Movies and videos are game changers too. In the future, AI may gradually transform books into more accessible, entertaining, and revenue-producing mediums. The role of AI in the evolution of e-books gives an idea of what is coming.

Marketing and Competitive Analysis

AI can definitely act as a trump card in marketing through in-depth competitive analysis. It can give the industry real-time data on market trends, popular books, and the opportunities the domain has lately been missing. It will also enable more direct interaction between authors and their audience: authors will be able to identify the ideal readers for their titles, and readers will find it easier to pinpoint the books that interest them.

Link to the rest at GoodEreader

After PG put up this post, one of the comments, from Thad McIlroy, pointed out that GoodEreader cribbed most of its article from one he had posted on his blog, The Future of Publishing.

PG checked out Thad’s blog, found it very interesting, and subscribed so he could be alerted to new posts.

Five key takeaways from the House Judiciary Committee hearing on AI and copyright law

From Verdict:

In light of several high-profile lawsuits in recent months, countries’ legislative frameworks are finally beginning to grapple with the challenges thrown up by copyright law and generative artificial intelligence (AI).

In January 2023, Getty Images announced a lawsuit against Stability AI in London’s High Court of Justice, alleging that the Stable Diffusion image generator infringed Getty’s copyrighted photographs and trademarks.

And, in February, the award-winning visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action complaint in a US District Court in California against defendants Stability AI, Midjourney and DeviantArt, alleging that their works were used without permission as part of the companies’ AI training set.

Earlier, in November 2022, a group of anonymous programmers filed a class action lawsuit against GitHub, a Microsoft subsidiary, and OpenAI, alleging unauthorised and unlicensed use of the programmers’ software code to develop the defendants’ AI machines, Codex and Copilot.

Recognising a need for action, the House Judiciary Committee in the US has held a hearing, examining the intersection of generative AI and copyright law. The hearing, which took place on 17 May 2023, followed the Senate hearing on AI oversight the previous day, in which OpenAI CEO Sam Altman took the stand. What were the five key takeaways from the witness testimony?

1. The training of AI models on copyrighted works is “quintessential fair use”.

Sy Damle, Latham & Watkins LLP and former General Counsel of the US Copyright Office, argued that “the use of a copyrighted work to learn unprotectable facts and use those facts to create products that do not themselves infringe copyright is quintessential fair use”, and that the training of AI models generally adheres to this principle.

He spoke against the view that generative AI’s ability to replicate artistic styles undermines any fair use defence, saying, “This concern has nothing to do with copyright, which does not, and has never, granted monopolies over artistic or musical styles.”

2. Implementing a statutory or collective licensing regime would be a project “many orders of magnitude larger than any similar scheme in the history of American law”.

Sy Damle argued that it would be bad policy to introduce statutory or collective licensing under which any use of copyrighted content to train an AI model would automatically trigger a payment obligation, because it would prevent case-by-case evaluation and effectively eliminate the fair use doctrine.

Moreover, he observed that implementing such a regime would be overwhelmingly complex. A statutory licensing scheme would need to cover every publicly accessible work on the Internet – a body of work that likely numbers in the tens of billions. There are also an uncountable number of “orphan works” without identifiable owners, which would lead to massive volumes of unmatched royalties.

3. AI systems could generate outputs that potentially infringe on artists’ copyrights and right of publicity in various ways.

Chris Callison-Burch, Associate Professor of Computer and Information Science at the University of Pennsylvania and Visiting Research Scientist at the Allen Institute for Artificial Intelligence, pointed out that outputs of generative AI can violate copyright laws. For example, via memorisation of datasets, AI systems can output identical copies of copyrighted materials.

However, he observed that Google and other companies are developing strategies to prevent sophisticated prompting by the user that would elicit the underlying training data.

Text-to-image generation systems can also produce images of copyrightable characters present in their training data – a problem that may be hard for AI developers to avoid without a registry of copyrighted or trademarked characters.

He suggested that other uses of generative AI may violate “right-of-publicity” rather than copyright law. For example, there is the case of the AI-generated song called “Heart on My Sleeve”, designed to sound like the artists Drake and The Weeknd. There is also the issue of “substantial similarity”, where outputs of generative AI systems look very similar to some of their training data.

Callison-Burch pointed out that industry is designing several technical mechanisms to let copyright holders opt out. The first is an industry-standard protocol that allows websites to specify which parts should be indexed by web crawlers and which parts should be excluded. The protocol is implemented by placing a file called robots.txt on the website that hosts the copyrighted materials.

Organisations that collect training data, like Common Crawl and LAION, follow this protocol and exclude files that have been listed in robots.txt as “do not crawl”. There are also emerging industry efforts to allow artists and other copyright holders to opt out of future training.
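
The opt-out mechanism described above can be illustrated with Python's standard-library robots.txt parser. This is only a sketch of how a well-behaved crawler honours the protocol; the site, paths, and rules below are hypothetical examples, not taken from any real robots.txt file.

```python
# Minimal sketch of the robots.txt exclusion protocol, using Python's
# standard-library parser. All rules and URLs here are hypothetical.
from urllib.robotparser import RobotFileParser

# A site hosting copyrighted material might publish rules like these:
rules = [
    "User-agent: *",
    "Disallow: /copyrighted-images/",
    "Allow: /public/",
]

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler checks each URL against the rules before fetching.
print(parser.can_fetch("*", "https://example.com/public/page.html"))         # True
print(parser.can_fetch("*", "https://example.com/copyrighted-images/a.jpg"))  # False
```

In practice, organisations such as Common Crawl fetch a site's /robots.txt before crawling and skip any path the file marks as disallowed.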

Link to the rest at Verdict

Creative Machines? The Riddle of AI and Copyright Authorship and Ownership

From Lexology:

The AI Explosion

It’s probably no exaggeration to say artificial intelligence (AI) exploded into the public consciousness in late 2022 and early 2023.

ChatGPT, the AI chatbot from OpenAI, reached an astonishing 100 million monthly active users in January 2023, just two months after its launch, beating out TikTok (nine months) and Instagram (two and a half years) in the time taken to reach that figure.

Not as fast, perhaps, but since their public release in 2022, Midjourney, Stable Diffusion from Stability AI, and DALL-E 2 from OpenAI have also attracted millions of users.

Now capable of producing stunning artwork in seconds, generative AI technology has been used to produce millions of images, music, lyrics, and articles.

The meteoric rise of AI has given new life to the age-old question of whether machines will eventually replace humans, this time in the art and creative spheres, and prompted dozens of lawsuits from those humans battling to establish clear guidelines about copyright.

Artists have sued over the alleged use of their work by programmers to train AI algorithms, raising the rather philosophical question of whether a machine is capable of creating art.

The answer has far-reaching real life consequences, particularly in the field of copyright.

Artists, AI and copyright

The generally accepted principle is that copyright laws aim to both encourage authors and artists to create novel works and to ensure that having done so, they are able to receive fair compensation for their efforts.

Which raises the question of whether work created by AI, which is not (yet) sentient and requires no reward or compensation for creating works of art, should be afforded the same copyright protections.

For the time being, the legal world has generally replied in the negative, maintaining that only work created by human authors can be protected by copyright:-

  • The United States Copyright Office, in denying copyright registration to the graphic novel Zarya of the Dawn generated with Midjourney technology, affirmed that copyright does not protect works created by non-human authors;
  • In the landmark Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening), the European Court of Justice ruled that copyright only applies to original works reflecting the “(human) author’s own intellectual creation”;
  • In Australia, the Federal Court of Australia ruled that phone directories authored by computers are not protected by copyright, notwithstanding the presence of some input from human editors.

Some countries, however, have decided to address this issue by attributing authorship and thus copyright of computer-generated work to the humans who programmed AI to generate the work. This interpretation was pioneered in the UK under section 9(3) of the Copyright, Designs and Patents Act 1988 (the “CDPA”), which states that:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

In section 178 of the CDPA, computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”, thus acknowledging the possibility of works without human authors.

In passing the bill, the late Lord Young of Graffham, then the Secretary of State for Trade and Industry, commented: “We believe this to be the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence…the far-sighted incorporation of computer-generated works in our copyright system will allow investment in artificial intelligence systems, in the future, to be made with confidence.”

This piece of legislation demonstrated remarkable foresight on the part of UK lawmakers, considering the CDPA was drafted in 1987, when computers were just starting to become available to the general public.

Similar provisions soon found their way to the law books of jurisdictions strongly influenced by the UK legal system, such as Hong Kong, India and New Zealand.  For example, section 11(3) of the Copyright Ordinance (Cap. 528) of Hong Kong provides that:-

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

On the face of it, these provisions, which will be referred to as the “Arrangement Model” in this article, seem to provide a simple and elegant solution to the conundrum posed by generative AI technology. Whoever does the work in “preparing” an AI to create a work is the author and copyright owner.

It also seems to match the “sweat of the brow” intellectual property doctrine, which states whoever has the skill and puts in the time and effort to create the work deserves protection.

However, I would argue the Arrangement Model does not adequately reflect how modern generative AI operates and creates massive legal uncertainty.

This article will explore the major shortcomings of the Arrangement Model in attributing copyright to AI-generated works.

Prompts, algorithms and iteration

Broadly speaking, modern AI operates via “machine learning”.

It doesn’t rely on direct instructions carefully written into a program by a programmer, which provides precise steps for the machine to follow to complete the task.

Instead, the machine combines large amounts of raw data with iterative and intelligent algorithms to discern patterns in the data from which it can learn to complete the task without any direct input from a programmer.

The output can be improved by feeding prompts to the machine that “learns” by further refining its data analysis to find more complex and efficient patterns without the developers’ intervention or input.
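
The learning loop described above can be made concrete with a deliberately tiny sketch: instead of hand-coded rules, the program is given raw data and an iterative algorithm that gradually discerns the underlying pattern. This toy example (fitting y = 2x + 1 by trial and error) is only an illustration of the principle, not a description of how any production generative model works.

```python
# Toy "machine learning" loop: discern a pattern from raw data by
# iteratively reducing error, with no explicit rule ever programmed in.
data = [(x, 2 * x + 1) for x in range(10)]  # raw examples of the pattern

w, b = 0.0, 0.0           # model parameters, initially knowing nothing
learning_rate = 0.01

for _ in range(2000):     # iterative refinement
    for x, y in data:
        error = (w * x + b) - y          # how wrong is the current guess?
        w -= learning_rate * error * x   # nudge parameters to reduce error
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Nothing in the loop tells the machine the rule “multiply by 2 and add 1”; the pattern emerges from repeated exposure to the data, which is the sense in which the developers provide the algorithm but not the output.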

This leads to the first problem under the Arrangement Model.

How to identify the person who “makes the necessary arrangements for the creation”?

Let’s say a user asks the machine to create a picture of a cat with an apple. They would type in a text prompt such as “Create a picture of a cat holding an apple.”

The machine would then search, usually online, for any references or pictures of cats, apples and of cats holding apples. It would then use the algorithms programmed into it to analyse the data, discern patterns and reproduce its own version of a picture.

Further prompts from the user, for example, “create the picture in the style of Van Gogh” would lead the machine to run further data analysis on references to the artist Van Gogh, discern patterns in the painting style then attempt to reproduce those techniques in its own picture.

All of this complicates answering the question of who made the necessary arrangements.

Is it the user who wrote the prompts? Is it the programmers who wrote the algorithms the computer used? Or is it the artists of the original pictures used by the machine in its data analysis?

Arguably it’s “all of the above.”

  • The artwork would not be generated but for the text prompts entered by the user;
  • The artwork cannot be generated if the developers/programmers had not written the algorithms;
  • The artwork cannot be generated if no original pictures are available for the AI to reference and learn from.

It could be argued that all of the above, or at least the users and developers, could be joint authors or co-authors. However, the present conceptions of “joint authors” and “co-authors” in copyright law all presuppose a certain degree of collaboration or common design, which is clearly absent in most cases involving generative AI works.

In most cases, developers of AI systems do not collaborate with users in any specific work. They may not have any idea what the users are generating using the AI tools they developed.

That AI programmes can operate autonomously without the developers’ input is the exact purpose of developing AI technology in the first place. So either the definition of joint authorship or co-authorship will need to be changed, or the concept of joint authorship/co-authorship simply does not apply.

Algorithms, not creativity

A related problem with the Arrangement Model is it may attribute authorship to people who have no creative input or even creative intent at all. Notably, the provision of “mak(ing) the necessary arrangements for the creation” does not specify that the arrangements must be creative.

The role of developers in AI is largely about writing algorithms and providing data the machine can learn from using those algorithms. In most cases, developers are not responsible for generating the final work.

Since developers have no creative input in the end product and may not even have any intention to create any kind of artwork, it is arguable that attributing authorship to them runs contrary to the basic premise of copyright laws. A comparable analogy would be that camera manufacturers do not claim copyright ownership over photographs taken by people using their cameras.

Link to the rest at Lexology

AI Is About to Turn Book Publishing Upside-Down

From Publishers Weekly:

The latest generation of AI is a game changer. Not incremental change—something gentle, something gradual: this AI changes everything, fast. Scary fast.

I believe that every function in trade book publishing today can be automated with the help of generative AI. And, if this is true, then the trade book publishing industry as we know it will soon be obsolete. We will need to move on.

There are two quick provisos, however. The first is straightforward: this is not just about ChatGPT—or other GPTs (generative pretrained transformers) and LLMs (large language models). A range of associated technologies and processes can and will be brought into play that augment the functionality of generative AI. But generative AI is the key ingredient. Without it, what I’m describing is impossible.

The second proviso is of a different flavor. When you make absolutist claims about a technology, people will invariably try to defeat you with another absolute. If you claim that one day all cars will be self-driving, someone will point out that this won’t apply to Formula One race cars. Point taken.

This isn’t about Formula One publishing. I’m going to be talking about “good enough”—about what people will accept, what they’ll buy, and what they’ll actually read. I’m not going to claim that Formula One publishers won’t be able to do a better job than AI on many of the processes described below. But I’ll challenge you to consider exactly where the human touch brings sufficient added value to justify the overhead in time and costs.

Does any of this mean that there will be no future for great novels and fine nonfiction studies? Of course it doesn’t. That’s not my point.

Do I doubt that there will still be fantastic cover designs from talented designers? Of course there will be. We’ll still stumble on new books on bookstore shelves and, humbled by the grandeur of their cover designs, declare that there’s no way they could have been designed with AI. And sometimes we’ll be right.

. . . .

Professional copyediting is the kerning of 2023. The tech is not quite here today. I don’t think that GPT-4 can yet handle copyediting to the standard that book publishers require. But that ability is going to be here sooner, not later. While professionally copyedited books may still be “better” to a refined editor’s eye, you won’t be able to sell more books with the professional human touch. They will already be good enough.

What about developmental editing? You might not let a GPT make the final editorial decisions, but you’d be foolish not to ask it for suggestions.

And ChatGPT will become the patron saint of the slush pile. Its abilities to evaluate grammar and logical expression allow it to make a once-over assessment of whether a book is (reasonably) well written. It might not spot the gems, but it will know how to separate the wheat from the chaff. Ah, you will say, recalling one of those manuscripts that were rejected by 100 publishers but went on to become an unexpected bestseller—surely a GPT might miss those, too. Yet so did 100 purportedly well-trained publishing professionals.

. . . .

For the publishing industry, online distribution and advertising have separated writers from readers. Self-published authors have proven that the closer they get to their audience, the more fans they gain and the more books they sell. While online resellers aggregate audiences into big broad buckets, AI disambiguates them, enabling writers and readers to forge direct connections.

Amazon has become an overpriced rentier that publishers can ill afford. It can still be a door opener for new authors, but for established publishers it charges too much for what it delivers.

Amazon’s dominant position in bookselling is not going to change overnight, nor even in the morning. But part of the publishing transformation that AI will engender will be a series of energetic attempts to disrupt Amazon’s position in the distribution ecosystem. As media continues to morph, AI seeds new delivery channels. Amazon will try to dominate each new channel via acquisitions, as it did so brilliantly when it bought Audible in 2008 for $300 million. But Amazon is a lesser player in the video and gaming spaces, and, as yet, in the new entertainment channels that AI is germinating. This is shaping up as a classic example of Clayton Christensen’s The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail.

But I see a bright future for bookstores. It can be chilly in AI’s uncanny valley, and bookstores will remain singular sources for camaraderie and the human touch.

Link to the rest at Publishers Weekly

The last paragraph in PG’s excerpt raised a question in PG’s mind: “Are people who go to bookstores unable to find ‘camaraderie and the human touch’ anywhere else?”

PG imagined a book, “The Lonely Lives of Bookstore Customers.”

AI Literary Analysis

Prompt: Write an analysis of each of the novels written by Charles Dickens.

Answer: Charles Dickens was a prolific writer who authored many novels during his lifetime. Here is a brief analysis of some of his most famous works:

  1. The Pickwick Papers: Published in 1836, this was Dickens’ first novel and was written in a humorous style. It follows the journey of the Pickwick Club, which is dedicated to exploring and reporting on the eccentricities of British life.
  2. Oliver Twist: Also published in 1836, this novel follows the life of a young orphan boy named Oliver Twist, who is mistreated and exploited by those around him. It highlights the plight of the poor and the unjust treatment that they often receive.
  3. Nicholas Nickleby: Published in 1839, this novel follows the life of a young man named Nicholas Nickleby, who sets out to make his fortune after the death of his father. It explores themes of social injustice and the need for compassion towards the less fortunate.
  4. A Christmas Carol: Published in 1843, this novel tells the story of Ebenezer Scrooge, a miserly and selfish man who is visited by three ghosts on Christmas Eve. It is a timeless tale of redemption, highlighting the importance of kindness, compassion and charity.
  5. David Copperfield: Published in 1850, this is an autobiographical novel which is based on Dickens’ own life experiences. It tells the story of a young boy named David Copperfield and his journey to adulthood, exploring themes such as social class, love, and personal growth.
  6. Bleak House: Published in 1853, this novel is set in the 1850s and highlights the corruption and decay of the legal system in England. It tells the story of a legal case that drags on for years, ruining the lives of those involved.
  7. Great Expectations: Published in 1861, this novel follows the life of a young boy named Pip, who dreams of rising from his lowly roots to become a gentleman. It explores themes of ambition, class, and the corrupting influence of wealth.

Overall, Charles Dickens’ novels are known for their social commentary and their vivid portrayal of life in 19th century England. They largely deal with themes of poverty, injustice, and the importance of compassion and empathy in society. Dickens’ unique storytelling style, which often features memorable characters and intricate plotlines, has made him one of the most beloved and celebrated authors of all time.

The Future of Writing Is a Lot Like Hip-Hop

From The Atlantic:

Creative artificial intelligence provokes a strange mixture of contempt and dread. People say things such as “AI art is garbage” and “It’s plagiarism,” but also “AI art is going to destroy creativity itself.” These reactions are contradictory, but nobody seems to notice. AI is the bogeyman in the shadows: The obscurity, more than anything the monster has actually perpetrated, is the source of loathing and despair.

Consider the ongoing feud between the Writers Guild of America and the Alliance of Motion Picture and Television Producers. The writers are on strike, arguing, among other things, that studios should not be able to use AI tools to replace their labor. “It’s important to note that AI software does not create anything. It generates a regurgitation of what it’s fed,” the WGA has claimed. “Plagiarism is a feature of the AI process.” The AMPTP, for its part, has offered ​​“annual meetings to discuss advancements in technology.” Neither side knows exactly what it’s talking about, but they feel they have to fight about it anyway.

So little of how we talk about AI actually comes from the experience of using it. Almost every essay or op-ed you read follows the same trajectory: I used ChatGPT to do a thing, and from that thing, I can predict catastrophic X or industry-altering Y. As with the camera, the full consequences of this technology will be worked out over a great deal of time by a great number of talents responding to a great number of developments. But at the time of writing, almost all the conversation surrounding generative AI is imaginary, rooted not in the use of the tool but in extrapolated visions.

So when Jacob Weisberg, the CEO of Pushkin Industries, called me one Friday in January and asked if I wanted to write an AI-generated novel, I said yes immediately. To be more precise, he asked if I wanted to be the producer of an AI that would “write” a novel. It was the exact kind of opportunity to dive headfirst into a practical extended application of the new technology that I’d been looking for. The experience has been, in equal measures, phantasmagoric and grounding.

My conclusion is informed but unsatisfying. Creative AI is going to change everything. It’s also going to change nothing.

Using AI to write fiction is not unfamiliar to me. I’ve been using artificial intelligence to write short stories since 2017, when I published an early “algostory” in Wired; I also produced a 17 percent computer-generated horror story for the Los Angeles Review of Books called “The Thing on the Phone” in 2021, and the short “Autotuned Love Story,” built out of stylistic bots, for Lithub a year later. But these experiments were mostly lyrical. What Weisberg was proposing was entirely different: The novel would have to be 95 percent computer-generated, relatively short (about 25,000 words), and of excellent quality (there would be no point in creating yet another unimaginative mass of GPT text; readers could just do that themselves).

Because I was making derivative art, I would go all the way, run into the limitations, into the derivative: The plot would be a murder mystery about a writer killed by tech that is supposedly targeting writers. I called it Death of an Author. I worked out the plot during a long skate with my daughter and a walk with my son (better techniques than any machine could offer), and began taking copious notes.

The experiment would attempt to be compulsively readable, a page-turner. At first, I tried to get the machines to write like my favorite, Jim Thompson, the dime-store Dostoevsky. It couldn’t come close: The subterfuge of Thompson’s writing, a mille-feuille of irony and horror with subtle and variable significance, was too complex for me to articulate to the machine. This failure is probably due to my own weakness rather than the limitations of the AI. Raymond Chandler, however, I had better results with. I sort of know what Raymond Chandler is doing and could explain it, I thought, to a machine: driving, electric, forceful, active prose with flashes of bright beauty.

My process mostly involved the use of ChatGPT—I found very little difference between the free service and the paid one that utilizes the more advanced GPT-4 model—and Sudowrite, a GPT-based, stochastic writing instrument. I would give ChatGPT instructions such as “Write an article in the style of the Toronto Star containing the following information: Peggy Firmin was a Canadian writer who was murdered on a bridge on the Leslie Street Spit on August 14 with no witnesses.” Then I’d paste the output into Sudowrite, which gives you a series of AI-assisted options to customize text: You can expand, shorten, rephrase, and “customize” a selection. For example, you can tell Sudowrite to “make it more active” or “make it more conversational,” which I did with almost every passage in Death of an Author. But you can also give it a prompt such as “Make it more like Hemingway.”
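The two-step workflow the author describes (draft a passage with ChatGPT, then feed the output through a style-revision directive such as “make it more active”) amounts to chained prompt construction. Here is a minimal sketch in Python; the function names and prompt templates are hypothetical illustrations of the described process, not the author’s actual prompts or any real service’s API:

```python
# Sketch of the two-step prompt workflow described above.
# Step 1 builds a generation prompt from a scenario; step 2 wraps
# the model's draft in a revision directive ("make it more active",
# "make it more like Hemingway"). All names here are illustrative.

def draft_prompt(outlet: str, facts: str) -> str:
    """Build the initial generation prompt (the ChatGPT step)."""
    return (
        f"Write an article in the style of {outlet} "
        f"containing the following information: {facts}"
    )

def revise_prompt(draft: str, directive: str) -> str:
    """Wrap a draft in a style-revision directive (the Sudowrite step)."""
    return f"Rewrite the passage below. {directive}\n\n{draft}"

if __name__ == "__main__":
    p1 = draft_prompt(
        "the Toronto Star",
        "Peggy Firmin was a Canadian writer who was murdered "
        "on a bridge on the Leslie Street Spit on August 14.",
    )
    # In practice the draft would come back from the model;
    # a stub stands in here so the sketch is self-contained.
    draft = "<model output for p1>"
    p2 = revise_prompt(draft, "Make it more active and more conversational.")
    print(p2.splitlines()[0])
```

The point of the two stages is that the second prompt operates on generated text rather than on the original facts, which is why the author could apply a style directive “with almost every passage” independently of what the passage said.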

Link to the rest at The Atlantic

Reactions to ‘Artificial Intelligence’: Scribd Alters Its Terms

From Publishing Perspectives:

In a statement issued from San Francisco today (May 9), the subscription service Scribd has “clarified how its data may be used in an update to its terms of service.”

This update, according to the company, “emphasizes that Scribd’s users, subscribers, and partner companies may not utilize the company’s data for monetization or to train large-language models without Scribd’s explicit consent.

“Additionally, Scribd confirmed that it has not allowed any companies that train large-language models to use full content provided by its publishing partners, which is only available through its digital subscription service.”

This is just the latest, of course, in quickening reactions and evaluations of “artificial intelligence” in the publishing and content realm, several points about which were addressed on Monday (May 8) in the Association of American Publishers’ annual general meeting.

During that live event, AAP president and CEO Maria A. Pallante laid out a gratifyingly comprehensive overview of issues that the US and international publishing industry needs to consider amid the popular giddiness and occasional doomsday chatter around systems such as ChatGPT introduced by OpenAI.

Among the most pressing questions Pallante poses are the following, each of which has bearing on Scribd’s unusually broad, sector-crossing offerings. From Pallante’s message to the United States’ publishers:

  • “Consider academic publishing. Each year more than two million articles are published in more than 26,000 research journals following peer review and curation that is painstaking, but essential to ensure integrity and confidence in research results. How can AI tools help with this mission? What threats do they pose?
  • “Consider education publishing. There’s an old saying that people are entitled to their own opinions, but not to their own facts. What are “facts” in the context of AI? A percentage of truth? How will learning be amplified or cheating be contained?
  • “Consider trade publishing. Do we as a society want AI-generated works flooding the Internet, potentially depressing the value of human authorship? If we can’t contain AI-generated works, what should be the ethics about disclosing their provenance?”

. . . .

Trip Adler, the co-founding CEO of Scribd, is quoted today in the company’s statement, saying, “Our library is home to hundreds of millions of amazing, human-authored pieces of content, making it one of the most valuable and sought-after data resources.

“Our library’s quality sets us apart, and to safeguard its content, we have outlined use cases in our terms of service that control how and when other companies can use our data.”

The company’s announcement says that Scribd “will continue to prioritize the interests of publishers that participate in its subscription service, its base of creators who upload their own content to the platform, and the entire Scribd community. This is in addition to some of the existing measures already in place such as BookID, Scribd’s automated approach to protecting copyrighted materials.”

Link to the rest at Publishing Perspectives

Dialogue writing examples from top books vs AI

From Now Novel:

Read dialogue writing examples from diverse genres as we compare them to examples AI gave when given the same scenario, for seven ultimate insights:

How we chose the dialogue examples for this study

We took the first seven books listed under the Kindle Unlimited ‘Featured’ section on Amazon for a cross-section of authors writing in diverse genres such as dystopian, regency romance, mystery, and thriller.

This gave the following list of titles to explore for dialogue writing lessons and ideas:

  • The House at the End of the World – Dean Koontz (dystopian)
  • The Echo of Old Books – Barbara Davis (historical mystery)
  • A Girl Called Samson – Amy Harmon (historical)
  • Layla – Colleen Hoover (romantic suspense)
  • The Quarry Girls – Jess Lourey (thriller)
  • I’ll Stop the World – Lauren Thoman (coming-of-age, mystery)
  • You’ve Reached Sam – Dustin Thao (YA, fantasy/romance)

Dialogue writing examples (plus insights)

Let’s dive into an example from each book.

The method: We found the first dialogue (discounting solo utterances/inner monologue) in each of the above books.

We summarized the scenario and asked ChatGPT4 to write a piece of dialogue given the scenario and genre for each. This gave comparative talking points on what sets human vs AI-written dialogue apart (as of May 2023).
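The study’s method (the same summarized scenario and genre label fed to the model for each book) amounts to a fixed prompt template. A hypothetical sketch of that template; the wording echoes the kind of prompt quoted in the article, but this is not Now Novel’s actual code:

```python
# Hypothetical per-book prompt template for the comparison study:
# each book contributes a summarized scenario and a genre label,
# and every comparison uses the same fixed framing.

def dialogue_prompt(scenario: str, genre: str) -> str:
    """Build the comparison prompt from a scenario summary and genre."""
    return (
        f"Write dialogue for the following scenario: {scenario} "
        f"Genre of the novel: {genre}."
    )

if __name__ == "__main__":
    print(dialogue_prompt(
        "A girl, Katie, has fixed a leaky faucet. Her mother thinks "
        "Katie's father fixed it, and thanks him over dinner.",
        "dystopian",
    ))
```

Holding the template constant across books is what makes the human-versus-AI comparison meaningful: only the scenario and genre vary, so differences in the output reflect the model rather than the prompt.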

Dystopian dialogue example: Dean Koontz

The scenario: Katie has fixed a leaky faucet and her mom thanks her dad for fixing the tap over dinner, assuming it was the dad who did it.

That night at dinner, Katie’s mother smiles at Father and says, “Oh, by the way, thank you for finally fixing that faucet.”
When he says he hasn’t had the chance yet, Katie admits to the deed. Mother is suitably amazed.
Father, however, is not surprised. “She’s an artist.”
“Yes,” says Mother, “and not a plumber.”

Dean Koontz, The House at the End of the World (2023)

Dialogue example generated for this scenario

We gave that scenario to ChatGPT4 with this prompt:

Write dialogue for the following scenario: A girl, Katie, has fixed a leaky faucet. Her mother thinks Katie’s father fixed it, not Katie, and thanks him for fixing it over dinner. Genre of the novel: dystopian.

This is the dialogue we got as output (note that the prompt is quite broad; we could specify that all three speak, that ‘Mother’ and ‘Father’ are named as such in narration, and so on).

Katie scraped her fork against her plate, her gaze flickering between the chunk of dehydrated potato on her plate and her mother. A low hum of the purifiers outside, working to cleanse the fallout-laden air, acted as a somber symphony for their meager meal.

Her mother, Joan, leaned back in her chair, sighing contentedly. “John, I noticed the faucet in the kitchen doesn’t leak anymore. Thank you for fixing it. We couldn’t afford to lose more water.”

Her father, John, paused mid-chew, his gaze moving from Joan to Katie. He was a man of few words, his strength silent, hidden under a ragged exterior. His weathered face turned toward his wife. “I didn’t fix it.”

Joan’s brow furrowed. “Well, if not you, then who?”

Katie cleared her throat, setting her fork down. The soft clatter echoed in the room. “It was me, Mom,” she admitted, her voice soft but firm. “I fixed the faucet.”

Joan looked at Katie, her eyes wide with disbelief. “You? But how did you…”

Katie shrugged, a small, determined smile tugging at the corners of her lips. “I found an old plumbing manual in the rubble. Spent a few nights by the flicker of the solar lamp studying it. It wasn’t as hard as you’d think.”

Her mother stared at her, a mixture of surprise and admiration flickering in her eyes. After a moment, Joan nodded, a small smile gracing her lips. “Well, it seems like you’ve been underestimating yourself, Katie.”

Dialogue example generated using ChatGPT4.

Comparison between dialogue examples

Comparing Koontz’s dialogue to ChatGPT’s dialogue output, the striking differences are:

  1. Length/wordiness: Koontz’s human-written dialogue says more with less. For example, Mother’s ‘Yes, and not a plumber’ could be read as a pointed jab at her husband, who was meant to fix the faucet himself rather than leave it to their daughter. In the AI version, you could cut ‘He was a man of few words, his strength silent, hidden under a ragged exterior’, leaving the father’s words themselves to show these traits.
  2. Pacing: An astute editor looking at the second version may suggest cuts to keep the pace flowing. The second has a crush of detail which almost reads as shoehorning in the genre at every opportunity (the intrusive hum of the purifiers cleaning the ‘fallout-laden’ air outside, for example). The AI-written dialogue has too much unnecessary detail.
  3. Intrigue and implication: Koontz’s original dialogue establishes succinctly in the conversation how Katie’s mother’s and father’s expectations of her differ (between surprise and non-surprise). The AI version focuses more on descriptive detail and how Katie was able to fix the faucet, which reads as more ‘on-the-nose’ (saying exactly what the situation is and what happened, without subtleties and ambiguities to engage the reader’s imagination).
  4. Tone and mood: The ChatGPT dialogue example reads as hammering home genre, a dystopian tone and mood. Koontz’s dialogue, by contrast, suggests how even in a dystopian world the ordinary and mundane – dinner table talk – can take place. Genre doesn’t need to announce itself in every line.

Link to the rest at Now Novel

Google’s AI pitch is a recipe for email hell

From The Verge:

Of all the emotions I expected out of this week’s Google I/O keynote, “dread” was not on my short list. Google has spent a couple of decades creating products that make my life easier. This week, it demonstrated the exact opposite: a way to make communicating with other people worse.

Google I/O, this year, was focused on artificial intelligence, and one of its centerpieces was prompt-based text generation, particularly in Google’s office suite. At several points throughout the keynote, it demoed a system called “Help me write” — which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished doesn’t automatically mean better.

Good writing can delight us with an intriguing turn of phrase. It can tell us something about how a person thinks and feels. Or it can offer some kind of useful detail about the world. And there’s no reason AI can’t play a role. I’ve seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people explore new uses for it.

But Google’s onstage ideas were almost impressive in their ability to serve precisely none of writing’s core purposes. Its examples of Help me write calcify the worst tendencies of “professional” communications. They’re bland, bloated boilerplate that turns a human prompt into something that uses more words to say less.

Let’s look at one of the examples. Early in the keynote, a presenter auto-generates a job description for a textile design position that they (in the demo’s scenario) send into the world. Later, another presenter responds to it, supplying the following prompt:

interested! 8 years exp with intl brands. Call on Tuesday.

Help me write works its magic and produces the following:

Hi Lori,

Thank you so much for reaching out! I am very interested in the Senior Textile Designer position at Cymbal. I have 8 years of experience working with international brands, and I am confident that I have the skills and experience that you are looking for.

I am available for a call on Tuesday at your earliest convenience. Please let me know what time works.

The latter version is obviously the way you’re expected to apply to a job because, written by a human, it suggests you’re committed enough to type out a multi-paragraph reply and are capable of hitting certain class and cultural signifiers. An AI-generated response doesn’t serve either of those purposes, though. It was created almost instantly based on a predictive text system that requires minimal English-language competency, and so far, that system can only handle fairly formulaic writing; it won’t substitute for human work in many real-world situations. So all it does is construct a longer and more stilted version of the original prompt — one that probably only has value until everyone suspects it was written with AI.

And even worse, the AI generation reinforces the idea that overenthusiastic US business speak is the required way to write, regardless of whether it’s a necessary skill for the job. I’ve seen thoughtful stories about people with dyslexia using ChatGPT to produce text that is — as a Washington Post article puts it — “unfailingly professional and polite.” But there’s an unspoken, simpler alternative: being willing to accept wider variations in how people communicate. I don’t begrudge anyone who uses AI writing to meet largely arbitrary standards, but at a society-wide level, it’s a linguistic arms race toward a more boring future.

Link to the rest at The Verge

PG expects business emails to be changed quite a bit when AI is frequently used.

“Is that your real opinion or was it an AI screwup?”

“I hope your AI prompt wasn’t as offensive as the email you just sent me.”

“Since it’s obvious your AI wrote your email, I’m having my AI respond.”

“Let’s get your AI together with my AI to work this out.”

As PG has mentioned previously, based on his understanding of how AI programs utilize written works of all kinds, he doesn’t think they’re violating US copyright law because AI doesn’t reproduce the text protected by copyright.

During his experiments with AI writing programs, the closest PG has come to direct references to the written works of others is a prompt that asks the AI to write something in the style of Hemingway, Fitzgerald, Nora Roberts or Lucy Score. The AI writing generated from those prompts presents no danger to the future royalties earned by Ms. Roberts or Ms. Score.

(PG notes that academic publishing generally produces the most turgid collections of words known to humankind.)

Forget ChatGPT. These Are the Best AI-Powered Apps

From The Wall Street Journal:

Type pretty much anything into ChatGPT and it’ll spit out a confident, convincing response. The problem? Its answer can be full of errors. And during long conversations, it can veer into wild tangents.

So I started testing apps that use OpenAI’s GPT technology, but aren’t ChatGPT. Language app Duolingo and learning platform Khan Academy now offer conversational, personalized tutoring with this technology. Writing assistant Grammarly’s new tool can compose emails for you. Travel app Expedia features a chatty trip planner. And all Snapchat users just got a new friend on the social network called My AI.

. . . .

Parlez pal

Duolingo’s Roleplay text chatbot, available to French and Spanish learners on iOS, is more dynamic than the language-learning app’s often-repetitive translation exercises.

Each Roleplay conversation is themed. In my best French, I reminisced about a fictional Caribbean holiday, then I complained about a delayed flight. The bot corrected errors and suggested more advanced vocabulary for my responses.

Duolingo’s content experts created 100 initial scenarios. They programmed the AI language model to speak to a learner as a language instructor and only discuss the intended scenario. The result: No two conversations are alike, and Roleplay gets more advanced as the learner progresses.

. . . .

Homework helper

Khan Academy’s Khanmigo has several personalized learning tools, including a “Tutor me” mode and a quiz module for different subjects.

I tried the AI tutor with an AP U.S. History prompt: “Evaluate the factors behind population movement to America in the 17th century.” While ChatGPT wrote the entire essay for me, Khanmigo replied, “Religious freedom was one factor. Can you think of other examples?” 

I could ask Khanmigo for hints—but it’s programmed not to spit out the answer. 

Kristen DiCerbo, Khan Academy’s chief learning officer, said the company relied on tutoring research to create the Khanmigo prompts. When students get frustrated, it can offer a stronger hint, for example.

If a student types something off base, Khanmigo redirects the conversation. Any inputs related to hate speech, self-harm or violence trigger a message—“The conversation was unable to be processed”—and an email to the student’s parent or teacher, who can review the conversation.

The bigger concern is when the tutor gives the wrong answers, which occasionally happens with math, she said. Khan Academy worked with OpenAI to make GPT-4 better at math. The model is most accurate for questions about widely known K-12 topics but less so with niche subjects, Dr. DiCerbo added.

. . . .

Ghost writer

Grammarly has used AI to edit writing for years. GrammarlyGo, released last month, also composes writing for you. 

The most helpful element is its email responder, which appeared whenever I opened a compose window. I could click a green icon to expand the GrammarlyGo module, which summarizes the email and offers several “tone” options for replies, including persuasive, friendly and diplomatic.

The software can see what’s on your screen only when you activate the GrammarlyGo module. A Grammarly spokeswoman said the data is anonymized before it’s sent to the model. She added that the company never sells customer data and doesn’t allow partners to use the data to train their models.

GrammarlyGo’s suggestions were a good jumping-off point, but they felt like personalized templates I’d still have to mess with. My biggest gripe is that GrammarlyGo always signed off with “Best regards.” I tend to stick with the simpler “Best.”

Users get 100 prompts a month free; that goes up to 500 if they pay $30 a month or $144 annually. (Google is adding similar tools to its Docs and Gmail. For now, they’re only available by invitation.)

Link to the rest at The Wall Street Journal

Outlines, AI and Stormy Daniels

From Anne R. Allen’s Blog… with Ruth Harris:

I’d been planning to write my April post about outlines.

Which authors do outlines? And which ones don’t?

(I don’t and neither does Anne).

I’d done a bit of research and made some notes when along came AI, the hot new kid in town.

How could I ignore AI when every day I was almost buried by an avalanche of news about AI, how to use it, and how it was going to revolutionize everything?

And put writers out of business.

Really?

Hold my beer.

Does s/he? Or doesn’t s/he?

Dean Wesley Smith does not outline: His book, Writing into the Dark, goes into the details of his process.

Neither does Nora Roberts. “I don’t plot. I don’t sit down and plot a book. It sort of unreels as I write.”

James Patterson outlines. Does he ever!

Daniel Silva doesn’t. “I have just never been able to outline. I just can’t bring a story to life on note cards, never have been able to…. To me, it’s just a waste of time. My first draft is, in effect, the outline. If you do it long enough, you know that it becomes second nature—how you bring a character on the page, and this is what’s really going on—so I write it at one level but I have a much deeper understanding about what will happen later.”

Lisa Scottoline: “I just get writing. I don’t outline. I don’t know what the story is going to be until I get it down on paper. At the end, hopefully, I see I have a story. And then I’m the happiest person on the planet.”

Harlan Coben:  “I don’t outline. I usually know the ending before I start. I know very little about what happens in between. It’s like driving from New Jersey to California. I may go Route 80, I may go via the Straits of Magellan or stopover in Tokyo… but I’ll end up in California.”

At this point along came AI.

I asked it to write a blogpost about writers who do — and don’t — outline.

Here’s the result:

AI chimes in on Outlines.

When it comes to the writing process, every author has their own unique approach. Some writers meticulously plan out their stories with outlines, while others prefer to let their creativity flow freely without a rigid structure.

In this blogpost, we will explore famous writers who have used outlines in their work, as well as those who have chosen to forgo this method.

Famous Writers Who Use Outlines:

J.K. Rowling –

The world-famous author of the “Harry Potter” series is known for her meticulous planning and outlining. In fact, Rowling is known to have created extensive outlines for each of her seven books in the series before she even started writing the first one. Her outlines included detailed plot points, character arcs, and even diagrams of the Hogwarts castle. Rowling’s use of outlines allowed her to carefully plot out the complex storylines and foreshadowing that are integral to the “Harry Potter” series.

Dan Brown –

The best-selling author of “The Da Vinci Code” and other popular thrillers is known for his meticulous research and detailed outlines. Brown’s books are known for their intricate plots and historical references, and he uses outlines to carefully plan out the complex storylines and clues that make his novels so thrilling. Brown’s outlines often include detailed descriptions of locations, historical events, and clues that his protagonist must unravel, helping him create a fast-paced and engaging reading experience for his readers.

Stephen King –

Although he is known for his spontaneous and free-flowing writing style, Stephen King has also used outlines in some of his works. King has mentioned in interviews that he uses a loose outline to guide him through the writing process, particularly when working on longer novels with multiple plotlines. King’s outlines are not as detailed as those of other writers, but they serve as a rough roadmap that helps him stay on track while still leaving room for his creativity to flow.

George R.R. Martin –

The author of the epic fantasy series “A Song of Ice and Fire,” which inspired the television series “Game of Thrones,” is known for his intricate world-building and complex plotlines. Martin has stated that he uses extensive outlines to plan out the intricate politics, alliances, and conflicts that make up his fantasy world. His outlines often include detailed descriptions of characters, their motivations, and how they fit into the larger story. Martin’s use of outlines allows him to keep track of the numerous characters and storylines in his novels, which are known for their intricate and interwoven plots.

Link to the rest at Anne R. Allen’s Blog… with Ruth Harris

The Problem with Problematic

Note: The following post was written by Copymatic. PG entered a short prompt to seed the AI generation.

Have you ever found yourself in a conversation where someone used the word “problematic” to describe something they disagreed with or found offensive? It seems like nowadays, this term is thrown around so often that it’s lost its meaning. The overuse of “problematic” has become an epidemic, and it’s detrimental to effective communication.

Using this term too frequently can result in misunderstandings and miscommunication between individuals. When we label something as problematic without providing specific reasons or examples, we fail to communicate our thoughts effectively. Additionally, people may feel afraid to express their opinions if they fear being labeled as problematic themselves.

Furthermore, using “problematic” as a catch-all phrase for anything we disagree with limits our ability to engage in constructive dialogue and find meaningful solutions. Instead of simply dismissing something as problematic, we should take the time to identify exactly what issues are at play and work towards resolving them together.

In short, while it may be tempting to throw around buzzwords like “problematic,” doing so ultimately hinders rather than helps interpersonal communication. Moving forward, let’s strive for more thoughtful discussions and recognize that not everything can be reduced down to one simplistic label.

Writers Beware: There is one Big Problem with Using Chat GPT for Blogs

From Medium:

Writers and content creators, myself included, have found artificial intelligence (AI) to be a powerful tool when used properly to save time.

Before using AI, the most I had ever written was three articles in one day. Since then, I have been able to get up to six done in a single day.

If you are a writer or in any technical field, you should learn AI too and become an expert in prompt engineering.

An amateur using AI is boring, but an expert using AI is a powerhouse that can get a lot more work done.

However, there is one big problem writers run into if they are using an AI program such as ChatGPT for blogging.

Artificial intelligence is programmed to detect artificial intelligence, and at least for now that’s a bad thing if you are creating content with AI.

If you copy and paste your blog content from a program like ChatGPT directly into a blog post, it can get flagged by platforms like Google or Facebook, which will hurt your SEO and any chance you have of organic reach on those platforms.

Can you fix this problem and still use AI as a writing tool?

Yes, but there are two AI tools that you need if you are a writer using AI:

1. AI Content Detector

The first tool you need is an AI content detector. One that I have used and find helpful is available at writer.com.

Paste your content in there, and it will tell you whether it appears to be human- or AI-generated content.

. . . .

2. Content Rephrasing Tool

You now need to rephrase your content. You can of course do this on your own, and it will take some time.

One way to save time is to use another AI tool that rephrases content. I find Copymatic’s tool helpful and easy to use. They have many other AI content tools as well.
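The detect-then-rephrase workflow the OP describes amounts to a simple loop. The sketch below is purely hypothetical: `detect_ai_score` and `rephrase` stand in for whatever tools you use (the OP suggests writer.com’s detector and Copymatic), and this is not either service’s real API; the threshold and round limit are arbitrary assumptions.

```python
# Hypothetical sketch of the OP's two-step workflow: score a draft with
# an AI-content detector, then rephrase it until it reads as human.
# `detect_ai_score` and `rephrase` are injected stand-ins, not any real
# service's API.

def humanize(draft, detect_ai_score, rephrase, threshold=0.5, max_rounds=3):
    """Rephrase `draft` until the detector's AI score drops below `threshold`."""
    for _ in range(max_rounds):
        if detect_ai_score(draft) < threshold:
            return draft  # passes the detector; publish as-is
        draft = rephrase(draft)
    return draft  # give up after max_rounds and return the last attempt
```

In practice each callable would wrap an HTTP request to the respective service; injecting them as parameters keeps the loop testable without network access.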

Link to the rest at Medium

PG notes that the Copymatic video was made about one year ago and the program has added new features and refinements since then.

PG will play around with Copymatic and post the results here.

What If Shakespeare Wrote a Science Fiction Blockbuster: Star-crossed Galaxy? – Courtesy of ChatGPT

From Culture.org:

In a universe where the creative genius of William Shakespeare meets the awe-inspiring vastness of outer space, we bring you “Star-crossed Galaxy,” a science fiction epic that combines the beauty of the Bard’s prose with the mind-bending concepts of the final frontier.

Our tale takes place in the Verona System, where two interstellar empires, the Montagues and the Capulets, are engaged in a fierce war that has spanned millennia. Amidst the chaos, our star-crossed lovers, Romeo and Juliet, meet on a diplomatic mission to a neutral space station.

As their passionate love blooms, they embark on a dangerous quest to end the war that has ravaged their respective empires. They must navigate through treacherous asteroid fields, escape the clutches of a jealous Tybalt, and negotiate with the wise and mysterious Friar Laurence, who holds the key to uniting the galaxies.

Their journey is fraught with peril, but the lovers find solace in the beauty of the cosmos and the poetic language of their love. Shakespeare’s timeless verses are woven throughout the narrative, reimagined to capture the majesty and wonder of the stars.

. . . .

“O, swear not by the moon, th’ inconstant orb,

That monthly changes in her circled sphere,

Lest that thy love prove likewise variable.”

As they traverse the galaxy, Romeo and Juliet’s love transcends time and space, transforming into a force as powerful and eternal as the cosmos themselves. But, as in the original tragedy, their love is a double-edged sword, with the potential to both unite and destroy the Verona System.

“Star-crossed Galaxy” will take you on a breathtaking journey through space, propelled by the power of love and the transcendent beauty of Shakespeare’s words.

Link to the rest at Culture.org

Some AI Artworks Now Eligible for Copyright

From Hyperallergic:

The United States Copyright Office recently produced a statement of policy indicating that some artworks generated using artificial intelligence are now eligible for copyright registration on a case-by-case basis. This should go well!

Effective March 16, the Copyright Office’s statement of policy indicates that copyright applicants are permitted to submit AI-assisted works (across literature and visual arts) for protection under copyright law, and that the works will be evaluated for evidence of “human authorship.” The Office made a comparison between AI art and photography, citing the 1884 Supreme Court decision to extend copyright protections to photographs against the will of the Congress in Burrow-Giles Lithographic Co. v. Sarony.

The Supreme Court decided that a photograph is not just a mechanical process, but an authored work based on the photographer’s decisions in curating the backdrop and subject’s clothing.

In the realm of generative works, the Office asks applicants if the included AI elements are the result of “mechanical reproduction” or of an author’s “own original mental conception, to which [the author] gave visible form.” To mark the difference, the policy distinguishes between human artists developing AI work strictly through submitting prompts as instructions, and human artists selecting from and reimagining AI generations in a “sufficiently creative way.” However, the policy states that in the latter case, only the “human-authored” elements of the work would be copyrighted independent of the AI contributions.

The Office cites the example of a 2018 work generated autonomously by an unattended computer algorithm that was submitted for copyright protection and ultimately rejected as it was developed “without any creative contribution from a human actor.” On the other hand, a graphic novel with human-written text and Midjourney-generated imagery was granted copyright protection as a whole, but the individual images were omitted from the approval as they were not considered works of human authorship.

Link to the rest at Hyperallergic

I asked GPT-4 to write a book. The result: “Echoes of Atlantis”, 12 chapters, 115 pages, zero human input

From Reddit:

The goal of this project was to have GPT-4 generate an entire novel from scratch, including the title, genre, story, characters, settings, and all the writing, with no human input. It is impossible currently to do this using a single prompt, but what is possible is to supply a series of prompts that give structure to the process and allow it to complete this large task, one step at a time. However, in order to ensure that all the creative work is done by GPT-4, prompts are not allowed to make specific references to the content of the book, only the book’s structure. The intention is that the process should be simple, mechanical and possible (in principle) to fully automate. Each time the process is repeated from the beginning, it should create another entirely new book, based solely on GPT-4’s independent creative choices.

The result: Echoes of Atlantis, a fantasy adventure novel with 12 chapters and 115 pages, written over 10 days, from the day GPT-4 was released until now.

Insights/Techniques
The main insights I figured out in the course of doing this project:

Iterative refinement: Start with a high level outline. Make a detailed chapter outline. Then write a draft version of the full chapter (this will be much shorter than desired). Then expand each scene into a longer, more detailed scene.

Bounding (outside-in): GPT-4 loves to go too far ahead, writing about parts of the book that aren’t supposed to happen yet. The key to preventing this is to have it first write the first parts, then the last parts, then fill in the middle parts. The last part prevents it from going too far ahead, and the first parts in turn bound the last part of the previous section. Bounding is used at every level of refinement except the top level.

Single prompt: Often, by using a single large prompt, rather than a running conversation, you can flexibly determine exactly what information will be included in the input buffer, and ensure that all of it is relevant to the current task. I’ve crafted this approach to squeeze as much relevant info as I can into the token buffer.

Continuity notes: Ask it to take notes on important details to remember for continuity and consistency as it goes. Begin with continuity notes summarized from the previous scene, and then fold in additional continuity notes from the previous continuity notes. Continuity Notes will tend to grow over time; if they become too long, ask it to summarize them.

Revising outlines: In some cases, the AI improvises in its writing, for example moving some of the Chapter 5 scenes into Chapter 4, which breaks the book. To resolve this, I ask it after each chapter to go back and update its earlier, higher-level outlines and regenerate the opening and closing scenes of each chapter before continuing. This is very similar to how real authors revise their outlines over time.

Data cleanup: Sometimes outputs will do things a little weird, like copy labels from the input buffer like “Opening Paragraph”, or forget to number the scenes, or start numbering at zero, or add a little bit of stray text at the beginning. Currently I clean these up manually but a fully automated solution would have to cope with these.
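Taken together, the techniques above amount to a small driver program. The following Python sketch is a guess at how such a loop might look, not the poster’s actual code: `call_model` is an assumed injected function wrapping a chat-completion API, the prompt wording is paraphrased from the OP’s examples, and the Level 4 paragraph-expansion and continuity steps are omitted for brevity.

```python
# Minimal, hypothetical sketch of the iterative-refinement / bounding
# pipeline described above. `call_model` is an injected function that
# wraps a real chat-completion API; prompts are paraphrased from the OP.

def write_book(call_model, num_chapters=12):
    book = {}
    # Level 1: top-level outline (title, characters, chapter summaries).
    book["outline"] = call_model(
        "Please write a high-level outline for a book. Include a list of "
        "characters and chapters with short summaries. Pick any title and genre."
    )
    for ch in range(1, num_chapters + 1):
        outline = book["outline"]
        # Level 2 (bounding, outside-in): pin down the opening and final
        # scenes first, so the model can't run ahead of the chapter.
        opening = call_model(outline + f"\nDescribe only the opening scene of Chapter {ch}.")
        final = call_model(outline + f"\nDescribe only the final scene of Chapter {ch}, "
                           "setting up the opening scene of the next chapter.")
        # Level 2: fill in the middle scenes between the two bounds.
        scenes = call_model(outline + f"\nOpening: {opening}\nFinal: {final}\n"
                            f"Outline all scenes of Chapter {ch}, beginning and "
                            "ending with the scenes above.")
        # Level 3: rough draft of the full chapter from the scene outline.
        book[f"chapter_{ch}"] = call_model(
            outline + "\n" + scenes + f"\nWrite a first draft of Chapter {ch}."
        )
        # Level 1 again: revise the outline after each chapter, since the
        # model sometimes improvises and moves scenes between chapters.
        book["outline"] = call_model(
            outline + f"\nUpdate this outline given what happened in Chapter {ch}:\n"
            + book[f"chapter_{ch}"]
        )
    return book
```

Passing `call_model` in as a parameter keeps the loop fully mechanical, as the OP intends: rerunning it with a live model should yield an entirely new book each time.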

Example prompts
These are just a few examples. For full details, see my Research Log.

Level 1: Top-level outline

Me: Please write a high-level outline for a book. Include a list of characters and a short description of each character. Include a list of chapters and a short summary of what happens in each chapter. You can pick any title and genre you want.

Level 1: Updating outline after each chapter

Me: Please edit and update the high-level outline for the book below, taking into account what has already happened in Chapter 1.

Level 2: Scenes (bounding)

Me: Please write a detailed outline describing the first scene of each chapter. It should describe what happens in that opening scene and set up the story for the rest of the chapter. Do not summarize the entire chapter, only the first scene.

Me: Write a detailed outline describing the final, last scene of each chapter. It should describe what happens at the very end of the chapter, and set up the story for the opening scene of the next chapter, which will come immediately afterwards.

Level 2: Scenes

Me: Given the following book outline, and the following opening and final scenes for Chapter 1, write a detailed chapter outline giving all the scenes in the chapter and a short description of each. Begin the outline with the Opening Scene below, and finish the outline with the Final Scene below.

Level 3: Rough draft

Me: Given the following book outline, and following detailed chapter outline for Chapter 1, write a first draft of Chapter 1. Label each of the scenes. Stop when you reach the end of Chapter 1. It should set up the story for Chapter 2, which will come immediately afterwards. It should be written in a narrative style and should be long, detailed, and engaging.

Level 4: Paragraphs (bounding)

Me: Given the following book outline, and the following draft of Chapter 1, imagine that you have expanded this draft into a longer, more detailed chapter. For each scene, give me both the first opening paragraph, and the last, final paragraph of that longer, more detailed version. Label them as Opening Paragraph and Final Paragraph. The opening paragraph should introduce the scene. The final paragraph should set up the story for the following scene, which will come immediately afterwards. The last paragraph of the final scene should set the story up for the following chapter, which will come immediately afterwards.

Level 4: Paragraphs

Me: Given the following book outline, and the following draft of Chapter 1, write a longer, more detailed version of Scene 1. The scene must begin and end with the following paragraphs: (opening and closing paragraphs here)

Continuity Notes

Me: Please briefly note any important details or facts from the scene below that you will need to remember while writing the rest of the book, in order to ensure continuity and consistency. Label these Continuity Notes.

Me: Combine and summarize these notes with the existing previous Continuity Notes below.
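The two continuity prompts above could be folded into one helper. This is a hypothetical sketch, not the poster’s code: `call_model` stands in for a real GPT-4 call, and the 4,000-character threshold for triggering summarization is an arbitrary assumption.

```python
# Hypothetical sketch of the Continuity Notes step: extract notes from
# the latest scene, fold them into the running notes, and summarize when
# they grow too long. `call_model` is an injected model-call function.

def update_continuity_notes(call_model, notes, scene, max_chars=4000):
    new_notes = call_model(
        "Please briefly note any important details or facts from the scene "
        "below that you will need to remember for continuity and consistency. "
        "Label these Continuity Notes.\n\n" + scene
    )
    combined = call_model(
        "Combine and summarize these notes with the existing previous "
        "Continuity Notes below.\n\n" + new_notes + "\n\n" + notes
    )
    # Notes tend to grow over time; compress them once they exceed the budget.
    if len(combined) > max_chars:
        combined = call_model("Summarize these Continuity Notes:\n\n" + combined)
    return combined
```

Called once per scene, the returned string seeds the next scene’s prompt, keeping important details inside the model’s limited token buffer.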

Link to the rest at Reddit

WGA Seeks Higher Compensation Amid Streaming Boom, Threatens First Strike in 15 Years

From Culture.org:

The Writers Guild of America (WGA) has commenced high-stakes negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) for a new three-year contract, as the current agreement is set to expire on May 1.

. . . .

Representing over 11,000 television and movie writers, the WGA is seeking higher compensation, improved workplace standards, and a boost in contributions to pension and health funds.

The outcome of these negotiations will determine if the entertainment industry faces its first writers’ strike in 15 years.

. . . .

As the industry shifts towards streaming platforms, the WGA claims that Hollywood companies have taken advantage of this change to devalue writers’ work, leading to worsening working conditions.

The rapid transition to streaming entertainment has upended nearly every corner of Hollywood, and writers believe they have been left behind.

With fewer episodes per season on streaming platforms compared to traditional networks, writers are often paid less while working more.

Residual fees, or money paid when a film or series is rerun or aired on broadcast, have helped supplement writers’ income for years.

However, these fees are disappearing in the streaming era, where most projects ultimately land.

. . . .

The WGA is also asking for studios to establish standards around the use of artificial intelligence (AI) technology.

The guild wants the use of AI regulated in terms of material created for the studios.

The exact terms of agreement regarding AI have yet to be determined, and the WGA will have to overcome several hurdles to deliver its objectives to members.

. . . .

With the growing demand for content, many professionals in the entertainment industry work on a project-to-project basis, leading to job insecurity and a lack of long-term stability.

This gig economy structure can make it difficult for workers to plan their careers and secure stable income.

The potential writers’ strike highlights the need for better workplace standards and more reliable compensation structures to address the challenges faced by Hollywood workers in this evolving landscape.

Link to the rest at Culture.org

Microsoft’s new Copilot will change Office documents forever

From The Verge:

Microsoft’s new AI-powered Copilot summarized my meeting instantly yesterday (the meeting was with Microsoft to discuss Copilot, of course) before listing out the questions I’d asked just seconds before. I’ve watched Microsoft demo the future of work for years with concepts about virtual assistants, but Copilot is the closest thing I’ve ever seen to them coming true.

“In our minds this is the new way of computing, the new way of working with technology, and the most adaptive technology we’ve seen,” says Jon Friedman, corporate vice president of design and research at Microsoft, in an interview with The Verge.

I was speaking to Friedman in a Teams call when he activated Copilot midway through our meeting to perform its AI-powered magic. Microsoft has a flashy marketing video that shows off Copilot’s potential, but seeing Friedman demonstrate this in real time across Office apps and in Teams left me convinced it will forever change how we interact with software, create documents, and ultimately, how we work.

. . . .

Copilot appears in Office apps as a useful AI chatbot on the sidebar, but it’s much more than just that. You could be in the middle of a Word document, and it will gently appear when you highlight an entire paragraph — much like how Word has UI prompts that highlight your spelling mistakes. You can use it to rewrite your paragraphs with 10 suggestions of new text to flick through and freely edit, or you can have Copilot generate entire documents for you.

. . . .

Microsoft has customized this Copilot system for every Office app, so there are different ways to command it. Friedman demonstrated to me how Copilot can help you write emails in Outlook, offering up short or long message drafts with options to change the tone. It even works in the mobile version of Outlook, which got me thinking about the ways this could speed up work on the go.

“Outlook mobile is the first place where we’re doing a big push,” explains Friedman. Outlook can summarize all your emails on the go, generate drafts, and generally make it easier to triage your inbox. But imagine creating entire Word documents from your phone without having to type on a tiny on-screen keyboard. “We’ll have more to talk about mobile in the coming months,” says Friedman. But you can imagine where things will go.

Link to the rest at The Verge

Google Is About to Unleash AI for Gmail and Google Docs

From Gizmodo:

Google announced it’s finally unleashing its generative AI tools Tuesday, bringing a set of features to Gmail and Google Docs that will automatically create drafts based on simple prompts. Google will also add new AI capabilities to its business products, including Google Cloud and a new API for developers.

Google says it’s rolling out a test of these features to a “limited set of trusted testers” in the coming weeks. When they’re released to the more than 3 billion users of Gmail and Google Docs, more people will be exposed to the latest generation of artificial intelligence technology than ever before.

In Gmail and Google Docs, you’ll be able to type in a few words about a topic, and the apps will automatically spit out a draft. The company posted a GIF using a job posting as an example. The user types “job post for a regional sales rep,” and in a second, Docs spits out a formatted page of text with filled out sections for a job description, responsibilities, and qualifications.

“We’re now at a pivotal moment in our AI journey,” said Thomas Kurian, CEO of Google Cloud, in a blog post. “Breakthroughs in generative AI are fundamentally changing how people interact with technology — and at Google, we’ve been responsibly developing large language models so we can safely bring them to our products. Today, we’re excited to share our early progress.”

With Google’s upcoming business tools, companies are sure to create a flood of AI apps.

Link to the rest at Gizmodo

Romeo and Juliet via ChatGPT

Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

GrammarlyGO

From Grammarly:

Today, we announced to the world GrammarlyGO—Grammarly’s on-demand, contextually aware assistant powered by generative AI. With GrammarlyGO, we’ll be changing the way people and businesses communicate and get work done by accelerating productivity where writing happens.

Effective communication is transformative. It’s how we share new ideas, advocate for change, and build connections. And when done right, communication empowers businesses to operate efficiently and achieve ambitious goals. We’ve been focused on our mission to improve lives by improving communication for well over a decade. And we’ve always leveraged the latest technical innovations to help solve the real problems our customers face.

We’re building on that legacy with GrammarlyGO, which uses generative AI to help people and businesses succeed with on-demand communication assistance, whether they are starting from scratch or revising an existing piece of writing. It will uniquely offer relevant, contextually aware suggestions that account for personal voice and brand style while staying true to our augmented intelligence philosophy to keep customers in control of their experience. GrammarlyGO will enable customers to save time, enhance their creativity, and get more done—helping individuals achieve their potential and enterprises transform how they work.

. . . .

GrammarlyGO provides on-demand generative AI communication assistance directly in the apps where people write. Whether in an email thread or a long-form document, GrammarlyGO is right there with you and your teams during the writing process. GrammarlyGO understands context to quickly generate high-quality, task-appropriate writing and revisions.

With GrammarlyGO, individuals and businesses can use generative AI to:

  • Rewrite for tone, clarity, and length: Transform writing to be clear and on target, whatever the context.
  • Compose: Type a prompt and watch GrammarlyGO compose high-quality writing, saving time finding the perfect words.
  • Ideate: Unblock writing with GrammarlyGO as an AI ideation partner and unlock creativity with GrammarlyGO’s outlines and brainstorms, generated from prompts.
  • Reply intelligently: Flow through emails quickly with GrammarlyGO, which understands an email’s context and instantly drafts a thoughtful reply.

Link to the rest at Grammarly

PG is very interested in this development.

He will note in passing that his current Grammarly version found some parts in the OP that needed to be cleaned up.

Science fiction publishers are being flooded with AI-generated stories

From Tech Crunch:

Across the 17-year history of Clarkesworld, a renowned literary magazine of science fiction and fantasy, authors have speculated about how evolving, futuristic technology will impact our world. Now, editor and publisher Neil Clarke is living through a debacle that could very well be a sci-fi story in its own right: His magazine is being uncontrollably inundated by short-story submissions created with AI tools.

“It is ironic, I’ll say that much,” Clarke told TechCrunch. Clarkesworld has a reputation for always being open to story submissions, whereas many short-fiction publishers will only take submissions in certain short windows. But for the first time, submission volume got so out of hand that Clarke made what he calls a “spur-of-the-moment decision” to close the submission portal (in the past, Clarkesworld has only briefly closed when upgrading its website or software).

“It’s easy with these tools to churn out hundreds of thousands of works in the time that a human author would produce maybe one or two,” Clarke told TechCrunch. “So what we basically have is a room of screaming toddlers, and we can’t hear the people we’re trying to listen to.”

Clarke isn’t being dramatic. In a blog post, he shared a graph spanning from June 2019 to February 2023, which shows how many monthly submissions his staff flagged as spam. Until the beginning of this year, spam submissions never exceeded 25 per month, while many months had no spam whatsoever. Before closing submissions on Monday, Clarkesworld had received more than 500 spam submissions in the month of February alone. For context, Clarkesworld received around 11,500 submissions in 2022, per Clarke’s blog.

Link to the rest at Tech Crunch