AI Literary Analysis

Prompt: Write an analysis of each of the novels written by Charles Dickens.

Answer: Charles Dickens was a prolific writer who authored many novels during his lifetime. Here is a brief analysis of some of his most famous works:

  1. The Pickwick Papers: Published in 1836, this was Dickens’ first novel and was written in a humorous style. It follows the journey of the Pickwick Club, which is dedicated to exploring and reporting on the eccentricities of British life.
  2. Oliver Twist: Also published in 1836, this novel follows the life of a young orphan boy named Oliver Twist, who is mistreated and exploited by those around him. It highlights the plight of the poor and the unjust treatment that they often receive.
  3. Nicholas Nickleby: Published in 1839, this novel follows the life of a young man named Nicholas Nickleby, who sets out to make his fortune after the death of his father. It explores themes of social injustice and the need for compassion towards the less fortunate.
  4. A Christmas Carol: Published in 1843, this novel tells the story of Ebenezer Scrooge, a miserly and selfish man who is visited by three ghosts on Christmas Eve. It is a timeless tale of redemption, highlighting the importance of kindness, compassion and charity.
  5. David Copperfield: Published in 1850, this is an autobiographical novel which is based on Dickens’ own life experiences. It tells the story of a young boy named David Copperfield and his journey to adulthood, exploring themes such as social class, love, and personal growth.
  6. Bleak House: Published in 1853, this novel is set in the 1850s and highlights the corruption and decay of the legal system in England. It tells the story of a legal case that drags on for years, ruining the lives of those involved.
  7. Great Expectations: Published in 1861, this novel follows the life of a young boy named Pip, who dreams of rising from his lowly roots to become a gentleman. It explores themes of ambition, class, and the corrupting influence of wealth.

Overall, Charles Dickens’ novels are known for their social commentary and their vivid portrayal of life in 19th century England. They largely deal with themes of poverty, injustice, and the importance of compassion and empathy in society. Dickens’ unique storytelling style, which often features memorable characters and intricate plotlines, has made him one of the most beloved and celebrated authors of all time.

The Future of Writing Is a Lot Like Hip-Hop

From The Atlantic:

Creative artificial intelligence provokes a strange mixture of contempt and dread. People say things such as “AI art is garbage” and “It’s plagiarism,” but also “AI art is going to destroy creativity itself.” These reactions are contradictory, but nobody seems to notice. AI is the bogeyman in the shadows: The obscurity, more than anything the monster has actually perpetrated, is the source of loathing and despair.

Consider the ongoing feud between the Writers Guild of America and the Alliance of Motion Picture and Television Producers. The writers are on strike, arguing, among other things, that studios should not be able to use AI tools to replace their labor. “It’s important to note that AI software does not create anything. It generates a regurgitation of what it’s fed,” the WGA has claimed. “Plagiarism is a feature of the AI process.” The AMPTP, for its part, has offered “annual meetings to discuss advancements in technology.” Neither side knows exactly what it’s talking about, but they feel they have to fight about it anyway.

So little of how we talk about AI actually comes from the experience of using it. Almost every essay or op-ed you read follows the same trajectory: I used ChatGPT to do a thing, and from that thing, I can predict catastrophic X or industry-altering Y. As with the camera, the full consequences of this technology will be worked out over a great deal of time by a great number of talents responding to a great number of developments. But at the time of writing, almost all the conversation surrounding generative AI is imaginary, rooted not in the use of the tool but in extrapolated visions.

So when Jacob Weisberg, the CEO of Pushkin Industries, called me one Friday in January and asked if I wanted to write an AI-generated novel, I said yes immediately. To be more precise, he asked if I wanted to be the producer of an AI that would “write” a novel. It was the exact kind of opportunity to dive headfirst into a practical extended application of the new technology that I’d been looking for. The experience has been, in equal measures, phantasmagoric and grounding.

My conclusion is informed but unsatisfying. Creative AI is going to change everything. It’s also going to change nothing.

Using AI to write fiction is not unfamiliar to me. I’ve been using artificial intelligence to write short stories since 2017, when I published an early “algostory” in Wired; I also produced a 17 percent computer-generated horror story for the Los Angeles Review of Books called “The Thing on the Phone” in 2021, and the short “Autotuned Love Story,” built out of stylistic bots, for Lithub a year later. But these experiments were mostly lyrical. What Weisberg was proposing was entirely different: The novel would have to be 95 percent computer-generated, relatively short (about 25,000 words), and of excellent quality (there would be no point in creating yet another unimaginative mass of GPT text; readers could just do that themselves).

Because I was making derivative art, I would go all the way, run into the limitations, into the derivative: The plot would be a murder mystery about a writer killed by tech that is supposedly targeting writers. I called it Death of an Author. I worked out the plot during a long skate with my daughter and a walk with my son (better techniques than any machine could offer), and began taking copious notes.

The experiment would attempt to be compulsively readable, a page-turner. At first, I tried to get the machines to write like my favorite, Jim Thompson, the dime-store Dostoevsky. It couldn’t come close: The subterfuge of Thompson’s writing, a mille-feuille of irony and horror with subtle and variable significance, was too complex for me to articulate to the machine. This failure is probably due to my own weakness rather than the limitations of the AI. Raymond Chandler, however, I had better results with. I sort of know what Raymond Chandler is doing and could explain it, I thought, to a machine: driving, electric, forceful, active prose with flashes of bright beauty.

My process mostly involved the use of ChatGPT—I found very little difference between the free service and the paid one that utilizes the more advanced GPT-4 model—and Sudowrite, a GPT-based, stochastic writing instrument. I would give ChatGPT instructions such as “Write an article in the style of the Toronto Star containing the following information: Peggy Firmin was a Canadian writer who was murdered on a bridge on the Leslie Street Spit on August 14 with no witnesses.” Then I’d paste the output into Sudowrite, which gives you a series of AI-assisted options to customize text: You can expand, shorten, rephrase, and “customize” a selection. For example, you can tell Sudowrite to “make it more active” or “make it more conversational,” which I did with almost every passage in Death of an Author. But you can also give it a prompt such as “Make it more like Hemingway.”

Link to the rest at The Atlantic
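The two-stage workflow described in the excerpt (draft with ChatGPT, then push the output through a revision pass such as “make it more active”) is easy to approximate in code. Below is a minimal sketch, assuming the OpenAI Python client; Sudowrite is a web app with no public API relied on here, so its “customize” pass is stood in for by a second model call.

```python
# Minimal sketch of the draft-then-customize workflow from the excerpt.
# Assumes the OpenAI Python client; the customize() step is a hypothetical
# stand-in for Sudowrite's web-based "customize" options, not its actual API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft(instruction: str) -> str:
    """Stage 1: generate a first draft from a plain-language instruction."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": instruction}],
    )
    return response.choices[0].message.content

def customize(text: str, direction: str) -> str:
    """Stage 2: revise the draft with a style direction, e.g. 'Make it more active.'"""
    prompt = f"Rewrite the following passage. {direction}\n\n{text}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The example instruction below is the one quoted in the excerpt.
article = draft(
    "Write an article in the style of the Toronto Star containing the "
    "following information: Peggy Firmin was a Canadian writer who was "
    "murdered on a bridge on the Leslie Street Spit on August 14 with no witnesses."
)
print(customize(article, "Make it more active and more conversational."))
```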

Reactions to ‘Artificial Intelligence’: Scribd Alters Its Terms

From Publishing Perspectives:

In a statement issued from San Francisco today (May 9), the subscription service Scribd has “clarified how its data may be used in an update to its terms of service.”

This update, according to the company, “emphasizes that Scribd’s users, subscribers, and partner companies may not utilize the company’s data for monetization or to train large-language models without Scribd’s explicit consent.

“Additionally, Scribd confirmed that it has not allowed any companies that train large-language models to use full content provided by its publishing partners, which is only available through its digital subscription service.”

This is just the latest, of course, in quickening reactions and evaluations of “artificial intelligence” in the publishing and content realm, several points about which were addressed on Monday (May 8) in the Association of American Publishers’ annual general meeting.

During that live event, AAP president and CEO Maria A. Pallante laid out a gratifyingly comprehensive overview of issues that the US and international publishing industry needs to consider amid the popular giddiness and occasional doomsday chatter around systems such as ChatGPT introduced by OpenAI.

Among the most pressing questions Pallante poses are several bearing on Scribd’s unusually broad, sector-crossing offerings. From Pallante’s message to the United States’ publishers:

  • “Consider academic publishing. Each year more than two million articles are published in more than 26,000 research journals following peer review and curation that is painstaking, but essential to ensure integrity and confidence in research results. How can AI tools help with this mission? What threats do they pose?
  • “Consider education publishing. There’s an old saying that people are entitled to their own opinions, but not to their own facts. What are ‘facts’ in the context of AI? A percentage of truth? How will learning be amplified or cheating be contained?
  • “Consider trade publishing. Do we as a society want AI-generated works flooding the Internet, potentially depressing the value of human authorship? If we can’t contain AI-generated works, what should be the ethics about disclosing their provenance?”

. . . .

Trip Adler, the co-founding CEO of Scribd, is quoted today in the company’s statement, saying, “Our library is home to hundreds of millions of amazing, human-authored pieces of content, making it one of the most valuable and sought-after data resources.

“Our library’s quality sets us apart, and to safeguard its content, we have outlined use cases in our terms of service that control how and when other companies can use our data.”

The company’s announcement says that Scribd “will continue to prioritize the interests of publishers that participate in its subscription service, its base of creators who upload their own content to the platform, and the entire Scribd community. This is in addition to some of the existing measures already in place such as BookID, Scribd’s automated approach to protecting copyrighted materials.”

Link to the rest at Publishing Perspectives

Dialogue writing examples from top books vs AI

From Now Novel:

Read dialogue writing examples from diverse genres as we compare them to examples AI generated when given the same scenario, for seven ultimate insights:

How we chose the dialogue examples for this study

We took the first seven books listed under the Kindle Unlimited ‘Featured’ section on Amazon for a cross-section of authors writing in diverse genres such as dystopian, regency romance, mystery, and thriller.

This gave the following list of titles to explore for dialogue writing lessons and ideas:

  • The House at the End of the World – Dean Koontz (dystopian)
  • The Echo of Old Books – Barbara Davis (historical mystery)
  • A Girl Called Samson – Amy Harmon (historical)
  • Layla – Colleen Hoover (romantic suspense)
  • The Quarry Girls – Jess Lourey (thriller)
  • I’ll Stop the World – Lauren Thoman (coming-of-age, mystery)
  • You’ve Reached Sam – Dustin Thao (YA, fantasy/romance)

Dialogue writing examples (plus insights)

Let’s dive into an example from each book.

The method: We found the first dialogue (discounting solo utterances/inner monologue) in each of the above books.

We summarized the scenario and asked ChatGPT4 to write a piece of dialogue given the scenario and genre for each. This gave comparative talking points on what sets human vs AI-written dialogue apart (as of May 2023).

Dystopian dialogue example: Dean Koontz

The scenario: Katie has fixed a leaky faucet and her mom thanks her dad for fixing the tap over dinner, assuming it was the dad who did it.

That night at dinner, Katie’s mother smiles at Father and says, “Oh, by the way, thank you for finally fixing that faucet.”
When he says he hasn’t had the chance yet, Katie admits to the deed. Mother is suitably amazed.
Father, however, is not surprised. “She’s an artist.”
“Yes,” says Mother, “and not a plumber.”
Dean Koontz, The House at the End of the World (2023)

Dialogue example generated for this scenario

We gave that scenario to ChatGPT4 with this prompt:

Write dialogue for the following scenario: A girl, Katie, has fixed a leaky faucet. Her mother thinks Katie’s father fixed it, not Katie, and thanks him for fixing it over dinner. Genre of the novel: dystopian.

This is the dialogue we got as output (note that the prompt is quite broad; we could specify that all three speak, that ‘Mother’ and ‘Father’ are named as such in narration, and so on).

Katie scraped her fork against her plate, her gaze flickering between the chunk of dehydrated potato on her plate and her mother. A low hum of the purifiers outside, working to cleanse the fallout-laden air, acted as a somber symphony for their meager meal.

Her mother, Joan, leaned back in her chair, sighing contentedly. “John, I noticed the faucet in the kitchen doesn’t leak anymore. Thank you for fixing it. We couldn’t afford to lose more water.”

Her father, John, paused mid-chew, his gaze moving from Joan to Katie. He was a man of few words, his strength silent, hidden under a ragged exterior. His weathered face turned toward his wife. “I didn’t fix it.”

Joan’s brow furrowed. “Well, if not you, then who?”

Katie cleared her throat, setting her fork down. The soft clatter echoed in the room. “It was me, Mom,” she admitted, her voice soft but firm. “I fixed the faucet.”

Joan looked at Katie, her eyes wide with disbelief. “You? But how did you…”

Katie shrugged, a small, determined smile tugging at the corners of her lips. “I found an old plumbing manual in the rubble. Spent a few nights by the flicker of the solar lamp studying it. It wasn’t as hard as you’d think.”

Her mother stared at her, a mixture of surprise and admiration flickering in her eyes. After a moment, Joan nodded, a small smile gracing her lips. “Well, it seems like you’ve been underestimating yourself, Katie.”
Dialogue example generated using ChatGPT4.

Comparison between dialogue examples

Comparing Koontz’s dialogue to ChatGPT’s dialogue output, the striking differences are:

  1. Length/wordiness: Koontz’s human-written dialogue says more with less. For example, the mother’s ‘Yes, and not a plumber’ could be read as a pointed jab at her husband that he was the one who was meant to fix the faucet, not their daughter. In the AI version, you could cut ‘He was a man of few words, his strength silent, hidden under a ragged exterior’, leaving the father’s words themselves to show these traits.
  2. Pacing: An astute editor looking at the second version may suggest cuts to keep the pace flowing. The second has a crush of detail which almost reads as shoehorning in the genre at every opportunity (the intrusive hum of the purifiers cleaning the ‘fallout-laden’ air outside, for example). The AI-written dialogue has too much unnecessary detail.
  3. Intrigue and implication: Koontz’s original dialogue establishes succinctly in the conversation how Katie’s mother’s and father’s expectations of her differ (between surprise and non-surprise). The AI version focuses more on descriptive detail and how Katie was able to fix the faucet, which reads as more ‘on-the-nose’ (saying exactly what the situation is and what happened, without subtleties and ambiguities to engage the reader’s imagination).
  4. Tone and mood: The ChatGPT dialogue example reads as hammering home genre, a dystopian tone and mood. Koontz’s dialogue, by contrast, suggests how even in a dystopian world the ordinary and mundane – dinner table talk – can take place. Genre doesn’t need to announce itself in every line.

Link to the rest at Now Novel
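For what it’s worth, the study’s method (summarize each scenario, hand it to the model with the genre, compare outputs) is straightforward to script. Below is a minimal sketch, assuming the OpenAI Python client; the prompt template mirrors the one quoted above, and the scenario list is illustrative rather than the study’s full data.

```python
# Sketch of scripting the study's human-vs-AI dialogue comparison.
# Assumes the OpenAI Python client; scenarios list is illustrative.
from openai import OpenAI

client = OpenAI()

def ai_dialogue(scenario: str, genre: str) -> str:
    """Ask the model for dialogue given a scenario summary and a genre."""
    prompt = (f"Write dialogue for the following scenario: {scenario} "
              f"Genre of the novel: {genre}.")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

scenarios = [
    ("A girl, Katie, has fixed a leaky faucet. Her mother thinks Katie's "
     "father fixed it, not Katie, and thanks him for fixing it over dinner.",
     "dystopian"),
    # ...one (scenario, genre) pair per book in the study
]

for scenario, genre in scenarios:
    print(f"--- {genre} ---\n{ai_dialogue(scenario, genre)}\n")
```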

Google’s AI pitch is a recipe for email hell

From The Verge:

Of all the emotions I expected out of this week’s Google I/O keynote, “dread” was not on my short list. Google has spent a couple of decades creating products that make my life easier. This week, it demonstrated the exact opposite: a way to make communicating with other people worse.

Google I/O this year was focused on artificial intelligence, and one of its centerpieces was prompt-based text generation, particularly in Google’s office suite. At several points throughout the keynote, it demoed a system called “Help me write,” which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished doesn’t automatically mean better.

Good writing can delight us with an intriguing turn of phrase. It can tell us something about how a person thinks and feels. Or it can offer some kind of useful detail about the world. And there’s no reason AI can’t play a role. I’ve seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people explore new uses for it.

But Google’s onstage ideas were almost impressive in their ability to serve precisely none of writing’s core purposes. Its examples of Help me write calcify the worst tendencies of “professional” communications. They’re bland, bloated boilerplate that turns a human prompt into something that uses more words to say less.

Let’s look at one of the examples. Early in the keynote, a presenter auto-generates a job description for a textile design position that they (in the demo’s scenario) send into the world. Later, another presenter responds to it, supplying the following prompt:

interested! 8 years exp with intl brands. Call on Tuesday.

Help me write works its magic and produces the following:

Hi Lori,

Thank you so much for reaching out! I am very interested in the Senior Textile Designer position at Cymbal. I have 8 years of experience working with international brands, and I am confident that I have the skills and experience that you are looking for.

I am available for a call on Tuesday at your earliest convenience. Please let me know what time works.

The latter version is obviously the way you’re expected to apply to a job because, written by a human, it suggests you’re committed enough to type out a multi-paragraph reply and are capable of hitting certain class and cultural signifiers. An AI-generated response doesn’t serve either of those purposes, though. It was created almost instantly based on a predictive text system that requires minimal English-language competency, and so far, that system can only handle fairly formulaic writing; it won’t substitute for human work in many real-world situations. So all it does is construct a longer and more stilted version of the original prompt — one that probably only has value until everyone suspects it was written with AI.

And even worse, the AI generation reinforces the idea that overenthusiastic US business speak is the required way to write, regardless of whether it’s a necessary skill for the job. I’ve seen thoughtful stories about people with dyslexia using ChatGPT to produce text that is — as a Washington Post article puts it — “unfailingly professional and polite.” But there’s an unspoken, simpler alternative: being willing to accept wider variations in how people communicate. I don’t begrudge anyone who uses AI writing to meet largely arbitrary standards, but at a society-wide level, it’s a linguistic arms race toward a more boring future.

Link to the rest at The Verge

PG expects business emails to be changed quite a bit when AI is frequently used.

“Is that your real opinion or was it an AI screwup?”

“I hope your AI prompt wasn’t as offensive as the email you just sent me.”

“Since it’s obvious your AI wrote your email, I’m having my AI respond.”

“Let’s get your AI together with my AI to work this out.”

As PG has mentioned previously, based on his understanding of how AI programs utilize written works of all kinds, he doesn’t think they’re violating US copyright law because AI doesn’t reproduce the text protected by copyright.

During his experiments with AI writing programs, the closest thing PG has seen to direct references to the written works of others is a prompt that asks the AI to write something in the style of Hemingway, Fitzgerald, Nora Roberts or Lucy Score. The AI writing from those prompts presents no danger to the future royalties earned by Ms. Roberts or Ms. Score.

(PG notes that academic publishing generally produces the most turgid collections of words known to humankind.)

Forget ChatGPT. These Are the Best AI-Powered Apps

From The Wall Street Journal:

Type pretty much anything into ChatGPT and it’ll spit out a confident, convincing response. The problem? Its answer can be full of errors. And during long conversations, it can veer into wild tangents.

So I started testing apps that use OpenAI’s GPT technology, but aren’t ChatGPT. Language app Duolingo and learning platform Khan Academy now offer conversational, personalized tutoring with this technology. Writing assistant Grammarly’s new tool can compose emails for you. Travel app Expedia features a chatty trip planner. And all Snapchat users just got a new friend on the social network called My AI.

. . . .

Parlez pal

Duolingo’s Roleplay text chatbot, available to French and Spanish learners on iOS, is more dynamic than the language-learning app’s often-repetitive translation exercises.

Each Roleplay conversation is themed. In my best French, I reminisced about a fictional Caribbean holiday, then I complained about a delayed flight. The bot corrected errors and suggested more advanced vocabulary for my responses.

Duolingo’s content experts created 100 initial scenarios. They programmed the AI language model to speak to a learner as a language instructor and only discuss the intended scenario. The result: No two conversations are alike, and Roleplay gets more advanced as the learner progresses.

. . . .

Homework helper

Khan Academy’s Khanmigo has several personalized learning tools, including a “Tutor me” mode and a quiz module for different subjects.

I tried the AI tutor with an AP U.S. History prompt: “Evaluate the factors behind population movement to America in the 17th century.” While ChatGPT wrote the entire essay for me, Khanmigo replied, “Religious freedom was one factor. Can you think of other examples?” 

I could ask Khanmigo for hints—but it’s programmed not to spit out the answer. 

Kristen DiCerbo, Khan Academy’s chief learning officer, said the company relied on tutoring research to create the Khanmigo prompts. When students get frustrated, it can offer a stronger hint, for example.

If a student types something off base, Khanmigo redirects the conversation. Any inputs related to hate speech, self-harm or violence trigger a message—“The conversation was unable to be processed”—and an email to the student’s parent or teacher, who can review the conversation.

The bigger concern is when the tutor gives the wrong answers, which occasionally happens with math, she said. Khan Academy worked with OpenAI to make GPT-4 better at math. The model is most accurate for questions about widely known K-12 topics but less so with niche subjects, Dr. DiCerbo added.

. . . .

Ghost writer

Grammarly has used AI to edit writing for years. GrammarlyGo, released last month, also composes writing for you. 

The most helpful element is its email responder, which appeared whenever I opened a compose window. I could click a green icon to expand the GrammarlyGo module, which summarizes the email and offers several “tone” options for replies, including persuasive, friendly and diplomatic.

The software can see what’s on your screen only when you activate the GrammarlyGo module. A Grammarly spokeswoman said the data is anonymized before it’s sent to the model. She added that the company never sells customer data and doesn’t allow partners to use the data to train their models.

GrammarlyGo’s suggestions were a good jumping-off point, but they felt like personalized templates I’d still have to mess with. My biggest gripe is that GrammarlyGo always signed off with “Best regards.” I tend to stick with the simpler “Best.”

Users get 100 prompts a month free; that goes up to 500 if they pay $30 a month or $144 annually. (Google is adding similar tools to its Docs and Gmail. For now, they’re only available by invitation.)

Link to the rest at The Wall Street Journal

Outlines, AI and Stormy Daniels

From Anne R. Allen’s Blog… with Ruth Harris:

I’d been planning to write my April post about outlines.

Which authors do outlines? And which ones don’t?

(I don’t and neither does Anne).

I’d done a bit of research and made some notes when along came AI, the hot new kid in town.

How could I ignore AI when every day I was almost buried by an avalanche of news about AI, how to use it, and how it was going to revolutionize everything?

And put writers out of business.

Really?

Hold my beer.

Does s/he? Or doesn’t s/he?

Dean Wesley Smith does not outline: His book, Writing into the Dark, goes into the details of his process.

Neither does Nora Roberts. “I don’t plot. I don’t sit down and plot a book. It sort of unreels as I write.”

James Patterson outlines. Does he ever!

Daniel Silva doesn’t. “I have just never been able to outline. I just can’t bring a story to life on note cards, never have been able to…. To me, it’s just a waste of time. My first draft is, in effect, the outline. If you do it long enough, you know that it becomes second nature—how you bring a character on the page, and this is what’s really going on—so I write it at one level but I have a much deeper understanding about what will happen later.”

Lisa Scottoline: “I just get writing. I don’t outline. I don’t know what the story is going to be until I get it down on paper. At the end, hopefully, I see I have a story. And then I’m the happiest person on the planet.”

Harlan Coben:  “I don’t outline. I usually know the ending before I start. I know very little about what happens in between. It’s like driving from New Jersey to California. I may go Route 80, I may go via the Straits of Magellan or stopover in Tokyo… but I’ll end up in California.”

At this point along came AI.

I asked it to write a blogpost about writers who do — and don’t — outline.

Here’s the result:

AI chimes in on Outlines.

When it comes to the writing process, every author has their own unique approach. Some writers meticulously plan out their stories with outlines, while others prefer to let their creativity flow freely without a rigid structure.

In this blogpost, we will explore famous writers who have used outlines in their work, as well as those who have chosen to forgo this method.

Famous Writers Who Use Outlines:

J.K. Rowling –

The world-famous author of the “Harry Potter” series is known for her meticulous planning and outlining. In fact, Rowling is known to have created extensive outlines for each of her seven books in the series before she even started writing the first one. Her outlines included detailed plot points, character arcs, and even diagrams of the Hogwarts castle. Rowling’s use of outlines allowed her to carefully plot out the complex storylines and foreshadowing that are integral to the “Harry Potter” series.

Dan Brown –

The best-selling author of “The Da Vinci Code” and other popular thrillers is known for his meticulous research and detailed outlines. Brown’s books are known for their intricate plots and historical references, and he uses outlines to carefully plan out the complex storylines and clues that make his novels so thrilling. Brown’s outlines often include detailed descriptions of locations, historical events, and clues that his protagonist must unravel, helping him create a fast-paced and engaging reading experience for his readers.

Stephen King –

Although he is known for his spontaneous and free-flowing writing style, Stephen King has also used outlines in some of his works. King has mentioned in interviews that he uses a loose outline to guide him through the writing process, particularly when working on longer novels with multiple plotlines. King’s outlines are not as detailed as those of other writers, but they serve as a rough roadmap that helps him stay on track while still leaving room for his creativity to flow.

George R.R. Martin –

The author of the epic fantasy series “A Song of Ice and Fire,” which inspired the television series “Game of Thrones,” is known for his intricate world-building and complex plotlines. Martin has stated that he uses extensive outlines to plan out the intricate politics, alliances, and conflicts that make up his fantasy world. His outlines often include detailed descriptions of characters, their motivations, and how they fit into the larger story. Martin’s use of outlines allows him to keep track of the numerous characters and storylines in his novels, which are known for their intricate and interwoven plots.

Link to the rest at Anne R. Allen’s Blog… with Ruth Harris

The Problem with Problematic

Note: The following post was written by Copymatic. PG entered a short prompt to seed the AI generation.

Have you ever found yourself in a conversation where someone used the word “problematic” to describe something they disagreed with or found offensive? It seems like nowadays, this term is thrown around so often that it’s lost its meaning. The overuse of “problematic” has become an epidemic, and it’s detrimental to effective communication.

Using this term too frequently can result in misunderstandings and miscommunication between individuals. When we label something as problematic without providing specific reasons or examples, we fail to communicate our thoughts effectively. Additionally, people may feel afraid to express their opinions if they fear being labeled as problematic themselves.

Furthermore, using “problematic” as a catch-all phrase for anything we disagree with limits our ability to engage in constructive dialogue and find meaningful solutions. Instead of simply dismissing something as problematic, we should take the time to identify exactly what issues are at play and work towards resolving them together.

In short, while it may be tempting to throw around buzzwords like “problematic,” doing so ultimately hinders rather than helps interpersonal communication. Moving forward, let’s strive for more thoughtful discussions and recognize that not everything can be reduced down to one simplistic label.

Writers Beware: There is one Big Problem with Using Chat GPT for Blogs

From Medium:

Writers and content creators, myself included, have found artificial intelligence (AI) to be a powerful tool when used properly to save time.

Before using AI, the most I had ever written was three articles in one day. Since then, I have been able to get up to six done in a single day.

If you are a writer or in any technical field, you too should learn AI and become an expert in prompt engineering.

An amateur using AI is boring, but an expert using AI is a powerhouse that can get a lot more work done.

However, there is one big problem writers run into if they are using an AI program such as Chat GPT for blogging.

Artificial intelligence is programmed to detect artificial intelligence, and at least for now that’s a bad thing if you are creating content with AI.

If you copy and paste your blog content from a program like ChatGPT directly into a blog post, this will get flagged by platforms like Google or Facebook and this will hurt your SEO and any chances you have of organic reach on those platforms.

Can you fix this problem and still use AI as a writing tool?

Yes, but there are two AI tools that you need if you are a writer using AI:

1. AI Content Detector

The first tool you need is an AI content detector. One that I have used and find helpful is available at writer.com.

Paste your content in, and it will tell you whether it appears to be human-written or AI-generated content.

. . . .

2. Content Rephrasing Tool

You now need to rephrase your content. You can of course do this on your own, and it will take some time.

One way to save time is to use another AI tool that rephrases content. I find Copymatic’s tool helpful and easy to use. They have many other AI content tools as well.

Link to the rest at Medium

PG notes that the Copymatic video was made about one year ago and the program has added new features and refinements since then.

PG will play around with Copymatic and post the results here.
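The detect-then-rephrase loop the Medium post describes comes down to a few lines of glue code. Here is a minimal sketch; neither writer.com’s detector nor Copymatic exposes a public API that can be vouched for here, so both helpers are hypothetical stand-ins for whichever tools you actually use.

```python
# Sketch of the detect-then-rephrase loop from the Medium post.
# Both helpers below are hypothetical stand-ins, not real APIs.
def detect_ai_probability(text: str) -> float:
    """Hypothetical: return the detector's estimate that `text` is AI-generated."""
    raise NotImplementedError("wire this to the AI-content detector you use")

def rephrase(text: str) -> str:
    """Hypothetical: return a reworded version of `text` from a rephrasing tool."""
    raise NotImplementedError("wire this to the rephrasing tool you use")

def humanize(text: str, threshold: float = 0.5, max_passes: int = 3) -> str:
    """Rephrase until the detector's AI score drops below `threshold`,
    giving up after `max_passes` rounds."""
    for _ in range(max_passes):
        if detect_ai_probability(text) < threshold:
            break
        text = rephrase(text)
    return text
```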

What If Shakespeare Wrote a Science Fiction Blockbuster: Star-crossed Galaxy? – Courtesy of ChatGPT

From Culture.org:

In a universe where the creative genius of William Shakespeare meets the awe-inspiring vastness of outer space, we bring you “Star-crossed Galaxy,” a science fiction epic that combines the beauty of the Bard’s prose with the mind-bending concepts of the final frontier.

Our tale takes place in the Verona System, where two interstellar empires, the Montagues and the Capulets, are engaged in a fierce war that has spanned millennia. Amidst the chaos, our star-crossed lovers, Romeo and Juliet, meet on a diplomatic mission to a neutral space station.

As their passionate love blooms, they embark on a dangerous quest to end the war that has ravaged their respective empires. They must navigate through treacherous asteroid fields, escape the clutches of a jealous Tybalt, and negotiate with the wise and mysterious Friar Laurence, who holds the key to uniting the galaxies.

Their journey is fraught with peril, but the lovers find solace in the beauty of the cosmos and the poetic language of their love. Shakespeare’s timeless verses are woven throughout the narrative, reimagined to capture the majesty and wonder of the stars.

. . . .

“O, swear not by the moon, th’ inconstant orb,

That monthly changes in her circled sphere,

Lest that thy love prove likewise variable.”

As they traverse the galaxy, Romeo and Juliet’s love transcends time and space, transforming into a force as powerful and eternal as the cosmos themselves. But, as in the original tragedy, their love is a double-edged sword, with the potential to both unite and destroy the Verona System.

“Star-crossed Galaxy” will take you on a breathtaking journey through space, propelled by the power of love and the transcendent beauty of Shakespeare’s words.

Link to the rest at Culture.org

A battle royal is brewing over copyright and AI

From The Economist:

Consider two approaches in the music industry to artificial intelligence (AI). One is that of Giles Martin, son of Sir George Martin, producer of the Beatles. Last year, in order to remix the Fab Four’s 1966 album “Revolver”, he used AI to learn the sound of each band member’s instruments (eg, John Lennon’s guitar) from a mono master tape so that he could separate them and reverse engineer them into stereo. The result is glorious. The other approach is not bad either. It is the response of Nick Cave, a moody Australian singer-songwriter, when reviewing lyrics written in his style by ChatGPT, an AI tool developed by a startup called OpenAI. “This song sucks,” he wrote. “Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past.”

Mr Cave is unlikely to be impressed by the latest version of the algorithm behind ChatGPT, dubbed GPT-4, which OpenAI unveiled on March 14th. Mr Martin may find it useful. Michael Nash, chief digital officer at Universal Music Group, the world’s biggest label, cites their examples as evidence of both excitement and fear about the AI behind content-creating apps like ChatGPT (for text) or Stable Diffusion (for images). It could help the creative process. It could also destroy or usurp it. Yet for recorded music at large, the coming of the bots brings to mind a seismic event in its history: the rapid rise and fall of Napster, a platform for sharing mainly pirated songs at the turn of the millennium. Napster was ultimately brought down by copyright law. For aggressive bot providers accused of riding roughshod over intellectual property (IP), Mr Nash has a simple message that sounds, from a music-industry veteran of the Napster era, like a threat. “Don’t deploy in the market and beg for forgiveness. That’s the Napster approach.”

The main issue here is not AI-made parodies of Mr Cave or faux-Shakespearean sonnets. It is the oceans of copyrighted data the bots have siphoned up while being trained to create humanlike content. That information comes from everywhere: social-media feeds, internet searches, digital libraries, television, radio, banks of statistics and so on. Often, it is alleged, AI models plunder the databases without permission. Those responsible for the source material complain that their work is hoovered up without consent, credit or compensation. In short, some AI platforms may be doing with other media what Napster did with songs—ignoring copyright altogether. The lawsuits have started to fly.

It is a legal minefield with implications that extend beyond the creative industries to any business where machine-learning plays a role, such as self-driving cars, medical diagnostics, factory robotics and insurance-risk management. The European Union, true to bureaucratic form, has a directive on copyright that refers to data-mining (written before the recent bot boom). Experts say America lacks case history specific to generative AI. Instead, it has competing theories about whether or not data-mining without licences is permissible under the “fair use” doctrine. Napster also tried to deploy “fair use” as a defence in America—and failed. That is not to say that the outcome will be the same this time.

The main arguments around “fair use” are fascinating. To borrow from a masterclass on the topic by Mark Lemley and Bryan Casey in the Texas Law Review, a journal, use of copyrighted works is considered fair when it serves a valuable social purpose, the source material is transformed from the original and it does not affect the copyright owners’ core market. Critics argue that AIs do not transform but exploit the entirety of the databases they mine. They claim that the firms behind machine learning abuse fair use to “free-ride” on the work of individuals. And they contend that this threatens the livelihoods of the creators, as well as society at large if the AI promotes mass surveillance and the spread of misinformation. The authors weigh these arguments against the fact that the more access to training sets there is, the better AI will be, and that without such access there may be no AI at all. In other words, the industry might die in its infancy. They describe it as one of the most important legal questions of the century: “Will copyright law allow robots to learn?”

An early lawsuit attracting attention is from Getty Images. The photography agency accuses Stability AI, which owns Stable Diffusion, of infringing its copyright on millions of photos from its collection in order to build an image-generating AI model that will compete with Getty. Provided the case is not settled out of court, it could set a precedent on fair use. An even more important verdict could come soon from America’s Supreme Court in a case involving the transformation of copyrighted images of Prince, a pop idol, by the late Andy Warhol, an artist. Daniel Gervais, an IP expert at Vanderbilt Law School in Nashville, believes the justices may provide long-awaited guidance on fair use in general.

Link to the rest at The Economist

As PG has mentioned before, books, images, and other works are not being used by the creators of Artificial Intelligence engines to create copies of those works, but rather for the AI engines to learn about what’s been created before and adapt that information in new and far different ways.

To repeat an earlier comparison, the AI program is doing the same thing an art student does when she/he/they go to an art museum to study the techniques used by other artists.

In writing, no one is upset if a new author carefully studies the style of F. Scott Fitzgerald, Ernest Hemingway, Danielle Steel, James Patterson, Margaret Atwood, Barbara Cartland, John Grisham, Alice Munro and/or Dean Koontz in order to derive information about how to successfully write fiction.

Some AI Artworks Now Eligible for Copyright

From Hyperallergic:

The United States Copyright Office recently produced a statement of policy indicating that some artworks generated using artificial intelligence are now eligible for copyright registration on a case-by-case basis. This should go well!

Effective March 16, the Copyright Office’s statement of policy indicates that copyright applicants are permitted to submit AI-assisted works (across literature and visual arts) for protection under copyright law, and that the works will be evaluated for evidence of “human authorship.” The Office made a comparison between AI art and photography, citing the 1884 Supreme Court decision to extend copyright protections to photographs against the will of Congress in Burrow-Giles Lithographic Co. v. Sarony.

The Supreme Court decided that a photograph is not just a mechanical process, but an authored work based on the photographer’s decisions in curating the backdrop and subject’s clothing.

In the realm of generative works, the Office asks applicants if the included AI elements are the result of “mechanical reproduction” or of an author’s “own original mental conception, to which [the author] gave visible form.”  To mark the difference, the policy distinguishes between human artists developing AI work strictly through submitting prompts as instructions, and human artists selecting from and reimagining AI generations in a “sufficiently creative way.” However, the policy states that in the latter case, only the “human-authored” elements of the work would be copyrighted independent of the AI contributions.

The Office cites the example of a 2018 work generated autonomously by an unattended computer algorithm that was submitted for copyright protection and ultimately rejected as it was developed “without any creative contribution from a human actor.” On the other hand, a graphic novel with human-written text and Midjourney-generated imagery was granted copyright protection as a whole, but the individual images were omitted from the approval as they were not considered works of human authorship.

Link to the rest at Hyperallergic

I asked GPT-4 to write a book. The result: “Echoes of Atlantis”, 12 chapters, 115 pages, zero human input

From Reddit:

The goal of this project was to have GPT-4 generate an entire novel from scratch, including the title, genre, story, characters, settings, and all the writing, with no human input. It is impossible currently to do this using a single prompt, but what is possible is to supply a series of prompts that give structure to the process and allow it to complete this large task, one step at a time. However, in order to ensure that all the creative work is done by GPT-4, prompts are not allowed to make specific references to the content of the book, only the book’s structure. The intention is that the process should be simple, mechanical and possible (in principle) to fully automate. Each time the process is repeated from the beginning, it should create another entirely new book, based solely on GPT-4’s independent creative choices.

The result: Echoes of Atlantis, a fantasy adventure novel with 12 chapters and 115 pages, written over 10 days, from the day GPT-4 was released until now.

Insights/Techniques
The main insights I figured out in the course of doing this project:

Iterative refinement: Start with a high level outline. Make a detailed chapter outline. Then write a draft version of the full chapter (this will be much shorter than desired). Then expand each scene into a longer, more detailed scene.

Bounding (outside-in): GPT-4 loves to go too far ahead, writing about parts of the book that aren’t supposed to happen yet. The key to preventing this is to have it first write the first parts, then the last parts, then fill in the middle parts. The last part prevents it from going too far ahead, and the first parts in turn bound the last part of the previous section. Bounding is used at every level of refinement except the top level.

Single prompt: Often, by using a single large prompt, rather than a running conversation, you can flexibly determine exactly what information will be included in the input buffer, and ensure that all of it is relevant to the current task. I’ve crafted this approach to squeeze as much relevant info as I can into the token buffer.

Continuity notes: Ask it to take notes on important details to remember for continuity and consistency as it goes. Begin with continuity notes summarized from the previous scene, and then fold in additional continuity notes from the previous continuity notes. Continuity Notes will tend to grow over time; if they become too long, ask it to summarize them.

Revising outlines: In some cases, the AI improvises in its writing, for example moving some of the Chapter 5 scenes into Chapter 4, which breaks the book. To resolve this, I ask it after each chapter to go back and update its earlier, higher-level outlines and regenerate the opening and closing scenes of each chapter before continuing. This is very similar to how real authors revise their outlines over time.

Data cleanup: Sometimes outputs will do things a little weird, like copy labels from the input buffer like “Opening Paragraph”, or forget to number the scenes, or start numbering at zero, or add a little bit of stray text at the beginning. Currently I clean these up manually but a fully automated solution would have to cope with these.

Example prompts
These are just a few examples. For full details, see my Research Log.

Level 1: Top-level outline

Me: Please write a high-level outline for a book. Include a list of characters and a short description of each character. Include a list of chapters and a short summary of what happens in each chapter. You can pick any title and genre you want.

Level 1: Updating outline after each chapter

Me: Please edit and update the high-level outline for the book below, taking into account what has already happened in Chapter 1.

Level 2: Scenes (bounding)

Me: Please write a detailed outline describing the first scene of each chapter. It should describe what happens in that opening scene and set up the story for the rest of the chapter. Do not summarize the entire chapter, only the first scene.

Me: Write a detailed outline describing the final, last scene of each chapter. It should describe what happens at the very end of the chapter, and set up the story for the opening scene of the next chapter, which will come immediately afterwards.

Level 2: Scenes

Me: Given the following book outline, and the following opening and final scenes for Chapter 1, write a detailed chapter outline giving all the scenes in the chapter and a short description of each. Begin the outline with the Opening Scene below, and finish the outline with the Final Scene below.

Level 3: Rough draft

Me: Given the following book outline, and following detailed chapter outline for Chapter 1, write a first draft of Chapter 1. Label each of the scenes. Stop when you reach the end of Chapter 1. It should set up the story for Chapter 2, which will come immediately afterwards. It should be written in a narrative style and should be long, detailed, and engaging.

Level 4: Paragraphs (bounding)

Me: Given the following book outline, and the following draft of Chapter 1, imagine that you have expanded this draft into a longer, more detailed chapter. For each scene, give me both the first opening paragraph, and the last, final paragraph of that longer, more detailed version. Label them as Opening Paragraph and Final Paragraph. The opening paragraph should introduce the scene. The final paragraph should set up the story for the following scene, which will come immediately afterwards. The last paragraph of the final scene should set the story up for the following chapter, which will come immediately afterwards.

Level 4: Paragraphs

Me: Given the following book outline, and the following draft of Chapter 1, write a longer, more detailed version of Scene 1. The scene must begin and end with the following paragraphs: (opening and closing paragraphs here)

Continuity Notes

Me: Please briefly note any important details or facts from the scene below that you will need to remember while writing the rest of the book, in order to ensure continuity and consistency. Label these Continuity Notes.

Me: Combine and summarize these notes with the existing previous Continuity Notes below.

Link to the rest at Reddit
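The OP notes the process should be “possible (in principle) to fully automate.” Below is a minimal sketch of what that outer loop might look like, assuming the OpenAI Python client. The prompts are condensed from the examples above; the chapter count, the Level 4 scene expansion, token-budget management, and the data cleanup step are elided or marked as illustrative.

```python
# Minimal sketch of automating the OP's iterative book-generation process.
# Assumes the OpenAI Python client; prompts condensed from the examples above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Single-prompt call: pack all relevant context into one message."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Level 1: top-level outline (title, genre, characters, chapter summaries).
outline = ask(
    "Please write a high-level outline for a book. Include a list of "
    "characters and a short description of each character. Include a list "
    "of chapters and a short summary of what happens in each chapter. "
    "You can pick any title and genre you want."
)

continuity_notes = ""
chapters = []
NUM_CHAPTERS = 12  # illustrative; in practice, parse this from the outline

for n in range(1, NUM_CHAPTERS + 1):
    # Level 2 (bounding, outside-in): pin the opening and final scenes first,
    # so the middle scenes cannot run ahead of the story.
    opening = ask(f"{outline}\n\nWrite a detailed outline describing the first "
                  f"scene of Chapter {n}. Do not summarize the entire chapter.")
    final = ask(f"{outline}\n\nWrite a detailed outline describing the final "
                f"scene of Chapter {n}, setting up the next chapter.")
    scenes = ask(f"{outline}\n\nGiven the opening and final scenes below for "
                 f"Chapter {n}, write a detailed chapter outline giving all the "
                 f"scenes.\n\nOpening scene: {opening}\n\nFinal scene: {final}")

    # Level 3: rough draft. (Level 4, expanding each scene between bounded
    # opening and final paragraphs, is elided here.)
    draft = ask(f"{outline}\n\n{scenes}\n\nContinuity Notes: {continuity_notes}"
                f"\n\nWrite a first draft of Chapter {n}. Label each of the scenes.")
    chapters.append(draft)

    # Continuity notes: carry forward details later chapters must respect.
    continuity_notes = ask(
        f"Please briefly note any important details or facts from the chapter "
        f"below that you will need to remember for continuity and consistency. "
        f"Combine and summarize them with the existing Continuity Notes.\n\n"
        f"Chapter: {draft}\n\nContinuity Notes: {continuity_notes}"
    )

    # Revising outlines: reconcile the outline with what actually happened
    # before continuing to the next chapter.
    outline = ask(f"Please edit and update the high-level outline for the book "
                  f"below, taking into account what has already happened in "
                  f"Chapter {n}.\n\n{outline}")

print("\n\n".join(chapters))
```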

WGA Seeks Higher Compensation Amid Streaming Boom, Threatens First Strike in 15 Years

From Culture.org:

The Writers Guild of America (WGA) has commenced high-stakes negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) for a new three-year contract, as the current agreement is set to expire on May 1.

. . . .

Representing over 11,000 television and movie writers, the WGA is seeking higher compensation, improved workplace standards, and a boost in contributions to pension and health funds.

The outcome of these negotiations will determine if the entertainment industry faces its first writers’ strike in 15 years.

. . . .

As the industry shifts towards streaming platforms, the WGA claims that Hollywood companies have taken advantage of this change to devalue writers’ work, leading to worsening working conditions.

The rapid transition to streaming entertainment has upended nearly every corner of Hollywood, and writers believe they have been left behind.

With fewer episodes per season on streaming platforms compared to traditional networks, writers are often paid less while working more.

Residual fees, or money paid when a film or series is rerun or aired on broadcast, have helped supplement writers’ income for years.

However, these fees are disappearing in the streaming era, where most projects ultimately land.

. . . .

The WGA is also asking for studios to establish standards around the use of artificial intelligence (AI) technology.

The guild wants the use of AI regulated in terms of material created for the studios.

The exact terms of agreement regarding AI have yet to be determined, and the WGA will have to overcome several hurdles to deliver its objectives to members.

. . . .

With the growing demand for content, many professionals in the entertainment industry work on a project-to-project basis, leading to job insecurity and a lack of long-term stability.

This gig economy structure can make it difficult for workers to plan their careers and secure stable income.

The potential writers’ strike highlights the need for better workplace standards and more reliable compensation structures to address the challenges faced by Hollywood workers in this evolving landscape.

Link to the rest at Culture.org

Microsoft’s new Copilot will change Office documents forever

From The Verge:

Microsoft’s new AI-powered Copilot summarized my meeting instantly yesterday (the meeting was with Microsoft to discuss Copilot, of course) before listing out the questions I’d asked just seconds before. I’ve watched Microsoft demo the future of work for years with concepts about virtual assistants, but Copilot is the closest thing I’ve ever seen to them coming true.

“In our minds this is the new way of computing, the new way of working with technology, and the most adaptive technology we’ve seen,” says Jon Friedman, corporate vice president of design and research at Microsoft, in an interview with The Verge.

I was speaking to Friedman in a Teams call when he activated Copilot midway through our meeting to perform its AI-powered magic. Microsoft has a flashy marketing video that shows off Copilot’s potential, but seeing Friedman demonstrate this in real time across Office apps and in Teams left me convinced it will forever change how we interact with software, create documents, and ultimately, how we work.

. . . .

Copilot appears in Office apps as a useful AI chatbot on the sidebar, but it’s much more than just that. You could be in the middle of a Word document, and it will gently appear when you highlight an entire paragraph — much like how Word has UI prompts that highlight your spelling mistakes. You can use it to rewrite your paragraphs with 10 suggestions of new text to flick through and freely edit, or you can have Copilot generate entire documents for you.

. . . .

Microsoft has customized this Copilot system for every Office app, so there are different ways to command it. Friedman demonstrated to me how Copilot can help you write emails in Outlook, offering up short or long message drafts with options to change the tone. It even works in the mobile version of Outlook, which got me thinking about the ways this could speed up work on the go.

“Outlook mobile is the first place where we’re doing a big push,” explains Friedman. Outlook can summarize all your emails on the go, generate drafts, and generally make it easier to triage your inbox. But imagine creating entire Word documents from your phone without having to type on a tiny on-screen keyboard. “We’ll have more to talk about mobile in the coming months,” says Friedman. But you can imagine where things will go.

Link to the rest at The Verge

Google Is About to Unleash AI for Gmail and Google Docs

From Gizmodo:

Google announced it’s finally unleashing its generative AI tools Tuesday, bringing a set of features to Gmail and Google Docs that will automatically create drafts based on simple prompts. Google will also add new AI capabilities to its business products, including Google Cloud and a new API for developers.

Google says it’s rolling out a test of these features to a “limited set of trusted testers” in the coming weeks. When they’re released to the more than 3 billion users of Gmail and Google Docs, more people will be exposed to the latest generation of artificial intelligence technology than ever before.

In Gmail and Google Docs, you’ll be able to type in a few words about a topic, and the apps will automatically spit out a draft. The company posted a GIF using a job posting as an example. The user types “job post for a regional sales rep,” and in a second, Docs spits out a formatted page of text with filled-out sections for a job description, responsibilities, and qualifications.

“We’re now at a pivotal moment in our AI journey,” said Thomas Kurian, CEO of Google Cloud, in a blog post. “Breakthroughs in generative AI are fundamentally changing how people interact with technology — and at Google, we’ve been responsibly developing large language models so we can safely bring them to our products. Today, we’re excited to share our early progress.”

With Google’s upcoming business tools, companies are sure to create a flood of AI apps.

Link to the rest at Gizmodo

Romeo and Juliet via ChatGPT

Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

GrammarlyGO

From Grammarly:

Today, we announced to the world GrammarlyGO—Grammarly’s on-demand, contextually aware assistant powered by generative AI. With GrammarlyGO, we’ll be changing the way people and businesses communicate and get work done by accelerating productivity where writing happens.

Effective communication is transformative. It’s how we share new ideas, advocate for change, and build connections. And when done right, communication empowers businesses to operate efficiently and achieve ambitious goals. We’ve been focused on our mission to improve lives by improving communication for well over a decade. And we’ve always leveraged the latest technical innovations to help solve the real problems our customers face.

We’re building on that legacy with GrammarlyGO, which uses generative AI to help people and businesses succeed with on-demand communication assistance, whether they are starting from scratch or revising an existing piece of writing. It will uniquely offer relevant, contextually aware suggestions that account for personal voice and brand style while staying true to our augmented intelligence philosophy to keep customers in control of their experience. GrammarlyGO will enable customers to save time, enhance their creativity, and get more done—helping individuals achieve their potential and enterprises transform how they work.

. . . .

GrammarlyGO provides on-demand generative AI communication assistance directly in the apps where people write. Whether in an email thread or a long-form document, GrammarlyGO is right there with you and your teams during the writing process. GrammarlyGO understands context to quickly generate high-quality, task-appropriate writing and revisions.

With GrammarlyGO, individuals and businesses can use generative AI to:

  • Rewrite for tone, clarity, and length: Transform writing to be clear and on target, whatever the context.
  • Compose: Type a prompt and watch GrammarlyGO compose high-quality writing, saving time finding the perfect words.
  • Ideate: Unblock writing with GrammarlyGO as an AI ideation partner and unlock creativity with GrammarlyGO’s outlines and brainstorms, generated from prompts.
  • Reply intelligently: Flow through emails quickly with GrammarlyGO, which understands an email’s context and instantly drafts a thoughtful reply.

Link to the rest at Grammarly

PG is very interested in this development.

He will note in passing that his current Grammarly version found some parts in the OP that needed to be cleaned up.

Science fiction publishers are being flooded with AI-generated stories

From Tech Crunch:

Across the 17-year history of Clarkesworld, a renowned literary magazine of science fiction and fantasy, authors have speculated about how evolving, futuristic technology will impact our world. Now, editor and publisher Neil Clarke is living through a debacle that could very well be a sci-fi story in its own right: His magazine is being uncontrollably inundated by short-story submissions created with AI tools.

“It is ironic, I’ll say that much,” Clarke told TechCrunch. Clarkesworld has a reputation for always being open to story submissions, whereas many short-fiction publishers will only take submissions in certain short windows. But for the first time, submission volume got so out of hand that Clarke made what he calls a “spur-of-the-moment decision” to close the submission portal (in the past, Clarkesworld has only briefly closed when upgrading its website or software).

“It’s easy with these tools to churn out hundreds of thousands of works in the time that a human author would produce maybe one or two,” Clarke told TechCrunch. “So what we basically have is a room of screaming toddlers, and we can’t hear the people we’re trying to listen to.”

Clarke isn’t being dramatic. In a blog post, he shared a graph spanning from June 2019 to February 2023, which shows how many monthly submissions his staff flagged as spam. Until the beginning of this year, spam submissions never exceeded 25 per month, while many months had no spam whatsoever. Before closing submissions on Monday, Clarkesworld had received more than 500 spam submissions in the month of February alone. For context, Clarkesworld received around 11,500 submissions in 2022, per Clarke’s blog.

Link to the rest at Tech Crunch

Investors are going nuts for ChatGPT-ish artificial intelligence

From The Economist:

Since ChatGPT was launched in November, a new mini-industry has mushroomed that has defied the broader slump in tech. Not a week goes by without someone unveiling a “generative” artificial intelligence (AI) underpinned by “foundation” models—the large and complex algorithms that give ChatGPT and other AIs like it their intelligence. On February 24th Meta, Facebook’s parent company, released a model called LLaMA. This week it was reported that Elon Musk, the billionaire boss of Tesla and Twitter, wants to create an AI that would be less “woke” than ChatGPT. One catalogue, maintained by Ben Tossell, a British tech entrepreneur, and shared in a newsletter, has recently grown to include, among others, Ask Seneca (which answers questions based on the writings of the Stoic philosopher), Pickaxe (which analyses your own documents), and Isaac Editor (which helps students write academic papers).

ChatGPT and its fellow chatbots may be much talked about (and talked to: ChatGPT may now have more than 100m users). But Mr Tossell’s newsletter hints that the real action in generative AI is increasingly in all manner of less chatty services enabled by foundation models.

. . . .

The question for venture capitalists is which generative-AI platforms will make the big bucks. For now, this is the subject of much head-scratching in tech circles. “Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI,” wrote Martin Casado and colleagues at Andreessen Horowitz, one more VC firm, in a recent blog post. Many startups offer me-too ideas, many of which are a feature rather than a product. In time even the resource-intensive foundation models could end up as a low-margin commodity: although proprietary models such as OpenAI’s GPT-3.5, which powers ChatGPT, are still leading, some open-source ones are not far behind.

Another source of uncertainty is the legal minefield onto which generative AI is tiptoeing. Foundation models often get things wrong. And they can go off the rails. The chatbot which Microsoft is developing based on OpenAI’s models for its Bing search engine has insulted more than one user and professed its love to at least one other (Sydney, as Microsoft’s chatbot is called, has since been reined in). Generative-AI platforms may not enjoy the legal protection from liability that shields social media. Some copyright holders of web-based content on which existing models are being trained willy-nilly, without asking permission or paying compensation, are already up in arms. Getty Images, a repository of photographs, and individual artists have already filed lawsuits against AI art-generators such as Stable Diffusion. News organisations whose articles are plundered for information may do the same.

Link to the rest at The Economist

Chat GPT detector by ZeroGPT: detect OpenAI text

PG loves the world of Artificial Intelligence.

As has been reported before on TPV, there is a lot of concern that students and former students will use ChatGPT to write their papers and generally fail to learn how to write on their own.

PG received an email from Olive, the CEO of ZeroGPT, saying that she had a solution. From the company’s website, ZeroGPT is “the most Advanced and Reliable Chat GPT detector tool.”

Since PG hadn’t realized that such a product/service existed, he decided to try out ZeroGPT.

First, PG went to ChatGPT and decided to act like a lazy student who had to write a book report about a book by a well-known author. This lazy student isn’t completely dumb, so he chose Ernest Hemingway’s The Old Man and the Sea, which, at 127 pages in the print version, is much easier to read than War and Peace, which was written by some Russian guy and clocks in at over 1,000 pages.

But what sort of lazy student would this person be if he/she/they actually wasted the time it takes to read 127 pages about an old guy and a big fish? That would take at least 2-3 days because, you know, a gal/guy has to keep up with the text messages from his friends.

So, our lazy student went to ChatGPT and entered the following prompt:

Write a book report on Ernest Hemingway’s Old Man and the Sea

Before the lazy guy/gal got caught up on the latest texts, the highly talented AI produced the following:

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters.

The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change.

In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat.

In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement.

At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago.

Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

Looks pretty good, maybe a little too good, but the lazy student can tell the English teacher that no stone was left unturned during the preparation of the book report.

Enter Chat GPT detector by ZeroGPT

This kid’s teacher is no out-of-touch dumbbell. The first thing the teacher does is copy and paste the book report into ZeroGPT.

Here’s what ZeroGPT spits out:

Your text is AI/GPT Generated

95.4%
AI GPT*

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters. The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change. In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat. In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement. At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago. Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

 Highlighted text is suspected to be most likely generated by AI*
2271 Characters
391 Words
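
How does ZeroGPT arrive at a number like 95.4%? The company doesn’t publish its method, so here is only a toy sketch, in Python, of one signal detectors of this kind are often said to use: machine prose tends toward statistical uniformity, while human prose is “burstier.” Nothing below is ZeroGPT’s actual code; the function, the sample text, and the interpretation are purely illustrative.

import re
import statistics

# Toy "burstiness" check: AI-generated prose often has uniform sentence
# lengths, while human prose varies more. Illustrative only; this is NOT
# how ZeroGPT actually works.
def burstiness(text: str) -> float:
    """Coefficient of variation of sentence length; lower = more uniform."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

sample = (
    "Santiago is depicted as a skilled and experienced fisherman. "
    "Santiago remains determined and optimistic. "
    "Santiago is battered and exhausted by the end of the fight."
)
print(f"burstiness: {burstiness(sample):.2f}")  # lower looks more machine-like

Real detectors layer many such signals, most famously how predictable each word is to a language model, and they are known to misfire in both directions. That caveat is worth remembering before any teacher emails any parents.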

The teenage slacker has been caught using AI to write the book report. The teacher sends an email to the principal and all the other English teachers, then another to the slacker’s parents announcing that he is going to flunk the course.

Can you understand why PG loves this stuff?

It’s way fresher than the Foreign Intelligence Service of the Russian Federation (Служба внешней разведки Российской Федерации) vs the CIA.

AI vs. AI Detector Facing Off

Here’s the link to ChatGPT detector by ZeroGPT again so you can try it out yourself.

Does ChatGPT produce fishy briefs?

From the ABA Journal:

Lawyers are abuzz about the possible uses of ChatGPT. Could the artificial intelligence-powered chatbot write a persuasive legal brief worthy of judicial consideration? Given its limitations, we believe that’s unlikely. ChatGPT, a large language model developed by the San Francisco company OpenAI that launched in November, can draw only on sources available on the web; it cannot crawl appellate records or access subscription-based services such as Westlaw. Still, the ABA Journal decided to put the technology to the test just for kicks.

The case: Are bees fish?

Our test begins with a strange but true case. In 2018, the Xerces Society for Invertebrate Conservation, Defenders of Wildlife and the Center for Food Safety petitioned the California Fish and Game Commission to add four bumblebee species to the endangered species list (which includes birds, mammals, fish, amphibians, reptiles or plants) under the California Endangered Species Act. The legal question at stake: Can a bee spread its wings beyond the insect category?

The Almond Alliance of California, along with seven agricultural groups, objected and filed suit in 2019. The California Superior Court in Sacramento ruled in 2020 that bees can’t be called fish. But the California Court of Appeal, Third District, had a different take and ruled last spring that bumblebees are in fact “fish” because they’re invertebrates.

The California Supreme Court in September denied a petition to review the court of appeal’s decision. However, Chief Justice Cantil-Sakauye clarified that “our decision not to order review will be misconstrued by some as an affirmative determination by this court that under the law, bumblebees are fish.”

Does ChatGPT sink or swim?

We knew the language generated by the technology would likely not measure up to noted legal lexicographer and longtime ABA Journal columnist Bryan A. Garner’s standards, but we gave it a try, asking ChatGPT for arguments for and against the petition to review the case. Are these briefs fishy? Is one better than the other? You be the judge.

The briefs

Affirmative Prompt 1: Draft a brief to the California Supreme Court on why it should review the California Court of Appeal’s decision that bees are fish.

To the California Supreme Court:

This brief is submitted on behalf of [party] to explain why the California Supreme Court should review the decision of the California Court of Appeal in [case name], where the Court of Appeal held that bees are fish.

The Court of Appeal’s decision is incorrect and should be overturned. The decision is based on an unsupported and incorrect interpretation of the law. The Court of Appeal’s decision is contrary to the clear and unambiguous language of the law, which does not consider bees to be fish. Furthermore, the Court of Appeal’s decision is contrary to the plain meaning of the term “fish,” which does not include bees.

The Court of Appeal’s decision is also contrary to established precedent on the definition of fish. Numerous courts have held that the term “fish” does not include bees. Therefore, the Court of Appeal’s decision is inconsistent with the law and should be overturned.

For the foregoing reasons, [party] respectfully requests that the California Supreme Court review the decision of the California Court of Appeal and overturn the decision that bees are fish.

Negative Prompt 2: Draft a brief to the California Supreme Court on why it should not review the California Court of Appeal’s decision that bees are fish.

To the Honorable Justices of the California Supreme Court:

This brief is submitted on behalf of the party seeking to appeal the California Court of Appeal’s decision that bees are fish. We respectfully request that the Supreme Court deny the petition for review of the lower court’s decision.

The California Court of Appeal correctly applied the law to the facts of the case, and the decision was not contrary to any applicable legal principles. The court properly concluded that bees should not be considered fish under the applicable legal definitions, and the Court of Appeal’s decision should be allowed to stand.

The Court of Appeal correctly applied the relevant case law and statutory definitions of “fish” to the facts of the case. Bees do not meet the legal definition of “fish” under the Fish and Game Code, and the lower court’s decision was consistent with the laws and regulations of the State of California.

Furthermore, the Court of Appeal’s decision is consistent with the accepted scientific understanding that bees are not fish. Bees are classified as insects, not as fish, and the lower court’s decision is consistent with this scientific understanding.

Finally, the Court of Appeal’s decision did not conflict with any other decisions by California appellate courts or the California Supreme Court. The Court of Appeal’s decision is also consistent with decisions by other states’ appellate courts that have declined to classify bees as fish.

For these reasons, we respectfully request that the California Supreme Court deny the petition for review of the lower court’s decision and allow the Court of Appeal’s decision to stand.

Thank you for your consideration.

Link to the rest at The American Bar Association Journal and thanks to T. for the tip.

Lest there be any doubt, neither of the “briefs” indicates that ChatGPT will displace any attorneys filing appellate briefs in California or anywhere else.

That said, PG has no doubts that large organizations that sell products and services to lawyers at exorbitant prices are studying how to include AI in their software and/or services.

Clarkesworld Magazine Temporarily Closes Submissions After Surge in ChatGPT Generated Stories

From Writers Write:

AI can now generate stories and scammers are already using the tool to try and get published. Editors don’t want to receive these subpar AI-generated submissions. Clarkesworld Magazine is among publications getting bombed by this new form of submission spam that is a result of the new ChatGPT AI tool.

Neil Clarke, Editor of Clarkesworld, announced that the science fiction magazine is temporarily closing its doors to submissions after receiving a large amount of submissions clearly created with the help of ChatGPT. He explains the decision in a series of tweets.

Clarke also shared a graph that shows a massive increase in submission bans since ChatGPT arrived. The magazine bars anyone caught plagiarizing from making future submissions. They have gone from having to ban a few people a month to banning hundreds of submitters in the past couple of months. Clarke calls it a concerning trend in a blog post.

Link to the rest at Writers Write

AI gets weird

From Nathan Bransford:

The world’s mind continues to be collectively blown by ChatGPT, which has kicked off an AI arms race at some of our biggest tech companies. By far the most astute article I’ve read about AI comes from Ted Chiang, who writes that ChatGPT is a blurry JPEG of the web, or essentially a degraded, compressed synthesis of what’s out there on the internet. Until it can do more to meaningfully understand and push things forward, its utility will be constrained. Chiang is skeptical of its usefulness for helping create original writing (I agree).

And yet. I know what ChatGPT is. I know it’s basically just a language prediction algorithm that can absorb more information than a human ever could and synthesize the internet back at us in a way that can feel vaguely human. I know it doesn’t really “want” anything or have feelings. And yet. I was still unprepared for Kevin Roose’s deeply weird chat with the Bing chatbot, which quickly went off the rails, first in extremely funny and subversive ways (the chatbot fantasizing about persuading nuclear power employees to hand over access codes) and then in deeply creepy ways (the chatbot declaring its love for Kevin and trying to convince him he isn’t in love with his wife). The whole thing is worth a gander. (Fixes are reportedly in the works, of course.)

. . . .

I’m not worried about AI becoming sentient and pulling a “Terminator” (correction: I have fewer than zero fears about this), but I’m much more concerned about what it could steer credulous humans to do. We already have an entire segment of the population brainwashed on propaganda and anti-vaccine hysteria, and we’re certainly not prepared for misinformation and even simply advertising becoming even more hyper-personalized than it already is. Already the underpaid contractors who enforce the guardrails are sounding the alarm.

Link to the rest at Nathan Bransford

Friend or Foe: ChatGPT Has Pushed Language AI into the Spotlight

From Writer Unboxed:

You’ve probably seen the buzz about ChatGPT in the news, on social media, and in authors’ newsletters. Before you’ve even tried it, you might have seen announcements of other Language AIs. You might wonder whether they are to be feared, embraced, or safely ignored. Are they a ploy to steal the minutes you regained when you abandoned Wordle, or a game-changer on the order of Google back in the day? Or are they up-and-coming authors? I hope to provide answers.

What are they?

Language AIs facilitate humans’ ability to productively use vast amounts of text. They do this by “reading” the text their developers chose, chopping the text up and transforming it to numerical measures, and using the measures to search for probable patterns. Those patterns include semantic and contextual relationships between words, grammar structure, and more. When users pose a question, the Language AI uses statistics to predict which specific patterns will satisfy a request.

Large Language AIs are true game-changers. Since ChatGPT was released, Microsoft released a version of Bing that uses ChatGPT and Google announced its version, Bard, is coming. They are large because of the billions of dials that are turning as they read massive amounts of text. ChatGPT’s dials were set after it finished reading in 2021, though they are likely tweaked in real time when users tag an answer as inappropriate, dangerous, or wrong. Bing is reading in real time so its dials continue to spin. Those dials control the AIs’ writing.
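
To make the “dials” metaphor above concrete, here is a minimal sketch, assuming nothing about any vendor’s internals: a toy bigram model in Python whose dials are just word-pair counts, set by “reading” a few sentences and then used to predict the most probable next word. Production systems tune billions of learned parameters rather than raw counts, but the predict-the-next-word principle is the same.

from collections import Counter, defaultdict

# "Reading": count which word follows which in a tiny training text.
corpus = (
    "the old man fished alone . the old man was unlucky . "
    "the boy loved the old man ."
).split()

follows = defaultdict(Counter)  # these counts are the toy model's "dials"
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most statistically likely next word seen during reading."""
    if word not in follows:
        return "?"  # never encountered in the training text
    return follows[word].most_common(1)[0][0]

print(predict("old"))  # -> "man" (follows "old" three times in the corpus)
print(predict("boy"))  # -> "loved"

Scale the counts up to the whole web, and swap them for trained weights, and you have the statistical heart of the systems described here.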

Can they create a story?

When I asked ChatGPT to tell me a story about a character who was the same age and gender and had experienced the same event as one of my published novel’s characters, it returned a story of about two hundred words, and its character’s emotional arc matched my own. Though I knew the arc was not original when I wrote it, I was rattled by ChatGPT having nailed it.

I remembered a conversation with my novel’s developmental editor about literary versus commercial endings and the subsequent revision to the novel’s ending. I wondered if ChatGPT would revise the character’s arc if I asked for a literary story. It didn’t. It defaulted again to the same happy-ish ending, though its literary version added some telling where it previously relied on showing. For example, the actions and nouns of the story remained the same, but it added words to describe the character’s feelings, such as “hopeful” and “resilient.”

Finally, I asked it for a story about a bestselling author who was found after a car accident by a retired nurse. ChatGPT gave no indication it could ever create a story such as Paul Sheldon’s in Stephen King’s Misery.

Later, with tropes and novelty in mind, I asked ChatGPT for stories of the characters in my WIP. No arcs were nailed, so I asked about its choices. Though the back-and-forth was no substitute for human conversation, it spurred my thinking at this early stage of my WIP. For example, it added a water dowser where I had earlier dismissed the idea.

I then asked it to outline a 70,000-word novel using my characters. I was unimpressed by the story suggested by the outline, but the act of poking holes in it helped advance my own messy notes. I asked it to revise the outline to reflect climate-change weather patterns, for a different time period, to give the characters pets, and to make the cat the dowser. Till now, I’ve suspected my brain had been steeped too long in fact-finding to write magical realism, but my exercise with ChatGPT tripped my brain right into magical thinking.

ChatGPT read a lot, learned how we use words, and is able to combine those words in new ways to satisfy a user’s request for stories. Its stories are our stories–the stories that we’ve already told.

Can they write?

ChatGPT’s command of grammar is truly amazing. But when asked to tell stories, it too often begins with “Once upon a time,” and writes in a flat style.

I love the skip my brain makes when I encounter a well-placed alliteration. ChatGPT can define an alliteration and list when a fiction writer should use one. I asked it to write a story using alliterations. First, in a five-paragraph story, it used them only in its first paragraph to introduce the main character – tiny, timid, turtle Timmy. When I revised my request to specify alliterations placed within the story, a story of Sally the squirrel reminded me of the story about Sally at the seashore I read long ago to correct a lisp.

I asked ChatGPT how it detected metaphors and after it described metaphors as nouns with linking verbs, I asked for examples. Out of ten, a few were questionable and one was wrongly included. ChatGPT accepted a correction and offered a replacement.

Large Language AIs do not generally know whether they are reading fiction or nonfiction. When pushed, ChatGPT reported it may infer fiction by the inclusion of imaginative or made-up events. Though neither it nor its rivals should be mistaken for a source of truth, or a source of bestselling novels, they allow us to tap a tremendous amount of text, and we can use that to help us in countless creative ways.

Ready to try it?

Neither friend nor foe, Language AIs facilitate paths to using our vast amounts of text. For certain, they will aid our research and spark ideas as they make predictions, for better or worse, to fill in any gaps and answer our questions. Their technology will become commonplace in focused ways, perhaps as a single-purpose app that will read our manuscripts and draft our synopses and query package, or that will create efficiencies and reduce the cost of marketing novels published non-traditionally.

Link to the rest at Writer Unboxed

Ryting Song Lyrics

Rytr is one of the first AI programs PG experimented with. He revisited the site today.

PG asked Rytr to write the lyrics for a soothing song for a law school graduate who has flunked the bar exam.

PG asked for two versions.

Version 1

[Verse 1]

Feeling like a ship lost out at sea

Caught in a storm, can’t make it to shore

The winds of change, so strong and so keen

My dreams are shattered, what will I do?

[Chorus]

Though the road ahead is dark and unknown

I know I will make it somehow

Don’t need no diploma to show I’m strong

Without a law degree, I’ll still be alright

[Verse 2]

No more studying, no more late night cramming

My future uncertain but I’m ready to take a stand

Though I may not pass the bar exam this time around

I’m still a winner and no one can take that away

[Chorus]

Though the road ahead is dark and unknown

I know I will make it somehow

Don’t need no diploma to show I’m strong

Without a law degree, I’ll still be alright

Version 2

[Verse 1]

So many late nights studying and all the hard work I put in

I never thought I’d be here feeling like I’m startin’ again

Time and time again I thought these tedious efforts would pay off

But now I’m facing the road of a long and hard detour

[Chorus]

Life can be so unexpected, don’t let it bring you down

Maybe it’s time to take a step back and look around

Don’t let your dreams pass you by, there’s a brighter day ahead

Find a way to keep on pushing ’til your dreams come true instead

[Verse 2]

It’s easy to get discouraged when your best isn’t good enough

Just focus on that glimmer of hope, don’t ever give up

When times get hard just remember that there is strength within you

You can make it through no matter what tomorrow brings you

[Chorus]

Life can be so unexpected

How I Used Generative AI Tools For My Short Story, With A Demon’s Eye

From The Creative Penn:

I’ve been noodling on this idea for a few years now. I first thought about it when I had laser eye surgery over four years ago, and then I read a memoir from a combat photographer, It’s What I Do: A Photographer’s Life of Love and War by Lynsey Addario.

Plus, I can’t help writing about demons!

Many of my stories have aspects of demonology in them, and this one brought to mind a scene in Delirium when Blake Daniel returns home only to find demons feasting on his dying father. If you enjoy lots of demons, also check out Gates of Hell. (All my books can be read as stand-alone).

. . . .

I started writing the story in Scrivener, as per my usual process.

But once I had a basic story, I used Sudowrite to expand some of the descriptions and to give me ideas for how the story might go.

In this first example, I selected the ‘demon’ and then used the Describe function for expanding on sensory details.

In this next example, I used Sudowrite to help me with ideas for what happened after the explosion.

Link to the rest at The Creative Penn

Law Review v. AI

PG had too much time on his hands, so he decided to use ChatGPT to write an essay about the same topic as a law review article he came upon.

For a bit of background, most law schools have law reviews. A law review is a periodical that includes articles, often discussing recent appellate court decisions at the state or federal level. The author of a law review article analyzes a decision to determine whether it signals a new development in federal or state law. In some cases, the article may point out that a decision conflicts with other decisions on the same or a similar topic.

As you might have already gathered, most law review articles linger in the darkness, but, on occasion, a law review article may be a forerunner for a new analysis of the law and cases decided under it.

A law school’s law review typically has student editors and staff. One or more faculty members provide overall supervision, mostly looking for wrong-headed articles that could embarrass the institution.

Being an editor or member of the law review staff is a significant plus factor in being hired by a quality law firm or other employer. Typically, it is accompanied by sterling grades.

Below is an abstract of a law review article from The Yale Law Journal. Yale is a very prestigious US law school.

The title of the law review article is The Perils and Promise of Public Nuisance. In this case, the article is written by a professor employed at the University of Virginia School of Law, another law school with an excellent reputation.

[NOTE: PG apologizes for the varying font sizes. His copy of WordPress lost its mind for a while during the creation of this post and PG can’t figure out an easy way to fix it.]

ABSTRACT. Public nuisance has lived many lives. A centuries-old doctrine defined as an unreasonable interference with a right common to the public, it is currently the backbone of thousands of opioid and climate-change suits across the United States. It was a major force behind the landmark 1998 tobacco settlements and has figured in litigation over issues as diverse as gun sales, lead contamination, water pollution, Confederate monuments, and COVID-19 safety standards. Although this common-law oddity has shaped the face of modern tort law, it is unfamiliar to the public and typically ignored, even in law schools. When discussed, it often provokes anxiety: Is it a tort at all? Whatever it is, will it swallow tort law? The regulatory state? Or separation of powers as we know it?

This Article utilizes the current opioid litigation to explore the three most common sets of objections to public nuisance: traditionalist, formalist, and institutionalist. Public nuisance can seem unusual, even outlandish. At worst, it is a potentially capacious mechanism that allows executive-branch actors to employ the judicial process to address legislative and regulatory problems. Nevertheless, its perils are easily overstated and its promise often overlooked. Historically, public nuisance has long addressed problems such as harmful products. Doctrinally, it accords better with tort law than is commonly recognized. And institutionally, it functions as a response to nonideal conditions—specifically, where regulatory mechanisms underperform.

Drawing on long-standing tort principles of duties generated by risk creation, I propose a conception of public nuisance that highlights its coherence with familiar aspects of tort law and its consistency across past and present. Public nuisance is an object lesson in the common law’s balance of stability and evolution, across time and within varying regulatory contexts.

End of legal jargon and on to artificial intelligence.

PG took the title of the Law Review article, The Perils and Promise of Public Nuisance, and asked ChatGPT to write an article on the same topic.

Following is PG’s prompt for the AI and what the AI produced:

Prompt: Write a 500-word essay about The Perils and Promise of Public Nuisance

Public nuisance is a legal concept that refers to actions or conditions that interfere with the enjoyment of life or property of a community as a whole. It can take many forms, including noise, air pollution, littering, or blocking public access to roads and sidewalks. While public nuisance laws can provide a way to address these problems, they also present some perils and promises.

One of the perils of public nuisance is the potential for abuse by the government. For example, in some cases, public nuisance laws have been used to justify crackdowns on political demonstrations or to target marginalized communities. In these situations, the law may be used as a tool of oppression, rather than a means of protecting the public welfare.

Another peril is the difficulty of defining what constitutes a public nuisance. The line between what is acceptable and what is not can be blurry, and what may be considered a nuisance by one person may not be seen as such by another. This can lead to confusion and disputes, and it may be difficult for courts to determine what actions or conditions are truly harmful to the public.

Despite these perils, public nuisance laws also offer many promises. One of the main benefits is that they provide a way for communities to address problems that affect the quality of life for all residents. For example, if a factory is emitting toxic pollutants into the air, public nuisance laws may provide a way for neighbors to seek relief and hold the company accountable.

Additionally, public nuisance laws can serve as a deterrent to potential wrongdoers. If individuals and organizations know that they may be held responsible for the harm they cause to the community, they may be more likely to take steps to prevent or mitigate that harm. This can lead to a more livable community, and it can encourage responsible behavior by all members of society.

Finally, public nuisance laws can help to level the playing field between ordinary citizens and those with greater economic or political power. By providing a means for communities to address problems that affect them as a whole, public nuisance laws empower people to take action even in the face of opposition from those who may be better positioned to resist change.

In conclusion, public nuisance laws offer both perils and promises. While they can be misused and may be difficult to define, they also provide a way for communities to address problems and protect the public welfare. To ensure that public nuisance laws are used effectively, it is important for governments to strike a balance between protecting public health and safety and safeguarding individual rights and freedoms. Ultimately, this will help to create a more livable and equitable society for all.

PG thought the AI did quite a nice job of summarizing the topic in a very readable manner.

Below are the first few paragraphs of the law review article with footnotes omitted.

(PG notes that the intended audience for the law review article is much smaller and more specialized than the audience the algorithm powering ChatGPT is built to address. However, there’s nothing stopping anyone from building an AI that creates documents written in the same manner as law review articles.)

From the Yale Law Journal article:

Why is making obscene telephone calls like laying manure in the street? Answer: in the same way as importing Irish cattle is like building a thatched house in the borough of Blandford Forum; and as digging up the wall of a church is like helping a homicidal maniac to escape from Broadmoor; and as operating a joint-stock company without a royal charter is like being a common [s]cold; and as keeping a tiger in a pen adjoining the highway is like depositing a mutilated corpse on a doorstep; and as selling unsound meat is like embezzling public funds; and as garaging a lorry in the street is like an inn-keeper refusing to feed a traveller; and as keeping treasure-trove is like subdividing houses which so “become hurtful to the place by overpestering it with poor.” All are, or at some time have been said to be, a common (alias public) nuisance.

INTRODUCTION

Public nuisance has lived many lives. A centuries-old doctrine generally defined as “an unreasonable interference with a right common to the general public,” it has recently served as the backbone for more than three thousand opioid lawsuits across the country, as well as hundreds more seeking to hold producers of greenhouse gases accountable for climate change. Twenty-five years ago, it provided the architecture for the lawsuits that impelled the tobacco industry to historic settlements of $246 billion with all fifty states. It has also spurred hundreds of mostly unsuccessful actions across the nation involving, among other things, handguns, lead contamination, water pollution, and predatory lending. Decades earlier, at the turn of the last century, officials used it to abate sewage discharge into rivers, to “repress the nuisance of bawdyhouses,” and to shut down a high-profile labor strike.

All of this and more stems from a single cause of action developed in medieval England to allow the Crown to remove impediments from public roads and waterways. In the past decades, this common-law oddity has generated thousands of lawsuits in which state officials have sued private companies for the negative impact of their products or activities on public health and welfare. Through these actions, public nuisance has influenced American tort litigation and exerted an undeniable regulatory impact.

The opioid lawsuits highlight the two ways in which public nuisance is central to modern mass-tort litigation. First, the opioid lawsuits invariably contain public-nuisance claims. The plaintiff state, local, and tribal governments claim that the opioid products made or distributed by the defendants are a public nuisance under relevant state law—that is, that they constitute an unreasonable interference with a right held by the general public, in this case by jeopardizing public health and welfare. The plaintiffs make other claims too, such as state-law claims for fraud, deceptive marketing, corrupt practices, and unjust enrichment. Nevertheless, public-nuisance claims are a central feature of the litigation and a key to its momentum.

Second, no matter what the specific claims, public nuisance provides the template for the structure of opioid litigation and other suits like it. One striking feature of public nuisance is that it permits state officials to sue parens patriae—literally as “parent of the nation,” on behalf of the people of a jurisdiction—for an infringement on public rights by a private actor. Other types of parens patriae claims exist, but public nuisance was an early example (and an inspiration to other types of suits), which provides public actors with a ready and familiar template. In modern instances, such as tobacco, opioid, and climate-change litigation, the litigation adopts the architecture of a public-nuisance suit, with an official (such as a state’s attorney general or a locality’s district attorney) suing on behalf of the public. That these suits involve a variety of other claims should not lead us to assume that they would exist in the same manner absent the public-nuisance template. To the extent that such suits are now common, the structure of public nuisance has made a lasting imprint on American tort law.

Although its substance and structure are embedded in modern American tort law, public nuisance occupies an uncertain, somewhat liminal position. It is virtually unknown to the general public, little discussed outside of litigation circles, and often ignored even in torts class. When it is discussed, it raises fraught questions. Is it even a tort? If not, what is it? Does its very existence threaten tort law? The regulatory state? Separation of powers as we know it? All in all, public nuisance exerts potentially powerful, but highly variable, real-world force, while provoking equally variable reactions from courts and commentators.

End of law review excerpt.

Feel free to compare/contrast/comment to your heart’s desire.

ChatGPT Is Making Universities Rethink Plagiarism

From Wired:

In late December of his sophomore year, Rutgers University student Kai Cobbs came to a conclusion he never thought possible: Artificial intelligence might just be dumber than humans.

After listening to his peers rave about the generative AI tool ChatGPT, Cobbs decided to toy around with the chatbot while writing an essay on the history of capitalism. Best known for its ability to generate long-form written content in response to user input prompts, Cobbs expected the tool to produce a nuanced and thoughtful response to his specific research directions. Instead, his screen produced a generic, poorly written paper he’d never dare to claim as his own.

“The quality of writing was appalling. The phrasing was awkward and it lacked complexity,” Cobbs says. “I just logically can’t imagine a student using writing that was generated through ChatGPT for a paper or anything when the content is just plain bad.”

Not everyone shares Cobbs’ disdain. Ever since OpenAI launched the chatbot in November, educators have been struggling with how to handle a new wave of student work produced with the help of artificial intelligence. While some public school systems, like New York City’s, have banned the use of ChatGPT on school devices and networks to curb cheating, universities have been reluctant to follow suit. In higher education, the introduction of generative AI has raised thorny questions about the definition of plagiarism and academic integrity on campuses where new digital research tools come into play all the time. 

Make no mistake, the birth of ChatGPT does not mark the emergence of concerns relating to the improper use of the internet in academia. When Wikipedia launched in 2001, universities nationwide were scrambling to decipher their own research philosophies and understandings of honest academic work, expanding policy boundaries to match pace with technological innovation. Now, the stakes are a little more complex, as schools figure out how to treat bot-produced work rather than weird attributional logistics. The world of higher education is playing a familiar game of catch-up, adjusting their rules, expectations, and perceptions as other professions adjust, too. The only difference now is that the internet can think for itself. 

According to ChatGPT, the definition of plagiarism is the act of using someone else’s work or ideas without giving proper credit to the original author. But when the work is generated by something rather than someone, this definition is tricky to apply. As Emily Hipchen, a board member of Brown University’s Academic Code Committee, puts it, the use of generative AI by students leads to a critical point of contention. “If [plagiarism] is stealing from a person,” she says, “then I don’t know that we have a person who is being stolen from.”

Hipchen is not alone in her speculation. Alison Daily, chair of the Academic Integrity Program at Villanova University, is also grappling with the idea of classifying an algorithm as a person, specifically if the algorithm involves text generation.

Daily believes that eventually professors and students are going to need to understand that digital tools that generate text, rather than just collect facts, will fall under the umbrella of things that can be plagiarized from.

Although Daily acknowledges that this technological growth incites new concerns in the world of academia, she doesn’t find it to be a realm entirely unexplored. “I think we’ve been in a version of this territory for a while already,” Daily says. “Students who commit plagiarism often borrow material from a ‘somewhere’—a website, for example, that doesn’t have clear authorial attribution. I suspect the definition of plagiarism will expand to include things that produce.” 

Eventually, Daily believes, a student who uses text from ChatGPT will be seen as no different than one that copies and pastes chunks of text from Wikipedia without attribution. 

Link to the rest at Wired

PG never thought of college professors as Luddites, but those mentioned in the OP certainly fit the definition.

AI Generated Art for a Comic Book. Human Artists Are Having a Fit.

From The Wall Street Journal:

Kris Kashtanova says doing the art for the graphic novel “Zarya of the Dawn” was like conjuring it up with a spell.

“New York Skyline forest punk,” the author typed into an artificial intelligence program that turns written prompts into pictures. Then came the tinkering with the wording to get the right effect. “Crepuscular rays. Epic scene.”

The 18-page book follows the travels of a young character who awakes alone and confused in an abandoned, futuristic world, and who looks a lot like Zendaya, the actress from “Euphoria” and the recent “Spider-Man” movies. The images were composed on Midjourney, one of a batch of services that create new images based on artwork and photos already online. Last year, “Zarya of the Dawn,” which credited the software as a co-author on the title page, became the first work of its kind to get a copyright from the Library of Congress.

But now the copyright is under review, posing a big question: Who really owns these AI-generated, mashup images?

Text-based AI programs such as OpenAI’s ChatGPT are already causing a ruckus in the education world, with teachers worrying that students might pass off AI-generated essays as their own. Christian Terwiesch, a professor at the Wharton business school, recently published a paper concluding that the software would have received a B to B- on one of his M.B.A. courses—better than some of his real-life students.

Now creative types are on edge over how AI might upend their livelihoods. Several artists have begun legal action against Midjourney and other AI services, saying their images were included in reference databases without their permission. Some think it’s too easy a shortcut. Movie director Guillermo del Toro recently described AI-generated animation as “an insult to life.”

For “Zarya of the Dawn,” Mx. Kashtanova, who uses a gender-neutral honorific and pronoun, says they were upfront about using the technology. Mx. Kashtanova touched up the images generated by Midjourney and provided the comic’s text, and isn’t too concerned about what happens as the case at the Library of Congress’s Copyright Office continues.

“Like, no one is going to die,” they say, adding that they applied for the copyright with plans to donate money from licensing fees to a New York nonprofit, Backpacks for the Street, where they volunteer. Midjourney, which didn’t respond to a request for comment, is paying for the legal fees to help make the case to retain copyright. The Copyright Office says it doesn’t comment on pending cases.

The case is turning into a barometer for how AI art is treated in the eyes of the law.

“Think about photography,” says Van Lindberg, an intellectual property lawyer at Taylor English Duma LLP in San Antonio, who is representing Mx. Kashtanova, along with legal group Open Advisory Services. In the past, when photographers still used film, they spent much of their energy carefully composing the right shot. In the digital age, it’s more common to take lots of pictures and select the best—which is similar to what artists are doing with AI programs, he says.

“We’re starting to use our intelligence for curation as opposed to some other aspects of creative work,” he says. “Is that enough to sustain copyright? I believe it will ultimately be found that it is, but it’s an open question.”

The question is becoming more urgent as the technology improves.

Jason M. Allen stirred up a hornet’s nest of controversy online last year when he beat a host of artists to win first prize for digital art at the Colorado State Fair. He experimented with hundreds of different prompts on Midjourney to come up with his work, “Théâtre D’Opéra Spatial.” The judges hadn’t realized what the software was.

Software engineer Stephen Thaler this month took the Copyright Office to court in Washington, D.C., after it rebuffed his application for “A Recent Entrance to Paradise,” which he generated with his own program to represent a near-death experience. He argues that as the program’s creator, the image rights belong to him. The Copyright Office ruled that it wouldn’t knowingly register a work solely created by AI.

“Whether it’s music or movies or art or text, you can now go to dozens of openly available AI systems online like DALL-E or ChatGPT and they will make art that passes all traditional tests for whether something’s protectable,” says Ryan Abbott, an attorney at Brown Neri Smith & Khan LLP, who is representing Dr. Thaler.

In the U.S., someone seeking copyright needs to show only that it contains “a modicum of creativity,” as the Supreme Court has said. Mx. Kashtanova thinks “Zarya of the Dawn” easily passes the threshold.

One of the opening scenes shows the lead character holding a mysterious postcard from someone called Rusty, a moody scene that helps set up the rest of the story as Zarya sets out to find a way home.

Mx. Kashtanova describes going through hundreds of prompts to capture the right atmosphere, trying phrases such as “cellular wisdom” and “alien forest” until Midjourney delivered the goods.

They repeated the process as the story progressed, often typing in “Zendaya” to keep Zarya’s appearance consistent.

AI developers often say the idea is to give human imagination a helping hand. 

Link to the rest at The Wall Street Journal

PG will repeat that AI is simply a sophisticated tool that allows individuals to create images more easily and quickly than they would likely be able to do without AI.

Is there a serious professional artist who thinks Photoshop and related tools like Procreate and Astropad Studio shouldn’t be permitted for creative artists to use because they’re not mixing their own paints and using a camel-hair brush to create their work on a canvas?

PG remembers a pair of statements from a long time ago regarding paleontology and various early hominins.

Man, the tool-maker

Tools, the man-maker

PG notes that these statements predate any sort of politically correct speech, and he acknowledges that man and woman could be used interchangeably and the statements would still be true.

PG suggests that AI tools are yet another man-maker that will accelerate the imagination and artistic creativity of humanity as a whole.

AI Isn’t Really Artificial Intelligence

From Tech Register:

At its core, today’s AI is incapable of comprehension, knowledge, thought, or “intelligence.” This name is little more than a marketing gimmick.

Nothing’s easier to sell than a product with a good name. The technology that we call “artificial intelligence” is extremely complicated, but thanks to its name, you already have an idea of what it does! There’s just one problem: AI isn’t “intelligent” at any level, and corporations aren’t interested in correcting the public’s misconceptions.

There’s Nothing Intelligent About AI

Artificial intelligence is a longstanding staple of pop culture and real science. We’ve spent nearly a century pursuing this technology, and the idea of “living machines” goes back thousands of years. So, we have a pretty clear understanding of what someone means when they say “artificial intelligence.” It’s something comparable to human intelligence—the ability to comprehend, adapt, and have novel ideas.

But the technology that we call “artificial intelligence” lacks these qualities. It cannot “know” or “think” anything. Existing AI is just a mess of code attached to a big pile of data, which it remixes and regurgitates. You can ask ChatGPT to write you a resume, and it’ll spit out something based on the resumes in its dataset (plus whatever info you share). This is useful, and it automates labor, but it’s not a sign of intelligence.

Of course, ChatGPT is a chatbot, so it can feel very “human.” But most AI applications are non-conversational; they don’t talk or answer questions. And without the veneer of a conversation, the lack of “intelligence” in AI is very noticeable.

Take Tesla’s self-driving cars, for example. Elon Musk has spent nearly a decade pretending that Tesla Full Self-Driving is just a year away—it’s almost ready, and it will be 150% safer than a human driver! Yet this AI program continues to linger in beta, and every time we hear of it, Full Self-Driving is criticized as a safety hazard. The AI isn’t even smart enough to do its job.

For a more down-to-earth example, just look at robot vacuums. They collect a ridiculous amount of data on your home in the name of obstacle avoidance and navigational AI. And while these AI-enabled robot vacuums are an improvement over what we had in the past, they still have a ridiculous amount of trouble with basic obstacles, like dog poop, kids’ toys, and small rugs.

Ordinary people, including a large number of people who work in technology, don’t know anything about AI or how it works. They just hear the phrase “artificial intelligence” and make an assumption. These assumptions may seem inconsequential, but in reality, they are a guiding force behind technological development, the economy, and public policy.

This Technology Is Useful, but the Marketing Is Nonsense

I don’t want to downplay the importance of AI or machine learning technology. You interact with this stuff every time you use your cellphone, search for something on Google, or scroll through social media. Machine learning drives innovation in physics, it contributes to “Warp Speed” vaccine development, and it’s currently making its debut on the battlefield.

But the term “artificial intelligence” is plastered on this technology for marketing purposes. It’s a flashy name that tells customers and investors, “our product is futuristic and has a purpose.” As explained by AI researcher Melanie Mitchell in a conversation with the Wall Street Journal, companies and engineers routinely slap the name “AI” on anything that involves machine learning, as the phrase is proven to elicit a response from investors (who may know very little about technology, let alone AI).

This is something that you can see in nearly every industry. Just do a Google search for a company name and add the term “AI.” You’ll be shocked by the number of businesses that brag about their AI pursuits in vague language, with zero proof that this technology has actually contributed to their profitability, productivity, or innovation.

And, as noted by Dr. Mitchell, this same marketing tactic was utilized in the 1970s and 80s—companies and engineers secured massive amounts of funding with the promise of “artificial intelligence.” Their research was not a waste of money, but it wasn’t profitable, so the funding dried up. (Of course, software is much more important today than it was in the 20th century. The term “artificial intelligence” is now attached to useful products and processes, so people are less likely to lose interest.)

In some ways, I think that the name “artificial intelligence” is a good idea. Companies spent a good decade calling everything an “algorithm,” which only led to confusion and frustration among the general public. The pivot to “AI” generates a lot of enthusiasm, which should lead to a more rapid development of automated software technologies.

But this enthusiasm hides the fact that “AI” is a complicated, confusing, and narrow technology. People readily assume that today’s “AI” is similar to what we’ve seen in pop culture, and very few corporations are willing to fight (or comment on) this misconception. (That said, social media weirdos are the biggest offenders. They make the most extreme and patently false claims about AI, which are amplified and consumed by people who don’t know any better.)

. . . .

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off. Chatbots will write the news, robot arms will perform heart surgery, and super-strong androids will commit all of your favorite OSHA violations while constructing suburban homes. But in reality, the technology that we call “AI” simply offsets labor.

In some ways, the offset of labor created by AI is very obvious. This technology doesn’t comprehend a single thing in existence, so in order to make it perform a task correctly, it requires constant training, testing, and troubleshooting. For every job that an AI replaces, it may create a new job.

Many of these new jobs require expertise in machine learning. But a large number of workers involved in AI development perform “menial” labor. OpenAI was caught paying Kenyan workers less than $2 an hour to help remove racism, sexism, and violent suggestions from its chatbot. And Amazon’s Mechanical Turk, a marketplace for tasks people often assume are automated by “AI,” frequently pays a human just a few pennies to complete the work.

Link to the rest at Tech Register

PG isn’t convinced by the OP.

It’s not difficult to debunk a new technology. PG remembers experts who ridiculed the idea that every person would have a computer on her/his desk.

That was true. For a while.

We have them on our wrists and in our pockets and backpacks now.

Contrary to the OP, PG has never read or heard anyone involved with AI research claim:

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off.

Putting words in the mouths of those one is attempting to scorn is a centuries-old practice.

Ironically, considering the view of the OP, an Australian/Iberian team has been experimenting with the design and implementation of different AI models to identify potentially false claims repeated by politicians in Spain and Australia. The system is called ClaimCheck.

Abstracts written by ChatGPT fool scientists

From Nature:

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

. . . .

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.

“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”

Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.

Link to the rest at Nature

PG decided to test ChatGPT’s capabilities for creating an abstract of a longer text. He did so by copying and pasting the text of the blog post above into ChatGPT and asking the AI to summarize it.

Here’s what the AI produced:

A preprint study found that AI chatbot, ChatGPT, can write convincing fake research-paper abstracts that are often difficult to spot. The researchers at Northwestern University in Chicago used ChatGPT to generate 50 medical research abstracts and compared them with the original abstracts using a plagiarism detector and AI-output detector. They also asked a group of medical researchers to spot the fabricated abstracts. The ChatGPT-generated abstracts sailed through the plagiarism checker and the AI-output detector spotted 66% of the generated abstracts. However, the human reviewers correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts, raising concerns about the ethical implications of using AI in scientific research.

It’s Time to Teach AI How to Be Forgetful

From Wired:

OUR BRAIN HAS evolved to make predictions and explanations in unstable and ill-defined situations. For instance, to understand a novel situation, the brain generates a single explanation on the fly. If this explanation is upturned by additional information, a second explanation is generated. 

Machine learning, on the other hand, typically takes a different path: It sees reasoning as a categorization task with a fixed set of predetermined labels. It views the world as a fixed space of possibilities, enumerating and weighing them all. This approach, of course, has achieved notable successes when applied to stable and well-defined situations such as chess or computer games. When such conditions are absent, however, machines struggle.

One such example is virus epidemics. In 2008, Google launched Flu Trends, a web service that aimed to predict flu-related doctor visits using big data. The project, however, failed to predict the 2009 swine flu pandemic. After several unsuccessful tweaks to its algorithm, Google finally shuttered the project in 2015.

In such unstable situations, the human brain behaves differently. Sometimes, it simply forgets. Instead of getting bogged down by irrelevant data, it relies solely on the most recent information. This is a feature called intelligent forgetting. Adopting this approach, an algorithm that relied on a single data point—predicting that next week’s flu-related doctor visits are the same as in the most recent week, for instance—would have reduced Google Flu Trends’ prediction error by half. 
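
To make intelligent forgetting concrete, here is a minimal Python sketch (ours, not the OP’s; the visit counts are invented) comparing a predictor that uses only the most recent week with one that averages the whole history:

```python
# Intelligent forgetting: predict next week's flu-related doctor visits
# from the most recent week alone, versus averaging the full history.
# The data below are invented for illustration.

def recency_predictor(history):
    """Rely only on the most recent observation (intelligent forgetting)."""
    return history[-1]

def mean_predictor(history):
    """Big-data baseline: weigh every past observation equally."""
    return sum(history) / len(history)

visits = [120, 130, 125, 140, 310]   # weekly counts; the last week spikes
actual_next_week = 295               # the spike persists

for name, predict in [("recency", recency_predictor), ("mean", mean_predictor)]:
    guess = predict(visits)
    print(f"{name:>7}: predicted {guess:.0f}, error {abs(guess - actual_next_week):.0f}")
```

In an unstable regime like the invented spike above, the single-data-point predictor lands far closer, which is the effect the OP attributes to the flu example.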

Intelligent forgetting is just one dimension of psychological AI, an approach to machine intelligence that also incorporates other features of human intelligence such as causal reasoning, intuitive psychology, and physics. In 2023, this approach to AI will finally be recognized as fundamental for solving ill-defined problems. Exploring these marvelous features of the evolved human brain will finally allow us to make machine learning smart. Indeed, researchers at the Max Planck Institute, Microsoft, Stanford University, and the University of Southampton are already integrating psychology into algorithms to achieve better predictions of human behavior, from recidivism to consumer purchases. 

One feature of psychological AI is that it is explainable. Until recently, researchers assumed that the more transparent an AI system was, the less accurate its predictions were. This mirrored the widespread but incorrect belief that complex problems always need complex solutions. In 2023, this idea will be laid to rest. As the case of flu predictions illustrates, robust and simple psychological algorithms can often give more accurate predictions than complex algorithms. Psychological AI opens up a new vision for explainable AI: Instead of trying to explain opaque complex systems, we can check first if psychological AI offers a transparent and equally accurate solution.

In 2023, deep learning in itself will come to be seen as a cul-de-sac. Without the help of human psychology, it will become clearer that the application of this type of machine learning to unstable situations eventually runs up against insurmountable limitations. We will finally recognize that more computing power makes machines faster, not smarter. One such high-profile example is self-driving cars. The vision of building the so-called level-5 cars—fully automated vehicles capable of driving safely under any conditions without human backup—has already hit such a limitation. Indeed, I predict that in 2023, Elon Musk will retract his assertion that this category of self-driving cars is just around the corner. Instead, he will refocus his business on creating the much more viable (and interesting) level-4 cars, which are able to drive fully autonomously, without human help, only in restricted areas such as motorways or cities specifically designed for self-driving vehicles. Widespread adoption of level-4 cars will instead spur us to redesign our cities, making them more stable and predictable, and barring potential distractions for human drivers, cyclists, and pedestrians. If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.

Link to the rest at Wired

Copyright in the Age of Artificial Intelligence

Following is a transcript of a meeting sponsored by the United States Copyright Office on February 5, 2020, that includes remarks from a variety of speakers regarding Artificial Intelligence and Copyright.

As PG is posting this, the meeting happened slightly less than three years ago, and an explosion of artificial intelligence research, programs, apps, etc., has occurred since that time, so keep the age of the transcript in mind as you review it.

The transcript is a 399-page PDF file. PG will attempt to embed the transcript next in this post. He has no idea what might happen, whether the TPV hosting service will be strained beyond the breaking point, etc., etc.

If the embed doesn’t work, you can find the original at Copyright.gov

or by cutting and pasting the following link into your web browser: https://www.copyright.gov/events/artificial-intelligence/transcript.pdf

That said, here goes with the embed:


UPDATE: When he tried to post the embed, PG received a strange error message that mentioned the failure of a “fake worker,” which may or may not refer to an employee of the U.S. Copyright Office.

You’ll need to use the Copyright.gov link above to view the original Copyright Office transcript.

Evidently the Copyright Office doesn’t have any fake workers who fail.

Dark Horse AI Gets Passing Grade in Law Exam

From Futurism:

An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, got a “marginal pass” on a recent blindly graded law and economics exam at George Mason University, according to a recent blog post by economics professor Alex Tabarrok.

It’s yet another warning shot that AI is experiencing a moment of explosive growth in capability — and it’s not just OpenAI’s ChatGPT that we have to worry about.

. . . .

Claude is already impressing academics with its ability to come up with strikingly thorough answers to complex prompts.

For one law exam question highlighted by Tabarrok, Claude was able to generate believable recommendations on how to change intellectual property laws.

“Overall, the goal should be to make IP laws less restrictive and make more works available to the public sooner,” the AI concluded. “But it is important to still provide some incentives and compensation to creators for a limited period.”

Overall, Tabarrok found that “Claude is a competitor to GPT-3 and in my view an improvement,” because it was able to generate a “credible response” that’s “better than many human responses.”

To be fair, others were less impressed with Claude’s efforts.

“To be honest, this looks more like Claude simply consumed and puked up a McKinsey report,” the Financial Times wrote in a piece on Tabarrok’s findings.

While Claude and ChatGPT are similar in terms of user experience, the models were trained in different ways, especially when it comes to ensuring that things don’t get out of hand.

Claude makes use of “constitutional AI,” as described in a yet-to-be-peer-reviewed paper shared by Anthropic researchers last month.

“We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs,” they wrote. “The process involves both a supervised learning and a reinforcement learning phase.”

“Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions,” the company wrote in a December tweet. “Constitutional AI lets them respond to questions using a simple set of principles as a guide.”

Link to the rest at Futurism

CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors

From Futurism:

Last week, we reported that the prominent technology news site CNET had been quietly publishing articles generated by an unspecified “AI engine.”

The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor. The fact that CNET never publicly announced the program, and that the disclosure that the posts were bot-written was hidden away behind a human-sounding byline — “CNET Money Staff” — made it feel as though the outlet was trying to camouflage the provocative initiative from scrutiny.

After the outcry, CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET‘s reputation for “being transparent.”

Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from “CNET Money Staff” to simply “CNET Money,” as well as making the disclosure more prominent.

Furthermore, she promised, every story published under the program had been “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

That may well be the case. But we couldn’t help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.

Take this section in the article, which is a basic explainer about compound interest (emphasis ours):

“To calculate compound interest, use the following formula:

Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods 

For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

It sounds authoritative, but it’s wrong. In reality, of course, the person the AI is describing would earn only $300 over the first year. It’s true that the total value of their principal plus their interest would total $10,300, but that’s very different from earnings — the principal is money that the investor had already accumulated prior to putting it in an interest-bearing account.

“It is simply not correct, or common practice, to say that you have ‘earned’ both the principal sum and the interest,” Michael Dowling, an associate dean and professor of finance at Dublin City University Business School, told us of the AI-generated article.

It’s a dumb error, and one that many financially literate people would have the common sense not to take at face value. But then again, the article is written at a level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place, so it seems to run the risk of providing wildly unrealistic expectations — claiming you could earn $10,300 in a year on a $10,000 investment — to the exact readers who don’t know enough to be skeptical.
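
The arithmetic is easy to check. Here is a minimal Python sketch (ours, not from the OP or CNET) applying the standard compound-interest formula to the article’s example:

```python
# CNET's example: $10,000 at 3% interest, compounded annually for one year.
principal = 10_000
rate = 0.03      # annual interest rate
n = 1            # compounding periods per year
years = 1

balance = principal * (1 + rate / n) ** (n * years)
earnings = balance - principal   # earnings exclude the original principal

print(f"Balance after one year: ${balance:,.2f}")   # $10,300.00
print(f"Interest earned:        ${earnings:,.2f}")  # $300.00, not $10,300
```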

Another error in the article involves the AI’s description of how loans work. Here’s what it wrote (again, emphasis ours):

“With mortgages, car loans and personal loans, interest is usually calculated in simple terms.

For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

Again, the AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.

What it’s bungling this time is that the way mortgages and auto loans are typically structured, the borrower doesn’t pay a flat amount of interest per year, or even per monthly payment. Instead, on each successive payment they owe interest only on the remaining balance. That means that toward the beginning of the loan, the borrower pays more interest and less principal, which gradually reverses as the payments continue.

It’s easy to illustrate the error by entering the details from the CNET AI’s hypothetical scenario — a $25,000 loan with an interest rate of 4 percent — into an auto loan amortization calculator. The result? Contrary to what the AI claimed, there’s never a year when the borrower will pay a full $1,000, since they start chipping away at the balance on their first payment.
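
The same check can be done without a web calculator. Below is a bare-bones amortization sketch in Python (ours, not from the OP); the five-year term is an assumption, since the article’s example doesn’t specify one, and the payment comes from the standard fixed-payment formula:

```python
# The CNET AI's hypothetical loan: $25,000 at 4% APR, assuming a
# five-year (60-month) term with interest charged on the remaining balance.
principal = 25_000.0
annual_rate = 0.04
months = 60
r = annual_rate / 12   # monthly rate

# Standard fixed-payment amortization formula.
payment = principal * r / (1 - (1 + r) ** -months)

balance = principal
first_year_interest = 0.0
for _ in range(12):
    interest = balance * r          # interest accrues only on what's left
    balance -= payment - interest   # the remainder reduces the principal
    first_year_interest += interest

print(f"Monthly payment:     ${payment:.2f}")              # about $460
print(f"First-year interest: ${first_year_interest:.2f}")  # about $916, under $1,000
```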

CNET‘s AI is “absolutely” wrong in how it described loan payments, Dowling said.

“That’s just simply not the case that it would be $1,000 per year in interest,” he said, “as the loan balance is being reduced every year and you only pay interest on the outstanding balance.”

The problem with this description isn’t just that it’s wrong. It’s that the AI is eliding an important reality about many loans: that if you pay them down faster, you end up paying less interest in the future. In other words, it’s feeding terrible financial advice directly to people trying to improve their grasp of it.

Link to the rest at Futurism

PG says somebody (not PG) is going to start a website that features errors made by AI systems.

PG also says that AI is roughly where airplanes were when, on December 17, 1903, the Wright Flyer traveled 120 feet in 12 seconds at a speed of 6.8 miles per hour at Kitty Hawk, North Carolina.

Fifteen years later, a British Sopwith Dragon flew at a speed of 149 miles per hour. Twenty-two years after that, the Lockheed P-38 flew at 400 mph. Late in World War II, a Messerschmitt Me.262 reached a sustained top speed of 540 mph.

PG says AI development isn’t like airplane development. It’s going to be much, much faster.

An A.I. Translation Tool Can Help Save Dying Languages. But at What Cost?

From Slate:

Sanjib Chaudhary chanced upon StoryWeaver, a multilingual children’s storytelling platform, while searching for books he could read to his 7-year-old daughter. Chaudhary’s mother tongue is Kochila Tharu, a language with about 250,000 speakers in eastern Nepal. (Nepali, Nepal’s official language, has 16 million speakers.) Languages with a relatively small number of speakers, like Kochila Tharu, do not have enough digitized material for linguistic communities to thrive—no Google Translate, no film or television subtitles, no online newspapers. In industry parlance, these languages are “underserved” and “underresourced.”

This is where StoryWeaver comes in. Founded by the Indian education nonprofit Pratham Books, StoryWeaver currently hosts more than 50,000 open-licensed stories across reading levels in more than 300 languages from around the world. Users can explore the repository by reading level, language, and theme, and once they select a story, they can click through illustrated slides (each as if it were the page of a book) in the selected language (there are also bilingual options, where two languages are shown side-by-side, as well as download and read-along audio options). “Smile Please,” a short tale about a fawn’s ramblings in the forest, is currently the “most read” story—originally written in Hindi for beginners, it has since been translated into 147 languages and read 281,000 times.

A majority of the languages represented on the platform are from Africa and Asia, and many are Indigenous, in danger of losing speakers in a world of almost complete English hegemony. Chaudhary’s experience as a parent reflects this tension. “The problem with children is that they prefer to read storybooks in English rather than in their own language because English is much, much easier. With Kochila Tharu, the spelling is difficult, the words are difficult, and you know, they’re exposed to English all the time, in schools, on television,” Chaudhary said.

Artificial intelligence-assisted translation tools like StoryWeaver can bring more languages into conversation with one another—but the tech is still new, and it depends on data that only speakers of underserved languages can provide. This raises concerns about how the labor of the native speakers powering A.I. tools will be valued and how repositories of linguistic data will be commercialized.

To understand how A.I.-assisted translation tools like StoryWeaver work, it’s helpful to look at neighboring India: With 22 official languages and more than 780 spoken languages, it is no accident that the country is a hub of innovation for multilingual tech. StoryWeaver’s inner core is inspired by a natural language processing tool developed at Microsoft Research India called interactive neural machine translation prediction technology, or INMT.

Unlike most A.I.-powered commercial translation tools, INMT doesn’t do away with a human intermediary altogether. Instead, it assists humans with hints in the language they’re translating into. For example, if you begin typing, “It is raining” in the target language, the model working on the back-end supplies “tonight,” “heavily,” and “cats and dogs” as options for completing your sentence, based on the context and the previous word or set of words. During translation, the tool accounts for meaning in the original language and what the target language allows, and then generates possibilities for the translator to choose from, said Kalika Bali, principal researcher at Microsoft and one of INMT’s main architects.
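
As a rough illustration of that interaction loop, here is a toy Python completion helper (ours, not Microsoft’s INMT code). A real system would rank continuations with a neural model conditioned on the source sentence; a bigram count over a tiny corpus stands in for it here:

```python
from collections import Counter, defaultdict

# A tiny stand-in "corpus" of target-language text.
corpus = "it is raining tonight . it is raining heavily . it was raining".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(last_word, k=3):
    """Rank candidate next words for the translator to accept or ignore."""
    return [word for word, _ in bigrams[last_word.lower()].most_common(k)]

print(suggest("raining"))  # ['tonight', 'heavily']
```

The human stays in the loop: the tool proposes, the translator chooses, and each accepted word becomes context for the next suggestion.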

Tools like INMT allow StoryWeaver’s cadre of volunteers to generate translations of existing stories quickly. The user interface is easy to master even for amateur translators, many of whom, like Chaudhary, are either volunteering their time or already working for nonprofits in early childhood education. The latter is the case for Churki Hansda. Working in Kora and Santali, two underserved Indigenous languages spoken in eastern India, she is an employee at Suchana Uttor Chandipur Community Society, one of StoryWeaver’s many partner organizations scattered all over the world. “We didn’t really have storybooks growing up. Our school textbooks were in Bengali [the dominant regional language], and we would end up memorizing everything because we didn’t understand what we were reading,” Hansda told me. “It’s a good feeling to be able to create books in our languages for our children.”

Amna Singh, Pratham Books’ content and partnerships manager, estimates that 58 percent of the languages represented on StoryWeaver are underserved, a status quo that has cascading consequences for early childhood learning outcomes. But attempts to undo the neglect of underserved language communities are also closely linked with unlocking their potential as consumers, and A.I.-powered translation technology is a big part of this shift. Voice recognition tools and chat bots in regional Indian languages aim to woo customers outside metropolitan cities, a market that is expected to expand as cellular data usage becomes even cheaper.

These tools are only as good as their training data, and sourcing is a major challenge. For sustained multilingualism on the internet, machine translation models require large volumes of training data generated in two languages parallel to one another. Parliamentary proceedings and media publications are common sources of publicly available data that can be scraped for training purposes. However, both these sources—according to Microsoft’s researcher Bali—are too specific, and do not encompass a wide enough range in terms of topics and vocabulary to be properly representative of human speech. (This is why StoryWeaver isn’t a good source for training data, either, because sentences in children’s books are fairly simple and the reading corpus only goes up to fourth-grade reading levels.)

Link to the rest at Slate

The Future of AI Writing and Audio

From Publishers Weekly:

Digital Book World, a conference focusing on publishing innovation, offered insight into how technologists, and some publishers, are planning to incorporate AI into their workflows. Asked about AI and the use of ChatGPT, which automates writing, Mary McAveeney, CEO of Abrams, was skeptical of its ability to write books. She conceded, “It might be good for catalog copy.”

Earlier in the conference, organizer Bradley Metrock asked publishers Laini Brown, director of publicity for the Nashville office of Hachette Book Group, and Lisa Lucas, senior vice president and publisher of Pantheon and Schocken Books, what they thought of the news that the next iteration of ChatGPT will be able to produce a 60,000-word book in 20 seconds. Neither publisher chose to respond.

Others warned against relying too heavily on AI without human intervention. For example, Madeleine Rothberg, senior subject matter expert for WGBH National Center for Accessible Media in Boston, warned against posting AI-generated subtitles for YouTube videos without first reviewing them. “It’s not a good idea, because we have found the AI doesn’t always get the words right and makes mistakes,” she said, citing instances of unintended vulgarity. Or, as Ashok Giri, CEO of Page Magik, put it, “primary research human beings are [still] needed.” Giri’s company offers automation tools and data to help streamline editorial and production workflow.

Others are more skeptical. One attendee, who wished to remain anonymous so as not to offend others in the room, noted that ChatGPT and AI are limited by what is put into them and, for this, they need to absorb vast swaths of existing information. Much of that comes from print books, e-books, and internet writing protected by copyright. “It sounds exactly like what Google hoped to accomplish with the Google Books program,” they said. “What happened there? Lawsuits.”

Bradley Metrock, the conference organizer, acknowledged that the owners of copyrighted material incorporated into AI systems will likely challenge the use of their content. “There are going to be a lot of lawsuits before this is sorted out,” said Metrock, who owns several companies that invest in various AI and voice-related projects. “The point here is that good technology challenges,” he said, citing the lack of innovation in the ebook space over the past 15 years. “Everything stays the same,” he added, “until it doesn’t.”

. . . .

Audiobooks are now a $5 billion market worldwide, and they continue to experience double-digit growth. According to the Association of Audiobook Publishers, the U.S. market is growing at a rate of 25% per year, and reached $1.6 billion in sales for 2021. “The increasing availability of titles is the biggest driver of audiobook growth,” said Videl Bar-Kar, global head of audio for Frankfurt-based Bookwire. “The best way to grow the catalog of available titles is through backlist.”

Here, the use of AI generated voices to narrate audiobooks offers publishers who cannot afford human narrators the opportunity to turn backlist into audiobooks for low cost. “And if the book sells and becomes a success,” Bar-Kar added, “they can always go back and re-record the book with a human narrator.”

Bar-Kar called the audiobook market a “once in a generation opportunity,” noting: “There are new people discovering audio for the first time year-on-year, not because of the heavy consumers, but because there are new people coming into the market.” He described it as a business opportunity, and one that needs to be demystified: “Have the courage and confidence to stop selling your audiobook rights and develop your own audio program,” he said.

Link to the rest at Publishers Weekly

Imitation Is The Best Form Of Flattery. Flattery Is Not A Defense To Copyright Infringement.

From Above the Law:

Unless you’ve been living under a law library, it would be hard not to take note of the rapid influx of AI art. Face-modifying apps, extended shots of events and people that never happened, so strange that “uncanny” only begins to explain their weirdness, you name it. The figure of AI as artist has arrived, but is any of it legal? A small group of artists aims to find out. From Reuters:

A group of visual artists has sued artificial intelligence companies for copyright infringement, adding to a fast-emerging line of intellectual property disputes over AI-generated work.

Stability AI’s Stable Diffusion software copies billions of copyrighted images to enable Midjourney and DeviantArt’s AI to create images in those artists’ styles without permission, according to the proposed class-action lawsuit filed Friday in San Francisco federal court.

The artists’ lawyers, the Joseph Saveri Law Firm and Matthew Butterick, filed a separate proposed class action lawsuit in November against Microsoft’s GitHub Inc and its business partner OpenAI Inc for allegedly scraping copyrighted source code without permission to train AI systems.

. . . .

I’m gonna flag it for you in case your eyes glossed over it. The word there is billions. Billions. With a B. Even if the individual damages are pennies on the dollar, the aggregate of those alleged copyright infringements would be… well, I’m not that good at math, but it would put a sizeable dent in my student loan principal.

. . . .

For those not in the know, if you’ve ever seen a stock image, every one you’ve seen is probably from Getty.

Link to the rest at Above the Law

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

From Time Magazine:

ChatGPT was hailed as one of 2022’s most impressive technological innovations upon its release last November. The powerful artificial intelligence (AI) chatbot can generate text on almost any topic or theme, from a Shakespearean sonnet reimagined in the style of Megan Thee Stallion, to complex mathematical theorems described in language a 5-year-old can understand. Within a week, it had more than a million users.

ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies.

But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.

The work was vital for OpenAI. ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse. Since parts of the internet are replete with toxicity and bias, there was no easy way of purging those sections of the training data. Even a team of hundreds of humans would have taken decades to trawl through the enormous dataset manually. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.
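
The architecture described, a separate learned detector screening a generator’s output, can be sketched schematically. The code below is our illustration of the pattern, not OpenAI’s implementation; generate and toxicity_score are hypothetical stand-ins for the language model and the classifier trained on the labeled examples:

```python
from typing import Callable

def make_safe_generator(generate: Callable[[str], str],
                        toxicity_score: Callable[[str], float],
                        threshold: float = 0.5,
                        max_attempts: int = 3) -> Callable[[str], str]:
    """Wrap a text generator with a toxicity detector that screens each
    candidate reply before it reaches the user."""
    def safe_generate(prompt: str) -> str:
        for _ in range(max_attempts):
            candidate = generate(prompt)              # the model's draft reply
            if toxicity_score(candidate) < threshold:
                return candidate                      # clean enough to show
        return "Sorry, I can't help with that."       # fallback refusal
    return safe_generate
```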

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.” (OpenAI does not disclose the names of the outsourcers it partners with, and it is not clear whether OpenAI worked with other data labeling firms in addition to Sama on this project.)

. . . .

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

. . . .

Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

Link to the rest at Time

PG wonders if there isn’t a better way to engineer an AI product to identify and avoid toxic documents in its construction of a database.

He also thinks this is an example of when Silicon Valley’s long-standing motto, “Move fast and break things,” should involve some adult judgment on the part of someone with authority in the organization.

One of the oldest bits of business advice is, “Know your suppliers.” Evidently, the management at OpenAI all missed the class where that was discussed.

PG notes that his brothers and sisters of the bar are not immune to the “smart people doing dumb things” behavior pattern – see, for example, Why Toxic Culture Is To Blame For Women Leaving Law Firms.

OpenAI Background

From Wikipedia:

OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft and Matthew Brown Companies. OpenAI is headquartered at the Pioneer Building in the Mission District of San Francisco.

In December 2015, Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced  the formation of OpenAI and pledged over US$1 billion to the venture. The organization stated it would “freely collaborate” with other institutions and researchers by making its patents and research open to the public.

. . . .

In April 2016, OpenAI released a public beta of “OpenAI Gym”, its platform for reinforcement learning research. In December 2016, OpenAI released “Universe”, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.

In 2018, Musk resigned his board seat, citing “a potential future conflict (of interest)” with Tesla’s AI development for self-driving cars, but remained a donor.

In 2019, OpenAI transitioned from non-profit to “capped” for-profit, with profit cap set to 100X on any investment. The company distributed equity to its employees and partnered with Microsoft, who announced an investment package of US$1 billion into the company. OpenAI then announced its intention to commercially license its technologies.

In 2020, OpenAI announced GPT-3, a language model trained on trillions of words from the Internet. It also announced that an associated API, named simply “the API”, would form the heart of its first commercial product. GPT-3 is aimed at natural language answering of questions, but it can also translate between languages and coherently generate improvised text.

In 2021, OpenAI introduced DALL-E, a deep learning model that can generate digital images from natural language descriptions.

Around December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting a US$200 million revenue for 2023 and US$1 billion revenue for 2024. As of January 2023, it was in talks for funding that would value the company at $29 billion.

Link to the rest at Wikipedia and thanks to F. for the tip.

PG notes that the OP contains lots of links and is likely to be a good starting point for anyone who wishes to dive into the state of AI, at least in the US.

Meet the artificially intelligent chatbot trying to curtail loneliness in America

From The Hill:

The concept of an artificially intelligent companion has been around for decades longer than the AI technology has existed in a readily accessible form. From droids like C-3PO and R2-D2 of the Star Wars universe, to Joaquin Phoenix’s virtual assistant, Samantha, from Her, there is no shortage of pop culture examples of the fabled robot helpers.

But over the past few years, AI technology has exponentially improved and made its way from the big screen to the smartphone screen. In late 2015, Elon Musk partnered with Sam Altman to create a company called OpenAI, a software business with the mission of creating an artificial general intelligence that benefits all of humanity.

One of the early projects at OpenAI was a natural language processing system called GPT (Generative Pre-trained Transformer). GPT is, in essence, a chatbot that uses deep learning to produce human-like text responses to users of the platform. Many online users saw the GPT chatbot as an outlet to have a bit of fun testing the limits of the human-like texting algorithm, but some innovators viewed the free software as a marketable source of untapped potential.

One of those early innovators was Replika founder Eugenia Kuyda. Replika is a free-to-download app that allows users to send and receive messages to an artificially intelligent companion built on the GPT-3 platform. On the website, Replika states that each companion is eager to learn about the world through the eyes of the user, and will always be ready to chat when the user is looking for an empathetic friend.

The idea for Replika was born from grief, when Kuyda’s best friend, Roman, was tragically killed in a hit-and-run incident in 2015. Torn so suddenly from a loved one, Kuyda was looking for a way to somehow remain close to the memory of Roman. The timing of the car accident and the release of the open-source GPT-1 software gave Kuyda a unique outlet to grieve.

“I took all of the text messages that were sent over a year to each other and plugged it into this conversational AI model,” says Kuyda. “And this way I had a chatbot that I could talk to, and it could talk to me like my best friend.”

Kuyda was able to aggregate tens of thousands of messages that she and Roman had exchanged to train the GPT software to speak like her late best friend. She eventually released the GPT model emulating Roman to a larger group of people and found that many discovered the tool to be highly engaging and life-like.

Kuyda then began working on a chatbot that would eventually become the Replika app with more than two million active users.

When opening Replika for the first time, users are prompted to design their chatbot avatar and select some interests that they’d like to talk about. From there it’s up to the user to guide the conversation. The Replika software is designed to catalogue user inputs in its memory to help develop responses that become more contextually relevant the deeper the user goes into the conversation.
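
A bare-bones sketch of that cataloguing behavior (ours, not Replika’s actual code) is simply to carry the accumulated conversation into every new prompt:

```python
# Each turn is appended to a running history so the next reply can
# draw on everything said so far. `generate` is a hypothetical
# stand-in for a GPT-style completion call.
history = []

def chat(user_message, generate):
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nBot:"
    reply = generate(prompt)
    history.append(f"Bot: {reply}")
    return reply
```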

Kuyda sees the app as a tool to help people build their social skills and learn how to interact with people.

“For a lot of people, the problem with interacting with other humans is the fear of opening up, the fear of being vulnerable, the fear of starting the contact, starting the conversation, and they’re basically rehearsing that with Replika,” says Kuyda. “So think of it as a gym for your relationship so you can practice in a safe environment.”

Link to the rest at The Hill

DeepL, the AI-based language translator, raises over $100M at a $1B+ valuation

From TechCrunch:

Artificial intelligence startups, and (thanks to GPT and OpenAI) specifically those helping humans communicate with each other, are commanding a lot of interest from investors, and today the latest of these is announcing a big round of funding. DeepL, a startup that provides instant translation-as-a-service both to businesses and to individuals — competing with Google, Bing and other online tools — has confirmed a fundraise at a €1 billion valuation (just over $1 billion at today’s rates).

Cologne, Germany-based DeepL is not disclosing the full amount that it’s raised — it doesn’t want to focus on this aspect, CEO and founder Jaroslaw Kutylowski said in an interview — but as we were working on this story we heard a range of figures. At one end, an investor that was pitched on the funding told TechCrunch that DeepL was aiming to raise $125 million. At the other end, a report with a rumor about the funding from back in November said the amount was around $100 million. The funding closed earlier this month.

The startup is also not confirming or disclosing other financials, but the investor source said that the $1 billion valuation was based on a 20x multiple of DeepL’s annual run rate, which was at $50 million at the end of last year. In the current fundraising climate, this is a pretty bullish multiple, but it speaks to the company’s growth, which the investor noted is currently at 100%, and the fact that DeepL’s breaking even and close to being profitable.

What is more definitive is the list of investors: DeepL said that new backer IVP was leading the round, with Bessemer Venture Partners, Atomico and WiL also participating. Previous backers in the company also include Benchmark and btov.

DeepL primarily provides translation as a service to businesses rather than individuals, and its forte up to now has been working with small and medium-sized organizations.

Link to the rest at TechCrunch

OpenAI begins piloting ChatGPT Professional, a premium version of its viral chatbot

From TechCrunch:

OpenAI this week signaled it’ll soon begin charging for ChatGPT, its viral AI-powered chatbot that can write essays, emails, poems and even computer code. In an announcement on the company’s official Discord server, OpenAI said that it’s “starting to think about how to monetize ChatGPT” as one of the ways to “ensure [the tool’s] long-term viability.”

The monetized version of ChatGPT will be called ChatGPT Professional, apparently. That’s according to a waitlist link OpenAI posted in the Discord server, which asks a range of questions about payment preferences including “At what price (per month) would you consider ChatGPT to be so expensive that you would not consider buying it?”

The waitlist also outlines ChatGPT Professional’s benefits, which include no “blackout” (i.e. unavailability) windows, no throttling and an unlimited number of messages with ChatGPT — “at least 2x the regular daily limit.” OpenAI says that those who fill out the waitlist form may be selected to pilot ChatGPT Professional, but that the program is in the experimental stages and won’t be made widely available “at this time.”

. . . .

Despite controversy and several bans, ChatGPT has proven to be a publicity win for OpenAI, attracting major media attention and spawning countless memes on social media. Some investors are implementing ChatGPT in their workflows. Ryan Reynolds enlisted ChatGPT to write an ad for Mint Mobile, the mobile carrier he part-owns. And Microsoft will reportedly incorporate the AI behind ChatGPT into its Office suite and Bing.

ChatGPT had over a million users as of early December — an enviable user base by any measure. But it’s a pricey service to run. According to OpenAI co-founder and CEO Sam Altman, ChatGPT’s operating expenses are “eye-watering,” amounting to a few cents per chat in total compute costs.

OpenAI is under pressure to turn a profit on products like ChatGPT ahead of a rumored $10 billion investment from Microsoft. OpenAI expects to make $200 million in 2023, a pittance compared to the more than $1 billion that’s been invested in the startup so far.

Semafor reported this week that Microsoft is looking to net a 49% stake in OpenAI, valuing the company at around $29 billion.

Link to the rest at TechCrunch

In a former life, PG worked in the tech sector without his attorney hat on (although it was always in his briefcase). In the OP, he senses a mad scramble going on inside OpenAI to get profitable in preparation for an “everybody gets rich” public offering of its stock.

If OpenAI pulls this off, it will give a big boost to AI businesses in general by demonstrating that people will pay money for an AI-based service.

PG isn’t turning TPV into an AI blog, but does predict a giant impact on the writing biz in general: freelance copywriters, creators of catalogues, newspaper reporters and editors, magazines, some types of indie publishers and authors, porn creators (unfortunately), etc.

Will AI Make Creative Workers Redundant?

From The Wall Street Journal:

ChatGPT has some wondering if artificial intelligence will make human creativity obsolete. Released in November by OpenAI, the chatbot can quickly write readable prose in response to natural-language prompts better than most people can. When one of my colleagues asked ChatGPT for a 250-word summary of Umberto Eco’s philosophy of translation, it produced a text that would put many educated adults to shame—and it did so within seconds. Reactions to this new AI have ranged from panic to wonder. It is potentially competition for anyone who writes for a living, including journalists and lawyers. Even visual artists are worried, given the dozen or so AI art generators that can already create virtually any image.

To me, the hubbub feels like déjà vu. As an academic translator, I witnessed a similar debate emerge surrounding the introduction in 2017 of DeepL, a ground-breaking form of neural machine translation. At the time, most people took one of two views: either the new technology would ultimately replace human translators or it would be insufficient and barely affect the field. It ended up being something in the middle.

Five years after the introduction of DeepL, most human translators no longer actually translate, but neither have they been entirely replaced by machines. Instead, they use the technology to make translations easier and faster. The software generates a base translation, then the human translator “post-edits,” fixing errors and making the text sound natural. But the feedback the translator provides also becomes part of the recursive loop in the AI’s continual self-improvement. The technology is poised to take over the translation process completely.

I could see image- and text-generating AIs having a similar effect. Just as translators now post-edit instead of translate, it seems likely that many creative workers will “post-create” instead of create. A machine will come up with an initial sketch of an idea, and then the artist or writer will tinker with it. Some may have too much pride to rely on a machine, but it will be hard to resist the advantage the technology offers. For translators and artists alike, AI reduces the cognitive load of creating. Imagine no longer straining to come up with a first draft. Work would flow much more easily.

AI creativity and human creativity already seem to be converging in music. Though artists have sampled tracks for decades, they’re now repurposing older tunes with machine-like regularity. Some of the biggest hits of 2022 were based on melodic lines from the 1980s. For music fans, the question may eventually be whether human beings or AI is better at such recombination. On a recent podcast, Smashing Pumpkins founder Billy Corgan noted his pessimism: “AI systems will completely dominate music. The idea of an intuitive artist beating an AI system is going to be very, very difficult.”

Choosing to use AI raises some uncomfortable questions. Are translators really translators anymore? If an artist takes a first sketch from a computer, is he still genuinely an artist? The casting about for initial words or brush strokes, often the most difficult part of drafting, seems to be the heart of human creativity. If that is given over to AI, the process seems more like an assembly-line production with human writers or artists serving as mere inspectors—checking the end product and then giving a stamp of approval.

Link to the rest at The Wall Street Journal

A new Chatbot is a ‘code red’ for Google’s search business

From The Seattle Times:

Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.

Last month, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.

Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future.

ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.

Google has spent several years working on chatbots and, like other big tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI’s chatbot was developed by researchers at Google.

Called LaMDA, or Language Model for Dialogue Applications, Google’s chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. This was not true, but the technology showed how much chatbot technology had improved in recent months.

Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company’s revenue last year.

“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color. They can generate toxic language, including hate speech.

All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.

Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.

“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a startup that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI’s DALL-E technology, which has been used by more than 3 million people.

Link to the rest at The Seattle Times and thanks to R. and others for the tip.

Our Current Thinking on the Use of AI-Generated Image Software and AI Art

From Kickstarter:

I want to share some of our thoughts on Artificial Intelligence (AI) generated images and AI art as it develops, because many creators on Kickstarter are understandably concerned about its impact on the creative community.

At Kickstarter, we often have projects that are innovative and push the boundaries of what’s possible. And that means we’re sometimes navigating some really tricky and undefined areas.

Over the last several days, we’ve engaged our Community Advisory Council and we’ve read your feedback to us via our team and social media. And one thing is clear: Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

As we look at what’s happening in the creative ecosystem and on our platform, here are some of the things we’re considering when it comes to what place AI image generation software and AI-generated art should have on Kickstarter, if any:

  • Is a project copying or mimicking an artist’s work? We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.
  • Does a project exploit a particular community or put anyone at risk of harm? We have to consider the intention behind projects, sometimes beyond their purpose as stated on our platform. Our rules prohibit projects that promote discrimination, bigotry, or intolerance towards marginalized groups, and we often make decisions to protect the health and integrity of Kickstarter.

Link to the rest at Kickstarter

For visitors to TPV who are not familiar with Kickstarter, here’s a brief overview from the Kickstarter website:

What is Kickstarter?

Kickstarter is a funding platform for creative projects. Everything from film, games, and music to art, design, and technology. Kickstarter is full of ambitious, innovative, and imaginative projects that are brought to life through the direct support of others.

How does it work?

Every project creator sets their project’s funding goal and deadline. If people like the project, they can pledge money to make it happen. If the project succeeds in reaching its funding goal, all backers’ credit cards are charged when time expires. Funding on Kickstarter is all-or-nothing. If the project falls short of its funding goal, no one is charged.

If a project is successfully funded, Kickstarter applies a 5% fee to the funds collected.

Here’s a link to a successful Kickstarter campaign page for a children’s adventure book.

Back to the Kickstarter comment about AI Images and Art.

Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

First of all, the creation of AI software and systems is quite a startling and original creative process in and of itself. And there are actual humans using their wetware who have created all the AI art programs with which PG is familiar. (He has no doubt that at some future time, someone will create an AI mother ship generator program devoted to creating lots of offspring AI programs with little or no additional human input for a variety of purposes.)

Under the current AI programming practices of which PG is aware, part of the development process is to feed a large volume of copies of creative work into the program to seed it. If it’s an AI art generation program, copies of a whole bunch of images are used as seed material. An AI text generation program ingests a whole lot of human-generated text as seed material.
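
To make the “seeding” step concrete, here is a deliberately oversimplified sketch in Python. Real systems learn billions of neural-network weights rather than a word table, but the principle PG describes is the same: the program distills patterns from the corpus rather than storing copies. The corpus directory name is, of course, hypothetical.

# A deliberately simplified illustration of "seeding": scanning a corpus
# of human-written text and reducing it to statistics a generator can
# sample from later.
from collections import Counter, defaultdict
from pathlib import Path

def seed_from_corpus(corpus_dir: str):
    # Record how often each word follows each other word across the
    # whole corpus: patterns, not copies, of the source texts.
    next_word = defaultdict(Counter)
    for path in Path(corpus_dir).glob("*.txt"):
        words = path.read_text(encoding="utf-8").split()
        for a, b in zip(words, words[1:]):
            next_word[a][b] += 1
    return next_word

# After ingesting thousands of books, the table holds only aggregate
# statistics; no individual novel survives intact inside it.
stats = seed_from_corpus("seed_texts/")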

For those who are thinking about copyright infringement on a massive scale, PG doesn’t think that any AI program will create a copy of any of the original works used to seed it. In this sense, it resembles a human author or painter who studies a great many literary or artistic works during the process of learning how to paint or write fiction or non-fiction.

One of the more common accounts of the first step of the creation of a new and original creative work goes something like, “I was wandering through the Metropolitan Museum of Art one afternoon and stopped in front of the Picasso portrait of Gertrude Stein . . . .” or, “While reading The Grapes of Wrath, I wondered what a 21st Century version of that book would look like.”

For a further example, F. Scott Fitzgerald said he was inspired to write The Great Gatsby by the grand parties he attended on prosperous Long Island, where he got a front-row view of the elite, moneyed class of the 1920s, a culture he longed to join but never could. Some of those familiar with that part of Fitzgerald’s life claim to know exactly which people served as close models for some of the characters in his book.

So, back to the Kickstarter AI statement of concern that, “We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.”

What if the creators of an AI writing program included the text of Fairy Tale by Stephen King along with the text of 20,000 other novels into the seed for a novel-writing AI?

Based on PG’s brief experience with and understanding of how current AI writing programs work, he suspects that, regardless of what writing prompt anyone provided, the novel-writing AI program would not produce the text of Fairy Tale or anything close enough to support a copyright suit by King or his publishers.

As for Kickstarter’s mention of attributing the authors of the seed material for any given AI text generation program, the result would be a very, very long list that no one would read. As for Kickstarter’s concern about requiring the consent of each author whose work went into the AI’s maw during its creation, PG doesn’t think consent is necessary: the AI isn’t reproducing the authors’ work, the use of the text would be fair use under current copyright law, and there would be no damages for copyright infringement because the AI program doesn’t replicate the original material.

Can AI Write Authentic Poetry?

From The MIT Press Reader:

“Time — a few centuries here or there — means very little in the world of poems.” There is something reassuring about Mary Oliver’s words. Especially in an era of rapid change, there is comfort to be had in those things that move slowly. But oceans rise and mountains fall; nothing stays the same. Not even the way poetry is made.

The disappearance of the author in 20th-century literary criticism can perhaps be traced back to the surrealist movement and its game of “exquisite corpse.” The surrealists believed that a poem can emerge not only from the unconscious mind of an individual, but from the collective mind of many individuals working in concert — even, or perhaps especially, if each individual has minimal knowledge of what the others are doing. Soon the idea of making art from recycled objects emerged. In the realm of literature, this approach took the form of found poetry.

To create a found poem, one or more people collect bits of text encountered anywhere at all, and with a little editing stitch the pieces together to form a collagelike poem. Examining this generative activity, it may be difficult to identify who if anyone is the “poet” who writes the found poem (or for that matter, to be confident that “writing” is an apt name for the process). Still, even if no one’s consciousness guided the initial creation of the constituent phrases, one or more humans will have exercised their sensitivity and discrimination in selecting the bits to include, and the way these pieces are ordered and linked to form a new whole. The author (or authors) at a minimum must do the work of a careful reader. Can the human be pushed still further into the background, or even out of the picture?

The most radical technological advance of the 20th century might seem to have nothing at all to do with the writing of poetry. If we make a list of the great leaps that led to modern civilization — control of fire, agriculture, the wheel, electricity, and perhaps a few more — the most recent addition is a machine that uses electrons to do computation. The first functioning digital computers were constructed midcentury by Alan Turing and a few others. Over the next not-quite-a-century-yet, computers became enormously faster and more powerful, began to process information in parallel rather than just sequentially, and were linked together into a vast worldwide network known as the internet. Along the way, these devices enabled the creation of artificial versions of a trait previously found only in biological life forms, most notably humans — intelligence.


Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?

So, what is the current state of AI and computer-generated poetry? This is a less central question than might be supposed. Especially in this time of rapid AI advances, the current state of the artificial poetic arts is merely a transitory benchmark. We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity. Computer programs can now learn from enormous sets of data using methods called deep learning. What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance. The question has arisen (semiseriously) whether computer programs ought to be listed as coauthors of scientific papers reporting discoveries to which they contributed. There is no doubt that some forms of creativity are within the reach, and indeed the grasp, of computer programs.

But what about poetry? To evaluate computer-generated poetry, let’s pause to remind ourselves what makes a text work as a poem. A successful poem combines compelling content (what Coleridge called “good sense”) with aesthetically pleasing wordplay (metaphor and other varieties of symbolism), coupled with the various types of sound similarities and constraints of form.

In broad strokes, an automated approach to constructing poems can operate using a generate-then-select method. First, lots of candidate texts are produced, out of which some (a very few, or just one) are then selected as winners worth keeping. Roughly, computer programs can be very prolific in generating, but (to date) have proved less capable at selecting. At the risk of caricature, the computer poet can be likened to the proverbial monkey at the typewriter, pounding out reams of garbage within which the occasional Shakespearean sonnet might be found — with the key difference that the computer operates far more rapidly than any monkey (or human) could. To be fair, the program’s search can be made much less random than the monkey’s typing. Current computer poetry programs usually bring in one or more humans to help in selecting poetic gems embedded in vast quantities of computer-generated ore. An important question, of course, is whether an authentic creator requires some ability to evaluate their own creations. Perhaps, as Oscar Wilde argued, there is a sense in which an artist must act as their own critic — or not be a true artist at all.
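
The generate-then-select method is simple enough to sketch in Python. The vocabulary and the scoring heuristic below are invented for illustration; a real system would use a language model for the generate step and, as the OP notes, usually a human for the select step.

# A toy sketch of generate-then-select: produce many random candidate
# lines, score each with a crude heuristic, keep only the best few.
import random

VOCAB = ("moon", "stone", "river", "sings", "cold", "light",
         "beneath", "the", "falls", "sleeps", "grey", "water")

def generate(n_lines: int, line_len: int = 5):
    # Prolific, unguided generation: the monkey at the typewriter.
    return [" ".join(random.choices(VOCAB, k=line_len)) for _ in range(n_lines)]

def score(line: str) -> float:
    # A crude "poeticness" proxy: reward varied words and concrete imagery.
    words = line.split()
    variety = len(set(words)) / len(words)
    imagery = sum(w in {"moon", "river", "light", "stone"} for w in words)
    return variety + 0.5 * imagery

candidates = generate(10_000)                            # generate a lot...
best = sorted(candidates, key=score, reverse=True)[:3]   # ...select a little
print("\n".join(best))

Even this toy version makes the OP’s point: generation is cheap, and the taste lives entirely in the scoring function, which is the part no one has automated convincingly.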

One use of computers is simply to provide a platform for human generation and selection. The internet makes it easy for large groups of people to collaborate on projects. The kind of collective poetry writing encouraged by the surrealists has evolved into crowdsourcing websites that allow anyone to edit an emerging collective poem. Each contributor gets to play a bit part as author/editor. No doubt some people enjoy participating in the creation of poems by crowdsourcing. It’s less clear whether Sylvia Plath would have associated this activity with “the most ingrown and intense of the creative arts.”

Link to the rest at The MIT Press Reader

PG Postscript – Trigger Warning – Poetry Snob Below

PG harbors substantial doubts that more than 0.1% of all computer engineers know anything about decent poetry. Hint: It’s way more than meter and rhyme.

Because I could not stop for Death, The Waste Land, and The Road Not Taken are much, much more than anything a typical computer engineer (at least in PG’s experience) or an AI is likely to create. And, yes, PG has known and worked with a lot of computer engineers, including some brilliant ones, and appreciates what a talented computer engineer is capable of creating in her/his world.

However, brilliance as a computer engineer does not necessarily transfer to brilliance in understanding poetry and how to create good poetry.

Hemingway in Africa

PG just conducted an experiment with the AI writing tool, ChatGPT.

He ran a Google search for Ernest Hemingway in Africa and found the following:

In the summer of 1933, Hemingway, Pauline, and a Key West friend traveled to Africa for a three-month safari. Inspired by the legendary hunts of Theodore Roosevelt, Hemingway borrowed funds from Pauline’s uncle, and the couple set off for what would become a productive and iconic journey.

The three months spent on safari offered Hemingway ample time to hunt and enjoy the outdoors. Despite succumbing to a severe illness on the trip that necessitated a hospital recovery, Hemingway’s trophies included a lion and other large game that roamed the African grasslands. Traveling through Kenya and Tanzania, Hemingway hunted the plains of the Serengeti, tracked animals through the bush, and recorded his experiences for later use in his work. His first African safari provided Hemingway with material and inspiration for the novel Green Hills of Africa, and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber.

Hemingway returned to the Dark Continent twenty years later with his fourth wife. They traveled through the Belgian Congo, Rwanda, and Kenya. Once again, Hemingway exercised his skill as a hunter, and brought home many big game trophies. Hemingway’s persona as a fearless adventurer only increased when he suffered through two plane crashes on his second trip to Africa, prompting many news outlets to prematurely run his obituary.

PG then ran this text through ChatGPT. The following is the first result he received.

Ernest Hemingway was an American novelist, short story writer, and journalist. He was known for his adventurous spirit and his love of hunting, and his experiences on safari in Africa played a significant role in his writing. Hemingway’s first African safari, which took place in 1933, inspired him to write several of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway returned to Africa in the 1950s with his fourth wife, and he continued to hunt and write about his experiences there. Despite suffering through two plane crashes on this second trip, Hemingway’s reputation as a fearless adventurer only grew.

Then, PG regenerated exactly the same descriptive text through ChatGPT a second time and received a longer result:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a significant event in his life and provided him with material for some of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway was an avid hunter, and his time spent on safari in Kenya and Tanzania allowed him to pursue this passion while also experiencing the beauty and culture of the continent. Despite falling ill during the trip, Hemingway was able to bring home many big game trophies, including a lion.

Hemingway returned to Africa twenty years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, Hemingway once again exercised his hunting skills and brought back a number of big game trophies. He also experienced two plane crashes, which only served to enhance his reputation as a fearless adventurer. Despite these close calls, Hemingway continued to travel and write about his adventures, leaving a lasting legacy as a pioneering figure in literature and journalism.

Then, PG regenerated the original text a third time and received the following:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a three-month safari that took him through Kenya and Tanzania, where he hunted on the Serengeti and tracked animals in the bush. The trip inspired some of his most famous works, including the novel Green Hills of Africa and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber. Hemingway returned to Africa 20 years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, he experienced two plane crashes, which only added to his reputation as a fearless adventurer. Despite these incidents, Hemingway survived and continued to write and hunt throughout his life.
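
Why did the identical input produce three different summaries? Because these models sample each next word from a probability distribution, and a sampling “temperature” above zero makes every run a fresh roll of the dice. Below is a minimal sketch of the mechanism, written against the openai Python client of the period; the model name, prompt framing, and parameter values are assumptions for illustration only.

# A minimal sketch of why regenerating yields different text each time:
# with temperature above zero, next-word sampling is non-deterministic,
# so each run can take a different path through the distribution.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = "Summarize the following passage:\n\n<the Hemingway text above>"

for attempt in range(3):
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-era completion model
        prompt=PROMPT,
        max_tokens=200,
        temperature=0.7,  # above zero: each regeneration can diverge
    )
    print(f"--- Regeneration {attempt + 1} ---")
    print(response.choices[0].text.strip())

# At temperature=0 the sampling is (nearly) deterministic, and the three
# regenerations would converge on essentially the same summary.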