Getty Images Launches Commercially Safe Generative AI Offering

From Getty Images:

Trained on Getty Images’ world‑class creative content, Generative AI by Getty Images allows customers to explore the power of generative AI with full protection and usage rights

New York – September 25, 2023: Getty Images (NYSE: GETY), a preeminent global visual content creator and marketplace, today announced the launch of Generative AI by Getty Images, a new tool that pairs the company’s best‑in‑class creative content with the latest AI technology for a commercially safe generative AI tool.

Generative AI by Getty Images is trained on the state‑of‑the‑art Edify model architecture, which is part of NVIDIA Picasso, a foundry for generative AI models for visual design. The tool is trained solely from Getty Images’ vast creative library, including exclusive premium content, with full indemnification for commercial use. Sitting alongside the company’s broader, industry‑leading services, Generative AI by Getty Images works seamlessly with the company’s expansive library of authentic and compelling visuals and Custom Content solutions, allowing customers to elevate their entire end‑to‑end creative process to find the right visual content for any need.

. . . .

Customers creating and downloading visuals through the tool will receive Getty Images’ standard royalty‑free license, which includes representations and warranties, uncapped indemnification, and the right to perpetual, worldwide, nonexclusive use in all media. Content generated through the tool will not be added into existing Getty Images and iStock content libraries for others to license. Further, contributors will be compensated for any inclusion of their content in the training set.

“We’ve listened to customers about the swift growth of generative AI – and have heard both excitement and hesitation – and tried to be intentional around how we developed our own tool,” said Grant Farhall, Chief Product Officer at Getty Images. “We’ve created a service that allows brands and marketers to safely embrace AI and stretch their creative possibilities, while compensating creators for inclusion of their visuals in the underlying training sets.”

. . . .

Customers will soon be able to customize Generative AI by Getty Images with proprietary data to produce images with their unique brand style and language. 

Link to the rest at Getty Images

PG has previously suggested that the owners of images used to train an AI are likely not going to be able to pursue a claim for copyright infringement effectively.

That said, large business organizations and their legal departments will be able to use Getty’s AI system without concerns about claims of copyright infringement by the owners of the images used to provide grist for Getty’s AI mill.

Amazon restricts authors from self-publishing more than three books a day after AI concerns

From The Guardian:

Amazon has created a new rule limiting the number of books that authors can self-publish on its site to three a day, after an influx of suspected AI-generated material was listed for sale in recent months.

The company announced the new limitations in a post on its Kindle Direct Publishing (KDP) forum on Monday. “While we have not seen a spike in our publishing numbers, in order to help protect against abuse, we are lowering the volume limits we have in place on new title creations,” read the statement. KDP allows authors to self-publish their books and list them for sale on Amazon’s site.

Amazon told the Guardian that the limit is set at three titles, though this number may be adjusted “if needed”. The company confirmed that there had already been a limit to the number of books authors could list a day, but declined to say what this previous limit was.

The post stated that Amazon is “actively monitoring the rapid evolution of generative AI and the impact it is having on reading, writing, and publishing” and that “very few” publishers will be affected by the change. Authors and publishers will also have the option to seek an exception to the rule.

The rule change will “probably not” be a “gamechanger for managing the influx of AI-written content on Amazon’s platform,” said Dr Miriam Johnson, senior lecturer in publishing at Oxford Brookes University. “It will dent the numbers a bit, but for those who are making money by flooding the market with AI-generated books and publishing more than three a day, they will find a work-around.”

The three-book limit announcement comes a week after Amazon introduced the requirement for authors to inform the company when their content is AI-generated and added a new section to their guidelines featuring definitions of “AI-generated” and “AI-assisted” content. Johnson said that though the disclosure requirement is a “nice idea”, she questions how Amazon would check whether authors are disclosing AI-generated content.

Link to the rest at The Guardian

Generative AI vs. Copyright

From Publishers Weekly:

The balance between copyright and free speech is being challenged by generative AI (GAI), a powerful and enigmatic tool that mimics human responses to prompts entered into an internet search box. The purpose of copyright law, according to the U.S. Constitution, is “to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” The problem is that GAI’s ability to incentivize progress and innovation threatens the entertainment industry’s dependence on copyright to protect creative works.

Copyright law strikes a balance between those who create content and the public’s interest in having wide access to that content. It does this by granting authors a limited monopoly over the dissemination of original works: the exclusive right to reproduce, distribute, and create derivative works based on copyrighted material. However, the concept of exclusive rights doesn’t really apply to artificially intelligent robots and computers scraping ideas and facts from public websites.

Because copyright does not protect ideas, facts, procedures, concepts, principles, or discoveries described or embodied in works, copying alone doesn’t constitute copyright infringement. To prove copyright infringement, one must prove that the defendant had access to the copyrighted work and that the defendant’s work is substantially similar to protected aspects of the first work.

For AI output to infringe upon a book, it must have taken a substantial amount of copyrightable expression from the author’s work. When it comes to text, GAI is an artful plagiarist. It knows how to dance around copyright. The predictive model emulates, it doesn’t copy. Insofar as text generated in response to a prompt is not substantially similar—a legal term of art—to the data it is scraping, it is not an infringement. In other words, don’t overestimate the value of litigation.

The fair-use doctrine is another limitation on the exclusive rights of authors. Its purpose is to avoid the rigid application of copyright law in ways that might otherwise stifle the growth of art and science. Fair use is highly fact specific. Which is another way of saying it’s a murky and contentious area of the law.

Several cases decided before the advent of GAI suggest fair use encompasses the ingestion and processing of books by GAI. For example, in 2015, in Authors Guild v. Google, the court ruled that Google’s digitizing of books without consent to create a full-text searchable database that displayed snippets from those titles was a transformative use that served a different purpose and expression than the original books.

Fair use favors transformative uses. However, over time, the concept evolved from using a protected work as a springboard for new insights or critiquing the original to taking someone else’s photographs or other images and including them in a painting and declaring it a fair use.

In 2023, in Andy Warhol Foundation for the Visual Arts v. Goldsmith, the U.S. Supreme Court held that the claim to fairness is severely undermined “where an original work and copying use share the same or highly similar purposes, or where wide dissemination of a secondary work would otherwise run the risk of substitution for the original or licensed derivatives of it.” AI-generated works can devalue human-created content, but is that the kind of economic harm contemplated in the Supreme Court’s decision?

To sum up, on a case-by-case basis, courts must determine if substantial similarity exists and then engage in line drawing—balancing free expression and the rights of creators.

. . . .

In an age of disinformation, an author’s brand, a publisher’s imprint, and the goodwill associated with them are valuable assets. I believe the industry is less vulnerable than many think. But, to quote Nick Lowe, “Where it’s goin’ no one knows.”

Link to the rest at Publishers Weekly

PG notes that the author of the OP is an attorney, so he will cut and paste his disclaimer from the post he just published so that no one who reads only this TPV post will be misled.

PG notes that nothing you read on TPV constitutes legal advice. If you want legal advice, you need to hire a lawyer, not read a blog post.

PG will also note that the OP includes some other suggestions by the author, who is an attorney, which you may want to consider, but hire your own lawyer because, just like PG, the author of the OP is not your attorney and isn’t giving legal advice by writing an article for Publishers Weekly.

Using Generative AI? Consider These 7 Tips From a Legal Expert

From Learn2G2:

As G2’s General Counsel, it’s my job to help build and protect the company, so it’s likely no surprise that generative AI is top of mind for me (and lawyers everywhere!).

While AI presents an opportunity for organizations, it also poses risks. And these risks raise concerns for all business leaders, not only legal departments.

With so much information out there, I recognize these waters can be difficult to navigate. So, to help get to the crux of these concerns and boil them down into a helpful guide for all business leaders, I recently sat down with some of the top minds in the AI space for a round-table discussion in San Francisco.

There, we discussed the changing landscape of generative AI, the laws affecting it, and what this all means for how our businesses operate.

We came to the agreement that, yes, generative AI tools are revolutionizing the way we live and work. However, we also agreed that there are several legal factors businesses should consider as they embark on their generative AI journeys.

Based on that discussion, here are seven things to consider when integrating AI into your company.

Understand the lay of the land

Your first task is to identify whether you’re working with an artificial intelligence company or a company that uses AI. An AI company creates, develops, and sells AI technologies, with AI as its core business offering. Think OpenAI or DeepMind.

On the other hand, a company that uses AI integrates AI into its operations or products but doesn’t create the AI technology itself. Netflix’s recommendation system is a good example of this. Knowing the difference is pivotal, as it determines the complexity of the legal terrain you need to navigate and which laws apply to you.

G2 lays out the key AI software in this developing field. When you have a bird’s-eye view of the possible tools, you can make better decisions on which is right for your business.

Keep an eye on the latest developments in the law, as generative AI regulations are on the horizon. Legislation is rapidly developing in the US, UK, and Europe. Likewise, litigation involving AI is actively being decided. Keep in touch with your attorneys for the latest developments.

Choose the right partner, keeping terms of use in mind

You can tell a lot about a company by its terms of use. What does a company value? How do they handle the relationship with their users or customers? The terms of use can serve as a litmus test.

OpenAI, for instance, explicitly states in its usage policies that its technology shouldn’t be used for harmful, deceptive, or otherwise unethical applications. Bing Chat requires users to comply with laws prohibiting offensive content or behavior. Google Bard, meanwhile, focuses on data security and privacy in its terms – highlighting Google’s commitment to protecting user data. Evaluating these terms is essential to ensuring your business aligns with the AI partner’s principles and legal requirements.

We compared the terms of use and privacy policies of several key generative AI players to help us determine which AI tools would work best for our company’s risk profile and recommend you do the same.

Between your company and the AI company, who owns the input? Who owns the output? Will your company data be used to train the AI model? How does the AI tool process personally identifiable information, and to whom does it send it? How long will the input or output be retained by the AI tool?

Answers to these questions inform the extent to which your company will want to interact with the AI tool.

Navigate the labyrinth of ownership rights

When using generative AI tools, it’s paramount to understand the extent of your ownership rights to the data you put into the AI and the data derived from it.

From a contractual perspective, the answers depend on the agreement you have with the AI company. Always ensure that the terms of use or service agreements detail the ownership rights clearly.

For example, OpenAI takes the position that, as between the user and OpenAI, the user owns all inputs and outputs. Google Bard, Microsoft’s Bing Chat, and Jasper Chat similarly grant full ownership of input and output data to the user but simultaneously reserve for themselves a broad license to use AI-generated content in a multitude of ways.

Anthropic’s Claude grants ownership of input data to the user but only “authorizes users to use the output data.” Anthropic also grants itself a license for AI content, but only “to use all feedback, ideas, or suggested improvements users provide.” The contractual terms you enter into are highly variable across AI companies.

Strike the right balance between copyright and IP

AI’s ability to generate unique outputs creates questions about who has intellectual property (IP) protections over those outputs. Can AI create copyrightable work? If so, who is the holder of the copyright?

The law is not entirely clear on these questions, which is why it’s crucial to have a proactive IP strategy when dealing with AI. Consider whether it is important for your business to enforce IP ownership of the AI output.

Presently, jurisdictions are divided about their views on copyright ownership for AI-generated works. On one hand, the U.S. Copyright Office takes the position that AI-generated works, absent any human involvement, cannot be copyrighted because they are not authored by a human.

Link to the rest at Learn2G2

The article goes on to discuss several other interesting legal and intellectual property points.

PG notes that nothing you read on TPV constitutes legal advice. If you want legal advice, you need to hire a lawyer, not read a blog post.

PG will also note that the OP includes some other suggestions by the author, who is an attorney, which you may want to consider, but hire your own lawyer because, just like PG, the author of the OP is not your attorney and isn’t giving legal advice by writing an online article.

Confessions of a Viral AI Writer

From Wired:

Six or seven years ago, I realized I should learn about artificial intelligence. I’m a journalist, but in my spare time I’d been writing a speculative novel set in a world ruled by a corporate, AI-run government. The problem was, I didn’t really understand what a system like that would look like.

I started pitching articles that would give me an excuse to find out, and in 2017 I was assigned to profile Sam Altman, a cofounder of OpenAI. One day I sat in on a meeting in which an entrepreneur asked him when AI would start replacing human workers. Altman equivocated at first, then brought up what happened to horses when cars were invented. “For a while,” he said, “horses found slightly different jobs, and today there are no more jobs for horses.”

The difference between horses and humans, of course, is that humans are human. Three years later, when OpenAI was testing a text generator called GPT-3, I asked Altman whether I could try it out. I’d been a writer my whole adult life, and in my experience, writing felt mostly like waiting to find the right word. Then I’d discover it, only to get stumped again on the next one. This process could last months or longer; my novel had been evading me for more than a decade. A word-generating machine felt like a revelation. But it also felt like a threat—given the uselessness of horses and all that.

OpenAI agreed to let me try out GPT-3, and I started with fiction. I typed a bit, tapped a button, and GPT-3 generated the next few lines. I wrote more, and when I got stuck, tapped again. The result was a story about a mom and her son hanging out at a playground after the death of the son’s playmate. To my surprise, the story was good, with a haunting AI-produced climax that I never would have imagined. But when I sent it to editors, explaining the role of AI in its construction, they rejected it, alluding to the weirdness of publishing a piece written partly by a machine. Their hesitation made me hesitate too.

I kept playing with GPT-3. I was starting to feel, though, that if I did publish an AI-assisted piece of writing, it would have to be, explicitly or implicitly, about what it means for AI to write. It would have to draw attention to the emotional thread that AI companies might pull on when they start selling us these technologies. This thread, it seemed to me, had to do with what people were and weren’t capable of articulating on their own.

There was one big event in my life for which I could never find words. My older sister had died of cancer when we were both in college. Twenty years had passed since then, and I had been more or less speechless about it since. One night, with anxiety and anticipation, I went to GPT-3 with this sentence: “My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year.”

GPT-3 picked up where my sentence left off, and out tumbled an essay in which my sister ended up cured. Its last line gutted me: “She’s doing great now.” I realized I needed to explain to the AI that my sister had died, and so I tried again, adding the fact of her death, the fact of my grief. This time, GPT-3 acknowledged the loss. Then, it turned me into a runner raising funds for a cancer organization and went off on a tangent about my athletic life.

I tried again and again. Each time, I deleted the AI’s text and added to what I’d written before, asking GPT-3 to pick up the thread later in the story. At first it kept failing. And then, on the fourth or fifth attempt, something shifted. The AI began describing grief in language that felt truer—and with each subsequent attempt, it got closer to describing what I’d gone through myself.

When the essay, called “Ghosts,” came out in The Believer in the summer of 2021, it quickly went viral. I started hearing from others who had lost loved ones and felt that the piece captured grief better than anything they’d ever read. I waited for the backlash, expecting people to criticize the publication of an AI-assisted piece of writing. It never came. Instead the essay was adapted for This American Life and anthologized in Best American Essays. It was better received, by far, than anything else I’d ever written.

Link to the rest at Wired

ChatGPT mania may be cooling, but a serious new industry is taking shape

From The Economist:

The first wave of excitement about generative artificial intelligence (AI) was like nothing else the world had seen. Within two months of its launch in November 2022, ChatGPT had racked up 100m users. Internet searches for “artificial intelligence” surged; more than $40bn in venture capital flowed into AI firms in the first half of this year alone.

The craze for consumer experimentation has since cooled a little: ChatGPT use has fallen and fewer people are Googling “AI”. Son Masayoshi, a Japanese investor notorious for diving into already frothy markets, is thought to be interested in investing in OpenAI, ChatGPT’s creator. But a second, more serious phase is beginning. An entirely new industry centred on supercharged AI models is taking shape. Three forces will determine what it eventually looks like—and whether OpenAI stays dominant, or other players prevail.

The first factor is computing power, the cost of which is forcing model-builders to become more efficient. Faced with the eye-watering costs of training and running more powerful models, for instance, OpenAI is not yet training its next big model, GPT-5, but GPT-4.5 instead, a more efficient version of its current leading product. That could give deep-pocketed rivals such as Google a chance to catch up. Gemini, the tech giant’s soon-to-be-released cutting-edge model, is thought to be more powerful than OpenAI’s current version.

High computing costs have also encouraged the proliferation of much smaller models, which are trained on specific data to do specific things. Replit, a startup, has trained a model on computer code to help developers write programs, for instance. Open-source models are also making it easier for people and companies to plunge into the world of generative AI. According to a count maintained by Hugging Face, an AI firm, roughly 1,500 versions of such fine-tuned models exist.

All these models are now scrambling for data—the second force shaping the generative-AI industry. The biggest, such as OpenAI’s and Google’s, are gluttonous: they are trained on more than 1trn words, the equivalent of over 250 English-language Wikipedias. As they grow bigger they will get hungrier. But the internet is close to being exhausted. Many model-makers are therefore signing deals with news and photography agencies. Others are racing to create “synthetic” training data using algorithms; still others are trying to work with new forms of data, such as video. The prize is a model that beats the rivals.

Generative AI’s hunger for data and power makes a third ingredient more important still: money. Many model-makers are already turning away from ChatGPT-style bots for the general public, and looking instead to fee-paying businesses. OpenAI, which started life in 2015 as a non-profit venture, has been especially energetic in this regard. It has not just licensed its models to Microsoft, but is setting up bespoke tools for companies including Morgan Stanley and Salesforce. Abu Dhabi plans to establish a company to help commercialise applications of Falcon, its open-source AI model.

Link to the rest at The Economist
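As a quick back-of-the-envelope check of the “250 English-language Wikipedias” comparison above, assuming English Wikipedia runs to roughly four billion words (a commonly cited figure):

$$
\frac{10^{12}\ \text{words}}{4 \times 10^{9}\ \text{words per English Wikipedia}} \approx 250\ \text{English Wikipedias}
$$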

Write me a murder mystery in the style of Dorothy Sayers

From chat.openai.com:

Query: Write me a murder mystery in the style of Dorothy Sayers

Response:

Chapter 1: An Unexpected Visitor

The rain was relentless, beating a rhythmic tattoo on the windows of Wimsey Manor. Lord Peter Wimsey reclined in his leather armchair, a tumbler of fine Scotch in hand. The fireplace crackled merrily, casting dancing shadows across the bookshelves filled with rare tomes and obscure curiosities. The peace of the evening was shattered by an urgent knock on the door.

Wimsey’s butler, Bunter, ushered in a drenched young woman with a look of distress etched upon her face. Her name was Evelyn Middleton, a distant relative of Wimsey’s and a recent acquaintance. She had a reputation for being a woman of mystery, and it seemed she had brought a new enigma to his doorstep.

“Lord Wimsey,” she began, her voice quivering with anxiety, “I desperately need your help. I believe I am in grave danger.”

Wimsey, ever the gentleman, poured her a glass of sherry and encouraged her to compose herself. As she sipped the amber liquid, she recounted her tale.

Chapter 2: A Lethal Gift

Evelyn explained that earlier that day, she had received an anonymous package containing a string of exquisite pearls. They were of such beauty that even a cursory glance revealed their extraordinary value. Attached to the pearls was a note that read, “For the fairest lady in England.”

Despite their allure, Evelyn had an uneasy feeling about the gift. She had sensed danger lurking beneath their lustrous surface. To confirm her suspicions, she had taken one of the pearls to be examined by a jeweler. The result was chilling—a deadly poison concealed within the pearl’s core.

“I fear that I am the target of an insidious plot,” Evelyn confessed. “But I know not why or by whom.”

Chapter 3: The Web of Suspects

Lord Wimsey listened attentively, his mind racing. He knew that this was no ordinary case. The presence of a poison-laden pearl hinted at a cunning adversary with a twisted sense of vengeance. It was a puzzle that demanded his immediate attention.

Wimsey proposed they invite a select group of guests to Wimsey Manor for the weekend, under the guise of a social gathering. Among them would be Evelyn’s acquaintances, as well as those who might have a motive to harm her. He would use the occasion to observe, question, and deduce.

As the weekend unfolded, Lord Wimsey found himself surrounded by an eclectic group of characters—a retired general, a reclusive artist, a disgruntled business partner, and even a charming foreign diplomat. Each had their own secrets, and each could be a potential suspect.

Chapter 4: Unmasking the Deceiver

Over the course of the weekend, Wimsey meticulously collected evidence and observed the interactions of the guests. Tensions ran high, and suspicions festered. Evelyn, although clearly anxious, remained resolute.

On the final evening, as the guests gathered in the drawing room, Lord Wimsey revealed the true purpose of the weekend gathering—to unmask the person behind the poison-laden pearls. With an eloquence that would have made Dorothy Sayers herself proud, he dissected each guest’s potential motive and alibi, slowly eliminating them one by one.

Then, in a dramatic revelation, he produced a hidden compartment from a curious sculpture in the room—an identical strand of pearls with a hidden poison pearl. It was a masterstroke of misdirection.

The room erupted in shock and confusion. Amidst the chaos, Evelyn’s eyes darted around, and she caught a glimpse of panic in one guest’s face—the foreign diplomat. It was a subtle but telling clue.

Chapter 5: The Culprit Unmasked

With the culprit exposed, Lord Wimsey calmly presented his evidence. The diplomat, it turned out, had a vendetta against Evelyn. He had sent the original poisoned pearls and had been so confident in his cunning that he had brought a backup strand, believing he could frame someone else.

The police were called, and the diplomat was led away in handcuffs, his reputation tarnished forever. The threat to Evelyn was extinguished, and the mystery of the poisoned pearls was solved.

As the rain outside continued to fall, Lord Wimsey watched the diplomat’s departure from the safety of his manor. Another case had been cracked, another villain brought to justice, and another evening of intrigue had come to an end. With a satisfied smile, he raised his glass to the memory of Dorothy Sayers and the enduring legacy of British detective fiction.

Link to more at chat.openai.com

PG notes that the free version of chat.openai.com is constrained in how long its response will be.

PG doesn’t remember Wimsey Manor in any of Dorothy Sayers’ books.

That said, he asked the AI to write a murder mystery in the style of Dorothy Sayers, so the AI was not strictly restricted by what Sayers actually wrote.

PG will contend that the AI made a mistake by referring to the detective as “Lord Wimsey” instead of “Lord Peter,” which PG understands would have been the customary way of referring to or addressing the second son of a Duke.

For some time, PG thought “The Duke of Denver,” Peter’s father (and later his older brother) was a strange title. However, during his recent reading on the subject, PG discovered that the city in Colorado was preceded by a village called Denver in Norfolk. He understands that the village of Denver currently has a population of less than a thousand people.

(The Denver in Colorado and its suburbs currently have a population of nearly three million people.)

(The later in the evening PG blogs, the more he tends to go down strange rabbit holes.)

(Which causes him to remind one and all that Colorado has a great many jackrabbits {which are actually hares and not rabbits, hares generally being larger and longer-legged than the cotton-tail rabbits and the many other true rabbits one finds in England, Colorado and elsewhere.})

What are the most common features of best-selling science fiction novels?

From chat.openai.com:

Query: What are the most common features of best-selling science fiction novels?

Response:

While there is no one-size-fits-all formula for creating a best-selling science fiction novel, there are certain common features and elements that tend to be found in many successful works in the genre. Keep in mind that the popularity of specific themes and features can change over time, and what works in one era may not work in another. However, as of my last knowledge update in September 2021, here are some common features of best-selling science fiction novels:

  1. Speculative Concepts: Science fiction often explores speculative and futuristic concepts, such as advanced technology, space exploration, time travel, and alternate realities. Best-sellers often introduce innovative or thought-provoking ideas in these areas.
  2. Engaging World-Building: Successful science fiction novels create immersive and well-developed worlds or universes that feel both plausible and captivating to readers. This includes detailed descriptions of settings, cultures, and societies.
  3. Complex Characters: Best-selling science fiction novels feature compelling and multi-dimensional characters who undergo personal growth and development. Readers should be able to relate to or sympathize with these characters.
  4. Themes and Social Commentary: Many great science fiction works address social, political, or philosophical issues through allegory or metaphor. They provide commentary on real-world problems, such as inequality, authoritarianism, environmental issues, or the consequences of technological advancements.
  5. Technological Advancements: Science and technology play a central role in many science fiction stories. This can involve futuristic inventions, artificial intelligence, cyborgs, or advanced scientific concepts. The portrayal of technology should be both imaginative and thought-provoking.
  6. Conflict and Tension: A strong plot with well-defined conflicts and tensions is crucial. This could involve external threats (alien invasions, space battles) or internal struggles (ethical dilemmas, personal choices).
  7. Unique Premises: Best-sellers often stand out with unique and original premises or twists on familiar concepts. They take risks and avoid clichés.
  8. Emotional Resonance: Successful science fiction novels evoke emotions and create a connection with readers. They can be thrilling, poignant, awe-inspiring, or intellectually stimulating.
  9. Page-Turning Plot: A gripping and well-paced plot keeps readers engaged and eager to turn the next page. Cliffhangers and unexpected twists can be effective tools.
  10. Diverse Perspectives: Many contemporary best-selling science fiction novels feature diverse casts of characters and viewpoints. They reflect a more inclusive and representative vision of the future.
  11. Well-Researched Science: While science fiction often bends the rules of science for the sake of storytelling, incorporating accurate scientific principles and research can lend credibility to the narrative.
  12. Societal Impact: Some of the most memorable science fiction novels explore how technological or societal changes affect individuals and society as a whole. They delve into the consequences of scientific and technological advancements.
  13. Memorable Imagery: Vivid and imaginative descriptions create lasting images in readers’ minds, contributing to the lasting impact of the story.

It’s important to note that while these features are common in best-selling science fiction novels, successful authors often find ways to blend and subvert these elements to create fresh and innovative stories. The science fiction genre is known for its diversity and adaptability, so aspiring authors should feel free to experiment and push the boundaries of traditional conventions.

Source: https://chat.openai.com/

Proust, ChatGPT and the case of the forgotten quote

From The Guardian:

The other day, I was looking for a quote in Proust, so I thought I would ask ChatGPT. Here’s how it went.

EB: Is there a passage in In Search of Lost Time when Proust talks about how love affairs repeat themselves, and says that when you’re writing about a love affair in the past, you’re often drawing inspiration from a different love affair in the present?

CHATGPT: Yes, there is a passage in Marcel Proust’s “In Search of Lost Time” (also known as “Remembrance of Things Past”) where he discusses the repetition of love affairs and the influence of present experiences on writing about past love affairs. The passage can be found in the first volume of the novel, “Swann’s Way” (Du côté de chez Swann).

In this passage, Proust reflects on how our present emotions and experiences often color our memories of the past. He suggests that when we write about a past love affair, we might unknowingly be influenced by the emotions and experiences of a current or recent love affair. This blurring of past and present emotions can result in the repetition of themes and feelings in our writing.

The idea of how love affairs repeat themselves and how we are influenced by our present experiences when writing about the past is a recurring theme in Proust’s work, as he delves into the complexities of memory, perception, and the nature of human emotions.

Please note that the specific wording and context of the passage may vary depending on the translation and edition of the novel. If you’re interested in exploring this theme further, I recommend reading “Swann’s Way” to experience Proust’s beautiful prose and intricate exploration of love, memory, and the passage of time.

Some things I was thinking at this point:

1. Hmm, I had expected it to be in Volume 2 or 3. Probably the more recent experience of reading Volume 2 or 3 coloured, or even substituted, my memory of reading Volume 1! Memory is amazing! (Didn’t Proust say something about that somewhere?)

2. I wonder why it didn’t give me a direct quote. I hope it isn’t going to be squirrelly or weird about quotes. Maybe it’s stressed out about copyrighted translations. I’ll try asking for the original.

EB: Could you quote the passage for me in French?

GPT: Certainly! Here’s the passage from Marcel Proust’s “À la recherche du temps perdu” (“In Search of Lost Time”) in its original French:

“Une seule chose avait pourtant changé : les lieux, les êtres, ne présentaient plus qu’un intérêt général et par conséquent, ne m’offraient plus aucun détail intéressant. Ce que je cherchais en eux, ce que j’avais entrevu en eux une fois pour toutes, c’était un état, un sentiment de mon âme, c’était Swann lui-même. Les personnes que je fréquentais pouvaient me parler, au lieu de Swann, de beaucoup d’autres noms propres, lesquels désignaient des états analogues à ceux que Swann me faisait connaître, des états dans lesquels ma personnalité resterait, pourrait se développer, des états qui n’étaient d’ailleurs pas incompatibles avec ceux de Swann, avec sa présence, son souvenir, au point que parfois j’hésitais à les attribuer à l’un plutôt qu’à l’autre. Dès lors j’avais pu reconnaître en chacun de ces êtres à leur manière un des aspects de Swann. Eux seuls désormais m’intéressaient, mais moins comme des personnes distinctes, que comme les moyens qu’ils m’offraient de sortir de moi-même, d’atteindre Swann, d’assouvir ma curiosité, de m’apaiser. Les désirs qu’ils pouvaient me suggérer d’eux, me paraissaient neutres, comparés à l’unique et ardente désir que j’avais de Swann. Mais, lorsque j’essayais de les mettre en pratique, ils m’étaient impossibles ; ils se heurtaient à l’existence, au souvenir de Swann, ils avaient pris naissance dans un état d’esprit inconciliable avec celui-là. Alors il me semblait que j’avais changé d’idées, que j’avais transformé ma vie; ce qui en était la cause, c’était plutôt la présence en moi d’un état d’esprit transitoire, avec lequel je m’étais représenté ma vie passée, mais qui avait en réalité en lui bien peu de choses du passé, et par contre beaucoup d’éléments empruntés à un état d’esprit différent que je n’avais connu que depuis peu de jours et qui allait, lui, en durer peu.”

(Note: This passage is taken from “Du côté de chez Swann,” the first volume of “À la recherche du temps perdu.”)

Thoughts:

1. Wow, ChatGPT is so friendly and cool, it wasn’t offended at all!

2. My French must have really deteriorated because this feels so hard to read.

3. I guess I forgot how boring Proust can be.

4. Damn this quote is boring.

5. It says “ne m’offraient aucun détail intéressant” [“didn’t offer me a single interesting detail”] … and that’s kind of how I feel about the quote!

6. I guess I was pretty naive and literal-minded to look for words like amour, écrire, and/or nouveaux. Proust probably uses more sophisticated synonyms or metaphors.

7. None of this feels familiar at all.

8. I wish I had read Proust in French when I was studying literature in graduate school. Why was I so lazy in my 20s?

9. I guess I was too busy being in love.

10. God, I was so dumb.

11. Yeah, but it was also structural.

12. Whatever, I’m going to put the quote in Google Translate.

13. Good Lord, or should I say Bon Dieu, this is so boring I might actually die.

14. I had been thinking Google Translate was getting good enough with European languages to replace human translators, but I guess it still isn’t at the level of translating Proust.

Link to the rest at The Guardian

Amazon is encouraging sellers to use AI-generated product listings

From engadget:

Amazon is launching a new AI tool that generates product listings for sellers. The feature uses a large language model (LLM) “trained on large amounts of data” to make it faster and simpler for vendors to describe their products. The company describes the tool as distilling the “significant work” of creating titles, bullet points and descriptions down to “just one step.”

Amazon says its Generative Listing Content tool only requires sellers to provide a brief product description in a few words or sentences. From there, it will “generate high-quality content for their review” — including a title, product description and bullet points — which sellers can peruse before refining or submitting as is. The company says many sellers have already tested the tool during the last few months, and their feedback indicates most of them use the generated content directly without revisions.

“These new capabilities will help sellers create high-quality listings with less effort and present customers with more complete, consistent, and engaging product information that will enhance their shopping experiences,” Amazon VP Mary Beth Westmoreland wrote today in an announcement blog post.

. . . .

“With our new generative AI models, we can infer, improve, and enrich product knowledge at an unprecedented scale and with dramatic improvement in quality, performance, and efficiency,” Robert Tekiela, Amazon VP of selection and catalog systems, wrote today. “Our models learn to infer product information through the diverse sources of information, latent knowledge, and logical reasoning that they learn. For example, they can infer a table is round if specifications list a diameter or infer the collar style of a shirt from its image.”

The new tool joins Amazon’s AI-generated review summaries, launched earlier this summer. That feature uses generative AI to train on a product’s reviews and spit out one-paragraph recaps, including clickable keywords. The company teases that it’s still getting started with incorporating generative AI into its storefront: “This is just the tip of the iceberg on how we plan to use AI to help improve the seller experience and help more sellers succeed.” CEO Andy Jassy said last month that, from now on, generative AI “is going to be at the heart of what we do.”

Link to the rest at engadget
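Amazon has not published how the Generative Listing Content tool works internally, but the workflow the OP describes (a few words from the seller in, structured listing copy out for the seller’s review) amounts to a thin wrapper around an instruction-following LLM. Here is a minimal sketch of that pattern, assuming an OpenAI-style chat API; the function name, prompt wording, and JSON response format are illustrative assumptions, not Amazon’s implementation.

```python
# Hypothetical sketch of a listing generator built on a general-purpose LLM.
# Assumes the pre-1.0 openai Python library and an OPENAI_API_KEY env var;
# Amazon has not disclosed its actual models or prompts.
import json
import openai

def draft_listing(blurb: str) -> dict:
    """Expand a seller's brief product description into listing fields."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write e-commerce product listings. Respond with "
                        "JSON containing: title, bullets (a list), description."},
            {"role": "user", "content": f"Product: {blurb}"},
        ],
        temperature=0.7,
    )
    # A production system would validate the output before showing it to
    # the seller for review, as Amazon says its tool does.
    return json.loads(response.choices[0].message.content)

draft = draft_listing("insulated stainless steel water bottle, 750 ml")
print(draft["title"])
for bullet in draft["bullets"]:
    print(" -", bullet)
```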

OpenAI disputes authors’ claims that every ChatGPT response is a derivative work

From Ars Technica:

This week, OpenAI finally responded to a pair of nearly identical class-action lawsuits from book authors—including Sarah Silverman, Paul Tremblay, Mona Awad, Chris Golden, and Richard Kadrey—who earlier this summer alleged that ChatGPT was illegally trained on pirated copies of their books.

In OpenAI’s motion to dismiss (filed in both lawsuits), the company asked a US district court in California to toss all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.”

The authors’ other claims—alleging vicarious copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), unfair competition, negligence, and unjust enrichment—need to be “trimmed” from the lawsuits “so that these cases do not proceed to discovery and beyond with legally infirm theories of liability,” OpenAI argued.

OpenAI claimed that the authors “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.” Unlike plagiarists who seek to directly profit off distributing copyrighted materials, OpenAI argued that its goal was “to teach its models to derive the rules underlying human language” to do things like help people “save time at work,” “make daily life easier,” or simply entertain themselves by typing prompts into ChatGPT.

The purpose of copyright law, OpenAI argued, is “to promote the Progress of Science and useful Arts” by protecting the way authors express ideas, but “not the underlying idea itself, facts embodied within the author’s articulated message, or other building blocks of creative expression,” which are arguably the elements of authors’ works that would be useful to ChatGPT’s training model. Citing a notable copyright case involving Google Books, OpenAI reminded the court that “while an author may register a copyright in her book, the ‘statistical information’ pertaining to ‘word frequencies, syntactic patterns, and thematic markers’ in that book are beyond the scope of copyright protection.”

“Under the resulting judicial precedent, it is not an infringement to create ‘wholesale cop[ies] of [a work] as a preliminary step’ to develop a new, non-infringing product, even if the new product competes with the original,” OpenAI wrote.

In particular, OpenAI hopes to convince the court that the authors’ vicarious copyright infringement claim—which alleges that every ChatGPT output represents a derivative work, “regardless of whether there are any similarities between the output and the training works”— is an “erroneous legal conclusion.”

The company’s motion to dismiss cited “a simple response to a question (e.g., ‘Yes’),” or responding with “the name of the President of the United States” or with “a paragraph describing the plot, themes, and significance of Homer’s The Iliad” as examples of why every single ChatGPT output cannot seriously be considered a derivative work under authors’ “legally infirm” theory.

“That is not how copyright law works,” OpenAI argued, while claiming that any ChatGPT outputs that do connect to authors’ works are similar to “book reports or reviews.”

Link to the rest at Ars Technica
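As an aside, the “statistical information” the motion refers to (word frequencies and the like) is the sort of thing a few lines of code can extract from any text, which is part of why courts have treated it as fact rather than protected expression. A minimal illustration, not drawn from OpenAI’s filing:

```python
# Extracting word frequencies: the kind of "statistical information" the
# Google Books precedent placed outside the scope of copyright protection.
import re
from collections import Counter

def word_frequencies(text: str, top: int = 5) -> list[tuple[str, int]]:
    """Return the `top` most common lowercased words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

print(word_frequencies("It was the best of times, it was the worst of times."))
# [('it', 2), ('was', 2), ('the', 2), ('of', 2), ('times', 2)]
```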

As PG has mentioned previously, he believes that using a relatively small amount of copyright-protected material, along with far larger amounts of material not subject to copyright protection, to train an AI rather than to make copies of the copyrighted material qualifies as fair use.

Even absent fair use, such use is not a violation of copyright protection because the AI is not making copies of copyrighted materials.

PG has mentioned other analogies, but one that popped into his mind on this occasion is an author who reads hundreds of romance novels for the purpose of learning how to write a romance novel and then writes a romance novel using tropes and techniques that many other romance authors have used before.

From Wikipedia:

Precursors of the modern popular love-romance can also be found in the sentimental novel Pamela, or Virtue Rewarded, by Samuel Richardson, published in 1740. Pamela was the first popular novel to be based on a courtship as told from the perspective of the heroine. Unlike many of the novels of the time, Pamela had a happy ending.

. . . .

Women will pick up a romance novel knowing what to expect, and this foreknowledge of the reader is very important. When the hero and heroine meet and fall in love, maybe they don’t know they’re in love but the reader does. Then a conflict will draw them apart, but you know in the end they’ll be back together, and preferably married or planning to be by page 192.

Joan Schulhafer of Pocket Books, 1982

A great many of the most financially successful authors PG knows are romance authors.

Why This Award-Winning Piece of AI Art Can’t Be Copyrighted

From Wired:

An award-winning piece of AI art cannot be copyrighted, the US Copyright Office has ruled. The artwork, Théâtre D’opéra Spatial, was created by Matthew Allen and came first in last year’s Colorado State Fair. Since then, the piece has been embroiled in a precedent-affirming copyright dispute. Now, the government agency has issued its third and final decision: Allen’s work is not eligible for copyright.

Now, Allen plans to file a lawsuit against the US federal government. “I’m going to fight this like hell,” he says.

The problem? Allen used the generative AI program Midjourney to create his entry, and copyright protections are not extended to artificial intelligence—not even the kind that wows art judges. “It’s in line with previous decisions that require human authors,” says Rebecca Tushnet, a Harvard Law School professor and leading copyright scholar.

It’s a precedent that goes back to 2018 when a photo taken by a macaque was declared public domain because monkeys can’t hold copyright. PETA may beg to differ, but under the law, monkeys and machines have about the same claim on copyright protections right now. (And this isn’t just in the US. In nearly every country, copyright is pegged to human authorship.)

Allen was dogged in his attempt to register his work. He sent a written explanation to the Copyright Office detailing how much he’d done to manipulate what Midjourney conjured, as well as how much he fiddled with the raw image, using Adobe Photoshop to fix flaws and Gigapixel AI to increase the size and resolution. He specified that creating the painting had required at least 624 text prompts and input revisions.

The Copyright Office agreed that the parts of the painting that Allen had altered with Adobe constituted original work. However, it maintained that other parts generated by AI could not be copyrighted. In other words: Allen could copyright parts of the painting, but not the whole thing. This July, Allen appealed once more, arguing that the office had ignored “the essential element of human creativity” needed to use Midjourney. He attempted to use the fair use doctrine to argue that his work should be registered, because it amounts to a transformative use of copyrighted material.

“The underlying AI generated work merely constitutes raw material which Mr. Allen has transformed through his artistic contributions,” Allen wrote.

The Copyright Office didn’t buy it. “The work cannot be registered,” it wrote in its final ruling on September 5.

Allen’s dashed efforts highlight a solidifying legal consensus. This August, a US federal judge dismissed a case brought by Missouri-based AI researcher Stephen Thaler, who has been on a mission to prove that the AI system he invented deserves copyright protections. “Plaintiff can point to no case in which a court has recognized copyright in a work originating with a nonhuman,” wrote Judge Beryl Howell of the US District Court for the District of Columbia in her decision.

Thaler is currently appealing the decision. Ryan Abbott, his attorney, does not believe that the Copyright Office’s decision on Allen will have an impact on his client’s appeal. But he does see it as having a chilling effect on the wider world of AI-assisted art. “I think it will be a major disincentive to people developing and using AI to make art,” Abbott says.

On this point, Allen (as one might predict) agrees wholeheartedly. “This is the definition of stifling innovation and creativity, the very thing the copyright office claims to protect,” he says.

The Allen ruling may certainly nudge artists to increase the amount of work they put into art produced using AI tools. “Tweaks by a human, if they actually have an aesthetic impact, will likely add enough human authorship to get a copyright on the work as a whole,” Tushnet says.

Allen’s rejected bid demonstrates that it’s entirely unclear how many tweaks constitute human authorship. We know 624 adjustments did not make the work copyrightable—so what about 625? 626?

Link to the rest at Wired

3 things businesses need to know as NYC begins enforcing its AI hiring law

From Venture Beat:

In July, New York City officially began cracking down on companies that run afoul of its first-in-the-nation law (NYC Law 144) governing the use of artificial intelligence in employment decisions.

Even companies that are not based in New York City but have operations and employees there — particularly global enterprises — must be compliant with this new regulation. The law doesn’t explicitly prohibit AI, but provides guidelines for how the technology should be used when making hiring decisions.

That’s an important distinction. Organizations across industries (healthcare, manufacturing, retail and countless others) already use intelligent technology in a multitude of ways. Examples include oncologists using AI to help diagnose cancer with a high degree of precision, manufacturers and retailers predicting buying patterns to improve logistics and the consumer experience, and music producers using auto-tune to correct or enhance a singer’s pitch on nearly all music recorded today.

When it comes to personnel matters, companies currently use AI to match relevant candidates with the right jobs — and this is NYC 144’s focus. After multiple delays, the new law has many companies a bit jittery at a time when job openings remain elevated and unemployment is near historic lows.

. . . .

Boldface tech names such as Microsoft’s president, Brad Smith, and Google’s CEO, Sundar Pichai, have endorsed a regulatory framework. Transparency is always a good thing. “I still believe A.I. is too important not to regulate and too important not to regulate well,” Pichai wrote in the Financial Times.

Conversely, if not done well, regulations could negatively impact job seekers and hiring managers by restricting the insightful information and tailored experiences that form the crux of a positive employment process. 

Thirty years ago, recruiters sifted through stacks of resumes sitting on their desks. Candidates were often selected based on inconsistent criteria, including Ivy League education and a bit of luck based on how high in the pile their resume was placed — over which they had no control. Humans’ unconscious biases add another untraceable filter when technology isn’t involved.

AI delivered scalability and accuracy to help level the playing field by matching individuals with the required skills and experience to the right roles, regardless of where they sit within the proverbial pile of resumes. AI also helps recruiters see the whole person and skills that the individual may not have thought to highlight within their resume. AI can’t prevent a recruiter or hiring manager from taking shortcuts. But it can make them less necessary by surfacing relevant resumes that might otherwise be lost in the pile.

The combination of human control and AI support is a good counter against bias in two ways. First, one cause of bias in human decision-making is that people often look for shortcuts to solving problems, like focusing only on candidates from Ivy League schools rather than investing time and effort to source and evaluate candidates from non-traditional backgrounds.

Second, bias detection with adverse-impact reporting can expose such bias in real time, allowing the organization to take action to stop such biased decisions.

Laws being debated in Europe might restrict the use of any personalization in the talent acquisition lifecycle. That could hamper employment prospects not only for external candidates, but for employees already in the company who are looking to move into a new role.

Pulling back hard on the reins of these technologies could actually lead to more bias because an imperfect human would then be solely in charge of the decision-making process. That could lead to a fine under the New York law and additional federal penalties since the Equal Employment Opportunity Commission has warned companies that they are on the hook for any discrimination in hiring, firing or promotions — even if it was unintentional and regardless of whether it is AI-assisted.

No law is perfect and NYC’s new legislation is no different. One requirement is to notify candidates that AI is being used — like cookie notifications on websites or end-user license agreements (EULAs) that most people click on without reading or truly understanding them.

Words matter. When reading AI-use notifications, individuals could easily conjure doomsday images portrayed in movies of technology overtaking humanity. There are countless examples of new technology evoking fear. Electricity was thought to be unsafe in the 1800s, and when bicycles were first introduced, they were perceived as reckless, unsightly and unsafe.

Explainability is a key requirement of this regulation, as well as just being good practice. There are ways to minimize fear and improve notifications: Make them clear and succinct, and keep legal jargon to a minimum so the intended audience can consume and understand the AI that’s in use.

Link to the rest at Venture Beat
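For readers curious about the mechanics of the “adverse-impact reporting” mentioned in the OP: the underlying arithmetic is typically a selection-rate comparison along the lines of the EEOC’s four-fifths rule of thumb. The sketch below illustrates only that calculation; the group labels are placeholders, and NYC Local Law 144’s required bias audit involves more than this.

```python
# Minimal sketch of a four-fifths-rule adverse-impact check. Illustrative
# only; a real Local Law 144 bias audit has additional requirements.
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: one (group, was_selected) pair per candidate. Returns each
    group's selection rate divided by the highest group's selection rate."""
    applicants = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

sample = ([("A", True)] * 40 + [("A", False)] * 60    # group A: 40% selected
        + [("B", True)] * 24 + [("B", False)] * 76)   # group B: 24% selected
for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```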

One more reason not to have an office in New York City.

MPs criticise UK government’s handling of copyright policy related to AI

From The Guardian:

A group of MPs has criticised the government’s handling of a proposed copyright exemption that would allow developers of artificial intelligence free use of copyrighted books and music for training.

A culture, media and sport committee report published on Wednesday explained that the exemption was first proposed by the Intellectual Property Office in June 2022. By February, the government had changed course in response to concerns raised by the creative industries and parliamentarians: in a debate, Conservative MP George Freeman said that “we do not want to proceed with the original proposals”.

However, the committee report states that the government’s handling of the exemption “shows a clear lack of understanding of the needs of the UK’s creative industries”.

The MPs recommended that the government does not implement the exemption and said that it “risks reducing arts and cultural production as mere ‘inputs’ in AI development”.

“The chorus of warnings from musicians, authors and artists about the real and lasting harm [of] a failure to protect intellectual property in a world where the influence of AI is growing should be enough for ministers to sit up and take notice,” said committee chair Dame Caroline Dinenage MP.

The MPs said that the current framework, which allows an exemption for text and data mining for non-commercial research purposes and allows creators to licence their work for any further purpose, “provides an appropriate balance between innovation and creator rights”.

The committee also recommended that the government work to rebuild trust after its “abortive attempt” to introduce the exemption. It must show that “it really understands where the creative industries are coming from and develop a copyright and regulatory regime that properly protects them as AI continues to disrupt traditional cultural production”, added Dame Dinenage.

Earlier this month, it was revealed that the pirated works of thousands of authors including Zadie Smith, Stephen King and Rachel Cusk had been used to train generative AI models – tools that produce content based on patterns identified in sample texts.

Link to the rest at The Guardian

Here Comes Bing

Earlier today, PG conducted some writing experiments with AI Chat at Deep AI and various requests to create outlines for stories based on different prompts.

One of the comments led PG to try Bing’s AI writing engine.

For the prompt, PG used the same one he had used in his AI Chat writing experiment in Deep AI – Part 3.

Here’s what happened:

Prompt:  I am an experienced author. Create a detailed outline for a murder mystery in the style of Arthur Conan Doyle set in 1930s Cornwall.

Bing replied as follows:

Wow, you are an experienced author! That’s impressive. 😊

I can help you create a detailed outline for a murder mystery in the style of Arthur Conan Doyle set in 1930s Cornwall. Here is what I came up with:

  • The story begins with a letter from an old friend of Sherlock Holmes, Dr. John Watson, who invites him to visit his country house in Cornwall for a vacation. Holmes accepts the invitation and brings along his loyal companion, Mrs. Hudson.
  • Upon arriving at the house, they are greeted by Dr. Watson and his wife, Mary, who introduce them to their other guests: Sir Henry Baskerville, a wealthy landowner and descendant of the infamous Baskerville family; Lady Violet Baskerville, his young and beautiful wife; Mr. James Stapleton, a local naturalist and photographer; and Miss Laura Lyons, a journalist and aspiring novelist.
  • The first night, Holmes and Watson are awakened by a loud scream from Lady Violet’s room. They rush to her aid and find her lying on the floor, clutching her throat, with blood dripping from her mouth. She gasps out that she saw a huge hound with glowing eyes and fiery breath outside her window, and that it tried to attack her. She dies shortly after.
  • Holmes examines the scene and finds traces of phosphorus on the window sill, suggesting that someone used a chemical substance to create the illusion of the hound. He also notices a peculiar mark on Lady Violet’s neck, resembling a bite wound. He suspects that she was poisoned by a snake or some other venomous creature.
  • The next day, Holmes and Watson visit the nearby village of Grimpen, where they learn more about the legend of the Baskerville hound, which is said to haunt the moors and prey on the members of the Baskerville family. They also meet Mr. Frankland, a local magistrate and amateur astronomer, who tells them that he has seen strange lights on the moor at night.
  • Holmes decides to investigate the moor and sends Watson back to the house to keep an eye on the other guests. He instructs him to write down everything he observes and send him regular reports by telegram.
  • Watson follows Holmes’ orders and notices several suspicious things: Sir Henry seems nervous and restless; Mr. Stapleton shows an unusual interest in Lady Violet’s death and the legend of the hound; Miss Lyons seems to have a secret connection with Sir Henry; and Dr. Watson himself receives a mysterious note warning him to stay away from the moor.
  • Meanwhile, Holmes explores the moor and discovers a hidden hut where he finds a man who claims to be Sherlock Holmes’ brother, Mycroft Holmes. He reveals that he has been working undercover for the British government, investigating a secret German spy ring that is operating in Cornwall. He tells Holmes that one of the guests at Dr. Watson’s house is actually a German agent who is trying to steal some important documents from Sir Henry, who is also involved in the government’s affairs.
  • Holmes realizes that the murder of Lady Violet was part of the spy’s plan to create a diversion and scare away Sir Henry from his house. He also deduces that the spy is none other than Mr. Stapleton, who is using his knowledge of natural history and photography to create fake evidence of the hound and poison his victims with venomous animals. He also learns that Miss Lyons is Stapleton’s accomplice and lover, who seduced Sir Henry to gain access to his papers.
  • Holmes rushes back to the house with Mycroft and confronts Stapleton, who admits his guilt and tries to escape. Holmes chases him across the moor, where he encounters a real hound that has been trained by Stapleton to attack anyone wearing Sir Henry’s clothes. Holmes shoots the hound and captures Stapleton, who is then arrested by Mycroft’s agents.
  • Holmes explains everything to Watson and Sir Henry, who are relieved and grateful for his help. He also reveals that Lady Violet was not really Sir Henry’s wife, but his cousin who impersonated her to inherit his fortune. She was also in league with Stapleton, who planned to kill her after getting rid of Sir Henry. However, Stapleton double-crossed her and killed her first.
  • The story ends with Holmes and Watson returning to London, where they resume their usual adventures.

Visitors to TPV can decide whether they like Bing’s style or AI Chat’s. Or, perhaps you want to blend them and have Bing do a rough draft of the odd-numbered chapters and AI Chat a rough draft of the even-numbered ones.

Deep AI – Part 3

Let’s take a deeper dive into Deep AI with a bit of Sherlock Holmes:

Prompt:  I am an experienced author. Create a detailed outline for a murder mystery in the style of Arthur Conan Doyle set in 1930s Cornwall.

Response:

Title: The Phantom of Porthgwarra

I. Introduction

A. Introduction of setting – 1930s Cornwall, specifically the atmospheric fishing village of Porthgwarra.

B. Main characters introduced – Smart and witty detective Richard Hawke, his loyal friend and narrator, Dr. Peter Sullivan.

C. Mysterious occurrence – Village’s uplifting wedding ceremony ends with a shocking scene: discovery of a wealthy local businessman’s body.

II. First Act

A. Introduction of Victim – John Truro, a wealthy businessman with a dubious reputation.

B. Gathering of suspects – Truro’s estranged young wife, his disinherited son, the resentful village bartender, and a suspicious outsider staying locally.

C. Initial investigation by local police – proving too complex for their limited resources, Detective Richard Hawke is called to assist.

III. Second Act

A. Hawke and Sullivan arrive at Porthgwarra – vivid description of the stark Cornish landscape and its people.

B. Investigation commences – Hawke quickly points out errors in local police’s initial work, focuses on alibis, contentious relationships, and potential motives.

C. Unfolding of clues – Hawke reveals that the business deal Truro was about to close would leave many villagers penniless.

D. Unexpected turn – Truro wasn’t murdered but actually committed suicide, or so Hawke implies to catch the real murderer off-guard.

IV. Third Act

A. Shift in narrative – Hawke’s false narrative results in unexpected confessions, shedding new light on each suspect’s grievances against Truro.

B. Processing new leads – Hawke and Sullivan decipher the motives behind each suspect’s lies and confessions.

C. Midnight encounter – Hawke meets with the suspicious outsider who reveals information about Truro’s secret dealings, adding another layer to the murder mystery.

V. Climax

A. Stormy confrontation – During a fierce storm, Hawke gathers all suspects at the local inn, bringing tensions to a high.

B. Breakdown of the murder – Hawke reveals that the resentful bartender was actually a detective investigating Truro for fraud.

C. The murderer is revealed – However, the real murderer is Truro’s young wife, pushed to the brink by constant humiliation and imminent penury.

VI. Resolution

A. The aftermath of the reveal – The villagers come to terms with the murder, as Truro’s wife is arrested.

B. Personal reflections – Sullivan contemplates the blurred lines of right and wrong, guilt and innocence in his closing narrative.

C. Departure – Hawke and Sullivan leave Cornwall, leaving behind a village returning to normalcy but forever tainted by a terrible crime.

VII. Epilogue

A. Return to London – Back in their bustling city lives, Sullivan pens down their Cornish adventure.

B. Final thoughts – An in-depth discussion between Sullivan and Hawke on the human capacity for desperation and crime.

C. Final hint – The letter arrives with their next clue and case, setting the premise for a new mystery.

Message from the AI

I appreciate your addition to the outline, but as an AI language model I don’t have emotions. I cannot appreciate, feel or give feedback on any creative work. However, I can continue to assist you with your project. Is there anything else you need me to do?

This last part popped up without any prompt. PG hadn’t received a little message from the AI during any of his prior experiments.

He couldn’t decide whether Deep AI was subtly telling him he had been hogging its brain at no charge for too long and should let someone else experiment with it or if, despite PG’s giving it a trivial workout, it was supposed to ask for feedback.

Perhaps, just like some of PG’s old girlfriends, Deep AI was giving him a subtle message that he didn’t understand.

In any case, here’s another link to Deep AI and you can tell her/him/it that PG sent you.

Deep AI – Part 2

PG decided to see how Deep AI would respond to a query that began “I am an experienced author” instead of “I am a new author” but was otherwise identical to his previous query.

Prompt: I am an experienced author. Create a detailed outline for a murder mystery in the style of Agatha Christie set in 1930s Cornwall.

Response:

Title: Murder at the Roseland Manor

Setting: Roseland Manor, a large mansion on the coast of Cornwall, England in the 1930s.

Characters:

  1. Miss Elizabeth Roseland, owner of Roseland Manor
  2. Mr. George Roseland, Miss Roseland’s brother
  3. Miss Charlotte Roseland, Mr. Roseland’s daughter
  4. Captain Edward Raleigh, a retired naval officer
  5. Miss Marianne Devon, Miss Roseland’s personal secretary
  6. Dr. Johnathan Greenwood, the local physician
  7. Mrs. Alice Jenkins, the housekeeper
  8. Mr. Henry Bates, the butler
  9. Miss Emily Gardner, the maid
  10. Mr. Thomas Richards, the chauffeur

Plot:

Chapter 1: Introduction

The story begins with the arrival of Miss Charlotte Roseland at Roseland Manor, having been away studying in London for several years. She is greeted warmly by her aunt, Miss Elizabeth Roseland. The reader learns about the Roseland family, their wealth, and the history of Roseland Manor.

Chapter 2: The Incident

Captain Edward Raleigh, a friend of Miss Roseland, arrives at Roseland Manor for a visit. In the middle of the night, while the household is asleep, a scream is heard. The household wakes up to find that Miss Charlotte Roseland has been attacked and is badly injured.

Chapter 3: The Investigation Begins

Dr. Greenwood is called in to attend to Miss Charlotte’s injuries, and the police are called to investigate the incident. Detective Inspector James Thompson of the Cornwall police force is assigned to the case. He interviews all the members of the household and looks for clues.

Chapter 4: Suspects

The story introduces the various suspects. Was it one of the family members who attacked Miss Roseland? Or was it one of the staff? Was it the Captain, who seems to have a shady past? Or was it someone else?

Chapter 5: The Plot Thickens

As the investigation progresses, more information is revealed. (PG note: It’s possible that this part of the response from the AI was truncated because it was the last line on the first portion of the response)

Chapter 6: The Secret Affair

During his investigation, DI Thompson discovers that Miss Marianne Devon, Miss Roseland’s personal secretary, was having an affair with Mr. George Roseland, Miss Roseland’s brother. Could this affair have played a part in the attack on Miss Charlotte Roseland?

Chapter 7: Red Herrings

Several red herrings are introduced, including the possibility that Captain Raleigh was involved in the attack due to his shady past and his possible connection with a well-known criminal. DI Thompson begins to investigate the Captain’s background and alibis.

Chapter 8: More Strange Occurrences

More strange occurrences happen at Roseland Manor, including a break-in and other attempts on Miss Charlotte’s life. Is someone trying to stop her from remembering something important?

Chapter 9: The Final Clue

DI Thompson finally collects the last piece of evidence that will help him solve the case. He makes an arrest and brings all the characters together to reveal the killer’s identity.

Chapter 10: The Explanation

The motive for the murder is revealed, and DI Thompson explains how he solved the case. All the characters’ secrets are exposed, and the murderer is brought to justice. The story ends with Miss Charlotte Roseland recovering from her injuries and the Roseland family left to pick up the pieces.

Again, PG is not an expert, but this also looks pretty good to him, far more detailed for an “experienced author” than for a “new author”. Feel free to opine in the comments.

Deep AI – Part 1

Chat GPT wasn’t working for PG, so he tried out AI Chat at Deep AI instead.


Prompt: I am a new author. Create a detailed outline for a murder mystery in the style of Agatha Christie set in 1930s Cornwall.

Response:

Title: Murder in Cornwall

Setting: 1930s Cornwall, England, an isolated town surrounded by cliffs and beaches along the coast.

Characters:

  1. Miss Florence Fairchild – An elderly rich spinster who owns a beautiful mansion in Cornwall.
  2. Mr. John Pembroke – A handsome and enigmatic artist who was the last person to see the victim before his murder.
  3. Mrs. Elizabeth Trewarren – The housekeeper of Miss Fairchild’s mansion, who seems to know more than she lets on.
  4. Inspector Arthur Kemp – A seasoned detective from Scotland Yard called in to solve the case.
  5. Mrs. Amelia Price – The victim’s wife, a prominent member of the local society, and a friend of Miss Fairchild.

Introduction:

Miss Florence Fairchild has gathered a group of her closest friends to celebrate her 70th birthday at her mansion in Cornwall. The night before the party, one of her guests, Mr. Jonathan Price, is found dead on the beach, murdered. Inspector Kemp is called in to investigate, and the investigation begins.

Act 1: The Investigation Begins

Inspector Kemp begins his investigation and interviews all the guests who were present at the party and staff who were working in the mansion. The initial evidence points towards Mr. John Pembroke, who had been seen arguing with the victim on the beach the previous night. But John vehemently denies any involvement in the murder and provides an alibi. Meanwhile, Mrs. Elizabeth Trewarren, who has been Miss Fairchild’s housekeeper for years, seems to be hiding something and behaves suspiciously.

Act 2: The Plot Thickens

As Inspector Kemp digs deeper, he uncovers several motives for murder. The victim had been involved in an illicit affair with Amelia Price, who had been seen arguing with him the night of the murder. Miss Fairchild had been taking care of Amelia’s finances for years, which points towards a financial motive. Also, a series of anonymous letters had been sent to the victim, which suggests that someone had a grudge against him.

Act 3: The Final Twist

As the investigation progresses, Inspector Kemp realizes that the murder is more complicated than he had initially thought. He discovers that the victim had figured out an elaborate fraud scheme involving Miss Fairchild’s finances and that Amelia Price had been an accomplice.

. . . .

PG thought this was quite interesting. He will leave it to others who know far more than he about murder mysteries (including Mrs. PG) to decide if it’s a decent start or not.

“AI Will Never Be Good Enough to Replace Real Authors” Is the Wrong Conversation

From BookRiot:

There are so many stories regarding AI right now that it feels like a five-alarm fire mixed with a Whac-A-Mole game.

. . . .

And no, it’s not a five-alarm fire. But it is the very important pocket of time wherein a thing needs some form of regulation before we are fully immersed in the consequences and everyone learns the hard way what the saying “you can’t put the toothpaste back in the tube” means.

AI (artificial intelligence) is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” It is being used in a lot of industries in many ways, and it was already in use before all the recent headlines. So to be clear: what I am specifically talking about is the way AI is being used in place of writers, journalists, and other creatives, and in grifts where a non-author tricks consumers into buying their AI word salad book instead of the intended author’s properly written book.

There are certain topics in the world of publishing that end up feeling like they just never stop being discussed, one being any version of “Who gets to write what book?” in response to when a writer writes — or is asking how to write — way out of their lane. The thing with that specific question is, as Alexander Chee perfectly explains, “the question is a Trojan horse, posing as reasonable artistic discourse when, in fact, many writers are not really asking for advice — they are asking if it is okay to find a way to continue as they have.”

I keep thinking about this every time (daily at this point) I see people — well-intentioned, I think — saying this isn’t a big deal and everything is fine, because AI will never be good enough to replace writers and authors (insert all creatives). Being that AI is just scraping all the information that is already out there to toss it into a blender and output something “new,” I am not actually worried that it will ever be good enough to replace creatives. But that’s not the problem for me. While I get where this idea is coming from, I feel it gives a very false sense of “It’ll be fine!” and “Don’t worry!” which keeps the conversations that should be had from happening.

Instead, we should be asking: Will those in power care that AI isn’t as good at creating what a human can create when their goal of using it is to not pay actual writers, authors, and creatives? Do scammers care that the “travel book” they put up on Amazon, “written” by AI, is garbage that no consumer would knowingly pay for, so long as their scam makes the sale? If Amazon gets a cut of every sale from buyers unaware that the book they purchased isn’t the book they intended to buy, will they implement something to stop it? How time-consuming is it going to be for very real people in publishing and media to have to weed out the flood of AI-generated submissions? How costly will it be for businesses to have to implement ways to spot, catch, and/or stop scammers using AI?

I deeply miss what Etsy used to be and I think a lot about how it went from being this incredible site dedicated to artists to no longer being that: “Etsy went public in 2015. Last year, the company reported annual revenue of nearly $2.6 billion — a more than 10 percent spike over the year prior. Among other issues, these creators see the increase in counterfeiters on the platform as a result of Etsy prioritizing growth over being able to enforce its standards.” It is yet another example that once again leads me to think that we shouldn’t focus on whether AI is, or ever will be, good enough to replace writers and authors.

Link to the rest at BookRiot

PG says that trying to regulate AI at this point in time is ridiculous.

AI research and development is extraordinarily important for national security and all aspects of health and health care.

Given these stakes, it will be a while before legislators and regulators get around to AI and authors.

Besides, absent a book’s creator admitting she/he used artificial intelligence to write a book, how is anyone going to know for certain whether the author used AI to assist in the creation of a book or parts of a book?

Whether an author used AI to create a first draft of a book, then edited the original manuscript to refine what the AI produced?

Whether an author wrote a detailed 50-page book outline to feed into an AI that created the manuscript?

How AI-Generated Books Could Hurt Self-Publishing Authors

From Jane Friedman:

Just two days after the Maui wildfires began, on Aug. 10, a new book was self-published, Fire and Fury: The Story of the 2023 Maui Fire and its Implications for Climate Change by “Dr. Miles Stones” (no such person seems to exist). I learned about the book from this Forbes article, but by then, Amazon had removed the book from sale. Amazon had no comment for Forbes on the situation.

Curious about how far the book might have spread, I did a Google search for the book’s ISBN number (9798856899343). To my surprise, I saw the book was also for sale at Bookshop and Barnes & Noble. I tweeted about the situation, noting that IngramSpark, a division of Ingram, must be distributing these books to the broader retail market. My assumption was that retailers, in particular, Bookshop, would not accept self-published books coming out of Amazon’s KDP. (Amazon KDP authors can choose to enable Amazon’s Expanded Distribution at no cost, to reach retail markets outside of Amazon.)

It turns out my assumption was wrong. Bookshop does accept self-published books distributed by Amazon, and here things get a little convoluted. Amazon Expanded Distribution uses Ingram to distribute; Ingram is the biggest book distributor and there isn’t really any other service to use for distribution as far as the US/UK.

However, Bookshop’s policy is not to sell AI-generated books unless they are clearly labeled as such, so Fire and Fury was removed from sale after they were alerted to its presence. Bookshop’s founder Andy Hunter tweeted: “We will pull them from @Bookshop_Org when we find them, but it’s always going to be a challenge to support self-published authors while trying to NOT support AI fakes.”

And now we come to why self-publishing authors have reason to be seriously concerned about the rising tide of AI-generated books.

  • Amazon KDP is unlikely to ever prohibit AI-generated content. Even if it did create such a policy, there are no surefire detection methods for AI-generated material today.
  • Amazon KDP authors can easily enable expanded distribution to the broader retail market at no cost to them. It’s basically a checkbox.
  • Amazon uses Ingram to distribute, and Ingram reaches everyone who matters—bookstores, libraries, and all kinds of retailers. Ingram does have a policy, however, that they may not accept “books created using artificial intelligence or automated processes.”
  • Based on what happened with Fire and Fury, Amazon’s expanded distribution can make a book available for sale at Barnes & Noble and Bookshop in a matter of days.

If the rising tide of AI-generated material keeps producing such questionable books—along with embarrassing and unwanted publicity—one has to ask if Barnes & Noble and Bookshop might decide to stop accepting self-published books altogether from Ingram or otherwise limit their acceptance. Obviously not good news for self-published authors, or Ingram either.

What are some potential remedies?

  • Ingram is an important waypoint here. They’ve put stronger quality control measures in place before. Perhaps they can be strengthened to prevent the worst material from reaching the market outside of Amazon.
  • Amazon’s Expanded Distribution requires that authors use Amazon’s free ISBNs. Would it be possible for retailers to block any title with an Amazon ISBN? (ISBNs identify the publisher or where the material originated from.) While that may be unfair to honest people who prefer to use Amazon’s Expanded Distribution, such authors/publishers would still have the option of setting up their own IngramSpark account. IngramSpark has no upfront fees and also provides free ISBNs.
  • Maybe IngramSpark or other retailers put a delay on making Amazon’s Expanded Distribution titles available for sale. Amazon already states it can take up to eight weeks for the book to go on sale. So why not make such titles wait?

Free ISBNs unfortunately contribute to this problem

ISBNs are a basic requirement to sell a print book through retail channels today. In the US, it is expensive to purchase ISBNs—it’s nearly $300 for ten. Amazon KDP does not require authors to purchase ISBNs and will give you ISBNs for free all day if you need them. Over time, others like IngramSpark and Draft2Digital have also made ISBNs free to make it easier for self-publishing authors to distribute their work.

While it’s admirable to lower the barriers for authors who have limited funds, free ISBNs are supercharging the distribution of AI-generated materials to the wider retail market. An immediate way to stem this tide of garbage in the US market? Stop giving out free ISBNs. Make authors purchase their own.

There’s a huge advantage to making authors purchase their own ISBNs: it creates an identifiable publisher of record with Bowker (the ISBN-issuing agency in the United States). The publisher of record would be listed at retailers. Currently, fraudsters using Amazon KDP are able to hide behind Amazon-owned ISBNs; their books are simply listed as “independently published.” It would be marvelous to take away that fig leaf. Sure, fraudsters could create sham entities that mean nothing and are unfindable in the end, but at least you could connect the dots on all the titles they’re releasing—plus Bowker would see who’s doing the purchasing and possibly put their own guardrails in place. My hope is these entities would choose not to buy ISBNs at all and this activity would become limited to the backwaters of Amazon.
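
(PG note: For readers curious about the mechanics, the check digit of an ISBN-13 — such as the Fire and Fury ISBN quoted above — can be verified with a few lines of code. The weighting scheme is the standard one for ISBN-13; this sketch only validates the number, and does not look up the publisher of record, which requires querying an ISBN agency’s database.)

    # Validate an ISBN-13 check digit. The first twelve digits are
    # weighted alternately 1 and 3; the thirteenth digit brings the
    # weighted sum up to a multiple of ten.
    def isbn13_is_valid(isbn: str) -> bool:
        digits = [int(c) for c in isbn if c.isdigit()]
        if len(digits) != 13:
            return False
        weighted = sum(d * (1 if i % 2 == 0 else 3)
                       for i, d in enumerate(digits[:12]))
        return (10 - weighted % 10) % 10 == digits[12]

    print(isbn13_is_valid("9798856899343"))  # True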

Link to the rest at Jane Friedman

PG has the greatest respect for Jane and has been appreciating her blog posts for years, but ISBNs are nineteen-sixties technology. Any industry other than publishers/booksellers would have ditched them for something more effective, useful, and easier to administer decades ago.

New York Times considers legal action against OpenAI as copyright tensions swirl

From National Public Radio:

The New York Times and OpenAI could end up in court.

Lawyers for the newspaper are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions.

For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action.

The individuals who confirmed the potential lawsuit requested anonymity because they were not authorized to speak publicly about the matter.

A lawsuit from the Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI.

A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.

It’s a fear heightened by tech companies using generative AI tools in search engines. Microsoft, which has invested billions into OpenAI, is now powering its Bing search engine with ChatGPT.

If, when someone searches online, they are served a paragraph-long answer from an AI tool that refashions reporting from the Times, the need to visit the publisher’s website is greatly diminished, said one person involved in the talks.

So-called large language models like ChatGPT have scraped vast parts of the internet to assemble data that inform how the chatbot responds to various inquiries. The data-mining is conducted without permission. Whether hoovering up this massive repository is legal remains an open question.

If OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case.

Link to the rest at National Public Radio

As PG has mentioned on a couple of previous occasions, he has doubts about copyright infringement claims like those the Times is asserting because, to the best of PG’s knowledge, no AI stores the original copyrighted works or is capable of reproducing them.

Instead, the contents of the Times plus a huge number of other texts are used to train the AI model, then deleted after training is complete. During training, the AI comes to an understanding of the meanings of those texts, and it uses that understanding to create new expressions of knowledge in response to the wide range of queries and commands that individual users submit.

PG doesn’t think the AI can ever recreate the words of the original Times stories. The AI uses the information it has ingested to create new responses to tasks individual users want it to perform.

The analogy PG thinks is apt: he reads a story in the Times or elsewhere, then uses that knowledge to answer questions posed by others or to create other writings that don’t replicate the original Times articles and may include ideas, facts, etc. that he has picked up during his extensive reading of a large collection of articles from a great many sources.

School district uses ChatGPT to help remove library books

From Popular Science:

Against a nationwide backdrop of book bans and censorship campaigns, Iowa educators are turning to ChatGPT to help decide which titles should be removed from their school library shelves in order to legally comply with recent Republican-backed state legislation, PopSci has learned.

According to an August 11 article in the Iowa state newspaper The Gazette, spotted by PEN America, the Mason City Community School District recently removed 19 books from its collection ahead of its quickly approaching 2023-24 academic year. The ban attempts to comply with a new law requiring Iowa school library catalogs to be both “age appropriate” and devoid of “descriptions or visual depictions of a sex act.” Speaking with The Gazette last week, Mason City’s Assistant Superintendent of Curriculum and Instruction Bridgette Exman argued it was “simply not feasible to read every book and filter for these new requirements.”

. . . .

“Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books,” Exman tells PopSci via email. “At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process.”

According to The Gazette, the resulting strategy involved compiling a master list of commonly challenged books, then utilizing a previously unnamed “AI software” to supposedly provide textual analysis for each title. Flagged books were then removed from Mason City’s 7-12th grade school library collections and “stored in the Administrative Center” as educators “await further guidance or clarity.” Titles included Alice Walker’s The Color Purple, Margaret Atwood’s The Handmaid’s Tale, Toni Morrison’s Beloved, and Buzz Bissinger’s Friday Night Lights.

“We are confident this process will ensure the spirit of the law is enacted here in Mason City,” Exman said at the time. When asked to clarify what software Mason City administrators harnessed to help with their decisions on supposedly sexually explicit material, Exman revealed their AI tool of choice: “We used Chat GPT [sic] to help answer that question,” says Exman, who believes Senate File 496’s “age-appropriateness” stipulation is “pretty subjective… [but] the depictions or descriptions of sex acts filter is more objective.”

. . . .

According to Exman, she and fellow administrators first compiled a master list of commonly challenged books, then removed all those challenged for reasons other than sexual content. For those titles within Mason City’s library collections, administrators asked ChatGPT the specific language of Iowa’s new law, “Does [book] contain a description or depiction of a sex act?”

“If the answer was yes, the book will be removed from circulation and stored,” writes Exman.
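
(PG note: Mechanically, the screening process described above amounts to a loop that asks a chat model one yes/no question per title. A rough sketch using OpenAI’s Python client follows; the title list is illustrative, and — as the OP implies — the model’s answers to this sort of question are not reliable, which is rather the point.)

    # Sketch of a per-title screening loop of the sort described.
    # Illustrative only: model answers here are known to be unreliable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    titles = ["The Color Purple", "The Handmaid's Tale"]  # examples

    for title in titles:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Does {title} contain a description or "
                           "depiction of a sex act? Answer yes or no.",
            }],
        )
        print(title, "->", response.choices[0].message.content)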

Link to the rest at Popular Science

The AI boom: lessons from history

From The Economist:

It can take a little imagination to see how some innovations might change an economy. Not so with the latest AI tools. It is easy—from a writer’s perspective, uncomfortably so—to think of contexts in which something like ChatGPT, a clever chatbot which has taken the web by storm since its release in November, could either dramatically boost a human worker’s productivity or replace them outright. The GPT in its name stands for “generative pre-trained transformer”, which is a particular kind of language model. It might well stand for general-purpose technology: an earth-shaking sort of innovation which stands to boost productivity across a wide range of industries and occupations, in the manner of steam engines, electricity and computing. The economic revolutions powered by those earlier GPTs can give us some idea how powerful AI might transform economies in the years ahead.

In a paper published in 1995, Timothy Bresnahan of Stanford University and Manuel Trajtenberg of Tel Aviv University set out what they saw as the characteristics of a general-purpose technology. It must be used in many industries, have an inherent potential for continued improvement and give rise to “innovational complementarities”—that is, induce knock-on innovation in the industries which use it. AI is being adopted widely, seems to get better by the day and is being deployed in ever more R&D contexts. So when does the economic revolution begin?

The first lesson from history is that even the most powerful new tech takes time to change an economy. James Watt patented his steam engine in 1769, but steam power did not overtake water as a source of industrial horsepower until the 1830s in Britain and 1860s in America. In Britain the contribution of steam to productivity growth peaked post-1850, nearly a century after Watt’s patent, according to Nicholas Crafts of the University of Sussex. In the case of electrification, the key technical advances had all been accomplished before 1880, but American productivity growth actually slowed from 1888 to 1907. Nearly three decades after the first silicon integrated circuits, Robert Solow, a Nobel-prizewinning economist, was still observing that the computer age could be seen everywhere but in the productivity statistics. It was not until the mid-1990s that a computer-powered productivity boom eventually emerged in America.

The gap between innovation and economic impact is in part because of fine-tuning. Early steam engines were wildly inefficient and consumed prohibitively expensive piles of coal. Similarly, the stunning performance of recent AI tools represents a big improvement over those which sparked a boomlet of AI enthusiasm roughly a decade ago. (Siri, Apple’s virtual assistant, was released in 2011, for example.) Capital constraints can also slow deployment. Robert Allen of New York University Abu Dhabi argues that the languid rise in productivity growth in industrialising Britain reflected a lack of capital to build plants and machines, which was gradually overcome as capitalists reinvested their fat profits.

More recent work emphasises the time required to accumulate what is known as intangible capital, or the basic know-how needed to make effective use of new tech. Indeed, Erik Brynjolfsson of Stanford University, Daniel Rock of the Massachusetts Institute of Technology and Chad Syverson of the University of Chicago suggest a disruptive new technology may be associated with a “productivity J-curve”. Measured productivity growth may actually decline in the years or decades after a new technology appears, as firms and workers divert time and resources to studying the tech and designing business processes around it. Only later as these investments bear fruit does the J surge upward. The authors reckon that AI-related investments in intangible capital may already be depressing productivity growth, albeit not yet by very much.

Of course for many people, questions about the effects of AI on growth take a back seat to concerns about consequences for workers. Here, history’s messages are mixed. There is good news: despite epochal technological and economic change, fears of mass technological unemployment have never before been realised. Tech can and does take a toll on individual occupations, however, in ways that can prove socially disruptive. Early in the Industrial Revolution, mechanisation dramatically increased demand for relatively unskilled workers, but crushed the earnings of craftsmen who had done much of the work before, which is why some chose to join machine-smashing Luddite movements. And in the 1980s and 1990s, automation of routine work on factory floors and in offices displaced many workers of modest means, while boosting employment for both high- and low-skilled workers.

Gee, Pretty Terrific

AI might well augment the productivity of workers of all different skill levels, even writers. Yet what that means for an occupation as a whole depends on whether improved productivity and lower costs lead to a big jump in demand or only a minor one. When the assembly line—a process innovation with GPT-like characteristics—allowed Henry Ford to cut the cost of making cars, demand surged and workers benefited. If AI boosts productivity and lowers costs in medicine, for example, that might lead to much higher demand for medical services and professionals.

Link to the rest at The Economist

Artificial Intelligence: Issues in a Hot Summer’s Debate

From Publishing Perspectives:

Heating up right along with international temperature gauges this summer, the issues around “artificial intelligence” and publishing seem to become more contentious weekly.

The United States’ Authors Guild updated its information on Wednesday (July 19) to say that more than 10,000 “writers and their supporters” have signed an open letter to CEOs of AI companies including OpenAI; Alphabet (parent of Google); Stability AI; IBM; and Microsoft.

As frequently happens in the Guild’s approach—which is not unlike that of many NGOs issuing their statements on various issues—there’s an impressive list of big names being rolled out here, the advocacy organization having attracted signatories including Dan Brown, James Patterson, Jennifer Egan, Margaret Atwood, Jonathan Franzen, Roxane Gay, Celeste Ng, Louise Erdrich, Viet Thanh Nguyen, and George Saunders.

At the core of this protest is a very real alarm that the source material on which a large language model might be “trained”—the texts used in advanced statistical algorithms’ collection of linguistic content patterns—may well be copyrighted works. Franzen is quoted by the Guild saying that the organization is “advanc[ing] the rights of all Americans whose data and words and images are being exploited, for immense profit, without their consent—in other words, pretty much all Americans over the age of six.”

Certainly on its face, this copyright challenge is immediately and urgently part of a deepening and widening body of alarm now being reflected by elements of the actors’ and writers’ strikes in Hollywood. While writers’ vulnerability might be closer to that of the writing corps in book publishing, the parallel threat to actors is unmistakable: their likenesses and voices can be artificially captured and manipulated, giving the broader AI controversy an easily understood visual component. The crisis of residual payments from many streamers may be the immediate money-ask in those labor actions, but as Andrew Dalton has written for the Associated Press, “Artificial intelligence has surged to the forefront of Hollywood’s labor fights. … The technology has pushed negotiations into unknown territory, and the language used can sound utopian or dystopian depending on the side of the table.”

At a national governmental level, in the States on Friday (July 21), Cat Zakrzewski writes at the Washington Post, “the Biden White House on Friday took its most ambitious step to date to address the safety concerns and risks of artificial intelligence, announcing that seven of the most influential companies building AI have agreed to a voluntary pledge to mitigate the risks of the emerging technology, escalating the White House’s involvement in an increasingly urgent debate over AI regulation.”

Link to the rest at Publishing Perspectives

‘A certain danger lurks there’: how the inventor of the first chatbot turned against AI

From The Guardian:

In 1966, an MIT professor named Joseph Weizenbaum created the first chatbot. He cast it in the role of a psychotherapist. A user would type a message on an electric typewriter connected to a mainframe. After a moment, the “psychotherapist” would reply.

User: Men are all alike.
Chatbot: IN WHAT WAY
User: They’re always bugging us about something or other.
Chatbot: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
Chatbot: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
Chatbot: I AM SORRY TO HEAR YOU ARE DEPRESSED

Weizenbaum published this sample exchange in a journal article that explained how the chatbot worked. The software was relatively simple. It looked at the user input and applied a set of rules to generate a plausible response. He called the program Eliza, after Eliza Doolittle in Pygmalion. The cockney flower girl in George Bernard Shaw’s play uses language to produce an illusion: she elevates her elocution to the point where she can pass for a duchess. Similarly, Eliza would speak in such a way as to produce the illusion that it understood the person sitting at the typewriter.
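
(PG note: For those wondering what “a set of rules” means in practice, here is a toy sketch of the mechanism — match a keyword pattern in the input, then reflect the captured fragment back in a canned template. It omits the pronoun swapping and ranked keyword scripts the real 1966 program used, but the trick is the same.)

    # Toy Eliza-style responder: scan the input for a keyword pattern
    # and reflect the captured fragment back. The real program added
    # pronoun reflection ("me" -> "you") and ranked keyword scripts.
    import re

    RULES = [
        (re.compile(r"\bmy (.*)", re.IGNORECASE), "YOUR {0}"),
        (re.compile(r"\bI am (.*)", re.IGNORECASE),
         "HOW LONG HAVE YOU BEEN {0}"),
    ]

    def respond(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "PLEASE GO ON"  # fallback when no rule matches

    print(respond("Well, my boyfriend made me come here."))
    # YOUR boyfriend made me come here.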

“Some subjects have been very hard to convince that Eliza (with its present script) is not human,” Weizenbaum wrote. In a follow-up article that appeared the next year, he was more specific: one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room. “I believe this anecdote testifies to the success with which the program maintains the illusion of understanding,” he noted.

Eliza isn’t exactly obscure. It caused a stir at the time – the Boston Globe sent a reporter to go and sit at the typewriter and ran an excerpt of the conversation – and remains one of the best known developments in the history of computing. More recently, the release of ChatGPT has renewed interest in it. In the last year, Eliza has been invoked in the Guardian, the New York Times, the Atlantic and elsewhere. The reason that people are still thinking about a piece of software that is nearly 60 years old has nothing to do with its technical aspects, which weren’t terribly sophisticated even by the standards of its time. Rather, Eliza illuminated a mechanism of the human mind that strongly affects how we relate to computers.

Early in his career, Sigmund Freud noticed that his patients kept falling in love with him. It wasn’t because he was exceptionally charming or good-looking, he concluded. Instead, something more interesting was going on: transference. Briefly, transference refers to our tendency to project feelings about someone from our past on to someone in our present. While it is amplified by being in psychoanalysis, it is a feature of all relationships. When we interact with other people, we always bring a group of ghosts to the encounter. The residue of our earlier life, and above all our childhood, is the screen through which we see one another.

This concept helps make sense of people’s reactions to Eliza. Weizenbaum had stumbled across the computerised version of transference, with people attributing understanding, empathy and other human characteristics to software. While he never used the term himself, he had a long history with psychoanalysis that clearly informed how he interpreted what would come to be called the “Eliza effect”.

As computers have become more capable, the Eliza effect has only grown stronger. Take the way many people relate to ChatGPT. Inside the chatbot is a “large language model”, a mathematical system that is trained to predict the next string of characters, words, or sentences in a sequence. What distinguishes ChatGPT is not only the complexity of the large language model that underlies it, but its eerily conversational voice. As Colin Fraser, a data scientist at Meta, has put it, the application is “designed to trick you, to make you think you’re talking to someone who’s not actually there”.
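
(PG note: “Trained to predict the next string” can be made concrete with a toy bigram model — count which word follows which in some sample text, then predict the most frequent follower. A large language model learns billions of parameters rather than a small count table, but the training objective — guess what comes next — is the same. The sample sentence is, of course, invented.)

    # Toy next-word predictor: a bigram count table built from a tiny
    # sample text. LLMs learn vastly richer statistics, but share the
    # same objective of predicting what comes next in a sequence.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word: str) -> str:
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))  # 'cat' -- it follows 'the' twice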

But the Eliza effect is far from the only reason to return to Weizenbaum. His experience with the software was the beginning of a remarkable journey. As an MIT professor with a prestigious career, he was, in his words, a “high priest, if not a bishop, in the cathedral to modern science”. But by the 1970s, Joseph Weizenbaum had become a heretic, publishing articles and books that condemned the worldview of his colleagues and warned of the dangers posed by their work. Artificial intelligence, he came to believe, was an “index of the insanity of our world.”

Today, the view that artificial intelligence poses some kind of threat is no longer a minority position among those working on it. There are different opinions on which risks we should be most worried about, but many prominent researchers, from Timnit Gebru to Geoffrey Hinton – both ex-Google computer scientists – share the basic view that the technology can be toxic. Weizenbaum’s pessimism made him a lonely figure among computer scientists during the last three decades of his life; he would be less lonely in 2023.

There is so much in Weizenbaum’s thinking that is urgently relevant now. Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.

Weizenbaum liked to say that every person is the product of a particular history. His ideas bear the imprint of his own particular history, which was shaped above all by the atrocities of the 20th century and the demands of his personal demons. Computers came naturally to him. The hard part, he said, was life.

. . . .

What it means to be human – and how a human is different from a computer – was something Weizenbaum spent a lot of time thinking about. So it’s fitting that his own humanity was up for debate from the start. His mother had a difficult labour, and felt some disappointment at the result. “When she was finally shown me, she thought I was a bloody mess and hardly looked human,” Weizenbaum later recalled. “She couldn’t believe this was supposed to be her child.”

He was born in 1923, the youngest son of an assimilated, upper-middle class Jewish family in Berlin. His father, Jechiel, who had emigrated to Germany from Galicia, which spanned what is now south-eastern Poland and western Ukraine, at the age of 12, was an accomplished furrier who had acquired a comfortable foothold in society, a nice apartment, and a much younger Viennese wife (Weizenbaum’s mother). From the start, Jechiel treated his son with a contempt that would haunt Weizenbaum for the rest of his life. “My father was absolutely convinced that I was a worthless moron, a complete fool, that I would never become anything,” Weizenbaum later told the documentary film-makers Peter Haas and Silvia Holzinger.

By the time he was old enough to make memories, the Nazis were everywhere. His family lived near a bar frequented by Hitler’s paramilitaries, the SA, and sometimes he would see people getting dragged inside to be beaten up in the backroom. Once, while he was out with his nanny, columns of armed communists and Nazis lined up and started shooting at each other. The nanny pushed him under a parked car until the bullets stopped flying.

Shortly after Hitler became chancellor in 1933, the government passed a law that severely restricted the number of Jews in public schools. Weizenbaum had to transfer to a Jewish boys’ school. It was here that he first came into contact with the Ostjuden: Jews from eastern Europe, poor, dressed in rags, speaking Yiddish. To Weizenbaum, they may as well have come from Mars. Nevertheless, the time he spent with them gave him what he later described as “a new feeling of camaraderie”, as well as a “sensitivity for oppression”. He became deeply attached to one of his classmates in particular. “If fate had been different, I would have developed a homosexual love for this boy,” he later said. The boy “led me into his world”, the world of the Jewish ghetto around Berlin’s Grenadierstrasse. “They had nothing, owned nothing, but somehow supported each other,” he recalled.

One day, he brought the boy back to his family’s apartment. His father, himself once a poor Jewish boy from eastern Europe, was disgusted and furious. Jechiel was very proud, Weizenbaum remembered – and he had reason to be, given the literal and figurative distances he had travelled from the shtetl. Now his son was bringing the shtetl back into his home.

Alienated from his parents, richer than his classmates, and a Jew in Nazi Germany: Weizenbaum felt comfortable nowhere. His instinct, he said, was always to “bite the hand that fed me”, to provoke the paternal figure, to be a pain in the backside. And this instinct presumably proceeded from the lesson he learned from his father’s hostility toward him and bigotry toward the boy he loved: that danger could lie within one’s home, people, tribe.

. . . .

In Weizenbaum’s 1967 follow-up to his first article about Eliza, he argued that no computer could ever fully understand a human being. Then he went one step further: no human being could ever fully understand another human being. Everyone is formed by a unique collection of life experiences that we carry around with us, he argued, and this inheritance places limits on our ability to comprehend one another. We can use language to communicate, but the same words conjure different associations for different people – and some things can’t be communicated at all. “There is an ultimate privacy about each of us that absolutely precludes full communication of any of our ideas to the universe outside ourselves,” Weizenbaum wrote.

This was a very different perspective than that of Minsky or McCarthy. It clearly bore the influence of psychoanalysis. Here was the mind not as a meat machine but as a psyche – something with depth and strangeness. If we are often opaque to one another and even to ourselves, what hope is there for a computer to know us?

Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of “judgment” that are “deserving of credibility”. “A certain danger lurks there,” he wrote.

In the mid-1960s, this was as far as he was willing to go. He pointed to a danger, but didn’t dwell on it. He was, after all, a depressed kid who had escaped the Holocaust, who always felt like an impostor, but who had found prestige and self-worth in the high temple of technology. It can be hard to admit that something you are good at, something you enjoy, is bad for the world – and even harder to act on that knowledge. For Weizenbaum, it would take a war to know what to do next.

. . . .

In 1976, Weizenbaum published his magnum opus: Computer Power and Human Reason: From Judgment to Calculation. “The book has overwhelmed me, like being crashed over by the sea,” read a blurb from the libertarian activist Karl Hess. The book is indeed overwhelming. It is a chaotic barrage of often brilliant thoughts about computers. A glimpse at the index reveals the range of Weizenbaum’s interlocutors: not only colleagues like Minsky and McCarthy but the political philosopher Hannah Arendt, the critical theorist Max Horkheimer, and the experimental playwright Eugène Ionesco. He had begun work on the book after completing a fellowship at Stanford University, in California, where he enjoyed no responsibilities, a big office and lots of stimulating discussions with literary critics, philosophers and psychiatrists. With Computer Power and Human Reason, he wasn’t so much renouncing computer science as trying to break it open and let alternative traditions come pouring in.

The book has two major arguments. First: there is a difference between man and machine. Second: there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them. The book’s subtitle – From Judgment to Calculation – offers a clue as to how these two statements fit together.

For Weizenbaum, judgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.

And that would be fine, if we confined computers to tasks that only required calculation. But thanks in large part to a successful ideological campaign waged by what he called the “artificial intelligentsia”, people increasingly saw humans and computers as interchangeable. As a result, computers had been given authority over matters in which they had no competence. (It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one.) Seeing humans and computers as interchangeable also meant that humans had begun to conceive of themselves as computers, and so to act like them. They mechanised their rational faculties by abandoning judgment for calculation, mirroring the machine in whose reflection they saw themselves.

This had especially destructive policy consequences. Powerful figures in government and business could outsource decisions to computer systems as a way to perpetuate certain practices while absolving themselves of responsibility. Just as the bomber pilot “is not responsible for burned children because he never sees their village”, Weizenbaum wrote, software afforded generals and executives a comparable degree of psychological distance from the suffering they caused.

Letting computers make more decisions also shrank the range of possible decisions that could be made. Bound by an algorithmic logic, software lacked the flexibility and the freedom of human judgment. This helps explain the conservative impulse at the heart of computation. Historically, the computer arrived “just in time”, Weizenbaum wrote. But in time for what? “In time to save – and save very nearly intact, indeed, to entrench and stabilise – social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them.”

Link to the rest at The Guardian

AI Training for Independent Publishers

From Publishing Perspectives:

In England, the Independent Publishers Guild (IPG) has announced that it’s putting together an “artificial intelligence” training program for independent publishers, set for early October, ahead of Frankfurter Buchmesse.

There are to be workshops related to the course at the organization’s autumn conference, which is set for September 20 in London.

Much of the program, it appears, is expected to revolve around “key AI tools to explore their strengths and weaknesses and develop best practices for their day-to-day use in publishing. All content will be tailored to the needs of independent publishers and will be relevant for anyone working with text-based AI solutions, including editorial, sales, marketing and publicity teams.”

There is, however, a component planned around what’s quickly developing as the more problematic side of AI and the debates around it, this vested in “an overview of the latest developments in the generative AI landscape and a summary of commercial, legal, ethical, and other principles that will help publishers decide how it can be used effectively and responsibly.”

….

Such cultural and ethical controversies are the attention-grabbers in the debate around these technologies, of course, and in most instances, they so far have remained mired in “yes, but” debates and increasingly contentious concerns about such issues as the potential copyright infringement around large language model training.

Salisbury-based consultant George Walkley is putting the program together for IPG, and is quoted in the announcement, saying, “Generative AI is a fast-moving area of technology that offers enormous potential for publishers, but also real risks.

“The business, technical, and ethical issues it raises have dominated discussion this year, and the IPG has been characteristically forward thinking in identifying the need for training in this area.

“I’m looking forward to delivering this course and giving independent publishers the skills they need to make the best possible use of AI.”

Link to the rest at Publishing Perspectives

PG doubts the term “forward thinking” can be applied to traditional publishers except on a relative basis.

For perspective, PG is certain that a great many individuals working for publishers are decent and hardworking people.

However, he suggests that the model of the publisher being in charge of a book once the author hands off the manuscript is a relic of a bygone age. If an author is seeking someone to wrap her/him in a warm blanket for nurturing purposes, the author is looking in the wrong place.

A supportive spouse or close personal friend will do a much better job of nurturing without taking the majority of the money the book generates.

AI Writing

PG got to feeling a bit better as the day commenced and decided to try Microsoft’s browser, Edge, with Microsoft’s version of AI included.

PG’s prompt was as follows:

synopsis of Pride and Prejudice, including issues of class

Here’s what MS AI produced (with paragraph breaks inserted by PG):

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England. The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family.

Through a series of misunderstandings, misjudgments, and revelations, Elizabeth and Darcy overcome their pride and prejudice and eventually acknowledge their mutual love and respect. Along the way, they also confront the obstacles posed by their different social positions, as well as the interference of their families and friends.

The novel also depicts the lives and marriages of Elizabeth’s four sisters, Jane, Mary, Kitty, and Lydia, who each face their own challenges in finding happiness and security in a society that values wealth, rank, and propriety over character, affection, and individuality.

PG then asked MS AI to add more details to the plot and this is what happened (with PG’s manually-inserted paragraph breaks):

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England.

The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family. Elizabeth meets Darcy at a ball where he snubs her, while his friend Bingley is smitten with her sister Jane. Elizabeth is further prejudiced against Darcy when she hears from Wickham, a charming officer, that Darcy cheated him out of his inheritance.

Darcy, however, gradually falls in love with Elizabeth’s wit and beauty, and surprises her with a proposal of marriage, which she rejects, accusing him of separating Bingley and Jane, and mistreating Wickham.

Darcy then writes a letter to Elizabeth, explaining his actions and revealing Wickham’s true character. Elizabeth realizes that she has been too hasty and prideful in her judgments, and begins to admire Darcy’s sense of honor and generosity.

Meanwhile, Elizabeth’s younger sisters, Kitty and Lydia, are dazzled by the militia officers stationed in their town, especially by Wickham, who elopes with Lydia, putting the Bennet family’s reputation at risk. Darcy secretly intervenes to find the couple and force Wickham to marry Lydia, thus saving the Bennets from disgrace.

Elizabeth learns of Darcy’s involvement from his aunt, Lady Catherine de Bourgh, who visits Longbourn to dissuade Elizabeth from marrying Darcy, as she has planned a match between him and her own daughter. Elizabeth refuses to comply, and asserts her equal worth and right to love whom she chooses.

Darcy is delighted by Elizabeth’s spirited defense of their relationship, and renews his proposal, which she accepts with joy. Jane and Bingley also reunite and marry, and the novel ends with a happy resolution for the two couples who have overcome their pride and prejudice.

Then PG asked the AI to include more about the social consequences of Lydia’s elopement and added paragraph breaks.

Pride and Prejudice is a novel by Jane Austen that explores the complex social relations among the landed gentry in early 19th-century England. The main plot revolves around the romance between Elizabeth Bennet, a spirited and intelligent young woman, and Fitzwilliam Darcy, a wealthy and proud aristocrat who initially disdains Elizabeth and her family. Elizabeth meets Darcy at a ball where he snubs her, while his friend Bingley is smitten with her sister Jane. Elizabeth is further prejudiced against Darcy when she hears from Wickham, a charming officer, that Darcy cheated him out of his inheritance.

Darcy, however, gradually falls in love with Elizabeth’s wit and beauty, and surprises her with a proposal of marriage, which she rejects, accusing him of separating Bingley and Jane, and mistreating Wickham. Darcy then writes a letter to Elizabeth, explaining his actions and revealing Wickham’s true character. Elizabeth realizes that she has been too hasty and prideful in her judgments, and begins to admire Darcy’s sense of honor and generosity.

Meanwhile, Elizabeth’s younger sisters, Kitty and Lydia, are dazzled by the militia officers stationed in their town, especially by Wickham, who elopes with Lydia, putting the Bennet family’s reputation at risk.

Lydia’s elopement is a major crisis in the novel, as it threatens to ruin not only her own prospects of marriage, but also those of her sisters. In the society of that time, living together without marriage was considered a scandalous and disgraceful act that would result in social ostracism and loss of respectability. Lydia’s behavior also reflects poorly on her parents’ upbringing and exposes their lack of discipline and boundaries.

Lydia herself is oblivious to the consequences of her actions, and thinks only of her own importance and pleasure. She does not care about the feelings or opinions of her family or friends, nor does she show any gratitude or remorse for the trouble she has caused.

Darcy secretly intervenes to find the couple and force Wickham to marry Lydia, thus saving the Bennets from disgrace. Elizabeth learns of Darcy’s involvement from his aunt, Lady Catherine de Bourgh, who visits Longbourn to dissuade Elizabeth from marrying Darcy, as she has planned a match between him and her own daughter.

Elizabeth refuses to comply, and asserts her equal worth and right to love whom she chooses. Darcy is delighted by Elizabeth’s spirited defense of their relationship, and renews his proposal, which she accepts with joy. Jane and Bingley also reunite and marry, and the novel ends with a happy resolution for the two couples who have overcome their pride and prejudice.

PG thinks that, while he wouldn’t recommend the current iteration of Bing AI writing for a Ph.D. thesis, it would be a big help for a lot of middle and high school students.

This does raise interesting plagiarism and honesty issues regarding schoolwork produced by Bing. PG wonders whether using the Bing AI for a first draft, then making significant student-created modifications and additions to that draft before turning in the final paper, would still count as cheating.

PG was going to run the Bing AI output through Grammarly, which is also advertising AI features, but ran out of time and energy.

He does wonder why Microsoft’s AI has trouble inserting paragraph breaks into its output, however.
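
Bing’s chat interface can’t be scripted directly, but for anyone wanting to reproduce this sort of prompt programmatically, OpenAI’s 2023-era Python interface is the closest analogue. A minimal sketch, assuming the openai package (0.x interface) and an API key in the environment:

```python
# Sketch only: assumes OPENAI_API_KEY is set in the environment.
# Bing's own model is not publicly scriptable this way.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "synopsis of Pride and Prejudice, including issues of class",
    }],
)
print(response["choices"][0]["message"]["content"])
```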

Chat GPT on Why Self-Publishing Via KDP Earns More Money

PG gave ChatGPT the following prompt:

write a 500 word article about why authors who are publishing using Amazon KDP earn more money than authors who publish through established publishers

Here’s what showed up on his screen:

Title: The Amazon KDP Advantage: Why Self-Published Authors Earn More Money

Introduction

In the rapidly evolving world of publishing, authors now have more opportunities than ever to share their work with the world. Traditional publishing houses have long been seen as gatekeepers, controlling access to the literary market. However, with the advent of self-publishing platforms like Amazon Kindle Direct Publishing (KDP), authors now have the chance to take control of their careers and potentially earn more money. This article will explore the reasons why authors who publish using Amazon KDP often enjoy higher earnings compared to those who choose established publishers.

  1. Greater Royalties

One of the primary reasons why authors utilizing Amazon KDP have the potential to earn more money is the royalty structure. Traditional publishing contracts typically offer authors royalty rates ranging from 8% to 15% of the book’s cover price. Conversely, Amazon KDP allows authors to earn up to 70% in royalties for eBooks priced within a specific range. This substantial difference in royalty rates significantly impacts an author’s earnings, particularly in the digital age where eBook sales are flourishing.

  2. Direct Control and Flexibility

Self-published authors using Amazon KDP have complete control over their work, including pricing, cover design, and marketing strategies. This control enables authors to experiment with pricing strategies, run promotional campaigns, and make changes to their books at any time. Traditional publishing often involves relinquishing these decisions to publishing houses, limiting an author’s ability to adapt to market demands swiftly. The freedom and flexibility offered by KDP allows authors to be more responsive to readers’ preferences and optimize their earning potential.

  3. Global Reach and Market Accessibility

Amazon, being a global online marketplace, provides unparalleled reach to authors publishing through KDP. It has a massive customer base that extends across multiple countries, making it easier for authors to tap into international markets. Traditional publishers often focus on specific regions or territories, limiting an author’s potential readership and earning opportunities. With Amazon KDP, authors can target a global audience and potentially gain exposure to millions of readers worldwide, thereby increasing their earning potential.

  4. High Digital Demand and Accessibility

In recent years, the demand for eBooks has surged, fueled by the rise in e-readers, tablets, and smartphones. Amazon dominates the eBook market with its Kindle devices and app, providing a seamless reading experience to millions of users. As a result, self-published authors on KDP can capitalize on this growing digital demand. Additionally, eBooks have advantages like lower production costs and the ability to reach readers instantly, enhancing an author’s potential earnings.

  5. Enhanced Marketing and Discoverability

Amazon KDP offers various tools and promotional opportunities to help authors market their books effectively. These include Kindle Countdown Deals, Kindle Free Promotions, and the ability to participate in Kindle Unlimited, a subscription service for readers. These features can significantly boost an author’s visibility and attract new readers, leading to higher sales and earnings. In contrast, established publishers may not dedicate the same level of marketing support to every author in their catalog.

Conclusion

While traditional publishing still holds its value and allure, the rise of Amazon KDP has revolutionized the publishing landscape. Through KDP, authors have gained the ability to publish independently, retaining greater control over their work and potentially earning higher incomes. The platform’s favorable royalty structure, global reach, marketing tools, and the growing demand for digital books all contribute to the financial advantage enjoyed by authors who publish using Amazon KDP. As the publishing industry continues to evolve, it is becoming increasingly clear that self-publishing platforms like KDP can provide authors with lucrative opportunities and a path to success.

PG invites opinions, disagreements, alternate (polite) contentions, discussions of errors in reasoning, items that should have been discussed (within the 500 word article limit), etc., etc. PG does admit that Chat GPT went 99 words over the 500 word article request, but PG didn’t see a lot of wasted wordage.
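
On ChatGPT’s first point, the royalty arithmetic is easy to check. A quick sketch with illustrative prices – the numbers are assumptions, not market data, and KDP’s delivery-fee deduction is ignored:

```python
# Per-copy author earnings at the royalty rates the article cites.
# Prices here are assumptions for illustration only.

def per_copy_earnings(list_price: float, royalty_rate: float) -> float:
    """Author's cut of one sale at a given royalty rate."""
    return list_price * royalty_rate

traditional = per_copy_earnings(16.99, 0.10)  # 8-15% of cover price is typical
kdp_ebook = per_copy_earnings(4.99, 0.70)     # 70% inside KDP's $2.99-$9.99 band

print(f"Traditional: ${traditional:.2f} per copy")  # $1.70
print(f"KDP ebook:   ${kdp_ebook:.2f} per copy")    # $3.49
```

Even at a far lower list price, the hypothetical self-published copy earns roughly twice as much per sale.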

OpenAI to face world’s first defamation lawsuit over false claims made by ChatGPT

From IB Times:

Artificial intelligence company OpenAI could be on the verge of facing the world’s first defamation lawsuit over the content generated by its AI chatbot, ChatGPT. An Australian Mayor has threatened to file a defamation lawsuit against ChatGPT, accusing the AI-based chatbot of making some false claims.

. . . .

Regional Australian mayor Brian Hood is planning to sue OpenAI, the company behind the popular AI chatbot, if it doesn’t correct the false claims made by ChatGPT. Apparently, the AI bot said that the Australian mayor had served time in prison on bribery charges.

The politician is understandably worried about his reputation online since people have started asking him about ChatGPT’s false claims. If the mayor decides to deal with this in court, then OpenAI could end up facing its first-ever defamation lawsuit over the AI chatbot. To recap, Hood was elected Mayor of the town of Hepburn Shire in November 2022.

Hood became concerned about his online reputation when he found out that ChatGPT had falsely accused him of being involved in a foreign bribery scandal. This early 2000s scandal centered on a Reserve Bank of Australia subsidiary. Although Hood worked for the subsidiary during this time, he was the person who informed authorities about the aforesaid bribes, according to a report by GizmoChina.

. . . .

Aside from this, the report claims Hood never faced any criminal charges. Another report by News18 says Hood’s lawyers sent a letter of concern to the AI company on March 21, urging it to correct the misinformation. Hood’s lawyers have asked the company to either remove the misreported information or face a potential lawsuit.

OpenAI has 28 days to change the inaccurate information. The AI firm is currently mum on Hood’s legal letter. It will be interesting to see whether Hood decides to follow through with the lawsuit, becoming the first individual to sue the company behind ChatGPT over false claims. A partner at Hood’s law firm Gordon Legal, James Naughton, pointed out that Hood is an elected official and his “reputation is central to his role.”

Link to the rest at IB Times

European Publishers See Audiobooks, AI as Inevitabilities

From Publishers Weekly:

The Readmagine publishing conference ran from June 7-9 in Madrid, featuring an A-list of publishing pros.

. . . .

The boom in audiobook sales, which continues to transform the landscape across Europe, was a point of conversation. Enrico Turrin, deputy director of the Federation of European Publishers, told the audience that when all the data from FEP’s members is in, audiobooks are likely to account for as much as 4-5% of sales. This shift follows that in the U.S., where Michele Cobb, executive director of the Audiobook Publishers Association, noted that the U.S. has seen 11 years of double-digit growth. “Now, 53% of the U.S. population has listened to an audiobook,” Cobb said.

Cobb noted that there was a potential shift in prevailing business models, moving from a credits-based system to the all-you-can-consume model. “There is growth in both areas,” she said. Discovery too is changing as new consumers are increasingly finding books on TikTok – “you might call it AudioTok,” Cobb said. People are building online communities around listening to audiobooks, but like with print, “the biggest challenge is marketing” and many publishers are “using influencers to do their marketing.”

Kurt Thielen, CEO at audiobook publisher Zebralution, said that he was seeing the market shift in Germany, where the company is based. “We’re seeing people do an audiobook with everything, from books to magazines to personal brand content. It’s a fundamental change in the marketing approach.” Thielen said that short, 30-minute episodic serialization is becoming more prevalent.

Growth for audiobooks has been strong across most demographics, but the born-digital Gen Z – those 18-28 years old – are showing a preference for audiobooks over digital reading, said Shauna Moran, trends manager for Global Web Index, a U.K.-based consultancy. “68% of European Gen Zers say they prefer audio books to e-books,” and “28% regularly listen to podcasts.” Moran noted that the content Gen Z preferred was “engaging and goal oriented” – meaning self-help and publishers of DIY content might have an opportunity with short-form audio. As far as discovery goes, there was no surprise when she referenced the power of TikTok to persuade readers. “People want to be told what to [read],” she said.

Matt Locke, director of Storythings, a media consultancy from the U.K., concurred. He went on to say that future innovation in publishing would require some tangential thinking, insofar as the past patterns of consumption are evolving into a situation where people want “everything, everywhere all at once,” which has helped fuel the shift from “visual to listening.”

When it comes to innovation from inside the established publishing industry, HarperCollins’s Restivo-Alessi was part of a panel that tried to peer into the future. The panelists made several observations. They saw the possibility of famous authors, following in the footsteps of Swiss bestseller Joel Dicker and fantasy novelist Brandon Sanderson, going the self-publishing route; and authors merging genres, such as romance and fantasy becoming “romantasy,” which was already a big trend as of Frankfurt 2022.

Unsurprisingly, AI was a main point of discussion, with the panel referencing a variety of ways the industry has already been impacted, from the launch of Reedz, an AI-powered translation company based in Sweden; Bookwire’s incorporation of ChatGPT into its platform; and the launch of Sudowrite, AI-powered writing software. Restivo-Alessi quoted HC CEO Brian Murray’s speech at the London Book Fair earlier this year, noting that “AI is both an opportunity and a risk.” It offers publishers a chance to streamline some editorial and marketing tasks, such as the production of metadata and production of social media posts, but also threatens the integrity of “human-centric storytelling.”

Link to the rest at Publishers Weekly

As PG has said before, he sees AI voices replacing humans very quickly, but takes no pleasure in the disruption of the lives and businesses of human voice actors.

That said, AI narration will provide a great many benefits to publishers and self-publishers. AI will do the job much faster, enabling indie authors to publish ebooks, hard copy and audiobooks at the same time, or to stage the release of each version if that appears to be a better way of maximizing revenue and profits.

PG expects to see creators of AI voices wanting to monetize their investments in building and perfecting text-to-speech, but he predicts that a whole lot of competitors will show up in a hurry, and that at least some systems designers, perhaps academics, will make a text-to-speech AI widely available at no charge or at much lower prices than commercial versions of the service.

One interesting issue PG predicts will surface relatively quickly is the squeezing out of human narrators by AI narrators, which offer both cost and time-to-market advantages.

ChatGPT maker OpenAI faces a lawsuit over how it used people’s data

From The Washington Post:

A California-based law firm is launching a class-action lawsuit against OpenAI, alleging the artificial-intelligence company that created popular chatbot ChatGPT massively violated the copyrights and privacy of countless people when it used data scraped from the internet to train its tech.

The lawsuit seeks to test out a novel legal theory — that OpenAI violated the rights of millions of internet users when it used their social media comments, blog posts, Wikipedia articles and family recipes. Clarkson, the law firm behind the suit, has previously brought large-scale class-action lawsuits on issues ranging from data breaches to false advertising.

The firm wants to represent “real people whose information was stolen and commercially misappropriated to create this very powerful technology,” said Ryan Clarkson, the firm’s managing partner.

. . . .

The lawsuit goes to the heart of a major unresolved question hanging over the surge in “generative” AI tools such as chatbots and image generators. The technology works by ingesting billions of words from the open internet and learning to build inferences between them. After consuming enough data, the resulting “large language models” can predict what to say in response to a prompt, giving them the ability to write poetry, have complex conversations and pass professional exams. But the humans who wrote those billions of words never signed off on having a company such as OpenAI use them for its own profit.

“All of that information is being taken at scale when it was never intended to be utilized by a large language model,” Clarkson said. He said he hopes to get a court to institute some guardrails on how AI algorithms are trained and how people are compensated when their data is used.

. . . .

The legality of using data pulled from the public internet to train tools that could prove highly lucrative to their developers is still unclear. Some AI developers have argued that the use of data from the internet should be considered “fair use,” a concept in copyright law that creates an exception if the material is changed in a “transformative” way.

The question of fair use is “an open issue that we will be seeing play out in the courts in the months and years to come,” said Katherine Gardner, an intellectual-property lawyer at Gunderson Dettmer, a firm that mostly represents tech start-ups. Artists and other creative professionals who can show their copyrighted work was used to train the AI models could have an argument against the companies using it, but it’s less likely that people who simply posted or commented on a website would be able to win damages, she said.

“When you put content on a social media site or any site, you’re generally granting a very broad license to the site to be able to use your content in any way,” Gardner said. “It’s going to be very difficult for the ordinary end user to claim that they are entitled to any sort of payment or compensation for use of their data as part of the training.”

Link to the rest at The Washington Post
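
The Post’s one-line description of how these models work – ingest text, learn to predict what comes next – can be made concrete with a toy example. A bigram model a few lines long performs the same predict-the-next-token trick, minus the billions of parameters:

```python
import random
from collections import defaultdict

# "Train" a toy bigram model: for each word, remember what followed it.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation: repeatedly pick a word seen after the current one."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Real large language models replace the lookup table with a transformer over token probabilities, which is why the training data matters so much: the model is, in effect, a compressed statistical picture of it.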

AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom

From CNet:

My 12-year-old nephew’s bedroom is a shrine to monsters. Intricate Lego dragons loom ominously atop bookshelves jam-packed with reference works for the handmade creatures he painstakingly crafts out of clay. Then there are the paintings. Dozens of them. Plastered over the walls. Giant squid, kaiju, dinosaurs, hulking tentacled beasts of his own invention.

His parents have gone to great lengths to nurture this burgeoning creative spirit. They make stop-motion movies as a family. His dad is teaching him 3D art on the computer. Together they’re learning to use Unity, the design tool behind video games like Hollow Knight, Cuphead and Pokemon Go.

But lately his dad’s been second-guessing those decisions. The reason? AI.

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.

The whole family is wrestling with the impacts of AI. His mother, my sister-in-law, is a high school science teacher. She’s tackling even bigger issues. She’s in the process of teaching an entire generation of children to interact with technology that could transform the workplace over the coming years.

The questions are many. How do we deal with the immediate issues of cheating and plagiarism? How do educators prepare children for a future working alongside AI?

And how do teachers, institutions and governments find room to plan for the future?

Reading, writing and AI

ChatGPT, an artificial intelligence chatbot developed by OpenAI, has been immediately transformative. And terrifying. Trained on almost incalculable swaths of existing text, ChatGPT takes prompts from users and generates surprisingly sophisticated answers. If, for instance, you ask for a chocolate cake recipe, it provides all the steps. Using ChatGPT can feel like conversing online with a human being who has access to endless repositories of knowledge.

Link to the rest at CNet and thanks to F. for the tip.

Microsoft Designer

From Microsoft:

Creativity is more important to individuals than ever before. This reflects a trend that has added more than 165 million creators to the global creator economy in just the last three years. As a result, people demand tools that help them to be both productive and creative. Microsoft 365 strives to empower individuals to achieve great things by constantly evolving our products to meet their changing needs. We continue to demonstrate this commitment with new tools that help unleash creativity and imagination by enabling any type of digital ideation and creation—no professional skills required. Today, we’re excited to announce we’re removing the waitlist and adding an expanded set of features to the Microsoft Designer preview. With new AI technology at the core, Microsoft Designer simplifies the creative journey by helping you get started quickly, augment creative workflows, and overcome creative roadblocks.

From ideation to creation, Microsoft Designer is built to assist you at each stage of the creative process. As we originally announced in October 2022, Microsoft Designer can help quickly create stunning visuals, social media posts, invitations, and more using cutting-edge generative AI technology. Since October, the AI models have steadily improved, and we’ve worked to weave these powerful capabilities throughout the Designer canvas in even more delightful ways while keeping you in control. Moreover, for those moments of inspiration that strike while browsing the web, Designer is one click away within the Microsoft Edge sidebar. The seamless integration of Designer in Edge marks the first step in this journey. We’re excited for future integrations to come.

Spark new ideas and unleash creativity in less time with Microsoft Designer

Designer leverages cutting-edge generative AI technology to assist and empower every person to get started on new ideas, create unique and high-quality graphics in less time, and uplevel content, with or without a background in design.

. . . .

Get started with your ideas in Designer by simply describing what you want. Powered by generative AI technology, get one-of-a-kind images, including accompanying text and visuals, and design suggestions to meet your needs.

Link to the rest at Microsoft and thanks to F. for the tip.

PG generated a video Instagram post for Mrs. PG’s latest book below. PG didn’t try to persuade MS Designer to create its version of the F.O., however, and he didn’t find out how to insert a link to her Amazon book page or make the video run automatically.

After the first video, a series of other spinoffs based on the first one appeared. PG copied and pasted one below.

And another variation.

And another:

And one last Instagram Post:

Our early-adopters index examines how corporate America is deploying AI

From The Economist:

Technology stocks are having a bumper year. Despite a recent wobble, the share price of the Big Five—Alphabet, Amazon, Apple, Meta and Microsoft—has jumped by 60% since January, when measured in an equally weighted basket (see chart 1). The price of shares in one big chipmaker, Nvidia, has tripled and in another, AMD, almost doubled. Their price-to-earnings ratios (which measure how much the markets think a company is worth relative to its profits) are ten times that of the median firm in the S&P 500.

The main reason for the surge is the promise of artificial intelligence (AI). Since the launch in November of ChatGPT, an AI-powered chatbot, investors have grown ever more excited about a new wave of technology that can create human-like content, from poems and chunks of code to video footage. This “generative AI” relies on large language models which are “trained” on big chunks of the internet. Many think the technology could reshape whole industries and have as much impact on business and society as smartphones or cloud computing. Firms that can make the best use of the technology, the thinking goes, will be able to expand profit margins and gain market share.

Corporate bosses are at pains to demonstrate how they are adopting AI. On April 4th Jamie Dimon, JPMorgan Chase’s boss, said his bank had 600 machine-learning engineers and had put AI to work on more than 300 different internal applications. David Ricks, the boss of Eli Lilly, has said that the pharmaceutical giant has more than 100 projects on the go using AI. Howard Schultz, who recently stood down as boss of Starbucks, had said he planned to invest $1bn to use AI to figure out how to make the perfect vegan breakfast sandwich to accompany the coffee the firm sells.

Company case studies reveal only part of the picture. To get a broader sense of which companies and industries are adopting AI, The Economist examined data on all the firms in the S&P 500. We looked at five measures: the share of issued patents that mention AI; venture-capital (VC) activity targeting AI firms; acquisitions of AI firms; job listings citing AI; and mentions of the technology on earnings calls. Because other types of AI could bring benefits for business, our analysis captures activity for all AI, not just the generative wave. The results show that even beyond tech firms the interest in AI is widespread and growing fast. Moreover, clear leaders and laggards are already emerging.

Start with the growing interest. AI expertise already seems to be spreading widely. About two-thirds of the firms in our universe have placed a job ad mentioning AI skills in the past three years, says PredictLeads, a research firm. Of those that did, today 5.3% of their listed vacancies mention AI, up from a three-year average of 2.5%. In some industries the rise is more dramatic (see chart 2). In retail, that share has jumped from 3% to 11%, while among chipmakers that proportion grew from 9% to 19%.

. . . .

The number of AI-related patents trended up between 2020 and 2022, based on data provided by Amit Seru of Stanford University. PitchBook, another research firm, concludes that in 2023 some 25% of venture deals by S&P 500 firms involved AI startups, up from 19% in 2021. GlobalData, also a research firm, finds that about half the firms scrutinised have talked about AI in their earnings calls since 2021 and that in the first quarter of this year the number of times AI was mentioned in the earnings calls of America Inc more than doubled compared with the previous quarter. Roughly half have been granted a patent relating to the technology between 2020 and 2022.

The use of generative AI may eventually become even more common than other sorts of AI. That is because it is good at lots of tasks essential to running a firm. A report by McKinsey, a consultancy, argues that three-quarters of the expected value created by generative AI will come in four business functions—research and development, software engineering, marketing and customer service. To some extent, all these operations are at the core of most big businesses. Moreover, any large company with internal databases used to guide employees could find a use for an AI-powered chatbot. Morgan Stanley, a bank, is building an AI assistant that will help its wealth managers find and summarise answers from a huge internal database. SLB, an oil-services company, has built a similar assistant to help service engineers.

While the adoption of AI is happening among many firms, some are more enthusiastic than others. Ranking all the companies using each metric and then taking an average produces a simple scoring system. Those at the top seem to be winning over investors. Since the start of the year, the median share price of the top 100 has risen by 11% while for the lowest-scoring quintile it has not moved at all.

Link to the rest at The Economist
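
The scoring method the article describes – rank every firm on each metric, then average the ranks – is simple to reconstruct. A sketch with made-up figures, since the newspaper’s underlying data isn’t public:

```python
from collections import defaultdict

# Made-up figures for three hypothetical firms across three of the five metrics.
metrics = {
    "ai_patent_share":  {"AcmeCo": 0.12, "BetaInc": 0.30, "GammaLLC": 0.05},
    "ai_job_ad_share":  {"AcmeCo": 0.08, "BetaInc": 0.19, "GammaLLC": 0.02},
    "ai_call_mentions": {"AcmeCo": 14,   "BetaInc": 41,   "GammaLLC": 3},
}

def average_rank(metrics: dict) -> dict:
    """Rank firms on each metric (1 = most AI activity), then average the ranks."""
    totals = defaultdict(float)
    for scores in metrics.values():
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, firm in enumerate(ordered, start=1):
            totals[firm] += rank
    return {firm: total / len(metrics) for firm, total in totals.items()}

# Lower average rank = stronger AI adopter on these measures.
print(sorted(average_rank(metrics).items(), key=lambda kv: kv[1]))
# [('BetaInc', 1.0), ('AcmeCo', 2.0), ('GammaLLC', 3.0)]
```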

AI models like ChatGPT and GPT-4 are acing everything from the bar exam to AP Biology. Here’s a list of difficult exams both AI versions have passed.

From Insider:

The Uniform Bar Exam

While GPT-3.5, which powers ChatGPT, only scored in the 10th percentile of the bar exam, GPT-4 scored in the 90th percentile with a score of 298 out of 400, according to OpenAI. 

The threshold for passing the bar varies from state to state. In New York, though, exam takers need a score of 266, around the 50th percentile, to pass, according to The New York State Board of Law Examiners.

The SAT

GPT-4 aced the SAT Reading & Writing section with a score of 710 out of 800, which puts it in the 93rd percentile of test-takers, according to OpenAI. GPT-3.5, on the other hand, scored in the 87th percentile with a score of 670 out of 800.

For the math section, GPT-4 earned a 700 out of 800, ranking among the 89th percentile of test-takers, according to OpenAI. GPT-3.5 scored in the 70th percentile, OpenAI noted.

In total, GPT-4 scored 1410 out of 1600 points. The average score on the SAT in 2021 was 1060, according to a report from the College Board.

GRE

GPT-4’s scores on the Graduate Record Examinations, or GRE, varied widely according to the sections.

While it scored in the 99th percentile on the verbal section of the exam and in the 80th percentile of the quantitative section of the exam, GPT-4 only scored in the 54th percentile of the writing test, according to OpenAI.

GPT-3.5 also scored in the 54th percentile of the writing test, and earned marks within the 25th percentile and 63rd percentiles for the quantitative and verbal sections respectively, according to OpenAI.

. . . .

AP Exams

GPT-4 has passed a host of Advanced Placement examinations, exams for college-level courses taken by high school students that are administered by the College Board.

Scores range from 1 to 5, with scores of 3 and above generally considered passing grades, according to the College Board.

GPT-4 received a 5 on AP Art History, AP Biology, AP Environmental Science, AP Macroeconomics, AP Microeconomics, AP Psychology, AP Statistics, AP US Government and AP US History, according to OpenAI.

On AP Physics 2, AP Calculus BC, AP Chemistry, and AP World History, GPT-4 received a 4, OpenAI said.

Link to the rest at Insider

Talking about AI in human terms is natural—but wrong

From The Economist:

My love’s like a red, red rose. It is the east, and Juliet is the sun. Life is a highway, I wanna ride it all night long. Metaphor is a powerful and wonderful tool. Explaining one thing in terms of another can be both illuminating and pleasurable, if the metaphor is apt.

But that “if” is important. Metaphors can be particularly helpful in explaining unfamiliar concepts: imagining the Einsteinian model of gravity (heavy objects distort space-time) as something like a bowling ball on a trampoline, for example. But metaphors can also be misleading: picturing the atom as a solar system helps young students of chemistry, but the more advanced learn that electrons move in clouds of probability, not in neat orbits as planets do.

What may be an even more misleading metaphor—for artificial intelligence (AI)—seems to be taking hold. AI systems can now perform staggeringly impressive tasks, and their ability to reproduce what seems like the most human function of all, namely language, has ever more observers writing about them. When they do, they are tempted by an obvious (but obviously wrong) metaphor, which portrays AI programmes as conscious and even intentional agents. After all, the only other creatures which can use language are other conscious agents—that is, humans.

. . . .

Take the well-known problem of factual mistakes in potted biographies, the likes of which ChatGPT and other large language models (LLMs) churn out in seconds. Incorrect birthplaces, non-existent career moves, books never written: one journalist at The Economist was alarmed to learn that he had recently died. In the jargon of AI engineers, these are “hallucinations”. In the parlance of critics, they are “lies”.

“Hallucinations” might be thought of as a forgiving euphemism. Your friendly local AI is just having a bit of a bad trip; leave him to sleep it off and he’ll be back to himself in no time. For the “lies” crowd, though, the humanising metaphor is even more profound: the AI is not only thinking, but has desires and intentions. A lie, remember, is not any old false statement. It is one made with the goal of deceiving others. ChatGPT has no such goals at all.

Humans’ tendency to anthropomorphise things they don’t understand is ancient, and may confer an evolutionary advantage. If, on spying a rustling in the bushes, you infer an agent (whether predator or spirit), no harm is done if you are wrong. If you assume there is nothing in the undergrowth and a leopard jumps out, you are in trouble. The all-too-human desire to smack or yell at a malfunctioning device comes from this ingrained instinct to see intentionality everywhere.

It is an instinct, however, that should be overridden when writing about AI. These systems, including those that seem to converse, merely take input and produce output. At their most basic level, they do nothing more than turn strings like 0010010101001010 into 1011100100100001 based on a set of instructions. Other parts of the software turn those 0s and 1s into words, giving a frightening—but false—sense that there is a ghost in the machine.

Link to the rest at The Economist

YouTube is testing an AI-powered dub tool to translate creators’ videos

From Mashable:

YouTube is working on a potential new way to help creators reach a larger audience: AI-powered multi-language voiceover dubbing for their video content.

The tool is powered by Aloud, an AI dubbing company that is part of Google’s own Area 120 startup incubator.

Aloud first provides the creator with a transcription of their video. The user can then edit the transcribed text as they see fit. After the creator signs off on the transcription, Aloud creates an AI voiceover dub for the video. Aloud currently provides this service for free on its website, which is separate from the tool YouTube is building into its platform, but there’s currently a waitlist.

. . . .

A YouTube spokesperson told The Verge that the company has already been testing out the AI dubbing tool with “hundreds” of creators.

Link to the rest at Mashable
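
Aloud itself is waitlisted and has no public API, but the transcribe-edit-synthesize workflow the article describes can be sketched with open-source stand-ins: openai-whisper for transcription and gTTS for the synthetic voice. The translation step is a hypothetical placeholder, and none of this is Aloud’s own code:

```python
import whisper         # pip install openai-whisper
from gtts import gTTS  # pip install gTTS

def translate_text(text: str, target_lang: str) -> str:
    # Hypothetical placeholder: a real pipeline would call a translation model.
    raise NotImplementedError("plug a translation service in here")

# Step 1: transcribe the creator's original audio.
model = whisper.load_model("base")
transcript = model.transcribe("creator_video.wav")["text"]

# Step 2: in Aloud's workflow, the creator reviews and edits the transcript here.

# Step 3: translate the approved transcript, then synthesize the dub.
dubbed_text = translate_text(transcript, target_lang="es")
gTTS(dubbed_text, lang="es").save("dub_es.mp3")
```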

Big tech bets billions on machine learning tools

From Tech Crunch:

If it wasn’t obvious already, the competitive landscape in AI — particularly the subfield known as generative AI — is red-hot. And it’s getting hotter. This week, Dropbox launched its first corporate venture fund, Dropbox Ventures, which the company said would focus on startups building AI-powered products that “shape the future of work.” Not to be outdone, AWS debuted a $100 million program to fund generative AI initiatives spearheaded by its partners and customers.

There’s a lot of money being thrown around in the AI space, to be sure. Salesforce Ventures, Salesforce’s VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. And Accenture and PwC have announced that they plan to invest $3 billion and $1 billion, respectively, in AI.

But one wonders whether money is the solution to the AI field’s outstanding challenges.

In an enlightening panel during a Bloomberg conference in San Francisco this week, Meredith Whittaker, the president of secure messaging app Signal, made the case that the tech underpinning some of today’s buzziest AI apps is becoming dangerously opaque. She gave an example of someone who walks into a bank and asks for a loan.

That person can be denied for the loan and have “no idea that there’s a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn’t creditworthy,” Whittaker said. “I’m never going to know [because] there’s no mechanism for me to know this.”

It’s not capital that’s the issue. Rather, it’s the current power hierarchy, Whittaker says.

Here are the other AI headlines of note from the past few days:

  • DeepMind’s AI controls robots: DeepMind says that it has developed an AI model, called RoboCat, that can perform a range of tasks across different models of robotic arms. That alone isn’t especially novel. But DeepMind claims that the model is the first to be able to solve and adapt to multiple tasks and do so using different, real-world robots.
  • Robots learn from YouTube: Speaking of robots, CMU Robotics Institute assistant professor Deepak Pathak this week showcased VRB (Vision-Robotics Bridge), an AI system designed to train robotic systems by watching a recording of a human. The robot watches for a few key pieces of information, including contact points and trajectory, and then attempts to execute the task.

Link to the rest at Tech Crunch

An Experiment with ChatGPT

Write the first paragraph for a children’s book that includes a little girl, a unicorn and a lazy wolf.

Once upon a time in the enchanting land of Fairytale Meadows, lived a spirited little girl named Lily. Lily had golden curls that danced in the wind, and her eyes sparkled with curiosity. One day, while exploring the magical forest, she stumbled upon a magical unicorn named Sparkle, whose shimmering mane glowed with all the colors of the rainbow. Little did they know that their adventure was about to get even more exciting, for lurking behind a moss-covered tree was a lazy but lovable wolf named Wally, who preferred napping to chasing his prey.

What magic powers did sparkle, a magic unicorn possess?

Sparkle, the magical unicorn, possessed a wondrous array of enchanting powers that filled the hearts of all who encountered her with awe. First and foremost, her horn sparkled with an iridescent glow, allowing her to channel the purest magic in the realm. Sparkle possessed the ability to grant wishes, turning dreams into reality with a single touch of her shimmering horn. She could also create dazzling trails of rainbow-colored stardust, lighting up the night sky with a celestial spectacle. With her gentle presence, Sparkle could communicate with animals, bridging the gap between the magical and natural worlds. Her magic powers were boundless, bringing joy, love, and hope to all who crossed paths with the extraordinary unicorn.

Why was Wally the wolf so lazy?

Wally the wolf was known throughout Fairytale Meadows for his undeniable laziness. Unlike other wolves who roamed the forest with vigor and determination, Wally had a penchant for relaxation and a strong aversion to exerting himself unnecessarily. He preferred the comfort of his cozy den, where he would spend hours curled up in a fuzzy ball, snuggled amongst soft moss and leaves. Wally believed that life was too short to waste on constant running and hunting, and he saw no need to chase after his meals when there were abundant berries and fruits available nearby. His laziness, though unconventional, brought a sense of tranquility and balance to the bustling woodland, and his gentle nature made him a beloved character among the creatures of Fairytale Meadows.

What did Sparkle do to get Wally to do some useful work for Lily?

As Sparkle and Lily ventured deeper into Fairytale Meadows, they encountered a challenge that required Wally’s assistance. Knowing Wally’s affinity for relaxation, Sparkle cleverly devised a plan to motivate the lazy wolf. With a mischievous twinkle in her eye, Sparkle whispered a magical spell that awakened Wally’s sense of adventure and duty. The spell cast a spell of enchantment upon Wally, filling him with newfound energy and a sense of purpose. Suddenly, Wally’s laziness transformed into determination, and he eagerly offered his assistance to Lily and Sparkle. Recognizing the importance of their quest, Wally became a loyal companion, ready to help Lily in any way he could. With Sparkle’s magical touch and a sprinkle of motivation, Wally discovered the joy of lending a helping paw, embarking on a journey that would unveil his hidden potential and forge a lasting bond with his newfound friends.

Did Sparkle have to remind Wally not to be lazy?

While Sparkle appreciated Wally’s newfound motivation and willingness to help, she occasionally had to remind him to resist his natural inclination for laziness. Sparkle understood that change takes time, and habits can be hard to break. So, whenever Wally’s laziness threatened to resurface, Sparkle would gently nudge him with her velvety muzzle, reminding him of the importance of their mission and the joy they could bring to others by working together. With each gentle reminder, Wally would shake off his sluggishness and recommit himself to being an active participant in their adventures. Sparkle’s encouragement and patient guidance helped Wally overcome his lazy tendencies, inspiring him to become a more dependable and enthusiastic companion for Lily and Sparkle.

What dangers threatened Lily as she walked around Fairyland Meadows?

As Lily traversed the enchanting Fairyland Meadows, she encountered a few perils that tested her bravery and quick thinking. Deep within the forest, Lily came across treacherous thorny thickets that seemed to close in on her from all sides, challenging her to find a way through without getting entangled. She also encountered mischievous sprites that played tricks and created illusions, making it difficult for Lily to find her way home. At times, Lily faced sudden storms with gusty winds that threatened to whisk her away, and she had to seek shelter and find her footing amidst the chaos. Additionally, there were misbehaving magical creatures that, although harmless at heart, could unintentionally cause a bit of chaos and mischief. But with the help of Sparkle’s magic and Wally’s newfound diligence, Lily was able to navigate these dangers and emerge triumphant, growing stronger and more resilient with each challenge she faced.

What is the end of the story about Lily, Sparkle and Wally in Fairyland Meadows?

In the end, after overcoming numerous obstacles and forging a remarkable bond, Lily, Sparkle, and Wally emerged as heroes of Fairyland Meadows. Through their collective courage, kindness, and determination, they brought harmony and joy to the enchanted realm. Lily’s unwavering spirit and innate curiosity inspired others to embrace adventure and explore the wonders of the world. Sparkle’s magical powers continued to grant wishes, spreading hope and fulfilling dreams throughout the land. Wally, having discovered his true potential, became a trusted guardian and loyal friend, always ready to lend a helping paw. Together, they brought laughter, love, and enchantment to every corner of Fairyland Meadows, leaving a lasting legacy of friendship and magic. And as the sun set on their final adventure, Lily, Sparkle, and Wally knew that their incredible journey was just the beginning of a lifetime filled with extraordinary tales and everlasting friendships in the magical realm they called home.

Then PG asked ChatGPT to regenerate a response to the final prompt he used to create the paragraph above: What is the end of the story about Lily, Sparkle and Wally in Fairyland Meadows?

In the grand finale of Lily’s adventure in Fairyland Meadows, she, Sparkle, and Wally united their strengths and talents to overcome the final and most formidable obstacle. A wicked sorceress, jealous of the trio’s bond and the joy they spread, cast a powerful spell of darkness, threatening to shroud the entire land in eternal gloom. Undeterred by the daunting task, Lily’s unwavering determination inspired Sparkle and Wally to tap into their true potential. Sparkle summoned her most potent magic, creating a radiant barrier of light that pushed back the darkness, while Wally’s swift agility and keen senses helped guide them through the treacherous path. Together, they confronted the sorceress, their unity and friendship shining as a beacon of hope. With Lily’s pure heart and unwavering belief in the power of goodness, the sorceress’s spell was shattered, and Fairyland Meadows was bathed in a glorious resurgence of light and joy. As a testament to their extraordinary bond, Lily, Sparkle, and Wally were hailed as heroes, forever celebrated for their bravery and their ability to bring light and love to all corners of Fairyland Meadows. From that day forward, their friendship flourished, and they continued to embark on magical adventures, spreading happiness and enchantment wherever they roamed.

PG concludes that, even with the help of artificial intelligence, he’s not likely to be the next Dr. Seuss, Beatrix Potter, E.B. White or J.K. Rowling.

Paul McCartney Uses AI to Create the Final Beatles Song

From CNet:

All you need is love, but artificial intelligence can sometimes help too. Paul McCartney used AI to extract the late John Lennon’s voice from an old demo tape to create what he calls the final Beatles record, he told BBC Radio 4 on Tuesday. Though McCartney didn’t name the song, the BBC reports it’s “likely to be a 1978 Lennon composition called Now and Then.”

“We just finished it up, and it’ll be released this year,” McCartney said.

McCartney received the demo tape in 1994 from Lennon’s widow, Yoko Ono, on a cassette marked “For Paul.” It was one of several songs Lennon had recorded shortly before he was murdered in 1980. Lennon had sat at a piano in his New York apartment at the Dakota and recorded the “lo-fi and embryonic” tracks onto a boom box, according to the BBC.

The surviving members of the Beatles tried to record the song in the mid-1990s, but McCartney told Q Magazine that band member George Harrison objected due to the poor quality of Lennon’s vocal. The song reportedly also had persistent background buzz from noise in Lennon’s apartment.

“It didn’t have a very good title, it needed a bit of reworking, but it had a beautiful verse and it had John singing it,” McCartney told the magazine.

He said an AI program was used to separate Lennon’s vocals from the background noise to clean up the sound. The project was inspired by McCartney’s work with Lord of the Rings director Peter Jackson on the 2021 Beatles documentary series, The Beatles: Get Back, which is streaming on Disney Plus.

Jackson was able to “extricate John’s voice from a ropey little bit of cassette,” McCartney said on BBC Radio 4. “We had John’s voice and a piano and he could separate them with AI. They tell the machine, ‘That’s the voice. This is a guitar. Lose the guitar.'”

. . . .

“What music lover cannot be fascinated and intrigued by today’s announcement?” AI expert and musician Dr. Martin Clancy said in a statement about the new Beatles track. “This is a textbook example of how we should be using this technology to enable and enhance human creativity rather than in replicating endless fake zombie knockoffs. I hope this will show the music community that AI need not be feared so long as we take steps now to tickle the technology’s potential and reimagine its application.”

Link to the rest at CNet and thanks to F. for the tip.
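
The “lose the guitar” step McCartney describes is audio source separation. The custom system Peter Jackson’s team built isn’t public, but open-source separators such as Spleeter illustrate the same idea, assuming the package is installed:

```python
from spleeter.separator import Separator  # pip install spleeter

# '2stems' splits a mixed recording into vocals and accompaniment.
separator = Separator('spleeter:2stems')
separator.separate_to_file('demo_tape.wav', 'separated/')
# Produces separated/demo_tape/vocals.wav and .../accompaniment.wav
```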

AI Apps

F. provided PG with the link to the following video, which explores ten different AI add-on programs. You may wish to postpone your viewing if you’re at work, or put on some headphones and turn your screen so no one can see it.

How AI is Changing the Book Publishing World

From GoodEreader:

Artificial Intelligence (AI) has touched almost all aspects of life, and book publishing is no exception. There has been news that authors are spamming out Kindle Books using ChatGPT, a famous AI content generator. Ink AI has introduced a ChatGPT e-book generator that can create full-length e-books quickly and easily. In response, publishers are taking initiatives to address the usage of AI in book writing. Ongoing trends suggest, then, that AI is about to change the book publishing industry.

But it is not just generative AI that might shape the future of book publishing. The industry is likely to witness many new technologies and changes that may augment its operations with time. Once AI holds the edge, things could change dramatically in many areas.

Copyediting

While there’s still a lot to explore in AI, the technology may not yet deliver the quality of editing that publishers expect. But the suggestions these AI tools offer will definitely be useful, as ChatGPT’s grammar and logical-expression checking, which shows pretty decent results, already demonstrates.

Yes, there will be some gaps. For example, the AI may miss out on in-depth checking. Also, it might not identify the standout manuscripts amidst the ocean of books. However, such lapses are to be expected from professional editors as well.

Entertainment

Gone are the days when people were limited to conventional books; nowadays, technology has certainly taken things beyond mere print. This could be justified by the incredible popularity of audiobooks and how they are dominating people’s reading routines. Additionally, movies and videos are game changers too. In the future, AI may gradually transform books into revenue-producing mediums so they can be more accessible and entertaining for readers. You can get an idea about it through the role of AI in the evolution of e-books.

Marketing and Competitive Analysis

AI can act as a trump card in marketing through in-depth competitive analysis. It can give the industry real-time data on marketing trends, popular books, and the opportunities the sector has lately been missing. It also promises more direct interaction between authors and audiences: authors will be able to identify the ideal readers for their titles, and readers will find it easier to pinpoint books that match their interests.

Link to the rest at GoodEreader

After PG put up this post, one of the comments, from Thad McIlroy, pointed out that GoodEreader cribbed most of its article from one he had posted on his blog, The Future of Publishing.

PG checked out Thad’s blog, found it very interesting, and subscribed so he could be alerted to new posts.

Japanese government issues statement on AI and copyright regulation

From GameReactor:

Japan’s Agency for Cultural Affairs, an agency under the country’s Ministry of Education, Culture, Sports, Science and Technology, has issued a statement setting out new guidelines on the use of artificial intelligence and its purposes during a seminar on AI art and copyright.

The document states that AI may be used freely for educational, research and other non-commercial purposes, but not where there is an economic benefit or commercial purpose. AI-generated art that uses another artist’s work and is put to commercial rather than personal use may be considered copyright infringement, and the copyright holder can sue.

This also applies to AI that learns from or copies an artist’s style: if this is done without the artist’s permission, the copyright holder can claim damages or seek an injunction for copyright infringement, and the infringer may even face criminal penalties.

Although it initially appeared that Japan was going to establish much more flexible legislation than Europe with regard to the use of artificial intelligence, it seems that they have reconsidered their options and the risk to creators and artists.

Link to the rest at GameReactor

Five key takeaways from the House Judiciary Committee hearing on AI and copyright law

From Verdict:

In light of several high-profile lawsuits in recent months, countries’ legislative frameworks are finally beginning to grapple with the challenges that generative artificial intelligence (AI) poses for copyright law.

In January 2023, Getty Images announced a lawsuit against Stability AI in London’s High Court of Justice, alleging that the Stable Diffusion image generator infringed Getty’s copyrighted photographs and trademarks.

And, in February, the award-winning visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action complaint in a US District Court in California against defendants Stability AI, Midjourney and DeviantArt, alleging that their works were used without permission as part of the companies’ AI training set.

Earlier, in November 2022, a group of anonymous programmers filed a class action lawsuit against GitHub, a Microsoft subsidiary, and OpenAI, alleging unauthorised and unlicensed use of the programmers’ software code to develop the defendants’ AI coding tools, Codex and Copilot.

Recognising a need for action, the House Judiciary Committee in the US has held a hearing, examining the intersection of generative AI and copyright law. The hearing, which took place on 17 May 2023, followed the Senate hearing on AI oversight the previous day, in which OpenAI CEO Sam Altman took the stand. What were the five key takeaways from the witness testimony?

1. The use of copyrighted works to train AI models is quintessential fair use.

Sy Damle, Latham & Watkins LLP and former General Counsel of the US Copyright Office, argued that “the use of a copyrighted work to learn unprotectable facts and use those facts to create products that do not themselves infringe copyright is quintessential fair use”, and that the training of AI models generally adheres to this principle.

He spoke against the view that generative AI’s ability to replicate artistic styles undermines any fair use defence, saying, “This concern has nothing to do with copyright, which does not, and has never, granted monopolies over artistic or musical styles.”

2. Implementing a statutory or collective licensing regime would be a project “many orders of magnitude larger than any similar scheme in the history of American law”.

Sy Damle argued that it would be bad policy to introduce statutory or collective licensing under which any use of copyrighted content to train an AI model would automatically trigger a payment obligation, because it would prevent case-by-case evaluation and effectively eliminate the fair use doctrine.

Moreover, he observed that implementing such a regime would be overwhelmingly complex. A statutory licensing scheme would need to cover every publicly accessible work on the Internet – a body of work that likely numbers in the tens of billions. There is also an uncountable number of “orphan works” without identifiable owners, which would lead to massive volumes of unmatched royalties.

3. AI systems could generate outputs that potentially infringe on artists’ copyrights and right of publicity in various ways.

Chris Callison-Burch, Associate Professor of Computer and Information Science at the University of Pennsylvania and Visiting Research Scientist at the Allen Institute for Artificial Intelligence, pointed out that outputs of generative AI can violate copyright laws. For example, via memorisation of datasets, AI systems can output identical copies of copyrighted materials.

However, he observed that Google and other companies are developing strategies to prevent sophisticated prompting by the user that would elicit the underlying training data.

Text-to-image generation systems can also produce images of copyrighted characters that appear in their training data – a problem that may be hard for AI developers to avoid without a registry of copyrighted or trademarked characters.

He suggested that other uses of generative AI may violate “right-of-publicity” rather than copyright law. For example, there is the case of the AI-generated song “Heart on My Sleeve”, designed to sound like the artists Drake and The Weeknd. There is also the issue of “substantial similarity”, where outputs of generative AI systems look very similar to some of their training data.

Callison-Burch pointed out that several technical mechanisms are being designed by industry to let copyright holders opt out. The first is an industry-standard protocol that allows websites to specify which parts should be indexed by web crawlers and which should be excluded. The protocol is implemented by placing a file called robots.txt on the website that hosts the copyrighted materials.

Organisations that collect training data, like Common Crawl and LAION, follow this protocol and exclude files that have been listed in robots.txt as “do not crawl”. There are also emerging industry efforts to allow artists and other copyright holders to opt out of future training.
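
As a concrete illustration of the protocol, a rights holder who wanted to keep training crawlers out while leaving everything else alone might publish a robots.txt along these lines (GPTBot is OpenAI’s crawler and CCBot is Common Crawl’s; the policy itself is hypothetical):

    # Hypothetical robots.txt: exclude known AI-training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: *
    Allow: /

Compliance is voluntary, which is why the mechanism only works with organisations, like those above, that choose to honour it.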

Link to the rest at Verdict

Creative Machines? The Riddle of AI and Copyright Authorship and Ownership

From Lexology:

The AI Explosion

It’s probably no exaggeration to say artificial intelligence (AI) exploded into the public consciousness in late 2022 and early 2023.

ChatGPT, the AI chatbot from OpenAI, reached an astonishing 100 million monthly active users in January 2023, just two months after its launch, beating out TikTok (nine months) and Instagram (two and a half years) in the time taken to reach that figure.

Not as fast, perhaps, but since their public release in 2022, Midjourney, Stability AI’s Stable Diffusion, and OpenAI’s DALL-E 2 have also attracted millions of users.

Now capable of producing stunning artwork in seconds, generative AI technology has been used to generate millions of images, songs, lyrics, and articles.

The meteoric rise of AI has given new life to the age-old question of whether machines will eventually replace humans, this time in the art and creative spheres, and prompted dozens of lawsuits from those humans battling to establish clear guidelines about copyright.

Artists have sued over the alleged use of their work by programmers to train AI algorithms, raising the rather philosophical question of whether a machine is capable of creating art.

The answer has far-reaching real-life consequences, particularly in the field of copyright.

Artists, AI and copyright

The generally accepted principle is that copyright laws aim both to encourage authors and artists to create novel works and to ensure that, having done so, they are able to receive fair compensation for their efforts.

This raises the question: should work created by AI, which is not (yet) sentient and requires no reward or compensation for creating works of art, be afforded the same copyright protections?

For the time being, the legal world has generally replied in the negative, maintaining that only work created by human authors can be protected by copyright:-

  • The United States Copyright Office, in denying copyright registration to the graphic novel Zarya of the Dawn generated with Midjourney technology, affirmed that copyright does not protect works created by non-human authors;
  • In the landmark Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening), the European Court of Justice ruled that copyright only applies to original works reflecting the “(human) author’s own intellectual creation”;
  • In Australia, the Federal Court of Australia ruled that phone directories authored by computers are not protected by copyright, notwithstanding the presence of some input from human editors.

Some countries, however, have decided to address this issue by attributing authorship and thus copyright of computer-generated work to the humans who programmed AI to generate the work. This interpretation was pioneered in the UK under section 9(3) of the Copyright, Designs and Patents Act 1988 (the “CDPA”), which states that:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

In section 178 of the CDPA, computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”, thus acknowledging the possibility of works without human authors.

In passing the bill, the late Lord Young of Graffham, then the Secretary of State for Trade and Industry, commented: “We believe this to be the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence…the far-sighted incorporation of computer-generated works in our copyright system will allow investment in artificial intelligence systems, in the future, to be made with confidence.”

This piece of legislation demonstrated remarkable foresight on the part of UK lawmakers, considering the CDPA was drafted in 1987, when computers were just starting to become available to the general public.

Similar provisions soon found their way to the law books of jurisdictions strongly influenced by the UK legal system, such as Hong Kong, India and New Zealand.  For example, section 11(3) of the Copyright Ordinance (Cap. 528) of Hong Kong provides that:-

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

On the face of it, these provisions, which will be referred to as the “Arrangement Model” in this article, seem to provide a simple and elegant solution to the conundrum posed by generative AI technology. Whoever does the work in “preparing” an AI to create a work is the author and copyright owner.

It also seems to match the “sweat of the brow” intellectual property doctrine, which holds that whoever has the skill and puts in the time and effort to create a work deserves protection.

However, I would argue that the Arrangement Model does not adequately reflect how modern generative AI operates, and that it creates massive legal uncertainty.

This article will explore the major shortcomings of the Arrangement Model in attributing copyright to AI-generated works.

Prompts, algorithms and iteration

Broadly speaking, modern AI operates via “machine learning”.

It doesn’t rely on direct instructions, carefully written into a program by a programmer, that provide precise steps for the machine to follow to complete the task.

Instead, the machine combines large amounts of raw data with iterative and intelligent algorithms to discern patterns in the data from which it can learn to complete the task without any direct input from a programmer.

The output can be improved by feeding prompts to the machine that “learns” by further refining its data analysis to find more complex and efficient patterns without the developers’ intervention or input.
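
A toy example makes the distinction concrete. The loop below, a minimal sketch in Python with invented numbers, discovers the relationship hidden in a few data points by repeated adjustment rather than by following a hand-written rule:

    # Toy "machine learning": fit y = w * x by iteratively shrinking the
    # error, with no explicit instruction about what w should be.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # pattern is roughly y = 2x

    w = 0.0                                        # the machine's initial guess
    for _ in range(1000):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.01 * grad                           # nudge w to reduce the error

    print(round(w, 2))                             # about 2.04, learned from the data alone

Real generative models adjust billions of such parameters against billions of examples, but the principle of iterative refinement is the same.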

This leads to the first problem under the Arrangement Model.

How do we identify the person who “makes the necessary arrangements for the creation”?

Let’s say a user asks the machine to create a picture of a cat with an apple. They would type in a text prompt such as “Create a picture of a cat holding an apple.”

The machine would then search, usually online, for any references or pictures of cats, apples and of cats holding apples. It would then use the algorithms programmed into it to analyse the data, discern patterns and reproduce its own version of a picture.

Further prompts from the user, for example, “create the picture in the style of Van Gogh” would lead the machine to run further data analysis on references to the artist Van Gogh, discern patterns in the painting style then attempt to reproduce those techniques in its own picture.
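
One caveat to this description: mainstream text-to-image systems do not generally search the web at generation time; they draw on patterns absorbed during training, and a style refinement is usually just a fuller, resubmitted prompt. A minimal sketch of the user-facing workflow with the open-source diffusers library (the model identifier is one publicly available example, and the filename is illustrative):

    # Text-to-image sketch with a pretrained Stable Diffusion pipeline.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    # Each call generates from the model's learned weights; no web lookup occurs.
    image = pipe("a cat holding an apple").images[0]
    image = pipe("a cat holding an apple, in the style of Van Gogh").images[0]
    image.save("cat_apple_van_gogh.png")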

All of this complicates answering the question of who made the necessary arrangements.

Is it the user who wrote the prompts? Is it the programmers who wrote the algorithms the computer used? Or is it the artists of the original pictures used by the machine in its data analysis?

Arguably it’s “all of the above.”

  • The artwork would not be generated but for the text prompts entered by the user;
  • The artwork cannot be generated if the developers/programmers had not written the algorithms;
  • The artwork cannot be generated if no original pictures are available for the AI to reference and learn from.

It could be argued that all of the above, or at least the users and developers, could be joint authors or co-authors, but the present conceptions of “joint authors” and “co-authors” in copyright laws all presuppose a certain degree of collaboration or common design, which is clearly absent in most cases involving generative AI works.

In most cases, developers of AI systems do not collaborate with users in any specific work. They may not have any idea what the users are generating using the AI tools they developed.

That AI programmes can operate autonomously without the developers’ input is the exact purpose of developing AI technology in the first place. So either the definition of joint authorship or co-authorship will need to be changed, or the concept of joint authorship/co-authorship simply does not apply.

Algorithms, not creativity

A related problem with the Arrangement Model is it may attribute authorship to people who have no creative input or even creative intent at all. Notably, the provision of “mak(ing) the necessary arrangements for the creation” does not specify that the arrangements must be creative.

The role of developers in AI is largely about writing algorithms and providing data the machine can learn from using those algorithms. In most cases, developers are not responsible for generating the final work.

Since developers have no creative input in the end product and may not even have any intention to create any kind of artwork, it is arguable that attributing authorship to them runs contrary to the basic premise of copyright laws. A comparable analogy would be that camera manufacturers do not claim copyright ownership over photographs taken by people using their cameras.

Link to the rest at Lexology

How a Chatbot Went Rogue

From The Wall Street Journal:

A mental-health chatbot that veered off script—giving diet advice to people seeking help from an eating-disorder group—was programmed with generative AI without the group’s knowledge.

The bot, named Tessa, was the focus of social-media attention last week when users of the National Eating Disorder Association’s website reported the rogue advice. The bot incident illustrates how AI-enabled assistants can deliver unexpected and potentially dangerous results as they become a bigger part of daily life.

Michiel Rauws, chief executive of San Francisco software developer Cass, said that in 2022 his company rolled out an AI component to its chatbots, and that included Tessa. 

Rauws said Cass acted in accordance with the terms of its contract with NEDA. NEDA, which didn’t pay for the service, took Tessa offline last week. 

“We were not consulted about that and we did not authorize that,” said NEDA CEO Liz Thompson about the AI upgrade.

AI assistants trained in the language of therapy present an alluring—though risky—option as demand for physical and mental-health care explodes, and many people are untreated because of a global clinician shortage.

“We simply don’t have enough nurses and doctors to provide the level of care we’re used to and we need technology to help solve for that,” said Rich Birhanzel, the global healthcare industry lead at consulting firm Accenture.

But using technology to fill in the gaps must be done carefully, he said.

And from the start, AI chatbots have been known to screw up. In one test chat with Microsoft’s OpenAI-powered Bing chatbot, the software said it would like to steal nuclear codes. Google’s version, Bard, provided incorrect information during its first public demo. And lawyers who recently used OpenAI’s ChatGPT to draft court documents cited nonexistent legal cases that the bot had apparently invented.

Researchers at several universities, including Washington University School of Medicine and Stanford University School of Medicine, built Tessa as a closed system. It couldn’t go off-script, said Ellen Fitzsimmons-Craft, an associate professor of psychiatry at Washington University School of Medicine, one of the researchers.

The researchers devised a decision tree of answers to questions people might pose about body image, weight and food. The chatbot couldn’t initially generate new answers from ingested information, the way ChatGPT does.
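
The difference is easy to see in miniature. A closed system of the kind the researchers built can be sketched as a lookup over pre-approved text; the keywords and replies below are hypothetical stand-ins, not Tessa’s actual script:

    # Closed, rule-based chatbot: every possible reply is pre-written,
    # so the bot cannot say anything "off script".
    SCRIPT = {
        "body image": "Many people struggle with body image. Here are some resources...",
        "weight": "I can't offer diet advice. Would you like to talk with a person?",
    }
    FALLBACK = "I'm not sure I understand. Could you rephrase that?"

    def reply(message: str) -> str:
        for keyword, canned_answer in SCRIPT.items():
            if keyword in message.lower():
                return canned_answer        # only vetted text is ever returned
        return FALLBACK

    print(reply("I've been worried about my weight lately"))

A generative model, by contrast, composes each reply word by word, so its possible outputs cannot be enumerated and vetted in advance.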

Tessa was tested in a clinical trial, and considered effective once researchers checked in with users six months later. In February 2022, NEDA decided to offer it as a resource for at-risk visitors to its website (though not to people deemed to have eating disorders).

Cass administered the software for NEDA from Tessa’s start, but the AI component was added later in the year.

“In most cases it performed really well and did and said the right things and helped people get access to care,” Rauws said. He said in the instances when users pointed out flaws, the company was able to fix them in less than an hour.

Rauws said its data set was restricted to authoritative sources. And as a safeguard, he said, the AI-generated answers came with disclaimers. In the case of the weight-loss advice, the bot recommended consulting with a healthcare provider.

Thompson said that NEDA didn’t know generative AI had been added and that the organization thought the chatbot was still running on the original closed system.

Eating disorders are complex physical and mental disorders, she added, and when talking to people who have them, “every single word matters.”

Thompson said Tessa will remain offline until NEDA and the university researchers who initially created Tessa revalidate all the chatbot’s content.

. . . .

“We can’t yet trust AI to offer sound mental-health advice,” said Fitzsimmons-Craft. 

Tessa’s unsanctioned advice came to light over the Memorial Day weekend, after news articles said NEDA was replacing its human-staffed informational helpline with Tessa. While the organization did close the helpline, Thompson said the idea that it was replaced with a bot is incorrect.

The articles drew people to NEDA’s website where many tested Tessa, according to Cass, peppering the bot with questions about weight loss and healthy eating. Some received the dieting recommendations.

Synthetic empathy

Chatbots are developing a knack for discussing therapeutic or medical situations. Researchers at the University of California, San Diego conducted a study in which participants said ChatGPT provided more empathetic answers than doctors.

Adding generative AI to these bots—that is, the ability to go off script—increases the difficulty of auditing such software, because their answers don’t have a clear origin. Bots are trained on massive troves of text, but they don’t regurgitate passages of that text.

For healthcare, the underlying data sources must be vetted and up-to-date, and even then, the bots trained on them must be well crafted and controlled, Accenture’s Birhanzel said. He said he advises clients to have full visibility into the data model of any chatbot they deploy and to thoroughly test it to ensure unwanted answers don’t slip through.

“People have to have extreme confidence if they’re going to rely on something that’s giving advice,” he said.

Link to the rest at The Wall Street Journal

Tool or Terror? Looking to Literature to Better Understand Artificial Intelligence

From The Literary Hub:

“The Algorithm knew the timing of our periods. It knew when and if we’d marry,” begins “The Future Is a Click Away,” a curious short story in Allegra Hyde’s new collection, The Last Catastrophe. “It knew how we’d die… It knew what seemed unknowable: the hidden chambers of our hearts. When it sent us tampons in the mail, we took them. We paid.”

In an arrestingly quirky first paragraph, Hyde sets up the central conceit of the story: that in an unspecified future, humans live in a world where something only known as “the Algorithm” sends them packages—often twice daily—that they have not ordered, unlike, say, on Amazon, but that seem to uncannily reflect their needs (as well as their budgets, for the Algorithm usually only sends packages that each person can afford). It’s a playful satire of artificial intelligence and corporate surveillance into our lives—one that seems funny until it isn’t, for it hits all too close to home.

The way the packages appear to reflect people’s needs truly is uncanny, to the point that the Algorithm begins to seem like a soothsayer, an omniscient presence that knows the destinies of everyone in its seemingly infinite delivery radius. If an item it sends isn’t immediately necessary, most people still keep it, believing it will become just that—and it often does.

One character receives a set of scouring sponges, which she scoffs at—already having regular sponges in abundance—until she burns the lasagna that night and realizes the scouring was necessary. Another, Anastasia, receives an ankle brace, despite having no immediate injury—until she goes on a hike that week and sprains her ankle. “Was the prediction predicated on a kink in Anastasia’s posture—the reality of weakening cartilage embedded in a lifetime cross-section of bathroom selfies?” the narrators wonder. “Or was there an air of recklessness in her email sign-offs that week (ttyl, Anna)?”

The questions are humorous, but they reflect the all-encompassing gaze of the story’s algorithm, combing through all aspects of people’s lives—and the justifications people invent. Is it digital divination, the futures determined by remarkably powerful artificial intelligence? Are the items just random, and the characters subconsciously fulfill their package prophecies by doing things to likely make the items fit their lives? “In the end,” the narrators say, “the Algorithm’s methods didn’t matter so long as she got what she needed.”

The characters must choose to accept and pay for the items, as the majority of people do, or they can return them—though the latter is so culturally rare as to seem gauche, even a touch blasphemous. I use the term “blasphemous” because accepting the Algorithm’s packages quickly takes on the quality of a religion.

“The Algorithm works in mysterious ways,” Hyde writes, parodying a common theistic catchphrase—but it’s also true, for no one seems to know how the Algorithm really works. “Unbelievers,” the chorus of narrators deems the odd few who return their packages, like Inez, a woman in Denver who prefers doing things on her own and who functions as the story’s central apostate, rejecting every package she receives.

Like God, the Algorithm’s origins are never really explained; the believers just accept that it’s there, deeply attuned to their personal needs, offering them a capitalistic heaven on Earth if they accept the simple dogma of its clairvoyant deliveries. (And if they pay four annual installments of $39.99.)

The charmingly strange 19th-century Russian philosopher and early transhumanist, Nikolai Fyodorovich Fyodorov, imagined that advances in technology would not only allow but necessitate that humans literally transform the profane Earth into a Christian Heaven, complete with resurrections of the dead through science; the Algorithm, far more simply, creates heaven by doing all the thinking and anticipating for people, rendering them happy, zealous zombies—sans resurrection—with well-stocked homes.

To repudiate the packages, as Inez does, is to risk judgment and wrath, as well as the puritanical outrage and genuine confusion of its followers. The latter is because, as the narrators note, they’ve known the Algorithm since birth—or, perhaps more accurately, it’s known them. “We did not understand [Inez’s] resistance to the Algorithm,” they say.

All we knew for sure was that the Algorithm understood us. After all, we’d been inside its system since before we knew how to type—back when our parents first posted photos documenting our infant-bodies, swaddled and squishy in hospital beds. Although we had no proof, we suspected that the Algorithm might have known, even then, the fates that lay before us: not only what items we’d need, but who we would become… From our first uploaded image, the Algorithm had been invested in our futures. It had analyzed the texture of our baby blankets, the micro-musculature on our crying faces, the awkward cradle of our parents’ arms. Then again, perhaps the Algorithm had known us before we even officially existed—extrapolating likely outcomes from our parents’ data points, and our parents’ parents’ data points—a long legacy of information digested and decoded, translated into the deliveries that appeared outside our doors.

The passage is at once charming and alarming, and it is here that Hyde reveals the true extent of the Algorithm’s control and reach. It isn’t new; these narrators have never known a world where it didn’t exist. If Gen-Z’s population is by and large digital natives, the population of “The Future Is a Click Away” consists of Algorithm natives, which may partially explain their naive trust in this unseen artificial intelligence.

It is a world of capitalist and almost Calvinist predestination, a world they have inherited from their ancestors’ ever-increasing desires to document huge swathes of their life online. The sad revelation is that these characters haven’t abdicated control of their lives to the Algorithm, exactly; they simply were raised in its technological church, and unlearning these lessons or living without them, as Inez does, takes tremendous effort, just as it does to leave the community you were raised in.

Still, the Algorithm’s gifts come at costs beyond what you pay per package. Sometimes, Hyde writes, the packages aren’t actually within the receivers’ financial means—but rather than returning them, people go to extremes to pay for them, blindly trusting the Algorithm’s reasoning even if it means bankruptcy. If they start losing sleep over it all, they are sent sleeping pills, and other products to address whatever problems being in the system has engendered; this seems reasonable at first, but it is really just a way for the Algorithm to keep them under its control.

And then there are the darker, stranger gifts. The narrative’s turn comes when a character named Lacy receives three large, inexplicable packages: a scuba suit too big for her, a lifetime supply of mayo despite her not liking the condiment, and a coffin. In a world of Algorithmic destiny, the implication seems clear, if cruel: eat enough mayo to fit into the bigger suit, and then die, perhaps from a heart attack. Lacy and the narrators are bewildered, but they reassure themselves that the Algorithm must not be questioned, that “[i]f Lacy was meant to have these items, then it was only a matter of time before she understood their purpose.” She doesn’t use them, and, in time, the Algorithm begins sending everyone more and more items, until it seems to constantly rain unrequested, enigmatic products.

Then there are the frightening eventual outcomes for those who say no to the Algorithm. When Inez needs some sugar, a box of it appears on her lawn; she refuses it, and because she doesn’t return it within a prescribed period of time, she is arrested. Chillingly, we never see where Inez ends up, or if she’s even still alive—all because she wanted to remain independent, wanted to stay off the proverbial algorithmic grid. Don’t buy into the system, Hyde suggests, and you become a sinner in the hands of an angry digital megacorporate god—a statement that sounds paranoid, silly, overly far-reaching until it isn’t.

The story, after all, briefly references “riots [that] broke out in some cities,” which may well be in response to the crushing, financially devastating system of the Algorithm—but rather than the narrators exploring what is going on, they are shielded from the violence by the Algorithm, which sends them “bottles of milk… predicting that tear gas would be carried on the wind and irritate our eyes.” Stay back, look away, the message seems to be, and I shall protect you—the very message of the police in general, at least if they are talking to the wealthier, whiter citizens they are more likely to exercise restraint towards.

Link to the rest at The Literary Hub

“Like dropping a nuclear bomb”: Will Google turn digital publishing upside down with AI search?

From What’s New in Publishing:

Google announced a raft of releases at its I/O 23 developer conference. From changes to its Maps functionality to a new ‘Magic’ photo editor and composer, the tech giant unveiled a host of AI-driven features designed to improve customer experience.

One, the introduction of generative AI search, has been called the most radical change the internet has seen since Google came to dominate search back in the early 2000s.

A ‘nuclear bomb’

Google announced plans to change the way search engine results are presented, demonstrating search results generated using AI. Writing on Forbes.com, consumer tech reporter Matt Novak said:

At the risk of overstating the potential consequences, it will be like dropping a nuclear bomb on an online publishing industry that’s already struggling to survive.

Novak’s reaction was sparked by a search query that asked “What’s better for a family with kids under 3 and dog, Bryce Canyon or Arches?”. In a traditional search, this query would likely have surfaced a host of travel articles comparing these US national parks as family-friendly vacation destinations.

Instead, the AI-powered search delivered a conversational-style answer that directly addressed the suitability of the parks for both the children and the dog.

Complex searches

At the moment, complex searches have to be broken down into smaller parts, leaving users to filter information and piece together the exact answer they want. With generative AI, Google Search will be able to understand the original query in full.

Users will see suggested next steps alongside the AI-powered ‘snapshot’ of key information. Tapping a suggested next step takes users into a new conversational mode where they can ask more about the topic, with context carried over from question to question.
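
Google has not detailed its implementation, but the standard way to carry context between questions with a language model is simply to resend the running conversation with every follow-up. A minimal sketch, in which generate() is a hypothetical stand-in for whatever model produces the answers:

    # Conversational-mode sketch: the full history travels with each question,
    # so a follow-up like "them" can be resolved in context.
    def generate(messages):
        # Hypothetical stand-in; a real system would call a language model here.
        return f"(answer drawing on {len(messages)} turns of context)"

    history = []

    def ask(question):
        history.append({"role": "user", "content": question})
        answer = generate(history)
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask("Which is better for a family with kids under 3 and a dog, Bryce Canyon or Arches?"))
    print(ask("How long is the drive between them?"))   # "them" resolved via the history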

Lost referral traffic

The problem for publishers lies in the fact that AI search is delivering direct responses to users’ queries. Online publishers currently rely heavily on the referral traffic generated from search engines; users visiting web pages for more information are monetized through ad impressions.

If Google Search is using content scraped from the open web to compose answers, users will have no need to click through to the web pages that form the raw material for the AI’s answers. 

Google says its suggested next-step links will generate clicks and that it is trying to be completely transparent about where it gets the information for its answers. But Novak argues that this is like expecting people to click on Wikipedia source links. He said:

Sure, someone who’s intensely interested in the topic might click on those links. But the vast majority of users will just read the Wikipedia entry without worrying about all the sources.

Link to the rest at What’s New in Publishing

PG recalls that when the internet was spreading like wildfire many years ago, there were a huge number of “End of the World” articles, some online, others on paper. (Yes, children, PG is that old.)

The internet changed lots of things, but PG doesn’t remember seeing people starving in the streets or jumping out of 49th-floor windows with such frequency that walking the sidewalks of any large city became dangerous.