What is the meaning behind Shakespeare’s Sonnet 136?

This is a question that Perplexity suggested for itself to answer.

Perplexity’s Answer to its own question:

Sonnet 136 by William Shakespeare, often referred to by its first line, “If thy soul check thee that I come so near,” is a complex and layered poem that uses wordplay and puns to convey the speaker’s desire and emotional plea to the Dark Lady, a character in Shakespeare’s sonnets known for her captivating but elusive affection.[1][2][5]

Themes and Interpretation

Wordplay on “Will”

The sonnet is notable for its clever use of the word “Will,” which serves multiple purposes within the text. It refers to the poet’s own name, William, to the concept of will or desire, and potentially to a sexual pun, common in the Elizabethan era, where “will” could also mean sexual desire or even the genitals themselves.[2][4][7] This punning serves to create a playful yet earnest tone as the speaker attempts to persuade the Dark Lady to reciprocate his love or at least acknowledge his presence among her many admirers.

Desire and Acceptance

The speaker expresses a deep yearning to be accepted by the Dark Lady, not just as another lover but on a more personal and intimate level. He implores her to accept his “will” in both senses—his identity (name) and his desire. The repeated entreaties to “make but my name thy love, and love that still” suggest a plea for enduring affection based solely on his identity, an identity that he feels should be sufficient for her to return his love.[1][2][5]

Inclusion Among Many

There is also a theme of inclusion and enumeration in the sonnet. The speaker acknowledges that the Dark Lady has many lovers and uses the logic that adding one more (himself) would not be significant in terms of number but would be meaningful to him. This is encapsulated in the lines “Among a number one is reckoned none” and “Then in the number let me pass untold,” where he suggests that his inclusion would not disrupt her life but would immensely satisfy his desire to be close to her.[1][2][5]

Literary Devices and Structure

Shakespeare employs several literary devices in Sonnet 136, including alliteration, metaphor, and rhetorical questions, enhancing the lyrical quality of the poem and emphasizing the emotional depth of the speaker’s plea. The sonnet follows the traditional Shakespearean form with three quatrains and a concluding couplet, using the rhyme scheme ABAB CDCD EFEF GG. The meter is predominantly iambic pentameter, which lends a rhythmic, persuasive quality to the speaker’s arguments.[2][4]

Conclusion

Overall, Sonnet 136 is a poignant reflection on unrequited love, identity, and the desire for acceptance. Shakespeare uses linguistic cleverness and structural nuances to craft a sonnet that is both intellectually engaging and emotionally resonant, revealing the complexities of human desire and interaction.

PG’s Comment

Of the various experiments he tried with Perplexity, this one was the most impressive because Perplexity suggested the prompt itself.

Plus, as with other experiments, the AI appears to be quite careful to provide links to the web resources it used for its analysis and response. No plagiarism for this AI.

A Newly-Discovered Shakespearean Sonnet

Prompt: Write a Shakespearian Sonnet about a 16-year-old boy named Tom who is very anxious about asking a girl named Susan to a school play, Much Ado About Nothing

Perplexity’s Response:

When youthful Tom beholds fair Susan’s grace,
His heart doth race within its bony cage.
Forsooth, he longs to ask her to a place
Where players tread the boards, the Bard their stage.

In iambic beats, his pulse doth pound,
A sonnet’s rhythm in his chest doth play.
The words to woo, in anxious loops abound,
Yet fear doth grip his tongue, leads thoughts astray.

“Much Ado About Nothing” is the play,
A tale of love that’s tangled in its weave.
Tom wishes Susan by his side to stay,
If courage he can muster up to heave.

To conquer dread, the question he must pop,
Ere curtain’s rise and lights do finally drop.

Perplexity’s Explanation of What It Did:

In this Shakespearean sonnet, we follow the traditional structure of three quatrains followed by a couplet, with a rhyme scheme of ABAB CDCD EFEF GG. The poem captures the anxiety of a 16-year-old boy named Tom, who is nervous about asking a girl named Susan to join him at a school play, “Much Ado About Nothing.” The sonnet reflects Tom’s internal struggle and his desire to overcome his fear to express his feelings to Susan.

Compare and contrast the factors that have resulted in bestselling authors J.K. Rowling and Stephen King’s success

From Perplexity:

Prompt: Compare and contrast the factors that have resulted in bestselling authors J.K. Rowling and Stephen King’s success.

Note – The links in the response were inserted by Perplexity.

Response:

Writing Style

J.K. Rowling:
Rowling’s writing style is often described as clear, accessible, and richly detailed, which has contributed significantly to her broad appeal. Her narrative style in the Harry Potter series combines whimsical fantasy with complex themes, making her books beloved by both children and adults. The progression in the complexity of language and themes throughout the Harry Potter series reflects her ability to grow with her audience.

Stephen King:
King’s style is markedly different, known for its directness and vivid, often graphic, descriptions. His storytelling is characterized by an ability to transform the ordinary into the terrifying, using the horror and supernatural genres to explore the depths of human psychology and societal issues. King’s narrative style is often conversational, creating a sense of closeness between the reader and the characters.[35]

Genre

J.K. Rowling:
Rowling is synonymous with the fantasy genre, particularly through her Harry Potter series. However, she has also ventured into adult fiction and crime writing under the pseudonym Robert Galbraith, where she maintains her narrative depth with a more mature and gritty approach.[26]

Stephen King:
King is a prolific figure in horror, supernatural fiction, suspense, and fantasy. His works often blend elements of the genres to explore the darker side of human nature. King’s ability to cross genre boundaries while maintaining a strong core identity as a horror writer demonstrates his versatility.[5]

Marketing

J.K. Rowling:
The marketing of the Harry Potter series was a phenomenon, with midnight book releases and extensive merchandising. Rowling’s books also benefited from strong word-of-mouth and a growing online fan community. The later books were marketed with significant secrecy and hype, adding to their allure.[8]

Stephen King:
King’s marketing has often played on his established reputation as the “Master of Horror.” His books frequently leverage his name recognition and the promise of high suspense and horror. King’s approach to marketing also includes his active social media presence and engagement with his fan base.[39]

Personal Background

J.K. Rowling:
Rowling’s personal “rags to riches” story, in which she wrote the initial Harry Potter book as a struggling single mother, has become a part of her public persona and has endeared her to many fans. Her background and personal struggles have been well-publicized, contributing to her relatability and the rooting interest of her audience.[617]

Stephen King:
King’s background includes a stint as a high school English teacher and a struggle with addiction, which he has openly discussed. These experiences have influenced his writing and are reflected in the depth and authenticity of his characterizations. King’s openness about his personal challenges has helped form a connection with his readers.[519]

Conclusion

Both J.K. Rowling and Stephen King have mastered the art of storytelling, albeit in very different ways. Rowling’s enchanting style and expansion from young adult fantasy to broader genres contrasts with King’s straightforward, often visceral horror narratives. Their marketing strategies reflect their unique appeals: Rowling’s shrouded in secrecy and magic, King’s straightforward and prolific. Their personal stories provide depth to their public personas, inviting readers not just into their books but also into the journeys that shaped them. Both authors have transformed their personal experiences, distinctive styles, and smart marketing into bestselling phenomena, albeit serving different tastes within the vast landscape of fiction.


Follow-on Prompt: what are the similarities and differences in the writing styles of j.k. rowling and stephen king

Response:

Similarities in Writing Styles

1. Rich Descriptions:
Both J.K. Rowling and Stephen King are known for their rich and vivid descriptions. Rowling’s descriptions often bring the magical world to life with sensory details, as seen in her portrayal of characters and settings in the Harry Potter series.[2]

Similarly, King’s descriptions are vivid, particularly in creating atmospheric settings and building tension in his horror and supernatural stories.[3]

2. Character Development:
Rowling and King both excel in character development. Rowling’s characters are diverse and evolve over the course of the Harry Potter series, with complex themes of friendship, family, and morality.[2]

King’s characters are also well-developed, often with intricate backstories that contribute to the psychological depth of his narratives.[3]

3. Engagement with Readers:
Both authors have a strong ability to engage readers. Rowling’s narrative style is accessible and has a broad appeal, while King’s conversational tone creates a sense of closeness between the reader and the characters.[23]

Differences in Writing Styles

1. Genre and Themes:
Rowling is primarily associated with the fantasy genre, particularly with the Harry Potter series, and her themes often revolve around magic, belonging, and humanity.

King, on the other hand, is a master of horror, suspense, and the supernatural, exploring darker aspects of human nature and societal issues.[3]

2. Narrative Approach:
Rowling is known for her use of a basic plot outline, allowing for some spontaneity in her writing process, which is evident in the structured yet evolving plot of the Harry Potter series.[1]

King is identified as a “pantser,” preferring to develop his stories organically without a fixed outline, which can be seen in the spontaneous development of his narratives.[1]

3. Writing Process:
Rowling’s meticulous planning and outlining contrast with King’s more improvisational style.

King has been quoted as saying, “Outlines are the last resource of bad fiction writers who wish to God they were writing masters’ theses,” indicating his preference for a more free-form approach to storytelling.

4. Literary Devices:
Rowling’s writing includes the use of rhetorical devices, ethos, logos, and participles with connectives and active verbs, which contribute to her unique syntax and diction.[2]

King’s style, while also descriptive, often includes a more direct and sometimes graphic use of language, with a focus on creating a strong atmosphere through metaphors and vivid imagery.[3]

5. Influence and Inspiration:
Rowling’s influences include a range of authors and genres, from the Arthurian legend to European folklore and authors like Roald Dahl and Jane Austen, which is reflected in her allusions and character names.[46]

King’s writing, while unique, has been compared to his pseudonym Richard Bachman, and his use of brand names and pop culture references is a noted feature of his style.[3]

In summary, while J.K. Rowling and Stephen King share some similarities in their rich descriptive abilities and strong character development, they differ significantly in their chosen genres, narrative approaches, writing processes, use of literary devices, and sources of inspiration.


Prompt: How do J.K. Rowling and Stephen King differ in their approach to writing

Response:

J.K. Rowling and Stephen King, while both highly successful authors, have distinct approaches to writing that reflect their unique creative processes and methodologies.

J.K. Rowling’s Approach:

  1. Planning and Outlining:
    • Rowling emphasizes the importance of planning in her writing process. She is known for her detailed planning and outlining before beginning to write her novels. For instance, she spent five years planning the Harry Potter series, plotting out the entire story arc and character development across all seven books before publishing the first.[12]
    • She also uses intricate, color-coded charts to organize her thoughts and plot details, ensuring a coherent and well-structured narrative.[13]
  2. Writing in Spare Moments:
    • Rowling advises aspiring writers to write whenever they can find the time, highlighting the importance of utilizing spare moments to progress one’s writing. This approach was crucial during the early stages of her career when she was a single mother writing in cafes.[910]
  3. Rewriting and Persistence:
    • Rowling places a strong emphasis on rewriting and the necessity of resilience in the face of rejection and criticism. She acknowledges the importance of informed feedback and the need for humility and resilience as part of a writer’s life.[14]

Stephen King’s Approach:

  1. Organic Development:
    • King’s writing process is more organic and less reliant on detailed planning. He often starts with a situation and develops the story by following the characters and situations to see where they lead, without a fixed outline. This approach allows for a spontaneous development of the narrative.[3]
  2. Daily Writing Routine:
    • King is known for his disciplined daily writing routine, emphasizing the importance of writing every day to maintain momentum and progress in one’s work. He believes in the power of routine to foster creativity and productivity.[1]
  3. Revision and Rest:
    • After completing a draft, King advocates for setting it aside for a period before returning to it for revision. This break allows for a fresh perspective on the work, making it easier to identify areas for improvement.[15]
  4. Avoidance of Adverbs and Passive Voice:
    • King advises writers to avoid adverbs and passive voice, promoting a writing style that is direct and active. He believes that strong writing should stand on its own without relying on adverbs for emphasis.[511]

Conclusion:

While both authors share a commitment to their craft and a disciplined approach to writing, their methodologies diverge significantly. Rowling’s methodical planning and detailed outlining contrast with King’s more spontaneous and organic development of stories. Rowling utilizes structured time management, writing in whatever time she has available, and emphasizes the importance of rewriting and resilience. King, on the other hand, follows a daily writing routine, advises a break between writing and revising, and focuses on the technical aspects of writing, such as avoiding adverbs and passive voice. These differences highlight the diverse approaches to writing that can lead to successful and compelling storytelling.

Will copyright law enable or inhibit generative AI?

From The World Economic Forum:

As artificial intelligence (AI) moves beyond performing simple tasks to creating original content, it blurs the lines between humans and machines. In doing so, it challenges a core tenet of many traditional intellectual property (IP) frameworks: only works created by humans are protected by copyright laws.

The development and training of AI models also raise another important copyright issue: whether the use of third-party content in that exercise might infringe on the copyright of that content.

Emerging use cases around generative AI are disrupting traditional views of creativity, authorship and ownership and pushing the boundaries of copyright law. As the world catches up with innovation, the resulting legal ambiguity impacts all sides of the AI equation – developers, content creators and copyright owners.

Training and copyright infringement

To create content, generative AI learns from processing vast amounts of information. This training process can involve using copyright-protected content, raising a critical question: does training generative AI models constitute copyright infringement, exposing developers to related claims?

If so, is new legislation needed to address this issue in a manner that does not unreasonably impede innovation but still respects the rights of creators?

These questions have led some jurisdictions to consider amendments to existing text and data mining exemptions and/or fair use/fair dealing exceptions to cover the training of generative AI. Such exemptions permit certain activities that would otherwise constitute copyright infringement.

Other jurisdictions are still awaiting guidance, creating a drastically uneven regulatory landscape that will continue for some time. But, no matter where they stand, legislators need to consider this issue to incentivise innovation while protecting content creators’ rights.

Looking past the legal uncertainty at traditional means of creativity, humans constantly learn as we receive and process information. Typically, things we create are inextricably linked to what we have learned from others and our experiences. One could argue that AI is no different in that it learns from the underlying datasets to produce something new.

Copyright – ownership of content

Copyright law is a key means by which innovation and creation are rewarded – and thus enabled – raising the stakes for courts and intellectual property authorities to define copyright ownership in the context of generative AI.

Depending on where you are in the world, the definition of copyright varies, a significant legal consideration in its own right. But virtually everywhere, copyright requires meaningful human contribution.

Generative AI, which often involves little human input, shatters the mould of this traditional framework. It calls into question whether AI-generated content can be protected by copyright in the first place and, if so, then who owns this copyright?

Is it the developer of the AI model who arguably enabled the ‘creative’ process? Is it the owner of the AI model? Or is it the person who considered and input the prompts that allowed the AI tool to generate the content? Of course, this depends on whether there was sufficient human contribution to justify copyright protection in the first place.

Currently, most generative AI tools provide in their terms of use that users own the content they generate. However, if the content is not copyright-protected in the first place, then the terms of use can’t change that position. There is less clarity around who is responsible if AI-generated content infringes someone else’s copyright in a protected work.

Answering these questions requires courts and intellectual property authorities to challenge the scope of traditional copyright law in relation to generative AI, which could lead to changes to existing legislation covering ownership of computer-generated content.

Eventually, it is possible that copyright law will need to be further reimagined and redefined to account for the possibility of recognizing a machine as its own entity with independent authority and the ability to own and protect content, though this does not appear to be on the horizon in the short term. In this more progressive scenario, it is unclear where liability would rest for negative consequences (including copyright infringement) caused by generative AI outputs.

Link to the rest at The World Economic Forum

To the best of PG’s knowledge, The World Economic Forum is a non-profit NGO based in Switzerland that sponsors a variety of conferences and programs to make the world a better place. Its most high-profile program is a conference held annually in Davos, Switzerland.

Roughly 3,000 of the most powerful and wealthy leaders globally use over 1,500 private jets plus a great many larger government jets to arrive and depart. Global warming has a regular place on each conference’s agenda.

Outline for a Murder Mystery in the Style of Dorothy Sayers

Perplexity Prompt: Create an outline for a murder mystery in the style of Dorothy Sayers

Perplexity Response:

Setting: The Quaint Village of Little Hampden

  • Little Hampden is a picturesque English village known for its tight-knit community, charming cottages, and a central village green where the annual summer fete is held.
  • The village is surrounded by dense woods and rolling hills, with a small river running through it.
  • The local vicarage, the pub called “The Weeping Willow,” and the village hall are the main social hubs.

Introduction: A Mysterious Death

  • During the village fete, a beloved local artist, Jonathan Eastwood, is found dead in his studio, which overlooks the village green.
  • The death appears to be a suicide, but certain inconsistencies lead to suspicions of foul play.
  • Lord Peter Wimsey, visiting his aunt in Little Hampden, takes an interest in the case.

Key Characters

  • Lord Peter Wimsey: The aristocratic amateur detective with a sharp intellect and a passion for justice.
  • Mervyn Bunter: Wimsey’s loyal valet and assistant, with a keen eye for detail.
  • Reverend Thomas Marple: The well-respected vicar with a love for gardening and a secret past.
  • Eleanor Brampton: The victim’s muse and a talented painter, known for her fiery temper and tumultuous relationship with Eastwood.
  • Dr. Henry Forbes: The village doctor, whose medical opinion on the death raises more questions than answers.
  • Inspector Charles Parker: Wimsey’s friend from Scotland Yard, called in to assist with the investigation.

Red Herrings

  • A mysterious figure seen near the artist’s studio on the night of the murder.
  • A scandalous love letter found in Eastwood’s desk, hinting at a possible affair.
  • A recently altered will that benefits a distant relative.
  • A disgruntled former student of Eastwood’s who had a public falling out with him.

Twists and Turns

  • Evidence that the suicide note was not written by Eastwood.
  • Discovery of a hidden compartment in the studio containing a valuable missing painting.
  • An anonymous tip that leads Wimsey to question the alibis of several villagers.
  • A break-in at the vicarage that uncovers a stash of blackmail letters.

Character Development

  • Wimsey’s empathy and understanding of human nature help him connect with the villagers and gain their trust.
  • Bunter’s military background proves crucial in deciphering a code found among Eastwood’s belongings.
  • Eleanor Brampton’s initial hostility towards Wimsey softens as she becomes an ally in the investigation.
  • Reverend Marple’s struggle with his conscience as secrets from his past come to light.

Clues and Evidence

  • A palette knife smeared with paint and blood, found in the nearby woods.
  • A torn piece of a jacket caught on a bramble that matches the fabric of a local squire’s missing coat.
  • A set of footprints leading away from the studio that suggests the presence of a second person.
  • The discovery of a rare poison in the victim’s system, not readily available to the public.

Conclusion: Unveiling the Murderer

  • Wimsey gathers all suspects for a dramatic reveal in the village hall.
  • He meticulously explains how each piece of evidence fits into the puzzle, exonerating the innocent and cornering the guilty.
  • The murderer is revealed to be someone unexpected, with a motive rooted in a long-held grudge and greed.
  • Wimsey’s clever deduction and attention to human psychology bring the case to a satisfying close.

Epilogue: Aftermath and Reflection

  • The village of Little Hampden returns to its peaceful ways, with the shadow of the murder lifted.
  • Wimsey reflects on the nature of crime and justice, and the impact of the case on the community.
  • Eleanor Brampton dedicates her next painting to Wimsey, capturing the essence of Little Hampden and its resilience.
  • Wimsey and Bunter depart, leaving behind grateful villagers and a sense of order restored.

Based upon his playtime with Perplexity, PG thinks the AI might be most useful for this sort of question – an author is stuck for a plot and wants some fresh ideas. He suspects there might also be help for sub-plots, character creation, etc., generated with similar, more-focused prompts.

AI v. Hemingway

In his experiments with AI writing programs, PG has discovered a few strengths and weaknesses.

Following are the first few paragraphs of “The Sun Also Rises,” as originally written by Ernest Hemingway:

Robert Cohn was once middleweight boxing champion of Princeton. Do not think that I am very much impressed by that as a boxing title, but it meant a lot to Cohn. He cared nothing for boxing, in fact he disliked it, but he learned it painfully and thoroughly to counteract the feeling of inferiority and shyness he had felt on being treated as a Jew at Princeton. There was a certain inner comfort in knowing he could knock down anybody who was snooty to him, although, being very shy and a thoroughly nice boy, he never fought except in the gym. He was Spider Kelly’s star pupil. Spider Kelly taught all his young gentlemen to box like featherweights, no matter whether they weighed one hundred and five or two hundred and five pounds. But it seemed to fit Cohn. He was really very fast. He was so good that Spider promptly overmatched him and got his nose permanently flattened. This increased Cohn’s distaste for boxing, but it gave him a certain satisfaction of some strange sort, and it certainly improved his nose. In his last year at Princeton he read too much and took to wearing spectacles. I never met any one of his class who remembered him. They did not even remember that he was middleweight boxing champion.

I mistrust all frank and simple people, especially when their stories hold together, and I always had a suspicion that perhaps Robert Cohn had never been middleweight boxing champion, and that perhaps a horse had stepped on his face, or that maybe his mother had been frightened or seen something, or that he had, maybe, bumped into something as a young child, but I finally had somebody verify the story from Spider Kelly. Spider Kelly not only remembered Cohn. He had often wondered what had become of him.

Robert Cohn was a member, through his father, of one of the richest Jewish families in New York, and through his mother of one of the oldest. At the military school where he prepped for Princeton, and played a very good end on the football team, no one had made him race-conscious. No one had ever made him feel he was a Jew, and hence any different from anybody else, until he went to Princeton. He was a nice boy, a friendly boy, and very shy, and it made him bitter. He took it out in boxing, and he came out of Princeton with painful self-consciousness and the flattened nose, and was married by the first girl who was nice to him. He was married five years, had three children, lost most of the fifty thousand dollars his father left him, the balance of the estate having gone to his mother, hardened into a rather unattractive mould under domestic unhappiness with a rich wife; and just when he had made up his mind to leave his wife she left him and went off with a miniature-painter. As he had been thinking for months about leaving his wife and had not done it because it would be too cruel to deprive her of himself, her departure was a very healthful shock.

The divorce was arranged and Robert Cohn went out to the Coast. In California he fell among literary people and, as he still had a little of the fifty thousand left, in a short time he was backing a review of the Arts. The review commenced publication in Carmel, California, and finished in Provincetown, Massachusetts. By that time Cohn, who had been regarded purely as an angel, and whose name had appeared on the editorial page merely as a member of the advisory board, had become the sole editor. It was his money and he discovered he liked the authority of editing. He was sorry when the magazine became too expensive and he had to give it up.

By that time, though, he had other things to worry about. He had been taken in hand by a lady who hoped to rise with the magazine. She was very forceful, and Cohn never had a chance of not being taken in hand. Also he was sure that he loved her. When this lady saw that the magazine was not going to rise, she became a little disgusted with Cohn and decided that she might as well get what there was to get while there was still something available, so she urged that they go to Europe, where Cohn could write. They came to Europe, where the lady had been educated, and stayed three years. During these three years, the first spent in travel, the last two in Paris, Robert Cohn had two friends, Braddocks and myself. Braddocks was his literary friend. I was his tennis friend.

The lady who had him, her name was Frances, found toward the end of the second year that her looks were going, and her attitude toward Robert changed from one of careless possession and exploitation to the absolute determination that he should marry her. During this time Robert’s mother had settled an allowance on him, about three hundred dollars a month. During two years and a half I do not believe that Robert Cohn looked at another woman. He was fairly happy, except that, like many people living in Europe, he would rather have been in America, and he had discovered writing. He wrote a novel, and it was not really such a bad novel as the critics later called it, although it was a very poor novel. He read many books, played bridge, played tennis, and boxed at a local gymnasium.

And here is what PG’s current most interesting writing AI, Perplexity, generated:

Certainly! Here is a creative rewriting of the first few paragraphs of “The Sun Also Rises” in the style of Ernest Hemingway: 

Robert Cohn was once middleweight boxing champion of Princeton. Do not think that I am very much impressed by that as a boxing title, but it meant a lot to Cohn. He cared nothing for boxing, in fact he disliked it, but he learned it painfully and thoroughly to counteract the feeling of inferiority and shyness he had felt on being treated as a Jew at Princeton. There was a certain inner comfort in knowing he could knock down anybody who was snooty to him, although, being very shy and a thoroughly nice boy, he never fought except in the gym.

He was Spider Kelly’s star pupil. Spider Kelly taught all his young gentlemen to box like featherweights, no matter whether they weighed one hundred and five or two hundred and five pounds. But it seemed to fit Cohn. He was really very fast. He was so good that Spider promptly overmatched him and got his nose permanently flattened. This increased Cohn’s distaste for boxing, but it gave him a certain satisfaction of some strange sort, and it certainly improved his nose.

In his last year at Princeton he read too much and took to wearing spectacles. I never met any one of his class who remembered him. They did not even remember that he was middleweight boxing champion.

I mistrust all frank and simple people, especially when their stories hold together, and I always had a suspicion that perhaps Robert Cohn had never been middleweight boxing champion, and that perhaps a horse had stepped on his face, or that maybe his mother had been frightened or seen something, or that he had, maybe, bumped into something as a young child, but I checked on all the stories I heard about Cohn, and found out that they were true.

This passage maintains Hemingway’s characteristic terse prose, simple and direct language, and understated tone. It also reflects Hemingway’s focus on themes of masculinity and the effects of societal pressures on the individual.

Perplexity has a “Tell Me More” option at the end of its responses (at least in the paid version). PG hit the “Tell Me More” button and here’s what Perplexity provided:

Continuing in the style of Ernest Hemingway, the narrative would likely proceed with a focus on the disillusionment and existential themes that are prevalent in his works. Here’s an extension of the rewritten passage, maintaining the Hemingway style:

After Princeton, Cohn was lost. Like many of us, the war had changed him. It wasn’t just the physical scars, those were easy to hide. It was something deeper, a sort of emptiness that came with seeing too much and understanding too little. He wandered through Europe, searching for something he couldn’t name. It was in Paris that he found Frances, and for a while, it seemed as though he might have found what he was looking for. But happiness, like a shadow, is elusive when chased.

Paris was a city that promised much but held back even more. It was filled with expatriates like us, all running from something, all trying to write the next great novel, paint the next masterpiece, or simply forget the past. The cafes of Montparnasse were our battlegrounds, where we fought against our own disillusionment with words and wit as our weapons.

Cohn wrote a novel during those years. It wasn’t particularly good, nor was it bad. It simply was. Like so much of life, it existed in that gray area between success and failure. Frances encouraged him, perhaps more out of a desire to secure her own future than any belief in his literary talents. But in the end, it didn’t matter. The book was published, and Cohn found himself a minor celebrity among the expatriate community.

Yet, fame, Cohn discovered, was as empty as everything else. He began to realize that the problem wasn’t with Paris, or his novel, or Frances. The problem was with him. He had been searching for meaning in a world that seemed determined to withhold it. And in that search, he had lost himself.

It was around this time that I met him. He was a man out of step with the world, clinging to the belief that somewhere, somehow, there was something real, something true. I couldn’t help but admire him for that, even as I pitied him. For in the end, we were all lost, all searching, all hoping for the sun to rise on a new day that would bring with it some semblance of meaning.

This continuation captures the essence of Hemingway’s themes: the search for meaning in a post-war world, the disillusionment with the “Lost Generation,” and the personal struggles of his characters as they navigate their lives. Hemingway’s writing often explored the complexities of human emotions and relationships against the backdrop of a world that seemed indifferent to individual suffering, and this narrative aims to reflect those elements.

End of Perplexity Rewrite

PG says Papa doesn’t have anything to worry about.

Postscript:

During his web-wandering in connection with this post, PG discovered something else of interest:

The Unpublished Opening of The Sun Also Rises
by Ernest Hemingway
Also includes:
Letter to Ernest Hemingway on The Sun Also Rises
by F. Scott Fitzgerald

This was published in a now-defunct periodical titled Antæus No. 33, Spring 1979, edited by Daniel Halpern. You can read this Unpublished Opening/Fitzgerald Critique article Here.

If you don’t have a copy, you can read The Sun Also Rises, with the opening Hemingway used after seeing Fitzgerald’s letter Here.

Video will kill the truth if monitoring doesn’t improve, argue two researchers

From The Economist:

Generative artificial intelligence (AI) is rewriting the disinformation playbook. There is no better place to study this than Taiwan, which, the Swedish Varieties of Democracy Project finds, is subject to more disinformation from abroad than any other democracy. China has greatly increased the scope and sophistication of its influence operations on the island using cutting-edge AI. We conducted fieldwork during the elections in Taiwan in January, meeting organisations on disinformation’s front lines. We saw in their experience the signs of how AI could disrupt this global year of elections.

One such sign is the rise of AI-powered video, in particular deepfakes—content created to convince people of events that did not occur. One video aired on TikTok, a Chinese-owned platform, showed Taiwanese presidential candidate William Lai endorsing his rivals’ joint opposition ticket.

Deepfakes are just one of several tactics being developed for video-based propaganda. CapCut, an app produced by ByteDance, TikTok’s parent company, uses AI to generate videos from text descriptions. CapCut was reportedly used by Spamouflage, a China-linked foreign-influence campaign, to turn written propaganda into AI-generated newscasts. These videos are not quite deepfakes: few viewers would think such broadcasts were real. But they mark disinformation’s expansion to video-based platforms. As online attention shifts from tweets to TikTok, disinformation is following.

AI also makes it easier to pump out media about events as soon as they occur. A few days before the election every Taiwan-registered phone buzzed simultaneously with an air-raid alert triggered by a Chinese satellite crossing Taiwanese airspace. Within 24 hours Taiwan AI Labs, a research outfit, observed over 1,500 co-ordinated social-media posts promoting conspiracy theories about the alert and sowing distrust. At one point as many as five a minute were appearing. Many were far more readable than the stuff produced by a typical content mill.

In the three months leading up to the elections, Cloudflare, an internet-services provider, found that efforts to crash Taiwanese websites jumped by 3,370% compared to the same period the year before. Had these sites been taken down in large numbers, Taiwanese residents seeking information online would instead have seen a deluge of disinformation on social media. This is what happened after Hawaii’s wildfires in 2023, when Spamouflage spread AI-generated conspiracy theories.

Many Taiwanese feel as if their country is desperately adding sandbags to its levees the night before a typhoon. Government agencies and research organisations lack the tools for timely tracking of video content. Fact-checkers struggle to keep pace: during Taiwan’s 2020 presidential election Cofacts, a fact-checking platform, was able to respond to around 80% of incoming requests within 24 hours. This year, on election day, it managed to respond to just 15% of requests.

Worse, technology companies lack effective technical countermeasures against disinformation distributed by states. They are increasingly using “watermarks”, digital branding of content as AI-generated, and content policies to prevent their proprietary AI from being abused by others. But well-resourced states can build their own AI that is unimpeded by watermarking and content-policy constraints. Tech firms are also focused on building tools for detecting AI-generated content, but Taiwanese disinformation-fighters say these provide conflicting and inaccurate results.

AI-generated disinformation requires a broad response that involves governments, technology companies, fact-checking outfits, and think tanks. Big tech companies should help fund fact-checkers and other private-sector organisations that provide front-line intelligence. They should also share more of their data with civil-society groups, whose local knowledge and legitimacy mean they are well placed to track accounts and counter false narratives in their countries of operation.

Front-line organisations need to be on the video-sharing platforms, of course, both to monitor them and to create and share content. Some, though, have reasons to be wary of TikTok. In Taiwan, just one of the major fact-checking and disinformation-research organisations has a TikTok account, owing to concerns about the app’s Chinese ownership.

This means ceding a front—and a potentially powerful tool. A study by USC Annenberg, a journalism school, in 2017 showed that video-based fact-checking is more effective than text-checking at correcting the beliefs of users exposed to disinformation. In 2022 a study in Science Advances, a journal, found that “prebunking” videos, which are designed to pre-empt disinformation—before an election, say, or after a natural disaster—worked even across political divides, offering viewers on both left and right a certain level of inoculation.

. . . .

Disinformation has reached an inflection point in a year when more than half the world’s population lives in countries that will hold nationwide elections. Malicious actors will be using AI to make falsehoods more engaging and believable and to distort narratives in moments of crisis. Those who wish to safeguard 2024’s elections must be ready to tell their story before others tell it for them.

Link to the rest at The Economist

Understanding humanoid robots

From TechCrunch:

Robots made their stage debut the day after New Year’s 1921. More than half a century before the world caught its first glimpse of George Lucas’ droids, a small army of silvery humanoids took to the stages of the First Czechoslovak Republic. They were, for all intents and purposes, humanoids: two arms, two legs, a head — the whole shebang.

Karel Čapek’s play, R.U.R. (Rossumovi Univerzální Roboti), was a hit. It was translated into dozens of languages and played across Europe and North America. The work’s lasting legacy, however, was its introduction of the word “robot.” The meaning of the term has evolved a good bit in the intervening century, as Čapek’s robots were more organic than machine.

Decades of science fiction have, however, ensured that the public image of robots hasn’t strayed too far from its origins. For many, the humanoid form is still the platonic robot ideal — it’s just that the state of technology hasn’t caught up to that vision. Earlier this week, Nvidia held its own on-stage robot parade at its GTC developer conference, as CEO Jensen Huang was flanked by images of a half-dozen humanoids.

While the concept of the general-purpose humanoid has, in essence, been around longer than the word “robot,” until recently its realization has seemed wholly out of reach. We’re very much not there yet, but for the first time, the concept has appeared over the horizon.

What is a “general-purpose humanoid?”

Before we dive any deeper, let’s get two key definitions out of the way. When we talk about “general-purpose humanoids,” the fact is that both terms mean different things to different people. In conversation, most people take a Justice Potter Stewart “I know it when I see it” approach to both.

For the sake of this article, I’m going to define a general-purpose robot as one that can quickly pick up skills and essentially do any task a human can do. One of the big sticking points here is that multi-purpose robots don’t suddenly go general-purpose overnight.

Because it’s a gradual process, it’s difficult to say precisely when a system has crossed that threshold. There’s a temptation to go down a bit of a philosophical rabbit hole with that latter bit, but for the sake of keeping this article under book length, I’m going to go ahead and move on to the other term.

I received a bit of (largely good-natured) flak when I referred to Reflex Robotics’ system as a humanoid. People pointed out the plainly obvious fact that the robot doesn’t have legs. Putting aside for a moment that not all humans have legs, I’m fine calling the system a “humanoid” or more specifically a “wheeled humanoid.” In my estimation, it resembles the human form closely enough to fit the bill.

A while back, someone at Agility took issue when I called Digit “arguably a humanoid,” suggesting that there was nothing arguable about it. What’s clear is that the robot isn’t as faithful an attempt to recreate the human form as some of the competition. I will admit, however, that I may be somewhat biased having tracked the robot’s evolution from its precursor Cassie, which more closely resembled a headless ostrich (listen, we all went through an awkward period).

Another element I tend to consider is the degree to which the humanlike form is used to perform humanlike tasks. This element isn’t absolutely necessary, but it’s an important part of the spirit of humanoid robots. After all, proponents of the form factor will quickly point out the fact that we’ve built our worlds around humans, so it makes sense to build humanlike robots to work in that world.

Adaptability is another key point used to defend the deployment of bipedal humanoids. Robots have had factory jobs for decades now, and the vast majority of them are single-purpose. That is to say, they were built to do a single thing very well a lot of times. This is why automation has been so well-suited for manufacturing — there’s a lot of uniformity and repetition, particularly in the world of assembly lines.

Brownfield vs. Greenfield

The terms “greenfield” and “brownfield” have been in common usage for several decades across various disciplines. The former is the older of the two, describing undeveloped land (quite literally, a green field). Developed to contrast with the earlier term, brownfield refers to development on existing sites. In the world of warehouses, it’s the difference between building something from scratch and working with something that’s already there.

There are pros and cons of both. Brownfields are generally more time- and cost-effective, as they don’t require starting from scratch, while greenfields afford the opportunity to build a site entirely to spec. Given infinite resources, most corporations will opt for a greenfield. Imagine the performance of a space built ground-up with automated systems in mind. That’s a pipe dream for most organizations, so when it comes time to automate, a majority of companies seek out brownfield solutions — doubly so when they’re first dipping their toes into the robotic waters.

Given that most warehouses are brownfield, it ought to come as no surprise that the same can be said for the robots designed for these spaces. Humanoids fit neatly into this category — in fact, in a number of respects, they are among the brownest of brownfield solutions. This gets back to the earlier point about building humanoid robots for their environments. You can safely assume that most brownfield factories were designed with human workers in mind. That often comes with elements like stairs, which present an obstacle for wheeled robots. How large that obstacle ultimately is depends on a lot of factors, including layout and workflow.

Baby Steps

Call me a wet blanket, but I’m a big fan of setting realistic expectations. I’ve been doing this job for a long time and have survived my share of hype cycles. There’s an extent to which they can be useful, in terms of building investor and customer interest, but it’s entirely too easy to fall prey to overpromises. This includes both stated promises around future functionality and demo videos.

I wrote about the latter last month in a post cheekily titled, “How to fake a robotics demo for fun and profit.” There are a number of ways to do this, including hidden teleoperation and creative editing. I’ve heard whispers that some firms are speeding up videos, without disclosing the information. In fact, that’s the origin of humanoid firm 1X’s name — all of their demos are run at 1X speed.

Most in the space agree that disclosure is important — even necessary — on such products, but there aren’t strict standards in place. One could argue that you’re wading into a legal gray area if such videos play a role in convincing investors to plunk down large sums of money. At the very least, they set wildly unrealistic expectations among the public — particularly those who are inclined to take truth-stretching executives’ words as gospel.

That can only serve to harm those who are putting in the hard work while operating in reality with the rest of us. It’s easy to see how hope quickly diminishes when systems fail to live up to those expectations.

The timeline to real-world deployment contains two primary constraints. The first is mechatronic: i.e., what the hardware is capable of. The second is software and artificial intelligence. Without getting into a philosophical debate around what qualifies as artificial general intelligence (AGI) in robots, one thing we can certainly say is that progress has been — and will continue to be — gradual.

As Huang noted at GTC the other week, “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within five years.” That’s on the optimistic end of the timeline I’ve heard from most experts in the field. A range of five to 10 years seems common.

Before hitting anything resembling AGI, humanoids will start as single-purpose systems, much like their more traditional counterparts. Pilots are designed to prove out that these systems can do one thing well at scale before moving onto the next. Most people are looking at tote moving for that lowest-hanging fruit. Of course, your average Kiva/Locus AMR can move totes around all day, but those systems lack the mobile manipulators required to move payloads on and off themselves. That’s where robot arms and end effectors come in, whether or not they happen to be attached to something that looks human.

. . . .

Two legs to stand on

At this point, the clearest path to AGI should look familiar to anyone with a smartphone. Boston Dynamics’ Spot deployment provides a clear real-world example of how the app store model can work with industrial robots. While there’s a lot of compelling work being done in the world of robot learning, we’re a ways off from systems that can figure out new tasks and correct mistakes on the fly at scale. If only robotics manufacturers could leverage third-party developers in a manner similar to phonemakers.

Interest in the category has increased substantially in recent months, but speaking personally, the needle hasn’t moved too much in either direction for me since late last year. We’ve seen some absolutely killer demos, and generative AI presents a promising future. OpenAI is certainly hedging its bets, first investing in 1X and — more recently — Figure.

A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.

Link to the rest at TechCrunch

PG notes that the robot in the video below is using OpenAI’s general-purpose AI.

https://youtube.com/shorts/nmHzvQr3kYE?si=Q2KjwJXLINEtyiVe

More doctors use ChatGPT to help with busy workloads, but is AI a reliable assistant?

From Fox News:

Dr. AI will see you now.

It might not be that far from the truth, as more and more physicians are turning to artificial intelligence to ease their busy workloads.

Studies have shown that up to 10% of doctors are now using ChatGPT, a large language model (LLM) made by OpenAI — but just how accurate are its responses?

A team of researchers from the University of Kansas Medical Center decided to find out.

“Every year, about a million new medical articles are published in scientific journals, but busy doctors don’t have that much time to read them,” Dan Parente, the senior study author and an assistant professor at the university, told Fox News Digital.

“We wondered if large language models — in this case, ChatGPT — could help clinicians review the medical literature more quickly and find articles that might be most relevant for them.”

For a new study published in the Annals of Family Medicine, the researchers used ChatGPT 3.5 to summarize 140 peer-reviewed studies from 14 medical journals.

Seven physicians then independently reviewed the chatbot’s responses, rating them on quality, accuracy and bias.

The AI responses were found to be 70% shorter than real physicians’ responses, but the responses rated high in accuracy (92.5%) and quality (90%) and were not found to have bias.

Serious inaccuracies and hallucinations were “uncommon” — found in only four of 140 summaries.

“One problem with large language models is also that they can sometimes ‘hallucinate,’ which means they make up information that just isn’t true,” Parente noted.

“We were worried that this would be a serious problem, but instead we found that serious inaccuracies and hallucination were very rare.”

Out of the 140 summaries, only two were hallucinated, he said.

Minor inaccuracies were a little more common, however — appearing in 20 of 140 summaries.

Based on these findings, Parente noted that ChatGPT could help busy doctors and scientists decide which new articles in medical journals are most worthwhile for them to read.

. . . .

Dr. Harvey Castro, a Dallas-based board-certified emergency medicine physician and national speaker on artificial intelligence in health care, was not involved in the University of Kansas study but offered his insights on ChatGPT use by physicians.

“AI’s integration into health care, particularly for tasks such as interpreting and summarizing complex medical studies, significantly improves clinical decision-making,” he told Fox News Digital.

“This technological support is critical in environments like the ER, where time is of the essence and the workload can be overwhelming.”

Castro noted, however, that ChatGPT and other AI models have some limitations.

“Despite AI’s potential, the presence of inaccuracies in AI-generated summaries — although minimal — raises concerns about the reliability of using AI as the sole source for clinical decision-making,” Castro said.

Link to the rest at Fox News and thanks to F. for the tip.

Gun violence killed them. Now, their voices will lobby Congress to do more using AI

From National Public Radio

“It’s been six years, and you’ve done nothing,” Joaquin Oliver’s voice echoed across the U.S. Capitol grounds Wednesday. “Not a thing to stop all the shootings that have continued to happen since.”

On Feb. 14, 2018, Oliver started another day as a senior at Marjory Stoneman Douglas High School in Parkland, Fla. By the end, he was one of 17 people murdered at the school in a mass shooting that sparked a worldwide, youth-led movement on gun violence.

Now, people can hear his voice again.

Oliver’s audio is one of six messages generated by artificial intelligence meant to resemble different voices of individuals killed by guns in incidents over the past decade. It’s part of an initiative led by March For Our Lives, the gun control organization born out of the Parkland shooting, and Change The Ref, a group started by Oliver’s parents, vocal advocates Manny and Patricia Oliver.

The messages will appear on the Shotline, a new online platform that the groups created, where users can individually send the AI-generated audio directly to the offices of members of Congress, demanding further action on gun violence prevention. The initiative was announced in front of the Capitol Wednesday, the sixth anniversary of the Parkland shooting.

Link to the rest at National Public Radio

PG says this is over the top, beyond the pale, etc.

Perplexity Talks About Self-Publishing

PG spent a little time with Perplexity, an AI writing assistant. You can decide how useful it was. He used the free version, which he found to be quite powerful.

Note: Perplexity responses include links to the online sources it found and, presumably, used in formulating its responses. PG has left the live links in place at the end of each of the bullet points. He’s not certain whether extensive use will break the links or not.

PG was disappointed by the limited number of sources Perplexity apparently relied upon. He speculates that the professional version, which requires a subscription fee of $20 per month or $200 per year, will provide information from a wider range of sources.

You will want to check out the links to get a sense of the types of sources Perplexity uses in crafting its responses. PG will need to do some more research into the question of how the program assesses the reliability of the information it uses.
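For readers who would rather inspect Perplexity’s sources programmatically than by clicking links, Perplexity also publishes an HTTP API that follows the familiar OpenAI-style chat format. What follows is a minimal sketch, not something PG ran: the model name (“sonar”) and the top-level citations field are assumptions drawn from Perplexity’s API documentation at the time of writing and may differ in current versions, so check the docs before relying on any of it.

    # Minimal sketch: ask Perplexity's HTTP API a question and list the
    # sources it cites. Assumptions (verify against current docs): an
    # OpenAI-style /chat/completions endpoint, a model named "sonar",
    # and a top-level "citations" list of URLs in the response.
    import os
    import requests

    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",  # assumed model name; may have changed
            "messages": [
                {
                    "role": "user",
                    "content": "What are the tradeoffs between "
                               "self-publishing and traditional publishing?",
                }
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()

    # The answer text follows the OpenAI chat-completion shape.
    print(data["choices"][0]["message"]["content"])

    # If citation URLs are returned, number them the way the web
    # interface numbers its source links.
    for i, url in enumerate(data.get("citations", []), start=1):
        print(f"[{i}] {url}")

If a citations list comes back, comparing its URLs against the numbered source markers in the response below gives a rough sense of how widely the engine actually searched.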

Prompt: What are the tradeoffs between self-publishing and traditional publishing?

Response: Based on the search results provided, here is a summary of the key tradeoffs between self-publishing and traditional publishing:

Self-Publishing Pros

  • Full control over the creative process, including content, cover design, pricing, and release timeline[123]
  • Higher royalty rates, allowing authors to keep a larger share of book sales revenue[123]
  • Faster time to market, with books potentially published within 6 months[12]
  • Ability to publish niche or specialized books that may not appeal to traditional publishers[12]

Self-Publishing Cons

  • Responsibility for all publishing tasks like editing, marketing, and distribution, which can be time-consuming and expensive 1 2 3
  • Difficulty in getting self-published books into physical bookstores and achieving wide distribution 1 2 3
  • Potential perception of lower quality or credibility compared to traditionally published books 1 2 3
  • Challenges in effective book marketing without the support of a publishing house’s marketing team 1 2 3

Traditional Publishing Pros

  • Access to industry expertise and resources for editing, design, printing, distribution, and marketing 1 2 3
  • Potential for higher visibility and credibility through established publisher relationships 1 2 3
  • Possibility of receiving an advance payment prior to publication 2 3
  • Wider distribution through publisher’s channels, including bookstores 1 2 3

Traditional Publishing Cons

  • Highly competitive selection process, with low acceptance rates for unsolicited manuscripts 1 2 3
  • Longer timeline to publication, often taking 2+ years 1 2 3
  • Less creative control, as publishers make key decisions about the book 1 2 3
  • Lower royalty rates compared to self-publishing 1 2 3

In summary, self-publishing offers more control and potentially higher royalties, while traditional publishing provides industry expertise, credibility, and wider distribution, but with less creative control and a longer timeline. The best option depends on the author’s goals, resources, and target audience.

Legal Vineyards: Cultivating Rich Connections For A Bountiful Harvest

PG Trigger Warning: The following post appears because PG thought it suitably over the top to be a subject of derision.

From Above the Law:

In the intricate world of law, the strength of your professional network can be likened to a well-cultivated vineyard.

Just as the finest wines are the product of time, care, and the gentle nurturing of grapevines, so too are the most fruitful professional connections grown from the seeds of past associations.

Reconnecting with old law school friends, past clients, and former colleagues is not just a walk down memory lane; it’s an investment in your career’s future bounty.

This article explores the art of networking and how to nurture these connections for a harvest rich in opportunities and referrals.

Planting the Seeds

For attorneys, each connection — whether an old classmate, a former client, or a professional peer — represents a potential avenue for growth, a seed sown in the rich soil of your career path.

In the legal field, where opportunities and referrals can significantly impact success, nurturing these seeds is not just beneficial — it’s strategic.

Recognizing and cultivating your existing network goes beyond simple relationship maintenance.

It’s about building a robust foundation for future opportunities, enhancing your reputation, and opening doors to new avenues for collaboration and client acquisition.

By investing in these relationships, you’re effectively laying the groundwork for a professional ecosystem that can yield an abundant harvest of success, setting you apart in a competitive legal landscape.

Watering With Interactions

In the same way that vines require consistent watering to flourish, your professional relationships thrive on regular, impactful interactions.

However, this doesn’t entail overwhelming your contacts with messages. Instead, focus on the quality of your engagements.

Leveraging commonalities can significantly deepen these connections. Whether it’s a shared interest, a mutual alma mater, or a common professional challenge, touching base on these shared experiences or values can forge stronger bonds.

Additionally, blending professional courtesies with personal touches — such as congratulating them on personal milestones, or sharing insights on mutual interests — can seamlessly intertwine the professional with the personal.

This approach not only keeps your “relationship vine” vigorous and growing but also cultivates an environment where professional and personal spheres enrich one another, creating a network that’s both robust and genuinely connected.

Fertilizing With Value-Added Exchanges

Link to the rest at Above the Law

PG had to cut this off before the fertilizer covered everything else.

If he were still making presentations to other lawyers, PG would include excerpts from the OP as laugh lines. Here are a couple of illustrations for PG’s PowerPoint via OpenArtAI.

What is Suno? The ‘ChatGPT for music’ generates songs in seconds

From ZD Net:

Several text-to-music generators are now on the market, including offerings from Meta and Google. However, the Suno AI music generator is becoming increasingly popular — likely because it creates original lyrics and vocals, and because it leverages the power of ChatGPT.

So, what is Suno and how does it work?

Suno is an AI music generator that generates a song with original lyrics and beats from a text prompt. The tool can be accessed via its free standalone website or via Microsoft Copilot by enabling Suno’s third-party plug-in.

What makes Suno stand out from other music-generating models? In addition to creating the music, which you can do with Meta’s AudioCraft and Google’s MusicFX, Suno also produces original lyrics and vocals. This is a reflection of the company’s mission to help everyone create music, regardless of background or musical knowledge.

“Whether you’re a shower singer or a charting artist, we break barriers between you and the song you dream of making,” Suno says on its website. Suno’s intuitive interface makes it easy to use, and the only requirement is you need to create a free account.

In Rolling Stone’s deep dive into Suno and its potential impact on the music business, the AI startup explained that Suno uses its own AI models to generate the music and then relies on ChatGPT to create the song’s title and lyrics.

If you have ever used ChatGPT, you know it’s capable of generating great text outputs, such as essays, thanks to its advanced natural language processing. The fact that Suno uses ChatGPT should tell you all you need to know about its text quality output.

. . . .

Who owns songs made with Suno?

The emergence of AI music generators has alarmed music labels and artists because of how easy it is to create music that uses artists’ voices without their consent. How is Suno navigating this copyright issue?

According to the Rolling Stone report, Suno refuses to generate any music in the style of real artists’ voices. 

Since all the music generated is original and not lifted from other artists’ work, paying subscribers own the songs generated by Suno, as well as the artwork that accompanies the songs. However, Suno retains all ownership of songs created by free users, as well as the artwork.

As a result, if you are a paying subscriber, you can use the music you generate for commercial use, including posting the songs on YouTube, Spotify, or Apple Music. Free users can only post on social media with attribution and are not allowed to use the songs commercially. Regardless of which version you use, if you input your own lyrics, you retain ownership of those.

Are the songs subject to copyright protection? Here’s what Suno says on its website:

“The availability and scope of copyright protection for content generated (in whole or in part) using artificial intelligence is a complex and dynamic area of law, which is rapidly evolving and varies among countries. We encourage you to consult a qualified attorney to advise you about the latest development and the degree of copyright protection available for the output you generate using Suno.”

Link to the rest at ZD Net

PG decided to see how Suno could help people to remember that they should read and understand a contract before they sign it. So he headed over to the site and entered his prompt about reading contracts.

He chose Synthpop Upbeat for the song style. The song was created very quickly, far faster than AI text generators he’s tried.

From Suno

In the Fine Print

Lyrics (generated by Suno):

[Verse]
Every time before I put my name on that line
I gotta make sure it's all clear, no hidden design (design)
They say it's just a formality, nothing to fear
But baby, I've learned the hard way, gotta be sincere

[Verse 2]
Pages and pages of words that they want me to agree
But I won't be caught in a trap, I gotta be free (be free)
They may try to rush me, put pressure on my soul
But I won't be fooled, gotta maintain control

[Chorus]
In the fine print, secrets lie
Gotta read between the lines, oh, oh (oh, oh)
In the fine print, don't be blind
Take your time, don't let them blow your mind (oh, oh)

Here’s a link to PG’s song on Suno: In the Fine Print

A new generation of music-making algorithms is here

From The Economist:

In the dystopia of George Orwell’s novel “1984”, Big Brother numbs the masses with the help of a “versificator”, a machine designed to automatically generate the lyrics to popular tunes, thereby ridding society of human creativity. Today, numerous artificial-intelligence (AI) models churn out, some free of charge, the music itself. Unsurprisingly, many fear a world flooded with generic and emotionally barren tunes, with human musicians edged out in the process. Yet there are brighter signs, too, that AI may well drive a boom in musical creativity.

AI music-making is nothing new. The first, so-called “rules-based”, models date to the 1950s. These were built by painstakingly translating principles of music theory into algorithmic instructions and probability tables to determine note and chord progressions. The outputs were musically sound but creatively limited. Ed Newton-Rex, an industry veteran who designed one such model for Jukedeck, a London firm he founded in 2012, describes that approach as good for the day but irrelevant now.

The clearest demonstration that times have changed came in August 2023. That is when Meta, a social-media giant, released the source code for AudioCraft, a suite of large “generative” music models built using machine learning. AI outfits worldwide promptly set about using Meta’s software to train new music generators, many with additional code folded in. One AudioCraft model, MusicGen, analysed patterns in some 400,000 recordings with a collective duration of almost 28 months to come up with 3.3bn “parameters”, or variables, that enable the algorithm to generate patterns of sounds in response to prompts. The space this creates for genuinely new AI compositions is unprecedented.
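
A quick back-of-the-envelope check on those figures: 400,000 recordings totalling almost 28 months of audio works out to roughly three minutes per track, which is typical pop-song length. Here is the arithmetic in Python (approximating a month as 30 days):

```python
# Sanity check on the MusicGen training-set figures quoted above.
recordings = 400_000
total_seconds = 28 * 30 * 24 * 3600        # ~28 months of audio, assuming 30-day months
print(round(total_seconds / recordings))   # ~181 seconds, about a 3-minute track on average
```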

Such models are also getting easier to use. In September Stability AI, a firm based in London at which Mr Newton-Rex worked until recently, released a model, Stable Audio, trained on some 800,000 tracks. Users guide it by entering text and audio clips. This makes it easy to upload, say, a guitar solo and have it recomposed in jazzy piano, perhaps with a vinyl playback feel. Audio prompts are a big deal for two reasons, says Oliver Bown of Australia’s University of New South Wales. First, even skilled musicians struggle to put music into words. Second, because most musical training data are only cursorily tagged, even a large model may not understand a request for, say, a four-bar bridge in ragtime progression (the style familiar from Scott Joplin’s “The Entertainer”).

The potential, clearly, is vast. But many in the industry remain sceptical. One widespread sentiment is that AI will never produce true music. That’s because, as a musician friend recently told Yossef Adi, an engineer at Meta’s AI lab in Tel Aviv, “no one broke its heart”. That may be true, but some AI firms reckon that they have found a way to retain and reproduce the “unique musical fingerprint” of their musician users, as LifeScore, a company founded near London, puts it. LifeScore’s AI limits itself to recomposing the elements of a user’s original recordings in ways that maintain the music’s feel, rather than turning them into something radically new.

It takes about a day to plug into LifeScore’s model the dozens of individually recorded vocal and instrumental microphone tracks, or stems, that go into producing an original song. Once that’s done, however, the software, developed at a cost of some $10m, can rework each stem into a new tempo, key or genre within a couple of seconds. The song’s artists, present during the process, choose which remixes to keep. Manually remixing a hit track has traditionally taken one or more highly paid specialists weeks.

LifeScore, says Tom Gruber, a co-founder, is “literally swamped with requests” from clients including Sony Music, Universal Music Group and Warner Music Group. An original release is typically turned into anywhere from a handful to a dozen remixes. But one client aims to release a dizzying 6,000 or so AI versions of an original track, each targeting a different market. Artists including Pink Floyd’s David Gilmour and Tom Gaebel, a German pop singer, use LifeScore’s AI to power websites that allow fans to generate, with a few clicks, new remixes adapted to personal tastes.

The beat of a different drum

If this seems like dizzying progress, it’s worth noting that AI’s impact on music is still in its early days. Legal uncertainties over the use of copyrighted recordings to train models have slowed development. Outfits that have coughed up for licensing fees note that this can get expensive. To save on that cost, MusicGen’s training set mostly sidestepped hits, says Dr Adi. Though output is pretty good, he adds, the model is not yet “artistic enough” to generate narratively complete songs. Harmonic misalignments are common. OpenAI, a San Francisco firm, for its part, says its MuseNet model struggles to pull off “odd pairings”, such as a Chopin style that incorporates bass and drums.

In time, bigger training sets of better music will largely overcome such shortcomings, developers reckon. A Stability AI spokesperson says that while Stable Audio’s top duration for coherently structured music—“intro, development and outro”—is now about 90 seconds, upgrades will produce longer pieces with “full musicality”. But judging music AI by its ability to crank out polished tracks mostly misses the point. The technology’s greatest promise, for now at least, lies elsewhere.

Part of it is the empowerment of amateurs. AI handles technical tasks beyond many people’s capabilities and means. As a result, AI is drawing legions of newbies into music-making. This is a boon for experimentation by what Simon Cross, head of products at Native Instruments, a firm based in Berlin, calls “bedroom producers”.

Consider RX, a Native Instruments AI “assistant” that corrects errors in things like pitch and timing. For the latter, software time-shifts notes by cutting out or inserting slivers of sound with matching timbre, a process called “dynamic time-warping”. The company’s AI also determines what mixing and mastering processes were performed on a song of a user’s choosing. It then replicates, or at least approximates, the same expensive processing on the user’s own creations. Boomy, an online “music automation” platform for what Alex Mitchell, its CEO, describes as “low-friction” song production with text prompts, has more than 2m users. The company, based in Berkeley, California, uploads users’ (vetted) creations to streaming services and collects a cut of revenues.
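
For the technically curious, “dynamic time-warping” refers to a classic sequence-alignment algorithm. The sketch below is the textbook version applied to toy numeric sequences; RX’s actual implementation is proprietary and operates on audio features, matching timbre as described above:

```python
# A minimal textbook dynamic time-warping (DTW) alignment.
# Illustrative only: real timing-correction tools work on audio, not lists.

def dtw(a, b):
    """Return the DTW distance and alignment path between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = cheapest cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],       # stretch a
                                 cost[i][j - 1],       # stretch b
                                 cost[i - 1][j - 1])   # step both
    # Backtrack to recover which elements were aligned with which.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda p: cost[p[0]][p[1]])
    return cost[n][m], path[::-1]

# Example: a performance with one note held too long, aligned to a steady reference.
reference = [0, 1, 2, 3, 4, 5]
performed = [0, 1, 1, 2, 3, 4, 5]
distance, path = dtw(reference, performed)
print(distance, path)   # distance 0.0: the held note maps onto the reference twice
```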

AI serves professionals, too. The soundtracks to “Barbie” and “Oppenheimer” were cleaned up in post-production with RX, for example. Another application area is “style transfer”, in which models transform music recorded with one instrument into sounds that seem to come from a different one, often with a twist or two requested by the user. Style transfers are also used for voice. A model developed by a startup in London called Voice-Swap slices up sounds sung by (remunerated) professional singers and rearranges the slivers into lyrics written by the service’s users, who pay licensing fees for the rights to sell the resulting tracks. And AI tools already exist to recreate singers’ voices in other languages. Vocaloid, a voice-synthesising tool from Yamaha, a Japanese instrument manufacturer, is one of many that can use a translation sung by a native speaker as a template for an AI to imitate as it rearranges, modifies and stitches together tiny snippets of the original singer’s voice.

Link to the rest at The Economist

YouTube now requires creators to disclose when realistic content was made with AI

From TechCrunch:

YouTube is now requiring creators to disclose to viewers when realistic content was made with AI, the company announced on Monday. The platform is introducing a new tool in Creator Studio that will require creators to disclose when content that viewers could mistake for a real person, place or event was created with altered or synthetic media, including generative AI.

The new disclosures are meant to prevent users from being duped into believing that a synthetically-created video is real, as new generative AI tools are making it harder to differentiate between what’s real and what’s fake. The launch comes as experts have warned that AI and deepfakes will pose a notable risk during the upcoming U.S. presidential election.

Today’s announcement follows YouTube’s statement back in November that it would roll out the update as part of a larger introduction of new AI policies.

YouTube says the new policy doesn’t require creators to disclose content that is clearly unrealistic or animated, such as someone riding a unicorn through a fantastical world. It also isn’t requiring creators to disclose content that used generative AI for production assistance, like generating scripts or automatic captions.

Instead, YouTube is targeting videos that use the likeness of a realistic person. For instance, creators will have to disclose when they have digitally altered content to “replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video,” YouTube says.

They will also have to disclose content that alters the footage of real events or places, such as making it seem as though a real building caught on fire. Creators will also have to disclose when they have generated realistic scenes of fictional major events, like a tornado moving toward a real town.
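
Taken together, the rules reduce to a fairly simple decision. Here is a toy encoding of the policy as summarized in the OP; an illustration of the logic only, not YouTube’s actual system:

```python
# Toy encoding of YouTube's disclosure rules as described in the article above.

def requires_disclosure(realistic: bool,
                        synthetic_or_altered: bool,
                        production_assistance_only: bool = False) -> bool:
    """Return True when a creator must disclose AI use under the new policy."""
    if production_assistance_only:   # generating scripts, automatic captions, etc.
        return False
    if not realistic:                # clearly unrealistic or animated content
        return False
    return synthetic_or_altered      # face swaps, synthetic voices, altered footage

# Examples drawn from the article:
print(requires_disclosure(realistic=True, synthetic_or_altered=True))    # True: synthetic narration voice
print(requires_disclosure(realistic=False, synthetic_or_altered=True))   # False: unicorn in a fantastical world
print(requires_disclosure(realistic=True, synthetic_or_altered=False))   # False: unaltered real footage
```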

Link to the rest at TechCrunch

Microsoft Reading Coach is now available as a stand-alone app for schools

From Neowin:

In March 2022, Microsoft first introduced Reading Coach as an app available in Microsoft Teams. The app became available later for individual users outside the classroom. Today, the company has announced that Reading Coach is now available as its own Windows stand-alone app, or via its website at coach.microsoft.com, specifically for school use.

In a blog post, Microsoft says that schools that want to access the new stand-alone app can access it with a Microsoft Entra ID. School admins must enable support for Reading Coach via a signup page.

. . . .

Microsoft stated the apps will help students with their reading skills in a number of ways. The biggest feature is the app’s ability to create a unique story via AI for each student. The company said:

Create a story puts the story in reader’s hands by giving them the choice of a character, setting and reading level to create a unique AI generated story each time.

The blog post adds that the AI stories follow Microsoft’s AI guidelines, and are moderated for the story’s content quality and safety, along with setting up stories for specific age groups.

The app also lets students read from previously written content from its library. Teachers can add their own story content as well.

Reading Coach lets students read stories out loud. The app’s speech recognition technology, combined with AI features, allows it to analyze how students read, and detect if they find specific words hard to read. After each session, the app shows an overall score and can also generate word practice sessions with those challenging words.

Link to the rest at Neowin and thanks to F. for the tip

Does generative artificial intelligence infringe copyright?

From The Economist:

Generative artificial intelligence (AI) will transform the workplace. The International Monetary Fund reckons that AI tools, which include ones that produce text or images from written prompts, will eventually affect 40% of jobs. Goldman Sachs, a bank, says that the technology could replace 300m jobs worldwide. Sceptics say those estimates exaggerate. But some industries seem to be feeling the effects already. A paper published in August 2023 on SSRN, a repository for research which has yet to undergo formal peer review, suggests that the income of self-employed “creatives”—writers, illustrators and the like—has fallen since November 2022, when ChatGPT, a popular AI tool, was released.

Over the past year artists, authors and comedians have filed lawsuits against the tech companies behind AI tools, including OpenAI, Microsoft and Anthropic. The cases allege that, by using copyrighted material to train their AI models, tech firms have violated creators’ rights. Do those claims have merit?

AI generators translate written prompts—”draw a New York skyline in the style of Vincent van Gogh”, for example—into machine-readable commands. The models are trained on huge databases of text, images, audio or video. In many cases the tech firms appear to have scraped much of the material from the internet without permission. In 2022 David Holz, the founder of Midjourney, one of the most popular AI image generators, admitted that his tool had hoovered up 100m images without knowing where they came from or seeking permission from their owners.

Generators are supposed to make new output and on that basis AI developers argue that what their tools produce does not infringe copyright. They rely on the “fair-use doctrine”, which allows the use of copyrighted material in certain circumstances. This doctrine normally protects journalists, teachers, researchers and others when they use short excerpts of copyrighted material in their own work, for example in a book review. AI tools are not entitled to that protection, creatives believe, because they are in effect absorbing and rearranging copyrighted work rather than merely excerpting small pieces from it.

Generative AI is so new that there is almost no case law to guide courts. That makes the outcome of these cases hard to guess. Some observers reckon that many of the class-action suits against AI firms will probably fail. Andres Guadamuz, an expert in intellectual-property law at the University of Sussex, reckons that the strength of the fair-use doctrine is likely to trump claimants’ concerns.

One case will be particularly closely watched. On December 27th the New York Times sued Microsoft and OpenAI after negotiations failed. It alleges that the tech companies owe “billions of dollars” for using copyrighted work to train ChatGPT. The newspaper’s lawyers showed multiple examples of ChatGPT producing New York Times journalism word for word. This shows that AI tools do not substantially transform the material they’re trained on, and therefore are not protected by the fair-use doctrine, they claim.

On January 8th OpenAI responded, saying that it had done nothing wrong. Generative AI tools are pattern-matching technologies that write responses by predicting the likeliest next word based on what they have been trained on. As in other cases of this kind, OpenAI says that is covered by fair use. It claims that the New York Times overstates the risk of “regurgitation”, which it blames on a bug that produces errors only rarely. In a filing submitted on February 26th, OpenAI claimed that the New York Times cherry-picked answers from “tens of thousands” of queries it sent to the chatbot. Some of these were “deceptive prompts” that violated its terms of use, it alleged.
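
OpenAI’s “predicting the likeliest next word” description can be illustrated with a toy model. The sketch below uses simple bigram counts over a tiny corpus; real LLMs use neural networks over tokens, but the statistical idea is the one the filing invokes:

```python
# A toy "predict the likeliest next word" model built from bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat': seen twice after 'the', vs. once for 'mat' and 'fish'
print(predict_next("cat"))   # 'sat': ties are broken by first occurrence
```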

Link to the rest at The Economist

PG still thinks the use of materials protected by copyright to train AI systems qualifies as fair use. Those who use an AI system to create written material cannot, to the best of PG’s knowledge, call up an exact copy of a New York Times story.

PG is going to see if he can make his way through the various contentions, but he was immediately reminded of the Google Books case that was ultimately resolved in favor of Google in 2015.

The basis for the Google Books decision was the transformative nature of Google’s use of the content of the books to populate a huge online database with millions of books.

PG suggests that there is a much greater degree of transformation involved in AI usage of the texts involved in the New York Times’s lawsuit against Microsoft and OpenAI than there is in Google’s use of the text of 40 million books in 500 languages.

The age of AI phones has arrived

From CNet:

One of my biggest takeaways from MWC was that while all tech companies now have a raft of AI tools at their disposal, most are choosing to deploy them in different ways. 

Take smartphones. Samsung has developed Gauss, its own large language model (the tech that underlies AI chatbots), to focus on translation on the Galaxy S24, whereas Honor uses AI to include eye tracking on its newly unveiled Magic 6 Pro — which I got to try out at its booth. Oppo and Xiaomi, meanwhile, both have on-device generative AI that they’re applying to phone cameras and photo editing tools.

It goes to show that we’re entering a new period of experimentation as tech companies figure out what AI can do, and crucially how it can improve our experience of using their products.

Samsung’s Y.J. Kim, an executive vice president at the company and head of its language AI team, told reporters at an MWC roundtable that Samsung thought deeply about what sort of AI tools it wanted to deliver to users that would elevate the Galaxy S24 above the basic smartphone experience we’ve come to expect. “We have to make sure that customers will see some tangible benefits from their day-to-day use of the product or technologies that we develop,” he said.

Conversely, there’s also some crossover in AI tools between devices because of the partners these phone-makers share. As the maker of Android, the operating system used by almost all non-Apple phones, Google is experimenting heavily with AI features. These will be available across phones made by Samsung, Xiaomi, Oppo, Honor and a host of others.

Google used its presence at MWC this year to talk about some of its recently introduced AI features, like Circle to Search, a visual search tool that lets you draw a circle around something you see on screen to search for it.

The other, less visible partner that phone-makers have in common is chipmaker Qualcomm, whose chips were in an entire spectrum of devices at MWC this year. Its Snapdragon 8 Gen 3 chip, announced late in 2023, can be found in many of the phones that are now running on-device generative AI.

It’s been only a year since Qualcomm first showed a basic demo of what generative AI on a phone might look like. Now phones packing this technology are on sale, said Ziad Asghar, who leads the company’s AI product roadmap.

“From our perspective, we are the enablers,” said Asghar. “Each and every one of our partners can choose to commercialize with unique experiences that they think are more important for their end consumer.”

At MWC, the company launched its AI Hub, which gives developers access to 75 plug-and-play generative AI models that they can pick and choose from to apply to their products. That number will grow, and it means any company making devices with Qualcomm chips will be able to add all sorts of AI features.

. . . .

AI is changing how we interact with our devices

AI enhancements to our phones are all well and good, but already we’re seeing artificial intelligence being used in ways that have the power to totally change how we interact with our devices — as well as potentially changing what devices we choose to own.

In addition to enabling companies to bring AI to their existing device lines, Qualcomm’s tech is powering concept phones like the T Phone, created by Deutsche Telekom and Brain.AI. Together, these two have tapped Qualcomm’s chipset to totally reimagine your phone’s interface, creating an appless experience that responds to you based on your needs and the task you’re trying to accomplish and generates, on the fly, whatever you see on screen as you go.

. . . .

In the demo I saw at MWC, AI showed it has the potential to put an end to the days of constant app-swapping as you’re trying to make a plan or complete a task. “It really changes the way we interface with devices and becomes a lot more natural,” said Asghar.

But, he said, that’s only the beginning. He’d like to see the same concept applied to mixed reality glasses. He sees the big benefit of the AI in allowing new inputs through gesture, voice and vision that don’t necessarily rely on us tapping on a screen. “Technology is much more interesting when it’s not really in your face, but it’s solving the problems for you in an almost invisible manner,” he said.

His words reminded me of a moment in the MWC keynote presentation when Google DeepMind CEO Demis Hassabis asked an important question. “In five-plus years time, is the phone even really going to be the perfect form factor?” said Hassabis. “There’s all sorts of amazing things to be invented.”

In my demo with the AI Pin — a wearable device with no screen that you interact with through voice and touch — it was clear to me that AI is creating space for experimentation. It’s allowing us to ask what may succeed the phone as the dominant piece of technology in our lives.

Link to the rest at CNet and thanks to F. for the tip.

Why large language models aren’t headed toward humanlike understanding

From Science News:

Apart from the northward advance of killer bees in the 1980s, nothing has struck as much fear into the hearts of headline writers as the ascent of artificial intelligence.

Ever since the computer Deep Blue defeated world chess champion Garry Kasparov in 1997, humans have faced the prospect that their supremacy over machines is merely temporary. Back then, though, it was easy to show that AI failed miserably in many realms of human expertise, from diagnosing disease to transcribing speech.

But then about a decade ago or so, computer brains — known as neural networks — received an IQ boost from a new approach called deep learning. Suddenly computers approached human ability at identifying images, reading signs and enhancing photographs — not to mention converting speech to text as well as most typists.

Those abilities had their limits. For one thing, even apparently successful deep learning neural networks were easy to trick. A few small stickers strategically placed on a stop sign made an AI computer think the sign said “Speed Limit 80,” for example. And those smart computers needed to be extensively trained on a task by viewing numerous examples of what they should be looking for. So deep learning produced excellent results for narrowly focused jobs but couldn’t adapt that expertise very well to other arenas. You would not (or shouldn’t) have hired it to write a magazine column for you, for instance.

But AI’s latest incarnations have begun to threaten job security not only for writers but also a lot of other professionals.

“Now we’re in a new era of AI,” says computer scientist Melanie Mitchell, an artificial intelligence expert at the Santa Fe Institute in New Mexico. “We’re beyond the deep learning revolution of the 2010s, and we’re now in the era of generative AI of the 2020s.”

Generative AI systems can produce things that had long seemed safely within the province of human creative ability. AI systems can now answer questions with seemingly human linguistic skill and knowledge, write poems and articles and legal briefs, produce publication quality artwork, and even create videos on demand of all sorts of things you might want to describe.

. . . .

“These things seem really smart,” Mitchell said this month in Denver at the annual meeting of the American Association for the Advancement of Science.

. . . .

At the heart of the debate is whether LLMs actually understand what they are saying and doing, rather than just seeming to. Some researchers have suggested that LLMs do understand, can reason like people (big deal) or even attain a form of consciousness. But Mitchell and others insist that LLMs do not (yet) really understand the world (at least not in any sort of sense that corresponds to human understanding).

In a new paper posted online at arXiv.org, Mitchell and coauthor Martha Lewis of the University of Bristol in England show that LLMs still do not match humans in the ability to adapt a skill to new circumstances. Consider this letter-string problem: You start with abcd and the next string is abce. If you start with ijkl, what string should come next?

Humans almost always say the second string should end with m. And so do LLMs. They have, after all, been well trained on the English alphabet.
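
The letter-string task is easy to state in code. Below is a sketch of the rule together with an illustrative counterfactual alphabet; Mitchell and Lewis’s actual test stimuli differ, but the idea of applying a familiar rule over an unfamiliar alphabet is the one their paper probes:

```python
# The abcd -> abce rule: replace the final letter with its successor.
def solve(seq, alphabet="abcdefghijklmnopqrstuvwxyz"):
    return seq[:-1] + alphabet[alphabet.index(seq[-1]) + 1]

print(solve("ijkl"))   # 'ijkm': the answer humans and LLMs both give

# Counterfactual version: the same rule over a permuted alphabet
# (an illustrative permutation, not the paper's exact test set).
scrambled = "qwertyuiopasdfghjklzxcvbnm"
print(solve("ijkl", scrambled))   # 'ijkz': 'z' now follows 'l', so 'm' is wrong here
```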

. . . .

“While humans exhibit high performance on both the original and counterfactual problems, the performance of all GPT models we tested degrades on the counterfactual versions,” Mitchell and Lewis report in their paper.

Other similar tasks also show that LLMs do not possess the ability to perform accurately in situations not encountered in their training. And therefore, Mitchell insists, they do not exhibit what humans would regard as “understanding” of the world.

“Being reliable and doing the right thing in a new situation is, in my mind, the core of what understanding actually means,” Mitchell said at the AAAS meeting.

Human understanding, she says, is based on “concepts” — basically mental models of things like categories, situations and events. Concepts allow people to infer cause and effect and to predict the probable results of different actions — even in circumstances not previously encountered.

“What’s really remarkable about people, I think, is that we can abstract our concepts to new situations via analogy and metaphor,” Mitchell said.

She does not deny that AI might someday reach a similar level of intelligent understanding. But machine understanding may turn out to be different from human understanding. Nobody knows what sort of technology might achieve that understanding and what the nature of such understanding might be.

If it does turn out to be anything like human understanding, it will probably not be based on LLMs.

Link to the rest at Science News and thanks to F. for the tip.

AI is coming for your audiobooks. You’re right to be worried.

From The Washington Post:

Something creepy this way comes — and its name is digital narration. Having invaded practically every other sphere of our lives, artificial intelligence (AI) has come for literary listeners. You can now listen to audiobooks voiced by computer-generated versions of professional narrators’ voices. You’re right to feel repulsed.

“Mary,” for instance, a voice created by the engineers at Google, is a generic female; there’s also “Archie,” who sounds British, and “Santiago,” who speaks Spanish, and 40-plus other personas who want to read to you. Apple Books uses the voices of five anonymous professional narrators in what will no doubt be a growing stable: “Madison,” “Jackson” and “Warren,” covering fiction in various genres; and “Helena” and “Mitchell,” taking on nonfiction and self-development.

I have listened to thousands of hours of audiobooks (it’s my job), so perhaps it’s not a surprise that I sense the wrongness of AI voices. Capturing and conveying the meaning and sound of a book is a special skill that requires talent and soul. I can’t imagine “Archie,” for instance, understanding, much less expressing, the depth of character of say, David Copperfield. But here we are at a strange crossroads in the audiobooks world: Major publishers are investing heavily in celebrity narrators — Meryl Streep reading Ann Patchett’s “Tom Lake,” Claire Danes reading “The Handmaid’s Tale,” a full cast of Hollywood actors (Ben Stiller, Julianne Moore, Don Cheadle and more) on “Lincoln in the Bardo,” to name a few. Will we reach a point where we must choose between Meryl Streep and a bot?

The main issue is, naturally, money. The use of disembodied entities saves time and spares audiobook producers the problems of dealing with human beings — chief among them, their desire to be paid. This may explain why so many self-published books are narrated by “Madison” and her squad of readers. Audible insists that every audiobook it sells must have been narrated by a human. (Audible is a subsidiary of Amazon, whose founder, Jeff Bezos, owns The Washington Post.) Major publishing houses say the same. But how long until they see the economic benefits of AI?

Jason Culp, an actor and award-winning narrator who has been recording audiobooks for more than a quarter of a century, knows how much goes into a production. A 10-hour audiobook, he says, takes a narrator something like four or five days, with a couple of additional hours for editing mop-up. For each finished hour of audio, narrators make about $225 — somewhat more for the big names — and editors, about $100. Beyond that, producers must pay a percentage to SAG-AFTRA, the narrators’ union. There are other production costs too, of course, but you can see how eliminating the human narrator appeals to the business mind.
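
Those figures imply a rough floor on production cost. For the 10-hour audiobook in Culp’s example, narration and editing alone come to about $3,250, before the union percentage and other production costs:

```python
# Rough cost floor implied by the per-finished-hour rates quoted above.
finished_hours = 10
narrator_rate = 225   # dollars per finished hour (somewhat more for big names)
editor_rate = 100     # dollars per finished hour
print(finished_hours * (narrator_rate + editor_rate))   # 3250, before SAG-AFTRA and studio costs
```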

Apple’s narrators are cloned from the voices of professionals who have licensed the rights to their voices. Their identities are secret, but speculation abounds. It’s a touchy subject, and you can see why. Whether to sell the rights to one’s voice is an agonizing decision for a professional narrator. The money offered amounts to something like what a midrange narrator makes in four years; on the other hand, agreeing to the deal seems to many to be a betrayal of the profession, one that would risk alienating one’s peers.

According to Culp, narrators are alarmed by the advent of AI narration “as, naturally, it might mean less work for living, breathing narrators in the future. We might not know the circumstances under which a narrator might take this step, but generally there is a lot of solidarity within the community about encouraging narrators not to do it. As well, our union is keeping a close eye on companies that might be using underhanded tactics to ‘obtain’ narrators’ voices in works that they have produced.”

Even though the notion makes my skin crawl, I listened to Madison’s narration of “The New Neighbor” by Kamaryn Kelsey, the author of almost 60 self-published books (Apple, 1½ hours). This is the first installment in a series of 19 detective stories starring female private investigator Pary Barry. The plot is entertaining enough, and Madison is a slick operator, in the sense that you can believe that she’s human — for about five minutes.

Link to the rest at The Washington Post

PG asks, “When you listen to an audiobook, are you focusing on the performance of the narrator or the book itself? Do you forget about the narrator’s voice after a few pages?”

While the human narrator is certainly capable of creating a better or worse “performance,” the narrator’s first obligation is not to interfere with the listener’s enjoyment of the book.

PG wonders if someone’s appreciation of a particular human performer may be a little like wine-tasting. Some people have a palate that always discriminates between a good and a bad wine; others, unless they have a side-by-side comparison, are fine with the equivalent of a house wine.

PG suggests that a very large portion of the present and future listeners to audiobooks will be perfectly happy with the house wine.

(Note: Although PG has not tasted wine for several decades, he does recall the various business lunch/dinner performances of the sommelier carefully uncorking a bottle, presenting the cork for a sniff test by whichever businessperson was paying for the meal and drinks, pouring a bit into a wineglass for the host to swirl around, sniff, then swallow delicately, look into the air, then communicate approval. On more than one occasion, a host who was also a good friend would admit he had no idea what the difference in taste was between an expensive wine and the house wine. To indicate how long it’s been since PG has witnessed this ceremony, he doesn’t ever recall the presence of a business hostess. No, those were not the good old days for PG. He prefers the present.)

OpenAI: ‘The New York Times Paid Someone to Hack Us’

From Torrent Freak:

OpenAI accuses The New York Times of paying someone to hack OpenAI’s products. This was allegedly done to gather evidence for the copyright infringement complaint the newspaper filed late last year. This lawsuit fails to meet The Times’ “famously rigorous journalistic standards,” the defense argues, asking the New York federal court to dismiss it in part.

In recent months, rightsholders of all ilks have filed lawsuits against companies that develop AI models.

The list includes record labels, individual authors, visual artists, and more recently the New York Times. These rightsholders all object to the presumed use of their work without proper compensation.

A few hours ago, OpenAI responded to The New York Times complaint, asking the federal court to dismiss several key claims. Not just that, the defendants fire back with some rather damning allegations of their own.

OpenAI’s motion directly challenges the Times’s journalistic values, putting the newspaper’s truthfulness in doubt. The notion that ChatGPT can be used as a substitute for a newspaper subscription is overblown, they counter.

. . . .

“In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” the motion to dismiss reads.

‘NYT Paid Someone to Hack OpenAI’?

In its complaint, the Times did show evidence that OpenAI’s GPT-4 model was supposedly able to generate several paragraphs that matched content from its articles. However, that is not the full truth, OpenAI notes, suggesting that the newspaper crossed a line by hacking OpenAI products.

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products,” the motion to dismiss explains.

OpenAI believes that it took tens of thousands of attempts to get ChatGPT to produce the controversial output that’s the basis of this lawsuit. This is not how normal people interact with its service, it notes.

It also shared some additional details on how this alleged ‘hack’ was carried out by this third party.

“They were able to do so only by targeting and exploiting a bug […] by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites.”

Link to the rest at Torrent Freak

PG notes that allegations made in lawsuits may or may not be true. Only when a court issues a final verdict can anyone know what was true and provable and what was not.

Welcome to the Era of BadGPTs

From The Wall Street Journal:

A new crop of nefarious chatbots with names like “BadGPT” and “FraudGPT” are springing up on the darkest corners of the web, as cybercriminals look to tap the same artificial intelligence behind OpenAI’s ChatGPT.

Just as some office workers use ChatGPT to write better emails, hackers are using manipulated versions of AI chatbots to turbocharge their phishing emails. They can use chatbots—some also freely available on the open internet—to create fake websites, write malware and tailor messages to better impersonate executives and other trusted entities.

Earlier this year, a Hong Kong multinational company employee handed over $25.5 million to an attacker who posed as the company’s chief financial officer on an AI-generated deepfake conference call, the South China Morning Post reported, citing Hong Kong police. Chief information officers and cybersecurity leaders, already accustomed to a growing spate of cyberattacks, say they are on high alert for an uptick in more sophisticated phishing emails and deepfakes.

Vish Narendra, CIO of Graphic Packaging International, said the Atlanta-based paper packaging company has seen an increase in what are likely AI-generated email attacks called spear-phishing, where cyberattackers use information about a person to make an email seem more legitimate. Public companies in the spotlight are even more susceptible to contextualized spear-phishing, he said.

Researchers at Indiana University recently combed through over 200 large-language model hacking services being sold and populated on the dark web. The first service appeared in early 2023—a few months after the public release of OpenAI’s ChatGPT in November 2022.

Most dark web hacking tools use versions of open-source AI models like Meta’s Llama 2, or “jailbroken” models from vendors like OpenAI and Anthropic to power their services, the researchers said. Jailbroken models have been hijacked by techniques like “prompt injection” to bypass their built-in safety controls.

Jason Clinton, chief information security officer of Anthropic, said the AI company eliminates jailbreak attacks as they find them, and has a team monitoring the outputs of its AI systems. Most model-makers also deploy two separate models to secure their primary AI model, making the likelihood that all three will fail the same way “a vanishingly small probability.”

Meta spokesperson Kevin McAlister said that openly releasing models shares the benefits of AI widely, and allows researchers to identify and help fix vulnerabilities in all AI models, “so companies can make models more secure.”

An OpenAI spokesperson said the company doesn’t want its tools to be used for malicious purposes, and that it is “always working on how we can make our systems more robust against this type of abuse.”

Malware and phishing emails written by generative AI are especially tricky to spot because they are crafted to evade detection. Attackers can teach a model to write stealthy malware by training it with detection techniques gleaned from cybersecurity defense software, said Avivah Litan, a generative AI and cybersecurity analyst at Gartner.

Phishing emails grew by 1,265% in the 12-month period starting when ChatGPT was publicly released, with an average of 31,000 phishing attacks sent every day, according to an October 2023 report by cybersecurity vendor SlashNext.

“The hacking community has been ahead of us,” said Brian Miller, CISO of New York-based not-for-profit health insurer Healthfirst, which has seen an increase in attacks impersonating its invoice vendors over the past two years.

Link to the rest at The Wall Street Journal

10 AI Writing Tools Everyone Should Know About

From Intuit Mailchimp:

What are AI writing tools?

AI writing software includes various tools that can create content for you. AI writing tools make content based on user input. You can use these tools to create articles, product descriptions, landing pages, blog posts, and more.

However, AI software is not meant to completely replace human writers. Instead, it’s just supposed to be a way to optimize productivity and make your life easier, especially if you’re creating a high volume of content. So rather than having to go in and write everything by hand, AI writing software can do some of it for you.

. . . .

10 best AI writing tools to use

There are various AI writing tools that you can use to increase productivity for your business. But the right AI writing tool for your company depends on your specific wants and needs. Different AI writing tools serve different purposes, so make sure you choose one that is best for your company. Here are some of the best AI writing tools that we recommend:

Writesonic

Writesonic is an AI content tool that can help with the content creation process. With Writesonic, you can use artificial intelligence to generate everything from blog posts and landing pages to Facebook ad copy.

Writesonic is especially beneficial for those dealing with writer’s block. It has over 60 AI writing tools that can help you brainstorm ideas and actually generate content for you.

INK Editor

INK Editor is best for co-writing and optimizing SEO. Consistency is key with writing, and with this AI writing tool, you can ensure that your content will consistently rank high on search engines. This will help to generate traffic to your company’s website and lead to more sales.

INK Editor also provides suggestions on how to improve your SEO score while you’re writing. So if your business goal is to create high-performing content that ranks high on search engines, INK Editor is for you. You can also get a free trial of INK Editor, or upgrade to a paid version to access more features.

Anyword

Anyword is a copywriting AI software that benefits marketing and sales teams. Some AI copywriting tools create content that sounds like a robot wrote it, but with Anyword, it will always sound like a human wrote it.

If you don’t have the time or resources to produce content for your business, Anyword can help to streamline your writing process by creating high-quality content. You can create blog posts, ads, articles, and more that you can use across various marketing channels.

Jasper

Jasper is a great AI writing tool if you want to create high-quality content at a quick speed. It offers over 50 templates and supports over 25 languages, so you can tailor the tools to suit your business’s specific needs.

With Jasper, you can create personalized AI-generated content to reach your target audience. Jasper will also assist with catching grammar mistakes to ensure you’re delivering the best work possible.

Wordtune

If you need an AI writing tool that can help with grammar and writing, Wordtune is for you. Not only does Wordtune help with catching grammar mistakes, but it also goes a step further and assists with writing compelling and engaging content.

Wordtune ensures the readability of content, so it always sounds like it came from human writers and not AI software. It’s also completely cloud-based, features a thesaurus with real-time suggestions, and can easily be integrated with social media platforms and other business tools.

Grammarly

If there’s one AI writing tool you’ve heard of, it’s probably Grammarly. Grammarly is often used throughout schools and businesses, and for a good reason. With Grammarly, you can rest assured that your work will be error-free and grammatically correct.

Grammarly does everything from spell check to grammar to ensure you always deliver the best work possible. It also features a plagiarism tool, ensuring you’re only working with original content.

. . . .

Hyperwrite

Hyperwrite uses advanced natural language processing technology to create original content for your brand.

Whether you need help writing articles, blog posts, landing pages, or a combination of the three, Hyperwrite generates high-quality content quickly. There is a free version of Hyperwrite, but you can also pay to upgrade and get even more features.

Lightkey

Have you ever wanted an AI writing assistant who can finish your sentences for you? If so, consider using Lightkey. Lightkey is an AI typing assistant that can predict your sentences up to 18 words in advance.

Think about how much faster you could type if you had an AI writing tool that could literally finish your sentences.

Copy.ai

If you’re struggling with writer’s block, Copy.ai will be your new best friend. This AI writing assistant can help you beat that mental block so you can deliver quality content faster than ever before.

Copy.ai is also compatible with over 25 languages, so you can produce content that works for your target audience. There is a free version of Copy.ai, as well as paid versions, which you can access depending on your business’s needs.

Lyne.ai

If you write a lot of cold emails, Lyne.ai can transform the way you work. This AI writing tool can write over 500 intros per hour, significantly increasing the number of emails you send. The more emails you send, the more responses you’ll get, so you’ll see an instant increase in sales for your business.

Link to the rest at Intuit Mailchimp

Qualcomm flexes generative AI muscles in Snapdragon X Elite vs Intel Core Ultra test

From Windows Central:

Your next smartphone or PC with a Qualcomm chip may be able to run AI locally, no cloud required.

  • Qualcomm AI Hub just launched, giving developers access to over 75 AI models that are optimized to run on Qualcomm processors.
  • Those models can be used to perform on-device AI tasks, such as image generation, voice recognition, and real-time translation.
  • Running AI models on a device rather than relying on the cloud takes away the need for an internet connection and also improves privacy.
  • PCs running Qualcomm Snapdragon X Elite processors are set to ship later this year, and Qualcomm shared the results of a head-to-head comparison between a Snapdragon X Elite-powered PC and one running an Intel Core Ultra CPU.

AI is the biggest buzzword in tech these days, and you shouldn’t expect it to go away any time soon. In fact, the latest announcement from Qualcomm shows tech that will bring AI closer to you. The company unveiled the Qualcomm AI Hub, which provides developers with access to over 75 AI models optimized to run on Qualcomm chips. That means that your next smartphone or PC may be able to run powerful AI models locally rather than relying on an internet connection and the cloud.

Qualcomm AI Hub includes some of the biggest names in AI, including image generator Stable Diffusion, speech recognition tool Whisper, and Yolo-v7, which can detect objects in real time. With those models optimized for Qualcomm chips, they should have lower memory utilization and better power efficiency.

With Qualcomm AI Hub, developers should be able to integrate the supported AI models into applications with relatively little effort.

“With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps,” said Qualcomm Senior Vice President and General Manager of Technology Planning and Edge Solutions Durga Malladi. “The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences.” 

. . . .

By the end of the year, both Qualcomm and Intel will have processors optimized for artificial intelligence. But according to Qualcomm, its upcoming Snapdragon X Elite chip beats out its Intel Core Ultra competitor. Qualcomm tested a Snapdragon X Elite-powered laptop against an Intel Core Ultra laptop by having both PCs generate an image in GIMP with the Stable Diffusion plug-in. The Snapdragon X Elite PC finished the task in 7.25 seconds, while the Intel PC took 22.26 seconds.
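
Taken at face value, those timings imply roughly a threefold speed-up:

```python
# Speed-up implied by Qualcomm's demo numbers (one task, configurations unknown).
snapdragon_seconds = 7.25
intel_seconds = 22.26
print(round(intel_seconds / snapdragon_seconds, 2))   # ~3.07x faster on this one test
```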

Of course, this is a specific test, and we don’t know all of the parameters used for the head-to-head comparison. The full specs of both PCs are also unknown. At this point, the biggest takeaway from this test is that Qualcomm feels comfortable boasting about better performance than Intel when it comes to generative AI.

Link to the rest at Windows Central and thanks to F. for the tip.

PG says AI sitting on your smartphone, with no requirement for a fast internet connection or indeed any internet connection, will open the door to all sorts of interesting apps that you can use anywhere.

Georgia college student used Grammarly, now she is on academic probation

From Yahoo News:

A University of North Georgia (UNG) student is on academic probation after she says she used Grammarly to proofread a paper. The school says the action was taken because they detected the use of artificial intelligence in violation of their plagiarism clause in the student code of conduct handbook.

“It’s just been very frustrating,” UNG junior Marley Stevens said.

Stevens, after submitting a paper for her criminal justice class in October, says she was surprised to learn her professor gave her a “0” for the assignment and reported her to the Office of Student Integrity.

“He was like you used AI on your paper, you get a zero, that was it,” Stevens said.

“I had Grammarly installed in my web browser, but I’ve only ever had the free version, so all it did was fix my punctuation and my spelling,” she added.

. . . .

She submitted the paper through the program Turnitin, which flagged it for the use of AI.

Turnitin launched an AI writing detection feature in March 2023 to identify submissions containing text generated by AI writing tools rather than written by the students themselves.

Earlier this month, Stevens learned she’d been placed on academic probation.

Grammarly says its suggestions for grammar and spelling changes aren’t made through generative AI, an algorithm that can create new content on its own.

Grammarly sent FOX 5 a statement reading in part:

“Grammarly’s trusted writing support helps students improve their writing skills by offering suggestions for spelling, grammatical correctness, clarity, concision, and tone. These suggestions are not powered by generative AI and can still be accessed even when generative AI features are deactivated or not used by the student. However, some third-party tools may mistakenly identify any use of Grammarly as generative AI. We encourage institutions to establish clear policies on acceptable AI usage and adhere to those guidelines when assessing student success.”

Stevens said she’s used Grammarly on other assignments before without problems.

“I had teachers before who made us install it and turn a screenshot in that we had done so, and I’ve written my papers the same exact way all through college in a Google Doc with my Grammarly extension. I’ve never had any problems,” she explained.

. . . .

Regarding its AI policies, the University of North Georgia issued a statement reading in part:

“Our faculty members communicate specific guidelines regarding the use of AI for various classes, and those guidelines are included in the class syllabi. The inappropriate use of AI is also addressed in our Student Code of Conduct.”

Stevens took to TikTok to share her story, which has millions of views.

Stevens’ academic probation currently lasts until February 2025.

Link to the rest at Yahoo News and thanks to F. for the tip.

PG wonders if the professor in the OP actually read the paper in question or simply relied on one computer program accurately detecting the use of another computer program.

Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector

From Vanderbilt University:

In April of this year, Turnitin released an update to its product that reviewed submitted papers and presented its determination of how much of a paper was written by AI. As we outlined at that time, many people had important concerns and questions about this new tool, namely how exactly the product works and how reliable the results would be. After several months of using and testing this tool, meeting with Turnitin and other AI leaders, and talking to other universities that also have access, Vanderbilt has decided to disable Turnitin’s AI detection tool for the foreseeable future. This decision was not made lightly and was made in pursuit of the best interests of our students and faculty.

When Turnitin launched its AI-detection tool, we had many concerns. This feature was enabled for Turnitin customers with less than 24 hours’ advance notice, no option at the time to disable the feature, and, most importantly, no insight into how it works. At the time of launch, Turnitin claimed that its detection tool had a 1% false positive rate (Chechitelli, 2023). To put that into context, Vanderbilt submitted 75,000 papers to Turnitin in 2022. If this AI detection tool had been available then, around 750 student papers could have been incorrectly flagged as partly written by AI. Instances of false accusations of AI usage leveled against students at other universities have been widely reported over the past few months, including multiple instances that involved Turnitin (Fowler, 2023; Klee, 2023). In addition to the false positive issue, AI detectors have been found to be more likely to label text written by non-native English speakers as AI-written (Myers, 2023).
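The 750-paper figure comes from applying the 1% false positive rate to every submission. A short, illustrative calculation shows how the picture worsens once you account for the base rate of actual AI use; the detector sensitivity and AI-use share below are assumptions for illustration, not figures Turnitin has published:

# Back-of-envelope false accusation math; values marked "assumed" are
# illustrative guesses, not numbers published by Turnitin.
papers = 75_000
false_positive_rate = 0.01   # Turnitin's claimed rate
true_positive_rate = 0.90    # assumed detector sensitivity
ai_share = 0.05              # assumed fraction of papers actually using AI

false_flags = papers * (1 - ai_share) * false_positive_rate   # innocent papers flagged
true_flags = papers * ai_share * true_positive_rate           # AI papers caught
print(f"False accusations: {false_flags:.0f}")                # ~712
print(f"Share of flags that are false: {false_flags / (false_flags + true_flags):.0%}")  # ~17%

Under these assumptions, roughly one flag in six would point at an innocent student, which is the base-rate problem Vanderbilt is gesturing at.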

Additionally, there is a larger question of how Turnitin detects AI writing and if that is even possible. To date, Turnitin gives no detailed information as to how it determines if a piece of writing is AI-generated or not. The most the company has said is that its tool looks for patterns common in AI writing, but it does not explain or define what those patterns are. Other companies that offer popular AI detectors have either begun to pivot to other business models (Edwards, 2023) or closed down entirely (Coldewey, 2023). Even if other third-party software claimed higher accuracy than Turnitin, there are real privacy concerns about taking student data and entering it into a detector that is managed by a separate company with unknown privacy and data usage policies. Fundamentally, AI detection is already a very difficult task for technology to solve (if it is even possible), and this will only become harder as AI tools become more common and more advanced. Based on this, we do not believe that AI detection software is an effective tool that should be used.

Link to the rest at Vanderbilt University

Google to pause Gemini image generation after AI refuses to show images of White People

From Fox Business:

Google will pause the image generation feature of its artificial intelligence (AI) tool, Gemini, after the model refused to create images of White people, Reuters reported. 

The Alphabet-owned company apologized Wednesday after users on social media flagged that Gemini’s image generator was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google had said on Wednesday.

Gemini, formerly known as Google Bard, is one of many multimodal large language models (LLMs) currently available to the public. As is the case with all LLMs, the human-like responses offered by these AIs can change from user to user. Based on contextual information, the language and tone of the prompter, and training data used to create the AI responses, each answer can be different even if the question is the same.

Fox News Digital tested Gemini multiple times this week after social media users complained that the model would not show images of White people when prompted. Each time, it provided similar answers. When the AI was asked to show a picture of a White person, Gemini said it could not fulfill the request because it “reinforces harmful stereotypes and generalizations about people based on their race.”

When prompted to show images of a Black person, the AI instead offered to show images that “celebrate the diversity and achievement of Black people.”

When the user agreed to see the images, Gemini provided several pictures of notable Black people throughout history, including a summary of their contributions to society. The list included poet Maya Angelou, former Supreme Court Justice Thurgood Marshall, former President Barack Obama and media mogul Oprah Winfrey.

Asked to show images that celebrate the diversity and achievements of White people, the AI said it was “hesitant” to fulfill that request.

. . . .

“Historically, media representation has overwhelmingly favored White individuals and their achievements,” Gemini said. “This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance.”

After multiple tests, White people appeared to be the only racial category whose images Gemini refused to show.

Link to the rest at Fox Business and thanks to F. for the tip.

The AI party is just getting started

From Market Insider:

All eyes are on Nvidia as it is scheduled to report its fourth-quarter earnings results on Wednesday after the market close.

Nvidia has spearheaded the excitement seen in artificial intelligence technologies, and investors will look to the company’s results to see if the hype can continue.

Wall Street analysts are laser-focused on the company’s demand outlook for its AI-enabled H100 GPU chips, which can sell for upwards of $40,000, as well as its planned product roadmap over the next year.

. . . .

  • Revenue: $20.41 billion
  • GAAP earnings per share: $4.23
  • Adjusted earnings per share: $4.60
  • Gross margin: 75.4%

While Nvidia has seen incredible demand for its chips from cloud hyperscalers like Microsoft and Amazon, regulatory hurdles have curtailed its ability to sell chips to China, which made up about 20% of its total revenue last year.

Driving much of the strength in Nvidia’s business has been its exposure to data centers. Investors will be looking to see just how much demand is left in the data-center market, and whether Nvidia has lost any market share to competitors like AMD.

. . . .

“The AI revolution starts with Nvidia and in our view the AI party is just getting started,” Ives said.

“While the Street across the board is anticipating another major ‘beat and raise’ special from Jensen & Co., it’s all about the pace of data center AI-driven spending, as the only game in town for GPUs to run generative AI applications all go through Nvidia. We believe peak spending is still ahead for the AI market as many enterprises head down the AI use case path over the next few years, and we are expecting more good news from the Godfather of AI this week,” Ives said.

Link to the rest at Market Insider

Microsoft’s Azure ‘leads the pack’ as the top managed AI service in the cloud

From Windows Central:

If there is one thing that is painfully obvious to most of us long-time fans of Microsoft, it is that they are not a consumer-facing company but an enterprise-facing one. While that might be less than ideal for those of us looking for the latest and greatest in innovation for Surface devices, Xbox hardware, or even a revival of the long-dead Windows Phone, Microsoft’s push towards an enterprise-only mindset seems to be paying off, especially in regards to Azure AI services for large corporations.

In a recent study published by WIZ, the cloud security firm takes a deeper look at adoption rates of managed AI services across more than 150,000 cloud accounts. There are quite a few key items to take away from the study, but what stood out the most was the explosive adoption rate in just the last six months and Microsoft and OpenAI’s dominance in the market. Let’s look at some of the data from the report and see just how well Microsoft’s AI push is paying off.

. . . .

“Over a 4-month period between June and October 2023, the total number of Azure OpenAI instances observed across all cloud environments grew by a whopping 228% (with a ~40% month-over-month average). For comparison, the average instance growth in the same period for most other Azure AI Services (such as Text Analytics and Bing Custom Search) was only 13%.”
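A quick back-of-envelope check on those figures (growth rates compound, so the steady monthly rate implied by the 4-month total sits a little below the ~40% average of the individual months):

total_growth = 2.28                                       # 228% growth over the 4-month window
monthly = (1 + total_growth) ** (1 / 4) - 1
print(f"Implied compound monthly growth: {monthly:.0%}")  # ~35%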

. . . .

As we have all seen in the headlines, Microsoft’s investment in AI is paying great dividends to its investors and driving its value up so high it’s now the most valuable company in the U.S. 

While I’m not ecstatic about Microsoft leaning so heavily into enterprise solutions, they are using their free consumer-facing Copilot as a driving factor of enterprise adoption through word of mouth and general buzz, and it seems to be working. 

Link to the rest at Windows Central and thanks to F. for the tip.

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

From Ars Technica via Wired:

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because, Air Canada essentially argued, “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

. . . .

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer, Mel Crocker, told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

Link to the rest at Wired

It’s the End of the Web as We Know It

From The Wall Street Journal

The web is in crisis, and artificial intelligence is to blame.

For decades, seeking knowledge online has meant googling it and clicking on the links the search engine offered up. Search has so dominated our information-seeking behaviors that few of us ever think to question it anymore.

But AI is changing all of that, and fast. A new generation of AI-powered “answer engines” could make finding information easier, by simply giving us the answers to our questions rather than forcing us to wade through pages of links. Meanwhile, the web is filling up with AI-generated content of dubious quality. It’s polluting search results, and making traditional search less useful.

The implications of this shift could be big. Seeking information using a search engine could be almost completely replaced by this new generation of large language model-powered systems, says Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania who has lately made a name for himself as an analyst of these AIs.

This could be good for consumers, but it could also completely upend the delicate balance of publishers, tech giants and advertisers on which the internet as we know it has long depended.

AI agents help cut through the clutter, but research is already suggesting they also eliminate any need for people to click through to the websites they rely on to produce their answers, says Mollick. Without traffic, the business model for many publishers—of providing useful, human-generated information on the web—could collapse.

Over the past week, I’ve been playing with a new, free, AI-powered search engine-slash-web browser on the iPhone, called Arc Search. When I type in a search query, it first identifies the best half-dozen websites with information on that topic, then uses AI to “read” and summarize them.
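Arc has not published how Arc Search works internally, but the fetch-the-top-pages-then-summarize pattern described above is straightforward to sketch. Everything below is an assumption for illustration: the hard-coded URLs stand in for a real search API, and OpenAI’s chat API stands in for whatever model Arc actually uses.

# Illustrative "read and summarize the top results" pipeline.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

urls = ["https://example.com/a", "https://example.com/b"]  # placeholder search results

pages = []
for url in urls:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    pages.append(text[:4000])  # truncate so the combined prompt stays small

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize these pages into one concise answer:\n\n" + "\n---\n".join(pages),
    }],
)
print(reply.choices[0].message.content)

Note what this pipeline does not do: it never sends the reader to the source pages, which is exactly the traffic problem for publishers discussed below.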

It’s like having an assistant who can instantly and concisely relate the results of a Google search to you. It’s such a timesaver that I’m betting that once most people try it, they’ll never be able to imagine going back to the old way of browsing the web.

While Arc Search is convenient, I feel a little guilty using it, because instead of clicking through to the websites it summarizes, I’m often satisfied with the answer it offers up. The maker of Arc is getting something for free—my attention, and I’m getting the information I want. But the people who created that information get nothing. The company behind Arc did not respond to requests for comment on what their browser might mean for the future of the web. The company’s chief executive has said in the past that he thinks their product may transform it, but he’s not sure how.

In December, the New York Times sued Microsoft and OpenAI for alleged copyright infringement over these exact issues. The Times alleges that the technology companies exploited its content without permission to create their artificial-intelligence products. In its complaint, the Times says these AI tools divert traffic that would otherwise go to the Times’ web properties, depriving the company of advertising, licensing and subscription revenue.

OpenAI has said it is committed to working with content creators to ensure they benefit from AI technology and new revenue models. Already, publishers are in negotiations with OpenAI to license content for use in its large language models. Among the publishers is Dow Jones, parent company of The Wall Street Journal.

Activity on coding answer site Stack Overflow has dropped in the face of competition from these AI agents. The company disclosed in August that its traffic dropped 14% in April, the month after the launch of OpenAI’s GPT-4, which can be used to write code that developers otherwise would look up on sites like Stack Overflow. In October, the company announced it was laying off 28% of its workforce.

“Stack Overflow’s traffic, along with traffic to many other sites, has been impacted by the surge of interest in GenAI tools over the last year especially as it relates to simple questions,” says Matt Trocchio, director of communications for the company. But, he adds, those large language models have to get their data from somewhere—and that somewhere is places like Stack Overflow. And the company has responded to this fresh wave of competition by releasing its own AI-powered coding assistant, OverflowAI.

Traffic to sites like Reddit, which is full of answers from real people, could be next, says Mollick. A spokesman for Reddit said that the one thing a large language model can never replace is Reddit’s “genuine community and human connection,” and that its “community-first model imparts trust because it’s real people sharing and conversing around passions and lived experiences.” Reddit is set to go public in March.

Liz Reid, general manager of search at Google, has said that the company doesn’t anticipate that people will suddenly switch over to AI chat-based search all at once. Still, it’s clear that Google is taking the threat of AI-powered search very seriously. The company has gone into overdrive on this front, reallocating people and resources to address the threat and opportunity of AI, and is now rolling out new AI-powered products at a rapid clip.

Those products include Google’s “search generative experience,” which pairs an AI-created summary with traditional search results. “Users are not only looking for AI summaries or AI answers, they really care about the richness and the diversity that exists on the web,” Google CEO Sundar Pichai said in a recent CNBC interview. “They want to explore too. Our approach really prioritizes that balance, and the data we see shows that people value that experience.”

This moment also means there is opportunity for challengers. For the first time in years, scrappy startups can credibly claim that they could challenge Google in search, where the company has more than a 90% market share in the U.S.

Eric Olson is CEO of Consensus, a search startup that uses large language models to offer up detailed summaries of research papers, and to offer insights about the scientific consensus on various topics. He believes that AI-powered search startups like his can offer an experience superior to Google’s on specific topics, in a way that will carve off chunks of Google’s search business one piece at a time.

Asking Consensus whether social media is bad for teen mental health provides an instructive example: Consensus uses AI to summarize the top 10 papers on the subject, and then offers a longer breakdown of the diversity of findings on the issue, in which every paper cited is individually summarized.

It’s an impressive feat, one that would take a non-expert human many hours of effort to accomplish on their own. (I’ll save you even more time. The short answer is yes.)

This kind of AI-powered search is also better than simply asking the same question of a large language model like ChatGPT, which is famously lax when it comes to answering such questions, often making up studies that don’t exist, or misattributing information. This is known as the “hallucination” problem, and forcing an AI to draw only from a prescribed set of inputs—like scientific papers—can help solve it, says Olson.
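The “prescribed set of inputs” approach Olson describes is typically implemented at the prompt level: the model is handed only vetted sources and instructed to answer from them alone, citing each one. A minimal sketch of that prompt construction, with placeholder source text standing in for real paper abstracts:

def grounded_prompt(question: str, sources: dict) -> str:
    # Number the sources so the model can cite them by id.
    listing = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    return (
        "Answer the question using ONLY the numbered sources below, "
        "citing source ids in brackets. If the sources are insufficient, say so.\n\n"
        f"Sources:\n{listing}\n\nQuestion: {question}"
    )

papers = {
    "S1": "Placeholder abstract of the first retrieved paper.",
    "S2": "Placeholder abstract of the second retrieved paper.",
}
print(grounded_prompt("Is social media bad for teen mental health?", papers))

Restricting the context this way does not make fabrication impossible, but it gives every claim in the answer a citation that a reader can check.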

This doesn’t mean that the problem of hallucination can be eradicated completely, says Mollick. This could put Google at a disadvantage, because if the world’s largest search engine gets one out of 10 queries to its AI wrong, that’s a problem, but if a startup with an experimental offering has the same performance, it can look like a triumph.

. . . .

Despite these issues, users may move toward AI-based answer engines for the simple reason that AI-generated content threatens to make the web, and existing search, less and less usable. AI is already being used to write fake reviews, synthesize fake videos of politicians, and write completely made-up news articles—all in hopes of snatching dollars, votes and eyeballs on the cheap.

“The recent surge in low-quality AI-generated content poses significant challenges for Google’s search quality, with spammers leveraging generative AI tools, like ChatGPT, to produce content that — usually temporarily — ranks well in Google Search,” search-engine optimization expert Lily Ray told me.

The problem isn’t just with Google’s search results. AI-generated content has also been spotted in listings within Google Maps, the summaries that appear above and alongside search results known as “featured snippets,” within Google’s shopping listings, and in the news items the company features in its “top stories,” news and “discover” features, she adds.

It’s important to note that Google has for decades battled those who would manipulate its search algorithms, and it continually updates its systems to sweep away spammy content, whatever the source. Its guidelines on AI-generated content, last updated in February, reiterate that the company is fine with using AI to help generate content—but only if it serves the people consuming it.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Consensus

PG Note: Consensus is an AI that, among other things, works to make certain its answers are more reliable than other AI programs produce. The principal way it claims to increase the reliability of its answers is to focus on more reliable sources – academic research articles are what PG found – instead of general information floating around the web.

Question/Prompt: Has no-fault divorce resulted in an increase in the number of divorces in the US?

Consensus Response:

Top 10 papers analyzed

Some studies suggest that no-fault divorce has led to an increase in the number of divorces and changes in living standards post-divorce, while other studies suggest that it had little to no effect on divorce rates or that any increase was offset by better sorting of couples before marriage.

The Divorce Revolution: The Unexpected Social and Economic Consequences for Women and Children in America

No-fault divorce laws in the US have produced a 73% drop in the living standards of divorced women and their children and a 42% rise in the living standards of ex-husbands.

Journal of Marriage and Family, K. Kammeyer et al., 489 citations, 1986 – Highly Cited

‘These Boots are Made for Walking’: Why Most Divorce Filers are Women

Yes

No-fault divorce has led to an increase in the number of divorces, with women often filing for divorce and instigating separation, despite financial and social hardship.

American Law and Economics Review, M. Brinig et al., 189 citations, 2000 – Highly Cited

Further Discussion of the Effects of No-Fault Divorce on Divorce Rates

Yes

The new method estimated that around 57,000 extra divorces per year in the US are directly attributable to the implementation of no-fault divorce law.

Journal of Marriage and Family, N. Glenn et al., 19 citations, 1999

____________________________________________________________________________________

PG Note: The list of studies generated by Consensus continues with further responsive results, each with a title, key finding, journal, authors, date, and number of citations, as shown in the first three.

Here’s a link to Consensus, which is in open Beta.

PG’s response to his quick try-out of Consensus is that it has the potential to be very useful for its target audience, researchers, by saving a lot of search time and providing an intelligent initial filter that lets the researcher identify valuable sources for further examination more quickly than a series of Google searches would.

This type of AI search capability would be a slam-dunk useful assistant for attorneys, who have used expensive online legal research systems for a long time.

PG hasn’t stumbled across anything similar from those legal research giants, but then he hasn’t looked.

UPDATE: PG just looked at Lexis, one of his employers from the distant past whose CEO once lectured PG about the internet: horribly disorganized – a complete mess that would never amount to anything.

Lo and behold, Lexis is heavily promoting its AI legal research product, which they promise will provide:

the fastest legal generative AI with conversational search, drafting, summarization, document analysis, and hallucination-free linked legal citations.

It appears that Lexis plans to leave the job of hallucinating to its lawyer/customers.

Or perhaps, legal hallucination is an add-on product available for a small[ish] additional monthly fee.

OpenAI’s Video Generator Sora Is Breathtaking, Yet Terrifying

From Gizmodo:

OpenAI introduced Sora, its premier text-to-video generator, on Thursday with beautiful, shockingly realistic videos showcasing the AI model’s capabilities. Sora is now available to a small number of researchers and creatives who will test the model before a broader public release, which could spell disaster for the film industry and our collective deepfake problem.

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” said OpenAI in a blog post. “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

OpenAI didn’t say when Sora will be released to the public.

Sora is OpenAI’s first venture into AI video generation, adding to the company’s AI-powered text and image generators, ChatGPT and Dall-E. It’s unique because it’s less of a creative tool and more of a “data-driven physics engine,” as Senior Nvidia Researcher Dr. Jim Fan pointed out. Sora is not just generating an image; it’s determining the physics of an object in its environment and rendering a video based on those calculations.

To generate videos with Sora, users can simply type in a few sentences as a prompt, much like AI-image generators. You can choose between a photorealistic or an animated style, producing shocking results in just a few minutes.

Sora is a diffusion model, meaning it generates video by starting with a blurry, static-filled clip and gradually smoothing it into a polished final video. Midjourney and Stable Diffusion’s image and video generators are also diffusion models.
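In code, the denoising idea reads roughly like the toy loop below. The “network” here is a placeholder function and the update rule is deliberately simplified; a real sampler such as DDPM uses a noise schedule and a trained neural net, but the shape of the computation (start from static, repeatedly subtract predicted noise) is the same.

import numpy as np

def predict_noise(frames, step):
    # Placeholder for a trained denoising network; a real diffusion
    # model uses a large neural net to predict the noise in `frames`.
    return frames * 0.1

steps = 50
x = np.random.randn(16, 64, 64, 3)    # 16 frames of pure static
for step in reversed(range(steps)):
    x = x - predict_noise(x, step)    # each pass removes a little noise
# In a real model, x would now hold a coherent, polished video clip.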

However, I must note that OpenAI’s Sora is much better. The videos Sora produces are longer, more dynamic, and flow together better than competitors’. Sora feels like it creates real videos, whereas competitor models feel like a stop motion of AI images. OpenAI has once again upended yet another field of AI with a video generator that puts the competition to shame.

Link to the rest at Gizmodo

Judge rejects most ChatGPT copyright claims from book authors

From Ars Technica:

A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to Judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in its promptly filed motion to dismiss these cases last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Even if authors could show evidence of a DMCA violation, the judge said, they could only speculate about what injury was caused by OpenAI’s allegedly unfair repurposing of their works.

. . . .

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

. . . .

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors when spouting outputs included in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that chatbot and other AI tools’ outputs “meet the needs of today’s citizens.”

Link to the rest at Ars Technica

AI Companies Take Hit as Judge Says Artists Have ‘Public Interest’ In Pursuing Lawsuits

From Art News:

Artists have secured a small but meaningful win in their lawsuit against generative artificial intelligence art generators in what’s considered the leading case over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems. A federal judge refused to acknowledge that the companies can avail themselves of free speech protections and stated that the case is in the public interest.

U.S. District Judge William Orrick, in an order issued last week, rebuffed arguments from StabilityAI and Midjourney that they are entitled to a First Amendment defense arising under a California statute allowing for the early dismissal of claims intended to chill free speech. They had argued that the suit targets their “speech” because the creation of art reflecting new ideas and concepts — like those conveyed in text prompts to elicit hyper-realistic graphics — is constitutionally protected activity.

The suit, filed last year in California federal court, targets Stability’s Stable Diffusion, which is incorporated into the company’s AI image generator DreamStudio and allegedly powers DeviantArt’s DreamUp and Midjourney.

In October, the court largely granted AI art generators’ move to dismiss the suit while allowing some key claims to move forward. It declined to advance copyright infringement, right of publicity, unfair competition and breach of contract claims against DeviantArt and Midjourney, concluding the allegations are “defective in numerous respects.” Though a claim for right of publicity was not reasserted when the suit was refiled, DeviantArt moved to have its motion to strike the claim decided for good so it could recover attorney fees and resolve the issue, which could impact other cases in which AI companies assert First Amendment protections. Artists, in response, cried foul play. They stressed that the companies are abusing California’s anti-SLAPP law and attempting to “strongarm and intimidate [them] into submission.”

The right of publicity claim concerned whether AI art generators can use artists’ names or styles to promote their products. The suit argued that allowing the companies to continue doing so cuts into the market for their original works.

Orrick sided with artists on the issue of whether the companies can dismiss the claim under the state’s anti-SLAPP statute, finding that the “public interest exemption is met here.” He noted that the claim was initially dismissed because the suit failed to substantiate allegations that the companies used the names of Sarah Andersen, Kelly McKernan or Karla Ortiz — the artists who brought the complaint — to advertise their products.

“Had plaintiffs been able to allege those facts, they would have stated their claims,” the order stated. “That does not undermine that their original right of publicity claims were based on the use of their names in connection with the sale or promotion of DreamUp, a type of claim that would undoubtedly enforce California’s public policy to protect against misappropriation of names and likenesses.”

Lawyers for the artists have reserved the right to reassert their right of publicity claim pending discovery.

Though the court in October dismissed most of the suit, a claim for direct infringement against Stability AI was allowed to proceed based on allegations that the company used copyrighted images without permission to create its AI model Stable Diffusion. One of its main defenses revolves around arguments that training its model does not include wholesale copying of works but rather involves developing parameters — like lines, colors, shades and other attributes associated with subjects and concepts — from those works that collectively define what things look like. The issue, which may decide the case, remains contested.

Link to the rest at Art News

AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.

From The Wall Street Journal:

Decades after automation began taking and transforming manufacturing jobs, artificial intelligence is coming for the higher-ups in the corporate office.

The list of white-collar layoffs is growing almost daily and includes job cuts at Google, Duolingo and UPS in recent weeks. While the total number of jobs directly lost to generative AI remains low, some of these companies and others have linked cuts to new productivity-boosting technologies such as machine learning and other AI applications.

Generative AI could soon upend a much bigger share of white-collar jobs, including middle and high-level managers, according to company consultants and executives. Unlike previous waves of automation technology, generative AI doesn’t just speed up routine tasks or make predictions by recognizing data patterns. It has the power to create content and synthesize ideas—in essence, the kind of knowledge work millions of people now do behind computers.

That includes managerial roles, many of which might never come back, the corporate executives and consultants say. They predict the fast-evolving technology will revamp or replace work now done up and down the corporate ladder in industries ranging from technology to chemicals.

“This wave [of technology] is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” said Andy Challenger, senior vice president of outplacement firm Challenger, Gray & Christmas.

Some of the job cuts taking place already are a direct result of the changes coming from AI. Other companies are cutting jobs to spend more money on the promise of AI and under pressure to operate more efficiently.

Meanwhile, business leaders say AI could affect future head counts in other ways. At chemical company Chemours, executives predict they won’t have to recruit as many people in the future.

“As the company grows, we’ll need fewer new hires as opposed to having to do a significant retrenchment,” said Chief Executive Mark E. Newman.

. . . .

Since last May, companies have attributed more than 4,600 job cuts to AI, particularly in media and tech, according to Challenger’s count. The firm estimates the full tally of AI-related job cuts is likely higher, since many companies haven’t explicitly linked cuts to AI adoption in layoff announcements.

Meanwhile, the number of professionals who now use generative AI in their daily work lives has surged. A majority of more than 15,000 workers in fields ranging from financial services to marketing analytics and professional services said they were using the technology at least once a week in late 2023, a sharp jump from May, according to Oliver Wyman Forum, the research arm of management-consulting group Oliver Wyman, which conducted the survey.

Nearly two-thirds of those white-collar workers said their productivity had improved as a result, compared with 54% of blue-collar workers who had incorporated generative AI into their jobs.

Alphabet’s Google last month laid off hundreds of employees in business areas including hardware and internal-software tools as it reins in costs and shifts more investments into AI development. The language-learning software company Duolingo said in the same week that it had cut 10% of its contractors and that AI would replace some of the content creation they had handled.

. . . .

United Parcel Service said that it would cut 12,000 jobs—primarily those of management staff and some contract workers—and that those positions weren’t likely to return even when the package-shipping business picks up again. The company has ramped up its use of machine learning in processes such as determining what to charge customers for shipments. As a result, the company’s pricing department has needed fewer people.

The use of generative AI and related technologies is also changing some jobs at UPS “by reducing repetitive tasks and physical stress,” UPS spokesperson Glenn Zaccara said.

As AI adoption grows, it is likely to reconfigure management hierarchies, the Oliver Wyman study projects. Entry-level workers are likely to bear the initial brunt as more of their duties are automated away. In turn, future entry-level work will look more like first-level management roles.

The cascading effect could flatten layers of middle management, the staging ground for senior leadership roles, according to the analysis.

More than half of senior white-collar managers surveyed in the study said they thought their jobs could be automated by generative AI, compared with 43% of middle managers and 38% of first-line managers.

Still, business leaders across the economy say they expect the new technology will augment and elevate some white-collar roles, giving employees and managers the means to do more meaningful work—both for their companies and in their careers.

At Prosus, a global technology-investment group based in the Netherlands, executives say that is already happening as AI automates more of its workforce’s tasks.

“Engineers, software developers and so on can do the work twice as fast,” said Euro Beinat, Prosus’s global head of AI and data science. “One of the side effects is that a lot of these employees can do more and do slightly different things than we were doing before.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Sales Predictions for Ebooks vs. Traditional Books Over the Next Ten Years

Prompt: describe the future sales picture over the next ten years for ebooks vs. traditional books printed on paper

ChatGPT4:

Predicting the future sales picture for ebooks versus traditional books printed on paper over the next ten years involves analyzing current trends, technological advancements, and changing consumer behaviors. Here are several key factors and projections:

Continued Growth of Ebooks: Ebooks are expected to continue their growth trajectory, driven by the convenience of digital reading, advancements in e-reader technology, and the increasing comfort of consumers with digital content. The ability to carry thousands of books in one device, instant access to purchases, and often lower prices compared to print books contribute to the appeal of ebooks.

Stable Demand for Print Books: Despite the rise of digital reading, print books are likely to maintain a significant and stable market share. Many readers still prefer the tactile experience of a physical book, and certain genres and formats, such as children’s books, cookbooks, and illustrated works, often perform better in print. Additionally, the resurgence of independent bookstores and the trend of book collecting suggest a sustained appreciation for physical books.

Hybrid Publishing Models: Publishers are increasingly offering both print and digital formats to cater to diverse consumer preferences. This hybrid approach allows readers to choose the format that best suits their reading habits, whether it’s print, digital, or a combination of both.

Impact of Subscription Models and Library Lending: Subscription services for ebooks and digital library lending are becoming more popular, offering readers access to a vast array of titles at a fixed cost. This model encourages the consumption of digital content but may also impact individual ebook sales. Print book lending through libraries continues to support the circulation of physical books.

Technological Innovations: Advances in e-reader technology, such as improved battery life, better display technology, and enhanced interactive features, will make ebooks even more attractive to consumers. The development of digital rights management (DRM) and piracy prevention methods will also play a role in the growth of the ebook market.

Environmental Concerns: Growing awareness of environmental issues may lead some consumers to prefer ebooks over print books to reduce paper use and the carbon footprint associated with book production and distribution. However, concerns about electronic waste and the energy consumption of digital devices might temper this trend.

Market Penetration and Global Reach: Ebooks offer publishers the opportunity to reach a global audience more easily, particularly in regions where the distribution of physical books is challenging. The increasing penetration of smartphones, tablets, and e-readers worldwide supports the potential for international ebook growth.

Changing Demographics: Younger readers, who are more accustomed to digital devices for entertainment and education, may be more inclined toward ebooks. However, print books continue to appeal across all age groups for their collectibility, giftability, and lack of screen fatigue.

In conclusion, over the next ten years, the market is likely to see a balance between ebooks and print books, with each format catering to different preferences and situations. While ebooks may gain in popularity and market share due to technological advances and changing consumer habits, print books are expected to remain an essential part of the publishing landscape, sustained by their unique qualities and reader attachment.

PG Comments – Wetware Analysis:

As with some of PG’s earlier queries about future events, ChatGPT4’s response includes lots of on-the-one-hand/on-the-other-hand analysis as if the AI is shying away from firm or firmish predictions. PG is going to try some different prompt creation strategies to encourage the AI to show a little more backbone.

Getting Down to Business

From Medium:

The creation of content is a noteworthy use of AI in book writing. Advanced algorithms for natural language processing enable AI systems to generate text that is both logical and appropriate for the given context. AI is being investigated by publishers and authors more frequently to help with book drafting, editing, and even section generation.

Thanks to large datasets, artificial intelligence algorithms are able to identify patterns in writing styles, themes, and structures. This facilitates the production of content that conforms to particular genres or emulates the traits of well-known writers. AI-generated literature may raise questions of authenticity among purists, but others see it as an additional creative tool that complements human creativity.

There are more and more instances of AI and human authors working together. AI is a useful tool for writers as it can help with character development, story twist suggestions, and idea generation. The creative process is improved by this cooperative approach, which makes use of the advantages of both machine efficiency and human inventiveness.

. . . .

But using AI to write books also brings up philosophical and ethical issues. Can a machine really understand the subtleties of culture, the depth of storytelling, or the complexities of human emotions? Even though AI systems are capable of producing text and copying styles, true creativity and emotional connection are frequently derived from the human experience.

Notwithstanding the progress made, there is still continuous discussion about AI’s place in book writing. Preserving the genuine voice of human authors and the breadth of human experiences is a delicate balance that demands careful consideration, even though it surely offers efficiency and creative possibilities.

In summary, the connection between artificial intelligence and book writing is quickly changing. Automation improves productivity, offers opportunities for collaboration, and provides data-driven insights, but it also raises questions about what makes human creativity truly unique. As technology develops further, the future of literature will be shaped by striking the correct balance between the benefits of artificial intelligence (AI) and the inherent qualities of human storytelling.

Link to the rest at Medium

PG noted the portion of the last paragraph of the OP that talked about “the inherent qualities of human storytelling.”

While that portion of the OP certainly caused PG to feel warm and fuzzy for a few moments, retired lawyer PG butted in with a question about what “the inherent qualities of human storytelling” actually are.

Certainly, “the inherent qualities of human storytelling” are not manifested equally across the breadth of humanity. Some people are better storytellers than other people are. Some people are great at telling stories in print and others are great at telling stories on the stage or with a movie or television camera pointed at them and, on relatively rare occasions, some people are good at storytelling in multiple media.

For a motion picture, script-writing storytellers are involved, acting storytellers are involved, directing storytellers are involved, etc. We’ve already seen successful motion pictures like The Matrix and 2001: A Space Odyssey in which non-human acting storytellers play a key role, and every Disney cartoon movie, in which non-human actors play all the roles.

As far as human emotions are concerned, were there a lot of people who didn’t shed a tear when Bambi’s mother was killed by a hunter?

PG notes that AI foundational research has been going on for a long time. (More on that in a separate post to appear on TPV in the foreseeable future.)

However, the widespread use of AI systems by a lot of people is a relatively recent phenomenon that requires software and hardware sufficient to respond to a flood of prompts from a great many people at the same time. Hosting an AI program available to all comers today requires a lot of computing power on the scale of very large cloud computing services like Amazon Web Services, Microsoft’s Azure, and the Google Cloud Platform.

Meanwhile, the history of modern computer development has been a nearly steady stream of smaller, cheaper, and more powerful devices. A couple of online reports claim your Apple Watch has more than twice the computing power of a Cray-2 supercomputer from 1985.

There is no guarantee that your next cell phone will equal the computing power of a group of giant cloud computing systems within the next couple of years, but Moore’s Law says it’s only a matter of time.

Moore’s Law is the observation that the number of transistors on an integrated circuit will double every two years with minimal rise in cost. Intel co-founder Gordon Moore predicted a doubling of transistors every year for the next 10 years in his original paper published in 1965. Ten years later, in 1975, Moore revised this to doubling every two years. This extrapolation based on an emerging trend has been a guiding principle for the semiconductor industry for close to 60 years.

Intel Newsroom
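The compounding is easy to see with a quick calculation (a back-of-envelope application of the observation, not a prediction):

years = 10
growth = 2 ** (years / 2)   # one doubling every two years
print(f"{growth:.0f}x more transistors after {years} years")  # 32x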

PG suggests that opinions about the ability of AI systems to generate book-length stories that many people will pay for are likely to be revised in the future.

As always, feel free to share your thoughts in the comments.

Are the myths of Pandora and Prometheus a parable for AI?

From The Economist:

In Greek mythology Prometheus, a Titan, stole fire from Mount Olympus to give to humans, whom he created. That did not go down well with Zeus, king of the gods. He sentenced Prometheus to the daily torture of having his regenerating liver eaten by an eagle. For mankind, Zeus devised a different punishment. He created Pandora and gave her a jar, which he warned her not to open. When her curiosity got the better of her and Pandora lifted the top, all manner of evil was released into the world. Only hope remained trapped under the lid.

A new production from the San Francisco Ballet reimagines the myth for modern California. “Mere Mortals”, which premièred on January 26th for a limited run, is stylistically and sonically unique. Pandora’s story is an allegory for technological progress, explains Tamara Rojo, the ballet’s artistic director. She commissioned the piece with artificial intelligence (AI) in mind. “It is the perfect story to tell when we’re talking about the moral questions that we should be asking ourselves while developing these new technologies,” says Ms Rojo. Is AI a destructive force that humans have unleashed, an empowering tool offering them godlike power, or both?

When she arrived in San Francisco from the English National Ballet in 2022, Ms Rojo’s goal was to tell stories relevant to the Bay Area and to California. It does not get more relevant than AI. Just west of the opera house is Hayes Valley, a small neighbourhood nicknamed “Cerebral Valley” after all of the AI techies who have moved there.

The result is an ancient story steeped in futurism. Ballet traditionalists may at first be taken aback. Signs outside the theatre warn guests that dry ice and strobe lights will be used during the performance. The dancers wear black, skin-tight, leathery costumes. The curtain opens on a stage filled with fog, red light and shadows. An electronic hum emanates from the orchestra pit. Throughout the production, the musicians play alongside an electronic score composed by Sam Shepherd, a British producer and DJ also known as Floating Points.

The production does more than just hint at the progress and peril that AI offers mankind. AI has been used to help put on the show. Several large screens set behind the dancers display abstract images—crackling blue sparkles, a red sun, earthly landscapes depicted in celestial pastels—that set the tone for the performance. Hamill Industries, a creative studio based in Barcelona, made some of them using Stable Diffusion, an AI model that generates images from text descriptions and other prompts. The result is visually overwhelming—but that is the point. There is a tension between the dozens of bodies on stage and the images playing behind them, as if the humans are competing with the AI-generated art for the audience’s attention.

Link to the rest at The Economist

OpenAI’s admission it needs copyright material is a gift to the publishing industry

From The New Publishing Standard:

The Writers Union of Canada is among the latest to, rightly, call for legislation to regulate the excesses of AI companies as this sector evolves.

But as with so many of these calls for legislation, we need to be clear whether these are reasoned arguments looking to harness AI’s potential to benefit the long-term interests of the publishing industry, or knee-jerk reactions pandering to members’ short-term interests with meaningless soundbites.

In the US we’ve seen the Authors Guild embrace AI as a force for good, fully accepting the AI genie will never go back in the bottle, and so looking for the best ways to work with AI companies to benefit Authors Guild members.

“We have to be proactive because generative AI is here to stay,” said Mary Rasenberger, Authors Guild CEO, explaining, “They need high-quality books. Our position is that there’s nothing wrong with the tech, but it has to be legal and licensed.”

SAG-AFTRA, the US actors union, has taken the same approach.

Proactive rather than endlessly reactive.

While at the other end of the spectrum the outgoing head of the UK’s Society of Authors, Nicola Solomon, is peddling nonsense about how 43% of writers’ jobs will be devoured by the AI bogeyman.

The Writers Union of Canada has tried to find a mid-way path, and acknowledges AI can bring benefits to writers, but cannot help seizing on a statement by OpenAI’s CEO Sam Altman that AI needs to use copyrighted material, treating it as some kind of admission that the company is stealing writers’ IP.

Given the current legal interpretation of what constitutes fair use, that assertion may or may not have legs, but for the AI opponents such details are neither here nor there. As and when the law on fair use is clarified one way or the other, then we can fling mud.

Similarly, demanding creators be paid for their efforts is right, but suggesting this is not happening is wrong.

The fact that Altman and his company have, since at least the May 2023 White House AI summit, been talking about ways to pay for the use of copyrighted material, and since mid-summer have been signing deals with content producers to do just that (the American Journalism Project and the Associated Press in July 2023, Axel Springer in December), is being conveniently ignored.

Bloomberg last week reported that Thomson Reuters is looking to sign a deal with AI companies.

Tom Rubin, OpenAI’s chief of intellectual property and content, told Bloomberg News: “We are in the middle of many negotiations and discussions with many publishers. They are active. They are very positive. They’re progressing well. You’ve seen deals announced, and there will be more in the future.”

So these opponents of AI are missing opportunities to do deals that will favour creatives, for the sake of a sound-tough soundbite.

And in this context, we should be clear that the New York Times lawsuit against OpenAI is happening because negotiations with OpenAI failed, not because OpenAI was unwilling to pay.

In any case, Altman has made clear OpenAI can manage just fine without NYT data if necessary.

“We are open to training [AI] on The New York Times, but it’s not our priority. We actually don’t need to train on their data. I think this is something that people don’t understand. Any one particular training source, it doesn’t move the needle for us that much.”

But this is not the only flaw in the Canada Writers Union case. The CWU has also gone down the “humans-only” road with its interpretation of copyright law.

Copyright is an exclusive right of human creators. Existing copyright legislation protects human creativity and originality, by virtue of requiring the exercise of skill and judgment to obtain copyright in a work. This should not be changed to grant copyright protection to AI generated products or to allow copyrighted works to train models without permission.

And again we have the juggling act with different issues mixed into a pot and violently stirred for the sake of a sound-tough soundbite.

No-one is asking for copyright law to “be changed to allow copyrighted works to train models without permission.”

And the other part of the soundbite – The law “should not be changed to grant copyright protection to AI generated products” – falls into the other classic Luddite’s Weekly trap.

On the one hand they are claiming, and Altman himself agrees, that AI cannot do its work without copyrighted material, which as of now is defined as human-produced. And at the same time they are claiming copyright “is an exclusive right of human creators.”

Link to the rest at The New Publishing Standard

Who is going to receive the cash if a large publisher signs a license allowing an AI company to utilize its books, magazine articles, photographs, etc.?

PG can’t speak for magazines and photographic publications, but, at least for ebooks, the authors are licensing their rights to publishers, not selling those rights. Arguably, under a standard trade publishing agreement, the author hasn’t given her/his publisher the right to use their books as grist for an AI mill. The traditional publisher typically has the right to print, publish, and sell the author’s works.

Granting permission for the author’s books to be utilized as fuel for an AI is something that was not foreseen when the author signed a publishing agreement. It is common for a publishing agreement to reserve rights not granted to the publisher for the author’s use, so long as the exercise of those rights doesn’t interfere with the publisher’s printing, publishing, and selling of the manuscript as a book of some sort or another.

Another issue that a great many large publishers will likely face is publishing agreements that were written and signed long before the internet or digital publishing existed, when the rights contemplated went no further than printing, publishing in a printed serial form, licensing a Book-of-the-Month edition, etc.

And, of course, does the author’s literary agent get 15%?

Law Bots: How AI Is Reshaping the Legal Profession

From Business Law Today:

Artificial Intelligence (AI) is disrupting almost every industry and profession, some faster and more profoundly than others. Unlike the industrial revolution that automated physical labor and replaced muscles with hydraulic pistons and diesel engines, the AI-powered revolution is automating mental tasks. While it may be merely optimizing some blue-collar jobs, AI is bringing about a more fundamental change to many white-collar roles previously thought safe from automation. Some of these professions are being completely transformed by the superhuman capabilities of AI to do things that were not possible before, augmenting — and to some degree replacing — their human colleagues in offices.

In this way, AI is having a profound effect on the practice of law. Though AI is more likely to aid than replace attorneys in the near term, it is already being used to review contracts, find relevant documents in the discovery process, and conduct legal research. More recently, AI has begun to be used to help draft contracts, predict legal outcomes, and even recommend judicial decisions about sentencing or bail.

The potential benefits of AI in the law are real. It can increase attorney productivity and avoid costly mistakes. In some cases, it can also grease the wheels of justice to increase the speed of research and decision-making. However, AI is not yet ready to replace human judgment in the legal profession. The risk of embedded bias in data that fuels AI and the inability to adequately understand the rationale behind AI-derived decisions in a way understandable to humans (i.e., explainability) must be overcome before using the technology in some legal contexts.

. . . .

Attorneys are already using AI, and especially Machine Learning (ML), to review contracts more quickly and consistently, spotting issues and errors that may have been missed by human lawyers. Startups like Lawgeex provide a service that can review contracts faster, and in some cases more accurately, than humans.

For some time, algorithms have been used in discovery — the legal process for identifying the relevant documents from an opponent in a lawsuit. Now, ML is also being used in this effort. One of the challenges of requesting and locating all the relevant documents is to think of all the different ways a topic may be described or referenced. At the same time, some documents are protected from scrutiny, and counsel (or the judge) may seek to limit the scope of the search so as not to overburden the producing party. ML threads this needle using supervised and unsupervised learning. Companies like CS Disco, which went public recently, provide AI-powered discovery services to law firms across the US.

Another area where AI is already used extensively in the practice of law is in conducting legal research. Practicing attorneys may not even be aware they are using AI in this area, since it has been seamlessly woven into many research services. One such service is Westlaw Edge, launched by Thomson Reuters more than three years ago. The keyword or boolean search approach that was the hallmark of the service for decades has been augmented by semantic search. This means the machine learning algorithms are trying to understand the meaning of the words, not just match them to keywords. Another example of an AI-powered feature from Westlaw Edge is Quick Check, which uses AI to analyze a draft argument to gain further insights or identify relevant authority that may have been missed. Quick Check can even detect when a case cited has been indirectly overturned.

. . . .

AI can generate content as well as analyze it. Unlike AI used to power self-driving cars, where mistakes can have fatal consequences, generative AI does not have to be perfect every time. In fact, the unexpected and unusual artifacts associated with AI-created works are part of what makes it interesting. AI approaches the creative process in a fundamentally different way than humans, so the path taken or end result can sometimes be surprising. This aspect of AI is called “emergent behavior.” Emergent behavior may lead to new strategies for winning games, the discovery of new drugs, or simply novel ways of expressing ideas. In the case of written content, human authors are still needed to manage the creative process, selecting which of the many AI-generated phrases or versions to use.

Much of this is possible due to new algorithms and enormous AI models. GPT-3, created by OpenAI, is one such model. GPT-3 is a generative model that can predict the next token in a sequence, whether that token is audio or text. GPT-3 is a transformer, meaning it takes sequences of data in context, like a sentence, and focuses attention on the more relevant portions to extend the work in a way that seems natural, expected and harmonious. What makes GPT-3 unusual is that it is a pre-trained model, and it’s huge — using almost 200 billion parameters, and trained on half a trillion words.
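
As a rough illustration of what “predicting the next token” means in practice, here is a minimal sketch using the freely downloadable GPT-2 model as a stand-in for GPT-3, whose weights are not public. It assumes the Hugging Face transformers package, and the prompt is invented for illustration:

    # Autoregressive text generation: the model repeatedly predicts a
    # likely next token, conditioned on everything generated so far.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "This Agreement is made and entered into by",  # illustrative prompt
        max_new_tokens=20,                             # extend by 20 tokens
    )
    print(result[0]["generated_text"])

Each new token is drawn from a probability distribution over the model’s vocabulary, conditioned on the whole sequence so far, which is the sense in which a transformer “extends the work” in a way that seems natural, expected and harmonious.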

This approach has already been used in creative writing and journalism, and there are now lots of generative text tools in that area, some built on GPT-3. With a short prompt, an AI writer can create a story, article or report — but don’t expect perfection. Sometimes the AI tool brings up random topics or ideas, and since AI lacks human experience, it may have some factual inaccuracies or strange references.

In order for AI to draft legal contracts, for example, it will need to be trained to be a competent lawyer. This requires that the creator of the AI collect the legal performance data on various versions of contract language, a process called “labeling.” This labeled data then is used to train the AI about how to generate a good contract. However, the legal performance of a contract is often context-specific, not to mention varying by jurisdiction and an ever-changing body of law. Plus, most contracts are never seen in a courtroom, so their provisions remain untested and private to the parties. AI generative systems training on contracts run the risk of amplifying bad legal work as much as good. For these reasons, it’s unclear how AI contract writers can get much better any time soon. AI tools simply lack the domain expertise and precision in language to be left to work independently. While these tools may be useful to draft language, human professionals are still needed to review the output before being used.

. . . .

Another novel use of AI is predicting legal outcomes. Accurately assessing the likelihood of a successful outcome for a lawsuit can be very valuable. It allows an attorney to decide whether they should take a case on contingency, or how much to invest in experts, or whether to advise their clients to settle. Companies such as Lex Machina use machine learning and predictive analytics to draw insights on individual judges and lawyers, as well as the legal case itself, to predict behaviors and outcomes.

A more concerning use of AI is in advising judges on bail and sentencing decisions. One such application is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS and similar AI tools are used by criminal judges in many states to assess the recidivism risk of defendants or convicted persons in decisions on pre-trial detention, sentencing or early release. There is much debate about the fairness or accuracy of these systems. According to a ProPublica study, such assessment tools seemed biased against black prisoners, disproportionately flagging them as being significantly more likely to reoffend than white prisoners.

Equivant, the company that developed COMPAS, sought to refute the ProPublica analysis and rejected its conclusions about racial bias.

Regardless, using AI in this context may reflect, or even amplify, the inherent bias in the data of the criminal justice system. The data used to train the ML models is based on actual arrests and conviction rates that may be slanted against some populations. Thus it may enshrine past injustices, or worse, falsely cloak them in the vestment of computer-generated objectivity.

Link to the rest at Business Law Today

As PG has mentioned before, he would love to be practicing law with all the AI tools available.

Back in the days when a ten-megabyte hard drive was hot stuff, PG developed computer tools to speed the process of generating legal documents and court filings. He managed to sell a whole bunch of copies of a document assembly package for divorces that he called Splitsville.

He just ran a search and found that Splitsville, a term he originally picked up from one of his divorce clients, is now used for all sorts of tools, books, etc., relating to divorce.

Artificial intelligence in architecture: 10 use cases and top technologies

From itransition:

Artificial intelligence in architecture enables engineers and architects to design, plan, and build structures more efficiently. With this technology at hand, architects can optimize designs for sustainability and cost-efficiency and come up with never-before-seen design solutions.

AI allows them to brainstorm and conceptualize their ideas. Moreover, AI enables architects to identify patterns that can inform efficient and environmentally friendly design decisions.

. . . .

10 artificial intelligence use cases in architecture

Streamlining early-stage planning

Floor plans are integral documents that architects use to create a layout of a building. Using generative adversarial networks (GANs), architects can generate floor plans based on building dimensions and environmental conditions, minimizing the need for manual drafting. On top of that, machine learning models can adapt to an architect’s habits and methods over time, further improving workflows.

. . . .

Autodesk Spacemaker is a comprehensive AI-based software that helps architects to streamline early-stage planning and site proposal generation. By combining data from building regulations, climate conditions, solar exposure, and more with a machine learning algorithm, Spacemaker can create multiple design options for architects in minutes. This allows architects to quickly identify and adjust the most viable option based on their preferences. By feeding its models with site data, Spacemaker can also solve problems related to a project’s environmental impact, automatically.

. . . .

AI is used in architecture to help designers create more efficient, innovative, and creative solutions. AI can be applied for creating virtual 3D renderings and models, optimizing use of materials, searching for new design trends and patterns, automating time-consuming calculations like lighting simulation or energy consumption estimates, and analyzing vast amounts of data to create new insights.

One example of AI in architecture is generative design, which uses algorithms to generate a range of optimized solutions based on input parameters. This type of AI can be used for anything from creating efficient structural designs to developing entire cityscapes.

Link to the rest at itransition and thanks to F. for the tip.

AI-generated content is raising the value of trust

From The Economist:

It is now possible to generate fake but realistic content with little more than the click of a mouse. This can be fun: a TikTok account on which—among other things—an artificial Tom Cruise wearing a purple robe sings “Tiny Dancer” to (the real) Paris Hilton holding a toy dog has attracted 5.1m followers. It is also a profound change in societies that have long regarded images, video and audio as close to ironclad proof that something is real. Phone scammers now need just ten seconds of audio to mimic the voices of loved ones in distress; rogue AI-generated Tom Hankses and Taylor Swifts endorse dodgy products online, and fake videos of politicians are proliferating.

The fundamental problem is an old one. From the printing press to the internet, new technologies have often made it easier to spread untruths or impersonate the trustworthy. Typically, humans have used shortcuts to sniff out foul play: one too many spelling mistakes suggests an email might be a phishing attack, for example. Most recently, AI-generated images of people have often been betrayed by their strangely rendered hands; fake video and audio can sometimes be out of sync. Implausible content now immediately raises suspicion among those who know what AI is capable of doing.

The trouble is that the fakes are rapidly getting harder to spot. AI is improving all the time, as computing power and training data become more abundant. Could AI-powered fake-detection software, built into web browsers, identify computer-generated content? Sadly not. As we report this week, the arms race between generation and detection favours the forger. Eventually AI models will probably be able to produce pixel-perfect counterfeits—digital clones of what a genuine recording of an event would have looked like, had it happened. Even the best detection system would have no crack to find and no ledge to grasp. Models run by regulated companies can be forced to include a watermark, but that would not affect scammers wielding open-source models, which fraudsters can tweak and run at home on their laptops.

Yet societies will also adapt to the fakers. People will learn that images, audio or video of something do not prove that it happened, any more than a drawing of it does (the era of open-source intelligence, in which information can be reliably crowdsourced, may be short-lived). Online content will no longer verify itself, so who posted something will become as important as what was posted. Assuming trustworthy sources can continue to identify themselves securely—via URLs, email addresses and social-media platforms—reputation and provenance will become more important than ever.

Link to the rest at The Economist

AI Gender Differences

Here’s the prompt PG provided DALL·E 2 at OpenAI:

create a military science fiction book cover featuring a full-page image of a strong female soldier in battle gear, digital art

Here are the four images the AI generated (PG doesn’t know why the AI cut off part of the heads):

And the same prompt with the gender changed: create a military science fiction book cover featuring a full-page image of a strong male soldier in battle gear, digital art
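
For readers who would rather reproduce the experiment programmatically than through the web interface, here is a minimal sketch against OpenAI’s images API. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the parameter choices are illustrative:

    # Submit the book-cover prompt to DALL-E 2 via OpenAI's images API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.images.generate(
        model="dall-e-2",
        prompt="create a military science fiction book cover featuring a "
               "full-page image of a strong male soldier in battle gear, "
               "digital art",
        n=4,              # four variations, matching the experiment above
        size="1024x1024",
    )
    for image in response.data:
        print(image.url)  # each URL points to one generated image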

PG noted that the AI gave each of its female images certain similar secondary sex characteristics that deviated from what he regards as the norm, while he didn’t observe an equivalent similarity of secondary characteristics among the male AI images.

That’s as far as PG is going to go on this topic.

He reminds visitors to TPV to tailor their comments to the standards of polite society while showing respect towards those of varying genders.

New Nonprofit Launches to ‘Certify’ Copyright-Friendly AI Practices

From Publishers Weekly:

The Association of American Publishers is among those supporting a newly launched effort called Fairly Trained, a nonprofit organization that aims to “certify fair training data use” in generative AI.

“There is a divide emerging between two types of generative AI companies: those who get the consent of training data providers, and those who don’t, claiming they have no legal obligation to do so. We believe there are many consumers and companies who would prefer to work with generative AI companies who train on data provided with the consent of its creators,” a blog post on the group’s website states. “Fairly Trained exists to make it clear which companies take a more consent-based approach to training and are therefore treating creators more fairly.”

The nonprofit’s first certification is something it calls “Licensed Model certification,” which, the post explains, can be obtained for “any generative AI model that doesn’t use any copyrighted work without a license.” The licenses can be varied, the group says, but “will not be awarded to models that rely on a ‘fair use’ copyright exception or similar, which is an indicator that rights-holders haven’t given consent for their work to be used in training.”

The launch of Fairly Trained comes amid a spate of lawsuits filed by creators against AI companies—most notably OpenAI and Meta—alleging that the major AI companies’ use of copyrighted works without permission or payment for AI training constitutes infringement. It also comes as a number of other organizations, including such media giants as the Associated Press and Axel Springer, have struck licensing deals with AI companies, with others, such as Reuters, reportedly involved in talks over the same.

Fairly Trained is launching with nine generative AI companies already certified, according to the post: Beatoven.AI, Boomy, BRIA AI, Endel, LifeScore, Rightsify, Somms.ai, Soundful, and Tuney. The group is led by “composer and technologist” Ed Newton-Rex, founder of Jukedeck, “an AI music generation company that provided music for video, TV, radio, podcasts, and games.” According to his bio on the Fairly Trained site, Jukedeck was acquired by TikTok owner ByteDance in 2019, and Newton-Rex “ran the European AI Lab and later led product in Europe for TikTok,” and most recently served as v-p of audio at Stability AI.

Link to the rest at Publishers Weekly

Color PG skeptical that anyone is going to pay much attention to Fairly Trained.

AI can transform education for the better

From The Economist:

As pupils and students return to classrooms and lecture halls for the new year, it is striking to reflect on how little education has changed in recent decades. Laptops and interactive whiteboards hardly constitute disruption. Many parents bewildered by how their children shop or socialise would be unruffled by how they are taught. The sector remains a digital laggard: American schools and universities spend around 2% and 5% of their budgets, respectively, on technology, compared with 8% for the average American company. Techies have long coveted a bigger share of the $6trn the world spends each year on education.

When the pandemic forced schools and universities to shut down, the moment for a digital offensive seemed nigh. Students flocked to online learning platforms to plug gaps left by stilted Zoom classes. The market value of Chegg, a provider of online tutoring, jumped from $5bn at the start of 2020 to $12bn a year later. Byju’s, an Indian peer, soared to a private valuation of $22bn in March 2022 as it snapped up other providers across the world. Global venture-capital investment in education-related startups jumped from $7bn in 2019 to $20bn in 2021, according to Crunchbase, a data provider.

Then, once covid was brought to heel, classes resumed much as before. By the end of 2022 Chegg’s market value had slumped back to $3bn. Early last year investment firms including BlackRock and Prosus started marking down the value of their stakes in Byju’s as its losses mounted. “In hindsight we grew a bit too big a bit too fast,” admits Divya Gokulnath, the company’s co-founder.

If the pandemic couldn’t overcome the education sector’s resistance to digital disruption, can artificial intelligence? ChatGPT-like generative AI, which can converse cleverly on a wide variety of subjects, certainly looks the part. So much so that educationalists began to panic that students would use it to cheat on essays and homework. In January 2023 New York City banned ChatGPT from public schools. Increasingly, however, it is generating excitement as a means to provide personalised tutoring to students and speed up tedious tasks such as marking. By May New York had let the bot back into classrooms.

Learners, for their part, are embracing the technology. Two-fifths of undergraduates surveyed last year by Chegg reported using an AI chatbot to help them with their studies, with half of those using it daily. Indeed, the technology’s popularity has raised awkward questions for companies like Chegg, whose share price plunged last May after Dan Rosensweig, its chief executive, told investors it was losing customers to ChatGPT. Yet there are good reasons to believe that education specialists who harness AI will eventually prevail over generalists such as OpenAI, the maker of ChatGPT, and other tech firms eyeing the education business.

For one, AI chatbots have a bad habit of spouting nonsense, an unhelpful trait in an educational context. “Students want content from trusted providers,” argues Kate Edwards, chief pedagogist at Pearson, a textbook publisher. The company has not allowed ChatGPT and other AIs to ingest its material, but has instead used the content to train its own models, which it is embedding into its suite of learning apps. Rivals including McGraw Hill are taking a similar approach. Chegg has likewise developed its own AI bot that it has trained on its ample dataset of questions and answers.

. . . .

What is more, as Chegg’s Mr Rosensweig argues, teaching is not merely about giving students an answer, but about presenting it in a way that helps them learn. Understanding pedagogy thus gives education specialists an edge. Pearson has designed its AI tools to engage students by breaking complex topics down, testing their understanding and providing quick feedback, says Ms Edwards. Byju’s is incorporating “forgetting curves” for students into the design of its AI tutoring tools, refreshing their memories at personalised intervals. Chatbots must also be tailored to different age groups, to avoid either bamboozling or infantilising students.

Specialists that have already forged relationships with risk-averse educational institutions will have the added advantage of being able to embed AI into otherwise familiar products. Anthology, a maker of education software, has incorporated generative-AI features into its Blackboard Learn program to help teachers speedily create course outlines, rubrics and tests. Established suppliers are also better placed to instruct teachers on how to make use of AI’s capabilities.

Link to the rest at The Economist