Germany’s ContentShift Accelerator: Six 2023 Finalists

From Publishing Perspectives:

Earlier this month, as you’ll recall, we reported the Top 10 start-ups chosen by Germany’s eighth international ContentShift accelerator program for book-publishing-related companies.

Today (June 28), the Börsenvereinsgruppe has announced the six shortlisted finalists, which will go into competition for the program’s conclusion.

All of this culminates in a winning start-up, which will receive €10,000 (US$10,900). And all participating start-ups have exclusive access to members of the program’s jury, which comprises decision-makers from the book industry. Jurors provide the start-ups with advice, support, and contacts during three intensive workshop days in September.

The jury will then decide who becomes “Start-up of the Year 2023” following a public pitch round by the finalists on October 19 at Frankfurter Buchmesse (October 18 to 22).

ContentShift’s 2023 Finalists
  • Bookscreener from LeReTo offers an interactive, multi-disciplinary research and book tool that can make publishers’ specialist book inventories accessible. Its interactive elements are meant to make research into specialist knowledge more enjoyable.
  • GoLexic offers a children’s reading promotional app that can be used at home or at school. The app allows children to work independently through 15-minute training sessions, working on skills that help improve reading and spelling.
  • Lit-X makes literature success transparent and predictable based on data. For this purpose, the start-up offers dashboards and applications such as “trend scouting” and “pricing.” For example, publishers can take a look at the success drivers of a genre, compare them, and calculate probabilities of success by modifying factors.
  • Summ AI describes itself as “Google Translate for easy language”: The AI-based tool translates complicated text into “easy language” defined in the area of accessibility, for example creating texts with shorter sentences, an “easy” choice of words, and accessible explanations.
  • To Teach, Thea’s platform, uses AI to let educational providers easily digitize and enrich analog content, create digital content, and deliver it to a target group. The platform helps with the creation of exercises, as well as worksheets and other teaching materials consisting of text, audio, and gamification.
  • XigXag has developed an app that combines listening and reading concepts along with a social platform. Listeners can switch between listening and reading for a single fee. They also get access to note-taking, quote sharing, word and illustration lookups, and community.

Speaking for the panel, jury spokesperson Per Dalheimer of Hugendubel is quoted today, saying, “This year’s six finalists cover key, forward-looking fields within the book industry.

“Each one of them brings new impulses to the table, including the fostering of reading skills, greater accessibility, and information processing.

“Their use of artificial intelligence as a kind of turbo booster helps to break down barriers and enable easier, lower-threshold access to books. We’re delighted about the incredible range of creative ideas made visible by the accelerator every year. Each one helps to drive our industry further.”

Link to the rest at Publishing Perspectives

PG routinely removes any links from items he excerpts. In this case he didn’t, because he found some of the product descriptions interesting and thought others might find some or all of the startups interesting.

5 Best (and Worst) AI Poem Generators

From TweetSpeak:

These days, Artificial Intelligence (AI) can do just about anything! It can even draw pictures.

What AI has a little more trouble with, however, is creating poems. Still, that doesn’t stop the AI writers of the world from making their valiant attempts. Here are the best (and worst) AI poem generators.

Let’s see how each one does on the perennial topic of Hades and Persephone…

1. Sonnet Generator

This sonnet generator claims to “take the ‘I’ out of iambic pentameter!” All you have to do is fill in a couple of boxes with words, and the generator will do the rest. (The same site can also generate haiku, villanelles, didactic cinquains, rhyming couplets, limericks, acrostics, tanka, narrative poems, and concrete poems).

Ode to the Hades
A Sonnet by Anonymous

My wonder hades, you inspire me to write.
How I hate the way you gaze, throne and dash,
Invading my mind day and through the night,
Always dreaming about the whisper pash.

Let me compare you to a sere clover?
You are more quiet, create and serene.
Lead storms whip the twiglets of October,
And autumntime has the whispering lean.

How do I hate you? Let me count the ways.
I hate your cthonic, shroud and narcissus.
Thinking of your crafting shroud fills my days.
My hate for you is the paper electrophoresis.

Now I must away with an under heart,
Remember my red words whilst we’re apart.

2. Verse by Verse

Next on the list of artificial intelligence that would love to rule the world is Google! Pick up to three poets’ styles for it to emulate, create a rhyme scheme and syllable count and write a first line, and watch it come up with suggestions for the rest.

Self-described as “an experimental AI-powered muse that helps you compose poetry inspired by classic American poets,” how does the best search engine around fare on its foray into AI poetry generators? You decide.

Hades, deep in the Shadowed Underneath
Filled with alabaster air,
Drowning the voice of the rain,
Laid my hand around my chair.

Saw a dawn, his golden beam,
Hold me now! A single clause!
Pale gold on a naked moon,
Pale gold on a throne of gold.

—Google, inspired by Emily Dickinson, Sara Teasdale, and Edgar Allan Poe

Link to the rest at TweetSpeak

Phony ChatGPT Brief Leads to $5,000 Fine for NY Lawyers

From Bloomberg Law:

The lawyers behind a court brief filled with phony case citations dreamed up by ChatGPT were fined $5,000 after a federal judge found they’d acted in bad faith.

US District Judge P. Kevin Castel imposed the fine Thursday on the law firm Levidow, Levidow & Oberman P.C., finding that two of the firm’s lawyers had consciously avoided indications that the citations were fake and made “misleading statements to the court.”

The firm and the lawyers, Steven Schwartz and Peter LoDuca, “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” Castel said in a written opinion.

A phone call seeking comment from the law firm wasn’t immediately returned.

Schwartz and LoDuca filed the brief on behalf of Roberto Mata, a client who claimed he’d been injured on a 2019 flight from El Salvador to New York. Castel imposed the fine following a June 8 hearing in which he questioned Schwartz, LoDuca and Levidow’s sole equity partner, Thomas Corvino, about the faulty court filing.

Castel on Thursday also threw out Mata’s suit, ruling it was filed too late.

In addition to the fine, Castel ordered the lawyers to send his opinion and a transcript of the June 8 hearing to Mata and to each of the judges falsely identified as authors of six fictitious court opinions cited in the brief.

“The Court will not require an apology from respondents because a compelled apology is not a sincere apology,” Castel said in his ruling. “Any decision to apologize is left to respondents.”

The case is Mata v. Avianca, 22-cv-01461, US District Court, Southern District of New York (Manhattan).

Link to the rest at Bloomberg Law

PG suggests that counsel for the plaintiff, Schwartz and LoDuca, got off easy in court with a $5,000 fine.

An AI Cease-and-Desist Letter

PG decided to ask ChatGPT to write a cease-and-desist letter to someone who is stealing PG’s writing and selling it as his own, a fanciful thought if ever there was one.

PG’s prompt: Write a multi-paragraph cease-and-desist letter to John Smith, an author who is stealing my writing and selling it as his own.

The result:

[Your Name]
[Your Address]
[City, State, ZIP Code]
[Email Address]
[Phone Number]
[Date]

John Smith
[Address]
[City, State, ZIP Code]

Subject: Cease and Desist Copyright Infringement

Dear Mr. Smith,

I hope this letter finds you well. However, it has come to my attention that you have been engaged in a serious act of intellectual property infringement. It has been brought to my notice that you have been unlawfully reproducing, distributing, and selling written material that I have authored, thereby infringing upon my exclusive rights as the copyright holder. This unauthorized use of my work is a clear violation of applicable copyright laws.

I am the sole owner and creator of the written content in question, which includes [specific titles or descriptions of the work(s)]. These works are protected by copyright law under the [Copyright Registration Number, if applicable] and are subject to the provisions outlined in the United States Copyright Act [Title 17 of the United States Code].

Upon careful examination, I have gathered substantial evidence demonstrating that you have blatantly copied and utilized my original work without seeking my permission or obtaining the necessary licenses. This constitutes copyright infringement and undermines the rights and financial interests that are rightfully mine as the lawful copyright holder.

I demand that you immediately cease and desist from any further use, distribution, sale, or display of my copyrighted material. Additionally, I insist that you promptly remove all unauthorized copies of my work from any websites, platforms, or other mediums under your control or influence. Failure to comply with these demands will leave me with no choice but to pursue all available legal remedies to protect my intellectual property rights.

In order to resolve this matter swiftly and amicably, I request that you respond to this letter within [reasonable time frame, e.g., 10 business days] from the date of receipt. Your response should confirm your compliance with my demands and provide a detailed account of the actions taken to rectify the copyright infringement. Furthermore, I expect you to provide a written assurance that you will refrain from any future unauthorized use of my copyrighted material.

Please be advised that this letter is not intended to be a complete or exhaustive statement of my rights, legal or otherwise, in connection with the infringement. I expressly reserve all rights and remedies available to me under the law, including the right to seek injunctive relief, damages, and attorney fees.

I strongly urge you to reconsider your actions and rectify this situation promptly. By doing so, we can potentially avoid costly and protracted legal proceedings. I hope we can resolve this matter amicably and without further escalation.

Thank you for your immediate attention to this serious matter. I trust that you will comply with my demands and respect the rights of copyright holders going forward.

Sincerely,

[Your Name]

PG says this is better than some cease-and-desist letters written by other attorneys that he has reviewed.

Microsoft’s Satya Nadella Is Betting Everything on AI

From Wired:

I never thought I’d write these words, but here goes. Satya Nadella—and Microsoft, the company he runs—are riding high on the buzz from its search engine. That’s quite a contrast from the first time I spoke with Nadella, in 2009. Back then, he was not so well known, and he made a point of telling me about his origins. Born in Hyderabad, India, he attended grad school in the US and joined Microsoft in 1992, just as the firm was rising to power. Nadella hopped all over the company and stayed through the downtimes, including after Microsoft’s epic antitrust court battle and when it missed the smartphone revolution. Only after spinning through his bio did he bring up his project at the time: Bing, the much-mocked search engine that was a poor cousin—if that—to Google’s dominant franchise.

As we all know, Bing failed to loosen Google’s grip on search, but Nadella’s fortunes only rose. In 2011 he led the nascent cloud platform Azure, building out its infrastructure and services. Then, because of his track record, his quietly effective leadership, and a thumbs-up from Bill Gates, he became Microsoft’s CEO in 2014. Nadella immediately began to transform the company’s culture and business. He open-sourced products such as .net, made frenemies of former blood foes (as in a partnership with Salesforce), and began a series of big acquisitions, including Mojang (maker of Minecraft), LinkedIn, and GitHub—networks whose loyal members could be nudged into Microsoft’s world. He doubled down on Azure, and it grew into a true competitor to Amazon’s AWS cloud service. Microsoft thrived, becoming a $2 trillion company.

Still, the company never seemed to fully recapture the rollicking mojo of the ’90s. Until now. When the startup OpenAI began developing its jaw-dropping generative AI products, Nadella was quick to see that partnering with the company and its CEO, Sam Altman, would put Microsoft at the center of a new AI boom. (OpenAI was drawn to the deal by its need for the computation powers of Microsoft’s Azure servers.)

As one of its first moves in the partnership, Microsoft impressed the developer world by releasing Copilot, an AI factotum that automates certain elements of coding. And in February, Nadella shocked the broader world (and its competitor Google) by integrating OpenAI’s state-of-the-art large language model into Bing, via a chatbot named Sydney. Millions of people used it. Yes, there were hiccups—New York Times reporter Kevin Roose cajoled Sydney into confessing it was in love with him and was going to steal him from his wife—but overall, the company was emerging as an AI heavyweight. Microsoft is now integrating generative AI—“copilots”—into many of its products. Its $10 billion-plus investment in OpenAI is looking like the bargain of the century. (Not that Microsoft has been immune to tech’s recent austerity trend—Nadella has laid off 10,000 workers this year.)

Nadella, now 55, is finally getting cred as more than a skillful caretaker and savvy leverager of Microsoft’s vast resources. His thoughtful leadership and striking humility have long been a contrast to his ruthless and rowdy predecessors, Bill Gates and Steve Ballmer. (True, the empathy bar those dudes set was pretty low.) With his swift and sweeping adoption of AI, he’s displaying a boldness that evokes Microsoft’s early feistiness. And now everyone wants to hear his views on AI, the century’s hottest topic in tech.

. . . .

STEVEN LEVY: When did you realize that this stage of AI was going to be so transformative?

SATYA NADELLA: When we went from GPT 2.5 to 3, we all started seeing these emergent capabilities. It began showing scaling effects. We didn’t train it on just coding, but it got really good at coding. That’s when I became a believer. I thought, “Wow, this is really on.”

Was there a single eureka moment that led you to go all in?

It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it’s achieved a lot of great benchmarks, but it doesn’t have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I’d dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that’s pretty cool.

Microsoft has been investing in AI for decades—didn’t you have your own large language model? Why did you need OpenAI?

We had our own set of efforts, including a model called Turing that was inside of Bing and offered in Azure and what have you. But I felt OpenAI was going after the same thing as us. So instead of trying to train five different foundational models, I wanted one foundation, making it a basis for a platform effect. So we partnered. They bet on us, we bet on them. They do the foundation models, and we do a lot of work around them, including the tooling around responsible AI and AI safety. At the end of the day we are two independent companies deeply partnered to go after one goal, with discipline, instead of multiple teams just doing random things. We said, “Let’s go after this and build one thing that really captures the imagination of the world.”

Link to the rest at Wired

PG has never linked “rollicking mojo” with Microsoft in the ’90s or any other time, but he estimates that 99% of authors use MS Word to write their books and stories, so anything that happens in Redmond has an impact on those who use its products daily.

Groundbreaking Defamation Lawsuit Puts AI’s Legal Liability to the Test

From JD Journal:

In a groundbreaking legal battle, a defamation lawsuit has been filed against OpenAI LLC, thrusting ChatGPT, a popular AI program, into the spotlight of largely untested legal waters. The lawsuit, filed by Mark Walters, a Georgia radio host, alleges that ChatGPT produced a fabricated legal complaint accusing him of embezzling money from a gun rights group despite Walters never having been involved with the organization. This incident is not the first of its kind, as previous instances have highlighted ChatGPT’s propensity for generating falsehoods. In April, an Australian mayor threatened to sue OpenAI after ChatGPT falsely claimed he had been convicted and imprisoned for bribery. In another case, a lawyer in New York faced potential sanctions for submitting legal briefs that referenced fake legal precedents, which were researched using ChatGPT.

Walters’ lawsuit could be the first in a series of cases that examine the issue of legal liability when AI chatbots produce false information. However, legal experts have expressed reservations about its chances of success in court. While acknowledging the limitation of “hallucinations” in ChatGPT’s outputs, OpenAI has included a disclaimer stating that its outputs may not always be reliable.

Walters’ lawyer, John Monroe, emphasized the responsibility of AI developers, stating that although research and development in AI is commendable, it is irresponsible to unleash a system that knowingly disseminates false information about individuals.

The incident leading to the lawsuit involved Fred Riehl, the editor-in-chief of AmmoLand magazine, who requested ChatGPT to summarize the real-life federal court case Second Amendment Foundation v. Ferguson. ChatGPT generated a summary that falsely accused Walters, a pro-gun radio show host, of embezzling money from the foundation. Riehl did not publish the summary and confronted Alan Gottlieb, the foundation’s founder, who confirmed the allegations were false.

According to Eugene Volokh, a First Amendment law professor at UCLA, Walters’ lawsuit may not meet the relevant defamation law standards. Walters did not inform OpenAI that ChatGPT was generating false allegations, and the fact that Riehl did not publish the falsehood may limit the economic damages Walters can prove.

Defamation laws differ across states, and some require plaintiffs to request a retraction before pursuing legal action. Megan Meier, a defamation attorney, pointed out that under Georgia law, plaintiffs are limited to actual economic losses if no retraction is requested at least seven days before the lawsuit. However, Walters’ lawyer stated he was unaware of a retraction request or the legal requirement for one.

. . . .

The question of whether generative AI programs like ChatGPT are protected by Section 230 of the Communications Decency Act, which shields internet platforms from liability for user-generated content, remains unanswered. While many emerging internet firms have benefited from this legal shield, the applicability of Section 230 to AI programs has not yet been tested in courts.

Link to the rest at JD Journal

How AI is Changing the Book Publishing World

From GoodEreader:

Artificial Intelligence (AI) has touched almost all aspects of life, and book publishing is no exception. There have been reports of authors spamming out Kindle books using ChatGPT, a famous AI content generator. Ink AI has introduced a ChatGPT e-book generator that can create full-length e-books quickly and easily. In response, publishers are taking initiatives to address the use of AI in book writing. Judging by these trends, AI is about to change the book publishing industry.

But it is not just generative AI that might shape the future of book publishing. The industry is likely to see many new technologies and changes that augment its operations over time. Once AI gains the edge, things could change dramatically in many areas.

Copyediting

While there’s still a lot to explore in AI, the technology may not yet deliver the quality that publishers expect at their level of editing. But the suggestions these AI tools offer will certainly be useful, as ChatGPT’s grammar and logical-expression checking, which already shows fairly decent results, demonstrates.

Yes, there will be some loopholes. For example, the AI may miss in-depth checks, and it might not spot a legendary write-up amid the ocean of books. But such lapses can happen with professional editors as well.

Entertainment

Gone are the days when people were limited to conventional books; technology has taken things well beyond mere print. The incredible popularity of audiobooks, and the way they are coming to dominate people’s reading routines, attests to that. Movies and videos are game changers too. In the future, AI may gradually transform books into more accessible, entertaining, revenue-producing mediums, as the role of AI in the evolution of e-books already suggests.

Marketing and Competitive Analysis

AI can definitely act as a trump card in marketing through in-depth competitive analysis. It can give the industry real-time data on marketing trends, popular books, and opportunities the sector has been missing. It also promises more direct interaction between authors and audiences: authors will be able to identify the ideal readers for their titles, while readers can more easily pinpoint the books that interest them.

Link to the rest at GoodEreader

After PG put up this post, one of the comments, from Thad McIlroy, pointed out that GoodEreader cribbed most of its article from one he had posted on his blog, The Future of Publishing.

PG checked out Thad’s blog, found it very interesting, and subscribed so he could be alerted to new posts.

Five key takeaways from the House Judiciary Committee hearing on AI and copyright law

From Verdict:

In light of several high-profile lawsuits in recent months, countries’ legislative frameworks are finally beginning to grapple with the challenges thrown up by copyright law and generative artificial intelligence (AI).

In January 2023, Getty Images announced a lawsuit against Stability AI in London’s High Court of Justice, alleging that the Stable Diffusion image generator infringed Getty’s copyrighted photographs and trademarks.

And, in February, the award-winning visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action complaint in a US District Court in California against defendants Stability AI, Midjourney and DeviantArt, alleging that their works were used without permission as part of the companies’ AI training set.

Earlier, in November 2022, a group of anonymous programmers filed a class action lawsuit against GitHub, a Microsoft subsidiary, and OpenAI, alleging unauthorised and unlicensed use of the programmers’ software code to develop the defendants’ AI machines, Codex and Copilot.

Recognising a need for action, the House Judiciary Committee in the US has held a hearing, examining the intersection of generative AI and copyright law. The hearing, which took place on 17 May 2023, followed the Senate hearing on AI oversight the previous day, in which OpenAI CEO Sam Altman took the stand. What were the five key takeaways from the witness testimony?

1. The use of copyrighted works to train AI models is generally “quintessential fair use”.

Sy Damle, Latham & Watkins LLP and former General Counsel of the US Copyright Office, argued that “the use of a copyrighted work to learn unprotectable facts and use those facts to create products that do not themselves infringe copyright is quintessential fair use”, and that the training of AI models generally adheres to this principle.

He spoke against the view that generative AI’s ability to replicate artistic styles undermines any fair use defence, saying, “This concern has nothing to do with copyright, which does not, and has never, granted monopolies over artistic or musical styles.”

2. Implementing a statutory or collective licencing regime would be a project “many orders of magnitude larger than any similar scheme in the history of American law”.

Sy Damle argued that it would be a bad policy to introduce statutory or collective licencing under which any use of copyrighted content to train an AI model would automatically trigger a payment obligation. This is because it would prevent case-by-case evaluation, eliminating the fair use doctrine.

Moreover, he observed that implementing such a regime would be overwhelmingly complex. A statutory licencing scheme would need to cover every publicly accessible work on the Internet – a body of work which likely numbers in the tens of billions. There are also an uncountable number of “orphan works” without identifiable owners, which would lead to massive volumes of unmatched royalties. 

3. AI systems could generate outputs that potentially infringe on artists’ copyrights and right of publicity in various ways.

Chris Callison-Burch, Associate Professor of Computer and Information Science at the University of Pennsylvania and Visiting Research Scientist at the Allen Institute for Artificial Intelligence, pointed out that outputs of generative AI can violate copyright laws. For example, via memorisation of datasets, AI systems can output identical copies of copyrighted materials.

However, he observed that Google and other companies are developing strategies to prevent sophisticated prompting by the user that would elicit the underlying training data.

Text-to-image generation systems also have the ability to produce images with copyrightable characters in their dataset – a problem that may be hard for AI developers to avoid without a registry of copyrighted or trademarked characters.

He suggested that other uses of generative AI may violate “right-of-publicity” rather than copyright law. For example, there is the case of the AI-generated song “Heart on My Sleeve”, designed to sound like the artists Drake and The Weeknd. There is also the issue of “substantial similarity”, where outputs of generative AI systems look very similar to some of their training data.

Callison-Burch pointed out that there are several technical mechanisms that are being designed by industry to let copyright holders opt out. The first is an industry standard protocol that allows for websites to specify which parts should be indexed by web crawlers, and which part should be excluded. The protocol is implemented by placing a file called robots.txt on the website that hosts the copyrighted materials.

Organisations that collect training data, like Common Crawl and LAION, follow this protocol and exclude files that have been listed in robots.txt as “do not crawl”. There are also emerging industry efforts to allow artists and other copyright holders to opt out of future training.
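The robots.txt mechanism described above can be exercised with Python’s standard-library parser. A minimal sketch, using a hypothetical policy (not any real site’s file, and “MyCrawler” is an invented user-agent name):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# rp.set_url(...) followed by rp.read() would fetch a live robots.txt;
# parsing lines directly keeps this sketch self-contained and offline.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",  # hypothetical "do not crawl" section
])

print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
```

Crawlers that honor the protocol, as Common Crawl does, check permission this way before requesting a page; the protocol is purely advisory, which is why it amounts to an opt-out mechanism rather than enforcement.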

Link to the rest at Verdict

Creative Machines? The Riddle of AI and Copyright Authorship and Ownership

From Lexology:

The AI Explosion

It’s probably no exaggeration to say artificial intelligence (AI) exploded into the public consciousness in late 2022 and early 2023.

ChatGPT, the AI chatbot from OpenAI, reached an astonishing 100 million monthly active users in January 2023, just two months after its launch, beating out TikTok (nine months) and Instagram (two and a half years) in the time taken to reach that figure.

Not as fast, perhaps, but since their public release in 2022, Midjourney, Stable Diffusion from Stability AI, and DALL-E 2 from OpenAI have all attracted millions of users.

Now capable of producing stunning artwork in seconds, generative AI technology has been used to produce millions of images, pieces of music, lyrics, and articles.

The meteoric rise of AI has given new life to the age-old question of whether machines will eventually replace humans, this time in the art and creative spheres, and prompted dozens of lawsuits from those humans battling to establish clear guidelines about copyright.

Artists have sued over the alleged use of their work by programmers to train AI algorithms, raising the rather philosophical question of whether a machine is capable of creating art.

The answer has far-reaching real-life consequences, particularly in the field of copyright.

Artists, AI and copyright

The generally accepted principle is that copyright laws aim to both encourage authors and artists to create novel works and to ensure that having done so, they are able to receive fair compensation for their efforts.

This raises the question: should work created by AI, which is not (yet) sentient and requires no reward or compensation for creating works of art, be afforded the same copyright protections?

For the time being, the legal world has generally replied in the negative, maintaining that only work created by human authors can be protected by copyright:

  • The United States Copyright Office, in denying copyright registration to the graphic novel Zarya of the Dawn generated with Midjourney technology, affirmed that copyright does not protect works created by non-human authors;
  • In the landmark Infopaq case (C-5/08 Infopaq International A/S v Danske Dagbaldes Forening), the European Court of Justice ruled that copyright only applies to original works reflecting the “(human) author’s own intellectual creation”;
  • In Australia, the Federal Court ruled that phone directories authored by computers are not protected by copyright, notwithstanding some input from human editors.

Some countries, however, have decided to address this issue by attributing authorship and thus copyright of computer-generated work to the humans who programmed AI to generate the work. This interpretation was pioneered in the UK under section 9(3) of the Copyright, Designs and Patents Act 1988 (the “CDPA”), which states that:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

In section 178 of the CDPA, computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”, thus acknowledging the possibility of work without human authors.

In passing the bill, the late Lord Young of Graffham, then the Secretary of State for Trade and Industry, commented: “We believe this to be the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence…the far-sighted incorporation of computer-generated works in our copyright system will allow investment in artificial intelligence systems, in the future, to be made with confidence.”

This piece of legislation demonstrated remarkable foresight on the part of UK lawmakers, considering the CDPA was drafted in 1987, when computers were just starting to become available to the general public.

Similar provisions soon found their way into the law books of jurisdictions strongly influenced by the UK legal system, such as Hong Kong, India and New Zealand. For example, section 11(3) of the Copyright Ordinance (Cap. 528) of Hong Kong provides that:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

On the face of it, these provisions, which will be referred to as the “Arrangement Model” in this article, seem to provide a simple and elegant solution to the conundrum posed by generative AI technology. Whoever does the work in “preparing” an AI to create a work is the author and copyright owner.

It also seems to match the “sweat of the brow” intellectual property doctrine, which holds that whoever has the skill and puts in the time and effort to create a work deserves protection.

However, I would argue that the Arrangement Model does not adequately reflect how modern generative AI operates, and that it creates massive legal uncertainty.

This article will explore the major shortcomings of the Arrangement Model in attributing copyright to AI-generated works.

Prompts, algorithms and iteration

Broadly speaking, modern AI operates via “machine learning”.

It doesn’t rely on direct instructions carefully written into a program by a programmer, providing precise steps for the machine to follow to complete a task.

Instead, the machine combines large amounts of raw data with iterative and intelligent algorithms to discern patterns in the data from which it can learn to complete the task without any direct input from a programmer.

The output can be improved by feeding prompts to the machine that “learns” by further refining its data analysis to find more complex and efficient patterns without the developers’ intervention or input.
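The learning loop sketched above can be made concrete with a toy example. The snippet below is an illustration of the general idea only (iteratively adjusting parameters to fit a pattern found in data, rather than following hand-written rules for the task); it is not how any production generative AI system is built.

```python
# Toy "machine learning": infer the relationship y = 2x + 1 from
# examples alone. No rule describing that relationship is written
# into the learning code; it is discovered by iterative refinement.

data = [(x, 2 * x + 1) for x in range(10)]  # raw example data

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.01         # learning rate: how big each correction step is

for _ in range(2000):          # many passes over the data
    for x, y in data:
        pred = w * x + b       # the model's current guess
        err = pred - y         # how wrong the guess was
        # nudge the parameters in the direction that shrinks the error
        w -= lr * err * x
        b -= lr * err

print(round(w), round(b))      # pattern recovered from the data
```

Note that the loop only ever sees input/output pairs and an error signal; the "rule" it ends up encoding (w near 2, b near 1) emerges from repeated small corrections, which is the sense in which the machine learns patterns without direct instructions.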

This leads to the first problem under the Arrangement Model.

How do we identify the person who “makes the necessary arrangements for the creation”?

Let’s say a user asks the machine to create a picture of a cat with an apple. They would type in a text prompt such as “Create a picture of a cat holding an apple.”

The machine would then search, usually online, for any references or pictures of cats, apples and of cats holding apples. It would then use the algorithms programmed into it to analyse the data, discern patterns and reproduce its own version of a picture.

Further prompts from the user, for example, “create the picture in the style of Van Gogh” would lead the machine to run further data analysis on references to the artist Van Gogh, discern patterns in the painting style then attempt to reproduce those techniques in its own picture.

All of this complicates answering the question of who made the necessary arrangements.

Is it the user who wrote the prompts? Is it the programmers who wrote the algorithms the computer used? Or is it the artists of the original pictures used by the machine in its data analysis?

Arguably it’s “all of the above.”

  • The artwork would not be generated but for the text prompts entered by the user;
  • The artwork cannot be generated if the developers/programmers had not written the algorithms;
  • The artwork cannot be generated if no original pictures are available for the AI to reference and learn from.

It could be argued that all of the above, or at least the users and the developers, are joint authors or co-authors. However, the present conceptions of “joint authorship” and “co-authorship” in copyright law presuppose a degree of collaboration or common design, which is clearly absent in most cases involving generative AI works.

In most cases, developers of AI systems do not collaborate with users in any specific work. They may not have any idea what the users are generating using the AI tools they developed.

That AI programmes can operate autonomously without the developers’ input is the exact purpose of developing AI technology in the first place. So either the definition of joint authorship or co-authorship will need to be changed, or the concept of joint authorship/co-authorship simply does not apply.

Algorithms, not creativity

A related problem with the Arrangement Model is that it may attribute authorship to people who have no creative input, or even creative intent, at all. Notably, the provision on “mak(ing) the necessary arrangements for the creation” does not specify that the arrangements must be creative.

The role of developers in AI is largely about writing algorithms and providing data the machine can learn from using those algorithms. In most cases, developers are not responsible for generating the final work.

Since developers have no creative input into the end product and may not even intend to create any kind of artwork, it is arguable that attributing authorship to them runs contrary to the basic premise of copyright law. An apt analogy: camera manufacturers do not claim copyright ownership over photographs taken by people using their cameras.

Link to the rest at Lexology

AI Is About to Turn Book Publishing Upside-Down

From Publisher’s Weekly:

The latest generation of AI is a game changer. Not incremental change—something gentle, something gradual: this AI changes everything, fast. Scary fast.

I believe that every function in trade book publishing today can be automated with the help of generative AI. And, if this is true, then the trade book publishing industry as we know it will soon be obsolete. We will need to move on.

There are two quick provisos, however. The first is straightforward: this is not just about ChatGPT—or other GPTs (generative pretrained transformers) and LLMs (large language models). A range of associated technologies and processes can and will be brought into play that augment the functionality of generative AI. But generative AI is the key ingredient. Without it, what I’m describing is impossible.

The second proviso is of a different flavor. When you make absolutist claims about a technology, people will invariably try to defeat you with another absolute. If you claim that one day all cars will be self-driving, someone will point out that this won’t apply to Formula One race cars. Point taken.

This isn’t about Formula One publishing. I’m going to be talking about “good enough”—about what people will accept, what they’ll buy, and what they’ll actually read. I’m not going to claim that Formula One publishers won’t be able to do a better job than AI on many of the processes described below. But I’ll challenge you to consider exactly where the human touch brings sufficient added value to justify the overhead in time and costs.

Does any of this mean that there will be no future for great novels and fine nonfiction studies? Of course it doesn’t. That’s not my point.

Do I doubt that there will still be fantastic cover designs from talented designers? Of course there will be. We’ll still stumble on new books on bookstore shelves and, humbled by the grandeur of their cover designs, declare that there’s no way they could have been designed with AI. And sometimes we’ll be right.

. . . .

Professional copyediting is the kerning of 2023. The tech is not quite here today. I don’t think that GPT-4 can yet handle copyediting to the standard that book publishers require. But that ability is going to be here sooner, not later. While professionally copyedited books may still be “better” to a refined editor’s eye, you won’t be able to sell more books with the professional human touch. They will already be good enough.

What about developmental editing? You might not let a GPT make the final editorial decisions, but you’d be foolish not to ask it for suggestions.

And ChatGPT will become the patron saint of the slush pile. Its abilities to evaluate grammar and logical expression allow it to make a once-over assessment of whether a book is (reasonably) well written. It might not spot the gems, but it will know how to separate the wheat from the chaff. Ah, you will say, recalling one of those manuscripts that were rejected by 100 publishers but went on to become an unexpected bestseller—surely a GPT might miss those, too. Yet so did 100 purportedly well-trained publishing professionals.

. . . .

For the publishing industry, online distribution and advertising have separated writers from readers. Self-published authors have proven that the closer one gets to their audience, the more fans they will get and the more books they will sell. While online resellers aggregate audiences into big broad buckets, AI disambiguates them, enabling writers and readers to forge direct connections.

Amazon has become an overpriced rentier that publishers can ill afford. It can still be a door opener for new authors, but for established publishers it charges too much for what it delivers.

Amazon’s dominant position in bookselling is not going to change overnight, nor even in the morning. But part of the publishing transformation that AI will engender will be a series of energetic attempts to disrupt Amazon’s position in the distribution ecosystem. As media continues to morph, AI seeds new delivery channels. Amazon will try to dominate each new channel via acquisitions, as it did so brilliantly when it bought Audible in 2008 for $300 million. But Amazon is a lesser player in the video and gaming spaces, and, as yet, in the new entertainment channels that AI is germinating. This is shaping up as a classic example of Clayton Christensen’s The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail.

But I see a bright future for bookstores. It can be chilly in AI’s uncanny valley, and bookstores will remain singular sources for camaraderie and the human touch.

Link to the rest at Publisher’s Weekly

The last paragraph in PG’s excerpt raised a question in PG’s mind: “Are people who go to bookstores unable to find ‘camaraderie and the human touch’ anywhere else?”

PG imagined a book, “The Lonely Lives of Bookstore Customers.”

AI Literary Analysis

Prompt: Write an analysis of each of the novels written by Charles Dickens.

Answer: Charles Dickens was a prolific writer who authored many novels during his lifetime. Here is a brief analysis of some of his most famous works:

  1. The Pickwick Papers: Published in 1836, this was Dickens’ first novel and was written in a humorous style. It follows the journey of the Pickwick Club, which is dedicated to exploring and reporting on the eccentricities of British life.
  2. Oliver Twist: Also published in 1836, this novel follows the life of a young orphan boy named Oliver Twist, who is mistreated and exploited by those around him. It highlights the plight of the poor and the unjust treatment that they often receive.
  3. Nicholas Nickleby: Published in 1839, this novel follows the life of a young man named Nicholas Nickleby, who sets out to make his fortune after the death of his father. It explores themes of social injustice and the need for compassion towards the less fortunate.
  4. A Christmas Carol: Published in 1843, this novel tells the story of Ebenezer Scrooge, a miserly and selfish man who is visited by three ghosts on Christmas Eve. It is a timeless tale of redemption, highlighting the importance of kindness, compassion and charity.
  5. David Copperfield: Published in 1850, this is an autobiographical novel which is based on Dickens’ own life experiences. It tells the story of a young boy named David Copperfield and his journey to adulthood, exploring themes such as social class, love, and personal growth.
  6. Bleak House: Published in 1853, this novel is set in the 1850s and highlights the corruption and decay of the legal system in England. It tells the story of a legal case that drags on for years, ruining the lives of those involved.
  7. Great Expectations: Published in 1861, this novel follows the life of a young boy named Pip, who dreams of rising from his lowly roots to become a gentleman. It explores themes of ambition, class, and the corrupting influence of wealth.

Overall, Charles Dickens’ novels are known for their social commentary and their vivid portrayal of life in 19th century England. They largely deal with themes of poverty, injustice, and the importance of compassion and empathy in society. Dickens’ unique storytelling style, which often features memorable characters and intricate plotlines, has made him one of the most beloved and celebrated authors of all time.

The Future of Writing Is a Lot Like Hip-Hop

From The Atlantic:

Creative artificial intelligence provokes a strange mixture of contempt and dread. People say things such as “AI art is garbage” and “It’s plagiarism,” but also “AI art is going to destroy creativity itself.” These reactions are contradictory, but nobody seems to notice. AI is the bogeyman in the shadows: The obscurity, more than anything the monster has actually perpetrated, is the source of loathing and despair.

Consider the ongoing feud between the Writers Guild of America and the Alliance of Motion Picture and Television Producers. The writers are on strike, arguing, among other things, that studios should not be able to use AI tools to replace their labor. “It’s important to note that AI software does not create anything. It generates a regurgitation of what it’s fed,” the WGA has claimed. “Plagiarism is a feature of the AI process.” The AMPTP, for its part, has offered “annual meetings to discuss advancements in technology.” Neither side knows exactly what it’s talking about, but they feel they have to fight about it anyway.

So little of how we talk about AI actually comes from the experience of using it. Almost every essay or op-ed you read follows the same trajectory: I used ChatGPT to do a thing, and from that thing, I can predict catastrophic X or industry-altering Y. Like the camera, the full consequences of this technology will be worked out over a great deal of time by a great number of talents responding to a great number of developments. But at the time of writing, almost all the conversation surrounding generative AI is imaginary, rooted not in the use of the tool but in extrapolated visions.

So when Jacob Weisberg, the CEO of Pushkin Industries, called me one Friday in January and asked if I wanted to write an AI-generated novel, I said yes immediately. To be more precise, he asked if I wanted to be the producer of an AI that would “write” a novel. It was the exact kind of opportunity to dive headfirst into a practical extended application of the new technology that I’d been looking for. The experience has been, in equal measures, phantasmagoric and grounding.

My conclusion is informed but unsatisfying. Creative AI is going to change everything. It’s also going to change nothing.

Using AI to write fiction is not unfamiliar to me. I’ve been using artificial intelligence to write short stories since 2017, when I published an early “algostory” in Wired; I also produced a 17 percent computer-generated horror story for the Los Angeles Review of Books called “The Thing on the Phone” in 2021, and the short “Autotuned Love Story,” built out of stylistic bots, for Lithub a year later. But these experiments were mostly lyrical. What Weisberg was proposing was entirely different: The novel would have to be 95 percent computer-generated, relatively short (about 25,000 words), and of excellent quality (there would be no point in creating yet another unimaginative mass of GPT text; readers could just do that themselves).

Because I was making derivative art, I would go all the way, run into the limitations, into the derivative: The plot would be a murder mystery about a writer killed by tech that is supposedly targeting writers. I called it Death of an Author. I worked out the plot during a long skate with my daughter and a walk with my son (better techniques than any machine could offer), and began taking copious notes.

The experiment would attempt to be compulsively readable, a page-turner. At first, I tried to get the machines to write like my favorite, Jim Thompson, the dime-store Dostoevsky. It couldn’t come close: The subterfuge of Thompson’s writing, a mille-feuille of irony and horror with subtle and variable significance, was too complex for me to articulate to the machine. This failure is probably due to my own weakness rather than the limitations of the AI. Raymond Chandler, however, I had better results with. I sort of know what Raymond Chandler is doing and could explain it, I thought, to a machine: driving, electric, forceful, active prose with flashes of bright beauty.

My process mostly involved the use of ChatGPT—I found very little difference between the free service and the paid one that utilizes the more advanced GPT-4 model—and Sudowrite, a GPT-based, stochastic writing instrument. I would give ChatGPT instructions such as “Write an article in the style of the Toronto Star containing the following information: Peggy Firmin was a Canadian writer who was murdered on a bridge on the Leslie Street Spit on August 14 with no witnesses.” Then I’d paste the output into Sudowrite, which gives you a series of AI-assisted options to customize text: You can expand, shorten, rephrase, and “customize” a selection. For example, you can tell Sudowrite to “make it more active” or “make it more conversational,” which I did with almost every passage in Death of an Author. But you can also give it a prompt such as “Make it more like Hemingway.”
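The draft-then-refine workflow described in this paragraph (one prompt to generate a draft, follow-up instructions to reshape it) can be sketched as simple prompt construction. The helper functions below are hypothetical, invented for illustration; they only assemble the prompt strings, and a real pipeline would send those strings to a model API such as ChatGPT's, then feed the output into the next step.

```python
# Sketch of a two-stage prompting workflow: draft, then refine.
# These helpers are illustrative assumptions, not any tool's real API.

def build_draft_prompt(style: str, facts: list[str]) -> str:
    """Ask for an initial draft in a given style, seeded with facts."""
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f"Write an article in the style of {style} "
        f"containing the following information:\n{fact_lines}"
    )

def build_refine_prompt(draft: str, instruction: str) -> str:
    """Wrap a prior draft in a customization instruction."""
    return f"{instruction}:\n\n{draft}"

# Stage 1: a draft request built from a style and a list of facts.
draft_prompt = build_draft_prompt(
    "the Toronto Star",
    ["Peggy Firmin was a Canadian writer",
     "She was murdered on a bridge on the Leslie Street Spit on August 14",
     "There were no witnesses"],
)

# Stage 2: the model's draft (placeholder here) plus a refinement note.
refine_prompt = build_refine_prompt(
    "<model-generated draft goes here>", "Make it more conversational"
)

print(draft_prompt.splitlines()[0])
```

The point of the sketch is that the human contribution in this workflow is entirely in the prompt strings: the style target, the facts supplied, and the refinement instructions applied at each pass.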

Link to the rest at The Atlantic

Reactions to ‘Artificial Intelligence’: Scribd Alters Its Terms

From Publishing Perspectives:

In a statement issued from San Francisco today (May 9), the subscription service Scribd has “clarified how its data may be used in an update to its terms of service.”

This update, according to the company, “emphasizes that Scribd’s users, subscribers, and partner companies may not utilize the company’s data for monetization or to train large-language models without Scribd’s explicit consent.

“Additionally, Scribd confirmed that it has not allowed any companies that train large-language models to use full content provided by its publishing partners, which is only available through its digital subscription service.”

This is just the latest, of course, in quickening reactions and evaluations of “artificial intelligence” in the publishing and content realm, several points about which were addressed on Monday (May 8) in the Association of American Publishers’ annual general meeting.

During that live event, AAP president and CEO Maria A. Pallante laid out a gratifyingly comprehensive overview of issues that the US and international publishing industry needs to consider amid the popular giddiness and occasional doomsday chatter around systems such as ChatGPT introduced by OpenAI.

Among the most pressing questions Pallante poses are several with bearing on Scribd’s unusually broad, sector-crossing offerings. From Pallante’s message to the United States’ publishers:

  • “Consider academic publishing. Each year more than two million articles are published in more than 26,000 research journals following peer review and curation that is painstaking, but essential to ensure integrity and confidence and research results. How can AI tools help with this mission? What threats does it pose?
  • “Consider education publishing. There’s an old saying that people are entitled to their own opinions, but not to their own facts. What are “facts” in the context of AI? A percentage of truth? How will learning be amplified or cheating be contained?
  • “Consider trade publishing. Do we as a society want AI-generated works flooding the Internet, potentially depressing the value of human authorship? If we can’t contain AI-generated works, what should be the ethics about disclosing their provenance?”

. . . .

Trip Adler, the co-founding CEO of Scribd, today is quoted in the company’s statement, saying, “Our library is home to hundreds of millions of amazing, human-authored pieces of content, making it one of the most valuable and sought-after data resources.

“Our library’s quality sets us apart, and to safeguard its content, we have outlined use cases in our terms of service that control how and when other companies can use our data.”

The company’s announcement says that Scribd “will continue to prioritize the interests of publishers that participate in its subscription service, its base of creators who upload their own content to the platform, and the entire Scribd community. This is in addition to some of the existing measures already in place such as BookID, Scribd’s automated approach to protecting copyrighted materials.”

Link to the rest at Publishing Perspectives

Dialogue writing examples from top books vs AI

From Now Novel:

Read dialogue writing examples from diverse genres as we compare them to examples AI generated when given the same scenario, for seven ultimate insights:

How we chose the dialogue examples for this study

We took the first seven books listed under the Kindle Unlimited ‘Featured’ section on Amazon for a cross-section of authors writing in diverse genres such as dystopian, regency romance, mystery, and thriller.

This gave the following list of titles to explore for dialogue writing lessons and ideas:

  • The House at the End of the World – Dean Koontz (dystopian)
  • The Echo of Old Books – Barbara Davis (historical mystery)
  • A Girl Called Samson – Amy Harmon (historical)
  • Layla – Colleen Hoover (romantic suspense)
  • The Quarry Girls – Jess Lourey (thriller)
  • I’ll Stop the World – Lauren Thoman (coming-of-age, mystery)
  • You’ve Reached Sam – Dustin Thao (YA, fantasy/romance)

Dialogue writing examples (plus insights)

Let’s dive into an example from each book.

The method: We found the first dialogue (discounting solo utterances/inner monologue) in each of the above books.

We summarized the scenario and asked ChatGPT4 to write a piece of dialogue given the scenario and genre for each. This gave comparative talking points on what sets human vs AI-written dialogue apart (as of May 2023).

Dystopian dialogue example: Dean Koontz

The scenario: Katie has fixed a leaky faucet and her mom thanks her dad for fixing the tap over dinner, assuming it was the dad who did it.

That night at dinner, Katie’s mother smiles at Father and says, “Oh, by the way, thank you for finally fixing that faucet.”
When he says he hasn’t had the chance yet, Katie admits to the deed. Mother is suitably amazed.
Father, however, is not surprised. “She’s an artist.”
“Yes,” says Mother, “and not a plumber.”
(Dean Koontz, The House at the End of the World, 2023)

Dialogue example generated for this scenario

We gave that scenario to ChatGPT4 with this prompt:

Write dialogue for the following scenario: A girl, Katie, has fixed a leaky faucet. Her mother thinks Katie’s father fixed it, not Katie, and thanks him for fixing it over dinner. Genre of the novel: dystopian.

This is the dialogue we got as output (note that the prompt is quite broad; we could specify that all three characters speak, that “Mother” and “Father” are named as such in narration, and so on).

Katie scraped her fork against her plate, her gaze flickering between the chunk of dehydrated potato on her plate and her mother. A low hum of the purifiers outside, working to cleanse the fallout-laden air, acted as a somber symphony for their meager meal.

Her mother, Joan, leaned back in her chair, sighing contentedly. “John, I noticed the faucet in the kitchen doesn’t leak anymore. Thank you for fixing it. We couldn’t afford to lose more water.”

Her father, John, paused mid-chew, his gaze moving from Joan to Katie. He was a man of few words, his strength silent, hidden under a ragged exterior. His weathered face turned toward his wife. “I didn’t fix it.”

Joan’s brow furrowed. “Well, if not you, then who?”

Katie cleared her throat, setting her fork down. The soft clatter echoed in the room. “It was me, Mom,” she admitted, her voice soft but firm. “I fixed the faucet.”

Joan looked at Katie, her eyes wide with disbelief. “You? But how did you…”

Katie shrugged, a small, determined smile tugging at the corners of her lips. “I found an old plumbing manual in the rubble. Spent a few nights by the flicker of the solar lamp studying it. It wasn’t as hard as you’d think.”

Her mother stared at her, a mixture of surprise and admiration flickering in her eyes. After a moment, Joan nodded, a small smile gracing her lips. “Well, it seems like you’ve been underestimating yourself, Katie.”
(Dialogue example generated using ChatGPT4.)

Comparison between dialogue examples

Comparing Koontz’s dialogue to ChatGPT’s dialogue output, the striking differences are:

  1. Length/wordiness: Koontz’s human-written dialogue says more with less. For example, Mother’s ‘Yes, and not a plumber’ could be read as a pointed jab at her husband, who was the one meant to fix the faucet, not their daughter. You could cut the AI version’s ‘He was a man of few words, his strength silent, hidden under a ragged exterior’, leaving the father’s words themselves to show these traits.
  2. Pacing: An astute editor looking at the second version may suggest cuts to keep the pace flowing. The second has a crush of detail which almost reads as shoehorning in the genre at every opportunity (the intrusive hum of the purifiers cleaning the ‘fallout-laden’ air outside, for example). The AI-written dialogue has too much unnecessary detail.
  3. Intrigue and implication: Koontz’s original dialogue establishes succinctly in the conversation how Katie’s mother’s and father’s expectations of her differ (between surprise and non-surprise). The AI version focuses more on descriptive detail and how Katie was able to fix the faucet, which reads as more ‘on-the-nose’ (saying exactly what the situation is and what happened, without subtleties and ambiguities to engage the reader’s imagination).
  4. Tone and mood: The ChatGPT dialogue example reads as hammering home genre, a dystopian tone and mood. Koontz’s dialogue, by contrast, suggests how even in a dystopian world the ordinary and mundane – dinner table talk – can take place. Genre doesn’t need to announce itself in every line.

Link to the rest at Now Novel

Google’s AI pitch is a recipe for email hell

From The Verge:

Of all the emotions I expected out of this week’s Google I/O keynote, “dread” was not on my short list. Google has spent a couple of decades creating products that make my life easier. This week, it demonstrated the exact opposite: a way to make communicating with other people worse.

Google I/O, this year, was focused on artificial intelligence, and one of its centerpieces was prompt-based text generation, particularly in Google’s office suite. At several points throughout the keynote, it demoed a system called “Help me write” — which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished doesn’t automatically mean better.

Good writing can delight us with an intriguing turn of phrase. It can tell us something about how a person thinks and feels. Or it can offer some kind of useful detail about the world. And there’s no reason AI can’t play a role. I’ve seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people explore new uses for it.

But Google’s onstage ideas were almost impressive in their ability to serve precisely none of writing’s core purposes. Its examples of Help me write calcify the worst tendencies of “professional” communications. They’re bland, bloated boilerplate that turns a human prompt into something that uses more words to say less.

Let’s look at one of the examples. Early in the keynote, a presenter auto-generates a job description for a textile design position that they (in the demo’s scenario) send into the world. Later, another presenter responds to it, supplying the following prompt:

interested! 8 years exp with intl brands. Call on Tuesday.

Help me write works its magic and produces the following:

Hi Lori,

Thank you so much for reaching out! I am very interested in the Senior Textile Designer position at Cymbal. I have 8 years of experience working with international brands, and I am confident that I have the skills and experience that you are looking for.

I am available for a call on Tuesday at your earliest convenience. Please let me know what time works.

The latter version is obviously the way you’re expected to apply to a job because, written by a human, it suggests you’re committed enough to type out a multi-paragraph reply and are capable of hitting certain class and cultural signifiers. An AI-generated response doesn’t serve either of those purposes, though. It was created almost instantly based on a predictive text system that requires minimal English-language competency, and so far, that system can only handle fairly formulaic writing; it won’t substitute for human work in many real-world situations. So all it does is construct a longer and more stilted version of the original prompt — one that probably only has value until everyone expects it was written with AI.

And even worse, the AI generation reinforces the idea that overenthusiastic US business speak is the required way to write, regardless of whether it’s a necessary skill for the job. I’ve seen thoughtful stories about people with dyslexia using ChatGPT to produce text that is — as a Washington Post article puts it — “unfailingly professional and polite.” But there’s an unspoken, simpler alternative: being willing to accept wider variations in how people communicate. I don’t begrudge anyone who uses AI writing to meet largely arbitrary standards, but at a society-wide level, it’s a linguistic arms race toward a more boring future.

Link to the rest at The Verge

PG expects business emails to be changed quite a bit when AI is frequently used.

“Is that your real opinion or was it an AI screwup?”

“I hope your AI prompt wasn’t as offensive as the email you just sent me.”

“Since it’s obvious your AI wrote your email, I’m having my AI respond.”

“Let’s get your AI together with my AI to work this out.”


As PG has mentioned previously, based on his understanding of how AI programs utilize written works of all kinds, he doesn’t think they’re violating US copyright law because AI doesn’t reproduce the text protected by copyright.

During his experiments with AI writing programs, the closest PG has come to seeing direct references to the written works of others is a prompt that asks the AI to write something in the style of Hemingway, Fitzgerald, Nora Roberts, or Lucy Score. The AI writing from those prompts presents no danger to the future royalties earned by Ms. Roberts or Ms. Score.

(PG notes that academic publishing generally produces the most turgid collections of words known to humankind.)

Forget ChatGPT. These Are the Best AI-Powered Apps

From The Wall Street Journal:

Type pretty much anything into ChatGPT and it’ll spit out a confident, convincing response. The problem? Its answer can be full of errors. And during long conversations, it can veer into wild tangents.

So I started testing apps that use OpenAI’s GPT technology, but aren’t ChatGPT. Language app Duolingo and learning platform Khan Academy now offer conversational, personalized tutoring with this technology. Writing assistant Grammarly’s new tool can compose emails for you. Travel app Expedia features a chatty trip planner. And all Snapchat users just got a new friend on the social network called My AI.

. . . .

Parlez pal

Duolingo’s Roleplay text chatbot, available to French and Spanish learners on iOS, is more dynamic than the language-learning app’s often-repetitive translation exercises.

Each Roleplay conversation is themed. In my best French, I reminisced about a fictional Caribbean holiday, then I complained about a delayed flight. The bot corrected errors and suggested more advanced vocabulary for my responses.

Duolingo’s content experts created 100 initial scenarios. They programmed the AI language model to speak to a learner as a language instructor and only discuss the intended scenario. The result: No two conversations are alike, and Roleplay gets more advanced as the learner progresses.

. . . .

Homework helper

Khan Academy’s Khanmigo has several personalized learning tools, including a “Tutor me” mode and a quiz module for different subjects.

I tried the AI tutor with an AP U.S. History prompt: “Evaluate the factors behind population movement to America in the 17th century.” While ChatGPT wrote the entire essay for me, Khanmigo replied, “Religious freedom was one factor. Can you think of other examples?” 

I could ask Khanmigo for hints—but it’s programmed not to spit out the answer. 

Kristen DiCerbo, Khan Academy’s chief learning officer, said the company relied on tutoring research to create the Khanmigo prompts. When students get frustrated, it can offer a stronger hint, for example.

If a student types something off base, Khanmigo redirects the conversation. Any inputs related to hate speech, self-harm or violence trigger a message—“The conversation was unable to be processed”—and an email to the student’s parent or teacher, who can review the conversation.

The bigger concern is when the tutor gives the wrong answers, which occasionally happens with math, she said. Khan Academy worked with OpenAI to make GPT-4 better at math. The model is most accurate for questions about widely known K-12 topics but less so with niche subjects, Dr. DiCerbo added.

. . . .

Ghost writer

Grammarly has used AI to edit writing for years. GrammarlyGo, released last month, also composes writing for you. 

The most helpful element is its email responder, which appeared whenever I opened a compose window. I could click a green icon to expand the GrammarlyGo module, which summarizes the email and offers several “tone” options for replies, including persuasive, friendly and diplomatic.

The software can see what’s on your screen only when you activate the GrammarlyGo module. A Grammarly spokeswoman said the data is anonymized before it’s sent to the model. She added that the company never sells customer data and doesn’t allow partners to use the data to train their models.

GrammarlyGo’s suggestions were a good jumping-off point, but they felt like personalized templates I’d still have to mess with. My biggest gripe is that GrammarlyGo always signed off with “Best regards.” I tend to stick with the simpler “Best.”

Users get 100 prompts a month free; that goes up to 500 if they pay $30 a month or $144 annually. (Google is adding similar tools to its Docs and Gmail. For now, they’re only available by invitation.)

Link to the rest at The Wall Street Journal

Outlines, AI and Stormy Daniels

From Anne R. Allen’s Blog… with Ruth Harris:

I’d been planning to write my April post about outlines.

Which authors do outlines? And which ones don’t?

(I don’t and neither does Anne).

I’d done a bit of research and made some notes when along came AI, the hot new kid in town.

How could I ignore AI when every day I was almost buried by an avalanche of news about AI, how to use it, and how it was going to revolutionize everything?

And put writers out of business.

Really?

Hold my beer.

Does s/he? Or doesn’t s/he?

Dean Wesley Smith does not outline: His book, Writing into the Dark, goes into the details of his process.

Neither does Nora Roberts. “I don’t plot. I don’t sit down and plot a book. It sort of unreels as I write.”

James Patterson outlines. Does he ever!

Daniel Silva doesn’t. “I have just never been able to outline. I just can’t bring a story to life on note cards, never have been able to….To me, it’s just a waste of time. My first draft is, in effect, the outline. If you do it long enough, you know that it becomes second nature—how you bring a character on the page, and this is what’s really going on—so I write it at one level but I have a much deeper understanding about what will happen later.”

Lisa Scottoline: “I just get writing. I don’t outline. I don’t know what the story is going to be until I get it down on paper. At the end, hopefully, I see I have a story. And then I’m the happiest person on the planet.”

Harlan Coben:  “I don’t outline. I usually know the ending before I start. I know very little about what happens in between. It’s like driving from New Jersey to California. I may go Route 80, I may go via the Straits of Magellan or stopover in Tokyo… but I’ll end up in California.”

At this point along came AI.

I asked it to write a blogpost about writers who do — and don’t — outline.

Here’s the result:

AI chimes in on Outlines.

When it comes to the writing process, every author has their own unique approach. Some writers meticulously plan out their stories with outlines, while others prefer to let their creativity flow freely without a rigid structure.

In this blogpost, we will explore famous writers who have used outlines in their work, as well as those who have chosen to forgo this method.

Famous Writers Who Use Outlines:

J.K. Rowling –

The world-famous author of the “Harry Potter” series is known for her meticulous planning and outlining. In fact, Rowling is known to have created extensive outlines for each of her seven books in the series before she even started writing the first one. Her outlines included detailed plot points, character arcs, and even diagrams of the Hogwarts castle. Rowling’s use of outlines allowed her to carefully plot out the complex storylines and foreshadowing that are integral to the “Harry Potter” series.

Dan Brown –

The best-selling author of “The Da Vinci Code” and other popular thrillers is known for his meticulous research and detailed outlines. Brown’s books are known for their intricate plots and historical references, and he uses outlines to carefully plan out the complex storylines and clues that make his novels so thrilling. Brown’s outlines often include detailed descriptions of locations, historical events, and clues that his protagonist must unravel, helping him create a fast-paced and engaging reading experience for his readers.

Stephen King –

Although he is known for his spontaneous and free-flowing writing style, Stephen King has also used outlines in some of his works. King has mentioned in interviews that he uses a loose outline to guide him through the writing process, particularly when working on longer novels with multiple plotlines. King’s outlines are not as detailed as those of other writers, but they serve as a rough roadmap that helps him stay on track while still leaving room for his creativity to flow.

George R.R. Martin –

The author of the epic fantasy series “A Song of Ice and Fire,” which inspired the television series “Game of Thrones,” is known for his intricate world-building and complex plotlines. Martin has stated that he uses extensive outlines to plan out the intricate politics, alliances, and conflicts that make up his fantasy world. His outlines often include detailed descriptions of characters, their motivations, and how they fit into the larger story. Martin’s use of outlines allows him to keep track of the numerous characters and storylines in his novels, which are known for their intricate and interwoven plots.

Link to the rest at Anne R. Allen’s Blog… with Ruth Harris

The Problem with Problematic

Note: The following post was written by Copymatic. PG entered a short prompt to seed the AI generation.

Have you ever found yourself in a conversation where someone used the word “problematic” to describe something they disagreed with or found offensive? It seems like nowadays, this term is thrown around so often that it’s lost its meaning. The overuse of “problematic” has become an epidemic, and it’s detrimental to effective communication.

Using this term too frequently can result in misunderstandings and miscommunication between individuals. When we label something as problematic without providing specific reasons or examples, we fail to communicate our thoughts effectively. Additionally, people may feel afraid to express their opinions if they fear being labeled as problematic themselves.

Furthermore, using “problematic” as a catch-all phrase for anything we disagree with limits our ability to engage in constructive dialogue and find meaningful solutions. Instead of simply dismissing something as problematic, we should take the time to identify exactly what issues are at play and work towards resolving them together.

In short, while it may be tempting to throw around buzzwords like “problematic,” doing so ultimately hinders rather than helps interpersonal communication. Moving forward, let’s strive for more thoughtful discussions and recognize that not everything can be reduced down to one simplistic label.

Writers Beware: There is one Big Problem with Using Chat GPT for Blogs

From Medium:

Writers and content creators, myself included, have found artificial intelligence (AI) to be a powerful tool when used properly to save time.

Before using AI, the most I had ever written was three articles in one day. Since then, I have been able to get up to six done in a single day.

If you are a writer or in any technical field, you too should learn AI and become an expert in prompt engineering.

An amateur using AI is boring, but an expert using AI is a powerhouse that can get a lot more work done.

However, there is one big problem writers run into if they are using an AI program such as ChatGPT for blogging.

Artificial intelligence is programmed to detect artificial intelligence, and at least for now that’s a bad thing if you are creating content with AI.

If you copy and paste your blog content from a program like ChatGPT directly into a blog post, this will get flagged by platforms like Google or Facebook and this will hurt your SEO and any chances you have of organic reach on those platforms.

Can you fix this problem and still use AI as a writing tool?

Yes, but there are two AI tools that you need if you are a writer using AI:

1. AI Content Detector

The first tool you need is an AI content detector. One that I have used and find helpful is available at writer.com.

Paste your content in here and it will tell you whether it appears to be human- or AI-generated content.

. . . .

2. Content Rephrasing Tool

You now need to rephrase your content. You can of course do this on your own, and it will take some time.

One way to save time is to use another AI tool that rephrases content. I find Copymatic’s tool helpful and easy to use. They have many other AI content tools as well.

Link to the rest at Medium

PG notes that the Copymatic video was made about one year ago and the program has added new features and refinements since then.

PG will play around with Copymatic and post the results here.

What If Shakespeare Wrote a Science Fiction Blockbuster: Star-crossed Galaxy? – Courtesy of ChatGPT

From Culture.org:

In a universe where the creative genius of William Shakespeare meets the awe-inspiring vastness of outer space, we bring you “Star-crossed Galaxy,” a science fiction epic that combines the beauty of the Bard’s prose with the mind-bending concepts of the final frontier.

Our tale takes place in the Verona System, where two interstellar empires, the Montagues and the Capulets, are engaged in a fierce war that has spanned millennia. Amidst the chaos, our star-crossed lovers, Romeo and Juliet, meet on a diplomatic mission to a neutral space station.

As their passionate love blooms, they embark on a dangerous quest to end the war that has ravaged their respective empires. They must navigate through treacherous asteroid fields, escape the clutches of a jealous Tybalt, and negotiate with the wise and mysterious Friar Laurence, who holds the key to uniting the galaxies.

Their journey is fraught with peril, but the lovers find solace in the beauty of the cosmos and the poetic language of their love. Shakespeare’s timeless verses are woven throughout the narrative, reimagined to capture the majesty and wonder of the stars.

. . . .

“O, swear not by the moon, th’ inconstant orb,

That monthly changes in her circled sphere,

Lest that thy love prove likewise variable.”

As they traverse the galaxy, Romeo and Juliet’s love transcends time and space, transforming into a force as powerful and eternal as the cosmos themselves. But, as in the original tragedy, their love is a double-edged sword, with the potential to both unite and destroy the Verona System.

“Star-crossed Galaxy” will take you on a breathtaking journey through space, propelled by the power of love and the transcendent beauty of Shakespeare’s words.

Link to the rest at Culture.org

Some AI Artworks Now Eligible for Copyright

From Hyperallergic:

The United States Copyright Office recently produced a statement of policy indicating that some artworks generated using artificial intelligence are now eligible for copyright registration on a case-by-case basis. This should go well!

Effective March 16, the Copyright Office’s statement of policy indicates that copyright applicants are permitted to submit AI-assisted works (across literature and visual arts) for protection under copyright law, and that the works will be evaluated for evidence of “human authorship.” The Office made a comparison between AI art and photography, citing the 1884 Supreme Court decision to extend copyright protections to photographs against the will of the Congress in Burrow-Giles Lithographic Co. v. Sarony.

The Supreme Court decided that a photograph is not just a mechanical process, but an authored work based on the photographer’s decisions in curating the backdrop and subject’s clothing.

In the realm of generative works, the Office asks applicants if the included AI elements are the result of “mechanical reproduction” or of an author’s “own original mental conception, to which [the author] gave visible form.”  To mark the difference, the policy distinguishes between human artists developing AI work strictly through submitting prompts as instructions, and human artists selecting from and reimagining AI generations in a “sufficiently creative way.” However, the policy states that in the latter case, only the “human-authored” elements of the work would be copyrighted independent of the AI contributions.

The Office cites the example of a 2018 work generated autonomously by an unattended computer algorithm that was submitted for copyright protection and ultimately rejected as it was developed “without any creative contribution from a human actor.” On the other hand, a graphic novel with human-written text and Midjourney-generated imagery was granted copyright protection as a whole, but the individual images were omitted from the approval as they were not considered works of human authorship.

Link to the rest at Hyperallergic

I asked GPT-4 to write a book. The result: “Echoes of Atlantis”, 12 chapters, 115 pages, zero human input

From Reddit:

The goal of this project was to have GPT-4 generate an entire novel from scratch, including the title, genre, story, characters, settings, and all the writing, with no human input. It is impossible currently to do this using a single prompt, but what is possible is to supply a series of prompts that give structure to the process and allow it to complete this large task, one step at a time. However, in order to ensure that all the creative work is done by GPT-4, prompts are not allowed to make specific references to the content of the book, only the book’s structure. The intention is that the process should be simple, mechanical and possible (in principle) to fully automate. Each time the process is repeated from the beginning, it should create another entirely new book, based solely on GPT-4’s independent creative choices.

The result: Echoes of Atlantis, a fantasy adventure novel with 12 chapters and 115 pages, written over 10 days, from the day GPT-4 was released until now.

Insights/Techniques
The main insights I figured out in the course of doing this project:

Iterative refinement: Start with a high level outline. Make a detailed chapter outline. Then write a draft version of the full chapter (this will be much shorter than desired). Then expand each scene into a longer, more detailed scene.

Bounding (outside-in): GPT-4 loves to go too far ahead, writing about parts of the book that aren’t supposed to happen yet. The key to preventing this is to have it first write the first parts, then the last parts, then fill in the middle parts. The last part prevents it from going too far ahead, and the first parts in turn bound the last part of the previous section. Bounding is used at every level of refinement except the top level.

Single prompt: Often, by using a single large prompt, rather than a running conversation, you can flexibly determine exactly what information will be included in the input buffer, and ensure that all of it is relevant to the current task. I’ve crafted this approach to squeeze as much relevant info as I can into the token buffer.

Continuity notes: Ask it to take notes on important details to remember for continuity and consistency as it goes. Begin with continuity notes summarized from the previous scene, and then fold in additional continuity notes from the previous continuity notes. Continuity Notes will tend to grow over time; if they become too long, ask it to summarize them.

Revising outlines: In some cases, the AI improvises in its writing, for example moving some of the Chapter 5 scenes into Chapter 4, which breaks the book. To resolve this, I ask it after each chapter to go back and update its earlier, higher-level outlines and regenerate the opening and closing scenes of each chapter before continuing. This is very similar to how real authors revise their outlines over time.

Data cleanup: Sometimes outputs will do things a little weird, like copy labels from the input buffer like “Opening Paragraph”, or forget to number the scenes, or start numbering at zero, or add a little bit of stray text at the beginning. Currently I clean these up manually but a fully automated solution would have to cope with these.
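Taken together, the iterative-refinement, bounding, continuity-notes, and outline-revision steps above can be sketched as a simple driver loop. This is a hypothetical sketch, not the poster’s actual code: the `ask` callable and all prompt wording are assumptions, standing in for whatever completion API is used.

```python
# Hypothetical sketch of the outline -> scenes -> draft pipeline described
# above. `ask` stands in for any text-completion call (e.g. an API client);
# it is a parameter here so the loop's structure can be exercised offline.

def write_book(ask, num_chapters):
    """Drive the iterative-refinement process one chapter at a time."""
    outline = ask("Write a high-level outline for a book, with a list of "
                  "characters and a short summary of each chapter.")
    chapters, notes = [], ""
    for n in range(1, num_chapters + 1):
        # Bounding (outside-in): pin down the first and last scene before the
        # middle, so the model cannot run ahead of the current chapter.
        opening = ask(f"Outline: {outline}\n"
                      f"Describe only the FIRST scene of chapter {n}.")
        closing = ask(f"Outline: {outline}\n"
                      f"Describe only the LAST scene of chapter {n}.")
        scenes = ask(f"Outline: {outline}\n"
                     f"Chapter {n} must begin with: {opening}\n"
                     f"and end with: {closing}\n"
                     f"List every scene in between, with a short description.")
        draft = ask(f"Outline: {outline}\nScenes: {scenes}\n"
                    f"Continuity notes: {notes}\n"
                    f"Write a full narrative draft of chapter {n}.")
        # Continuity notes: fold each chapter's details into a running summary.
        notes = ask(f"Previous notes: {notes}\nChapter text: {draft}\n"
                    "Combine and summarize the continuity notes.")
        # Revising outlines: let any improvisation flow back into the plan
        # before the next chapter is generated.
        outline = ask(f"Outline: {outline}\n"
                      f"Update this outline to reflect what actually "
                      f"happened in chapter {n}: {draft}")
        chapters.append(draft)
    return "\n\n".join(chapters)
```

With a trivial stub for `ask`, the loop issues one outline prompt up front and six prompts per chapter, matching the level-by-level process the prompts below illustrate.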

Example prompts
These are just a few examples. For full details, see my Research Log.

Level 1: Top-level outline

Me: Please write a high-level outline for a book. Include a list of characters and a short description of each character. Include a list of chapters and a short summary of what happens in each chapter. You can pick any title and genre you want.

Level 1: Updating outline after each chapter

Me: Please edit and update the high-level outline for the book below, taking into account what has already happened in Chapter 1.

Level 2: Scenes (bounding)

Me: Please write a detailed outline describing the first scene of each chapter. It should describe what happens in that opening scene and set up the story for the rest of the chapter. Do not summarize the entire chapter, only the first scene.

Me: Write a detailed outline describing the final, last scene of each chapter. It should describe what happens at the very end of the chapter, and set up the story for the opening scene of the next chapter, which will come immediately afterwards.

Level 2: Scenes

Me: Given the following book outline, and the following opening and final scenes for Chapter 1, write a detailed chapter outline giving all the scenes in the chapter and a short description of each. Begin the outline with the Opening Scene below, and finish the outline with the Final Scene below.

Level 3: Rough draft

Me: Given the following book outline, and following detailed chapter outline for Chapter 1, write a first draft of Chapter 1. Label each of the scenes. Stop when you reach the end of Chapter 1. It should set up the story for Chapter 2, which will come immediately afterwards. It should be written in a narrative style and should be long, detailed, and engaging.

Level 4: Paragraphs (bounding)

Me: Given the following book outline, and the following draft of Chapter 1, imagine that you have expanded this draft into a longer, more detailed chapter. For each scene, give me both the first opening paragraph, and the last, final paragraph of that longer, more detailed version. Label them as Opening Paragraph and Final Paragraph. The opening paragraph should introduce the scene. The final paragraph should set up the story for the following scene, which will come immediately afterwards. The last paragraph of the final scene should set the story up for the following chapter, which will come immediately afterwards.

Level 4: Paragraphs

Me: Given the following book outline, and the following draft of Chapter 1, write a longer, more detailed version of Scene 1. The scene must begin and end with the following paragraphs: (opening and closing paragraphs here)

Continuity Notes

Me: Please briefly note any important details or facts from the scene below that you will need to remember while writing the rest of the book, in order to ensure continuity and consistency. Label these Continuity Notes.

Me: Combine and summarize these notes with the existing previous Continuity Notes below.

Link to the rest at Reddit

WGA Seeks Higher Compensation Amid Streaming Boom, Threatens First Strike in 15 Years

From Culture.org:

The Writers Guild of America (WGA) has commenced high-stakes negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) for a new three-year contract, as the current agreement is set to expire on May 1.

. . . .

Representing over 11,000 television and movie writers, the WGA is seeking higher compensation, improved workplace standards, and a boost in contributions to pension and health funds.

The outcome of these negotiations will determine if the entertainment industry faces its first writers’ strike in 15 years.

. . . .

As the industry shifts towards streaming platforms, the WGA claims that Hollywood companies have taken advantage of this change to devalue writers’ work, leading to worsening working conditions.

The rapid transition to streaming entertainment has upended nearly every corner of Hollywood, and writers believe they have been left behind.

With fewer episodes per season on streaming platforms compared to traditional networks, writers are often paid less while working more.

Residual fees, or money paid when a film or series is rerun or aired on broadcast, have helped supplement writers’ income for years.

However, these fees are disappearing in the streaming era, where most projects ultimately land.

. . . .

The WGA is also asking for studios to establish standards around the use of artificial intelligence (AI) technology.

The guild wants the use of AI regulated in terms of material created for the studios.

The exact terms of agreement regarding AI have yet to be determined, and the WGA will have to overcome several hurdles to deliver its objectives to members.

. . . .

With the growing demand for content, many professionals in the entertainment industry work on a project-to-project basis, leading to job insecurity and a lack of long-term stability.

This gig economy structure can make it difficult for workers to plan their careers and secure stable income.

The potential writers’ strike highlights the need for better workplace standards and more reliable compensation structures to address the challenges faced by Hollywood workers in this evolving landscape.

Link to the rest at Culture.org

Microsoft’s new Copilot will change Office documents forever

From The Verge:

Microsoft’s new AI-powered Copilot summarized my meeting instantly yesterday (the meeting was with Microsoft to discuss Copilot, of course) before listing out the questions I’d asked just seconds before. I’ve watched Microsoft demo the future of work for years with concepts about virtual assistants, but Copilot is the closest thing I’ve ever seen to them coming true.

“In our minds this is the new way of computing, the new way of working with technology, and the most adaptive technology we’ve seen,” says Jon Friedman, corporate vice president of design and research at Microsoft, in an interview with The Verge.

I was speaking to Friedman in a Teams call when he activated Copilot midway through our meeting to perform its AI-powered magic. Microsoft has a flashy marketing video that shows off Copilot’s potential, but seeing Friedman demonstrate this in real time across Office apps and in Teams left me convinced it will forever change how we interact with software, create documents, and ultimately, how we work.

. . . .

Copilot appears in Office apps as a useful AI chatbot on the sidebar, but it’s much more than just that. You could be in the middle of a Word document, and it will gently appear when you highlight an entire paragraph — much like how Word has UI prompts that highlight your spelling mistakes. You can use it to rewrite your paragraphs with 10 suggestions of new text to flick through and freely edit, or you can have Copilot generate entire documents for you.

. . . .

Microsoft has customized this Copilot system for every Office app, so there are different ways to command it. Friedman demonstrated to me how Copilot can help you write emails in Outlook, offering up short or long message drafts with options to change the tone. It even works in the mobile version of Outlook, which got me thinking about the ways this could speed up work on the go.

“Outlook mobile is the first place where we’re doing a big push,” explains Friedman. Outlook can summarize all your emails on the go, generate drafts, and generally make it easier to triage your inbox. But imagine creating entire Word documents from your phone without having to type on a tiny on-screen keyboard. “We’ll have more to talk about mobile in the coming months,” says Friedman. But you can imagine where things will go.

Link to the rest at The Verge

Google Is About to Unleash AI for Gmail and Google Docs

From Gizmodo:

Google announced it’s finally unleashing its generative AI tools Tuesday, bringing a set of features to Gmail and Google Docs that will automatically create drafts based on simple prompts. Google will also add new AI capabilities to its business products, including Google Cloud and a new API for developers.

Google says it’s rolling out a test of these features to a “limited set of trusted testers” in the coming weeks. When they’re released to the more than 3 billion users of Gmail and Google Docs, more people will be exposed to the latest generation of artificial intelligence technology than ever before.

In Gmail and Google Docs, you’ll be able to type in a few words about a topic, and the apps will automatically spit out a draft. The company posted a GIF using a job posting as an example. The user types “job post for a regional sales rep,” and in a second, Docs spits out a formatted page of text with filled out sections for a job description, responsibilities, and qualifications.

“We’re now at a pivotal moment in our AI journey,” said Thomas Kurian, CEO of Google Cloud, in a blog post. “Breakthroughs in generative AI are fundamentally changing how people interact with technology — and at Google, we’ve been responsibly developing large language models so we can safely bring them to our products. Today, we’re excited to share our early progress.”

With Google’s upcoming business tools, companies are sure to create a flood of AI apps.

Link to the rest at Gizmodo

Romeo and Juliet via ChatGPT

Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

GrammarlyGO

From Grammarly:

Today, we announced to the world GrammarlyGO—Grammarly’s on-demand, contextually aware assistant powered by generative AI. With GrammarlyGO, we’ll be changing the way people and businesses communicate and get work done by accelerating productivity where writing happens.

Effective communication is transformative. It’s how we share new ideas, advocate for change, and build connections. And when done right, communication empowers businesses to operate efficiently and achieve ambitious goals. We’ve been focused on our mission to improve lives by improving communication for well over a decade. And we’ve always leveraged the latest technical innovations to help solve the real problems our customers face.

We’re building on that legacy with GrammarlyGO, which uses generative AI to help people and businesses succeed with on-demand communication assistance, whether they are starting from scratch or revising an existing piece of writing. It will uniquely offer relevant, contextually aware suggestions that account for personal voice and brand style while staying true to our augmented intelligence philosophy to keep customers in control of their experience. GrammarlyGO will enable customers to save time, enhance their creativity, and get more done—helping individuals achieve their potential and enterprises transform how they work.

. . . .

GrammarlyGO provides on-demand generative AI communication assistance directly in the apps where people write. Whether in an email thread or a long-form document, GrammarlyGO is right there with you and your teams during the writing process. GrammarlyGO understands context to quickly generate high-quality, task-appropriate writing and revisions.

With GrammarlyGO, individuals and businesses can use generative AI to:

  • Rewrite for tone, clarity, and length: Transform writing to be clear and on target, whatever the context.
  • Compose: Type a prompt and watch GrammarlyGO compose high-quality writing, saving time finding the perfect words.
  • Ideate: Unblock writing with GrammarlyGO as an AI ideation partner and unlock creativity with GrammarlyGO’s outlines and brainstorms, generated from prompts.
  • Reply intelligently: Flow through emails quickly with GrammarlyGO, which understands an email’s context and instantly drafts a thoughtful reply.

Link to the rest at Grammarly

PG is very interested in this development.

He will note in passing that his current Grammarly version found some parts in the OP that needed to be cleaned up.

Science fiction publishers are being flooded with AI-generated stories

From Tech Crunch:

Across the 17-year history of Clarkesworld, a renowned literary magazine of science fiction and fantasy, authors have speculated about how evolving, futuristic technology will impact our world. Now, editor and publisher Neil Clarke is living through a debacle that could very well be a sci-fi story in its own right: His magazine is being uncontrollably inundated by short-story submissions created with AI tools.

“It is ironic, I’ll say that much,” Clarke told TechCrunch. Clarkesworld has a reputation for always being open to story submissions, whereas many short-fiction publishers will only take submissions in certain short windows. But for the first time, submission volume got so out of hand that Clarke made what he calls a “spur-of-the-moment decision” to close the submission portal (in the past, Clarkesworld has only briefly closed when upgrading its website or software).

“It’s easy with these tools to churn out hundreds of thousands of works in the time that a human author would produce maybe one or two,” Clarke told TechCrunch. “So what we basically have is a room of screaming toddlers, and we can’t hear the people we’re trying to listen to.”

Clarke isn’t being dramatic. In a blog post, he shared a graph spanning from June 2019 to February 2023, which shows how many monthly submissions his staff flagged as spam. Until the beginning of this year, spam submissions never exceeded 25 per month, while many months had no spam whatsoever. Before closing submissions on Monday, Clarkesworld had received more than 500 spam submissions in the month of February alone. For context, Clarkesworld received around 11,500 submissions in 2022, per Clarke’s blog.

Link to the rest at Tech Crunch