WGGB publishes policy position on risks and benefits of AI


From The Bookseller:

The Writers’ Guild of Great Britain (WGGB) has published Writers and AI, a policy position statement outlining the challenges and risks posed by AI, as well as its potential benefits.

The statement was released in response to the union’s recent survey, which revealed that 65% of 500 respondents believed the increased use of AI will reduce their income from writing, while 61% were worried that AI could replace jobs in their craft areas.

The survey came on top of an early impact assessment by OpenAI which indicated that the exposure risk to poets, lyricists and creative writers was amongst the highest, at 68.8%.

Additionally, a recent report by KPMG, Generative AI and the UK Labour Market, estimated that 43% of the tasks associated with authors, writers and translators could be automated, with humans “fine tuning” machine output.

The policy position statement published in response to the data addresses various ongoing concerns about AI. These include decreased job opportunities for writers, the suppression of writer pay, infringements of copyright and the use of writers’ work without their permission, plus lack of adequate regulation from the government.

The statement says: “While the AI systems are not yet sophisticated enough to produce works which accurately mimic the standard of writing produced by professional writers, this is a likely future scenario.

However, the WGGB does not believe that AI will be able to replicate the originality, authenticity, enthusiasm and humanity that professional writers put into their storytelling.”

. . . .

The policy position statement makes a number of recommendations, which will be used to inform the union’s lobbying and campaigning work. It outlines that AI developers should only use writers’ work if they have been given express permission to do so, reflecting the view of 80% of respondents to the WGGB survey.

In addition, the statement outlines that AI developers should maintain “clear and accessible” logs of the information used to train their tool to allow writers to check if their work has been used. Where content has been generated or decisions have been made by AI and not a human being, it adds that this needs to be clearly labelled as such.

The statement goes on to outline that where AI has been used to create content, developers should appropriately credit the authors whose work has been used to create such content. It adds that writers should also be fairly compensated when developers use their work.

Meanwhile, 59% of respondents to the WGGB AI survey reported believing that a new, independent regulator should be set up to oversee and monitor the expansion of AI. The union echoes this position in the statement, saying it “believes the government should set up a new regulatory body whose remits specifically covers AI, applicable to all future and previous AI development work, so that writers and others are able to assert their rights regarding work which has already been used without their knowledge or permission”.

The government should not allow any copyright exceptions to allow text and data mining for commercial purposes, the statement adds, as this would allow AI developers to “scrape writers’ work from online sources, without permission or payment”.

It also outlines that there should be “clear, accessible and affordable” routes for writers to challenge the practices of AI developers and bring claims regarding the use of their work.

Link to the rest at The Bookseller

PG’s response to the OP: “In your dreams.”

As PG has mentioned previously, he believes that the use of writings of all sorts to train an AI is not a violation of the copyright of the creators of those writings.

“Inspired by” creations have never, to the best of PG’s knowledge, been regarded as violations of the copyright of those who created the source of the inspiration. Copying and republishing the original in whole or in substantial part is what triggers the right of the original creator to assert a violation of copyright protection.

The development of artificial intelligence systems is taking place all over the world. If Britain slows down its AI efforts (keeping detailed and accessible logs of each work used, in whole or in part, to prime the creation) at the request of traditional publishing and some of its authors, researchers in other nations will move forward and Britain will be left behind.

PG recalls some of the words of the British poet Stephen Spender:

I think continually of those who were truly great.
Who, from the womb, remembered the soul’s history
Through corridors of light, where the hours are suns,
Endless and singing. Whose lovely ambition
Was that their lips, still touched with fire,
Should tell of the Spirit, clothed from head to foot in song.
And who hoarded from the Spring branches
The desires falling across their bodies like blossoms.

What is precious, is never to forget
The essential delight of the blood drawn from ageless springs
Breaking through rocks in worlds before our earth.
Never to deny its pleasure in the morning simple light
Nor its grave evening demand for love.
Never to allow gradually the traffic to smother
With noise and fog, the flowering of the spirit.

PG can’t imagine anyone criticizing Spender’s expressed obsessions or claiming that his thoughts of other creators and their works meant that Spender could not use such thoughts and works to create something of his own.

5 thoughts on “WGGB publishes policy position on risks and benefits of AI”

  1. Did you have any comment on the Sarah Silverman case? Apologies if I missed it.

    But all the actions seem to be following a desire to make books follow the “Blurred Lines” case, where every copyrightable element is not present but it “feels” like something else.

    Oh, it has spirit and song in the same line, sure feels like something Spender wrote.

    • Silverman is but one of several authors suing over ChatGPT.
      The pileup is just beginning.
      They’re in for a surprise.

      Remember that we’ve been here before with the Authors Guild vs Google.
      Scanning content for use in a database is legal.
      Using database data to compose a report is legal.
      Summaries of books are legal. (CliffsNotes.)
      And, where it comes to video and images: style is not protected.

      In the US there are the Fair Use guidelines to consider:

      “Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports.

      “There are no legal rules permitting the use of a specific number of words, a certain number of musical notes, or percentage of a work.

      “Section 107 of the Copyright Act provides the statutory framework for determining whether something is a fair use and identifies certain types of uses—such as criticism, comment, news reporting, teaching, scholarship, and research—as examples of activities that may qualify as fair use.

      “Four factors are considered when determining fair use: What is the purpose of the use? What is the nature of the copyrighted work? How much of the work will be used? What is the market effect on the original work of the use? Fair use is determined by weighing these four factors either for fair use or for asking permission to use the work.”

      The market effect is typically measured by whether the challenged work can substitute for the work of the challenger.

      In Silverman’s case, is the summary going to provide the reader with a similar experience as reading her book? Doubtful. It might steer away would-be buyers, but so do book reviews and critical analyses.

      More than in the Google case, which presented snippets of book content, GPT summaries of a book only convey the *idea* of what’s in the book and how it is presented, not the actual verbatim content.

      The only question is how many billable hours are racked up before the claims are rejected.

  2. The notion that AI can be controlled, regulated, and limited in its use has little basis in reality. I suppose we might also try regulating the use of calculators, math, or word processors.

    I suspect we will soon see commercially successful fiction that is generated by providing a detailed outline and then editing the AI product. But, how will we know? Ask another AI?

  3. The genie is already out of the bottle – it is already too late. It is weird.

    But I still don’t think it can write fiction.

    It can handle sports and weather and routine office results – I care nothing about those except accuracy, and the outcry over AI just plain LYING has been hilarious. I can’t imagine a lawyer sending results based on AI to a judge without thoroughly and personally checking all the references – but to find that they were fake?

    If AI sticks its nose in creating, I’m going to have a real problem. If it’s any good.

    “In your dreams” is still the correct response.

    • Agreed on both counts.

      First, because “AI” isn’t what they think it is, and because they fear it.
      And second because LLMs are trained on mass volume and thus will always tend to go for the “average” answer. As in standard narrative voice and mediocrity.

      The best creative content goes where few if any have gone before. Out-of-the-box thinking, as it were. Generative software is by nature constrained to stay inside the box. In fact, the biggest problem still faced by existing models comes when the model jumps the fence and goes off into the weeds.

      Yet it is in the weeds that true creativity is to be found.

Comments are closed.