From Publishing Perspectives:
Of all the commentary about OpenAI’s model ChatGPT—and who hasn’t commented on it?—some of the more level-headed observations for the publishing industry may come from Thomas Cox, a specialist in computer science and the managing director of Arq Works near Oxford.
Cox’s Arq Works focuses on software for the book publishing industry. This, of course, is one of the key reasons that we wanted to speak with him for today’s article.
. . . .
Cox’s input is useful, of course, because the international publishing industry can at times be quite emotional in its responses to technological developments. You’ll remember no small amount of hair-tearing over anything dubbed “digital” years ago, for example, and the days when “enhanced ebooks” were predicted to guarantee a future in which print books would have no place. Similarly, there have been rash warnings of computers using ChatGPT to write and sell whole books, trashing the book publishing industry forever, just as video killed the radio star, right?
. . . .
Many people, even in publishing, can be unnerved by cyber-rattling when ChatGPT is discussed.
But what may be more seriously concerning, as Cox points out, is the potential for machine-learning systems to generate and promulgate content that’s incorrect and misleading: misinformation and, in the wrong hands, disinformation.
So extensive are concerns about ChatGPT that, as Anna Tong writes for Reuters from San Francisco, “OpenAI, the startup behind ChatGPT, on Thursday said it is developing an upgrade to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.”
Cox talks about how the software can “hallucinate,” as technologists call it, generating a stream of authoritative-sounding verbiage that’s in fact utterly wrong.
. . . .
“A lot of the time it’s wrong because it’s based on how it’s trained. Because of the way the statistical model in the background works, it’s not a truth engine at this point. It’s not a knowledge base. It’s not the source of all human knowledge. It’s just representing an answer which has come back [because] it’s matching on all the algorithms that it’s been trained on. So it will happily lie to you all day.”
Answering one concern for educators in publishing, Cox says, “I think that for professors, educated people who are reading blog posts, it’s going to be easy to see the ones spewed out by ChatGPT or similar systems because if they’ve not been updated, they will just be absolutely filled with inaccuracies.”
Link to the rest at Publishing Perspectives
PG notes that these are the earliest days of Artificial Intelligence as applied to text creation, among a great many other things.
Critics are harping on the shortcomings of the very first publicly available (at no charge) iterations of AI text software. PG predicts that, like many technological innovations, it will evolve very rapidly.