What ‘AI’ Isn’t: An Interview With Thomas Cox on ChatGPT

From Publishing Perspectives:

Of all the commentary about OpenAI’s model ChatGPT—and who hasn’t commented on it?—some of the more level-headed observations for the publishing industry may come from Thomas Cox, a specialist in computer science and the managing director of Arq Works near Oxford.

Cox’s Arq Works focuses on software for the book publishing industry. This, of course, is one of the key reasons that we wanted to speak with him for today’s article.

. . . .

Cox’s input is useful, of course, because the international publishing industry at times can be quite emotional in its responses to technological developments. You’ll remember no small amount of tearing of hair over anything dubbed “digital” years ago, for example, and the days when “enhanced ebooks” were predicted to guarantee a future in which print books would have no place. Similarly, there have been rash warnings of computers using ChatGPT to write and sell whole books—and trashing the book publishing industry forever, just as video killed the radio star, right?

. . . .

Many people, even in publishing, can be unnerved by cyber-rattling when ChatGPT is discussed.

Thomas Cox

But what may be more seriously concerning, as Cox points out, is the potential for machine-generated systems to generate and promulgate content that’s incorrect and misleading—misinformation and, in the wrong hands, disinformation.

So extensive are concerns about ChatGPT that, as Anna Tong in San Francisco is writing for Reuters, “OpenAI, the startup behind ChatGPT, on Thursday said it is developing an upgrade to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.”

Cox talks about how the software can “hallucinate,” as technologists call it, generating a stream of authoritative-sounding verbiage that’s in fact utterly wrong.

. . . .

“A lot of the time it’s wrong because it’s based on how it’s trained. Because of the way the statistical model in the background works, it’s not a truth engine at this point. It’s not a knowledge base. It’s not the source of all human knowledge. It’s just representing an answer which has come back [because] it’s matching on all the algorithms that it’s been trained on. So it will happily lie to you all day.”

Answering one concern for educators in publishing, Cox says, “I think that for professors, educated people who are reading blog posts, it’s going to be easy to see the ones spewed out by ChatGPT or similar systems because if they’ve not been updated, they will just be absolutely filled with inaccuracies.”

Link to the rest at Publishing Perspectives

PG notes that these are the earliest days of Artificial Intelligence as applied to text creation as well as a great many other things.

Critics are harping on the shortcomings of the very first publicly available (at no charge) iterations of AI text software. PG predicts that, like many technological discoveries, it will evolve very rapidly.

Wright Brothers First Flight
(1903)
Sikorsky S-21 Russky Vityaz, the first four-engine airplane ever flown
(1913)
Mitsubishi 1MF
First Airplane to Take Off from and Land on a Japanese Aircraft Carrier
(1923)
Hawker Hurricane
Top Speed: 340 mph
(1935)
B-29 Superfortress
Carried 20,000 pounds of bombs to targets more than 2,000 miles away.
Dropped the first atomic bombs on Hiroshima and Nagasaki
(1944)

15 thoughts on “What ‘AI’ Isn’t: An Interview With Thomas Cox on ChatGPT”

  1. The current approach of AI is explicitly limited to lookup/retrieval of its training data.

    This is incredibly useful, but the rebranding of this as AI is misleading.

    Intelligence implies the ability to reason and deal with novel situations. “AI” doesn’t do this; it just matches input against its training set and regurgitates the training data.

    The Tesla “AI day” presentation videos are very useful to get a good idea of what current AI is and isn’t.

    • Not quite.
      “AI” uses the training dataset to generate a data-processing Model that responds to new inputs. The output need not appear anywhere in the training data; it just has to be produced by what the model “learns” from that data (a toy sketch at the end of this comment illustrates the point). The chatbot models are trained to “understand” natural language and respond according to their training and purpose. But that isn’t all, or even the most important thing, they can do.

      Try this:

      https://www.msn.com/en-us/news/us/an-ai-successfully-flew-an-f-16-fighter-jet-for-17-hours/ar-AA17tNdi

      (Now, add in that the USAF has almost 2,000 F-16s in the boneyard that could be adapted for “AI” drone use, say three per F-35… A bit more significant than doing a web search.)

      Each Model is a different black box that accepts different inputs and spits out different outputs.
      For example, the training dataset could be image data from paintings or LIDAR data from a satellite, both going through the same image-processing model; or, conversely, you could send the same data through different Models.

      In the case of ChatGPT versus Bing, Bing has two components: the OpenAI-based Model trained with newer data *plus* an up-to-date search component, Microsoft’s “Prometheus” technology, which is *not* part of ChatGPT. The model takes the query, interprets it, runs it against the live search index, and either returns a list of links or, if in chat mode, composes a natural-language reply. The output need not be in the training data. (A classic example is a query about movie times in a particular region. ChatGPT can’t answer it; Bing can, even though 2023 data was not in the training data.)

      The exact same neural network can also generate a different Model by training on different data. (That is one way Microsoft intends to monetize their OpenAI investment: charging companies to create their own Models, trained on their proprietary in-house data to process their own inputs, via Azure.) BTW, creating a Model can require ridiculous amounts of computing power, but running one need not; it depends on the task. Current smartphones with the fancy onboard camera features? Probably a small Model at work. One of the bad-boy “AI” apps getting all the bad press, Stable Diffusion, can be run on a PC with a high-end NVIDIA graphics card. The game is just beginning.

      Finally, the most useful models are those, like Tesla’s and John Deere’s (or DARPA’s), where the output has nothing to do with the training data: those models take real-time sensor data, “recognize” what the sensors detect, and issue proportional instructions to a control system. In that mode they are more like advanced analog computers.

      A recent article I saw about THE EXPANSE gave an interesting and compelling answer to the question of why their future, centuries from now, has no AIs: the authors effectively replied that there is AI all over, embedded in the hardware in use. And yes, that makes sense: it’s no different from the ubiquity of microprocessors that live around us, in phones, TVs, cars, ovens, fridges, watches…

      Eventually, everything that has a control system will be run by either an “AI” model or software crafted by one. As far back as the ’80s it was floated that eventually we would see software written by software. The path to that is now open.

      The future isn’t chatbots, but embedded “AI”.
      Now we just have to avoid creating Berserkers or Bolos.
      (Not holding my breath, though.)
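
      To make that training-versus-inference distinction concrete, here is a toy sketch in Python (mine, not anyone’s production code), using the smallest possible stand-in for a “Model”: a least-squares line fit. The fitted parameters come from the training data, but the answer returned is for an input that never appears in that data.

          import numpy as np

          # "Training data": a handful of (input, output) pairs following y = 2x + 1.
          x_train = np.array([0.0, 1.0, 2.0, 3.0])
          y_train = np.array([1.0, 3.0, 5.0, 7.0])

          # "Training": fit the model's parameters to the data.
          slope, intercept = np.polyfit(x_train, y_train, deg=1)

          # "Inference": query an input that is NOT in the training set.
          x_new = 10.0
          print(slope * x_new + intercept)  # 21.0, produced by the model, not looked up

      Scale the same idea up by many orders of magnitude (billions of parameters instead of two, text instead of numbers) and you get the chatbots: nothing is retrieved verbatim; the model generates its answer from whatever the training shaped its parameters to do.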

      • Just ran into this:
        https://m.youtube.com/watch?v=p3zHoCEEvdY

        It includes video of the pilotless F-16 doing things no human pilot could possibly tolerate. It has long been known that the F-16 can do maneuvers that generate forces well beyond what the most capable human can withstand (~9 G).
        In addition, the control system can respond to an onboard pilot as if it were another type of plane: a C-17, F-22, F-35, Poseidon, whatever. That was the primary goal of the program, a chameleon trainer platform. But then they discovered the thing can dogfight…

        Chalk one up for embedded “AI”.

    • I match my training set, too. I have a training set of dogs. I have seen lots of them. When I see a four-legged, 30-pound critter with a lolling tongue, I consult the training set and say “Dog.” Without that training set, I wouldn’t know what it is. I could consult other training sets in my catalog to see what comes closest.

      I don’t think we can accurately define intelligence, but there is a parade of folks who want to define what it is not. If we simply follow a Turing model and look at results, human and machine results get closer and closer.

      There was a comment here a few days back about the failure of ChatGPT because it recited some facts and then started bullshitting. What could be more human than that?

      • As Asimov pointed out decades ago, a true AI would be a better person than a human.
        A software entity that is merely as good as a human is a low bar to clear; that way you get a Skynet or a Colossus. If you’re going to create a true AI, you want an R. Daneel Olivaw or a Mycroft Holmes. Otherwise, why bother?
        Actual humans are easy to make. 😉

      • If we ‘simply follow a Turing model’, we are not looking at intelligence at all, but merely at the ability to manipulate linguistic symbols. This is a small subset of human intelligence.

  2. Experiment:

    Has anyone tried having two users ask the exact same question? First-time users would be best, to eliminate any history effect. Do they get the same answer?
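
    For what it’s worth, this is easy to test through the API rather than the web page, since each API call starts with no shared history at all. A rough sketch (the details are my assumptions, not anything from the post: the OpenAI Python client’s chat interface, the gpt-3.5-turbo model, and an API key in the OPENAI_API_KEY environment variable):

        import openai  # pip install openai

        QUESTION = "When did the Wright brothers first fly?"

        def ask(question):
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": question}],
                temperature=1.0,  # default-style sampling; lower it for more repeatable answers
            )
            return resp["choices"][0]["message"]["content"]

        print(ask(QUESTION))
        print(ask(QUESTION))  # usually worded differently: the model samples its reply

    Two “first-time users” will typically get answers that differ at least in wording, because the model samples each reply token by token; pushing the temperature toward zero makes the answers much more repeatable, though still not guaranteed identical.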

  3. I, and I suspect everyone who writes professionally, use “AI tools,” though we don’t think of them as such: spell check, thesaurus, etc. are tools. They are functions that help the writer with accuracy, etc., and I personally see no problem with that.

    My only problem is with fiction writers who use AI to generate story content and then present the work as being specifically their own. Beginning with my next publication, this (or something similar) will go into the front matter:

    This fiction is a Creation, the result of a partnership between a human writer and the character(s) he accessed with his creative subconscious. This is in no part the block-by-block, artificial construction of any sort of AI or of any conscious, critical, human mind. What you read here is what actually happened there.

  4. AI’s ‘Kitty Hawk’ came in the 1960s, with systems like ELIZA. Don’t fool yourself that this is a brand-new invention.

    All these systems do is manipulate text in various ways. They have no frame of reference; no way of knowing what any of the words actually mean. If they convey genuine information, it’s because they mined it from some online source. If they convey false information, well, the Internet is full of that, too – but also, these bots are proving themselves to be experts at BSing. They will answer queries by making up plausible strings of words that look as if they were informative, but have no connection with reality.

    Of course, many people in the media are impressed by this, because a lot of them do little more than manipulate text themselves, and routinely cover subjects of which they have no adequate understanding – technology in particular. And people in traditional publishing are impressed, because they want there to be a way that they can generate profitable books without having to pay a bunch of pesky writers.

    • Correct.
      That’s why the media is pearl-clutching over the “AI” hype.
      There’s no intelligence in the new software, just very good (but still limited) dynamic algorithms. But “dynamic algorithm” isn’t as hype-worthy as “machine learning,” much less “AI.”

      The hype will die down soon enough, but the underlying tech will roll merrily along.
      Lost in all the noise over the Bing chatbot going off the rails in long conversations is the undeniable fact that the basic search function is way better than Google’s, and its ability to abstract and summarize is useful.

      Eventually folks will stop hyperventilating and notice it’s just another tool.

  5. “OpenAI, the startup behind ChatGPT, on Thursday said it is developing an upgrade to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.”

    Oh, don’t work too hard. This might be a feature, not a bug, because it can stimulate people to be skeptical and more questioning. If people finally grasp that the computer is only as good as its programming, it might get them to consider how it was programmed, for what purpose, or who did the programming (if relevant). Unless the answers are easily checked and irrefutable (mathematical equations, for example), you’re not supposed to think of the computer’s responses as pronouncements from Mt. Sinai.

    “I think that for professors, educated people who are reading blog posts, it’s going to be easy to see the ones spewed out by ChatGPT or similar systems because if they’ve not been updated, they will just be absolutely filled with inaccuracies.”

    Like Wikipedia! I’m thinking of the real-life Chevalier D’Eon, who is set to appear in a biopic about Le Chevalier de Saint-George (the reason is at the 9:29 mark in that link). When the anime series about D’Eon came out, the DVDs included Wikipedia articles about the real D’Eon. Since trans became a topic du jour, that article has undergone several revisions. Whether you agree with the revisions or not, the point is that this problem isn’t new. However, so long as we live in a world where all the sources remain accessible (no book burnings), this is not a particularly threatening problem. Again, the bot factor invites the humans who use it to be skeptical. In my best Martha Stewart voice: “It’s a good thing.”

    • Adding on to this — in elementary school they had us do seminars on “library skills.” They’d pull us out of class and a lady would teach us the card catalog system and the different parts of a book. “This is the frontispiece, and this is the colophon,” and so forth. We spent a lot of time discussing the copyright page because it was going to be important to our lives as students. What I mean is, the copyright page let us know when a book had been published, and where. In matters of history and science, a book’s value sometimes hinges on the copyright date. A book on a particular era of British history published before the discovery of a relevant hoard may be less informative than one written post-discovery. My school’s science textbook mentioned that scientists previously believed Mars had canals and Venus had jungles. Hence your John Carter and “Enchantress of Venus” adventures, which became much harder to write after 1975.

      ChatGPT being updated or not does not present a special problem in this regard. Even kids don’t have to be fooled. If they’re properly taught how not to be, anyway.

  6. MS identified why the Bing chatbot was “going rampant” (1): because it keeps track of previous comments in a conversation, long exchanges confused the model as it tried to factor them all in. (Shades of HAL!)
    As a result, they have (for now?) put limits on its conversations: five comments/queries per conversation, 50 conversations per day.

    https://www.windowscentral.com/software-apps/bing/microsoft-limits-bing-ai-chatbot-to-just-5-messages-per-topic-in-effort-to-reduce-rampancy

    If we go back to the piece where the bot was going bonkers, it was in extended conversations, often ones that were trying to extract emotion from the model. As per usual, it is the human side of the equation where the problem lies. In this case, MS has been phasing in the chatbot precisely to identify how humans would stress the model. (They learned from their Tay chatbot experiment.)

    1) Rampancy is a term coined by the developers of the Halo Xbox games to explain why the game AIs were destroyed after short usage lives: over time, too much data and interaction with humans and aliens would drive the AIs insane, in random ways. So far, they got that right.
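
    A minimal sketch of the kind of guardrail described above (hypothetical, not Microsoft’s actual code): the chat model itself is stateless, so the caller re-sends the running transcript on every turn, and capping the number of turns keeps that transcript, and the drift, short.

        class LimitedChat:
            MAX_TURNS_PER_CONVERSATION = 5   # Bing's reported per-topic limit
            MAX_CONVERSATIONS_PER_DAY = 50   # Bing's reported daily limit

            def __init__(self, send_to_model):
                self.send_to_model = send_to_model  # callable: list of messages -> reply text
                self.conversations_today = 0
                self.messages = []
                self.turns = 0

            def new_conversation(self):
                if self.conversations_today >= self.MAX_CONVERSATIONS_PER_DAY:
                    raise RuntimeError("Daily conversation limit reached")
                self.conversations_today += 1
                self.messages, self.turns = [], 0

            def ask(self, user_text):
                if self.turns >= self.MAX_TURNS_PER_CONVERSATION:
                    raise RuntimeError("Topic limit reached; start a new conversation")
                self.turns += 1
                self.messages.append({"role": "user", "content": user_text})
                reply = self.send_to_model(self.messages)  # the whole transcript goes out every turn
                self.messages.append({"role": "assistant", "content": reply})
                return reply

        # Demo with a stand-in "model" so the sketch runs on its own.
        bot = LimitedChat(lambda messages: "(reply to: " + messages[-1]["content"] + ")")
        bot.new_conversation()
        print(bot.ask("Why cap conversations at five turns?"))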
