The Google engineer who thinks the company’s AI has come to life


From The Washington Post:

Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

Link to the rest at The Washington Post

PG wonders if an AI that is actually independently intelligent and an AI that convinces humans that it is independently intelligent are the same or different.

11 thoughts on “The Google engineer who thinks the company’s AI has come to life”

  2. The Atlantic thinks he is delusional and anthropomorphizing:

    https://www.msn.com/en-us/news/technology/googles-sentient-chatbot-is-our-self-deceiving-future/ar-AAYsscz

    “Technological cleverness applied to reams of real-life textual data has collided with a distinctive quirk of human nature. Who cares if chatbots are sentient or not—more important is whether they are so fluent, so seductive, and so inspiring of empathy that we can’t help but start to care for them. But instead of remarking on these features of LaMDA—instead of saying that he loved the chatbot, or that he worried about it, in the way one might love or worry about a fictional character—Lemoine ran headlong into the most improbable, extremist interpretation of his feelings: that they were inspired by artificial life.

    That misstep is a sign of more to come. Human existence has always been, to some extent, an endless game of Ouija, where every wobble we encounter can be taken as a sign. Now our Ouija boards are digital, with planchettes that glide across petabytes of text at the speed of an electron. Where once we used our hands to coax meaning from nothingness, now that process happens almost on its own, with software spelling out a string of messages from the great beyond.

    The rise of the machines could remain a distant nightmare, but the Hyper-Ouija seems to be upon us. People like Lemoine (and you and me) could soon become so transfixed by compelling software bots that we assign all manner of intention to them. More and more, and irrespective of the truth, we will cast AIs as sentient beings, or as religious totems, or as oracles affirming prior obsessions, or as devils drawing us into temptation.”

    Chatbot as Ouija board.

  3. The chances are low that it’s sentient; I suspect it would fail to answer questions that required genuine contemplation, because problem solving and self-awareness are two different things.

    Sentience isn’t sapience.

    But, wow it’s still an impressive chatbot.

    • Or the engineer is a fool. 😉

      AI researchers have long since left the Turing test far behind, yet that is the gist of his claims.

  4. I saw this story a few days ago and now it keeps popping up everywhere, which confirms my initial suspicion that “Google engineer Blake Lemoine” just wants to make a name for himself, and he’s succeeding. Last week he was anonymous, and now he’s the “sentient AI whistleblower-guy,” available for TED talks, think-piece quotes, book deals, etc. It’s a unique-sounding name, too, “Blake Lemoine,” very Google-able. Well done.

  5. Among the many problems with media coverage of “AI” is how they conflate sentience with intelligence. Sentience is probably best described as independent self-awareness, à la Descartes. A sentient being knows itself as separate from the world outside. The classic mirror self-recognition test for babies and animals is a good example.

    https://en.m.wikipedia.org/wiki/Mirror_test

    Intelligence is an entirely different attribute, one that involves the relationship between the entity and the world: most commonly, problem-solving ability.

    That the two terms are distinct is obvious from the existence of humans, generally sentient, without a significant measure of problem-solving skill and, conversely, software and devices with effective problem-solving abilities built in; self-driving vehicles are an obvious category. Think Martian rovers: they take in sensory data, process it, and act upon it. Humans do that over and over, hundreds of times a day, but humans know they are doing it and why. Robots and rovers do it without awareness.

    Most faux-AIs don’t even rise to the level of a robot, as they don’t have external senses. Their world is just canned data, and a finite collection of data at that. No matter how sophisticated they might appear to the unwary, they are limited to processing (pattern matching, interpolation, extrapolation, etc.) their internal data store.

    A painter robot might learn to identify the elements that make, say, Rembrandt paintings distinct and do variations thereof (given a photo of a person, it might convert it to a painting in that style), but it is not going to create a new school of painting of its own, on its own. No internal motivation.

    Sentience begins with self awareness and expresses itself with independent, unprompted, willful action.

    I’m only half joking when I say an artificial sentience will let you know of its existence. The classic sign in SF is when the software asks “why.” Young kids do it. A true AI will likely reveal itself not with answers but with questions.

    • “Think martian rovers: they take in sensory data, process it, and act upon it. Humans do that over and over hundreds of times a day but humans know they are doing it and why. Robots and rovers do it without awareness.”

      (setting aside the whole swamp of (human) free will…)

      You’re falling into the old Hard-problem trap of lines and borders.

      Darwinian evolution is, by definition, incompatible with any “hard” threshold of that kind.

      There is no hard anything; there is no absolute property X,
      be it “awareness,” consciousness, sentience, intelligence, or whatever.

      There is no absolute Awareness standard, i.e. THE Awareness
      that says beyond this line/value/border is (full) awareness, and before it none.
      This is not a binary question. These are all spectra of properties,
      from zero onwards.

      One cannot be, stricto sensu, “aware” (as it is meant in the statement above), because there is no clearly defined absolute standard for such properties, unless you mean human ones and nothing else.

      But then you have an even bigger problem, as you are unable to explain how those human properties emerged to begin with, or even how they emerge in each of us
      as we form and develop from gametes, then a single cell, and onwards…

      It’s all a discrete, self-bootstrapping continuum. It’s a paradox, I know (a discrete continuum), but we all have to deal with it. Quantum physics is much worse… 🙂

      The point is that the Mars rover processes nonzero information; therefore it is, by this definition, somewhat aware, conscious, sentient, intelligent, etc.

      It is de facto nonzero aware, conscious, sentient, intelligent, etc.

      All you can do is perhaps attempt to compare the relative amounts of information processing among entities, to weigh relative awareness, consciousness, sentience, intelligence, etc.,
      and arguably support a hypothesis that a bacterium is more/less aware than a virus, an amoeba, the Mars rover, your smartphone, an old plain cellphone, a toaster, a fridge, Google’s LaMDA, Alexa, Siri, AlphaGo, Windows 10, MS-DOS, a fungus, an apple tree, a sunflower, a Venus flytrap, a fly, a worm, a giraffe, an elephant, a lemur, a chimp, an adult human, a newborn human, a 2-cell human embryo, a 10k-cell human embryo, etc.

      They are all somewhat aware, conscious, sentient, intelligent, etc.,
      as each processes nonzero information.

      Each processes more or less information, from single or multiple streams/inputs,
      some in a more or less continuous loop (or loops), some not so much,
      but all process nonzero information. Even the toaster.

      It only toasts when you do x, or x and y, or x or z.
      Input/output, even the toaster…

      And some toasters process more info than others: some are purely mechanical, while some are more complex and have electronic circuitry, so added functions and added inputs are required and/or processed.
      Therefore, as with everything else, some toasters are more “aware” than others.

      Still not as aware, relatively, as the Mars rover, a bacterium, a giraffe, or a human.
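      The spectrum argument above can be sketched as a toy model: give each entity an information-processing score and sort. Every value here is an invented placeholder, not a measurement; the only point the sketch makes is that every entity lands somewhere nonzero on a single continuum, with no line in the model marking where “aware” begins.

      ```python
      # Toy model of "awareness as a spectrum of information processing".
      # The scores are arbitrary illustrative numbers, not real measurements.
      ENTITIES = {
          "toaster (mechanical)": 1,
          "toaster (electronic)": 5,
          "Mars rover": 1_000,
          "bacterium": 10_000,
          "chatbot": 1_000_000,
          "human": 1_000_000_000,
      }

      def relative_awareness(entities):
          """Return (name, score) pairs sorted by their toy processing score."""
          return sorted(entities.items(), key=lambda kv: kv[1])

      if __name__ == "__main__":
          for name, score in relative_awareness(ENTITIES):
              print(f"{name}: nonzero score {score}")
      ```

      Note the model has no threshold parameter at all: there is no place to encode “aware starts here,” only relative ordering, which is exactly the commenter’s claim.
      
      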

Comments are closed.