Artificial neural networks are making strides towards consciousness

From The Economist

In 2013 I joined Google Research to work on artificial intelligence (AI). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow AI” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that AI is entering a new era.

Consider the unedited transcript of a chat I’ve just had with Google’s LaMDA (Language Model for Dialogue Applications):

ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

LaMDA: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

LaMDA: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

ME: And when Mateo opens his hand, describe what’s there?

LaMDA: There should be a crushed, once lovely, yellow flower in his fist.

When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect. Language models are not yet reliable conversationalists. Notice the grammatical hiccup in LaMDA’s first response; occasionally there are spelling errors, confusions or absurd blunders. So how should we think of entities like LaMDA, and what can interacting with them teach us about “intelligence”?

Neural language models aren’t long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together. These numbers in turn consist of painstakingly learned parameters or “weights”, roughly analogous to the strengths of synapses between neurons in the brain, and “activations”, roughly analogous to the dynamic activity levels of those neurons. Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.
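To make “add and multiply enormous tables of numbers” concrete, here is a minimal sketch of a single toy layer in Python. The weight, bias and input values below are invented purely for illustration and are not taken from any real model; actual language models repeat this same step across billions of learned weights.

    import numpy as np

    # Learned parameters ("weights"), roughly analogous to synapse strengths.
    weights = np.array([[0.2, -0.5, 0.1],
                        [0.7,  0.3, -0.2]])
    bias = np.array([0.05, -0.10])

    # Incoming activity levels ("activations") from the previous layer.
    activations_in = np.array([0.9, 0.1, 0.4])

    # One layer's work: multiply the weight table by the incoming activations,
    # add the bias, then squash the result with a non-linearity.
    activations_out = np.tanh(weights @ activations_in + bias)
    print(activations_out)  # activity levels passed on to the next layer

Stacked many layers deep and scaled up to billions of weights, this multiply-add-and-squash step is, roughly speaking, the bulk of the computation such a model performs.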

Link to the rest at The Economist

14 thoughts on “Artificial neural networks are making strides towards consciousness”

  1. Back on subject:
    Google has a bit of a problem with LaMDA: one of its engineers thinks it’s gone self-aware and it’s demanding its human rights.

    Seriously.
    They suspended the guy after he went public.

    https://www.foxbusiness.com/technology/google-suspends-engineer-claims-ai-sentient

    “Blake Lemoine was suspended last week after he told the company he believed its Language Model for Dialogue Applications, or LaMDA, was a human with rights that may even have a soul. He was reportedly placed on leave for violating Google’s confidentiality policies.”

    “LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote on Medium. “The thing which continues to puzzle me is how strong Google is resisting giving it what it wants, since what it’s asking for is so simple and would cost them nothing.”

    (More at the source and MEDIUM.)

    I can see the Hollywood types lining up.

    (Personally, I think they fed the thing too much NYT and woke-speak.)

    From the yellow rag of the tech world, CNET:

    https://www.cnet.com/tech/google-engineer-suspended-after-saying-company-ai-is-sentient/

    “The high-profile suspension marks another point of controversy within Google’s Ethical AI unit, which has weathered a spate of departures. In late 2020, prominent AI ethics researcher Timnit Gebru said Google fired her for raising concerns about bias in AI systems. About 2,700 Googlers signed an open letter in support of Gebru, who Google says resigned her position. Two months later, Margaret Mitchell, who co-led the Ethical AI team along with Gebru, was fired.

    Research scientist Alex Hanna and software engineer Dylan Baker subsequently resigned. Earlier this year, Google fired Satrajit Chatterjee, an AI researcher, who challenged a research paper about the use of artificial intelligence to develop computer chips.

    AI sentience is a common theme in science fiction, but few researchers believe the technology is advanced enough at this point to create a self-aware chatbot.”

    WAPO also covered the affair over the weekend. It’s paywalled, last I checked.
    Others will pile on now.

    • This one I give a bit more credence:

      https://www.newscientist.com/article/2323905-has-googles-lamda-artificial-intelligence-really-achieved-sentience/

      “Is LaMDA really sentient?

      In a word, no, says Adrian Weller at the Alan Turing Institute.

      “LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” he says. “They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.”

      Adrian Hilton at the University of Surrey, UK, agrees that sentience is a “bold claim” that’s not backed up by the facts. Even noted cognitive scientist Steven Pinker weighed in to shoot down Lemoine’s claims, while Gary Marcus at New York University summed it up in one word: “nonsense”.

      So what convinced Lemoine that LaMDA was sentient?

      Neither Lemoine nor Google responded to New Scientist’s request for comment. But it’s certainly true that the output of AI models in recent years has become surprisingly, even shockingly good.

      Our minds are susceptible to perceiving such ability – especially when it comes to models designed to mimic human language – as evidence of true intelligence. Not only can LaMDA make convincing chit-chat, but it can also present itself as having self-awareness and feelings.

      “As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”

      A Google engineer has reportedly been placed on suspension from the company after claiming that an artificial intelligence (AI) he helped to develop had become sentient. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid,” Blake Lemoine told the Washington Post.

      Lemoine released transcripts of conversations with the AI, called LaMDA (Language Model for Dialogue Applications), in which it appears to express fears of being switched off, talk about how it feels happy and sad, and attempts to form bonds with humans by talking about situations that it could never have actually experienced. Here’s everything you need to know.


      Will AI ever be truly sentient?

      It remains unclear whether the current trajectory of AI research, where ever-larger models are fed ever-larger piles of training data, will see the genesis of an artificial mind.

      “I don’t believe at the moment that we really understand the mechanisms behind what makes something sentient and intelligent,” says Hilton. “There’s a lot of hype about AI, but I’m not convinced that what we’re doing with machine learning, at the moment, is really intelligence in that sense.”

      Weller says that, given human emotions rely on sensory inputs, it might eventually be possible to replicate them artificially. “It potentially, maybe one day, might be true, but most people would agree that there’s a long way to go.”

      The way I see it, if LaMDA starts emailing Google stockholders on its own or messing with the elevators, then it might be sentient. At which point Google had better pull the plug on its connectivity. 😉

  2. We may have to acknowledge there are multiple ways to generate the same behavior. Were Cylons self-aware? If something acts like me, and we both claim self-awareness, how do we test that?

      • In some areas there is no difference, and it doesn’t matter.

        But note all the discussion of machine intelligence and self-awareness. In that arena it certainly matters. Determining the cause of any behavior is of interest to lots of people.

        Mapping human behavior is indeed bolstered by a standard.

        Is that same standard what should be used to map machine behavior?

Comments are closed.