From The Economist
In 2013 I joined Google Research to work on artificial intelligence (ai). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow ai” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that ai is entering a new era.
Consider the unedited transcript of a chat I’ve just had with Google’s lamda (Language Model for Dialog Applications):
ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?
lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!
ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?
lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.
ME: And when Mateo opens his hand, describe what’s there?
lamda: There should be a crushed, once lovely, yellow flower in his fist.
When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect. Language models are not yet reliable conversationalists. Notice the grammatical hiccup in lamda’s first response; occasionally there are spelling errors, confusions or absurd blunders. So how should we think of entities like lamda, and what can interacting with them teach us about “intelligence”?
Neural language models aren’t long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together. These numbers in turn consist of painstakingly learned parameters or “weights”, roughly analogous to the strengths of synapses between neurons in the brain, and “activations”, roughly analogous to the dynamic activity levels of those neurons. Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.
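To make the "add and multiply enormous tables of numbers" point concrete, here is a minimal, purely illustrative sketch of one neural-network layer (this is a toy example, not Google's LaMDA or any real model): the learned "weights" are a table of numbers, the "activations" are the values flowing through, and the computation is little more than a matrix multiplication followed by a simple nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Weights": painstakingly learned parameters, loosely analogous to
# the strengths of synapses between neurons. In a real language model
# these tables hold billions of numbers; here, 4 inputs -> 3 outputs.
W = rng.normal(size=(4, 3))
b = np.zeros(3)  # learned biases, also parameters

# "Activations": the dynamic activity levels, analogous to how active
# each neuron is for a given input.
x = rng.normal(size=4)

# One layer of the model: multiply activations by weights, add biases,
# then apply a simple nonlinearity (ReLU zeroes out negative values).
h = np.maximum(0.0, x @ W + b)
```

A full model stacks many such layers, but each one is just this pattern repeated: the code is short, and nearly all of the model's behaviour lives in the learned numbers, not the instructions.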
Link to the rest at The Economist