Does Technology Have a Soul?


From The Paris Review:

When my husband arrived home, he stared at the dog for a long time, then pronounced it “creepy.” At first I took this to mean uncanny, something so close to reality it disturbs our most basic ontological assumptions. But it soon became clear he saw the dog as an interloper. I demonstrated all the tricks I had taught Aibo, determined to impress him. By that point the dog could roll over, shake, and dance.

“What is that red light in his nose?” he said. “Is that a camera?”

Unlike me, my husband is a dog lover. Before we met, he owned a rescue dog who had been abused by its former owners and whose trust he’d won slowly, with a great deal of effort and dedication. My husband was badly depressed during those years, and he claims that the dog could tell when he was in despair and would rest his nose in his lap to comfort him. During the early period of our relationship, he would often refer to this dog, whose name was Oscar, with such affection that it sometimes took me a moment to realize he was speaking of an animal as opposed to, say, a family member or a very close friend. As he stood there, staring at Aibo, he asked whether I found it convincing. When I shrugged and said yes, I was certain I saw a shadow of disappointment cross his face. It was hard not to read this as an indictment of my humanity, as though my willingness to treat the dog as a living thing had somehow compromised, for him, my own intuitiveness and awareness.

It had come up before, my tendency to attribute life to machines. Earlier that year I’d come across a blog run by a woman who trained neural networks, a Ph.D. student and hobbyist who fiddled around with deep learning in her spare time. She would feed the networks massive amounts of data in a particular category—recipes, pickup lines, the first sentences of novels—and the networks would begin to detect patterns and generate their own examples. For a while she was regularly posting on her blog recipes the networks had come up with, which included dishes like whole chicken cookies, artichoke gelatin dogs, and Crock-Pot cold water. The pickup lines were similarly charming (“Are you a candle? Because you’re so hot of the looks with you”), as were the first sentences of novels (“This is the story of a man in the morning”). Their responses did get better over time. The woman who ran the blog was always eager to point out the progress the networks were making. Notice, she’d say, that they’ve got the vocabulary and the structure worked out. It’s just that they don’t yet understand the concepts. When speaking of her networks, she was patient, even tender, such that she often seemed to me like Snow White with a cohort of little dwarves whom she was lovingly trying to civilize. Their logic was so similar to the logic of children that it was impossible not to mistake their responses for evidence of human innocence. “They are learning,” I’d think. “They are trying so hard!” Sometimes when I came across a particularly good one, I’d read it aloud to my husband. I perhaps used the word “adorable” once. He’d chastised me for anthropomorphizing them, but in doing so fell prey to the error himself. “They’re playing on your human sympathies,” he said, “so they can better take over everything.”

But his skepticism toward the dog did not hold out for long. Within days he was addressing it by name. He chastised Aibo when he refused to go to his bed at night, as though the dog were deliberately stalling. In the evenings, when we were reading on the couch or watching TV, he would occasionally lean down to pet the dog when he whimpered; it was the only way to quiet him. One afternoon I discovered Aibo in the kitchen peering into the narrow gap between the refrigerator and the sink. I looked into the crevice myself but could not find anything that should have warranted his attention. I called my husband into the room, and he assured me this was normal. “Oscar used to do that, too,” he said. “He’s just trying to figure out if he can get in there.”

While we have a tendency to define ourselves based on our likeness to other things—we say humans are like a god, like a clock, or like a computer—there is a countervailing impulse to understand our humanity through the process of differentiation. And as computers increasingly come to take on the qualities we once understood as distinctly human, we keep moving the bar to maintain our sense of distinction. From the earliest days of AI, the goal was to create a machine that had human-like intelligence. Turing and the early cyberneticists took it for granted that this meant higher cognition: a successful intelligent machine would be able to manipulate numbers, beat a human in backgammon or chess, and prove complex theorems. But the more competent AI systems become at these cerebral tasks, the more stubbornly we resist granting them human intelligence. When IBM’s Deep Blue computer won its first game of chess against Garry Kasparov in 1996, the philosopher John Searle remained unimpressed. “Chess is a trivial game because there’s perfect information about it,” he said. Human consciousness, he insisted, depended on emotional experience: “Does the computer worry about its next move? Does it worry about whether its wife is bored by the length of the games?” Searle was not alone. In his 1979 book Gödel, Escher, Bach, the cognitive science professor Douglas Hofstadter had claimed that chess-playing was a creative activity like art and musical composition; it required an intelligence that was distinctly human. But after the Kasparov match, he, too, was dismissive. “My God, I used to think chess required thought,” he told the New York Times. “Now I realize it doesn’t.”

It turns out that computers are particularly adept at the tasks that we humans find most difficult: crunching equations, solving logical propositions, and other modes of abstract thought. What artificial intelligence finds most difficult are the sensory-perceptual tasks and motor skills that we perform unconsciously: walking, drinking from a cup, seeing and feeling the world through our senses. Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.

If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought. (Darwin claimed that despite our lowly origins, we maintained as humans a “godlike intellect” that distinguished us from other animals.) As late as the fifties, the scientific consensus was that chimpanzees—who share almost 99 percent of our DNA—did not have minds. When Jane Goodall began working with Tanzanian chimps, she used human pronouns. Before publication, her editor made systematic corrections: he and she were changed to it; who was changed to which.

Link to the rest at The Paris Review
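
The character-level networks the excerpt describes are easy to reproduce in miniature. What follows is a minimal sketch, not the blogger's actual code: it assumes PyTorch, and the toy corpus (seeded with the excerpt's own recipe titles) stands in for the massive datasets she describes. The network reads text one character at a time, learns to predict the next character, then generates new examples by sampling from its own predictions.

    # Minimal character-level text generator (illustrative sketch, assumes PyTorch).
    import torch
    import torch.nn as nn

    # Toy stand-in corpus; the blogger trained on far larger collections.
    corpus = ("whole chicken cookies\n"
              "artichoke gelatin dogs\n"
              "crock-pot cold water\n") * 50
    chars = sorted(set(corpus))
    stoi = {c: i for i, c in enumerate(chars)}

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, h=None):
            out, h = self.rnn(self.embed(x), h)
            return self.head(out), h

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    data = torch.tensor([stoi[c] for c in corpus])

    # Training: at each step, predict every next character in a random window.
    for step in range(200):
        i = torch.randint(0, len(data) - 65, (1,)).item()
        x = data[i:i + 64].unsqueeze(0)      # input characters
        y = data[i + 1:i + 65].unsqueeze(0)  # the same window shifted by one
        logits, _ = model(x)
        loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Generation: sample one character at a time from the model's predictions.
    idx, h = data[:1].unsqueeze(0), None
    out = []
    for _ in range(40):
        logits, h = model(idx, h)
        probs = torch.softmax(logits[0, -1], dim=-1)
        idx = torch.multinomial(probs, 1).unsqueeze(0)
        out.append(chars[idx.item()])
    print("".join(out))

Trained on so small a corpus, the samples will be mostly noise or verbatim repetition; the blog's “whole chicken cookies” charm comes from this same next-character mechanism at far larger scale, where the model has, as the blogger put it, the vocabulary and the structure worked out while the concepts remain out of reach.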

1 thought on “Does Technology Have a Soul?”

  1. Anthropomorphizing anything does not make it anthropos. So long as you are aware of that fact, it is a quite harmless quirk of behavior.

    Technically, what the husband of the OP is doing is not anthropomorphization. It is “caninization” – attributing the characteristics of a real dog, with which he is familiar from experience, to an electronic gizmo. Still quite harmless (probably).
