Do We Have Minds of Our Own?


Not exactly about writing and being an author, but perhaps a writing prompt.

From The New Yorker:

In order to do science, we’ve had to dismiss the mind. This was, in any case, the bargain that was made in the seventeenth century, when Descartes and Galileo deemed consciousness a subjective phenomenon unfit for empirical study. If the world was to be reducible to physical causation, then all mental experiences—intention, agency, purpose, meaning—must be secondary qualities, inexplicable within the framework of materialism. And so the world was divided in two: mind and matter. This dualistic solution helped to pave the way for the Enlightenment and the technological and scientific advances of the coming centuries. But an uneasiness has always hovered over the bargain, a suspicion that the problem was less solved than shelved. At the beginning of the eighteenth century, Leibniz struggled to accept that perception could be explained through mechanical causes—he proposed that if there were a machine that could produce thought and feeling, and if it were large enough that a person could walk inside of it, as he could walk inside a mill, the observer would find nothing but inert gears and levers. “He would find only pieces working upon one another, but never would he find anything to explain Perception,” he wrote.

Today we tend to regard the mind not as a mill but as a computer, but, otherwise, the problem exists in much the same way that Leibniz formulated it three hundred years ago. In 1995, David Chalmers, a shaggy-haired Australian philosopher who has been called a “rock star” of the field, famously dubbed consciousness “the hard problem,” as a way of distinguishing it from comparatively “easy” problems, such as how the brain integrates information, focusses attention, and stores memories. Neuroscientists have made significant progress on the easier problems, using fMRIs and other devices. Engineers, meanwhile, have created impressive simulations of the brain in artificial neural networks—though the abilities of these machines have only made the difference between intelligence and consciousness more stark. Artificial intelligence can now beat us in chess and Go; it can predict the onset of cancer as well as human oncologists and recognize financial fraud more accurately than professional auditors. But, if intelligence and reason can be performed without subjective awareness, then what is responsible for consciousness? Answering this question, Chalmers argued, was not simply a matter of locating a process in the brain that is responsible for producing consciousness or correlated with it. Such a discovery still would fail to explain why such correlations exist or why they lead to one kind of experience rather than another—or to nothing at all.

. . . .

One line of reductionist thinking insists that the hard problem is not really so hard—or that it is, perhaps, simply unnecessary. In his new book, “Rethinking Consciousness: A Scientific Theory of Subjective Experience,” the neuroscientist and psychologist Michael Graziano writes that consciousness is simply a mental illusion, a simplified interface that humans evolved as a survival strategy in order to model the processes of the brain. He calls this the “attention schema.” According to Graziano’s theory, the attention schema is an attribute of the brain that allows us to monitor mental activity—tracking where our focus is directed and helping us predict where it might be drawn in the future—much the way that other mental models oversee, for instance, the position of our arms and legs in space. Because the attention schema streamlines the complex noise of calculations and electrochemical signals of our brains into a caricature of mental activity, we falsely believe that our minds are amorphous and nonphysical. The body schema can delude a woman who has lost an arm into thinking that it’s still there, and Graziano argues that the “mind” is like a phantom limb: “One is the ghost in the body and the other is the ghost in the head.”

. . . .

I suspect that most people would find this proposition alarming. On the other hand, many of us already, on some level, distrust the reality of our own minds. The recent vogue for “mindfulness” implies that we are passive observers of an essentially mechanistic existence—that consciousness can only be summoned fleetingly, through great effort. Plagued by a midday funk, we are often quicker to attribute it to bad gut flora or having consumed gluten than to the theatre of beliefs and ideas.

And what, really, are the alternatives for someone who wants to explain consciousness in strictly physical terms? Another option, perhaps the only other option, is to conclude that mind is one with the material world—that everything, in other words, is conscious. This may sound like New Age bunk, but a version of this concept, called integrated information theory, or I.I.T., is widely considered one of the field’s most promising theories in recent years. One of its pioneers, the neuroscientist Christof Koch, has a new book, “The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed,” in which he argues that consciousness is not unique to humans but exists throughout the animal kingdom and the insect world, and even at the microphysical level. Koch, an outspoken vegetarian, has long argued that animals share consciousness with humans; this new book extends consciousness further down the chain of being. Central to I.I.T. is the notion that consciousness is not an either/or state but a continuum—some “systems,” in other words, are more conscious than others.

. . . .

Another term for this is panpsychism—the belief that consciousness is ubiquitous in nature. In the final chapters of the book, Koch commits himself to this philosophy, claiming his place among a lineage of thinkers—including Leibniz, William James, and Alfred North Whitehead—who similarly believed that matter and soul were one substance. This solution avoids the ungainliness of dualism: panpsychism, Koch argues, “elegantly eliminates the need to explain how the mental emerges out of the physical and vice versa. Both coexist.”

. . . .

Like Koch, Graziano, when entertaining such seemingly fanciful ideas, shifts into a mode that oddly mixes lyricism and technical rigor. “The mind is a trillion-stranded sculpture made of information, constantly changing and beautifully complicated,” he writes. “But nothing in it is so mysterious that it can’t in principle be copied to a different information-processing device, like a file copied from one computer to another.”

Link to the rest at The New Yorker



3 thoughts on “Do We Have Minds of Our Own?”

  1. Oh, good grief. Not only did I have to set up an account and log in to your site to leave comments, but when I do leave them, they are held for moderation. PG, you couldn’t do a better job of killing off discussion on your posts if you tried.

    • I need to turn off some of the security that a variety of hack attempts caused me to put in place, Tom. See my long post of a few days ago.

  2. Like all reductionists, Graziano dives headlong into the fallacy of the stolen concept. An illusion is a false experience. Consciousness, when you come down to it, is our awareness that we are having experiences. If we had no consciousness, we could not be aware of experiencing illusions – and if you aren’t aware of an illusion, it simply isn’t there. The whole reductionist position is incoherent.
