The Reading Brain in the Digital Age: The Science of Paper versus Screens

From Scientific American:

In a viral YouTube video from October 2011 a one-year-old girl sweeps her fingers across an iPad’s touchscreen, shuffling groups of icons. In the following scenes she appears to pinch, swipe and prod the pages of paper magazines as though they too were screens. When nothing happens, she pushes against her leg, confirming that her finger works just fine—or so a title card would have us believe.

The girl’s father, Jean-Louis Constanza, presents “A Magazine Is an iPad That Does Not Work” as naturalistic observation—a Jane Goodall among the chimps moment—that reveals a generational transition. “Technology codes our minds,” he writes in the video’s description. “Magazines are now useless and impossible to understand, for digital natives”—that is, for people who have been interacting with digital technologies from a very early age.

Perhaps his daughter really did expect the paper magazines to respond the same way an iPad would. Or maybe she had no expectations at all—maybe she just wanted to touch the magazines. Babies touch everything. Young children who have never seen a tablet like the iPad or an e-reader like the Kindle will still reach out and run their fingers across the pages of a paper book; they will jab at an illustration they like; heck, they will even taste the corner of a book. Today’s so-called digital natives still interact with a mix of paper magazines and books, as well as tablets, smartphones and e-readers; using one kind of technology does not preclude them from understanding another.

Nevertheless, the video brings into focus an important question: How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us, but to just about everyone who reads—to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading—but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin?

Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.

Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people’s attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.

“There is physicality in reading,” says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, “maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new.”

Navigating textual landscapes

Understanding how reading on paper is different from reading on screens requires some explanation of how the brain interprets written language. We often think of reading as a cerebral activity concerned with the abstract—with thoughts and ideas, tone and themes, metaphors and motifs. As far as our brains are concerned, however, text is a tangible part of the physical world we inhabit. In fact, the brain essentially regards letters as physical objects because it does not really have another way of understanding them. As Wolf explains in her book Proust and the Squid, we are not born with brain circuits dedicated to reading. After all, we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So the human brain improvises a brand-new circuit for reading by weaving together various regions of neural tissue devoted to other abilities, such as spoken language, motor coordination and vision.

Some of these repurposed brain regions are specialized for object recognition—they are networks of neurons that help us instantly distinguish an apple from an orange, for example, yet classify both as fruit. Just as we learn that certain features—roundness, a twiggy stem, smooth skin—characterize an apple, we learn to recognize each letter by its particular arrangement of lines, curves and hollow spaces. Some of the earliest forms of writing, such as Sumerian cuneiform, began as characters shaped like the objects they represented—a person’s head, an ear of barley, a fish. Some researchers see traces of these origins in modern alphabets: C as crescent moon, S as snake. Especially intricate characters—such as Chinese hanzi and Japanese kanji—activate motor regions in the brain involved in forming those characters on paper: The brain literally goes through the motions of writing when reading, even if the hands are empty. Researchers recently discovered that the same thing happens in a milder way when some people read cursive.

Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text in which meaning is anchored to structure. The exact nature of such representations remains unclear, but they are likely similar to the mental maps we create of terrain—such as mountains and trails—and of man-made physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular piece of written information they often remember where in the text it appeared. We might recall that we passed the red farmhouse near the start of the trail before we started climbing uphill through the forest; in a similar way, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennet on the bottom of the left-hand page in one of the earlier chapters.

In most cases, paper books have more obvious topography than onscreen text. An open paperback presents a reader with two clearly defined domains—the left and right pages—and a total of eight corners with which to orient oneself. A reader can focus on a single page of a paper book without losing sight of the whole text: one can see where the book begins and ends and where one page is in relation to those borders. One can even feel the thickness of the pages read in one hand and pages to be read in the other. Turning the pages of a paper book is like leaving one footprint after another on the trail—there’s a rhythm to it and a visible record of how far one has traveled. All these features not only make text in a paper book easily navigable, they also make it easier to form a coherent mental map of the text.

Link to the rest at Scientific American in 2013

PG notes the date of this extended article: it appeared nine years ago, and the studies it mentions would have taken place even earlier, given that Scientific American, as a print publication, had longer lead times than many publications do today. Per Wikipedia, the magazine established a paywall for its website in 2019.

PG cannot restrain himself from noting that the magazine is owned by Springer Nature, which in turn is a subsidiary of Holtzbrinck Publishing Group.

Holtzbrinck (Verlagsgruppe Georg von Holtzbrinck) is a privately-held company headquartered in Stuttgart. It also owns Big-Five publisher Macmillan and a great many other publications.

Along with another large German publishing conglomerate, Bertelsmann, Holtzbrinck has an embarrassing history of aiding in the publication and distribution of Nazi propaganda during the 1930s and 1940s, and it profited from Jewish slave labor at some of the printing companies that supplied it with books and other publications.

To be fair, none of the present generation of owners and managers is old enough to have participated in those actions, although there are reports that, during the 1950s and '60s, more than one German publishing executive attempted to whitewash previous close relations with various Nazi figures. Various short descriptions of Holtzbrinck recite that it was originally founded as a book club in 1948, which may describe either the present corporate entity or a predecessor, but PG doesn't know of any postwar book-club startups whose sole shareholders were multibillionaires in the 2000-2010 era.

The controlling owners, Stefan von Holtzbrinck, his brother Dieter and their sister Monika Schoeller, inherited Holtzbrinck Publishing Group and held it jointly until 2006, when Dieter sold his share to Stefan and Monika, who then each owned 50% of the company. Monika died in 2019, leaving an estate estimated to be worth $2.2 billion.

1 thought on “The Reading Brain in the Digital Age: The Science of Paper versus Screens”

  1. “How do our brains respond differently to onscreen text than to words on paper?”

    I’m sorry, but words are words. I know that I read exactly the same as I did before ebooks. I read every word, in sequence, parsing the meaning into a story in my head. I don’t click on links or leave the story to go down a Wikipedia rabbit hole about minutia of the tale or hop around in the text like an overcaffeinated ferret.

    I remember when speed reading was introduced and people were skimming words on paper because they didn’t want to take the time to read everything; their brains understood books differently than those who still read the entire text. It had nothing to do with screens then.

    The fallacy that paper makes it easier to find things you read and navigate text is laughable. I remember trying to locate some phrase from earlier in a doorstop paperback and giving up after a fruitless search. Ebooks let me search it in a second.

    Methinks the author is just hung up on print rather than the content.

    Enjoy your content in whatever form you prefer – as the Bard wrote, “The play’s the thing.”