Social Media Could Make It Impossible to Grow Up


The following is a longer post/excerpt than PG usually includes on TPV, but the topic fascinated him.

PG is happy that the foolish things he did in high school and college have disappeared into the thickening mists of fading memories scrabbling for survival in his mind and the minds of his fellow members of the Order of Lavishly Idiotic Youth.

From Wired:

Several decades into the age of digital media, the ability to leave one’s childhood and adolescent years behind is now imperiled. Although exact numbers are hard to come by, it is evident that a majority of young people with access to mobile phones take and circulate selfies on a daily basis. There is also growing evidence that selfies are not simply a tween and teen obsession. Toddlers enjoy taking selfies, too, and whether intentionally or unintentionally, have even managed to put their images into circulation. What is the cost of this excessive documentation? More specifically, what does it mean to come of age in an era when images of childhood and adolescence, and even the social networks formed during this fleeting period of life, are so easily preserved and may stubbornly persist with or without one’s intention or desire? Can one ever transcend one’s youth if it remains perpetually present?

The crisis we face concerning the persistence of childhood images was the least of concerns when digital technologies began to restructure our everyday lives in the early 1990s. Media scholars, sociologists, educational researchers, and alarmists of all political stripes were more likely to bemoan the loss of childhood than to worry about the prospect of childhood’s perpetual presence. A few educators and educational researchers were earnestly exploring the potential benefits of the internet and other emerging digital technologies, but the period was marked by widespread moral panic about new media technologies. As a result, much of the earliest research on young people and the internet sought either to support or to refute fears about what was about to unfold online.

. . . .

Many adults feared that if left to surf the web alone, children would suffer a quick and irreparable loss of innocence. These concerns were fueled by reports about what allegedly lurked online. At a time when many adults were just beginning to venture online, the internet was still commonly depicted in the popular media as a place where anyone could easily wander into a sexually charged multiuser domain (MUD), hang out with computer hackers and learn the tricks of their criminal trade, or hone their skills as a terrorist or bomb builder. In fact, doing any of these things usually required more than a single foray onto the web. But that did little to curtail perceptions of the internet as a dark and dangerous place where threats of all kinds were waiting at the welcome gate.

. . . .

A common theme underpinning both popular and scholarly articles about the internet in the 1990s was that this new technology had created a shift in power and access to knowledge. A widely reprinted 1993 article ominously titled “Caution: Children at Play on the Information Highway” warned, “Dropping children in front of the computer is a little like letting them cruise the mall for the afternoon. But when parents drop their sons or daughters off at a real mall, they generally set ground rules: Don’t talk to strangers, don’t go into Victoria’s Secret, and here’s the amount of money you’ll be able to spend. At the electronic mall, few parents are setting the rules or even have a clue about how to set them.”

. . . .

In such a context, it is easy to understand why the imperiled innocence of children was invoked as a rationale for increased regulation and monitoring of the internet. In the United States, the Communications Decency Act, signed into law by President Clinton in 1996, gained considerable support due to widespread fears that without increased regulation of communications, the nation’s children were doomed to become perverts and digital vigilantes.

. . . .

Jenkins was not the only one to insist that the real challenge was to empower children and adolescents to use the internet in productive and innovative ways so as to build a new and vibrant public sphere. We now know that a critical mass of educators and parents did choose to allow children ample access to the internet in the 1990s and early 2000s. Those young people ended up building many of the social media and sharing economy platforms that would transform the lives of people of all ages by the end of the first decade of the new millennium.

. . . .

Among the more well-known skeptics was another media theorist, Neil Postman. Postman argued in his 1982 book The Disappearance of Childhood that new media were eroding the distinction between childhood and adulthood. “With the electric media’s rapid and egalitarian disclosure of the total content of the adult world, several profound consequences result,” he claimed. These consequences included a diminishment of the authority of adults and the curiosity of children. Although not necessarily invested in the idea of childhood innocence, Postman was invested in the idea and ideal of childhood, which he believed was already in decline. This, he contended, had much to do with the fact that childhood—a relatively recent historical invention—is a construct that has always been deeply entangled with the history of media technologies.

While there have, of course, always been young people, a number of scholars have posited that the concept of childhood is an early modern invention. Postman not only adopted this position but also argued that this concept was one of the far-reaching consequences of movable type, which first appeared in Mainz, Germany, in the mid-15th century. With the spread of print culture, orality was demoted, creating a hierarchy between those who could read and those who could not. The very young were increasingly placed outside the adult world of literacy.

During this period, something else occurred: different types of printed works began to be produced for different types of readers. In the 16th century, there were no age-based grades or corresponding books. New readers, whether they were 5 or 35, were expected to read the same basic books. By the late 18th century, however, the world had changed. Children had access to children’s books, and adults had access to adult books. Children were now regarded as a separate category that required protection from the evils of the adult world. But the reign of childhood (according to Postman, a period running roughly from the mid-19th to the mid-20th centuries) would prove short-lived. Although earlier communications technologies and broadcasting mediums, from the telegraph to cinema, were already chipping away at childhood, the arrival of television in the mid-20th century marked the beginning of the end. Postman concludes, “Television erodes the dividing line between childhood and adulthood in three ways, all having to do with its undifferentiated accessibility: first, because it requires no instruction to grasp its form; second, because it does not make complex demands on either mind or behavior; and third, because it does not segregate its audience.”

. . . .

In the final chapter, Postman poses and responds to six questions, including the following: “Are there any communication technologies that have the potential to sustain the need for childhood?” In response to his own question, he replies, “The only technology that has this capacity is the computer.” To program a computer, he explains, one must in essence learn a language, a skill that would have to be acquired in childhood: “Should it be deemed necessary that everyone must know how computers work, how they impose their special worldview, how they alter our definition of judgment—that is, should it be deemed necessary that there be universal computer literacy—it is conceivable that the schooling of the young will increase in importance and a youth culture different from adult culture might be sustained.” But things could turn out differently. If economic and political interests decide that they would be better served by “allowing the bulk of a semiliterate population to entertain itself with the magic of visual computer games, to use and be used by computers without understanding … childhood could, without obstruction, continue on its journey to oblivion.”

. . . .

Thanks to Xerox’s graphical user interface, eventually popularized by Apple, by the 2000s one could do many things with computers without knowledge of or interest in their inner workings. The other thing that Postman did not anticipate is that young people would be more adept at building and programming computers than most older adults. Fluency in this new language, unlike most other languages, did not deepen or expand with age. By the late 1990s, there was little doubt that adults were not in control of the digital revolution. The most ubiquitous digital tools and platforms of our era, from Google to Facebook to Airbnb, would all be invented by people just out of their teens. What was the result? In the end, childhood as it once existed (i.e., in the pre-television era) was not restored, but Postman’s fear that childhood would disappear also proved wrong. Instead, something quite unexpected happened.

. . . .

Today, the distinction between childhood and adulthood has reemerged, but not in the way that Postman imagined.

In our current digital age, child and adolescent culture is alive and well. Most young people spend hours online every day exploring worlds in which most adults take little interest and to which they have only limited access. But this is where the real difference lies. In the world of print, adults determined what children could and could not access—after all, adults operated the printing presses, purchased the books, and controlled the libraries. Now, children are free to build their own worlds and, more importantly, to populate these worlds with their own content. The content, perhaps not surprisingly, is predominantly centered on the self (the selfie being emblematic of this tendency). So, in a sense, childhood has survived, but its nature—what it is and how it is experienced and represented—is increasingly in the hands of young people themselves. If childhood was once constructed and recorded by adults and mirrored back to children (e.g., in a carefully curated family photo album or a series of home video clips), this is no longer the case. Today, young people create images and put them into circulation without the interference of adults.

In sharp contrast to Postman’s prediction, childhood never did disappear. Instead, it has become ubiquitous in a new and unexpected way. Today, childhood and adolescence are more visible and pervasive than ever before. For the first time in history, children and adolescents have widespread access to the technologies needed to represent their lives, circulate these representations, and forge networks with each other, often with little or no adult supervision. The potential danger is no longer childhood’s disappearance, but rather the possibility of a perpetual childhood.

Link to the rest at Wired

Here’s the blurb for The End of Forgetting:

Thanks to Facebook and Instagram, our younger selves have been captured and preserved online. But what happens, Kate Eichhorn asks, when we can’t leave our most embarrassing moments behind? Rather than a childhood cut short by a loss of innocence, the real crisis of the digital age may be the specter of a childhood that can never be forgotten.

And here’s a review from Inside Higher Ed:

Someone brought a video recorder to Thanksgiving 1980, during my final year of high school. Not a close relative, certainly. Back then, it was too insanely extravagant a piece of consumer electronics for any of us to imagine buying one. (Not for several years, anyway.)

The camera sat on a tripod and recorded the holiday goings-on, which were shown — continuously, as they were happening — on a nearby television set. It would have been able to record two to four hours, depending on the format and system. A blank video cassette cost the equivalent of $50 to $75 in today’s currency. There was much apprehension over very young family members getting too close and knocking something over.

The novelty of seeing one’s actions and expressions from the outside, in real time, was intriguing but unsettling. Nothing meaningful or interesting happened, and I cannot imagine anybody getting bored enough to watch the recording. But it means that my 17-year-old doppelgänger may be preserved on a tape in an attic someplace in Oklahoma, and that possibility, however slim, has kept the memory vivid. No adolescent photograph would ever be as awkward. The tape was probably Betamax: technological obsolescence can have its upside.

. . . .

Most 17-year-olds today probably do not remember a time when they had not yet seen themselves onscreen. Chances are that many of the videos will have been their own recordings. Creating them requires no technical skill, and duplicating or transporting them is equally effortless.

None of the technology is unwieldy or uncommon, or all that expensive. And while the storage capacity of a phone or laptop is not boundless, neither is it much of an obstacle. Everything ends up in the cloud eventually. (That may not be literally true, but all trends lead in that direction.) “With analogue media,” Eichhorn says, “there is invariably a time lag between the moment of production and the moment of broadcasting; in the case of digital media, production and broadcasting often happen simultaneously or near simultaneously. Adolescents are in effect … experiencing the social world via documentary platform.” And it is a kind of social death when they can’t.

In this cultural ecosystem, the normal excruciations of adolescent self-consciousness are ramped up and acted out — often before an audience of unlimited potential size — then preserved for posterity, in endlessly duplicable form.

. . . .

The potential for embarrassment increased by several orders of magnitude after America’s Funniest Home Videos debuted at the end of 1989, but even that looks minimal in the wake of YouTube. Two or three cases of extreme humiliation and bullying via digital video are now familiar to millions of people.

Eichhorn discusses them while acknowledging the ethical dilemma that doing so risks perpetuating mindless cruelty. But her point is that the famous examples represent the tip of the iceberg. Digital images are produced and circulated now in ways that encourage the self-expression and experimentation that Erikson regarded as one of the privileges of youth — while at the same time creating a permanent record that is potentially inescapable.

Inescapable, that is, because unforgettable.

Link to the rest at Inside Higher Ed

Over 400 years ago, William Shakespeare famously wrote, “The evil that men do lives after them; the good is oft interred with their bones.”

Perhaps the Bard peered into the future and somehow discerned Twitter and Facebook.

It occurs to PG that Twitter and Facebook would have made lovely names for a couple of the fools who populate some of his plays.

This is to make an ass of me, to fright me if they could.

– Bottom, Act 3, Scene 1, A Midsummer Night’s Dream