Social Media

Three Years of Misery Inside Google, the Happiest Company in Tech

13 August 2019

From Wired:

On a bright Monday in January 2017, at 2:30 in the afternoon, about a thousand Google employees—horrified, alarmed, and a little giddy—began pouring out of the company’s offices in Mountain View, California. They packed themselves into a cheerful courtyard outside the main campus café, a parklike area dotted with picnic tables and a shade structure that resembles a giant game of pickup sticks. Many of them held up handmade signs: “Proud Iranian-American Googler,” “Even Introverts Are Here,” and of course, “Don’t Be Evil!” written in the same kindergarten colors as the Google logo.

After a few rounds of call-and-response chanting and testimonials from individual staffers, someone adjusted the rally’s microphone for the next speaker’s tall, lanky frame. Sundar Pichai, Google’s soft-spoken CEO of 15 months, stood in the small clearing in the dense crowd that served as a makeshift stage. “Over the last 24 to 48 hours, we’ve all been working very hard,” he said, “and every step of the way I’ve felt the support of 60,000 people behind me.”

It was, to be precise, January 30; Donald Trump’s presidency was 10 days old. And Executive Order 13769—a federal travel ban on citizens from Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen, and a wholesale suspension of US refugee admissions—had been in effect for 73 hours, trapping hundreds of travelers in limbo at the nation’s airports. For the moment, the company’s trademark admonition against evil was being directed at a clear, unmistakably external target: the White House.

To all the world it looked as if Google—one of the most powerful, pro-immigrant, and ostensibly progressive corporations in the United States—was taking a unified stand. But that appearance of unanimity masked a welter of executive-level indecision and anxiety. It probably would have been more apt if Pichai had said that, over the previous 48 hours, he had been backed into a corner by thousands of his employees.

In those first days of the Trump era, Google’s leaders were desperate to avoid confrontation with the new regime. The company’s history of close ties to the Obama administration left executives feeling especially vulnerable to the reactionary movement—incubated partly on Google’s own video platform, YouTube—that had memed, rallied, and voted Trump into office. (It didn’t help that Eric Schmidt, then executive chairman of Google’s parent company, Alphabet, had been an adviser to Hillary Clinton’s campaign, or that some 90 percent of political donations by Google employees had gone to Democrats in 2016.) Kent Walker, Google’s risk-averse vice president of public policy, had been advising staffers not to do anything that might upset Steve Bannon or Breitbart. So when the travel ban was announced on the afternoon of Friday, January 27, Google executives initially hoped to “just keep [their] heads down and allow it to blow over,” according to an employee who was close to those early calculations.

But the tribal dictates of Google’s own workforce made lying low pretty much impossible. Larry Page and Sergey Brin, the former Montessori kids who founded Google as Stanford grad students in the late ’90s, had designed their company’s famously open culture to facilitate free thinking. Employees were “obligated to dissent” if they saw something they disagreed with, and they were encouraged to “bring their whole selves” to work rather than check their politics and personal lives at the door. And the wild thing about Google was that so many employees complied. They weighed in on thousands of online mailing lists, including IndustryInfo, a mega forum with more than 30,000 members; Coffee Beans, a forum for discussing diversity; and Poly-Discuss, a list for polyamorous Googlers. They posted incessantly on an employee-only version of Google+ and on Memegen, an internal tool for creating and upvoting memes. On Thursdays, Google would host a company-wide meeting called TGIF, known for its no-holds-barred Q&As where employees could, and did, aggressively challenge executives.

All that oversharing and debate was made possible by another element of Google’s social contract. Like other corporations, Google enforces strict policies requiring employees to keep company business confidential. But for Google employees, nondisclosure wasn’t just a rule, it was a sacred bargain—one that earned them candor from leadership and a safe space to speak freely about their kinks, grievances, and disagreements on internal forums.

Finally, to a remarkable extent, Google’s workers really do take “Don’t Be Evil” to heart. C-suite meetings have been known to grind to a halt if someone asks, “Wait, is this evil?” To many employees, it’s axiomatic: Facebook is craven, Amazon is aggro, Apple is secretive, and Microsoft is staid, but Google genuinely wants to do good.

All of those precepts sent Google’s workforce into full tilt after the travel ban was announced. Memegen went flush with images bearing captions like “We stand with you” and “We are you.” Jewglers and HOLA, affinity groups for Jewish and Latinx employees, quickly pledged their support for Google’s Muslim group. According to The Wall Street Journal, members of one mailing list brainstormed whether there might be ways to “leverage” Google’s search results to surface ways of helping immigrants; some proposed that the company should intervene in searches for terms like “Islam,” “Muslim,” or “Iran” that were showing “Islamophobic, algorithmically biased results.” (Google says none of those ideas were taken up.) At around 2 pm that Saturday, an employee on a mailing list for Iranian Googlers floated the possibility of staging a walkout in Mountain View. “I wanted to check first whether anyone thinks this is a bad idea,” the employee wrote. Within 48 hours, a time had been locked down and an internal website set up.

. . . .

As the Trump era wore on, Google continued to brace itself for all manner of external assaults, and not just from the right. The 2016 election and its aftermath set off a backlash against Silicon Valley that seemed to come from all sides. Lawmakers and the media were waking up to the extractive nature of Big Tech’s free services. And Google—the company that had casually introduced the internet to consumer surveillance, orderer of the world’s information, owner of eight products with more than a billion users each—knew that it would be an inevitable target.

But in many respects, Google’s most vexing threats during that period came from inside the company itself. Over the next two and a half years, the company would find itself in the same position over and over again: a nearly $800 billion planetary force seemingly powerless against groups of employees—on the left and the right alike—who could hold the company hostage to its own public image.

In a larger sense, Google found itself and its culture deeply maladapted to a new set of political, social, and business imperatives. To invent products like Gmail, Earth, and Translate, you need coddled geniuses free to let their minds run wild. But to lock down lucrative government contracts or expand into coveted foreign markets, as Google increasingly needed to do, you need to be able to issue orders and give clients what they want.

For this article, WIRED spoke with 47 current and former Google employees. Most of them requested anonymity. Together, they described a period of growing distrust and disillusionment inside Google that echoed the fury roaring outside the company’s walls. And in all that time, Google could never quite anticipate the right incoming collision. After the travel ban walkout, for example, the company’s leaders expected the worst—and that it would come from Washington. “I knew we were snowballing toward something,” a former executive says. “I thought it was going to be Trump calling us out in the press. I didn’t think it was gonna be some guy writing a memo.”

. . . .

[Conservative male Google engineer James] Damore framed his memo as an appeal for intellectual diversity, identifying his reasoning as a conservative political position silenced by Google’s “ideological echo chamber.” “It’s a perspective that desperately needs to be told at Google,” Damore wrote.

Plenty of Damore’s colleagues, however, had heard this perspective before. Ad nauseam. “People would write stuff like that every month,” says one former Google executive. When the subject of diversifying Google’s workforce comes up in big meetings and internal forums, one black female employee says, “you pretty much need to wait about 10 seconds before someone jumps in and says we’re lowering the bar.”

. . . .

To Liz Fong-Jones, a site reliability engineer at Google, the memo’s arguments were especially familiar. Google’s engineers are not unionized, but inside Google, Fong-Jones essentially performed the function of a union rep, translating employee concerns to managers on everything from product decisions to inclusion practices. She had acquired this informal role around the time the company released Google+ to the public in 2011; before launch, she warned executives against requiring people to use their real names on the platform, arguing that anonymity was important for vulnerable groups. When public uproar played out much as Fong-Jones had predicted, she sat across from executives to negotiate a new policy—then explained the necessary compromises to irate employees. After that, managers and employees started coming to her to mediate internal tensions of all sorts.

As part of this internal advocacy work, Fong-Jones had become attuned to the way discussions about diversity on internal forums were beset by men like Cernekee, Damore, and other coworkers who were “just asking questions.” To her mind, Google’s management had allowed these dynamics to fester for too long, and now it was time for executives to take a stand. In an internal Google+ post, she wrote that “the only way to deal with all the heads of the medusa is to no-platform all of them.”

. . . .

On Monday morning, Google’s top management finally met to discuss what to do about Damore. The room, according to reporting by Recode, was split. Half the executives believed Damore shouldn’t be fired. Then YouTube CEO Susan Wojcicki and head of communications Jessica Powell urged their colleagues to consider how they would have reacted if Damore had applied the same arguments to race, rather than gender. That persuaded them: The engineer had to go. In a note to employees, Pichai said he was firing Damore for perpetuating gender stereotypes.

In his message, Pichai tried to assure the left without alienating the right. “To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not OK,” he wrote. “At the same time, there are co-workers who are questioning whether they can safely express their views in the workplace (especially those with a minority viewpoint). They too feel under threat, and that is also not OK. People must feel free to express dissent.”

. . . .

In the past Google had fired an employee for leaking internal memes from Memegen. But when the targeted employees reported harassment, they say, Google’s security team told them that the leaking of screenshots might fall under the legal definition of “protected concerted activity”—the same labor right invoked by Cernekee.

To Fong-Jones, the security team’s answer was both shocking and instructive; she didn’t realize a leaker could be protected. “Everyone thought Google had an absolute right to stop you from talking about anything related to Google,” she says. Yet here Google’s hands were apparently tied by labor law.

Link to the rest at Wired

PG reminds one and all that TPV is not a political blog.

The reason he posted this excerpt from a much longer article is that most SEO strategies used by many authors (or promotional service-providers) are focused on Google. Additionally, within Amazon’s world, similar SEO practices often come into play with respect to book descriptions, the wording of advertisements, etc.

PG doesn’t recall seeing anything recently about Amazon’s practices impacting the visibility of categories of books that promote disfavored ideas, but he may have simply missed such reports.

That said, Google and Amazon recruit engineers from the same overall pool of young, smart, recent college graduates.

PG is particularly concerned about the rising acceptance and use of deplatforming, a form of political action/prior restraint that proactively shuts down controversial speakers or speech, frequently by denying them access to a venue in which to express their opinion.

Under established First Amendment law in the United States, prior restraint of speech (prohibiting speech or other expression before it occurs) by government action is greatly disfavored.

A distinction is drawn between prior restraint by government and prior restraint by non-government actors. However, for PG, the underlying rationale disfavoring prior restraint is still persuasive, particularly when it is aimed at squelching a popularly disfavored view and is exercised by a large and powerful corporation against an individual.


If I Have One Technology Tip of the Day,

25 July 2019

If I have one technology tip of the day, it’s this: No matter how good the video on YouTube is, don’t read the comments, just don’t, because it will make you hate all humans.

~ Matt Groening

The Secret to Success on YouTube? Kids

25 July 2019

From Wired:

Kids love YouTube. They love the pinging of the xylophones in the upbeat “Thank you song” on CoCoMelon, a channel with more than 53 million subscribers that plays animated nursery rhymes. They love watching other kids open and test toys, as they do on Ryan ToysReview (subscriber count: 20,749,585). And they love the Baby Shark song. Possibly because of the fun dance moves and possibly because they want to drive adults crazy.

These trends are nothing new, but now we have more than vast subscriber counts or astounding click numbers to illustrate just how central videos featuring kids are to the platform. In a report Thursday, the Pew Research Center said that in the vast ecosystem of YouTube’s English-language videos, children’s content and content featuring kids under 13 are some of the most popular videos on the site.

For the study, researchers analyzed the videos posted by 43,000 YouTube channels, each with more than 250,000 subscribers, during the first week of 2019. There was a lot to work with. In those seven days, these channels posted almost a quarter-million videos totaling more than 48,000 hours. For the record, the authors note, “a single person watching videos for eight hours a day (with no breaks or days off) would need more than 16 years to watch all the content.”

Those videos covered everything from politics to video games. Most were not intended for kids. But the most popular featured kids. Researchers found that just 2 percent of the videos they analyzed featured a child or children that appeared to be younger than 13. “However, this small subset of videos averaged three times as many views as did other types of videos,” says the report.

There have been studies of niche communities within YouTube, but “We hadn’t seen something like this done before,” says Aaron Smith, director of the data lab team at Pew. Although YouTube children’s content wasn’t the impetus for the study, Smith says the results weren’t surprising: “We had a sense that this kind of content would be fairly popular. We know that lots of parents let their kids watch videos on YouTube.”

Videos with cheery, if nonsensical, titles like “Funny Uncle John Pretend Play w/ Pizza Food Kitchen Restaurant Cooking Kids Toys,” and “No No, Baby Rides the Scooter!” racked up over 6 million views each. “SUPERHERO BABIES MAKE A GINGERBREAD HOUSE SUPERHERO BABIES PLAY DOH CARTOONS FOR KIDS,” attracted almost 14 million views.

Not all the videos that featured young kids were nursery rhymes or traditional kids content like Mister Rogers’ Neighborhood. Pew’s analysis found that only 21 percent of videos featuring children appeared to have been aimed at kids. But videos that were both aimed at kids and featured kids were the most popular videos in Pew’s analysis, averaging four times as many views as other “general audience” videos.

As for the other 79 percent of videos that had kids but weren’t directly aimed at children? They did better too, getting “substantially more” attention than other videos aimed at teens and adults. The five most popular videos from the week Pew studied included a baby name reveal and family vlogs with titles like “WELCOMING A NEW MEMBER OF FAMILY!!” One was a sliming video. None are immediately alarming, though Smith couldn’t comment on why that kind of content was so attractive to so many viewers. “Why that type of material pops is unclear to me,” he says. “Someone clearly is enjoying it but it’s not clear who those folks are or what their motivations are for doing that.”

Link to the rest at Wired

PG’s general impression is that the videos that traditional publishers post on YouTube to promote books look cheap and lame. The ones he recalls had very few viewers at the time he checked them.

However, he wondered if any authors have popular YouTube channels that play a significant part in the promotion and marketing of their books. Feel free to point out examples in the comments.

PG is particularly interested in productive YouTube channels from authors who are not megasellers of the J.K. Rowling variety.

The Artist Behind Social Media’s Latest Big Idea

24 July 2019

From Medium:

Since 2012, an Illinois-based artist named Ben Grosser has been exploring how numbers — the number of likes on a post, the number of friends or followers you’ve amassed — shape the experience of using social media platforms such as Facebook, Twitter, Instagram, and YouTube. To anyone who would listen, he has espoused the view that those numbers, known as metrics, mold our online behavior in ways deeper and more insidious than we realize — and that we’d all be better off without them.

Seven years later, in a very different era for social media, the world’s largest tech companies have themselves begun experimenting with what Grosser calls “demetrication.” Twitter rolled out a beta app in which reply threads no longer display the number of likes, retweets, and replies on each tweet, unless you tap on it specifically. Instagram announced last week that it’s expanding a test that goes much farther, hiding the number of likes and video views on every post in your feed. You can still see how many people liked your own posts, but the move will remove any possibility of comparing the numbers on your own beach selfie to your friend’s (or frenemy’s). And YouTube opted in May to replace real-time subscriber counts on its channels with rounded estimates.

. . . .

The CEOs of both Twitter and Instagram have articulated their rationales in terms that evoke Grosser’s critiques, noting how the visual prominence of like and follower counts can encourage people to treat the platforms like a competition.

. . . .

Grosser was an artist, programmer, and graduate student at the University of Illinois in 2012 when he started reflecting on some of the queasier aspects of his relationship with Facebook, such as the way he found himself judging his posts by how many likes they received. “I started realizing how obsessed I was feeling about those numbers, and wondering why was I having those feelings, and wondering, whom did those feelings benefit?”

Link to the rest at Medium

PG also notes that there are so many ways to artificially increase likes, replies, retweets, etc., that any comparison of those numbers between authors (or anyone else) is almost certainly not reflective of the truth.

Why Should Authors Read Your Bad Reviews?

10 July 2019

From The Guardian:

History has yet to find the book that is universally adored – or the author who enjoys reading bad reviews. While Angie Thomas has topped the charts and scooped up armloads of awards for her two young adult novels, The Hate U Give and On the Come Up, her recent request that book bloggers stop sending her their negative reviews saw her on the receiving end of a wave of vitriol.

Thomas wasn’t asking reviewers to stop writing bad reviews. She was just asking that they didn’t give her a prod on Twitter or Instagram to tell her about it.

“Guess what? WE ARE PEOPLE WITH FEELINGS. What’s the point of tagging an author in a negative review? Really?” she wrote on Twitter. “We have to protect our mental space. Too many opinions, good or bad, can affect that … Getting feedback from too many sources can harm your writing process. I have a group of people whose feedback I value – my editor, my agent, other authors who act as beta readers. With the position I’m in, social media is for interacting with readers, not for getting critiques.”

. . . .

Authors have been saying for years that they would prefer not to be tagged in bad reviews on social media. In November, Lauren Groff wrote that “tweeting it [a review] to a writer is like grabbing their cheeks and shouting it into their face”. It even happened before the social media era, such as when Alain de Botton was told about Caleb Crain’s review of The Pleasures and Sorrows of Work (“I will hate you till the day I die and wish you nothing but ill will in every career move you make,” wrote de Botton on Crain’s website). And perhaps, Martin Amis’s publishers should have kept him in the dark about Tibor Fischer’s write up of Yellow Dog. (“A creep and a wretch. Oh yeah: and a fat-arse,” said Amis of Fischer.)

But the blowback against Thomas is disproportionate when compared to how reviewers have reacted to this request in the past. As she herself points out: “Plenty of white authors have said the same thing about reviews without getting the same kind of attacks that I’ve received.”

Link to the rest at The Guardian

Social Media Could Make It Impossible to Grow Up

9 July 2019

The following is a longer post/excerpt than PG usually includes on TPV, but the topic fascinated him.

PG is happy that the foolish things he did in high school and college have disappeared into thickening mists of the fading memories scrabbling for survival within the minds of himself and fellow members of the Order of Lavishly Idiotic Youth.

From Wired:

Several decades into the age of digital media, the ability to leave one’s childhood and adolescent years behind is now imperiled. Although exact numbers are hard to come by, it is evident that a majority of young people with access to mobile phones take and circulate selfies on a daily basis. There is also growing evidence that selfies are not simply a tween and teen obsession. Toddlers enjoy taking selfies, too, and whether intentionally or unintentionally, have even managed to put their images into circulation. What is the cost of this excessive documentation? More specifically, what does it mean to come of age in an era when images of childhood and adolescence, and even the social networks formed during this fleeting period of life, are so easily preserved and may stubbornly persist with or without one’s intention or desire? Can one ever transcend one’s youth if it remains perpetually present?

The crisis we face concerning the persistence of childhood images was the least of concerns when digital technologies began to restructure our everyday lives in the early 1990s. Media scholars, sociologists, educational researchers, and alarmists of all political stripes were more likely to bemoan the loss of childhood than to worry about the prospect of childhood’s perpetual presence. A few educators and educational researchers were earnestly exploring the potential benefits of the internet and other emerging digital technologies, but the period was marked by widespread moral panic about new media technologies. As a result, much of the earliest research on young people and the internet sought either to support or to refute fears about what was about to unfold online.

. . . .

Many adults feared that if left to surf the web alone, children would suffer a quick and irreparable loss of innocence. These concerns were fueled by reports about what allegedly lurked online. At a time when many adults were just beginning to venture online, the internet was still commonly depicted in the popular media as a place where anyone could easily wander into a sexually charged multiuser domain (MUD), hang out with computer hackers and learn the tricks of their criminal trade, or hone their skills as a terrorist or bomb builder. In fact, doing any of these things usually required more than a single foray onto the web. But that did little to curtail perceptions of the internet as a dark and dangerous place where threats of all kinds were waiting at the welcome gate.

. . . .

A common theme underpinning both popular and scholarly articles about the internet in the 1990s was that this new technology had created a shift in power and access to knowledge. A widely reprinted 1993 article ominously titled “Caution: Children at Play on the Information Highway” warned, “Dropping children in front of the computer is a little like letting them cruise the mall for the afternoon. But when parents drop their sons or daughters off at a real mall, they generally set ground rules: Don’t talk to strangers, don’t go into Victoria’s Secret, and here’s the amount of money you’ll be able to spend. At the electronic mall, few parents are setting the rules or even have a clue about how to set them.”

. . . .

In such a context, it is easy to understand why the imperiled innocence of children was invoked as a rationale for increased regulation and monitoring of the internet. In the United States, the Communications Decency Act, signed into law by President Clinton in 1996, gained considerable support due to widespread fears that without increased regulation of communications, the nation’s children were doomed to become perverts and digital vigilantes.

. . . .

Jenkins was not the only one to insist that the real challenge was to empower children and adolescents to use the internet in productive and innovative ways so as to build a new and vibrant public sphere. We now know that a critical mass of educators and parents did choose to allow children ample access to the internet in the 1990s and early 2000s. Those young people ended up building many of the social media and sharing economy platforms that would transform the lives of people of all ages by the end of the first decade of the new millennium.

. . . .

Among the more well-known skeptics was another media theorist, Neil Postman. Postman argued in his 1982 book The Disappearance of Childhood that new media were eroding the distinction between childhood and adulthood. “With the electric media’s rapid and egalitarian disclosure of the total content of the adult world, several profound consequences result,” he claimed. These consequences included a diminishment of the authority of adults and the curiosity of children. Although not necessarily invested in the idea of childhood innocence, Postman was invested in the idea and ideal of childhood, which he believed was already in decline. This, he contended, had much to do with the fact that childhood—a relatively recent historical invention—is a construct that has always been deeply entangled with the history of media technologies.

While there have, of course, always been young people, a number of scholars have posited that the concept of childhood is an early modern invention. Postman not only adopted this position but also argued that this concept was one of the far-reaching consequences of movable type, which first appeared in Mainz, Germany, in the late 15th century. With the spread of print culture, orality was demoted, creating a hierarchy between those who could read and those who could not. The very young were increasingly placed outside the adult world of literacy.

During this period, something else occurred: different types of printed works began to be produced for different types of readers. In the 16th century, there were no age-based grades or corresponding books. New readers, whether they were 5 or 35, were expected to read the same basic books. By the late 18th century, however, the world had changed. Children had access to children’s books, and adults had access to adult books. Children were now regarded as a separate category that required protection from the evils of the adult world. But the reign of childhood (according to Postman, a period running roughly from the mid-19th to the mid-20th centuries) would prove short-lived. Although earlier communications technologies and broadcasting mediums, from the telegraph to cinema, were already chipping away at childhood, the arrival of television in the mid-20th century marked the beginning of the end. Postman concludes, “Television erodes the dividing line between childhood and adulthood in three ways, all having to do with its undifferentiated accessibility: first, because it requires no instruction to grasp its form; second, because it does not make complex demands on either mind or behavior; and third, because it does not segregate its audience.”

. . . .

In the final chapter, Postman poses and responds to six questions, including the following: “Are there any communication technologies that have the potential to sustain the need for childhood?” In response to his own question, he replies, “The only technology that has this capacity is the computer.” To program a computer, he explains, one must in essence learn a language, a skill that would have to be acquired in childhood: “Should it be deemed necessary that everyone must know how computers work, how they impose their special worldview, how they alter our definition of judgment—that is, should it be deemed necessary that there be universal computer literacy—it is conceivable that the schooling of the young will increase in importance and a youth culture different from adult culture might be sustained.” But things could turn out differently. If economic and political interests decide that they would be better served by “allowing the bulk of a semiliterate population to entertain itself with the magic of visual computer games, to use and be used by computers without understanding … childhood could, without obstruction, continue on its journey to oblivion.”

. . . .

Thanks to Xerox’s graphical user interface, eventually popularized by Apple, by the 2000s one could do many things with computers without knowledge of or interest in their inner workings. The other thing that Postman did not anticipate is that young people would be more adept at building and programming computers than most older adults. Fluency in this new language, unlike most other languages, did not deepen or expand with age. By the late 1990s, there was little doubt that adults were not in control of the digital revolution. The most ubiquitous digital tools and platforms of our era, from Google to Facebook to Airbnb, would all be invented by people just out of their teens. What was the result? In the end, childhood as it once existed (i.e., in the pre-television era) was not restored, but Postman’s fear that childhood would disappear also proved wrong. Instead, something quite unexpected happened.

. . . .

Today, the distinction between childhood and adulthood has reemerged, but not in the way that Postman imagined.

In our current digital age, child and adolescent culture is alive and well. Most young people spend hours online every day exploring worlds in which most adults take little interest and to which they have only limited access. But this is where the real difference lies. In the world of print, adults determined what children could and could not access—after all, adults operated the printing presses, purchased the books, and controlled the libraries. Now, children are free to build their own worlds and, more importantly, to populate these worlds with their own content. The content, perhaps not surprisingly, is predominantly centered on the self (the selfie being emblematic of this tendency). So, in a sense, childhood has survived, but its nature—what it is and how it is experienced and represented—is increasingly in the hands of young people themselves. If childhood was once constructed and recorded by adults and mirrored back to children (e.g., in a carefully curated family photo album or a series of home video clips), this is no longer the case. Today, young people create images and put them into circulation without the interference of adults.

In sharp contrast to Postman’s prediction, childhood never did disappear. Instead, it has become ubiquitous in a new and unexpected way. Today, childhood and adolescence are more visible and pervasive than ever before. For the first time in history, children and adolescents have widespread access to the technologies needed to represent their lives, circulate these representations, and forge networks with each other, often with little or no adult supervision. The potential danger is no longer childhood’s disappearance, but rather the possibility of a perpetual childhood.

Link to the rest at Wired

Here’s the blurb for The End of Forgetting:

Thanks to Facebook and Instagram, our younger selves have been captured and preserved online. But what happens, Kate Eichhorn asks, when we can’t leave our most embarrassing moments behind? Rather than a childhood cut short by a loss of innocence, the real crisis of the digital age may be the specter of a childhood that can never be forgotten.

And here’s a review from Inside Higher Ed:

Someone brought a video recorder to Thanksgiving 1980, during my final year of high school. Not a close relative, certainly. Back then, it was too insanely extravagant a piece of consumer electronics for any of us to imagine buying one. (Not for several years, anyway.)

The camera sat on a tripod and recorded the holiday goings-on, which were shown — continuously, as they were happening — on a nearby television set. It would have been able to record two to four hours, depending on the format and system. A blank video cassette cost the equivalent of $50 to $75 in today’s currency. There was much apprehension over very young family members getting too close and knocking something over.

The novelty of seeing one’s actions and expressions from the outside, in real time, was intriguing but unsettling. Nothing meaningful or interesting happened, and I cannot imagine anybody getting bored enough to watch the recording. But it means that my 17-year-old doppelgänger may be preserved on a tape in an attic someplace in Oklahoma, and that possibility, however slim, has kept the memory vivid. No adolescent photograph would ever be as awkward. The tape was probably Betamax: technological obsolescence can have its upside.

. . . .

Most 17-year-olds today probably do not remember a time when they had not yet seen themselves onscreen. Chances are that many of the videos will have been their own recordings. Creating them requires no technical skill, and duplicating or transporting them is equally effortless.

None of the technology is unwieldy or uncommon, or all that expensive. And while the storage capacity of a phone or laptop is not boundless, neither is it much of an obstacle. Everything ends up in the cloud eventually. (That may not be literally true, but all trends lead in that direction.) “With analogue media,” Eichhorn says, “there is invariably a time lag between the moment of production and the moment of broadcasting; in the case of digital media, production and broadcasting often happen simultaneously or near simultaneously. Adolescents are in effect … experiencing the social world via documentary platform.” And it is a kind of social death when they can’t.

In this cultural ecosystem, the normal excruciations of adolescent self-consciousness are ramped up and acted out — often before an audience of unlimited potential size — then preserved for posterity, in endlessly duplicable form.

. . . .

The potential for embarrassment increased by several orders of magnitude after America’s Funniest Home Videos debuted at the end of 1989, but even that looks minimal in the wake of YouTube. Two or three cases of extreme humiliation and bullying via digital video are now familiar to millions of people.

Eichhorn discusses them while acknowledging the ethical dilemma that doing so runs the risk of perpetuating mindless cruelty. But her point is that the famous examples represent the tip of the iceberg. Digital images are produced and circulated now in ways that encourage the self-expression and experimentation that Erikson regarded as one of the privileges of youth — while at the same time creating a permanent record that is potentially inescapable.

Inescapable, that is, because unforgettable.

Link to the rest at Inside Higher Ed

Over 400 years ago, William Shakespeare famously wrote, “The evil that men do lives after them; the good is oft interred with their bones.”

Perhaps the Bard peered into the future and somehow discerned Twitter and Facebook.

It occurs to PG that Twitter and Facebook would have made lovely names for a couple of the fools who populate some of his plays.

This is to make an ass of me, to fright me if they could.

– Bottom, Act 3, Scene 1, A Midsummer Night’s Dream

What You Get Back When You Reclaim Your Time from Social Media

8 July 2019

From Medium:

Last fall, in the midst of touring for my latest book, I stepped away from the public stage that arguably made my publishing career possible.

After investing six years into growing my following on Twitter from zero to over 42,000, with millions of monthly engagements, I left the platform, at least for now. This might not sound like such a momentous decision, but it was for me. My 127,000 tweets — an average of 57 tweets per day — had dramatically raised my profile as a writer, sociologist, and scholar on race. The platform prompted countless interactions and conversations, frequent media attention, and valuable professional opportunities — such as connecting me with my literary agent and helping secure a publishing deal for the book I’m still touring with, How to Be Less Stupid About Race.

Yet at the precise moment when most writers would have redoubled their efforts to promote their work, I felt compelled to step down from my bully pulpit and shutter my most successful social media account.

. . . .

Most disconcertingly, even when I wasn’t tweeting, I found myself thinking in tweets — crafting pithy, retweetable observations about my life, social dynamics, and world events to share with my followers as soon as I could get my hot hands on my phone or laptop.

And then I reached a critical breaking point. Crisscrossing the nation for the book tour and connecting with readers in real life was a new, thrilling experience for me, but it was also unspeakably exhausting.

. . . .

While the vast majority of my interactions with folks at book events were uplifting and supportive, I never quite knew what to expect from Q&As. I felt the constant need to mentally prepare for everything from microaggressions to outright hostility.

What I faced most often, however, were the racialized and gendered expectations that I provide on-the-spot emotional processing, counseling, and strategizing for a never-ending stream of racial dilemmas and existential trauma. “How do I deal with my racist cousin?” a white woman would ask, expecting a sensible answer in 60 seconds or less, while a dozen people waited in line behind her. “What should I do about racism on my job?” a man urgently inquired as I signed a copy of the book.

. . . .

But as I struggled to give the fullness of my attention and intention to each and every person who I met on the road, I began to realize that I had little energy left for myself, and no energy at all for Twitter.

I began experiencing debilitating insomnia for the first time in my life. Anxiety became a daily concern.

. . . .

Media technology companies such as Facebook, Twitter, and Instagram are quite literally invested in making us internet addicts. They’re effectively manipulating social psychological responses to ensure that our clicks and engagements don’t fizzle — or else their bottom line will. Sean Parker, one of Facebook’s founders, described the platform’s “like” button as “a social validation feedback loop… exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

. . . .

If it feels difficult to quit social media, it’s because corporate strategists and programmers work very hard to embed their apps with digital carrots that ensure that scrolling through our feed feels deeply rewarding. Facebook “likes,” Twitter “hearts,” and Instagram notifications all drive addictive behavior by doling out intermittent and unpredictable rewards. These rewards, in turn, fuel the release of neurotransmitters, including dopamine, which are associated with the experience of pleasure in the brain.

. . . .

Of course, the sense of community created on social media has many potential benefits when used appropriately and in moderation. But these fleeting digital rewards come at a great price. Social media apps are able to stealthily manipulate our brains into believing that we are experiencing pleasure, despite the fact that heavy usage leads to increased depression, anxiety, sleep disorders, and reduced quality of life.

. . . .

“Computer internet businesses are about exploiting psychology… we want to psychologically figure out how to manipulate you as fast as possible and then give you back that dopamine hit. We did that brilliantly at Facebook.” And it’s not just the individual that this affects, he [former Facebook executive Chamath Palihapitiya] observed: “We have created tools that are ripping apart the social fabric of how society works.”

. . . .

As Alex Hern has pointed out in the Guardian, many social media executives and developers have either stopped using their own products or never used them excessively in the first place. Facebook made Palihapitiya a billionaire, but he has said he doesn’t use Facebook himself, and his own children are not allowed to use social media. Jack Dorsey, Twitter’s CEO, “rarely replies to strangers and avoids discussions or arguments on the site,” Hern wrote. “He doesn’t live-tweet TV shows or sporting fixtures. In fact, he doesn’t really ‘use’ Twitter; he just posts on it occasionally.” Mark Zuckerberg, meanwhile, has an entire team to manage and curate his social media account for him.

. . . .

Emotionally, my mood has greatly improved. I feel less glum, pessimistic, and angry with the world than when I was constantly “connected.” I keep a gratitude journal and count my blessings. While I sometimes miss the creative, intellectual, and political community of my tweethearts, I’m immensely relieved to no longer feel the mental pressure of organizing a social media press conference several times a day in response to trending hashtags, controversies, and tragedies.

Link to the rest at Medium


These Researchers Are Trying to Keep Facebook Users from Feeling Depressed

8 July 2019

From Fortune:

A couple of years ago, a group of researchers at Facebook realized that users felt worse about themselves after incessantly scrolling through their news feeds. The researchers decided to do something about it.

They surveyed Facebook users about their emotional reactions to using the social network. Those findings helped drive one of the biggest changes Facebook has made to date: Showing users more posts from friends and family rather than businesses.

The point was to increase interaction between users, whether commenting or liking posts. The more that people did so, the better they felt, the research found.

The change pushed by Facebook’s little-known well-being team is just one of many issues the group has explored. Its mission is to reduce any negative effects associated with using Facebook, a nearly ubiquitous presence in modern life.

. . . .

The team has a major challenge ahead as it aims to solve a growing conundrum within the tech industry: How to positively impact users’ lives. And over the years, various independent studies have shown that using Facebook can increase depression and make users feel less satisfied with their lives.

A study earlier this year by researchers unaffiliated with Facebook found that people who deactivated their Facebook accounts were happier and more satisfied and felt less anxious and depressed. It was in sharp contrast to Facebook CEO Mark Zuckerberg, who often brags about Facebook being a critical tool for connecting the world.

. . . .

Similarly, a study by researchers at Yale and the University of California at San Diego published in the American Journal of Epidemiology in 2017 suggested that the more people used Facebook, the worse their mental health and personal satisfaction.

. . . .

Any suggestion the team makes to Facebook’s management is only that—a suggestion. Facebook’s leaders get the ultimate say about what should be adopted.

And that’s the problem, suggests Jennifer Grygiel, a Syracuse University assistant professor who has studied the problem of policing social media content. Hate speech, violent content, and harassment have become so widespread that Facebook can’t keep up. And while users may complain, Facebook is under no requirement to make any changes to address those problems or any others.

“These are corporate entities, and they’re accountable to their shareholders and their profits,” Grygiel said. “They say that they want to help us—that they are putting processes in place to protect us—but they aren’t and don’t have the resources in place to do that.”

The problem with Facebook’s internal research boils down to one thing, says Grygiel: “Nothing can truly be independent when Mark Zuckerberg is the majority shareholder of the company.”

. . . .

Facebook’s researchers define the term “well-being” as “how people perceive their lives.” Within that scope, they focus on three specific areas: unhealthy amounts of time spent on Facebook, loneliness, and declines in self-worth related to users comparing themselves to others.

“These issues are things that have a deep impact on people’s lives and have played out on Facebook,” Facebook’s Guadagno said.

. . . .

Well-being also shows up in one of the company’s risk factors in last year’s annual report. Facebook said that its overall business could be harmed if users felt that the social network was negatively affecting their well-being.

“The company has always cared about well-being,” said Chandra Mohan Janakiraman, Facebook’s well-being product manager. “What’s changed is our understanding in terms of the impact our product has had.”

Link to the rest at Fortune

While reading the OP, a thought flitted across PG’s mind (so you have been warned).

Many years ago, large advertising agencies made a great deal of money from their clients who manufactured and sold cigarettes.

A couple of PG’s college friends worked at those agencies. As they reported, walking into the head office of a major cigarette company involved seeing large bowls of loose cigarettes (separate bowls for each brand of cigarette) and ample ashtrays wherever one might go in those offices – waiting rooms, conference rooms, break rooms, elevator lobbies, etc. – across multiple floors. Someone was evidently tasked with replenishing the cigarette bowls because they were never empty. Elegant table-top cigarette lighters were close by in case you had forgotten to bring your own.

One of the most consistent messages carried by the advertising across all brands of cigarettes was that smoking made you feel great, enhanced your sense of well-being and was integral to a fulfilling and active social life.

The enjoyable lives of smokers were all wrapped up with cigarettes.

The final quote in the excerpt above, “The company has always cared about well-being . . . . What’s changed is our understanding in terms of the impact our product has had,” resonated like a smoky echo from other large, wealthy and influential businesses in times past.
