Putting Ourselves Back in the Equation

From The Wall Street Journal:

Popular science often feels like a kind of voyeurism. Those who can’t manage the real thing are given a thrilling glimpse of its intrigue and excitement but kept at a distance. So a book that tackles the mind-boggling triad of physics, consciousness and artificial intelligence might be expected to provide little more than intellectual titillation. The science journalist George Musser even says at its end that “many physicists and neuroscientists are just as perplexed as the rest of us.”

But Mr. Musser knows that the point of popular science is not for the reader to understand everything fully but to get a sense of what’s at stake, what kinds of answers are being offered to difficult questions, and why it all matters. One could not ask more of “Putting Ourselves Back in the Equation”—on all three counts it delivers.

The central puzzle of the book is what the contemporary philosopher William Seager has called a “ticking time bomb”: the intellectual division between the objective and the subjective. Natural science was able to make such great strides after the Middle Ages only because it left the analysis of thoughts and feelings to poets and philosophers, focusing instead on measurable observables. The strategy worked a treat until it hit two brick walls.

The first is the nature of consciousness. Modern neuroscience, at first, stuck to examining the brain events that corresponded to conscious experiences: the “neural correlates of consciousness.” But at a certain point it became clear that such a focus left out a good deal. How is it possible that mushy cells give rise to sensations, emotions and perceptions? The science of mind had to ignore precisely what it was supposed to explain because a purely objective account of consciousness cannot encompass its subjective character.

And then—a second and related problem—physicists discovered that they couldn’t leave conscious minds out of their equations. A central tenet of quantum theory is that observers change what they observe. This is embarrassing. Physics is meant to describe the mind-independent world. But its best description ended up having minds—with their particular points of view—at its center. So for physics to be anything like complete, it has to find a way to kick minds out again or account for what makes them conscious and why they should affect physical matter.

Mr. Musser provides a chatty and informal overview of the many ways in which physicists have been trying to rise to these challenges. He speaks to many of the leading scientists in the field, trying a bit too hard to make them seem like regular folks so that we don’t feel intimidated. A bigger challenge, for the reader, is that he introduces us to so many theories that it’s difficult to judge which should be taken most seriously and which lean toward the cranky. Given that even the most well-evidenced theories in physics sound crazy, our intuitions are no guide.

But by the end a number of general insights shine through. The central one is that we have to think of both physics and consciousness in terms of networks and relations. You can’t find consciousness in single neurons, no matter how hard you look. The reductive approach, which seeks to break down phenomena to their smallest parts, doesn’t work for everything. The clearest evidence of the limits of reductionism is quantum entanglement, or “spooky action at a distance,” the title-phrase of Mr. Musser’s previous book. This is the phenomenon by which two particles appear to affect each other even though they are too far apart for any information to pass between them without exceeding the speed of light, a physical impossibility. No explanation of this oddity is possible if we focus reductively on the particles as discrete entities. Instead we have to see them as interrelated.

Consciousness, too, seems to depend upon patterns of interconnectedness. For a while now researchers into artificial intelligence have realized that we can get nothing close to human reasoning if we have computers that follow only linear processes. AI took off when scientists started to create neural networks, in which processes are conducted in parallel, mimicking the brain’s capacity to run different processes at the same time in its many parts.

This insight led to the currently hottest theory in consciousness studies, integrated information theory, which holds that consciousness is essentially the result of information being kept in whole systems rather than in parts. Adherents even quantify the degree of this integration with the Greek letter phi, which, says Mr. Musser, “represents the amount of information that is held collectively in the network rather than stored in its individual elements.” The higher the value of phi, the more conscious the system is.
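
To make the flavor of that claim concrete, here is a minimal, hypothetical Python sketch. It does not compute IIT's actual phi, which involves searching over partitions of a system's cause-effect structure; it only illustrates the simpler intuition Mr. Musser describes, that a network can hold information collectively that is not stored in its individual elements, using total correlation on a toy two-element system.

```python
# A toy illustration (not the actual IIT "phi" computation): one crude way to
# quantify "information held collectively in the network rather than stored in
# its individual elements" is total correlation, i.e. the gap between the sum
# of the parts' entropies and the entropy of the whole.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy (bits)."""
    p_x = joint.sum(axis=1)
    p_y = joint.sum(axis=0)
    return entropy(p_x) + entropy(p_y) - entropy(joint.flatten())

# Joint distribution over two binary "elements".
# Correlated system: the elements tend to agree.
joint_correlated = np.array([[0.4, 0.1],
                             [0.1, 0.4]])
# Independent system with the same 50/50 marginals.
joint_independent = np.array([[0.25, 0.25],
                              [0.25, 0.25]])

print(total_correlation(joint_correlated))   # > 0: information held collectively
print(total_correlation(joint_independent))  # = 0: nothing beyond the parts
```

The correlated system scores above zero because knowing one element tells you something about the other; the independent system scores zero, which is the sense in which "the parts" already contain everything.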

By the end of “Putting Ourselves Back in the Equation,” Carlo Rovelli emerges as the physicist who is on the hottest trail. For Mr. Rovelli, there are no independent, objective facts. Truth is always a matter of relations. We understand that New York is west of London but east of San Francisco. Mr. Rovelli argues that all physical properties are like this: Nothing is anything until it is related to something else. It’s an old idea, found in ancient Greek, Chinese and Indian philosophy, and scientists are discovering it anew.

Link to the rest at The Wall Street Journal

Witness to a Prosecution

From The Wall Street Journal:

In the popular perception of the typical white-collar case, a judicious government prosecutes a mendacious executive on a mountain of incontrovertible evidence. Think Bernie Madoff or Sam Bankman-Fried. Then there’s Michael Milken, the former “junk bond king” from the infamous “decade of greed.” If there were a Mount Rushmore of white-collar crime, all three men might have a place.

Thanks to Richard Sandler, however, you can now scratch one of those names off that list. In “Witness to a Prosecution,” Mr. Sandler, a childhood friend who was Mr. Milken’s personal lawyer at the time, walks the reader through Mr. Milken’s 30-plus year legal odyssey, beginning in 1986 with the federal government’s investigation, followed by his indictment, plea bargain, and prison term, right through to his pardon by President Donald Trump in 2020. The author tells a convincing and concerning story of how the government targeted a largely innocent man and, when presented with proof of that innocence, refused to turn away from a bad case.

I have always been more than a bit skeptical about Mr. Sandler’s underlying thesis—and the thesis of many of Mr. Milken’s supporters on this page. After all, Mr. Milken served nearly two years in jail, pleaded guilty to six felonies and paid a large fortune to settle with the government.

I have also read books, chief among them James B. Stewart’s “Den of Thieves” (1991), that seem to make the case for Mr. Milken’s culpability—the methods he employed as head of Drexel Burnham Lambert’s high-yield department, the alleged epicenter of the destructive “leveraged buyout” mania of the 1980s that cratered companies and led to mass unemployment; his alliances with smarmy corporate raiders; his supposed insider trading with the notorious arbitrageur Ivan Boesky. The list goes on.

After reading Mr. Sandler’s account, I no longer believe in Mr. Milken’s guilt, and neither should you. The author argues that most of what we know about Mr. Milken’s misdeeds is grossly exaggerated, if not downright wrong. What the government was able to prove in the court of law, as opposed to the court of public opinion, were mere regulatory infractions: “aiding and abetting” a client’s failure to file an accurate stock-ownership form with the SEC, a violation of broker-dealer reporting requirements, assisting with the filing of a false tax return. There was no insider-trading charge involving Mr. Boesky or anyone else, because the feds couldn’t prove one.

The witnesses against Mr. Milken, among them Mr. Boesky, led investigators on a wild-goose chase that turned up relatively little. One key piece of evidence linking the two men: A $5.3 million payment to Drexel from Mr. Boesky for what turned out to be routine corporate finance work that the feds thought looked shady.

When you digest the reality of the case against Mr. Milken, you find that much of it was nonsense. As Mr. Sandler puts it: “The nature of prosecution and the technicality and uniqueness of the regulatory violations . . . certainly never would have been pursued had Michael not been so successful in disrupting the traditional way business was done on Wall Street.”

That gets us to why Mr. Milken was prosecuted so viciously. The lead prosecutor on the case, Rudy Giuliani, was the U.S. Attorney for the Southern District of New York. It’s hard to square the current Mr. Giuliani, fighting to keep his law license while being enmeshed in Mr. Trump’s election-denying imbroglio, with the man who was then the nation’s foremost crime fighter, taking on mobsters, corrupt politicians and those targeted as unscrupulous Wall Street financiers.

Mr. Giuliani’s ambition for political office—he would later become mayor of New York City—made Mr. Milken an enticing target, Mr. Sandler tells us. The author suggests that Mr. Giuliani made up for his weak legal case by crafting an image of the defendant as an evil bankster and feeding it through leaks to an all-too-compliant media. “Michael Milken became the subject of intensive newspaper articles, press leaks, rumors, and innuendo for years before he was charged with anything,” the author writes. “I am sure Giuliani and his team of prosecutors believed that Mike would succumb to the pressure and quickly settle and cooperate and implicate others. When this did not happen, the prosecutors became more committed to using their immense power to pressure Michael and try to win at all costs.”

Link to the rest at The Wall Street Journal

AAP’s September StatShot: US Book Market Up 0.8 Percent YTD

From Publishing Perspectives:

In its September 2023 StatShot report, released this morning (December 12), the Association of American Publishers (AAP) reports that total revenues across all categories were flat compared with September 2022, at US$1.4 billion.

The American market’s year-to-date revenues, the AAP reports, were up 0.8 percent at US$9.4 billion for the first nine months of the year.

As Katy Hershberger notes at Publishers Lunch today, children’s books continued to gain in September, up 5.2 percent over the same month in 2022, with sales this year reaching $272.8 million.

Publishing Perspectives readers know that the AAP’s numbers reflect reported revenue for tracked categories including trade (consumer books); higher education course materials; and professional publishing.

. . . .

Trade Book Revenues

Year-Over-Year Numbers
Trade revenues were down 0.4 percent in September over the same month last year, at $905.9 million.

In print formats:

  • Hardback revenues were up 7.2 percent, coming in at $379 million
  • Paperbacks were down 4.9 percent, with $299.1 million in revenue
  • Mass market was down 39.5 percent to $11.3 million
  • Special bindings were up 11.8 percent, with $27.1 million in revenue

In digital formats:

  • Ebook revenues were down 1.8 percent for the month as compared to September 2022, for a total of $85.2 million
  • The closely monitored digital audio format was up 3.2 percent over September 2022, coming in at $69.9 million in revenue
  • Physical audio was down 24.4 percent, coming in at $1.2 million

Link to the rest at Publishing Perspectives
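
For readers who like to check the arithmetic behind figures like these, here is a small back-of-the-envelope Python sketch. It assumes only the September 2023 revenue and year-over-year percentage quoted above and backs out the implied September 2022 figure for a few categories; the results are rough estimates for illustration, not AAP-reported numbers.

```python
# Back out the implied prior-year (September 2022) revenue from a reported
# September 2023 figure and its year-over-year percentage change.
# Inputs are copied from the StatShot excerpt above; outputs are estimates.

def implied_prior_year(current_millions: float, yoy_percent_change: float) -> float:
    """Prior-year revenue implied by current revenue and YoY change."""
    return current_millions / (1 + yoy_percent_change / 100)

reported = {
    "Hardback":      (379.0,  +7.2),
    "Paperback":     (299.1,  -4.9),
    "Mass market":   (11.3,  -39.5),
    "Ebook":         (85.2,   -1.8),
    "Digital audio": (69.9,   +3.2),
}

for category, (revenue, change) in reported.items():
    prior = implied_prior_year(revenue, change)
    print(f"{category}: ${revenue:.1f}M in Sept 2023 -> ~${prior:.1f}M implied for Sept 2022")
```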

PG notes that the Association of American Publishers includes far more publishers than the large trade fiction and non-fiction publishers in New York City, the ones that the New York Times uses for its best-seller lists.

The AAP stats include educational publishers that provide textbooks for all of the different levels of education in the US. It also includes religious publishers and business publishers providing books for the business, medical, law, technical and scientific markets.

Amsterdam’s Elsevier: Research and Real-World Impact

From Publishing Perspectives:

As we work to recoup some of the relevant material released to the news media near the end of our publication year, we look now at two significant research reports from Elsevier, one on research evaluation and the other on real-world impact—both increasingly pressing interests in the world of academic publication.

For the 30-page report “Back to Earth: Landing Real-World Impact in Research Evaluation,” Elsevier surveyed 400 academic leaders, funders, and researchers in seven countries about real-world impact as part of academic evaluation. Key findings include:

  • Sixty-six percent of respondents say academia has a moral responsibility to incorporate real-world impact into standard research evaluation​
  • Seventy percent say they are passionate about research that has a positive real-world impact
  • Fifty-three percent say a more holistic approach to evaluation would improve research cost-effectiveness
  • Fifty-one percent of respondents identified at least one serious problem with current methods of research evaluation
  • In terms of barriers to change, 56 percent of those surveyed cited a “lack of common frameworks or methodologies,” while 48 percent cited a “lack of consensus on what constitutes impact”

In this report, it’s interesting to note some of the differences, culture to culture, on the question of how important it is for research “to aim for real-world impact.” Particularly during the coronavirus COVID-19 pandemic, there could hardly have been a time when the world at large’s need for the most sophisticated, committed, and efficient research was so obvious.

Nevertheless, a graphic in the report indicates that respondents on this point came in on the affirmative side (yes, research should aim for real-world impact) at rates as high as 93 percent in the United Kingdom and as low as 64 percent in the Elsevier report’s home country, the Netherlands.

Another very interesting point in this report compares the views of funders with those of researchers.

While the funders surveyed seem to agree with researchers that more holistic approaches are important, the funders said they were even more strongly in agreement with the researchers that the current system creates vested interests.

And it’s the researchers who said they were more passionate than the funders about having “real-world impact as researchers and academic leaders.”

Topping the list of barriers offered by funders to a more holistic form of research assessment was a lack of resources, at 53 percent, tied with a lack of consensus on what actually constitutes impact.

Also cited frequently was the lack of a common framework or methodology for assessing research’s impact holistically, at 49 percent. Another tie came next, with two more barriers, “achieving sufficient alignment between different actors” and “complexity,” each cited by 38 percent of respondents.

Link to the rest at Publishing Perspectives

The Quest for Cather

From The American Scholar:

Willa Cather loathed biographers, professors, and autograph fiends. After her war novel, One of Ours, won the Pulitzer in 1923, she decided to cull the herd. “This is not a case for the Federal Bureau of Investigation,” she told one researcher. Burn my letters and manuscripts, she begged her friends. Hollywood filmed a loose adaptation of A Lost Lady, starring Barbara Stanwyck, in 1934, and Cather soon forbade any further screen, radio, and television versions of her work. No direct quotations from surviving correspondence, she ordered libraries, and for decades a family trust enforced her commands.

Archival scholars managed to undermine what her major biographer James Woodress called “the traps, pitfalls and barricades she placed in the biographer’s path,” even as literary critics reveled in trench warfare over Cather’s sexuality. In 2018, her letters finally entered the public domain, allowing Benjamin Taylor to create the first post-ban life of Cather for general readers.

Chasing Bright Medusas is timed for the 150th anniversary of Cather’s birth in Virginia. The title alludes to her 1920 story collection on art’s perils, Youth and the Bright Medusa. (“It is strange to come at last to write with calm enjoyment,” she told a college friend. “But Lord—what a lot of life one uses up chasing ‘bright Medusas,’ doesn’t one?”) Soon she urged modern writers to toss the furniture of naturalism out the window, making room for atmosphere and emotion. What she wanted was the unfurnished novel, or “novel démeublé,” as she called it in a 1922 essay. Paraphrasing Dumas, she posited that “to make a drama, all you need is one passion, and four walls.”

Chasing Bright Medusas is an appreciation, a fan’s notes, a life démeublé. Taylor’s love of Cather’s sublime prose is evident and endearing, but in his telling of her life, context is sometimes defenestrated, too. When Taylor sets an idealistic Cather against cynical younger male rivals, we learn of Ernest Hemingway’s mockery but not of William Faulkner’s declaration that the greatest American novelists were Herman Melville, Theodore Dreiser, Hemingway, and Cather. Taylor rightly notes that Cather’s First World War novel differs from Hemingway’s merely in tone, not understanding; A Farewell to Arms and One of Ours, he writes, “ask to be read side by side.”

. . . .

Not much Dark Cather surfaces in Bright Medusas—a pity, for she was a genius of horror. My Ántonia brims with bizarre ways to die (thrown to ravening wolves, suicide by threshing machine); carp devour a little girl in Shadows on the Rock; and One of Ours rivals Cormac McCarthy for mutilation and gore. Cather repeatedly changed her name and lied about her birthdate, as a professional time traveler must, on the page and in life. She saw her mother weep for a Confederate brother lost at Manassas, rode the Plains six years after Little Bighorn, was the first woman to receive an honorary degree from severely masculinist Princeton, and though in failing health, spent World War II writing back to GIs who read her work in Armed Services Editions paperbacks, especially the ones who picked up Death Comes for the Archbishop, assuming it was a murder mystery. She died the same spring that Elton John and Kareem Abdul-Jabbar and David Letterman were born. She is one of ours.

Link to the rest at The American Scholar

How to Interpret the Constitution

From The Wall Street Journal:

It is a testament to our nation’s commitment to the rule of law that, more than 230 years after its ratification, Americans still argue about the Constitution. And as often as we argue about the outcomes of controversial hot-button constitutional cases, we argue about the methodologies that lead judges to make their rulings.

Today originalism—the idea that constitutional meaning was fixed at the time of enactment—is the dominant judicial philosophy, thanks in part to decades of persuasive arguments put forward by conservative and libertarian lawyers and scholars. But there are different flavors of originalism, corresponding to different understandings of “original meaning”—the framers’ intent, a provision’s “public meaning,” or its expected application, to name a few—and various liberal lawyers and politicians propose their own interpretative methods, originalist or otherwise.

Cass Sunstein, a Harvard Law School professor, has written “How to Interpret the Constitution,” a clear, accessible survey that discusses originalist interpretations alongside their competitors. Among those nonoriginalist approaches are John Hart Ely’s argument for democracy-enforcing judicial review, Ronald Dworkin’s moral reading of the Constitution and James Bradley Thayer’s advocacy of extreme judicial restraint. Those are all varieties of what has been called “living constitutionalism”; they all allow that the Constitution may evolve in meaning without being amended.

Mr. Sunstein argues repeatedly that the Constitution “does not contain the instructions for its own interpretation.” To some degree, this is true: Though statutes often include definitions in their wording, the Constitution, for the most part, does not. For example, the first words of the First Amendment read: “Congress shall make no law respecting an establishment of religion.” None of “respecting,” “establishment,” or “religion” is set out with a clear definition, and the Supreme Court has entertained many possible meanings for each over the course of American history.

There is also no explicit constitutional command, in the text of the Constitution or in the works of the Founders, that those tasked with interpreting it must follow any particular method, either originalist or living-constitutionalist. “The idea of interpretation is capacious,” Mr. Sunstein writes. He therefore proposes his own first principle for choosing among methods: “Any particular approach to the Constitution must be defended on the ground that it makes the relevant constitutional order better rather than worse.”

Originalists propose that we resolve constitutional ambiguities by unearthing the law’s true and unchanged meaning. Mr. Sunstein, by contrast, proposes that judges and other constitutional interpreters rely on their “firm intuitions” to determine which constitutional rules are desirable and then see what theories might yield those rules. To do so, the author borrows from John Rawls, the giant of 20th-century liberal political theory, to endorse a methodology of “reflective equilibrium,” in which “our moral and political judgments line up with one another, do not contradict each other, and support one another.”

“In deciding how to interpret the Constitution,” Mr. Sunstein writes, “we have to think about what we are most firmly committed to, and about what we are least likely to be willing to give up.” He reveals how he would apply this methodology in his own case by listing his 10 “fixed points,” or constitutional outcomes that resonate with his own sense of rightness and justice. They are “clearly correct” propositions in the author’s view and include the contentions that “the Constitution does not forbid maximum hour and minimum wage laws” and that “the First Amendment should be understood to give very broad protection to political speech.” Of course, you might believe exactly the opposite. That, to Mr. Sunstein, is equally legitimate. One begins to wonder at this point how much “interpretation” exactly is going on.

Consider Mr. Sunstein’s claim that judges and justices should interpret laws in a manner that improves the constitutional order. Why shouldn’t we just allow legislators, who figure nowhere in Mr. Sunstein’s philosophy, to make legitimate changes to legislation when needed? We have mechanisms for improving our nation’s laws, and we have one for improving our Constitution. The Republicans who revamped our constitutional system in the aftermath of the Civil War by devising the Reconstruction Amendments—banning slavery, guaranteeing the equal protection of the law and enforcing individual rights against the states—understood that they couldn’t simply project their moral and political views onto the unamended law. They had to change the Constitution.

Like most nonoriginalists, Mr. Sunstein evades the key insight that gives originalism its appeal. It begins with a phrase from the Constitution that refutes Mr. Sunstein’s premise that the document doesn’t contain instructions for its own interpretation. “This Constitution,” it proclaims, “shall be the supreme law of the land.” The Constitution is a legal document, even if its provisions are sometimes more ambiguous at first glance than we would want a law to be. And laws have the crucial characteristic sometimes known as “fixation”: They remain unchanged until changed by authorized means. Constitutional interpretation must be constrained by this basic principle of legal reasoning.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Bookworms and Fieldworkers

From The Nation:

In the years leading up to the outbreak of the 1905 revolution in Russia, Eduard Bernstein—the spirited German advocate of socialist revisionism—warned his Marxist colleagues about the dangers of an “almost mythical faith in the nameless masses.” More skeptic than firebrand, Bernstein worried that Karl Kautsky and other leaders of the international socialist movement placed too much confidence in the spontaneous emergence of an organized and disciplined working class: “The mob, the assembled crowd, the ‘people on the street’…is a power that can be everything—revolutionary and reactionary, heroic and cowardly, human and bestial.” Just as the French Revolution had descended into terror, the masses could once again combust into a violent flame. “We should pay them heed,” Bernstein warned, “but if we are supposed to idolize them, we must just as well become fire worshippers.”

Among the votaries of European socialism, Bernstein has seldom enjoyed much acclaim, not least because he symbolized the spirit of pragmatism and parliamentary reform that ended up on the losing side of the debates that roiled the socialist movement in the decades preceding the Bolsheviks’ victory in 1917. For historians who are less partisan, however, the time may well seem ripe for a new appraisal—a revision of revisionism—that casts Bernstein and his reformist wing in a more favorable light.

This is the ambition of Christina Morina in The Invention of Marxism, recently translated into English by Elizabeth Janik. A study of Bernstein, Kautsky, Lenin, Jean Jaurès, Rosa Luxemburg, and other early Marxist luminaries, the book bears a rather breathless subtitle—“How an Idea Changed Everything”—that is far too ambitious for any author, but it is nonetheless a searching account of Marxism’s early days. Although it offers no certain answers as to what the “idea” of Marxism really consists in, it does provide a welter of personal and biographical detail that enriches our sense of Marxism’s varied history and the lives of its party leaders.

How should we write the history of Marxism? Over the past century, when political opinion has been sharply divided on the meaning and legacy of the socialist tradition, historians have felt compelled to choose one of two modes of narrative: either triumphant or tragic. Both of these approaches are freighted by ideology, yet neither has permitted a truly honest reckoning with the political realities of the Marxist past.

Morina, a scholar whose training reflects the methods of social and political history associated with the University of Bielefeld in Germany, where she now works as a professor, has set out to write a history that avoids strong ideological verdicts and places a greater emphasis on the sociology of intellectuals and the details of the Marxists’ personal lives, a method that also draws inspiration from the new trend in the history of emotions pioneered by scholars such as Ute Frevert. No doubt the book also reflects her own experiences as a child in East Germany, where she witnessed the “absurdities and inhumanity” of an authoritarian state that was arguably socialist in name only.

The fruit of her efforts is a group biography that explores the fate of nine “protagonists” from the first generation of the European socialist movement following the death of Karl Marx in 1883. Morina weaves together their personal and party histories with unusual skill, though without quite telling us “how an idea changed everything.” Perhaps the key difficulty is the method of prosopography itself, which fractures the book into individual life stories and leaves little room for a continuous political narrative. Those who are not already familiar with the broader history of European socialism will find it difficult to understand how the various national parties (in France, Germany, Austria, and Russia) all participated in a common struggle. But there is a case for her approach nonetheless, as it leads to some unique insights. By examining how personality and emotion shape one’s political commitments, Morina paints a portrait of Marxism less as a specific theory than as a shared language and a set of informal dispositions that spawned a variety of competing interpretations. Her nine protagonists were not, she explains, gifted with a sudden revelation of the truth. Each underwent a slow and emotional process through which the ideas of Marx became a common framework for explaining and evaluating political events.

While we now take this framework for granted as Marxist doctrine, Morina notes that the creation of Marxism was itself “a vast political project” that developed only gradually. The term gained “ideological meaning and political heft” only in the 1870s and 1880s, as works by Marx and Engels spread across the world in various editions and translations. For Morina, this means that the task of the social historian is to understand how those works were received, often on a case-by-case basis. The result is a book that tells us a great deal about these early Marxists as individuals, though much less about Marxism as a comprehensive theory or idea.

. . . .

If Marxism is an idea, it’s only because of the intellectuals who carried it forward and helped ensure its longevity—and many (though, of course, not all) of these intellectuals were by origin and education members of the bourgeoisie, not members of the working class lionized in Marxist theory.

Morina is acutely aware of this irony, and it informs all of her judgments in the book, some of them subtle, others overt. Running through The Invention of Marxism is a powerful current of unease about the “abstraction” of theory and the great distance that separated some of Marxism’s most esteemed theorists from the world they wished to understand. Although they were passionate in their principled commitment to the working classes, they often knew little about the workers’ actual lives, and at times they responded with revulsion—or at least discomfort—when exposed to the real suffering of the proletariat for whom they claimed to speak.

Morina takes special care to note that many of the party theorists in her tale enjoyed the rare privilege of a university education at a time when less than 1 percent of secondary school students in Western Europe went on to study at university. Karl Kautsky, a leading member of the German Social Democratic Party, was born into a home of writers and artists, and his parents were highly committed to his schooling. Victor Adler, a leader of the Social Democratic Workers’ Party of Austria, was a practicing physician as well as a publisher—he founded Gleichheit (Equality), the first socialist party newspaper in the Hapsburg Empire. Rosa Luxemburg studied at the University of Zurich and was by all reports an exceptionally precocious child whose parents grew prosperous thanks to her father’s success as a timber merchant; her theoretical acumen and political passion elevated her to prominent seats, first in the German Social Democratic Party and later in the Independent Social Democrats, the Spartacus League, and the Communist Party. Jean Jaurès, born in the South of France, rose to the top of his class and attended the École Normale Supérieure, where his classmates included Émile Durkheim and Henri Bergson, before he emerged as the most influential leader in the French Socialist Party.

The other protagonists in Morina’s tale enjoyed equal or even greater advantages. Vladimir Ulyanov (later Lenin) was born into a prosperous Russian family that owned estates; his father, a liberal teacher elevated to the post of school inspector, was eventually granted a title of nobility, while his mother came from a family of landowners with German, Swedish, and Russian origins and spoke several languages. Georgi Plekhanov, the “father of Russian Marxism,” had parents who owned serfs and belonged to the Tatar nobility; following the Emancipation Edict of 1861, Plekhanov’s family fell into financial decline, but thanks in part to his mother, he enjoyed a very strong education. Only two figures in Morina’s book were not the beneficiaries of wealth and education: Jules Guesde (born Bazile), later a major figure in French Marxism and socialism and an opponent of Jaurès; and Eduard Bernstein, whose father was a plumber and who never attended university and worked as a bank employee to support his activities in the German Social Democratic Party.

These protagonists, most of them members of the middle class, belonged to what Morina calls a “voluntary elite.” Her group study, though often engaging, remains poised in an uncertain space between intellectual history and party chronicle, without ever truly resolving itself into a satisfactory version of either. Needless to say, this ambivalence may be baked into the topic itself, since Marxism is perhaps distinctive in its contempt for mere theorizing and its constant refrain that we must bridge the gap between theory and practice. After all, has there ever been a Marxist who did not insist that their ideas were correlated with material events? Morina, though hardly a Marxist in her methods, suggests that her study exemplifies the genre of Erfahrungsgeschichte, or the “history of lived experience.” Experience, however, is itself a concept of some controversy, since it hints at some bedrock of individual reality beyond interpretation and deeper than mere ideas. And this would seem to be Morina’s point: By turning our attention to the biographical and emotional history of the European socialist tradition, she hopes to remind us that Marxist intellectuals were not bloodless theoreticians but human beings caught up in the same world of passions and interests they wished to explain.

Link to the rest at The Nation

PG suggests that the only definitive accomplishment of more than 100 years of Marxism/Communism is to demonstrate that Communism is an excellent way for a dictator to gain and hold power.

The poor become poorer and the productive middle class is decimated, in part because talents and accomplishments in any field other than politics are only rarely a path to any sort of security. It’s a soul-crushing environment for anyone who isn’t a thug.

The Biggest Questions: What is death?

From MIT Technology Review:

Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone.

But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back.

Scientists and many doctors have already embraced this more nuanced understanding of death. As society catches up, the implications for the living could be profound. “There is potential for many people to be revived again,” says Sam Parnia, director of critical care and resuscitation research at NYU Langone Health.

Neuroscientists, for example, are learning that the brain can survive surprising levels of oxygen deprivation. This means the window of time that doctors have to reverse the death process could someday be extended. Other organs likewise seem to be recoverable for much longer than is reflected in current medical practice, opening up possibilities for expanding the availability of organ donations.

To do so, though, we need to reconsider how we conceive of and approach life and death. Rather than thinking of death as an event from which one cannot recover, Parnia says, we should instead view it as a transient process of oxygen deprivation that has the potential to become irreversible if enough time passes or medical interventions fail. If we adopt this mindset about death, Parnia says, “then suddenly, everyone will say, ‘Let’s treat it.’”   

Moving goalposts 

Legal and biological definitions of death typically refer to the “irreversible cessation” of life-sustaining processes supported by the heart, lungs, and brain. The heart is the most common point of failure, and for the vast majority of human history, when it stopped there was generally no coming back. 

That changed around 1960, with the invention of CPR. Until then, resuming a stalled heartbeat had largely been considered the stuff of miracles; now, it was within the grasp of modern medicine. CPR forced the first major rethink of death as a concept. “Cardiac arrest” entered the lexicon, creating a clear semantic separation between the temporary loss of heart function and the permanent cessation of life. 

Around the same time, the advent of positive-pressure mechanical ventilators, which work by delivering breaths of air to the lungs, began allowing people who incurred catastrophic brain injury—for example, from a shot to the head, a massive stroke, or a car accident—to continue breathing. In autopsies after these patients died, however, researchers discovered that in some cases their brains had been so severely damaged that the tissue had begun to liquefy. In such cases, ventilators had essentially created “a beating-heart cadaver,” says Christof Koch, a neuroscientist at the Allen Institute in Seattle.

These observations led to the concept of brain death and ushered in medical, ethical, and legal debate about the ability to declare such patients dead before their heart stops beating. Many countries did eventually adopt some form of this new definition. Whether we talk about brain death or biological death, though, the scientific intricacies behind these processes are far from established. “The more we characterize the dying brain, the more we have questions,” says Charlotte Martial, a neuroscientist at the University of Liège in Belgium. “It’s a very, very complex phenomenon.” 

Brains on the brink

Traditionally, doctors have thought that the brain begins incurring damage minutes after it’s deprived of oxygen. While that’s the conventional wisdom, says Jimo Borjigin, a neuroscientist at the University of Michigan, “you have to wonder, why would our brain be built in such a fragile manner?” 

Recent research suggests that perhaps it actually isn’t. In 2019, scientists reported in Nature that they were able to restore a suite of functions in the brains of 32 pigs that had been decapitated in a slaughterhouse four hours earlier. The researchers restarted circulation and cellular activity in the brains using an oxygen-rich artificial blood infused with a cocktail of protective pharmaceuticals. They also included drugs that stopped neurons from firing, preventing any chance that the pig brains would regain consciousness. They kept the brains alive for up to 36 hours before ending the experiment. “Our work shows there’s probably a lot more damage from lack of oxygen that’s reversible than people thought before,” says coauthor Stephen Latham, a bioethicist at Yale University. 

In 2022, Latham and colleagues published a second paper in Nature announcing that they’d been able to recover many functions in multiple organs, including the brain and heart, in whole-body pigs that had been killed an hour earlier. They continued the experiment for six hours and confirmed that the anesthetized, previously dead animals had regained circulation and that numerous key cellular functions were active. 

“What these studies have shown is that the line between life and death isn’t as clear as we once thought,” says Nenad Sestan, a neuroscientist at the Yale School of Medicine and senior author of both pig studies. Death “takes longer than we thought, and at least some of the processes can be stopped and reversed.” 

A handful of studies in humans have also suggested that the brain is better than we thought at handling a lack of oxygen after the heart stops beating. “When the brain is deprived of life-sustaining oxygen, in some cases there seems to be this paradoxical electrical surge,” Koch says. “For reasons we don’t understand, it’s hyperactive for at least a few minutes.” 

In a study published in September in Resuscitation, Parnia and his colleagues collected brain oxygen and electrical activity data from 85 patients who experienced cardiac arrest while they were in the hospital. Most of the patients’ brain activity initially flatlined on EEG monitors, but for around 40% of them, near-normal electrical activity intermittently reemerged in their brains up to 60 minutes into CPR. 

Similarly, in a study published in Proceedings of the National Academy of Sciences in May, Borjigin and her colleagues reported surges of activity in the brains of two comatose patients after their ventilators had been removed. The EEG signatures occurred just before the patients died and had all the hallmarks of consciousness, Borjigin says. While many questions remain, such findings raise tantalizing questions about the death process and the mechanisms of consciousness.

Life after death

The more scientists can learn about the mechanisms behind the dying process, the greater the chances of developing “more systematic rescue efforts,” Borjigin says. In best-case scenarios, she adds, this line of study could have “the potential to rewrite medical practices and save a lot of people.” 

Everyone, of course, does eventually have to die and will someday be beyond saving. But a more exact understanding of the dying process could enable doctors to save some previously healthy people who meet an unexpected early end and whose bodies are still relatively intact. Examples could include people who suffer heart attacks, succumb to a deadly loss of blood, or choke or drown. The fact that many of these people die and stay dead simply reflects “a lack of proper resource allocation, medical knowledge, or sufficient advancement to bring them back,” Parnia says.  

Borjigin’s hope is to eventually understand the dying process “second by second.” Such discoveries could not only contribute to medical advancements, she says, but also “revise and revolutionize our understanding of brain function.”

Link to the rest at MIT Technology Review

Writing Stories to Seek Answers to Life’s Thorny Questions

From Writers Digest:

Write what you know. As a novelist, I’d argue that adage is bad advice. None of the 30-plus romance and romantic suspense novels I’ve written over the last 15 years would’ve been possible if I’d abided by it. I’ve never been Amish. I’ve never been stalked by a serial killer. I have, however, been diagnosed with stage 4 ovarian cancer. For the past eight years I’ve been living with a terminal disease. It’s a club to which no one wants to belong, and the members can’t leave (only expire). It’s been a long season of fear, anxiety, anger, despair, and loss of faith, but also memories made, joy, peace, silliness, laughter, faith increased, and even hope.

The statistics for late-stage ovarian cancer are grim. Less than 20 percent of women diagnosed at stage 4 live past five years. It’s the most deadly gynecological cancer. Yet here I am. I’ve survived my expiration date. Why? That’s the big question that haunts me. Why me and not Anna Dewdney, who wrote the children’s picture book series Llama Llama my kids loved growing up? Why me and not the woman sitting in the next pew at church?

I’ve read that writing stories is one way we seek answers to questions that otherwise seem impossible to resolve. I decided to write a novel exploring the journey of two sisters—one an oncologist and the other a kindergarten teacher diagnosed with ovarian cancer. Let them figure out whether there’s some hidden meaning I’m supposed to find in this grueling marathon. The Year of Goodbyes and Hellos was the easiest and hardest novel I’ve ever written. The first draft of the 110,000-word tome took about eight months. The words poured out. I couldn’t write fast enough.

It helped that I’d already done the technical research in the form of seven years of CT scans, PET scans, lab work, surgery, oncology appointments, literally hundreds of hours spent in cancer clinics and infusion rooms, and hundreds of hours spent poring over the latest research into new treatments and clinical trials. I speak the language. I have a storehouse of memories of healthcare workers, including doctors, nurses, medical assistants, technicians, and admin staff, ranging from eccentric and wonderful to downright mean. My Russian oncologist with her garden and her crazy dog stories is my favorite.

I used this on-the-job training to get the technical details right. That turned out to be the easy part. Taking out the memories and holding them up to the light proved to be much harder. Starting in January 2016, when I came back to the office after Christmas break to find a voice mail message from my oncologist saying a “benchmark” chest CT scan had revealed masses in the lining of my lungs. CT scans can be wrong, I thought, WebMD says so.

Then came the PET scan. And the waiting—which turned out to be standard throughout this journey. Waiting and more waiting. I paced outside the clinic for two hours, waiting to see the oncologist. Finally, in that freezing exam room, she uttered the words “ovarian cancer.”

The first of many memories. Surgery to remove my female body parts. Losing my hair—twice. Chemo, remission, progression, chemo, remission, progression. Round and round went the merry-go-round. And now I’m in a Phase I clinical trial with all the side effects and all the uncertainty.

But I also sifted through the memories for the gold nuggets. The birth of a grandchild. A 35th wedding anniversary spent in Costa Rica. Christmases. Birthdays. Spring days. Hummingbirds. Peanut butter toast. Writing stories.

I discovered life is still good. That I can have cancer and still hang on to my faith. That God is still good and I’m still here.

Why? To write this book? Maybe. It’s a universal story to which readers will relate. We all have loved ones we’re afraid of losing—regardless of the cause. Most of us will reach a point where we have to concede that time is, in fact, finite. That we are not immortal. Perhaps reading this story will provoke thought and make the conversations that follow easier in some small way.

Link to the rest at Writer’s Digest

If Scientists Were Angels

From The New Atlantis:

Francis Bacon is known, above all, for conceiving of a great and terrible human project: the conquest of nature for “the relief of man’s estate.” This project, still ongoing, has its champions. “If the point of philosophy is to change the world,” Peter Thiel posits, “Sir Francis Bacon may be the most successful philosopher ever.” But critics abound. Bacon stands accused of alienating human beings from nature, abandoning the wisdom of the ancients, degrading a philosophy dedicated to the contemplation of truth, and replacing it with something cruder, a science of power.

In The Abolition of Man, C. S. Lewis goes so far as to compare Bacon to Christopher Marlowe’s Faustus:

You will read in some critics that Faustus has a thirst for knowledge. In reality, he hardly mentions it. It is not truth he wants … but gold and guns and girls. “All things that move between the quiet poles shall be at his command” and “a sound magician is a mighty god.” In the same spirit Bacon condemns those who value knowledge as an end in itself: this, for him, is to use as a mistress for pleasure what ought to be a spouse for fruit. The true object is to extend Man’s power to the performance of all things possible.

Lewis draws the final phrase of this critique from Bacon’s New Atlantis, the 1627 utopian novella from which this journal takes its name. But why would a publication like The New Atlantis, dedicated to the persistent questioning of science and technology, name itself after a philosopher’s utopian dreams about magicians on the verge of becoming mighty gods?

According to the journal’s self-description on page 2 of every print issue, this is not the whole story. Bacon’s book raises questions about the moral and political difficulties that accompany the technological powerhouse it depicts, even if it “offers no obvious answers.”

Perhaps it seduces more than it warns. But the tale also hints at some of the dilemmas that arise with the ability to remake and reconfigure the natural world: governing science, so that it might flourish freely without destroying or dehumanizing us, and understanding the effect of technology on human life, human aspiration, and the human good. To a great extent, we live in the world Bacon imagined, and now we must find a way to live well with both its burdens and its blessings. This very challenge, which now confronts our own society most forcefully, is the focus of this journal.

The fact is, people have been puzzling over Bacon’s uncanny utopia for four hundred years without being able to pin it down. The reason for this is simple: We’ve been reading it wrong. Bacon’s New Atlantis is not an image of things hoped for or of things to come. It is an instructive fable about what happens when human beings stumble across the boundary between things human and things divine, a story about fear, intimidation, and desire.

Human beings have always lusted after knowledge, specifically that knowledge which promises to open our eyes so that we might become like gods. Bacon did not invent or ignite this desire, but he did understand it better than most.

In form, Francis Bacon’s New Atlantis is modeled loosely on Thomas More’s Utopia. A ship full of European sailors lands on a previously unknown island in the Americas where they find a civilized society in many ways superior to their own. The narrator describes the customs and institutions of this society, which in Bacon is called “Bensalem,” Hebrew for “son of peace.” Sometimes Bacon echoes, sometimes improves upon, More’s earlier work. But at the end of the story, Bacon turns to focus solely on the most original feature of the island, an institution called Solomon’s House, or the College of the Six Days Works.

This secretive society of natural philosophers seeks nothing less than “the effecting of all things possible,” as C. S. Lewis duly notes. Bacon devotes a quarter of the total text of New Atlantis to an unadorned account of the powers and insights the philosophers in Solomon’s House have. Then the work ends abruptly with no account of the sailors’ trip home or the results of their discovery. The story ends mid-paragraph, with a final line tacked on at the end: “The rest was not perfected.”

What is the meaning of this tale? The first and simplest answer was given by William Rawley, Bacon’s chaplain, who was responsible for publishing New Atlantis after Bacon’s death. He wrote in his preface to the work: “This fable my Lord devised, to the end that he might exhibit therein a model or description of a college instituted for the interpreting of nature and the producing of great and marvellous works for the benefit of men….” The founders of the Royal Society, Great Britain’s famous scientific academy, seem to have had a similar idea a few decades later: Bacon “had the true Imagination of the whole Extent of this Enterprise, as it is now set on foot.”

Link to the rest at The New Atlantis

American Visions

From The Wall Street Journal:

“Brace your nerves and steel your face and be nothing daunted,” an Irish immigrant named John Stott wrote on the back of the trans-Atlantic tickets he was sending to folks back home in 1848, hoping that they, too, would come to America, “this Great Continent.” He added: “There will be dificultyes to meet with but then consider the object you have in view.” The last sentence could serve as a motto for Edward L. Ayers’s “American Visions,” a sweeping, briskly narrated history of the United States as it limped its circuitous way to the Civil War.

There’s much in Mr. Ayers’s book about those “dificultyes.” During the first half of the 19th century, the country tripled in size. In 1848, at the end of a costly war that brought half of Mexico into the national fold, President James Polk exulted: “The United States are now estimated to be nearly as large as the whole of Europe.” As trans-Atlantic voyages shortened from multiple weeks to a handful of days, space in the urban centers came at a premium. In New York, notes Mr. Ayers, boarding houses were soon packing five to a room. Many of these new Americans were Irish (more than 700,000 arrived between 1846 and 1850 alone)—too many, in the eyes of native-born citizens. “They come not only to work & eat, or die,” worried the Philadelphia lawyer Sidney George Fisher, “but to vote.”

More people didn’t mean more freedoms. Mr. Ayers is clear-eyed about the violence that troubled the early republic. He reminds us not only of the thousands of Seminole lives lost when they were forcibly removed from Florida but also of the price paid by the soldiers who were sent into malaria-ridden swamps on President Andrew Jackson’s orders (the whole operation would end up costing, Mr. Ayers writes, a trillion dollars in today’s currency). Switching between local events and national developments, “American Visions” captures the growing rift in the fragile national fabric: As Southern racial attitudes became entrenched, Northerners settled into unearned smugness about their own moral superiority. Sometimes, confronted with the stories retold in this book, one wonders how people carried on at all. The abolitionist Elijah Lovejoy, standing before the charred body of a lynched man, felt that part of his life had ended at that point, too: “As we turned away, in bitterness of heart we prayed that we might not live.”

Undeterred, Mr. Ayers describes himself, in his introduction, as an “optimistic person.” And his “American Visions,” despite the jaundiced eye it casts over much of the republic’s early history, is an inspiring book, promoting a sturdy sense of patriotism—one that, aware of the nation’s failings, remembers its “highest ideals of equality and mutual respect.”

In “American Visions,” by contrast, Mr. Ayers focuses less on the observers than on those who took matters into their own hands. In 1827, Jarena Lee, a young widow and member of Philadelphia’s African Methodist Episcopal Church, traveled across 2,325 miles to deliver 178 sermons in churches, barns and meeting halls. “Why should it be thought impossible, heterodox, or improper, for a woman to preach?” she asked defiantly. The similarly restless John Chapman (1774-1845), a Swedenborgian and the model for “Johnny Appleseed,” roamed the country, barefoot and dressed in rags, giving the gift of apple trees (and perhaps also of hard cider) to struggling farmers, reminding them of God’s presence in nature. Rather than await divine validation, the Massachusetts reformer Dorothea Dix (1802-87) successfully lobbied for better facilities to house those considered insane, becoming, in her words, “the Revelation of hundreds of wailing, suffering creatures.”

But “American Visions” has moments of hilarity, too, particularly when a characteristically American inclination to bring things down to earth—Mark Twain would become its consummate chronicler—manifests itself in the face of lofty rhetoric. In 1843, the multitalented Samuel F.B. Morse persuaded Congress to fund, to the tune of $30,000 (about $1.2 million in today’s money), a telegraphic test line between Washington, D.C., and Baltimore: 38 miles of wire strung along the Baltimore & Ohio Railroad. On May 24, 1844, Morse sat down in the chambers of the Supreme Court to transmit his first long-distance message, a sonorous invocation suggested by a friend: “What hath God wrought!” Drawn from Numbers 23:23, the phrase seemed appropriate, the weight of biblical wisdom translated into Morse’s dots and dashes, a prediction of future national greatness. The response that came in from Baltimore was underwhelming: “Yes.” Indeed, as Mr. Ayers points out, it would take years of innovation before the telegraph supplied the “sensorium of communicated intelligence” the newspapers had envisioned.

“American Visions” beautifully shows how remarkably resilient dreams of a better republic remained even in the darkest of times. When he heard talk of America’s “manifest destiny,” the elderly Albert Gallatin, formerly Thomas Jefferson’s Treasury secretary, didn’t mince words: “Allegations of superiority of race and destiny are but pretences under which to disguise ambition, cupidity, or silly vanity.” Resistance against exclusive definitions of community was, Mr. Ayers contends, in the American grain. When the Mormon prophet Joseph Smith founded Nauvoo, his holy city in Illinois, he insisted that the new settlement was open to “persons of all languages, and of every tongue, and of every color.”

The big-tent approach is also Mr. Ayers’s method. It is refreshing to encounter a book that gives equal billing to Stephen Foster’s “Oh! Susanna” and Nathaniel Hawthorne’s “The Scarlet Letter.” Yet Mr. Ayers’s inclusiveness comes with value judgments, too. Most will agree that Edgar Allan Poe was a genius, but was he “the most brilliant writer in the United States,” surpassing Herman Melville or Walt Whitman? And was Henry Wadsworth Longfellow’s 1855 epic, “The Song of Hiawatha,” really little more than a flowery ersatz Native American fantasy “larded . . . with footnotes”? A closer look reveals the commendable effort Longfellow made (in endnotes, for what it’s worth) to document the Ojibwe words that laced his lines.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Social media overload, exhaustion, and use discontinuance

From ScienceDirect:

Abstract

While users’ discontinuance of use has posed a challenge for social media in recent years, there is a paucity of knowledge on the relationships between different dimensions of overload and how overload adversely affects users’ social media discontinuance behaviors. To address this knowledge gap, this study employed the stressor–strain–outcome (SSO) framework to explain social media discontinuance behaviors from an overload perspective. It also conceptualized social media overload as a multidimensional construct consisting of system feature overload, information overload, and social overload. The proposed research model was empirically validated via 412 valid questionnaire responses collected from Facebook users. Our results indicated that the three types of overload are interconnected through system feature overload. System feature overload, information overload, and social overload engender user exhaustion, which in turn leads to users’ discontinued usage of social media. This study extends current technostress research by demonstrating the value of the SSO perspective in explaining users’ social media discontinuance.

Introduction

Facebook is probably one of the most successful information systems (IS) applications offered to users. Although its users’ passion remains strong, and the number of its global monthly active users keeps growing, Facebook is also facing the challenge of user discontinuance in a competitive environment. According to a report from the Pew Research Center (2018), 26% of Facebook users had deleted the Facebook app from their mobile phones in 2017.

IS user behaviors, such as IS use and continuance, have been extensively studied in the IS field in recent years. This line of research helps explain why individuals use particular systems. Recently, IS discontinuance has attracted the attention of scholars, leading to research examining why individuals might stop using specific systems. A few scholars have examined the drivers of discontinued IS usage intentions for social media users from various perspectives. For instance, social cognitive theory has been applied to illustrate discontinuous usage intentions on Facebook by testing the relationships among self-observation, judgmental process and behaviors (Turel, 2015). Social support theory has also been used to examine social media discontinuance by investigating the negative direct impact of social overload on social media discontinuance intentions (Maier et al., 2015a). Discontinuance intention refers to a user’s intention to change behavioral patterns by reducing usage intensity or even taking the radical step of suspending their behaviors (Maier et al., 2015a). Notably, whilst existing studies focus on discontinuous intentions as the outcome, discontinuous usage behaviors — referring to the next stage of discontinuous intentions in the IS lifecycle — have received little attention to date (Furneaux & Wade, 2011; Maier et al., 2015b). Specifically, there is limited knowledge concerning the psychological mechanisms underlying social media discontinuance behaviors. Uncovering these mechanisms is important because IS developers are keen to understand why users abandon their systems.

One potential reason for IS use discontinuance is exhaustion from overload (users’ weariness from the demands of their IS usage), which can manifest in different forms. First, to meet users’ needs or profitability goals, social media is constantly adding or updating features. Individual users can find it hard to adapt to new functions or interfaces, and thus they perceive a system feature overload (Zhang et al., 2016). Second, individual users spend considerable time processing information on social media, which includes irrelevant information like gossip, spam, rumors and forced content. This in turn can increase users’ information overload (Zhang et al., 2016). Third, the number of individual users’ social media friends increases with the popularity of social media. Individual users have to interact with their contacts on social media to show that they care about them, which can involve reading their posts, answering their questions or helping with their problems. Users need to give a lot of social support to their contacts on social media, but offering them too much social support might lead to social overload (Maier et al., 2015a). Maier et al. (2015a) found that some individuals experience social overload in their social media use, and they argued that social overload is an explanation for social media discontinuance.

However, studies on the different dimensions of overload remain scarce. Little is known about how the different types of overload (such as system feature overload, information overload, and social overload) as stressors lead to users’ social media discontinuance behaviors. Specifically, the ways in which the different dimensions of overload are interconnected remain unaddressed. This work offers an important extension of current research in the form of a detailed theoretical understanding of the psychological mechanisms underlying social media discontinuance.

To address the above gap in research, this study applies the stressor–strain–outcome (SSO) perspective to investigate the relationships between the different dimensions of social media overload and how different types of overload can relate to discontinued Facebook use. More specifically, this study extends Maier et al.’s (2015a) study by investigating a set of distinct types of overload (system feature, information and social overload) instead of only social overload as stressors. Furthermore, this study extends Zhang et al.’s (2016) study by examining the relationships between different compositions of overload, which provides a deeper understanding of the role of overload in explaining discontinued usage. The proposed research model was empirically validated in the context of Facebook using 412 valid responses from Facebook users collected via a survey questionnaire. The findings yield two key contributions. The first contribution concerns the compositions of system feature, information and social overload and the empirical validation of their relationships. The second contribution is the understanding of social media discontinuance from an overload perspective enabled by the SSO framework.
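
For readers curious how a stressor–strain–outcome model like this is typically tested, the sketch below shows one way the path structure described in the abstract (system feature overload feeding information and social overload, all three overloads feeding exhaustion, and exhaustion feeding discontinuance) could be specified as a structural equation model. This is a minimal illustration only: the semopy library, the placeholder indicator names (sfo1, io1 and so on) and the CSV file name are assumptions, not details from the paper, whose actual measurement items and estimation procedure are not given in the excerpt.

import pandas as pd
from semopy import Model

# Path model following the SSO structure summarized above:
#   system feature overload -> information overload and social overload
#   all three overloads -> exhaustion
#   exhaustion -> discontinuance
# Indicator names (sfo1 ... dc3) are placeholders, not the study's items.
MODEL_SPEC = """
SystemFeatureOverload =~ sfo1 + sfo2 + sfo3
InformationOverload =~ io1 + io2 + io3
SocialOverload =~ so1 + so2 + so3
Exhaustion =~ ex1 + ex2 + ex3
Discontinuance =~ dc1 + dc2 + dc3

InformationOverload ~ SystemFeatureOverload
SocialOverload ~ SystemFeatureOverload
Exhaustion ~ SystemFeatureOverload + InformationOverload + SocialOverload
Discontinuance ~ Exhaustion
"""

# Hypothetical survey file, one row per respondent (412 in the study).
data = pd.read_csv("overload_survey.csv")

model = Model(MODEL_SPEC)
model.fit(data)
print(model.inspect())  # estimated path coefficients, standard errors, p-values

Significant positive coefficients on the exhaustion and discontinuance paths would correspond to the pattern the abstract reports.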

The manuscript consists of eight sections, inclusive of the introduction. The next section reviews extant literature on IS discontinuance, the SSO framework, social media exhaustion, and overload.

. . . .

IS discontinuance

IS discontinuance has been widely studied in IS literature as a post-adoption behavior (Shen, Li, & Sun, 2018), and it refers to a user-level decision to abandon or reduce the use of an IS (Parthasarathy & Bhattacherjee, 1998). Discontinuance and continuance have often been considered the two sides of IS use (Turel, Connelly, & Fisk, 2013). More recent research has theorized that IS discontinuance is a distinct behavior, not simply the opposite of IS continuance (Cao & Sun, 2018; Maier et al.

. . . .

Conclusions

The dark side of IS usage has attracted the attention of scholars in recent years. The demands of IS use affect individual users and societies at large. Such demands are among the key reasons for the IS discontinuance intentions of users. Individuals may simply feel exhausted due to IS use. Researchers have called for more research on this topic and stressed the importance of understanding the fundamental mechanisms that underlie such negative outcomes.

Link to the rest at ScienceDirect

PG expects that many, if not all, visitors to TPV have likely OD’d on social media on one or more occasions.

Unfortunately, Elsevier’s Science Direct provides only an overview of the original publication for those without a paid subscription. Any visitors to TPV who are employees of universities, colleges, research organizations, etc., may be able to sign in to Elsevier or a similar overpriced online repository and read the entire study and, additionally, many, if not all, additional materials referenced in the paper.

Doomscrolling or doomsurfing is the act of spending an excessive amount of time reading large quantities of negative news online. In 2019, a study by the National Academy of Sciences found that doomscrolling can be linked to a decline in mental and physical health.

Wikipedia

My Mother, My Novel, and Me

From Publishers Weekly:

The first time my mom was diagnosed with cancer, I was writing a novel about immortality. She had gone to the doctor after discovering a sizable lump on the side of her stomach. A biopsy revealed the worst possible news: colon cancer. They could remove the lump surgically, but given its size, the doctors suspected it might be too late.

I could not imagine the world without my mom. I told myself she would beat this. My mom had always defied the odds. Born Black and poor to illiterate parents in the South during Jim Crow, she was the only member of her immediate family who graduated from high school. She then earned a full scholarship to college and became a teacher, effectively pulling herself out of poverty and raising me in a safe, middle-class community. She was the toughest person I knew.

My mom’s surgery went well, and the pathology report was better than expected: with the tumor removed, there was no cancer in her body. It felt significant, somehow, that I was writing a novel about immortality and my mom had staved off death, as though the novel was a good luck charm or talisman. I poured everything I had into the book, thanking it for its role in saving my mom.

Two years later, the cancer returned. It had spread to my mom’s stomach. The doctors suggested palliative chemo; they could stop the cancer from spreading to buy her time, but they could not cure her.

But my mom decided not to undergo chemo. Instead of five years, she would have maybe two and a half. I couldn’t accept this, and for the next several months my mom and I fought constantly. I cried. I begged. I guilt-tripped. I made rational arguments and emotional appeals. Nothing would sway her.

With my mom’s diagnosis, finding a publisher for my novel felt more urgent. Her fate and the novel’s had become inextricably linked in my mind—if one made it, the other would, too. I began sending the novel out to a series of independent presses, desperate to find a home for it quickly.

In my novel, immortality comes about suddenly, abruptly changing the fates of people who had previously been diagnosed with terminal diseases. While I didn’t believe immortality was around the corner in real life, I saw articles about new experimental treatments that were extending lives or curing some forms of cancer.

I redoubled my efforts to convince my mom to get chemo, but she was unyielding. She was completely asymptomatic and knew that chemo could come with side effects that would lower her quality of life. Eventually, I realized I had to respect my mom’s wishes—lest I lose her long before she died.

Link to the rest at Publishers Weekly

Higher Ed Has Become a Threat to America

From The Wall Street Journal:

America faces a formidable range of calamities: crime out of control, borders in chaos by design, children poorly educated while sexualized and politicized against parental opposition, unconstitutional censorship, a press that does government PR rather than oversight, our institutions and corporations debased in the name of “diversity, equity and inclusion”—and more. To these has been added an outbreak of virulent antisemitism.

Every one of these degradations can be traced wholly or in large part to a single source: the corruption of higher education by radical political activists.

Children’s test scores have plummeted because college education departments train teachers to prioritize “social justice” over education. Censorship started with one-party campuses shutting down conservative voices. The coddling of criminals originated with academia’s devotion to Michel Foucault’s idea that criminals are victims, not victimizers. The drive to separate children from their parents begins in longstanding campus contempt for the suburban home and nuclear family. Radicalized college journalism departments promote far-left advocacy. Open borders reflect pro-globalism and anti-nation state sentiment among radical professors. DEI started as a campus ruse to justify racial quotas. Campus antisemitism grew out of ideologies like “anticolonialism,” “anticapitalism” and “intersectionality.”

Never have college campuses exerted so great or so destructive an influence. Once an indispensable support of our advanced society, academia has become a cancer metastasizing through its vital organs. The radical left is the cause, most obviously through the one-party campuses having graduated an entire generation of young Americans indoctrinated with their ideas.

And there are other ways. Academia has a monopoly on training for the most influential professions. The destructive influence of campus schools of education and journalism already noted is matched in the law, medicine, social work, etc. Academia’s suppression of the Constitution causes still more damage. Hostility to the Constitution leads to banana-republic shenanigans: suppression of antigovernment speech, the press’s acting as mouthpiece for government, law enforcement used to harass opponents of the government.

Higher education by and for political radicals was foreseen and banned by the American Association of University Professors, which in a celebrated 1915 policy statement warned teachers “against taking unfair advantage of the student’s immaturity by indoctrinating him with the teacher’s own opinions.” The AAUP already understood that political indoctrination would stamp out opposing views, which means the end of rational analysis and debate, the essential core of higher education. The 1915 statement is still a recognized professional standard—except that almost everywhere it is ignored, at least until the public is looking.

Optimists see signs of hope in growing public hostility to campus foolishness, but radical control of the campuses becomes more complete every day as older professors retire and are replaced by more radicals. A bellwether: The membership of the National Association of Diversity Officers in Higher Education—which represents the enforcers of radical orthodoxy—has tripled in the past three years.

An advanced society can’t tolerate the capture of its educational system by a fringe political sect that despises its Constitution and way of life. We have no choice: We must take back control of higher education from cultural vandals who have learned nothing from the disastrous history of societies that have implemented their ideas.

. . . .

Personnel is policy. Effective reform means only one thing: getting those political activists out of the classrooms and replacing them with academic thinkers and teachers. (No, that isn’t the same as replacing left with right.) Nothing less will do. Political activists have been converting money intended for higher education to an unauthorized use—advancing their goal of transforming America. That is tantamount to embezzlement. While we let it continue we are financing our own destruction as a society.

But how can we stop them? State lawmakers can condition continued funding on the legitimate use of that money and install new campus leadership mandated to replace professors who are violating the terms of their employment. Though only possible in red states, this would bring about competition between corrupt institutions and sound ones. Employers would soon notice the difference between educated and indoctrinated young people. Legislatures in Florida, Texas and North Carolina have begun to take steps to reform their universities, but only at Florida’s New College is a crucial restructuring of the faculty under way.

But the only real solution is for more Americans to grasp the depth of the problem and change their behavior accordingly. Most parents and students seem to be on autopilot: Young Jack is 18, so it’s time for college. His family still assumes that students will be taught by professors who are smart, well-informed and with broad sympathies. No longer. Professors are now predominantly closed-minded, ignorant and stupid enough to believe that Marxism works despite overwhelming historical evidence that it doesn’t. If enough parents and students gave serious thought to the question whether this ridiculous version of a college education is still worth four years of a young person’s life and tens or hundreds of thousands of dollars, corrupt institutions of higher education would collapse, creating the space for better ones to arise.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Opportunists and Patriots

From The Wall Street Journal:

In the aftermath of the American Revolution, the concepts of loyalty and legitimacy emerged fitfully and confusingly. Under the monarchy, these concepts had been straightforward: All subjects were united by their loyalty to the sovereign, whose will was their command. But in founding a republic on principles at once lofty and vague, the Founders created a problem that vexes us still. If a nation is defined by its commitment to shared ideals, who draws the line between a difference of opinion and a difference of principle? Where does loyal opposition end and treason begin? What distinguishes the transgressions of a demagogue from the enraged voice of the people?

Today we rely on nearly 250 years of shared history and tradition to navigate the vague boundaries suggested by these unanswerable questions—and yet we can hardly keep from leaping at one another’s throats. The Founders built an arena of partisan politics without grasping the full fury of the beast they had unleashed within it.

“Founding Partisans” by H.W. Brands and “A Republic of Scoundrels,” a collection of essays edited by David Head and Timothy Hemmis, are as different as two books on the founding can be. But each captures the moral confusion of the era, when the rules of democratic politics were still unwritten and everything seemed up for grabs.

Mr. Brands, a prolific historian and a professor at the University of Texas, provides a brisk account of the controversies that first divided the heroes of the Revolution. He begins with the Federalists’ effort to replace the Articles of Confederation with a stronger national government and concludes with the Jeffersonian Republicans’ repudiation of the Federalists in the election of 1800, the first transfer of power in U.S. history.

Though the Federalists organized themselves into a national political party, they didn’t understand themselves as one. Political parties, or “factions,” to use the Founders’ term, were understood as regrettable evils. They existed to serve narrow or sinister interests. An organized political party was thus, by definition, “opposed to the general welfare,” Mr. Brands writes.

The Founders hoped that the Constitution would suppress the influence of factions, but they assumed that virtuous leaders (namely, themselves) would naturally agree with one another. The discovery that so many leading figures disagreed on important matters came as a shock. Each side in this deepening divide began to see their opponents as a menace to the republic.

The failure to anticipate the pull of partisanship was nowhere more evident than in the Constitution’s provisions for electing the president: Each appointed elector, chosen by the states in a manner determined by their legislatures, would vote for two people, at least one of whom could not inhabit the elector’s own state. The thinking was that electors would name a local favorite on the first ballot and, on the second, the worthiest citizen throughout the land. The runner-up would be vice president.

This process, reflecting a hope that Americans would ultimately choose a leader independent of faction, was incompatible with an election in which a candidate would be supported by a party against his rivals. Sure enough, in 1800 the Democratic-Republicans voted in lockstep for Thomas Jefferson as their president and Aaron Burr as vice president. But no one thought to ensure that Burr received at least one fewer electoral vote. The result was a tie, allowing the defeated Federalists in the House to decide who would be president. Burr slyly advertised that he was willing to make a deal with his adversaries.

The crisis passed, thanks to Alexander Hamilton’s intervention. In this sense, the outcome seemed to vindicate the Founders’ hope that virtuous leaders would combine against conniving partisans. But it was a close-run thing.

Mr. Brands follows countless other historians in providing a blow-by-blow account of the nation’s first experience with partisan combat, though not a single historian is cited in the text or notes. He relies instead on the Founders’ own words to capture the controversies in which they participated. This choice gives his narrative an immediacy that heavy-handed analysis often diminishes. Indeed, “Founding Partisans” reads less like a work of history than a journalist’s insider account of high politics, except here the intemperate, backbiting quotations come from sources who are safely dead rather than anonymous.

Link to the rest at The Wall Street Journal

You Can’t Unsubscribe From Grief

From Electric Lit:

Replying All on the Death Announcement Email

On New Year’s Day, I got an email from an old writer friend announcing plans to end her life. Her life was already ending. This expedited ending-of-life had been approved by a medical professional. She was electing to die with dignity. Her death was scheduled for the following day. Like a hair appointment or a visit to the dentist.

It wasn’t an email directly to me. I subscribe to her newsletter.

Farewell, the subject line read. That was her voice. Grand and direct. There was no beating around the bush. Happy New Year! the email began and then: I’m planning to end my life.

After I closed the email, I tried to stop thinking about her, but that night, on the eve of when I knew she was going to die, I couldn’t sleep. I googled her name, read every article that appeared on my screen. Read all the hits that weren’t actually about her. The ones with her name crossed out that the algorithm insisted were relevant. Maybe it knew something I didn’t.

I read about all the diseases I was probably suffering from that had nothing to do with her (or the disease that was killing her), I read about all the new diet trends that would shed my hips of love handles (I hadn’t seen her since she got sick, but in her last photo she was rail thin), I read about a minor celebrity cheating on another minor celebrity and then them reconciling and then them breaking up and then them getting back together again (she loved the thrill of gossip)—I read everything in the hopes of catching a glimpse of my soon-to-be dead old writer friend.

A week later, I got an email from a literary magazine announcing the death of its co-founder. I did not know its co-founder. I just subscribed to the newsletter.

I read the announcement from the literary magazine as if it were the announcement of the death of my old writer friend because after she died, I didn’t receive such an email. Because she was not here to write one. Or to send one. Though she could’ve scheduled one. Which is a thought I’ve had more than once since her death. Why didn’t she do that? That would’ve felt so like her. Not so fast, it might’ve read. I’m still here.

After the newsletter announcing the death of the literary magazine co-founder, my inbox was flooded.

I am so sorry to hear this. May you and yours find comfort. Keep him close to your heart.

I didn’t email anyone when my old writer friend died because it felt like I didn’t know her well enough. We met at a writing residency in Wyoming in 2016. We watched the presidential election together: I baked cookies, she bought liquor. We only inhabited the same space for a handful of weeks. So, how can I justify the vacuum suck of losing her?

The day after the election, we sat at a kitchen table and talked about our bodies. About who they belonged to. About culpability. I remember us disagreeing. The strangeness of feeling so connected to each other and then realizing, suddenly, that we may not actually know each other.  

I cannot keep the literary magazine co-founder close to my heart because I did not know him at all.

Life is eternal! Your memories are the tap that keeps him living!

I think my old writer friend would’ve liked the idea of tapping a memory, like a keg or a maple tree.

Link to the rest at Electric Lit

The Desolate Wilderness

From an account of the Pilgrims’ journey to Plymouth in 1620, as recorded by Nathaniel Morton – Via The Wall Street Journal’s editorial page each Thanksgiving day:

Here beginneth the chronicle of those memorable circumstances of the year 1620, as recorded by Nathaniel Morton, keeper of the records of Plymouth Colony, based on the account of William Bradford, sometime governor thereof:

So they left that goodly and pleasant city of Leyden, which had been their resting-place for above eleven years, but they knew that they were pilgrims and strangers here below, and looked not much on these things, but lifted up their eyes to Heaven, their dearest country, where God hath prepared for them a city (Heb. XI, 16), and therein quieted their spirits.

When they came to Delfs-Haven they found the ship and all things ready, and such of their friends as could not come with them followed after them, and sundry came from Amsterdam to see them shipt, and to take their leaves of them. One night was spent with little sleep with the most, but with friendly entertainment and Christian discourse, and other real expressions of true Christian love.

The next day they went on board, and their friends with them, where truly doleful was the sight of that sad and mournful parting, to hear what sighs and sobs and prayers did sound amongst them; what tears did gush from every eye, and pithy speeches pierced each other’s heart, that sundry of the Dutch strangers that stood on the Key as spectators could not refrain from tears. But the tide (which stays for no man) calling them away, that were thus loath to depart, their Reverend Pastor, falling down on his knees, and they all with him, with watery cheeks commended them with the most fervent prayers unto the Lord and His blessing; and then with mutual embraces and many tears they took their leaves one of another, which proved to be the last leave to many of them.

Being now passed the vast ocean, and a sea of troubles before them in expectations, they had now no friends to welcome them, no inns to entertain or refresh them, no houses, or much less towns, to repair unto to seek for succour; and for the season it was winter, and they that know the winters of the country know them to be sharp and violent, subject to cruel and fierce storms, dangerous to travel to known places, much more to search unknown coasts.

Besides, what could they see but a hideous and desolate wilderness, full of wilde beasts and wilde men? and what multitudes of them there were, they then knew not: for which way soever they turned their eyes (save upward to Heaven) they could have but little solace or content in respect of any outward object; for summer being ended, all things stand in appearance with a weatherbeaten face, and the whole country, full of woods and thickets, represented a wild and savage hew.

If they looked behind them, there was a mighty ocean which they had passed, and was now as a main bar or gulph to separate them from all the civil parts of the world.

Link to the rest at The Wall Street Journal

Today is Thanksgiving Day, a major holiday in the United States, so PG won’t be making any more posts today. He’ll be back tomorrow after his body finishes processing an overdose of tryptophan from today’s turkey dinner with extended family.

The Tory’s Wife

From The Wall Street Journal:

The Revolutionary War liberated Americans from the oppressive colonial rule of the British, but as the postwar era began, it wasn’t clear how much women benefited from the hard-won freedoms accorded to men. Wives were still subject to the English common law of coverture, which gave husbands control of their property; women had next-to-no political rights. Had the Revolution changed anything?

Cynthia Kierner, a professor of history at George Mason University, examines this question in “The Tory’s Wife.” Her short, readable volume recounts the story of Jane Welborn Spurgin, a farm wife and mother of 13, in the backwoods of North Carolina. Other than the fact that she was literate, there was nothing particularly notable about Jane. She didn’t move in elite circles, and she certainly wasn’t famous. Yet she publicly claimed her rights as a citizen of the new American republic in a series of petitions to the state legislature. Ms. Kierner’s focus on Jane stems from her scholarly interest in how the war drew ordinary women into the political sphere.

Jane was a Whig, a patriot who supported the American uprising. She courageously put her patriotism into action when, in early 1781, she provided food and shelter to the American commander in the South, Gen. Nathanael Greene, who set up camp on her family’s property in the North Carolina town of Abbotts Creek. If the British attack, Greene told her, grab the kids and run for the basement. When the general sought her advice in finding a trustworthy person to spy on British Gen. Charles Cornwallis, who was camped nearby, she recommended one of her sons for the dangerous job.

Jane’s husband, William, did not share his wife’s fidelity to the cause of liberty. While Jane was aiding the Continental Army, he was fighting against his fellow North Carolinians as an officer in the Tory militia, for which he had recruited like-minded Loyalists. Before the war, William was a moderately prosperous landowner and a justice of the peace. As war broke out, he was deemed “an Enemy to his Country” by a local political committee, and he spent much of the war in hiding. He disappeared after the American victory, eventually turning up in Canada with a woman he called his wife. The crown rewarded his loyalty with a generous gift of acreage in what is now Ontario.

Jane was not so fortunate. Back in North Carolina, the postwar state legislature passed a bill confiscating Tory property, thus putting her, the wife of a traitor, in danger of losing her family’s home. She refused to move out and set about asserting her ownership rights in three petitions. Referring to herself in the third person, she wrote: “She has always behaved herself as a good Citizen and well attached to the government. She thinks it extremely hard to be deprived of the Common rights of other Citizens.” The “other Citizens” were, of course, men.

For most women, Ms. Kierner writes, “the right to petition was the only political right they formally possessed” under the law. It wasn’t unusual for women to petition for state support for themselves and their children, couching their requests in humble and ingratiating language. Their husbands were dead or missing, and they wanted the state to take on the role of their protector.

In contrast, Jane’s petitions were more direct and less deferential. Ms. Kierner characterizes them as “bold” and distinctive for their “legalism and clarity.” Jane even secured the support of 78 neighbors, who co-signed her second petition in 1788. It is noteworthy, the author says, that Jane “framed her own claim to citizenship in terms of the right to own and protect her family’s property.” That is, she based her case on a traditional understanding of citizenship “as deriving from one’s material stake in society.”

As “The Tory’s Wife” opens, Ms. Kierner warns readers not to expect a conventional biography. Official records yield little information about ordinary women like Jane. The author paints a fuller portrait of William, who had a public role as a justice of the peace and a notorious Tory.

In noting the paucity of sources that document Jane’s life, Ms. Kierner observes that much of the material is “necessarily contextual and sometimes speculative.” The author turns the lack of factual information to her readers’ advantage, providing often fascinating details about life in rural North Carolina—especially about women who struggled to survive the upheavals of war. She cites the grim punishment of a mother for the crime of teaching her children to support the revolution. A son recalled that she was “tied up and whipped by the Tories, her house burned and property destroyed.”

The Whig-Tory divide wasn’t found only in the Spurgins’ marriage. It was prevalent in the wider society, and Ms. Kierner deftly describes North Carolinians’ bitter and often violent struggles. She quotes Gen. Greene, who wrote: “The whole Country is in danger of being laid waste by the Whigs and the Tories, who pursue each other with as much relentless Fury as Beasts of Prey.” There were times when the divided populace seemed to be fighting a civil war, not a war of independence.

And what of Jane, the book’s putative heroine? The North Carolina legislature eventually awarded her a portion of the land she had demanded without acknowledging the legitimacy of her claims. The settlement delivered security for her and her children if not support for her argument on her rights as a citizen. Ms. Kierner concludes that “the Revolution led Jane, like many other Americans, to confront authority and to reimagine her relationship to the polity and the men who ran it.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Of Hamas and Historical Ignorance

PG Note: This OP is a little closer to politics than PG usually strays. What caught his eye is a significant problem with literacy/reading and the consequences arising therefrom.

From The Dispatch:

In the wake of Hamas’ brutal assault on Israel, campuses across the United States have been home to rallies and demonstrations that are nominally pro-Palestinian but effectively celebrations of the terrorist group. Students at George Washington projected slogans such as “Glory to the Martyrs” on a campus building, a Cornell student was arrested for threatening to rape and kill Jewish students, and numerous campuses have been home to antisemitic assaults and vandalism.

The reaction has highlighted the degree to which we’ve left a generation of youth vulnerable to ludicrous doctrines, social media manipulation, and genuinely bad actors. The shocking support among young adults for Hamas’ assault draws on historic ignorance and crude postmodern notions of justice and victimhood, in which torture and kidnapping were rebranded a justifiable response to “colonial privilege.”

The problem starts well before students arrive at college. The average high school student knows little about American history, and even less about the world. A 2018 survey found that 41 percent of adult Americans couldn’t identify Auschwitz as a Nazi concentration camp. Among millennials specifically, two-thirds couldn’t identify Auschwitz and 22 percent had never heard of the Holocaust. So much for “Never forget.” 

Such findings are of a piece with the abysmal performance of younger students in history and geography on the National Assessment of Educational Progress. In the most recent assessment, of a nationally representative sample of eighth-graders, just 13 percent of students were judged “proficient” in U.S. history and just 22 percent in civics. These results continue a decade-long decline.

As Natalie Wexler, author of The Knowledge Gap, has aptly put it, “You can’t think critically about what you don’t know.” This problem isn’t new. But it’s taken on added urgency in a time of intense polarization, declining academic achievement, ubiquitous social media, and rapidly advancing deepfake technology. 

In 1978, alarmed by test results from poor, minority students at a Richmond community college who were ignorant of foundational historic figures and events, scholar E.D. Hirsch began researching the role of background knowledge in reading comprehension. His 1987 book Cultural Literacy became a surprise bestseller and sparked a push for a more rigorous curriculum. 

But teacher training and schools of education largely rejected Hirsch and clung fast to a progressive consensus that children should learn self-confidence and “skills,” not dates and names. Those engaged in preparing a new generation of teachers rejected Hirsch’s belief in the importance of knowledge as “elitist, Eurocentric and focused too heavily on rote memorization,” as a Virginia magazine profile of the famed UVA professor described. Indeed, a decade later, one of us taught alongside the genial, soft-spoken Hirsch, only to see fellow UVA education school faculty quietly steer their students away from this dangerous figure.

The internet supposedly made learning all those “mere facts” unnecessary, anyway. As scholar Mark Bauerlein recounted in his 2009 book The Dumbest Generation, education professors and advocates enamored of “21st century skills” insisted that we needed to move “beyond facts, skills, and right answers” and that students could always just look up all that other stuff.

The skills-over-facts trend paralleled a push to jettison traditional historical narratives and moral certainties in favor of critical theories. Beginning in the 1980s, Howard Zinn’s enormously influential (if oft-inaccurate) People’s History of the United States recast America’s story as one of unbroken villainy and oppression. It was a finalist for the National Book Award in 1980 and was added to high school curricula across the land.  

The unapologetic aim of Zinn’s work—and that of its latter-day, award-winning imitator the 1619 Project—was not to explore our simultaneously wonderful and woeful history but to impress on young people that America and its allies are oppressive colonial powers (that the U.S. is, according to the architect of the 1619 Project, a “slavocracy”).

As a teacher said to one of us recently regarding developments in Israel and Gaza: “Many kids have little to no understanding of the historical context. I feel overwhelmed trying to explain things to them in a side comment here or there.” This teacher lives 10 miles from the state capital, in a town where the average household earns $130,000. This community, filled with educated parents and well-regarded schools, sends the vast majority of its high school graduates on to four-year colleges.

This teacher knows that geography, history, religion, economics, and philosophy are essential to understanding the context of these attacks. But these are subjects that too few schools teach coherently or consistently. Last year K-12 teachers told RAND that it’s more important for civics education to promote environmental activism than “knowledge of social, political, and civic institutions.” 

Teachers who hold these beliefs are unlikely to give students the knowledge or grounding they need to make sense of the world around them. Indeed, the same teachers told RAND their top two priorities for civics education are “promoting critical and independent thinking” and “developing skills in conflict resolution.” What’s striking is how content-free these responses are.

In 2020, RAND surveyed high school civics teachers about what they thought graduates needed to know. Just 43 percent thought it essential that they know about periods such as “the Civil War and the Cold War.” Less than two-thirds thought it essential for graduates to know the protections guaranteed by the Bill of Rights.

Critical thinking doesn’t happen in a vacuum. There’s no way for anyone to form meaningful independent judgments on what’s unfolding in Israel and Gaza if they haven’t learned much about history, geography, economics, or political systems. 

This is pretty instructive when it comes to understanding, for instance, how Osama bin Laden’s “Letter to America” has recently gone viral on TikTok. It’s not obvious that Gen Z is eagerly searching for wisdom from mass murderers. But as they spend hours casting about social media, youth who know little about the events or aftermath of 9/11 are encountering a long-dead figure who promises to provide the history and moral clarity they’re not getting elsewhere. We’re sending ill-equipped, confused youth out into the wilds of social media, and we’re reaping the unsurprising result.

As academic rigor and traditional norms have retreated, the space has increasingly been filled by moral relativism and contempt for Western civilization. The result is progressive students who hail Hamas as an ally—an odd way to regard theocratic ideologues who are cavalier about rape, murdering homosexuals, and treating women as chattel.

Writing from a Jerusalem university emptied of students by the present conflict, economist Russ Roberts recently observed, “Open societies are going to have to come to terms with the reality” that some citizens “want to live in a very different kind of society and are willing to use violence and the threat of violence to intimidate and harm people they disagree with.”

Link to the rest at The Dispatch

An irregular life

From TLS:

In 1796 a young law clerk called William Henry Ireland published a book under the modestly antiquarian title Miscellaneous Papers and Legal Instruments Under the Hand and Seal of William Shakespeare. Ireland’s cache included a letter from Elizabeth I praising the Sonnets, amorous verses written by Shakespeare to “Anna Hatherrewaye”, a usefully explicit “Profession of Faith”, a manuscript of King Lear and promise of books from Shakespeare’s library, “in which are many books with Notes in his own hand”.

Of course Ireland had made it all up, snaffling old bits of parchment, concocting a dark brown ink and reverse-engineering Shakespeare’s handwriting from the facsimile signatures that Edmond Malone had reproduced in his 1790 edition of the plays. If a Shakespearean document is too good to be true, it almost certainly isn’t, but, as the scams prevalent in our own day show us, many of us are still willing to be tricked by impossible promises of what we most desire. The appeal of Ireland’s con was the promise of explanation, its manufactured archive the means fully to understand the works and their author. His genius was to recognize the biographical itches in Shakespeare studies that need a good scratch: his relationship with his wife; his religion, politics and reading; his methods of working. Little new evidence about Shakespeare’s life has come to light since this brilliant attempt to short-circuit the search: the questions remain unanswered. Ireland’s Miscellaneous Papers were themselves a kind of biography, in which speculation and documentation had become confused.

For Margreta de Grazia, in her lapidary book Shakespeare Without a Life, Ireland’s forgeries embody a new energy around documenting the author and symbolize the disappointments of the era’s hunger for new evidence. Malone’s biographical researches also ended in frustration. His posthumously published Life of Shakespeare listed with icy regret many of the antiquaries, actors and others who might have gathered information about the playwright during the seventeenth century. William Dugdale, Anthony Wood, John Dryden, William Davenant and Thomas Betterton are all fingered for their carelessness, even as Malone acknowledges that “the truth is, our ancestors paid very little attention to posterity”.

Link to the rest at TLS

“Arguments on a daily basis”: How couples who disagree politically navigate news

From Nieman Lab:

If you have a significant other in your life, chances are they probably share your politics: you both lean Democratic or Republican (or independent) together.

Romantic partnerships have long grown out of shared values and attitudes, but polarization has amplified these tendencies. These days, more than ever in the U.S., political sorting is happening more readily across neighborhoods, friendships, and dating and marriage relationships. And as political identity increasingly becomes synonymous with many people’s larger sense of social identity, the stakes for political disagreement seem higher than they once were: Partisan zealots are now more likely to wish the worst on their perceived foes on the other side.

But many couples are not politically in sync with each other, and there has been surprisingly little research about what such “cross-cutting” relationships mean for news consumption and political discussion in such politically mismatched pairings.

Emily Van Duyn of the University of Illinois at Urbana-Champaign offers a first-of-its-kind analysis of this issue in her article “Negotiating news: How cross-cutting romantic partners select, consume, and discuss news together,” published last month in the journal Political Communication.

She sought to address three questions: How do romantic partners in cross-cutting relationships influence each other’s selection and consumption of news? How do those patterns of news selection and consumption shape the conversations about politics and political news that happen between partners? And, ultimately, what does the role of news mean for these cross-cutting couples — is it helpful or harmful to their relationship?

To answer these questions, she conducted in-depth interviews with 67 U.S. residents in cross-cutting relationships. She chose to interview just one individual from each couple, in part because people might not be so candid with their comments if they knew their partner was being interviewed, particularly in cases where people strategically avoid talking about politics to maintain the relationship. Of the 67 interviewees, slightly less than half were married while the others were dating or cohabitating. All but five participants were in opposite-sex relationships.

Van Duyn found that cross-cutting couples deal with two main difficulties when navigating news: negotiated exposure, as couples try to influence news selection and consumption in the relationship, and two-step conflict, in which issues surrounding news — what type, how much, from which sources, etc. — not only led to discussion about politics but also to significant friction between partners.

Consider first the problem of negotiating what news to introduce into the relationship — or whether to avoid news altogether. “For one, the process of selecting and consuming news was especially difficult for cross-cutting romantic partners,” Van Duyn writes, “because it presented a choice that involved recognizing political differences and finding a way to navigate these differences. In turn, who selected news, what was selected, and how those choices were negotiated over time became a political act as much as an interpersonal one.”

For one couple studied, that meant sharing control over what TV news channel was playing during the day: the conservative woman would decide in the morning, and her liberal boyfriend took charge in the afternoon. For others, that meant finding shared news rituals they could both agree on — like watching the evening news on ABC while preparing dinner each night — while allowing space for individual podcast or social media consumption tailored to each partner’s own interests.

And, for others, it meant a pulling away from news and politics altogether. One respondent said that he and his Democratic girlfriend “never share articles” on social media and “never watch the news together at all.” This was not intentional at first, he said, but the avoidance arose gradually out of a worry that sharing articles would incite conflict: “I guess we’re both afraid to bring up politics…I’d say I try to avoid it. I think she does too. So, we kind of both avoid it at all costs just so we don’t get into any arguments whatsoever.”

Link to the rest at Nieman Lab

Perhaps the OP is a character prompt or “couple prompt”.

PG suggests that politicizing everything is a bad idea for society. He also suggests that people who put politics over relationships or inject politics into relationships take politics way too seriously.

PG can honestly say that he has no idea what the political beliefs of his closest friends are. There was a time when injecting politics (or religion) into a social setting was considered bad manners.

Perhaps politics is the new religion for a great many people.

The Dangerous Radicalism of Longing

From The Dispatch:

In a recent episode of Daryl Dixon, the new Walking Dead spinoff, Daryl says, “You can’t miss what you never had.” And for some reason I keep thinking about it. 

According to the internet, this is originally a Hunter S. Thompson quote, though I suspect someone said it before him. This is one of those sayings that sounds profound and wise but isn’t actually true. Or at least to the extent it’s true, it really depends on context and the definitions of “can’t” and “miss.”

I think an enormous number of our problems come from people who miss things they never had. Just off the top of my head: Palestinians miss having a viable country, but have never had one. Lots of people who grew up without siblings, or fathers, or best friends miss having such people in their lives a great deal. People who had bad experiences in high school, or who never went to college, miss things they never had. 

Maybe this helps explain why I say the definitions of “can’t” and “miss” are important here. For the sorts of people described above, missing what you didn’t have is a kind of longing. And people in fact do and can have such longings. “Missing,” it seems, conveys a statement of fact. You actually had something and lost it. “I had a brother. He’s gone. I miss him.” That’s a different statement than “I always wished I had a brother.”

Regardless, such desires are very, very, common. Regret—a good word for combining both “miss” and “longing”—over what might have been, what was lost, or what you never had is one of the most powerful human emotions and one of the greatest drivers of despair. Such feelings are also one of the most powerful motivations for human action. 

The first nationalists of the late 18th and early 19th centuries were full of longing for a nation that had never actually existed. Sometimes they invented an ancient past of national identity and claimed they were seeking to restore what was lost to the Romans or some other conquerors. 

The romantics, who helped create nationalism, played similar games. The idea of the “noble savage” was essentially a kind of unscientific science fiction. It was based on ideas that had no actual basis in anthropological or sociological fact. But it did have a good deal of theological support. Man, before the fall, lived in happy ignorance and harmony with nature. Knowledge or technology or modernity ripped us out of this blissful state. All that was required to return to it was the will to return to it. 

I think that in a very fundamental—and very oversimplified—way, all radicalism stems from these kinds of longings. Karl Marx was very much a romantic, and his vision for the end of history looked very much like Rousseau’s vision of the beginning of history. Once all class consciousness was swept away, once the economic aristocracy was toppled or liquidated, everybody would be able to live in an unconstrained state of natural bliss and autonomy. The Marx-influenced radicals pushing for “national liberation” in the 20th century were not as fully utopian, but they believed that all of the suffering and inequities of their lives could be erased with a cleansing purge of imperial control. The Islamic radicals of Iran and elsewhere believed that all that was required to live in spiritual harmony and happiness was to remove the decadent bulwarks of “Western” liberalism, religious pluralism, secularism, capitalism, etc. 

None of these stories ended well, and many ended horribly. 

But the radicalism such desires can inspire isn’t the problem with missing what you never had.

. . . .

 As families shrink or break down, as the sinew of local communities breaks down, the government is seen as a necessary substitute. No, I don’t think all women should stay at home and rely on their parents or husbands as providers and breadwinners. But in a society where so many biological fathers have little desire to be real fathers or actual husbands, the demand for the state to compensate for what’s missing increases. 

This isn’t just a point about the growth of the welfare state or those darn progressives. It’s just one example of how people miss what they never had—fathers, husbands, healthy families and communities—and look for cheap substitutes for them. As I’ve been saying forever, “The government can’t love you.” But when you lack people who love you, when you lack a sense of community, the hunger remains and you pursue whatever you think might satisfy it. 

Another form of longing drives this tendency: nostalgia, which might be the best rebuttal to the claim, “You can’t miss what you never had.” Nostalgia is one of the most powerful forces in politics and life. I would be lying if I said I wasn’t prone to it. Nostalgia is a neologism coined by a Swiss medical student to describe the melancholy (a medical term back then) felt by Swiss mercenaries who fought far from home. It’s a mashup of sorrow or despair and “homecoming.” It’s come to mean homesickness for the past. 

The problem with nostalgia, at least in politics and economics, is that it is a highly selective remembering—and misremembering—of the past. We tend not only to emphasize the good stuff and forget the bad stuff, but also to exaggerate the good stuff beyond reality. This has always been my problem with “Make America Great Again.” It’s a nostalgia-soaked misdiagnosis of the past that tells people they can have what they miss but never had, at least not in the way they remember it.

Link to the rest at The Dispatch

When American Words Invaded the Greatest English Dictionary

From The Wall Street Journal:

Most people think of the 20-volume Oxford English Dictionary as a quintessentially British production, but if you pore carefully over the first edition, compiled between 1858 and 1928, you will find thousands of American words.

There are familiar words describing nature particular to the U.S., like prairie, skunk, coyote and chipmunk, but also more recondite ones, like catawba (a species of grape and type of sparkling wine), catawampous (fierce, destructive) and cottondom (the region in which cotton is grown). Today, Americanisms are easy for modern lexicographers to find because of the internet and access to large data sets. But all of the American words in that first edition found their way to Oxford in an age when communication across the Atlantic was far more difficult.

The OED was one of the world’s first crowdsourced projects—the Wikipedia of the 19th century—in which people around the English-speaking world were invited to read their country’s books and submit words for consideration on 4-by-6-inch slips of paper. Until recently, it wasn’t known how many people responded, exactly who they were or how they helped. But in 2014, several years after working as an editor on the OED, I was revisiting a hidden corner of the Oxford University Press basement where the dictionary’s archive is stored, and I came across a dusty box.

Inside the box was a small black book tied with cream-colored ribbon. On its pages was the immaculate handwriting of James Murray, the OED’s longest-serving editor. It was his 150-year-old address book recording the names and addresses of people who contributed to the largest English dictionary ever written.

There were six address books in all from that era, and for the past eight years I have researched the people listed inside. Three thousand or so in total, they were a vivid and eccentric bunch. Most were not the scholarly elite you might expect. The top four contributors globally, one of whom sent in 165,061 slips, were all connected with psychiatric hospitals (or “lunatic asylums” as they were called at the time); three were inmates and one was a chief administrator. There were three murderers and the owner of the world’s largest collection of pornography who, yes, sent in sex words, especially related to bondage and flagellation. 

You can’t go a page or two in Murray’s address books without seeing a name that he had underlined in thick red pencil. These are the Americans: politicians, soldiers, librarians, homemakers, booksellers, lawyers, coin collectors and pharmacists. They ranged from luminaries like Noah Thomas Porter, who edited Webster’s Dictionary and became president of Yale University, to unknowns such as 21-year-old Carille Winthrop Atwood, who loved the classical world and lived in a large house with several other young women in a fashionable area of San Francisco. The most prolific American contributor was Job Pierson, a clergyman from Ionia, Mich., who owned the state’s largest private library and sent in 43,055 slips featuring words from poetry, drama and religion. 

Murray marked Americanisms with a “U.S.” label, including casket (coffin), comforter (eiderdown), baggage (luggage), biscuit (scone) and faucet (tap). He was often at pains to add details: For pecan tree, he included that it was “common in [the] Ohio and Mississippi valleys.” He noted that candy, not quite an Americanism, was “in [the] U.S. used more widely than in Great Britain, including toffy and the like.”

. . . .

Some American contributors involved in certain causes sought to make sure that their associated words got into the dictionary, like Anna Thorpe Wetherill, an anti-slavery activist in Philadelphia, who hid escaped slaves at her home. Her contributions included abhorrent and abolition.

Others turned to their hobbies. Noteworthy Philadelphian Henry Phillips, Jr., an antiquarian and pioneer of the new language Esperanto, ensured that the dictionary had a generous coverage of words relating to coins and numismatics: electrum (coins made of an alloy of gold and silver with traces of copper) and gun money (money coined from the metal of old guns). 

Francis Atkins, a medical doctor at a military base in New Mexico, read books relating to Native American cultures and sent in sweat-house (a hut in which hot air or vapor baths are taken) and squash (the vegetable), a word borrowed from the Narragansett asquutasquash. He also contributed ranching words: rutting season (mating season), pronghorn (an antelope) and bison (a wild ox).

Others had their favorite authors. Anna Wyckoff Olcott, one of 27 contributors from New York City (she lived on West 13th Street in Manhattan), took responsibility for providing entries from the works of Louisa May Alcott. Those included the term deaconed, from “Little Women,” defined in the OED as “U.S. slang” meaning the practice of packing fruit with the finest specimens on top. (“The blanc-mange was lumpy, and the strawberries not as ripe as they looked, having been skilfully ‘deaconed.’”)

In Boston, Nathan Matthews advised the OED for six years before becoming the city’s mayor and the person who spearheaded Boston’s subway system, the first in the U.S. But it was his brother, the historian and etymologist Albert Matthews, who was the second-highest ranking American contributor, sending in 30,480 slips from his reading of American historical sources including Benjamin Franklin, George Washington, John Adams, Henry Wadsworth Longfellow and Washington Irving. 

Albert Matthews in particular enabled the OED to include words that no Brit would ever have heard or needed to use. He sent in stockaded, whitefish and a rare American use of suck, meaning “the place at which a body of water moves in such a way as to suck objects into its vortex.” His reading of Daniel Denton’s “A Brief Description of New York” (1670) provided evidence for persimmon, possum, raccoon skin, powwow (spelled at the time “pawow”) and the first time that huckleberry ever appeared in print: “The fruits Natural to the Island are Mulberries, Posimons, Grapes great and small, Huckelberries.” 

Link to the rest at The Wall Street Journal

Mark Meadows sued by book publisher over false election claims

From The Hill:

The publisher of Mark Meadows’s book is suing the former White House chief of staff, arguing in court filings Friday morning that he violated an agreement with All Seasons Press by including false statements about former President Trump’s claims surrounding the 2020 election.

“Meadows, the former White House Chief of Staff under President Donald J. Trump, promised and represented that ‘all statements contained in the Work are true and based on reasonable research for accuracy’ and that he ‘has not made any misrepresentations to the Publisher about the Work,’” the publishing company writes in its suit, filed in court in Sarasota County, Fla.

“Meadows breached those warranties causing ASP to suffer significant monetary and reputational damage when the media widely reported … that he warned President Trump against claiming that election fraud corrupted the electoral votes cast in the 2020 Presidential Election and that neither he nor former President Trump actually believed such claims.”

The suit comes after ABC News reported that Meadows received immunity to testify before a grand jury convened to hear evidence from special counsel Jack Smith, reportedly contradicting statements he made in his book. 

. . . .

Meadows’s book, “The Chief’s Chief,” was published in 2021 and spends ample time reflecting on the election.

“Meadows’ reported statements to the Special Prosecutor and/or his staff and his reported grand jury testimony squarely contradict the statements in his Book, one central theme of which is that President Trump was the true winner of the 2020 Presidential Election and that election was ‘stolen’ and ‘rigged’ with the help from ‘allies in the liberal media,’ who ignored ‘actual evidence of fraud,’” the company writes in the filing.

According to Meadows’s testimony, as reported by ABC News, Trump was being “dishonest” with voters when he claimed victory on election night. ABC reported that Meadows admitted Trump lost the election when questioned by prosecutors.

Link to the rest at The Hill

I’ve researched time for 15 years – here’s how my perception of it has changed

From The Conversation:

Time is one of those things that most of us take for granted. We spend our lives portioning it into work-time, family-time and me-time. Rarely do we sit and think about how and why we choreograph our lives through this strange medium. A lot of people only appreciate time when they have an experience that makes them realise how limited it is.

My own interest in time grew from one of those “time is running out” experiences. Eighteen years ago, while at university, I was driving down a country lane when another vehicle strayed onto my side of the road and collided with my car. I can still vividly remember the way in which time slowed down, grinding to a near halt, in the moments before my car impacted with the oncoming vehicle. Time literally seemed to stand still. The elasticity of time and its ability to wax and wane in different situations shone out like never before. From that moment I was hooked.

I have spent the last 15 years trying to answer questions such as: Why does time slow down in near death situations? Does time really pass more quickly as you get older? How do our brains process time?

My attempts to answer these questions often involve putting people into extreme situations to explore how their experience of time is affected. Some of the participants in my experiments have been given electric shocks to induce pain, others have traversed 100-metre-high crumbling bridges (albeit in virtual reality), and some have even spent 12 months in isolation in Antarctica. At the heart of this work is an attempt to understand how our interaction with our environment shapes our experience of time.

Thinking time

This research has taught me that time’s flexibility is an inherent part of the way in which we process it. We are not like clocks which record seconds and minutes with perfect accuracy. Instead, our brain appears to be wired to perceive time in a way which is responsive to the world around us.

The way in which our brain processes time is closely related to the way in which it processes emotion. This is because some of the brain areas involved in the regulation of emotional and physiological arousal are also involved in the processing of time. During heightened emotion, the activation caused by the brain’s attempts to maintain stability alters its ability to process time.

So, when we experience fear, joy, anxiety or sadness, emotional processing and time processing interact. This results in the sensation of time speeding up or slowing down. Time really does fly when you’re having fun and drag when you’re bored.

Changes in our experience of time are most profound during periods of extreme emotion. In near death experiences, like my car crash for example, time slows to the point of stopping. We don’t know why our brains distort sensory information during trauma.

Ancient adaptations

One possibility is that time distortions are an evolutionary survival intervention. Our perception of time may be fundamental to our fight-or-flight response. This insight into time has taught me that in times of crisis, knee-jerk responses are unlikely to be the best ones. Instead, it would seem that slowing down helps me succeed.

Being a time-nerd, I spend a lot of time thinking about time. Before COVID, I would have said that I thought about it more than most. However, this changed during the pandemic.

Think back to those early lockdown days. Time started to slip and slide like never before. Hours sometimes felt like weeks and days merged into one another. Newspaper headlines and social media were awash with the idea that COVID had mangled our sense of time. They were not wrong. COVID time-warps were observed around the world. One study found that 80% of participants felt like time slowed down during the second English lockdown.

We no longer had a choice about how and when we spent our time. Home-time, work-time and me-time were suddenly rolled into one. This loss of control over our schedules made us pay attention to time. People now appear less willing to “waste time” commuting and instead place a greater value on jobs with flexibility over where and when they work. Governments and employers still appear unsure how to grapple with the ever-changing time landscape. What does seem clear, however, is that COVID permanently altered our relationship with time.

Link to the rest at The Conversation

Links to scholarly works included in the original.

“I cannot wait to possess you”: Reading 18th-century letters for the first time

From Ars Technica:

University of Cambridge historian Renaud Morieux was poring over materials at the National Archives in Kew when he came across a box holding three piles of sealed letters held together by ribbons. The archivist gave him permission to open the letters, all addressed to 18th-century French sailors from their loved ones and seized by Great Britain’s Royal Navy during the Seven Years’ War (1756-1763).

“I realized I was the first person to read these very personal messages since they were written,” said Morieux, who just published his analysis of the letters in the journal Annales Histoire Sciences Sociales. “These letters are about universal human experiences, they’re not unique to France or the 18th century. They reveal how we all cope with major life challenges. When we are separated from loved ones by events beyond our control like the pandemic or wars, we have to work out how to stay in touch, how to reassure, care for people and keep the passion alive. Today we have Zoom and WhatsApp. In the 18th century, people only had letters, but what they wrote about feels very familiar.”

England and France have a long, complicated history of being at war, most notably the Hundred Years’ War in the 14th and 15th centuries. The two countries were also almost continuously at war during the 18th century, including the Seven Years’ War, which was fought in Europe, the Americas, and Asia-Pacific as England and France tried to establish global dominance with the aid of their respective allies. The war technically evolved out of the North American colonies when England tried to expand into territory the French had already claimed. (Fun fact: A 22-year-old George Washington led a 1754 ambush on a French force at the Battle of Jumonville Glen.) But the conflict soon spread beyond colonial borders, and the British went on to seize hundreds of French ships at sea.

According to Morieux, despite its collection of excellent ships during this period, France was short on experienced sailors, and the large numbers imprisoned by the British—nearly a third of all French sailors in 1758—didn’t help matters. Many sailors eventually returned home, although a few died during their imprisonment, usually from malnutrition or illness. It was no easy feat delivering correspondence from France to a constantly moving ship; often multiple copies were sent to different ports in hopes of increasing the odds of a letter reaching its intended recipient.

This particular batch of letters was addressed to various crew members of a French warship called the Galitee, which was captured by a British ship called the Essex en route from Bordeaux to Quebec in 1758. Morieux’s genealogical research accounted for every member of the crew. Naturally, some of the missives were love letters from wives to their husbands, such as the one Marie Dubosc wrote to her husband, a ship’s lieutenant named Louis Chambrelan, in 1758, professing herself his “forever faithful wife.” Morieux’s research showed that Marie died the following year before her husband was released; Chambrelan remarried when he returned to France, having never received his late wife’s missive.

Morieux read several letters addressed to a young sailor from Normandy named Nicolas Quesnel, from both his 61-year-old mother, Marguerite, and his fiancée, Marianne. Marguerite’s letters chided the young man for writing more often to Marianne and not to her, laying the guilt on thick. “I think more about you than you about me,” the mother wrote (or more likely, dictated to a trusted scribe), adding, “I think I am for the tomb, I have been ill for three weeks.” (Translation: “Why don’t you write to your poor sick mother before I die?”)

Apparently, Quesnel’s neglect of his mother caused some tension with the fiancée since Marianne wrote three weeks later asking him to please write to his mom and remove the “black cloud” in the household. But then Marguerite merely complained that Quesnel made no mention of his stepfather in his letters home, so the poor young man really couldn’t win. Quesnel survived his imprisonment, per Morieux, and ended up working on a transatlantic slave ship.

Link to the rest at Ars Technica

The King’s English? Forgeddabouddit!

From Literary Review:

Does the misuse of the word ‘literally’ make your toes curl? Do the vocal tics of young ’uns set you worrying about the decline of the noble English language? You are not alone. But your fears are misplaced – at least according to the linguist Valerie Fridland.

Fridland’s Like, Literally, Dude does an excellent job of vindicating words and ways of speaking we love to hate. Tracing your ‘verys’ and your singular ‘theys’ across centuries and continents, Fridland offers a history of linguistic pet peeves that are much older than we might assume and have more important functions in communication than most of us would like to give them credit for.

Take intensifiers like ‘totally’, ‘pretty’ and ‘completely’. We might consciously believe them to be exaggerations undermining the speaker’s point, yet people consistently report seeing linguistic booster-users as more authoritative and likeable than others.

Then take ‘um’ and ‘uh’ (or ‘umm’ and ‘uhh’, and their consonant-multiplying siblings). Both receive an undue amount of flak for being fillers, supposedly deployed when the speaker is grasping for words, unsure what they want to say or lacking ideas. But this is not so. Fridland explains that they typically precede unfamiliar words or ideas, as well as complex sentence structures. Such non-semantic additions do what silent pauses and coughing can’t: they help the speaker speak and the listener listen. Similarly, the widely abhorred free-floating ‘like’ does not cut randomly into a ‘proper’ sentence but rather inserts itself, according to the logic of the language, either at the beginning of a sentence or before a verb, noun or adjective. It’s a form of ‘discourse marker’, used to ‘contribute to how we understand each other by providing clues to a speaker’s intentions’, writes Fridland. She points out that Shakespeare used discourse markers frequently, while the epic poem Beowulf begins with one (Hwæt!).

If what we think is ‘bad English’ is so good, why is nobody encouraging us to use those little flashy friends like ‘dude’, ‘actually’ and ‘WTF’? Corporate career guides and oratory platforms like Toastmasters warn against too many interrupters. The reason is that they supposedly make you sound insecure, weak, inexperienced and right-out dumb – like a young woman, basically. The world of power and prestige is rife with bias against ‘like’ and company, and so are our day-to-day interactions with friends and neighbours, who may judge us for that extra ‘literally’ or spontaneous ‘oh’. It’s precisely this prejudice that Fridland sets out to dismantle, arguing that linguistic change is a natural occurrence and that pronouncements on the bad and the good of language are socially motivated.

When we devalue a group’s speech habits, we perceive otherness fuelled by differences in race, class, gender, sexuality and education. To say ‘three dollar’ rather than ‘three dollars’ is not sloppy, Fridland states, but part and parcel of consonant loss at the end of English words that has its roots in the late Middle Ages, when the stress patterns of Norman French and Old Norse led to final letters being cast off. Why should we embarrass others for similar habits?

Fridland does well to burst the bubble of mockery around Californian girls’ vocal fry (think the creaking voices of Paris Hilton and the Kardashians), unpicking the social meanings we attach to verbal patterns we find unacceptable. We tend to dislike (and believe reprehensible) what we’re not regularly exposed to. And that often happens to be the language of vulnerable communities, such as black and brown people, teenagers and women. These groups often propel linguistic change. Children and teenagers, for example, are voracious speakers, eager to explore and play with new forms of language as their speech patterns haven’t quite settled. Women – particularly young women – are the Formula 1 drivers of language change, and have always been. Fridland explains that many modern English forms, such as the S ending of the third-person singular verb (‘it does’ rather than ‘it doth’), were pushed on by women and girls, whose ears tend to be more sensitive to linguistic nuances. While men are likely to snap up already-current words, such as ‘bro’, in order to signal social affiliation, women create new verbal spaces into which other people eventually step. ‘What women [bring] to the fore’, Fridland says, is ‘novelty in the form of this expressivity, not greater expressivity itself’. Once a change has become widely accepted, there is no difference in gender use, despite our perceptions.

Link to the rest at Literary Review

For those who don’t know what a vocal fry (formerly glottal fry) is, you’ll hear and see an example below.

https://youtu.be/iY-ehIyRY_Q?si=HAkinbljEy0M0cQM

Whither philosophy?

From Aeon:

‘As long as there has been such a subject as philosophy, there have been people who hated and despised it,’ reads the opening line of Bernard Williams’s article ‘On Hating and Despising Philosophy’ (1996). Almost 30 years later, philosophy is not hated so much as it is viewed with a mixture of uncertainty and indifference. As Kieran Setiya recently put it in the London Review of Books, academic philosophy in particular is ‘in a state of some confusion’. There are many reasons for philosophy’s stagnation, though the dual influences of specialisation and commercialisation, in particular, have turned philosophy into something that scarcely resembles the discipline as it was practised by the likes of Aristotle, Spinoza or Nietzsche.

Philosophers have always been concerned with the question of how best to philosophise. In ancient Greece, philosophy was frequently conducted outdoors, in public venues such as the Lyceum, while philosophical works were often written in a dialogue format. Augustine delivered his philosophy as confessions. Niccolò Machiavelli wrote philosophical treatises in the ‘mirrors for princes’ literary genre, while his most famous work, The Prince, was written as though it were an instruction for a ruler. Thomas More maintained the dialogue format that had been popular in ancient Greece when writing his famed philosophical novel Utopia (1516). By the late 1500s, Michel de Montaigne had popularised the essay, combining anecdote with autobiography.

In the century that followed, Francis Bacon was distinctly aphoristic in his works, while Thomas Hobbes wrote Leviathan (1651) in a lecture-style format. Baruch Spinoza’s work was unusual in being modelled after Euclid’s geometry. The Enlightenment saw a divergent approach to philosophy regarding form and content. Many works maintained the narrative model that had been used by Machiavelli and More, as in Voltaire’s Candide (1759), while Jean-Jacques Rousseau re-popularised the confessional format of philosophical writing. Immanuel Kant, however, was far less accessible in his writings. His often-impenetrable style would become increasingly popular in philosophy, taken up most consequentially in the work of G W F Hegel. Despite the renowned complexity of their works, both philosophers would become enduringly influential in modern philosophy.

In the 19th century, Friedrich Nietzsche, greatly influenced by Arthur Schopenhauer, wrote in an aphoristic style, expressing his ideas – often as they came to him – in bursts of energetic prose. There are very few philosophers who have managed to capture the importance and intellectual rigour of philosophy while being as impassioned and poetic as Nietzsche. Perhaps this accounts for his enduring appeal among readers, though it would also account for the scepticism he often faces in more analytical traditions, where Nietzsche is not always treated as a ‘serious’ philosopher.

The 20th century proved to be a crucial turning point. While many great works were published, philosophy also became highly specialised. The rise of specialisation in academia diminished philosophy’s broader influence on artists and the general public. Philosophy became less involved with society more broadly and broke off into narrowly specialised fields, such as philosophy of mind, hermeneutics, semiotics, pragmatism and phenomenology.

There are different opinions about why specialisation took such a hold on philosophy. According to Terrance MacMullan, the rise of specialisation began in the 1960s, when universities were becoming more radicalised. During this time, academics began to dismiss non-academics as ‘dupes’. The problem grew when academics began to emulate the jargon-laden styles of philosophers like Jacques Derrida, deciding to speak mostly to each other, rather than to the general public. As MacMullan writes in ‘Jon Stewart and the New Public Intellectual’ (2007):

It’s much easier and more comfortable to speak to someone who shares your assumptions and uses your terms than someone who might challenge your assumptions in unexpected ways or ask you to explain what you mean.

Adrian Moore, on the other hand, explains that specialisation is seen as a way to distinguish oneself:

Academics in general, and philosophers in particular, need to make their mark on their profession in order to progress, and the only realistic way that they have of doing this, at least at an early stage in their careers, is by writing about very specific issues to which they can make a genuinely distinctive contribution.

Moore nevertheless laments the rise in specialisation, noting that, while specialists might be necessary in some instances, ‘there’s a danger that [philosophy] will end up not being pursued at all, in any meaningfully integrated way.’

Indeed, while specialisation might help academics to distinguish themselves in their field, their concentrated focus also means that their work is less likely to have a broader impact. In favouring specialisation, academics have not only narrowed the scope of philosophy, but have also unwittingly excluded those who may have their own contributions to make from outside the academy.

Expertise counts for much in today’s intellectual climate, and it makes sense that those educated and trained in specific fields would be given greater consideration than a dabbler. But it is those philosophers who wrote on a wide range of areas that left a profound mark on philosophy. Aristotle dedicated himself to a plethora of fields, including science, economics, political theory, art, dance, biology, zoology, botany, metaphysics, rhetoric and psychology. Today, any researcher who draws on different, ‘antagonistic’ fields would be accused of deviating from their specialisation. Consequently, monumental books that defied tradition – from Aristotle’s Nicomachean Ethics to Nietzsche’s Beyond Good and Evil (1886) – are few and far between. This is not to say, however, that there are no influential philosophers. Saul Kripke and Derek Parfit, both not long deceased, are perhaps the most significant philosophers in recent years, but their influence is primarily confined to academia. Martha Nussbaum, on the other hand, is one of the most important and prolific philosophers working today. Her contributions to ethics, law and emotion have been both highly regarded and far-reaching, and she is often lauded for her style and rigour, illustrating that not all philosophers are focused on narrow fields of specialisation.

But ‘the blight of specialisation’, as David Bloor calls it, remains stubbornly engrained in the practice of philosophy, and ‘represents an artificial barrier to the free traffic of ideas.’ John Potts, meanwhile, argues that an emphasis on specialisation has effectively discouraged any new icons from emerging:

A command of history, philosophy, theology, psychology, philology, literature and the Classics fostered German intellectuals of the calibre of Nietzsche and Weber, to name just two of the most influential universal scholars; such figures became much rarer in the 20th century, as academic research came to favour specialisation over generalisation.

By demoting the significance of generalised thinking, the connective tissue that naturally exists between various disciplines is obscured. One is expected, instead, to abide by the methodologies inherent in their field. If, as Henri Bergson argued in The Creative Mind (1946), philosophy is supposed to ‘lead us to a completer perception of reality’, then this ongoing emphasis on specialisation today compromises how much we can truly know about the world in any meaningful depth, compromising the task of philosophy itself. As Milan Kundera put it in The Art of the Novel (1988):

The rise of the sciences propelled man into the tunnels of the specialised disciplines. The more he advanced in knowledge, the less clearly could he see either the world as a whole or his own self, and he plunged further into what Husserl’s pupil Heidegger called, in a beautiful and almost magical phrase, ‘the forgetting of being’.

To narrow one’s approach to knowledge to any one field, any one area of specialisation, is to reduce one’s view of the world to the regulations of competing discourses, trivialising knowledge as something reducible to a methodology. Under such conditions, knowledge is merely a vessel, a code or a tool, something to be mastered and manipulated.

By moving away from a more generalised focus, philosophy became increasingly detached from the more poetic style that nourished its spirit. James Miller, for instance, called pre-20th-century philosophy a ‘species of poetry’. Nietzsche’s own unique, poetic writing style can account for much of the renown his ideas continue to receive (and also much of the criticism levelled at him by other philosophers). Reading Nietzsche may at times be arduous and convoluted, but it is never dull. Indeed, Tamsin Shaw spoke of Nietzsche as less a philosopher and more a ‘philosopher-poet’. Jean-Paul Sartre called him ‘a poet who had the misfortune of having been taken for a philosopher’.

While many sought to separate philosophy from other creative styles and pursuits, notably poetry and literature, Mary Midgley insisted that ‘poetry exists to express [our] visions directly, in concentrated form.’ Even Martin Heidegger, whose writing was far less poetic than Nietzsche’s, called for ‘a poet in a destitute time’, and saw poets as those who reach directly into the abyss during the ‘world’s night’.

Link to the rest at Aeon

Travels With Tocqueville Beyond America

From The Wall Street Journal:

Alexis de Tocqueville (1805-59) was neither a systematic thinker nor a system builder, neither a philosopher nor a historian. His subject was society—make that societies, their strengths and their weaknesses, which he studied always in search of what gives them their character. Along with Machiavelli, Montesquieu, Max Weber, Ortega y Gasset, Tocqueville was a cosmopolitan intellectual of the kind that appears only at the interval of centuries.

Tocqueville is of course best known for his “Democracy in America,” a work which may be more quoted from than actually read. The first part of it was published in 1835, based on observations made when he visited the U.S. in 1831, at age 26. His powers of observation, and skill at generalization, were evident at the outset. They never slackened over the remainder of his life.

Tocqueville’s skill at formulating observations was unfailingly acute. “In politics, shared hatreds are almost always the basis of friendships,” he wrote. “History is a gallery of pictures in which there are few originals and many copies.” At the close of “Democracy in America,” he predicted the coming hegemonies of Russia and the U.S. George Santayana, in a letter to his friend Horace Kallen, wrote: “Intelligence is the power of seeing things in the past, present, and future as they have been, are, and will be.” He might have been describing Alexis de Tocqueville.

The first volume of “Democracy in America” was well received. The second volume, published in 1840—more critical and more dubious of the virtues of democracy—was less so. Yet the work stayed in print for a full century, even though its author’s reputation had long since faded. Then, in 1938, with the publication of Tocqueville’s correspondence and other hitherto uncollected writings, that reputation, more than revived, became set in marble.

“Travels With Tocqueville Beyond America” by Jeremy Jennings, a professor of political theory at King’s College London, thus joins a long shelf of books dedicated to the man and his works. Four full biographies of Tocqueville have been published, the last, Hugh Brogan’s “Alexis de Tocqueville: A Life,” in 2006. Nearly every aspect of Tocqueville’s work has been treated in essays, articles and book-length studies. I happened to have published a slender volume myself, “Alexis de Tocqueville, Democracy’s Guide” (2006), in which I wrote: “What would have surprised Tocqueville, one suspects, is the persistence with which his writings have remained alive, part of the conversation on the great subject of the importance of politics in life.” It would have surprised him, I believe, because of his innate modesty and his belief that his work was far from finished.

Tocqueville’s trip to America, which would be the making of him, had its origin in his wish to escape the reign of Louis-Philippe, king of France, whose Orléans family had been sympathetic to the French Revolution and were thus viewed askance by the house of Tocqueville. With his friend Gustave de Beaumont, Tocqueville proposed a visit to America to study penal institutions in the new republic; the two magistrates were granted permission, though they would have to pay their own expenses.

In “Travels With Tocqueville Beyond America,” Mr. Jennings sets out the importance of travel to Alexis de Tocqueville. “In exploring why, where, and how Tocqueville travelled,” he writes, “this volume seeks to show that travel played an integral role in framing and informing his intellectual enquiries.” Throughout his life, we learn, “Tocqueville longed to travel,” and this appetite for travel did not “diminish with either age or illness.” As Tocqueville wrote to his friend Louis de Kergorlay: “I liken man in this world to a traveller who is walking constantly toward an increasingly cold region and who is forced to move more as he advances.”

Mr. Jennings proves a splendid guide to Tocqueville’s travels. These included trips, some lengthier than others, to Italy, Algeria, Germany, Switzerland, England and Ireland. Basing his book on Tocqueville’s rich correspondence and notebooks, Mr. Jennings describes his subject’s preparations, his arrivals, his daily encounters in what for Tocqueville were new lands. Even when he did not publish works about these places, he was recording his thoughts. Above all, the author establishes the unceasing intellectual stimulation that Tocqueville found in travel. The spirit of inquiry was never quiescent in him, and, as Mr. Jennings notes, even on his honeymoon “Tocqueville managed to find time to study the Swiss political system.”

Much of the attraction of “Travels with Tocqueville Beyond America” derives from its chronicle of Tocqueville’s quotidian life and his many interesting opinions of historical and contemporary figures. Tocqueville said that Napoleon was “as great as a man can be without virtue.” His English friend Nassau Senior records Tocqueville saying of Napoleon that his “taste was defective in everything, in small things as well as great ones; in books, art, and in women as well as in ambition and glory; and his idolizers cannot be men of much better taste.”

Tocqueville remarked on the “impatience always aroused in him by the national self-satisfaction of the Germans,” and found Italy “the most unpleasant country I have ever visited on my travels.” As for Switzerland, he noted that “at the bottom of their souls the Swiss show no deep respect for law, no love of legality, no abhorrence of the use of force, without which there cannot be a free country.”

Yet he described America as “the most singular country in the world.” Among other things, during his nine months there, he was taken by its citizens’ enthusiasm for their own system of government. Americans, he found, “believe in the wisdom of the masses, assuming the latter are well informed; and appear to be unclouded by suspicions that the populace may never share in a special kind of knowledge indispensable for governing a state.”

He, Tocqueville, did not share their unabated enthusiasm: “What I see in this country tells me that, even in the most favorable circumstances, and they exist here, the government of the multitude is not a good thing.” Tocqueville was wary of what had been done to the American Indian, and predicted that “within a hundred years there will not remain in North America either a single tribe or even a single man belonging to the most remarkable of Indian races.” His views on slavery in America were even bleaker, harsher. “The Americans are, of all modern peoples, those who have pushed equality and inequality furthest among men,” he wrote. He thought, correctly as we now know, slavery to be “the most formidable of all the evils that threaten the future of the United States.”

Alexis de Tocqueville was a passionate man, and about liberty he was most passionate of all. By liberty he meant the absence of despotism, whether by monarchs or multitudes. “Liberty is the first of my passions,” he wrote, referring to it as “a good so precious and necessary,” adding that “whoever seeks for anything from freedom but itself is made for slavery.”

Link to the rest at The Wall Street Journal

The New York Times built a robot to help make article tagging easier

From NiemanLab:

If you write online, you know that a final, tedious part of the process is adding tags to your story before sending it out to the wider world.

Tags and keywords in articles help readers dig deeper into related stories and topics, and give search audiences another way to discover stories. A Nieman Lab reader could go down a rabbit hole of tags, finding all our stories mentioning Snapchat, Nick Denton, or Mystery Science Theater 3000.

Those tags can also help newsrooms create new products and find inventive ways of collecting content. That’s one reason The New York Times Research and Development lab is experimenting with a new tool that automates the tagging process using machine learning — and does it in real time.

The Times R&D Editor tool analyzes text as it’s written and suggests tags along the way, in much the way that spell-check tools highlight misspelled words:

Editor is an experimental text editing interface that explores how collaboration between machine learning systems and journalists could afford fine-grained annotation and tagging of news articles. Our approach applies machine learning techniques interactively, as part of the writing process, rather than retroactively. This approach can offload the burden of work to the computational processes, and can create affordances for journalists to augment, edit and correct those processes with their knowledge.

It’s similar to Thomson Reuters’ Open Calais system, which extracts metadata from text files of any kind. Editor works by connecting the corpus of tags housed at the Times with an artificial neural network designed to read over a writer’s shoulder in a text editing system. They explain:

As the journalist is writing in the text editor, every word, phrase and sentence is emitted on to the network so that any microservice can process that text and send relevant metadata back to the editor interface. Annotated phrases are highlighted in the text as it is written. When journalists finish writing, they can simply review the suggested annotations with as little effort as is required to perform a spell check, correcting, verifying or removing tags where needed. Editor also has a contextual menu that allows the journalist to make annotations that only a person would be able to judge, like identifying a pull quote, a fact, a key point, etc.
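
To make that loop concrete, here is a minimal sketch in Python of the emit-and-annotate pattern the lab describes: the editor sends the current draft to a tagging microservice and shows whatever suggestions come back for the journalist to accept or correct. The endpoint URL, response shape and field names are hypothetical illustrations, not the actual Editor API.

```python
# Minimal sketch of an annotate-as-you-type loop (assumed names, not the Times' API).
import requests

TAGGING_SERVICE_URL = "https://example.internal/tag-suggestions"  # hypothetical endpoint


def suggest_tags(draft_text: str) -> list[dict]:
    """Send the current draft to a tagging microservice and return suggested annotations."""
    response = requests.post(TAGGING_SERVICE_URL, json={"text": draft_text}, timeout=2)
    response.raise_for_status()
    # Assumed response shape: [{"phrase": "subway", "tag": "Transit", "confidence": 0.93}, ...]
    return response.json()["annotations"]


def review_annotations(annotations: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only confident suggestions; the journalist verifies or corrects the rest."""
    return [a for a in annotations if a["confidence"] >= threshold]


if __name__ == "__main__":
    draft = "The mayor announced a new subway line through Brooklyn on Tuesday."
    for annotation in review_annotations(suggest_tags(draft)):
        print(f"{annotation['phrase']!r} -> {annotation['tag']} ({annotation['confidence']:.2f})")
```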

“We started looking at what we could do if we started tagging smaller entities in the articles. [We thought] it might afford greater capabilities for reuses and other types of presentation,” said Alexis Lloyd, creative director at the Times R&D Lab.

Tags are a big deal at the Times; the paper has a system of article tags that goes back over 100 years. That metadata makes things like Times Topics pages possible. It’s an important process that is entirely manual, relying on reporters and editors to provide a context layer around every story. And in some cases, that process can lag: The Times’ innovation report cited many gaps in the paper’s metadata system as a strategic weakness:

“Everyone forgets about metadata,” said John O’Donovan, the chief technology officer for The Financial Times. “They think they can just make stuff and then forget about how it is organized in terms of how you describe your content. But all your assets are useless to you unless you have metadata — your archive is full of stuff that is of no value because you can’t find it and don’t know what it’s about.”

Lloyd said the idea behind Editor was not just to make the metadata process more efficient, but also to make it more granular. By using a system that combs through articles at a word-by-word level, the amount of data associated with people, places, companies, and events becomes that much richer.

And that much more data opens new doors for potential products, Lloyd told me. “Having that underlying metadata helps to scale to all kinds of new platforms as they emerge,” she said. “It’s part of our broader thinking about the future of news and how that will become more complex, in terms of forms and formats.”

. . . .

The key feature of the automatic tagging system relies on bringing machines into the mix, an idea that inspires conflicting ideas of progress and dread in some journalists. For Editor to work, the lab needed to build a way for machines and humans to supplement each other’s strengths. Humans are great at seeing context and connections and understanding language, while machines can do computations at enormous scale and have perfect memory. Mike Dewar, a data scientist at the Times R&D lab, said the artificial neural network makes connections between the text and an index of terms pulled from every article in the Times archive.

It took around four months to build Editor, and part of that time was spent training the neural network in how a reporter might tag certain stories. Dewar said that teaching the network the way tags are associated with certain phrases or words gives it a benchmark to use when checking text in the future.
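
As a rough illustration of what it means to learn tag associations from previously tagged stories, the sketch below trains a toy bag-of-words suggester on a few invented article/tag pairs and scores a new draft against it. It is a simple scikit-learn baseline, not the R&D lab’s neural network, and the sample data is made up.

```python
# Toy tag suggester trained on (invented) article/tag pairs; not the Times' actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical archive: article text paired with the tags editors applied.
articles = [
    "The city council approved funding for two new subway stations.",
    "The quarterback threw for three touchdowns in the season opener.",
    "Lawmakers debated the transit budget late into the night.",
]
tags = [["Transit", "New York City"], ["Football"], ["Transit", "Politics"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(tags)  # one binary column per tag

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(articles, y)

# Suggest tags for a new draft; in Editor this would run continuously as the reporter types.
draft = "Commuters face delays as the subway budget fight continues."
probabilities = model.predict_proba([draft])[0]
suggestions = sorted(zip(binarizer.classes_, probabilities), key=lambda pair: -pair[1])
print(suggestions[:3])
```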

The biggest challenge was latency, as Editor works to make connections between what’s being written and the index of tags. In order for Editor to be really effective, it has to operate at the speed of typing, Dewar said: “It needs to respond very quickly.”

. . . .

Robots continue to expand their foothold in the world of journalism. In March, the AP said it planned to use its automated reporting services to increase college sports coverage. Lloyd has experimented with how bots can work more cooperatively with people, or at least learn from them and their Slack conversations.

Link to the rest at NiemanLab

What is Left Unsaid: How Some Words Do—Or Don’t—Make It Into Print

From The Literary Hub:

One summer morning in 1883, Alexander John Ellis sat at his desk in front of three large bay windows, opened wide to catch any breeze that London’s Kensington had to give. From his chair, he could hear the birds in the plane trees and see right down Argyll Road, its five-story white stucco Georgian houses resembling layers of an expensive wedding cake. By the time everyone else was rising, Ellis had generally already been up for several hours. Early morning was his favorite time of day. Ellis loved the notion of getting ahead while others were sleeping, and getting work done before his neighbor, a master singer, started his scales and taught his students by the open window. “The nuisance is awful at times,” he wrote to Murray. Ellis always ate the same light breakfast of a French roll with butter, and drank his signature beverage: a cup of warm water with a little milk.

This day, as every day, his first act on waking was to weigh himself naked, before dressing for the day. Always the same boots and coat, affectionately named Barges and Dreadnought, before heading straight to his desk on the second floor. He needed to weigh himself before putting on his clothes for one main reason: Dreadnought was heavy. Dreadnought had twenty-eight pockets, each one stuffed full with eccentric items. Ellis made a noise like a kitchen drawer as he walked. When he sat down, eyewitnesses said that his pockets “stood upright like sentinels.” They were variously full of letters, nail clippers, string, a knife sharpener, a book and philological papers in case of emergency, and two things that a teetotaller and someone who watched his weight rarely needed: a corkscrew and a scone, just in case friends were in want of either. These last two items sum up Ellis; he was kind-hearted and always thought of his friends before himself.

On his desk, there were signs of everything that he held dear: a draft of the fifth and final volume of his monumental book, On Early English Pronunciation, daguerreotypes of Venice and his three children, a tuning fork, and a favorite quotation from Auguste Comte, the founder of altruism, “Man’s only right is to do his duty. The intellect should always be the servant of the heart, and should never be its slave.”

This morning held a special excitement: also spread out in front of him were Murray’s proof sheets for the first section of the Dictionary (words A to Ant)—all 362 pages of them. Murray had sent them to Ellis for his comment. As Ellis’s eyes skimmed the proofs, he could not help looking for his own name in the Introduction. He felt a sense of profound satisfaction to see “A. J. Ellis, Esq, FRS (Phonology)” listed between Prof. Frederick Pollock (Legal terms) and Dr P. H. Pye-Smith (Medical and Biological words).

Ellis’s passions were pronunciation, music, and mathematics, and his expertise in all of these areas had been sought by Murray who had had difficulty finding British academics to help him (by contrast, American scholars were eager to be involved). He had helped Murray with the very first entry in the Dictionary—A: not only the sound A, “the low-back-wide vowel formed with the widest opening of the jaws, pharynx, and lips,” but also the musical sense of A, “the 6th note of the diatonic scale of C major,” and finally the algebraic sense of A, “as in a, b, c, early letters of the alphabet used to express known quantities, as x, y, z are to express the unknown.” Ellis was happy to see these and other results of his work on the printed page, including the words air, alert, algebra.

Many people, not only in Britain but around the world, were eagerly awaiting the appearance of the first part of the Dictionary, and Murray particularly wanted Ellis’s opinion on the draft Introduction, which he knew he had to get just right. It all read perfectly to Ellis except for one section. “The Dictionary aims at being exhaustive,” Murray had written. “Not everyone who consults it will require all the information supplied; everyone, it is hoped, will find what he actually wants.”

Is it really exhaustive? Ellis wondered. What about slang and coarse words? He scribbled to Murray in the margin (and the page with the scribble still survives today in the archives), “You omit slang & perhaps obscenities, thus are by no means exhaustive. Though disagreeable, obscene words are part of the life of a language.” Feeling satisfied with his contribution to Murray’s landmark first part of the Dictionary, and admiring of the project as a whole, Ellis placed the corrected draft into an envelope and placed it by his front door, ready for the morning post.

Ellis had raised an important question about inclusion, but he was not quite right about the boundaries of the Dictionary. Murray had included slang but it was true that, so far, he had left out obscenities. We can only imagine the uproar in Victorian society had he not. Murray would agonize over his decision to leave them out, but also had to be mindful of the Obscene Publications Act of 1857 which made it illegal to expose the public to any content judged to be grossly indecent.

Murray’s caution proved wise when, a few years later, a fellow lexicographer and one of the Dictionary People, John Stephen Farmer, had his own legal drama. Farmer was writing a slang dictionary with William Henley, and was struggling to publish the second volume (containing the letters C and F) of his work on grounds of obscenity. Farmer took his publisher to court for breach of contract in 1891, and tried to convince a jury that writing about obscene words in a dictionary did not make him personally guilty of obscenity, but he lost the case and was ordered to pay costs. Eventually, he found fresh printers and avoided the Obscene Publications Act by arguing that his dictionary was published privately for subscribers only, not the public, and the remarkable Slang and Its Analogues by Farmer and Henley was published in seven volumes (from 1890 to 1904), with cunt and fuck and many other words regarded as lewd on its pages. Farmer’s legal case and the public outcry that ensued was a clear deterrent for Murray.

. . . .

Each of Murray’s advisers had different notions of what was offensively salacious. His adviser on medical terms, James Dixon, who was a retired surgeon living in Dorking, Surrey, had been all right with including cunt, but absolutely drew the line with a word which he considered so obscene it had to be sent to Murray in a small envelope marked PRIVATE, sealed within a larger envelope. Inside the intriguing packaging was a message advising him not to include the word condom. “I am writing on a very obscene subject. There is an article called Cundum…a contrivance used by fornicators, to save themselves from a well- deserved clap; also by others who wish to enjoy copulation without the possibility of impregnation,” he wrote to Murray. “Everything obscene comes from France, and I had supposed this affair was named after the city of Condom, which gives title to a Bishop.” But he had found a quotation from 1705 referring to a “Quondam” which made him rethink his assumption that it was named after the town in France. “I suppose Cundom or Quondam will be too utterly obscene for the Dictionary,” he concluded. Murray left it out.

Dixon was the man who unwisely advised Murray to delete the entry for appendicitis because it was, according to Dixon, just another itis-word. “Surely you will not attempt to enter all the crack-jaw medical and surgical words. What do you think of ‘Dacryocystosyringoketokleitis’? You know doctors think the way to indicate any inflammation is to tack on ‘itis’ to a word.” The word’s deletion turned out to be an embarrassment to Murray and Oxford University Press when, in 1902, the coronation of Edward VII was postponed because of the King’s attack of appendicitis. Suddenly everyone was using the word, but no one could find it in the Dictionary, and since the letter A was already published it could not be added until the Supplement volume in 1933.

But back to the summer of 1883. Murray received the corrected proofs from Ellis. He not only appreciated Ellis’s feedback but also trusted his judgement: he promptly deleted all claims to exhaustiveness and wrote, “The aim of this Dictionary is to furnish an adequate account of the meaning, origin, and history of English words now in general use, or known to be in use.”

. . . .

I had been wondering how Ellis got to be such a word nerd. I was fascinated by what I discovered. To begin with, something very unusual happened when he was eleven years old. His mother’s cousin, a schoolmaster called William Ellis, offered to give the young boy a substantial inheritance if he would change his surname from Sharpe to Ellis. Mr and Mrs Sharpe agreed, and from then on “Alexander John Sharpe” became “Alexander John Ellis.” The young boy was enrolled at Shrewsbury School and Eton, educated at Trinity College, Cambridge, and never had to earn money for the rest of his life.

Ellis’s wealth enabled him to be the quintessential “gentleman scholar,” an expert in almost everything he did, be it music, mathematics, languages, phonetics, travel, or daguerreotype photography. He was a polymath for whom life was more a science than an art. He published over 300 articles and books, and his works are quoted in the OED 200 times.

His interest in accent and pronunciation was inspired by the fact that he was born to a middle-class family in Hoxton, east London, where he was exposed to working-class cockney speakers, followed by schooling at Shrewsbury with its Welsh and English accents, and then exposed to the Received Pronunciation of the upper and upper-middle classes at Eton and Cambridge.

Words were like children to Ellis. He loved them equally, regardless of whether they were common, technical, scientific, slang, or foreign. He read the Dictionary as though it were a novel. Some words gave him pure delight in both their sound and meaning such as absquatulate, to abscond or decamp, with a quotation from Haliburton’s Clockmaker. “Absquotilate [sic] it in style, you old skunk…and show the gentlemen what you can do.” But it was their sounds that captured his imagination most. The quality of a whisper or a creak; the stress of a syllable; high pitch or low pitch.

Most people hear sounds, but Ellis saw them. He saw the air move in the mouth, the way the tip of the tongue touched the ridge of the teeth for a t; the vibration of vocal cords to change it to a d; and how the base of the tongue moved back in the mouth to block the flow of air for a g. Every sound was a picture for Ellis. He devoted his life to painting these pictures, describing their systematic order so the world might better understand the fundamentals of language.

His book On Early English Pronunciation, published in five volumes between 1869 and 1889, traced the pronunciation of English from the Middle Ages to the late nineteenth century and established him as a world authority on English phonology, a pioneer in the field of speech-sound studies. For the nineteenth-century section of the book, Ellis enlisted the help of hundreds of informants across Britain and a small group of experts, including Murray and others within the OED network. The result was the first major study of British dialects.

No language yet existed for the patterns Ellis was identifying, so he often had to invent the words, which subsequently made it into the Dictionary: palatalized, to make a palatal sound (by moving the point of contact between tongue and palate further forward in the mouth); labialization, the action of making a speech sound labial (articulated with both lips); and labiopalatalized, a sound made into a labiopalatal (articulated with the front of the tongue against the hard palate and the lips). He also invented the words septendecimal, relating to a seventeenth (in music); and phonetician, which originally referred to an advocate of phonetic spelling, rather than its current meaning of “an expert of phonetics.” Quite a few of his inventions have since fallen out of use and appear in the Dictionary with a dagger sign (which indicates obsolescence) beside them, such as vocalistic, of or relating to vowels, and phonotyper, an advocate of phonotypy (another term which Ellis invented, meaning “a system of phonetic printing”).

Ellis was one of the phoneticians on whom George Bernard Shaw modeled the character of Henry Higgins, that master of pronunciation, in his play Pygmalion, later turned into the musical My Fair Lady. Higgins (as a bet with his gentlemen friends) teaches Eliza Doolittle to speak “proper” English; but Ellis had none of Henry Higgins’s snobbery or arrogance. He was a generous, down-to-earth man, a frequent correspondent with friends, happy to offer advice when asked, and always working to bring people together and support them.

Link to the rest at The Literary Hub

PG notes that the OP is from a recently released book titled, The Dictionary People: The Unsung Heroes Who Created the Oxford English Dictionary. Since PG has had a long-time attraction to The OED, he is likely going to obtain a copy of The Dictionary People in a reputable manner.

For a long time, PG has owned a copy of The Compact Edition of the Oxford English Dictionary, which consists of two heavy volumes in a case that also holds a magnifying glass. (You didn’t think the folks in Oxford could squeeze the 20 heavy volumes in the original set into two heavy volumes without squeezing the original words down to mouse-type size, did you?)

During his younger days, PG could read the tiny type in the Compact Edition without using the magnifying glass, but that is no longer true.

However, the many virtues of The Compact Edition notwithstanding, all dictionary nerds know that there is no substitute for the full and weighty Oxford English Dictionary Experience.

And don’t forget the additional three volumes of the Oxford English Dictionary Additions Series, each of which includes about 3,000 new words, for example, buckyball, nanotechnology, and freeware.

Shakespeare in Bloomsbury

From The Wall Street Journal:

I went to Shakespeare’s Globe to see “The Winter’s Tale” in London last March, on a freezing, rainy night. The mood was brightened by the production’s droll Autolycus, one of the Bard’s great con men and clowns. He teased and cajoled; he brought theatergoers up to dance with the actors; he threw in references to Brexit and Boris. Decorum resumed in the final act, in which the statue comes to life, with all the grave enchantment the text demanded.

When the revels ended, I shuffled with the crowd toward the Underground and happened to glance down a garbage-strewn alleyway, where I saw a skinny, shivering, tawny little fox. Unaware that this is a common sight in the city, I felt caught in the same time warp that the ancient play, with its modern interjections, had just evinced. It was as if the year was 1610 and the fox had hitched a ride on a rural wagon to the big city—yet somehow it was also here in 2023. The Britons who first saw “The Winter’s Tale” were mourning the death of their long-reigning Elizabeth; Londoners in our century had just lost their own. Both eras had recently seen the theaters close and reopen because of plague. Both audiences of the Globe had wanted to believe that a statue had come to life, and maybe it had.

As it turns out, these are just the kinds of ruminations that the Bloomsbury group, that famous coterie of early-20th-century British writers and artists, would have dismissed as lightweight and slightly vulgar. (The original group included the writers Virginia and Leonard Woolf, the painters Vanessa Bell and Duncan Grant, the biographer Lytton Strachey, the economist John Maynard Keynes, and the art critics Clive Bell and Roger Fry.) Bloomsbury’s keen interest in Shakespeare did not lie in comparisons between their age and the Elizabethans’, in the historical roots of the plays, or in questions about provenance. They were not much concerned with Shakespeare’s character or with his beliefs. They deplored most of the professional productions they saw, complaining that they were (as one of them said) “smothered in scenery” and objecting to the fussy intonations of the players. “Acting it they spoil the poetry,” Virginia Woolf wrote to her nephew in 1935.

Instead, for the most part, the Bloomsbury group exercised its passion for Shakespeare simply by reading the plays and the sonnets, sometimes aloud together, but more often silently to themselves. Their relationship with him existed almost entirely through his language, with which they all felt an evangelical connection, intense and personal. In the beginning and the end, for them, was the word.

The subject of how different eras engage with Shakespeare is a juicy one, and an excellent choice for Marjorie Garber, a longtime professor of English at Harvard as well as the distinguished author of six previous books about Shakespeare among more than 20 volumes on subjects literary and otherwise. “Shakespeare in Bloomsbury” is a survey rather than an argument, proposing no more tendentious a thesis than that the members of the group adored Shakespeare and that she is going to show readers how in the most expansive and delightful way possible.

And this she does, propelling those readers through a lively inventory of the playwright’s imprints on Bloomsbury’s lives and works. She points out the ways in which Virginia Woolf’s frequent nods to Shakespeare serve as a “network of shared reference,” a handshake of recognition between a writer and her audience. Woolf’s 1927 novel “To the Lighthouse,” for instance, expects readers to identify its refrain of “Lights, lights, lights” as a line from “Hamlet.” Woolf uses the allusion to weave images of brightness through a narrative that plays with time passing, observing light as an ambiguous flicker in an impermanent world, one that “welcomes and protects,” as Ms. Garber notes, but one that “can also warn of danger if its signals are seen and understood.” “Orlando” (1928) blurs fiction and fact along with time, offering glimpses of an unnamed poet of the Elizabethan age who shows up at Knole, the ancestral estate of Thomas Sackville, who was a Tudor-era forebear of Woolf’s great friend Vita Sackville-West. Sackville was a cousin of Elizabeth I, a statesman and dramatist who co-authored the first English play written in blank verse. By connecting Knole with her shadow-image of Shakespeare, Woolf seduces readers into celebrating a dual aesthetic inheritance that for her represents the heart of Englishness.

Woolf and the other Bloomsbury members counted on Shakespeare’s plays to console and counsel as well as to inspire. In 1904, when young Leonard Woolf traveled to Ceylon to take an administrative post in the colonial civil service, he brought along a miniature edition of the works of Shakespeare and Milton, along with a 90-volume set of Voltaire, as bulwarks of familiarity against his fears of the unknown. Two years later, when Lytton Strachey wrote to Leonard about the shocking death of their mutual friend Thoby Stephen, Strachey relied on “Antony and Cleopatra” to express his grief: “There is nothing left remarkable / Beneath the visiting moon.”

Clive Bell, a founder of Bloomsbury who never felt entirely accepted by the group, saw the Bard as a token of belonging, telling a paramour who had recently enjoyed an Old Vic staging of “Measure for Measure” that “we, of course, only read Shakespeare.” Keynes parlayed his own veneration into civic munificence, using his government influence as an economic adviser to establish and support funding for the Cambridge Arts Theatre and to oversee the public institution that became, in 1945, the Arts Council of Great Britain.

The members of Bloomsbury defined themselves as modern rebels against the stodginess of Victorian culture. Yet their faith in the primacy of Shakespeare transcended the differences between generations, linking old and new centuries together. After a visit to Stratford-upon-Avon in 1934, Virginia Woolf commented in her diary on the “sunny impersonality” of the playwright’s garden and house, noting that he’s “serenely absent-present; both at once; radiating around one . . . but never to be pinned down.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

How a Collective of Incarcerated Writers Published an Anthology From Prison

From Electric Lit:

It would make sense that any history would begin at Stillwater Prison, where so much of the story and mythology of prison in Minnesota also begins. It is where Cole Younger and his brothers of the famous James-Younger gang did their time, and where they spent their own money to start the Prison Mirror, the world’s oldest continuously run prison newspaper.

My first experience with a writing community came when I was still near the beginning of my sentence, decades ago, and was welcomed into the Stillwater Poetry Group (SPG), the first place where I felt that art was something to be taken seriously. As part of the SPG, we met with so many interesting local writers: Desdamona, Wang Ping, Ed Bok Lee, and J. Otis Powell, among others. It was exhilarating, until decision-makers in the facility realized the threat that artists and poets pose to the ideas of the captivity business. After only a year and a half, the group was disbanded. It was my first lesson in how easily good things in prison get discarded. Watching art and culture go away can create a bleak and hopeless landscape that will jade and obscure a person’s faith in creative community. It was a pattern shown to us repeatedly.

Several years later, after a long education shutdown and budget cuts, and years into Minnesota’s own mass incarceration expansion era, a new wave of incarcerated writers/thinkers/persons was emerging at Stillwater. Dr. Deborah Appleman and Dr. John Schmidt volunteered to teach courses on linguistics, literary theory, and creative writing. Out of these classes, a semblance of a new writers’ community was created and a book was published. Letters to a Young Man and Other Writings offered us both the gratification of seeing our words in print and a renewed sense of purpose. Then, collectively, we waited, just as before, for the facility to let the professors back in to cultivate our new community. Again, we were reminded of how good things in these places are rarely allowed to come back once they’ve left.

During those early classes I formed a friendship with Chris Cabrera, a genius young artist with whom I shared similar lofty aspirations for both our work and our lives. We spent hours conversing and arguing over the creative and intellectual visions we had. Cabrera would shout these big, abstract rhetorical questions at me, one after another, as we tried to figure out what so many more years as artists in prison would look like without fundamental change. We argued whether art was enough to free us, and to what extent we might go to make our dreams reality—or if it would even make a difference in a system that had pretty much always disregarded our work and our humanity. In the end, I think we agreed that neither of us wanted to disappear without the chance for our work to be realized, or at least the chance for it to be recognized and embraced by the people about whom we cared most.

Chris envisioned an ongoing writing program facilitated mostly by a collective of incarcerated writers. Ideally, it would harness resources so that it could offer writing classes and opportunities throughout a writer’s incarceration. I thought it was a great idea, but our experiences with administration and abandonment in the past made me suspicious of programming in these places. I wanted to publish and to have a career, even if it had to be behind these walls. I was working on a book project and was constantly worried something administrative would mess it up. We both argued that a collective couldn’t work unless we were ultimately reconnected to the greater, free-world literary community to which we had very little introduction. It was lofty thinking for guys who had sparse writing credits between them, and who really had no formal writing instruction outside an early creative writing course. Our experiences with Dr. Appleman, though, had empowered much of our thinking. Why not think big? Another writer from our community and I had just won the PEN Prison Writing Awards. Why shouldn’t we believe our work and our community had a right to be cultivated?

It was from these conversations that the Stillwater Writers’ Collective (SWC) was born, out of an agreement that our power was as a community, and a realization that if we didn’t support each other, who would? We also realized that it was hard to get our peers, even when they are threatened, to write when there aren’t instructors to read and validate their work. Historically, there just hadn’t been enough support or success in our prison system to warrant that kind of confidence.

The SWC was also created because our small cohort agreed that, at some point, someone or something was going to come along with opportunities that we had been waiting for throughout the long stretches of our collective incarcerations. There was agreement that as a community we would need to be ready so that the blessing we felt was supposed to be ours wouldn’t get passed along to somebody else. We believed it would be a crime for the story of writing in the Minnesota state prison system to be told, or written, without us. Just as the foundations of these old structures had been laid by the hands of the imprisoned, we were trying to lay a new literary and intellectual foundation. We were fortunate to have the support we needed from our then-education director, who introduced Jen Bowen and Minnesota Prison Writing Workshop (MPWW) to us, and whose own vision made for an ideal partnership for the community at Stillwater, and throughout the state, to grow into what it has.

American Precariat: Parables of Exclusion is the culmination of a special partnership between Coffee House Press and the Minnesota Prison Writing Workshop with an editorial board starting with twelve writers from the prisoner-created collectives of the Minnesota Correctional Facilities at Stillwater, Faribault, and Moose Lake.

For the past decade, MPWW has provided a first-of-its-kind ongoing writing program within Minnesota state prisons. What started from a single creative writing course taught by the organization’s founder, Jen Bowen, has expanded from one facility to every prison in the state. The program offers a wide range of writing classes at all levels of the learning spectrum, as well as an extensive mentorship program. The workshop has become a model admired by potential prison writing programs across the country.

Before MPWW, there was already a burgeoning community of talented, but mostly unrecognized, artists and writers incarcerated in the state of Minnesota. MPWW’s presence offered opportunities and resources to meet and take instruction from the larger literary community in the state, helping us to grow into a stronger community and to develop as individual writers. The relationship between MPWW and the incarcerated writing community has produced numerous awards and countless publishing credits for many of the workshop’s students, as well as for many of the incredible writers that make up the MPWW instructor staff and mentor program.

The twelve members of this book’s editorial staff are a small group of the much larger collectives that have grown up in our state, and throughout the country, in the sense that writers and artists always find each other in these kinds of spaces. There are creation stories that connect to make this community possible.

Most of us on the editorial board of this project recognize how exceptional it is to have the opportunities MPWW provides. It affords us agency in our work that most incarcerated writing communities in the country do not share. Writing communities have existed, and do exist, in other prison systems that don’t have the same kind of programming infrastructure that we have in Minnesota. Ever since human beings began using confinement as a means to control other human beings, there have been writers imprisoned. Writers have risked their safety and their futures to find ways to sneak their words out into the world. The written word matters. Just as likely—and for just as long—writing and intellectual communities have existed in those spaces. Just like we did, artists will always find each other. It’s like a law of nature—if you put a thousand people in a single space, the artists, even with their own divergent energies, will gravitate toward each other.

Time in the life of a writer, or a prisoner, is an emergency. Incarcerated writing communities provide for us what we can only assume they offer to non-incarcerated writing communities: peer support, friendship, competition, rivalry, and shared stakes in the success of their members. These communities offer reminders of time and the emergencies time represents. Classes get canceled and cut. In 2005, our whole education department shut down for months and every computer in the joint was wiped and scoured. Stories, essays, poetry, and even an anthology of our work disappeared from the universe. There are lockdowns and seizures of materials, sometimes intentional and sometimes collateral. There are surprise transfers that leave us without computer access, and we must figure out how to keep the things we need most. We, who are working hard to mend some of the wounds in the social and familial fabric of our lives, live with a stopwatch to create evidence that will show something redemptive within us.

I published my first memoir, This Is Where I Am, after 17 years in prison with the support of my small but unified family unit. Less than a year later, my mom passed away. She was my last living blood relative. Deadlines, story and book completions fulfill the need to have whole pieces of writing that can speak for the incomplete parts of our lives and families. They are our main emergency.

We build community because we can’t expect, demand, or control the machinations of the captivity business. Likewise, we can’t be sure that the politics of confinement will provide the spiritual and artistic resources we need to transcend our encagements. These collectives are our expression of both community and art. They provide our agency. The carceral state will not feed the kind of hunger an artist in these kinds of places experiences. So, we find ways to feed each other. There is a ceiling to the kinds of programming corrections provides, and this includes education. A member of the collective (and the editorial board) connected me with the right people to be able to finish my bachelor’s in English when the prison system was unable to help me. Most of the computer labs in the system were originally proposed, and in many cases set up, by members of our community who knew their value. There is a constant nourishing in the books and magazines we pass around. There are the friendships—the several successions where one member will encourage the work of a newer writer to keep revising, because they see the genuine value, and then, later, they see these stories win awards or find publication in reputable journals. There are also the rivalries, so strong and ingrained into the history of collectives. They have driven some to become the writers they were never sure they were supposed to become. We join forces because individually we are writers and poets and artists, but collectively we are power and possibility and refutation of the hypocrisy of the carceral complex.

Does your life matter? Does your art matter? I hope so. I know that I could never rely on an ever-constricting prison system at a pivot point of mass incarceration to answer these questions for me.

There is great significance to a panel of incarcerated writers editing an anthology on the precarious class in 2023. We, the editors, are the same population who have been tweaking and revising our work so that our voices might gain acceptance into the journals and anthologies we’ve hoped would validate our efforts. We are trying to make greater sense of our place in the larger, broader world. It matters that this is a volume edited by the imprisoned, because the history of class hasn’t always been written by the powerful, but they have always been its editors. We are a group of human beings who sought out community to consolidate the power of our own work; we, the incarcerated, are editing this most recent chapter on class. As a group, we have come to understand, or have tried to understand, power and class distinctions through the ways we have, as an incarcerated community, categorized and divided ourselves. Incarceration is the extension of the same mechanisms of power and marginalization that Black, brown, queer, and impoverished human beings have been manipulated and oppressed by through the institutions of our society. We are the depository of that pipeline.

Just as the largest of corporations believed that they could drop sewage into nearby rivers, or bury our human footprint in a landfill or in a plastic swirl in the oceans, without the earth spitting its truth back at all of us, we dispose of human problems into the chasm of the penal system without confronting the socioeconomic circumstances that created the problems in the first place. The power dimensions that are at once manipulative, deceptive, and plain old mean are also cowardly and speak to the fragility of the human place in the ecosystem. We have felt for so long—and our social and economic systems support the belief—that human beings must control each other to control the world.

As a broader, new American society in the wake of a global pandemic, we’ve now felt the soft incarceration of being sequestered, a fear of being trapped, and a fear of catching invisible sickness with uncertain consequences. The trapped analogy is obvious. The pathologies in all forms—viral, bacterial, psycho-sociological—well, we’ve been passing them back and forth unknowingly for generations until we are too sick to know any better. We watched, from inside and out, as a knee pushed on a neck and the stop-clock-emergency-of-time ran out, and then, like so often in our history, we have watched the fire and the rage. We bite down because we know that the violence of taking a person’s time and all their hope can’t be represented in a short video clip on TV, or even elicit the flash of rage such violent taking should.

During the course of this project, our editorial board went through two cohorts—the first, pre-pandemic, totaled twelve individual editors in three separate correctional facilities, while the second consisted of a much smaller concentration of editors. Covid-19 did just what time in these places does—change and complicate things further. There were expected and unexpected transfers, incongruent security priorities, and lockdowns that made it impossible for our cohorts to meet, so we had to depend on individual institutions to relay memos and manuscripts. Institutions have never been known for an ability to make adjustments to benefit the humanity of their inhabitants. In the pandemic, prisons reverted to the answer they knew best—tightened security. Our project went from finding its purpose and personality to being frozen indefinitely—and that continued well beyond when the rest of the world started to open and venture out again. Significant effort was made to keep up momentum, but it was extremely difficult to keep twelve humans, all separated in different carceral compartments, connected to each other and to a changing outside world. When we did come back to this work, we were without members from both cohorts, and access to the entire group from Stillwater was cut off. We were left with the cohort from Faribault, with participation from a couple of transferred editors in an entirely different facility in Moose Lake. And by that time, the entire world had transformed. Editing a book about class looked, felt, and tasted exponentially different.

. . . .

In so many ways, prisons are secrets hidden from the rest of the world. Society has always hidden its most disturbing transgressions. Yet, culture still matters in these hidden spaces. We, the incarcerated, are the caretakers of it. If a prison is old enough, it remembers the prisoners that quarried the granite for its walls, or laid the bricks for its cell blocks that we have spent a century inhabiting. The incarcerated have always been more expendable than the buildings that house us, but our ideas echo long after we have left our initials scratched into old slabs of inmate-laid concrete, or scribbled on the walls of holding tanks. The state may maintain the institutions, but we nurture the culture, always—we, the artists, students, musicians, and writers. Prison writing communities are proof of a force stronger than single unread poems or stories. They are proof that there are more of us coming!

Link to the rest at Electric Lit

PG hasn’t had a client charged with or convicted of a crime for a very long time, nor has he had any reason to visit a client in jail or prison. He is thankful because these were very difficult cases for him to deal with.

All of PG’s criminal clients had committed a crime of one sort or another. While there have been a number of cases in which a miscarriage of justice has occurred and an innocent person has been punished as a criminal for a crime he/she did not commit, such cases represent a vanishingly small percentage of the total number of people charged with a criminal offense.

Most people charged with a crime have done something wrong. There might be compelling extenuating circumstances that provide an excuse for the individual’s criminal acts. Or the person may have been charged with a crime more serious than would be justified by her/his actual behavior.

Part of the reason criminal cases were almost always difficult for PG to deal with is that every person charged with a crime has parents, children, spouses and/or friends who are devastated by the criminal charges that have been brought against someone they care about deeply. Bad people may hang around other bad people most of the time, but PG never handled a criminal case where there were not some good and innocent family members who were deeply saddened by the criminal charges.

If the loved one is incarcerated for an extended period of time, the heartache often lasts for an extended period of time.

Reading the OP brought to mind another book with a similar theme.

Sustaining vs. Disruptive Innovation: What’s the Difference?

From The Harvard Business School:

Innovation is on the minds of professionals across industries, and rightfully so. Strategizing for innovation can enable businesses to provide customers with continued value, create new market segments, and push competitors out of segments they once owned.

According to a recent McKinsey Global Survey, 84 percent of executives feel innovation is extremely or very important to their companies’ growth strategies. When formulating a business strategy, understanding the different types of innovation can help you conceptualize your business’s place in its industry, identify what your current innovation strategy is and if you’d like to change it, and recognize competitors’ innovation strategies.

These insights can inform innovation strategies that drive purposeful, proactive product decisions to disrupt an industry or avoid being disrupted by another organization.

The two types of innovation are sustaining and disruptive. Here’s a breakdown of each, the key factors that differentiate them, and the importance of incorporating disruptive innovation into your strategic mindset.

What Is Sustaining Innovation?

Sustaining innovation occurs when a company creates better-performing products to sell for higher profits to its best customers. Typically, sustaining innovation is a strategy used by companies already successful in their industries. The motivating factor in sustaining innovation is profit; by creating better products for its best customers, a business can pursue ever-higher profit margins.

One example discussed in the online course Disruptive Strategy is the introduction of laptops in the computing industry. Laptop computers were a sustaining innovation that followed the personal desktop computer. The computers’ qualities and abilities were roughly equal, with the laptop offering novel portability. This leveled-up version of the same product catered to desktop users willing to pay for the increased flexibility the laptop provided.

In a vacuum, relying on sustaining innovation is a sound strategy that involves continually creating better versions of your product to gain higher profit margins from customers who are willing to pay. Yet, some of the most successful companies built on sustaining innovation fail.

“Why is it that good companies run by good, smart people find it so hard to sustain their success?” Harvard Business School Professor Clayton Christensen asks in Disruptive Strategy. “In our research, success is very hard to sustain. The common reason why successful companies fail is this phenomenon we call ‘disruption.’”

. . . .

What Is Disruptive Innovation?

Disruptive innovation—the second type of innovation and the force behind disruption—occurs when a company with fewer resources moves upmarket and challenges an incumbent business. There are two types of disruptive innovation:

  • Low-end disruption, in which a company uses a low-cost business model to enter at the bottom of an existing market and claim a segment
  • New-market disruption, in which a company creates and claims a new segment in an existing market by catering to an underserved customer base

Both types of disruptive innovation cause the incumbent company—which relies on sustaining innovation—to retreat upmarket rather than fight the new entrant. This is because the entrant has selected a segment (either at the bottom of the existing market or a new market segment) in which profit margins are relatively low. The incumbent company’s innovation strategy is driven by higher profit margins, causing it to pull out of the segment in question and focus on those with even higher profit margins.

As the entrant’s product offerings improve, it moves into segments with those higher profit margins. Once again, the incumbent company is motivated to retreat upmarket rather than fight for the lower-profit market segments.

Eventually, the entrant pushes the incumbent out of the market altogether, having improved its product so much that it claims all existing market segments or renders the incumbent’s products obsolete.

Returning to the example of the computing industry, the introduction of smartphones was a disruptive innovation, specifically new-market disruption. Smartphones catered to a new market segment of customers who didn’t need the level of capabilities offered by a laptop—basic, convenient internet access at a fraction of the cost of a desktop or laptop computer was enough. As the quality of smartphones improves, the laptop and desktop may be pushed further upmarket and, eventually, into obsolescence.

Link to the rest at The Harvard Business School

PG has been interested in disruptive technology/innovation ever since he first heard the term a very long time ago.

The term disruptive technology was originally coined by Harvard Business School professor Clayton Christensen in 1995 and further expounded in his book The Innovator’s Dilemma in 1997. In the follow-up work, The Innovator’s Solution, he replaced the term with disruptive innovation.

Hunting the Falcon

From The Wall Street Journal:

“Divorced, beheaded, died, divorced, beheaded, survived.” So were schoolchildren once taught in order to remember the fate of Henry VIII’s six wives. Anne Boleyn, the second one (that would be “beheaded”), was by far the most interesting and intelligent, the only one of the six who engaged actively in politics, and the only one whom the monstrous Henry ever loved.

To understand Anne’s story it is necessary first to understand Henry, and John Guy and Julia Fox, husband-and-wife authors who have each published previous works of Tudor-era history, give a compelling portrait of Henry in “Hunting the Falcon,” an absorbing chronicle of the king’s marriage to Anne Boleyn.

As a young king, Henry was intelligent and glamorous but “over-indulged by a doting mother and over-protected by an autocratic father,” the authors write. He grew into “a narcissist who saw exercising control as his birthright, a man who never accepted blame for his own actions and always looked for scapegoats.” The Golden Boy became a sullen, terrifying brute, England’s Stalin.

Anne was neither royal nor noble. She belonged to the rapidly rising gentry class. Her father, Sir Thomas Boleyn, was a greedy and ambitious public servant, employed sometimes on diplomatic missions, an able man who later proved despicable when his daughter’s fate hung in the balance.

In her youth, Anne was sent to France—a sort of finishing school. There she joined the retinue of Henry’s sister Mary, who had, for reasons of state, been affianced to the “gouty, toothless, libidinous” widower Louis XII. Anne would end up spending seven years in France, much of it at the royal court. She became an accomplished young lady and was always a Francophile, encouraging Henry to ally England with France rather than Spain.

Henry fell in love with Anne not long after her return to England in 1522. Her elder sister Mary had already been Henry’s mistress—Mary is the headliner in Philippa Gregory’s 2001 novel, “The Other Boleyn Girl”—but Anne held out for marriage and would do so for several years, a remarkable feat.

As we know from the many popular treatments of this story, a marriage between Anne Boleyn and the king would be possible only if Henry’s marriage to Catherine of Aragon could be annulled. When the king’s chief minister, Cardinal Wolsey, failed to achieve this end, Anne urged Henry to dismiss him. He needed little urging, but readers of Hilary Mantel’s Thomas Cromwell novels will remember that the royal adviser’s loyalty to the fallen cardinal made him Anne’s enemy—even though, in politics and religion, they were as one. Anne favored religious reform, like Cromwell, though she was never a Protestant. She is better described as an evangelical Catholic. According to Anne’s chaplain, the authors write, “her apartments were hives of evangelical piety with her ladies reading the English Bible and sewing clothes for the poor.”

The break with Rome, engineered in part by Cromwell’s management of Parliament, finally made the marriage possible. In sure anticipation of the annulment—delivered by the new archbishop of Canterbury, Thomas Cranmer—Anne had at last gone to bed with Henry. She was already pregnant when the marriage took place, in January 1533, after which the still-doting Henry arranged a magnificent coronation for his true and only queen. “Anne was determined,” the authors write, “that everyone who mattered should attend,” though Thomas More chose to stay away, feeling that he couldn’t grace the coronation of a queen “he believed to be living in adultery.” The child proved to be a disappointment, a daughter (the future Elizabeth I), not the son Henry craved.

Anne, the authors stress, was never popular. Some called her a whore and were hanged for their impudence. She was a political power as no previous queen had been, but the security of her position and influence depended on her giving birth to a son. Two miscarriages made her position perilous, all the more so because Henry was wearying of her public activity—she “pushed hard for her people,” Mr. Guy and Ms. Fox write, aiming to fill posts and secure preferments. The adored mistress was becoming a tiresome wife. Henry already had his eye on a replacement, a demure girl named Jane Seymour. He wanted to be rid of Anne.

Cromwell was ready to oblige. Anne had been careless, allowing men to mingle with women in her apartments in the style of the French court. Cromwell first seized Mark Smeaton, a young musician reported to have looked longingly at the queen. Cromwell sent him to the Tower to be tortured. Naturally a confession followed. There were other suspects, among them Anne’s brother, George. Materials for a trial were quickly assembled. Anne’s contemptible father, Sir Thomas, escaped the purge by, as the authors put it, “consenting to condemn her.” Anne, briefly hysterical on first being admitted to the Tower, recovered her spirit, but the trial was a grisly farce, a well-managed show trial.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Emperor of Rome

From The Wall Street Journal:

In June of the year 68, the emperor Nero, on learning that the Roman Senate had declared him a public enemy, plunged a dagger into his throat (with the loyal assistance of his private secretary). A generation later, the emperor Domitian was hacked to death by a handful of palace aides in what was a rather messy job. A century on, Commodus was strangled in the bath by his personal trainer (Plan B for the conspirators, following a botched poisoning). In 235, in a more conventional military coup staged on the Rhine frontier, the emperor Severus Alexander was cut down (“clinging to his mother,” according to a contemporary historian) by soldiers ready for a change of boss.

Assassination was the main occupational hazard of ruling the Roman Empire: More than a dozen Roman rulers met violent ends between Julius Caesar in 44 B.C. and Severus Alexander. What makes the above list notable is that each of these deaths also spelled the end of a family’s tenure in power. Roman imperial dynasties were evanescent, too.

“The corridors of power” in ancient Rome “were always bloodstained,” observes Mary Beard in the opening pages of “Emperor of Rome,” her study of the emperors who ruled antiquity’s most famous empire. The rotating door to the empire’s halls of power forms a contrast with the durability of the imperium as a political formation.

Ms. Beard, a retired professor of classics at Cambridge, has become the most visible face of classics worldwide. On this side of the pond, she is a public intellectual. On the other side, she is that and more: a celebrity. Her gifts for putting serious scholarship into accessible terms, for bringing a critical eye to the study of classics without being a scold (while still making the study of the ancient world seem entertaining), have translated well to TV and a spate of books admired by specialists and the wider public alike.

“Emperor of Rome” is billed as a sequel to her blockbuster, “SPQR,” which also treated the Roman Empire in its later chapters. Unlike its predecessor, “Emperor of Rome” has no chronological narrative. It looks at the rulers of Rome through the prism of 10 separate themes, from “power dining” to imperial travel, as Ms. Beard returns to subjects she has treated throughout her career (imperial portraiture, Roman triumphs, deification). Each of the themes offers a vivid way to re-examine what we know, and don’t, about life at the top.

For all its detail and diverse interests, the book’s unifying argument might be that it is very hard to grasp the truth of what the emperors were actually like. Instead, we have a tissue of propaganda and gossip, sycophancy and slander. “The image of Roman emperors that has come down to us,” Ms. Beard writes, “is a complicated and multilayered construction: a glorious combination of hard historical evidence, spin, political invention and reinvention, fantasies of power, and the projection of Roman (and some modern) anxieties.”

As Ms. Beard points out, despite the proliferation of the emperor’s image that began in the reign of Augustus (died A.D. 14), it is hard to know what the emperors even looked like. Domitian, for instance, had a “pot belly, thin legs, and hardly any hair,” Ms. Beard writes. His baldness was a touchy issue, and he wrote a manual on hair care. None of this is reflected in the generously coiffed busts that survive. The deeper point is that image should not be mistaken for reality.

The reputation of a Roman emperor depended inordinately on whether the next emperor needed his predecessor to have been remembered as “good” or “bad.” An emperor was judged by how he balanced the delicate, ambiguous expectations laid upon those who wielded absolute power. He was to be generous but not profligate, sympathetic but not soft. His comportment at table, in the circus, even in bed was a reflection of the anxieties and fantasies of the ruling class. In his role as commander in chief, the emperor’s obligations were clearer: to win in battle and to share in the toughness of the common soldier.

In other words, the emperors of Rome were the wealthiest and most powerful people in the world, tightly constricted by the protocols that governed the proper exercise of authority. The emperors were never outright monarchs—as they were at pains to emphasize, distancing themselves emphatically from titles that implied kingship. They continued to share power with the Senate and the army; in fact, their power depended on how well they could orchestrate these institutions. The emperors, descendants of the warring oligarchs of the late republic, inherited a militaristic culture and a sprawling empire stretched tautly to its limits. They governed all this with a skeleton crew of administrators, many of them enslaved or free members of their household staffs, often via letters that moved at the speed of horse.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Rivals

From The Wall Street Journal:

In 1619, René Descartes resolved to transform the study of philosophy—a broad ambit that at the time embraced mathematics and the study of nature. He hoped to deduce all observable phenomena—the paths of planets, the beating of the heart—from a few foundational laws or principles. After years of effort, and despite triumphs such as the invention of analytic geometry, he conceded defeat. As he sought to explain events that were increasingly subtle (“plus particulières”), he realized that he needed more data and that experiments—many of them—would be required. But he had little interest in engaging with other researchers or relying on the assistance of volunteers (who would distract him with “useless conversation”). Descartes, explains the historian of science Lorraine Daston, “was probably the last major thinker to believe that science could be conducted in splendid solitude.”

A way had to be found for doing science—or, as it was termed, “natural philosophy”—in a collaborative form. How would it be possible to balance the benefit of working together with the challenge of tolerating competitors? In “Rivals,” a compact and elegant primer, Ms. Daston leads us through the evolution of scientific collaboration over the past 350 years.

Intellectual communities, she reminds us, existed even in ancient times, focusing on “the transmission of knowledge . . . deepened and broadened by centuries of commentary and criticism.” The observations within Pliny the Elder’s “Natural History” and Ptolemy’s studies of astronomy, Ms. Daston explains, were diligently recycled through generations. What arose in the 17th century was something more expansive and original: an effort at coordinated empirical inquiry.

After several attempts at collective authorship failed miserably—a group of anatomists, for instance, descended to name-calling at their meetings—scholars tended to work on their own but correspond extensively. Even so, “animosity, not amiability” set the tone, Ms. Daston says, and savants readily savaged their rivals in public.

Some collaborations did at least get off the ground. One sought to measure the distance between the earth and sun by scrutinizing the so-called transit of Venus—the planet’s rarely seen movement across the disc of the sun, which occurred in 1761 and again in 1769; another focused on discovering the common features of weather across the globe. While neither effort was all that productive—not least because of the difficulty in transporting fragile instruments and the variability in measurement technique—both stimulated the interest of a far-flung community of scholars.

The 19th century brought dramatic improvements in transport and communication, allowing distant scientists to connect and collaborate. One surprising source of inspiration: the Universal Postal Union, which harmonized international mail delivery. Established in Bern in 1874, it was, Ms. Daston writes, “the paragon of successful international governance, and the one utopian scheme that really worked.” It was organized by postal leaders (not government) and was composed of specialists who were immersed in quotidian challenges and had a stake in the outcome.

Several international efforts soon adopted a similar model. The Carte du Ciel aimed to map the stars, while the International Cloud Atlas aimed to catalog cloud patterns in order to improve weather forecasting. The success of both projects, Ms. Daston says, reflects the importance of a charismatic organizer, who functions as a “scientific diplomat.” She draws a portrait of Swedish meteorologist Hugo Hildebrandsson, who spoke three languages and traveled tirelessly championing the ICA, hearing out colleagues and smoothing ruffled feathers. She also highlights the sense of rapport among the expert participants—often developed over extended meals with flowing drinks. One enthusiastic ICA participant, Britain’s Robert Scott, recalled its bonhomie and “good fellowship,” suggesting, Ms. Daston writes, that it was “in the fostering of this feeling, much more than in the discussion of abstruse scientific questions, that the real value of these international gatherings is to be found.”

One of the enduring achievements of 19th-century internationalism was the standardizing of weights, measures and nomenclature, arrived at mostly in meetings of disciplinary specialists. Disciplines, Ms. Daston explains, had emerged as the foundational unit of scientific training and practice as universities, starting in Germany, shifted their focus from transmitting knowledge to acquiring it. Scholars strove to publish in specialized journals that “forged disciplinary reputations, standards of evidence and rigor, and accepted doctrine, often in the crucible of fiery debate.”

Over the past 75 years, we’ve witnessed an “explosive growth of science along all dimensions,” Ms. Daston reports. Alongside this expansion has arisen a concern about oversight. Since the 1970s, the peer-review process has been formalized, and a broader range of experts now evaluate manuscripts and funding applications. Keeping up with the volume of submissions has become an abiding challenge.

The sheer size of science has resulted in a reliance on quantitative metrics, which, Ms. Daston says, “replace expert judgement with benchmarks.” The Hirsch Index, for example, seeks to distill a researcher’s productivity and impact into a single number. There is always the danger of gaming the system—e.g., reviewers insisting on the inclusion of their own publications in the work they are being asked to scrutinize.

While the scientific community “was never exactly a peaceable kingdom,” Ms. Daston writes, “it was held together by a hunger for recognition that only other experts could confer” and by “face-to-face relationships among its members.”

Link to the rest at The Wall Street Journal

The Forgers

From The Wall Street Journal:

By 1940 the possibility of escape had become so small for the millions of Jews trapped in Nazi-occupied Europe that even the most outlandish-sounding gambit seemed worth a try. One such ploy was to obtain a forged Paraguayan passport. Far-fetched though that plan may have been, the exit papers helped somewhere between 800 and 3,000 Jews flee across Nazi borders and survive the Holocaust.

Put into perspective, that makes the tactic among the most successful wartime efforts to free Jews from the Nazi death machine. (Perhaps the best-known strategy, organized by the German industrialist Oskar Schindler, saved approximately 1,200 Jews.) Yet, as we learn in Roger Moorhouse’s valuable but uneven chronicle, “The Forgers: The Forgotten Story of the Holocaust’s Most Audacious Rescue Operation,” for years the Paraguayan-passport scheme remained virtually forgotten. Mr. Moorhouse’s subject thus encompasses a double mystery: how the operation worked and succeeded, and why it seems to have disappeared from the historical record. The author does better explaining the former than the latter.

Of the many threads that weave through the story, perhaps the most intriguing describes the unlikely bedfellows who masterminded the operation: a Swiss-based group of officials from the Polish government-in-exile, working in tandem with Polish-born Jewish community leaders who had found refuge in Switzerland. Then, as now, the very notion of a joint Polish-Jewish humanitarian project can seem surprising against the backdrop of Poland’s long history of often-violent antisemitism. But in numerous documents cited by Mr. Moorhouse, a British World War II historian and the author of “Poland 1939,” the Polish government-in-exile affirmed its intention to protect all Polish citizens, Jewish or otherwise, from Nazi persecution.

The declaration was humanitarian, but the intention was also strategic. Jan Karski—the Polish resistance fighter sent by the government-in-exile on clandestine trips into Nazi-occupied Poland to report on conditions there—warned in early 1940 that Polish antisemitism could “create something akin to a narrow bridge” that would align Polish citizens with Nazi aims and persuade them to collaborate. To counter that, Karski advised building “a common front . . . an understanding that in the end both peoples are being unjustly persecuted by the same enemy.” Karski would later smuggle out his eyewitness accounts of the harrowing conditions in Warsaw’s Jewish ghetto and at Izbica, the transit camp from which Jews were sent to the Bełżec death camp. He would travel to Washington and London to attest to the mass murder of Jews by cyanide at Auschwitz. To save the Jews of Poland, these reports made clear, additional Allied intervention would be needed.

That help did not arrive. In the breach, the men who would launch the Paraguayan-passports scheme found each other. Their team, called the Ładoś Group, was led by the career Polish diplomat Aleksander Ładoś, who came to the Swiss capital of Bern in 1940 to serve as the unofficial ambassador of Poland’s government-in-exile. In addition to being “positively predisposed toward Poland’s Jewish populations,” Mr. Moorhouse tells us, Ładoś “was not a man overly burdened with respect for the legal sanctity of official documents.” He was the perfect candidate to helm the covert operation.

Rudolf Hügli, a Swiss notary and an honorary consul of Paraguay, would, for a fee, provide the blank passport documents. Joining Ładoś in providing cover for and operating the scheme were three consular colleagues: Stefan Ryniewicz, his deputy; Konstanty Rokicki, whose responsibilities included handling passport applications and often filling in the names of the passport bearers; and Juliusz Kühl, an attaché who himself was Jewish and was the legation’s representative for Jewish affairs.

Kühl became the liaison to several representatives from Jewish aid organizations, most notably Abraham Silberschein, a former member of the Polish Parliament who had helped found the Relief Committee for Jewish War Victims, and Rabbi Israel Chaim Eiss, a founder of the Orthodox Agudath Israel movement. These two community leaders, along with several others, took charge of spreading word of the passport scheme through the ghettos and camps. They also oversaw the coded correspondence used to collect individual information and photos for the fake passports, and managed the passports’ delivery.

After a German spy exposed the scheme in 1943, Swiss authorities arrested or sanctioned most of the group and no more passports were issued. But the documents that had already been distributed still helped some survive, Mr. Moorhouse explains. As the war neared its end, reclaiming soldiers to fight for Hitler took precedence in Germany over vetoing forged passports. And so Germany sought a deal with the Allies to trade German prisoners of war for those Jewish camp inmates who held papers, valid or not, from countries beyond the Reich. Even so, the negotiations were so prolonged that many Jews died while awaiting their release.

Link to the rest at The Wall Street Journal

The Canceling of the American Mind

From The Wall Street Journal:

We’re in a terrible spot, and everybody knows it. Americans on the right and left detest each other, excoriate each other and, with every flaring of rage, move further from any sense of pluralistic common cause. Citizens have lost confidence in officialdom. Fashionable ideologies that brook no good-faith dissent have surged into every corner of life. Make a minor demurral, even a joke, and you risk being subjected to the ghastly nullification rituals of what is called cancel culture.

It is this predicament, all of it, that Greg Lukianoff and Rikki Schlott address in “The Canceling of the American Mind,” a lucid and comprehensive look at where we are and how we got here, and, less persuasively, what we can do to make things better.

The authors do not merely analyze; they are in the fray. Mr. Lukianoff is the president of the Foundation for Individual Rights and Expression, which defends free speech in workplaces and on campus. He is also co-author, with New York University’s Jonathan Haidt, of “The Coddling of the American Mind,” an important 2018 book about emotional fragility among young adults. Ms. Schlott’s college interest in “Coddling” eventually brought her to FIRE; she’s also a columnist for the New York Post.

“Cancel culture” is an imperfect term, but its meaning is well understood: incidents of public shaming and professional defenestration, often ginned up by activists high on their own sanctimony. “Cancel Culture has upended lives, ruined careers, undermined companies, hindered the production of knowledge, destroyed trust in institutions, and plunged us into an ever-worsening culture war.”

When did it start? The authors say 2014, when cancellations “exploded” in higher education. By 2017, following the outward migration of campus groupthink, canceling moved into art, publishing, comedy, journalism and, more recently, medicine, science and law.

Many of the authors’ examples will be familiar to habitués of Twitter—lately renamed X. It is the public square where witches are burned. It is also where witches, on the pyre, frantically recant and promise to “do better.” These apologies never save the victims, though they probably add to the enjoyment of their persecutors.

Politically, the authors strive to be evenhanded. They identify cancelers on the right (whose targets include progressive college professors and Republicans who fall afoul of Donald Trump) and fault thinkers such as Harvard law professor Adrian Vermeule for fostering illiberality. Regrettably, they endorse the claim that conservatives are engaged in “book banning,” though most of their examples involve the curation of what’s on offer in public-school libraries. It is irksome that the authors don’t acknowledge that, in truth, no book is banned in the U.S.; you can buy whatever you want to read.

Efforts to address both sides notwithstanding, “The Canceling of the American Mind” leaves little doubt that cancel culture is primarily a tool of the left. Mr. Lukianoff and Ms. Schlott parse the Catch-22 tricks used to put targets in the wrong. They write of a “perfect rhetorical fortress” that allows righteous lefties to dismiss anyone for anything, to attribute to anyone a “phobia” or an “ism,” and to claim that the person in question is inflicting injury. What the authors call “thought-terminating cliches,” such as “dog whistles” and “punching down,” add to the weirdly chant-like hyperbole when cancelers get going. Jennifer Sey, fired from Levi Strauss for objecting to the Covid lockdowns, was smeared as “a racist, a eugenicist, and a QAnon conspirator.” James Bennet, dislodged from the New York Times for publishing an op-ed by Sen. Tom Cotton, was said to have made Times readers “vulnerable to harm.”

Someday it may be funny—actually it’s funny now, in a horrible way—that a Yale lecturer’s mild surprise at finding an artisanal coffee shop in rural Ohio could be spun into accusations of “dehumanization” and used to try to get her axed, as happened to Sally Satel when she came back east after a year of field research into the opioid crisis. Or that a teenager’s incautious tweets could destroy a woman’s career a decade later, as happened to Alexi McCammond when she was offered the top job at Teen Vogue. Or that some black intellectuals are told by progressives that they’re “not really black,” as John McWhorter has found.

It’s all a reminder of the sheer nuttery that has engulfed us since we all got online. Political correctness predated the laptop and the smartphone, as Mr. Lukianoff and Ms. Schlott remind us, but it took social media to bring the mass pile-ons of cancel culture.

In the final third of “The Canceling of the American Mind” the authors offer solutions. They call for the reinvigoration of free-speech culture, something 20th-century Americans will remember. Parents should teach the golden rule and a genuine appreciation for viewpoint diversity. Corporate executives should establish free-speech standards, get human resources on board, and, if a controversy blows up, avoid large meetings that might “devolve into browbeating struggle sessions.”

Link to the rest at The Wall Street Journal

The Rigor of Angels

From The Wall Street Journal:

In an era defined by anxiety, it would seem only natural that we should hanker after the eternal verities, as a bulwark against the threats and confusions that daily beset us. However, William Egginton’s “The Rigor of Angels: Borges, Heisenberg, Kant, and the Ultimate Nature of Reality” is here to assure us that not only is uncertainty built into the deepest structures of reality, but that we should gladly accept this fact, and be content with the limitations of our capacity to understand and absorb the world. As the author says of his three seemingly unlikely bedfellows, they “shared an uncommon immunity to the temptation to think they knew God’s secret plan.”

Why does Mr. Egginton, who teaches literature and philosophy at Johns Hopkins University, yoke together a writer of fiction, a quantum physicist and an Enlightenment philosopher? The common thread he finds running through the thought of all three might be called affirmative skepticism, a focus on the idea that the nature of things—the nature of nature—is unknowable in the ordinary sense. Instead, we play an active role in “creating our own reality.”

This is not so outlandish a claim as it might seem. Knowledge, says Mr. Egginton, is “our own way of making sense of a reality whose ultimate nature may not conform to our conceptions of it.” How do we understand the reds in a Vermeer painting, the furred skin of a peach, a Beethoven crescendo? Since the mind itself is deeply involved in generating such particular, elusive experiences, “is it not possible, likely even, that the other phenomena we encounter have a similar origin?” By “other phenomena” the author means our commonplace, day-to-day doings—eating, sleeping, working. Are we complicit in, and necessary to, what the medieval philosopher Duns Scotus called haecceity, the this-ness of the world in which we have our existence?

Early on, Mr. Egginton delves into the work of a 22-year-old Jorge Luis Borges, on the brink of an artistic venture that would set him among the immortals. Obsessed with time and memory, the young Argentinean writer realized, as had Immanuel Kant before him, that there are no moments of time, only a continuous flow. Mr. Egginton writes: “The conceit of slowing time down to a single frame, honing the moment of an observation to a pure present, destroys the observation itself. The closer we look, the more the present vanishes from our grasp.”

The implications of this insight are far-reaching, and undermine traditional notions of our being in time. In Zeno’s paradox of the race between Achilles and the tortoise, the former can never overtake the latter because he has to pass through infinite subdivisions of distance, each requiring its own fraction of time to be traversed. But this is only the case if time can be broken down into an infinity of segments, and it cannot—it is a continuum. So the Greek warrior streaks past the poor old shambling reptile.

In Borges’s story “Funes the Memorious,” a young man suffers a head injury that gives him the ability to recall every detail of everything he experiences. He can reconstruct an entire day from the past—but it takes him a subsequent day to do so. And afterward he will remember the day in which he reconstructed that previous day, ad infinitum. What Mr. Egginton calls an “utter perfection of perception” is utterly stultifying. In order to perceive at all, the observer must “generalize, ever so slightly, and connect the difference between two moments in space-time. Without this slight blur . . . all there would be is an eternal present.” We must fool ourselves into thinking that time is granular.

What all three of Mr. Egginton’s subjects recognized was that much of our understanding of reality is in fact misunderstanding. We imagine things so because we require them to be so. Hence Einstein’s famous insistence that God does not play dice with the world—which provoked the Danish physicist Niels Bohr to urge the old boy to “stop telling God what to do.”

Werner Heisenberg was 23 when he took himself off to a small island in the North Sea to grapple with one of the more resistant puzzles of quantum theory: An electron circling the nucleus of an atom will “jump” from one orbit to another without seeming to exist in between.

Heisenberg’s explanation, put simply, states that it is impossible to know simultaneously the position and the momentum of an atomic particle. Consequent on this extraordinary but easily demonstrable fact is that ultimate reality, if it exists, is permanently beyond the scope of the human eye and its manmade aids. Nor can we adequately describe in words what is “out there.”

What is now known as Heisenberg’s uncertainty principle was a scientific triumph almost comparable to Einstein’s theory of relativity—but Einstein could not accept Heisenberg’s conclusions. All the same, Heisenberg was right, even if what he had to tell us seemed to fly against all reason. As he said, “About the ultimate things we cannot speak.”

Bohr himself reportedly told Heisenberg, “When it comes to atoms, language can be used only as in poetry.” As the Italian physicist Carlo Rovelli puts it in his recent book “Helgoland,” quantum reality is “intricate and fragile as Venetian lace. Every interaction is an event, and it is these light and ephemeral events that weave reality.” Quantum physics at once pulls the rug from under us and lands us in a hammock.

What we need to be wary of, Mr. Egginton argues, is not a barrier against understanding erected by science: “Rather, we should guard against creating that wall ourselves by imposing a prejudice we have about what reality must be like.”

Link to the rest at The Wall Street Journal

How Science Fiction Informs the Future of Innovation

From Automation Alley:

The remarkably prophetic capacity of humans to imagine and harness the future has shaped the evolution of humankind. Straight-line extrapolations and nonlinear predictions based on present-day facts have helped civilization discover mesmerizing technologies first described in science fiction novels and cinematic features from bygone eras. Therein we encounter thought-provoking ideas similar to the innovative products that we take for granted today.  

Historically, creative writers and movie directors have had an innate talent for envisaging the future rapidly accelerating toward us. Even comic book writers introduced Dick Tracy’s two-way wrist radio in 1946, which mirrors our indispensable Apple smart watches. When The Jetsons premiered in 1962, the show foresaw flat-screen televisions, video calls, drones, holograms, flying taxis, and digital newspapers. The moon landing in 1969 was loosely divined in From the Earth to the Moon, published in 1865. With uncanny accuracy, the book’s author described astronauts in an aluminum capsule being launched into space from Florida. In 1984, William Gibson’s novel Neuromancer conjured the World Wide Web, hacking, and virtual reality. Steven Soderbergh’s 2011 film Contagion depicts a fast-spreading worldwide virus that presaged the Covid-19 pandemic the World Health Organization declared nine years later.

. . . .

Consequently, business and industry strategists and public policymakers have increasingly looked to science fiction to see what lies ahead of the curve in a reimagined world. For example, the Financial Times recently described how defense establishments worldwide use visionaries to prognosticate the future of warfare based on fictional intelligence. Some of their predictions are captured in Stories From Tomorrow, published by the United Kingdom’s Ministry of Defence. Applying “useful fiction” (admittedly with non-fiction embedded within the author’s storytelling), this compendium of eight narratives sparked interesting insights about tomorrow’s revolutionary technologies. Unlike in WarGames, the folly of war was not one of them.

Instead, the authors explore how theoretical quantum computing can render sophisticated cyber defense systems, digital electronic communications, and supercomputers utterly defenseless to a future enemy attack. Current countermeasures such as artificial intelligence (AI), algorithms, and encryption methods are as yet no match for a quantum apocalypse — unless the wizards at the Defense Advanced Research Projects Agency (DARPA) have something up their sleeve. The increasing automation of the world’s militaries is expected to include, within the next decade, carrier air fleets driven by AI-enabled aerial vehicles. The U.S. Navy recently took delivery of an unmanned, fully autonomous warship that can remain at sea for up to 30 days.

. . . .

Given its raw computational power, quantum computing may deliver solutions to the world’s most pressing problems, including income disparity, poverty, climate change, and disease prevention. Stories From Tomorrow also focuses on data models powered by AI and machine learning tools that can produce digital twins capable of simulating real-time counterparts of physical objects, processes, entities, and even humans. These forward-looking models can “simulate everything from business ecosystems, military campaigns, and even entire countries, allowing deeper understanding as well as potential manipulation.” In Virtual You: How Building Your Digital Twin Will Revolutionize Medicine, authors Peter Coveney and Roger Highfield explain how scientists have brought glimmers of hope to the project of digitalizing identical copies of people (commonly referred to as doppelgangers) to trigger early detection of disease. Akin to a parallel universe, such a twin would allow doctors to prescribe custom-made medical protocols based on a patient’s chemical, electrical, and mechanical systems.

Medicine has already sequenced DNA, mapped out the human genome, edited genes, and created stem cells in ways that allow personalized medicine to mitigate symptoms and eliminate potential ailments in their preliminary stages by drilling down to the cellular level (think nanomedicine). Imagine fabricating molecular proteins based on a patient’s genetic code and identifying biomarkers to accelerate targeted drug delivery systems against Alzheimer’s, heart disease, stroke, and cancer. The moral quandary and ethical catch-22 surrounding the manipulation of embryonic cells in the quest for genetic perfection is a bit dystopian and reminiscent of historical pursuits of racial superiority. The dangers of genome editing were the basis for the science fiction thriller Gattaca (1997), in which human sperm and embryos are manipulated in a laboratory using technology akin to modern-day CRISPR. Even more disconcerting is the ability of nation-states to create deadly pathogens and other biological agents aimed at a specific ethnic group or race of people.

The blinding pace of scientific discovery significantly compresses time, making it difficult for humans to absorb and make sense of it all given our limits of cognition. “Biological evolution happens over thousands of years, but [the] digital evolution has occurred in less than one generation,” according to Professor Roy Altman of NYU. Compare this to the First Industrial Revolution of 1765-1869 (characterized by mechanization and steam power) and the Second Industrial Revolution of 1870-1968 (e.g., new energy sources, internal combustion engines, airplanes, the telegraph, and the telephone). Both had roughly one-hundred-year runways over which to diffuse, allowing people to adapt slowly and accept the pace of change.

Conversely, the Third Industrial Revolution was condensed into forty-one years (1969-2010) with the advent of computers, the Internet, 4G, automation and robotics, space expeditions, and biotechnology. These force multipliers set the stage for the Fourth Industrial Revolution, or Industry 4.0, a term coined in Germany in 2010. Here, in a matter of just thirteen years, we have seen a dazzling array of co-dependent technologies working together to digitalize the world economy with a software-first approach to manufacturing. This represents a major transformation from traditional manufacturing because today a product’s hardware and software (e.g., its sensors) are inextricably intertwined and indistinguishable from one another.

Many of our solutions to historical challenges beget new challenges requiring more creative solutions. Despite increasing world living standards, industrial revolutions have had disastrous effects on air and water quality that now require environmental remediation. Weapon systems designed to protect us can also destroy us, and lead to our very extinction. Computers and the internet have made us vulnerable to cyber-attacks on critical infrastructure, intellectual property, and financial systems. The point here is that rarely do novel ideas arrive completely formed and readily applied to everyday life without unseen implications down the road. Yet, this is the essence of progress.

. . . .

You do not have to be a registered technophobe to observe how the tectonic plates of fictional worlds and the world we inhabit are rubbing up against one another. Look no further than how AI, aided by machine learning, can endow inanimate objects with sentient qualities capable of human emotions and free will. This blurs the line between humans and machines. Futurist Ray Kurzweil, a prophet of both techno-doom and salvation, points to the concept of the singularity, in which science and technology outstrip human intelligence. This phenomenon could place human reason and decision making at the mercy of metadata and computer chips.

Link to the rest at Automation Alley

‘Pax’ Review: A Real-Life Game of Thrones

From The Wall Street Journal:

“They make a desert and call it peace.” That harrowing critique of the high Roman Empire is often attributed to the historian Tacitus. In fact, the line is found in a speech that Tacitus quotes (or invents), delivered by a barbarian chieftain, Calgacus, on the eve of a battle against Roman forces. Tacitus surely didn’t endorse the sentiment, especially since his own father-in-law was facing Calgacus that day. To Tacitus, pax Romana, “Roman peace,” implied, first and foremost, stability and order across the Mediterranean world, not wanton destruction.

Whose viewpoint are we modern folk to adopt, that of Calgacus, the victim of Roman aggression, or that of his foes, who led the West on a path of unparalleled progress? The question is implicitly posed by historian Tom Holland at the outset of “Pax,” a lively survey of Roman warfare and foreign affairs at the height of the empire. Mr. Holland gives the Calgacus quote as one of the book’s epigraphs, right beside an opposing opinion by Pliny the Elder. Pliny proclaimed “the Romans and the boundless majesty of their peace” to be a gift from the gods, as bright as a “second sun.”

“If there was light, there was also darkness,” Mr. Holland writes, balancing “sanitation, medicine, education, wine, public order” and other fruits of the pax against the rivers of blood that were spilled to secure them. He ranges Edward Gibbon’s famous pronouncement, that the high Roman Empire was the “most happy and prosperous” era in all history up to his day (the late 18th century), against the nightmarish portrait in Revelation of the Whore of Babylon, where Rome is reimagined as a gruesome, blood-drinking fiend.

“Pax” leaves it largely to readers to struggle with this opposition. As he exits his preface, Mr. Holland sets moral questions aside and turns his hand to what he does best: sure-footed, tight-wound historical narrative, enlivened with keen insights. He has a talent for making readers at home in the ancient world, even if they’re first-time visitors. In this book he describes an era few but specialists know in any depth: the seven decades between the deaths of Nero (A.D. 68) and Hadrian (138), a span that saw nine rulers come and go, including four in a single, turbulent year.

That year, A.D. 69, occupies about half the length of “Pax.” Mr. Holland takes us in painstaking detail through the civil war that brought Galba, Otho, and Vitellius to power in quick succession. Rome’s first dynastic line, that of the Julio-Claudians, had come to an end the previous year with the forced suicide of Nero. Absent any other path to the emperorship, the leaders of Rome’s far-flung armies used main force to establish legitimacy. Finally Vespasian, the first of the so-called Flavian line, managed to hold onto rule.

During his account of this year-long melee Mr. Holland casts frequent looks backward to Nero, a figure whose complex legacy had to be dealt with by all of the Flavians. Though lower-class Romans had idolized Nero, the political class deemed him an embarrassing failure, and his successors did their best to distance their reign from his. The Flavian amphitheater, aka the Colosseum, was built atop demolished portions of Nero’s pleasure palace, the Golden House, to signal to Rome the end of the Neronian adventure in megalomania.

The second half of “Pax” moves at a faster clip, covering nearly 70 years and numerous foreign wars and rebellions. The siege of Jerusalem led by Vespasian was brought to a conclusion by Titus, his son, but the Jews rose up and were conquered twice more. Roman arms ventured north into Scotland and eastward across the Danube and the Euphrates. The empire reached its greatest extent under Trajan, who died on campaign in the East in 117; his successor, Hadrian, retrenched, relinquishing some of Trajan’s conquests and building the wall across Britain that bears his name. To give up expansion was not a popular move, as Mr. Holland makes clear. Hadrian obscured the pullback by giving Trajan lavish funeral rites, then interring his ashes inside the carved column, still standing in Rome, that illustrates some of his triumphs.

Link to the rest at The Wall Street Journal

PG notes that, when he was preparing this post, the audio version of Pax was available for $5.95 US “with discounted Audible membership” (whatever that means).

He has no idea whether, considering any fine print, this is a good deal, how long this offer will last, or whether something similar is being offered outside the United States.

‘The Two-Parent Privilege’ Review: Where Have All the Good Men Gone?

From The Wall Street Journal:

For families with young children, morning routines can resemble an assembly line: Make breakfast. Remind the kids to brush their teeth. Negotiate which snacks to include in their backpacks. Remind them again to brush their teeth. Look for shoes. Head out the door. Head back in the door to get the stray backpacks.

In our household, when one parent is out of town, the process seems to intensify and can feel like the “I Love Lucy” episode in which Lucy takes a job wrapping chocolates. Quickly overwhelmed by the speed of the conveyor belt, she starts shoving chocolates anywhere they’ll fit, and concludes, “I think we’re fighting a losing game.”

Over the past 50 years, the number of one-parent households in America has dramatically increased. In 2019, 57% of U.S. children lived with two parents, down from 80% in 1980. Is the rise of single-parent households an emblem of empowerment or a sign of dwindling support for children?

Discussions of parenting can be fraught, dominated by feelings over facts, and too often tinged with judgment rather than support. The problem is, in part, that there has been limited accessible evidence on the causal effect of household logistics on children’s outcomes.

Enter “The Two-Parent Privilege: How Americans Stopped Getting Married and Started Falling Behind,” Melissa Kearney’s clear-eyed look at the economic impact of having a second parent at home. Ms. Kearney is an economist at the University of Maryland; her topics of research range from the social impact of the MTV show “16 and Pregnant” to the recent Covid baby bust. As she notes, “discomfort and hesitancy have stifled public conversation on a critically important topic that has sweeping implications not just for the well-being of American children and families but for the country’s well-being.”

Ms. Kearney’s objective is two-fold: first, to offer a data-driven overview of the rise and impact of single parenting; second, to propose strategies to enable more kids to live in stable households.

When it comes to the economic well-being of children, she argues, having two parents really is better than one—on average. Consider the conclusion of a 2004 paper, “Is Making Divorce Easier Bad for Children? The Long-Run Implications of Unilateral Divorce,” by the economist Jonathan Gruber. “As a result of the increased incidence of parental divorce,” Ms. Kearney tells us, “children wound up having lower levels of education, lower levels of income, and more marital churn themselves (both more marriages and more separations), as compared to similarly situated children who did not live in places where unilateral divorce laws were in effect.” Moreover, Ms. Kearney notes that children living with a stepparent also tend to have worse behavioral outcomes than those whose birth parents remained married.

While divorce is common, the spike in the number of single-parent households is mainly driven by an increase in the share of mothers who have never married—particularly among those who are less educated. In 2019, 60% of children whose mothers had a high-school degree (but less than a four-year college degree) lived with both parents, “a huge drop from the 83% who did in 1980” and low relative to the roughly 84% of children of college-educated mothers who lived with both parents in 2019. The author also notes significant gaps in family structure according to race: In 2019, 38% of black children lived with married parents, compared with 77% of white children and 88% of Asian children.

What is driving these changes? Among other factors, Ms. Kearney refers to the lack of “marriageable men,” pointing to a 2019 paper by the economists David Autor, David Dorn and Gordon Hanson, “When Work Disappears: Manufacturing Decline and the Falling Marriage Market Value of Young Men.” The paper analyzes the effect of drops in income for less-educated men, driven by increased international competition in manufacturing, and finds, Ms. Kearney tells us, that “the trade-induced reduction in men’s relative earnings led to lower levels of marriage and a higher share of unmarried mothers. It also led to an increase in the share of children living in single-mother households with below-poverty levels of income.” Reintroducing economic opportunities (for instance, through fracking booms) doesn’t seem to reverse this trend—suggesting an interplay between economic shocks and evolving social norms.

Link to the rest at The Wall Street Journal

Is It Worthwhile to Write My Memoir, Especially If a Publishing Deal Is Unlikely?

From Jane Friedman:

Question

In the eighth decade of my life and after having three books traditionally published—a travel memoir 50 years ago and two novels more recently—I am pondering the wisdom of writing a very personal memoir.

What has moved me most to think about this is the #MeToo movement: I was the victim of date rape while working as a civilian employee on an American army base in France from 1963–1964. While my time in France was indeed a wonderful one, a dream come true, tarnished only by this one incident, I sometimes reflect on the high percentage of women who have suffered sexual abuse, many while serving in the military. My immediate superior advised me not to report the incident, with the very real threat that the perpetrator (an officer) most likely would not be punished and that reporting would likely mean the loss of my job.

The memoir I am thinking of and which I have partially written is about much more than this incident; it is also about the loss of innocence and the excitement of discovering a foreign culture. It includes the story of my first true romance, an interracial affair. I was the “innocent” white girl in love with an African American enlisted man—two “no-no’s,” for I was told during my training that it was absolutely not advised to date enlisted men, but only officers, “men of a higher caliber.” Race was not mentioned but was implied by the times and by several other statements. These experiences, in addition to the opportunity I had to develop wonderful life-long friendships with several French citizens, prompt me to want to share them in a memoir. I would like to know whether this is worth my writing: would it be received well, or would you offer a caveat and advise me to avoid what may be a well-worn subject?

—Memoirist with a Dilemma

P.S. I would love to have a traditional publisher if I do finish this memoir, but in today’s world, I think it is highly unlikely I would find one interested in an octogenarian author.


Dear Memoirist with a Dilemma,

Oh my goodness, there are so many layers to this question!

I think I want to start by saying that even if #MeToo feels like it’s run its course, even if it feels like the publishing world is tired of women’s stories about rape, or maybe just tired of women’s stories or memoirs, period…I assure you, the market is not oversaturated with memoirs by women in their eighth decade.

Which, as you know, doesn’t mean there’s an easy path ahead of you. The publishing world may not be receptive to a memoir like this for any number of reasons—some of which might be valid and some of which are utter bullshit. Your age might be one of those reasons, but it’s not the only one. Publishing is a highly uncertain field with few guarantees, and the market for memoirs can be particularly uncertain.

As it happens, I’m writing this response on Labor Day, so in answering your question about the value of writing a memoir—and about the worth of writing—I do first want to acknowledge writing (and art-making, generally) as a form of labor that, like any labor, should be fairly compensated, monetarily.

That said, for better and worse, many artistic and writing projects fall largely outside the realm of capitalism. Recently, I was listening to one of the first episodes of the “Wiser Than Me” podcast, hosted by Julia Louis-Dreyfus; it’s an interview with Isabel Allende (who didn’t start writing novels until she was 40), who channeled Elizabeth Gilbert giving advice to young writers—which you are not, but maybe this is actually just decent advice for any writer: “Don’t expect your writing to give you fame or money, right? Because you love the process, right? And that’s the whole point, love the process.”

Which is just to say that, if you’re asking whether writing this memoir is likely to justify your time and energy, financially—well, unfortunately, that’s probably a very short response letter. It’s almost certainly not.

But that doesn’t mean you shouldn’t write it, or that writing this memoir would be unwise, in some way, or unworthy of your time and energy. The answer, here, lies in the why: Why do you want to write this memoir?

Do you love the process? Do you think you’ll feel better about the world on the average day when you’ve sat down to work on this book than on a day when you haven’t? Do you enjoy writing more than you don’t enjoy it?

If your answers to those questions are enthusiastically positive, then that’s reason enough to write.

There might be other, even more significant reasons to dive fully into this project. Writing a memoir isn’t therapeutic, per se, but writing and rewriting our personal stories can be a rewarding process, one that’s often full of (good) surprises.

In this case, you’re talking about revisiting experiences—including an assault—after 60 years; the opportunity to reshape your story and to reconsider what you make of it might be incredibly meaningful. Indeed, it sounds like you’re already doing this to some extent, inspired in part by the #MeToo movement and other people’s sharing of their stories. One of the reasons #MeToo took off is that it defused and transformed a particular kind of shame and loneliness an awful lot of women had been sitting with for too long. Perhaps you, too, have been feeling that way.

Does revisiting this time and your experiences—the many good ones as well as the bad one—and considering them from fresh and maybe unexpected angles sound appealing and useful? Again, if your answer here is an enthusiastic yes: what are you waiting for?

(This might be an unpopular opinion, but for what it’s worth, I think it’s also completely valid to say, “Nah, I don’t need to relive all that.” But I think you wouldn’t have written in with this question if that were how you felt about it.)

Ultimately, both of those reasons are sort of personal and maybe even a little self-centered. And so what if they are? After all, as Mary Oliver put it in “The Summer Day” (which she wrote at age 62), “What is it you plan to do with your one wild and precious life?” You really don’t have to please anyone but yourself.

But I also understand that writing a memoir solely for the pleasure of it might not feel entirely satisfactory, either. We want our stories to make connections, and to matter to someone, right?

Link to the rest at Jane Friedman

The Band of Debunkers Busting Bad Scientists

From The Wall Street Journal:

An award-winning Harvard Business School professor and researcher spent years exploring the reasons people lie and cheat. A trio of behavioral scientists examining a handful of her academic papers concluded her own findings were drawn from falsified data.

It was a routine takedown for the three scientists—Joe Simmons, Leif Nelson and Uri Simonsohn—who have gained academic renown for debunking published studies built on faulty or fraudulent data. They use tips, number crunching and gut instincts to uncover deception. Over the past decade, they have come to their own finding: Numbers don’t lie but people do. 

“Once you see the pattern across many different papers, it becomes like a one in quadrillion chance that there’s some benign explanation,” said Simmons, a professor at the Wharton School of the University of Pennsylvania and a member of the trio who report their work on a blog called Data Colada. 

Simmons and his two colleagues are among a growing number of scientists in various fields around the world who moonlight as data detectives, sifting through studies published in scholarly journals for evidence of fraud. 

At least 5,500 faulty papers were retracted in 2022, compared with 119 in 2002, according to Retraction Watch, a website that keeps a tally. The jump largely reflects the investigative work of the Data Colada scientists and many other academic volunteers, said Dr. Ivan Oransky, the site’s co-founder. Their discoveries have led to embarrassing retractions, upended careers and retaliatory lawsuits. 

Neuroscientist Marc Tessier-Lavigne stepped down last month as president of Stanford University, following years of criticism about data in his published studies. Posts on PubPeer, a website where scientists dissect published studies, triggered scrutiny by the Stanford Daily. A university investigation followed, and three studies he co-wrote were retracted.

Stanford concluded that although Tessier-Lavigne didn’t personally engage in research misconduct or know about misconduct by others, he “failed to decisively and forthrightly correct mistakes in the scientific record.” Tessier-Lavigne, who remains on the faculty, declined to comment.

The hunt for misleading studies is more than academic. Flawed social-science research can lead to faulty corporate decisions about consumer behavior or misguided government rules and policies. Errant medical research risks harm to patients. Researchers in all fields can waste years and millions of dollars in grants trying to advance what turn out to be fraudulent findings.

The data detectives hope their work will keep science honest, at a time when the public’s faith in science is ebbing. The pressure to publish papers—which can yield jobs, grants, speaking engagements and seats on corporate advisory boards—pushes researchers to chase unique and interesting findings, sometimes at the expense of truth, according to Simmons and others.

“It drives me crazy that slow, good, careful science—if you do that stuff, if you do science that way, it means you publish less,” Simmons said. “Obviously, if you fake your data, you can get anything to work.”

The journal Nature this month alerted readers to questions raised about an article on the discovery of a room-temperature superconductor—a profound and far-reaching scientific finding, if true. Physicists who examined the work said the data didn’t add up. University of Rochester physicist Ranga Dias, who led the research, didn’t respond to a request for comment but has defended his work. Another paper he co-wrote was retracted in August after an investigation suggested some measurements had been fabricated or falsified. An earlier paper from Dias was retracted last year. The university is looking closely at more of his work.

Experts who examine suspect data in published studies count every retraction or correction of a faulty paper as a victory for scientific integrity and transparency. “If you think about bringing down a wall, you go brick by brick,” said Ben Mol, a physician and researcher at Monash University in Australia. He investigates clinical trials in obstetrics and gynecology. His alerts have prompted journals to retract some 100 papers, with investigations ongoing in about 70 others.

Among those looking into other scientists’ work are Elisabeth Bik, a former microbiologist who specializes in spotting manipulated photographs in molecular biology experiments, and Jennifer Byrne, a cancer researcher at the University of Sydney who helped develop software to screen papers for faulty DNA sequences that would indicate the experiments couldn’t have worked.

“If you take the sleuths out of the equation,” Oransky said, “it’s very difficult to see how most of these retractions would have happened.”

The origins of Data Colada stretch back to Princeton University in 1999. Simmons and Nelson, fellow grad-school students, played in a cover band called Gibson 5000 and a softball team called the Psychoplasmatics. Nelson and Simonsohn got to know each other in 2007, when they were faculty members in the business school at the University of California, San Diego.

The trio became friends and, in 2011, published their first joint paper, “False-Positive Psychology.” It included a satirical experiment that used accepted research methods to demonstrate that people who listened to the Beatles song “When I’m Sixty-Four” grew younger. They wanted to show how research standards could accommodate absurd findings. “They’re kind of legendary for that,” said Yoel Inbar, a psychologist at the University of Toronto Scarborough. The study became the most cited paper in the journal Psychological Science.

When the trio launched Data Colada in 2013, it became a site to air ideas about the benefits and pitfalls of statistical tools and data analyses. “The whole goal was to get a few readers and to not embarrass ourselves,” Simmons said. Over time, he said, “We have accidentally trained ourselves to see fraud.”

They co-wrote an article published in 2014 that coined the now-common academic term “p-hacking,” which describes cherry-picking data or analyses to make insignificant results look statistically credible. Their early work contributed to a shift in research methods, including the practice of sharing data so other scientists can try to replicate published work.
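To make the idea concrete, here is a minimal, purely illustrative Python sketch (not taken from the Data Colada papers or their methods) of how p-hacking inflates false positives: when many outcomes drawn from pure noise are tested and only the best-looking result is reported, a “statistically significant” finding appears far more often than the nominal 5% rate would suggest.

```python
# Illustrative only: simulate p-hacking by testing many outcomes on pure noise
# and keeping the smallest p-value. Assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups drawn from the SAME distribution, so there is no real effect.
group_a = rng.normal(size=(20, 30))  # 20 participants x 30 measured outcomes
group_b = rng.normal(size=(20, 30))

# "Honest" analysis: a single pre-registered outcome (the first measure).
honest_p = stats.ttest_ind(group_a[:, 0], group_b[:, 0]).pvalue

# "P-hacked" analysis: test all 30 outcomes and report only the best one.
all_p = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue for i in range(30)]
hacked_p = min(all_p)

print(f"pre-registered outcome: p = {honest_p:.3f}")
print(f"best of 30 outcomes:    p = {hacked_p:.3f}")
# With roughly 30 chances at a 5% threshold, a "significant" p-value turns up
# most of the time even though the groups are identical by construction.
```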

“The three of them have done an amazing job of developing new methodologies to interrogate the credibility of research,” said Brian Nosek, executive director of the Center for Open Science, a nonprofit based in Charlottesville, Va., which advocates for reliable research.

Nelson, who teaches at the Haas School of Business at the University of California, Berkeley, is described by his partners as the big-picture guy, able to zoom out of the weeds and see the broad perspective.

Simonsohn is the technical whiz, at ease with opaque statistical techniques. “It is nothing short of a superpower,” Nelson said. Simonsohn was the first to learn how to spot the fingerprints of fraud in data sets.

Working together, Simonsohn said, “feels a lot like having a computer with three core processors working in parallel.”

The men first eyeball the data to see if they make sense in the context of the research. The first study Simonsohn examined for faulty data on the blog was obvious. Participants were asked to rate an experience on a scale from zero through 10, yet the data set inexplicably had negative numbers.

Another red flag is an improbable claim—say, a study claiming a runner could sprint 100 yards in half a second. Such findings always get a second look. “You immediately know, no way,” said Simonsohn, who teaches at the Esade Business School in Barcelona, Spain. Another telltale sign is perfect data in small data sets. Real-world data is chaotic, random.

Any one of those can trigger an examination of a paper’s underlying data. “Is it just an innocent error? Is it p-hacking?” Simmons said. “We never rush to say fraud.”
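As a rough, hypothetical illustration of that first-pass screening (not the sleuths’ actual tooling, and with invented names and thresholds), a few lines of Python can flag the red flags described above: values that fall outside the instrument’s possible range and small data sets that look implausibly uniform.

```python
# Toy first-pass data screen; function name, column values, and thresholds
# are hypothetical and chosen only to mirror the red flags described above.
import statistics

def screen_ratings(values, low=0, high=10):
    """Return human-readable warnings for a column of survey ratings."""
    warnings = []
    out_of_range = [v for v in values if not (low <= v <= high)]
    if out_of_range:
        warnings.append(f"{len(out_of_range)} value(s) outside the {low}-{high} scale: {out_of_range}")
    if len(values) >= 5 and statistics.pstdev(values) < 0.1:
        warnings.append("near-zero variance: real responses are rarely this uniform")
    return warnings

# Example echoing the first case examined on the blog: a 0-10 rating column
# that inexplicably contains negative numbers.
suspect_column = [7, 8, 6, -2, 9, -1, 7]
for warning in screen_ratings(suspect_column):
    print("red flag:", warning)
```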

. . . .

Bad data goes undetected in academic journals largely because the publications rely on volunteer experts to ensure the quality of published work, not to detect fraud. Journals don’t have the expertise or personnel to examine underlying data for errors or deliberate manipulation, said Holden Thorp, editor in chief of the Science family of journals. 

. . . .

Thorp said he talks to Bik and other debunkers, noting that universities and other journal editors should do the same. “Nobody loves to hear from her,” he said. “But she’s usually right.”

The data sleuths have pushed journals to pay more attention to correcting the record, he said. Most have hired people to review allegations of bad data. Springer Nature, which publishes Nature and some 3,000 other journals, has a team of 20 research staffers, said Chris Graf, the company’s research integrity director, twice as many as when he took over in 2021.

Retraction Watch, which with the research organization Crossref keeps a log of some 50,000 papers discredited over the past century, estimated that, as of 2022, about eight papers have been retracted for every 10,000 published studies.

Bik and others said it can take months or years for journals to resolve complaints about suspect studies. Of nearly 800 papers that Bik reported to 40 journals in 2014 and 2015 for running misleading images, only a third had been corrected or retracted five years later, she said.

Link to the rest at The Wall Street Journal

Bartleby and Me

From The Wall Street Journal:

Gay Talese and Frank Sinatra have enjoyed a rich, symbiotic relationship, one that has long outlasted the singer, who died at 82 a quarter-century ago. Back in 1965, Mr. Talese trailed Sinatra around Las Vegas and Hollywood for a profile for Esquire magazine. At his peak after a triumphant comeback, Sinatra brushed off the writer’s pleas for an interview, but Mr. Talese produced a piece anyway. The result, “Frank Sinatra Has a Cold,” became one of the most celebrated magazine articles from the golden age of the slicks—and an enduring testament to Sinatra’s talent and fame.

Along with Joan Didion, Norman Mailer, Tom Wolfe and others, Mr. Talese has been acclaimed as a virtuoso of the novelistic New Journalism. Now 91, he has published a short and charming second memoir, “Bartleby and Me: Reflections of an Old Scrivener.” Once again, Sinatra takes center stage. But there’s more, especially the author’s take on the kind of journalism he’s practiced for seven decades, starting as a copy boy at the New York Times in 1953.

Mr. Talese takes his inspiration—and his title—from “Bartleby, the Scrivener,” Herman Melville’s 1853 short story about an inconsequential law clerk. “Growing up in a small town on the Jersey Shore in the late 1940s, I dreamed of someday working for a great newspaper,” Mr. Talese writes. “But I did not necessarily want to write news. . . . I wanted to specialize in writing about nobodies.”

His first published piece, carried without a byline on the Times’s editorial page, was about a “nobody” who operated the illuminated ribbon sign that announced the latest news around a lower floor of the old Times Tower in Times Square—a Bartleby for the age of Ike.

Thankfully for magazine journalism, Mr. Talese eventually overcame his original preoccupation, but before he did so he chronicled alley cats, bus drivers, ferry-boat captains, dress-mannequin designers, even those who pushed the three-wheeled rolling chairs along Atlantic City’s boardwalk. After two years of military service at Fort Knox—during which he contributed pieces to the Louisville Courier-Journal—he returned to the Times as a sports writer. (As a college correspondent for the Times in the late ’50s, I sometimes squatted at an empty desk near his in the uncrowded sports department.)

In 1965 Mr. Talese left the paper to join Esquire, then in its glory days under the brilliant editor Harold Hayes. The young writer promptly sold Hayes on a profile of figures at the Times, both obscure and heralded, starting with Alden Whitman. Whitman had revolutionized obituaries at the paper by conducting long premortem interviews with Harry Truman, Pablo Picasso and other luminaries. The lauded “Mr. Bad News” piece helped lay the groundwork for “The Kingdom and the Power,” Mr. Talese’s 1969 book about the Times—his first bestseller.

Bartleby’s murmurous response to the world was “I prefer not to,” while Sinatra famously belted out “I did it my way.” Still, the young Talese was drawn to him.

Fully a third of “Bartleby and Me” is a reconstruction of Mr. Talese’s frustrated pursuit of Sinatra—from his first glimpse of his lonely subject nursing a Jack Daniel’s at the bar of the Hollywood hangout The Daisy, to watching him pick a fight with a young writer because Sinatra didn’t like his boots, to observing him at a recording session held after an earlier one was aborted because the crooner had the sniffles. Sinatra genially blows off Mr. Talese’s requests to talk, so the writer interviews Sinatra’s entourage, including his sort-of-look-alike stand-in, as well as the little old lady who totes around his hairpieces, and his daughter Nancy. Mr. Talese even describes how he took his Sinatra notes on cut-down laundered-shirt cardboards.

The 14,000-word cover story ran in the April 1966 issue, was later published as a short book and, on the 70th anniversary of Esquire, was voted by its editors and staff the best piece ever to run in the magazine.

Link to the rest at The Wall Street Journal

How Did He Get Away With It?

From The City Journal:

Long before the rest of us were talking about blue and red America, Tom Wolfe not only recognized the cultural divide; he bridged it. When he began his career in the 1960s, the liberal establishment was more dominant and even smugger than it is today. There were no pesky voices on cable television or the web to challenge the Eastern elites’ hold on the national media. Then along came Wolfe, a lone voice celebrating the hinterland’s culture, mercilessly skewering the pretensions and dogmas of New York’s intelligentsia—and somehow triumphing.

How did he get away with it? The most entertaining analysis opens in theaters this weekend in New York and next weekend in Los Angeles and Toronto. The documentary, Radical Wolfe, is a superb chronicle of his life and career, told through footage of Wolfe (who died in 2018 at the age of 88) expounding in his famous white suits. It features Jon Hamm reading from Wolfe’s work along with interviews with his friends and enemies, his daughter, Alexandra Wolfe, and his fans, including Christopher Buckley, Niall Ferguson, Gay Talese, and Peter Thiel. Director Richard Dewey draws on the insights and research of Michael Lewis, who pored through the archive of Wolfe’s letters and papers for a 2015 article in Vanity Fair, “How Tom Wolfe Became . . . Tom Wolfe.”

Wolfe grew up in Richmond, Virginia, and remained true to his roots when he went north. In private, he remained the quiet, courtly Southern gentleman, the perpetual outsider gently bemused by the Yankees’ tribal beliefs and customs. “He was a contradictory character,” Talese observes in the film. “Such a polite person, such a well-mannered person. With a pen in his hand, he could be a terrorist.”

After getting a Ph.D. in American Studies at Yale, Wolfe worked as a fairly conventional reporter and feature writer at the Springfield Union in Massachusetts, the Washington Post, and the New York Herald Tribune. His breakthrough came during the New York newspaper strike of 1962–63. Needing money to pay his bills, he took an assignment from Esquire to write about custom-car culture in southern California, which fascinated him but left him with a severe case of writer’s block, as related in the film by Wolfe and Byron Dobell, his editor at Esquire.

With the deadline looming and a color photo spread already printed for the upcoming issue, Wolfe told his editor that he just couldn’t write the piece. Dobell told him to type up his notes so another writer could put some words next to the photo spread. Wolfe began typing that evening, stayed up all night, and delivered a 49-page letter to Dobell the next morning. The editor’s reaction: “It’s a masterpiece. This is unbelievable. We’d never seen anything like this. I struck out the ‘Dear Byron’ and struck out the parting words, and we ran it.” The headline was appropriately Wolfean: “There Goes (Varoom! Varoom!) That Kandy-Kolored (Thphhhhhh!) Tangerine-Flake Streamline Baby (Rahghhh!) Around the Bend (Brummmmmmmmmmmmmmm) . . .”

Wolfe went on writing in his inimitable voice for the Herald Tribune Sunday supplement, which would be reincarnated as the independent New York magazine. His prose style—exclamation points, ellipses, long sentences, and streams of consciousness—appalled the high priests of the literary world, particularly when he turned it against their temple, the New Yorker. Other writers in the 1960s dreamed of being published in the magazine, but Wolfe wrote a two-part series savaging it as a moribund institution. The first piece in the series was titled “Tiny Mummies! The True Story of the Ruler of 43rd Street’s Land of the Walking Dead!” The second article, “Lost in the Whichy Thicket,” mocked its plodding articles, with their cluttered subordinate clauses and understated pseudo-British tone. He dismissed its fiction as a “laughingstock” that kept the magazine in business by serving as filler between pages of luxury ads aimed at suburban women:

Usually the stories are by women, and they recall their childhoods or domestic animals they have owned. Often they are by men, however, and they meditate over their wives and their little children with what used to be called “inchoate longings” for something else. The scene is some vague exurb or country place or summer place, something of the sort, cast in the mental atmosphere of tea cozies, fringe shawls, Morris chairs, glowing coals, wooden porches, frost on the pump handle, Papa out back in the wood bin, leaves falling, buds opening, bird-watcher types of birds, tufted grackles and things, singing, hearts rising and falling, but not far—in short, a great lily-of-the-valley vat full of what Lenin called “bourgeois sentimentality.”

The empire struck back. The novelist J.D. Salinger emerged from seclusion to declare that the Herald Tribune would “likely never again stand for anything either respect-worthy or honorable” after Wolfe’s “inaccurate and sub-collegiate and gleeful and unrelievedly poisonous” attack on the New Yorker. There were more denunciations from the writers E.B. White, Ved Mehta, Muriel Spark, Murray Kempton, and from the syndicated columnists Joseph Alsop and Walter Lippmann.

. . . .

Literary critics sneered at his work, but his nonfiction books and novels were best-sellers that changed the national conversation. His coinages entered the common usage—the astronauts’ “Right Stuff,” Wall Street’s “Masters of the Universe,” Park Avenue’s “Social X-Rays.” He identified lowbrow “statuspheres” across America and made heroes out of the stock-car racer Junior Johnson and the fighter ace and test pilot Chuck Yeager. While doomsaying journalists and intellectuals were decrying American culture and modern technology, he declared that we were experiencing a “happiness explosion” and explained, “It’s only really Eng. Lit. intellectuals and Krishna groovies who try to despise the machine in America. The idea that we’re trapped by machines is a 19th-century romanticism invented by marvelous old frauds like Thoreau and William Morris.”

Link to the rest at The City Journal

PG was a huge fan of Wolfe’s writing style a very long time ago. The OP made him realize he should reread some of Wolfe’s books.

The Hallmarks of a Bad Argument

From Jane Friedman:

We need more debate.

I’m a firm believer that one of the biggest issues in our society—especially politically—is that people who disagree spend a lot less time talking to each other than they should.

Earlier in June, I wrote about how the two major political candidates are dodging debates. The next week I wrote about how a well-known scientist (or someone like him) should actually engage Robert F. Kennedy Jr. on his views about vaccines.

In both cases, I received a lot of pushback. There are, simply put, many millions of Americans who believe that some minority views—whether it’s that the 2020 election was stolen or vaccines can be dangerous or climate change is going to imminently destroy the planet or there are more than two genders—are not worth discussing with the many people who hold those viewpoints. Many of these same people believe influential voices are not worth airing and are too dangerous to be on podcasts or in public libraries or in front of our children.

On the whole, I think this is wrongheaded. I’ve written a lot about why. But something I hadn’t considered is that people are skeptical about the value of debate because there are so many dishonest ways to have a debate. People aren’t so much afraid of a good idea losing to a bad idea; they are afraid that, because of bad-faith tactics, reasonable people will be fooled into believing the bad idea.

Because of that, I thought it would be a good idea to talk about all the dishonest ways of making arguments.

The nature of this job means not only that I get to immerse myself in politics, data, punditry, and research, but that I get a chance to develop a keen understanding of how people argue—for better, and for worse.

Let me give you an example.

Recently, we covered Donald Trump’s fourth indictment, when a grand jury in Fulton County, Georgia, indicted the former president and 18 others over allegations of a sprawling conspiracy to overturn Joe Biden’s election victory in Georgia. As usual, we got some feedback and criticism from our readers—which we welcome. A couple of people asked why Hillary Clinton isn’t also getting indicted, since she also has disputed that she lost a fair and open election.

This, of course, got me talking about the differences in these cases. Clinton conceded the election the night it was called; Trump didn’t. Clinton’s supporters didn’t swarm the Capitol hoping to overturn the results. Trump’s supporters did, and he—as president—stayed silent while they did.

Then we started having a conversation about what Hillary Clinton did do. She did say that the election was illegitimate and that Russia tampered with it, and she continues to say so. She did use a private email server…

And now the topic of conversation has changed, from “did Trump deserve to be indicted” to “should Hillary Clinton have been indicted?”

This is an example of “whataboutism,” where the person you’re talking to or arguing with asks you about a different but similar circumstance, and in doing so changes the subject.

Whataboutism

This is probably the argument style I get from readers the most often. There is a good chance you are familiar with it. This argument is usually signaled by the phrase, “What about…?” For instance, anytime I write about Hunter Biden’s shady business deals, someone writes in and says, “What about the Trump children?” My answer is usually, “They also have some shady deals.”

The curse of whataboutism is that we can often do it forever. If you want to talk about White House nepotism, it’d take weeks (or years) to properly adjudicate all the instances in American history, and it would get us nowhere but to excuse the behavior of our own team. That is, of course, typically how this tactic is employed. Liberals aren’t invoking Jared Kushner to make the case that profiting off your family’s time in the White House is okay; they are doing it to excuse the sins of their preferred president’s kid—to make the case that it isn’t that bad, isn’t uncommon, or isn’t worth addressing until the other person gets held accountable first.

Now, there are times when this kind of argument is useful, and sometimes even enlightening. If we are truly asking where the line for prosecutable conduct is for a presidential candidate, it’s useful to find precedent and see where it is being applied inconsistently. If we’re asking “is the government consistent,” comparing Clinton, Trump, Biden, Nixon—it’s all on the table. The same is true if we’re asking about the bias of media organizations and checking whether they cover similar stories differently when the subject of the story is the main element that changes.

But if I write a story that says your favorite political candidate answered a question in a very poor way, and you respond by saying, “Well, this other politician said something bad, too—I think even worse. What about that statement?” That wouldn’t be helpful, or enlightening.

Furthermore, context is important. If I’m writing about Hunter Biden’s business deals I may reference how other similar situations were addressed or spoken about in the past. But when the topic of discussion is whether one person’s behavior was bad, saying that someone else did something bad does nothing to address the subject at hand. It just changes the subject.

Link to the rest at Jane Friedman

The OP is more political than PG’s usual selections. He requests that any disagreements in the comments not involve any personal attacks against anyone, candidate or not.

Blenheim’s £5M Gold Toilet Heist: 7 Suspects Await Legal Outcome

From Culture.org:

The Case That Flushed Four Years Down the Drain

Ladies and gentlemen, grab your metaphorical plungers because we’re about to dive into the long and winding pipe of the infamous £5 million golden toilet heist at Blenheim Palace. For four years, this crime has mystified investigators, and its audacity has shocked the art world. Now, finally, we might be on the brink of flushing out the truth.

Back in 2019, Italian artist Maurizio Cattelan unveiled an art piece that, well, dazzled in the literal sense. It was an 18-carat gold toilet entitled ‘America,’ initially exhibited at the Guggenheim Museum in New York. Let’s be honest: this toilet was no ordinary John; it was a symbol of opulence, of irony, and it attracted a whopping 100,000 people in New York eager to, ahem, experience it.

The golden spectacle was then relocated to Blenheim Palace in Oxfordshire, strategically placed in a chamber opposite the room where the British Bulldog himself, Winston Churchill, was born. But before anyone could say “seat’s taken,” it was stolen in a high-stakes heist on September 14, 2019, just a day after its grand UK unveiling.

The Hurdles and Whirlpools of a Baffling Investigation

Despite the passage of four years and the arrest of seven suspects—six men aged between 36 and 68, and one 38-year-old woman—the investigative waters have been murky. Not a single charge has been filed. Until now, that is. It seems the cogs of justice are finally turning.

The Thames Valley Police recently submitted a comprehensive file of evidence to the Crown Prosecution Service (CPS), the organization responsible for pulling the flush, so to speak, on any charges. This move significantly raises the possibility that the seven suspects could soon find themselves in deep water.

Maurizio Cattelan, the mastermind behind the £4.8 million toilet—yes, let’s not forget the 200k difference—initially took the theft with a grain of artistic humor. “Who’s so stupid to steal a toilet? ‘America’ was the one percent for the 99 percent,” he mused. But everyone knows, stealing art is no laughing matter, especially when it’s a toilet that takes on the American Dream, as the Palace’s chief executive Dominic Hare pointed out. The theft didn’t just rob a stately home; it flushed a cultural commentary down the drain.

What Happened to the Golden Throne?

As curious as it sounds, investigators believe the golden toilet was melted down and transformed into jewelry. While not confirmed, this adds another layer of irony to the story—turning an art piece designed for the “99 percent” into an elite object once again, only this time in the form of necklaces and rings.

Link to the rest at Culture.org

PG says the author should have limited herself to fewer garderobe puns.

History, Fast and Slow

From The Chronicle of Higher Education:

Historians! Put down your tools! Your labors are at an end. The scientists have finally solved history, turning it from a jumble of haphazard facts (“just one damn thing after another”) into something measurable, testable and — most importantly — predictable.

That, in short, is the message of Peter Turchin’s provocative new book, End Times: Elites, Counter-Elites, and the Path of Political Disintegration. A theoretical biologist by training, Turchin came to the study of history late, after decades spent developing mathematical models for interactions between predators and prey. At some point he realized those same models could be applied to the boom-and-bust cycles governing the fortunes of states and empires.

Blessed with tenure, Turchin thus made a risky midcareer move to a different field. Or rather, with the historical sociologist Jack Goldstone, he co-founded a brand new one, which they dubbed cliodynamics. This is meant to be a “new science of history” that exposes the hidden processes driving political instability in “all complex societies.” End Times is at once a primer on this “flourishing” new field, which has attracted considerable attention in recent years, and a direct application of it to the political landscape of the United States as it appeared in the last quarter of 2022.

Turchin promises at the outset of the book that, by treating history itself as “Big Data,” he and his collaborators can explain why everything that happened in the past happened, and tell us, with reasonable certainty, what will happen next. Tackling the past, present, and future is a big job. As a consequence, the pace of End Times is brisk, the arguments flashy, and the conclusions, for the most part, unsatisfying.

The overall thesis of Turchin’s book is disarmingly simple. History is shaped by the interaction between the elites and the masses. When these two groups are in equilibrium, harmony reigns. But when too many people are vying for elite status all at once, things go out of whack, and instability becomes inevitable. At this point, there are only two real ways to bring the system back in line. The first is to turn off the “wealth pump,” Turchin’s term for whatever economic mechanism — technological change, tax policy, or even agricultural overpopulation in the case of medieval France — is at work in a given moment enriching the elites and depressing the relative wages of the masses. The second is for the elites to physically eliminate one another in a revolution or civil war.

Elite overproduction is thus for Turchin what class conflict was for Marx and asabiyyah, or group cohesion, was for Ibn Khaldun: It is the engine driving history forward. However, over the course of End Times, Turchin never pins down what exactly defines an elite, or whether they come in different or competing forms. Are cultural elites, for instance, distinct from economic elites, or political elites from religious elites? His explanation that elites are “power holders” is not so much clarifying as circular. For Turchin, elites — whether they are English barons, Russian serf owners, or Southern planters — are a natural fact, like gravity or the rain.

Some elites are more dangerous than others, however. Unemployed degree holders, according to Turchin, have been responsible for most social upheavals reaching back to the Revolutions of 1848. The law, a magnet for the politically ambitious, has been a particularly hazardous profession; as Turchin notes, Robespierre, Lenin, Castro, Lincoln, and Gandhi were all lawyers. In his telling, elite colleges have become factories for the creation of counter-elites, waiting to destabilize existing institutions and usurp the role of existing elites. The most dangerous of these by far is Yale Law School, that “forge of revolutionary cadres,” having produced both left-wingers like Chesa Boudin and right-wingers like Stewart Rhodes, the leader of the Oath Keepers.

Even if the remark about “revolutionary cadres” is made in jest, it is one of many moments in End Times that leads a reader to question the objectivity of Turchin’s take on American politics. Is the Republican Party really in the process of becoming a “true revolutionary party”? Maybe if you see Steve Bannon as its Lenin. Did “The Establishment” run a “counterinsurgency campaign” to get Trump out of office in 2020? Only if you think the election was rigged.

Some of the problem comes from his sources. Turchin intersperses the book with interviews with voters drawn from different rungs on the American social ladder. There is Steve, a blue-collar guy who thinks liberal elites are driving America into the ground; Kathryn, a Beltway 1-percenter who reads Steven Pinker to reassure herself that life has never been better than it is right now; and Jane, an Upper East Side Trotskyite hoping to bring the system down from the inside — which is why she’s a student at Yale Law.

The problem is that all of these figures are fictional, products of Turchin’s limited political imagination. Shorn of real-life interlocutors, his account of contemporary politics feels like the generic product of an ideological echo-chamber. This has an unfortunate effect on End Times’s predictions. Some of Turchin’s forecasts have already been disproved. Will Tucker Carlson be the “crystallization nucleus” for the formation of a genuinely insurgent, anti-establishmentarian Republican Party? Even a year ago, when End Times was published, that seemed like a stretch, but after Carlson’s firing by Fox News this past April, it seems about equal to my hopes of becoming starting quarterback for the Steelers.

Other predictions in End Times are more ominous, and even harder to credit. Turchin’s historical models predict that America will go through a spasm of political violence in the 2020s bad enough to thin the herd of elite aspirants and thus restore political cohesion, only for the violence to recur in 50 to 60 years. Only if wages can be brought up in the near term can this recurrence of violence be avoided. However, even if this hypothetical New New Deal were to be implemented, it would not prevent a major internal crisis sometime later this decade.

Is America really on the cusp of a second Civil War? Perhaps, perhaps not, but no concrete piece of evidence presented in End Times would lead you to think so. Turchin has valuable things to say about rising inequality in the United States. But the connection between elite enrichment and popular rebellion is neither reliable nor predictable — least of all in democracies.

In the absence of clearly drawn historical mechanisms, we have to trust Turchin’s models. But he never lets us see under the hood. For an approach to history that prides itself so much on quantitative rigor, cliodynamics — at least as presented in this book — seems strikingly low on actual data. Not a single graph or chart graces the pages of End Times. A pair of appendices does promise to explain the detailed workings of the databases and computations underlying Turchin’s predictions and pronouncements. But while these appendices feature a variety of things, including ruminations on Tolstoy, scenes from a science-fiction novel, and a rather charming thought experiment about social scientists orbiting Alpha Centauri, they do not clarify the inner workings of cliodynamics. All they offer are generalities along the lines of “one death is a tragedy … but a thousand deaths give us data” without ever explaining what it is that makes such data useful.

. . . .

Turchin is singularly ungenerous to professional historians. He rarely cites the work of scholars in the field, preferring to rely on his own summaries of major events or on Wikipedia. Instead of crediting major historiographical concepts, such as the Military Revolution of the 17th century, to their original articulators, he refers readers to his own (often forthcoming) books. At one point, Turchin even expresses his regret that so much precious historical knowledge is trapped in books and articles and — worst of all — in the “heads of individual scholars,” and fantasizes about a future in which spiderbots could automate the process of learning and harvest information directly from experts’ brains. One gets the feeling reading End Times that Turchin would like to do away with the messy business of human analysis and judgment entirely: all of the things that make history, from his perspective, such a frustratingly inefficient discipline.

Link to the rest at The Chronicle of Higher Education