Sister-in-law’s letters provide insights into Charles Dickens’ life and legacy

From The Guardian:

A collection of letters from one of Charles Dickens’ most valued confidantes will go on display in London for the first time this week to mark 212 years since the literary giant’s birth.

The letters were written by Dickens’ sister-in-law turned housekeeper and executor Georgina Hogarth, who was instrumental in preserving the author’s legacy after his death in 1870.

Acquired by the Charles Dickens Museum, the correspondence offers an illuminating insight into a woman who “had a huge impact both within Dickens’ life and after it”, said Emma Harper, a curator at the museum.

The extensive correspondence – a small portion of which will go on public display – was written by Hogarth to journalist Charles Kent between 1867 and 1898. It discusses Hogarth’s “unbearable” grief after Dickens’ death as well as her great admiration for the man.

“We’re hoping the letters provide some insight into Dickens’ personal life and Hogarth’s role in it,” said Harper, who is in the process of deciphering the 120-strong collection at a rate of around six letters a day. “It’s also just lovely to have Georgina’s own words because women, at this point in time, were not recorded quite so well.”

Hogarth lived with Dickens for 28 years and bore witness to some of the most significant events in his private life – namely, his separation from her sister Catherine, whom he left for 18-year-old actor Ellen Ternan in 1858.

Alongside biographer John Forster, she was an executor of Dickens’ will, and was responsible for distributing his estate. She also co-edited three volumes of letters with the author’s daughter Mamie, whom she had been involved in raising, alongside the nine other Dickens children.

“For us the key thing is to find out more about Georgina’s life and her relationship with Dickens, exploring their friendship and the role of women within the household of this very famous writer,” added Harper. “The letters will add a great deal to research about her.”

Writing to Kent on Dickens’ birthday on 7 February 1871 – the first after his death – Georgina expressed that life without her companion was “very very hard to bear” and that she wished to die. She caveated: “I know he is happy and blessed and indeed I do not think I would recall him to this dark world if I could.”

Link to the rest at The Guardian

How the ghostwriter of Biden’s memoirs ended up in the center of a classified documents probe

From The Associated Press:

President Joe Biden worked so closely with the ghostwriter with whom he is accused of sharing classified secrets that he once declared that he’d trust the author with his life.

Mark Zwonitzer worked with Biden on two memoirs, 2007’s “Promises to Keep” and “Promise Me, Dad,” which was published 10 years later. According to a report released Thursday by special counsel Robert Hur, Biden was sloppy in his handling of classified material found at his home and former office, and shared classified information contained in some of those documents with Zwonitzer while the two were working on Biden’s second book.

Hur’s report says no criminal charges are warranted against Biden. It says his office considered charging Zwonitzer with obstruction of justice because the ghostwriter destroyed recordings of interviews he conducted with Biden while they worked on his second memoir together once he learned of the documents investigation. But Hur also said Zwonitzer offered “plausible, innocent reasons” for having done so and cooperated with investigators subsequently, meaning the evidence against him was likely “insufficient to obtain a conviction.”

Hours after the report was released, Biden addressed reporters at the White House and spoke to what he shared with Zwonitzer, saying, “I did not share classified information,” adding that he didn’t do so “with my ghostwriter. Guarantee you I did not.”

Zwonitzer did not immediately return messages seeking comment.

Years earlier, in an interview Biden conducted as part of the Audible audiobook version of “Promise Me, Dad,” Biden called Zwonitzer a “great, great guy” and said, “I trust him with my life.”

He added that Zwonitzer “helped me organize; that was his great asset to me.”

It may feel like less of one now.

Hur’s report says Biden saved notebooks from his time as vice president that contained classified information and used them to help Zwonitzer put together his memoir — sometimes reading from them verbatim for more than an hour at a time. Biden did that, the report says, despite being aware, from when he once suggested that Zwonitzer could be hired as historian for the Office of the Vice President, that the author did not have a security clearance.

The report details that one of the boxes recovered by federal investigators was labeled “mark Z,” and that one recorded conversation with Zwonitzer in 2017 had Biden saying that he’d “just found all the classified stuff downstairs” of a home he was then renting in Virginia.

Biden spoke to that incident Thursday night and, when pressed that Hur’s report suggested he had read classified documents to his ghostwriter, responded, “It did not say that.”

. . . .

Biden added, “it was not classified information in that document. That was not classified.”

Though the report concludes that the published finished product of “Promise Me, Dad” did not contain classified information, it says that Zwonitzer deleted recordings he made during his previous conversations with Biden after he learned about the special counsel’s probe.

But it also says that Zwonitzer offered explanations for his deletions and made available transcripts of the recordings. Additionally, he gave investigators his notes and the computer and external hard drive from which the recordings were removed, which allowed authorities to recover most of what had been deleted.

Link to the rest at The Associated Press

Against the Current

From TLS:

On July 4, 1845, a man from Concord, Massachusetts, declared his own independence and went into the woods nearby. On the shore of a pond there, Henry David Thoreau built a small wooden cabin, which he would call home for two years, two months and two days. From this base he began a philosophical project of “deliberate” living, intending to “earn [a] living by the labor of my hands only”. Though an ostensibly radical undertaking, this experiment was not a break with his past, but the logical culmination of years of searching and groping. Since graduating from Harvard in 1837 Thoreau had tried out many ways of earning his keep, and fortunately proved competent in almost everything he set his mind to. Asked once to describe his professional situation, he responded: “I don’t know whether mine is a profession, or a trade, or what not … I am a schoolmaster, a private tutor, a surveyor, a gardener, a farmer, a painter (I mean a house-painter), a carpenter, a mason, a day-laborer, a pencil-maker, a glass-paper-maker, a writer, and sometimes a poetaster”.

From this position, with any number of routes before him, yet none decided on, Thoreau was particularly well placed to consider questions about the nature, purpose and fundamental meaning of work. Yet he was also a born contrarian, a natural dissenter, with a knack for swimming against the current (his friend and mentor Ralph Waldo Emerson spoke of him as “spiced throughout with rebellion”), and when finally he emerged from the woods he was set not on a trade or career, but on life as a communal gadfly – a professional pain in the neck. “I do not propose to write an ode to dejection”, he writes in Walden (1854), “but to brag as lustily as a chanticleer in the morning, standing on his roost, if only to wake my neighbors up.” His self-imposed seclusion had allowed him to see his outsiderness anew, to understand it from within, to become of a piece with it.

This was a time of unprecedented change in American history. In a generation the country had gone from a motley collection of states, lagging the European powers, to a key player on the world stage. It was a sharp and swift upheaval, resulting not only in a dramatic depredation of the natural environment, but also in a dangerous straining of the country’s social fabric and a remaking of the American collective psyche. Thoreau had already seen the effects in 1843, when he visited New York City, which was then in the vanguard of the great transformation. The rapid technological advancements, the piling up of wealth, the relentless drive to prosperity, the general acceleration of life – such markers of progress may, he worried, end up killing the humanity in us. “I walked through New York yesterday – and met no real and living person”, he wrote in his diary. The future may have seemed radiant to some, but Thoreau was not impressed: “I am ashamed of my eyes that behold it. It is a thousand times meaner than I could have imagined. It is something to hate – that’s the advantage it will be to me”. That meanness would in time follow him back to Massachusetts. In Walden he protests the arrival of the railway in Concord, a stone’s throw from his cabin: “What an infinite bustle! I am awakened almost every night by the panting of the locomotive. It interrupts my dreams. There is no sabbath”.

At a moment when everything in America seemed to be accelerating, Thoreau, always true to form, came up with a counterproposal: slow down, do as little as you need to. Nothing, ideally. “Why should we be in such desperate haste to succeed, and in such desperate enterprises? If a man does not keep pace with his companions, perhaps it is because he hears a different drummer. Let him step to the music which he hears, however measured or far away.” Thoreau wanted not only to bring back Sabbath to a world that seemed to have lost it, but also to re-signify it. “The order of things should be somewhat reversed”, he had said a few years before, in his Harvard commencement speech. The “seventh should be man’s day of toil … and the other six his Sabbath of the affections of the soul”.

. . . .

Three recent books give us a sense of how the chanticleer of Concord keeps us awake today. In Thoreau’s Axe: Distraction and discipline in American culture, Caleb Smith uses Thoreau as the starting point for a wider discussion of attention and wakefulness in nineteenth-century America. Our concerns with distraction and dwindling mental focus, Smith argues, are nothing new. They were prefigured, centuries ago, by important public conversations – an “attention revival”, Smith calls them – sparked in America by the arrival of new economic systems and technologies, which threatened to dismantle traditional forms of life. Smith discusses twenty-eight short texts on attention by religious authors, fiction writers, social reformers and spiritual seekers. He examines Herman Melville’s Moby-Dick (1851), for example. We see here an Ishmael who, because of his “opium-like listlessness”, proves to be the most incompetent of masthead watchmen. “Over the course of three years at sea, he fails to call out a single whale.” Ishmael suffers chronically from a form of distraction that places him among the “romantic, melancholy, and absent-minded young men, disgusted with the carking cares of earth”, a condition to which we can relate only too well. “Today, in our age of new media and chronic attention deficit”, Smith writes, such “passages from the nineteenth century have a strange resonance.” His book is “a salvage operation”. In that century’s “ways of valuing and practicing attention” he hopes to find resources for “living through the present”.

Smith’s book has the merit of showing a meaningful continuity not only between our time and Thoreau’s, but also between Thoreau and like-minded thinkers of his century. It places his work in the broader tradition of “spiritual exercises”, developed over centuries by philosophers and religious thinkers, designed to “detach people’s minds from the passions and drama of everyday social life so they can focus on higher, more enduring realities”. As Smith sees it, whether Thoreau was conscious of it or not, he was “reworking an older asceticism”. Just as Christian penitents strove to master their flesh and discipline their lives, so Thoreau acted on himself to become more wakeful – or “mindful”, as we would say today. In this reading his famous walks in the woods were no ordinary perambulations, but opportunities to practise what he called “the discipline of looking always at what is to be seen”.

Link to the rest at TLS

A Terribly Serious Adventure

From The New York Times:

When setting out to write “A Terribly Serious Adventure: Philosophy and War at Oxford, 1900-1960,” Nikhil Krishnan certainly had his work cut out for him. How to generate excitement for a “much-maligned” philosophical tradition that hinges on finicky distinctions in language? Whose main figures were mostly well-to-do white men, routinely caricatured — and not always unfairly — for being suspicious of foreign ideas and imperiously, insufferably smug?

Krishnan, a philosopher at Cambridge, confesses up front that he, too, felt frustrated and resentful when he first encountered “linguistic” or “analytic” philosophy as a student at Oxford. He had wanted to study philosophy because he associated it with mysterious qualities like “depth” and “vision.” He consequently assumed that philosophical writing had to be densely “allusive”; after all, it was getting at something “ineffable.” But his undergraduate tutor, responding to Krishnan’s muddled excuse for some muddled writing, would have none of it. “On the contrary, these sorts of things are entirely and eminently effable,” the tutor said. “And I should be very grateful if you’d try to eff a few of them for your essay next week.”

“A Terribly Serious Adventure” is lively storytelling as sly “redescription”: an attempt to recast the history of philosophy at Oxford in the mid-20th century by conveying not only what made it influential in its time but also what might make it vital in ours. The philosophers in this book were preoccupied with questions of language — though Krishnan says that to call what they practiced the “linguistic turn” is to obscure continuities with what came before and what came after. Still, Gilbert Ryle, one of the book’s central figures, believed that the philosophy he was doing marked some sort of break from a tradition that was full of woolly speculation about reality and truth. He joked that being appointed the chair in metaphysics — as Ryle was in 1945 — was like being named a chair in tropical diseases: “The holder was committed to eliminating his subject.”

As one of the mainstays in this book, Ryle keeps showing up as others come and go. Born in 1900, he became a fixture at Oxford, asking successive generations a version of the question that was posed to him as a student: “Now, Ryle, what exactly do you mean by …?” This insistence on clarification was foundational to his approach. He liked to use verbal puzzles constructed around ordinary examples: someone buying gloves, a circus seal performing tricks, a confectioner baking a cake. He argued against the “fatalist doctrine” by giving the example of a mountaineer in the path of an avalanche. The fatalist’s doomsaying misuses the language of inevitability. The unlucky mountaineer is doomed in one (immediate) sense but not in another: “The avalanche is practically unavoidable, but it is not logically inevitable.”

Language is full of expressions that Ryle called “systematically misleading.” Philosophers, he warned, could be seduced by imprecision. In the 1920s, having recognized Martin Heidegger as a “thinker of real importance,” Ryle nevertheless worried there was something in Heidegger’s writing style that suggested his school of phenomenology was “heading for bankruptcy and disaster and will end in either self-ruinous Subjectivism or in a windy mysticism” — in other words, metaphysics. Heidegger, of course, would join the Nazi Party in 1933.

Krishnan’s book is teeming with Oxford characters: A.J. Ayer, J.L. Austin, Peter Strawson and Isaiah Berlin, among others. There are cameos by Ludwig Wittgenstein and Theodor Adorno, who worked with Ryle on a dissertation about the phenomenologist Edmund Husserl. Krishnan also dedicates part of the book to Elizabeth Anscombe, Philippa Foot, Mary Midgley and Iris Murdoch — four women who met at Oxford and became important figures in moral philosophy. The linguistic analysis that reigned supreme at Oxford was limited and limiting, they argued, though they allowed that a careful parsing of language still had its place. “Bad writing,” Murdoch said, “is almost always full of the fumes of personality.”

Krishnan himself is so skillful at explicating the arguments of others that at various points it seems as if he must be stating his own position. But no — he mostly hangs back, elucidating a variety of ideas with the respect he thinks they deserve. His own example lays bare the distinction between the critical essay and the hatchet job. The Oxford-trained philosopher-turned-anthropologist Ernest Gellner heaped scorn on his mentors in a 1959 slash-and-burn polemic and derided their work as “rubbish.” Gellner’s salvo wasn’t an attempt at debate; “what it sought was the humiliation and destruction of the enemy,” Krishnan writes. “It was entertaining, especially if one had nothing at stake.”

This, as it happens, was a common charge against Ryle and his colleagues: that their approach was “superciliously apolitical,” as one reviewer of Gellner’s book put it, fixated on picayune verbal puzzles, with nothing at stake. But Krishnan urges us to see things another way. Superficially “flippant examples” about a foreign visitor to a university or a game of cricket could build up “to a more subversive point,” he writes. Verbal puzzles can get us to think more deeply and precisely about how language can warp or clarify our presuppositions; envisioning a game of cricket is less likely than a political example to get our hackles up.

“Conversation, rather than mere speech, was the thing,” Krishnan writes. And one-on-one tutorials — as opposed to enormous lectures — were essential. Students weren’t supposed to learn what to think, “but how.” He writes movingly of Austin’s widow, Jean, who continued to teach at Oxford after Austin died in 1960. “Finding her students altogether too quick to dismiss the philosophers they read as stupid, she enjoins humility and generosity: Read them charitably, don’t overestimate your own ability to refute what you’re only beginning to understand.”

Link to the rest at The New York Times

The Counterfeit Countess

From The Wall Street Journal:

The remarkable story of Janina Mehlberg almost didn’t see the light of day. A Holocaust survivor and a mathematics professor in Chicago, Mehlberg stood out for making her way in an academic field dominated by men. But while teaching her students and giving conference papers, she was privately writing an account of her life’s most remarkable episode: her daring impersonation of a Polish aristocrat in World War II, a deception that allowed her to aid Poles who had been imprisoned by the Nazis. She kept it all secret, but her husband, a survivor and distinguished philosopher of science in his own right, preserved and translated the memoir after her death in 1969.

The manuscript found its way to a historian in Florida, but at that time, there wasn’t enough interest in Holocaust stories for publication. Later the historian Elizabeth B. White received a copy of the memoir and then joined with her colleague Joanna Sliwa to bring the story to readers around the world. Stitching the pieces together, the pair collaborated with an international community of researchers to corroborate the memoir’s key elements. In “The Counterfeit Countess,” Ms. White and Ms. Sliwa put Mehlberg’s accomplishments in the context of the awful events of war, genocide, collaboration—all to properly frame the heroism of a woman whose decision to risk her own life saved uncounted others.

Janina Mehlberg was born Pepi Spinner in 1905 in the “comfortable elegance” of an assimilated Jewish family in the town of Zurawno, then in Poland and now part of Ukraine. Although a fragile child, Pepi became a star student, eventually heading to the university in Lwów (now Lviv), where she studied philosophy and mathematics with some of Europe’s leading thinkers. There she met her future husband and fellow-scholar Henry Mehlberg. By the early 1930s they were married and had both found teaching positions in Lwów.

The city was occupied by Soviet forces in 1939 and then by Germany in 1941. The arrival of Nazi rule in Lwów almost immediately touched off a spree of extreme violence against the city’s Jewish population; massacres of Jews by local militias began even before deportations to death camps were set in motion.

With the help of a family friend, Pepi Mehlberg and her husband fled Lwów for Lublin, a city ravaged by the war and near the site of the Majdanek death camp. On the journey, she realized that an audacious masquerade—leaving behind her identity as a Jewish professor to become the “Countess Janina Suchodolska”—would give her the means to help others. Once she became convinced “that she would not survive the war,” write Ms. White and Ms. Sliwa, “fear’s grip on her began to loosen. The problem she needed to solve, she realized, was not how to survive, but how to live what remained of her life.”

Mehlberg’s solution was to save as many people as possible, and she took extraordinary risks to see her mission through. Yet the authors emphasize that she was rigorous and unsentimental. She had fixed her goal of saving lives and then took only appropriate risks to accomplish her task. Allowing herself to be overcome by feeling, she knew, could get her and many others killed.

The Majdanek camp held Polish prisoners forced into slave labor, Russian prisoners of war, and Jews who would be murdered either by being shot at close range or poisoned by gas. The bodies of the dead, ranging in age from infants to the elderly, were incinerated in the crematoria or buried in pits dug by the camp’s inmates anticipating their own murders. As “the Countess,” Mehlberg served as the head of the Polish Main Welfare Council, visiting the camp regularly. The haughty, demanding countess negotiated ways to bring soup, bread, medicine—and hope—to a great many Polish prisoners. Betraying little emotion, this hidden Jew became a sort of patron saint by appearing again and again to witness their suffering and alleviate it as best she could. “Janina’s story is unique,” the authors assert. “She was a Jew who rescued non-Jews in the midst of the largest murder operation of the Holocaust.”

“The Counterfeit Countess,” too, is unsentimental. The writing is matter of fact; the authors include data about the numbers of meals served, the details of negotiations with Nazi officers, the changes in camp conditions as the war unfolded. Mehlberg recognized that the Germans were making trade-offs within their sick paradigm of racial superiority. Would it be more efficient to murder Poles or starve them while they worked? She persuaded Nazi higher-ups to let her organization provide thousands of tons of food to prisoners so that they could do the work that would feed the Nazi war machine. German commanders decided it served their interests to allow “the Countess” to continue providing food and medicine to enslaved workers.

Meanwhile, Mehlberg worked with the Polish resistance to coordinate efforts to undermine the Nazi regime, especially as the Germans began losing the war. In the eyes of those Polish patriots who knew of her courage, she was a hero. Yet when the war was over, Mehlberg (like many Jews who had taken on false identities) recognized that it was still too dangerous to reveal who she really was. The saddest pages of this often sad book describe the antisemitic violence that swept through Poland after the Nazis were defeated. As the Soviet Union imposed Stalinist orthodoxy on Eastern Europe, it was unsafe to be a Polish patriot or a Jew, or to be known to think freely about anything. Some poor souls—currently being tarred as colonizers by blinkered progressives—fled to Palestine. Mehlberg and her husband managed to settle in North America.

Link to the rest at The Wall Street Journal

Why Is Music Journalism Collapsing?

From The Honest Broker:

“This feels like the end of music reviews,” complained a depressed critic last night.

That gloomy prediction is a response to the demolition of Pitchfork, a leading music media outlet for the last 25 years. Parent company boss Anna Wintour sent out the bad news in an employee memo yesterday.

It’s more than just layoffs—the news is much worse than that. That’s because parent company Condé Nast also announced that Pitchfork will get merged into GQ.

Swallowed up by GQ? Is this some cruel joke?

Ah, for writers it’s all too familiar. In music media, disappearing jobs are now more common than backstage passes.

Just a few weeks ago, Bandcamp laid off 58 (out of 120) employees—including about “half of its core editorial staff.”

And Bandcamp was considered a more profitable, stable employer than most media outlets. Epic Games, Bandcamp’s parent company before the recent sale (to Songtradr) and subsequent layoffs, will generate almost a billion dollars in income this year—but it clearly doesn’t want to waste that cash on music journalism.

Why is everybody hating on music writers?

Many people assume it’s just the same story as elsewhere in legacy media. And I’ve written about that myself—predicting that 2024 will see more implosions of this sort.

Sure, that’s part of the story.

But there’s a larger problem with the music economy that nobody wants to talk about. The layoffs aren’t just happening among lowly record reviewers—but everywhere in the music business.

Universal Music announced layoffs two days ago.

YouTube announced layoffs yesterday.

Soundcloud announced last week that the company is up for sale—after two rounds of layoffs during the last 18 months.

Spotify announced layoffs five weeks ago.

That same week, Tidal announced layoffs.

A few weeks earlier, Amazon Music laid off employees on three continents.

Meanwhile, almost every music streaming platform is trying to force through price increases (as predicted here). This is an admission that they don’t expect much growth from new users—so they need to squeeze old ones as hard as possible.

As you can see, the problem is more than just music writers—something is rotten at a deeper level.

What’s the real cause of the crisis? Let’s examine it, step by step:

  1. The dominant music companies decided that they could live comfortably off old music and passive listeners. Launching new artists was too hard—much better to keep playing the old songs over and over.
  2. So major labels (and investment groups) started investing huge sums into acquiring old song publishing catalogs.
  3. Meanwhile streaming platforms encouraged passive listening—so people don’t even know the names of songs or artists.
  4. The ideal situation was switching listeners to AI-generated tracks, which could be owned by the streaming platform—so no royalties are ever paid to musicians.
  5. These strategies have worked. Streaming fans don’t pay much attention to new music anymore.

I’ve warned about each of these—but we are now seeing the long-term results.

Link to the rest at The Honest Broker

PG notes that a lot of different parts of journalism have been collapsing for a long time.

In New York City and its environs, The New York Times is #3 by circulation. Newsday, which primarily covers Long Island, is #2. The Wall Street Journal, a paper that actually publishes a conservative editorial from time to time(!), has been #1 for a long time.

Newsday!!! A tabloid suburban newspaper. The ultimate snub for Manhattanites! Before you know it, a paper that covers farm news (New York Farmer) will have a larger circulation than The New York Times.

Infectious Generosity

From The Wall Street Journal:

In this unpleasant cultural moment, there’s no shortage of thinkers with ideas to cure what ails us. In “The Soul of Civility,” Alexandra Hudson argued that a broader spirit of respect and tolerance is the saving tonic. In “The Canceling of the American Mind,” Greg Lukianoff and Rikki Schlott made the case that free speech supplies the needful corrective. Now comes Chris Anderson to advocate the radical throwing-wide of arms, hearts and wallets as a means of ensuring a better future for all. In “Infectious Generosity,” Mr. Anderson describes with boyish enthusiasm and embarrassing naiveté his vision of how generosity could save the world. He imagines a future “lit up by maximum philanthropy,” in which human needs are gratified to such an extent that human beings cease to be handicapped by what cynics might call human nature.

“Let’s be generous in a way that gives people goosebumps,” writes the excitable Mr. Anderson, a British expatriate who has run the nonprofit speech-sponsoring TED organization for the past 23 years. “You don’t have to be rich or a creative genius. If you can adopt a generous mindset, seek to understand people you disagree with, and write words that are kind instead of cruel, you can help turn the tide.”

Mr. Anderson makes the case that beneficent acts need no longer go unknown. Good deeds and largess can go viral online, thus establishing as social norms the doing of good deeds and the giving of largess. Media executives can emphasize heart-warming stories. By such means, the internet can be a force-multiplier, making generosity contagious and causing people to crowdfund, spread kindness and share artistic work free.

To the author, generosity is “arguably the best idea humans have ever embraced.” As to why one ought to place others before oneself, Mr. Anderson makes no transcendental claims, though as a former Christian he respects the role that religion plays in helping men and women to behave unselfishly—partly because they think God is watching and partly because, in most faith traditions, they are called to be charitable and give alms.

Money and the spreading of it plays a big part in “Infectious Generosity,” which is dotted with Liana Finck’s humorous drawings. Mr. Anderson believes that everyone, from billionaires downward, should give more than they do. He favors an examination of conscience—“Am I being generous with my money? Is my carbon footprint fully offset?”—and a secular sacrifice to replicate the Christian and Jewish practice of tithing (giving away 10% of one’s income) or the Muslim obligation of zakat (giving 2.5% of one’s wealth to charity).

Were all persons to follow suit, Mr. Anderson promises, the result would be a pot of philanthropic cash he puts at between $3.5 trillion and $10 trillion. Almost giddily, he lists the outcomes that such a flood of money could produce, from ending hunger and establishing a universal basic income “for the entire world” to making “meat alternatives as cheap and tasty as the real thing” by, in part, giving “meat alternatives the same advertising budget as the meat industry.”

The inclusion of meat substitutes on Mr. Anderson’s list of grandiose possibilities points to the fundamental silliness of “Infectious Generosity.” Meat substitutes, like heart-warming news operations, have not managed to entice the public. If people really wanted synthetic meat, they’d buy it; if they wanted media that feature only good news, they’d pay for them.

There are more serious problems with the premise of Mr. Anderson’s ecstatic vision. People differ on what is desirable, and desires conflict. Davos Man’s meatless cause is a cattle rancher’s anathema. A cash handout designed to uplift may just as easily corrupt. Do-gooders often fail to take into account the law of unintended consequences. For instance, the mosquito nets blanketing Africa to fight malaria are also causing ecological damage, because people are using them for fishing. Charlatans, too, are always ready to take advantage of donors with soft hearts and deep pockets.

Link to the rest at The Wall Street Journal

Optimal

From The Wall Street Journal:

What makes for a good work day? We might fantasize about achieving a state of “flow”—so enamored with work that time seems to stand still. But for an average day at the office, that’s asking a bit much. Better, say Daniel Goleman and Cary Cherniss, to aim for “a very satisfying day, one where we feel we did well in ways that matter to us, were in a mood that facilitates what we did, and felt ready to take on whatever challenges came along.”

In “Optimal: How to Sustain Personal and Organizational Excellence Every Day,” Messrs. Goleman and Cherniss tout the concept of emotional intelligence as the key to entering this state. Emotional intelligence involves self-mastery and empathy, social skills and stress resilience—virtues that a cursory glance at social media reveals to be more rare than one might hope. Still, they can be found outside the internet, not least in a successful workplace. The authors, both psychologists, marshal what they’ve learned through years of studying emotional intelligence to argue that its skill set offers “a crucial advantage in today’s tough business climate.” At many top companies, nearly everyone has a high IQ, so effectiveness is determined by something else—namely, the ability to manage one’s emotions and inspire others to want to do their best.

All this is unexceptionable and rather familiar by now. So the authors spend much of “Optimal” insisting that this worldview is more radical than it seems, casting it in opposition to the supposed conventional wisdom that classroom intelligence is all that matters: “The standard view . . . assumes that what makes you successful in school and academics is all you need for success in your career.”

The authors are most convincing when they show how emotional intelligence does, practically, pay off for people and organizations. For instance, one financial company found that advisers “had difficulty talking with clients about buying life insurance—preparing for your death, or even thinking about it, can be awkward or even off-putting.” But the advisers who went through emotional competence training, learning how to have difficult conversations, generated more sales than those who didn’t.

The authors tell the stories of several leaders whose wise self-control and empathy kept situations in check—such as the district manager for a large food-services company who responded to an (unsubstantiated) workplace-safety complaint from a disgruntled employee by trying to figure out why she was so upset. The manager was able to help that employee through the family health crisis that had precipitated the desire to lash out.

The chief executive of an architecture and engineering company was emotionally intelligent enough not to give a pep talk when she addressed employees after a round of layoffs. Instead she talked about her own sadness, presumably at having to lose good workers and disrupt their lives, and gave people space to speak about their concerns before discussing the next steps. “While there was lingering anxiety,” the authors write, “gradually the collective mood shifted toward enthusiasm about the company’s future—an emotional contagion built on reality; facing the sad facts, but then looking beyond.”

Folks who are not leading organizations can use emotional intelligence to improve their own situations. Messrs. Goleman and Cherniss share the wonderful story of Govan Brown, a New York City bus driver who aimed to transfer his own “ebullient mood” to his passengers—taking care of his flock, as he saw it, and ensuring that everyone got off the bus happier than when they got on.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

She Talked Like a Millionaire, Slept in a Parking Garage and Fooled Nearly Everybody

From The Wall Street Journal:

University of Florida officials went back and forth with documentary filmmaker Jo Franklin over details for a planned gala in Franklin’s honor at the Four Seasons Hotel in Washington, D.C.

Franklin had pledged $2 million to her alma mater, and requested her guest list for the party include the entire staff of the PBS NewsHour. A day before the gala, school officials learned her seven-figure check had bounced. They boarded their flight to Washington, hoping to straighten everything out.

The next day, they found out Franklin hadn’t arrived at the Four Seasons, and the credit card number she gave the hotel wasn’t working. A person who identified as Franklin’s assistant emailed to say Franklin had broken her foot and couldn’t make it to Washington. University workers began phoning guests to say the gala was canceled.

The school’s esteemed graduate, once a journalist and documentary filmmaker specializing in the Middle East, emerged as a troubled and gifted fabulist. The $2 million gift was an illusion, one in a yearslong string of fantasies concocted by Franklin, who tumbled from a life of apparent success to homelessness. For years, she persuaded many around her that she was living the high life. Her family knew better.

“She is very ill and we need to have her put into a medical treatment facility of some type before she harms other people and herself,” her younger brother, George Franklin, wrote to family members days after they learned of the 2014 gala fiasco. Jo Franklin was 68 years old at the time and estranged from her daughter and siblings. 

In the years that followed, Franklin sometimes spent nights in a South Florida hotel parking garage. She was arrested a few times, once for allegedly stealing $11.98 worth of wine. Franklin also befriended a group of regulars at a local Starbucks who were impressed with her professional background and insights. “She was on point when it came to the political world, what’s going on in the world,” said Stephen Sussman, a Starbucks friend. Franklin made a point of her success. She mentioned having a driver and a home on affluent Jupiter Island, Fla. She said she stayed at a local hotel only to be closer to her job, which included working with government officials regarding the Saudis.

Over time, her image dissolved. Franklin’s friends noticed that she wore the same clothes. Her sandals had holes. She said she didn’t carry a cellphone because the Saudis were tracking her.

Lost in a surge of mental illness cases and a record-high homeless population are a growing number of Americans who can’t fully care for themselves but aren’t easily diverted into treatment, either on their own or involuntarily. Masses of homeless people around the U.S. have fueled aggressive efforts to push more of the mentally ill into care.

Franklin’s family worried for years about her mental stability and, like many others, were frustrated because they saw no easy path to getting her help. They didn’t know if she had ever been diagnosed or treated.

“My hope is just there was a way, even if she didn’t want it, to be forced to sit down with a mental health professional and figure out, ‘What is there to do here?’ ” said Franklin’s 38-year-old son, Hugh Trout, who last saw his mother nearly a decade ago.

George Franklin said his sister “wasn’t ever going to admit she had a problem.” He and the Starbucks friends had a plan to get Jo Franklin off the streets. It, too, involved a tall tale.

This account of Jo Franklin’s life is based on public records, emails and interviews with family, friends, former colleagues and associates familiar with her professional highs as well as her steep, slow-motion fall.

Josephine Anne Franklin was born in Chicago on July 31, 1946, the second oldest of four children in an upper middle-class family. The family moved to Tampa in the mid-1960s, and Jo graduated from the University of Florida in 1968. Her semester abroad in Lebanon kindled a lifelong fascination with the Middle East. She married a surgeon and had two children, Ashley Trout in 1981 and Hugh Trout four years later.

Franklin’s résumé included work as a producer for the MacNeil/Lehrer Report on PBS. She later made a series of documentaries for PBS on Saudi Arabia and the Middle East, as well as a series on the international space race. She often appeared on location in her films and was known professionally by her married name, Jo Franklin-Trout.

Her 1989 documentary “Days of Rage: The Young Palestinians” aired on PBS and drew barbs from critics who saw a pro-Palestinian slant and lack of Israeli viewpoints. She said at the time that she wanted to showcase a rarely seen perspective.

Franklin spent the early part of the 1990s writing a novel, “The Wing of the Falcon,” billed as a love story and thriller set during the Persian Gulf War. The self-published novel sold poorly after its 1995 release, said Alan Gadney, whose company Franklin hired to produce and market the book, which Franklin had hoped would be made into a movie. A New York Times review acknowledged her Middle East expertise but said, “what she cannot do is write.”

Gadney said he later sued Franklin for roughly $25,000 she owed the company. “We went after her but she disappeared,” he said.

Franklin moved to California, and her family said they didn’t know how she supported herself during the roughly two decades she lived there. She split from her husband and, in 1996, a judge in their divorce case noted that Franklin had little taxable income from 1990 through 1992 and hadn’t filed a personal tax return since then. Yet she leased a Jaguar XJ6, the judge said, and had accumulated more than $150,000 in debt. 

He denied Franklin’s alimony request and awarded custody of her two children to Franklin’s ex-husband, who lived in Washington. “Ms. Franklin is a person with many highly marketable skills,” the judge wrote.

Jerome Parks, an East Coast friend, kept in touch after she moved to California. “She was always talking about good things that were going to happen,” he said, “but they just never seemed to happen.”

Franklin’s family believed she became lost in her fantasies around the time of her divorce. Her daughter, Ashley Trout, who was sent to spend summers with her mother in California as a teenager, said Franklin lived beyond her means, focusing more on her image than productive work. Ashley said they had a combative relationship and that she often challenged her mother about lying and overspending.

When confronted, “she would just monologue for 30 minutes,” said Ashley, now 42. “Just a torrent, a firehose.” Her brother, Hugh, said, “When anyone started to tamper with that fantasyland, it would get very, very dark.”

In 2004, Ashley was taken to the hospital after a rock-climbing fall in Japan. Franklin called the hospital and said she was flying there on Colin Powell’s jet. “I get my mom on the phone and I tell her, listen, here’s the deal, there’s no jet,” Ashley said. “You don’t have access to Colin Powell’s jet.” 

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Mental Maps of the Founders

A plan of my farm on Little Huntg. Creek & Potomk. R., George Washington (Library of Congress)

From The Wall Street Journal:

On July 1, 1776, as the Declaration of Independence was about to be published, its author complained to a friend that it was painful, at such a fraught moment, to be in Philadelphia, “300 miles from one’s country.”

Thomas Jefferson’s sentiment, expressed casually in a private letter, reveals an arresting fact—namely, that the new nation did not yet exist even in the mind of the man who announced its birth. This inchoate quality was obvious in the sense that the 13 former colonies, united only in their bid for independence, didn’t create a genuine national government for more than a decade. But Jefferson’s remark hit on something more elemental, the geopolitical space that a citizen intuitively recognizes as “one’s country.”

Michael Barone’s “Mental Maps of the Founders” focuses on this spatial sense among key members of the revolutionary generation. By “mental maps,” Mr. Barone intends more than a regional affinity. He means a “geographical orientation”—the perspective that place confers. He argues that, for the Founders, it shaped “what the new nation they hoped they were creating would look like and be like.”

Mr. Barone, a distinguished journalist and political analyst, develops his theme through a series of six biographical portraits. He argues, for example, that the crucial event in George Washington’s life was his military campaign to oust the French from the Ohio country in 1754. A skirmish during that adventure began the Seven Years’ War, a struggle for empire waged across Europe, the Americas and Asia. Though Horace Walpole, writing in his diary in London, didn’t know Washington by name, he was referring to him when he wrote that “a volley fired by a young Virginian in the backwoods of America set the world on fire.” Washington’s youthful involvement in winning the Northwest established, for him, a lifelong sense of its strategic importance for any predominant power in North America. Almost by reflex, Mr. Barone writes, Washington “looked west by northwest from the front door of his beloved Mount Vernon to the continental nation he did more than anyone else to establish.”

The original 13 states could be easily divided into distinctive regional cultures: Puritan New England, with its homogeneous population and communal religious foundations; the pluralistic middle states, established by Quakers, the Dutch and Catholics, pragmatically preoccupied with commerce rather than political abstractions; and the Deep South, set apart by its planter elite and an overwhelming dependence on slave labor. Virginia, the largest state and the home of three of the Founders in Mr. Barone’s survey—Jefferson, Washington and James Madison—didn’t wholly belong to any of these regional blocks but possessed elements of each.

The other three figures in Mr. Barone’s study—Benjamin Franklin, Albert Gallatin and Alexander Hamilton—were immigrants to the regions in which they settled. Hamilton was born in the West Indies and settled in New York. Today we recognize his origins as the background of an immigrant. But Franklin also left one British colony (Massachusetts) and settled in another (Pennsylvania). And both men were born British subjects. Not so Gallatin, the Treasury secretary under both Jefferson and Madison, who came to America from the French-speaking part of Switzerland. Mr. Barone argues that these figures’ experiences as successful outsiders within their communities gave them a broader national outlook than that of their more insular countrymen.

The Founders’ geographical visions, as Mr. Barone shows, informed their sense of how America’s distinctive regional cultures might fuse into a common whole. Hamilton, inspired by the trade and credit networks that made the small sugar islands of his birth into engines of imperial wealth, favored a strong central government backed by a fiscal system that earned the loyalty of elites in all sections. Jefferson, dreaming of an egalitarian republic from his elevated place in Virginia’s planter aristocracy, believed in the proliferation of independent yeoman farmers united by an instinctive love of liberty. Madison, characteristically, added a more realistic version of Jefferson’s slightly utopian ideas. Pluralistic expansion, he felt, would prevent any one region from dominating the rest. The problem was not, in his view, too many regional subcultures but too few. These philosophical differences are well known, but Mr. Barone persuasively describes the place-derived assumptions underlying them.

Link to the rest at The Wall Street Journal

Lawrence of Arabia Review: Dreams of Empire

From The Wall Street Journal:

The British explorer Ranulph Fiennes is the first person to have crossed Antarctica on foot and the only living person to have circumnavigated the planet by its poles. In 2000 he fell through the ice while walking solo to the North Pole, leaving the fingertips of his left hand severely frostbitten. When he got home, Mr. Fiennes, a veteran of the elite Special Air Service, trimmed the fingertips with a saw, saving himself £6,000 in medical expenses and, he says, considerable pain.

In 2003 Mr. Fiennes had a heart attack and a bypass operation, then completed seven marathons on seven continents in seven days. In 2009 he reached the summit of Mount Everest on the third attempt. His other feats of endurance include writing more than 30 books, including biographies of the polar pioneers Ernest Shackleton and Robert Falcon Scott, handbooks for ultrarunners and travelers with weak hearts, and the perhaps inevitable family history “Mad Dogs and Englishmen.”

Mr. Fiennes (pronounced “Fines”) is a classic English type, the diffident hero and driven adventurer. He is the square peg who inspires irregular soldiers in inhospitable places. He crosses deserts, forests and frozen wastes, facing down danger and the limits of human endurance, death included.

The rarest such figure, combining all these characteristics of imperial legend with lasting historical significance, was T.E. Lawrence (1888-1935). Dubbed Lawrence of Arabia in his lifetime and immortalized twice over, once by himself in his 1926 memoir “Seven Pillars of Wisdom” and again by Peter O’Toole in the 1962 movie “Lawrence of Arabia,” Lawrence played a crucial but thwarted role in the shaping of the modern Middle East.

In 1916 Britain was at war with Germany’s Ottoman Turkish allies. Lawrence, an archaeologist turned intelligence officer, helped organize the Arab tribes of the Hejaz (today’s western Saudi Arabia) into a guerrilla army and led the Arab Revolt that, in October 1918, displaced the Turks from Damascus. The revolt raised hopes for a unified, self-determining Arab nation, but Lawrence’s political masters and their French allies connived at frustrating that ambition. After the war, the British and French took over from the Turks and created new borders and nations. The events of that era still complicate today’s local and global politics.

Mr. Fiennes’s “Lawrence of Arabia: My Journey in Search of T.E. Lawrence” is a casually elegant biography and an expert reflection on the kind of irregular warfare that Lawrence pioneered and Mr. Fiennes experienced as a young officer fighting a Marxist insurgency in the mountains of Oman in the late 1960s. Lawrence’s example, Mr. Fiennes writes, preceded him and “often inspired me to victory in life-or-death situations.” But Lawrence was also his companion in facing “impossible military and political odds, as well as confronting personal scars.”

The second of five sons, Lawrence grew up in a villa in Oxford. Fascinated by military history, especially the Crusades and Napoleon Bonaparte, he was pushed by his mother to achieve greatness. Lawrence’s motives in risking his life, Mr. Fiennes writes, were not just “his attachment to the Arabs and his hatred for the Ottomans.” He was not who he claimed to be or who he wanted to be. “Lawrence” was an assumed name. His father, an Irish baronet named Thomas Chapman, had eloped with the family maid. When Chapman’s wife refused to grant a divorce, he and his mistress adopted a new name and pretended to be married. Lawrence’s mother saddled him with the “burden that he was special” and a mission to “redeem the family.” Raised a strict Christian, Lawrence was 10 years old when he discovered his illegitimacy.

He became a feted student at Oxford, then cultivated the English romance with tribal life while digging at ancient Carchemish, north of Aleppo, Syria, from 1911 to 1914. “I am Arab in habits and slip in talking from English to French to Arabic unnoticing,” he wrote home. When war broke out in 1914, the British army needed Arabic speakers with local knowledge. Lawrence’s old Oxford tutor, now an intelligence officer, summoned him to Cairo as a mapmaker and adviser. But Lawrence already had a plan to redraw the map by unifying the Arab tribes against the Turks. “I want to pull them all together,” he wrote in 1915, “and roll up Syria by way of the Hedjaz” in the name of Hussein Ibn Ali, the Emir of Mecca.

Meanwhile Hussein and his four sons secretly planned their own revolt, but needed weapons and support. The British could not spare soldiers, but they sent Hussein money and antique rifles, as well as a promise from Henry McMahon, the British high commissioner in Egypt, to recognize Hussein’s rule from Aleppo to Aden if the revolt succeeded.

In July 1916, Hussein’s followers expelled the Turkish garrison from Mecca but, lacking heavy weapons, they were repulsed at the ports of Jidda, Yenbo and Rabegh. In October, with the revolt stalled and British confidence faltering, Lawrence sailed from Egypt for Jidda. From there, he disguised himself in Arab robes, crossed roughly 100 miles of desert by camel through sandstorms, bearing the heat and weeping saddle sores, met Hussein’s son Feisal at the rebel camp and launched his legend.

Lawrence said he preferred “the Arab untouched” to the “perfectly hopeless vulgarity of the half-Europeanized Arab.” In Feisal he found his ideal partner, tall and white-robed, with a “strange still watchfulness” and an “almost regal” bearing that reminded Lawrence of Richard the Lionheart and the young Napoleon. Returning to Cairo, Lawrence secured explosives, the support of Royal Navy ships and British advisers “to train Arab bands,” and a supply of gold.

The Turks controlled the Red Sea ports of the Hejaz via their new Damascus-Medina railroad. Lawrence soon saw that the Arabs could not match well-drilled Turkish troops and their German-supplied artillery. Inland, however, Lawrence believed that the tribesmen’s mobility, small numbers and familiarity with “one of the most trying countries in the world for civilized warfare” made them “the most elusive enemy an army ever had.” Lawrence convinced Feisal to adopt an indirect strategy: disrupt and pin down the Turks by sabotaging the railroad line, then bypass and isolate the Turkish garrisons at Medina and the Red Sea ports and push northward to Syria.

In November 1917, Ottoman troops captured Lawrence, disguised as a peasant, as he spied out the railroad junction of Deraa. He was brutally beaten and raped by the Turkish governor and his men. The shame he felt after that episode was only multiplied at the end of the war: After he entered Damascus on Oct. 1, 1918, in a Rolls Royce, he had to watch as the British commander Edmund Allenby informed Feisal that Britain and France did not intend to honor McMahon’s promises of a unified Arab kingdom.

Damaged by his experiences in Arabia and disenchanted by the political aftermath, Lawrence became a celebrity when “Seven Pillars of Wisdom” appeared in 1926, then did his best to disappear, enlisting in the Royal Air Force under an assumed name. He died in a motorbike accident in 1935.

Link to the rest at The Wall Street Journal

Lawrence is in the second row on the right below

Social media’s online diarists have a long lineage – Who are personal journals written for?

From The Economist:

The tales embrace the mundane and the seismic, from being dumped by a boyfriend before the school prom to the sudden death of a parent. The tone ranges from cheesy to heartbreaking. The storytellers are “journal influencers”, mostly young women reading their teenage diaries to audiences online.

Some videos are mingled with other content, merging pre-teen dreams with make-up tips; others are simple shrines to past selves. One influencer, Carrie Walker (pictured), draws 1.2m views for a half-hour read on YouTube; the shorter content on TikTok’s #diarytok tag has reached 54m. And sharing secrets presents commercial opportunity: selling notebooks and pens on Amazon; auctioning copies of diaries on eBay.

Many people think about writing a diary, especially at New Year. Some start. Some even keep it up. But why write, and for whom? Whether a novice facing a blank page or a seasoned scribbler with years of good meals and gossip in irregular notebooks, almost any diarist has asked themselves that question.

Sally Bayley of the University of Oxford, author of “The Private Life of the Diary”, regards sharing on social media as the antithesis of diary-keeping. The journal is “an attempt to be honest with yourself”. It is “an internal territory, which you are mapping onto the page”, inseparable from privacy. Even Sylvia Plath, a “theatrical individual”, Dr Bayley notes, wrote a diary in order to “generate a voice in private”.

Yet diaries have also long been shared, if more discreetly than on TikTok. Keeping a journal rose in popularity in the 19th century, especially among women. According to Cynthia Huff, an academic specialist in Victorian culture, diary-sharing then was “extremely common”.

Diaries were read aloud, sent to friends or left open for visitors to peruse. “That distinction between public and private really doesn’t hold at all,” says Professor Huff. Some diaries served practical uses, sharing advice on self-improvement, pregnancy or childbirth. British women in the colonies often sent diaries back home. They were “creating an extended family through these diaries” and fostering an ocean-spanning sense of Englishness.

Many journal videos also create a sense of community. They share stories of isolation: of suffering homophobia, struggles with body image or early romantic obsessions. They poke fun at the distorted expectations of youth and the disappointments of adulthood, with the ear of sympathetic strangers.

Some diary-sharers go further. At Queer Diary, a series of events across Britain begun in 2020 by Beth Watson, a performer, lgbtq adults read their old diaries to a live audience. The drama, confusion and mayhem of teenage life are performed to a sympathetic crowd. The celebration, Ms Watson says, is as important as the reflection.

The symbiosis of secrecy and celebration was perhaps best understood by Anaïs Nin, a 20th-century French-born American whose diary was an unapologetic exercise in self-creation. “I am in my Journal, and in my Journal only, nowhere else. Nothing shows on the outside. Perhaps I do not exist except as a fantastic character in this story.”

Nin’s mix of fantasy and truth included an illegal abortion, extramarital affairs and, most notoriously, an incestuous relationship with her father. Her assertions of confidentiality—“you won’t say anything, will you”; “only my Journal knows it” —treat the reader as the sole listener.

And yet, of course, Nin published her journal. Its scandalous content won her fame that her fiction had not. Her confessional texts penetrated the thin veil between public and private. The diaries are a masterclass in broadcast secrecy, a megaphoned whisper.

“We write to taste life twice,” Nin wrote, “in the moment and in retrospection.” 

Link to the rest at The Economist

‘A Book of Noises’ Review: Sound and Sensibility

From The Wall Street Journal:

All science is either physics or stamp collecting, the physicist Ernest Rutherford supposedly said, distinguishing what he saw as the pursuit of deep principles from the accumulation of endless instances. Even those who find this smug quip tiresome—it doesn’t help that it’s probably apocryphal—would concede that Caspar Henderson’s “A Book of Noises: Notes on the Auraculous” has a definite postal air. There’s physics here, certainly, but also bits of zoology, biology, anthropology, linguistics and geology, as well as music and literature.

Which is not to say that the book isn’t interesting or sometimes stimulating. Philately will get you somewhere. Mr. Henderson, a British journalist who is remarkably cheerful for someone who writes about environmental problems and human rights as well as science, explores in 48 short essays how sound, construed broadly, shapes the universe and the life in it, from the still-detectable “ripples” of the Big Bang to the song of nightingales.

As in Mr. Henderson’s previous books, “The Book of Barely Imagined Beings” (2012) and “A New Map of Wonders” (2017), the keynote is wonder—a secular awe at the variety and complexity of the world. “Glory be to Creation for wonky things,” one chapter begins, in a play on Gerard Manley Hopkins’s poem “Pied Beauty.” Hopkins was praising “all things counter, original, spare, strange” (and the Christian God), while Mr. Henderson is talking about asymmetrical animals, such as bottom-lying flatfish that “stare up with a second eye that has yanked itself all the way round their skulls,” or certain species of owls, whose uneven ears sharpen their ability to locate sounds. To this advantage, Mr. Henderson writes, they add a beak shaped to minimize sound reflection and ears whose acuity doesn’t diminish with age. “Go, owls!” he concludes brightly.

Also praised in the “Biophony” section are amphibians (“What a piece of work is a frog. How noble in breathing, which it does through its skin”), elephants (they use their feet to “hear” vibrations in the ground) and bat echolocation (“the more one considers it, the more amazing it is”). The entries in “Cosmophony” include one on “sound in space” that contemplates how your voice would sound in the thin, cold atmosphere of Mars—quiet—and one on the ancient connections between astronomy and music. Mr. Henderson also explains “sonification,” in which scientific data is turned into sound. Though mostly an artistic experiment or public-outreach gimmick, sonification has been used for research, harnessing our ears’ ability to discern subtle changes in signals. It even allowed one astrophysicist, Wanda Díaz-Merced, to continue her career after going blind.

From “Geophony” we learn that the Northern Lights can sound like crackles, sizzles, rustles or bangs, though the idea that they sounded like anything at all was long dismissed. It takes particular atmospheric conditions to make them audible, and even then they’re barely loud enough to hear over the chatter of other tourists. Another entry, by contrast, discusses the loudest sound in recorded history (the 1883 eruption of the volcano Krakatoa, whose reverberations looped the Earth for five days) and one of the loudest ever on Earth (the Chicxulub asteroid, death knell for the dinosaurs).

Link to the rest at The Wall Street Journal

How to Be Multiple and Twinkind

From The Wall Street Journal:

Twins have long sparked fascination, awe, fear and superstition. They appear in literature both high and low—from Shakespeare and Charles Dickens to the Bobbsey Twins and the Sweet Valley High young-adult books. In mythology, their stories are linked to the origins of nations (Romulus and Remus; Esau and Jacob) and the power of the gods (Castor and Pollux; Apollo and Artemis). Venerated in some cultures and feared in others, twins have served as emblems of affinity and rivalry and, for scientists and philosophers, as case studies in the search to understand the power of nature over nurture.

Today twins remain relatively rare—only 1 in 60 people is a twin in the U.S. and Europe—although the rate of twin births has risen precipitously since the 1980s, thanks mostly to in-vitro fertilization. The relative rarity of twins invites speculation about how their experience is distinct from that of singletons—and what that distinctiveness can teach us about personhood.

In “How to Be Multiple,” Helena de Bres, a philosophy professor at Wellesley College and herself an identical twin, draws on her own experience as a way to explore mind-body boundaries and the nature of individualism and autonomy. “What makes a person themself rather than someone else?” she asks. “Can one person be spread across two bodies? Might two people be contained in one?”

This approach works some of the time. It is true that the resemblances between twins—both physical and psychological—can seem uncanny, as if they are a single identity. Twins can express similar or identical tastes in food, living situations and even sexual preferences, as Ms. de Bres notes, suggesting bonds far stronger than those that exist among ordinary siblings.

. . . .

Ms. de Bres also argues that twins offer evidence of the so-called extended-mind thesis, which posits that, given our reliance on tools and close interaction with the physical world, some cognition can be understood as entwined with our material surroundings—extended. As an identical twin, Ms. de Bres describes in fascinating detail the ways in which her twin’s thoughts and her own—and their physical experiences—seem to merge. She notes the “experience of merger I sometimes have with [her twin] Julia: the sense that I’m thinking via her mind.” She describes the twin contemporary artists Tim and Greg Hildebrandt, who work together on pieces of art and sign only their last name, since, as Tim puts it, “it’s our painting, not Greg’s or mine.”

. . . .

More interesting is her claim that twins prompt a kind of unacknowledged existential dread in singletons because people “raised in Western cultures” are “deeply attached to being discrete individuals with sharply distinct physical, mental, and emotional barriers between ourselves and other humans.” Thus twinhood offers an alternative to the radical individualism ascendant in Western culture. Twins “and other freaks” encourage “social interdependence” and “model more communal ways of living.”

. . . .

Ms. de Bres notes that “much of the experience of twinhood is determined not by twinship itself but the response of non-twins to it.” As a mother of fraternal twin boys, I know too well that the parents of twins have pet peeves about how their children are treated by the world. I wince whenever someone refers to my sons as “the twins,” as if they always act in concert like the Hanna-Barbera cartoon Wonder Twins rather than as the very different individuals they are. In “Twinkind,” William Viney captures the full range of nontwin response, among much else, aiming to explore what he calls “the singular significance of twins.” His book is a visual feast, replete with reproductions of twins in paintings, sculptures, medical drawings and advertisements. Like Ms. de Bres, Mr. Viney is a twin, but his focus is less on his personal experience than on how people in different cultures and eras have understood twinhood.

“Transfigured as gods and monsters, spirits and animal familiars, and viewed as phantoms, freaks and clones,” Mr. Viney writes, “twins pass through a hall of unpredictable mirrors.” He describes, for instance, twins as symbols of both power and mystery in Egyptian, Greek, Aztec and Indo-European mythology, as well as the ways in which cultures change their views of twins over time. In the 18th and 19th centuries, the Oyo Yorubans (in present-day Nigeria) saw twins as dangerous—a “supernatural curse”—but by the early 20th century began to see them as markers of good fortune and “treated them with care and reverence as a deity in miniature.”

. . . .

Mr. Viney acknowledges the dangers of reading too much into twinhood. “One reason that a twin history is difficult to portray in one image or a few hundred . . . is that each twin is what US anthropologist Roy Wagner might have called a ‘fractal person’—neither singular nor plural.” Perhaps our fascination with twins is the result of that enduring, mysterious fact: As singletons, we can never really understand what it means to be multiple.

Link to the rest at The Wall Street Journal

Of course, Princeton University Press can’t be bothered with a free preview of Twinkind the way everyone else does on Amazon because ebooks and previews are for proles.

You can see a sort of queer preview of Twinkind by clicking Here, then cursoring down instead of clicking on each page the way Amazon and all non-troglodytes do.

Putting Ourselves Back in the Equation

From The Wall Street Journal:

Popular science often feels like a kind of voyeurism. Those who can’t manage the real thing are given a thrilling glimpse of its intrigue and excitement but kept at a distance. So a book that tackles the mind-boggling triad of physics, consciousness and artificial intelligence might be expected to provide little more than intellectual titillation. The science journalist George Musser even says at its end that “many physicists and neuroscientists are just as perplexed as the rest of us.”

But Mr. Musser knows that the point of popular science is not for the reader to understand everything fully but to get a sense of what’s at stake, what kinds of answers are being offered to difficult questions, and why it all matters. One could not ask more of “Putting Ourselves Back in the Equation”—on all three counts it delivers.

The central puzzle of the book is what the contemporary philosopher William Seager has called a “ticking time bomb”: the intellectual division between the objective and the subjective. Natural science was able to make such great strides after the Middle Ages only because it left the analysis of thoughts and feelings to poets and philosophers, focusing instead on measurable observables. The strategy worked a treat until it hit two brick walls.

The first is the nature of consciousness. Modern neuroscience, at first, stuck to examining the brain events that corresponded to conscious experiences: the “neural correlates of consciousness.” But at a certain point it became clear that such a focus left out a good deal. How is it possible that mushy cells give rise to sensations, emotions and perceptions? The science of mind had to ignore precisely what it was supposed to explain because a purely objective account of consciousness cannot encompass its subjective character.

And then—a second and related problem—physicists discovered that they couldn’t leave conscious minds out of their equations. A central tenet of quantum theory is that observers change what they observe. This is embarrassing. Physics is meant to describe the mind-independent world. But its best description ended up having minds—with their particular points of view—at its center. So for physics to be anything like complete, it has to find a way to kick minds out again or account for what makes them conscious and why they should affect physical matter.

Mr. Musser provides a chatty and informal overview of the many ways in which physicists have been trying to rise to these challenges. He speaks to many of the leading scientists in the field, trying a bit too hard to make them seem like regular folks so that we don’t feel intimidated. A bigger challenge, for the reader, is that he introduces us to so many theories that it’s difficult to judge which should be taken most seriously and which lean toward the cranky. Given that even the most well-evidenced theories in physics sound crazy, our intuitions are no guide.

But by the end a number of general insights shine through. The central one is that we have to think of both physics and consciousness in terms of networks and relations. You can’t find consciousness in single neurons, no matter how hard you look. The reductive approach, which seeks to break down phenomena to their smallest parts, doesn’t work for everything. The clearest evidence of the limits of reductionism is quantum entanglement, or “spooky action at a distance,” the title-phrase of Mr. Musser’s previous book. This is the phenomenon by which two particles appear to affect each other even though they are too far apart for any information to pass between them without exceeding the speed of light, a physical impossibility. No explanation of this oddity is possible if we focus reductively on the particles as discrete entities. Instead we have to see them as interrelated.

Consciousness, too, seems to depend upon patterns of interconnectedness. For a while now researchers into artificial intelligence have realized that we can get nothing close to human reasoning if we have computers that follow only linear processes. AI took off when scientists started to create neural networks, in which processes are conducted in parallel, mimicking the brain’s capacity to run different processes at the same time in its many parts.

This insight led to the currently hottest theory in consciousness studies, integrated information theory, which holds that consciousness is essentially the result of information being kept in whole systems rather than in parts. Adherents even quantify the degree of this integration with the Greek letter phi, which, says Mr. Musser, “represents the amount of information that is held collectively in the network rather than stored in its individual elements.” The higher the value of phi, the more conscious the system is.

By the end of “Putting Ourselves Back in the Equation,” Carlo Rovelli emerges as the physicist who is on the hottest trail. For Mr. Rovelli, there are no independent, objective facts. Truth is always a matter of relations. We understand that New York is west of London but east of San Francisco. Mr. Rovelli argues that all physical properties are like this: Nothing is anything until it is related to something else. It’s an old idea, found in ancient Greek, Chinese and Indian philosophy, and scientists are discovering it anew.

Link to the rest at The Wall Street Journal
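
For readers wondering what a quantity like phi, mentioned in the review above, might actually look like, here is a minimal toy sketch in Python. It is not drawn from Mr. Musser’s book, and it is not the real phi calculation of integrated information theory, which is far more involved; it simply uses total correlation (the entropy of the parts minus the entropy of the whole) as a crude stand-in for information that is held collectively in a network rather than stored in its individual elements. The two-node distribution is invented purely for illustration.

    import numpy as np

    def entropy(p):
        # Shannon entropy, in bits, of a probability vector.
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Toy joint distribution over two binary "nodes" A and B
    # (rows index A's state, columns index B's state).
    joint = np.array([[0.4, 0.1],
                      [0.1, 0.4]])

    p_a = joint.sum(axis=1)  # marginal distribution of node A
    p_b = joint.sum(axis=0)  # marginal distribution of node B

    # Crude proxy for information "held collectively rather than in the parts":
    # total correlation = H(A) + H(B) - H(A, B). It is zero when the nodes are
    # independent and grows as their states become more tightly coupled.
    proxy_phi = entropy(p_a) + entropy(p_b) - entropy(joint.flatten())
    print(round(proxy_phi, 3))  # about 0.278 bits here; 0.0 for independent nodes

The intuition is only that a tightly coupled system carries information in its joint state that no single part reveals on its own; genuine integrated-information measures go much further, searching over ways of partitioning the system and scoring the weakest link.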

Witness to a Prosecution

From The Wall Street Journal:

In the popular perception of the typical white-collar case, a judicious government prosecutes a mendacious executive on a mountain of incontrovertible evidence. Think Bernie Madoff or Sam Bankman-Fried. Then there’s Michael Milken, the former “junk bond king” from the infamous “decade of greed.” If there were a Mount Rushmore of white-collar crime, all three men might have a place.

Thanks to Richard Sandler, however, you can now scratch one of those names off that list. In “Witness to a Prosecution,” Mr. Sandler, a childhood friend who was Mr. Milken’s personal lawyer at the time, walks the reader through Mr. Milken’s 30-plus year legal odyssey, beginning in 1986 with the federal government’s investigation, followed by his indictment, plea bargain, and prison term, right through to his pardon by President Donald Trump in 2020. The author tells a convincing and concerning story of how the government targeted a largely innocent man and, when presented with proof of that innocence, refused to turn away from a bad case.

I have always been more than a bit skeptical about Mr. Sandler’s underlying thesis—and the thesis of many of Mr. Milken’s supporters on this page. After all, Mr. Milken served nearly two years in jail, pleaded guilty to six felonies and paid a large fortune to settle with the government.

I have also read books, chief among them James B. Stewart’s “Den of Thieves” (1991), that seem to make the case for Mr. Milken’s culpability—the methods he employed as head of Drexel Burnham Lambert’s high-yield department, the alleged epicenter of the destructive “leveraged buyout” mania of the 1980s that cratered companies and led to mass unemployment; his alliances with smarmy corporate raiders; his supposed insider trading with the notorious arbitrageur Ivan Boesky. The list goes on.

After reading Mr. Sandler’s account, I no longer believe in Mr. Milken’s guilt, and neither should you. The author argues that most of what we know about Mr. Milken’s misdeeds is grossly exaggerated, if not downright wrong. What the government was able to prove in the court of law, as opposed to the court of public opinion, were mere regulatory infractions: “aiding and abetting” a client’s failure to file an accurate stock-ownership form with the SEC, a violation of broker-dealer reporting requirements, assisting with the filing of a false tax return. There was no insider-trading charge involving Mr. Boesky or anyone else, because the feds couldn’t prove one.

The witnesses against Mr. Milken, among them Mr. Boesky, led investigators on a wild-goose chase that turned up relatively little. One key piece of evidence linking the two men: A $5.3 million payment to Drexel from Mr. Boesky for what turned out to be routine corporate finance work that the feds thought looked shady.

When you digest the reality of the case against Mr. Milken, you find that much of it was nonsense. As Mr. Sandler puts it: “The nature of prosecution and the technicality and uniqueness of the regulatory violations . . . certainly never would have been pursued had Michael not been so successful in disrupting the traditional way business was done on Wall Street.”

That gets us to why Mr. Milken was prosecuted so viciously. The lead prosecutor on the case, Rudy Giuliani, was the U.S. Attorney for the Southern District of New York. It’s hard to square the current Mr. Giuliani, fighting to keep his law license while being enmeshed in Mr. Trump’s election-denying imbroglio, with the man who was then the nation’s foremost crime fighter, taking on mobsters, corrupt politicians and those targeted as unscrupulous Wall Street financiers.

Mr. Giuliani’s ambition for political office—he would later become mayor of New York City—made Mr. Milken an enticing target, Mr. Sandler tells us. The author suggests that Mr. Giuliani made up for his weak legal case by crafting an image of the defendant as an evil bankster and feeding it through leaks to an all-too-compliant media. “Michael Milken became the subject of intensive newspaper articles, press leaks, rumors, and innuendo for years before he was charged with anything,” the author writes. “I am sure Giuliani and his team of prosecutors believed that Mike would succumb to the pressure and quickly settle and cooperate and implicate others. When this did not happen, the prosecutors became more committed to using their immense power to pressure Michael and try to win at all costs.”

Link to the rest at The Wall Street Journal

AAP’s September StatShot: US Book Market Up 0.8 Percent YTD

From Publishing Perspectives:

In its September 2023 StatShot report released this morning (December 12), the Association of American Publishers (AAP) says that total revenues across all categories were flat as compared to September 2022, at US$1.4 billion.

The American market’s year-to-date revenues, the AAP reports, were up 0.8 percent at US$9.4 billion for the first nine months of the year.

As Katy Hershberger notes today at Publishers Lunch, children’s books continued to gain in September, up 5.2 percent over the same month in 2022, with sales this year reaching $272.8 million.

Publishing Perspectives readers know that the AAP’s numbers reflect reported revenue for tracked categories including trade (consumer books); higher education course materials; and professional publishing.

. . . .

Trade Book Revenues

Year-Over-Year Numbers
Trade revenues were down 0.4 percent in September over the same month last year, at $905.9 million.

In print formats:

  • Hardback revenues were up 7.2 percent, coming in at $379 million
  • Paperbacks were down 4.9 percent, with $299.1 million in revenue
  • Mass market was down 39.5 percent to $11.3 million
  • Special bindings were up 11.8 percent, with $27.1 million in revenue

In digital formats:

  • Ebook revenues were down 1.8 percent for the month as compared to September 2022, for a total of $85.2 million
  • The closely monitored digital audio format was up 3.2 percent for the month as compared to September 2022, coming in at $69.9 million in revenue
  • Physical audio was down 24.4 percent, coming in at $1.2 million

Link to the rest at Publishing Perspectives

PG notes that the Association of American Publishers includes far more publishers than the large trade fiction and non-fiction publishers in New York City, the ones that the New York Times uses for its best-seller lists.

The AAP stats include educational publishers that provide textbooks for all of the different levels of education in the US. They also include religious publishers and publishers of books for the business, medical, legal, technical, and scientific markets.

Amsterdam’s Elsevier: Research and Real-World Impact

From Publishing Perspectives:

As we work to recoup some of the relevant material released to the news media near the end of our publication year, we look now at two significant research reports from Elsevier, one on research evaluation and the other on real-world impact—both increasingly pressing interests in the world of academic publication.

In the 30-page report “Back to Earth: Landing Real-World Impact in Research Evaluation,” Elsevier surveyed 400 academic leaders, funders, and researchers in seven countries about real-world impact as part of academic evaluation. Key findings include:

  • Sixty-six percent of respondents say academia has a moral responsibility to incorporate real-world impact into standard research evaluation​
  • Seventy percent say they are passionate about research that has a positive real-world impact
  • Fifty-three percent say a more holistic approach to evaluation would improve research cost-effectiveness
  • Fifty-one percent of respondents identified at least one serious problem with current methods of research evaluation
  • In terms of barriers to change, 56 percent of those surveyed cited a “lack of common frameworks or methodologies,” while 48 percent cited a “lack of consensus on what constitutes impact”

In this report, it’s interesting to note some of the culture-to-culture differences on the question of how important it is for research “to aim for real-world impact.” Particularly during the coronavirus COVID-19 pandemic, there could hardly have been a time when the need of the world at large for the most sophisticated, committed, and efficient research was so obvious.

Nevertheless, the report’s graphic indicates that respondents came in on the affirmative side (yes, research should aim for real-world impact) at rates ranging from a high of 93 percent in the United Kingdom to a low of 64 percent in the Netherlands, the Elsevier report’s home.

Another very interesting point in this report compares the view of funders and those of researchers.

While funders surveyed seem to agree with researchers that more holistic approaches are important, the funders did say that they were more in agreement with the researchers that the current system creates vested interests.

And it’s the researchers who said they were more passionate than the funders about having “real-world impact as researchers and academic leaders.”

Topping the list of barriers funders cited to a more holistic form of research assessment was a lack of resources, at 53 percent, tied at the same 53 percent with a lack of consensus on what actually constitutes impact.

Also cited heavily was the lack of a common framework or methodology for assessing research’s impact holistically, at 49 percent. Another tie came next, with two more barriers, “achieving sufficient alignment between different actors” and “complexity,” each named by 38 percent of respondents.

Link to the rest at Publishing Perspectives

The Quest for Cather

From The American Scholar:

Willa Cather loathed biographers, professors, and autograph fiends. After her war novel, One of Ours, won the Pulitzer in 1923, she decided to cull the herd. “This is not a case for the Federal Bureau of Investigation,” she told one researcher. Burn my letters and manuscripts, she begged her friends. Hollywood filmed a loose adaptation of A Lost Lady, starring Barbara Stanwyck, in 1934, and Cather soon forbade any further screen, radio, and television versions of her work. No direct quotations from surviving correspondence, she ordered libraries, and for decades a family trust enforced her commands.

Archival scholars managed to undermine what her major biographer James Woodress called “the traps, pitfalls and barricades she placed in the biographer’s path,” even as literary critics reveled in trench warfare over Cather’s sexuality. In 2018, her letters finally entered the public domain, allowing Benjamin Taylor to create the first post-ban life of Cather for general readers.

Chasing Bright Medusas is timed for the 150th anniversary of Cather’s birth in Virginia. The title alludes to her 1920 story collection on art’s perils, Youth and the Bright Medusa. (“It is strange to come at last to write with calm enjoyment,” she told a college friend. “But Lord—what a lot of life one uses up chasing ‘bright Medusas,’ doesn’t one?”) Soon she urged modern writers to toss the furniture of naturalism out the window, making room for atmosphere and emotion. What she wanted was the unfurnished novel, or “novel démeublé,” as she called it in a 1922 essay. Paraphrasing Dumas, she posited that “to make a drama, all you need is one passion, and four walls.”

Chasing Bright Medusas is an appreciation, a fan’s notes, a life démeublé. Taylor’s love of Cather’s sublime prose is evident and endearing, but in his telling of her life, context is sometimes defenestrated, too. When Taylor sets an idealistic Cather against cynical younger male rivals, we learn of Ernest Hemingway’s mockery but not of William Faulkner’s declaration that the greatest American novelists were Herman Melville, Theodore Dreiser, Hemingway, and Cather. Taylor rightly notes that Cather’s First World War novel differs from Hemingway’s merely in tone, not understanding; A Farewell to Arms and One of Ours, he writes, “ask to be read side by side.”

. . . .

Not much Dark Cather surfaces in Bright Medusas—a pity, for she was a genius of horror. My Ántonia brims with bizarre ways to die (thrown to ravening wolves, suicide by threshing machine); carp devour a little girl in Shadows on the Rock; and One of Ours rivals Cormac McCarthy for mutilation and gore. Cather repeatedly changed her name and lied about her birthdate, as a professional time traveler must, on the page and in life. She saw her mother weep for a Confederate brother lost at Manassas, rode the Plains six years after Little Bighorn, was the first woman to receive an honorary degree from severely masculinist Princeton, and though in failing health, spent World War II writing back to GIs who read her work in Armed Services Editions paperbacks, especially the ones who picked up Death Comes for the Archbishop, assuming it was a murder mystery. She died the same spring that Elton John and Kareem Abdul-Jabbar and David Letterman were born. She is one of ours.

Link to the rest at The American Scholar

How to Interpret the Constitution

From The Wall Street Journal:

It is a testament to our nation’s commitment to the rule of law that, nearly 250 years after its ratification, Americans still argue about the Constitution. And as often as we argue about the outcomes of controversial hot-button constitutional cases, we argue about the methodologies that lead judges to make their rulings.

Today originalism—the idea that constitutional meaning should be considered as being fixed at the time of enactment—is the dominant judicial philosophy, thanks in part to decades of persuasive arguments put forward by conservative and libertarian lawyers and scholars. But there are different flavors of originalism, corresponding to different understandings of “original meaning”—the framers’ intent, a provision’s “public meaning,” or its expected application, to name a few—and various liberal lawyers and politicians propose their own interpretative methods, originalist or otherwise.

Cass Sunstein, a Harvard Law School professor, has written “How to Interpret the Constitution,” a clear, accessible survey that discusses originalist interpretations alongside their competitors. Among those nonoriginalist approaches are John Hart Ely’s argument for democracy-enforcing judicial review, Ronald Dworkin’s moral reading of the Constitution and James Bradley Thayer’s advocacy of extreme judicial restraint. Those are all varieties of what has been called “living constitutionalism”; they all allow that the Constitution may evolve in meaning without being amended.

Mr. Sunstein argues repeatedly that the Constitution “does not contain the instructions for its own interpretation.” To some degree, this is true: Though statutes often include definitions in their wording, the Constitution, for the most part, does not. For example, the first words of the First Amendment read: “Congress shall make no law respecting an establishment of religion.” None of “respecting,” “establishment,” or “religion” is set out with a clear definition, and the Supreme Court has entertained many possible meanings for each over the course of American history.

There is also no explicit constitutional command, in the text of the Constitution or in the works of the Founders, that those tasked with interpreting it must follow any particular method, either originalist or living-constitutionalist. “The idea of interpretation is capacious,” Mr. Sunstein writes. He therefore proposes his own first principle for choosing among methods: “Any particular approach to the Constitution must be defended on the ground that it makes the relevant constitutional order better rather than worse.”

Originalists propose that we resolve constitutional ambiguities by unearthing the law’s true and unchanged meaning. Mr. Sunstein, by contrast, proposes that judges and other constitutional interpreters rely on their “firm intuitions” to determine which constitutional rules are desirable and then see what theories might yield those rules. To do so, the author borrows from John Rawls, the giant of 20th-century liberal political theory, to endorse a methodology of “reflective equilibrium,” in which “our moral and political judgments line up with one another, do not contradict each other, and support one another.”

“In deciding how to interpret the Constitution,” Mr. Sunstein writes, “we have to think about what we are most firmly committed to, and about what we are least likely to be willing to give up.” He reveals how he would apply this methodology in his own case by listing his 10 “fixed points,” or constitutional outcomes that resonate with his own sense of rightness and justice. They are “clearly correct” propositions in the author’s view and include the contentions that “the Constitution does not forbid maximum hour and minimum wage laws” and that “the First Amendment should be understood to give very broad protection to political speech.” Of course, you might believe exactly the opposite. That, to Mr. Sunstein, is equally legitimate. One begins to wonder at this point how much “interpretation” exactly is going on.

Consider Mr. Sunstein’s claim that judges and justices should interpret laws in a manner that improves the constitutional order. Why shouldn’t we just allow legislators, who figure nowhere in Mr. Sunstein’s philosophy, to make legitimate changes to legislation when needed? We have mechanisms for improving our nation’s laws, and we have one for improving our Constitution. The Republicans who revamped our constitutional system in the aftermath of the Civil War by devising the Reconstruction Amendments—banning slavery, guaranteeing the equal protection of the law and enforcing individual rights against the states—understood that they couldn’t simply project their moral and political views onto the unamended law. They had to change the Constitution.

Like most nonoriginalists, Mr. Sunstein evades the key insight that gives originalism its appeal. It begins with a phrase from the Constitution that refutes Mr. Sunstein’s premise that the document doesn’t contain instructions for its own interpretation. “This Constitution,” it proclaims, “shall be the supreme law of the land.” The Constitution is a legal document, even if its provisions are sometimes more ambiguous at first glance than we would want a law to be. And laws have the crucial characteristic sometimes known as “fixation”: They remain unchanged until changed by authorized means. Constitutional interpretation must be constrained by this basic principle of legal reasoning.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Bookworms and Fieldworkers

From The Nation:

In the years leading up to the outbreak of the 1905 revolution in Russia, Eduard Bernstein—the spirited German advocate of socialist revisionism—warned his Marxist colleagues about the dangers of an “almost mythical faith in the nameless masses.” More skeptic than firebrand, Bernstein worried that Karl Kautsky and other leaders of the international socialist movement placed too much confidence in the spontaneous emergence of an organized and disciplined working class: “The mob, the assembled crowd, the ‘people on the street’…is a power that can be everything—revolutionary and reactionary, heroic and cowardly, human and bestial.” Just as the French Revolution had descended into terror, the masses could once again combust into a violent flame. “We should pay them heed,” Bernstein warned, “but if we are supposed to idolize them, we must just as well become fire worshippers.”

Among the votaries of European socialism, Bernstein has seldom enjoyed much acclaim, not least because he symbolized the spirit of pragmatism and parliamentary reform that ended up on the losing side of the debates that roiled the socialist movement in the decades preceding the Bolsheviks’ victory in 1917. For historians who are less partisan, however, the time may well seem ripe for a new appraisal—a revision of revisionism—that casts Bernstein and his reformist wing in a more favorable light.

This is the ambition of Christina Morina in The Invention of Marxism, recently translated into English by Elizabeth Janik. A study of Bernstein, Kautsky, Lenin, Jean Jaurès, Rosa Luxemburg, and other early Marxist luminaries, the book bears a rather breathless subtitle—“How an Idea Changed Everything”—that is far too ambitious for any author, but it is nonetheless a searching account of Marxism’s early days. Although it offers no certain answers as to what the “idea” of Marxism really consists in, it does provide a welter of personal and biographical detail that enriches our sense of Marxism’s varied history and the lives of its party leaders.

How should we write the history of Marxism? Over the past century, when political opinion has been sharply divided on the meaning and legacy of the socialist tradition, historians have felt compelled to choose one of two modes of narrative: either triumphant or tragic. Both of these approaches are freighted by ideology, yet neither has permitted a truly honest reckoning with the political realities of the Marxist past.

Morina, a scholar whose training reflects the methods of social and political history associated with the University of Bielefeld in Germany, where she now works as a professor, has set out to write a history that avoids strong ideological verdicts and places a greater emphasis on the sociology of intellectuals and the details of the Marxists’ personal lives, a method that also draws inspiration from the new trend in the history of emotions pioneered by scholars such as Ute Frevert. No doubt the book also reflects her own experiences as a child in East Germany, where she witnessed the “absurdities and inhumanity” of an authoritarian state that was arguably socialist in name only.

The fruit of her efforts is a group biography that explores the fate of nine “protagonists” from the first generation of the European socialist movement following the death of Karl Marx in 1883. Morina weaves together their personal and party histories with unusual skill, though without quite telling us “how an idea changed everything.” Perhaps the key difficulty is the method of prosopography itself, which fractures the book into individual life stories and leaves little room for a continuous political narrative. Those who are not already familiar with the broader history of European socialism will find it difficult to understand how the various national parties (in France, Germany, Austria, and Russia) all participated in a common struggle. But there is a case for her approach nonetheless, as it leads to some unique insights. By examining how personality and emotion shape one’s political commitments, Morina paints a portrait of Marxism less as a specific theory than as a shared language and a set of informal dispositions that spawned a variety of competing interpretations. Her nine protagonists were not, she explains, gifted with a sudden revelation of the truth. Each underwent a slow and emotional process through which the ideas of Marx became a common framework for explaining and evaluating political events.

While we now take this framework for granted as Marxist doctrine, Morina notes that the creation of Marxism was itself “a vast political project” that developed only gradually. The term gained “ideological meaning and political heft” only in the 1870s and 1880s, as works by Marx and Engels spread across the world in various editions and translations. For Morina, this means that the task of the social historian is to understand how those works were received, often on a case-by-case basis. The result is a book that tells us a great deal about these early Marxists as individuals, though much less about Marxism as a comprehensive theory or idea.

. . . .

If Marxism is an idea, it’s only because of the intellectuals who carried it forward and helped ensure its longevity—and many (though, of course, not all) of these intellectuals were by origin and education members of the bourgeoisie, not members of the working class lionized in Marxist theory.

Morina is acutely aware of this irony, and it informs all of her judgments in the book, some of them subtle, others overt. Running through The Invention of Marxism is a powerful current of unease about the “abstraction” of theory and the great distance that separated some of Marxism’s most esteemed theorists from the world they wished to understand. Although they were passionate in their principled commitment to the working classes, they often knew little about the workers’ actual lives, and at times they responded with revulsion—or at least discomfort—when exposed to the real suffering of the proletariat for whom they claimed to speak.

Morina takes special care to note that many of the party theorists in her tale enjoyed the rare privilege of a university education at a time when less than 1 percent of secondary school students in Western Europe went on to study at university. Karl Kautsky, a leading member of the German Social Democratic Party, was born into a home of writers and artists, and his parents were highly committed to his schooling. Victor Adler, a leader of the Social Democratic Workers’ Party of Austria, was a practicing physician as well as a publisher—he founded Gleichheit (Equality), the first socialist party newspaper in the Hapsburg Empire. Rosa Luxemburg studied at the University of Zurich and was by all reports an exceptionally precocious child whose parents grew prosperous thanks to her father’s success as a timber merchant; her theoretical acumen and political passion elevated her to prominent seats, first in the German Social Democratic Party and later in the Independent Social Democrats, the Spartacus League, and the Communist Party. Jean Jaurès, born in the South of France, rose to the top of his class and attended the École Normale Supérieure, where his classmates included Émile Durkheim and Henri Bergson, before he emerged as the most influential leader in the French Socialist Party.

The other protagonists in Morina’s tale enjoyed equal or even greater advantages. Vladimir Ulyanov (later Lenin) was born into a prosperous Russian family that owned estates; his father, a liberal teacher elevated to the post of school inspector, was eventually granted a title of nobility, while his mother came from a family of landowners with German, Swedish, and Russian origins and spoke several languages. Georgi Plekhanov, the “father of Russian Marxism,” had parents who owned serfs and belonged to the Tatar nobility; following the Emancipation Edict of 1861, Plekhanov’s family fell into financial decline, but thanks in part to his mother, he enjoyed a very strong education. Only two figures in Morina’s book were not the beneficiaries of wealth and education: Jules Guesde (born Bazile), later a major figure in French Marxism and socialism and an opponent of Jaurès; and Eduard Bernstein, whose father was a plumber and who never attended university and worked as a bank employee to support his activities in the German Social Democratic Party.

These protagonists, most of them members of the middle class, belonged to what Morina calls a “voluntary elite.” Her group study, though often engaging, remains poised in an uncertain space between intellectual history and party chronicle, without ever truly resolving itself into a satisfactory version of either. Needless to say, this ambivalence may be baked into the topic itself, since Marxism is perhaps distinctive in its contempt for mere theorizing and its constant refrain that we must bridge the gap between theory and practice. After all, has there ever been a Marxist who did not insist that their ideas were correlated with material events? Morina, though hardly a Marxist in her methods, suggests that her study exemplifies the genre of Erfahrungsgeschichte, or the “history of lived experience.” Experience, however, is itself a concept of some controversy, since it hints at some bedrock of individual reality beyond interpretation and deeper than mere ideas. And this would seem to be Morina’s point: By turning our attention to the biographical and emotional history of the European socialist tradition, she hopes to remind us that Marxist intellectuals were not bloodless theoreticians but human beings caught up in the same world of passions and interests they wished to explain.

Link to the rest at The Nation

PG suggests that the only definitive accomplishment of more than 100 years of Marxism/Communism is to demonstrate that Communism is an excellent way for a dictator to gain and hold power.

The poor become poorer and the productive middle class is decimated in part because talents and accomplishments in any field other than politics are only rarely a path to any sort of security. It’s a soul-crushing environment for anyone who isn’t a thug.

The Biggest Questions: What is death?

From MIT Technology Review:

Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone.

But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back.

Scientists and many doctors have already embraced this more nuanced understanding of death. As society catches up, the implications for the living could be profound. “There is potential for many people to be revived again,” says Sam Parnia, director of critical care and resuscitation research at NYU Langone Health.

Neuroscientists, for example, are learning that the brain can survive surprising levels of oxygen deprivation. This means the window of time that doctors have to reverse the death process could someday be extended. Other organs likewise seem to be recoverable for much longer than is reflected in current medical practice, opening up possibilities for expanding the availability of organ donations.

To do so, though, we need to reconsider how we conceive of and approach life and death. Rather than thinking of death as an event from which one cannot recover, Parnia says, we should instead view it as a transient process of oxygen deprivation that has the potential to become irreversible if enough time passes or medical interventions fail. If we adopt this mindset about death, Parnia says, “then suddenly, everyone will say, ‘Let’s treat it.’”   

Moving goalposts 

Legal and biological definitions of death typically refer to the “irreversible cessation” of life-sustaining processes supported by the heart, lungs, and brain. The heart is the most common point of failure, and for the vast majority of human history, when it stopped there was generally no coming back. 

That changed around 1960, with the invention of CPR. Until then, resuming a stalled heartbeat had largely been considered the stuff of miracles; now, it was within the grasp of modern medicine. CPR forced the first major rethink of death as a concept. “Cardiac arrest” entered the lexicon, creating a clear semantic separation between the temporary loss of heart function and the permanent cessation of life. 

Around the same time, the advent of positive-pressure mechanical ventilators, which work by delivering breaths of air to the lungs, began allowing people who incurred catastrophic brain injury—for example, from a shot to the head, a massive stroke, or a car accident—to continue breathing. In autopsies after these patients died, however, researchers discovered that in some cases their brains had been so severely damaged that the tissue had begun to liquefy. In such cases, ventilators had essentially created “a beating-heart cadaver,” says Christof Koch, a neuroscientist at the Allen Institute in Seattle.

These observations led to the concept of brain death and ushered in medical, ethical, and legal debate about the ability to declare such patients dead before their heart stops beating. Many countries did eventually adopt some form of this new definition. Whether we talk about brain death or biological death, though, the scientific intricacies behind these processes are far from established. “The more we characterize the dying brain, the more we have questions,” says Charlotte Martial, a neuroscientist at the University of Liège in Belgium. “It’s a very, very complex phenomenon.” 

Brains on the brink

Traditionally, doctors have thought that the brain begins incurring damage minutes after it’s deprived of oxygen. While that’s the conventional wisdom, says Jimo Borjigin, a neuroscientist at the University of Michigan, “you have to wonder, why would our brain be built in such a fragile manner?” 

Recent research suggests that perhaps it actually isn’t. In 2019, scientists reported in Nature that they were able to restore a suite of functions in the brains of 32 pigs that had been decapitated in a slaughterhouse four hours earlier. The researchers restarted circulation and cellular activity in the brains using an oxygen-rich artificial blood infused with a cocktail of protective pharmaceuticals. They also included drugs that stopped neurons from firing, preventing any chance that the pig brains would regain consciousness. They kept the brains alive for up to 36 hours before ending the experiment. “Our work shows there’s probably a lot more damage from lack of oxygen that’s reversible than people thought before,” says coauthor Stephen Latham, a bioethicist at Yale University. 

In 2022, Latham and colleagues published a second paper in Nature announcing that they’d been able to recover many functions in multiple organs, including the brain and heart, in whole-body pigs that had been killed an hour earlier. They continued the experiment for six hours and confirmed that the anesthetized, previously dead animals had regained circulation and that numerous key cellular functions were active. 

“What these studies have shown is that the line between life and death isn’t as clear as we once thought,” says Nenad Sestan, a neuroscientist at the Yale School of Medicine and senior author of both pig studies. Death “takes longer than we thought, and at least some of the processes can be stopped and reversed.” 

A handful of studies in humans have also suggested that the brain is better than we thought at handling a lack of oxygen after the heart stops beating. “When the brain is deprived of life-sustaining oxygen, in some cases there seems to be this paradoxical electrical surge,” Koch says. “For reasons we don’t understand, it’s hyperactive for at least a few minutes.” 

In a study published in September in Resuscitation, Parnia and his colleagues collected brain oxygen and electrical activity data from 85 patients who experienced cardiac arrest while they were in the hospital. Most of the patients’ brain activity initially flatlined on EEG monitors, but for around 40% of them, near-normal electrical activity intermittently reemerged in their brains up to 60 minutes into CPR. 

Similarly, in a study published in Proceedings of the National Academy of Sciences in May, Borjigin and her colleagues reported surges of activity in the brains of two comatose patients after their ventilators had been removed. The EEG signatures occurred just before the patients died and had all the hallmarks of consciousness, Borjigin says. While many questions remain, such findings raise tantalizing questions about the death process and the mechanisms of consciousness.

Life after death

The more scientists can learn about the mechanisms behind the dying process, the greater the chances of developing “more systematic rescue efforts,” Borjigin says. In best-case scenarios, she adds, this line of study could have “the potential to rewrite medical practices and save a lot of people.” 

Everyone, of course, does eventually have to die and will someday be beyond saving. But a more exact understanding of the dying process could enable doctors to save some previously healthy people who meet an unexpected early end and whose bodies are still relatively intact. Examples could include people who suffer heart attacks, succumb to a deadly loss of blood, or choke or drown. The fact that many of these people die and stay dead simply reflects “a lack of proper resource allocation, medical knowledge, or sufficient advancement to bring them back,” Parnia says.  

Borjigin’s hope is to eventually understand the dying process “second by second.” Such discoveries could not only contribute to medical advancements, she says, but also “revise and revolutionize our understanding of brain function.”

Link to the rest at MIT Technology Review

Writing Stories to Seek Answers to Life’s Thorny Questions

From Writers Digest:

Write what you know. As a novelist, I’d argue that adage is bad advice. None of the 30-plus romance and romantic suspense novels I’ve written over the last 15 years would’ve been possible if I’d abided by it. I’ve never been Amish. I’ve never been stalked by a serial killer. I have, however, been diagnosed with stage 4 ovarian cancer. For the past eight years I’ve been living with a terminal disease. It’s a club to which no one wants to belong, and the members can’t leave (only expire). It’s been a long season of fear, anxiety, anger, despair, and loss of faith, but also memories made, joy, peace, silliness, laughter, faith increased, and even hope.

The statistics for late-stage ovarian cancer are grim. Less than 20 percent of women diagnosed at stage 4 live past five years. It’s the most deadly gynecological cancer. Yet here I am. I’ve survived my expiration date. Why? That’s the big question that haunts me. Why me and not Anna Dewdney, who wrote the children’s picture book series Llama Llama my kids loved growing up? Why me and not the woman sitting in the next pew at church?

I’ve read that writing stories is one way we seek answers to questions that otherwise seem impossible to resolve. I decided to write a novel exploring the journey of two sisters—one an oncologist and the other a kindergarten teacher diagnosed with ovarian cancer. Let them figure out whether there’s some hidden meaning I’m supposed to find in this grueling marathon. The Year of Goodbyes and Hellos was the easiest and hardest novel I’ve ever written. The first draft of the 110,000-word tome took about eight months. The words poured out. I couldn’t write fast enough.

It helped that I’d already done the technical research in the form of seven years of CT scans, PET scans, lab work, surgery, oncology appointments, literally hundreds of hours spent in cancer clinics and infusion rooms, and hundreds of hours spent poring over the latest research into new treatments and clinical trials. I speak the language. I have a storehouse of memories of healthcare workers, including doctors, nurses, medical assistants, technicians, and admin staff, ranging from eccentric and wonderful to downright mean. My Russian oncologist with her garden and her crazy dog stories is my favorite.

I used this on-the-job training to get the technical details right. That turned out to be the easy part. Taking out the memories and holding them up to the light proved to be much harder. Starting in January 2016, when I came back to the office after Christmas break to find a voice mail message from my oncologist saying a “benchmark” chest CT scan had revealed masses in the lining of my lungs. CT scans can be wrong, I thought, WebMD says so.

Then came the PET scan. And the waiting—which turned out to be standard throughout this journey. Waiting and more waiting. I paced outside the clinic for two hours, waiting to see the oncologist. Finally, in that freezing exam room, she uttered the words “ovarian cancer.”

The first of many memories. Surgery to remove my female body parts. Losing my hair—twice. Chemo, remission, progression, chemo, remission, progression. Round and round went the merry-go-round. And now I’m in a Phase I clinical trial with all the side effects and all the uncertainty.

But I also sifted through the memories for the gold nuggets. The birth of a grandchild. A 35th wedding anniversary spent in Costa Rica. Christmases. Birthdays. Spring days. Hummingbirds. Peanut butter toast. Writing stories.

I discovered life is still good. That I can have cancer and still hang on to my faith. That God is still good and I’m still here.

Why? To write this book? Maybe. It’s a universal story to which readers will relate. We all have loved ones we’re afraid of losing—regardless of the cause. Most of us will reach a point where we have to concede that time is, in fact, finite. That we are not immortal. Perhaps reading this story will provoke thought and make the conversations that follow easier in some small way.

Link to the rest at Writers Digest

If Scientists Were Angels

From The New Atlantis:

Francis Bacon is known, above all, for conceiving of a great and terrible human project: the conquest of nature for “the relief of man’s estate.” This project, still ongoing, has its champions. “If the point of philosophy is to change the world,” Peter Thiel posits, “Sir Francis Bacon may be the most successful philosopher ever.” But critics abound. Bacon stands accused of alienating human beings from nature, abandoning the wisdom of the ancients, degrading a philosophy dedicated to the contemplation of truth, and replacing it with something cruder, a science of power.

In The Abolition of Man, C. S. Lewis goes so far as to compare Bacon to Christopher Marlowe’s Faustus:

You will read in some critics that Faustus has a thirst for knowledge. In reality, he hardly mentions it. It is not truth he wants … but gold and guns and girls. “All things that move between the quiet poles shall be at his command” and “a sound magician is a mighty god.” In the same spirit Bacon condemns those who value knowledge as an end in itself: this, for him, is to use as a mistress for pleasure what ought to be a spouse for fruit. The true object is to extend Man’s power to the performance of all things possible.

Lewis draws the final phrase of this critique from Bacon’s New Atlantis, the 1627 utopian novella from which this journal takes its name. But why would a publication like The New Atlantis, dedicated to the persistent questioning of science and technology, name itself after a philosopher’s utopian dreams about magicians on the verge of becoming mighty gods?

According to the journal’s self-description on page 2 of every print issue, this is not the whole story. Bacon’s book raises questions about the moral and political difficulties that accompany the technological powerhouse it depicts, even if it “offers no obvious answers.”

Perhaps it seduces more than it warns. But the tale also hints at some of the dilemmas that arise with the ability to remake and reconfigure the natural world: governing science, so that it might flourish freely without destroying or dehumanizing us, and understanding the effect of technology on human life, human aspiration, and the human good. To a great extent, we live in the world Bacon imagined, and now we must find a way to live well with both its burdens and its blessings. This very challenge, which now confronts our own society most forcefully, is the focus of this journal.

The fact is, people have been puzzling over Bacon’s uncanny utopia for four hundred years without being able to pin it down. The reason for this is simple: We’ve been reading it wrong. Bacon’s New Atlantis is not an image of things hoped for or of things to come. It is an instructive fable about what happens when human beings stumble across the boundary between things human and things divine, a story about fear, intimidation, and desire.

Human beings have always lusted after knowledge, specifically that knowledge which promises to open our eyes so that we might become like gods. Bacon did not invent or ignite this desire, but he did understand it better than most.

In form, Francis Bacon’s New Atlantis is modeled loosely on Thomas More’s Utopia. A ship full of European sailors lands on a previously unknown island in the Americas where they find a civilized society in many ways superior to their own. The narrator describes the customs and institutions of this society, which in Bacon is called “Bensalem,” Hebrew for “son of peace.” Sometimes Bacon echoes, sometimes improves upon, More’s earlier work. But at the end of the story, Bacon turns to focus solely on the most original feature of the island, an institution called Solomon’s House, or the College of the Six Days Works.

This secretive society of natural philosophers seeks nothing less than “the effecting of all things possible,” as C. S. Lewis duly notes. Bacon devotes a quarter of the total text of New Atlantis to an unadorned account of the powers and insights the philosophers in Solomon’s House have. Then the work ends abruptly with no account of the sailors’ trip home or the results of their discovery. The story ends mid-paragraph, with a final line tacked on at the end: “The rest was not perfected.”

What is the meaning of this tale? The first and simplest answer was given by William Rawley, Bacon’s chaplain, who was responsible for publishing New Atlantis after Bacon’s death. He wrote in his preface to the work: “This fable my Lord devised, to the end that he might exhibit therein a model or description of a college instituted for the interpreting of nature and the producing of great and marvellous works for the benefit of men….” The founders of the Royal Society, Great Britain’s famous scientific academy, seem to have had a similar idea a few decades later: Bacon “had the true Imagination of the whole Extent of this Enterprise, as it is now set on foot.”

Link to the rest at The New Atlantis

American Visions

From The Wall Street Journal:

“Brace your nerves and steel your face and be nothing daunted,” an Irish immigrant named John Stott wrote on the back of the trans-Atlantic tickets he was sending to folks back home in 1848, hoping that they, too, would come to America, “this Great Continent.” He added: “There will be dificultyes to meet with but then consider the object you have in view.” The last sentence could serve as a motto for Edward L. Ayers’s “American Visions,” a sweeping, briskly narrated history of the United States as it limped its circuitous way to the Civil War.

There’s much in Mr. Ayers’s book about those “dificultyes.” During the first half of the 19th century, the country tripled in size. In 1848, at the end of a costly war that brought half of Mexico into the national fold, President James Polk exulted: “The United States are now estimated to be nearly as large as the whole of Europe.” As trans-Atlantic voyages shortened from multiple weeks to a handful of days, space in the urban centers came at a premium. In New York, notes Mr. Ayers, boarding houses were soon packing five to a room. Many of these new Americans were Irish (more than 700,000 arrived between 1846 and 1850 alone)—too many, in the eyes of native-born citizens. “They come not only to work & eat, or die,” worried the Philadelphia lawyer Sidney George Fisher, “but to vote.”

More people didn’t mean more freedoms. Mr. Ayers is clear-eyed about the violence that troubled the early republic. He reminds us not only of the thousands of Seminole lives lost when they were forcibly removed from Florida but also of the price paid by the soldiers who were sent into malaria-ridden swamps on President Andrew Jackson’s orders (the whole operation would end up costing, Mr. Ayers writes, a trillion dollars in today’s currency). Switching between local events and national developments, “American Visions” captures the growing rift in the fragile national fabric: As Southern racial attitudes became entrenched, Northerners settled into unearned smugness about their own moral superiority. Sometimes, confronted with the stories retold in this book, one wonders how people carried on at all. The abolitionist Elijah Lovejoy, standing before the charred body of a lynched man, felt that part of his life had ended at that point, too: “As we turned away, in bitterness of heart we prayed that we might not live.”

Undeterred, Mr. Ayers describes himself, in his introduction, as an “optimistic person.” And his “American Visions,” despite the jaundiced eye it casts over much of the republic’s early history, is an inspiring book, promoting a sturdy sense of patriotism—one that, aware of the nation’s failings, remembers its “highest ideals of equality and mutual respect.”

In “American Visions,” by contrast, Mr. Ayers focuses less on the observers than on those who took matters into their own hands. In 1827, Jarena Lee, a young widow and member of Philadelphia’s African Methodist Episcopal Church, traveled across 2,325 miles to deliver 178 sermons in churches, barns and meeting halls. “Why should it be thought impossible, heterodox, or improper, for a woman to preach?” she asked defiantly. The similarly restless John Chapman (1774-1845), a Swedenborgian and the model for “Johnny Appleseed,” roamed the country, barefoot and dressed in rags, giving the gift of apple trees (and perhaps also of hard cider) to struggling farmers, reminding them of God’s presence in nature. Rather than await divine validation, the Massachusetts reformer Dorothea Dix (1802-87) successfully lobbied for better facilities to house those considered insane, becoming, in her words, “the Revelation of hundreds of wailing, suffering creatures.”

But “American Visions” has moments of hilarity, too, particularly when a characteristically American inclination to bring things down to earth—Mark Twain would become its consummate chronicler—manifests itself in the face of lofty rhetoric. In 1843, the multitalented Samuel F.B. Morse persuaded Congress to fund, to the tune of $30,000 (about $1.2 million in today’s money), a telegraphic test line between Washington, D.C., and Baltimore: 38 miles of wire strung along the Baltimore & Ohio Railroad. On May 24, 1844, Morse sat down in the chambers of the Supreme Court to transmit his first long-distance message, a sonorous invocation suggested by a friend: “What hath God wrought!” Drawn from Numbers 23:23, the phrase seemed appropriate, the weight of biblical wisdom translated into Morse’s dots and dashes, a prediction of future national greatness. The response that came in from Baltimore was underwhelming: “Yes.” Indeed, as Mr. Ayers points out, it would take years of innovation before the telegraph supplied the “sensorium of communicated intelligence” the newspapers had envisioned.

“American Visions” beautifully shows how remarkably resilient dreams of a better republic remained even in the darkest of times. When he heard talk of America’s “manifest destiny,” the elderly Albert Gallatin, formerly Thomas Jefferson’s Treasury secretary, didn’t mince words: “Allegations of superiority of race and destiny are but pretences under which to disguise ambition, cupidity, or silly vanity.” Resistance against exclusive definitions of community was, Mr. Ayers contends, in the American grain. When the Mormon prophet Joseph Smith founded Nauvoo, his holy city in Illinois, he insisted that the new settlement was open to “persons of all languages, and of every tongue, and of every color.”

The big-tent approach is also Mr. Ayers’s method. It is refreshing to encounter a book that gives equal billing to Stephen Foster’s “Oh! Susanna” and Nathaniel Hawthorne’s “The Scarlet Letter.” Yet Mr. Ayers’s inclusiveness comes with value judgments, too. Most will agree that Edgar Allan Poe was a genius, but was he “the most brilliant writer in the United States,” surpassing Herman Melville or Walt Whitman? And was Henry Wadsworth Longfellow’s 1855 epic, “The Song of Hiawatha,” really little more than a flowery ersatz Native American fantasy “larded . . . with footnotes”? A closer look reveals the commendable effort Longfellow made (in endnotes, for what it’s worth) to document the Ojibwe words that laced his lines.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Social media overload, exhaustion, and use discontinuance

From ScienceDirect:

Abstract

While users’ discontinuance of use has posed a challenge for social media in recent years, there is a paucity of knowledge on the relationships between different dimensions of overload and how overload adversely affects users’ social media discontinuance behaviors. To address this knowledge gap, this study employed the stressor–strain–outcome (SSO) framework to explain social media discontinuance behaviors from an overload perspective. It also conceptualized social media overload as a multidimensional construct consisting of system feature overload, information overload, and social overload. The proposed research model was empirically validated via 412 valid questionnaire responses collected from Facebook users. Our results indicated that the three types of overload are interconnected through system feature overload. System feature overload, information overload, and social overload engender user exhaustion, which in turn leads to users’ discontinued usage of social media. This study extends current technostress research by demonstrating the value of the SSO perspective in explaining users’ social media discontinuance.

Introduction

Facebook is probably one of the most successful information systems (IS) applications offered to users. Although its users’ passion remains strong and the number of its global monthly active users keeps growing, Facebook also faces the challenge of user discontinuance in a competitive environment. According to a 2018 report from the Pew Research Center, 26% of Facebook users had deleted the Facebook app from their mobile phones in 2017.

IS user behaviors, such as IS use and continuance, have been extensively studied in the IS field in recent years. This line of research helps explain why individuals use particular systems. Recently, IS discontinuance has attracted the attention of scholars, leading to research examining why individuals might stop using specific systems. A few scholars have examined the drivers of discontinued IS usage intentions for social media users from various perspectives. For instance, social cognitive theory has been applied to explain discontinuous usage intentions on Facebook by testing the relationships among self-observation, judgmental processes and behaviors (Turel, 2015). Social support theory has also been used to examine social media discontinuance by investigating the negative direct impact of social overload on social media discontinuance intentions (Maier et al., 2015a). Discontinuance intention refers to a user’s intention to change behavioral patterns by reducing usage intensity or even taking the radical step of suspending use altogether (Maier et al., 2015a). Notably, whilst existing studies focus on discontinuance intentions as the outcome, discontinuous usage behaviors — the next stage after intentions in the IS lifecycle — have received little attention to date (Furneaux & Wade, 2011; Maier et al., 2015b). Specifically, there is limited knowledge concerning the psychological mechanisms underlying social media discontinuance behaviors. Uncovering these mechanisms is important because IS developers are keen to understand why users abandon their systems.

One potential reason for IS use discontinuance is exhaustion from overload (users’ weariness from the demands of their IS usage), which can manifest in different forms. First, to meet users’ needs or profitability goals, social media platforms constantly add or update features. Individual users can find it hard to adapt to new functions or interfaces, and thus they perceive system feature overload (Zhang et al., 2016). Second, individual users spend considerable time processing information on social media, much of it irrelevant: gossip, spam, rumors and forced content. This in turn can increase users’ information overload (Zhang et al., 2016). Third, the number of a user’s social media friends grows with the popularity of social media. Users have to interact with their contacts to show that they care about them, which can involve reading their posts, answering their questions or helping with their problems. Giving too much of this social support can lead to social overload (Maier et al., 2015a). Maier et al. (2015a) found that some individuals experience social overload in their social media use, and they argued that social overload is one explanation for social media discontinuance.

However, studies on the different dimensions of overload remain scarce. Little is known about how the different types of overload (system feature overload, information overload, and social overload) act as stressors that lead to users’ social media discontinuance behaviors. In particular, the ways in which the different dimensions of overload are interconnected remain unaddressed. This work offers an important extension of current research in the form of a detailed theoretical understanding of the psychological mechanisms underlying social media discontinuance.

To address this gap, this study applies the stressor–strain–outcome (SSO) perspective to investigate the relationships between the different dimensions of social media overload and how those types of overload relate to discontinued Facebook use. More specifically, it extends Maier et al.’s (2015a) study by investigating a set of distinct stressors (system feature, information and social overload) rather than social overload alone. It also extends Zhang et al.’s (2016) study by examining the relationships among the different dimensions of overload, which provides a deeper understanding of the role of overload in explaining discontinued usage. The proposed research model was empirically validated in the context of Facebook use with 412 valid responses from Facebook users collected via a survey questionnaire. The findings yield two key contributions. The first is the conceptualization of system feature, information and social overload as distinct but related dimensions and the empirical validation of the relationships among them. The second is an understanding of social media discontinuance from an overload perspective, enabled by the SSO framework.
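To make the hypothesized structure concrete: the model described above is essentially a path diagram in which system feature overload feeds the other two stressors, all three stressors feed exhaustion (the strain), and exhaustion feeds discontinued use (the outcome). Below is a minimal sketch of how a path model of that shape might be specified and fitted in Python with the open-source semopy package. This is an illustration only, not the authors’ procedure; the column names are invented, and each construct is treated as a single composite score per respondent rather than a latent variable measured by multiple questionnaire items, as it would be in the actual study.

```python
# Illustrative sketch only: a stressor-strain-outcome path model of the shape
# described in the excerpt, fitted with semopy. Column names are hypothetical
# composite scores (one row per survey respondent), not the paper's measures.
import pandas as pd
import semopy

model_desc = """
info_overload ~ feature_overload
social_overload ~ feature_overload
exhaustion ~ feature_overload + info_overload + social_overload
discontinuance ~ exhaustion
"""

df = pd.read_csv("survey_composites.csv")  # hypothetical data file
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```

A fuller replication would add measurement lines (construct =~ item1 + item2 + ...) for each multi-item scale and report model-fit indices, but the regression lines above capture the ordering the SSO framework asserts: stressors, then strain, then outcome.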

The manuscript consists of eight sections, inclusive of the introduction. The next section reviews extant literature on IS discontinuance, the SSO framework, social media exhaustion, and overload.

. . . .

IS discontinuance

IS discontinuance has been widely studied in IS literature as a post-adoption behavior (Shen, Li, & Sun, 2018), and it refers to a user-level decision to abandon or reduce the use of an IS (Parthasarathy & Bhattacherjee, 1998). Discontinuance and continuance have often been considered the two sides of IS use (Turel, Connelly, & Fisk, 2013). More recent research has theorized that IS discontinuance is a distinct behavior, not simply the opposite of IS continuance (Cao & Sun, 2018; Maier et al.

. . . .

Conclusions

The dark side of IS usage has attracted the attention of scholars in recent years. The demands of IS use affect individual users and societies at large. Such demands are among the key reasons for the IS discontinuance intentions of users. Individuals may simply feel exhausted due to IS use. Researchers have called for more research on this topic and stressed the importance of understanding the fundamental mechanisms that underlie such negative outcomes.

Link to the rest at ScienceDirect

PG expects that many, if not all, visitors to TPV have likely OD’d on social media on one or more occasions.

Unfortunately, Elsevier’s Science Direct provides only an overview of the original publication for those without a paid subscription. Any visitors to TPV who are employees of universities, colleges, research organizations, etc., may be able to sign in to Elsevier or a similar overpriced online repository and read the entire study and, additionally, many, if not all, additional materials referenced in the paper.

Doomscrolling or doomsurfing is the act of spending an excessive amount of time reading large quantities of negative news online. In 2019, a study by the National Academy of Sciences found that doomscrolling can be linked to a decline in mental and physical health.

Wikipedia

My Mother, My Novel, and Me

From Publishers Weekly:

The first time my mom was diagnosed with cancer, I was writing a novel about immortality. She had gone to the doctor after discovering a sizable lump on the side of her stomach. A biopsy revealed the worst possible news: colon cancer. They could remove the lump surgically, but given its size, the doctors suspected it might be too late.

I could not imagine the world without my mom. I told myself she would beat this. My mom had always defied the odds. Born Black and poor to illiterate parents in the South during Jim Crow, she was the only member of her immediate family who graduated from high school. She then earned a full scholarship to college and became a teacher, effectively pulling herself out of poverty and raising me in a safe, middle-class community. She was the toughest person I knew.

My mom’s surgery went well, and the pathology report was better than expected: with the tumor removed, there was no cancer in her body. It felt significant, somehow, that I was writing a novel about immortality and my mom had staved off death, as though the novel was a good luck charm or talisman. I poured everything I had into the book, thanking it for its role in saving my mom.

Two years later, the cancer returned. It had spread to my mom’s stomach. The doctors suggested palliative chemo; they could stop the cancer from spreading to buy her time, but they could not cure her.

But my mom decided not to undergo chemo. Instead of five years, she would have maybe two and a half. I couldn’t accept this, and for the next several months my mom and I fought constantly. I cried. I begged. I guilt-tripped. I made rational arguments and emotional appeals. Nothing would sway her.

With my mom’s diagnosis, finding a publisher for my novel felt more urgent. Her fate and the novel’s had become inextricably linked in my mind—if one made it, the other would, too. I began sending the novel out to a series of independent presses, desperate to find a home for it quickly.

In my novel, immortality comes about suddenly, abruptly changing the fates of people who had previously been diagnosed with terminal diseases. While I didn’t believe immortality was around the corner in real life, I saw articles about new experimental treatments that were extending lives or curing some forms of cancer.

I redoubled my efforts to convince my mom to get chemo, but she was unyielding. She was completely asymptomatic and knew that chemo could come with side effects that would lower her quality of life. Eventually, I realized I had to respect my mom’s wishes—lest I lose her long before she died.

Link to the rest at Publishers Weekly

Higher Ed Has Become a Threat to America

From The Wall Street Journal:

America faces a formidable range of calamities: crime out of control, borders in chaos by design, children poorly educated while sexualized and politicized against parental opposition, unconstitutional censorship, a press that does government PR rather than oversight, our institutions and corporations debased in the name of “diversity, equity and inclusion”—and more. To these has been added an outbreak of virulent antisemitism.

Every one of these degradations can be traced wholly or in large part to a single source: the corruption of higher education by radical political activists.

Children’s test scores have plummeted because college education departments train teachers to prioritize “social justice” over education. Censorship started with one-party campuses shutting down conservative voices. The coddling of criminals originated with academia’s devotion to Michel Foucault’s idea that criminals are victims, not victimizers. The drive to separate children from their parents begins in longstanding campus contempt for the suburban home and nuclear family. Radicalized college journalism departments promote far-left advocacy. Open borders reflect pro-globalism and anti-nation state sentiment among radical professors. DEI started as a campus ruse to justify racial quotas. Campus antisemitism grew out of ideologies like “anticolonialism,” “anticapitalism” and “intersectionality.”

Never have college campuses exerted so great or so destructive an influence. Once an indispensable support of our advanced society, academia has become a cancer metastasizing through its vital organs. The radical left is the cause, most obviously through the one-party campuses having graduated an entire generation of young Americans indoctrinated with their ideas.

And there are other ways. Academia has a monopoly on training for the most influential professions. The destructive influence of campus schools of education and journalism already noted is matched in the law, medicine, social work, etc. Academia’s suppression of the Constitution causes still more damage. Hostility to the Constitution leads to banana-republic shenanigans: suppression of antigovernment speech, the press’s acting as mouthpiece for government, law enforcement used to harass opponents of the government.

Higher education by and for political radicals was foreseen and banned by the American Association of University Professors, which in a celebrated 1915 policy statement warned teachers “against taking unfair advantage of the student’s immaturity by indoctrinating him with the teacher’s own opinions.” The AAUP already understood that political indoctrination would stamp out opposing views, which means the end of rational analysis and debate, the essential core of higher education. The 1915 statement is still a recognized professional standard—except that almost everywhere it is ignored, at least until the public is looking.

Optimists see signs of hope in growing public hostility to campus foolishness, but radical control of the campuses becomes more complete every day as older professors retire and are replaced by more radicals. A bellwether: The membership of the National Association of Diversity Officers in Higher Education—which represents the enforcers of radical orthodoxy—has tripled in the past three years.

An advanced society can’t tolerate the capture of its educational system by a fringe political sect that despises its Constitution and way of life. We have no choice: We must take back control of higher education from cultural vandals who have learned nothing from the disastrous history of societies that have implemented their ideas.

. . . .

Personnel is policy. Effective reform means only one thing: getting those political activists out of the classrooms and replacing them with academic thinkers and teachers. (No, that isn’t the same as replacing left with right.) Nothing less will do. Political activists have been converting money intended for higher education to an unauthorized use—advancing their goal of transforming America. That is tantamount to embezzlement. While we let it continue we are financing our own destruction as a society.

But how can we stop them? State lawmakers can condition continued funding on the legitimate use of that money and install new campus leadership mandated to replace professors who are violating the terms of their employment. Though only possible in red states, this would bring about competition between corrupt institutions and sound ones. Employers would soon notice the difference between educated and indoctrinated young people. Legislatures in Florida, Texas and North Carolina have begun to take steps to reform their universities, but only at Florida’s New College is a crucial restructuring of the faculty under way.

But the only real solution is for more Americans to grasp the depth of the problem and change their behavior accordingly. Most parents and students seem to be on autopilot: Young Jack is 18, so it’s time for college. His family still assumes that students will be taught by professors who are smart, well-informed and with broad sympathies. No longer. Professors are now predominantly closed-minded, ignorant and stupid enough to believe that Marxism works despite overwhelming historical evidence that it doesn’t. If enough parents and students gave serious thought to the question whether this ridiculous version of a college education is still worth four years of a young person’s life and tens or hundreds of thousands of dollars, corrupt institutions of higher education would collapse, creating the space for better ones to arise.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Opportunists and Patriots

From The Wall Street Journal:

In the aftermath of the American Revolution, the concepts of loyalty and legitimacy emerged fitfully and confusingly. Under the monarchy, these concepts had been straightforward: All subjects were united by their loyalty to the sovereign, whose will was their command. But in founding a republic on principles at once lofty and vague, the Founders created a problem that vexes us still. If a nation is defined by its commitment to shared ideals, who draws the line between a difference of opinion and a difference of principle? Where does loyal opposition end and treason begin? What distinguishes the transgressions of a demagogue from the enraged voice of the people?

Today we rely on nearly 250 years of shared history and tradition to navigate the vague boundaries suggested by these unanswerable questions—and yet we can hardly keep from leaping at one another’s throats. The Founders built an arena of partisan politics without grasping the full fury of the beast they had unleashed within it.

“Founding Partisans” by H.W. Brands and “A Republic of Scoundrels,” a collection of essays edited by David Head and Timothy Hemmis, are as different as two books on the founding can be. But each captures the moral confusion of the era, when the rules of democratic politics were still unwritten and everything seemed up for grabs.

Mr. Brands, a prolific historian and a professor at the University of Texas, provides a brisk account of the controversies that first divided the heroes of the Revolution. He begins with the Federalists’ effort to replace the Articles of Confederation with a stronger national government and concludes with the Jeffersonian Republicans’ repudiation of the Federalists in the election of 1800, the first transfer of power in U.S. history.

Though the Federalists organized themselves into a national political party, they didn’t understand themselves as one. Political parties, or “factions,” to use the Founders’ term, were understood as regrettable evils. They existed to serve narrow or sinister interests. An organized political party was thus, by definition, “opposed to the general welfare,” Mr. Brands writes.

The Founders hoped that the Constitution would suppress the influence of factions, but they assumed that virtuous leaders (namely, themselves) would naturally agree with one another. The discovery that so many leading figures disagreed on important matters came as a shock. Each side in this deepening divide began to see their opponents as a menace to the republic.

The failure to anticipate the pull of partisanship was nowhere more evident than in the Constitution’s provisions for electing the president: Each appointed elector, chosen by the states in a manner determined by their legislatures, would vote for two people, at least one of whom could not inhabit the elector’s own state. The thinking was that electors would name a local favorite on the first ballot and, on the second, the worthiest citizen throughout the land. The runner-up would be vice president.

This process, reflecting a hope that Americans would ultimately choose a leader independent of faction, was incompatible with an election in which a candidate would be supported by a party against his rivals. Sure enough, in 1800 the Democratic-Republicans voted in lockstep for Thomas Jefferson as their president and Aaron Burr as vice president. But no one thought to ensure that Burr received at least one fewer electoral vote. The result was a tie, allowing the defeated Federalists in the House to decide who would be president. Burr slyly advertised that he was willing to make a deal with his adversaries.

The crisis passed, thanks to Alexander Hamilton’s intervention. In this sense, the outcome seemed to vindicate the Founders’ hope that virtuous leaders would combine against conniving partisans. But it was a close-run thing.

Mr. Brands follows countless other historians in providing a blow-by-blow account of the nation’s first experience with partisan combat, though not a single historian is cited in the text or notes. He relies instead on the Founders’ own words to capture the controversies in which they participated. This choice gives his narrative an immediacy that heavy-handed analysis often diminishes. Indeed, “Founding Partisans” reads less like a work of history than a journalist’s insider account of high politics, except here the intemperate, backbiting quotations come from sources who are safely dead rather than anonymous.

Link to the rest at The Wall Street Journal

You Can’t Unsubscribe From Grief

From Electric Lit:

Replying All on the Death Announcement Email

On New Year’s Day, I got an email from an old writer friend announcing plans to end her life. Her life was already ending. This expedited ending-of-life had been approved by a medical professional. She was electing to die with dignity. Her death was scheduled for the following day. Like a hair appointment or a visit to the dentist.

It wasn’t an email directly to me. I subscribe to her newsletter.

Farewell, the subject line read. That was her voice. Grand and direct. There was no beating around the bush. Happy New Year! the email began and then: I’m planning to end my life.

After I closed the email, I tried to stop thinking about her, but that night, on the eve of when I knew she was going to die, I couldn’t sleep. I googled her name, read every article that appeared on my screen. Read all the hits that weren’t actually about her. The ones with her name crossed out that the algorithm insisted were relevant. Maybe it knew something I didn’t.

I read about all the diseases I was probably suffering from that had nothing to do with her (or the disease that was killing her), I read about all the new diet trends that would shed my hips of love handles (I hadn’t seen her since she got sick, but in her last photo she was rail thin), I read about a minor celebrity cheating on another minor celebrity and then them reconciling and then them breaking up and then them getting back together again (she loved the thrill of gossip)—I read everything in the hopes of catching a glimpse of my soon-to-be dead old writer friend.

A week later, I got an email from a literary magazine announcing the death of its co-founder. I did not know its co-founder. I just subscribed to the newsletter.

I read the announcement from the literary magazine as if it were the announcement of the death of my old writer friend because after she died, I didn’t receive such an email. Because she was not here to write one. Or to send one. Though she could’ve scheduled one. Which is a thought I’ve had more than once since her death. Why didn’t she do that? That would’ve felt so like her. Not so fast, it might’ve read. I’m still here.

After the newsletter announcing the death of the literary magazine co-founder, my inbox was flooded.

I am so sorry to hear this. May you and yours find comfort. Keep him close to your heart.

I didn’t email anyone when my old writer friend died because it felt like I didn’t know her well enough. We met at a writing residency in Wyoming in 2016. We watched the presidential election together: I baked cookies, she bought liquor. We only inhabited the same space for a handful of weeks. So, how can I justify the vacuum suck of losing her?

The day after the election, we sat at a kitchen table and talked about our bodies. About who they belonged to. About culpability. I remember us disagreeing. The strangeness of feeling so connected to each other and then realizing, suddenly, that we may not actually know each other.  

I cannot keep the literary magazine co-founder close to my heart because I did not know him at all.

Life is eternal! Your memories are the tap that keeps him living!

I think my old writer friend would’ve liked the idea of tapping a memory, like a keg or a maple tree.

Link to the rest at Electric Lit

The Desolate Wilderness

From an account of the Pilgrims’ journey to Plymouth in 1620, as recorded by Nathaniel Morton – Via The Wall Street Journal’s editorial page each Thanksgiving day:

Here beginneth the chronicle of those memorable circumstances of the year 1620, as recorded by Nathaniel Morton, keeper of the records of Plymouth Colony, based on the account of William Bradford, sometime governor thereof:

So they left that goodly and pleasant city of Leyden, which had been their resting-place for above eleven years, but they knew that they were pilgrims and strangers here below, and looked not much on these things, but lifted up their eyes to Heaven, their dearest country, where God hath prepared for them a city (Heb. XI, 16), and therein quieted their spirits.

When they came to Delfs-Haven they found the ship and all things ready, and such of their friends as could not come with them followed after them, and sundry came from Amsterdam to see them shipt, and to take their leaves of them. One night was spent with little sleep with the most, but with friendly entertainment and Christian discourse, and other real expressions of true Christian love.

The next day they went on board, and their friends with them, where truly doleful was the sight of that sad and mournful parting, to hear what sighs and sobs and prayers did sound amongst them; what tears did gush from every eye, and pithy speeches pierced each other’s heart, that sundry of the Dutch strangers that stood on the Key as spectators could not refrain from tears. But the tide (which stays for no man) calling them away, that were thus loath to depart, their Reverend Pastor, falling down on his knees, and they all with him, with watery cheeks commended them with the most fervent prayers unto the Lord and His blessing; and then with mutual embraces and many tears they took their leaves one of another, which proved to be the last leave to many of them.

Being now passed the vast ocean, and a sea of troubles before them in expectations, they had now no friends to welcome them, no inns to entertain or refresh them, no houses, or much less towns, to repair unto to seek for succour; and for the season it was winter, and they that know the winters of the country know them to be sharp and violent, subject to cruel and fierce storms, dangerous to travel to known places, much more to search unknown coasts.

Besides, what could they see but a hideous and desolate wilderness, full of wilde beasts and wilde men? and what multitudes of them there were, they then knew not: for which way soever they turned their eyes (save upward to Heaven) they could have but little solace or content in respect of any outward object; for summer being ended, all things stand in appearance with a weatherbeaten face, and the whole country, full of woods and thickets, represented a wild and savage hew.

If they looked behind them, there was a mighty ocean which they had passed, and was now as a main bar or gulph to separate them from all the civil parts of the world.

Link to the rest at The Wall Street Journal

Today is Thanksgiving Day, a major holiday in the United States, so PG won’t be making any more posts today. He’ll be back tomorrow after his body finishes processing an overdose of tryptophan from today’s turkey dinner with extended family.

The Tory’s Wife

From The Wall Street Journal:

The Revolutionary War liberated Americans from the oppressive colonial rule of the British, but as the postwar era began, it wasn’t clear how much women benefited from the hard-won freedoms accorded to men. Wives were still subject to the English common law of coverture, which gave husbands control of their property; women had next-to-no political rights. Had the Revolution changed anything?

Cynthia Kierner, a professor of history at George Mason University, examines this question in “The Tory’s Wife.” Her short, readable volume recounts the story of Jane Welborn Spurgin, a farm wife and mother of 13, in the backwoods of North Carolina. Other than the fact that she was literate, there was nothing particularly notable about Jane. She didn’t move in elite circles, and she certainly wasn’t famous. Yet she publicly claimed her rights as a citizen of the new American republic in a series of petitions to the state legislature. Ms. Kierner’s focus on Jane stems from her scholarly interest in how the war drew ordinary women into the political sphere.

Jane was a Whig, a patriot who supported the American uprising. She courageously put her patriotism into action when, in early 1781, she provided food and shelter to the American commander in the South, Gen. Nathanael Greene, who set up camp on her family’s property in the North Carolina town of Abbotts Creek. If the British attack, Greene told her, grab the kids and run for the basement. When the general sought her advice in finding a trustworthy person to spy on British Gen. Charles Cornwallis, who was camped nearby, she recommended one of her sons for the dangerous job.

Jane’s husband, William, did not share his wife’s fidelity to the cause of liberty. While Jane was aiding the Continental Army, he was fighting against his fellow North Carolinians as an officer in the Tory militia, for which he had recruited like-minded Loyalists. Before the war, William was a moderately prosperous landowner and a justice of the peace. As war broke out, he was deemed “an Enemy to his Country” by a local political committee, and he spent much of the war in hiding. He disappeared after the American victory, eventually turning up in Canada with a woman he called his wife. The crown rewarded his loyalty with a generous gift of acreage in what is now Ontario.

Jane was not so fortunate. Back in North Carolina, the postwar state legislature passed a bill confiscating Tory property, thus putting her, the wife of a traitor, in danger of losing her family’s home. She refused to move out and set about asserting her ownership rights in three petitions. Referring to herself in the third person, she wrote: “She has always behaved herself as a good Citizen and well attached to the government. She thinks it extremely hard to be deprived of the Common rights of other Citizens.” The “other Citizens” were, of course, men.

For most women, Ms. Kierner writes, “the right to petition was the only political right they formally possessed” under the law. It wasn’t unusual for women to petition for state support for themselves and their children, couching their requests in humble and ingratiating language. Their husbands were dead or missing, and they wanted the state to take on the role of their protector.

In contrast, Jane’s petitions were more direct and less deferential. Ms. Kierner characterizes them as “bold” and distinctive for their “legalism and clarity.” Jane even secured the support of 78 neighbors, who co-signed her second petition in 1788. It is noteworthy, the author says, that Jane “framed her own claim to citizenship in terms of the right to own and protect her family’s property.” That is, she based her case on a traditional understanding of citizenship “as deriving from one’s material stake in society.”

As “The Tory’s Wife” opens, Ms. Kierner warns readers not to expect a conventional biography. Official records yield little information about ordinary women like Jane. The author paints a fuller portrait of William, who had a public role as a justice of the peace and a notorious Tory.

In noting the paucity of sources that document Jane’s life, Ms. Kierner observes that much of the material is “necessarily contextual and sometimes speculative.” The author turns the lack of factual information to her readers’ advantage, providing often fascinating details about life in rural North Carolina—especially about women who struggled to survive the upheavals of war. She cites the grim punishment of a mother for the crime of teaching her children to support the revolution. A son recalled that she was “tied up and whipped by the Tories, her house burned and property destroyed.”

The Whig-Tory divide wasn’t found only in the Spurgins’ marriage. It was prevalent in the wider society, and Ms. Kierner deftly describes North Carolinians’ bitter and often violent struggles. She quotes Gen. Greene, who wrote: “The whole Country is in danger of being laid waste by the Whigs and the Tories, who pursue each other with as much relentless Fury as Beasts of Prey.” There were times when the divided populace seemed to be fighting a civil war, not a war of independence.

And what of Jane, the book’s putative heroine? The North Carolina legislature eventually awarded her a portion of the land she had demanded without acknowledging the legitimacy of her claims. The settlement delivered security for her and her children if not support for her argument on her rights as a citizen. Ms. Kierner concludes that “the Revolution led Jane, like many other Americans, to confront authority and to reimagine her relationship to the polity and the men who ran it.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Of Hamas and Historical Ignorance

PG Note: This OP is a little closer to politics than PG usually strays. What caught his eye is a significant problem with literacy/reading and the consequences arising therefrom.

From The Dispatch:

In the wake of Hamas’ brutal assault on Israel, campuses across the United States have been home to rallies and demonstrations that are nominally pro-Palestinian but effectively celebrations of the terrorist group. Students at George Washington projected slogans such as “Glory to the Martyrs” on a campus building, a Cornell student was arrested for threatening to rape and kill Jewish students, and numerous campuses have been home to antisemitic assaults and vandalism.

The reaction has highlighted the degree to which we’ve left a generation of youth vulnerable to ludicrous doctrines, social media manipulation, and genuinely bad actors. The shocking support among young adults for Hamas’ assault draws on historic ignorance and crude postmodern notions of justice and victimhood, in which torture and kidnapping were rebranded a justifiable response to “colonial privilege.”

The problem starts well before students arrive at college. The average high school student knows little about American history, and even less about the world. A 2018 survey found that 41 percent of adult Americans couldn’t identify Auschwitz as a Nazi concentration camp. Among millennials specifically, two-thirds couldn’t identify Auschwitz and 22 percent had never heard of the Holocaust. So much for “Never forget.” 

Such findings are of a piece with the abysmal performance of younger students in history and geography on the National Assessment of Educational Progress. In the most recent assessment, of a nationally representative sample of eighth-graders, just 13 percent of students were judged “proficient” in U.S. history and just 22 percent in civics. These results continue a decade-long decline.

As Natalie Wexler, author of The Knowledge Gap, has aptly put it, “You can’t think critically about what you don’t know.” This problem isn’t new. But it’s taken on added urgency in a time of intense polarization, declining academic achievement, ubiquitous social media, and rapidly advancing deepfake technology. 

In 1978, alarmed by test results from poor, minority students at a Richmond community college who were ignorant of foundational historic figures and events, scholar E.D. Hirsch began researching the role of background knowledge in reading comprehension. His 1987 book Cultural Literacy became a surprise bestseller and sparked a push for a more rigorous curriculum. 

But teacher training and schools of education largely rejected Hirsch and clung fast to a progressive consensus that children should learn self-confidence and “skills,” not dates and names. Those engaged in preparing a new generation of teachers rejected Hirsch’s belief in the importance of knowledge as “elitist, Eurocentric and focused too heavily on rote memorization,” as a Virginia magazine profile of the famed UVA professor described. Indeed, a decade later, one of us taught alongside the genial, soft-spoken Hirsch, only to see fellow UVA education school faculty quietly steer their students away from this dangerous figure.

The internet supposedly made learning all those “mere facts” unnecessary, anyway. As scholar Mark Bauerlein recounted in his 2009 book The Dumbest Generation, education professors and advocates enamored of “21st century skills” insisted that we needed to move “beyond facts, skills, and right answers” and that students could always just look up all that other stuff.

The skills-over-facts trend paralleled a push to jettison traditional historical narratives and moral certainties in favor of critical theories. Beginning in the 1980s, Howard Zinn’s enormously influential (if oft-inaccurate) People’s History of the United States recast America’s story as one of unbroken villainy and oppression. It was a finalist for the National Book Award in 1980 and was added to high school curricula across the land.  

The unapologetic aim of Zinn’s work—and that of its latter-day, award-winning imitator the 1619 Project—was not to explore our simultaneously wonderful and woeful history but to impress on young people that America and its allies are oppressive colonial powers (that the U.S. is, according to the architect of the 1619 Project, a “slavocracy”).

As a teacher said to one of us recently regarding developments in Israel and Gaza: “Many kids have little to no understanding of the historical context. I feel overwhelmed trying to explain things to them in a side comment here or there.” This teacher lives 10 miles from the state capital, in a town where the average household earns $130,000. This community, filled with educated parents and well-regarded schools, sends the vast majority of its high school graduates on to four-year colleges.

This teacher knows that geography, history, religion, economics, and philosophy are essential to understanding the context of these attacks. But these are subjects that too few schools teach coherently or consistently. Last year K-12 teachers told RAND that it’s more important for civics education to promote environmental activism than “knowledge of social, political, and civic institutions.” 

Teachers who hold these beliefs are unlikely to give students the knowledge or grounding they need to make sense of the world around them. Indeed, the same teachers told RAND their top two priorities for civics education are “promoting critical and independent thinking” and “developing skills in conflict resolution.” What’s striking is that these responses are almost entirely content-free.

In 2020, RAND surveyed high school civics teachers about what they thought graduates needed to know. Just 43 percent thought it essential that they know about periods such as “the Civil War and the Cold War.” Less than two-thirds thought it essential for graduates to know the protections guaranteed by the Bill of Rights.

Critical thinking doesn’t happen in a vacuum. There’s no way for anyone to form meaningful independent judgments on what’s unfolding in Israel and Gaza if they haven’t learned much about history, geography, economics, or political systems. 

This is pretty instructive when it comes to understanding, for instance, how Osama bin Laden’s “Letter to America” has recently gone viral on TikTok. It’s not obvious that Gen Z is eagerly searching for wisdom from mass murderers. But as they spend hours casting about social media, youth who know little about the events or aftermath of 9/11 are encountering a long-dead figure who promises to provide the history and moral clarity they’re not getting elsewhere. We’re sending ill-equipped, confused youth out into the wilds of social media, and we’re reaping the unsurprising result.

As academic rigor and traditional norms have retreated, the space has increasingly been filled by moral relativism and contempt for Western civilization. The result is progressive students who hail Hamas as an ally—an odd way to regard theocratic ideologues who are cavalier about rape, murdering homosexuals, and treating women as chattel.

Writing from a Jerusalem university emptied of students by the present conflict, economist Russ Roberts recently observed, “Open societies are going to have to come to terms with the reality” that some citizens “want to live in a very different kind of society and are willing to use violence and the threat of violence to intimidate and harm people they disagree with.”

Link to the rest at The Dispatch

An irregular life

From TLS:

In 1796 a young law clerk called William Henry Ireland published a book under the modestly antiquarian title Miscellaneous Papers and Legal Instruments Under the Hand and Seal of William Shakespeare. Ireland’s cache included a letter from Elizabeth I praising the Sonnets, amorous verses written by Shakespeare to “Anna Hatherrewaye”, a usefully explicit “Profession of Faith”, a manuscript of King Lear and promise of books from Shakespeare’s library, “in which are many books with Notes in his own hand”.

Of course Ireland had made it all up, snaffling old bits of parchment, concocting a dark brown ink and reverse-engineering Shakespeare’s handwriting from the facsimile signatures that Edmond Malone had reproduced in his 1790 edition of the plays. If a Shakespearean document is too good to be true, it almost certainly isn’t, but, as the scams prevalent in our own day show us, many of us are still willing to be tricked by impossible promises of what we most desire. The appeal of Ireland’s con was the promise of explanation, its manufactured archive the means fully to understand the works and their author. His genius was to recognize the biographical itches in Shakespeare studies that need a good scratch: his relationship with his wife; his religion, politics and reading; his methods of working. Little new evidence about Shakespeare’s life has come to light since this brilliant attempt to short-circuit the search: the questions remain unanswered. Ireland’s Miscellaneous Papers were themselves a kind of biography, in which speculation and documentation had become confused.

For Margreta de Grazia, in her lapidary book Shakespeare Without a Life, Ireland’s forgeries embody a new energy around documenting the author and symbolize the disappointments of the era’s hunger for new evidence. Malone’s biographical researches also ended in frustration. His posthumously published Life of Shakespeare listed with icy regret many of the antiquaries, actors and others who might have gathered information about the playwright during the seventeenth century. William Dugdale, Anthony Wood, John Dryden, William Davenant and Thomas Betterton are all fingered for their carelessness, even as Malone acknowledges that “the truth is, our ancestors paid very little attention to posterity”.

Link to the rest at TLS

“Arguments on a daily basis”: How couples who disagree politically navigate news

From Nieman Lab:

If you have a significant other in your life, chances are they share your politics: you both lean Democratic or Republican (or independent) together.

Romantic partnerships have long grown out of shared values and attitudes, but polarization has amplified these tendencies. These days, more than ever in the U.S., political sorting is happening more readily across neighborhoods, friendships, and dating and marriage relationships. And as political identity increasingly becomes synonymous with many people’s larger sense of social identity, the stakes for political disagreement seem higher than they once were: Partisan zealots are now more likely to wish the worst on their perceived foes on the other side.

But many couples are not politically in sync with each other, and there has been surprisingly little research about what such “cross-cutting” relationships mean for news consumption and political discussion in such politically mismatched pairings.

Emily Van Duyn of the University of Illinois at Urbana-Champaign offers a first-of-its-kind analysis of this issue in her article “Negotiating news: How cross-cutting romantic partners select, consume, and discuss news together,” published last month in the journal Political Communication.

She sought to address three questions: How do romantic partners in cross-cutting relationships influence each other’s selection and consumption of news? How do those patterns of news selection and consumption shape the conversations about politics and political news that happen between partners? And, ultimately, what does the role of news mean for these cross-cutting couples — is it helpful or harmful to their relationship?

To answer these questions, she conducted in-depth interviews with 67 U.S. residents in cross-cutting relationships. She chose to interview just one individual from each couple, in part because people might not be so candid with their comments if they knew their partner was being interviewed, particularly in cases where people strategically avoid talking about politics to maintain the relationship. Of the 67 interviewees, slightly less than half were married while the others were dating or cohabitating. All but five participants were in opposite-sex relationships.

Van Duyn found that cross-cutting couples deal with two main difficulties when navigating news: negotiated exposure, as couples try to influence news selection and consumption in the relationship, and two-step conflict, in which issues surrounding news — what type, how much, from which sources, etc. — not only led to discussion about politics but also to significant friction between partners.

Consider first the problem of negotiating what news to introduce into the relationship — or whether to avoid news altogether. “For one, the process of selecting and consuming news was especially difficult for cross-cutting romantic partners,” Van Duyn writes, “because it presented a choice that involved recognizing political differences and finding a way to navigate these differences. In turn, who selected news, what was selected, and how those choices were negotiated over time became a political act as much as an interpersonal one.”

For one couple studied, that meant sharing control over what TV news channel was playing during the day: the conservative woman would decide in the morning, and her liberal boyfriend took charge in the afternoon. For others, that meant finding shared news rituals they could both agree on — like watching the evening news on ABC while preparing dinner each night — while allowing space for individual podcast or social media consumption tailored to each partner’s own interests.

And, for others, it meant pulling away from news and politics altogether. One respondent said that he and his Democratic girlfriend “never share articles” on social media and “never watch the news together at all.” This was not intentional at first, he said, but the avoidance arose gradually out of a worry that sharing articles would incite conflict: “I guess we’re both afraid to bring up politics…I’d say I try to avoid it. I think she does too. So, we kind of both avoid it at all costs just so we don’t get into any arguments whatsoever.”

Link to the rest at Nieman Lab

Perhaps the OP could serve as a character prompt or “couple prompt”.

PG suggests that politicizing everything is a bad idea for society. He also suggests that people who put politics over relationships or inject politics into relationships take politics way too seriously.

PG can honestly say that he has no idea what the political beliefs of his closest friends are. There was a time when injecting politics (or religion) into a social setting was considered bad manners.

Perhaps politics is the new religion for a great many people.

The Dangerous Radicalism of Longing

From The Dispatch:

In a recent episode of Daryl Dixon, the new Walking Dead spinoff, Daryl says, “You can’t miss what you never had.” And for some reason I keep thinking about it. 

According to the internet, this is originally a Hunter S. Thompson quote, though I suspect someone said it before him. This is one of those sayings that sounds profound and wise but isn’t actually true. Or at least to the extent it’s true, it really depends on context and the definitions of “can’t” and “miss.”

I think an enormous number of our problems come from people who miss things they never had. Just off the top of my head: Palestinians miss having a viable country, but have never had one. Lots of people who grew up without siblings, or fathers, or best friends miss having such people in their lives a great deal. People who had bad experiences in high school, or who never went to college, miss things they never had. 

Maybe this helps explain why I say the definitions of “can’t” and “miss” are important here. For the sorts of people described above, missing what you didn’t have is a kind of longing. And people in fact can and do have such longings. “Missing,” it seems, conveys a statement of fact. You actually had something and lost it. “I had a brother. He’s gone. I miss him.” That’s a different statement than “I always wished I had a brother.”

Regardless, such desires are very, very common. Regret—a good word for combining both “miss” and “longing”—over what might have been, what was lost, or what you never had is one of the most powerful human emotions and one of the greatest drivers of despair. Such feelings are also one of the most powerful motivations for human action.

The first nationalists of the late 18th and early 19th centuries were full of longing for a nation that had never actually existed. Sometimes they invented an ancient past of national identity and claimed they were seeking to restore what was lost to the Romans or some other conquerors. 

The romantics, who helped create nationalism, played similar games. The idea of the “noble savage” was essentially a kind of unscientific science fiction. It was based on ideas that had no actual basis in anthropological or sociological fact. But it did have a good deal of theological support. Man, before the fall, lived in happy ignorance and harmony with nature. Knowledge or technology or modernity ripped us out of this blissful state. All that was required to return to it was the will to return to it. 

I think that in a very fundamental—and very oversimplified—way, all radicalism stems from these kinds of longings. Karl Marx was very much a romantic, and his vision for the end of history looked very much like Rousseau’s vision of the beginning of history. Once all class consciousness was swept away, once the economic aristocracy was toppled or liquidated, everybody would be able to live in an unconstrained state of natural bliss and autonomy. The Marx-influenced radicals pushing for “national liberation” in the 20th century were not as fully utopian, but they believed that all of the suffering and inequities of their lives could be erased with a cleansing purge of imperial control. The Islamic radicals of Iran and elsewhere believed that all that was required to live in spiritual harmony and happiness was to remove the decadent bulwarks of “Western” liberalism, religious pluralism, secularism, capitalism, etc. 

None of these stories ended well, and many ended horribly. 

But the radicalism such desires can inspire isn’t the problem with missing what you never had.

. . . .

 As families shrink or break down, as the sinew of local communities breaks down, the government is seen as a necessary substitute. No, I don’t think all women should stay at home and rely on their parents or husbands as providers and breadwinners. But in a society where so many biological fathers have little desire to be real fathers or actual husbands, the demand for the state to compensate for what’s missing increases. 

This isn’t just a point about the growth of the welfare state or those darn progressives. It’s just one example of how people miss what they never had—fathers, husbands, healthy families and communities—and look for cheap substitutes for them. As I’ve been saying forever, “The government can’t love you.” But when you lack people who love you, when you lack a sense of community, the hunger remains and you pursue whatever you think might satisfy it. 

Another form of longing drives this tendency: nostalgia, which might be the best rebuttal to the claim, “You can’t miss what you never had.” Nostalgia is one of the most powerful forces in politics and life. I would be lying if I said I wasn’t prone to it. Nostalgia is a neologism coined by a Swiss medical student to describe the melancholy (a medical term back then) felt by Swiss mercenaries who fought far from home. It’s a mashup of sorrow or despair and “homecoming.” It’s come to mean homesickness for the past. 

The problem with nostalgia, at least in politics and economics, is that it is a highly selective remembering—and misremembering—of the past. We tend not only to emphasize the good stuff and forget the bad stuff, we exaggerate the good stuff beyond reality. This has always been my problem with “Make America Great Again.” It’s a nostalgia-soaked misdiagnosis of the past that tells people they can have what they miss but never had, at least not in the way they remember it. 

Link to the rest at The Dispatch

When American Words Invaded the Greatest English Dictionary

From The Wall Street Journal:

Most people think of the 20-volume Oxford English Dictionary as a quintessentially British production, but if you pore carefully over the first edition, compiled between 1858 and 1928, you will find thousands of American words.

There are familiar words describing nature particular to the U.S., like prairie, skunk, coyote and chipmunk, but also more recondite ones, like catawba (a species of grape and type of sparkling wine), catawampous (fierce, destructive) and cottondom (the region in which cotton is grown). Today, Americanisms are easy for modern lexicographers to find because of the internet and access to large data sets. But all of the American words in that first edition found their way to Oxford in an age when communication across the Atlantic was far more difficult.

The OED was one of the world’s first crowdsourced projects—the Wikipedia of the 19th century—in which people around the English-speaking world were invited to read their country’s books and submit words for consideration on 4-by-6-inch slips of paper. Until recently, it wasn’t known how many people responded, exactly who they were or how they helped. But in 2014, several years after working as an editor on the OED, I was revisiting a hidden corner of the Oxford University Press basement where the dictionary’s archive is stored, and I came across a dusty box.

Inside the box was a small black book tied with cream-colored ribbon. On its pages was the immaculate handwriting of James Murray, the OED’s longest-serving editor. It was his 150-year-old address book recording the names and addresses of people who contributed to the largest English dictionary ever written.

There were six address books in all from that era, and for the past eight years I have researched the people listed inside. Three thousand or so in total, they were a vivid and eccentric bunch. Most were not the scholarly elite you might expect. The top four contributors globally, one of whom sent in 165,061 slips, were all connected with psychiatric hospitals (or “lunatic asylums” as they were called at the time); three were inmates and one was a chief administrator. There were three murderers and the owner of the world’s largest collection of pornography who, yes, sent in sex words, especially related to bondage and flagellation. 

You can’t go a page or two in Murray’s address books without seeing a name that he had underlined in thick red pencil. These are the Americans: politicians, soldiers, librarians, homemakers, booksellers, lawyers, coin collectors and pharmacists. They ranged from luminaries like Noah Thomas Porter, who edited Webster’s Dictionary and became president of Yale University, to unknowns such as 21-year-old Carille Winthrop Atwood, who loved the classical world and lived in a large house with several other young women in a fashionable area of San Francisco. The most prolific American contributor was Job Pierson, a clergyman from Ionia, Mich., who owned the state’s largest private library and sent in 43,055 slips featuring words from poetry, drama and religion. 

Murray marked Americanisms with a “U.S.” label, including casket (coffin), comforter (eiderdown), baggage (luggage), biscuit (scone) and faucet (tap). He was often at pains to add details: For pecan tree, he included that it was “common in [the] Ohio and Mississippi valleys.” He noted that candy, not quite an Americanism, was “in [the] U.S. used more widely than in Great Britain, including toffy and the like.”

. . . .

Some American contributors involved in certain causes sought to make sure that their associated words got into the dictionary, like Anna Thorpe Wetherill, an anti-slavery activist in Philadelphia, who hid escaped slaves at her home. Her contributions included abhorrent and abolition.

Others turned to their hobbies. Noteworthy Philadelphian Henry Phillips, Jr., an antiquarian and pioneer of the new language Esperanto, ensured that the dictionary had a generous coverage of words relating to coins and numismatics: electrum (coins made of an alloy of gold and silver with traces of copper) and gun money (money coined from the metal of old guns). 

Francis Atkins, a medical doctor at a military base in New Mexico, read books relating to Native American cultures and sent in sweat-house (a hut in which hot air or vapor baths are taken) and squash (the vegetable), a word borrowed from the Narragansett asquutasquash. He also contributed ranching words: rutting season (mating season), pronghorn (an antelope) and bison (a wild ox).

Others had their favorite authors. Anna Wyckoff Olcott, one of 27 contributors from New York City (she lived on West 13th Street in Manhattan), took responsibility for providing entries from the works of Louisa May Alcott. Those included the term deaconed, from “Little Women,” defined in the OED as “U.S. slang” meaning the practice of packing fruit with the finest specimens on top. (“The blanc-mange was lumpy, and the strawberries not as ripe as they looked, having been skilfully ‘deaconed.’”)

In Boston, Nathan Matthews advised the OED for six years before becoming the city’s mayor and the person who spearheaded Boston’s subway system, the first in the U.S. But it was his brother, the historian and etymologist Albert Matthews, who was the second-highest ranking American contributor, sending in 30,480 slips from his reading of American historical sources including Benjamin Franklin, George Washington, John Adams, Henry Wadsworth Longfellow and Washington Irving. 

Albert Matthews in particular enabled the OED to include words that no Brit would ever have heard or needed to use. He sent in stockaded, whitefish and a rare American use of suck, meaning “the place at which a body of water moves in such a way as to suck objects into its vortex.” His reading of Daniel Denton’s “A Brief Description of New York” (1670) provided evidence for persimmon, possum, raccoon skin, powwow (spelled at the time “pawow”) and the first time that huckleberry ever appeared in print: “The fruits Natural to the Island are Mulberries, Posimons, Grapes great and small, Huckelberries.”

Link to the rest at The Wall Street Journal

Mark Meadows sued by book publisher over false election claims

From The Hill:

The publisher of Mark Meadows’s book is suing the former White House chief of staff, arguing in court filings Friday morning that he violated an agreement with All Seasons Press by including false statements about former President Trump’s claims surrounding the 2020 election.

“Meadows, the former White House Chief of Staff under President Donald J. Trump, promised and represented that ‘all statements contained in the Work are true and based on reasonable research for accuracy’ and that he ‘has not made any misrepresentations to the Publisher about the Work,’” the publishing company writes in its suit, filed in court in Sarasota County, Fla.

“Meadows breached those warranties causing ASP to suffer significant monetary and reputational damage when the media widely reported … that he warned President Trump against claiming that election fraud corrupted the electoral votes cast in the 2020 Presidential Election and that neither he nor former President Trump actually believed such claims.”

The suit comes after ABC News reported that Meadows received immunity to testify before a grand jury convened to hear evidence from special counsel Jack Smith, reportedly contradicting statements he made in his book. 

. . . .

Meadows’s book, “The Chief’s Chief,” was published in 2021 and spends ample time reflecting on the election.

“Meadows’ reported statements to the Special Prosecutor and/or his staff and his reported grand jury testimony squarely contradict the statements in his Book, one central theme of which is that President Trump was the true winner of the 2020 Presidential Election and that election was ‘stolen’ and ‘rigged’ with the help from ‘allies in the liberal media,’ who ignored ‘actual evidence of fraud,’” the company writes in the filing.

According to Meadows’s testimony, as reported by ABC News, Trump was being “dishonest” with voters when he claimed victory on election night. ABC reported that Meadows admitted Trump lost the election when questioned by prosecutors.

Link to the rest at The Hill

I’ve researched time for 15 years – here’s how my perception of it has changed

From The Conversation:

Time is one of those things that most of us take for granted. We spend our lives portioning it into work-time, family-time and me-time. Rarely do we sit and think about how and why we choreograph our lives through this strange medium. A lot of people only appreciate time when they have an experience that makes them realise how limited it is.

My own interest in time grew from one of those “time is running out” experiences. Eighteen years ago, while at university, I was driving down a country lane when another vehicle strayed onto my side of the road and collided with my car. I can still vividly remember the way in which time slowed down, grinding to a near halt, in the moments before my car impacted with the oncoming vehicle. Time literally seemed to stand still. The elasticity of time and its ability to wax and wane in different situations shone out like never before. From that moment I was hooked.

I have spent the last 15 years trying to answer questions such as: Why does time slow down in near death situations? Does time really pass more quickly as you get older? How do our brains process time?

My attempts to answer these questions often involve putting people into extreme situations to explore how their experience of time is affected. Some of the participants in my experiments have been given electric shocks to induce pain, others have traversed 100-metre-high crumbling bridges (albeit in virtual reality), some have even spent 12 months in isolation on Antarctica. At the heart of this work is an attempt to understand how our interaction with our environment shapes our experience of time.

Thinking time

This research has taught me that time’s flexibility is an inherent part of the way in which we process it. We are not like clocks which record seconds and minutes with perfect accuracy. Instead, our brain appears to be wired to perceive time in a way which is responsive to the world around us.

The way in which our brain processes time is closely related to the way in which it processes emotion. This is because some of the brain areas involved in the regulation of emotional and physiological arousal are also involved in the processing of time. During heightened emotion, the activation produced as the brain attempts to maintain stability alters its ability to process time.

So, when we experience fear, joy, anxiety or sadness, emotional processing and time processing interact. This results in the sensation of time speeding up or slowing down. Time really does fly when you’re having fun and drag when you’re bored.

Changes in our experience of time are most profound during periods of extreme emotion. In near death experiences, like my car crash for example, time slows to the point of stopping. We don’t know why our brains distort sensory information during trauma.

Ancient adaptations

One possibility is that time distortions are an evolutionary survival intervention. Our perception of time may be fundamental to our fight-or-flight response. This insight into time has taught me that in times of crisis, knee-jerk responses are unlikely to be the best ones. Instead, it would seem that slowing down helps me succeed.

Being a time-nerd, I spend a lot of time thinking about time. Before COVID, I would have said that I thought about it more than most. However, this changed during the pandemic.

Think back to those early lockdown days. Time started to slip and slide like never before. Hours sometimes felt like weeks and days merged into one another. Newspaper headlines and social media were awash with the idea that COVID had mangled our sense of time. They were not wrong. COVID time-warps were observed around the world. One study found that 80% of participants felt like time slowed down during the second English lockdown.

We no longer had a choice about how and when we spent our time. Home-time, work-time and me-time were suddenly rolled into one. This loss of control over our schedules made us pay attention to time. People now appear less willing to “waste time” commuting and instead place a greater value on jobs with flexibility over where and when you work. Governments and employers still appear unsure how to grapple with the ever-changing time landscape. What does seem clear however is that COVID permanently altered our relationship with time.

Link to the rest at The Conversation

Links to scholarly works included in the original.

“I cannot wait to possess you”: Reading 18th-century letters for the first time

From Ars Technica:

University of Cambridge historian Renaud Morieux was poring over materials at the National Archives in Kew when he came across a box holding three piles of sealed letters held together by ribbons. The archivist gave him permission to open the letters, all addressed to 18th-century French sailors from their loved ones and seized by Great Britain’s Royal Navy during the Seven Years’ War (1756-1763).

“I realized I was the first person to read these very personal messages since they were written,” said Morieux, who just published his analysis of the letters in the journal Annales Histoire Sciences Sociales. “These letters are about universal human experiences, they’re not unique to France or the 18th century. They reveal how we all cope with major life challenges. When we are separated from loved ones by events beyond our control like the pandemic or wars, we have to work out how to stay in touch, how to reassure, care for people and keep the passion alive. Today we have Zoom and WhatsApp. In the 18th century, people only had letters, but what they wrote about feels very familiar.”

England and France have a long, complicated history of being at war, most notably the Hundred Years’ War in the 14th and 15th centuries. The two countries were also almost continuously at war during the 18th century, including the Seven Years’ War, which was fought in Europe, the Americas, and Asia-Pacific as England and France tried to establish global dominance with the aid of their respective allies. The war technically evolved out of the North American colonies when England tried to expand into territory the French had already claimed. (Fun fact: A 22-year-old George Washington led a 1754 ambush on a French force at the Battle of Jumonville Glen.) But the conflict soon spread beyond colonial borders, and the British went on to seize hundreds of French ships at sea.

According to Morieux, despite its collection of excellent ships during this period, France was short on experienced sailors, and the large numbers imprisoned by the British—nearly a third of all French sailors in 1758—didn’t help matters. Many sailors eventually returned home, although a few died during their imprisonment, usually from malnutrition or illness. It was no easy feat delivering correspondence from France to a constantly moving ship; often multiple copies were sent to different ports in hopes of increasing the odds of a letter reaching its intended recipient.

This particular batch of letters was addressed to various crew members of a French warship called the Galatée, which was captured by a British ship called the Essex en route from Bordeaux to Quebec in 1758. Morieux’s genealogical research accounted for every member of the crew. Naturally, some of the missives were love letters from wives to their husbands, such as the one Marie Dubosc wrote to her husband, a ship’s lieutenant named Louis Chambrelan, in 1758, professing herself his “forever faithful wife.” Morieux’s research showed that Marie died the following year before her husband was released; Chambrelan remarried when he returned to France, having never received his late wife’s missive.

Morieux read several letters addressed to a young sailor from Normandy named Nicolas Quesnel, from both his 61-year-old mother, Marguerite, and his fiancée, Marianne. Marguerite’s letters chided the young man for writing more often to Marianne and not to her, laying the guilt on thick. “I think more about you than you about me,” the mother wrote (or more likely, dictated to a trusted scribe), adding, “I think I am for the tomb, I have been ill for three weeks.” (Translation: “Why don’t you write to your poor sick mother before I die?”)

Apparently, Quesnel’s neglect of his mother caused some tension with the fiancée since Marianne wrote three weeks later asking him to please write to his mom and remove the “black cloud” in the household. But then Marguerite merely complained that Quesnel made no mention of his stepfather in his letters home, so the poor young man really couldn’t win. Quesnel survived his imprisonment, per Morieux, and ended up working on a transatlantic slave ship.

Link to the rest at Ars Technica

The King’s English? Forgeddabouddit!

From Literary Review:

Does the misuse of the word ‘literally’ make your toes curl? Do the vocal tics of young ’uns set you worrying about the decline of the noble English language? You are not alone. But your fears are misplaced – at least according to the linguist Valerie Fridland.

Fridland’s Like, Literally, Dude does an excellent job of vindicating words and ways of speaking we love to hate. Tracing your ‘verys’ and your singular ‘theys’ across centuries and continents, Fridland offers a history of linguistic pet peeves that are much older than we might assume and have more important functions in communication than most of us would like to give them credit for.

Take intensifiers like ‘totally’, ‘pretty’ and ‘completely’. We might consciously believe them to be exaggerations undermining the speaker’s point, yet people consistently report seeing linguistic booster-users as more authoritative and likeable than others.

Then take ‘um’ and ‘uh’ (or ‘umm’ and ‘uhh’, and their consonant-multiplying siblings). Both receive an undue amount of flak for being fillers, supposedly deployed when the speaker is grasping for words, unsure what they want to say or lacking ideas. But this is not so. Fridland explains that they typically precede unfamiliar words or ideas, as well as complex sentence structures. Such non-semantic additions do what silent pauses and coughing can’t: they help the speaker speak and the listener listen. Similarly, the widely abhorred free-floating ‘like’ does not cut randomly into a ‘proper’ sentence but rather inserts itself, according to the logic of the language, either at the beginning of a sentence or before a verb, noun or adjective. It’s a form of ‘discourse marker’, used to ‘contribute to how we understand each other by providing clues to a speaker’s intentions’, writes Fridland. She points out that Shakespeare used discourse markers frequently, while the epic poem Beowulf begins with one (Hwæt!).

If what we think is ‘bad English’ is so good, why is nobody encouraging us to use those little flashy friends like ‘dude’, ‘actually’ and ‘WTF’? Corporate career guides and oratory platforms like Toastmasters warn against too many interrupters. The reason is that they supposedly make you sound insecure, weak, inexperienced and right-out dumb – like a young woman, basically. The world of power and prestige is rife with bias against ‘like’ and company, and so are our day-to-day interactions with friends and neighbours, who may judge us for that extra ‘literally’ or spontaneous ‘oh’. It’s precisely this prejudice that Fridland sets out to dismantle, arguing that linguistic change is a natural occurrence and that pronouncements on the bad and the good of language are socially motivated.

When we devalue a group’s speech habits, we perceive otherness fuelled by differences in race, class, gender, sexuality and education. To say ‘three dollar’ rather than ‘three dollars’ is not sloppy, Fridland states, but part and parcel of consonant loss at the end of English words that has its roots in the late Middle Ages, when the stress patterns of Norman French and Old Norse led to final letters being cast off. Why should we embarrass others for similar habits?

Fridland does well to burst the bubble of mockery around Californian girls’ vocal fry (think the creaking voices of Paris Hilton and the Kardashians), unpicking the social meanings we attach to verbal patterns we find unacceptable. We tend to dislike (and believe reprehensible) what we’re not regularly exposed to. And that often happens to be the language of vulnerable communities, such as black and brown people, teenagers and women. These groups often propel linguistic change. Children and teenagers, for example, are voracious speakers, eager to explore and play with new forms of language as their speech patterns haven’t quite settled. Women – particularly young women – are the Formula 1 drivers of language change, and have always been. Fridland explains that many modern English forms, such as the S ending of the third-person singular verb (‘it does’ rather than ‘it doth’), were pushed on by women and girls, whose ears tend to be more sensitive to linguistic nuances. While men are likely to snap up already-current words, such as ‘bro’, in order to signal social affiliation, women create new verbal spaces into which other people eventually step. ‘What women [bring] to the fore’, Fridland says, is ‘novelty in the form of this expressivity, not greater expressivity itself’. Once a change has become widely accepted, there is no difference in gender use, despite our perceptions.

Link to the rest at Literary Review

For those who don’t know what a vocal fry (formerly glottal fry) is, you’ll hear and see an example below.

https://youtu.be/iY-ehIyRY_Q?si=HAkinbljEy0M0cQM

Whither philosophy?

From Aeon:

‘As long as there has been such a subject as philosophy, there have been people who hated and despised it,’ reads the opening line of Bernard Williams’s article ‘On Hating and Despising Philosophy’ (1996). Almost 30 years later, philosophy is not hated so much as it is viewed with a mixture of uncertainty and indifference. As Kieran Setiya recently put it in the London Review of Books, academic philosophy in particular is ‘in a state of some confusion’. There are many reasons for philosophy’s stagnation, though the dual influences of specialisation and commercialisation, in particular, have turned philosophy into something that scarcely resembles the discipline as it was practised by the likes of Aristotle, Spinoza or Nietzsche.

Philosophers have always been concerned with the question of how best to philosophise. In ancient Greece, philosophy was frequently conducted outdoors, in public venues such as the Lyceum, while philosophical works were often written in a dialogue format. Augustine delivered his philosophy as confessions. Niccolò Machiavelli wrote philosophical treatises in the ‘mirrors for princes’ literary genre, while his most famous work, The Prince, was written as though it were an instruction for a ruler. Thomas More maintained the dialogue format that had been popular in ancient Greece when writing his famed philosophical novel Utopia (1516). By the late 1500s, Michel de Montaigne had popularised the essay, combining anecdote with autobiography.

In the century that followed, Francis Bacon was distinctly aphoristic in his works, while Thomas Hobbes wrote Leviathan (1651) in a lecture-style format. Baruch Spinoza’s work was unusual in being modelled after Euclid’s geometry. The Enlightenment saw a divergent approach to philosophy regarding form and content. Many works maintained the narrative model that had been used by Machiavelli and More, as in Voltaire’s Candide (1759), while Jean-Jacques Rousseau re-popularised the confessional format of philosophical writing. Immanuel Kant, however, was far less accessible in his writings. His often-impenetrable style would become increasingly popular in philosophy, taken up most consequentially in the work of G W F Hegel. Despite the renowned complexity of their works, both philosophers would become enduringly influential in modern philosophy.

In the 19th century, Friedrich Nietzsche, greatly influenced by Arthur Schopenhauer, wrote in an aphoristic style, expressing his ideas – often as they came to him – in bursts of energetic prose. There are very few philosophers who have managed to capture the importance and intellectual rigour of philosophy while being as impassioned and poetic as Nietzsche. Perhaps this accounts for his enduring appeal among readers, though it would also account for the scepticism he often faces in more analytical traditions, where Nietzsche is not always treated as a ‘serious’ philosopher.

The 20th century proved to be a crucial turning point. While many great works were published, philosophy also became highly specialised. The rise of specialisation in academia diminished philosophy’s broader influence on artists and the general public. Philosophy became less involved with society more broadly and broke off into narrowly specialised fields, such as philosophy of mind, hermeneutics, semiotics, pragmatism and phenomenology.

There are different opinions about why specialisation took such a hold on philosophy. According to Terrance MacMullan, the rise of specialisation began in the 1960s, when universities were becoming more radicalised. During this time, academics began to dismiss non-academics as ‘dupes’. The problem grew when academics began to emulate the jargon-laden styles of philosophers like Jacques Derrida, deciding to speak mostly to each other, rather than to the general public. As MacMullan writes in ‘Jon Stewart and the New Public Intellectual’ (2007):

It’s much easier and more comfortable to speak to someone who shares your assumptions and uses your terms than someone who might challenge your assumptions in unexpected ways or ask you to explain what you mean.

Adrian Moore, on the other hand, explains that specialisation is seen as a way to distinguish oneself:

Academics in general, and philosophers in particular, need to make their mark on their profession in order to progress, and the only realistic way that they have of doing this, at least at an early stage in their careers, is by writing about very specific issues to which they can make a genuinely distinctive contribution.

Moore nevertheless laments the rise in specialisation, noting that, while specialists might be necessary in some instances, ‘there’s a danger that [philosophy] will end up not being pursued at all, in any meaningfully integrated way.’

Indeed, while specialisation might help academics to distinguish themselves in their field, their concentrated focus also means that their work is less likely to have a broader impact. In favouring specialisation, academics have not only narrowed the scope of philosophy, but have also unwittingly excluded those who may have their own contributions to make from outside the academy.

Expertise counts for much in today’s intellectual climate, and it makes sense that those educated and trained in specific fields would be given greater consideration than a dabbler. But it is those philosophers who wrote on a wide range of areas that left a profound mark on philosophy. Aristotle dedicated himself to a plethora of fields, including science, economics, political theory, art, dance, biology, zoology, botany, metaphysics, rhetoric and psychology. Today, any researcher who draws on different, ‘antagonistic’ fields would be accused of deviating from their specialisation. Consequently, monumental books that defied tradition – from Aristotle’s Nicomachean Ethics to Nietzsche’s Beyond Good and Evil (1886) – are few and far between. This is not to say, however, that there are no influential philosophers. Saul Kripke and Derek Parfit, both not long deceased, are perhaps the most significant philosophers in recent years, but their influence is primarily confined to academia. Martha Nussbaum, on the other hand, is one of the most important and prolific philosophers working today. Her contributions to ethics, law and emotion have been both highly regarded and far-reaching, and she is often lauded for her style and rigour, illustrating that not all philosophers are focused on narrow fields of specialisation.

But ‘the blight of specialisation’, as David Bloor calls it, remains stubbornly engrained in the practice of philosophy, and ‘represents an artificial barrier to the free traffic of ideas.’ John Potts, meanwhile, argues that an emphasis on specialisation has effectively discouraged any new icons from emerging:

A command of history, philosophy, theology, psychology, philology, literature and the Classics fostered German intellectuals of the calibre of Nietzsche and Weber, to name just two of the most influential universal scholars; such figures became much rarer in the 20th century, as academic research came to favour specialisation over generalisation.

By demoting the significance of generalised thinking, the connective tissue that naturally exists between various disciplines is obscured. One is expected, instead, to abide by the methodologies inherent in their field. If, as Henri Bergson argued in The Creative Mind (1946), philosophy is supposed to ‘lead us to a completer perception of reality’, then this ongoing emphasis on specialisation today compromises how much we can truly know about the world in any meaningful depth, compromising the task of philosophy itself. As Milan Kundera put it in The Art of the Novel (1988):

The rise of the sciences propelled man into the tunnels of the specialised disciplines. The more he advanced in knowledge, the less clearly could he see either the world as a whole or his own self, and he plunged further into what Husserl’s pupil Heidegger called, in a beautiful and almost magical phrase, ‘the forgetting of being’.

To narrow one’s approach to knowledge to any one field, any one area of specialisation, is to reduce one’s view of the world to the regulations of competing discourses, trivialising knowledge as something reducible to a methodology. Under such conditions, knowledge is merely a vessel, a code or a tool, something to be mastered and manipulated.

By moving away from a more generalised focus, philosophy became increasingly detached from the more poetic style that nourished its spirit. James Miller, for instance, called pre-20th-century philosophy a ‘species of poetry’. Nietzsche’s own unique, poetic writing style can account for much of the renown his ideas continue to receive (and also much of the criticism levelled at him by other philosophers). Reading Nietzsche may at times be arduous and convoluted, but it is never dull. Indeed, Tamsin Shaw spoke of Nietzsche as less a philosopher and more a ‘philosopher-poet’. Jean-Paul Sartre called him ‘a poet who had the misfortune of having been taken for a philosopher’.

While many sought to separate philosophy from other creative styles and pursuits, notably poetry and literature, Mary Midgley insisted that ‘poetry exists to express [our] visions directly, in concentrated form.’ Even Martin Heidegger, whose writing was far less poetic than Nietzsche’s, called for ‘a poet in a destitute time’, and saw poets as those who reach directly into the abyss during the ‘world’s night’.

Link to the rest at Aeon

Travels With Tocqueville Beyond America

From The Wall Street Journal:

Alexis de Tocqueville (1805-59) was neither a systematic thinker nor a system builder, neither a philosopher nor a historian. His subject was society—make that societies, their strengths and their weaknesses, which he studied always in search of what gives them their character. Along with Machiavelli, Montesquieu, Max Weber, Ortega y Gasset, Tocqueville was a cosmopolitan intellectual of the kind that appears only at the interval of centuries.

Tocqueville is of course best known for his “Democracy in America,” a work which may be more quoted from than actually read. The first part of it was published in 1835, based on observations made when he visited the U.S. in 1831, at age 26. His powers of observation, and skill at generalization, were evident at the outset. They never slackened over the remainder of his life.

Tocqueville’s skill at formulating observations was unfailingly acute. “In politics, shared hatreds are almost always the basis of friendships,” he wrote. “History is a gallery of pictures in which there are few originals and many copies.” At the close of “Democracy in America,” he predicted the coming hegemonies of Russia and the U.S. George Santayana, in a letter to his friend Horace Kallen, wrote: “Intelligence is the power of seeing things in the past, present, and future as they have been, are, and will be.” He might have been describing Alexis de Tocqueville.

The first volume of “Democracy in America” was well received. The second volume, published in 1840—more critical and more dubious of the virtues of democracy—was less so. Yet the work stayed in print for a full century, even though its author’s reputation had long since faded. Then, in 1938, with the publication of Tocqueville’s correspondence and other hitherto uncollected writings, that reputation, more than revived, became set in marble.

“Travels With Tocqueville Beyond America” by Jeremy Jennings, a professor of political theory at King’s College London, thus joins a long shelf of books dedicated to the man and his works. Four full biographies of Tocqueville have been published, the last, Hugh Brogan’s “Alexis de Tocqueville: A Life,” in 2006. Nearly every aspect of Tocqueville’s work has been treated in essays, articles and book-length studies. I happened to have published a slender volume myself, “Alexis de Tocqueville, Democracy’s Guide” (2006), in which I wrote: “What would have surprised Tocqueville, one suspects, is the persistence with which his writings have remained alive, part of the conversation on the great subject of the importance of politics in life.” It would have surprised him, I believe, because of his innate modesty and his belief that his work was far from finished.

Tocqueville’s trip to America, which would be the making of him, had its origin in his wish to escape the reign of Louis-Philippe, king of France, whose Orléans family had been sympathetic to the French Revolution and were thus viewed askance by the house of Tocqueville. With his friend Gustave de Beaumont, Tocqueville proposed a visit to America to study penal institutions in the new republic; the two magistrates were granted permission, though they would have to pay their own expenses.

In “Travels With Tocqueville Beyond America,” Mr. Jennings sets out the importance of travel to Alexis de Tocqueville. “In exploring why, where, and how Tocqueville travelled,” he writes, “this volume seeks to show that travel played an integral role in framing and informing his intellectual enquiries.” Throughout his life, we learn, “Tocqueville longed to travel,” and this appetite for travel did not “diminish with either age or illness.” As Tocqueville wrote to his friend Louis de Kergorlay: “I liken man in this world to a traveller who is walking constantly toward an increasingly cold region and who is forced to move more as he advances.”

Mr. Jennings proves a splendid guide to Tocqueville’s travels. These included trips, some lengthier than others, to Italy, Algeria, Germany, Switzerland, England and Ireland. Basing his book on Tocqueville’s rich correspondence and notebooks, Mr. Jennings describes his subject’s preparations, his arrivals, his daily encounters in what for Tocqueville were new lands. Even when he did not publish works about these places, he was recording his thoughts. Above all, the author establishes the unceasing intellectual stimulation that Tocqueville found in travel. The spirit of inquiry was never quiescent in him, and, as Mr. Jennings notes, even on his honeymoon “Tocqueville managed to find time to study the Swiss political system.”

Much of the attraction of “Travels with Tocqueville Beyond America” derives from its chronicle of Tocqueville’s quotidian life and his many interesting opinions of historical and contemporary figures. Tocqueville said that Napoleon was “as great as a man can be without virtue.” His English friend Nassau Senior records Tocqueville saying of Napoleon that his “taste was defective in everything, in small things as well as great ones; in books, art, and in women as well as in ambition and glory; and his idolizers cannot be men of much better taste.”

Tocqueville remarked on the “impatience always aroused in him by the national self-satisfaction of the Germans,” and found Italy “the most unpleasant country I have ever visited on my travels.” As for Switzerland, he noted that “at the bottom of their souls the Swiss show no deep respect for law, no love of legality, no abhorrence of the use of force, without which there cannot be a free country.”

Yet he described America as “the most singular country in the world.” Among other things, during his nine months there, he was taken by its citizens’ enthusiasm for their own system of government. Americans, he found, “believe in the wisdom of the masses, assuming the latter are well informed; and appear to be unclouded by suspicions that the populace may never share in a special kind of knowledge indispensable for governing a state.”

He, Tocqueville, did not share their unabated enthusiasm: “What I see in this country tells me that, even in the most favorable circumstances, and they exist here, the government of the multitude is not a good thing.” Tocqueville was wary of what had been done to the American Indian, and predicted that “within a hundred years there will not remain in North America either a single tribe or even a single man belonging to the most remarkable of Indian races.” His views on slavery in America were even bleaker, harsher. “The Americans are, of all modern peoples, those who have pushed equality and inequality furthest among men,” he wrote. He thought, correctly as we now know, slavery to be “the most formidable of all the evils that threaten the future of the United States.”

Alexis de Tocqueville was a passionate man, and about liberty he was most passionate of all. By liberty he meant the absence of despotism, whether by monarchs or multitudes. “Liberty is the first of my passions,” he wrote, referring to it as “a good so precious and necessary,” adding that “whoever seeks for anything from freedom but itself is made for slavery.”

Link to the rest at The Wall Street Journal

The New York Times built a robot to help make article tagging easier

From NiemanLab:

If you write online, you know that a final, tedious part of the process is adding tags to your story before sending it out to the wider world.

Tags and keywords in articles help readers dig deeper into related stories and topics, and give search audiences another way to discover stories. A Nieman Lab reader could go down a rabbit hole of tags, finding all our stories mentioning Snapchat, Nick Denton, or Mystery Science Theater 3000.

Those tags can also help newsrooms create new products and find inventive ways of collecting content. That’s one reason The New York Times Research and Development lab is experimenting with a new tool that automates the tagging process using machine learning — and does it in real time.

The Times R&D Editor tool analyzes text as it’s written and suggests tags along the way, in much the way that spell-check tools highlight misspelled words:

Editor is an experimental text editing interface that explores how collaboration between machine learning systems and journalists could afford fine-grained annotation and tagging of news articles. Our approach applies machine learning techniques interactively, as part of the writing process, rather than retroactively. This approach can offload the burden of work to the computational processes, and can create affordances for journalists to augment, edit and correct those processes with their knowledge.

It’s similar to Thomson Reuters’ Open Calais system, which extracts metadata from text files of any kind. Editor works by connecting the corpus of tags housed at the Times with an artificial neural network designed to read over a writer’s shoulder in a text editing system. They explain:

As the journalist is writing in the text editor, every word, phrase and sentence is emitted on to the network so that any microservice can process that text and send relevant metadata back to the editor interface. Annotated phrases are highlighted in the text as it is written. When journalists finish writing, they can simply review the suggested annotations with as little effort as is required to perform a spell check, correcting, verifying or removing tags where needed. Editor also has a contextual menu that allows the journalist to make annotations that only a person would be able to judge, like identifying a pull quote, a fact, a key point, etc.
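
To make the mechanics a little more concrete, here is a minimal sketch of a suggest-as-you-type annotation pass of the kind the excerpt describes. It is not the Times’ code: the names (TAG_VOCABULARY, Annotation, suggest_tags) are hypothetical, the matching is literal phrase lookup against a toy vocabulary rather than a trained neural network, and a production version would sit behind the microservices the R&D lab mentions.

```python
import re
from dataclasses import dataclass

# Toy controlled vocabulary standing in for a newsroom's tag corpus.
# (Hypothetical: the Times' Editor tool draws on its own tag archive and a
# trained neural network, neither of which is reproduced here.)
TAG_VOCABULARY = {
    "snapchat": "ORGANIZATION",
    "nick denton": "PERSON",
    "mystery science theater 3000": "CREATIVE_WORK",
}

@dataclass
class Annotation:
    start: int   # character offset where the match begins
    end: int     # character offset where the match ends
    phrase: str  # the matched text as the writer typed it
    tag: str     # suggested tag category

def suggest_tags(draft: str) -> list[Annotation]:
    """Scan a draft and return tag suggestions, spell-check style."""
    suggestions = []
    lowered = draft.lower()
    for phrase, tag in TAG_VOCABULARY.items():
        for match in re.finditer(re.escape(phrase), lowered):
            suggestions.append(
                Annotation(match.start(), match.end(),
                           draft[match.start():match.end()], tag)
            )
    return sorted(suggestions, key=lambda a: a.start)

if __name__ == "__main__":
    text = "Nick Denton commented on a Snapchat story this week."
    for ann in suggest_tags(text):
        print(f"{ann.phrase!r} -> {ann.tag} at {ann.start}-{ann.end}")
```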

“We started looking at what we could do if we started tagging smaller entities in the articles. [We thought] it might afford greater capabilities for reuses and other types of presentation,” said Alexis Lloyd, creative director at the Times R&D Lab.

Tags are a big deal at the Times; the paper has a system of article tags that goes back over 100 years. That metadata makes things like Times Topics pages possible. It’s an important process that is entirely manual, relying on reporters and editors to provide a context layer around every story. And in some cases, that process can lag: The Times’ innovation report cited many gaps in the paper’s metadata system as a strategic weakness:

“Everyone forgets about metadata,” said John O’Donovan, the chief technology officer for The Financial Times. “They think they can just make stuff and then forget about how it is organized in terms of how you describe your content. But all your assets are useless to you unless you have metadata — your archive is full of stuff that is of no value because you can’t find it and don’t know what it’s about.”

Lloyd said the idea behind Editor was not just to make the metadata process more efficient, but also to make it more granular. By using a system that combs through articles at a word-by-word level, the amount of data associated with people, places, companies, and events becomes that much richer.

And that much more data opens new doors for potential products, Lloyd told me. “Having that underlying metadata helps to scale to all kinds of new platforms as they emerge,” she said. “It’s part of our broader thinking about the future of news and how that will become more complex, in terms of forms and formats.”

. . . .

The key feature of the automatic tagging system relies on bringing machines into the mix, an idea that inspires conflicting ideas of progress and dread in some journalists. For Editor to work, the lab needed to build a way for machines and humans to supplement each other’s strengths. Humans are great at seeing context and connections and understanding language, while machines can do computations at enormous scale and have perfect memory. Mike Dewar, a data scientist at the Times R&D lab, said the artificial neural network makes connections between the text and an index of terms pulled from every article in the Times archive.

It took around four months to build Editor, and part of that time was spent training the neural network on how a reporter might tag certain stories. Dewar said that teaching the network the way tags are associated with certain phrases or words gives it a benchmark to use when checking text in the future.
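
The article doesn’t say how the network represents those associations, so the sketch below substitutes a deliberately simple, non-neural stand-in: it counts how often known phrases co-occur with editor-assigned tags across an archive, then uses those counts to suggest tags for phrases found in a new draft. The ArchiveArticle shape and both function names are assumptions for illustration, not the lab’s implementation.

```typescript
// A simplified, non-neural stand-in for the association step: count how often
// each phrase appears in archive articles carrying a given tag, then use those
// counts to suggest tags for phrases in a new draft. The archive record shape
// here is an assumption, not the Times' actual format.

interface ArchiveArticle {
  text: string;
  tags: string[]; // editor-assigned tags, e.g. ["Dickens, Charles", "Museums"]
}

type PhraseTagCounts = Map<string, Map<string, number>>;

// "Training": record how often each known phrase co-occurs with each tag.
function buildIndex(archive: ArchiveArticle[], phrases: string[]): PhraseTagCounts {
  const index: PhraseTagCounts = new Map();
  for (const article of archive) {
    const lower = article.text.toLowerCase();
    for (const phrase of phrases) {
      if (!lower.includes(phrase.toLowerCase())) continue;
      const counts = index.get(phrase) ?? new Map<string, number>();
      for (const tag of article.tags) {
        counts.set(tag, (counts.get(tag) ?? 0) + 1);
      }
      index.set(phrase, counts);
    }
  }
  return index;
}

// Suggestion: for each indexed phrase found in the draft, return the tag
// it co-occurred with most often in the archive.
function suggestTags(draft: string, index: PhraseTagCounts): Map<string, string> {
  const suggestions = new Map<string, string>();
  const lower = draft.toLowerCase();
  for (const [phrase, counts] of index) {
    if (!lower.includes(phrase.toLowerCase())) continue;
    const best = [...counts.entries()].sort((a, b) => b[1] - a[1])[0];
    if (best) suggestions.set(phrase, best[0]);
  }
  return suggestions;
}
```

A real system would normalize text, weight rare tags, and learn representations rather than rely on exact string matches, but the benchmark idea is the same: past tagging behavior serves as the yardstick for new suggestions.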

The biggest challenge was latency, as Editor works to make connections between what’s being written and the index of tags. In order for Editor to be really effective, it has to operate at the speed of typing, Dewar said: “It needs to respond very quickly.”
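
The article doesn’t describe how the lab kept requests at typing speed; one common client-side tactic, sketched here using the same hypothetical annotateDraft client as above, is to debounce requests until the writer pauses and to discard responses that arrive out of order.

```typescript
// One common way to keep per-keystroke annotation responsive without flooding
// the service: wait for a short pause in typing before sending, and ignore any
// response that is no longer the most recent request.
// annotateDraft and Annotation are the hypothetical pieces sketched earlier.

function makeDebouncedAnnotator(
  onAnnotations: (annotations: Annotation[]) => void,
  delayMs = 250,
) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let latestRequest = 0;

  return (draft: string): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(async () => {
      const requestId = ++latestRequest;
      const annotations = await annotateDraft(draft);
      // A newer request was issued while this one was in flight; drop it.
      if (requestId !== latestRequest) return;
      onAnnotations(annotations);
    }, delayMs);
  };
}

// Usage sketch: call the returned function on every edit; the interface only
// updates once the writer pauses for a quarter of a second.
// const onEdit = makeDebouncedAnnotator(renderHighlights); // renderHighlights is hypothetical
```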

. . . .

Robots continue to expand their foothold in the world of journalism. In March, the AP said it planned to use its automated reporting services to increase college sports coverage. Lloyd has experimented with how bots can work more cooperatively with people, or at least learn from them and their Slack conversations.

Link to the rest at NiemanLab

7 Books About Life After a Civil War

From Electric Lit:

I remember traveling in the north of Sri Lanka, two years after the civil war, in areas where some of the worst fighting had taken place, and seeing yellow caution tape cordoning off large tracts of land. Signs warned in several languages of land mines. Later, I sat, safely ensconced in a Colombo café, as the leader of an NGO showed me pictures of women, protected by nothing more than plastic visors, crouched over piles of dirt and sand with implements that looked surprisingly like the kinds of rakes and hoes you find at a local Home Depot. The work of clearing the land of mines, she told me, would likely take two decades.

I started working on my latest collection, Dark Tourist, after that 2011 trip as a way of exploring aftermath. Once the fighting has stopped, the ceasefire arranged, the peace treaty signed, we turn our attention to the next conflict, too often ignoring the repercussions of the trauma and the attempts to heal. I wanted to explore the ways that grief marks us and also the ways we manage to survive, to persevere, and to reckon with and make stories of our memories.

. . . .

Some of the books explore the impact of conflict on individuals who are trying to manage deep traumas. Others document the impact on generations one or two decades removed from the fighting. All the works are testament to the need for fiction, creative nonfiction, and poetry to document and give voice long after the journalists and the NGOs decamp to other hot zones.

A Passage North by Anuk Arudpragasam

Anuk Arudpragasam’s novel A Passage North begins with an invocation to the present:

“The present, we assume, is eternally before us, one of the few things in life from which we cannot be parted.”

The novel goes on to carefully unravel that opening assertion. The present of the protagonist, Krishan, is impinged on by multiple losses: the death of his father in a bombing during the height of the civil war; the estrangement of a lover, an activist who refuses to return to Sri Lanka; the imminent death of his aging grandmother; and his duty to her former caretaker. As Krishan undertakes the titular voyage, the novel transforms into a meditation on loss and grief and also a reckoning with the ways his sorrow often blinds all of us to the suffering around us.

The Best We Could Do by Thi Bui

In a reversal of the traditional immigrant story, Thi Bui, in her graphic memoir, sets out to understand why her parents, both refugees from Vietnam, have failed her and her siblings. Bui’s delicate ink wash drawings provide a careful and detailed reconstruction of her father and mother’s experiences during the Vietnam War and their losses: the separation from family members, exile from home, the death of a child. As the memoir progresses, it becomes clear that Bui’s intent is not merely to document but to reconstruct, to re-vision, and finally, with deep care and compassion, to make her parents’ story truly part of her own.

Link to the rest at Electric Lit


Brilliant Together: On Feminist Memoirs

From Public Books:

During my first year of college, in 1991, I experienced a feminist awakening, in my horror at the spectacle of Anita Hill’s public humiliation in front of the Senate and the nation. But I didn’t stop to consider how the compound factor of race made her situation both different and worse. The stories we heard from second-wave feminists—our mothers’ generation—were likely to include Gloria Steinem and Take Back the Night. They were less likely to mention Audre Lorde and the Combahee River Collective.

My cohort of women was born between 1965 and 1980, the Pew Research Center’s parameters for Generation X. We grew up with the advances won by first-wave feminists (the right to vote, for example) and by the second-wave feminists of the 1960s and 1970s (widespread access to contraception, civil rights legislation that extended affirmative action to women, legal abortion, and large-scale public conversations about violence against women). The lazy, clichéd view of Gen Xers presumes us to be slackers, cynics, and reflexive ironists; we don’t even merit our own dismissive “OK, boomer” meme.

But we are also women in our 40s and 50s, the inheritors of feminism’s second wave and instigators of the third wave, and we are now in positions of power. With a fourth wave of feminism upon us, what should my generation of activists preserve from earlier stages of the movement, and what should we discard? How should feminist stories be told, and by whom? In particular, what elements of our foremothers’ second-wave feminism still feel essential to us in the 21st century, and how might we consider and address their failures, especially their failures of inclusion? How can we redefine the collective aspects of the feminist movement in a way that will endure?

One of the most important outcomes of the racial reckonings of 2020 is an influx of new feminist memoirs that reexamine the women’s movement from nonwhite perspectives—memoirs that deal explicitly with race and the failings of mainstream white feminism. And even before the events of last year, white second-wave feminists were beginning to engage more meaningfully with these issues, although an enormous amount of work remains.

Recent memoirs by Nancy K. Miller and Honor Moore, white women who are both public intellectuals and active participants in feminism’s second wave, make clear how the gains of the 1960s, ’70s, and ’80s enabled some women to rewrite their narratives. Instead of stories of birth, marriage, and death, they tell stories about friendship, intellectual and artistic development, relationships between mothers and daughters, political and cultural movements, and, above all, the struggle for women’s voices to be heard (and their bodies protected) in a patriarchal society.

Reading these stories now feels like tapping into a larger ongoing narrative, one that celebrates gains made while reminding us how fragile those gains are. Notably, both memoirs take on more than one woman’s story, suggesting that another way to rewrite women’s narratives is to bring them together, to see the power in what is shared. There is no single narrative of a woman’s life, no universal set of markers that defines women’s experience, and no one retrospection that can answer what feminism will become.

In her foundational Writing a Woman’s Life (1988), feminist literary critic and scholar Carolyn Heilbrun observes that writing the lives of notable women poses a challenge because their narratives don’t fit a heroic masculine mode of storytelling. Heilbrun argues that women can’t emulate or draw inspiration from biographies and memoirs that don’t exist: “Lives do not serve as models; only stories do that. And it is a hard thing to make up stories to live by. We can only retell and live by the stories we have read or heard. We live our lives through texts. … Whatever their form or medium, these stories have formed us all; they are what we must use to make new fictions, new narratives.”1 In the same way, feminist biographies and memoirs—and scholarship, journalism, and novels, too—can only make sense of their subjects by changing the way the story is told.

Feminism itself has never been an uncontested narrative. Debates over the definition of “feminist,” including the right to use this word regardless of political ideology, or the ability of women to support other women while distancing themselves from the label, take up a lot of space in public discourse. In a 2015 piece for The Atlantic, Sophie Gilbert suggests:

Whatever the history, whatever the nuances, whatever the charged sentiments associated with political activism, being a feminist is very simple: It means believing that women are and should be equal to men in matters political, social, and economic. They should be able to vote. They should have equal protection under the law and equal access to healthcare and education. They should be paid as much as their male counterparts are for doing exactly the same job. Do you believe in these things? Then, you are a feminist.

Link to the rest at Public Books

If PG wanted to become a Public Intellectual, how would he go about doing it?

Is there an exam?

Do you major in “Public Intellectual Studies” in college?

Is it inherited? Do one or both of your parents have to be public intellectuals? Will a Public Intellectual grandparent do?

Is there a fee?

Do you get a Public Intellectual certificate to hang on your wall?

How about a Public Intellectual wallet card for when you’re not in your Public Intellectual Study?

Those visitors to TPV who are more informed about this topic than PG can feel free to enlighten him in the comments.

Looking for Trouble

From The Paris Review:

In March 1937, eight months into the Spanish Civil War, Virginia Cowles, a twenty-seven-year-old freelance journalist from Vermont who specialized in society gossip, put a bold proposal to her editor at Hearst newspapers: she wanted to go to Spain to report on both sides of the hostilities. Despite the fact that Cowles’s only qualification for combat reporting was her self-confessed “curiosity,” rather astonishingly, her editor agreed. “I knew no one in Spain and hadn’t the least idea how one went about such an assignment,” she explains innocently in the opening pages of Looking for Trouble, the bestselling memoir she published in 1941. She set off for Europe regardless.

In the four years between arriving in Spain and the publication of Looking for Trouble, Cowles travelled the length and breadth of Europe. She was something of an Anglophile, having been captivated as a child by the stories of King Arthur and his Knights, and thus happily relocated to London, stoically braving its inconveniences—the “lack of central heating, the fogs, the left-hand traffic”—in order to benefit from the front-row seat it offered her to the “sound and fury across the Channel.” In her words, living in the English capital in the late 1930s was “like sitting too near an orchestra and being deafened by the rising crescendo of the brass instruments.”

In 1937, Cowles arrived in Madrid, wearing high heels and a fur coat—the first of quite a few sartorial descriptions in the volume, usually given because the inexperienced Cowles finds herself inadvertently under- or overdressed!—but was soon gamely venturing out to the frontlines, ducking to avoid the bullets that whined “like angry wasps” overhead. When not in the midst of the action, she was holed up in the now famous Hotel Florida, alongside Ernest Hemingway—“a massive, ruddy-cheeked man who went round Madrid in a pair of filthy brown trousers and a torn blue shirt”—and other war reporters. Among them, too, was fellow female journalist Martha Gellhorn, with whom Cowles would forge a close friendship; the two later co-wrote a play loosely based on their experiences, “Love Goes to Press” (1946).

This was the beginning of Cowles’s relatively brief but impressively prolific career in war reporting. She was in Prague during the Munich crisis, and Berlin on the day Germany invaded Poland. In early 1939 she escaped “the gloom of London” by means of a six-week trip to Soviet Russia, hoping for what might be “almost a holiday.” She soon stood corrected, determining Moscow to be “the dreariest city on earth,” the depression of which “penetrated [her] bones like a damp fog.” She’d probably have felt less grim if she hadn’t been so cold, but yet again, she’d arrived inadequately attired: this time without any woollen stockings, naively assuming she’d be able to buy what she needed when she got there. “Good heavens! Do you really think you can buy woollen stockings here?” a shocked French journalist asked when she tried to enlist his help in tracking some down. A year later, she was in Finland—this time clad in a thick suit, fur-lined boots and a sheepskin coat—travelling north towards the Arctic Circle to report on the Winter War, the bloody battle being waged by the Finns against the invading Russians. In June 1940, as everyone else fled the city, she flew into Paris to cover its fall to the Germans. Three months later, she was in London on the first day of the Blitz.

Link to the rest at The Paris Review

When Women Ruled the World

From The Wall Street Journal:

In 1558, John Knox, the energetically tub-thumping Scottish reformer, railed against what would become even more true with the coronation of Elizabeth of England, the future Gloriana herself, a few months later: the unlikely and—to him—hideous rush of women into positions of supreme power in late 16th-century Europe. The cards of the game of birthright were serially falling in a female direction.

Knox’s ire was aimed at Marie de Guise, James V of Scotland’s French widow, who was serving as regent for her daughter Mary Stuart, Queen of Scots. He also targeted Mary Tudor—Mary I—who was, for a fleeting but feverish five-year spell, England’s first queen in her own right, until Elizabeth I succeeded at her (natural) death. Knox’s notorious “The First Blast of the Trumpet Against the Monstrous Regiment of Women” contains his argument in a nutshell: “To promote a Woman to bear rule, superiority, dominion or empire above any realm, nation or city is: A. Repugnant to nature. B. Contumely to GOD. C. The subversion of good order, of all equity and justice.”

Although Knox tried to apologize to Elizabeth for his trollish tract after she ascended the throne—it was only Catholic queens that troubled him, he explained—she never let him travel through her realm. He had to take to the perilous seas or stay put in Scotland. The following year his misery was complete: France’s King Henri II died, leaving his widow Catherine de’ Medici as La Reine Mère, effectively ruling France as the dominant mother of three French kings for the next few decades.

In 1565, Catherine encouraged or commanded her illustrious court poet, Pierre de Ronsard, to refute Knox’s diatribe in a book that celebrated female rule in elegant fashion. The whole is dedicated to Elizabeth, yet the central poem, the bergerie of “Elégies, mascarades et bergeries,” addresses Mary, Queen of Scots. Three decades later, Mary would be brutally and inefficiently beheaded for treason (the ax had to be swung three times), when she was discovered to have plotted against Elizabeth. But now Catherine saw no reason why a gift dedicated to both queens would trouble either: She “utterly discounted any personal jealousy,” according to Maureen Quilligan, and indeed religious difference, in favor of a we’re-all-queens-together spirit of cooperation. History proves that this was wishful thinking on Catherine’s part. Or does it?

This is the bold terrain of “When Women Ruled the World: Making the Renaissance in Europe” by Ms. Quilligan, emerita professor of English at Duke University and author of books on medieval and Renaissance literature. She has come up with an intriguing, interdisciplinary, revisionist argument: that through such “inalienable” gifts as poems, books, jewels and tapestries—that is, the sort of dynasty-defining possessions that are passed through generations—we should reappraise relations between these 16th-century queens presumed to have loathed and envied one another. We should pay attention to their collaborations in life rather than just their competition to the death. Elizabeth and Catherine de’ Medici, for example, negotiated the Treaty of Troyes, succeeding where Henry VIII and François I had failed and achieving a lasting peace between France and England, certainly for their lifetimes.

Link to the rest at The Wall Street Journal (Should be a free link, but if, perchance, it doesn’t work for you, PG apologizes.)

The Book of Mother

From Vogue:

Violaine Huisman’s debut novel, The Book of Mother, tells the story of a 20th- and 21st-century Parisian woman’s life and legacy. Part One is told from the perspective of Violaine, the younger of her two daughters, who is ten when Maman—her beautiful, charismatic, and wildly excessive mother—suffers a breakdown and is hospitalized. Part Two traces the arc of Maman’s, aka Catherine’s, life—from the emotional penury of her hardscrabble, working-class childhood; through her early success (earned through the harshest discipline) as a dancer; to a second marriage that finds her navigating a high-wire act between her life as a woman and the demands of motherhood, while feeling entirely out of place amidst the gauche caviar of upper-class Parisian intellectuals; to the betrayals of her third husband, which lead to her undoing. In Part Three, her daughters, now grown women, deal with Maman’s complex legacy.

I lived with the novel’s larger-than-life characters for months while translating Huisman’s winding, revved-up (and at times, improbably comic) Proustian sentences. I heard their voices and felt the shadow of history and the Shoah hanging over them as they breathed the heady air of Paris in the ’70s and ’80s, with its boutiques, salons, and swinging night clubs. More recently, I sat down with Violaine, who had returned briefly to New York—her home for the past 20 years—in the midst of an extended sojourn in France, to talk about The Book of Mother. The conversation that follows, over lunch at Café Sabarsky, has been edited and condensed.

In all our discussions about the book while I was translating it, I never asked you, how did you come to write The Book of Mother?

There were two moments of genesis. Ten years before the book’s publication in France [in 2018], I wrote my mother’s life story, but as a monologue, using only her voice. It was similar to the voice that I use in the novel for her tirades and harangues—that long, digressive, angry, wild tone.

I showed that manuscript to a publisher who admired it and gave me some suggestions, but I couldn’t find a way to revise it. Then, one year later, my mother died, and it became impossible to revise it. And then, two years after my mother died, I had my first child, and two years later, the second one.

So there was all this time of, literally, gestation. I realized that becoming a mother gave me a completely different perspective on who my mother was. I started understanding the conflict that she had faced, between her womanhood and her motherhood. So that was a huge turning point for me.

And then, days after coming home from the hospital after giving birth to my younger child, with the baby on my lap, I read 10:04, Ben Lerner’s second novel, and I had this epiphany, which was that in fiction—whether you are writing about your own stories or those of others—facts don’t matter. Facts are only relevant when it comes to history. I realized then that I had to distance myself from facts in order to give shape to my mother’s story, to create a coherent narrative. That’s something that Ben Lerner writes and talks about very beautifully, that fiction is the imaginative power to give form to the real, to make sense of the chaotic nature of living.

Because life makes no sense.

Life makes no sense. And the truth is, my mother didn’t know, my father didn’t know, why things happened that way. But fiction has the ability to create logic where there is none, to give coherence and stability to the story in a way that feels very powerful and personal.

And then, when the structure of the novel came to me—its organization in three parts—I knew even before I started writing exactly how it would be laid out. And that’s how I was able to write it.

Link to the rest at Vogue