Hitler’s American Gamble

From The Wall Street Journal:

Most Americans, if asked, would probably say that Franklin Roosevelt declared war on Nazi Germany, following the bombing of Pearl Harbor on Dec. 7, 1941. Since we were at war with imperial Japan, the logic would run, we were obliged to be at war with Japan’s Axis ally.

In fact, it was Adolf Hitler who declared war on the United States—four days after Pearl Harbor, on Dec. 11, 1941. By doing so, he managed to bring the full weight of America’s industrial might against him. The war declaration ranks as Hitler’s worst strategic blunder—even worse than his decision to invade the Soviet Union in June 1941, when he pitted the Wehrmacht against an opponent with much greater manpower reserves and strategic depth.

In “Hitler’s American Gamble,” Brendan Simms and Charlie Laderman—historians at the University of Cambridge and King’s College, London, respectively—provide an engaging and insightful account of the forces that shaped Hitler’s fateful decision. The authors note that, far from being an irrational or impulsive gesture, Hitler’s war on America “was a deliberate gamble.” It was driven, in part, by “his geopolitical calculations” and “his assessment of the balance of manpower and matériel.” The decision derived as well, the authors assert, from Hitler’s tortured view of the relations among Britain, the U.S., and, not least, the Jews in both Europe and America.

On the eve of Pearl Harbor, Hitler was heavily engaged in waging war on Britain, and he seemed close to winning. Yet he was hesitant to deliver the knock-out blow. He had already missed an opportunity to do so in the spring of 1941, when Britain evacuated Greece and Crete and Rommel’s Afrika Korps was scoring success after success against British forces in North Africa. Hitler believed that his real enemy was Winston Churchill, not the British people, and that the British people would eventually give up the fight and accept Nazi hegemony in Europe. At the same time, he was well aware that it was the U.S. and its supplies of food and war matériel—sent across the Atlantic under the terms of Lend-Lease—that were keeping Britain in the fight.

In Hitler’s mind, then, America was a grave threat to his plans for German hegemony—indeed, Germany was locked in deadly combat with “the Anglo-Saxon powers,” Britain and the United States. But that is not all. Hitler believed that, as Messrs. Simms and Laderman put it, “ ‘the Jews’ had manipulated the ‘Anglo-Saxons’ into war with the racially kindred Reich.” Race-paranoia was a critical component of Hitler’s “gamble.”

Hitler was convinced that Japan’s surprise attack would divert U.S. resources and attention just long enough to secure Britain’s isolation and surrender. And the German panzer divisions poised only 12 miles from Moscow signaled the imminent collapse of his only other opponent, Russia. In the event, he was wrong on both counts. What was about to collapse in Russia wasn’t the Red Army but the Wehrmacht, as Hitler’s panzers were thrown back from Moscow and nearly half a million German soldiers perished in the winter of 1941-42. Meanwhile, Japan’s victories in the aftermath of Pearl Harbor proved to be too brittle to last.

Link to the rest at The Wall Street Journal (Should be a free link, but if it stops working, PG apologizes for the paywall.)

Live Like the Ancient Cynics

From The Atlantic:

There are a growing number of Marxists today. By which I mean followers of Groucho, not Karl. “Whatever it is, I’m against it,” Marx sang in his 1932 film, Horse Feathers. “I don’t know what they have to say / It makes no difference anyway.”

What was satire then is ideology today: Cynicism—the belief that people are generally morally bankrupt and behave treacherously in order to maximize self-interest—dominates American culture. Since 1964, the percentage of Americans who say they trust the government to do what is right “just about always” or “most of the time” has fallen 53 points, from 77 to 24 percent. Sentiments about other institutions in society follow similar patterns.

Whether cynicism is more warranted now than ever is yours to decide. But it won’t change the fact that the modern cynical outlook on life is terrible for your well-being. It makes you less healthy, less happy, less successful, and less respected by others.

The problem isn’t cynicism per se; it’s that modern people have lost the original meaning of cynicism. Instead of assuming that everyone and everything sucks, we should all live like the ancient Greek cynics, who rebelled against convention in a search for truth and enlightenment.

The original cynicism was a philosophical movement likely founded by Antisthenes, a student of Socrates, and popularized by Diogenes of Sinope around the fifth century B.C. It was based on a refusal to accept the assumptions and habits that discourage people from questioning conventional dogmas, and thus hold us back from the search for deep wisdom and happiness. Whereas a modern cynic might say, for instance, that the president is an idiot and thus his policies aren’t worth considering, the ancient cynic would examine each policy impartially.

The modern cynic rejects things out of hand (“This is stupid”), while the ancient cynic simply withholds judgment (“This may be right or wrong”). “Modern cynicism [has] come to describe something antithetical to its previous meanings, a psychological state hardened against both moral reflection and intellectual persuasion,” the University of Houston’s David Mazella wrote in The Making of Modern Cynicism.

There were no happiness surveys in Antisthenes’s times, so we can’t compare the ancient cynics’ life satisfaction with that of those around them who did not share their philosophy. We can most definitely conclude, however, that modern cynicism is detrimental. In one 2009 study, researchers examining negative cynical attitudes found that people who scored high in this characteristic on a personality test were roughly five times more likely to suffer from depression later in life. In other words, that smirking 25-year-old is at elevated risk of turning into a depressed 44-year-old.

Modern cynics also suffer poorer health than others. In 1991, researchers studying middle-aged men found that a cynical outlook significantly increased the odds of death from both cancer and heart disease—possibly because the cynics consumed more alcohol and tobacco than the non-cynics. In one 2017 study on middle-aged Finnish men, high cynicism also predicted premature mortality. (Although both of these studies involved only men, nothing suggests that the results are gender-specific.)

Link to the rest at The Atlantic

America’s Ever-Expanding Criminal Code

From The Wall Street Journal:

How many federal crimes has Congress created? The question seems like it ought to have a straightforward answer that citizens can look up. In fact, it’s more like asking, “How many genes are in the human genome?” The answer is in the many thousands, but despite decades of counting, no one knows for sure.

A new project by the Heritage Foundation and George Mason University’s Mercatus Center says it is “the first effort to ‘count the Code’ since 2008.” The researchers created an algorithm with key phrases like “shall be punished” and “shall be fined or imprisoned” to search tens of thousands of pages in the U.S. Code.

In the 2019 Code, they found 1,510 criminal sections. By examining some of those sections at random, they estimated that they encompass 5,199 crimes in total. The Heritage Foundation report notes that “there is no single place where any citizen can go to learn” all federal criminal laws, and even if there were, some “are so vague that . . . no reasonable person could understand what they mean.”

By running their algorithm on past versions of the U.S. Code going back to 1994, the researchers also estimate the rate at which criminal laws are proliferating. There were about 36% more criminal sections in 2019 than 25 years earlier, for an overall growth rate of 1.27% per year. More than half of the growth took place from 1994 through 1996. Since the mid-1990s, the biggest annual increases were in 2005-2006 (2.48%) and 2011-2012 (2.76%).

These figures, the report emphasizes, don’t cover the 175,000-page Code of Federal Regulations, which contains an unknown number of crimes created by executive-branch officials under authority delegated by Congress. The results can be grimly amusing. Defense lawyer Mike Chase has highlighted many examples, such as a 2006 regulation that creates a potential five-year prison sentence for bringing more than $5 of nickels out of the U.S.

But even when it comes to conduct everyone agrees should be criminal, the inexorable expansion of the Code has serious consequences for justice and federalism. The Constitution envisioned that most lawbreaking would be handled by state governments, while the federal government’s jurisdiction would be narrower.

As Congress asserts jurisdiction over conduct already criminalized by states, however, that division erodes. “Duplicative” laws mean prosecutors can “charge different people committing the same offenses with different crimes, opening the door for bias,” the report notes.

Or they can be prosecuted twice for the same offense. The Supreme Court has held (most recently in 2019’s Gamble v. U.S.) that consecutive state and federal prosecutions don’t violate the Fifth Amendment’s double-jeopardy clause.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

In lieu of a rant, a sense of PG’s thoughts concerning the WSJ article, from The Oxford Eagle:

Lavrentiy Beria, the most ruthless and longest-serving secret police chief in Joseph Stalin’s reign of terror in Russia and Eastern Europe, bragged that he could prove criminal conduct on anyone, even the innocent.

“Show me the man and I’ll show you the crime” was Beria’s infamous boast. He served as deputy premier from 1941 until Stalin’s death in 1953, supervising the expansion of the gulags and other secret detention facilities for political prisoners. He became part of a post-Stalin, short-lived ruling troika until he was executed for treason after Nikita Khrushchev’s coup d’etat in 1953.

Beria targeted “the man” first, then proceeded to find or fabricate a crime. Beria’s modus operandi was to presume the man guilty, and fill in the blanks later.

Link to the rest at The Oxford Eagle

The triumph of culture

From The Economist:

They tore down the statue and rolled it into Bristol harbour, and none of them denied it. Yet this month a jury in England acquitted four people over the toppling of a likeness of Edward Colston, an English philanthropist and leading slave-trader who died in 1721. Part of the case for the defence was unusual for a courtroom, and revealing of the intellectual mood in Britain and beyond. The real offence, said the accused, was that the monument to such a monster was still standing. Facing criminal charges, they made an argument about art, and about history.

In an era of rising nationalism and seething partisanship, some borders—including those between countries and political camps—can seem to be hardening. But others are blurring, such as between politics and culture, statecraft and stagecraft. When the news vies for attention with entertainment, and is relished as meme and soap opera, entertainers have a political edge—and from France to Ukraine, television personalities have exploited it. Poets may no longer be the unacknowledged legislators of the world, but activist sports stars and outspoken children’s authors have a pretty big say.

The substance of public debate has evolved with the personnel, not least in the erosion of another distinction, between the present and the past. Witness the saga of Colston, who splashed back into the news 300 years after his death. A decade ago, the idea that Conservative ministers might lambast the National Trust, staid steward of English country houses—as they have over its interest in slavery and colonialism—would have seemed outlandish. (So, to American voters, would one run for the White House by the star of “The Apprentice”, let alone two.) Whoever controls the past may indeed control the future, but from the streets of post-imperial Britain to the school boards of America, they have a fight on their hands first.

Disputes over whose history is told, how and by whom, in part reflect a struggle over claims on power and virtue today. Adherents of “cancel culture”, that dismal oxymoron, believe some people, living and dead, are too discredited to be heard at all. In these rolling culture wars, The Economist has no fixed side. But neither are we neutral. Our liberal principles suggest that controversial voices should generally be audible—and that some statues should come down.

. . . .

Culture’s role in politics is not the only way it has become more salient. During lockdown, stories on the page and screen have offered vicarious adventures, and a sense of solidarity in adversity, to people across the world. Even as theatres and galleries closed, the technology of culture has developed to match this craving. If covid-19 has coloured the experience of the arts, meanwhile, in time the reverse will also be true: writers and artists will shape how the pandemic is understood and remembered, and we will be watching.

Link to the rest at The Economist

PG is more than a little concerned about cancel culture wherever it appears.

In one respect, the actions of the cancel culture mobs – physical and intellectual – can be classed with book burning. In the Twentieth Century, book burning was most prominently practiced by the Nazis, who burned the books of Jewish authors.

Book burning has a long history, too. The first recorded state-sponsored book burning was in China in 213 BC, according to Matthew Fishburn, the author of Burning Books. The burnings were ordered by Qin Shi Huang, the Chinese emperor who also started the Great Wall and the Terracotta Army.

. . . .

On June 22, 2011, a group in the Netherlands burned the cover of The Book of Negroes, by Canadian author Lawrence Hill, continuing both an ancient and a modern tradition.

Canadian Broadcasting Corporation

. . . .

Panhandling Repertoires and Routines for Overcoming the Nonperson Treatment

In this article, I present panhandling as a dynamic undertaking that requires conscious actions and purposeful modifications of self, performances, and emotions to gain the attention and interest of passersby. I show that describing and theorizing panhandling in terms of dramaturgical routines is useful in understanding the interactions and exchanges that constitute panhandling. In addition, repertoires rightly portray panhandlers as agents engaging the social world rather than as passive social types. From this perspective, sidewalks serve as stages on which panhandlers confront and overcome various forms of the nonperson treatment.

This facet of human nature – oppressing or attacking the Other – is most prevalent and dangerous when a group uses force or violence to punish one or more individuals perceived to be from a different tribe, species, race, or social position: some sort of nonperson who is not a member of whatever group of Übermenschen holds the power to threaten an individual or group that lacks the legal, social, or physical power to deter mistreatment.

Othering is a phenomenon in which some individuals or groups are defined and labeled as not fitting within the norms of a social group and, thus, may be treated differently from those who are members of that group, whether it is a racial, ethnic, educational, or professional class.

It is an effect that influences how people perceive and treat those who are viewed as being part of the in-group versus those who are seen as being part of the out-group. Othering also involves attributing negative characteristics to people or groups that differentiate them from the perceived normative social group.

It is an “us vs. them” way of thinking about human connections and relationships. This process essentially involves looking at others and saying “they are not like me” or “they are not one of us so I am not required to give them the same respect I give those who are like me.”

It takes a mob to cancel an individual.

Othering is a way of negating another person’s individual humanity and, consequently, those who have been othered are seen as less worthy of dignity and respect.

On an individual level, othering plays a role in the formation of prejudices against people and groups. On a larger scale, it can also play a role in the dehumanization of entire groups of people which can then be exploited to drive changes in institutions, governments, and societies. It can lead to the persecution of marginalized groups, the denial of rights based on group identities, or even acts of violence against others.

Verywellmind.com

In the United States, unfortunately, racial/ethnic minorities, religious minorities and language minorities have all been subject to some degree of the cancel culture of a time and place, sometimes geographically localized and at other times widespread.

PG argues that college students who “cancel” the ideas or speech of an individual or group are operating under the influence of the same class of degraded human nature that sent Jews to concentration camps eighty years ago, that killed Native Americans or forcibly ejected them from their homes in the United States, and that saw the evil bourgeoisie who owned the means of production attacked and killed because, by their nature, they were deemed enemies of the proletariat.

Dominant languages can spread even without coercion

From The Economist:

Never think the world is in decline. A recent book, “Speak Not” by James Griffiths, looks at the bad old days when it was seen as acceptable to impose a culture on others through force. The author tells the stories of Welsh and Hawaiian—languages driven to the brink of death or irrelevance before being saved by determined activists.

. . . .

Americans fomented a coup in Hawaii that led to its eventual annexation. Missionaries built schools and fervently discouraged local customs like the hula, a performance in honour of ancestors that the Americans considered lascivious. Oppression of culture and of the language went hand in hand: by the late 20th century the only fluent Hawaiian-speakers were worryingly old. But activists fought to expand teaching of it, and eventually brought Hawaiian into many schools. The number of speakers is now growing. Even some of the state’s many citizens of other ethnicities find it fashionable to learn a bit.

Welsh survived centuries of union with England largely because of Wales’s relative isolation and poverty. But in the 19th century British authorities stepped up efforts to impose English; schoolchildren had to wear a token of shame (the “Welsh Not”) if they spoke their native language, the kind of tactic seen in language oppression around the world.

Again, activists fought back. In 1936 three of them set fires at an air-force training ground built despite local opposition. The perpetrators turned themselves in, then refused to speak any language but Welsh at their first trial. It ended in a mistrial; their second resulted in a conviction, but on their release nine months later the arsonists were feted as heroes. They had lit a fire under Welsh-language nationalism, which in later decades would not only halt the decline in Welsh-speakers, but reverse it. Today the right to speak Welsh at trial (and in many other contexts) is guaranteed.

Mr Griffiths’s book ends with a sadder tale. Though Mandarin is the world’s most-spoken native language, China still has hundreds of millions of native speakers of other Chinese languages such as Cantonese (often misleadingly called “dialects”), as well as non-Han languages like those used in Inner Mongolia and Tibet. Evidently regarding this variety as unbefitting for a country on the rise, the authorities have redoubled their efforts to get everyone speaking Mandarin—for instance by cutting down Cantonese television and resettling Han Chinese in Tibet, part of a wider bid to dilute its culture. A regime indifferent to the tut-tutting of outsiders can go even further than American and British colonialists.

But English spreads by less coercive means, too. Rosemary Salomone’s new book, “The Rise of English”, tells the tale of a language that has gone from strength to strength after the demise of Britain’s empire and perhaps also of America’s global dominance. These two forces gave English an impetus, but once momentum takes hold of a language, whether of growth or decline, it tends to continue. Everyone wants to speak a language used by lots of other influential people.

Link to the rest at The Economist

The Hotel of the Idle Moon

From The Millions:

It feels safe to say that no other writer of great stature wrote more often than William Trevor about old people. Only Alice Munro comes close—perhaps V. S. Pritchett. Here’s an incredibly stupid admission: when I first encountered Trevor, it made perfect sense to me that he would write so much about old people, since he so perfectly embodied the platonic ideal of an old person. Those twinkly, wise eyes! That signature Irish walking hat! I was in my 30s, in the late aughts, when I first read Trevor, and he was in his late 70s. It did not occur to me that he had not always been in his late 70s, that many of the multitude of stories were written when he was my age at the time.

But enough about my stupidity, which I would prefer to reveal slowly over the course of this project, rather than all at once in an information dump. I mentioned Trevor’s interest in the elderly in a previous entry, the way aging is simpatico with his central theme, i.e. coming to terms with one’s life. But it’s more than that. Advanced age and accompanying senescence are, if not an obsession, a fixation. Sometimes, with writers, you viscerally sense the person, place, thing, idea, or general theme that quickens their pulse—you can hear the fingers tap that much faster on the typewriter. With Charles Portis, it’s cars, or more broadly, mechanical objects; with Ottessa Moshfegh it’s any bodily-related function: peeing, pooping, barfing. I sense, in Trevor’s stories, that quickening when it comes to senility, sundowning, the general incapacity of age.

In several of these stories we’ve covered already, an old person’s vulnerability provides a key plot point: Miss Winton’s fuddled inability to wrest control of the situation in “The Penthouse Apartment”; Miss Efoss becoming overwhelmed and subsumed by the Dutts’ desire for a child in “In at the Birth”; General Suffolk’s progressive drunkenness and weakness in “The General’s Day.” Even the titular Miss Smith, a relatively young person, undergoes a sort of premature dotage. This week’s story, “The Hotel of the Idle Moon,” provides the most explicit version of this yet, as a pair of married con artists, the Dankers, invade the country home of the ancient Marstons and their equally ancient servant Cronin. They likely poison Lord Marston, and proceed to consign the Lady and Cronin to a small wing of the house while they convert the rest into a hotel. Cronin dreams of cutting their throats with a razor strop, but in the end, he understands it to be absurd that he “imagined himself a match for the world and its conquerors.”

Link to the rest at The Millions

Dear John

From The Wall Street Journal:

“If the Army had wanted you to have a wife, it would have issued you one.” It’s an oft-repeated quip within the armed forces. As Susan Carruthers demonstrates in “Dear John: Love and Loyalty in Wartime America,” it takes a very sturdy relationship to survive the institutional culture of the military.

Ms. Carruthers, a professor of U.S. and international history at the University of Warwick in England, takes as her central motif the “Dear John letter”—a breakup note sent by a woman at home to her man in uniform overseas. The term was first used, we are told, in a national newspaper in October 1943. Such letters have since become a symbol of the female treachery that can damage a man as deeply as the wartime loss of life or limb.

The author acknowledges that women had written rejection letters before—Ernest Hemingway received one after being hospitalized during World War I. But World War II involved more troops and lengthier overseas service, putting more romantic relationships under strain for longer periods of time.

In subsequent years, during the wars in Korea, Vietnam, Iraq and Afghanistan, Dear Johns have been mythologized within both popular culture and the armed forces. In 1953, Jean Shepard warbled: “Dear John oh how I hate to write / Dear John I must let you know tonight / That my love for you has died away like grass upon the lawn / And tonight I wed another dear John.”

The armed forces’ distrust of romantic relationships—and the apparent misogyny that underlies this view—ripples throughout Ms. Carruthers’s prose. From the start, the military feared that wives posed an alternative pole of attraction, pulling enlisted men’s attention away from duty and discipline. Writing in the Ladies’ Home Journal in 1942, the advice columnist Gretta Palmer told readers: “Among the officers, there is an unofficial belief that ‘a colonel must have a wife, a major should, a captain may and a lieutenant mustn’t.’ ”

Women who wrote letters to their sweethearts or husbands on the front were encouraged to make their missives sunny and supportive. A soldier’s rage at receiving a Dear John letter reflected his sense of betrayal. This sentiment was captured by Gen. George Patton when he said that women who wrote Dear John letters “should be shot as traitors.” There was no room in this picture for a woman’s gnawing anxieties, loneliness or sense of abandonment.

. . . .

Analysis of military wives ramped up in the 1970s as Army psychiatrists and psychologists began publishing studies of their behavior. During the Vietnam era, according to these studies, these women were full of inexpressible rage against both their absent husbands and the pressures to satisfy their husbands’ emotional needs while endlessly stifling their own. Returning prisoners of war were shocked to find that, in their absence, some of their wives had joined the antiwar movement. “The ending of marriages was woven into a larger national tapestry of loss,” Ms. Carruthers argues. “A lost war, lost respect for traditional values, lost male authority, lost national valor all tied together by allegations of individual and institutional disloyalty.” Yet Ms. Carruthers finds no evidence that any Dear John letter was prompted by disapproval of the war.

In her chapters dealing with emotional injuries and suicide, Ms. Carruthers discusses how the association between lost loves and lives lost became entrenched, especially after 2003, when the armed forces began compiling suicide statistics. The proposition that a romantic breakdown was the No. 1 precipitating event for active-duty suicide was treated as a claim that needed no further corroboration.

Yet, as Ms. Carruthers points out, precipitants are not necessarily causes. There are many contributing factors to the suicide of a psychologically vulnerable soldier, not least of which is that distance aggravates existing problems in a marriage. A relationship that already included domestic violence, infidelity, money problems, sexual dysfunction or other conflicts will not blossom when one partner is in Kansas and the other is in Kabul. The author suggests that “it was (and still is) easier for some military commanders and psychiatrists to castigate failing relationships than to candidly reckon the psychological toll of prolonged war-waging.” A raft of new programs has recently been introduced to help soldiers build resilient relationships, but the programs still imply that “it’s the job of women to preserve ‘their’ soldier’s mental health.”

Link to the rest at The Wall Street Journal (This should be a free link, but if you hit a paywall, PG apologizes; he hasn’t figured out a way around it.)

What a previous iconoclastic period reveals about the present one

From The Economist:


They struck at night, but many people must have seen them. First a group of young men stretched ropes across Cheapside, an east-west thoroughfare in the City of London, to block traffic. Then they attacked one of the largest, most famous images in Britain.

Cheapside Cross was a stone monument to Eleanor of Castile, queen consort of Edward I, which had stood in the capital since the 1290s. It was a tiered structure, rather like the candle-powered Erzgebirge pyramids that some put on their Christmas tables. It contained statues of God, Mary, a dove representing the Holy Ghost and other things offensive to contemporary eyes.

The iconoclasts probably could not reach the crucifix on top of the monument, which was as much as ten metres from the ground. They tried toppling lower statues by yanking on ropes, but failed. So they plucked the infant Christ from Mary’s lap, defaced her and smashed the arms off other images. Then they vanished. A reward was offered for information on the attackers, but there were no takers.

That attack took place in June 1581. It was just one of many on images in Britain between the 1530s and the 1640s. Indeed, it was only one of those on Cheapside Cross. The monument was assaulted again in 1601—when Mary lost her child for a second time and was stabbed in the breast—and was finally demolished in 1643, on the orders of the Parliamentary Committee for the Demolition of Monuments of Superstition and Idolatry. Change a few details, though, and it could have been in 2021.

Britain is in the midst of an image controversy, centred on the many public statues erected by the Victorians. In June 2020 a crowd inspired by the Black Lives Matter movement used ropes to pull a 125-year-old statue of Edward Colston, a slave trader and local benefactor, from its plinth and dumped it into Bristol harbour. Other statues were sprayed with paint. Local governments hurriedly removed some objects before crowds could get to them. A statue of Thomas Picton, a particularly violent governor of colonial Trinidad, was boxed up in Cardiff City Hall; Robert Milligan, a slave owner, was hauled out of West India Quay in east London on a flatbed lorry.

Local authorities and other organisations have launched inquiries into statues, monuments, murals and street names. Some of these have already begun to report. An impressively detailed one for the Welsh government found 13 items commemorating slave traders, as well as 56 memorials to people who owned plantations or benefited directly from slave labour, 120 to people who opposed the abolition of slavery and still others to colonialists. Officials have not yet decided what to do about many of them. Other investigations, including one commissioned by the mayor of London, continue.

The reaction to this assault on historical images has been just as fervent. Vigilantes, most of them polite, have formed protective cordons around statues, including those of Mahatma Gandhi and Lord Baden-Powell. Newspapers and politicians rail against vandals both unofficial and official. In his speech to the Conservative Party conference in October 2021, Boris Johnson condemned “know-nothing cancel-culture iconoclasm”. His government has written to museums, threatening to cut their funding if they remove images.

Large differences exist between the iconoclasm of the 16th and 17th centuries and today’s rows. The earlier iconoclasts had different motivations and different ideas about how images worked on the mind. They were far more destructive than modern iconoclasts. Medieval churches were filled with murals and painted statues of saints, almost all of which have been destroyed. Not one English parish church retains all its pre-Reformation stained glass; St Mary’s Church in Fairford, in the Cotswolds, comes closest.

But there are similarities between the iconoclastic waves, too, which ought to discomfort conservatives. Those who oppose removing or destroying images today often argue that people should learn about history, not try to eradicate it. They mean that historical figures should be studied in the round and placed in the context of their times, rather than judged solely by modern criteria. But iconoclasm also has a history, which in Britain is long and largely triumphant: the hammers tend to prevail.

Although Renaissance iconoclasts sometimes erased political symbols, most of their targets were religious. They were following divine law, as they interpreted it. British Protestants argued that the words of the law given to Moses on Mount Sinai, “Thou shalt not make unto thee any graven image,” began a separate commandment—the second—which specifically proscribed idolatry. They also cited examples from the Old Testament of believers destroying images. Moses burned the golden calf and ground it into powder. A statue of Dagon, a Philistine god, was magically destroyed when the Ark of the Covenant was placed in its temple, losing its head and its hands.

The reformers believed that images were actively dangerous. Objects were thought to interact with the people who gazed upon them, including by staring back. To look at an image was almost to embrace it or consume it. As Margaret Aston, the leading historian of English iconoclasm, pointed out, medieval churchgoers usually experienced the Eucharist only by looking at it. They believed it had a powerful effect on them nonetheless. Iconoclasts believed that images worked on people through their eyes, tugging them back into idolatry. Those images had to go.

. . . .

This is far removed from the modern understanding of images. Those who argue for the removal of statues today claim not that images are harmful in themselves, but that their presence in public places signals institutional reluctance to root out racism and other enormities. Students at Oriel College in Oxford who want a statue of the colonialist Cecil Rhodes to be taken down from its perch above the High Street argue that it “can be symbolically communicative of a subtler and insipid prejudice in Oriel and the university”.

Modern iconoclasts are also far more restrained than their predecessors. Topple the Racists, a website that lists images and memorials deemed offensive (some of which have already been removed) contains 151 items. In 1643, during the English civil war, the zealous iconoclast William Dowsing eradicated at least 120 images in Jesus College, Cambridge in a single day. Attacks were often violent. In 1559 a congregation in Perth, in Scotland, smashed a tabernacle above the altar by flinging stones at it during the service.

Link to the rest at The Economist

PG is generally opposed to any action that impairs the understanding of history, regardless of whether it offends the sensibilities of some of those who weren’t alive when that history was made.

The method in history’s madness

From The Guardian (in 2007):

As with all good ideas, one wonders why this one had not been thought of before. Despite countless books about the second world war, [Fateful Choices: Ten Decisions that Changed the World 1940-41 is] the first to examine the key decision-making processes during this crucial early period in sequence, and how fortunate that it is Ian Kershaw bringing his immense knowledge and clarity of thought to the task.

Major wartime decisions often appear either inevitable or idiotic, but that is because we view them in retrospect and often in isolation. Kershaw’s great strength is to explain the emotions as well as the circumstances that framed the choices. And he then shows how one decision affects the next. History may be “one damn thing after another”, but cause and effect is everything.

Kershaw begins with Churchill’s war cabinet in May 1940. French resistance had virtually collapsed and the British army, retreating towards Dunkirk, seemed to be doomed to destruction. French leaders wanted to approach Mussolini to discover what Hitler’s terms would be. The British war cabinet came close to following down that track, mainly influenced by the foreign secretary, Lord Halifax, but Churchill and others realised the danger just in time. Even to ask about conditions would undermine any attempt to fight if the terms were unacceptable. Churchill called it “the slippery slope of negotiations”.

Britain’s decision to fight on was crucial to the fate of western Europe. Only America had the power to reverse Nazi conquest, and Britain provided the only base to fight back. Hitler, whose main objective was the total subjugation of the Soviet Union, faced a quandary. Should he attack Britain directly with Operation Sealion? That was too dangerous with the Royal Navy and RAF intact. Should he follow the so-called “peripheral strategy”, of crushing British power in the Mediterranean and Middle East, although it would be impossible to reconcile the conflicting expectations of Mussolini, General Franco and Marshal Pétain? Or should he ignore the Bismarckian taboo of fighting a war on two fronts, and invade the Soviet Union before the United States could intervene? A rapid defeat of the Red Army, he argued, would force Britain to capitulate before Roosevelt could coax a reluctant Congress into all-out support. “It was madness,” concludes Kershaw, “but there was method in it.”

Roosevelt had to keep Britain in the war. The United States, he declared, should be the “great arsenal of democracy”. The first symbolic step was to hand over 50 antiquated destroyers. The next, and incomparably greater one, was Lend-Lease, providing the money and the weapons for the war. FDR suspected he could not carry the country until one of the Axis powers attacked the United States. Churchill was privately exasperated, but it is hard to fault Roosevelt’s instincts and his handling of events.

Mussolini showed the opposite of caution. Feeling patronised by Hitler, he launched a hopelessly inept attack on Greece from Albania without warning Berlin. Hitler was furious that the Balkans should be stirred up at the worst moment. The Wehrmacht then invaded Yugoslavia and Greece in the spring of 1941, which at least secured the southern flank for the invasion of the Soviet Union and protected Romanian oil reserves. Hitler, who remained sceptical of the airborne invasion of Crete in May, was reassured that the Allies could not use it later as a bomber base to attack the Ploesti oilfields. Hitler later claimed that this diversion southwards delayed the opening of Operation Barbarossa with fatal consequences, because the Wehrmacht was unable to reach Moscow before the winter. But Kershaw rightly discounts this. The heavy rains in central Europe that spring prevented the Luftwaffe from deploying to forward airfields.

Stalin, meanwhile, had persuaded himself that Hitler would never invade the Soviet Union before defeating Britain. The Nazi leader played cleverly on this idea, claiming that the troops massing on the border were being concealed there from the RAF while he prepared his assault on southern England. The Soviet dictator did not dare face the truth, because the Red Army was still in such a pitiful state after the purges and the neglect of his own crony, Marshal Voroshilov. He instinctively viewed British warnings of a Nazi attack as a deliberate “provokatsia” to force the Soviet Union to help an imperilled British Empire. Hitler, however, suffered from his own blind spot. He had failed to see any lessons in Japan’s cruel war in China launched in 1937. The vastness of China meant that the imperial army was overstretched, and its conspicuous brutality was counterproductive. It provoked resistance, not submission.

Ironically, the Wehrmacht’s overwhelming defeat of France had been the trigger for Japanese hopes, their “golden opportunity” to seize the French, Dutch and British colonies of southeast Asia. The hubris of the military-dominated Japanese government grew. Its leaders decided to strike south into the Pacific rather than attack the Soviet Union, partly because its army had received a bloody nose in 1939 at Khalkin-Gol from Red Army divisions commanded by General Georgi Zhukov. During the late summer and autumn of 1940, while Hitler began to plan his immense gamble, they considered attacking western colonies on the Pacific rim.

Link to the rest at The Guardian

It appears that PG has been in a Twentieth-Century military history frame of mind today.

While this will not be a permanent focus of TPV, as long-time visitors know, PG is of the opinion that the aftermath of World War II, which began over 80 years ago, continues to have a profound impact on the shape of the world today.

For one thing, the ending of the war divided Europe into two spheres of influence, the border of which was the approximate boundary between the United States’ and the Soviet Union’s armies at the time the war ended. The Western portion would be under the protection of the United States, and Eastern Europe would lie in the Soviet sphere. Germany would be divided between areas controlled by Western and Soviet militaries. Berlin, the wartime capital, which lay in East Germany, was similarly divided between Russia and the West (the US, Britain, and France were each in charge of a portion of West Berlin).

As one important and lasting economic change, the shattered economies of Western Europe received significant help in the aftermath of the war from the Marshall Plan, also known as the European Recovery Program. Enacted in 1948, it provided substantial financial aid from the United States, which had not been subject to invasion or major land battles during the war, to help rebuild cities, industries, and infrastructure in Western Europe.

In 1951, France, West Germany, Italy, and the Benelux countries formed the European Coal and Steel Community (ECSC), integrating their coal and steel industries and beginning the process of removing long-standing trade barriers between the nations of Western Europe. In 1957, six European nations met in Rome to create what was then known as the European Economic Community, or Common Market, which removed inter-European tariffs and other trade barriers. The accession of the United Kingdom to the European Communities was finalized in 1973.

This process was encouraged by the United States, which had not been subjected to the destruction of its homeland during the war. Additionally, the lack of trade barriers between the individual states in the United States provided an example of the economic benefits that open borders could provide.

Creating a Classic of Military Literature

From Publishers Weekly:

In the summer of 1942, during the first seven weeks of fierce fighting between U.S. Marines and the Japanese on Guadalcanal, an island in the Solomons, the Americans were watched over by a young correspondent from the International News Service, Richard Tregaskis. In his pockets he carried notebooks, on which he wrote key details about the brutal conditions faced by the Marines in this first major combat offensive in the Pacific theater.

Tregaskis would transfer the information nightly into a black, gilt-edged diary. “The theory and practice was that I could get all the details I needed by referring to the notebook number—one, or three, or four—when and if I could later get to writing a book from my notes,” he recalled.

After leaving Guadalcanal on a B-17 Flying Fortress on September 25, Tregaskis went to New Caledonia, where he was waiting for a military transport plane to take him to Honolulu, and began writing his book. In Hawaii, his writing had to be done in the Navy offices at Pearl Harbor, under the supervision of a censor. Every morning he would go there to work, and every night, his diary was locked in a safe; he never got it back and could not find out what happened to it. “And as fast as I could write my manuscript, a naval intelligence officer took my efforts and hacked away with a pencil and a pair of scissors,” Tregaskis reported. “That was the way it was with sharp-eyed military censorship in those days.”

Tregaskis’s manuscript describing his time on Guadalcanal, arranged in an easily understood diary format, was sent to the INS offices in New York City in early November 1942. Barry Faris, INS editor-in-chief, wrote Tregaskis that he had turned the manuscript over to Ward Greene, executive editor of King Features, which was owned and operated, as was INS, by newspaper publisher William Randolph Hearst. Faris said Greene would work to get the manuscript accepted by a book publisher and subsequently serialized in magazines. “I did not have a chance to read it thoroughly as I would have liked,” Faris informed Tregaskis, who would be splitting the proceeds from the book 50/50 with his employer, “but from what I did see I think you did a magnificent job on it.”

One person who took the time to read Tregaskis’s writing from beginning to end was Bennett Cerf, cofounder of Random House. Greene had distributed copies of the manuscript to nine publishers and asked them to bid on it, a method “that had never been done before,” Cerf noted. Just the day before he received Tregaskis’s text, he had been telling his colleagues that the first book published about Guadalcanal would be “a knockout,” because “Guadalcanal marked the turning of the tide” in the war in the Pacific.

Cerf received the manuscript from King Features on November 11 and read it that night. The next morning, he called Greene and told him, “I’ve got to have this book.” A pleased Cerf related years later that Random House had signed the young reporter’s work before “any of the other eight publishers had even started reading it.”

The publisher’s prediction that the American public would be interested in learning more about the Marines and their pitched battles on a remote island thousands of miles away turned out to be accurate. Rushed into print on Jan. 18, 1943, Guadalcanal Diary made a steady climb up the bestseller charts, reaching, the publishing company’s advertisements were quick to report, the #1 position on lists compiled by the New York Times and New York Herald Tribune. Sales of the book were boosted by positive reviews from critics, who praised Tregaskis not for his literary flair but for his factual and honest reporting about what the Marines faced during combat.

John Chamberlain of the New York Times wrote that Tregaskis’s book served as “a tonic for the war-weary on the homefront,” showing that a country “doesn’t necessarily have to love war in order to fight it.”

Interest in the book was so great that Guadalcanal Diary became the first Random House book to sell more than 100,000 copies.

Link to the rest at Publishers Weekly

We need more than deplatforming

From The Mozilla Blog:

There is no question that social media played a role in the siege and take-over of the US Capitol on January 6.

Since then there has been significant focus on the deplatforming of President Donald Trump. By all means the question of when to deplatform a head of state is a critical one, among many that must be addressed. When should platforms make these decisions? Is that decision-making power theirs alone?

But as reprehensible as the actions of Donald Trump are, the rampant use of the internet to foment violence and hate, and reinforce white supremacy is about more than any one personality. Donald Trump is certainly not the first politician to exploit the architecture of the internet in this way, and he won’t be the last. We need solutions that don’t start after untold damage has been done.

Changing these dangerous dynamics requires more than just the temporary silencing or permanent removal of bad actors from social media platforms.

Additional precise and specific actions must also be taken:

Reveal who is paying for advertisements, how much they are paying and who is being targeted.

Commit to meaningful transparency of platform algorithms so we know how and what content is being amplified, to whom, and the associated impact.

Turn on by default the tools to amplify factual voices over disinformation.

Work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and what we can do to improve things.

These are actions the platforms can and should commit to today. The answer is not to do away with the internet, but to build a better one that can withstand and gird against these types of challenges. This is how we can begin to do that.

Link to the rest at The Mozilla Blog

The History of Book Banning

From Publishers Weekly:

As a historian of literacy, I coined the phrase “the literacy myth” to identify, explain, and criticize the former consensus that reading and writing (and sometimes arithmetic) are sufficient in themselves, regardless of degree of proficiency or social context, to transform the lives of individuals and their societies.

In late 2021, I’m confronted with an unprecedented “new illiteracy”—another version of the ever-shifting literacy myth. The historical continuities are shattered by, first, the call to ban books in innumerable circumstances; second, the banning of written literature without reading it; and, third, calls for burning books. This constitutes a movement for illiteracy, not a campaign for approved or selective uses of reading and writing.

Banning books from curricula, erasing them from reading lists, and ridding them from library shelves has mid-20th-century precedents; the burn books movement does not. Nor does the banning of books without censors reading them to identify their offending content.

Banning books is an effort, unknowingly, to resurrect the early modern Roman Catholic Counter-Reformation against both radical Catholics and early Protestants, which attempted to halt unauthorized reading, including curtailing the ability of individuals to read for themselves. Then seen as a “protest,” individual access to written or printed texts was perceived as threatening in ways that controlled oral reading to the “masses” by a priest or other leader was not. It enforced orthodoxy and countered both collective and individual autonomy.

The similarities and differences between today and a half millennium ago are powerful. Both movements are inseparable from ignorance, rooted in fear, and expressed in both legal and extralegal struggles for control and power. Both are inextricably linked to other efforts to restrict free speech, choice and control over one’s body, political and civil rights, public protests, and more.

Once led by the established church, censorship crusades to ban written materials of all sorts are today supercharged by right-wing politicians, radical evangelicals, and supporting activists. In the eyes of some, these politicians are opportunistic.

Despite media comments and condemnation by professors, teachers, librarians, and First Amendment attorneys, these issues are poorly understood. Parents of school-age children are confused. The young, supposedly in the name of their protection, face the greatest threat to intellectual and psychological development. That danger is most severe for the racially and gender diverse, who see themselves being erased or banned.

This movement harkens back well beyond the “ban books” and “read banned books” movements of the 1950s and ’60s, with their obsession with J.D. Salinger’s Catcher in the Rye, Harper Lee’s To Kill a Mockingbird, or Maya Angelou’s I Know Why the Caged Bird Sings. Even Anthony Comstock, secretary of the New York Society for the Suppression of Vice, who tried to use the U.S. Postal Service to limit the circulation of obscene literature and destroyed books, did not aim to empty libraries.

Compare this history to efforts in Virginia to ban Nobel Prize– and Pulitzer Prize–winning author Toni Morrison’s classic novels The Bluest Eye and Beloved. Or Texas school districts’ ban of young adult novelist Ashley Hope Perez’s award-winning Out of Darkness, based on a single paragraph taken out of context. In all these cases, the new illiterates either do not, or cannot, read the supposedly offending texts.

Perhaps the most revealing example is Republican Texas state representative Matt Krause’s campaign stunt of releasing a list of 850 books that he wants to be “investigated” for some unspecified violations. He demands school superintendents provide him with lists of texts that deal with certain subjects relating to race and sex, a probably illegal fishing expedition.

Link to the rest at Publishers Weekly

PG notes that the left wing can and does ban books and “deplatform” unpopular authors and speakers in the same manner and with at least the same frequency as the right wing does, at least in contemporary America.

From Dictionary.com:

deplatform

verb (used with object)

to prohibit (a person or people) from sharing their views in a public forum, especially by banning a user from posting on a social media website or application: Some viewers boycotted the advertisers connected to the show in an effort to deplatform the controversial co-host.

How to Send Messages That Automatically Disappear

Not exactly about books, but it’s New Year’s Day and the pickins are slim. (See also, Slim Pickens)

From Wired:

Many mystery and spy movies are based on the premise that you can send messages that self-destruct, but you don’t need to be an international secret agent to do the same with your own texts.

In fact, most popular chat apps now include some kind of disappearing message feature—which means that if you don’t want a permanent record of your conversation, you don’t have to have one. The encrypted messaging app Signal has even made disappearing messages available as a default setting.

While it’s handy to have chat archives to look back on for sentimental and practical reasons (recipes, addresses, instructions, and more), there are other times you’d rather nothing was saved. Here’s what to do.

There is a caveat here for all of these apps, in that the people you’re communicating with can take screenshots of what you’ve said—or, if screenshots are blocked, they can take a photo of the screen with another device. Some of them promise to notify you if your messages have been screenshotted or downloaded, but there’s always a workaround. That’s something to bear in mind when choosing who to chat with and how much to share.

Signal

The disappearing messages feature in Signal is an option for every conversation you have, and it can now be enabled by default or for an individual conversation: You can switch between disappearing messages and permanent messages at any time in any thread. To do this, tap the top banner in any thread, then pick Disappearing messages.

You can choose anywhere from one second to four weeks for your messages to stick around after they’ve been viewed (or choose Off to disable the feature). You can even set a custom timer—you could tell a message to be gone in 60 seconds. An alert appears in the chat whenever you’ve changed this setting, and anything you send from then on follows the rules you’ve set.

To set a default expiry time for messages in all your chats, open the main app settings page and choose Privacy and Default timer for new chats (under Disappearing messages). This applies to every chat you initiate from then on, not to the existing conversations on your phone.

. . . .

Instagram

Instagram has now gone way beyond photo-sharing to cover Snapchat-style stories, direct messaging, and more. The direct messaging component lets you send photos and videos that stay on record or disappear once they’ve been viewed, though text always stays in place.

Head to your conversation list in the Instagram app, then open the thread that you want to send the disappearing message to (tap the compose icon, top right, if you can’t see it). Tap the camera icon on the left of the compose box and capture the photo or video you want to send.

Down at the bottom of the screen, you’ll then see various options for what you’re sending: View once, Allow replay (which is really view twice), and Keep in chat. Pick whichever you prefer before confirming with the Send button.

Link to the rest at Wired

Security by obscurity is far from a foolproof solution, but if PG were planning to send a bunch of secret messages, he would be inclined to set up a collection of free email addresses for both sending and receiving them.

PG and his secret correspondent would each write their messages offline, encrypt them offline using one of many open-source encryption programs, and then send each message to one of the free email addresses.

PG would identify each free email address with a common name like Jim or Becky and provide his correspondent with the list.

In each encrypted email, PG and his online correspondent would mention one of the names in an offhand manner like, “I think Jim might be interested in seeing this.” The friend’s name (or the name of the last friend mentioned in the email) would identify the next email box to be used to send/receive the next encrypted message.
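To make the name convention less error-prone, the lookup could even be scripted. The sketch below is PG’s own hypothetical illustration (the names, the example.com addresses, and the helper function are all invented): it scans the decrypted text for the agreed-upon first names and returns the mailbox that goes with the last one mentioned.

```python
# Hypothetical helper: map agreed-upon first names to free email
# addresses, then pick the mailbox for the last name mentioned in
# the decrypted message text.
NAME_TO_MAILBOX = {
    "Jim": "jim@example.com",
    "Becky": "becky@example.com",
}

def next_mailbox(decrypted_text):
    last_name, last_pos = None, -1
    for name in NAME_TO_MAILBOX:
        pos = decrypted_text.rfind(name)   # position of the final mention, or -1
        if pos > last_pos:
            last_name, last_pos = name, pos
    return NAME_TO_MAILBOX.get(last_name)  # None if no name was mentioned

# "I think Jim might be interested in seeing this." -> jim@example.com
print(next_mailbox("I think Jim might be interested in seeing this."))
```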

A variation on this system might involve setting up several free email addresses to automatically forward messages to other free email addresses.

Using both US-based email services as well as non-US email services would make tracking messages even more difficult.

Every couple of weeks, PG would create an entirely different set of free email addresses and send the encrypted list to his correspondent. PG might also be inclined to send out encrypted garbage to a whole bunch of email addresses that weren’t his intended recipient.

If PG and his correspondent were able to use computers at various locations and connect through different Internet Service Providers, more obscurity would result. Throwing in a VPN with nodes in multiple countries and rotating the VPN locations would increase the complexity of interception even further.

PG is informed that large government agencies are capable of cracking a great many encryption algorithms. One reason why PG would be inclined to use open-source encryption is that the source code is available for all to see for debugging and security-checking purposes. This doesn’t mean that open-source encryption can’t be cracked, but with many eyes watching (unlike the situation with encryption software that is not open-source), any weaknesses in an open-source system are probably more susceptible to discovery than those in a black-box encryption program.

Again, PG understands that a super-duper-mega-encryption system created by geniuses is the single best way to communicate confidentially, but demonstrating that such a system is uncrackable is quite difficult, perhaps even impossible.

PG expects that some of the visitors to TPV are far more fluent on this topic than he is and is happy to hear critiques, comments, etc., from one and all.

The Coin Standard

From Lapham’s Quarterly:

In 1926 William Hope “Coin” Harvey began constructing a “pyramid” in the Arkansas Ozarks, with the intention of single-handedly preserving a record of the dying American civilization’s proudest creations in a concrete time capsule. Less than fifty years later, the remains of his monument—more of an obelisk than a pyramid—were eroding under the surface of a man-made lake. The remains of Harvey lie in a mausoleum nearby; they, too, would be drowned had they not been moved further up the bank of Beaver Lake by a local contractor at the behest of the U.S. Army Corps of Engineers. The futility of Harvey’s final work echoes the fruitlessness of the project for which he was most known: bimetallism and the return of free silver, a monetary system based on silver as well as gold.

Harvey was many things: a lawyer, a businessman, a mine owner, a real estate developer. But at his core he was a political thinker whose pamphlets reached hundreds of thousands, if not millions, of readers during the late nineteenth-century fight over fiat currency. He was a visionary, a crank, a failure, a polemicist who thought of himself as a prophet. His commitment to a political ideology that a century later has fallen almost entirely out of public consciousness pulled him away from his responsibilities to his family and his business ventures; now he is all but unknown.

Born in Virginia in 1851, Harvey was practicing law in West Virginia by the age of twenty. He moved to Gallipolis, Ohio, a few years later and began a restless and elusive search for a home and purpose. He met his wife Anna there in 1875, and he spent their life together making clear that he did not consider her part of that home or purpose. The family moved to Cleveland, then Chicago, and back to Gallipolis before Harvey moved his wife and children to Colorado, where he would become the superintendent of production at a silver mine. It was the 1880s, and silver’s value was falling. The supply had exploded due to new silver mines in the American West, just as various European states—and the United States itself—had moved away from silver coinage in the previous decade. If this was the genesis of Harvey’s allegiance to free silver, it was a hard lesson indeed. Harvey turned from mining to real estate in the West, buying land, building houses, and promoting festivals with some success and at least one catastrophic failure before moving the family back to Chicago in 1893 with what he claimed “was then a considerable fortune.” It was here that his political career and the project for which he would be known for the rest of his life began in earnest.

Harvey founded a publishing company devoted to free silver in Chicago and was soon releasing his own works on the subject. In 1894 he wrote Coin’s Financial School, a compilation of lectures given by a fictional sixteen-year-old boy named Coin (a nickname by which Harvey thereafter was known) in favor of bimetallism and free silver. Harvey’s economic theories were self-taught, drawing on a major political movement that sprang up in part due to silver’s decline amid several economic panics. As pointed out by interlocutors at the time, they were also overly simplistic, which wasn’t necessarily a drawback for his many readers. His prose was clear and convincing, spelling out the complicated economic debates dominating political conversations of the time while naming and indicting the systems and people he saw as responsible for silver’s demonetization. The book sold at least 650,000 copies according to a conservative estimate from historian Richard Hofstadter, who dubbed Harvey “the Tom Paine of the free-silver movement.” The lifetime sales of Coin’s Financial School were probably closer to one million copies. It was hawked by newsboys on railroad trains, sold at cigar stores, and distributed by the National Silver Party and other pro-silver organizations. Its enormous impact and popularity were a reflection of the political climate of the age and the sentiments of a rural white farming class convinced that its interests were being trodden on by banking elites and businessmen—and that only government intervention could provide an answer.

Harvey was not humble about such success. An autobiography dictated to his son near the end of his life characterizes Coin’s Financial School as “the Bible’s only rival for big sales. Throughout the West and the South he was hailed as ‘Our Savior.’ His followers numbered millions.”

The Free Silver Movement sputtered out around the turn of the century, though for Harvey the cause of free silver and the economic oppression of the common man would be a lifelong passion. His idealism kept him out of mainstream politics after 1898, however.

. . . .

The fundamental problem Coin confronted—the existential query at the heart of economic discourse of the time—concerned the character of money. Who produces it, what is it based on, and who determines its value? The question of money gained salience for lower- and middle-class Americans after the Civil War, when the Greenback Party—a short-lived agrarian antimonopoly party—advocated against the gold standard and for a paper money system; the Populist movement of the late nineteenth century carried on the democratic, antimonopoly cause in rural farming communities around the country after the Greenbackers’ last electoral attempt in 1889. Harvey brought the Populist cause to the city, taking on—if only fictionally—the bankers, politicians, and newspapermen he believed to be conspiring against the poor, the producers, and the farmers. He was not alone: free silver was a truly popular movement. During the Civil War, the federal government had for the first time issued money backed not by gold but by government bonds. After the war, when the U.S. moved back to a metal standard, leading to deflation, Greenbackers pushed for unbacked currency to remain in circulation. Free silverites and bimetallists wanted the same sort of inflationary policies by including silver in the specie standard. For Americans outside city centers, concentration of wealth, capital, and political influence in the hands of banks and monopolies rang unfair, especially while farmers were trapped in a cycle of debt from which it seemed nearly impossible to emerge.

Harvey continued to develop and propagate his political philosophy through his publishing company, releasing other books of fictional Coin lectures on bimetallism, including 1895’s Coin’s Financial School Up to Date and at least one other book by another proponent of bimetallism. Another Harvey book, The Patriots of America, proposed the formation of a new fraternal order that would protect the United States from “foreign influences,” by which he mostly meant the British. In another Coin volume, titled Coin on Money, Trusts, and Imperialism, Harvey’s alter ego plays the democratic theorist as well as the free-silver advocate. “Individual selfishness crystallized into the laws of nations is the cause of the overthrow of republics, and is the mother of monarchies,” Coin argued. His primary villains were banks, bankers, and the nation of England, whose monetary policies he believed were bleeding the United States dry. Harvey envisioned a monetary system unbound from the supremacy of London, Chicago, and New York. Like many other Populists, he trafficked in anti-Semitism, implicitly and often explicitly in the cartoons that accompanied his books. His prejudice is especially apparent in his novel, A Tale of Two Nations, published the same year as Coin’s Financial School, which included a fictionalized version of the Rothschild family and a villain clearly coded as Jewish.

Link to the rest at Lapham’s Quarterly

PG has always been interested in the phenomenon of an individual or small group of people who become obsessed with a single idea and build the rest of the universe around it.

Russian Court Closes Memorial International, 2021 Michalski Prize Winner

From Publishing Perspectives:

On November 23, Publishing Perspectives reported that the Russian human-rights-defense organization and publisher Memorial International had been made the winner of the Jan Michalski Prize for Literature.

Today (December 28), Andrew Osborn and Maria Kiselyova at Reuters Moscow report that Russia’s supreme court has ordered Memorial International to disband “for breaking a law requiring it to act as a ‘foreign agent.’”

The Michalski award named Memorial and four contributors–Alena Kozlova, Nikolai Mikhailov, Irina Ostrovskaya, and Irina Scherbakova–for a book called Знак не сотрется: Судьбы остарбайтеров в письмах, воспоминаниях и устных рассказах. It’s translated from Russian by Georgia Thomson as The Sign Will Not Be Erased: Letters, Memoirs and Stories from Ostarbeiter in Nazi Germany. Thomson’s English-language translation was released on November 18 by Granta Books at 496 pages.

The publication of the honored book–OST is for Ostarbeiter–was not a casual foray into publishing for Memorial International. In the organization’s mission statement, the first line says that Memorial International “reveals, publishes, and critically interprets information on crimes and mass human rights violations committed by totalitarian regimes in the past and carrying direct or indirect consequences in the present.”

. . . .

Today’s news of Russia’s dissolution order for Memorial International is part of a process that had begun earlier, before its Michalski win, with the designation of Memorial International in Russia as a ‘foreign agent.’ The culminating ruling from the court has triggered unusually robust international reportage. The organization has had standing chapters not only in Russia but also in Belarus, Belgium, the Czech Republic, Germany, Italy, and Ukraine, and Moscow’s move to shutter it is not going down quietly.

Osborn, who is Reuters’ Russian bureau chief, writes that the court order “caps a year of crackdowns on Kremlin critics unseen since the Soviet era.”

At the Associated Press, Dasha Litvinova describes the supreme court’s move as part of “a relentless crackdown on rights activists, independent media, and opposition supporters,” and writes that the court ruling has “sparked international outrage. Memorial is made up of more than 50 smaller groups in Russia and abroad,” Litvinova writes. “It was declared a ‘foreign agent’ in 2016—a label that implies additional government scrutiny and carries strong pejorative connotations that can discredit the targeted organization.

“Prosecutors said the group repeatedly failed to identify itself as a foreign agent and tried to conceal the designation, accusations rejected by Memorial.”

In Germany, Deutsche Welle writes, “The organization faced charges under Russia’s controversial NGO [non-governmental organization] laws, which demand that groups funded from abroad clearly mark all their material as issued by a ‘foreign agent.’”

Link to the rest at Publishing Perspectives

The Passive Voice is not a blog about politics or international affairs, but PG has been disturbed by a number of reports coming out of Russia and China that make it appear that the bad old days of Communism, including Dictators for Life, have returned to these two nations.

Tractor Wars

From The Wall Street Journal:

When one thinks of Henry Ford, one thinks of automobiles. The visionary engineer and entrepreneur all but invented the auto industry when he founded the Ford Motor Co. in 1903. Five years later he manufactured the first affordable car for middle-class Americans, the Model T, having first devised the assembly-line process to mass produce this epoch-making innovation.

Few Americans, however, associate the Ford name with agricultural and farm-class vehicles. Yet years before he conceived the Model T, Ford—who was born on a farm near Dearborn, Mich., in 1863—was obsessed with “making some kind of a light steam car that would take the place of horses,” especially, as he later wrote in his memoirs, “to attend to the excessively hard labour” of plowing, threshing and pulling. Nothing, his first-hand experience had taught him, could be “more inexcusable than the average farmer, his wife, and their children drudging from early morning until late at night.”

The story of Ford’s dream of perfecting an affordable, all-purpose tractor—or, as Ford later imagined it, a gasoline-powered “automobile plow”—is seldom told. Neil Dahlstrom’s “Tractor Wars” tells it well. As the manager of archives and history at John Deere, Mr. Dahlstrom is in a unique position to describe the rise of the tractor in detail, and to follow the arc of both Ford’s ambition to improve American agricultural methods and his rivalry with other farm-vehicle makers.

In 1906, two years before the Model T transformed the American cityscape, Ford asked one of his engineers to develop “a rudimentary tractor built mostly from leftover automobile parts.” Early prototypes contained a Model B engine and transmission, Model K front wheels and radiator, and a grain binder’s rear wheels. It was first demonstrated in 1908, at a fair in Winnipeg, in a pull contest between steam- and gas-powered tractors. Ten years later, the first Model F tractor, built by the Ford subsidiary Fordson, rolled off the assembly line and was delivered to botanist Luther Burbank, in Santa Rosa, Calif.

By 1918 there were many competitors in America’s great tractor pull. Most were small or mid-sized firms, including the Gas Traction Co. of Minneapolis, the Moline (Ill.) Plow Co., and the Waterloo (Iowa) Gasoline Engine Co. Two ultimately broke out of the pack with loud, gas-guzzling chugs.

The first, International Harvester, was formed in 1902 when the McCormick Harvesting Machine Co. and the Deering Harvester Co. merged with three smaller firms. Enjoying support from financier J.P. Morgan, it instantly became the nation’s fourth-largest corporation, consolidating “85 percent of the harvesting equipment market.” Early attempts by International Harvester to develop a gas-powered tractor were only moderately successful, but in 1920 its engineers made a breakthrough, converting the two front wheels into “traction wheels,” moving the engine from the rear to the middle, and adding three reverse speeds. All of this, plus enhancements to compatible cultivating attachments, made Harvester’s Farmall tractor competitive with the Fordson.

Ford’s other chief rival was the John Deere Co. Its earliest claim to fame was becoming the “world’s largest manufacturer of steel plows.” The company shifted course in 1907 when William Butterworth, the son-in-law of Charles Deere, took control. According to Mr. Dahlstrom, Butterworth was “cautious with the family money that still financed the company, pushing for long-term gains in a cyclical, low-margin, weather-dependent business.” While some outsiders “mistook Butterworth’s preference for steady forward progress as indecision,” his business model turned out to be ingenious.

Recognizing that head-to-head competition with International Harvester would be fruitless, Deere hand-crafted a distinctive product called the “motor plow.” Staff designer Charles Melvin built the first model in 1912, and his colleague Joseph Dain perfected it by adding what became known as all-wheel drive. Although the “continued sheepishness of the banking community” tempered Butterworth’s enthusiasm for tractor development, the company found prestige with its nimble, lightweight—if rather expensive—Tractivator. Then, in 1918, about a month before the first Fordson arrived at Burbank’s doorstep, Butterworth purchased the Waterloo Gasoline Engine Co. This allowed Deere to sell Waterloo Boy tractors until 1923, when the iconic green-and-yellow John Deere Model D tractor was rolled out for all to see.

Link to the rest at The Wall Street Journal (Should be a free link, but if you hit a paywall, PG apologizes; he hasn’t figured out a way around it.)

PG grew up on ranches and farms. Some of his earliest memories involve steering a tractor at a slow speed while his father picked up hay bales in a field and threw them onto a wagon that was being pulled by the tractor. Since tractors often spend a lot of time traveling at a set speed while pulling farm implements, they have throttles, typically set by hand, that control how fast the engine runs, so there’s no accelerator pedal like there is on an automobile or a truck.

PG’s father would hoist PG up onto the tractor seat and tell him to drive the tractor straight alongside a row of bales, then put the tractor in a low gear and set the throttle so the tractor would move at 2-3 miles per hour. He would engage the clutch, jump off the tractor and load hay bales while little-boy PG drove straight down beside the line of bales.

When the tractor reached the end of the field, PG’s father would jump back on to turn the tractor and towed wagon around to go down the next row while PG steered.

PG remembers when he was not much older and had learned how to set the throttle, put the tractor in gear and let out the clutch slowly so as not to jerk the tractor forward or kill the engine. The tractor had the clutch pedal on the left side of the tractor and the brake pedal on the right side of the tractor with a large enclosed drive train running in between.

PG’s legs were too short to depress the clutch pedal and the brake pedal at the same time so he would shift his body weight to his left leg in order to disengage the clutch so he could take the tractor out of gear, then move his body weight to his right leg to push down on the brake to bring the tractor to a stop.

On some occasions when the tractor’s brakes weren’t that good and the tractor was headed down a shallow incline, PG would move both feet to the right brake pedal and jump up and down with his entire not-very-large body weight, while continuing to hold on to the steering wheel, in order to get the tractor to stop.

Had he grown up somewhere other than a farm or ranch, this would have constituted high adventure. For PG, it was just helping out his dad. PG’s father has been deceased for several years, but PG still remembers his earliest tractor driving experiences as well as many other tractor adventures that followed until he went away to college, never to drive a tractor again.

Fordson tractor – 1940’s (Wikipedia)
John Deere Model B (Wikipedia)
Farmall Model BN (Wikipedia)

New Insight into the Friendship of Virginia Woolf and T. S. Eliot

From The Los Angeles Review of Books:

“They have finally recovered her body.” I found the typed words in the April 22, 1941, letter hard to read. T. S. Eliot let the typewriter ribbon get frustratingly old at times, but it wasn’t just the faint print. Seeing this starkly written in one of Eliot’s many letters to his longtime love, Emily Hale, brought back vividly the period of uncertainty Virginia Woolf’s friends and family endured for 21 days.

Woolf’s death by suicide, loading her pockets with stones and wading into the River Ouse, is perhaps the most famous in literary history. I’d heard about it countless times and tried to teach my students to avoid allowing her death to color their reading of Woolf’s lyrical and often exuberant novels (though her suicide is often all they know about Woolf’s reputation). And yet, I had forgotten that, for three weeks, her family and friends could hope against hope, even as her final note to her husband Leonard — “I owe all the happiness of my life to you. […] I cant go on spoiling your life any longer” — had been found, even as her walking stick, now visited like a relic at the Berg Collection in the New York Public Library, was discovered, discarded on the riverbank. In Eliot’s letter to Hale, we see the relief that comes with dreadful certainty: “I am glad,” Eliot continues. “She was to me like a member of my own family.”

Nearly two years ago, an archive of letters was unsealed at Princeton that radically changed the way scholars understand the life and work of T. S. Eliot. Two months later, with COVID-19 numbers soaring, this long-awaited archive slammed shut again. On Monday, October 18, 2021, I was the first external scholar finally to return to those papers. Unsurprisingly, the focus of readers so far has been on the shocking relationship memorialized in the letters between Eliot and Emily Hale, the American teacher with whom he was avowedly in love. But the Hale letters contain at least one other revelation, with profound and as yet unexplored consequences for the history of literary modernism. We now know more details about Eliot’s invitation to visit Woolf on the weekend that her death was announced in 1941. She had invited him on March 8, when she felt herself spiraling into depression again. He declined the invitation due to a cold. He wrote about the coincidence of the timing in a regretful letter to Hale.

What if? After seeing the letters, I couldn’t shake this question. What if Eliot’s illness hadn’t kept him home, what if he’d eagerly accepted the invitation and shown up at the Woolfs’ house in Sussex for the weekend, as he had several times before? Would his presence at Monk’s House — sometimes an irritant, always an interest — have mattered if Woolf’s most recent depression were as strong as during her previous suicide attempts in 1913–’15? Or what if Eliot had quit smoking, which exacerbated his bronchitis, or ignored his doctor’s advice and headed to Sussex anyway? As Woolf tended to publish four works in a decade, might we have two more Woolf novels and two more political essays — more vibrant and vitriolic than even the feminist Three Guineas (1938) — if she had lived only 10 more years? Would her diaries, lovingly edited by her husband Leonard after her death, an inspiration for so many writers, never have seen publication? In the drafting of Mrs. Dalloway (1925), Woolf considered having her heroine, Clarissa Dalloway, die and then changed her mind, so that Septimus Warren Smith, the soldier suffering from wartime trauma, jumps from the window instead, while Clarissa returns to her friends Peter Walsh and Sally Seton, and to her party, though she was “glad” he had done it, had jumped with his treasure. Would Eliot’s visit, like that of Peter to Clarissa, have somehow altered Woolf’s fate?

. . . .

When we think of Eliot as a personal poet, three major women in his life — Emily (his longtime epistolary love), Vivien (his tormented first wife), and Valerie (his second wife, nearly 40 years his junior) — take on a renewed centrality. Eliot’s private letters to Hale will forever change our ideas about his poetic sources and biography. Eliot’s decision not to marry Emily after Vivien’s death, his anger that she left his letters to Princeton during their lifetimes (though donating them to an archive had always been their plan), his decision to burn her side of the correspondence, his bitter statement released by the Houghton (perhaps dictated by Valerie), timed for the opening of the archive in 2020 — much has been said, and more needs to be said about all of this, and the repercussions for understanding Eliot’s criticism, poetry, and plays. Not yet published, these 1,131 letters will also modify our understanding of Eliot’s relationship to Virginia Woolf.

Link to the rest at The Los Angeles Review of Books

The OP is lengthy, but PG found it fascinating (YMMV). There are also several photos of Woolf and Eliot.

Journals adopt AI to spot duplicated images in manuscripts

From Nature:

Just before a study appears in any of ten journals published by the American Association for Cancer Research (AACR), it undergoes an unusual extra check. Since January 2021, the AACR has been using artificial intelligence (AI) software on all manuscripts it has provisionally accepted after peer review. The aim is to automatically alert editors to duplicated images, including those in which parts have been rotated, filtered, flipped or stretched.

The AACR is an early adopter in what could become a trend. Hoping to avoid publishing papers with images that have been doctored — whether because of outright fraud or inappropriate attempts to beautify findings — many journals have hired people to manually scan submitted manuscripts for issues, often using software to help check what they find. But Nature has learnt that in the past year, at least four publishers have started automating the process by relying on AI software to spot duplications and partial duplications before manuscripts are published.

The AACR tried numerous software products before it settled on a service from Proofig, a firm in Rehovot, Israel, says Daniel Evanko, director of journal operations at the association in Philadelphia, Pennsylvania. “We’re very happy with it,” he adds. He hopes the screening will aid researchers and reduce problems after publication.

Professional editors are still needed to decide what to do when the software flags images. If data sets are deliberately shown twice — with explanations — then repeated images might be appropriate, for instance. And some duplications might be simple copy-and-paste errors during manuscript assembly, rather than fraud. All this can be resolved only with discussions between editors and authors. Now that AI is getting sufficiently effective and low-cost, however, specialists say a wave of automated image-checking assistants could sweep through the scientific publishing industry in the next few years, much as using software to check manuscripts for plagiarism became routine a decade ago. Publishing-industry groups also say they are exploring ways to compare images in manuscripts across journals.

Link to the rest at Nature
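The Nature piece doesn’t say how Proofig or its competitors work under the hood, and PG makes no claim to know. For readers who want a feel for the general idea, though, here is a toy sketch of a perceptual-hash comparison using the open-source Python imagehash package; the file names and the distance threshold are hypothetical, and real tools are certainly far more sophisticated.

```python
# Toy illustration of a duplicate-image check (not any vendor's actual
# method): two figures are flagged as suspiciously similar if any
# rotated or mirrored variant of one hashes close to the other.
from PIL import Image, ImageOps
import imagehash

def variants(img):
    # Original plus 90/180/270-degree rotations and their mirror images.
    rotations = [img.rotate(angle, expand=True) for angle in (0, 90, 180, 270)]
    return rotations + [ImageOps.mirror(r) for r in rotations]

def similarity_distance(path_a, path_b):
    hash_a = imagehash.phash(Image.open(path_a))
    # Hamming distance between 64-bit perceptual hashes; smaller means more alike.
    return min(imagehash.phash(v) - hash_a for v in variants(Image.open(path_b)))

# A small distance is only a flag for a human editor to review,
# not proof of misconduct.
if similarity_distance("figure_2a.png", "figure_5c.png") <= 6:
    print("possible duplicated image - send to an editor")
```

Even in this toy form, the output is only a flag; as the OP notes, deciding whether a duplication is innocent or not still takes a conversation between editors and authors.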

Over half of adults admit they have conversations with inanimate objects, plants, and pets

As PG has mentioned before, the holidays are often an information desert for articles about writing and the book business, hence this post.

For those interested in being organized, PG suggests that you might file this post under A Frolic of His Own, a legal term used almost exclusively in law school Torts classes.

If the meaning of A Frolic of His Own puzzles you, see the opinion in Joel v. Morison by The Honorable James Parke, 1st Baron Wensleydale in the Court of Exchequer, July 3, 1834.

To the best of PG’s knowledge, this opinion is the only reason anyone remembers the first Baron of Wensleydale any more.

By this dismissal of the Baron, PG is most certainly not impugning the fame of Wensleydale Cheese, which no less a celebrity than George Orwell ranked as the second-best cheese in Britain, bested only by Stilton, in one of his lesser-known works, In Defence of English Cooking.

Consumer Warning: The title essay is only three pages long out of a printed book length of about 56 pages.

In more recent times, Wensleydale Cheese undoubtedly achieved its greatest fame due to its being mentioned in various episodes of Wallace and Gromit.

From StudyFinds:

Many people act a bit “differently” within the privacy of their own home, but a new survey finds most adults are actually having full-on conversations with items that can’t talk back! The poll of 2,000 adults in the United Kingdom finds over half routinely “chat” with inanimate objects at home. Another 60 percent say they’ll often have “entirely two-way” conversations with their pets.

Commissioned by TheJoyOfPlants.co.uk and conducted by OnePoll, the survey also finds 44 percent of adults frequently talk with their house plants. Within that group, four in 10 usually ask their plant if it’s thirsty.

A bit more understandably, over a quarter have lashed out verbally at an object or appliance for failing to do its job. For example, people often scold their TV or coffee maker for failing to turn on. Conversely, sometimes household items perform their functions a bit too well. Twenty-four percent admit they’ve yelled at an alarm clock to shut up. Meanwhile, close to 20 percent have pleaded with their car to keep going while low on fuel and over 10 percent have verbally thanked an ATM for dispensing their cash.

Most respondents have been caught mid-conversation by another human being. As many as six in 10 have been exposed while talking to an object and over half of those situations (60%) ended in laughter.

Plants love hearing a soothing voice

These chats are quite frequent as well. About six in 10 adults talk with their plants on a weekly basis. Another eight percent talk to their plants every day! Close to 40 percent believe these pep talks help their plants grow, while 37 percent report feeling happier themselves after speaking with some shrubs.

. . . .

As for specific comments, “you need a drink”, “you’re getting big”, and “you’re not looking your best” are the most common things people say to plants.

Link to the rest at Study Finds

An Economic History of Restaurants

From The Economist:

April 9th 2020 was the restaurant industry’s darkest day. The imposition of lockdowns to slow the spread of covid-19, combined with people voluntarily avoiding others, meant that on that Thursday bookings in America, Australia, Britain, Canada, Germany, Ireland and Mexico via OpenTable, a restaurant-reservation website, normally in their millions, fell to zero. Now, as economies unlock, many restaurants, even the fanciest, are facing labour shortages. Le Gavroche, one of London’s swankiest French offerings, has had to stop its lunch service and has lost its general manager.

Covid brought to a halt an astonishing expansion. In 2010-19 the number of licensed restaurants in Britain grew by 26%. Americans were, for the first time, spending more than half their total food budget on eating out. Well-paid folk from Hong Kong to Los Angeles were happily renting kitchenless apartments: why bother cooking when good food was so lavishly available beyond your front door?

Being deprived of restaurants has made people realise how much they value them. Eating out fulfils needs which seem fundamental to human nature. People need to date, to seal deals and to peer at their fellow humans. At a good restaurant you can travel without travelling, or simply feel coddled.

Yet restaurants in their current form are a few hundred years old at most. They do not satisfy some primeval urge, but rather the needs of particular sorts of societies. Economic and social forces, from political reform to urbanisation to changing labour markets, have created both the supply of and demand for restaurants. Their history also hints at what their future could look like in a post-pandemic world.

People have long feasted outside the home. Archaeologists have counted 158 snack bars in Pompeii, a city destroyed by a volcano in AD 79—one for every 60-100 people, a higher ratio than in many global cities today. Ready-cooked meat, game and fish were available for Londoners to eat from at least the 1170s. Samuel Cole, an early settler, opened what is considered to be the first American tavern in 1634, in Boston.

These were more like takeaways, though, or stands where food might be thrown in with a drink, than restaurants. The table d’hôte, which appeared in France around Cole’s time, most closely resembled a modern restaurant. Clients sat at a single table and ate what they were given (trends now making a comeback). Many of these proto-restaurants resembled community kitchens, or quasi-charities, which existed for the benefit of locals. Strangers were not always welcome.

Nor were they destinations predominantly for the well-heeled. Before the use of coal became widespread in England in the 17th century, preparing food at home involved spending a lot on wood or peat. Professional kitchens, by contrast, benefited from economies of scale in energy consumption and so could provide meals at a lower cost than people could themselves. Today dining out is seen as an indulgence, but it was the cheapest way to eat for most of human history.

It was, thus, a low-status activity. Cicero and Horace reckoned that a visitor to a bar might as well have visited a brothel. According to “Piers Plowman”, a late-14th-century poem, cooks would “poison the people privily and oft”. Some rich types rented private dining rooms; Samuel Pepys, a 17th-century diarist, enjoyed eating “in the French style” (that is, with communal dishes) at one in London. But most wealthy people preferred to eat at home, enjoying the luxury of having staff to cook and clean up.

Over time, however, the notion that a respectable person might eat a meal in public gradually took hold. Wilton’s, a fish restaurant in London, got going in 1742. Dublin’s oldest, established in 1775, traded under the name of the “Three Blackbirds” and was “noted for a good bottle of Madeira, as well as for a Chop from the Charcoal Grill”. Fraunces Tavern, New York City’s oldest restaurant, probably opened in 1762 (it is still open today and serves determinedly American fare from clam chowder to New York prime strip steaks).

Some historians look to the supply side to explain this shift, arguing that the restaurant emerged as a result of improvements in competition policy. Powerful guilds often made it hard for a business to sell two different products simultaneously. Butchers monopolised the sale of meat; vintners that of wine. The growth of the restaurant, which serves many different things, required breaking down these barriers to trade.

A Monsieur Boulanger, a soup-maker in Paris, may have been the first to do so. He dared sell a dish of “sheep’s feet in white-wine sauce”. The city’s traiteurs (caterers) claimed the dish contained a ragout, a meat dish only they were allowed to prepare, and was therefore illegal. They took their case to court, but Boulanger triumphed. The tale, supposedly marking the beginning of a movement in mid-18th-century France towards more open markets, is probably apocryphal. But other regulatory changes did help. In Britain reformers worried about public drunkenness passed a law in 1860 allowing places serving food to serve wine as well (thus encouraging people to eat something to sop up the booze). Around the same time American states started passing food-safety laws, giving customers more confidence in the quality of the food.

Yet for restaurants to flourish, richer people had to demand what Pepys did not: eating in full view of others. Until the 18th century elites largely viewed public spaces as dirty and dangerous, or as an arena of spectacle. But as capitalism took off, public spaces became sites of rational dialogue which were (putatively) open to all. And, as Charles Baudelaire, a French poet, observed, 19th-century cities also became places where people indulged in conspicuous consumption.

Link to the rest at The Economist (sorry if you hit a paywall, but give it a try anyway)

The OP is a bit farther afield than PG typically goes, but the traditional book business always gives itself a little-deserved holiday at the end of the year, so PG has to scrounge a bit.

The Claims of Memory

From First Things:

I write in defense of memory. Not Memory in her gaudy mythological form, the Titan goddess Mnemosyne, mother of the nine Muses—but memory as the glue that holds our lives together and imposes order and continuity amid the blooming buzzing confusion of sensations, thoughts, and activities that stream in upon our days. It is no exaggeration to say that a working memory is indispensable in the flourishing of the human person and of human culture.

Of course I recognize the maddening imperfections of memory: its unreliability, its failures, its deceptions, its panderings, its whispering seductions, its stealthy editing of experience for personal benefit—and its penchant for cruel taunts, for hurling self-condemnations at us without warning, for keeping us awake at night as we cling to any distraction to avoid an encounter with the rebuke of our own recollections. Memory can be a reservoir of joy, a treasury in times of woe. It can also be a source of woe, of remorse and regret that will not go away, steady work for the psychiatric profession. Whether in joy or in woe, memory maintains a shifty relationship to the truth, and like a shady accountant may maintain separate sets of books on the same account.

All these things are true of memory. And yet we cannot do without it. It is “an ill-favoured thing, sir, but mine own,” as Touchstone declares in As You Like It. Well said, and even a one-sentence summation of my argument. For our very humanity is bound up in the inescapable fact of our memory’s vagaries and imperfections, all of which are inseparable from the fact that it is, and must be, our own.

A long time ago, at the beginning of my graduate studies in history at Johns Hopkins University, I read the philosopher George Santayana for the first time. We all know Santayana for a famous saying, frequently misrendered: “Those who cannot remember the past are condemned to repeat it.” It’s a favorite adage of op-ed sages. But I had never seen it rendered as it originally appeared, in Santayana’s book Reason in Common Sense:

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it. In the first stage of life the mind is frivolous and easily distracted, it misses progress by failing in consecutiveness and persistence. This is the condition of children and barbarians, in which instinct has learned nothing from experience.

Santayana was not concerned here with the putative “lessons of history,” about whose precise contents he was always skeptical and circumspect. He was speaking of something more fundamental, more elemental, more anthropological. He was designating memory as a central precondition for a mature, civilized way of life—a subject about which he knew a great deal.

A second passage from Santayana was more startling, at least to me. Here I was at Johns Hopkins, an institution that prided itself on being the model of the modern research university in the United States, an institution dedicated not to the placid ideal of cultural conservation but to inquiry, to the remorseless supplanting of traditional learning with ever more incisive and disruptive scientific knowledge, including the relentless rethinking and reinterpretation of the past. So imagine my shock when I came across this passage:

It is one of the foibles of romanticism to insist on rewriting history and perpetually publishing new views without new matter. Can we know more about the past than its memorials transmit to us? Evidently we cannot know more; in point of truth concerning human history, any tradition is better than any reconstruction. A tradition may be a ruin, broken unrecognizably, or shabbily built over in a jungle of accretions, yet it always retains some nucleus of antiquity; whereas a reconstruction . . . is something fundamentally arbitrary, created by personal fancy, and modern from top to bottom. Such a substitution is no mere mistake; it is a voluntary delusion which romantic egotism positively craves: to rebuild the truth nearer the heart’s desire.

It was a shocking statement, a repudiation of everything Johns Hopkins University stood for. Historical revisionism a “foible of romanticism” and a “delusion”! What chutzpah!

Link to the rest at First Things

Of Sound Mind

From The Wall Street Journal:

Close your eyes. Listen to what surrounds you. At first, perhaps, the low hum of a space heater—a noise so constant that you only notice it when it stops. Then, a more distant sound—the hiss, click, whoosh of the heating system down the hall as it switches on. After a few moments, you untangle other, slighter sounds. The soft click of typing. Footfalls on an upper floor. The squawk of a blue jay outside. “Sound is all around us—inescapable and invisible,” writes Nina Kraus in “Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World.” Our sense of hearing, she tells us, “is always ‘on.’ ” We ignore it at our peril.

According to Ms. Kraus, a neuroscientist at Northwestern University and the founder of its Brainvolts auditory neuroscience laboratory, most people would choose sight over hearing—they would rather live in silence than in darkness. But, she reminds us, it is sound that provides us with our greatest means of communication. She quotes the author and activist Helen Keller: “Blindness disconnects us from things,” but “deafness disconnects us from people.” Sound is also a source of great nostalgic power. Ms. Kraus writes: “The neighborhood birds, the sounds of leaves rustling, the distant church bell, the abrupt hiss-honk of the city bus’s air brakes and the pick-up basketball game down the street. . . . These all impart a sense of place, a place of belonging.”

We live with constant racket, but we have forgotten how to listen. And yet the part of our brain that is given over to sound—what Ms. Kraus calls the “hearing brain” or “sound mind”—is far bigger and more complex than any of our other sensory equipment. Hearing influences how we feel, how we see, how we move, how we think. It makes us who we are.

“At some point deep in our evolutionary past,” Ms. Kraus explains, natural selection gave us the ability to sense pressure changes with our ears. We developed body parts that “turn the air movement caused by a vibrating guitar string or a spoken word” into something meaningful. Both our ability to move and our ability to hear were developed from similar sources: The deep thrum in our chests when we hear a drumbeat, the innate desire to move to a rhythmic tempo—these echo our earliest development. Sound is motion.

The process by which we convert sound waves into electrical brain signals is indeed unusual: Within the inner ear are tiny hairs in a fluid; when external vibrations enter the ear canal, they agitate the fluid and cause the hair cells to bob up and down. Microscopic projections that perch on top bump and bend, causing porelike channels to open. Chemicals rush into the cells, creating electrical signals that the auditory nerve carries to the brain. Ms. Kraus’s descriptions of the process are rich in metaphorical imagery, giving us the sense that an ear is a cathedral with walls, roof and floor, with fountains of living (electrical) water. But while the science is clear, there remains a magical, awe-inspiring sense of wonder that somehow timing, timbre and pitch can become conversation, lyric and song.

Our comprehension of sound also works in the opposite direction, explains Ms. Kraus: not only from ear to brain, but from brain to ear. A few years ago, a meme on the internet featured someone pronouncing a word. Simple—except no one could agree on the word: It depended on your context. How could this be? Ms. Kraus describes a similar experiment wherein an audio “ba” sound is paired with a video of someone expressing a “fa” sound. Close your eyes, and you hear “ba.” Watch the person in the video, and you hear “fa.” What your brain tells you—in this case, from the sense of sight—influences what you hear. To use another of Ms. Kraus’s examples: During one of her classes she often plays a recording of a sentence that has been so distorted that it sounds like “Darth Vader with a toothache doing a Cookie Monster impersonation during a thunderstorm.” Her students will find the recording incomprehensible—that is, until she plays a clear version of the sentence. “When I play the garbled one again, lightbulbs go off all over the lecture hall. Suddenly that garbled mess is completely understandable to every student. Everyone is amazed at how obvious (in retrospect) the garbled sentence was and can’t believe it was ever challenging. What we know has an enormous influence on what we hear.”

Link to the rest at The Wall Street Journal

We ask “C”: how do intelligence services need to change in the 21st century?

From The Economist:

In his first public speech since he became chief of Britain’s Secret Intelligence Service, Richard Moore said the service needs to “become more open to stay secret.” On “The Economist Asks” podcast, host Anne McElvoy and Shashank Joshi, The Economist’s defence editor, ask Mr Moore exactly what that means in practice.

The spymaster, whose position is traditionally referred to simply as “C”, describes the “entrepreneurial animal spirits” he hopes to attract by lifting the veil on MI6’s plans and challenges. Can partnering with technological talent lend British intelligence the heft it needs to punch above its weight against larger rivals like Russia and China?

China, Mr Moore says, is the service’s most pressing priority. Alongside what he calls the “key battleground” and exponentially growing “digital attack surface” of technology and data-gathering, debt traps threaten to slowly erode the sovereignty of other states as China garners ever more influence in emerging markets.

A key challenge, he says, will be to assert and defend Western democratic values while securing China’s “cooperation on the key transnational issues”, including “the biggest issue of all”—climate change.

. . . .

“Vladimir Putin…really does think that Russia, in the 21st century, has the right to impose limits on the sovereignty of the countries on its periphery,” he says. “And that’s a problem.” Still, he adds, Mr Putin runs the risk of underestimating his counterparts in Washington.

Despite a strong focus on technology, the business of intelligence is “still, fundamentally a question of building a relationship with a fellow human being,” he says. “That hasn’t changed. So I need officers who can build trust with people who are taking significant risks to work with us.” But as adversaries build up extraordinary surveillance capabilities, and after the heavily publicised assassinations by Russian operatives, we ask Mr Moore how British intelligence services continue to guarantee protection to their agents.

Link to the rest at The Economist

The OP includes a link to a podcast of the interview with C.

Under Jerusalem

From The Wall Street Journal:

In “Civilization and Its Discontents,” Freud compared memory and its recovery to the archaeology of Rome. The visitor cannot see the earlier layers of civilization, but the guidebook says where they once were. This allows us to look at the Colosseum and imagine the Golden House of Nero below. But, Freud wrote, a single physical space cannot hold “two different contents.” If it did, then the Palazzo Caffarelli would occupy the same spot as the temple of Jupiter Capitolinus, and we would see the temple in both its early, Etruscan form and its later, imperial form.

Freud never saw Jerusalem. Not only is its visitor’s imagination incited by the Bible, the guidebook of guidebooks, but Jerusalem’s archaeology also presents the simultaneity that Freud thought impossible. The sacred core of Jerusalem is so great that, like New York, they named it twice: Raise your head as you emerge from the warren of the Old City, and you see the Temple Mount of the Jews and the Noble Sanctuary of the Muslims. Two different contents, two different contexts—not forgetting the Christians, who cannot agree among themselves where their sacred sites should be.

In “Under Jerusalem,” journalist Andrew Lawler directs our contemplation away from the heavenly city, and down into the roots of history and faith. Modern archaeology in Jerusalem began as an effort to substantiate Christian faith through modern science. The history of its practice in Jerusalem presents a parade of eccentrics and fanatics, enlivened by obscurantism and riot. Mr. Lawler, unlike so many of his characters, navigates the terrain without offending the political or religious sensibilities of his subjects.

In the 19th century, tourists like Twain and Melville were disappointed by the rundown and rather modest architecture of Ottoman Jerusalem. When Baedeker issued a guidebook in 1876, he apologized for the “modern crust of rubbish and rottenness” that obscured the “Jerusalem of antiquity.” Exceptional in being sacred to all three monotheisms, Jerusalem is unusual as ancient cities go. The tel, a man-made hill in which civilizations are stacked layer upon layer like stony pancakes, is common in the Middle East; the Israeli site at Har Megiddo, the “Armageddon” of the Christians, has 26 layers. But Jerusalem is rocky and hilly. Its layers are compressed as though by tectonic forces and honeycombed with cisterns and tunnels.

Archaeology was a European invention introduced to Jerusalem by French and British Christians. The European soldiers and churchmen viewed sacred archaeology in the spirit of Thomas Jefferson: “the earth belongs in usufruct to the living,” and the dead have “neither rights nor powers over it.” Mr. Lawler’s tale begins in 1863, when the French senator Louis-Félicien de Saulcy launched the first modern dig, just outside the Old City walls. Breaking into an ancient tomb, de Saulcy abducted an attractive sarcophagus and, while Jerusalem’s rabbis launched an international protest against this foreign graverobber, took it to Paris and declared its occupant to be “the consort of a Judean ruler from the seventh century BCE.” He was only 700 years off.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall if the free link doesn’t work, but hasn’t figured out a way around it.)

Usufruct (yo͞ozəˌfrəkt): “The right to enjoy the use and advantages of another’s property short of the destruction or waste of its substance.”

The new normal is already here. Get used to it.

From The Economist:

Is it nearly over? In 2021 people have been yearning for something like stability. Even those who accepted that they would never get their old lives back hoped for a new normal. Yet as 2022 draws near, it is time to face the world’s predictable unpredictability. The pattern for the rest of the 2020s is not the familiar routine of the pre-covid years, but the turmoil and bewilderment of the pandemic era. The new normal is already here.

Remember how the terrorist attacks of September 11th 2001 began to transform air travel in waves. In the years that followed each fresh plot exposed an unforeseen weakness that required a new rule. First came locked cockpit doors, more armed air marshals and bans on sharp objects. Later, suspicion fell on bottles of liquid, shoes and laptops. Flying did not return to normal, nor did it establish a new routine. Instead, everything was permanently up for revision.

The world is similarly unpredictable today and the pandemic is part of the reason. For almost two years people have lived with shifting regimes of mask-wearing, tests, lockdowns, travel bans, vaccination certificates and other paperwork. As outbreaks of new cases and variants ebb and flow, so these regimes can also be expected to come and go. That is the price of living with a disease that has not yet settled into its endemic state.

And covid-19 may not be the only such infection. Although a century elapsed between the ravages of Spanish flu and the coronavirus, the next planet-conquering pathogen could strike much sooner. Germs thrive in an age of global travel and crowded cities. The proximity of people and animals will lead to the incubation of new human diseases. Such zoonoses, which tend to emerge every few years, used to be a minority interest. For the next decade, at least, you can expect each new outbreak to trigger paroxysms of precaution.

Covid has also helped bring about today’s unpredictable world indirectly, by accelerating change that was incipient. The pandemic has shown how industries can be suddenly upended by technological shifts. Remote shopping, working from home and the Zoom boom were once the future. In the time of covid they rapidly became as much of a chore as picking up the groceries or the daily commute.

Big technological shifts are nothing new. But instead of taking centuries or decades to spread around the world, as did the printing press and telegraph, new technologies become routine in a matter of years. Just 15 years ago, modern smartphones did not exist. Today more than half of the people on the planet carry one. Any boss who thinks their industry is immune to such wild dynamism is unlikely to last long.

Link to the rest at The Economist – should be a free link, but PG doesn’t know if the Economist prevents it from being used more than a handful of times.

A Head for Metals

From The Wall Street Journal:

George Hearst was famous for discovering metals—copper, silver, gold—but he liked any mineral he could pull out of the earth. New Year’s Eve 1889 found him far from his San Francisco home, in West Virginia’s coal country. “We found the coal veins all right,” said Hearst’s traveling companion, T.J. Clunie, a young California state senator. “The samples were fine, the price was low, and I expected to see Hearst snap at the offer.” But Hearst was hesitant. “I don’t like to buy a pig in a poke,” he said. “We had better crawl up and see that coal for ourselves before we discuss the price.” That meant scaling a 3,000-foot hill.

At the summit Hearst found a vein of coal, hacked out a chunk, and set it on fire. The flame sputtered and died in seconds. He tossed the lump aside and went looking for another. He found a different vein, hacked out another piece and ignited it. This one burned steadily for 10 minutes, Clunie recalled, while Hearst watched it “as a mother does her first-born.” Hearst scrambled back down the hill and bought the vein. He was 69 years old.

Stomach cancer would claim Hearst barely a year later, but as Matthew Bernstein demonstrates in “George Hearst: Silver King of the Gilded Age,” the old miner went about his work right to the end with the same tenacity, demon energy and genius for finding what he was after that had made him one of the richest men in the American West.

Hearst was born in 1820 into modest yet comfortable circumstances in Missouri, where his father owned three small farms. Farming interested young George not at all, but when he was 15, lead was discovered near his home. The subsequent diggings fascinated him. “I think I was naturally a mineralogist,” he would write years later. “The knowledge seems to me instinctive.”

When gold was discovered in California, Hearst headed west with plenty of competition: There were perhaps 800 San Franciscans before the metal revealed itself at Sutter’s Mill. By the time Hearst arrived, in the fall of 1850, the city’s population had swelled to 25,000, with more than 100,000 hopefuls scouring the riverbeds.

Hearst joined them, and after some discouraging months seeking gold along the rivers—which were overcrowded and quickly exhausted—he turned to the mountains. By then he was looking for quartz, not because it was valuable, but because he had learned that during the volcanic birth of California’s coastal mountains, streams of molten quartz carried gold along with them and imprisoned it as they cooled. The knowledgeable prospector could crack open a stone and see within its snowy depths a gleaming yellow filling. Hearst’s friends gave him the name “Quartz George.”

Then came Washoe, part of Utah Territory at the time. Hearst had heard about silver deposits there, and bought a share of a mine. At first nobody believed that the ore he brought back to San Francisco was valuable, but finally the head of the San Francisco Mint agreed to give it a look—he offered Hearst and his associates $91,000 after costs, or about $3 million today, for what turned out to be one of the earliest extractions from the Comstock Lode. After that, the money never stopped.

Nor did Hearst. In the following decades he traveled throughout the West, sometimes coming up dry, more often not. Some 65 miles outside of Butte, Mont., in 1883, Hearst began digging at the Anaconda Mine, where “they struck a bed of pure copper. Continuing to delve, they found that the bed was thirty to forty feet wide and descended more than a thousand feet. In other words,” Mr. Bernstein writes, “it was the greatest copper strike on the planet.”

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Bad moves across the board

From The Times Literary Supplement:

In 1957, a little-known Harvard professor had his first taste of fame after the publication of Nuclear Weapons and Foreign Policy. Appearing three months before the Soviets launched their Sputnik satellite, Henry Kissinger’s book earned public praise from no less than Robert Oppenheimer, the “father” of the atomic bomb, as well as the leading foreign policy realist, Hans Morgenthau. The New York Times reported, accurately, that “officials at the highest government levels” were reading it: Vice President Richard Nixon certainly did. Not only was the book selected by the Book of the Month Club; Kissinger also found himself on television for the first time. “We believed for too long that we were relatively invulnerable”, he told viewers of CBS’s Face the Nation. “I believe that it [countering Soviet aggression] will take a somewhat firmer attitude and a … somewhat greater willingness to run risks.”

The core argument of Nuclear Weapons and Foreign Policy was that the United States lacked a “strategic doctrine” for the nuclear age. There had been a failure in Washington to grasp the full implications of an all-out thermonuclear war, namely that there could be no winner, “because even the weaker side may be able to inflict a degree of destruction which no society can support”. This awful reality made the Eisenhower administration’s periodic bouts of brinkmanship either wildly reckless or mere bluff. As mutual renunciation of nuclear arms seemed unattainable, Kissinger sought to develop a doctrine of limited nuclear war.

That doctrine was never put to the test. Indeed, even Kissinger later repudiated parts of his own argument. Yet, if one looks back on the way NATO strategy evolved in the three decades after 1957, limited nuclear war was at its heart. What else were all those short-range and intermediate-range nuclear missiles for? Had war broken out with the Soviet Union in Europe, at least on the Western side there would have been an attempt to fight it without the intercontinental ballistic missiles whose launch would have heralded Armageddon.

Sixty-four years have passed since the publication of Nuclear Weapons and Foreign Policy. Yet at ninety-eight, Henry Kissinger has not lost his knack for identifying doctrinal deficits in US national security strategy. “The age of AI”, he and his coauthors write, “has yet to define its organizing principles, its moral concepts, or its sense of aspirations and limitations … The AI age needs its own…

Link to the rest at The Times Literary Supplement (sorry if you hit a paywall)

A war correspondent’s intimate portrait of an embattled minority

From The Economist:

After a strict convent education, Janine di Giovanni, an American war correspondent, drifted from religion. Yet as she travelled the world, reporting from Bosnia, Kosovo and Rwanda, her faith returned. Wherever she went, she writes in “The Vanishing”, she would find a church, seeking “ritual and a sense of belonging”. Her book is the culmination of two decades of fieldwork in the Middle East, its four sections reflecting her stints in Egypt, Gaza, Iraq and Syria. As the title suggests, it is a portrait of a disappearing people.

Christians are an embattled minority in many countries, including North Korea, where tens of thousands are believed to be held in concentration camps, and Sri Lanka, where around 250 people died in the Easter bombings of 2019. In the Middle East, Islamic extremists depict Christians as Westernised interlopers, yet the region was the birthplace of the religion, which flourished until the Muslim Arab conquest of the seventh century. Christians have since faced discrimination in varying degrees, precipitating waves of emigration. Today 93% of the population of the Middle East and north Africa are Muslim.

Ms di Giovanni brings a compassionate perspective to her narrative, interweaving complex, sometimes dense history with evocative vignettes and interviews. Her interlocutors range from nuns to imams, from the last vestiges of Gaza’s Christian elite to Cairo’s impoverished Zabbaleen, who sort rubbish in “Garbage City”. These “dying communities” of various Christian denominations, some claiming direct descent from Jesus’s disciples, share a stark choice: to abandon ancestral roots in search of a better life elsewhere, or cling on for a precarious future. Most keep their heads down, but the allegiance of some to dictators—seen as bulwarks against extremism—has antagonised Islamists.

. . . .

In the fourth century Gaza was wholly Christian. By the 21st century the community had shrunk to under 1,000, and the consequences of the election of Hamas in 2006 imperilled its members further. They endure the same hardships and dearth of opportunity as other Gazans and receive scant government protection; unemployment among young Christians stands at 70%. Egypt’s Christian population, chiefly Copts, is the region’s largest, but still suffers legal and social discrimination, even if some families are insulated by privilege. “The underlying sense of inferiority is our greatest persecution,” says one woman. “I’ve had Muslim men grab me by the hair and try to drag me because I don’t have a headscarf on.”

Link to the rest at The Economist

Alice Sebold apologizes to exonerated man who spent years in prison for her rape

From CNN:

Author Alice Sebold apologized to a man who last week was exonerated of her rape, a crime she wrote about in her memoir “Lucky,” but the writer also appeared to place as much blame on a “flawed legal system” as she did on the role she played in his conviction.

“First, I want to say that I am truly sorry to Anthony Broadwater and I deeply regret what you have been through,” Sebold, the author of “The Lovely Bones,” wrote in a statement posted on Medium.com.

Broadwater, who has always maintained his innocence, was convicted of the rape in 1982 and spent 16 years in prison. He was denied parole at least five times because he wouldn’t admit to a crime he didn’t commit, according to his attorneys. He tried at least five times to get the sentence overturned, he told CNN.

Last week, a New York State Supreme Court judge exonerated Broadwater and vacated his conviction and other counts related to it. The Onondaga County district attorney joined in the motion to vacate the conviction.

Broadwater was convicted on two pieces of evidence: Sebold’s account and cross-racial identification (the author is White and Broadwater is Black), and the analysis of a piece of hair that was later determined to be faulty, his attorneys wrote.

In “Lucky,” Sebold wrote that after she failed to identify Broadwater in a police lineup, “a detective and a prosecutor told her after the lineup that she picked out the wrong man and how the prosecutor deliberately coached her into rehabilitating her misidentification,” according to the attorneys’ affirmation that led to Broadwater’s exoneration.

The unreliability of the hair analysis and the conversation between the prosecutor and Sebold after the lineup would probably have led to a different verdict if they had been presented at trial, the attorneys said.

Sebold described the rape, which occurred when she was a freshman at Syracuse University in 1981, in brutally honest detail in “Lucky.” It was published a year after Broadwater was released from prison.

Her publisher, Scribner, and its parent company, Simon & Schuster, will stop distributing the book in all formats “while Sebold and Scribner together consider how the work might be revised,” said Brian Belfiglio, Scribner vice president of publicity and marketing, in a statement to CNN.

Link to the rest at CNN

PG hasn’t been in a criminal courtroom in a very long time, but he can’t imagine any prosecutor he ever dealt with way back when doing what Ms. Sebold says the detective and a prosecutor did in her case.

Here’s a link to a post Ms. Sebold made discussing this subject.

Where Is My Thinking Machine?

From The Wall Street Journal:

Amazement and alarm have been persistent features, the yin and yang, of discourse about the power of computers since the earliest days of electronic “thinking machines.”

In 1950, a year before the world’s first commercial computer was delivered, the pioneering computer scientist Alan Turing published an essay asking “Can machines think?” A few years later another pioneer, John McCarthy, coined the term “artificial intelligence”—AI. That’s when the trouble began.

The sloppy but clever term “artificial intelligence” has fueled misdirection and misunderstanding: Imagine, in the 1920s, calling the car an artificial horse, or the airplane an artificial bird, or the electric motor an artificial waterwheel.

“Computing” and even “supercomputing” are clear, even familiar terms. But the arrival of today’s machines that do more than calculate at blazing speeds—that instead engage in inference to recognize a face or answer a natural language question—constitutes a distinction with a difference, and raises practical and theoretical questions about the future uses of AI.

The amount of money chasing AI alone suggests something big is happening: annual global venture investing in AI startups has soared from $3 billion a decade ago to $80 billion in 2021. Over half is with U.S. firms, about one-fourth with Chinese. More than six dozen AI startups have valuations north of $1 billion, chasing applications in every corner of the economy from the Holy Grail of self-driving cars to healthcare and shopping, and from shipping to security.

Consider another metric of AI’s ascendance: An Amazon search for “artificial intelligence” yields 50,000 book titles. Joining that legion comes “The Age of AI and Our Human Future.” Its multiple authors—Henry Kissinger, the former secretary of state; Eric Schmidt, the former CEO of Google; and Daniel Huttenlocher, dean of MIT’s Schwarzman College of Computing—say their book is a distillation of group discussions about how AI will “soon affect nearly every field of human endeavor.”

“The Age of AI” doesn’t tackle how society has come to such a turning point. It poses more questions than it answers, and focuses, in the main, on news and disinformation, healthcare and national security, with a smattering about “human identity.” That most AI applications are nascent is evidenced by the authors’ need to fall back frequently on “what if” in an echo of similar discussions nearly everywhere.

The authors are animated by the sense many share that we’re on the cusp of a broad technological tipping point of deep economic and geopolitical consequence. “The Age of AI” makes the indisputable case that AI will add features to products and services that were heretofore impossible.

In an important nod to reality, the authors highlight “AI’s limits,” the problems with adequate and useful datasets and the “brittleness,” especially the “narrowness,” of AI—an AI that can drive a car can’t read a pathology slide, and vice versa.

However, AI’s narrowness is a feature, not a bug. This explains why hundreds of AI startups target narrow applications within a specific business domain. It explains why apps have been so astonishingly successful (consumers have access to more than 200 million apps, most of which are not games). Many are powered by hyper-narrow AI and enabled by a network—the Cloud—of unprecedented reach.

The book’s discursive nature illuminates, if unintentionally, a common challenge in forecasting AI: It’s far easier today than in Turing’s time to drift into the hypothetical.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

7 Books About Life After a Civil War

From Electric Lit:

I remember traveling in the north of Sri Lanka, two years after the civil war, in areas where some of the worst fighting had taken place, and seeing yellow caution tape cordoning off large tracts of land. Signs warned in several languages of land mines. Later, I sat, safely ensconced in a Colombo café, as the leader of an NGO showed me pictures of women, protected by nothing more than plastic visors, crouched over piles of dirt and sand with implements that looked surprisingly like the kinds of rakes and hoes you find at a local Home Depot. The work of clearing the land of mines, she told me, would likely take two decades.

I started working on my latest collection, Dark Tourist, after that 2011 trip as a way of exploring aftermath. Once the fighting has stopped, the ceasefire arranged, the peace treaty signed, we turn our attention to the next conflict, too often ignoring the repercussions of the trauma and the attempts to heal. I wanted to explore the ways that grief marks us and also the ways we manage to survive, to persevere, and to reckon with and make stories of our memories.

. . . .

Some of the books explore the impact of conflict on individuals who are trying to manage deep traumas. Others document the impact on generations one or two decades removed from the fighting. All the works are testament to the need for fiction, creative nonfiction, and poetry to document and give voice long after the journalists and the NGOs decamp to other hot zones.

A Passage North by Anuk Arudpragasam

Anuk Arudpragasam’s novel A Passage North begins with an invocation to the present:

“The present, we assume, is eternally before us, one of the few things in life from which we cannot be parted.”

The novel goes on to carefully unravel that opening assertion. The present of the protagonist, Krishnan, is impinged on by multiple losses: the death of his father in a bombing during the height of the civil war; the estrangement of a lover, an activist who refuses to return to Sri Lanka; the imminent death of his aging grandmother; and his duty to her former caretaker. As Krishnan undertakes the titular voyage, the novel transforms into a meditation on loss and grief and also a reckoning with the ways his sorrow often blinds all of us to the suffering around us.

The Best We Could Do by Thi Bui

In a reversal of the traditional immigrant story, Thi Bui, in her graphic memoir, sets out to understand why her parents, both refugees from Vietnam, have failed her and her siblings. Bui’s delicate ink wash drawings provide a careful and detailed reconstruction of her father and mother’s experiences during the Vietnam War and their losses: the separation from family members, exile from home, the death of a child. As the memoir progresses, it becomes clear that Bui’s intent is not merely to document but to reconstruct, to re-envision, and finally, with deep care and compassion, to make her parents’ story truly part of her own.

Link to the rest at Electric Lit

Brilliant Together: On Feminist Memoirs

From Public Books:

During my first year of college, in 1991, I experienced a feminist awakening, in my horror at the spectacle of Anita Hill’s public humiliation in front of the Senate and the nation. But I didn’t stop to consider how the compound factor of race made her situation both different and worse. The stories we heard from second-wave feminists—our mothers’ generation—were likely to include Gloria Steinem and Take Back the Night. They were less likely to mention Audre Lorde and the Combahee River Collective.

My cohort of women was born between 1965 and 1980, the Pew Research Center’s parameters for Generation X. We grew up with the advances won by first-wave feminists (the right to vote, for example) and by the second-wave feminists of the 1960s and 1970s (widespread access to contraception, civil rights legislation that extended affirmative action to women, legal abortion, and large-scale public conversations about violence against women). The lazy, clichéd view of Gen Xers presumes us to be slackers, cynics, and reflexive ironists; we don’t even merit our own dismissive “OK, boomer” meme.

But we are also women in our 40s and 50s, the inheritors of feminism’s second wave and instigators of the third wave, and we are now in positions of power. With a fourth wave of feminism upon us, what should my generation of activists preserve from earlier stages of the movement, and what should we discard? How should feminist stories be told, and by whom? In particular, what elements of our foremothers’ second-wave feminism still feel essential to us in the 21st century, and how might we consider and address their failures, especially their failures of inclusion? How can we redefine the collective aspects of the feminist movement in a way that will endure?

One of the most important outcomes of the racial reckonings of 2020 is an influx of new feminist memoirs that reexamine the women’s movement from nonwhite perspectives—memoirs that deal explicitly with race and the failings of mainstream white feminism. And even before the events of last year, white second-wave feminists were beginning to engage more meaningfully with these issues, although an enormous amount of work remains.

Recent memoirs by Nancy K. Miller and Honor Moore, white women who are both public intellectuals and active participants in feminism’s second wave, make clear how the gains of the 1960s, ’70s, and ’80s enabled some women to rewrite their narratives. Instead of stories of birth, marriage, and death, they tell stories about friendship, intellectual and artistic development, relationships between mothers and daughters, political and cultural movements, and, above all, the struggle for women’s voices to be heard (and their bodies protected) in a patriarchal society.

Reading these stories now feels like tapping into a larger ongoing narrative, one that celebrates gains made while reminding us how fragile those gains are. Notably, both memoirs take on more than one woman’s story, suggesting that another way to rewrite women’s narratives is to bring them together, to see the power in what is shared. There is no single narrative of a woman’s life, no universal set of markers that defines women’s experience, and no one retrospection that can answer what feminism will become.

In her foundational Writing a Woman’s Life (1988), feminist literary critic and scholar Carolyn Heilbrun observes that writing the lives of notable women poses a challenge because their narratives don’t fit a heroic masculine mode of storytelling. Heilbrun argues that women can’t emulate or draw inspiration from biographies and memoirs that don’t exist: “Lives do not serve as models; only stories do that. And it is a hard thing to make up stories to live by. We can only retell and live by the stories we have read or heard. We live our lives through texts. … Whatever their form or medium, these stories have formed us all; they are what we must use to make new fictions, new narratives.” In the same way, feminist biographies and memoirs—and scholarship, journalism, and novels, too—can only make sense of their subjects by changing the way the story is told.

Feminism itself has never been an uncontested narrative. Debates over the definition of “feminist,” including the right to use this word regardless of political ideology, or the ability of women to support other women while distancing themselves from the label, take up a lot of space in public discourse. In a 2015 piece for The Atlantic, Sophie Gilbert suggests that:

Whatever the history, whatever the nuances, whatever the charged sentiments associated with political activism, being a feminist is very simple: It means believing that women are and should be equal to men in matters political, social, and economic. They should be able to vote. They should have equal protection under the law and equal access to healthcare and education. They should be paid as much as their male counterparts are for doing exactly the same job. Do you believe in these things? Then, you are a feminist.

Link to the rest at Public Books

If PG wanted to become a Public Intellectual, how would he go about doing it?

Is there an exam?

Do you major in “Public Intellectual Studies” in college?

Is it inherited? Do one or both of your parents have to be public intellectuals? Will a Public Intellectual grandparent do?

Is there a fee?

Do you get a Public Intellectual certificate to hang on your wall?

How about a Public Intellectual wallet card for when you’re not in your Public Intellectual Study?

Those visitors to TPV who are more informed about this topic than PG can feel free to enlighten him in the comments.

Looking for Trouble

From The Paris Review:

In March 1937, eight months into the Spanish Civil War, Virginia Cowles, a twenty-seven-year-old freelance journalist from Vermont who specialized in society gossip, put a bold proposal to her editor at Hearst newspapers: she wanted to go to Spain to report on both sides of the hostilities. Despite the fact that Cowles’s only qualification for combat reporting was her self-confessed “curiosity,” rather astonishingly, her editor agreed. “I knew no one in Spain and hadn’t the least idea how one went about such an assignment,” she explains innocently in the opening pages of Looking for Trouble, the bestselling memoir she published in 1941. She set off for Europe regardless.

In the four years between arriving in Spain and the publication of Looking for Trouble, Cowles travelled the length and breadth of Europe. She was something of an Anglophile, having been captivated as a child by the stories of King Arthur and his Knights, and thus happily relocated to London, stoically braving its inconveniences—the “lack of central heating, the fogs, the left-hand traffic”—in order to benefit from the front-row seat it offered her to the “sound and fury across the Channel.” In her words, living in the English capital in the late 1930s was “like sitting too near an orchestra and being deafened by the rising crescendo of the brass instruments.”

In 1937, Cowles arrived in Madrid, wearing high heels and a fur coat—the first of quite a few sartorial descriptions in the volume, usually given because the inexperienced Cowles finds herself inadvertently under- or overdressed!—but was soon gamely venturing out to the frontlines, ducking to avoid the bullets that whined “like angry wasps” overhead. When not in the midst of the action, she was holed up in the now famous Hotel Florida, alongside Ernest Hemingway—“a massive, ruddy-cheeked man who went round Madrid in a pair of filthy brown trousers and a torn blue shirt”—and other war reporters. Among them, too, was fellow female journalist Martha Gellhorn, with whom Cowles would forge a close friendship; the two later co-wrote a play loosely based on their experiences, ‘Love Goes to Press’ (1946).

This was the beginning of Cowles’s relatively brief but impressively prolific career in war reporting. She was in Prague during the Munich crisis, and in Berlin on the day Germany invaded Poland. In early 1939 she escaped “the gloom of London” by means of a six-week trip to Soviet Russia, hoping for what might be “almost a holiday.” She soon stood corrected, determining Moscow to be “the dreariest city on earth,” the depression of which “penetrated [her] bones like a damp fog.” She’d probably have felt less grim if she weren’t so cold, but yet again, she’d arrived inadequately attired: this time without any woollen stockings, naively assuming she’d be able to buy what she needed when she got there. “Good heavens! Do you really think you can buy woollen stockings here?” a shocked French journalist asked when she tried to enlist his help in tracking some down. A year later, she was in Finland—this time clad in a thick suit, fur-lined boots and a sheepskin coat—travelling north towards the Arctic Circle to report on the Winter War, the bloody battle being waged by the Finns against the invading Russians. In June 1940, as everyone else fled the city, she flew into Paris to cover its fall to the Germans. Three months later, she was in London on the first day of the Blitz.

Link to the rest at The Paris Review

When Women Ruled the World

From The Wall Street Journal:

In 1558, John Knox, the energetically tub-thumping Scottish reformer, railed against what would become even more true with the coronation of Elizabeth of England, the future Gloriana herself, a few months later: the unlikely and—to him—hideous rush of women into positions of supreme power in late 16th-century Europe. The cards of the game of birthright were serially falling in a female direction.

Knox’s ire was aimed at Marie de Guise, James V of Scotland’s French widow, who was serving as regent for her daughter Mary Stuart, Queen of Scots. He also targeted Mary Tudor—Mary I—who was, for a fleeting but feverish five-year spell, England’s first queen in her own right, until Elizabeth I succeeded at her (natural) death. Knox’s notorious “The First Blast of the Trumpet Against the Monstrous Regiment of Women” contains his argument in a nutshell: “To promote a Woman to bear rule, superiority, dominion or empire above any realm, nation or city is: A. Repugnant to nature. B. Contumely to GOD. C. The subversion of good order, of all equity and justice.”

Although Knox tried to apologize to Elizabeth for his trollish tract after she ascended the throne—it was only Catholic queens that troubled him, he explained—she never let him travel through her realm. He had to take to the perilous seas or stay put in Scotland. The following year his misery was complete: France’s King Henri II died, leaving his widow Catherine de’ Medici as La Reine Mère, effectively ruling France as the dominant mother of three French kings for the next few decades.

In 1565, Catherine encouraged or commanded her illustrious court poet, Pierre de Ronsard, to refute Knox’s diatribe in a book that celebrated female rule in elegant fashion. The whole is dedicated to Elizabeth, yet the central poem, the bergerie of “Elégies, mascarades et bergeries,” addresses Mary, Queen of Scots. Three decades later, Mary would be brutally and inefficiently beheaded for treason (the ax had to be swung three times), when she was discovered to have plotted against Elizabeth. But now Catherine saw no reason why a gift dedicated to both queens would trouble either: She “utterly discounted any personal jealousy,” according to Maureen Quilligan, and indeed religious difference, in favor of a we’re-all-queens-together spirit of cooperation. History proves that this was wishful thinking on Catherine’s part. Or does it?

This is the bold terrain of “When Women Ruled the World: Making the Renaissance in Europe” by Ms. Quilligan, emerita professor of English at Duke University and author of books on medieval and Renaissance literature. She has come up with an intriguing, interdisciplinary, revisionist argument: that through such “inalienable” gifts as poems, books, jewels and tapestries—that is, the sort of dynasty-defining possessions that are passed through generations—we should reappraise relations between these 16th-century queens presumed to have loathed and envied one another. We should pay attention to their collaborations in life rather than just their competition to the death. Elizabeth and Catherine de’ Medici, for example, negotiated the Treaty of Troyes, succeeding where Henry VIII and François I had failed and achieving a lasting peace between France and England, certainly for their lifetimes.

Link to the rest at The Wall Street Journal (Should be a free link, but if, perchance it doesn’t work for you, PG apologizes.)

The Book of Mother

From Vogue:

Violaine Huisman’s debut novel, The Book of Mother, tells the story of a 20th- and 21st-century Parisian woman’s life and legacy. Part One is told from the perspective of Violaine, the younger of her two daughters, who is ten when Maman—her beautiful, charismatic, and wildly excessive mother—suffers a breakdown and is hospitalized. Part Two traces the arc of Maman’s, aka Catherine’s, life—from the emotional penury of her hardscrabble, working-class childhood; through her early success (earned through the harshest discipline) as a dancer; to a second marriage that finds her navigating a high-wire act between her life as a woman and the demands of motherhood, while feeling entirely out-of-place amidst the gauche caviar of upper-class Parisian intellectuals; to the betrayals of her third husband, which lead to her undoing. In Part Three, her daughters, now grown women, deal with Maman’s complex legacy.

I lived with the novel’s larger-than-life characters for months while translating Huisman’s winding, revved-up (and at times, improbably comic) Proustian sentences. I heard their voices and felt the shadow of history and the Shoah hanging over them as they breathed the heady air of Paris in the ’70s and ’80s, with its boutiques, salons, and swinging night clubs. More recently, I sat down with Violaine, who had returned briefly to New York—her home for the past 20 years—in the midst of an extended sojourn in France, to talk about The Book of Mother. The conversation that follows, over lunch at Café Sabarsky, has been edited and condensed.

In all our discussions about the book while I was translating it, I never asked you, how did you come to write The Book of Mother?

There were two moments of genesis. Ten years before the book’s publication in France [in 2018], I wrote my mother’s life story, but as a monologue, using only her voice. It was similar to the voice that I use in the novel for her tirades and harangues—that long, digressive, angry, wild tone.

I showed that manuscript to a publisher who admired it and gave me some suggestions, but I couldn’t find a way to revise it. Then, one year later, my mother died, and it became impossible to revise it. And then, two years after my mother died, I had my first child, and two years later, the second one.

So there was all this time of, literally, gestation. I realized that becoming a mother gave me a completely different perspective on who my mother was. I started understanding the conflict that she had faced, between her womanhood and her motherhood. So that was a huge turning point for me.

And then, days after coming home from the hospital after giving birth to my younger child, with the baby on my lap, I read 10:04, Ben Lerner’s second novel, and I had this epiphany, which was that in fiction—whether you are writing about your own stories or those of others—facts don’t matter. Facts are only relevant when it comes to history. I realized then that I had to distance myself from facts in order to give shape to my mother’s story, to create a coherent narrative. That’s something that Ben Lerner writes and talks about very beautifully, that fiction is the imaginative power to give form to the real, to make sense of the chaotic nature of living.

Because life makes no sense.

Life makes no sense. And the truth is, my mother didn’t know, my father didn’t know, why things happened that way. But fiction has the ability to create logic where there is none, to give coherence and stability to the story in a way that feels very powerful and personal.

And then, when the structure of the novel came to me—its organization in three parts—I knew even before I started writing exactly how it would be laid out. And that’s how I was able to write it.

Link to the rest at Vogue

Ten trends to watch in the coming year

From The Economist:

If 2021 was the year the world turned the tide against the pandemic, 2022 will be dominated by the need to adjust to new realities, both in areas reshaped by the crisis (the new world of work, the future of travel) and as deeper trends reassert themselves (the rise of China, accelerating climate change). Here are ten themes and trends to watch in the year ahead.

1 Democracy v autocracy. America’s mid-term elections and China’s Communist Party congress will vividly contrast their rival political systems. Which is better at delivering stability, growth and innovation? This rivalry will play out in everything from trade to tech regulation, vaccinations to space stations. As President Joe Biden tries to rally the free world under the flag of democracy, his dysfunctional, divided country is a poor advertisement for its merits.

2 Pandemic to endemic. New antiviral pills, improved antibody treatments and more vaccines are coming. For vaccinated folks in the developed world, the virus will no longer be life-threatening. But it will still pose a deadly danger in the developing world. Unless vaccinations can be stepped up, covid-19 will have become just another of the many endemic diseases that afflict the poor but not the rich.

3 Inflation worries. Supply-chain disruptions and a spike in energy demand have pushed up prices. Central bankers say it’s temporary, but not everyone believes them. Britain is at particular risk of stagflation, due to post-Brexit labour shortages and its dependence on expensive natural gas.

4 The future of work. There is a broad consensus that the future is “hybrid”, and that more people will spend more days working from home. But there is much scope for disagreement on the details. How many days, and which ones? And will it be fair? Surveys show that women are less keen to return to the office, so they may risk being passed over for promotions. Debates also loom over tax rules and monitoring of remote workers.

5 The new techlash. Regulators in America and Europe have been trying to rein in the tech giants for years, but have yet to make a dent in their growth or profits. Now China has taken the lead, lashing its tech firms in a brutal crackdown. President Xi Jinping wants them to focus on “deep tech” that provides geostrategic advantage, not frivolities like games and shopping. But will this boost Chinese innovation, or stifle the industry’s dynamism?

Link to the rest at The Economist

Rereading the Revolt

From Public Books:

“Away with the learning of the clerks,” shouted Margery Starre, “away with it.” In May 1381, Starre and rebels like her were burning university documents at Cambridge, then scattering the ashes to the wind. Across England, they were burning other documents, too: landholding records, tax receipts, judicial testimonies, and title deeds.

An English dress rehearsal for the French Revolution, the events of the tumultuous summer of 1381 began in Essex when a group of villagers refused to pay a widely hated poll tax. The movement spread through Kent and ultimately converged on London. There, the rebels burned down the Savoy Palace; beheaded the king’s treasurer and the archbishop of Canterbury; and presented King Richard II with a series of demands, including the abolition of serfdom, fixed rents, and the seizure of church goods. Richard acceded, but once the threat to London was under control, he had the rebel leaders executed and revoked his royal charters granting their requests.

The story of this failed rebellion was told, as histories usually are, by the winners, or rather, by men on their side. Two of the main sources for the Peasants’ Revolt are from the very clerks the rebels hated: these are chronicles written by Thomas Walsingham, a monk at St. Alban’s, and Henry Knighton, an Augustinian canon of the Abbey of Saint Mary de Pratis. The chroniclers were horrified by the violence of the revolt, but they were also outraged at what they perceived as an attack on learned, literate culture—that is, on intellectuals like them.

To the chroniclers, the rebels were ignorant and bestial. But with some important exceptions, this is not how the rebels behaved during the revolt. The group that torched the Savoy was careful not to steal anything from it, even killing one of their own after he tried to take a silver dish. In other attacks on powerful institutions and residences, the rebels acted in a way they felt was both strategic and just.

The vanquished rebels made history, but they did not get to write it. Still, an echo of their voices was preserved in the chronicles: six short letters in English. And it is these letters that are the subject of Steven Justice’s investigation in Writing and Rebellion: England in 1381.

I first read Justice’s Writing and Rebellion shortly after arriving at Yale to do a PhD in English literature. That was some 17 years ago. Back then, I found the book exhilarating: it gave a voice to people outside traditional bastions of power.

These days, I’m more conflicted. I have become like one of the “clerks” the rebels derided. I think more often about how much blood it takes to water a revolution. As frustrated as I am with universities on a regular basis, these institutions brought me in, taught me their languages, showed me how long it takes to build structures that can fall in a day. Maybe what I find so troubling about revisiting Writing and Rebellion is the recognition that the past is a place where academics like myself can effortlessly imagine ourselves speaking for the powerless, without worrying about what those we consider powerless might say back to us.

. . . .

Opening the copy of Writing and Rebellion now shelved in my office, I find personal reflections penciled in the margins among my other notes. As I worked through the book 17 years ago, I wrote down the lyrics of the songs playing in the coffee shop as I read. When Marvin Gaye sang, “Natural fact is / Oh honey that I can’t pay my taxes,” it must have resonated with the rebels’ refusal to continue to be subject to extortionate taxation, because I scribbled it in. The margin of page 56 is full of smudged cursive recording in detail not one, but two disappointments in love of which I had been reminded as I was hunting down the book’s references in Sterling Memorial Library. I wrote, too, how these two pale shadows of heartbreak opened the floodgates to another emotion, the sorrow over my parents’ divorce that I had not allowed myself to feel until that spring.

A two-time immigrant, I wasn’t the first person to feel out of place at a university with a fancy name, nor the last. Certainly, I’d had the advantage of Canada’s public school system and (I would say today) of being white and middle-class. But back then, the polished manners of my fellow graduate students often made me wonder whether I could add anything to the long, grammatically correct sentences winding around the seminar table, other than “Me like poetry, poetry pretty.”

Like the other students, my professors seemed to think naturally in abstract nouns. I, on the other hand, bubbled with unfocused enthusiasm for the literature that told me the story of my life. In one first-year seminar, I ran out of the room holding back tears: during a discussion of George Bernard Shaw’s Pygmalion, I recognized my parents’ ruptured marriage in the relationship between Henry Higgins and Eliza Doolittle, truth that came a little too close to the bone.

I didn’t have the scientific detachment of a good scholar. At first, this gave me a sense of freedom: given how evident it was that I would never land a job in academe, I could enjoy reading books on Yale’s dime. Studying literature was a luxury no one in my family had enjoyed. But as time wore on, the institution began to shape both me and my desires. Without consciously intending to become the kind of person who would fit into the academic world, I began learning the language that would help me to do so. That language was theory.

And so it was in a seminar called Medieval Texts and Modern Theory that I read Writing and Rebellion, which, looking back, was almost poetically fitting. Steven Justice’s 1994 monograph is about people outside of learned institutions, people whose way of expressing themselves is pointed and resonant, even if it does not use elite language.

Link to the rest at Public Books

Go West, Young Lady

From The Wall Street Journal:

Near the end of “The Great Plains,” his classic 1931 study of the Anglo-American conquest of the nation’s midsection, historian Walter Prescott Webb acknowledged that, “since practically this whole study has been devoted to the men, [women] will receive scant attention here.” Much the same could be said even today about the genre of Western biography, in which books about famous men such as George Armstrong Custer, William “Buffalo Bill” Cody and Sitting Bull proliferate, with relatively few volumes dedicated to exploring the lives of the region’s female characters. To be sure, figures like Annie Oakley and Calamity Jane have drawn their share of attention, but they are the exceptions that prove the rule. In “Wildcat,” the writer John Boessenecker offers for our examination the life of Pearl Hart, whom he deems “the Wild West’s most notorious woman bandit.” Alas, as a contribution to our understanding of the world of frontier women, it is a limited addition.

Mr. Boessenecker is ideally matched to his subject, having written 10 previous books, all with western settings, most focusing on outlaws and their badge-wearing pursuers. His subjects run from Tiburcio Vásquez, a 19th-century California bandido, to Frank Hamer, the Texas Ranger who—with help from other officers of the law—killed Bonnie Parker and Clyde Barrow in 1934. As with those earlier efforts, Mr. Boessenecker proves a tenacious researcher, with a particular knack for coaxing telling details from newspaper archives. For “Wildcat,” he also turned amateur genealogist and enlisted the help of one of Hart’s descendants, who shared with him extensive primary source material. “So in these pages,” he asserts, “appears for the first time the true and untold story of Pearl Hart, minus the falsehoods and folklore.”

Stagecoach robbers Joe Boot and Pearl Hart’s mug shots.

She was born Lillie Naomi Davy in 1871 in Lindsay, Ontario, about 80 miles northeast of Toronto, and along with her eight siblings endured a chaotic and unstable childhood thanks to an alcoholic and abusive father, a man who served time for—among other offenses—the attempted rape, at knifepoint, of a teenage girl. One of the book’s strengths is Mr. Boessenecker’s obvious compassion for the plight of Lillie and her brothers and sisters. Lillie first ran afoul of the law at the age of 11, when she and an older brother stole a cow—twice—and sold it—twice—to unsuspecting buyers. Thereafter, she bounced around the Lake Ontario borderlands, and in Rochester, N.Y., married the first of her several husbands at age 16. She supported herself in part as a prostitute, and spent periods in correctional facilities in the United States and Canada.

With her sister Katy, Davy drifted westward through the Great Lakes during the early 1890s, known now by her alias, which was an homage to a prominent Buffalo madam. She reached Arizona in 1893, and soon fell in with a series of poorly chosen boyfriends while developing addictions to opium and morphine, indications of the poor judgment that led her and a partner, a shadowy figure named Joe Boot, to hold up a stagecoach in the scrublands between Phoenix and Tucson in late May 1899. As Pearl recalled, “Joe drew a forty-five, and said, ‘Throw up your hands!’ I drew my little thirty-eight and likewise covered the occupants of the stage . . . They were a badly scared outfit. I learned how easily a job of this kind could be done.” All the same, their haul was meager: $469 and a half-dozen six-shooters.

The pair was quickly apprehended and Pearl was soon confined to the Tucson jail, where her status as the sole female inmate—not to mention her affinity for men’s clothes—attracted the curious, including two locals who were recruited to interview her for a profile that ran in Cosmopolitan. The pictures that accompanied the article caused a national sensation: “In two photos she held her pet wildcat; in others she carried firearms . . . The public was simultaneously fascinated and shocked at the sight of a woman dressed like a cowboy and armed to the teeth.” In time she was relocated to the grim territorial prison at Yuma, but ultimately released in December 1902 thanks to a gubernatorial pardon. In later years she worked as an actress and tobacconist before slipping into relative obscurity. She died in Los Angeles in 1935, at age 64.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

The Self-Help That No One Needs Right Now

From The Atlantic:

Nothing about The Body Keeps the Score screams “best seller.” Written by the psychiatrist Bessel van der Kolk, the book is a graphic account of his decades-long career treating survivors of traumatic experiences such as rape, incest, and war. Page after page, readers are asked to wrestle with van der Kolk’s theory that trauma can sever the connection between the mind, which wants to forget what happened, and the body, which can’t. The book isn’t academic, exactly, but it’s dense and difficult material written with psychology students in mind. Here’s one line: “The elementary self system in the brainstem and limbic system is massively activated when people are faced with the threat of annihilation, which results in an overwhelming sense of fear and terror accompanied by intense physiological arousal.”

And yet, since its debut in 2014, The Body Keeps the Score has spent 150 weeks—nearly three years—and counting at the top of the New York Times best-seller list and has sold almost 2 million copies globally. During the pandemic, it seems more in demand than ever: This year, van der Kolk has appeared as a guest on The Ezra Klein Show, been profiled in The Guardian, and watched his book become a meme. (“Kindly asking my body to stop keeping the score,” goes one viral tweet.)

After all the anxiety and social isolation of pandemic life, and now the lingering uncertainty about what comes next, many people are turning to a growing genre of trauma self-help books for relief. The Body Keeps the Score is now joined on the best-seller list by What Happened to You?, a compilation of letters and dialogue between Oprah Winfrey and the psychiatrist Bruce D. Perry. Barnes & Noble, meanwhile, sells about 1,350 other books under the “Anxiety, Stress & Trauma-Related Disorders” tab, including clinical workbooks and mainstream releases. Sometimes, new installments in the genre seem to position themselves as a cheat code to a better life: Fill out the test at the back of the book; try these exercises; narrativize your life. One blurb I read, on the cover of James S. Gordon’s Transforming Trauma, basically said as much: “This book could give you back your life in unimaginable ways, whether you think of yourself as a trauma victim or not.”

“You can kind of understand why the sales of these books are going up in this stressful, pressurized situation,” Edgar Jones, a historian of medicine and psychiatry at King’s College London, told me. In a moment of personal and collective crisis, the siren song of a self-help book is strong.

There’s just one problem. In spite of their popularity, trauma books may not be all that helpful for the type of suffering that most people are experiencing right now. “The word trauma is very popular these days,” van der Kolk told me. It’s also uselessly vague—a swirl of psychiatric diagnoses, folk wisdom, and popular misconceptions. The pandemic has led to very real suffering, but while these books have one idea of trauma in mind, most readers may have another.

. . . .

In the decades since, trauma has come to signify a range of injuries so broad that the term verges on meaninglessness. The American Psychological Association, for example, describes trauma as “an emotional response to a terrible event like an accident, rape or natural disaster”—like, but not only. “Like weeds that spread through a space and invasively take over semantic territory from others,” trauma can be used to describe any misfortune, big or small, Nicholas Haslam, a psychology professor at the University of Melbourne, told me. That concept creep is evident on TikTok, where creators use “trauma response” to explain away all kinds of behavior, including doomscrolling and perfectionist tendencies.

In the pandemic, trauma has become a catchall in the U.S. for many varied, and even competing, realities. Some people certainly are experiencing PTSD, especially health-care workers who have dealt with the carnage firsthand. For most people, however, a better description of the past 19 months might be “chronic stressor,” or even “extreme adversity,” experts told me—in other words, a source of immense distress, but not necessarily with severe long-term consequences. The whole of human suffering is a lot of ground for one word to cover, and for trauma best sellers to heal.

Today, a comprehensive shelf of trauma self-help includes the biophysicist Peter Levine’s Waking the Tiger, which argues that a lack of trauma in wild animals can offer insight into how humans might overcome their seemingly unique susceptibility to it; The Deepest Well, by the surgeon general of California, Nadine Burke Harris, who uses personal experience to draw a direct line from childhood stress to a host of physical and social ills; and It Didn’t Start With You, in which the author, Mark Wolynn, makes the controversial claim that trauma can be inherited from distant ancestors.

These books tend to follow a reliable arc, using the stories of trauma survivors to advance a central thesis, and then concluding with a few chapters of actionable advice for individual readers. In The Body Keeps the Score, van der Kolk writes about people he refers to as Sherry, a woman who was neglected in childhood and kidnapped and repeatedly raped for five days in college, and Tom, a heavy drinker whose goal was to become “a living memorial” to his friends who had died in Vietnam. For patients like these, van der Kolk eventually turned to yoga, massage therapy, and an intervention called eye-movement desensitization and reprocessing, or EMDR, which specifically treats the traumatic memories that pull people with PTSD back into the past.

Those experiences are remarkably different from what most Americans have endured in the pandemic. Although almost everyone has struggled with the risk of contracting a deadly virus and the resulting isolation and potential loneliness, a remote worker’s depressive episode, or an unemployed restaurant worker’s inability to pay their bills, has little in common with stories like Tom’s and Sherry’s. They are no less important—no less deserving of attention—but we need better words to describe them, and other remedies to treat them.

Even van der Kolk himself is wary of some of the ways in which trauma is used today. When I asked him whether he thinks The Body Keeps the Score is useful for all the readers turning to it during the pandemic, he objected to the premise of my question: The readers he hears from most, he said, are those who grew up in abusive households, not those who feel traumatized by COVID-19. “When people say the pandemic has been a collective trauma,” van der Kolk said, “I say, absolutely not.”

Link to the rest at The Atlantic

Making the Past Relevant (by Diversifying Biographies)

From author Heather Demetrios in Publishers Weekly:

After reading my new biography of World War II spy Virginia Hall, Code Name Badass, a seasoned biographer told me, “I’m… pleasantly surprised you got away with it!” The surprise was justified: it’s not often you see a YA nonfiction work that’ll make your grandma clutch her pearls. Badass is meant to read like an episode of Drunk History, with all the irreverence of the Comedy Central hit intact—but with more than 50 pages of endnotes.

My Pussy Riot–style ambush on the genre has f-bombs falling on its pages. This book comes to the fight with brass knuckles and one of the most audacious women ever to enter the ring of war as its subject. I wanted the language to reflect the dirty fighting of guerilla warfare and the culture of my readers, most of whom armor themselves for the daily onslaught of the patriarchy with clothing and accessories emblazoned with so-called foul language.

It’s possible to drop an f-bomb and an endnote at the same time, I assure you.

Despite the increasing presence of female writers, subjects, and narrative approaches to nonfiction, I’m still not seeing many books that marry the deep research required of a quality biography with bingeable prose. With the rights of women constantly under threat, the last thing my readers want is another biography by the man, for the man, about a man. I may be writing about the past, but the future is female.

It was exhilarating to write the book I wanted to read—as though I’d ditched history class and hung out behind the gym, sneaking a cigarette with the French Resistance instead of reading a dry chapter on the early days of asymmetrical warfare.

Badass is about a disabled woman whose job was to be invisible, but who was also rendered invisible not only by the men in power she worked with but by the privileged few who chose to write and acquire biographies. Did I want to make a little noise with my book, since its subject was often silenced? Hell yes I did. And to be heard over all the dudes in the biography section, I knew I’d need to do a bit of literary shouting.

. . . .

We’ll always have the scholarship and heft of a David McCullough or Ron Chernow. But many readers I know—myself included—long for biography that’s infused with the energy of the subject’s life (often iconoclastic, passionate, and dramatic) and wouldn’t mind seeing a clear line drawn between the past and the present. In short: relevant biography, as modern as its 21st-century readership.

I’m not alone in taking the genre’s road less traveled, but I want to see more writers on this road with me. There’s no map, but you have a lot of fun getting where you’re going.

I share some of Virginia Hall’s privileges: I’m white, middle-class, educated, American. Hall’s access to education and travel is what allowed her to become one of the greatest spies of all time. But she was also disabled and a woman. These two barriers created numerous obstacles throughout her life. And yet it was her character—her grit, moxie, and doggedness—that made me want to write about her.

In order to do Hall’s extraordinary life justice, I had to write in a way that was as divergent as the woman herself.

Link to the rest at Publishers Weekly

PG suggests that anyone who believes inserting obscenities anywhere in a book is daring or unusual has mixed up 2021 with 1960.

If you’re gonna be a rebel, you need to do something that’s not passé. Nose rings don’t count. YA Fiction is purely a creation of the New York publishing world. You can put anything in a book and call it YA. You might pick up some one-star reviews on Amazon, but, if you’re a real rebel, what other people think doesn’t mean a thing.

The OP author’s latest book is Code Name Badass: The True Story of Virginia Hall, published by Atheneum Books, which is owned by Simon & Schuster, which is owned by ViacomCBS.

The person who has total control of ViacomCBS is Shari Redstone, a wealthy heiress who took over the company from her father, Sumner Redstone.

Perhaps the author of the OP and Ms. Redstone can get together to talk about how their rights as women are constantly under threat.

Of Fear and Strangers

From The Wall Street Journal:

George Makari’s concern with xenophobia goes back to a childhood trauma. To escape from sectarian conflict, his French-speaking Christian parents had fled their native Lebanon and settled in the U.S., where Dr. Makari was born. In 1974, at the age of 13, he was taken on a family visit to Beirut. Suddenly, the travelers found themselves caught in the midst of what would become a civil war. “To me, it was bizarre,” Dr. Makari recalls in “Of Fear and Strangers: A History of Xenophobia.” He continues: “All these bewildering sects were far more alike than different. All were Levantines who spoke the same dialect; all loved the same punning humor, devoured the same cuisine, abided by strict rules of hospitality, and approached any purchase as a three-act play: bargain, stage a walk-out, then settle. They were quick with proverbs and went agog when Fairuz sang. And yet, subtle distinctions in their identities now meant life or death.” It was an experience that would haunt a young George Makari.

Today, Dr. Makari, a psychiatrist, psychoanalyst and the director of Weill Cornell’s DeWitt Wallace Institute of Psychiatry, sees xenophobia as a threat to social peace, not only in the Middle East but also in Europe and North America, where recent political convulsions have been driven by a bristling hostility toward strangers and outsiders. Dr. Makari is clear that a lot of different impulses are often conflated here: “ethnocentrism, ultranationalism, racism, misogyny, sexism, anti-Semitism, homophobia, transphobia, or Islamophobia.” What might they have in common? “Is there any one term specific enough to not be meaningless, while broad enough to allow us to consider whatever common strands exist between these phenomena?” He thinks that there is: xenophobia. And if all these disorders are variants of the same affliction, then perhaps they have the same cause and might be susceptible to the same treatment.

Dr. Makari traces the invention of “xenophobia” to the 1880s, when psychiatrists came up with a variety of “phobias” apparently caused by traumatic experience. “Hydrophobia”—a fear of water—was an old term for rabies. There followed a rash of other phobias, from claustrophobia to my personal favorite, phobophobia—the fear of being frightened. (One commentator remarked that the number of phobias seemed limited only by an Ancient Greek dictionary.) Xenophobia entered a medical dictionary in 1916 as a “morbid dread of meeting strangers.”

Like many psychiatric classifications, early definitions of xenophobia covered too much ground. Perceptions of the disease seemed malleable. What began as a psychiatric diagnosis would soon be used to describe the fury with which colonized populations often turned on settlers. These settlers, in turn, would be accused of xenophobia by the critics of colonialism, as waves of migrations in the years leading up to World War I provoked fears of a loss of national identity.

In the U.S., three confrontations between different segments of the population proved formative. The first pitted the Puritans, who were themselves refugees from religious persecution, against Native Americans. The second was the forced migration and enslavement of millions of Africans by descendants of the country’s European settlers. The third was provoked by the migrants, first from Europe, then from Asia, who arrived after the Civil War largely for economic reasons.

Dr. Makari notes that in 1860, 60% of the white population in the U.S. was of British origin, while 35% were broadly classified as German. By 1914, after 20 million immigrants had passed through American ports, 11% of the white population had British roots, 20% German, 30% Italian and Hispanic, and 34% Slavic. The settled sense of identity enjoyed by established white American Protestants was threatened. There was, in particular, a panic about Chinese immigration, even though the number of arriving Chinese was relatively small. This led to the passage of the Chinese Exclusion Act in 1882, prohibiting the immigration of Chinese laborers. In 1892, 241 lynchings were recorded in America. Two-thirds of the victims were black; the remaining third were mostly Chinese and Italian. In 1908, the Harvard philosopher Josiah Royce asked: “Is it a ‘yellow peril,’ or ‘black peril,’ or perhaps, after all, is it not some form of ‘white peril’ which threatens the future of humanity in this day of great struggles and complex issues?”

. . . .

One idea is that there is something fundamentally human here. Early human groups competed for territory. All intruders were enemies. The more you feared and hated outsiders, the better your chances of survival. So xenophobia bestowed an evolutionary advantage. Sports fans are simply expressing inherited tribal instincts. Even babies are frightened by a strange face.

This is a popular one-size-fits-all explanation. But it is problematic. For one thing, anthropologists do not agree that constant strife was the norm during the 95% of human history when small nomadic bands lived by hunting and gathering. The Victorian anthropologist Edward Burnett Tylor said that early humans would have had to choose between marrying out or being killed out. When Europeans in the early 19th century made contact with surviving communities of hunter-gatherers, different bands were observed forming marriage alliances and trading partnerships that generally kept feuds from raging out of control.

. . . .

In the aftermath of World War II and the Holocaust, however, a better explanation of mass hatreds was needed. The orthodox theory in American psychology at the time was behaviorism, which explained habitual attitudes and responses as the products of conditioning: Pavlov’s dogs salivated at the sound of a bell because they had been conditioned to recognize this as a cue for food. In the same sort of way, children are warned against strangers and so conditioned to fear others.

Less orthodox, but more influential in the long run, is the notion of projection. Each of us half-recognizes our shameful desires, infantile fears, aggressive impulses. Instead of dealing with them, we may accuse someone else of harboring those same feelings, cleansing ourselves by shifting the blame onto a scapegoat.

According to yet another analytic theory, the people most susceptible to collective paranoia are the children of strict and demanding fathers whom they feared and adored. Theodor Adorno, the lead author of the classic account “The Authoritarian Personality,” wrote that the typical subject “falls, as it were, negatively in love.” Cowed by the father-figure, he is masochistically submissive to authority and sadistically takes out his anger on the weak.

Link to the rest at The Wall Street Journal (This should be a free link, but PG doesn’t know how many clicks it can handle. He apologizes for the paywall, but hasn’t figured out a way around it.)

Powers and Thrones by Dan Jones audiobook review

From The Guardian:

The latest masterwork from Dan Jones, the British historian and author of The Templars, isn’t short on ambition. Spanning a thousand years, it tells the story of the “awkward slab” of time that is the middle ages, the period between the fall of the western Roman empire in the fifth century AD and the Protestant reformation. Jones, who reads the audiobook himself, lays out his plan in the introduction to “sweep across continents and centuries, often at breakneck pace. We are going to meet hundreds of men and women, from Attila the Hun to Joan of Arc. And we are going to dive headlong into at least a dozen fields of history – from war and law to art and literature.”

He isn’t wrong about the pace: he hopscotches from the collapsing Romans, barbarian migration and the rise of Islamic empires to the age of the Franks, the brutal Mongol superpower and the plague that wiped out millions across north Africa, Asia and Europe. But despite the hectic schedule, his reading feels relaxed as he delights in peculiar details and revels in witty asides. While the tone is confident, Jones mercifully avoids the declamatory style that can afflict historians in performance mode.

Link to the rest at The Guardian

PG noted the OP for two reasons.

  1. In his reading, it isn’t common for a large-circulation periodical to include audiobook reviews.
  2. Having the author voice the audiobook is also somewhat unusual.

Perhaps those who follow audiobooks more closely than PG does will correct any misconceptions PG has about this part of the literary world.

Stillwater: Amanda Knox Reaction & Murder Case Controversy Explained

From ScreenRant:

It’s not strange for filmmakers to take inspiration from real-life people and events, but sometimes the way those stories are handled in fiction harms the people they are based on. Such is the case with Stillwater, which draws on the Amanda Knox case; Knox has called out those involved for profiting off her controversial and complex story. The coronavirus pandemic forced studios to delay their releases and reorganize their schedules, and one of the movies that went through a couple of date changes was Stillwater, directed by Tom McCarthy and starring Matt Damon. Stillwater is finally out, but not without a lot of controversy.

. . . .

Stillwater tells the story of Bill Baker (Damon), an unemployed oil rig worker from Oklahoma who sets out alongside a French woman named Virginie (Camille Cottin) to prove the innocence of his convicted daughter, Allison (Abigail Breslin), who has spent four years in prison for the murder of her roommate. Stillwater premiered at the Cannes Film Festival in July 2021 and was released in theaters at the end of the month, but instead of making headlines for its quality, the movie has been mired in controversy for using Amanda Knox’s case as inspiration without her consent, and Knox has called out Damon and McCarthy on social media.

. . . .

Amanda Knox took to Twitter to call out those behind Stillwater for using her story for profit and dragging her name into it for the sake of marketing. Knox explains that Stillwater has been marketed as being “inspired by the Amanda Knox saga”, focusing on the sensationalist side of what happened to her rather than on the facts. Knox also explains how the authorities, and thus the media, focused on building a specific image of her, even though she is innocent and wasn’t involved in the murder she was accused of and continues to be linked to by the media. There’s also the fact that her story was used without her consent and fictionalized, once again painting her in a false light, with the movie “reinforcing an image of her as a guilty and untrustworthy person”. Knox has invited McCarthy and Damon onto her podcast so they can clear all this up, but there hasn’t been a response from them yet.

. . . .

In 2009, Amanda Knox was wrongfully convicted of the murder of Meredith Kercher, her roommate in Perugia, Italy. What led to that, and what followed for years, was a messy investigation by Italian authorities in which Knox and her boyfriend at the time, Raffaele Sollecito, were portrayed in a negative light, leading to a lot of controversy as the interrogations and the overall investigation were called into question by U.S. lawyers and forensic experts. After a long and tiring legal process, during which Knox points out she had “near-zero agency” and no control over the image the media was building around her, Italy’s highest court exonerated Knox and Sollecito in 2015, but she had already spent almost four years in prison. Knox returned to the US, completed her degree, wrote the book Waiting to Be Heard: A Memoir, and has worked as a journalist and activist ever since.

. . . .

Stillwater isn’t the first production to take “inspiration” from Amanda Knox’s story; Lifetime’s Amanda Knox: Murder on Trial in Italy did so as well, and Knox actually sued over that film.

Link to the rest at ScreenRant

The Ant Man’s World

From The Wall Street Journal:

A well-known portrait of Edward Osborne Wilson shows the smiling Harvard professor hovering, like a benevolent god, over some models of his cherished leafcutter ants. The photograph serves as the cover of “Tales From the Ant World,” Mr. Wilson’s most recent and, by my count, 35th book. The large ant right beneath Mr. Wilson’s chin, a mix of Mars Rover and de Chirico mannequin, wields a leaf almost as big as the entomologist’s head. The proportions seem off, but in a larger sense they really aren’t: As Mr. Wilson, now in his 90s, has reminded us over a long, distinguished career, ants can more than hold their own against humans. There are, by some rough estimates, 10,000 trillion ants in the world at any given moment, and their combined weight (Mr. Wilson, who likes to mock his ineptness at math, nevertheless supplies numbers whenever he can) would match the total weight of the planet’s human population.

Mr. Wilson has always had a knack for reducing complex problems to simple number games, easy-to-grasp metaphors or memorable anecdotes. A world-famous scientist, winner of many medals and honorary doctorates, celebrated for his research on ant communication and evolutionary equilibrium in island settings, he has also swept up awards in the field of literature, including two Pulitzers and, for his novel titled (what else?) “Anthill” (2010), the Chicago Tribune Heartland Prize for fiction. Earlier this year came the ultimate literary sanctification, inclusion in the venerable Library of America, where Mr. Wilson is now rubbing shoulders with just a handful of other nature observers: John James Audubon, Rachel Carson, Loren Eiseley, Aldo Leopold and John Muir. And, as if further corroboration of Wilson’s eminence were needed, we now also have “Scientist,” a full-length biography by Richard Rhodes, whose previous work in the genre includes the popular “John James Audubon: The Making of an American” (2004).

. . . .

Mr. Wilson’s career as a naturalist began during a summer vacation on Paradise Beach, Fla., with an incident so far from paradisal that it would have ended a lesser man’s aspirations: A pinfish the boy had caught jumped and pierced his right eye, blinding it. Yet, although he would later lose some of his hearing, too, young Wilson never wavered in his determination to become a naturalist. He settled on entomology: Keeping his surviving eye trained on the ground, he would track the motions of animals too small to rise to most people’s attention, creatures whose world was ruled not by sight and sound but by taste and smell. “I opened logs and twigs like presents on Christmas morning, entranced by the endless variety of insects and other small creatures that scuttled away to safety.”

Born into a complicated, nomadic family, young Wilson attended more than a dozen schools in 10 years, with no apparent damage to his intellectual development. His sixth-grade teacher in Washington, D.C., noted on his report card: “He writes well and he knows a lot about insects. If he puts those two things together, he might do something special.” And “something special” he did with “those two things.” One of the great pleasures of the new, lavishly illustrated Library of America edition is the opportunity to appreciate the many ways in which Mr. Wilson fuses literature and science. His blend of wisdom and wit even extends to his footnotes: He never saw the Emperor of Germany bird of paradise in the wild, Mr. Wilson admits, but “many Paradisaea guilielmi probably saw me.”

. . . .

Works by scientists don’t, as a rule, contain such perfectly paced sentences as the following tribute to Bulum Valley, New Guinea, where Mr. Wilson went in 1955 to collect ants: “A flock of sulfur-crested cockatoos circled in lazy flight over the treetops like brilliant white fish following bottom currents.” And not too many fiction writers would dare insert such precisely observed passages into their novels as the following, taken from Mr. Wilson’s “Anthill,” a description of a particularly scrappy ant species: “The swollen posterior lobes were filled with adductor muscles that closed the jaws with enough force to cut through the chitinous exoskeleton and muscle of most kinds of insects.” If you think such language is entirely too technical for a novel, you have a point. But I would still suggest that you give “Anthill” a try—the combination of ant lore and human plotting works well enough for Margaret Atwood to have tagged Mr. Wilson, in a review of the book, the “Homer of the Ants.”

Link to the rest at The Wall Street Journal (This should be a free link. If the WSJ does something to make it decay with use, PG apologizes if you hit a WSJ paywall, but hasn’t figured out a way around it.)

PG found Mr. Wilson’s Amazon Author page fascinating for his prolific popular output, which was almost certainly accompanied by a lot of publications in academic journals.

NOTE: The WordPress Kindle Embed function has been blowing up whenever PG hits the Free Preview link below the image of the book cover. If you click the Buy on Amazon link, that will take you to the book’s site on Amazon where the Look Inside feature works fine.

PG hasn’t been able to track down the source of the problem. If someone else has, PG would appreciate an explanation or fix via the Contact PG link at the top of the blog.

The Gilded Edge: Bohemian Tragedy

From The Wall Street Journal:

The Bohemian literary colony at Carmel-by-the-Sea, Calif., gained notoriety in the early 20th century not only for its drunken bonfire parties, embrace of free love, and hosting of left-leaning poets and writers such as Robinson Jeffers, Sinclair Lewis and Jack London. It also became infamous in those decades for a tragic love triangle that resulted in three suicides.

In “The Gilded Edge: Two Audacious Women and the Cyanide Love Triangle That Shook America,” Catherine Prendergast, a professor of English at the University of Illinois, exposes the myth behind the colony’s creation and the desperate powerlessness and exploitation of two women involved in that circle.

The book centers around the poet George Sterling, his wife, Carrie, and the poet Nora May French. Sterling and French had a passionate love affair, and over the course of the unfurling tragedy, all three ended up taking their own lives.

Ms. Prendergast, in her first work of narrative nonfiction, organizes her book as a dual narrative: the story of the characters in the love triangle, interwoven with her own detective work in the archives. She ties these two strands together beginning in San Francisco, less than a year after the earthquake and firestorms of 1906 that razed much of the city. French takes abortion pills to end a pregnancy, but also writes about the experience amid her painful contractions. “It takes some kind of woman to write a letter about an abortion to her boyfriend while she’s administering it,” notes Ms. Prendergast, who soon adds that it is one of the very few early-20th-century first-person accounts of abortion in existence. French survives that terrible experience, but nine months later, despondent and with “no taste for the poor compensations of living,” she dies in Carrie’s arms, at the Sterlings’ cottage in Carmel-by-the-Sea.

In explaining why this happened, and the subsequent cyanide suicides of George and Carrie, Ms. Prendergast examines the seamy truth of the Carmel colony—that Sterling was, in fact, hired by the Carmel Development Co. to entice his circle of San Francisco friends to what was then “a square mile of nearly barren dirt next to a bay.”

Carrie, whose mother had run a boardinghouse, often found herself single-handedly feeding the colony’s residents in her cottage, struggling to find adequate provisions when money was tight. Meanwhile, George was openly carrying on affairs with women who visited or decamped to Carmel, most notably French but many others as well. Both Carrie and Nora were what were then called New Women, those on “the trailing edge of the Gilded Age who sought to enjoy the spoils of economic expansion.”

Sterling, a protégé of the writer Ambrose Bierce, was San Francisco’s unofficial poet laureate and a prominent member of the city’s Bohemian Club. A mostly forgotten poet today, he wrote plays for the club’s midsummer gatherings and was held in high regard by many of its members.

But in the years after French’s death, Sterling struggled with alcohol and faced an uncertain future, with constant financial stress. In 1926, a few days after the 19th anniversary of French’s suicide, Sterling arranged a banquet at the Bohemian Club for the critic H.L. Mencken, but never made it to the festivities. He burned most of his papers and killed himself.

Ms. Prendergast makes a convincing argument that French, who died at the age of 26, was a more gifted poet than Sterling. She was, as the author tells us, “the sensation in her time.” Yet she was repeatedly exploited. “Nora May French, whose reputation was used to bolster the colony’s image, was passed along a line of Bohemian men who treated her as a perpetual ingenue, co-opting her talent in an attempt to claim her as their personal discovery; they plied her with unwanted editorial advice while maneuvering her toward the bedroom.”

Link to the rest at The Wall Street Journal (The link is supposed to be free, but PG apologizes if you hit a paywall. He hasn’t figured out a way around it.)

PG will not include an Amazon ad showing the cover because, while the marketing department for the idiot publisher (Randy Penguin) likely worked hard to get a Wall Street Journal weekend review, they’re holding the book for its official release date (October 12) to make a big splash in physical bookstores.

PG predicts that more than a few WSJ readers (who, on average, have lots of discretionary income with which to purchase interesting books, antique cars and a lot of other things) will have forgotten about the book and the review in nine days. PG just checked to confirm that The Wall Street Journal is the largest paid circulation newspaper in the US.

With an apparent list price of $28 for the ebook (discounted to $14.99 by Zon), the same price as the hardcover, Randy Penguin is apparently trying to induce people to purchase the printed book, which generates a much lower per-copy profit for the publisher than the ebook does because Bad Amazon or something like that.

Apparently, there is not a review of the book in The New York Times (whose Sunday issue has less than half the circulation of the WSJ). If one appears closer to the release date, we’ll know that the geniuses of Randy Penguin mistook the little bang for the big bang in two different ways.

Inside the rise of influencer publishing

From The New Statesman (UK Edition):

“We live in a world where everyone is a brand,” said Laura McNeill, a literary agent at Gleam Titles, which was set up by Abigail Bergstrom in 2016 as the literary arm of the influencer management and marketing company Gleam. Many of the UK’s biggest selling books of the last few years, from feminist illustrator Florence Given’s Women Don’t Owe You Pretty to Instagram cleaning phenomenon Mrs Hinch’s Hinch Yourself Happy, have been developed at the agency, and then sold for huge sums to traditional publishing houses.

Celebrity autobiographies and commercial non-fiction have existed for a long time. Gleam Titles’ modus operandi is more specific: it has a focus on “writers who are using social media and the online space to share their content in a creative and effective way”. The term “author”, for the clients with whom McNeill and her colleagues work, may be just one part of a multi-hyphen career that also includes “Instagrammer”, “podcaster” or “business founder”. These authors – whose books will become part of their brands – therefore require a different kind of management to traditional literary writers. “I do think the move to having talent agencies with in-house literary departments comes from these sorts of talents being a bit more demanding,” McNeill said. “I don’t want to come across as if those clients are difficult. But they are different.”

The biggest draw for publishers bidding for books by influencers is that they have committed audiences ready and waiting. Gleam understands the importance of these figures: on its website, it lists authors’ Instagram and Twitter followings beneath their biographies. When publisher Fenella Bates acquired the rights for Hinch Yourself Happy in December 2018, she noted Sophie Hinchcliffe’s impressively quick rise on Instagram, having grown her following from 1,000 to 1.4 million in just six months. Upon publication in April 2019, the book sold 160,302 copies in three days, becoming the second fastest-selling non-fiction title in the UK (after the “slimming” recipe book Pinch of Nom).

Anyone who has harnessed such an audience to sell products, promote a campaign, or otherwise cultivate a successful personal brand is an exceptionally desirable candidate to a publisher that wants to sell books. What’s more, the mechanics of social media mean the size of these audiences is easily measurable, making the authors “cast-iron propositions” for publishers, said Caroline Sanderson, the associate editor of the trade magazine the Bookseller, who has noticed a huge increase in the number of books written by social media stars over the last couple of years.

A spokesperson for Octopus Books, which published Florence Given’s Women Don’t Owe You Pretty in June 2020, suggested that a book deal can raise an influencer’s profile too. When the book was acquired, Given had approximately 100,000 followers on Instagram. “Her book was acquired because she was an exceptional writer, not because she was an influencer,” they said. “By the time it was announced, she had 150,000 followers and when the book was published her audience had jumped to circa 350,000 followers. As the book and its message grew, so did her audience.” Women Don’t Owe You Pretty has spent 26 weeks in the Sunday Times bestseller charts according to data from Nielsen BookScan, and, as of August 2021, has sold over 200,000 copies.

Link to the rest at The New Statesman (UK Edition)

PG reminds one and all that, unlike plebeian self-publishers, traditional publishers are curators of culture.

The Eternal Decline and Fall of Rome: The History of a Dangerous Idea

From The Wall Street Journal:

This is a history of Rome in which the first name is that of Donald Trump and Ronald Reagan’s name almost the last. President Trump earns his place with his inaugural address promising to “make America great again,” President Reagan with a speech in 1969 on the theme of “decline and fall” in which the greatest empire in Western history collapsed in bureaucracy, excessive welfare payments, taxes on the middle class and long-haired students wearing makeup. Edward J. Watts, a professor of history at the University of California, San Diego, is a scholar of the later ancient world, who takes his readers from republican Rome to Republican Washington with a resounding theme that anyone promising to restore lost greatness is probably up to no good.

Throughout the years of his story he finds a range of cases where politicians first claim that society is “becoming worse” than it was during a great past and then “suggest a path toward restoration that consists of rebalancing society to address the problems they identify.” His modern abusers of history come from Spain and the Philippines as well as the U.S. When “radical innovation” is dressed as the “defense of tradition” he sees a trail of victims—immigrants, dissidents and the young.

Roman history, he argues, is the most abused in this fashion because it is absolutely at the heart of Western culture. President Trump, after his appearance in Mr. Watts’s first line, is not mentioned by name again and no one has ever suggested him as a student of Classics. Yet Mr. Watts is not the first to point out the real-estate magnate’s instinctive grasp of rhetorical themes—populist anti-elitism as well as nostalgia—that were well-tested over the Roman ages.

This is a powerful lens through which to view the past, both for those who already think they know it well and those who have practical uses for it. The first villains in the book are identified even before Rome has an emperor, led by the “cynical” Marcus Porcius Cato, who blamed immigrant Greeks for corrupting the Roman young in the early second century B.C. Cato is followed by the down-at-heel aristocrat Lucius Cornelius Sulla, who in the 80s B.C. slaughtered thousands of his fellow citizens in a program of turning back the clock toward a better age. By the end of the book, Mussolini in Ethiopia and Rodrigo Duterte in Manila have joined other villains in what Mr. Watts sees as a pattern of disguising brutal policies within disingenuous history.

There are surprisingly generous words for leaders regularly seen as the worst of their kind, the emperors Caligula (A.D. 37-41) and Nero (54-68), both of whom “prized stability and continuity” with the immediate past instead of embracing the “language of Roman decline and renewal.” These men may have been vicious fantasists, claiming divinity and artistic genius for themselves, but they did not inflict a political fantasy of restoration.

It is hard to make heroes of Caligula and Nero. A firmer positive verdict goes to Antoninus Pius (138-61), a “savior and restorer” in the eyes of those to whom he sent disaster relief, and to the first African emperor, Septimius Severus (193-211), who restored the fabric of Rome at the end of the second century without claiming to be restoring any grander concept. This is the model that Mr. Watts approves. In his final paragraph, he offers his readers two approaches to what he perceives as pressing modern crises—modern “political instability, environmental degradation, wealth inequality and climate change.” Some, like Sulla, create scapegoats. Others, like Antoninus Pius, aim to bring society together. President Trump was certainly a Sulla: whether his successor is an Antonine, Mr. Watts does not say.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Hurts So Good

From The Wall Street Journal:

A few years ago, an Australian scientist was bushwhacking through the wilderness when he felt a twig snap against his leg. Or so he thought. He’d actually been nipped by an Eastern brown snake, one of the most venomous serpents on Earth. Oblivious, he walked on and even went swimming in a nearby river before blacking out and nearly dying.

We’ve probably all heard similar stories, about athletes or warriors who suffer serious injury but power through without realizing they’re hurt. What’s surprising is what happened next. Nothing if not intrepid, the scientist plunged back into the bush six months later for another hike—at which point he again felt something snap against his leg. He crumpled to the ground in agony, writhing and screaming.

But this time, it really was just a twig. Identical sensation, completely different reaction. “There is no grievous injury . . . just a very powerful memory of last time,” explains science writer Leigh Cowart about the story. “The basic sensory processing is the same, but the cognitive understanding of the pain differs.” All of which goes to show that, for something so basic to human experience, pain remains a highly subjective and even slippery phenomenon.

There’s possibly no one alive more qualified to write about pain than Leigh Cowart, who uses the pronoun they and prefers the Mx. honorific. A self-described “gorehound,” the author has been, at different points in life, “a ballet dancer, an overexerciser, a serious bulimic and self-harmer, a tattoo aficionado” and a hard-core BDSM enthusiast. This eye-opening book, “Hurts So Good: The Science and Culture of Pain on Purpose,” explores why so many people pursue painful activities like these, and especially what people get out of pain when they encounter—or achieve—it. “Many people engage in the ritual of deliberately feeling bad to feel better,” the author notes, “and once I started looking for the pattern, I saw it everywhere.”

. . . .

Beyond plumbing their personal past, the author also engages in what might be called gonzo science writing. They dive into one excruciating situation after another (a polar bear plunge, a chili pepper-eating contest), and things go hilariously awry. The mush from one superhot pepper (2.2 million Scoville units; jalapeños max out at 8,000) burns the author’s mouth like “Dante’s gazpacho.” In their stupor, they then rub some into their eye. The author is especially good at describing escalating pain: just when you think a passage has reached a crescendo, Mx. Cowart ups the ante with some new turn of phrase. More than once, I found myself sucking in my breath and feeling my feet tingle as some new horror unfolded on the page.

I especially enjoyed the chapter on extreme running, which covers the fiendish Big Dog’s Backyard Ultra in Tennessee. Every hour, the contestants in this ultramarathon have to complete a four-mile circuit. Doesn’t sound too bad, except that the race sometimes continues all day and all night for nearly three days, with zero breaks. Quite literally, the last person standing wins. Overall, the chapter is a beautiful reflection on the capacity for human endurance, and for pushing yourself beyond what you thought possible. It’s also wickedly funny. God help me, but I still laugh at one poor soul who, 40-some hours in, pitched forward in exhaustion and crashed asleep atop a mailbox.

Yet this running chapter does highlight a problem with the author’s objective to find masochists everywhere they looked. Before the Tennessee race, the organizer initially revoked the author’s press pass because he objected to the pastime being characterized as masochism. As he wrote, “like many sport[s], there is discomfort involved, but it is a cost of competition, not an objective.”

The author objects to that distinction, but I think the organizer is right. For most runners and ballet dancers, pain is a byproduct of their ultimate goal—to run fast or dance beautifully.

. . . .

[T]his book makes a far better case for the importance of pain in dance or athletics than I expected. Imagine you could win an Olympic marathon without enduring any pain. You’d still have to train, but you could sidestep all the misery—the soreness, the burning lungs, the bloody blisters, the toenails falling off. Would you accept this deal? Many of us probably would; suffering stinks. But the author makes a strong argument that the medal would mean far less to you than to someone who suffered for it. Suffering creates meaning, and the joy of victory is sweeter for having suffered.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

The Authority of the Court and the Peril of Politics

From The Wall Street Journal:

In the United Kingdom, there is a tradition of printing 100-page books—booklets, really—from lectures given by notable judges and lawyers. The Hamlyn Lecture series, for example, has featured such distinguished talks as Lord Denning’s “Freedom Under the Law” (1949), Professor Arthur Goodhart’s “English Law and the Moral Law” (1953) and Dean Erwin Griswold’s “Law and Lawyers in the United States” (1964). The primers are collectible, memorable and quotable.

Now Harvard University Press has perhaps embarked on a similar plan for Harvard Law School’s annual Scalia Lecture series, instituted in 2013. This year the program turned to Justice Stephen Breyer, who has thought deeply about judicial power, the rule of law and the role of the judiciary in the American polity. Perhaps these three subjects are in the nature of a trinity: three that make up one. In any event, their position in the U.S., when compared to the rest of the world, has been enviably secure. Yet insiders know that, here as elsewhere, the institution is perennially precarious.

In April Justice Breyer spoke from a lectern to a Zoom audience, and now his speech is preserved in book form. Those wishing to know Justice Breyer’s thoughts can choose either to read the book or to watch the two-hour speech on YouTube. You’d feel edified in doing either.

Quoting Cicero, Justice Breyer argues that the only way to ensure obedience to the Supreme Court’s pronouncements is to convince people that the Court deserves obedience because its decisions are just. That means an observer must assess not the justness of each individual decision, but the justness of the Court’s decisions collectively and in general.

In support of this thesis, Justice Breyer gives a mini-lecture on American constitutional history and on the struggle, when interpreting the Constitution, for judicial supremacy. He explains how Chief Justice John Marshall, in Marbury v. Madison (1803), decided the case in a most unexpected fashion—pleasing President Thomas Jefferson with the specific result but only by establishing the Court’s ability to declare acts of Congress unconstitutional. That all but guaranteed acceptance of the Court’s power, at least in that case, while establishing the doctrine of judicial review.

Nearly 30 years later, when the Supreme Court declared that the State of Georgia had no rightful control over Cherokee lands there—lands where gold had been discovered—President Andrew Jackson and the state of Georgia both ignored the decision. There was no enforcement. As a result, the Cherokee Nation was driven to Oklahoma on the infamous Trail of Tears.

After that outrage, adherence to the principle of judicial review was, reassuringly, mostly re-established. Yet even as late as the 1950s, with Brown v. Board of Education, it wasn’t at all clear whether the Court’s decision would be enforced by the Executive Branch. Some today may have forgotten that, to enforce Brown, President Eisenhower sent 1,000 parachutists from the 101st Airborne Division into Arkansas. Central High in Little Rock would no longer be white-only. In taking that bold action, Eisenhower ignored the advice of James Byrnes, the South Carolina governor who had once briefly served on the Supreme Court, before returning to the Roosevelt administration to aid the war effort. At the time of Brown, Byrnes advocated taking the Jacksonian stance of doing nothing to enforce the Court’s decree. The U.S., in other words, came perilously close to a 20th-century trail of tears—one that would have resulted from reducing the Brown decision to empty words on a piece of paper.

. . . .

If the events of the past year have taught us anything, it’s that the established institutions of the United States are more fragile than almost any of us had previously thought. We used to believe, for example, that strongman coups were exclusively in the domain of Third World countries. Now we know that the potential is also here on our shores.

Meanwhile, judicial institutions are under attack once again. We can’t say “under attack as never before,” because Justice Breyer shows us that such attacks are a persistent problem. Although he abjures speaking directly about the current Court-packing proposals, the author wants to “ensure that those who debate these proposals also consider an important institutional point, namely how a proposed change would affect the rule of law itself.” His voice is a powerful one, and the brevity of this book, together with its readability, should ensure its lasting influence. Like anyone else, Washington leaders can absorb its message in a single evening.

. . . .

The central question is whether courts should interpret legal documents by giving them a fair reading of what they denoted at the time of adoption, or whether courts can interpret those texts according to their broad purposes (not getting too caught up in grammar and historical dictionaries) and even the desirability of results. As Justice Breyer puts it: “Some judges place predominant weight upon text and precedent; others place greater weight on purposes and consequences.” As the popular mind conceives it, conservatives do the former, and liberals do the latter. And the latter approach, according to Scalia, leads inevitably to appointing judges who will vote for outcomes they personally favor. Hence the process becomes more politicized the further judges stray from the text.

Link to the rest at The Wall Street Journal (This should be a free link, but, if it doesn’t work, PG apologizes for the paywall, but hasn’t figured out a way around it.)

The United States Constitution includes only a broad overview of the US court system. Here is all that document says about courts:

Article III.

Section. 1.

The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behaviour, and shall, at stated Times, receive for their Services, a Compensation, which shall not be diminished during their Continuance in Office.

Section. 2.

The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority;—to all Cases affecting Ambassadors, other public Ministers and Consuls;—to all Cases of admiralty and maritime Jurisdiction;—to Controversies to which the United States shall be a Party;—to Controversies between two or more States;— between a State and Citizens of another State,—between Citizens of different States,—between Citizens of the same State claiming Lands under Grants of different States, and between a State, or the Citizens thereof, and foreign States, Citizens or Subjects.

In all Cases affecting Ambassadors, other public Ministers and Consuls, and those in which a State shall be Party, the supreme Court shall have original Jurisdiction. In all the other Cases before mentioned, the supreme Court shall have appellate Jurisdiction, both as to Law and Fact, with such Exceptions, and under such Regulations as the Congress shall make.

The Trial of all Crimes, except in Cases of Impeachment, shall be by Jury; and such Trial shall be held in the State where the said Crimes shall have been committed; but when not committed within any State, the Trial shall be at such Place or Places as the Congress may by Law have directed.

Section. 3.

Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort. No Person shall be convicted of Treason unless on the Testimony of two Witnesses to the same overt Act, or on Confession in open Court.

The Congress shall have Power to declare the Punishment of Treason, but no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.

Unlike any other public office mentioned in the Constitution, federal judgeships at all levels are lifetime appointments: judges serve until they voluntarily retire or die, unless they are removed through impeachment.

The specific language is:

The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behaviour, and shall, at stated Times, receive for their Services, a Compensation, which shall not be diminished during their Continuance in Office.

The “good Behaviour” language means that federal judges can be removed from office only via an impeachment process.

The House of Representatives impeaches a judge and the Senate holds a trial to determine whether removal is justified. A simple majority vote in the House is required to impeach and a two-thirds majority is required in the Senate to convict the judge of the charges laid in the impeachment and remove the judge from office.

Only one Supreme Court justice has ever been impeached: Samuel Chase, who was appointed an Associate Justice in 1796 by George Washington.

President Thomas Jefferson was upset at several federal judges who had held some of his legislative initiatives to be unconstitutional. Jefferson and his supporters in the House and Senate repealed the Judiciary Act of 1801, under which a set of federal courts subordinate to the Supreme Court had been established, thus abolishing those newly created courts and effectively terminating their judges’ lifetime appointments as provided in Article III of the Constitution.

Thereafter, Chase severely and publicly criticized this action. For this, he was impeached by the House of Representatives in 1804. Following a trial in the Senate, several votes were taken, but the two-thirds majority required to remove Chase from the bench could not be attained. Chase continued to serve on the Supreme Court until his death in 1811.

See Wikipedia for more information about Chase. This Wikipedia article includes lots of links to third-party information regarding Chase and his trial.