‘Franklin & Washington’ – Friends at the Founding

From The Wall Street Journal:

At a time when American politics is increasingly dominated by rancor, turbulence and division, it is perhaps unsurprising that books about the Republic’s revered Founding Fathers are as popular as ever. To authors and publishers, the select band of men who championed independence from Britain and hammered out the Constitution resembles a pack of cards to be endlessly reshuffled and dealt out in different hands. Alongside biographies or group portraits of the Founders, another approach is to examine a pair whose alliance—or, more often, rivalry—is credited with forging the nation. Recently, for example, Alexander Hamilton was separately linked with George Washington, Thomas Jefferson and his nemesis, Aaron Burr, while several eminent historians have been drawn to the power struggle between Jefferson and John Adams.

In “Franklin & Washington: The Founding Partnership,” Edward J. Larson takes up a pairing that has hitherto been neglected. Few would quibble with Mr. Larson’s verdict that George Washington and Benjamin Franklin rank as the pre-eminent Founders: Without the former’s determined leadership of the Continental Army during the Revolutionary War and the latter’s assiduous cultivation of the French support that ultimately secured victory, there would be no United States.

Mr. Larson acknowledges that these foremost Founders make an unlikely couple. The gregarious and folksy Franklin (1706-90) was old enough to be Washington’s father. Born in Boston to humble parents but associated with Philadelphia after he moved there in his teens, Franklin was an enlightened polymath, a printer, scientist and inventor who became an opponent of slavery. Washington (1732-99), the restrained, status-conscious, slave-owning Virginian gentleman, can seem like Franklin’s opposite. If the aloof Washington came to be regarded as his country’s father, Mr. Larson observes, Franklin was its approachable uncle.

Despite marked differences in background and temperament, Washington and Franklin had traits in common. Dedicated Freemasons with a fondness for English porter beer, both men were driven to improve their lot through hard work. Crucially, they shared a vision of an independent and united America based upon a strong central government, and each strove tirelessly to achieve that end.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Why Is The Book Of Kells Important?

From BookRiot:

There are a number of literary tourist things to do in Dublin, Ireland. There are the numerous James Joyce-themed tours. Then there is the Dublin Writers Museum. As you trek around the city, you’ll note the number of signs pointing to Trinity College, where you can view the Book of Kells. Though we’ve given you some facts about the Book of Kells before (and noted the strict copyright), here’s a little bit more information on this prestigious manuscript. Here are four reasons why the Book of Kells is important:

1. IT’S THE MOST FAMOUS OF THE EARLY MEDIEVAL MANUSCRIPTS

This may sound like circular logic, but the Book of Kells has been well known for a very long time. We have a few well-preserved illuminated manuscripts from before the 12th century. The Book of Durrow, the Lichfield Gospels, and the Lindisfarne Gospels date from a similar period, but even among those three, the Book of Kells occupies a unique position of veneration.

A lot of this has to do with the very precise illumination of the manuscript. Though it shares some iconography and styles with the Lindisfarne Gospels, the actual hand-work is more detailed. The illustrations are lush and layered.

People have been writing about the Book of Kells since the 12th century. The 12th-century historian Giraldus Cambrensis even concluded the book had been written by an angel.

It has been housed at Trinity College since the 17th century, given to the school for safekeeping. It has been on display there since the 1800s, inspiring visitors ever since.

2. ITS HISTORY IS A MYSTERY

Unlike other manuscripts produced in this period, the Book of Kells did not document its own creation. The most popular theory posits that the book was created, or at least begun, on the island of Iona. Later, the book (and its creators) fled Iona when the island was sacked by Vikings, ultimately ending up at the Abbey of Kells. Another popular theory holds that it was produced at Kells to commemorate the centennial or bicentennial of the founding of the Abbey.

We aren’t even sure about the year of production, only that it dates to somewhere between the 8th and 9th centuries. Nor is anyone really sure of its production heritage, which has variously been claimed to be English, Irish, or Scottish.

The fact that it is unsigned and unattributed is also unusual. In this period and region, it would not have been uncommon for the book to carry some attribution to its maker. Sure, in other regions, manuscripts could be “manufactured” by many nameless hands. However, it is a bit strange for a piece like this, with its artistry and illustration, to have no attribution at all. Monks and scribes producing Insular books at this time generally had a sense of their own artistry, leading some to sign their pieces or to attach some specific attribution to them. It was also not uncommon to record a book’s ownership in the marginalia, and the near-total absence of such records in this book is strange. We do, however, have some records from the Abbey of Kells written into the book starting in the 11th century.

It’s thought that about four artists and three scribes worked on the book and its illustration. Sadly, they remain nameless, but that doesn’t stop the Book of Kells from being important.

Link to the rest at BookRiot

Publisher Bob Dees on selling books in foreign markets

From The Globe and Mail:

Selling the rights to publish books in other countries is one of the great subterranean aspects of the Canadian publishing business, adding as much to bottom lines as it does to international reputations. There’s a whole government agency devoted to giving publishers a leg up, and international book fairs happen every month, from New Delhi to London to Beijing to the biggest of them all each October in Frankfurt. We asked Bob Dees, the publisher of Toronto’s Robert Rose Books, about his experience getting one of his books into the German market.

Do you try to sell foreign language rights for all your books?

No. There are certain titles that are clearly North American.

. . . .

You Are What Your Grandparents Ate [by former Globe and Mail columnist Judith Finlayson] had not just an international market opportunity, but it had a market opportunity in Europe that was larger than in North America because of a greater awareness of the subject matter of epigenetics [the study of biological mechanisms that turn genes on or off]. We came across a recent German edition of National Geographic devoted to the anniversary of the Dutch Hunger Winter and how it was still influencing the descendants of those who lived through it, subjects that are integral to You Are What Your Grandparents Ate. Our foreign language rights manager at the time, Nina McCreath, went online to find German publishers who specialized in this intersection of health and science and found four, and based on their responses, she booked appointments for Frankfurt 2018.

Then what happened?

We had sales materials prepared, about eight or 10 pages, to give an indication of the design, which in this case is unique, and its capacity to attract a readership to the subject. And we had one edited chapter that we were able to share with them both in hard copy and electronic copy. We’re fortunate that English tends to be the language business is done in, even in Frankfurt. Even so, not everyone comes with an equal level of English, and we were even more fortunate there because Nina speaks fluent German. We were able to use that as a tool to create a more successful relationship with these potential publishers. Many publishers may use an agent who speaks the language in question. Having a foreign sales agent with at least a couple of extra languages is incredibly valuable.

. . . .

How did this one get done so quickly?

We try to be an easy publisher to deal with for foreign language agreements. Big-name publishers have a reputation for being difficult to deal with. It doesn’t mean we don’t try to get the best deal, but sometimes the legal departments can be problematic; some will take three or four months to do an agreement, and some publishers don’t want to wait that long. It took us probably about two or three weeks.

Link to the rest at The Globe and Mail

https://amzn.to/2wOQBpd

You Are What Your Grandparents Ate takes conventional wisdom about the origins of chronic disease and turns it upside down. Rooted in the work of the late epidemiologist Dr. David Barker, it highlights the exciting research showing that heredity involves much more than the genes your parents passed on to you. Thanks to the relatively new science of epigenetics, we now know that the experiences of previous generations may show up in your health and well-being.

Many of the risks for chronic diseases — including obesity, type 2 diabetes, high blood pressure, heart disease and dementia — can be traced back to your first 1,000 days of existence, from the moment you were conceived. The roots of these vulnerabilities may extend back even further, to experiences your parents and grandparents had — and perhaps even beyond.

Worried a Robot Apocalypse Is Imminent?

From The Wall Street Journal:

You Look Like a Thing and I Love You

By Janelle Shane

Elevator Pitch: Ideal for those intrigued and/or mildly unnerved by the increasing role A.I. plays in modern life (and our future), this book is accessible enough to educate you while easing anxieties about the coming robot apocalypse. A surprisingly hilarious read, it presents a view of A.I. that is more “Office Space” than “The Terminator.” Typical insight: A.I. that can’t write a coherent cake recipe is probably not going to take over the world.

Very Brief Excerpt: “For the foreseeable future, the danger will not be that A.I. is too smart but that it’s not smart enough.”

Surprising Factoid: A lot of what we think are social-media bots are almost certainly humans being paid (poorly) to act as bots. People stealing the jobs of robots: How meta.

. . . .

The Creativity Code

By Marcus du Sautoy

Elevator Pitch: What starts as an exploration of the many strides—and failures—A.I. has made in the realm of artistic expression turns out to be an ambitious meditation on the meaning of creativity and consciousness. It shines in finding humanlike traits in algorithms; one chapter breathlessly documents the matches between Demis Hassabis’s algorithm and a world champion of Go, a game many scientists said a computer could never win.

Very Brief Excerpt: “Machines might ultimately help us…become less like machines.”

Surprising Factoid: As an example of “overfitting,” the book includes a mathematical model that accidentally predicts the human population will drop to zero by 2028. Probably an error, but better live it up now—just in case.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Serfs of Academe

From The New York Review of Books:

Adjunct, a novel by Geoff Cebula, is a love letter to academia, a self-help book, a learned disquisition on an obscure genre of Italian film, and a surprisingly affecting satire-cum-horror-comedy. In other words, exactly the kind of strange, unlucrative, interdisciplinary work that university presses, if they take any risks at all, should exist to print. Given the parlous state of academic publishing—with Stanford University Press nearly shutting down and all but a few presses ordered to turn profits or else—it should perhaps come as no surprise that one of the best recent books on the contemporary university was instead self-published on Amazon. Cebula, a scholar of Slavic literature who finished his Ph.D. in 2016 and then taught in a variety of contingent positions, learned his lesson. Adjunct became the leading entry in the rapidly expanding genre of academic “quit-lit,” the lovelorn farewell letters from those who’ve broken up with the university for good. Rather than continue to try for a tenure-track teaching gig, Cebula’s moved on and is now studying law.

The novel’s heroine, Elena Malatesta, is an instructor of Italian at Bellwether College, an academically nondescript institution located somewhere in the northeast. Her teaching load—the number of officially designated “credit hours” per semester—has been reduced to just barely over half-time, allowing the college to offer minimum benefits even though her work seems to take up all of her day. Recently, the college has been advised to make still deeper cuts to the language departments, which are said to not only distract students but to actively harm them by inducing an interest in anything other than lucre. Elena responds with a mixture of paranoia and dark comedy: after the cuts there will be only so many jobs in languages left—maybe the Hindi teacher, anxious about her own position, is conspiring to bump her off? Then Elena had better launch a preemptive strike: this could be a “kill or be killed” situation.

Like a good slasher flick, Adjunct proceeds through misdirection and red herrings, pointing to one potential perp after another—does the department chair have a knife?—to keep the reader as anxious as Elena, while her colleagues, first to her delight and then alarm, begin disappearing. Conveniently, Elena’s own research centers on Italian giallo films, which combine elements of suspense and horror and are one of the cinematic sources for American classics like Halloween (1978), A Nightmare on Elm Street (1984), and Scream (1996). As she flees into the safe confines of her office hours—the attackers’ only fear seems to be endangering the college’s primary profit source, the students—she thinks of the films she has assigned to her class and the ways they mirror her own predicament. A giallo, Elena thinks, depicts a world where the “circumstances determining who would live or die were completely ridiculous,” a life of “pervasive contingency”—“contingent” being the most common term for part-time and contract-based academic labor. This is why horror, for Cebula, becomes the natural genre through which to depict the life of the contemporary adjunct, which is to say, the majority of academic workers today.

One suspects that Cebula’s inspiration for this lark came directly from genuine academic horror stories. Among the best known involves an adjunct at Duquesne University in Pittsburgh who taught French for twenty-five years, her salary never rising above $20,000, before dying nearly homeless in 2013 at the age of eighty-three, her classes cut, with no retirement benefits or health insurance. At San José State University in Silicon Valley, according to the San Francisco Chronicle, one English teacher lives out of her car, grading papers after dark by headlamp and keeping things neat so as to “avoid suspicion.” Another adjunct in an unidentified “large US city,” reports The Guardian, turned to sex work rather than lose her apartment.

Though these stories are extreme, they are illustrative of the current academic workplace. According to the UC Berkeley Labor Center, 25 percent of part-time faculty nationally rely on public assistance programs. In 1969, 78 percent of instructional staff at US institutions of higher education were tenured or on the tenure track; today, after decades of institutional expansion amid stagnant or dwindling budgets, the figure is 33 percent. More than one million workers now serve as nonpermanent faculty in the US, constituting 50 percent of the instructional workforce at public Ph.D.-granting institutions, 56 percent at public master’s degree–granting institutions, 62 percent at public bachelor’s degree–granting institutions, 83 percent at public community colleges, and 93 percent at for-profit institutions.

To account for these developments, some may look to the increasing age of retirement of tenure-track faculty, which now stands at well over seventy. But, anecdotally at least, the reason many tenured faculty wait so long to retire may be the knowledge that they will not be replaced—when a Victorian poetry professor calls it quits, so, at many institutions, does her entire subfield. Who wants to know they will be the last person to teach a seminar on Tennyson? Others will blame the explosion of nonacademic staff: between 1975 and 2005, the number of full-time faculty in US higher education increased by 51 percent, while the number of administrators increased by 85 percent and the number of nonmanagerial professional staff increased by 240 percent. Such criticism can easily become unfair, as when teachers resent other workers who have taken over some of their old tasks—in fact sparing them chores like advising or curriculum development—or when they act as though the university could do without programs that have made possible greater openness (such as Title IX officers and support for first-generation students).

. . . .

Just as business managers in private industry squeezed workers to satisfy ever more demanding shareholders, taking home a cut for themselves in the process, so university administrators have reduced teacher pay and increased job insecurity in an effort to make possible expansions in operations that typically resulted in yet more administrative and professional staff, and higher salaries for those who directed them. In this process, teachers, because of their commitment to their jobs and the relative nontransferability of their skills, were simply more exploitable than, say, financial compliance officers. Notably, between 1975 and 2005, the proportion of part-time administrators in higher education decreased from 4 percent to 3 percent, even as the proportion of part-time adjuncts exploded. As one college vice-president advised a group of adjuncts at a large community college in the 2000s (the specific details are left vague for fear of retaliation), “You should realize that you are not considered faculty, or even people. You are units of flexibility.”

Link to the rest at The New York Review of Books.

If PG were King for a day, he would require that colleges and universities publish annual statistics disclosing what percentage of their courses are taught by adjunct faculty and the names of the classes and departments in which those classes reside. He has little doubt that someone will collect such data and publish comparisons between various institutions.

PG suspects that English Literature and Creative Writing are the professional homes for an outsized portion of adjuncts.

Given the sky-high cost of most colleges in the US these days and the massive debt many students and their families incur to pay those costs, prospective students may wish to know how many classes they will be taking that are taught by part-time or poorly-paid adjunct faculty.

How We Bury the War Dead

A comment about a touching story concerning the service of a soldier in World War II that PG posted a couple of days ago sent him on a short online research project.

From The Wall Street Journal (May 29, 2010):

The U.S. military didn’t always bring home its dead. In the Seminole Indian Wars in the early 1800s, most of the troops were buried near where they fell. The remains of some dead officers were collected and sent back to their families, but only if the men’s relatives paid all of the costs. Families had to buy and ship a leaded coffin to a designated military quartermaster, and after the body had been disinterred, they had to cover the costs of bringing the coffin home.

Today, air crews have flown the remains of more than 5,000 dead troops back to the U.S. since the conflicts in Iraq and Afghanistan began.

For those charged with bringing out the dead, it is one of the military’s most emotionally taxing missions. The men and women of the Air Force’s Air Mobility Command function as the nation’s pallbearers, ferrying flag-draped remains to Dover Air Force Base in Delaware from battlefields half a world away.

The missions take a heavy toll on the air crews, but many of the pilots and loadmasters say their work is part of a sacred military obligation to fallen troops and their families. Air Force Capt. Tenaya Humphrey was a young girl when her father, Maj. Zenon Goc, died in a military plane crash in Texas in 1992. She remembers his body being flown to Dover before his burial in Colorado.

Capt. Humphrey and her husband, Matthew, are now C-17 pilots who regularly fly dead troops back to the U.S. and then on to their home states for final burial. “It’s emotional for everyone who’s involved,” she says. “But it’s important for the family to know that at every step along the way their loved one is watched over and cared for.”

Bringing fallen troops home is a relatively modern idea. Until the late 19th century, military authorities did little to differentiate and identify dead troops. Roughly 14,000 soldiers died from combat and disease during the Mexican-American War of 1846, but only 750 sets of remains were recovered and brought back, by covered wagon, to the U.S. for burial. None of the fallen soldiers were ever personally identified.

The modern system for cataloguing and burying military dead effectively began during the Civil War, when the enormity of the carnage triggered a wholesale revolution in how the U.S. treated fallen troops. Congress decided that the defenders of the Union were worthy of special burial sites for their sacrifices, and set up a program of national cemeteries.

During the war, more than 300,000 dead Union soldiers were buried in small cemeteries scattered across broad swaths of the U.S. When the fighting stopped, military authorities launched an ambitious effort to collect the remains and rebury them in the handful of national cemeteries.

The move “established the precedent that would be followed in future wars, even when American casualties lay in foreign soil,” Michael Sledge writes in “Soldier Dead,” a history of how the U.S. has handled its battlefield fatalities.

. . . .

The relatives of fallen troops in both world wars were given the choice of having their loved ones permanently interred in large overseas cemeteries or brought back to the U.S. for reburial.

Those who wanted their sons or husbands returned to them were in for a long wait. Fallen troops had been buried in hundreds of temporary cemeteries near the sites of major battles throughout Europe. When World War I ended, the families of 43,909 dead troops asked for their remains to be brought back to the U.S. by boat, while roughly 20,000 chose to have the bodies remain in Europe. The war ended in 1918, but the first bodies of troops killed in the conflict weren’t sent back to the U.S. until 1921.

World War II posed a bigger logistical challenge, since American war dead were scattered around the globe. Nearly 80,000 U.S. troops died in the Pacific, for example, and 65,000 of their bodies were first buried in almost 200 battlefield cemeteries there.

Once the fighting ended, the bodies were dug up and consolidated into larger regional graveyards. The first returns of World War II dead took place in the fall of 1947, six years after the attack at Pearl Harbor. Eventually, 171,000 of the roughly 280,000 identified remains were brought back to the U.S.

Today, the remains of 124,909 fallen American troops from conflicts dating back to the Mexican-American War are buried at a network of 24 permanent cemeteries in Europe, Panama, Tunisia, the Philippines and Mexico.

. . . .

The military now goes to tremendous lengths to recover the remains of fallen troops. In March 2002, a Navy SEAL named Neil Roberts fell out of the back of a Chinook helicopter in Afghanistan and was cornered and killed by militants on the ground. The U.S. sent in a second helicopter to attempt a rescue, but six members of its crew were killed in the ensuing firefight.

Then-Brig. Gen. John Rosa, the deputy director of operations for the Joint Staff, told reporters that U.S. commanders ordered the high-risk recovery mission to ensure that Petty Officer Roberts’ body didn’t fall into enemy hands.

“There was an American, for whatever reason, [who] was left behind,” Gen. Rosa said at the time. “And we don’t leave Americans behind.”

The military’s system of concurrent return is basically still in use today, with modern technology cutting the lag time between when troops die in the field and when they are returned to their families down to as little as one day.

On May 16, Navy Petty Officer Zarian Wood, a 29-year-old medic who had deployed overseas less than a month earlier, died from wounds suffered in a bomb blast in southern Afghanistan’s Helmand Province. Marine Cpl. Nicolas Parada-Rodriguez, the son of immigrants who moved to the U.S. two decades ago, was killed in Helmand that same day.

The following evening, the remains of both men were slowly lowered from the cargo deck of a civilian 747 that the military had chartered to fly their bodies back to Dover. Cpl. Parada-Rodriguez’s relatives could be heard weeping as the transfer case carrying his body was taken off the plane.

. . . .

The military doesn’t have air crews who are assigned specifically to the mission of bringing out the nation’s war dead. Instead, the work is assigned to crews depending on their locations and the speed with which they can stop at bases in Afghanistan and Iraq to pick up fallen troops and their military escorts.

Air crews are tight-knit groups of men and women who typically pass the long hours in the air and on the ground telling jokes and needling each other. But veterans of the repatriation missions say the mood among the flight crew changes immediately after they get orders to pick up fallen troops.

“You can sense it in the crew,” says Maj. Brian O’Connell, a C-17 pilot who has flown the remains of a half-dozen soldiers and Marines. “As soon as everybody knows about it, the attitude changes, a lot.”

The long flights from the war zones mean that the air crews spend hours with the flag-covered remains. Air Force Tech Sgt. Donny Maheux, a C-17 loadmaster, says he often finds himself staring at the metallic transfer cases holding the bodies of the dead soldiers and wondering what kind of people they were. “I’m looking at [the remains] the whole flight,” he says. “Sometimes I wonder, ‘What if it was my family on the receiving end?'”

When they land at Dover, the crews often choose to remain with their plane until the families of the dead troops arrive to see the bodies of their loved ones taken off the plane. Since the planes land late at night or early in the morning, it can sometimes be hours before the families arrive for the transfer ceremonies.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

PG has previously posted about a beautiful US Military Cemetery outside of Florence, Italy.

The Legend of Limberlost

From Smithsonian Magazine:

My dear Girl:
In the first place will you allow me to suggest that you forget
hereafter to tack the “ess” on to “author”, because one who writes
a book or poem is an author and literature has no sex.
–Gene Stratton-Porter, letter to Miss Mabel Anderson, March 9, 1923

. . . .

Yellow sprays of prairie dock bob overhead in the September morning light. More than ten feet tall, with a central taproot reaching even deeper underground, this plant, with its elephant-ear leaves the texture of sandpaper, makes me feel tipsy and small, like Alice in Wonderland.

I am walking on a trail in a part of northeast Indiana that in the 19th century was impenetrable swamp and forest, a wilderness of some 13,000 acres called the Limberlost. Nobody knows the true origin of the name. Some say an agile man known as “Limber” Jim Corbus once got lost there. He either returned alive or died in the quicksand and quagmires, depending on which version you hear.

Today, a piece of the old Limberlost survives in the Loblolly Marsh Nature Preserve, 465 acres of restored swampland in the midst of Indiana’s endless industrial corn and soybean fields. It’s not obvious to the naked eye, but life here is imitating art imitating life. The artist was Gene Stratton-Porter, an intrepid naturalist, novelist, photographer and movie producer who described and dramatized the Limberlost over and over, and so, even a century after her death, served as a catalyst for saving this portion of it.

As famous in the early 1900s as J.K. Rowling is now, Stratton-Porter published 26 books: novels, nature studies, poetry collections and children’s books. Only 55 books published between 1895 and 1945 sold upwards of one million copies. Gene Stratton-Porter wrote five of those books—far more than any other author of her time. Nine of her novels were made into films, five by Gene Stratton-Porter Productions, one of the first movie and production companies owned by a woman. “She did things wives of wealthy bankers just did not do,” says Katherine Gould, curator of cultural history at the Indiana State Museum.

Her natural settings, wholesome themes and strong lead characters fulfilled the public’s desires to connect with nature and give children positive role models. She wrote at a pivotal point in American history. The frontier was fading. Small agrarian communities were turning into industrial centers connected by railroads. By the time she moved to the area, in 1888, this unique watery wilderness was disappearing because of the Swamp Act of 1850, which had granted “worthless” government-owned wetlands to those who drained them. Settlers took the land for timber, farming and the rich deposits of oil and natural gas. Stratton-Porter spent her life capturing the landscape before, in her words, it was “shorn, branded and tamed.” Her impact on conservation was later compared to President Theodore Roosevelt’s.

. . . .

One of the movement’s leaders, Ken Brunswick, remembered reading Stratton-Porter’s What I Have Done With Birds when he was young—a vibrant 1907 nature study that reads like an adventure novel. At a time when most bird studies and illustrations were based on dead, stuffed specimens, Stratton-Porter mucked through the Limberlost in her swamp outfit in search of birds and nests to photograph:

A picture of a Dove that does not make that bird appear tender and loving, is a false reproduction. If a study of a Jay does not prove the fact that it is quarrelsome and obtrusive it is useless, no matter how fine the pose or portrayal of markings….A Dusky Falcon is beautiful and most intelligent, but who is going to believe it if you illustrate the statement with a sullen, sleepy bird?

Link to the rest at Smithsonian Magazine

Love Ray and Daddy: The Toohey Family Letter Collection

From The National World War II Museum:

When I read a collection of personal correspondence, I sometimes take for granted that I’m reading someone else’s mail. Maybe it’s because I know most of the people in those letters have passed on. Or maybe subconsciously, I tell myself that placing them into a shoebox for 75 years makes them less like someone else’s mail and more like research material that simply exists to further complete the historical record of World War II. Whatever the case may be, every once in a while, I come across a collection of correspondence that totally absorbs me and I’m reminded that I really am reading someone else’s mail.

Slowly, their names begin to take on personalities, lives come into focus, and stories unfold that feel more like current events rather than history. These are letters that make you feel like you’ve dropped in on lives in progress. Some of the letters make you smile while you’re reading them, and then there are others that just make you want to hug your wife and kids a little tighter when you get home at night.

Those are the kinds of letters Virginia Toohey saved. Most of them are from her husband, PFC Raymond Toohey, who affectionately signed all of his letters, “Love Ray and Daddy.” Others are from Ray’s friends and brother. Some are from the War Department: typed form letters that all close with “My deepest sympathies.” The correspondence is all one-sided, so we really can only imagine Virginia’s thoughts, or what her letters to Ray must have been like.

. . . .

Virginia’s old letters offer a candid look into the lives of everyday, ordinary Americans living during terribly difficult times. When I read them, I share in their hopes and dreams, grieve in their loss, and I’m completely awestruck by their courage. Not the kind that’s measured in medals, but the quiet kind of courage it takes to watch your husband go off to war, or to leave your wife and children behind to go fight. The type of courage it takes to sit down and write a letter to your buddy’s widow so she knows what really happened to her husband. Their family’s story begs to be told.

When Pearl Harbor was attacked on December 7, 1941, Virginia and Ray lived in Long Beach, California. Ray was working as a ship-fitter and Virginia was home taking care of their baby, with another on the way. Ray’s shipyard job was good money, and it was classified as essential to the war effort, so for the time being it kept him out of the army. That’s not to say that the Tooheys were unpatriotic; they were no different from any American family at that time. No one wanted to see their fathers, sons, or brothers go off to war. Virginia was hopeful that Ray would be able to do his part for the war effort at the shipyard.

Millions volunteered, but not nearly enough to field a military large enough to defeat Nazi Germany and Imperial Japan, so the draft was expanded to include men ages 18-45. Ten million would be conscripted into service before it was over. Women were entering the workforce to free up more manpower for the military and the draft board began scrutinizing the merits of the dependency hardship deferments that excused many young fathers like Ray from military service.

By December 1943, Ray’s job could be filled by a woman and his dependent deferments had been eliminated; it was just a matter of time before he was called up. In June 1944, Ray received his induction notice from the Los Angeles County Draft Board and two weeks later he was in the army. As if life wasn’t hard enough already with Ray gone, Virginia and the boys had to make do with less money, a lot less. In the army, Ray earned just about half of his ship-fitter’s salary. In adjusted 2018 dollars, Ray’s army salary, including compensation for dependents, was about $25,000 per year.

Ray completed infantry basic training at Camp Howze, Texas, in January 1945. In spite of the difficulty and great expense of civilian travel in wartime America, Virginia made the trip out for Ray’s graduation; she didn’t want to miss her last chance to see Ray before he shipped out to Europe. On January 27, 1945, Ray’s graduation day, the War Department was busy tabulating casualty figures in Europe for the month. January 1945 was the bloodiest month of the war to date; 70,568 G.I.s were killed or wounded. After a short furlough with Virginia, Ray was on his way to the most dangerous place in the world.

. . . .

When Ray arrived in Europe, he was filtered through a replacement system that can only be described as cold and cruel. Soldiers who trained together in the States were assigned piecemeal to battered combat units in need of personnel, the same way you would order replacement parts for a broken piece of equipment. The system was designed to keep combat units on the line and the brass didn’t take into account or care that it split up buddies. In 1945, many replacements arrived at the front feeling alone and friendless because they were going into combat with complete strangers.

Luckily for Ray, he wasn’t sent downrange alone. He and a buddy from Camp Howze, TX, named Homer Warren were assigned to the same rifle squad in the 9th Infantry Division. In fact, they were literally attached at the hip, as Ray was designated squad automatic rifleman and Homer his assistant. Ray had to lug around the 18-pound Browning Automatic Rifle, or B-A-R as the G.I.s called it, and Homer helped with the ammo. It was a heavy load for Ray’s wiry frame and a lot of responsibility; the B-A-R gunner packed a large portion of the squad’s firepower. One big downside to the job was that the Germans absolutely hated the B-A-R. Firing the weapon in combat was like painting a bull’s eye on your helmet.

. . . .

Back home in Long Beach, Virginia received a steady stream of correspondence from Ray. He tried to write every day. The letters he wrote were mostly about the weather, the current chow situation, or how he saw something that day that reminded him of how much he missed her and the boys. Ray reported from the front that the weather was always pleasant, there was plenty of good chow, and that German children loved gum and candy, just like little American boys and girls. If Ray was having a bad day, he certainly didn’t write Virginia and tell her about it. There’s no question about it, though: an infantryman fighting in Germany in March 1945 surely would have had some bad days.

The only thing in Ray’s day-to-day life he really complained about in his letters was how slow the mail was. He told Virginia about losing his pack, too. It’s obvious he was feeling pretty low that day; not because all of his dry socks were gone, but because stashed away in the pack were all of Virginia’s letters and the hand-made Valentines from his little boys. Other than that, Ray wasn’t much on complaining.

The mail was slow; Ray’s letters kept coming even after the telegram from the War Department informing Virginia that Ray had been killed in action arrived on May 2, 1945. If she was holding on to hope that the telegram was a mistake, Virginia’s worst fears were confirmed by the official War Department letter that arrived two days later. The letter was vague; there were no details to speak of, no answers to all the questions racing through her head.

A few weeks later, Virginia received a piece of mail from Homer Warren. In a letter dated May 29th, Homer explained who he was and reminded her that they had met during Ray’s furlough at Camp Howze, TX. He further explained that he was with Ray when he was hit, and told her about the promise they had made beforehand: if anything happened, the survivor would make sure the other’s wedding band was returned. Also, if she really needed to know how it happened, he was willing to tell her everything.

Virginia needed to know. She wrote back immediately. When Homer received her letter, he likewise responded immediately. It still took over a month before Virginia had his letter in hand, though. The envelope is thick; Homer’s letter to Virginia is nine pages long.

It was about 4am on April 19th when L Company moved out to capture the next town up the road. The company marched for about five miles and then halted about a mile outside of Friedrichsbrunn, Germany. A resort and spa town before the war, Friedrichsbrunn’s facilities were now in use as military hospitals. Ray and Homer were part of an 18-man element that was sent out ahead to protect the company’s right flank. They moved through the woods and up to the edge of town. At about 7am, Ray and Homer took up a good position near the edge of the woods; between them and the town was about 500 yards of freshly plowed field. The plan was to sit tight and wait for armor support and the rest of L Company to arrive.

Around noon, they heard L Company make contact with the enemy, and it sounded like they were still pretty far away. It wasn’t long before a German squad was sent out from the town to investigate the gunfire. Ray, Homer, and the rest of the squad held their fire as long as they could. They hoped the Germans would go around their position, but it looked like they were going to walk right through it. When the lead German was 50 yards away, Ray and Homer’s squad opened up and quickly killed them all.

At that point, their position was compromised and every gun in town started firing on them. According to Homer, the Germans had tanks, half-tracks, armored cars, and what seemed like a thousand machine guns. The woods around them were full of flying lead, but they were taking a toll on the Germans. The firing slacked off for a bit and it got quiet. At that point, Homer and Ray realized they were in trouble: both squad leaders were dead and another boy was shot through the right lung. They didn’t have medics, and they had no way to get the wounded back to the company.

At about 1:15pm, the Germans continued their advance across the field, and one of them found cover in a hole that Homer and Ray hadn’t noticed before the firefight. The German soldier in the hole had a clear line of sight on Homer and Ray. Homer didn’t even know he was there until the German shot at him; Ray never knew what hit him.

Homer heard the crack of the German bullet as it passed him and slammed into the side of Ray’s head. Homer pulled Ray to cover and tried to stop the bleeding. He stopped the flow of blood but the massive internal injuries were another matter. Ray continued to breathe for another 10 minutes, and then he was gone.

When the tanks and the rest of L Company arrived, the Germans in the hole surrendered. Homer was put in charge of what remained of the squad. In his first act as squad leader, Homer marched the German soldier from the hole over to Ray’s body, then leveled his weapon at the German and pulled the trigger at point blank range.

Homer took the wedding ring from the dog tag chain Ray wore around his neck. He told Virginia that he had the shakes for a week after killing the German, and that aside from a few patrols afterwards, Friedrichsbrunn was the last firefight of the war for L Company.

He also told Virginia some things she already knew. He told her how much the boys in the squad liked Ray, and how Ray loved to show them the Valentines Michael and Billy made for him. Yeah, Ray really loved talking about those two little boys of his.

. . . .

Photograph of Virginia and Ray Toohey from the collection of The National WWII Museum.

Link to the rest at The National World War II Museum

The OP doesn’t provide details on some dates, but it appears that Ray was killed in his second month of combat in Europe, on April 18 or 19, 1945.

On April 18, 1945, German field marshal Walter Model surrendered with 225,000 troops, in Germany’s Ruhr.

On May 7, 1945, 18 days after Ray was killed, Nazi Germany surrendered.

PG did a little research and located the grave of Ray Toohey in the Golden Gate National Cemetery, San Bruno, California. It appears his body had been repatriated from a cemetery in Holland in 1949.

The biology of love

Perhaps an aid for character development. Or not.

In any case, PG found this fascinating.

From Aeon:

An infant is born. The radiant mother holds the baby in her arms and immediately begins to scan the infant’s face, softly caressing the little fingers while uttering repetitive sing-song vocalisations, her face lighting up in an affectionate smile. She has never had a baby before but intuitively knows what to do. Proud and oblivious, she feels no one has ever cared for such a gorgeous child; yet she stands along a great line of mammalian mothers who lick, groom, sniff, smell, touch, poke, nurse and handle. Rats do it, sheep do it, even educated chimps do it … let’s fall in love.

Behind our loving mother, evolution works with its quick-and-dirty tools to ensure the bond is cemented, the infant finds the nipple, the mother engages; brain meets the world. The synchronous dance of mother and child begins and, upon its unique rhythms, a relationship is formed. This relationship will incorporate, like expanding ripples, the child’s emerging abilities across development and into dialogue: babbling, creating imaginary scenarios, the capacity to collaborate, feel the pain of others, comprehend emotions, discuss conflicting positions, argue convictions, until the child grows and can meet the mother in a full adult-to-adult relationship of empathy, intimacy and perspective-taking. Like the 12-bar-blues, synchrony gains in range, repertoire, complexity and timbre, but its basic rhythms stay safe and secure.

The synchronous mother-infant dance will set the stage for the child’s affiliative bonds throughout life: with father and siblings at home, with close friends in school, through adolescence and first love and, finally, as parents to children of their own. Those affiliations, and the terms of endearment they set, will guide the child’s conduct within society-at-large, shaping the empathy, responsibility, collaboration and self-restraint by which he or she will meet fellow-humans: co-workers, neighbours and strangers.

Evolution is thrifty, and once a trick works, it will be repurposed endlessly. A new mother and infant enter the world tapping the social patterns, habits, beliefs, customs, fears, hopes, joys and rituals of the old ones. The family, the group, the tribe lives on from one generation to the next. Resilience, endurance and the durability of the group can be achieved only by coordinating action among kin, first genetically and then symbolically. Infants acquire the capacity for coordinated action in the context of the mother’s body and its unique provisions: a mother’s smell, touch, heart rhythms, eye-gaze, smile. Then it expands across time, place and person. But such massive expansion does not come without its risks.

What tricks of the trade does evolution utilise to ensure that bonding, so critical for survival and continuity of life on Earth, happens as planned and all pieces of the puzzle fall safely into their place? After decades following thousands of mother-infant dyads, hundreds from birth to young adulthood, my lab has mapped the ‘neurobiology of affiliation’ – the emerging scientific field that describes the neural, endocrine and behavioural systems sustaining our capacity to love. The foci of our research – the oxytocin system (based on the neurohormone of bonding); the affiliative, or social, brain; and biological synchrony between mother and child – are all marked by great plasticity, and sculpted throughout animal evolution to reach their exquisite complexity in humans. And they all lean on automatic and ancient machinery that runs the risk of turning love on its head into fear.

. . . .

Oxytocin, the first element in the neurobiology of bonding, is an important driver of both care and prejudice. A large molecule produced mainly by neurons in a small region of the brain called the hypothalamus, oxytocin is known for coordinating bonding, sociality, and group living. From the hypothalamus, oxytocin targets receptors in the body and the brain, primarily the amygdala, a centre for fear and vigilance; the hippocampus, where memory resides; and the striatum, a locus of motivation and reward. Through these pathways, the bonding hormone, oxytocin, functions with the precision of a neurotransmitter and the longevity of a hormone, reaching faraway locations and broadly influencing behaviour. Importantly, oxytocin is released not only through the central part of the neuron, but also through its extensions, called dendrites. The dendrites are primed to increase oxytocin release whenever attachment memories are invoked.

. . . .

Memory of these early attachments helps us re-enact the unique state that Sue Carter, a neurobiologist at Indiana University, calls ‘immobility without fear’. These same memories enable what the English psychoanalyst Donald Winnicott in 1958 described as ‘the capacity to be alone’ in the presence of someone in a state of peace, serenity and transcendence, where aloneness is not loneliness. In our studies, we found that, throughout life, during periods of bond-formation – for instance, when we fall in love or form a close friendship – oxytocin production increases to cement the new bond, as it does at birth. During birth, a surge of oxytocin triggers uterine contractions, and oxytocin release initiates milk letdown. Maternal oxytocin is then transferred to the infant through the mother’s milk, touch and caregiving behaviour. It bonds mother and child forever but it also reorganises the infant’s brain to what it means to be in love and what it takes to feel safe.

. . . .

Still, oxytocin is an ancient system that functions in a quick-and-dirty way; no time for complexities when the lion is at your door. The oxytocin molecule presumably evolved approximately 600 million years ago, and is found in all vertebrate and some invertebrate species. Its role across animal evolution was to help organisms manage life in harsh ecologies. Hence the system supports regulation of basic life-sustaining functions: water conservation, thermoregulation or energy balance in species such as nematodes, frogs or reptiles.

With the evolution of mammals, oxytocin became integrally involved in controlling birth and lactation; as a result, the young acquired life-sustaining functions and skills not in the context of the group but within the intimacy of the mother-infant bond. This created the main schism I wish to underline, the core conflict of the human condition: mammals learn to manage hardship through relationships, and bonding is their key mechanism for stress reduction. Being born a mammal, then, implies that oxytocin, the very system that sustains parental care, pair-bonds, group sharing, and consoling behaviour, also became intensely sensitive to danger. Oxytocin protects against danger by immediately differentiating ‘friend’ from ‘foe’ based on nuances of social behaviour.

When mammals perceive slight alterations in social behaviour, they identify the approach of ‘others’, activating the alarm systems of the fight-or-flight response, and their bodies prepare to attack. Those ‘others’ may indeed intend to eat us up for supper, or they could just as readily be going about their daily social life in ways that seem to us odd, unfamiliar, or even disrespectful.

. . . .

The second element in the neurobiology of bonding is the affiliative brain. Research into its role in maternal care began in the 1950s with the work of Jay Rosenblatt at Rutgers University-Newark in New Jersey and colleagues, who wished to chart the brain structures that enable rodent mothers to care for their offspring. Following decades of careful work by several research groups, the scientists were able to describe the ‘mammalian maternal brain’ both in terms of its neural networks and, more recently, their molecular composition. Primed by oxytocin’s increase during pregnancy, the hypothalamus (specifically, the medial preoptic area of the hypothalamus) sends projections to the amygdala, and this sensitises an oxytocin-amygdala ‘line’ that makes mothers extremely attuned to signs of infant safety and danger. This line of constant vigilance and worry is implanted into the maternal brain as soon as an infant is born and, without it, our fragile offspring might not survive.

In human mothers, the amygdala activates four times more than in fathers: from the moment of birth and, I believe, forever thereafter, mothers sleep with their amygdala open. Imagine this: a 15-year-old goes to a party. You trust her, have arranged for your best friend to pick her up, and know who she’s with. You go to sleep, but your amygdala is open. It is 3am and you hear the door open and her footsteps tiptoeing in. You turn to the other side and finally sleep in earnest. The ‘care’ and the ‘scare’ become inseparable the minute you love someone for real. It is precisely this entanglement that defines, in my mind, the ancient curse: ‘In sorrow thou shalt bring forth children’ – not the birth itself, which we soon forget due to the analgesic properties of oxytocin.

. . . .

The evolutionary role of the oxytocin-dopamine line is to ‘glue’ the mother to her baby so she can tolerate the sleepless nights, physical pain and endless mess. This oxytocin-dopamine line is even engraved into the neurons. The nucleus accumbens, a node in the striatum, contains neurons that encode both oxytocin and dopamine, enabling the brain to combine the motivation and vigour of dopamine with the social focus of oxytocin in order to set the parent’s – and, via the cross-generational cycle, the infant’s – reward system for a lifetime of long-term attachments. When the connection between oxytocin and dopamine breaks, the results are devastating. When dopamine is directed to neural targets unrelated to sociality, a risk is addiction; when dopamine and oxytocin are produced out of synch, depression can result.

Link to the rest at Aeon

You Think We’re Self-Obsessed Now? The 19th Century Would Like A Word

From FiveThirtyEight:

As an old man, former president John Adams loved to describe the ways history had mistreated him — to detail the “perpetual volcano of slander, pouring on my flesh all my life time.” This habit eventually led to a manuscript of 440 pages and America’s first presidential memoir. Today, there’d be a bidding war to publish Adams’s tell-all. But in the early 1800s, Adams knew his book could not appear until after his death. Too many Americans saw publishing an autobiography, or even writing an autobiography, as a strange and arrogant act.

I’ve spent the last 10 years studying the history of presidential books for my own book, “Author in Chief.” But along the way, I discovered a strange little data set that helps explain how we went from an age when writing your autobiography was seen as risky and vain, to one when we’re constantly updating our autobiographies — one Instagram story at a time.

It all starts with a reference librarian named Louis Kaplan. In 1946, Kaplan began working on a deceptively ambitious project — a bibliography that would list every autobiography that had been published in America to that point. Kaplan studied old periodicals. He spent months on the road, visiting rare book libraries around the country. He befriended private collectors. And he got lots of help, especially when it came time to review the Library of Congress’s massive catalog.

After 14 years, Kaplan and his colleagues finally had their “Bibliography of American Autobiographies.” They ended up identifying 6,377 autobiographies published in the U.S. between 1675 and the 1940s, and their data shows the genre’s notable growth. Between 1800 and 1809, Americans published a total of just 27 autobiographies. One century later, during the decade between 1900 and 1909, that number had exploded to 569, easily outpacing population growth.
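As a rough check on that “outpacing population growth” claim, here is a minimal back-of-the-envelope sketch in Python. The autobiography counts are Kaplan’s, quoted above; the census totals are standard U.S. census figures supplied here for illustration, not numbers taken from the article.

```python
# Rough check: did autobiographies outpace U.S. population growth, 1800s -> 1900s?
# The counts come from Kaplan's "Bibliography of American Autobiographies" (quoted above);
# the population totals are U.S. census figures, added here as an assumption.

autobiographies_1800_1809 = 27
autobiographies_1900_1909 = 569

population_1800 = 5_308_483   # 1800 U.S. census
population_1900 = 76_212_168  # 1900 U.S. census

memoir_growth = autobiographies_1900_1909 / autobiographies_1800_1809
population_growth = population_1900 / population_1800

print(f"autobiographies grew {memoir_growth:.1f}x")                    # ~21.1x
print(f"population grew {population_growth:.1f}x")                     # ~14.4x
print(f"per-capita growth: {memoir_growth / population_growth:.2f}x")  # ~1.47x
```

On these assumptions, autobiographies multiplied roughly 21-fold over the century while the population grew roughly 14-fold, i.e. about 1.5 times faster per capita, consistent with the article’s claim.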

. . . .

Some of the best analysis of Kaplan’s data set has come from Diane Bjorklund, a sociologist at Illinois State University who coded Kaplan’s entries for her own book, “Interpreting the Self.” Bjorklund spent about a month sorting Kaplan’s thousands of autobiographies by their authors’ professions — soldier, farmer, scientist, and so on. This sorting was both blunt and subjective; two farmers could write two very different books. “My coding can only provide a rough guide,” Bjorklund said in an email.

Still, that rough guide contains some fascinating hints about the historical anxieties surrounding autobiography. This anxiety has always shown up in anecdotes. One early American reader, for example, dismissed Jean-Jacques Rousseau’s classic “Confessions” as “an unnatural compound of vanity, meanness, and contemptible self-love.” It shows up in the data, too. According to Bjorklund’s coding, in the first half of the 19th century, more than half of all autobiographies published in the U.S. came from one of two professions: religious figures and criminals. The explanation seems simple: One group had divine authority to tell their life stories. The other had nothing left to lose.

Slowly, the stigma against publishing autobiographies dissipated. “I’d point to Benjamin Franklin and his autobiography as the turning point,” said Susan Clair Imbarrato, an English professor at Minnesota State University-Moorhead who’s written extensively about the genre.

Bjorklund and Kaplan’s data reveals that as autobiography became more popular, it became more diverse. New varieties could rise and fall. During the 1840s and 1850s, there was a 600-percent increase in slave narratives, or autobiographies written by fugitive slaves that captured the brutality of life in the South. The nation’s real-time obsession with the Civil War also made it easier for generals and politicians to do what John Adams could not: publish their memoirs during their own lifetimes. In the 1860s, 50 military figures published their autobiographies — more than in the previous six decades combined. And during the Gilded Age, there was a boom in business autobiographies, with 48 appearing between 1880 and 1899 — more than had appeared in the previous eight decades combined.

. . . .

That’s one of the most striking things about this data: The autobiographies that thrived during any particular moment said something about the desires of America and its readers. By the Roaring Twenties, the clergy-and-criminal bookshelf had dropped to less than a combined 20 percent of all autobiographies — the same era that saw a spike in memoirs written by a new kind of celebrity, the entertainer.

In the 1940s, the last decade covered by Kaplan’s “Bibliography,” memoirs by entertainers proved one of the most popular categories. And yet, even those accounted for only 5.6 percent of the 1,043 memoirs that appeared in America during that span. That’s the other striking thing about these autobiographical numbers: Once autobiography truly caught on, no one category could dominate because readers had a huge variety of authors and styles to choose from.

Link to the rest at FiveThirtyEight

History Through a Poet’s Eyes

From The Wall Street Journal:

The “elusive Lincoln is a challenge for any artist.” So the poet, troubadour, journalist and political activist Carl Sandburg declared (a combined warning and boast) in the preface to his 1926 two-volume epic, “Abraham Lincoln: The Prairie Years.” Few before or since have stalked the 16th president as relentlessly—or indelibly. Later biographers have revised, and in some cases debunked, Sandburg’s opus—which the author augmented 13 years later with the four volumes of “The War Years.” Professional historians repeatedly lamented his absence of source notes, as well as his flights of fancy and occasional factual errors.

But Sandburg’s work has long endured. It remains the most influential and popular life of Lincoln ever published, with “The Prairie Years” alone selling some 1.5 million copies. As of this Lincoln’s birthday month, “Prairie Years,” “War Years” and Sandburg’s 1954 one-volume abridgment all remain on Amazon’s current list of the 50 best-selling Lincoln books.

An explanation for their sustained appeal may lurk within the assessment that critic Mark Van Doren proffered in the Nation in 1926: “In spite of some rather obvious poetry stuck in here and there,” “Prairie Years” was “amply and profoundly beautiful.” Yet behind the Whitmanesque free-verse vernacular was evidence of deep research. To Van Doren, Sandburg seemed “drunk with data.” But as scholar Charles Austin Beard saw matters, “few if any historians…ever labored harder in preparation for composition.”

Sandburg’s own prefatory remarks reveal what truly set his work apart: He came at Lincoln as an artist. His evocation of Lincoln’s experiences and milieu remains unmatched for its vivid combination of mood, incident and epochal sweep.

Sandburg filtered history through the poet’s ear. Tellingly, he had first dealt with his subject in verse, writing of Lincoln’s mother in his 1918 collection, “Cornhuskers”: “Oh, dream, Nancy. / Time now for a beautiful child. / Time now for a tall man to come.” Hooked, the poet began amassing Lincoln data. He also carried memories of his own childhood in Galesburg, Ill., site of the fifth Lincoln-Douglas debate, where Sandburg had “listened to stories of old-timers who had known of Lincoln.”

. . . .

Predictably, some Lincoln specialists of the day greeted the result coolly. Writing in the American Historical Review, William E. Barton acknowledged “Prairie Years” as “a piece of genuine literature,” but cautioned that it was bathed in “the aura of poetic interpretation…not history.” A caustic Edmund Wilson sneered that “the cruellest thing that has happened to Lincoln since he was shot by Booth has been to fall into the hands of Carl Sandburg.”

Undeterred, Sandburg wrote on. An ardent New Dealer, he found for his 1939 “War Years” a receptive audience among progressives who believed only Lincolnesque leadership could guarantee American survival. Robert E. Sherwood, soon to become a speechwriter for Franklin D. Roosevelt, had used “Prairie Years” as the basis for his 1938 Pulitzer Prize-winning play, “Abe Lincoln in Illinois.” Now he praised “War Years” as another “superb literary outburst.” Sandburg, who began likening Lincoln to FDR, won his own Pulitzer Prize for history in 1940 for “The War Years.”

In short order, the poet became the dominant figure in the Lincoln “industry” that mushroomed after World War II—his shaggy white hair becoming nearly as iconic as Lincoln’s beard. In 1959, Congress chose him to address a joint session marking Lincoln’s 150th birthday.

. . . .

Sandburg endures not because he is cited by modern scholars, but because he continues to be read for sheer pleasure. In my own travels on the Lincoln circuit, I am often asked: “Do you like Sandburg’s books?” My affirmative answer invariably relieves questioners who find him a guilty and perhaps outdated pleasure. Such skepticism never inhibited Carl Sandburg. Writing in “The People, Yes,” he all but predicted his own durability in verse:

This old anvil laughs at many broken hammers.

What is bitter to stand against today may be sweet

to remember tomorrow.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Gold among the dross

From Aeon:

The higher education system is a unique type of organisation with its own way of motivating productivity in its scholarly workforce. It doesn’t need to compel professors to produce scholarship because they choose to do it on their own. This is in contrast to the standard structure for motivating employees in bureaucratic organisations, which relies on manipulating two incentives: fear and greed. Fear works by holding the threat of firing over the heads of workers in order to ensure that they stay in line: Do it my way or you’re out of here. Greed works by holding the prospect of pay increases and promotions in front of workers in order to encourage them to exhibit the work behaviours that will bring these rewards: Do it my way and you’ll get what’s yours.

Yes, in the United States contingent faculty can be fired at any time, and permanent faculty can be fired at the point of tenure. But, once tenured, there’s little other than criminal conduct or gross negligence that can threaten your job. And yes, most colleges do have merit pay systems that reward more productive faculty with higher salaries. But the differences are small – between the standard 3 per cent raise and a 4 per cent merit increase. Even though gaining consistent above-average raises can compound annually into substantial differences over time, the immediate rewards are pretty underwhelming. Not the kind of incentive that would motivate a major expenditure of effort in a given year – such as the kind that operates on Wall Street, where earning a million-dollar bonus is a real possibility. Academic administrators – chairs, deans, presidents – just don’t have this kind of power over faculty. It’s why we refer to academic leadership as an exercise in herding cats. Deans can ask you to do something, but they really can’t make you do it.
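The compounding point can be made concrete. A minimal sketch, assuming a $100,000 starting salary and a 20-year horizon (both figures invented for illustration):

    # How a one-point merit differential compounds over a career. The
    # starting salary and horizon are hypothetical; the essay gives only
    # the 3-percent-versus-4-percent raise figures.
    salary = 100_000
    years = 20

    standard = salary * 1.03 ** years  # consistent standard raises
    merit = salary * 1.04 ** years     # consistent merit raises

    print(f"standard: ${standard:,.0f}")        # ~$180,600
    print(f"merit:    ${merit:,.0f}")           # ~$219,100
    print(f"gap: {merit / standard - 1:.0%}")   # ~21% after 20 years

A gap of roughly 21 percent after two decades is real money eventually, but nothing in any single year that resembles a Wall Street bonus.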

. . . .

If the usual extrinsic incentives of fear and greed don’t apply to academics, then what does motivate them to be productive scholars? One factor, of course, is that this population is highly self-selected. People don’t become professors in order to gain power and money. They enter the role primarily because of a deep passion for a particular field of study. They find that scholarship is a mode of work that is intrinsically satisfying. It’s more a vocation than a job. And these elements tend to be pervasive in most of the world’s universities.

But I want to focus on an additional powerful motivation that drives academics, one that we don’t talk about very much. Once launched into an academic career, faculty members find their scholarly efforts spurred on by more than a love of the work. We in academia are motivated by a lust for glory.

We want to be recognised for our academic accomplishments by earning our own little pieces of fame. So we work assiduously to accumulate a set of merit badges over the course of our careers, which we then proudly display on our CVs. This situation is particularly pervasive in the US system of higher education, which is organised more by the market than by the state. Market systems are especially prone to the accumulation of distinctions that define your position in the hierarchy. But European and other scholars are also engaged in a race to pick up honours and add lines to their CVs. It’s the universal obsession of the scholarly profession.

. . . .

At the very pinnacle of the structure of merit badges is, of course, the Nobel Prize. A nice thought, but what are the odds? Fortunately, other academic honours are a lot more attainable. And attain them we do.

Take one prominent case in point: the endowed chair. A named professorship is a very big deal in the academic status order, a (relatively) scarce honour that supposedly demonstrates to peers that you’re a scholar of high accomplishment. It does involve money, but the chair-holder often sees little of it. A donor provides an endowment for the chair, which pays your salary and benefits, thus taking these expenses out of the operating budget – a big plus for the department, which saves a lot of money in the deal. And some chairs bring with them extra money that goes to the faculty member to pay for research expenses and travel.

But more often than not, the chair brings the occupant nothing at all but an honorific title, which you can add to your signature: the Joe Doakes Professor of Whatever. Once these chairs are in existence as permanent endowments, they never go away; instead they circulate among senior faculty. You hold the chair until you retire, and then it goes to someone else. In my own school, Stanford University, when the title passes to a new faculty member, that person receives an actual chair – one of those uncomfortable black wooden university armchairs bearing the school logo. On the back is a brass plaque announcing that ‘[Your Name] is the Joe Doakes Professor’. When you retire, they take away the title and leave you the physical chair. That’s it. It sounds like a joke – all you get to keep is this unusable piece of furniture – but it’s not. And faculty will kill to get this kind of honour.

This being the case, the academic profession requires a wide array of other forms of recognition that are more easily attainable and that you can accumulate the way you can collect Fabergé eggs. And they’re about as useful. Let us count the kinds of merit badges that are within the reach of faculty:

  • publication in high-impact journals and prestigious university presses;
  • named fellowships;
  • membership on review committees for awards and fellowships;
  • membership on editorial boards of journals;
  • journal editorships;
  • offices in professional organisations, which conveniently rotate on an annual basis and thus increase accessibility (in small societies, nearly everyone gets a chance to be president);
  • administrative positions in your home institution;
  • committee chairs;
  • a large number of awards of all kinds – for teaching, advising, public service, professional service, and so on: the possibilities are endless;
  • awards that particularly proliferate in the zone of scholarly accomplishment – best article/book of the year in a particular subfield by a senior/junior scholar; early career/lifetime-career achievement; and so on.

Each of these honours tells the academic world that you are the member of a purportedly exclusive club. At annual meetings of professional organisations, you can attach brightly coloured ribbons to your name tag that tell everyone you’re an officer or fellow of that organisation, like the badges that adorn military dress uniforms. As in the military, you can never accumulate too many of these academic honours. 

Link to the rest at Aeon

PG will note that the same pattern applies in U.S. law schools. As with other academic departments, the ability to actually teach well tends to be subsidiary to the publishing/professional organization elements of status.

That said, it’s rare for law school professors to be active in the interest sections of The American Bar Association and various state bars.

For a twenty-year period during which he practiced retail law, PG was actively involved in the ABA and his state bar association. Generally speaking, the activities of those associations were characterized by entertaining presentations, talks and discussion. As PG has mentioned before, a lawyer friend once told him that the friend could walk into any third-grade classroom and identify future lawyers because they never stopped talking.

As one might expect, some, but not all, of the most engaging and entertaining speakers were involved in litigation practices that meant they spent a lot of time in court talking to judges and juries. On the other hand, patent and tax lawyers (with a small handful of exceptions) tended to be pretty dry.

On a couple of occasions, PG’s ABA responsibilities required him to attend a meeting of a law school professors’ organization. They were dull as dishwater.

The Self-Help Compulsion

From The Wall Street Journal:

“How to Win Friends and Influence People” has generally been viewed as the self-help mother ship. But long, long before the 1936 publication of Dale Carnegie’s guide to self-betterment and reinvention (30 million copies sold and counting), the untutored and insecure had a choice of reading matter for the lowdown on how to live well and prosper. In fact, such books date back to antiquity, according to Beth Blum, author of “The Self-Help Compulsion.” What is Ovid’s “Ars Amatoria,” she asks, “but an ancient Men Are from Mars, Women Are from Venus?” As for one of the major works of the Stoic philosopher Epictetus, it is “cognitive behavioral therapy before its time.”

Age has apparently not withered the appeal of the genre. In the past 30 years, notes Ms. Blum, an assistant professor of English at Harvard, the self-help category has been among the most lucrative in publishing. It’s easy to understand why. Self-help makes the sort of claims and promises—a whole new you! a whole new in-control, wise, cultivated, savvy, beautiful you!—that readers find it hard to resist.

A recondite, sedulously researched monograph, “The Self-Help Compulsion” traces the evolution of self-help books, places them in historical context, and, perhaps most strikingly, suggests that they’re worthy of more respect than they get. Ms. Blum also discovers a kind of cross-pollination between literature and self-help, certainly liberal borrowing. The wall separating the two genres, she argues, has been frequently breached, sometimes mockingly, sometimes admiringly, sometimes to teach a moral lesson. The titles of several works of literary fiction—among them, Sheila Heti’s “How Should a Person Be?,” Mohsin Hamid’s “How to Get Filthy Rich in Rising Asia” and Jesse Ball’s “How to Set a Fire and Why”—cunningly ape self-help language.

Ms. Blum offers a close analysis of works by Gustave Flaubert, Edith Wharton, Virginia Woolf and James Joyce, making a compelling argument for “Ulysses” as a self-help manual par excellence. Joyce, she says, employed proverbial advice in his works as “an anchor for his more experimental, esoteric formulations.” One particular favorite: “Let bygones be bygones.” She notes that Flaubert drew on a popular contemporary manual, fittingly titled “Self-Help”—by Samuel Smiles, a Scottish writer and reformer—to lampoon the foolish aspirations and failed DIY projects of the title characters in his posthumous novel “Bouvard and Pécuchet.” Flaubert, Ms. Blum says, showed how self-help advice “can’t account for the infinite particularities of real life” and “needlessly meddles with the natural order.” In Wharton’s novel “Twilight Sleep,” meanwhile, the main character is so ensorcelled by the latest self-help guru that she doesn’t notice her husband falling in love with their daughter-in-law.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Immortality, Inc.

From The Wall Street Journal:

Amid today’s technological wizardry, it’s easy to forget that several decades have passed since a single innovation has dramatically raised the quality of life for millions of people. Summoning a car with one’s phone is nifty, but it pales in comparison with discovering penicillin or electrifying cities. Artificial intelligence is being heralded as the next big thing, but a cluster of scientists, technologists and investors are aiming higher. In the vernacular of Silicon Valley, where many of them are based, their goal is nothing less than disrupting death, and their story is at the center of “Immortality, Inc.” by science journalist Chip Walter.

Seeking to slow the aging process—if not halt it altogether—is far from a novel quest. In the 16th century, the explorer Ponce de León supposedly sought a fountain of youth in Florida, and the search for magical elixirs didn’t end when he failed to find it. Even so, the medical establishment has traditionally assigned only limited resources to aging, perhaps because, as odd as it may seem, death from old age is a relatively recent phenomenon. At the end of the 19th century, life expectancy in the United States was 48 years for whites and 34 for blacks. Aging, as a cause of death, took a back seat to tuberculosis, pneumonia and much else.

Americans began living longer in the 20th century, thanks to better sanitation and more effective vaccines and medicines. But growing old meant an increased vulnerability to other ailments, from heart disease to cancer. Progress in treating those conditions, in turn, has led to a higher incidence of Alzheimer’s. And while average life spans have been getting longer in much of the world—though declining in the United States in recent years—the outer limits of longevity haven’t changed much.

That is the backdrop to Mr. Walter’s absorbing story, which he begins with a visit to Alcor, the Arizona-based organization that says it preserves corpses at minus 124 degrees Celsius “in an attempt to maintain brain viability after the heart stops.” (Current “patients” include baseball legend Ted Williams.) While this life-extending strategy, known as “cryonics,” is often ridiculed, the individuals profiled in “Immortality, Inc.” are high-status, highly regarded figures whose initiatives can’t be easily dismissed. What links them, writes Mr. Walter, is that “they are all troublemakers at heart.” They believe that the “conventional approaches” of most medical researchers and practitioners are, “at the very least, misguided.”

One key figure in the story is Bill Maris, a venture capitalist with a background in neuroscience. In 2012, dismayed by the lack of research into aging, he began meeting with some of his fellow Silicon Valley heavyweights, like Google co-founder Larry Page, who took an immediate interest. In short order, recounts Mr. Walter, they met with Arthur Levinson, an Apple board member who had spent 14 years as chief executive of the biotech trailblazer Genentech. Less than a year later, Mr. Levinson founded Calico, a company devoted to drug development and extending the human life span. Google kicked in $750 million, as did the pharmaceutical company AbbVie.

Mr. Levinson’s maverick mind-set shines through in a discussion he had a few years ago with several scientists and doctors. According to Mr. Walter, he asked them how much the average life span would increase if all cancer were eliminated. Most assumed about a decade. The answer, said Mr. Levinson, was just 2.8 years. The prospect of such a modest return helped inspire Mr. Levinson and his Calico colleagues to concentrate even more intensely on unraveling the mysteries of life-span biology. (One of their finds, so far, is a rodent native to Africa that shows “little to no signs of aging.”)
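Levinson’s 2.8-year figure sounds counterintuitive, but it falls out of competing-risks arithmetic: remove one cause of death and all the others keep running. A toy life-table sketch makes the shape of the calculation visible; the Gompertz parameters and the assumption that cancer accounts for 20 percent of the hazard at every age are invented here, not taken from the book:

    # Toy competing-risks life table: eliminating one cause of death adds
    # surprisingly few years because the remaining hazards keep running.
    # All parameters are invented, not calibrated to real mortality data.
    import math

    def hazard(age, a=1e-4, b=0.085):
        """All-cause annual mortality hazard, rising exponentially with age."""
        return a * math.exp(b * age)

    def life_expectancy(cancer_share=0.2, eliminate_cancer=False):
        survival, years = 1.0, 0.0
        for age in range(111):
            h = hazard(age)
            if eliminate_cancer:
                h *= 1.0 - cancer_share  # strip out cancer's share of the hazard
            years += survival            # credit roughly one year lived
            survival *= math.exp(-h)     # probability of surviving this year
        return years

    gain = life_expectancy(eliminate_cancer=True) - life_expectancy()
    print(f"gain from eliminating cancer: {gain:.1f} years")  # ~2-3 years

With these made-up numbers the gain lands in the neighborhood of Levinson’s figure, which is the point: a disease that strikes late in life competes with every other late-life hazard.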

. . . .

“As recently as five years ago,” Mr. Walter writes, “the great pashas at [the National Institutes of Health] . . . looked upon aging research as largely crackpot.” He faults the Food and Drug Administration for refusing to classify aging as a disease. As a result, clinical trials—the foundation of medical research—can’t be conducted.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

PG was going to opine but, surprisingly, decided not to do so.

Italian Book and Newspaper Publishers Reveal Scale of Piracy

From Publishing Perspectives:

As Much as 23 Percent of the Market Impacted

Calling for a government intervention, the Association of Italian Publishers (Associazione Italiana Editori, AIE) and the Federation of Italian Newspaper Publishers (Federazione Italiana Editori Giornali, FIEG) have presented the results of a newly commissioned study on the impact of piracy in the Italian market.

. . . .

“This data reveals the need for the imposition of strong law enforcement and the education of users who are not always fully aware of the effects of their behavior.”

AIE and FIEG are reporting an annual loss of some €528 million (US$585 million) to the books industry and an aggregate of €1.3 billion when news publishing is added in, accounting for as much as 23 percent of the market, exclusive of exports and educational content.

. . . .

Some of the most interesting revelations in the report have to do with who the researchers identify as the pirati, the pirates.

As is often the case–and a part of what makes combatting the problem so difficult–the culprits are everyday users, many of them unaware of how damaging their fondness for free or cheap content can be.

Some 36 percent of users–more than one in three Italians older than 15, the researchers found–carried out at least one act of piracy with a work of published content in the last year.

  • One in four users are estimated to have downloaded an illegal ebook or audiobook free of charge at least once
  • Seventeen percent of those surveyed said they’ve received at least one ebook from a friend or family member
  • Eight percent said they’d been given at least one photocopied book by a friend or acquaintance
  • Seven percent of respondents said they’d bought at least one photocopied book in the last year

In the university setting, the issue is more dramatic, with some 80 percent of university students committing at least one act of piracy–involving either physical or digital content–in the last year. And 81 percent of professional respondents–including attorneys, notaries, accountants, engineers, and architects–said they’d committed at least one act of piracy in the past year.

Speaking in the morning’s session for the research effort, however, IPSOS president Nando Pagnoncelli said the general public, for the most part, is not unaware that piracy is illegal.

Some 84 percent of those older than 15 told researchers this, he said. But 66 percent said that piracy is unlikely to be discovered and punished by authorities, and 39 percent said that they don’t consider piracy to be serious enough to prosecute.

. . . .

And he also made the point, frequently heard now in piracy discussions, that ensuring easy legitimate access to content is important: the “abundance over scarcity” context, in which piracy is believed to be less attractive because users don’t have to resort to illicit means to obtain the content they want.

Link to the rest at Publishing Perspectives

‘Collapsologie’: Constructing an Idea of How Things Fall Apart

From The New York Review of Books:

Marco is particularly well-liked among the residents at the retirement home where he works. He attends painstakingly to their every need and discomfort. They confide in him, just as he confides in them. He has painted a collage of bright scenes on the bedroom window of one elderly woman with whom he is particularly close, and with whom he shares a taste in crass humor. As he adds a red rocket ship to the motley tapestry, however, he notices something rustling in the woods down below.

His former co-workers, gone presumably for several days, are carting away the home’s remaining food. When he goes out to stop them, they tell Marco that it’s futile: more supplies will not come, and he’s silly to chain himself to this spot. They tell him where they’re camping that night and that he’s welcome to join. After a moment or two of protest, he goes back inside and tries to pretend as if nothing has changed. But the hopelessness of the situation finally catches up with him. Affixing a respirator to a gas tank unhinged from a utility closet, Marco goes about the thankless task of helping his residents check out early.     

So goes the sixth episode of the new French mini-series L’Effondrement, which premiered this fall on the Canal+ network. Made up of eight stand-alone vignettes, L’Effondrement, or “Collapse,” is set in the near future. This dystopia, as the title suggests, inverts the logic of shows such as the British drama series Black Mirror, which allows viewers to savor their claustrophobic entanglement in a watertight technological apparatus beyond their control. Max Weber’s “iron cage” of modern society’s bureaucratic rationality is not so rigid, L’Effondrement would have us believe. Instead, confronted by the rickety foundations of all we’d taken for granted, we relish the possibility that we might be swept away once everything falls apart.

A global spike in energy prices, frozen industrial supply chains, climate shocks such as crop-destroying heat waves and rapid ecosystem decay—it is not until the series’s final episode, a flashback to the days before the collapse, that we get something of the broader picture. A group of activists, Extinction Rebellion style, have hatched a plot to sneak a rogue climate scientist into a talk show studio where the environment minister is slated to give a reassuring appearance. When the activists rush the stage, the minister graciously allows the scientist to give his rant, after which she dismisses him as an extremist: “You’re a collapsologue!”

. . . .

Collapsologie—or, as Servigne and Stevens define it, the “applied and transdisciplinary science of collapse”—proposes to free environmentalist thought from the linear or progressive understanding of history implicit in such faiths as “sustainable development,” “green growth,” or the energy “transition.” The story of human societies, which Servigne and Stevens suggest is ultimately the story of their interactions with their natural environments, is circular. The pendulum of human history swings between moments of our being harmoniously embedded within natural processes and periods of population concentration, political centralization, and an urge to transcend the earth’s resource constraints. We develop economies of scale, agglomerate extractive industry on a grand scale, but ultimately overexploit our natural foundations.

Building off Jared Diamond’s 2005 book Collapse, which focused on these dynamics in primarily pre-modern societies, Servigne and Stevens argue that the same iron law of history applies to our hyper-connected, concentrated, and self-confident industrial society of today. The reasons are manifold—and many will be familiar to readers of recent anglophone environmental bestsellers such as David Wallace-Wells’s The Uninhabitable Earth, arguably a mass-market work of collapsology in its own right.

. . . .

Yves Cochet was the environmental minister under Lionel Jospin’s Socialist government in the early 2000s. He has since emerged as one of France’s more high-profile collapsologues, and was a co-founder and president of what might as well be considered the group’s first think tank, the Institut Momentum. Cochet regards the trajectory of traditional environmentalism, with which he was heavily engaged, as largely a “failure.” When I asked him what he thought of the new radical turn embodied by visions such as the Green New Deal, he was circumspect, seeing in them little more than a rehashed and slightly democratized version of the old sustainable development.

“I don’t believe in it for one instant,” he said. Efforts such as the Green New Deal suffer ultimately “from the technological illusion. It’s the Californian technological dream in disguise.”

For Servigne and Stevens, the horizon of sustainable development, a greened industrial society shorn of its addiction to fossil fuels, ignores what they call the earth’s “uncrossable thresholds.” The earth’s being a closed system, in which a finite quantity of resources is available to a variable population of exploiters (us), poses the inexorable question of limits. Plans to “transition” our energy system from fossil fuels to renewable sources such as wind and solar power that still assume an exponential expansion in energy use, for example, will not be able to overcome the fact that these new technologies depend on the exploitation of a very limited quantity of rare-earth metals. As the work of Michael Klare, author of The Race for What’s Left, and Guillaume Pitron, author of La guerre des métaux rares, shows, the race to access these resources is rapidly becoming a geopolitical battlefield in its own right. The age of expanding energy exploitation remains the age of fossil fuels.

Ultimately, the critique goes, the fatal weakness of traditional environmentalism is its inability to think beyond economic growth. 

Link to the rest at The New York Review of Books

Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.

Robert Frost

Elderhood

From The Wall Street Journal:

If books, like movies, were given ratings, Louise Aronson’s “Elderhood” ought to be rated PG-80. Not all that many people 80 or older are likely to have living parents, true, but some warning is nevertheless necessary if you have attained to that august (november? december?) age and plan to read her book. Here are just a few gloomy facts that Dr. Aronson, a geriatrician, bestows upon her readers: 5.3 million Americans had one form or another of dementia in 2015, and more than 80% of these were older than 75; 13 million Americans are incontinent (if the phrase “adult diapers” doesn’t shiver your timbers, nothing will). Half of all adult Americans over 65 will have some form of arthritis. Immunity from infection and disease lessens with advancing age. Loss of acuity in hearing begins in one’s 50s and diminishes further with advancing age. Of sexual activity, about which Dr. Aronson graciously does not provide any dismaying details, let us, too, not speak.

Then there are Dr. Aronson’s case studies, scattered throughout the book, of elderly patients who suffer from every illness and disease going, with the possible exception of dandruff. Here is a characteristic sentence, recounting a visit Dr. Aronson made to the home of one of her patients: “Inez, obese and bedbound with moderately severe vascular dementia, lay propped up in her hospital bed, her mouth open and chest visibly rising and falling.” Then there is Eva, who is “very weak, has audible bone-on-bone arthritis in all major joints, frequent spasms in her left hip, minimal clearance of her right foot and could not move her left foot,” not to mention “a blood cancer that she hoped was cured, asthma, some kind of heart problem, and both glaucoma and macular degeneration.” And you think you’ve got problems.

“Live long enough,” Dr. Aronson writes midway through her book, “and eventually the body fails. It betrays us. Our flesh wrinkles, sags, and sinks. Strength wanes. We lose speed, agility, and balance. . . . Sometimes the mind follows the body’s descent, words, logic, insight, and memories dropping away. We fall ill more often and more gravely. We become frail. The smallest, most ordinary tasks—eating, showering, walking—become time-consuming, difficult, dangerous, or impossible.” One could go on, and Dr. Aronson, relentlessly, does, closing this particular paragraph with: “We fight and flirt with death.”

. . . .

Wedged in between its overwhelming sadness, the book has an upside. According to a study cited by Dr. Aronson—and she cites many studies—life, so to say, begins at 60. “Data from the United States and Western Europe,” she writes, “confirm that most people are around sixty before they achieve levels of well-being comparable to those of twenty-year-olds, and rates climb thereafter.” Arriving at 60 and beyond presumably brings freedom from worry, lessened depression and anger, a firmer sense of one’s self and what one values, greater contentment and happiness. And so it often does, providing one arrives at 60 or beyond without too lengthy a list of regrets.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

A Decade of Personal Exploration Ahead in US Self-Help Books

From Publishing Perspectives:

This week’s update from the NPD Group on the growth of the self-help books sector in recent years provides a helpful understanding of another driver of nonfiction in the United States’ book market.

. . . .

Unit sales of self-help books have grown at a compound annual growth rate (CAGR), NPD reports, of 11 percent in six years, reaching 18.6 million in 2019.

NPD’s BookScan report also includes a cautionary note of the kind that publishers sometimes don’t appreciate: you may be producing too many of these books.

Despite the growth of the sector, the data shows, growth in the number of published titles in self-help outpaced the rate of sales growth, with the number of unique ISBNs rising nearly three-fold from 30,897 in 2013 to 85,253 in 2019.
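Those figures are straightforward to sanity-check. The standard CAGR formula is (end/start)^(1/years) - 1; the implied 2013 unit-sales base below is an inference from NPD’s reported numbers, not a figure NPD states:

    # Sanity check of NPD's growth figures. The 2013 unit-sales base is
    # inferred from the reported 2019 total and the 11 percent CAGR; it
    # is not stated in the article.
    def cagr(start, end, years):
        """Compound annual growth rate between two values."""
        return (end / start) ** (1 / years) - 1

    # Title-count growth, 2013 -> 2019 (six years of compounding):
    print(f"ISBN CAGR: {cagr(30_897, 85_253, 6):.1%}")  # ~18.4% per year

    # Implied 2013 unit sales, if 18.6M in 2019 grew at 11% per year:
    print(f"implied 2013 units: {18.6e6 / 1.11 ** 6 / 1e6:.1f}M")  # ~9.9M

Title counts compounding at roughly 18 percent a year against unit sales compounding at 11 percent is precisely the over-production gap the report warns about.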

In a prepared statement, NPD book industry analyst Kristen McLean—familiar to many in the international trade industry for her insights into the market, especially in children’s literature—describes what occurs when over-production comes into play.

“This increasing competition will make this high-growth category a challenge for new titles looking to break out and find readers,” McLean says, “as publishers and authors compete fiercely for attention.

“This is definitely a hot category for aggressive publisher investment,” she says, “because consumers have placed an increasing focus on mindfulness and minimalism in recent years,

“People are yearning for meaning, peace, and calm in today’s somewhat chaotic culture—and they’re looking for ways to slow down and unplug, which is part of the reason books that inspire people to do just that are doing well.”

And as Publishing Perspectives readers know, this is not just a US and/or Western phenomenon. There’s a similar trend evident in China, where self-help—especially in the realm of positive personality imaging and exploration of self-worth—features frequently on our China bestseller lists.

. . . .

Some 4.3 million units were sold in 2019, NPD’s report says, in the motivational-inspiration area, by comparison to 1.4 million units in 2013.

. . . .

Journaling is another activity that NPD sees recently “skyrocketing” in popularity. Accounting for fewer than 50,000 units in 2015, the company says, sales reached 700,000 units in 2019 in the States.

And in that vein, books that focus on creativity logged especially strong growth in 2019, per BookScan’s information, with unit sales increasing 43 percent over 2018.

Link to the rest at Publishing Perspectives

‘SAM’ Review: Building a Better Bricklayer

From The Wall Street Journal:

Spiders can spin intricate webs. Birds weave branches into cozy nests. Bees build hives with near-perfect hexagons. It should be easy for an advanced robot complete with lasers and artificial intelligence to lay a simple brick wall. But, as the journalist Jonathan Waldman chronicles in “SAM,” the quest for a bricklaying robot has been bumpier than the work of a mason with vertigo.

The tale follows Scott Peters, a 30-something engineer in western New York and co-founder of Construction Robotics, as he spends most of the past decade developing SAM (Semi-Automated Mason), prodded by the inspiration and funding of his architect father-in-law, Nate Podkaminer. We watch as the two create a company, hire engineers, experiment with marketing and finally stack walls, haltingly.

Several themes run through the book. First is the often-unsung adaptability of organic intelligence. Engineers have sought for decades to make devices that build with blocks. “Getting an inanimate machine to do what only hands and brains could was apparently some kind of universal geek fantasy,” Mr. Waldman writes. “And while it sounded like child’s play, it was phenomenally difficult. To put it in context: The first machine that successfully picked up small wooden blocks did so only eight years before humans landed on the moon.”

The minute adjustments a human makes when manipulating objects, especially in messy environments like construction sites, result from billions of years of evolution. We make it look easy, until you give instructions to a robot and watch it fumble around or freeze up when it gets a little dirt on its face. Yann LeCun, Facebook’s chief A.I. scientist, once told me, “I would declare victory if in my professional lifetime we could make machines that are as intelligent as a rat.”

Mr. Peters has laudable motivations. “By creating a bricklaying robot,” Mr. Waldman writes, “he aimed to eliminate lifting and bending and repetitive-motion injuries in humans; to improve the quality of walls; to finish jobs faster and safer and cheaper; and to ease project scheduling and estimation. Basically: to modernize the world’s second oldest and most primitive trade.”

. . . .

The robot’s development is an object lesson in debugging. The engineer who crafts the code for the arm has the six stages of debugging posted in the office: “1. That can’t happen. 2. That doesn’t happen on my machine. 3. That shouldn’t happen. 4. Why does that happen? 5. Oh, I see. 6. How did that ever work?” The team swaps out parts, rewrites code and reconfigures designs, sometimes in rain or under a blazing sun. Often the solution is an utterly familiar one: Switch the damn thing off and on again.

A second theme is that technological advancement requires debugging not only hardware and software but also humans. Some of that problem-solving is simple workflow optimization: loading bricks and mortar properly and promptly, keeping people out of the way. Some of it requires deeper psychological and sociological renovation. It’s hard to smooth relations with workers who don’t want to share a job with SAM in the first place. Some masons said its work was sub-par (but more colorfully). Some said it threatened to steal their jobs. And some just didn’t like doing things differently. “The construction industry, the engineers began to learn, was as slow to change as baseball,” Mr. Waldman writes. Similar lessons are emerging from the front lines of semiautomation in medicine, manufacturing and other fields.

. . . .

Mr. Waldman follows all the drama like a fly on a brick wall, richly reporting scenes and conversations, many on job sites where both circuitry and civility break down. The book is reminiscent of a reality-TV show about a scrappy startup, complete with backstory segments as we learn the pasts and personalities of each new hire. There are also a lot of digressions—the history of the bricklayers union, how much pinboys at bowling alleys were tipped, how literal sausages are made, Mr. Peters’s 16th-century ancestors, his high-school swim coach’s career as a famous-in-Japan professional wrestler. 

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

PG has worked with very bright inventors a couple of times during his business life and has found them to have a unique mindset and combination of talents and personality traits.

Of course, authors are a species of inventors, creating stories out of keystrokes.

Both authors and inventors rely upon the protection of intellectual property laws – copyright and patents – to permit them to control and exploit their creations in the way they deem best.

What John Dos Passos’s “1919” Got Right About 2019

From The New Yorker:

“U.S.A. is the slice of a continent,” John Dos Passos wrote, in his novel “The 42nd Parallel,” from 1930. “U.S.A. is a group of holding companies, some aggregations of trade unions, a set of laws bound in calf, a radio network, a chain of moving picture theatres, a column of stockquotations rubbed out and written in by a Western Union boy on a blackboard, a public-library full of old newspapers and dogeared historybooks with protests scrawled on the margins in pencil. U.S.A. is the world’s greatest rivervalley fringed with mountains and hills, U.S.A. is a set of bigmouthed officials with too many bankaccounts. U.S.A. is a lot of men buried in their uniforms in Arlington Cemetery. U.S.A. is the letters at the end of an address when you are away from home. But mostly U.S.A. is the speech of the people.”

The “U.S.A.” trilogy—written by Dos Passos in the late nineteen-twenties and nineteen-thirties, and consisting of “The 42nd Parallel,” “1919,” and “The Big Money”—was an attempt to describe American life in tumult, from top to bottom. Writing at a moment of economic dissolution and technological transformation, Dos Passos hoped to show how Americans of all kinds were responding to the bustling mess of modernity—what his friend Edmund Wilson called “the American jitters.” In its time, the trilogy sold well, and it was highly praised by Jean-Paul Sartre, William Faulkner, and others. But since then its fortunes have been jittery, too. For many decades, the “U.S.A.” novels, often published as a single volume, were a yellowing tome, more respected than read. Dos Passos came to be seen as an also-ran—a secondary character in the stories of F. Scott Fitzgerald, Ernest Hemingway, and other writers of the Lost Generation. Then, in 1998, a board of luminaries convened by the Modern Library placed the trilogy on its list of the best novels of the twentieth century. In 2013, David Bowie listed “The 42nd Parallel” as one of his favorite books; that same year, George Packer—who has written about Dos Passos for The New Yorker—used the trilogy as a structural inspiration for “The Unwinding,” his nonfictional account of twenty-first-century America on the fritz.

There’s a reason that Dos Passos’s Depression-era modernism seemed suddenly relevant. The present was coming to look a lot like the past. The novels combined the stylistic innovations of the European modernists, which Dos Passos had used to evoke a shifting media landscape, with fiercely committed leftist politics that were resurgent in the new millennium. He had written a linguistically adventurous national portrait for a precarious age—his, and ours.

. . . .

In the “Newsreel” sections, text from actual newsreels flows together with snippets from newspaper articles, lines from popular songs, and excerpts from radio broadcasts. These bursts of information seem random but were carefully selected for maximum effect. Hurtling themselves at the reader, they are too brief to be fully explicable, but too portentous to be ignored:

It is difficult to realize the colossal scale upon which Europe will have to borrow in order to make good the destruction of war
bags 28 huns singlehanded
Peace Talk Beginning To Have Its Effect On Southern Iron Market
local boy captures officer
one third war allotments fraudulent
There are smiles that make us happy
There are smiles that make us blue . . . .

Today, of course, the “Newsreel” sections evoke the social-media feed—another venue for the associative, sometimes surreal juxtaposition of image, sound, and text. Usually, online randomness doesn’t cohere: video clips and cat memes fit randomly alongside disturbing headlines or worrisome data points. But sometimes sudden, unexpected juxtapositions can speak volumes about the state of the country. The other night, scrolling through Facebook, I saw a clip from James Baldwin’s famous debate with William F. Buckley, Jr., in 1965. Baldwin talks about his place in American history: “I am stating very seriously . . . that I picked the cotton . . . and I built the railroads, under someone else’s whip, for nothing,” he said. The next item in the feed was a Fox News segment urging viewers to call their local schools to inquire about whether students were saying the Pledge of Allegiance.

Link to the rest at The New Yorker


An art dealer disappeared with $50M. 17 years later, a documentary crew found him

Perhaps a writing prompt.

From CBC:

Vanessa Engle’s new film tells the story of Michel Cohen and how he pulled off one of the biggest art cons in history before disappearing for nearly two decades.

The $50 Million Art Swindle debuted on BBC earlier this week and details Cohen’s arrival from France, his many failed business ventures and his eventual rise in the world of high-priced art dealing.

When Cohen started trading stocks, he watched his fortune multiply and then erode. Desperate and broke, he turned to theft.

“I first clocked this story in 2001 when it was in the papers and it just struck me as an amazing story,” Engle told Day 6 host Brent Bambury.

“I think I am a bit of a fanatical person by nature, and I just continued to do online searches looking for Michel Cohen whenever I had an idle moment.”

. . . .

That went on for 17 years. Eventually, her obsession turned into a fully funded project with staff and a travel budget. She hoped to find Cohen but admits it was very unlikely.

“If I’m honest, we never really anticipated finding him because no one had ever found him,” she said.

. . . .

Shepherd tracked down court documents in the United States and Brazil and used them to create a circle of people who knew Cohen. Engle contacted all the leads, effectively getting the word out.

“They are difficult letters to write. I didn’t know if I was writing to people who maybe were harbouring a fugitive or people [who] were extremely angry with him.”

. . . .

It was his wife who responded to one of my letters. She wouldn’t really talk to me on the phone so she said ‘come and meet me in this other country,’ which I did.

And I talked to his wife for a number of hours in a cafe. She told me her story, which is an amazing story, but said she didn’t want to appear on camera.

She, at this point, still hadn’t told me where he was or even that they were still together. And then I said ‘Well, why am I here?’ … Then suddenly this man appeared at the table … and I did a bit of a double take because I wasn’t really sure it was him.

And then I realized it was — and in fact he and I were both sort of overcome with shyness, because I suspect he’d been waiting years and years for someone to come in search of him, and I’d been waiting years and years to meet him.

. . . .

He had an extraordinary lifestyle. We went [and] filmed in his house in Malibu, which is the most incredible house on the cliff in Point Dume overlooking the ocean. It’s an iconic [and] very beautiful house … I’ve never been in a house like it. It’s quite amazing.

So he was living in that house. They had horses that they bought for tens of thousands of dollars. They had a lot of staff and he had a habit of buying and trading in extremely expensive cars.

So he was living the high life. No question.

How did he steal more than $50 million from art collectors?

He sold paintings that were not his to sell. Art that had high prices … Monet, Picasso, Matisse, Chagall.

He would take them on consignment from art galleries and sell them to his clients and either trouser the money or trouser the paintings.

After he disappears he is arrested in Brazil. What happened there?

He lived in Brazil for two years and then Interpol caught up with him; he was arrested and imprisoned in Rio.

The way he tells it in the film is that he noticed the ambulance that took sick prisoners to hospital had broken down, that he spied this through the window of his cell. And so he had the idea that if he feigned illness he would be taken to hospital and it wouldn’t be in an ambulance.

And indeed he was taken to [hospital] in a private car belonging to a prison guard and he was not in handcuffs, even though the prison guards had guns.

And at a traffic light in Rio’s very congested traffic system, he jumped out of the car and ran for his life.

. . . .

There are some seriously angry people out there; they lost a lot of money. Do you think anything will happen for them now that your film has pulled back the curtain on Michel Cohen’s story?

I think the reason he felt safe in taking part in the documentary is because he is in a country where he is beyond the reach of U.S. law enforcement.

And yet he doesn’t have money … so there’s no point in anyone coming after him because a civil suit wouldn’t get them any money at all, and the FBI can’t get him because he’s out of their reach.

. . . .

There is a certain point in the film where there’s a little flicker of remorse, but more than anything he feels that his actions were in some way justifiable.

Link to the rest at CBC

‘Virtue Politics’ Review: Of Soulcraft And Statecraft

From The Wall Street Journal:

What characteristics are necessary for a political career? How do you recognize an unfit ruler? Should you oppose or try to reform him? These questions are central to recent debates about liberalism, conservatism and meritocracy—and perhaps even impeachment.

Yet they are also very old questions. As Harvard professor James Hankins shows in “Virtue Politics,” a magisterial study of “soulcraft and statecraft,” humanist scholars in the Italian Renaissance were concerned with many of the same puzzles that obsess us today. While acknowledging the variety of responses that they offered, Mr. Hankins focuses on a particular kind of answer. He calls it “virtue politics”: the attempt to reform civic life by improving the morality of the ruling elite.

Virtue politics was not invented in the 15th century. As Mr. Hankins shows, it drew on intellectual currents that extend back to ancient Greece, classical Rome and the Church Fathers of the early Christian era. In different ways, Aristotle, Cicero and Augustine all argued that virtue was the basis of political achievement.

But the central figure in Mr. Hankins’s account is Francesco Petrarca, better known as Petrarch. He is remembered today mostly as a poet and an editor of Latin texts. Mr. Hankins contends that he was also a significant political thinker. According to Mr. Hankins, Petrarch saw his literary and scholarly endeavors as a step toward saving Italy—and perhaps all of Christendom—from misgovernment. Learning to speak and write beautifully was not simply a cultural achievement but also, he believed, a political necessity.

Borrowing a term from German ethnologist Leo Frobenius, Mr. Hankins describes this enterprise as paideuma—an “intentional form of elite culture,” as he writes, “that seeks power within a society with the aim of altering the moral attitudes and behaviors of society’s members, especially its leadership class.” The humanists’ task was to institutionalize and propagate this paideuma through writing, speaking and teaching.

On the intellectual level, Petrarch and his followers sought to rescue classical antiquity, especially pagan Rome, from Christians’ historical suspicion. Although the Romans had not known the true God, humanists argued, their political success was based on their superiority in virtue. When it came to personal rectitude and public spirit, the Romans often exceeded ostensible Christians. The humanists had to acknowledge that not all Romans met this lofty standard. But they adopted Cicero—the statesman, lawyer and philosopher—as its personification.

. . . .

True nobility was closely related to the humanist conception of good government. Unsatisfactory rulers might secure desirable outcomes from selfish motives. Those with true nobility would pursue the right goals for the right reasons. In this respect, humanist political thought had a perfectionist quality. The test of legitimacy was not simply performance, but good character.

Mr. Hankins shows that the humanists’ obsession with character explains their surprising indifference to particular forms of government. If rulers lacked authentic virtue, they believed, it did not matter what institutions framed their power.

. . . .

Indeed, a ruler of true nobility, in the humanists’ view, should be cherished even if he came to power in an irregular manner. Despite their admiration for Cicero, some humanists defended Julius Caesar—who invaded Italy, against the senate’s order, and ruled as dictator for life. To these writers, Caesar’s outstanding character and good intentions outweighed his questionable methods. “Can a man raised to power through his own merits, a man who showed such a humane spirit, not to his partisans alone but also to his opponents because they were his fellow citizens—can he rightly be called a tyrant?” asked the Florentine statesman Coluccio Salutati. “I do not see how this can be maintained, unless indeed we are to pass judgment arbitrarily.”

Such humanist defenses of Caesar’s virtue are superficially similar to Machiavelli’s infamous account of the virtù of a prince—the capacity for amoral calculation that, in Machiavelli’s view, must guide the effective prince (a generic term that includes any aspirant to power). Mr. Hankins devotes his last three chapters to exploring the differences. If Petrarch is the hero of “Virtue Politics,” Machiavelli is its villain.

. . . .

In this respect, Machiavelli prefigures our current predicament. The Renaissance tradition remained influential well into modern times. Particularly in New England, humanist arguments about virtue were often blended with Protestant theology in an amalgam that historian Mark Noll calls “Christian republicanism.” John Adams believed Petrarch showed that “tyranny can scarcely be practiced upon a virtuous and wise people.”

Yet virtue politics was eclipsed by modern constitutionalism. In their emphasis on the separation of powers, Locke and Montesquieu and the other Enlightenment philosophers whose ideas inspired the American Founders shared Machiavelli’s doubts about the sufficiency of virtue. English scholars like Edward Coke and William Blackstone also promoted a greater appreciation for the role of law. We can see the legacy of this shift in the ambiguity of the impeachment process, which appeals both to virtue and to legality.

. . . .

Mr. Hankins makes an explicit plea to the modern successors of the elite that the humanists tried to cultivate. Those who enjoy cultural or political influence should consider carefully whether they are worthy of such power.

Link to the rest at The Wall Street Journal (sorry if you run into a paywall)

On Mercy and When Should Law Forgive?

From The Wall Street Journal:

American public life is full of disputes about justice and forgiveness. Take the Dreamers—the nearly 700,000 young adults who came from Mexico and elsewhere as children, did not obtain citizenship or permanent resident status, and have been in legal limbo since President Obama signed an executive order giving them temporary legal status in 2012. Depending on whom you talk to, these immigrants should either be deported immediately or given full citizenship.

When the brother of Botham Jean, the unarmed black man shot in his Dallas apartment by Amber Guyger, a white off-duty police officer, asked the judge if he could hug Guyger and offer forgiveness at the end of her trial, reactions were similarly divided. Many found the brother’s gesture of mercy profoundly moving; for others, it was only a distraction from justice.

Two types of forgiveness are intertwined in these instances. The first is legal forgiveness. Our legal system is committed to justice, and to honoring the rule of law. But sometimes we make exceptions, declining to punish a defendant who has violated the law or providing partial amnesty for tax evaders. This is legal forgiveness. (Mercy, which Shakespeare’s Portia famously praises as “twice blest” in “The Merchant of Venice,” is slightly different: It is leniency in applying the law rather than forgiveness.)

The second type of forgiveness, interpersonal, is often described as a “release of resentments.” When Brandt Jean offered to forgive Amber Guyger, he was letting go of his anger against his brother’s killer and so extending interpersonal forgiveness. How can legal forgiveness be reconciled with the rule of law? What do legal and interpersonal forgiveness have to do with one another? These questions are implicit in the current controversies about justice and mercy or forgiveness, and they are the focus of two thoughtful but idiosyncratic new books.

In “On Mercy,” Malcolm Bull conducts a clever thought experiment on the question of whether mercy might not only be reconciled with justice but could displace it at the center of our political life. Although Mr. Bull is not a professional philosopher—he’s an Oxford professor of art and the history of ideas—he plays one in this book. In 163 readable pages of text, he cycles through Seneca, Niccolò Machiavelli, Thomas Hobbes, David Hume, Judith Shklar, Bernard Williams and more. Mr. Bull begins with a very capacious definition of mercy, encompassing both mercy and legal forgiveness. “An act of mercy,” he writes, “is an action that is both intended to be and turns out to be less harmful than it might have been.” Even a torturer may be acting mercifully, Mr. Bull says, if he tortures his victim less than he might have done. If there is an ordinary level of harm—say, an ordinary criminal sentence, to use a less extreme example—it is merciful to inflict less than that expected, ordinary level.

Link to the rest at The Wall Street Journal (sorry if you run into a paywall)

Justice and mercy are most famously discussed in the Bible, but they are recurring themes throughout literature – Portia’s “The quality of mercy is not strained” speech in The Merchant of Venice and Dostoevsky’s Crime and Punishment come immediately to mind.

Stir and Bustle

From The London Review of Books:

In the original film noir, John Huston’s Maltese Falcon (1941), private investigator Sam Spade (Humphrey Bogart) visits criminal mastermind Kasper Gutman (Sydney Greenstreet) in his San Francisco hotel room to discuss the delivery of a mysterious ornament. The elevator attendant points him in the direction of Room 12c. After some cagey preliminaries, Spade delivers a ferocious ultimatum. Once out in the corridor again, he unlocks a wolfish grin. He’s got Gutman where he wants him. Or so he thinks. As he enters one elevator in order to descend to the lobby, Gutman’s accomplice, Joel Cairo, whom Spade already has grounds to distrust, exits from another. His failure to spot Cairo will very nearly prove fatal. Since Cairo is Peter Lorre at his most flamboyant, you would have to be quite far gone in self-congratulation not to notice him. Spade has failed to understand that a corridor is less a space than a channel of communication through which people, things and messages pass in both directions. Mind the traffic.

Roger Luckhurst’s ambitious and consistently informative cultural history of the corridor makes brief mention of The Maltese Falcon in accounting for film noir’s preoccupation with bleakly anonymous lobbies, passages and hallways. But it’s not the skills and attitudes required to negotiate these spaces that interest Luckhurst. In his view, corridors have a meaning rather than a function. Film noir, he says, set out to ‘interpret’ lobbies, passages and hallways as an index to modern alienation. This is emphatically a cultural rather than an architectural history. Literature, film, TV and other media are called on to elucidate meaning. One touchstone is Stanley Kubrick’s The Shining (1980), in which the camera stalks young Danny at just above ground level as he pedals his tricycle down the interminable featureless passageways of the Overlook Hotel. ‘The Shining,’ Luckhurst concludes, ‘revealed something about the emotional latency of corridors: a simple lesson in the social construction of space.’

The point of a corridor has always been to make it possible to get from one part of a building to another without having to pass through a succession of intervening rooms. Emerging into prominence in 17th-century Italy, corridors found an early champion in John Vanbrugh, whose designs for Blenheim Palace and Castle Howard modelled the new arrangement of a series of rooms opening off a long central axis. The idea gained wide currency during the Enlightenment, Luckhurst notes, as a ‘rational proposal’ for the redistribution of public and private space. Where domestic interiors were concerned, the proposal’s aim was in large measure defensive: a reinforcement of privacy. Luckhurst proves an excellent guide to the distinctly mixed bag of ‘utopian conceptions’ and ‘dystopian results’ that was the outcome of this Enlightenment project.

The first flower of the ‘utopia of corridors’ was the phalanstère (from the Greek phalanx, a body of soldiers in tight formation), dreamed up by the philosopher Charles Fourier and his disciple Victor Considerant as a solution to the social and economic instabilities of post-revolutionary France: a settlement house or estate for 1620 people organised around a street gallery that ran the full length of the second floor. This public thoroughfare was the only way to get from one domestic interior to another, or to gain access to an array of facilities including canteens, nurseries and workshops. Fourier had it in mind to dismantle the bourgeois family. ‘In utopia,’ Luckhurst notes, ‘the corridor always promises radical social reassemblage.’

. . . .

Given his interest in social and political utopia, it’s curious that Luckhurst has nothing at all to say about The West Wing, perhaps the most influential, and certainly the most uplifting, recent exploration of ‘corridors of power’. The show’s signature idea was the walk-and-talk: an elaborately choreographed tracking shot which follows several characters at a time as they navigate the corridors of the White House while engaged in multiple, overlapping conversations.

. . . .

Both draw substantially on a seminal essay by the architectural historian Mark Jarzombek which demonstrates, from the 17th century onwards, the way the corridor became the ‘organising structure’ of the modern large-scale edifice, public or private. Jarzombek points out that in 14th-century Spain and Italy, the term referred ‘not to a space but to a courier, someone who, as the word’s Latin root suggests, could run fast’. A ‘corridor’ was a messenger, a scout, a carrier of money, a negotiator: a person in a hurry.

. . . .

The Gothic persisted, and Luckhurst wrings plenty of ‘spatial dread’ from the corridor of Thornfield Hall in Charlotte Brontë’s Jane Eyre (1847), where Bertha Mason is held captive behind a small black door. More frightening, I’d say, because a lot cannier, is the one on the upper floor of a village inn in Mary E. Braddon’s Lady Audley’s Secret (1862). The scheming protagonist ventures along it in search of the room occupied by her husband’s nephew, Robert, who is hot on her (bigamous) trail. ‘She stopped and looked at the number on the door. The key was in the lock, and her hand dropped upon it as if unconsciously.’ She stands for a few moments trembling, ‘then a horrible expression came over her face, and she turned the key in the lock; she turned it twice, double locking the door.’ No prizes for guessing that she’s about to set fire to the place. The number on the door of Robert’s room has targeted him as unerringly as the GPS lock that launches a drone strike.

Link to the rest at The London Review of Books

TL;DR (That is, Too Long; Didn’t Read)

From The Scholarly Kitchen:

I recently spent the better part of a work day reading Richard Poynder’s 87-page treatise on the current status of open access. Even as I printed it out, so as to protect myself from any digital distraction while reading, I wondered whether reading the full text was in fact the best use of my time. Was there an executive summary that might suffice? Could I skim it and just pick up the general gist of his argument? Truthfully, the response to both questions turned out to be No. It was a substantive piece and thoroughly documented, via footnotes as well as embedded links. Clearly, a thorough reading was going to require attention and time. Did I have either?

I was not the only person who reacted to the length of Poynder’s “ebook.” Others were having to make the same decision about whether the time spent reading would be well invested. Although I hadn’t realized something was in the works at the Scholarly Kitchen, Rick Anderson, Associate University Librarian at the University of Utah, had already done some of the heavy lifting of evaluation. On Twitter, a researcher asked for the TL;DR version, and the author quickly pointed him to Digital Koans, where a single concluding paragraph was offered as a summary of what the author felt the meaty essay covered.

. . . .

But even so, someone else tweeted that, no matter how worthwhile the content, they could hardly hand an 87-page document to their provost and expect it to be read. The time commitment required to consume such dense material would not seem justifiable unless the topic was one with which the provost was already deeply concerned.

This gives me pause, because how we view the task of reading, how much time we allocate to it, and the criteria for determining what is worthy of being read continue to pose a challenge, one with which many professionals wrestle on a daily or weekly basis.

. . . .

Given the demands of real life, how much reading is feasible? The group included Verity Archer of Federation University in Australia, who referenced the concept of “time privilege”: the fact that those with the greatest flexibility in their schedules are usually the most privileged when it comes to reading. Those early-career researchers who most need the time to read and absorb the literature are generally the ones most weighted down with teaching and administrative tasks. Women who are primary caregivers outside the office tend to push the work of reading into their evening or weekend leisure hours. If reading is part of one’s day job, in Archer’s view, then the available hours in the workday should allow for it.

. . . .

Others referenced irregular reading habits unless faced with a grant or syllabus deadline, at which point they would do a spell of binge-reading. The hesitation associated with that practice was summed up well by David A. Sanders of Purdue when he wrote, “We should…resist the urge to promote research results that we have not personally evaluated.”

A humanist quoted in the Times Higher Ed piece wrote that the question of determining what to read was “now infinitely more complex in the age of digital and computational possibilities”.

. . . .

The Danish AI company UNSILO recently reported on results from its 2019 survey on the acceptance and usage of AI in academic publishing. It found that publishers have hitherto focused on how AI might solve their own problems rather than those of the research community. As noted on page 8 of the report, “The primary perceived benefit of AI was that it could save time. This could be seen as evidence of a new realism among publishers, since the thinking is presumably to apply AI tools to relatively straightforward processes that could be completed faster with the aid of a machine, such as the identification of relevant articles for a manuscript submission, or finding potential peer reviewers who have authored papers on similar topics to a manuscript submission.” While I see this as a sensible use of AI by content and platform providers, the pragmatic reality suggests an uncomfortable possibility. There is no magic solution. AI isn’t currently up to the task.

In the earlier instance of the librarian reading for purposes of peer review, there was a quick response from one of the founders of Scholarcy, an application that offers researchers summaries of full-text articles. The company’s tagline is blunt: “Read less, learn more.” It springs from the founders’ own frustrations in trying to handle the volume of content to be read during the PhD process. Among other functionalities noted in its marketing text, Scholarcy will highlight the important findings in a paper, eliminating the need for the reader to print out and laboriously highlight critical segments or sentences. The reader can customize specific aspects — the number of words, the level of highlighting, and the level of language variation (this last allows you to more easily cite the finding in your paper). Scholarcy will navigate the user to Google Scholar, to arXiv, and to other open-access material referenced in the paper. There are additional functionalities, and Scholarcy invites visitors to its site to engage with its demo, a worthwhile use of 15 minutes. The tool is recommended for researchers, librarians, publishers, students, journalists, and even policy wonks.

Link to the rest at The Scholarly Kitchen

PG is not an expert on academic writing, but in the legal world, there is a lot of poor writing. Sentences and paragraphs are structured according to standard practice, and citations are perfect (thanks, in part, to some computer assistance), but the thought behind the expression often seems haphazard and poorly realized.

Contracts written by lawyers working for or in large business organizations are the worst. Stack poorly organized thinking upon legal necessities upon boilerplate mindlessly copied and pasted, and you end up with an extraordinary mess that sometimes contradicts itself and requires that you go to paragraph 54 to understand something written in paragraph 29, which is later modified in paragraph 62(a)(iii).

Predatory journals: no definition, no defence

From Nature:

When ‘Jane’ turned to alternative medicine, she had already exhausted radiotherapy, chemotherapy and other standard treatments for breast cancer. Her alternative-medicine practitioner shared an article about a therapy involving vitamin infusions. To her and her practitioner, it seemed to be authentic grounds for hope. But when Jane showed the article to her son-in-law (one of the authors of this Comment), he realized it came from a predatory journal — meaning its promise was doubtful and its validity unlikely to have been vetted.

Predatory journals are a global threat. They accept articles for publication — along with authors’ fees — without performing promised quality checks for issues such as plagiarism or ethical approval. Naive readers are not the only victims. Many researchers have been duped into submitting to predatory journals, in which their work can be overlooked. One study that focused on 46,000 researchers based in Italy found that about 5% of them published in such outlets. A separate analysis suggests predatory publishers collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH).

. . . .

Everyone agrees that predatory publishers sow confusion, promote shoddy scholarship and waste resources. What is needed is consensus on a definition of predatory journals. This would provide a reference point for research into their prevalence and influence, and would help in crafting coherent interventions.

To hammer out such a consensus and to map solutions, we and others met in Ottawa, Canada, over two days in April this year. The 43 participants hailed from 10 countries and represented publishing societies, research funders, researchers, policymakers, academic institutions, libraries and patient partners (that is, patients and caregivers who proactively engage in research).

. . . .

The consensus definition reached was: “Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.”

. . . .

Since the term ‘predatory publishers’ was coined in 2010, hundreds of scholarly articles, including 38 research papers, have been written warning about them. Scientific societies and publishers (including Springer Nature) have helped to establish the ‘Think. Check. Submit.’ campaign to guide authors. But it is not enough.

More than 90 checklists exist to help identify predatory journals using characteristics such as sloppy presentation or titles that include words such as ‘international’. This is an overwhelming number for authors. Only three of the lists were developed using research evidence. Paywalled lists of quality journals and of predatory journals show that there is an appetite for clear, authoritative guidance. But these lists are inconsistent and sometimes out of reach (see ‘No list to rule them all’). A journal’s membership in agencies such as COPE (the Committee on Publication Ethics), its inclusion in curated indexes such as Web of Science, or its listing in the Directory of Open Access Journals (DOAJ) is insufficient to guarantee quality. Predatory journals have found ways to penetrate these lists, and new journals have to publish for at least a year before they can apply for indexing.

. . . .

Crafting a consensus definition was hard. Even reaching agreement on the use of ‘predatory’ was a challenge. Part of the group wanted a term that acknowledges that some authors turn to these outlets fully aware of their low quality; these scholars willingly pay to publish in predatory journals to add a line to their CVs. We discussed replacing the term entirely with language that recognizes nuances in publishers’ quality and motivation. Alternatives considered included ‘dark’, ‘deceptive’, ‘illegitimate’ and ‘acting in bad faith’. Ultimately, we concluded that the term ‘predatory’ has become recognized in the scholarly community.

. . . .

False or misleading information. This applies to how the publisher presents itself. A predatory journal’s website or e-mails often present contradictory statements, fake impact factors, incorrect addresses, misrepresentations of the editorial board, false claims of indexing or membership of associations and misleading claims about the rigour of peer review.

Deviation from best editorial and publication practices. Standards here have been set out in the joint statement on Principles of Transparency and Best Practice in Scholarly Publishing issued by the DOAJ, the Open Access Scholarly Publishers Association, COPE and the World Association of Medical Editors. Examples of substandard practice include not having a retraction policy, requesting a transfer of copyright when publishing an open-access article and not specifying a Creative Commons licence in an open-access journal. These characteristics can be difficult to know before submitting, although such information is easily obtained from legitimate journals. An unprofessional-looking web page — with spelling or grammar mistakes or irrelevant text — should also raise red flags.

Link to the rest at Nature

The Bravest Thing Col. Randy Hoffman Ever Did Was to Stop Fighting

From The Wall Street Journal:

Marine commando Randy Hoffman’s plane took off from Kabul, climbed over the jagged mountains and turned toward home.

Somewhere down there was his tent, a piece of canvas stretched across a pit he had carved into a high-altitude ridge. Randy had spent most of the previous 2½ years in the mountains along the Pakistan border, turning Afghan villagers into soldiers.

Rugs covered the tent’s dirt floor. He had a wood stove for heat and collected catalogs of farm equipment and RVs to remind him of home in Indiana. A metal thermos stored the goat’s milk and cucumber drink delivered each morning by the mountain men who fought alongside him. He and the Afghans would sit on a dirt bench, talking about poetry, faith and honor, and how to make it through the next day alive.

Randy’s camp watched over the narrow passes and smuggling paths used by al Qaeda and Taliban militants to sneak into Afghanistan from Pakistan. He kept mortars aimed at likely approaches. At times, he was the only American for miles.

On Randy’s last trip down the mountains, a caravan of Afghan fighters in Toyota pickups escorted him on the seven-hour drive to a U.S. base. From there, he caught a helicopter to Kabul and trimmed the beard he had grown so he wouldn’t stand out as a target during gunfights.

It was July 2005. As Randy headed home, he couldn’t escape one thought. U.S. troops had been in Afghanistan three years and nine months—as long as they had fought in World War II. Yet the Afghan war wasn’t close to won.

On the flight home, Randy pictured the many villagers lost in combat, men he had come to admire for their courage and strict sense of right and wrong. He thought about those left legless by militant bombings and now facing a life ahead in mud-brick compounds perched on mountainsides.

He turned away from the others on the plane and cried.

Since the first U.S. troops arrived in 2001, Afghanistan has become a generational war. The youngest recruits stepping off the bus at boot camp today were born after the Sept. 11 terrorist attacks that ignited the war they may soon fight.

Col. Randy Hoffman served seven combat tours in Afghanistan, six of them highly classified missions, and one stint in Iraq. Afghanistan brought him promotions. It rewarded the rural boy from Danville, Ind., with a Bronze Star medal for valor. It transformed a middling student into a scholar of history and war.

Afghanistan also nearly cost Randy his sanity. It buried friends. It almost ended his career. It ripped ragged edges around a gentle personality.

It strained his marriage and frightened his children. The family began referring to itself as Hoffmanistan, a dark joke reflecting Afghanistan’s long reach into their daily lives.

Eighteen years after the Sept. 11 hijackings spurred the U.S.-led invasion to oust the Taliban and its al Qaeda allies, American troops are still fighting and dying in Afghanistan.

Randy first kissed Dawn on the night before he left for boot camp in 1985.

She was the little sister of his best friend, and he had known her since she was 6 years old. They grew up during an era of skateboards and mullets in Danville, a town of 4,000 in the center of Indiana.

Dawn was an honor student at Danville High School. Randy brought up the rear. He gathered the nerve to ask her out when she was 15, and he was 18.

After Randy left for the Marines, Dawn waited for him. He earned a spot in an elite Force Reconnaissance platoon. She studied nursing.

They married in 1991, and the couple settled into an upstairs apartment in the house of Randy’s parents. They stocked it with furnishings salvaged from their childhood bedrooms.

Randy attended Indiana University and earned an officer’s commission. Military service was part of his heritage. His father and two uncles were Marines.

He was 2 years old in 1968 when his uncle Terry Hoffman, a helicopter crew chief, was shot down in Vietnam. The aircraft split in half, and Terry’s body was thrown far from the wreckage. He was still listed as missing in action after Saigon’s fall in 1975. Randy saw his grandmother cover her mouth in shock as she watched TV reports of the last Americans boarding helicopters, leaving her son behind.

A Vietnamese farmer found Terry’s remains and kept them. When the farmer died, his family gave a jawbone to authorities, who passed it along to a U.S. casualty-recovery team.

In 1994, Randy’s first duty as a second lieutenant was to escort Uncle Terry’s remains home. He knelt and handed his grandmother the American flag, folded tightly into a triangle, on behalf of “the president of the United States, the United States Marine Corps and a grateful nation.”

The day of the Sept. 11 attacks, Randy was at a Marine Corps school at the base near Quantico, Va. He had been having premonitions—a heads-up from God, he believed—about a terrible event.

Military officers asked students if any spoke Urdu, Arabic, Farsi or Pashto. Randy had studied Arabic in college, but he didn’t feel fluent enough to put up his hand. The military decided any Arabic was good enough.

Link to the rest at The Wall Street Journal (sorry if you encounter a paywall)

As has been mentioned before, TPV is not a political blog, but PG thought the first paragraphs of the OP were quite effectively written.

The article includes a lot of photos from Randy’s past and his present as a balding 56-year-old colonel with nearly 40 years of service and a “textbook case” of PTSD (Post-Traumatic Stress Disorder) who is training recruits straight out of high school to be Marines. In part because of Randy’s experience, that training now includes teaching about the emotional and physical toll that combat imposes on soldiers and how to identify the symptoms of PTSD.

Randy’s wife, Dawn, says, “I don’t think that PTSD ever goes away.”

Another paragraph from the OP:

Dawn has turned her nurse’s training into a career helping troops come home. She fields calls at all hours from troubled vets and worried spouses. The wife of an Afghanistan vet phoned last year to say her husband had been on a bender for days. Dawn tracked down another veteran who went to the couple’s house, took the vet to an emergency room and then enrolled him in an alcohol-abuse program run by the VA.

The Jewish War by Josephus

From The Wall Street Journal:

As any parent knows, when you send a child out into the world, there’s no way to predict what twists and turns the youngster’s life might take. How much truer that is for an author, especially when the “child”—the book—survives for two millennia.

When Josephus wrote his “Jewish War” around the year 75, he could not have guessed its longevity or its use and misuse. The book narrates a rebellion in Judea against Rome (66-73) that savaged a legion before the avenging empire sacked Jerusalem, destroyed the Temple and killed or enslaved large numbers of Jewish civilians. It includes vivid scenes of the Roman way of war, of suffering during the siege of Jerusalem, and, perhaps most memorably, of the mass suicide of resistance fighters making their last stand at Masada. A member of the Judean elite turned Roman citizen, Josephus wrote primarily for the Jews of the Mediterranean and the Near East. Yet the majority of his readers in the centuries since have been Christians.

Such is the winding and unpredictable path that Martin Goodman traces in “Josephus’s ‘The Jewish War’: A Biography.” A distinguished historian both of Judaism and of the Roman Empire, Mr. Goodman has produced, in his latest work, a succinct and vigorous account that combines erudition and an eye for detail with graceful insights.

The story of Josephus’s book is full of paradox. “The Jewish War,” as Mr. Goodman notes, is not a sacred text, but in the 19th century many Christians and Jews in America and England cherished it along with the Bible. Nor is the book great literature; Mr. Goodman rightly calls it instead “a fine work of almost instant history.” Although a Roman Jew, Josephus wrote neither in Hebrew nor Latin. He produced two versions of his work, first (probably) in Aramaic, and then in Greek, the main language of Jews in the Roman Empire.

After the first generation of readers, Jews largely ignored the book for centuries. Christians, however, esteemed it as fulfillment of New Testament prophecies of the coming destruction of Jerusalem and the Temple as a divine punishment for rejecting Christ.

. . . .

Josephus first opposed the Jewish revolt against Rome, then accepted an important military command. When defeat loomed, he agreed to a suicide pact, then changed his mind and survived. He talked his way into the entourage of the Roman commanders who eventually sacked Jerusalem and destroyed the Temple. Fetching up in Rome as a Roman citizen and protégé of the emperors, Josephus earned the distrust of some fellow citizens for his outspoken and courageous defense of Jews. If all that weren’t equivocal enough, he changed important details of his account of his career in a Life written after “The Jewish War.”

. . . .

As Mr. Goodman explains, the book made its way back into Jewish consciousness only in the Middle Ages and only through the circuitous route of a Hebrew book based on a Latin paraphrase of various works of ancient literature, including parts of “The Jewish War.” This odd product contained a number of errors, some purposeful “improvements” on the original, some careless mistakes, such as misidentifying the author as another Josephus—that is, as Joseph ben Gorion, a rebel leader in Jerusalem, instead of Joseph ben Matthias, the real author. Yet this idiosyncratic work turned out to be the most important Hebrew historical book of the Middle Ages. It was also read by Muslims and by Ethiopian Christians, who still consider it Scripture. Nor did its influence end there: It was still widely read in the 20th century. For example, it inspired Israel’s future first prime minister to change his name from David Grün to David Ben-Gurion, after the rebel leader and purported author.

. . . .

Among Anglophone Christians, the book earned the respect of such writers as Thomas Hardy, Rudyard Kipling and Mark Twain, all using a widely read English translation. Gen. Lew Wallace said that “The Jewish War” inspired his best-selling novel “Ben-Hur: A Tale of the Christ” (1880)—and hence the later Broadway play and Hollywood films.

. . . .

With the foundation of the state of Israel in 1948, “The Jewish War” found acceptance “as a narrative for a proud new nation,” Mr. Goodman writes. He notes in particular how the discovery of the Dead Sea Scrolls in 1947 and the excavation of Masada in the 1960s brought Josephus into the center of Israeli national consciousness. Because the scrolls documented a pietistic religious community similar to one described by Josephus, they appeared to deepen Israel’s roots in history. Meanwhile, the fall of Masada is one of the most intense scenes in “The Jewish War.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

The Author of ‘Simple Abundance’ Has Some New Advice for You

From Publishers Weekly:

When Grand Central Publishing gave the green light for a 25th-anniversary edition of Simple Abundance, I thought it would be a very straightforward and quick project. This clearly proved to be delusional, because the creative frenzy that followed was the hardest year of writing I’ve ever known. Its saving grace was that updating my book for the 21st century was also the happiest writing assignment I’ve ever had. In fact, the challenges I faced always faded at the end of the day, and each new morning meant a welcome chance to get it right.

Simple Abundance hasn’t been out of print since its publication in 1995, for which I’m eternally grateful. But the impetus to update came several years ago, when I started hearing from millennial and Generation X women who’d discovered for themselves the “pink book” their mothers had loved over the years. Something on its pages spoke to these young women too, creating a comforting continuity and respite from their relentless daily rounds.

But there was a problem: both they and I felt that more of my references should come from modern voices, and that some of the recommendations for seasonal activities dated both the book and its writer. No longer would readers send for a mail order catalogue or watch a VHS movie—it was online shopping and streaming video now!

Could I revise it for both its long-time readers and for younger women? While hearing from new readers who’d been children 25 years ago was delightful, it was humbling and eye-opening, too. It became clear that besides the obvious updates, today’s women—no matter their age—need a new kind of comfort.

Today’s rapidly changing, complex, and mostly alarming 24/7 breaking-news culture shocks us at every turn, regularly catapulting us into the realm of the unspeakable. From dawn to dark, we find ourselves embedded with reporters around the world covering every harrowing natural disaster or appalling terrorist atrocity.

Lassoed by our heartstrings, we helplessly watch strangers in danger as these tragic events unfold in real time. Because we rarely can help, we become stunned by sorrow. The energy that ought to go toward shepherding our loved ones through ordinary days is instead drained away, depleting our sense of security and diminishing our capacity for happiness.

. . . .

Someone has to say it, so I’ll go first: this is not normal.

We are living through extraordinary times, and there is nothing normal about what is unfolding every day. And the only way to safeguard ourselves and those we love is by acknowledging that technology, while informative, must have its limits.

So how do we do this? The way women always have protected their own: by creating emotional, psychological, and physical safe havens that shelter what we hold sacred. In real time.

Link to the rest at Publishers Weekly

The Construction of America, in the Eyes of the English

From JSTOR:

When the astrologer Simon Forman went to the theater in 1611, one of the plays he saw depicted a powerful empire moving into the west to conquer—and, supposedly, elevate—a group of barbarians who lived at the world’s periphery. Forman’s thoughts at the Globe Theater may have turned to “no act of common passage, but / A strain of rareness” as he watched this colonial drama unfold. For Londoners, such New World concerns had been top of mind for more than a generation. Jamestown had been founded in Virginia four years before. And Sir Walter Raleigh had attempted to establish a colony in what would become North Carolina in 1585, an attempt followed by the infamous “lost colony” of Roanoke, in 1587. Those earlier colonies were part of Queen Elizabeth’s tentative gestures toward American colonization, and though they were largely unsuccessful, the discoveries made, and the imperial justifications proffered, by figures like Raleigh, his half-brother Humphrey Gilbert, and Martin Frobisher were part of the public consciousness. These themes of conquest would have seemed especially pertinent as the English once again tried to establish a toehold across the Atlantic, under the reign of the new monarch, James I. Yet as familiar as this story of a civilizing empire’s sojourns among a savage people may have seemed to a Jacobean audience, Forman’s recounting of the basic plot in his diary—William Shakespeare’s “story of Cymbeline, king of England, in Lucius’ time,” with its tale of faith and culture that “came with the Romans into England”—makes it clear that, even then, the conversations surrounding “civilization” and “savagery” could be complicated, nuanced, self-serving, and ironic.

Forman’s is one of the few contemporary accounts of Shakespeare’s staging, and he says little about how the Romans were differentiated from the Britons in Cymbeline. That doesn’t mean that the period wasn’t replete with depictions of the ancient Britons and Picts, as well as their cousins, the contemporary Irish and Scots, against the former of whom the English were engaged in a genocidal campaign. Furthermore, as is made clear by James E. Doan in New Hibernia Review/Iris Éireannach Nua, those colonists (off in distant Virginia, while Forman enjoyed Cymbeline) “made early ethnological comparisons between the Irish and the native American cultures.”

Keith Pluymers argues the same thing in Environmental History, writing that “images of the Algonquians and Virginia’s landscape closely resembled Ireland” in colonial depictions of America. Such language was deployed, in part, because conquerors like Raleigh and Gilbert were veterans of the ethnic cleansing campaigns in Ulster that established the brutal plantation system in Ireland.

. . . .

Such language was also used because of the shared Celtic origins of the Britons and the Irish, and perceived similarities between both to the Algonquin Indians of the Chesapeake and Potomac region. A rhetoric of anxiety supplied self-satisfied justifications for colonization. The English nervously considered their own origins in plays like Cymbeline, asking what the importance was of appearance and culture in the constitution of a people. Clothes are not incidental in Cymbeline, nor were they in the consolidating racial discourses that justified English incursions into Virginia (and Ireland). In Shakespeare’s play, a character switches allegiance from the Romans to the Britons with the declaration that “I’ll disrobe me / of these Italian weeds and suit myself / as does a Briton peasant.” More than perceived phenotypical difference, it was Algonquin clothes (or, as the English saw it, the lack thereof) that reminded them of the Irish, who were the first victims of English imperialism. It also reminded them of their own ancestors, supposedly civilized and Christianized by the Romans. The propagandistic import of the rhetoric that described the Indians as being similar to the ancient Britons was clear: as we once were, so are you now. And as the Romans made us, so shall we make you.

. . . .

“There have been diverse and variable reports,” Harriot writes, “with some slanderous and shameful speeches bruited abroad by many that returned.” A Brief and True Report of the New Found Land of Virginia is Harriot’s attempt both to correct that record as he sees it and to provide incentive for investors to keep funding Roanoke. Not only was Harriot a cagey marketer, he was also a stolid observer of the natural world, having distinguished himself in navigation, astronomy, and mathematics. In its fascination with objective data, his treatise has remained invaluable even today for anthropologists studying the Algonquin at the moment of contact.

Most evocative of all in this respect are the illustrations that accompany Harriot’s text. While White’s original watercolors (an unusual medium for an illustrator to work in at the time) were first exhibited only in the twentieth century, readers would have learned about the Algonquin from de Bry’s engravings. The Flemish engraver had never actually been to America himself. Charlotte Ickes argues in American Art that, as a result, “De Bry’s engravings… [were] informed by certain European aesthetic conventions as well as the taxonomic logic of nascent ethnographic inquiry.” Consider an image of an Algonquin brave made by de Bry:

Ickes writes that de Bry’s “figures often stand in a generic no-place, and several of their stances derive from earlier sources.” Despite Harriot’s meticulous observations concerning North Carolina’s terrain, de Bry has opted to place his figure in an idealized and recognizably European landscape. The rolling hills and pines evoke the Scottish midlands as much as the Carolina Piedmont. Even more stylized is the brave himself, who, in his affected position and with his seemingly winged helmet, bears far more similarity to the pagan god Mercury than he does to an Algonquin youth. If not for the fringed loincloth, a viewer might assume that they were looking at an image of Hermes.

Or examine both an original by White and de Bry’s version of that same image, both of which depict an Algonquin religious ritual:

. . . .

However, Harriot shouldn’t be thought of as advocating any kind of modern multicultural tolerance. Writing in The North Carolina Historical Review, Michael Leroy Oberg explains that Raleigh’s reason for sending Harriot and White to Roanoke was not knowledge for its own sake, but rather to “accumulate enough information about Algonquian culture and society to incorporate the natives into the Anglo-American, Christian New World empire that he hoped to plant in Virginia.” Harriot’s purposes, in other words, were imperialistic, and his study was meant to aid colonization; he writes that “it may be hoped, if means of good government be used, that… [the Indians] may in short time be brought to civility and the embracing of true religion.” (In this respect, as Karen Ordahl Kupperman notes in The Historical Journal, some advocates for English imperial expansion “felt the Indians would be easier to civilize than the Irish.”)

Early relations with the Algonquin were relatively peaceful at Roanoke and Jamestown, yet as English aggression toward the natives increased, the language used to describe them became increasingly similar to that used against the Irish.

Link to the rest at JSTOR