From Foreign Policy:
The idea of a lab leak has gone, well, viral. As a political scientist, I cannot assess whether the evidence shows that COVID-19 emerged naturally or from laboratory procedures (a question on which many experts themselves strenuously disagree). Yet as a political scientist, I do think that my discipline can learn something from thinking seriously about our own “lab leaks” and the damage they could cause.
A political science lab leak might seem as much of a punchline as the concept of a mad social scientist. Nevertheless, the notion that scholarly ideas and findings can escape the nuanced, cautious world of the academic seminar and transform into new forms, even becoming threats, becomes a more compelling metaphor if you think of academics as professional crafters of ideas intended to survive in a hostile environment. Given the importance of what we study, from nuclear war to international economics to democratization and genocide, the escape of a faulty idea could have—and has had—dangerous consequences for the world.
Academic settings provide an evolutionarily challenging environment in which ideas adapt to survive. The process of developing and testing academic theories provides metaphorical gain-of-function accelerations of these dynamics. To survive peer review, an idea has to be extremely lucky or, more likely, crafted to evade the antibodies of academia (reviewers’ objections). By that point, an idea is either so clunky it cannot survive on its own—or it is optimized to thrive in a less hostile environment.
Think tanks and magazines like the Atlantic (or Foreign Policy) serve as metaphorical wet markets where wild ideas are introduced into new and vulnerable populations. Although some authors lament a putative decline of social science’s influence, the spread of formerly academic ideas like intersectionality and the use of quantitative social science to reshape electioneering suggest that ideas not only move from the academy but can flourish once transplanted. This is hardly new: Terms from disciplines including psychoanalysis (“ego”), evolution (“survival of the fittest”), and economics (the “free market” and Marxism both) have escaped from the confines of academic work before.
The “clash of civilizations” hypothesis is a good candidate for one of the more disruptive lab leaks in political science’s history. When the Harvard University scholar Samuel P. Huntington released his article “The Clash of Civilizations?” (note the question mark, which disappeared in later versions) in Foreign Affairs in 1993, he spread a bold and simple hypothesis about the course of the post-Cold War world: “The great divisions among humankind and the dominating source of conflict will be cultural. … The clash of civilizations will dominate global politics. The fault lines between civilizations will be the battle lines of the future.”
Huntington’s thesis was not a conjecture based on careful empirical study—it was a speculation looking forward based on some cherry-picked contemporaneous examples. Many academic articles that sought to rebut Huntington by testing his hypothesis fell into this trap, attempting to show him wrong with sometimes quite impressive tests. But Huntington could not be disproved by mere facts. His idea was primed to thrive in the wild, free from the confines of empirical reality.
Facts, indeed, often appeared secondary to Huntington’s larger political project. In his follow-up book on the subject, The Clash of Civilizations and the Remaking of World Order, he illustrated his argument by sketching what he considered a plausible scenario: a Sino-U.S. conflict over Vietnam leading to a racialized third world war that ends with the destruction of Europe and the United States while India attempts to “reshape the world along Hindu lines.”
This writing did not get Huntington ostracized; instead, it enhanced his reputation, especially after the 9/11 terrorist attacks made his claim that “Islam has bloody borders” seem plausible to mainstream audiences. As late as 2011, the New York Times columnist David Brooks praised Huntington as “one of America’s greatest political scientists”—and even though that column ultimately judged Huntington as having gotten the “clash” hypothesis wrong, it did so with kid gloves: “I write all this not to denigrate the great Huntington. He may still be proved right.”
Another contender is the idea of managing great-power competition through game theory. During the 1950s and 1960s, political scientists and their counterparts in economics and elsewhere sought to understand the Cold War by using then-novel tools of game theory to model relations between the United States and the Soviet Union. In their earliest forms, these attempts reduced the negotiations and confrontations between the two sides to simple matrices of outcomes and strategies with names like the Prisoner’s Dilemma, Chicken, and the Stag Hunt.
The allure was obvious. Make some simplifying assumptions about what the players in these games want; specify the strategies they can employ to achieve them; assume that players know what the other players know; and calculate that they will choose their strategy based on the choice the other player will make to maximize their well-being. Voilà—a science of strategy.
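The recipe in the paragraph above can be sketched in a few lines of code. This is a minimal illustration, not anything from the article itself: the payoff numbers are the conventional textbook values for the Prisoner’s Dilemma, and the equilibrium check is the standard Nash condition (no player gains by unilaterally switching).

```python
from itertools import product

# Illustrative Prisoner's Dilemma payoffs (assumed, not from the article):
# each entry is (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ("cooperate", "defect")

def is_nash_equilibrium(row, col):
    """True if neither player can do better by unilaterally switching."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    best_row = max(PAYOFFS[(s, col)][0] for s in STRATEGIES)
    best_col = max(PAYOFFS[(row, s)][1] for s in STRATEGIES)
    return row_payoff == best_row and col_payoff == best_col

equilibria = [p for p in product(STRATEGIES, STRATEGIES)
              if is_nash_equilibrium(*p)]
print(equilibria)  # [('defect', 'defect')] — mutual defection is the only equilibrium
```

In the one-shot game, defection strictly dominates, which is exactly why the game earned its reputation as a trap for cooperators.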
It is easy to mock this approach—too easy, in fact. These simple assumptions perform pretty well within their theoretical boundaries. Every semester (when the world isn’t in a pandemic), I use in-person simulations of these basic games with my undergraduate students to show that changing the rules of the game can influence players’ willingness to cooperate, a finding well attested in generations of scholarly tests.
Yet there’s a huge leap in jumping from these general, aggregate findings to believing that such simple ideas can guide the behavior of complex states without an incredible amount of additional refinement. In international relations, the specific strategies that can be employed are vast (and new ones can be invented), the stakes of every contest are unknowable, actors have incentives to hide what they know from others, and, perhaps most important, players interact again and again and again. Even when playing the Prisoner’s Dilemma, a game concocted to make cooperation a fool’s strategy, simply changing from playing a game once to playing it repeatedly can make cooperation an equilibrium.
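The repeated-play point above can also be made concrete. The sketch below (payoffs and round count are illustrative assumptions, not from the article) pits a reciprocal strategy, tit-for-tat, against itself and against unconditional defection: repetition lets the reciprocators sustain cooperation and outscore the mutual defectors.

```python
# Assumed textbook payoffs: (row payoff, column payoff) per round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return each player's total payoff over a repeated game."""
    seen_by_a, seen_by_b = [], []   # each side's record of the opponent's moves
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation is sustained
print(play(always_defect, always_defect))  # (100, 100): mutual defection pays less
```

The contrast with the one-shot result is the whole point: once players expect to meet again, conditional cooperation can be an equilibrium, which is why extrapolating from the single-round matrix to real interstate relations requires so much refinement.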
Nevertheless, the general tendency of a certain influential sect of social science was to embrace the idea that game theory (to be fair, in somewhat more sophisticated terms) could provide not only insights into general features of world affairs but specific foreign-policy recommendations to guide the United States through the Cold War. In influential books like The Strategy of Conflict and Arms and Influence, the game theorist Thomas Schelling used those tools to make the Cold War seem easy to manage—an interaction in which cool heads, logic, and a steely command of risk could make confrontations from the Taiwan Strait to the Berlin Wall explicable and winnable.
All of this would have been harmless if these ideas had stayed inside the lab.
Link to the rest at Foreign Policy
Exhibit #MXWT-94837 in support of the proposition that smart people are perfectly capable of believing and doing really dumb things.
Some might argue that the conceit of thinking one is really smart will likely lead to doing more dumb things, on a far grander scale, than will occur in the life of someone who is reasonably intelligent and believes her/himself to be reasonably intelligent. The second person will, of course, make mistakes, but not extraordinarily large and incredibly stupid ones.
Which brings us to Hubris and Nemesis.
From Greek Mythology:
Nemesis was the goddess of divine retribution and revenge, who would show her wrath to any human being who committed hubris, i.e., arrogance before the gods. She was considered a remorseless goddess.
. . . .
One myth concerning Nemesis is that of Narcissus. He was a young man who was very arrogant and disdained those who loved him. Nemesis led him to a pool, where he saw his reflection and fell in love with it. Unable to abandon his reflection, he died there.
Link to the rest at Greek Mythology
Examples of Hubris:
The Fall of Icarus
The story of Icarus was first written down in the first century AD in the Pseudo-Apollodorus, but the tale has a much older oral tradition. In the story, Icarus’s father made him a pair of wax wings and cautioned him not to fly too high with them. Becoming overconfident, Icarus flew as high as he wanted. The sun melted his wings, and he fell to his death.
Oedipus Rex by Sophocles
Oedipus Rex is a play by Sophocles, which was first performed about 429 BC. In this play, King Oedipus defies the gods’ prophecy that he will kill his father and marry his mother. Attempting to control and evade his own fate, he kills an old man who turns out to be his father. Later he marries the queen of Thebes, who turns out to be his mother. His attempt to defy the gods was considered hubris.
. . . .
The Canterbury Tales by Geoffrey Chaucer
In The Canterbury Tales, Geoffrey Chaucer offers another example of hubris. Written in the late 1300s AD, it includes the character of Chaunticleer, a rich and educated rooster. His pride in his wealth and accomplishments leads him to lose track of what is real, and he is easily duped by a fox that flatters his vocal ability. The fox eats him.
. . . .
Doctor Faustus by Christopher Marlowe
Written in the late 16th century, Doctor Faustus tells the story of a man who is so proud of his own academic accomplishments and intelligence that he sells his soul to the Devil for more knowledge and academic superiority. He receives eternal damnation as a result.
Link to the rest at Your Dictionary
From The Rand Corporation:
[The Hubris-Nemesis] complex involves a combination of hubris (a pretension toward an arrogant form of godliness) and nemesis (a vengeful desire to confront, defeat, humiliate, and punish an adversary, especially one that can be accused of hubris). The combination has strange dynamics that may lead to destructive, high-risk behavior. Attempts to deter, compel, or negotiate with a leader who has a hubris-nemesis complex can be ineffectual or even disastrously counterproductive when those attempts are based on concepts better suited to dealing with more normal leaders.
Link to the rest at The Rand Corporation