8 Anti-Capitalist Sci-Fi and Fantasy Novels

From Electric Lit:

Karl Marx may be famous for his thorough, analytic attack on capitalism (see: all three volumes and the 1000-plus pages of Das Kapital), but let’s be real: it’s not the most exciting to read. What if, just as a thought experiment, our works that reimagined current structures of power also had robots?

Speculative fiction immerses the reader in an alternate universe, hooking us in with a stirring narrative and intricate world-building—or the good stories do, anyways. Along the way, it can also challenge us to take a good look at our own reality, and question with an imaginative, open mind: how can we strive to create social structures that are not focused on white, patriarchal, cisgendered, and capitalist systems of inequity? 

As poet Lucille Clifton says, “We cannot create what we can’t imagine.” Imagination is an integral element to envisioning concrete change, one that goes hand-in-hand with hope. Although certain magical elements like talking griffins and time travel might be out of reach (at least for the present moment), fantasy and sci-fi novels allow us to imagine worlds that we can aspire towards. Whether through a satire that exposes the ridiculousness of banking or a steampunk rewriting of the Congo’s history, the authors below have found ways to critically examine capitalism—and its alternatives—in speculative fiction. 

Everfair by Nisi Shawl

A speculative fantasy set in neo-Victorian times, Shawl’s highly acclaimed novel imagines “Everfair,” a safe haven in what is now the Democratic Republic of the Congo. In Shawl’s version of the late 19th century, the Fabian Socialists—a real-life British group—and African-American missionaries band together to purchase a region of the Congo from King Leopold II (whose statue was recently defaced and removed from Antwerp as part of the global protest against racism). This region, Everfair, is set aside for formerly enslaved people and refugees fleeing King Leopold II’s brutal, exploitative colonization of the Congo. The residents of Everfair work to create an anti-colonial utopia. Told from a wide range of characters’ perspectives and backed by meticulous research, Shawl creates a kaleidoscopic, engrossing, and inclusive reimagination of what history could have been. “I had been confronted with the idea that steampunk valorized colonization and empire, and I really wanted to spit in its face for doing that,” Shawl states. Through her rewritten history of the Congo, Shawl challenges systems of imperialism and capitalism. 

. . . .

Making Money by Terry Pratchett

If you stop to think about it, isn’t the concept of a credit card ridiculous? Pratchett’s characters would certainly agree. Pratchett’s Discworld series, as the Guardian noted, “started out as a very funny fantasy spoof [that] quickly became the finest satirical series running.” This installment follows con man Moist von Lipwig (who first appeared in Pratchett’s spoof on the postal system, Going Postal) as he gets roped into the world of banking. The Discworld capital, Ankh-Morpork, is just being introduced to—you guessed it—paper money. However, citizens remain distrustful of the new system, opting for stamps as currency rather than using the Royal Mint. Cue the Financial Revolution, with Golem Trust miscommunications, a Chief Cashier who may be a vampire, and banking chaos. In his signature satirical style, Pratchett points out the absurdities of the modern financial system we take for granted.

Link to the rest at Electric Lit

PG has three immediate thoughts.

  1. Just about anything can serve as a theme for a fantasy novel and authors are perfectly free to riff on any topic they may choose.
  2. Das Kapital was a factual and logical mess, but an excellent pseudo-economic basis for gaining and keeping power in the hands of its foremost practitioners. The book was first published in 1867 by Verlag von Otto Meisner, which sounds a bit capitalist (and aristocratic) to PG.
  3. Each of the Anti-Capitalist books is published by a thoroughly-capitalist publisher and PG is almost completely certain that each of the authors received an advance against royalties for the book that would feed an impoverished African village for at least a few months.

17 of the Most Devious Sci-Fi and Fantasy Villains

From BookBub:

While incredible power and a raging desire to destroy things are good qualities to find in a sci-fi or fantasy villain, the ability to plot, scheme, and patiently wait until the right time to destroy your enemies elevates your typical villains to a whole new level. So it’s little wonder that any list of the best sci-fi and fantasy villains will also be a list of the most devious villains. All of the evil beings (and entities) on this list are ingeniously formidable foes. And, frankly, we love them for it.

Baron Vladimir Harkonnen — Dune

Truly devious villains know that sometimes you have to make an apparent sacrifice in order to arrange the playing board to your advantage. Baron Vladimir Harkonnen, who brought his disgraced house back into power and influence by sheer force of will, plays this trick on the noble House Atreides, his rivals, when he gives up control over Arrakis and the all-important melange trade to them. But this is just the beginning of Harkonnen’s genius and horrifying plan to destroy his enemies and gain power for his own infamous house. 

. . . .

The Aesi — Black Leopard, Red Wolf

The Aesi is a terrifying being dispatched by King Kwash Dara to thwart the band of improbable protagonists in this novel. Able to enter dreams, control minds, and send assassins made of dust, the Aesi is perhaps the most terrifying creature in a book absolutely filled with terrifying creatures. (You don’t get a nickname like ‘The god butcher’ for nothing.) But the true devious nature of the Aesi is revealed in a twist that makes it more terrifying than ever.

Link to the rest at BookBub

Theatrical Shortcuts for Dynamic Fiction

From SWFA:

I’m often asked if my professional theatre and playwriting background helps me as a fiction writer. It does in countless ways. Theatrical form, training, and structure are holistically integrated into how I see the world and operate as a storyteller. I adore diving deep into character, creating atmosphere, and ‘setting the stage’ for my novels. I became a traditionally published novelist many years after I’d established myself on stage and published as a playwright.

I teach a workshop called “Direct Your Book: Theatre Techniques Towards A Blockbuster Novel” about using theatrical concepts to invigorate, inspire, and problem-solve in fiction writing. Here’s what I’ve found to be the most consistently useful takeaways:

Physicality. One of my favorite aspects of character building when taking on a role is figuring out how they move: where their “center of gravity” is (the gut, the chest, or the head) and what part of their body leads the way. Thinking about this can really ground you in the bodies of your characters and how they interact with their world.

Environment. I’m a licensed New York City tour guide and there’s really nothing like moving through the streets your characters move through and truly living in all those details. In my Spectral City series, I utilize many of the city’s most haunted paths as the routes my psychic medium heroine takes to navigate the city. Her noting the various haunts of the city creates a sort of ‘lived in’ feel to the prose and to her experiences as a psychic detective. There is something to be said sometimes for writing ‘what you know’. If at all possible, visiting a place that informs your world directly, or inspires it if your world is a secondary one, can add so much in detail and expansive sensory experience. You can pair the experience of walking and drinking in this environment by thinking of the characters’ physicality and qualities of movement as you do so.

Clothing. Even if it isn’t a period piece, clothing tells a lot about a world and how characters live in it. Every clothing choice is an act of world-building. If your work is historical or historically informed, I suggest spending time in clothing from the time period. Try to rent something or commission something you could walk, run, move, and interact in for a period of time that helps you understand how garments inform movement, posture, breathing, existing. These things change radically across class and area of the world. For my part, as most of my novels are set in the late 19th century, the most important gift the theatre gave my historical novels is a tactile reality and personal experience ‘existing’ in other time periods with which I can paint details. In the 19th century, for example, women could be wearing an average of 40 pounds of clothing and that significantly affects one’s daily life. Knowing what it is like to move, sit, prepare food, lift, climb stairs, walk, trot, run, seize, weep, laugh, recline, jump and collapse in a corset, bodice, bustle, petticoat, hat, layers, gloves, and other accessories–all of which I’ve personally experienced in various historical plays and presentations I’ve acted in–is vitally important to taking the reader physically as well as visually and emotionally through a character’s experience. It changes breathing, posture, and interactions with the environment and others in a core, defining way.

Link to the rest at SWFA

Samuel R. Delany, The Art of Fiction

From The Paris Review:

The first time I interview Samuel Delany, we meet in a diner near his apartment on New York’s Upper West Side. It is a classic greasy spoon that serves strong coffee and breakfast all day. We sit near the window, and Delany, who is a serious morning person, presides over the city as it wakes. Dressed in what is often his uniform—black jeans and a black button-down shirt, ear pierced with multiple rings—he looks imperial. His beard, dramatically long and starkly white, is his most distinctive feature. “You are famous, I can just tell, I know you from somewhere,” a stranger tells him in the 2007 documentary Polymath, or the Life and Opinions of Samuel R. Delany, Gentleman. Such intrusions are common, because Delany, whose work has been described as limitless, has lived a life that flouts the conventional. He is a gay man who was married to a woman for twelve years; he is a black man who, because of his light complexion, is regularly asked to identify his ethnicity. Yet he seems hardly bothered by such attempts to figure him out. Instead, he laughs, and more often than not it is a quiet chuckle expressed mostly in his eyes.

Delany was born on April 1, 1942, in Harlem, by then the cultural epicenter of black America. His father, who had come to New York from Raleigh, North Carolina, ran Levy and Delany, a funeral home to which Langston Hughes refers in his stories about the neighborhood. Delany grew up above his father’s business. During the day he attended Dalton, an elite and primarily white prep school on the Upper East Side; at home, his mother, a senior clerk at the New York Public Library’s Countee Cullen branch, on 125th Street, nurtured his exceptional intelligence and kaleidoscopic interests. He sang in the choir at St. Philip’s, Harlem’s black Episcopalian church, composed atonal music, played multiple instruments, and choreographed dances at the General Grant Community Center. In 1956, he earned a spot at the Bronx High School of Science, where he would meet his future wife, the poet Marilyn Hacker.

In the early sixties, the newly married couple settled in the East Village. There, Delany wrote his first novel, The Jewels of Aptor. He was nineteen. Over the next six years, he published eight more science-fiction novels, among them the Nebula Award winners Babel-17 (1966) and The Einstein Intersection (1967). 

. . . .

In 1971, he completed a draft of a book he had been reworking for years. Dhalgren, his story of the Kid, a schizoid, amnesiac wanderer, takes place in Bellona, a shell of a city in the American Midwest isolated from the rest of the world and populated by warring gangs and holographic beasts. When Delany, Hacker, and their one-year-old daughter flew back to the States just before Christmas Eve in 1974, they saw copies of Dhalgren filling book racks at Kennedy Airport even before they reached customs. Over the next decade, the novel sold more than a million copies and was called a masterpiece by some critics. William Gibson famously described it as “a riddle that was never meant to be solved.”

. . . .

INTERVIEWER

Between the time you were nineteen and your twenty-second birthday, you wrote and sold five novels, and another four by the time you were twenty-six, plus a volume of short stories. Fifty years later, considerably more than half that work is still in print. Was being a prodigy important to you?

DELANY

As a child I’d run into Wilde’s witticism “The only true talent is precociousness.” I took my writing seriously, and it seemed to pay off. And I discovered Rimbaud. The notion of somebody just a year or two older than I was, who wrote poetry people were reading a hundred, a hundred fifty years later and who had written the greatest poem in the French language, or at least the most famous one, “Le Bateau Ivre,” when he was just sixteen—that was enough to set my imagination soaring. At eighteen I translated it.

In the same years, I found the Signet paperback of Radiguet’s Devil in the Flesh and, a few months after that, the much superior Le Bal du Comte d’Orgel, translated as Count d’Orgel in the first trade paperback from Grove Press, with Cocteau’s deliciously suggestive “introduction” about its tragic young author, salted with such dicta as “Which family doesn’t have its own child prodigy? They have invented the word. Of course, child prodigies exist, just as there are extraordinary men. But they are rarely the same. Age means nothing. What astounds me is Rimbaud’s work, not the age at which he wrote it. All great poets have written by seventeen. The greatest are the ones who manage to make us forget it.”

Now that was something to think about—and clearly it had been said about someone who had not expected to die at twenty of typhoid from eating bad oysters.

. . . .

INTERVIEWER

Do you think of yourself as a genre writer?

DELANY

I think of myself as someone who thinks largely through writing. Thus I write more than most people, and I write in many different forms. I think of myself as the kind of person who writes, rather than as one kind of writer or another. That’s about the closest I come to categorizing myself as one or another kind of artist.

Link to the rest at The Paris Review

Here’s a link to the Samuel R. Delany Author Page on Amazon (where his photo shows a world-class beard)

The English towers and landmarks that inspired Tolkien’s hobbit sagas

From The Guardian:

Readers of The Lord of the Rings must surely imagine lifting their eyes in terror before Saruman’s dark tower, known as Orthanc. Over the years, many admirers of the Middle-earth sagas have guessed at the inspiration for this and other striking features of the landscape created by JRR Tolkien.

Now an extensive new study of the author’s work is to reveal the likely sources of key scenes. The idea for Saruman’s nightmarish tower, argues leading Tolkien expert John Garth, was prompted by Faringdon Folly in Berkshire.

“I have concentrated on the places that inspired Tolkien and though that may seem a trivial subject, I hope I have brought some rigour to it,” said Garth this weekend. “I have a fascination for the workings of the creative process and in finding those moments of creative epiphany for a genius like Tolkien.”

A close study of the author’s life, his travels and his teaching papers has led Garth to a fresh understanding of an allegory that Tolkien regularly called upon while giving lectures in Old English poetry at Oxford in the 1930s.

Comparing mysteries of bygone poetry to an ancient tower, the don would talk of the impossibility of understanding exactly why something was once built. “I have found an interesting connection in his work with the folly in Berkshire, a nonsensical tower that caused a big planning row,” Garth explains. While researching his book he realised the controversy raging outside the university city over the building would have been familiar to Tolkien.

Tolkien began to work this story into his developing Middle-earth fiction, finally planting rival edifices on the Tower Hills on the west of his imaginary “Shire” and also drawing on memories of other real towers that stand in the Cotswolds and above Bath. “Faringdon Folly isn’t a complete physical model for Orthanc,” said Garth. “It’s the controversy surrounding its building that filtered into Tolkien’s writings and can be traced all the way to echoes in the scene where Gandalf is held captive in Saruman’s tower.”

Link to the rest at The Guardian

Laura Lam: The Gut Punch Of Accidentally Predicting The Future

From Terrible Minds:

I thought Terrible Minds would be the place to talk about the strange, horrible feeling of accidentally predicting the future, since Chuck did it too with Wanderers.

It happens to pretty much any science fiction writer who writes in the near future. Worldbuilding is basically extrapolating cause and effect in different ways. You see a news article somewhere like Futurism and you give a little chuckle—it’s something happening that you predicted in a book, and it’s a strange sense of déjà vu. I used to even share some of the articles with the hashtag #FalseHeartsIRL when I released some cyberpunks a few years ago. I can’t do that with Goldilocks, really, because the stuff I predicted isn’t some interesting bit of tech or a cool way to combat climate change through architecture or urban planning.

Because this time it’s people wearing masks outside. It’s abortion bans. It’s months of isolation. It’s a pandemic.

In real life, it’ll rarely play out exactly as you plan in a book. Some things twist or distort or are more unrealistic than you’d be allowed to put into fiction (e.g. murder wasps or anything that the orange man in the white house utters). In Goldilocks, I have people wearing masks due to climate change being a health risk, which was inspired by how disconcerted I felt seeing a photo of my mother wearing a mask due to the wildfires in California while I live in Scotland.

. . . .

Five women steal a spaceship to journey to Cavendish, a planet 10 light years away and humanity’s hope for survival and for a better future. A planet they hopefully won’t spoil like the old one. It’ll take the Atalanta 5 a few months to journey to Mars to use the test warp ring to jump to Epsilon Eridani (the real star for my fake planet), and then a few more months’ travel on the other side. It’s a long time to be with the same people. I did not expect those elements of how the women cope with isolation to be a how-to for 2020. I read a lot of astronaut memoirs, and that has probably helped me cope with lockdown a bit better than I might have (my top rec is Chris Hadfield’s An Astronaut’s Guide to Life on Earth).

Though it’s a mild spoiler, in light of current events I have been warning people that there is a pandemic in the book. It’s not a huge focus of the plot and it never gets graphic, but I forwarded an article about coronavirus to my editor on January 22nd with basically a slightly more professional version of ‘shit.’ The illness within the book is not quite as clear an echo as White Mask, but it’s still strange. The last thing I expected when I wrote a book with a pandemic was to have its launch interrupted by an actual pandemic.

You don’t feel clever, or proud, when you predict these sorts of things. You feel guilty when you see the nightmares about the future come true instead of the dreams.

Link to the rest at Terrible Minds

Will Fantasy Ever Let Black Boys Like Me Be Magic?

From Tor.com:

My first book on magic was A Wizard of Earthsea by Ursula K. Le Guin. It was a single story which expanded into a long-standing series about Ged, the greatest wizard known to his age, and the many mistakes made in his youth which inspired a battle against his dark side, before he righted himself with his darkness.

As a Black boy, I always had a fascination with stories of boys with more to offer than what the world had the ability to see in them. Le Guin offered something along that line—the fantasy of untapped potential, of surviving poverty, of coming to terms with one’s dark side.

However, Ged’s story isn’t what substantiated my attachment to Ursula K. Le Guin’s world; it was Vetch, the Black wizard of the story and Ged’s sidekick. In A Wizard of Earthsea, Vetch is first introduced through a bully named Jasper as a heavy-set, dark-skinned wizard a few years older than Ged. Vetch was described as “plain, and his manners were not polished,” a trait that stood out even amongst a table of noisy boys. Unlike the other boys, he didn’t take much to the drama of showmanship or of hazing, and—when the time finally came—he abandoned his good life as a powerful wizard and lord over his servants and siblings to help Ged tame his shadow, then was never seen again.

Black wizards have always been an enigma. I picked up A Wizard of Earthsea years after Harry Potter graced the silver screen and of course, I’d seen Dean Thomas, but there was more to the presentation of Vetch than illustrated in Dean’s limited time on screen.

. . . .

Fantasy has a habit of making Black characters the sidekick. And yet, years after Ged journeyed away from his closest friend, Vetch’s life did not stop: it moved on, prosperously. Representation of Blackness has always been a battle in Fantasy. It isn’t that the marginalized have never found themselves in these stories, but there was always a story written within the margins.

Writing from the perspective of the mainstream demographic often results in the sometimes unintentional erasing of key aspects of a true human experience: where you can be angry, internally, at harmful discrimination, and you can do something selfish and negative, because it’s what you feel empowers you. If to be marginalized is to not be given permission to be fully human, then these Black characters (Vetch & Dean Thomas) have never escaped the margins; and if this act is designated as the “right way,” then no character ever will, especially not the ones we see as true change in our imaginations.

Link to the rest at Tor.com

The Danger of Intimate Algorithms

PG thought this might be an interesting writing prompt for sci-fi authors.

From Public Books:

After a sleepless night—during which I was kept awake by the constant alerts from my new automatic insulin pump and sensor system—I updated my Facebook status to read: “Idea for a new theory of media/technology: ‘Abusive Technology.’ No matter how badly it behaves one day, we wake up the following day thinking it will be better, only to have our hopes/dreams crushed by disappointment.” I was frustrated by the interactions that took place between, essentially, my body and an algorithm. But perhaps what took place could best be explained through a joke:

What did the algorithm say to the body at 4:24 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 5:34 a.m.?

“Calibrate now.”

What did the algorithm say to the body at 6:39 a.m.?

“Calibrate now.”

And what did the body say to the algorithm?

“I’m tired of this s***. Go back to sleep unless I’m having a medical emergency.”

Although framed humorously, this scenario is a realistic depiction of the life of a person with type 1 diabetes, using one of the newest insulin pumps and continuous glucose monitor (CGM) systems. The system, Medtronic’s MiniMed 670G, is marketed as “the world’s first hybrid closed loop system,” meaning it is able to automatically and dynamically adjust insulin delivery based on real-time sensor data about blood sugar. It features three modes of use: (1) manual mode (preset insulin delivery); (2) hybrid mode with a feature called “suspend on low” (preset insulin delivery, but the system shuts off delivery if sensor data indicates that blood sugar is too low or going down too quickly); and (3) auto mode (dynamically adjusted insulin delivery based on sensor data).

In this context, the auto mode is another way of saying the “algorithmic mode”: the machine, using an algorithm, would automatically add insulin if blood sugar is too high and suspend the delivery of insulin if blood sugar is too low. And this could be done, the advertising promised, in one’s sleep, or while one is in meetings or is otherwise too consumed in human activity to monitor a device.  Thanks to this new machine, apparently, the algorithm would work with my body. What could go wrong?
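To make the “auto mode” concrete, here is a minimal, purely illustrative sketch of one closed-loop decision cycle in Python. It is not Medtronic’s algorithm; the thresholds, rates, and calibration rule are invented for clarity.

```python
# Illustrative sketch of one hybrid closed-loop decision cycle.
# NOT Medtronic's algorithm; thresholds, units, and rules are invented.

TARGET_MG_DL = 120         # desired blood glucose
LOW_MG_DL = 70             # below this, suspend insulin delivery
SENSOR_STALE_MINUTES = 12  # how old a reading can be before we stop trusting it

def auto_mode_step(sensor_glucose, sensor_age_minutes, basal_rate):
    """Return (insulin_rate, alert) for a single control cycle."""
    # Stale or missing sensor data: the loop can't act safely, so it falls
    # back on the wearer with a "Calibrate now" alert.
    if sensor_glucose is None or sensor_age_minutes > SENSOR_STALE_MINUTES:
        return basal_rate, "Calibrate now"

    # Low glucose: suspend delivery, like the "suspend on low" feature.
    if sensor_glucose < LOW_MG_DL:
        return 0.0, "Insulin suspended: low glucose"

    # Above target: nudge delivery up in proportion to the error.
    if sensor_glucose > TARGET_MG_DL:
        return basal_rate + 0.01 * (sensor_glucose - TARGET_MG_DL), None

    # In range: keep the preset basal rate.
    return basal_rate, None

print(auto_mode_step(180, 3, 1.0))   # high glucose -> increased delivery
print(auto_mode_step(95, 40, 1.0))   # stale sensor -> "Calibrate now"
```

Even in a toy version, the first branch shows where the human labor described below comes from: whenever the sensor data is stale or mistrusted, the algorithm’s only move is to wake the wearer and demand a fingerstick, whatever the hour.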

Unlike drug makers, companies that make medical devices are not required to conduct clinical trials in order to evaluate the side effects of these devices prior to marketing and selling them. While the US Food and Drug Administration usually assesses the benefit-risk profile of medical devices before they are approved, often risks become known only after the devices are in use (the same way bugs are identified after an iPhone’s release and fixed in subsequent software upgrades). The FDA refers to this information as medical device “emerging signals” and offers guidance as to when a company is required to notify the public.

As such, patients are, in effect, exploited as experimental subjects, who live with devices that are permanently in beta. And unlike those who own the latest iPhone, a person who is dependent on a medical device—due to four-year product warranties, near monopolies in the health care and medical device industry, and health insurance guidelines—cannot easily downgrade, change devices, or switch to another provider when problems do occur.

It’s easy to critique technological systems. But it’s much harder to live intimately with them. With automated systems—and, in particular, with networked medical devices—the technical, medical, and legal entanglements get in the way of more generous relations between humans and things.

. . . .

In short, automation takes work. Specifically, the system requires human labor in order to function properly (and this can happen at any time of the day or night). Many of the pump’s alerts and alarms signal that “I need you to do something for me,” without regard for the context. When the pump needs to calibrate, it requires that I prick my finger and test my blood glucose with a meter in order to input more accurate data. It is necessary to do this about three or four times per day to make sure that the sensor data is accurate and the system is functioning correctly. People with disabilities such as type 1 diabetes are already burdened with additional work in order to go about their day-to-day lives—for example, tracking blood sugar, monitoring diet, keeping snacks handy, ordering supplies, going to the doctor. A system that unnecessarily adds to that burden while also diminishing one’s quality of life due to sleep deprivation is poorly designed, as well as unjust and, ultimately, dehumanizing.

. . . .

The next day was when I posted about “abusive technologies.” This post prompted an exchange about theorist Lauren Berlant’s “cruel optimism,” described as a relation or attachment in which “something you desire is actually an obstacle to your flourishing.” 

. . . .

There are many possible explanations for the frequent calibrations, but even the company does not have a clear understanding of why I am experiencing them. For example, with algorithmic systems, it has been widely demonstrated that even the engineers of these systems do not understand exactly how they make decisions. One possible explanation is that my blood sugar data may not fit with the patterns in the algorithm’s training data. In other words, I am an outlier. 

. . . .

In the medical field, the term “alert fatigue” is used to describe how “busy workers (in the case of health care, clinicians) become desensitized to safety alerts, and consequently ignore—or fail to respond appropriately to—such warnings.”

. . . .

And doctors and nurses are not the only professionals to be constantly bombarded and overwhelmed with alerts; as part of our so-called “digital transformation,” nearly every industry will be dominated by such systems in the not-so-distant future. The most oppressed, contingent, and vulnerable workers are likely to have even less agency in resisting these systems, which will be used to monitor, manage, and control everything from their schedules to their rates of compensation. As such, alerts and alarms are the lingua franca of human-machine communication.

. . . .

Sensors and humans make strange bedfellows indeed. I’ve learned to dismiss the alerts while I’m sleeping (without paying attention to whether they indicate a life-threatening scenario, such as extreme low blood sugar). I’ve also started to turn off the sensors before going to bed (around day four of use) or in the middle of the night (as soon as I realize that the device is misbehaving).

. . . .

Ultimately, I’ve come to believe that I am even “sleeping like a sensor” (that is, in shorter stretches that seem to mimic the device’s calibration patterns). Thanks to this new device, and its new algorithm, I have begun to feel a genuine fear of sleeping.

Link to the rest at Public Books

Why Am I Reading Apocalyptic Novels Now?

From The New York Times:

A man and his son trudge through the wasteland into which human civilization has devolved. Every night, they shiver together in hunger and cold and fear. If they encounter someone weaker than they are — an injured man, an abandoned child — they do not have the resources to help, and if they encounter someone stronger, violence is assured. The man lives for the child, and the child regularly expresses a desire for death.

I am describing the novel “The Road,” by Cormac McCarthy. The last time I can remember being hit so hard by a work of fiction was decades ago, reading “The Brothers Karamazov” while I had a high fever: I hallucinated the characters. I can still remember Ivan and Alyosha seeming to float in the space around my bed. This time, however, I’m not sick — yet — nor am I hallucinating.

Like many others, I have been finding my taste in books and movies turning in an apocalyptic direction. I also find myself much less able than usual to hold these made-up stories at a safe distance from myself. That father is me. That child is my 11-year-old son. Their plight penetrates past the “just fiction” shell, forcing me to ask, “Is this what the beginning of that looks like?” I feel panicked. I cannot fall asleep.

Why torture oneself with such books? Why use fiction to imaginatively aggravate our wounds, instead of to soothe them or, failing that, just let them be? One could raise the same question about nonfictional exercises of the imagination: Suppose I contemplate something I did wrong and consequently experience pangs of guilt about it. The philosopher Spinoza thought this kind of activity was a mistake: “Repentance is not a virtue, i.e. it does not arise from reason. Rather, he who repents what he did is twice miserable.”

This sounds crazier than it is. Immersed as we are in a culture of public demands for apology, we should be careful to understand that Spinoza is making a simple claim about psychological economics: There’s no reason to add an additional harm to whatever evils have already taken place. More suffering does not make the world a better place. The mental act of calling up emotions such as guilt and regret — and even simple sadness — is imaginative self-flagellation, and Spinoza urges us to avoid “pain which arises from a man’s contemplation of his own infirmity.”

Should one read apocalypse novels during apocalyptic times? Is there anything to be said for inflicting unnecessary emotional pain on oneself? I think there is.

Link to the rest at The New York Times

PG is reading a post-apocalyptic novel, Ready Player One, right now. (Yes, he realizes he is behind the RPO curve by several years.) He has also started, but not finished, a couple of others in the same genre.

Insensitive lug that he is, PG has not felt any pain from/while reading these books. He finds them helpful in getting his mind around a great many things that people around the world are experiencing, thinking, and feeling at the moment.

Out of This World: Shining Light on Black Authors in Every Genre

From Publishers Weekly:

This February saw numerous articles and lists touting the work of award-winning black authors, and works that have quite literally shaped the narrative for black people of the diaspora. We heard names such as Zora Neale Hurston, Maya Angelou, Toni Morrison, Frederick Douglass, and Langston Hughes. Their contributions are and should be forever venerated in the canon of literature.

What we didn’t hear as much about are the writers of genre fiction: thrillers, romance, and in particular, science fiction and fantasy. Why is this relevant? In the last decade, sci-fi and fantasy narratives have taken the media by storm. Marvel has been dominating the box office, Game of Thrones had us glued to our TVs and Twitter feeds (Black Twitter’s #demthrones hashtag in particular had me rolling), and people are making seven-figure salaries playing video games online. It’s a good time to be a nerd. The world is finally coming to appreciate the unique appeal of science fiction and fantasy. It’s wondrous, fun, escapist and whimsical, dazzling and glamorous. It takes the mundane and makes it cool. And for the longest time it’s been very Eurocentric. With sci-fi and fantasy growing exponentially more popular year by year, it’s necessary that, alongside black fiction’s rich history of award-winning literary giants, we also shine the spotlight on black works of speculative fiction.

Narratives like Roots, Beloved, and Twelve Years a Slave unflinchingly depict the horrors of slavery.

. . . .

But when the only stories about black people that are given prominence are the ones where black people are abused and oppressed, a very specific and limiting narrative is created for us and about us. And this narrative is one of the means through which the world perceives black people and, worse, through which we perceive ourselves.

It should be noted that black literary fiction does not focus exclusively on black suffering—far from it. The beauty of black literature is that black characters are centered and nuanced, and sci-fi and fantasy narratives can build on that. Through sci-fi and fantasy, we can portray ourselves as mages, bounty hunters, adventurers, and gods. And in the case of sci-fi narratives set in the future, as existing—period.

Sci-fi stories in particular are troubling for their absence of those melanated. Enter Afrofuturism, a term first used in the 1990s in an essay by a white writer named Mark Dery. In a 2019 talk on Afrofuturism at Wellesley College, sci-fi author Samuel R. Delany breaks down what the term meant at the time—essentially fiction set in the future with black characters present. Delany also explains why this is potentially problematic: “[Afrofuturism was] not contingent on the race of the writer, but on the race of the characters portrayed.” 

Link to the rest at Publishers Weekly

PG is reading a fantasy/scifi series that features various types and classes of highly-intelligent bipeds. In the book, some are described, in part, by their skin colors which include brown and black. There are no characters described as having white skin.

However, there is nothing to distinguish the characters with brown or black skins from any of the other characters. There are classes of characters with more magical powers than other classes, but no correlation between skin color and powers. In fact, the blue-, pink-, and green-colored races have a lower power status than the classes that include brown or black skins, and the classes with white skin also comprise the lowest strata of society.

PG has not been able to discern any particular messages associated with those characters with brown or black skin. They’re just tossed in here and there. Personally, he finds nothing objectionable about this practice. This is fantasy, after all, with worlds, people, magic and technology that don’t exist or have any obvious analogues on planet Earth. If the author had attempted to inject issues pertinent to 21st-century Earth, it would have seemed out of place and potentially have disrupted the suspension of disbelief that accompanies fantasy, scifi and a variety of other fiction genres.

Worried a Robot Apocalypse Is Imminent?

From The Wall Street Journal:

You Look Like a Thing and I Love You

Elevator Pitch: Ideal for those intrigued and/or mildly unnerved by the increasing role A.I. plays in modern life (and our future), this book is accessible enough to educate you while easing anxieties about the coming robot apocalypse. A surprisingly hilarious read, it presents a view of A.I. that is more “Office Space” than “The Terminator.” Typical insight: A.I. that can’t write a coherent cake recipe is probably not going to take over the world.

Very Brief Excerpt: “For the foreseeable future, the danger will not be that A.I. is too smart but that it’s not smart enough.”

Surprising Factoid: A lot of what we think are social-media bots are almost definitely humans being (poorly) paid to act as a bot. People stealing the jobs of robots: How meta.

. . . .

The Creativity Code

By Marcus du Sautoy

Elevator Pitch: What starts as an exploration of the many strides—and failures—A.I. has made in the realm of artistic expression turns out to be an ambitious meditation on the meaning of creativity and consciousness. It shines in finding humanlike traits in algorithms; one chapter breathlessly documents the matches between Mr. Hassabis’s algorithm and a world champion of Go, a game many scientists said a computer could never win.

Very Brief Excerpt: “Machines might ultimately help us…become less like machines.”

Surprising Factoid: As an example of “overfitting,” the book includes a mathematical model that accidentally predicts the human population will drop to zero by 2028. Probably an error, but better live it up now—just in case.
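The “overfitting” factoid is easy to reproduce in miniature. Below is a small, purely illustrative Python sketch (the population figures are rounded approximations, not the model from the book): a needlessly flexible polynomial matches the historical points exactly, yet its long-range extrapolation wanders off into impossible territory while a plain linear fit stays sensible.

```python
# Toy illustration of "overfitting": a needlessly flexible model can match
# past data perfectly and still extrapolate to nonsense. The population
# figures below are rounded, illustrative numbers, not the book's model.
import numpy as np

years = np.array([1960, 1970, 1980, 1990, 2000, 2010, 2020], dtype=float)
pop_billions = np.array([3.0, 3.7, 4.4, 5.3, 6.1, 6.9, 7.8])

x = (years - 1990.0) / 10.0  # rescale to decades to keep the fit well conditioned

linear = np.polyfit(x, pop_billions, deg=1)   # plain trend line
wiggly = np.polyfit(x, pop_billions, deg=6)   # passes through every point exactly

for year in (2030, 2060):
    t = (year - 1990.0) / 10.0
    print(year,
          "linear:", round(np.polyval(linear, t), 1),
          "degree-6:", round(np.polyval(wiggly, t), 1))
# The degree-6 fit has zero error on the data it has seen, but by 2060 its
# "forecast" collapses to an impossible negative population, while the
# boring linear fit stays plausible.
```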

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

A Sci-Fi Author’s Boldest Vision of Climate Change: Surviving It

From The Wall Street Journal:

Kim Stanley Robinson spends his days inventing fictional versions of a future where the climate has changed. In his 2017 novel “New York 2140,” sea levels in the city have risen by 50 feet; boats flit over canals between docks at skyscrapers with watertight basements. In 2005’s “Forty Signs of Rain,” an epochal storm called Tropical Storm Sandy floods most of Washington, D.C. It came out seven years before Superstorm Sandy pummeled New York.

The 67-year-old author of 20 books and winner of both Hugo and Nebula awards for excellence in science-fiction writing, Mr. Robinson is regarded by critics as a leading writer of “climate fiction”—“cli-fi” for short. He considers himself a science-fiction writer, but also says that books set in the future need to take a changing climate into consideration or risk coming across as fantasy.

The term “cli-fi” first appeared around 2011, possibly coined by a blogger named Dan Bloom, and has been a growing niche in science fiction ever since. Books by Margaret Atwood and Barbara Kingsolver are often included in the emerging category. In general, cli-fi steers clear of the space-opera wing of science fiction and tends to be set in a not-too-distant, largely recognizable future.

. . . .

A lot of climate fiction is bleak, but “New York 2140” is kind of utopian. Things work out. What do you think—will the future be dystopian or utopian?

They are both completely possible. It really depends on what we do now and in the next 20 years. I don’t have a prediction to make. Nobody does. The distinguishing feature of right now and the reason that people feel so disoriented and mildly terrified is that it could go really, really badly, into a mass extinction event [for many animal species].

Humans will survive. We are kind of like the seagulls and the ants and the cockroaches and the sharks. It isn’t as if humanity itself is faced with outright extinction, but civilization could crash.

In some sense, [dystopia] is even more plausible. Like, oh, we are all so selfish and stupid, humanity is bound to screw up. But the existence of 8 billion people on a planet at once is a kind of social/technological achievement in cooperation. So, if you focus your attention on that side, you can begin to imagine that the utopian course of history is not completely unlikely.

Venture capitalists and entrepreneurs are in the business of making guesses about the future, as you do in your fiction. How do you create something plausible?

I read the scientific literature at the lay level—science news, the public pages of Nature. I read, I guess you would call it political economy—the works of sociology and anthropology that are trying to study economics and see it as a hierarchical set of power relations. A lot of my reading is academic. I am pretty ignorant in certain areas of popular culture. I don’t pay any attention to social media, and I know that is a big deal, but by staying out of it, I have more time for my own pursuits.

Then what I do is I propose one notion to myself. Say sea level goes up 50 feet. Or in my novel “2312,” say we have inhabited the solar system but we still haven’t solved our [environmental] problems. Or in my new novel, which I am still completing, say we do everything as right as we can in the next 30 years, what would that look like? Once I get these larger project notions, then that is the subject of a novel. It is not really an attempt to predict what will really happen, it is just modeling one scenario.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)

Conspiracy Theories

Given the nature of the post that will appear immediately adjacent to this one – “The Silurian Hypothesis“, PG discovered that Wikipedia has a page devoted to Conspiracy Theories that could surely contain some of the best writing prompts ever for authors writing in particular genres:

Aviation

Numerous conspiracy theories pertain to air travel and aircraft. Incidents such as the 1955 bombing of the Kashmir Princess, the 1985 Arrow Air Flight 1285 crash, the 1986 Mozambican Tupolev Tu-134 crash, the 1987 Helderberg Disaster, the 1988 bombing of Pan Am Flight 103 and the 1994 Mull of Kintyre helicopter crash as well as various aircraft technologies and alleged sightings, have all spawned theories of foul play which deviate from official verdicts.[3]

Black helicopters

This conspiracy theory emerged in the U.S. in the 1960s. The John Birch Society, who asserted that a United Nations force would soon arrive in black helicopters to bring the U.S. under UN control, originally promoted it.[4] The theory re-emerged in the 1990s, during the presidency of Bill Clinton, and has been promoted by talk show host Glenn Beck.[5][6] A similar theory concerning so-called “phantom helicopters” appeared in the UK in the 1970s.[7]

Chemtrails


A high-flying jet’s engines leaving a condensation trail (contrail)

Also known as SLAP (Secret Large-scale Atmospheric Program), this theory alleges that water condensation trails (“contrails”) from aircraft consist of chemical or biological agents, or contain a supposedly toxic mix of aluminum, strontium and barium,[8] under secret government policies. An estimated 17% of people globally believe the theory to be true or partly true. In 2016, the Carnegie Institution for Science published the first-ever peer-reviewed study of the chemtrail theory; 76 out of 77 participating atmospheric chemists and geochemists stated that they had seen no evidence to support the chemtrail theory, or stated that chemtrail theorists rely on poor sampling.[9][10]

Korean Air Lines Flight 007

The destruction of Korean Air Lines Flight 007 by Soviet jets in 1983 has long drawn the interest of conspiracy theorists. The theories range from allegations of a planned espionage mission, to a US government cover-up, to the consumption of the passengers’ remains by giant crabs.

Malaysia Airlines Flight MH370

The disappearance of Malaysia Airlines Flight 370 in southeast Asia in March 2014 has prompted many theories. One theory suggests that this plane was hidden away and reintroduced as Flight MH17 later the same year in order to be shot down over Ukraine for political purposes. Prolific American conspiracy theorist James H. Fetzer has placed responsibility for the disappearance with Israeli Prime Minister Benjamin Netanyahu.[11] Theories have also related to allegations that a certain autopilot technology was secretly fitted to the aircraft.[12]

Malaysia Airlines Flight MH17

Malaysia Airlines Flight 17 was shot down over Ukraine in July 2014. This event has spawned numerous alternative theories. These variously include allegations that it was secretly Flight MH370, that the plane was actually shot down by the Ukrainian Air Force to frame Russia, that it was part of a conspiracy to conceal the “truth” about HIV (seven disease specialists were on board), or that the Illuminati or Israel was responsible.[11][13]

. . . .

Espionage

Israel animal spying

Conspiracy theories exist alleging that Israel uses animals to conduct espionage or to attack people. These are often associated with conspiracy theories about Zionism. Matters of interest to theorists include a series of shark attacks in Egypt in 2010, Hezbollah’s accusations of the use of “spying” eagles,[73] and the 2011 capture of a griffon vulture carrying an Israeli-labeled satellite tracking device.[74]

Harold Wilson

Numerous persons, including former MI5 officer Peter Wright and Soviet defector Anatoliy Golitsyn, have alleged that British Prime Minister Harold Wilson was secretly a KGB spy. Historian Christopher Andrew has lamented that a number of people have been “seduced by Golitsyn’s fantasies”.[75][76][77]

Malala Yousafzai

Conspiracy theories concerning Malala Yousafzai are widespread in Pakistan, elements of which originate from a 2013 satirical piece in Dawn. These theories variously allege that she is a Western spy, or that her attempted murder by the Taliban in 2012 was a secret operation to further discredit the Taliban, and was organized by her father and the CIA and carried out by actor Robert de Niro disguised as an Uzbek homeopath.[78][79][80][81]

Link to the rest at List of Conspiracy Theories – Wikipedia

The Silurian Hypothesis

From The Paris Review:

When I was eleven, we lived in an English Tudor on Bluff Road in Glencoe, Illinois. One day, three strange men (two young, one old) knocked on the door. Their last name was Frank. They said they’d lived in this house before us, not for weeks but decades. For twenty years, this had been their house. They’d grown up here. Though I knew the house was old, it never occurred to me until then that someone else had lived in these rooms, that even my own room was not entirely my own. The youngest of the men, whose room would become mine, showed me the place on a brick wall hidden by ivy where he’d carved his name. “Bobby Frank, 1972.” It had been there all along. And I never even knew it.

That is the condition of the human race: we have woken to life with no idea how we got here, where that is or what happened before. Nor do we think much about it. Not because we are incurious, but because we do not know how much we don’t know.

What is a conspiracy?

It’s a truth that’s been kept from us. It can be a secret but it can also be the answer to a question we’ve not yet asked.

Modern humans have been around for about 200,000 years, but life has existed on this planet for 3.5 billion. That leaves roughly 3,499,800,000 pre-human years unaccounted for—more than enough time for the rise and fall of not one but several pre-human industrial civilizations. Same screen, different show. Same field, different team. An alien race with alien technology, alien vehicles, alien folklore, and alien fears, beneath the familiar sky. There’d be no evidence of such bygone civilizations: built objects and industry last no more than a few hundred thousand years. After a few million, with plate tectonics at work, what is on the surface, including the earth itself, will be at the bottom of the sea, and the bottom will have become the mountain peaks. The oldest place on the earth’s surface—a stretch of Israel’s Negev Desert—is just over a million years old, nothing on a geological clock.

The result of this is one of my favorite conspiracy theories, though it’s not a conspiracy in the conventional sense, a conspiracy usually being a secret kept by a nefarious elite. In this case, the secret, which belongs to the earth itself, has been kept from all of humanity, which believes it has done the only real thinking and the only real building on this planet, as it once believed the earth was at the center of the universe.

Called the Silurian Hypothesis, the theory was proposed in 2018 by Gavin Schmidt, a climate modeler at NASA’s Goddard Institute, and Adam Frank, an astrophysicist at the University of Rochester. Schmidt had been studying distant planets for hints of climate change, “hyperthermals,” the sort of quick temperature rises that might indicate the moment a civilization industrialized. It would suggest the presence of a species advanced enough to turn on the lights. Such a jump, perhaps resulting from a release of carbon, might be the only evidence that any race, including our own, will leave behind. Not the pyramids, not the skyscrapers, not Styrofoam, not Shakespeare—in the end, we will be known only by a change in the rock that marked the start of the Anthropocene.

Link to the rest at The Paris Review

William Gibson Builds A Bomb

From National Public Radio:

William Gibson does not write novels, he makes bombs.

Careful, meticulous, clockwork explosives on long timers. Their first lines are their cores — dangerous, unstable reactant mass so packed with story specific detail that every word seems carved out of TNT. The lines that follow are loops of brittle wire wrapped around them.

Once, he made bombs that exploded. Upended genre and convention, exploded expectations. The early ones were messy and violent and lit such gorgeous fires. Now, though, he does something different. Somewhere a couple decades ago, he hit on a plot architecture that worked for him — this weird kind of thing that is all build-up and no boom — and he has stuck to it ever since. Now, William Gibson makes bombs that don’t explode. Bombs that are art objects. Not inert. Still goddamn dangerous. But contained.

You can hear them tick. You don’t even have to listen that close. His language (half Appalachian economy, half leather-jacket poet of neon and decay) is all about friction and the gray spaces where disparate ideas intersect. His game is living in those spaces, checking out the view, telling us about it.

Agency, that’s his newest. It’s a prequel/sequel (requel?) to his last book, The Peripheral, which dealt, concurrently, with a medium-future London after a slow-motion apocalypse called “The Jackpot,” and a near-future now where a bunch of American war veterans, grifters, video game playtesters and a friendly robot were trying to stop an even worse future from occurring. It was a time travel story, but done in a way that only Gibson could: Almost believably, in a way that hewed harshly to its own internal logic, and felt both hopeful and catastrophic at the same time.

Link to the rest at National Public Radio

Ten Things You (Probably) Didn’t Know About C.S. Lewis

From The Millions:

C.S. Lewis gained acclaim as a children’s author for his classic series The Chronicles of Narnia. He also gained acclaim for his popular apologetics, including such works as Mere Christianity and The Screwtape Letters. What is more, he gained acclaim as a science fiction writer for his Ransom Trilogy. Furthermore, he gained acclaim for his scholarly work in Medieval and Renaissance literature with The Allegory of Love and A Preface to Paradise Lost. Many writers have their fleeting moment of fame before their books become yesterday’s child—all the rage and then has-been. Remarkably, Lewis’s books in all of these areas have remained in print for 70, 80, and 90 years. Over the years, the print runs have grown.

. . . .

1. Lewis was not English. He was Irish. Because of his long association with Oxford University, and later with Cambridge, many people assume he was English. When he first went to school in England as a boy, he had a strong Irish accent. Both the students and the headmaster made fun of young Lewis, and he hated the English in turn. It would be many years before he overcame his prejudice against the English.

. . . .

4. Lewis gave away the royalties from his books. Though he had only a modest salary as a tutor at Magdalen College, Lewis set up a charitable trust to give away whatever money he received from his books. Having given away his royalties when he first began this practice, he was startled to learn that the government still expected him to pay taxes on the money he had earned!

5. Lewis never expected to make any money from his books. He was sure they would all be out of print by the time he died. He advised one of his innumerable correspondents that a first edition of The Screwtape Letters would not be worth anything since it would be a used book. He advised not paying more than half the original price. They now sell for over $1200.

6. Lewis was instrumental in Tolkien’s writing of The Lord of the Rings. Soon after they became friends in the 1920s, J. R. R. Tolkien began showing Lewis snatches of a massive myth he was creating about Middle-earth. When he finally began writing his “new Hobbit” that became The Lord of the Rings, he suffered from bouts of writer’s block that could last for several years at a time. Lewis provided the encouragement and the prodding that Tolkien needed to get through these dry spells.

. . . .

10. Lewis was very athletic. Even though he hated team sports throughout his life, Lewis was addicted to vigorous exercise. He loved to take 10-, 15-, and 20-mile rapid tromps across countryside, but especially over rugged hills and mountains. He loved to ride a bicycle all over Oxfordshire. He loved to swim in cold streams and ponds. He loved to row a boat. He kept up a vigorous regimen until World War II interrupted his life with all of the new duties and obligations he accepted to do his bit for the war effort.

Link to the rest at The Millions

Sci-Fi Set in the 2020’s Predicted a Dim Decade for Humanity

From BookBub:

Science fiction has always had a complicated relationship with the future. Sci-fi is all about looking forward to the wondrous things that mankind will achieve — Flying cars! Personal jetpacks! Venusian vacations! But a bright and happy future is, after all, kind of…boring. Even when you imagine a post-scarcity future like the one in Star Trek, you have to throw in a bit of nuclear holocaust and the Neutral Zone to spice things up.

Now that we’re firmly entrenched in the 21st century (which for a long time was shorthand for ‛the future’ in sci-fi), it’s fascinating to look at all the stories set in this particular decade to see how past SF masters thought things were going to go. One thing is abundantly clear: No matter how bad you think the decade is going to be, sci-fi writers think the 2020s are going to be worse.

. . . .

The horrifying, dystopian, and extinction-level scenarios imagined in sci-fi set in the 2020s are impressive. There’s the quiet desperation depicted in The Children of Men—the critically-acclaimed, influential, and unexpected sci-fi novel from master crime writer P.D. James—which imagined the last human children being born in 1995, leading to a world swamped by apathy and suicide in 2021. On the other end of the spectrum, you have a 2020 like the one in the film Reign of Fire, where we’re all battling literal dragons in the ashen remnants of society.

. . . .

In-between, you have just about every kind of misery. In Stephen King’s prescient novel The Running Man, 2025 finds the United States on the brink of economic collapse, with desperate citizens driven to appear on deadly reality-TV shows. (Although maybe it doesn’t matter since Ray Bradbury’s classic short story There Will Come Soft Rains tells us that by 2026, the world will be a nuclear blast zone anyway.) The Running Man is one of King’s most underrated novels, weaving themes of economic inequality decades before the issue was mainstream.

. . . .

[A]pocalypse and dystopia are just more fun. What would you rather be doing, flying around the world with a jetpack because everyone is rich and healthy? Or hunting down replicants in a Blade Runner version of Los Angeles that resembles… well, today’s actual Los Angeles if we’re being honest? Here’s another take: Which is more interesting, going to your job every day in a stable if imperfect society? Or firing up the artillery and battling real, actual dragons? The latter, obviously, which is why sci-fi always goes to the dragons, the evil AIs, and violently sociopathic clones, usually accompanied by a society that’s so far gone that no one bothers with things like jobs anymore.

Link to the rest at BookBub

PG is trying out an Amazon embed function to see how it works (or doesn’t) for visitors to THE PASSIVE VOICE.

Does AI judge your personality?

Perhaps a writing prompt. Like many others, PG has always been fascinated by books and stories about AI, but this one generates a bit less optimism.

From ArchyW:

AirBnB wants to know whether you have a “Machiavellian” personality before renting you a house on the beach.

The company may be using software to judge if you are reliable enough to rent a house based on what you post on Facebook, Twitter and Instagram.

They will turn the systems loose on social networks, run the algorithms, and get results. For the people at the other end of this process, there will be no transparency, no knowledge of what is happening, and no appeal process.

The company owns a patent on technology designed to rate the “personalities” of potential guests by analyzing their activity on social networks, in order to decide whether they are a risky guest who could damage a host’s house.

The end product of its technology is a “reliability score” assigned to each AirBnB guest. According to reports, this will be based not only on social media activity, but also on other data found online, including blog posts and legal records.

The technology was developed by Trooly, which AirBnB acquired three years ago. Trooly created an artificial-intelligence tool designed to “predict reliable relationships and interactions,” using social networks as a data source.

The software builds the score from perceived “personality traits” it identifies, including some you might expect – conscientiousness, openness, extraversion, agreeableness – and some stranger ones – “narcissism” and “Machiavellianism,” for example. (Interestingly, the software also looks for involvement in civil litigation, suggesting that now or in the future people could be banned based on a prediction that they are more likely to sue.)
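In rough outline, a score like this can be pictured as a weighted blend of trait estimates, with “good” traits pushing the number up and dark-triad traits pulling it down. The sketch below is purely illustrative: the weights, the 0–100 scale, and the scoring function are invented assumptions, not AirBnB’s or Trooly’s actual model, and the trait estimates themselves would have to come from some upstream text classifier.

```python
# Hypothetical sketch of a trait-weighted "reliability score" (illustrative only).
# Trait names follow the article; every number below is an invented assumption.

TRAIT_WEIGHTS = {
    "conscientiousness": +0.30,
    "openness":          +0.15,
    "extraversion":      +0.05,
    "agreeableness":     +0.20,
    "narcissism":        -0.25,
    "machiavellianism":  -0.35,
}

def reliability_score(trait_estimates):
    """trait_estimates: trait -> value in [0, 1], e.g. output of a text classifier."""
    raw = sum(TRAIT_WEIGHTS[t] * v for t, v in trait_estimates.items())
    # Rescale the raw weighted sum to a 0-100 range for readability.
    lo = sum(w for w in TRAIT_WEIGHTS.values() if w < 0)
    hi = sum(w for w in TRAIT_WEIGHTS.values() if w > 0)
    return round(100 * (raw - lo) / (hi - lo), 1)

print(reliability_score({
    "conscientiousness": 0.8, "openness": 0.6, "extraversion": 0.5,
    "agreeableness": 0.7, "narcissism": 0.2, "machiavellianism": 0.1,
}))  # -> 77.7 on this invented scale
```

Whatever the real model looks like, everything downstream—flags, bans, appeals—hinges on a single number assembled from choices like these, which is exactly why the opacity described above matters.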

AirBnB has not said whether they use the software or not.

If you are surprised, shocked, or unhappy about this news, then you are like most people, who are unaware of the enormous and rapidly growing practice of judging people (customers, citizens, employees, and students) with AI applied to social media.

AirBnB is not the only organization that scans social networks to judge personality or predict behavior. Others include the Department of Homeland Security, employers, school districts, police departments, the CIA, and insurance companies.

Some estimates say that up to half of all university admission officers use social monitoring tools based on artificial intelligence as part of the candidate selection process.

Human resources departments and hiring managers also increasingly use AI social monitoring before hiring.

. . . .

There is only one problem.

AI-based social media monitoring is not that smart

. . . .

The question is not whether AI applied to data collection works. It surely does. The question is whether social networks reveal truths about users. I am questioning the quality of the data.

For example, scanning someone’s Instagram account can “reveal” that they are fabulously rich and travel the world enjoying champagne and caviar. The truth may be that they are broke, stressed influencers who exchange social exposure for hotel rooms and meals in restaurants where they take highly manipulated photos created exclusively to build a reputation. Some people use social networks to deliberately create a false image of themselves.

A Twitter account can show a user as a prominent, constructive, and productive member of society, but a second, anonymous account unknown to social media monitoring systems would have revealed that person as a sociopathic troll who just wants to watch the world burn. People have multiple social media accounts for different aspects of their personalities. And some of them are anonymous.

. . . .

For example, using profanity online can reduce a person’s reliability score, based on the assumption that rude language indicates a lack of ethics or morality. But recent research suggests the opposite: potty-mouthed people may, on average, be more trustworthy, as well as more intelligent, more honest, and more capable professionally. Do we trust that Silicon Valley software companies know or care about the subtleties and complexities of human personality?

. . . .

There is also a generational divide. Younger people are statistically less likely to post publicly, preferring private messaging and social interaction in small groups. Is AI-based social media monitoring fundamentally ageist?

Women are more likely than men to post personal information (information about themselves) on social networks, while men are more likely than women to post impersonal information. Posting about personal matters can be more revealing of personality. Is AI-based social media monitoring fundamentally sexist?

Link to the rest at ArchyW

How William Gibson Keeps His Science Fiction Real

From The New Yorker:

Suppose you’ve been asked to write a science-fiction story. You might start by contemplating the future. You could research anticipated developments in science, technology, and society and ask how they will play out. Telepresence, mind-uploading, an aging population: an elderly couple live far from their daughter and grandchildren; one day, the pair knock on her door as robots. They’ve uploaded their minds to a cloud-based data bank and can now visit telepresently, forever. A philosophical question arises: What is a family when it never ends? A story flowers where prospective trends meet.

This method is quite common in science fiction. It’s not the one employed by William Gibson, the writer who, for four decades, has imagined the near future more convincingly than anyone else. Gibson doesn’t have a name for his method; he knows only that it isn’t about prediction. It proceeds, instead, from a deep engagement with the present. When Gibson was starting to write, in the late nineteen-seventies, he watched kids playing games in video arcades and noticed how they ducked and twisted, as though they were on the other side of the screen. The Sony Walkman had just been introduced, so he bought one; he lived in Vancouver, and when he explored the city at night, listening to Joy Division, he felt as though the music were being transmitted directly into his brain, where it could merge with his perceptions of skyscrapers and slums. His wife, Deborah, was a graduate student in linguistics who taught E.S.L. He listened to her young Japanese students talk about Vancouver as though it were a backwater; Tokyo must really be something, he thought. He remembered a weeping ambulance driver in a bar, saying, “She flatlined.” On a legal pad, Gibson tried inventing words to describe the space behind the screen; he crossed out “infospace” and “dataspace” before coming up with “cyberspace.” He didn’t know what it might be, but it sounded cool, like something a person might explore even though it was dangerous.

Gibson first used the word “cyberspace” in 1981, in a short story called “Burning Chrome.” He worked out the idea more fully in his first novel, “Neuromancer,” published in 1984, when he was thirty-six. Set in the mid-twenty-first century, “Neuromancer” follows a heist that unfolds partly in physical space and partly in “the matrix”—an online realm. “The matrix has its roots in primitive arcade games,” the novel explains, “in early graphics programs and military experimentation with cranial jacks.” By “jacking in” to the matrix, a “console cowboy” can use his “deck” to enter a new world:

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation. . . . A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.

. . . .

Most science fiction takes place in a world in which “the future” has definitively arrived; the locomotive filmed by the Lumière brothers has finally burst through the screen. But in “Neuromancer” there was only a continuous arrival—an ongoing, alarming present. “Things aren’t different. Things are things,” an A.I. reports, after achieving a new level of consciousness. “You can’t let the little pricks generation-gap you,” one protagonist tells another, after an unnerving encounter with a teen-ager. In its uncertain sense of temporality—are we living in the future, or not?—“Neuromancer” was science fiction for the modern age. The novel’s influence has increased with time, establishing Gibson as an authority on the world to come.

Link to the rest at The New Yorker

Rare Harry Potter book sells for £50,000 after being kept for decades in code-locked briefcase

From Birmingham Live:

A rare first-edition of Harry Potter which was kept in pristine condition in a code-locked briefcase for decades has fetched a magic £50,000 at auction.

The hardback book is one of just 500 original copies of Harry Potter and the Philosopher’s Stone released in 1997 when JK Rowling was relatively unknown.

Its careful owners had kept the book safely stored away in a briefcase at their home, which they unlocked with a code, in order to preserve the treasured family heirloom.

Book experts had said the novel was in the best condition they had ever seen and estimated it could fetch £25,000 to £30,000 when it went under the hammer.

But the novel smashed its auction estimate when it was bought by a private UK buyer for a total price of £57,040 following a bidding war today (Thurs).

. . . .

Jim Spencer, books expert at Hansons, said: “I’m absolutely thrilled the book did so well – it deserved to.

“I couldn’t believe the condition of it – almost like the day it was made. I can’t imagine a better copy can be found.

“A 1997 first edition hardback of Harry Potter and the Philosopher’s Stone is the holy grail for collectors as so few were printed.

“The owners took such great care of their precious cargo they brought it to me in a briefcase, which they unlocked with a secret code.

“It felt like we were dealing in smuggled diamonds.”

Link to the rest at Birmingham Live

Why J.K. Rowling Should Walk Away From Harry Potter Forever

Note: This post is a few years old, but PG thinks it might be useful for authors writing in a variety of genres.

From The Legal Artist:

The other day, J.K. Rowling gave an interview with Matt Lauer about her charity Lumos and mentioned she probably wouldn’t write another story about Harry and the gang, although she wouldn’t foreclose the opportunity altogether. I don’t know whether Rowling will ever return to Harry Potter but I do know that she shouldn’t. In fact, I think she should relinquish all rights to the Potterverse before she messes it all up.

Okay what? Messes it up? J.K. Rowling is a goddamn international treasure and I should be strung up by the neck for thinking such heretical thoughts, right? Well maybe, but first let me say that I have nothing but admiration for Rowling’s skill and artistry. The books and films stand as towering achievements in their respective fields and the world is undoubtedly a better place with Harry Potter than it would be without. And that’s exactly the problem.

We revere authors and creators of valuable intellectual property. We assume they know what’s best when it comes to their work. And sometimes that’s true! George R.R. Martin certainly believes it. The general sentiment is that his voice is the only one worthy of steering the Game of Thrones ship. The same probably would have been said about J.R.R. Tolkien and Sir Arthur Conan Doyle. But as fans, I think we’ve been burned by too many Special Editions/ Director’s Cuts/ sequels/ prequels/ sidequels/ reboots/ and preboots to feel anything but trepidation when a creator remains involved for too long with their own work. I get it. It’s your baby, and it’s hard to walk away from something that you poured your heart and soul into. But I’m a firm believer in the Death of the Author, and I’ve stated on this blog several times that when a work takes on a certain level of cultural importance, it transcends the law and becomes the property of society at large, not just the creator. That was the original intention when copyright protections were baked into the Constitution. Remember too that history is replete with authors who aren’t the best judges of their own work; George Lucas is a prime example of how far from grace one can fall simply by sticking around for too long. And I want Rowling to avoid that fate.

. . . .

Obviously the law allows Rowling to do whatever she wants. Copyright law, particularly in the U.S., isn’t equipped to consider the cultural importance of works like Star Wars or Harry Potter. The result is that all art, regardless of quality, is treated the same, which can be a good thing because it prevents systemic discrimination. The downside to that approach is that financial reward becomes the only measure of success. And that just makes it harder to let go. It’s easy to convince yourself that you and only you are capable of maintaining the integrity of the work over the long haul. It becomes even easier if there’s a lot of money to be made by doing it. The law incentivizes you to stay. And because copyright terms last for so long (life of the author plus 70 years), Rowling’s great-great-grandchildren will be able to profit from her work. And I think it’s a shame to keep something like that so closed-source.

To my eyes, the seams are already showing. Three years ago, Rowling publicly stated that she wished she had killed Ron out of spite and that Hermione really should’ve ended up with Harry. The fact that she admitted this publicly is problematic enough – it shows a tone-deafness to the effect her words have on the fan-base (which is surprising considering her generosity to her fans). It also suggests that she might not have a full grasp of what makes the story work (i.e. that Harry’s arc isn’t about romance).

Link to the rest at The Legal Artist

When Bots Teach Themselves to Cheat

From Wired:

Once upon a time, a bot deep in a game of tic-tac-toe figured out that making improbable moves caused its bot opponent to crash. Smart. Also sassy.

Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet’s DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments.

The specimens collected by Krakovna and fellow bug hunters point to a communication problem between humans and machines: Given a clear goal, an algorithm can master complex tasks, such as beating a world champion at Go. But even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-­limits. Teach a learning algorithm to fish, and it might just drain the lake.

Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs.
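The gap between “what you say” and “what you meant” is easy to reproduce in miniature. The toy below is a hypothetical sketch, not any of the systems mentioned in the article: the stated reward counts dirt pickups, so a planner that simply maximizes that reward prefers endlessly dumping and re-collecting the same dirt over actually finishing the cleaning job.

```python
# Minimal, invented illustration of specification gaming: the planner optimizes
# the written-down proxy reward, not the intended outcome.

def proxy_reward(action):
    # We only reward pickups; dumping dirt back out costs nothing.
    return 1 if action == "pick_up_dirt" else 0

def best_plan(candidate_plans):
    """Pick the plan with the highest total proxy reward."""
    return max(candidate_plans, key=lambda plan: sum(proxy_reward(a) for a in plan))

honest_plan = ["pick_up_dirt"] * 3                    # cleans the three dirty tiles
gaming_plan = ["pick_up_dirt", "dump_dirt"] * 50      # recycles the same dirt forever

winner = best_plan([honest_plan, gaming_plan])
print(winner is gaming_plan)   # True: 50 rewarded pickups beat 3
```

The fix is not smarter optimization but a better-specified objective—penalize dumping, or reward the state of the room rather than the act of picking up—which is the coaching role described below.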

These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout.

“Seeing these systems be creative and do things you never thought of, you recognize their power and danger,” says Jeff Clune, a researcher at Uber’s AI lab. A recent paper that Clune coauthored, which lists 27 examples of algorithms doing unintended things, suggests future engineers will have to collaborate with, not command, their creations. “Your job is to coach the system,” he says. Embracing flashes of artificial creativity may be the solution to containing them.

. . . .

  • Infanticide: In a survival simulation, one AI species evolved to subsist on a diet of its own children.

. . . .

  • Optical Illusion: Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball.

Link to the rest at Wired

New Documentary Focuses on Ursula K. Le Guin

From The Wall Street Journal:

“Worlds of Ursula K. Le Guin” is the first documentary about the pioneering science-fiction writer—and pretty much the first film of any kind to showcase her work. Although Ms. Le Guin was writing about dragons and wizard schools back in 1968 for her Earthsea series, there have been no high-profile movies based on her 20 novels or more than 100 short stories.

“I don’t think Harry Potter would have existed without Earthsea existing,” author Neil Gaiman says in the documentary, which premieres Friday on PBS. Ms. Le Guin’s Earthsea cycle, a young-adult series about a sprawling archipelago of island kingdoms, included five novels and many stories written between 1968 and 2001.

Other writers who discuss Ms. Le Guin’s work and influence in the film include Margaret Atwood (“The Handmaid’s Tale”), David Mitchell (“Cloud Atlas”) and Michael Chabon (“The Amazing Adventures of Kavalier & Clay”).

“I think she’s one of the greatest writers that the 20th-century American literary scene produced,” Mr. Chabon says.

. . . .

“I never wanted to be a writer—I just wrote,” she says in the film. Believing science fiction should be less about predicting the future than observing the present, she invented fantastical worlds that were their own kind of anthropology, exploring how societies work.

In her 1969 novel “The Left Hand of Darkness,” she introduces a genderless race of beings who are sexually active once a month, either as a man or woman—but don’t know which it will be. Her 1973 short story, “The Ones Who Walk Away From Omelas,” introduces a utopian city where everyone is happy. But readers learn that this blissful world is entirely dependent on one child being imprisoned in a basement and mistreated. The joy of all the people hinges on the child being forced to suffer, and everyone knows it. The author had been horrified to learn through her father’s research about the slaughter of native tribes that made modern California possible.

. . . .

As a female sci-fi writer, “my species was once believed to be mythological, like the tribble and the unicorn,” Ms. Le Guin said in an address before the 1975 Worldcon science-fiction convention in Melbourne, Australia. Her work was called feminist sci-fi, but she grew into that label awkwardly. “There was a considerable feeling that we needed to cut loose from marriage, from men, and from motherhood. And there was no way I was gonna do that,” she said. “Of course I can write novels with one hand and bring up three kids with the other. Yeah, sure. Watch me.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

 

‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos

Perhaps a writing prompt.

From The Wall Street Journal:

Startup companies, government agencies and academics are racing to combat so-called deepfakes, amid fears that doctored videos and photographs will be used to sow discord ahead of next year’s U.S. presidential election.

It is a difficult problem to solve because the technology needed to manipulate images is advancing rapidly and getting easier to use, according to experts. And the threat is spreading, as smartphones have made cameras ubiquitous and social media has turned individuals into broadcasters, leaving companies that run those platforms unsure how to handle the issue.

“While synthetically generated videos are still easily detectable by most humans, that window is closing rapidly. I’d predict we see visually undetectable deepfakes in less than 12 months,” said Jeffrey McGregor, chief executive officer of Truepic, a San Diego-based startup that is developing image-verification technology. “Society is going to start distrusting every piece of content they see.”

Truepic is working with Qualcomm Inc. —the biggest supplier of chips for mobile phones—to add its technology to the hardware of cellphones. The technology would automatically mark photos and videos when they are taken with data such as time and location, so that they can be verified later. Truepic also offers a free app consumers can use to take verified pictures on their smartphones.
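As a rough mental model of that kind of capture-time marking, the sketch below hashes the image bytes together with time and location at capture and attaches an authentication tag that can be checked later. The field names, the device key, and the use of an HMAC are simplifying assumptions for illustration; this is not Truepic’s or Qualcomm’s actual design, which would presumably rely on device-bound keys in secure hardware.

```python
# Hypothetical capture-time attestation sketch (illustrative assumptions only).
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"hypothetical-device-secret"  # assumed per-device provisioned secret

def attest(image_bytes, lat, lon):
    """Record a hash of the pixels plus capture metadata, sealed with a tag."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": int(time.time()),
        "lat": lat,
        "lon": lon,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(image_bytes, record):
    """Check that neither the pixels nor the metadata changed since capture."""
    claimed = dict(record)
    tag = claimed.pop("tag")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())

photo = b"...raw image bytes..."
receipt = attest(photo, 32.7157, -117.1611)
print(verify(photo, receipt))         # True
print(verify(photo + b"x", receipt))  # False: any edit breaks the hash
```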

. . . .

When a photo or video is taken, Serelay can capture data such as where the camera was in relation to cellphone towers or GPS satellites. The company says it has partnerships with insurance companies that use the technology to help verify damage claims, though it declined to name the firms.

The U.S. Defense Department, meanwhile, is researching forensic technology that can be used to detect whether a photo or video was manipulated after it was made.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

How Stanley Kubrick Staged the Moon Landing

From The Paris Review:

Have you ever met a person who’s been on the moon? There are only four of them left. Within a decade or so, the last will be dead and that astonishing feat will pass from living memory into history, which, sooner or later, is always questioned and turned into fable. It will not be exactly like the moment the last conquistador died, but will lean in that direction. The story of the moon landing will become a little harder to believe.

I’ve met three of the twelve men who walked on the moon. They had one important thing in common when I looked into their eyes: they were all bonkers. Buzz Aldrin, who was the second off the ladder during the first landing on July 20, 1969, almost exactly fifty years ago—he must have stared with envy at Neil Armstrong’s crinkly space-suit ass all the way down—has run hot from the moment he returned to earth. When questioned about the reality of the landing—he was asked to swear to it on a Bible—he slugged the questioner. When I sat down with Edgar Mitchell, who made his landing in the winter of 1971, he had that same look in his eyes. I asked about the space program, but he talked only about UFOs. He said he’d been wrapped in a warm consciousness his entire time in space. Many astronauts came back with a belief in alien life.

Maybe it was simply the truth: maybe they had been touched by something. Or maybe the experience of going to the moon—standing and walking and driving that buggy and hitting that weightless golf ball—would make anyone crazy. It’s a radical shift in perspective, to see the earth from the outside, fragile and small, a rock in a sea of nothing. It wasn’t just the astronauts: everyone who saw the images and watched the broadcast got a little dizzy.

July 20, 1969, 3:17 P.M. E.S.T. The moment is an unacknowledged hinge in human history, unacknowledged because it seemed to lead nowhere. Where are the moon hotels and moon amusement parks and moon shuttles we grew up expecting? But it did lead to something: a new kind of mind. It’s not the birth of the space age we should be acknowledging on this fiftieth anniversary, but the birth of the paranoia that defines us. Because a man on the moon was too fantastic to accept, some people just didn’t accept it, or deal with its implications—that sea of darkness. Instead, they tried to prove it never happened, convince themselves it had all been faked. Having learned the habit of conspiracy spotting, these same people came to question everything else, too. History itself began to read like a fraud, a book filled with lies.

. . . .

The stories of a hoax predate the landing itself. As soon as the first capsules were in orbit, some began to dismiss the images as phony and the testimony of the astronauts as bullshit. The motivation seemed obvious: John F. Kennedy had promised to send a man to the moon within the decade. And, though we might be years behind the Soviets in rocketry, we were years ahead in filmmaking. If we couldn’t beat them to moon, we could at least make it look like we had.

Most of the theories originated in the cortex of a single man: William Kaysing, who’d worked as a technical writer for Rocketdyne, a company that made engines. Kaysing left Rocketdyne in 1963, but remained fixated on the space program and its goal, which was often expressed as an item on a Cold War to-do list—go to the moon: check—but was in fact profound, powerful, surreal. A man on the moon would mean the dawn of a new era. Kaysing believed it unattainable, beyond the reach of existing technology. He cited his experience at Rocketdyne, but one could say he did not believe it simply because it was not believable. That’s the lens he brought to every NASA update. He was not watching for what had happened, but trying to figure out how it had been staged.

There were six successful manned missions to the moon, all part of Apollo. A dozen men walked the lunar surface between 1969 and 1972, when Harrison H. Schmitt—he later served as a Republican U.S. Senator from New Mexico—piloted the last lander off the surface. When people dismiss the project as a failure—we never went back because there is nothing for us there—others point out the fact that twenty-seven years passed between Columbus’s first Atlantic crossing and Cortez’s conquest of Mexico, or that 127 years passed between the first European visit to the Mississippi River and the second—it’d been “discovered,” “forgotten,” and “discovered” again. From some point in the future, our time, with its celebrities, politicians, its happiness and pain, might look like little more than an interregnum, the moment between the first landing and the colonization of space.

. . . .

Kaysing catalogued inconsistencies that “proved” the landing had been faked. There have been hundreds of movies, books, and articles that question the Apollo missions; almost all of them have relied on Kaysing’s “discoveries.”

  1. Old Glory: The American flag the astronauts planted on the moon, which should have been flaccid, the moon existing in a vacuum, is taut in photos, even waving, revealing more than NASA intended. (Knowing the flag would be flaccid, and believing a flaccid flag was no way to declare victory, engineers fitted the pole with a cross beam on which to hang the flag; if it looks like it’s waving, that’s because Buzz Aldrin was twisting the pole, screwing it into the lunar soil).
  2. There’s only one source of light on the moon—the sun—yet the shadows of the astronauts fall every which way, suggesting multiple light sources, just the sort you might find in a movie studio. (There were indeed multiple sources of light during the landings—it came from the sun, it came from the earth, it came from the lander, and it came from the astronauts’ space suits.)
  3. Blast Circle: If NASA had actually landed a craft on the moon, it would have left an impression and markings where the jets fired during takeoff. Yet, as can be seen in NASA’s own photos, there are none. You know what would’ve left no impression? A movie prop. Conspiracy theorists point out what looks like a C written on one of the moon rocks, as if it came straight from the special effects department. (The moon has about one-sixth the gravity of earth; the landing was therefore soft; the lander drifted down like a leaf. Nor was much propulsion needed to send the lander back into orbit. It left no impression just as you leave no impression when you touch the bottom of a pool; what looks like a C is probably a shadow.)
  4. Here you are, supposedly in outer space, yet we see no stars in the pictures. You know where else you wouldn’t see stars? A movie set. (The moon walks were made during the lunar morning—Columbus went ashore in daylight, too. You don’t see stars when the sun is out, nor at night in a light-filled place, like a stadium or a landing zone).
  5. Giant Leap for Mankind: If Neil Armstrong was the first man on the moon, then who was filming him go down the ladder? (A camera had been mounted to the side of the lunar module).

Kaysing’s alternate theory was elaborate. He believed the astronauts had been removed from the ship moments before takeoff, flown to Nevada, where, a few days later, they broadcast the moon walk from the desert. People claimed to have seen Armstrong walking through a hotel lobby, a show girl on each arm. Aldrin was playing the slots. They were then flown to Hawaii and put back inside the capsule after the splash down but before the cameras arrived.

. . . .

Of all the fables that have grown up around the moon landing, my favorite is the one about Stanley Kubrick, because it demonstrates the use of a good counternarrative. It seemingly came from nowhere, or gave birth to itself simply because it made sense. (Finding the source of such a story is like finding the source of a joke you’ve been hearing your entire life.) It started with a simple question: Who, in 1969, would have been capable of staging a believable moon landing?

Kubrick’s masterpiece, 2001: A Space Odyssey, had been released the year before. He’d plotted it with the science fiction master Arthur C. Clarke, who is probably more responsible for the look of our world, smooth as a screen, than any scientist. The manmade satellite, GPS, the smart phone, the space station: he predicted, they built. 2001 picked up an idea Clarke had explored in his earlier work, particularly his novel Childhood’s End—the fading of the human race, its transition from the swamp planet to the star-spangled depths of deep space. In 2001, change comes in the form of a monolith, a featureless black shard that an alien intelligence—you can call it God—parked on an antediluvian plain. Its presence remakes a tribe of apes, turning them into world-exploring, tool-building killers who will not stop until they find their creator, the monolith, buried on the dark side of the moon. But the plot is not what viewers, many of them stoned, took from 2001. It was the special effects that lingered, all that technology, which was no less than a vision, Ezekiel-like in its clarity, of the future. Orwell had seen the future as bleak and authoritarian; Huxley had seen it as a drug-induced dystopia. In the minds of Kubrick and Clarke, it shimmered, luminous, mechanical, and cold.

Most striking was the scene set on the moon, in which a group of astronauts, posthuman in their suits, descend into an excavation where, once again, the human race comes into contact with the monolith. Though shot in a studio, it looks more real than the actual landings.

Link to the rest at The Paris Review


The Debate over De-Identified Data: When Anonymity Isn’t Assured

Not necessarily about writing or publishing, but an interesting 21st-century issue.

From Legal Tech News:

As more algorithm-coded technology comes to market, the debate over how individuals’ de-identified data is being used continues to grow.

A class action lawsuit filed in a Chicago federal court last month highlights the use of sensitive de-identified data for commercial means. Plaintiffs represented by law firm Edelson allege the University of Chicago Medical Center gave Google the electronic health records (EHR) of nearly all of its patients from 2009 to 2016, with which Google would create products. The EHR, which is a digital version of a patient’s paper chart, includes a patient’s height, weight, vital signs and medical procedure and illness history.

While the hospital asserted it did de-identify data, Edelson claims the hospital included date and time stamps and “copious” free-text medical notes that, combined with Google’s other massive troves of data, could easily identify patients, in noncompliance with the Health Insurance Portability and Accountability Act (HIPAA).

. . . .

“I think the biggest concern is the quantity of information Google has about individuals and its ability to reidentify information, and this gray area of if HIPAA permits it if it was fully de-identified,” said Fox Rothschild partner Elizabeth Litten.

Litten noted that transferring such data to Google, which has a host of information collected from other services, makes labeling data “de-identified” risky in that instance. “I would want to be very careful with who I share my de-identified data with, [or] share information with someone that doesn’t have access to a lot of information. Or [ensure] in the near future the data isn’t accessed by a bigger company and made identifiable in the future,” she explained.

If the data can be reidentified, it may also fall under the scope of the European Union’s General Data Protection Regulation (GDPR) or California’s upcoming data privacy law, noted Cogent Law Group associate Miles Vaughn.

Link to the rest at Legal Tech News

De-identified data is presently an important component in the development of artificial intelligence systems.

As PG understands it, a large mass of data concerning almost anything, but certainly including data about human behavior, is dumped into a powerful computer which is tasked with discerning patterns and relationships within the data.

The more data about individuals that goes into the AI hopper, the more can be learned about groups of individuals, relationships between individuals, and behavior patterns of individuals that may not be generally known or discoverable through other, more traditional methods of data analysis.

As a crude example based upon the brief description in the OP, an artificially intelligent system that had access to the medical records described in the OP and also the usage records for individuals using Ventra cards (contactless digital payment cards that are electronically scanned) on the Chicago Transit Authority could conceivably identify a specific individual associated with an anonymous medical record by correlating Ventra card use at a nearby transit stop with the time stamps on the digital medical record entries.
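To make that example concrete, the sketch below shows the kind of timestamp linkage PG is describing: joining a “de-identified” visit log against a named transit-card log whose taps cluster at the same stop just before each visit. The data, column names, and 30-minute window are all invented for illustration, and the code assumes the pandas library; it is not drawn from the lawsuit or from either company’s systems.

```python
# Hypothetical re-identification-by-timestamp sketch (all data invented).
import pandas as pd

visits = pd.DataFrame({   # "de-identified" records: patient code + visit time
    "patient_code": ["A17", "B42", "A17"],
    "visit_time": pd.to_datetime(
        ["2016-03-01 09:05", "2016-03-03 14:20", "2016-04-12 08:58"]),
})

taps = pd.DataFrame({     # identified transit-card log at the stop near the clinic
    "card_holder": ["J. Doe", "K. Roe", "J. Doe"],
    "tap_time": pd.to_datetime(
        ["2016-03-01 08:47", "2016-03-03 14:02", "2016-04-12 08:41"]),
})

# Link each visit to the most recent tap within 30 minutes before it.
linked = pd.merge_asof(
    visits.sort_values("visit_time"),
    taps.sort_values("tap_time"),
    left_on="visit_time", right_on="tap_time",
    direction="backward", tolerance=pd.Timedelta("30min"))

print(linked[["patient_code", "card_holder"]])
# A patient code that repeatedly co-occurs with the same card holder is a
# strong candidate for re-identification.
```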

Everyone Wants to Be the Next ‘Game of Thrones’

From The Wall Street Journal:

Who will survive the Game of Clones?

The hunt is on for the next epic fantasy to fill the void left by the end of “Game of Thrones,” the HBO hit that averaged 45 million viewers per episode in its last season. In television, film and books, series that build elaborate worlds the same way the medieval-supernatural saga did are in high demand.

“There’s a little bit of a gold-rush mentality coming off the success of ‘Game of Thrones,’” says Marc Guggenheim, an executive producer of “Carnival Row,” a series with mythological creatures that arrives on Amazon Prime Video in August. “Everyone wants to tap into that audience.”

There’s no guarantee anyone will be able to replicate the success of “Thrones.” Entertainment is littered with copycats of other hits that fell flat. But the market is potentially large and lucrative. So studios are pouring millions into new shows, agents are brokering screen deals around book series that can’t get written fast enough and experts are readying movie-level visual effects for epic storytelling aimed at the couch.

. . . .

Literary agent Joanna Volpe represents three fantasy authors whose books now are being adapted for the screen. “‘Game of Thrones’ opened a door—it made studios hungrier for material like this,” she says. A decade ago, she adds, publishing and TV weren’t interested in fantasy for adults because only the rare breakout hit reached beyond the high-nerd niche.

. . . .

HBO doesn’t release demographic data on viewers, though cultural gatekeepers say they barely need it. “You know what type of audience you’re getting: It’s premium TV, it’s educated, it’s an audience you want to tap into,” says Kaitlin Harri, senior marketing director at publisher William Morrow. By the end of the series, the audience had broadened to include buzz seekers of all kinds with little interest in fantasy.

The show based on the books by George R.R. Martin ended its eight-year run in May, but it remains in the muscle memory of many die-hard fans. “I still look forward to Sunday nights thinking that at 9 o’clock I’m going to get a new episode,” says Samantha Ecker, a 35-year-old writer for “Watchers on the Wall,” which is still an active fan site. The memorabilia collector continues to covet all things “Thrones.” Last week, she got a $15 figurine of Daenerys Targaryen sitting on the Iron Throne “since they didn’t let her do it in the show.”

. . . .

“Game of Thrones” has helped ring in a new era in fantasy writing, with heightened interest in powerful female characters. Authors generating excitement include R.F. Kuang, who soon releases “The Dragon Republic,” part of a fantasy series infused with Chinese history, and S.A. Chakraborty, whose Islamic-influenced series includes “The Kingdom of Copper,” out earlier this year.

For its fantasies featuring power struggles that might appeal to “Thrones” fans, Harper Voyager uses marketing trigger words like “politics,” “palace intrigue” and “succession,” says David Pomerico, editorial director of the imprint of HarperCollins, which like The Wall Street Journal is owned by News Corp.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Are Colleges Friendly to Fantasy Writers? It’s Complicated

From Wired:

In an increasingly competitive publishing environment, more and more fantasy and science fiction writers are going back to school to get an MFA in creative writing. College writing classes have traditionally been hostile to fantasy and sci-fi, but author Chandler Klang Smith says that’s no longer the case.

“I definitely don’t think the landscape out there is hostile toward speculative writing,” Smith says in Episode 365 of the Geek’s Guide to the Galaxy podcast. “If anything I think it’s seen as being kind of exciting and sexy and new, which is something these programs want.”

But science fiction author John Kessel, who helped found the creative writing MFA program at North Carolina State University, says it really depends on the type of speculative fiction. Slipstream and magical realism may have acquired a certain cachet, but epic fantasy and space opera definitely haven’t.

“The more it seems like traditional science fiction, the less comfortable programs will be with it,” he says. “Basically if the story is set in the present and has some really odd thing in it, then I think you won’t raise as many eyebrows. But I think that traditional science fiction—anything that resembles Star Wars or Star Trek, or even Philip K. Dick—I think some places would look a little sideways at it.”

That uncertainty can put aspiring fantasy and science fiction writers in a tough spot, as writer Steph Grossman discovered when she was applying to MFA programs. “As an applicant—and even though I did a ton of research—it’s really hard to find which schools are going to be accepting of it and which schools aren’t,” she says. “The majority of them will be accepting of some aspect of it—especially if you’re writing things in the slipstream genre—but besides Sarah Lawrence College and Stonecoast, when I was looking, most of the schools don’t really touch on whether they’re accepting of it or not.”

Geek’s Guide to the Galaxy host David Barr Kirtley warns that writing fantasy and science fiction requires specialized skills and knowledge that most MFA programs simply aren’t equipped to teach.

“I would say that if you’re writing epic fantasy or sword and sorcery or space opera and things like that, I think you’d probably be much happier going to Clarion or Odyssey, these six week summer workshops where you’re going to be surrounded by more hardcore science fiction and fantasy fans,” he says. “And definitely do your research. Don’t just apply to your local MFA program and expect that you’re going to get helpful feedback on work like that.”

Link to the rest at Wired